WorldWideScience

Sample records for model testing systematic

  1. Testing flow diversion in animal models: a systematic review.

    Science.gov (United States)

    Fahed, Robert; Raymond, Jean; Ducroux, Célina; Gentric, Jean-Christophe; Salazkin, Igor; Ziegler, Daniela; Gevry, Guylaine; Darsaut, Tim E

    2016-04-01

    Flow diversion (FD) is increasingly used to treat intracranial aneurysms. We sought to systematically review published studies to assess the quality of reporting and summarize the results of FD in various animal models. Databases were searched to retrieve all animal studies on FD from 2000 to 2015. Extracted data included species and aneurysm models, aneurysm and neck dimensions, type of flow diverter, occlusion rates, and complications. Articles were evaluated using a checklist derived from the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Forty-two articles reporting the results of FD in nine different aneurysm models were included. The rabbit elastase-induced aneurysm model was the most commonly used, with 3-month occlusion rates of 73.5% (95% CI 61.9-82.6%). FD of surgical sidewall aneurysms, constructed in rabbits or canines, resulted in high occlusion rates (100% [65.5-100%]). FD resulted in modest occlusion rates (15.4% [8.9-25.1%]) when tested in six complex canine aneurysm models designed to reproduce more difficult clinical contexts (large necks, bifurcation, or fusiform aneurysms). Adverse events, including branch occlusion, were rarely reported. There were no hemorrhagic complications. Articles complied with 20.8 ± 3.9 of 41 ARRIVE items; only a small number used randomization (3/42 articles [7.1%]) or a control group (13/42 articles [30.9%]). Preclinical studies on FD have shown various results. Occlusion of elastase-induced aneurysms was common after FD. The model is not challenging, but it is standardized in many laboratories. Failures of FD can be reproduced in less standardized but more challenging surgical canine constructions. The quality of reporting could be improved.

  2. Systematic reviews of diagnostic test accuracy

    DEFF Research Database (Denmark)

    Leeflang, Mariska M G; Deeks, Jonathan J; Gatsonis, Constantine

    2008-01-01

    More and more systematic reviews of diagnostic test accuracy studies are being published, but they can be methodologically challenging. In this paper, the authors present some of the recent developments in the methodology for conducting systematic reviews of diagnostic test accuracy studies, including the use of the hierarchical summary receiver-operating characteristic or the bivariate model for the data analysis. Challenges that remain are the poor reporting of original diagnostic test accuracy studies and difficulties with the interpretation of the results of diagnostic test accuracy research.

  3. Diagnostic models of the pre-test probability of stable coronary artery disease: A systematic review

    Directory of Open Access Journals (Sweden)

    Ting He

    A comprehensive search of PubMed and Embase was performed in January 2015 to examine the available literature on validated diagnostic models of the pre-test probability of stable coronary artery disease and to describe the characteristics of the models. Studies that were designed to develop and validate diagnostic models of pre-test probability for stable coronary artery disease were included. Data regarding baseline patient characteristics, procedural characteristics, modeling methods, metrics of model performance, risk of bias, and clinical usefulness were extracted. Ten studies involving the development of 12 models and two studies focusing on external validation were identified. Seven models were validated internally, and seven models were validated externally. Discrimination varied between studies that were validated internally (C statistic 0.66-0.81) and externally (0.49-0.87). Only one study presented reclassification indices. The majority of better performing models included sex, age, symptoms, diabetes, smoking, and hyperlipidemia as variables. Only two diagnostic models evaluated the effects on clinical decision making processes or patient outcomes. Most diagnostic models of the pre-test probability of stable coronary artery disease have had modest success, and very few present data regarding the effects of these models on clinical decision making processes or patient outcomes.
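
    The C statistic quoted above is the area under the receiver-operating-characteristic curve, the standard discrimination metric for pre-test probability models. As a minimal sketch of how it can be computed for a hypothetical logistic model, the Python snippet below uses synthetic data; the predictor set mirrors the variables listed in the abstract but is otherwise invented and is not taken from any of the reviewed models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors typical of pre-test probability models:
# age (years), sex, typical symptoms, diabetes, smoking, hyperlipidemia.
X = np.column_stack([
    rng.normal(60, 10, n),   # age
    rng.integers(0, 2, n),   # sex
    rng.integers(0, 2, n),   # typical symptoms
    rng.integers(0, 2, n),   # diabetes
    rng.integers(0, 2, n),   # smoking
    rng.integers(0, 2, n),   # hyperlipidemia
])
# Synthetic outcome loosely driven by the predictors (illustrative only).
logit = -7 + 0.08 * X[:, 0] + 0.7 * X[:, 1] + 1.2 * X[:, 2] + 0.5 * X[:, 3:].sum(axis=1)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
pretest_prob = model.predict_proba(X)[:, 1]

# C statistic = area under the ROC curve (0.66-0.81 was the internally
# validated range reported across the reviewed models).
print("C statistic:", round(roc_auc_score(y, pretest_prob), 3))
```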

  4. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
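
    The study itself uses boosted GAMLSS models, which are not reproduced here. As a much simpler sketch of the underlying idea, the snippet below runs permutation tests for a systematic bias (difference in means) and a difference in random error (difference in standard deviations) between two hypothetical devices; all data and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical skin-pigmentation readings of the same quantity from two devices.
device_a = rng.normal(50.0, 5.0, 200)
device_b = device_a + rng.normal(1.0, 3.0, 200)  # device B adds a bias and extra noise

def perm_test(x, y, stat, n_perm=5000, seed=0):
    """Two-sample permutation test: observed statistic and p-value for stat(x, y)."""
    perm_rng = np.random.default_rng(seed)
    observed = stat(x, y)
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm_rng.shuffle(pooled)
        if abs(stat(pooled[:len(x)], pooled[len(x):])) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

bias_stat = lambda x, y: np.mean(x) - np.mean(y)                 # systematic bias
scale_stat = lambda x, y: np.std(x, ddof=1) - np.std(y, ddof=1)  # random-error difference

print("bias  (diff of means):", perm_test(device_a, device_b, bias_stat))
print("scale (diff of SDs):  ", perm_test(device_a, device_b, scale_stat))
```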

  5. Evidence used in model-based economic evaluations for evaluating pharmacogenetic and pharmacogenomic tests: a systematic review protocol.

    Science.gov (United States)

    Peters, Jaime L; Cooper, Chris; Buchanan, James

    2015-11-11

    Decision models can be used to conduct economic evaluations of new pharmacogenetic and pharmacogenomic tests to ensure they offer value for money to healthcare systems. These models require a great deal of evidence, yet research suggests the evidence used is diverse and of uncertain quality. By conducting a systematic review, we aim to investigate the test-related evidence used to inform decision models developed for the economic evaluation of genetic tests. We will search electronic databases including MEDLINE, EMBASE and NHS EED to identify model-based economic evaluations of pharmacogenetic and pharmacogenomic tests. The search will not be limited by language or date. Title and abstract screening will be conducted independently by 2 reviewers, with screening of full texts and data extraction conducted by 1 reviewer, and checked by another. Characteristics of the decision problem, the decision model and the test evidence used to inform the model will be extracted. Specifically, we will identify the reported evidence sources for the test-related evidence used, describe the study design and how the evidence was identified. A checklist developed specifically for decision analytic models will be used to critically appraise the models described in these studies. Variations in the test evidence used in the decision models will be explored across the included studies, and we will identify gaps in the evidence in terms of both quantity and quality. The findings of this work will be disseminated via a peer-reviewed journal publication and at national and international conferences.

  6. Systematic review, meta-analysis and economic modelling of molecular diagnostic tests for antibiotic resistance in tuberculosis.

    Science.gov (United States)

    Drobniewski, Francis; Cooke, Mary; Jordan, Jake; Casali, Nicola; Mugwagwa, Tendai; Broda, Agnieszka; Townsend, Catherine; Sivaramakrishnan, Anand; Green, Nathan; Jit, Mark; Lipman, Marc; Lord, Joanne; White, Peter J; Abubakar, Ibrahim

    2015-01-01

    BACKGROUND: Drug-resistant tuberculosis (TB), especially multidrug-resistant (MDR, resistance to rifampicin and isoniazid) disease, is associated with a worse patient outcome. Drug resistance diagnosed using microbiological culture takes days to weeks, as TB bacteria grow slowly. Rapid molecular tests for drug resistance detection (1 day) are commercially available and may promote faster initiation of appropriate treatment. OBJECTIVES: To (1) conduct a systematic review of evidence regarding diagnostic accuracy of molecular genetic tests for drug resistance, (2) conduct a health-economic evaluation of screening and diagnostic strategies, including comparison of alternative models of service provision and assessment of the value of targeting rapid testing at high-risk subgroups, and (3) construct a transmission-dynamic mathematical model that translates the estimates of diagnostic accuracy into estimates of clinical impact. REVIEW METHODS AND DATA SOURCES: A standardised search strategy identified relevant studies from EMBASE, PubMed, MEDLINE, Bioscience Information Service (BIOSIS), System for Information on Grey Literature in Europe Social Policy & Practice (SIGLE) and Web of Science, published between 1 January 2000 and 15 August 2013. Additional 'grey' sources were included. Quality was assessed using quality assessment of diagnostic accuracy studies version 2 (QUADAS-2). For each diagnostic strategy and population subgroup, a care pathway was constructed to specify which medical treatments and health services that individuals would receive from presentation to the point where they either did or did not complete TB treatment successfully. A total cost was estimated from a health service perspective for each care pathway, and the health impact was estimated in terms of the mean discounted quality-adjusted life-years (QALYs) lost as a result of disease and treatment. Costs and QALYs were both discounted at 3.5% per year. An integrated transmission-dynamic and

  7. Systematic review, meta-analysis and economic modelling of molecular diagnostic tests for antibiotic resistance in tuberculosis.

    Science.gov (United States)

    Drobniewski, Francis; Cooke, Mary; Jordan, Jake; Casali, Nicola; Mugwagwa, Tendai; Broda, Agnieszka; Townsend, Catherine; Sivaramakrishnan, Anand; Green, Nathan; Jit, Mark; Lipman, Marc; Lord, Joanne; White, Peter J; Abubakar, Ibrahim

    2015-05-01

    Drug-resistant tuberculosis (TB), especially multidrug-resistant (MDR, resistance to rifampicin and isoniazid) disease, is associated with a worse patient outcome. Drug resistance diagnosed using microbiological culture takes days to weeks, as TB bacteria grow slowly. Rapid molecular tests for drug resistance detection (1 day) are commercially available and may promote faster initiation of appropriate treatment. To (1) conduct a systematic review of evidence regarding diagnostic accuracy of molecular genetic tests for drug resistance, (2) conduct a health-economic evaluation of screening and diagnostic strategies, including comparison of alternative models of service provision and assessment of the value of targeting rapid testing at high-risk subgroups, and (3) construct a transmission-dynamic mathematical model that translates the estimates of diagnostic accuracy into estimates of clinical impact. A standardised search strategy identified relevant studies from EMBASE, PubMed, MEDLINE, Bioscience Information Service (BIOSIS), System for Information on Grey Literature in Europe Social Policy & Practice (SIGLE) and Web of Science, published between 1 January 2000 and 15 August 2013. Additional 'grey' sources were included. Quality was assessed using quality assessment of diagnostic accuracy studies version 2 (QUADAS-2). For each diagnostic strategy and population subgroup, a care pathway was constructed to specify which medical treatments and health services that individuals would receive from presentation to the point where they either did or did not complete TB treatment successfully. A total cost was estimated from a health service perspective for each care pathway, and the health impact was estimated in terms of the mean discounted quality-adjusted life-years (QALYs) lost as a result of disease and treatment. Costs and QALYs were both discounted at 3.5% per year. An integrated transmission-dynamic and economic model was used to evaluate the cost-effectiveness of
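
    Both records note that costs and QALYs were discounted at 3.5% per year. A minimal sketch of how such discounting is typically applied to a yearly stream of costs or QALYs follows; the numbers are illustrative only and not taken from the study.

```python
def discounted_total(yearly_values, rate=0.035):
    """Discount a stream of yearly values back to present value.

    Year 0 is undiscounted; the value in year t is divided by (1 + rate) ** t.
    """
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(yearly_values))

# Illustrative example: 0.8 QALYs per year over 10 years.
qalys = [0.8] * 10
print(round(discounted_total(qalys), 2))  # less than the undiscounted 8.0
```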

  8. Testing Scientific Software: A Systematic Literature Review.

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M

    2014-10-01

    Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
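
    The 'oracle problem' mentioned above arises because the exact expected output of a scientific code is often unknown. One commonly cited mitigation, closely related to the solutions surveyed in this literature, is metamorphic testing: rather than checking an exact answer, a test checks a relation that must hold between outputs for related inputs. The sketch below applies two such relations to a small trapezoid-rule integrator; it is an illustrative example, not code from the reviewed studies.

```python
import math
import random

def trapezoid(f, a, b, n=1000):
    """Numerically integrate f over [a, b] with the composite trapezoid rule."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def test_metamorphic_relations():
    f = lambda x: math.exp(-x * x)
    a, b = random.uniform(-2.0, 0.0), random.uniform(0.0, 2.0)
    m = (a + b) / 2
    whole = trapezoid(f, a, b)
    # Relation 1: splitting the interval must (approximately) preserve the result.
    assert abs(whole - (trapezoid(f, a, m) + trapezoid(f, m, b))) < 1e-4
    # Relation 2: reversing the integration limits must flip the sign.
    assert abs(whole + trapezoid(f, b, a)) < 1e-9

test_metamorphic_relations()
print("metamorphic relations hold")
```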

  9. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  10. Systematic effects in CALOR simulation code to model experimental configurations

    International Nuclear Information System (INIS)

    Job, P.K.; Proudfoot, J.; Handler, T.

    1991-01-01

    The CALOR89 code system is being used to simulate test beam results and the design parameters of several calorimeter configurations. It has been benchmarked against the ZEUS, D0 and HELIOS data. This study identifies the systematic effects in CALOR simulations used to model the experimental configurations. Five major systematic effects are identified: the choice of high energy nuclear collision model, material composition, scintillator saturation, shower integration time, and shower containment. Quantitative estimates of these systematic effects are presented. 23 refs., 6 figs., 7 tabs

  11. Loglinear Rasch model tests

    NARCIS (Netherlands)

    Kelderman, Hendrikus

    1984-01-01

    Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch

  12. The psychological impact of testing for thrombophilia: a systematic review

    NARCIS (Netherlands)

    Cohn, D. M.; Vansenne, F.; Kaptein, A. A.; de Borgie, C. A. J. M.; Middeldorp, S.

    2008-01-01

    BACKGROUND: Nowadays, large numbers of patients are tested for thrombophilia, even though the benefits of this strategy remain unclear. A potential disadvantage of this predominantly genetic testing is the psychological impact, including fear, depression and worry. OBJECTIVES: To systematically

  13. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
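
    The consistency tests described here score a binned forecast against an observed catalog by treating the count in each space-magnitude bin as an independent Poisson variable with the forecast rate as its expectation. A simplified sketch of that joint log-likelihood computation follows, with made-up rates and counts; it is not the actual RELM implementation.

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed bin counts under independent Poisson
    distributions whose expectations are the forecast rates."""
    return sum(
        -rate + n * math.log(rate) - math.lgamma(n + 1)
        for rate, n in zip(forecast_rates, observed_counts)
    )

# Illustrative forecast: expected number of events per bin over the test period,
# and the counts actually observed in those bins.
forecast = [0.2, 1.5, 0.05, 0.7]
observed = [0,   2,   0,    1]

# Competing models can be ranked by differences in this joint log-likelihood.
print(round(poisson_log_likelihood(forecast, observed), 3))
```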

  14. Whole bone testing in small animals: systematic characterization of the mechanical properties of different rodent bones available for rat fracture models.

    Science.gov (United States)

    Prodinger, Peter M; Foehr, Peter; Bürklein, Dominik; Bissinger, Oliver; Pilge, Hakan; Kreutzer, Kilian; von Eisenhart-Rothe, Rüdiger; Tischer, Thomas

    2018-02-14

    Rat fracture models are extensively used to characterize normal and pathological bone healing. Despite this, systematic research on inter- and intra-individual differences of the common rat bones examined is surprisingly unavailable. Thus, we studied the biomechanical behaviour and radiological characteristics of the humerus, the tibia and the femur of the male Wistar rat, all of which are potentially available in the experimental situation, to identify useful or detrimental biomechanical properties of each bone and to facilitate sample size calculations. 40 paired femora, tibiae and humeri of male Wistar rats (10-38 weeks, weight between 240 and 720 g) were analysed by DXA, pQCT scan and three-point bending. Bearing and loading bars of the biomechanical setup were positioned in proportion to the bone's length. Subgroups of light (skeletally immature) rats under 400 g (N = 11, 22 specimens of each bone) and heavy (mature) rats over 400 g (N = 9, 18 specimens of each bone) were formed and evaluated separately. Radiologically, neither significant differences between left and right bones nor a specific side preference was evident. Mean side differences of the BMC were relatively small (1-3% measured by DXA and 2.5-5% by pQCT). Overall, bone mineral content (BMC) assessed by DXA and pQCT (TOT CNT, CORT CNT) showed high correlations between each other (BMC vs. TOT and CORT CNT: R2 = 0.94-0.99). The load-displacement diagram showed a typical, reproducible curve for each type of bone. Tibiae were the longest bones (mean 41.8 ± 4.12 mm) followed by femurs (mean 38.9 ± 4.12 mm) and humeri (mean 29.88 ± 3.33 mm). Failure loads and stiffness ranged from 175.4 ± 45.23 N / 315.6 ± 63.00 N/mm for the femurs and 124.6 ± 41.13 N / 260.5 ± 59.97 N/mm for the humeri to 117.1 ± 33.94 N / 143.8 ± 36.99 N/mm for the tibiae. The smallest interindividual differences were observed in failure loads of the femurs (CV% 8.6) and tibiae (CV% 10.7) of heavy
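
    The coefficients of variation are reported expressly to facilitate sample size calculations. As an illustration of how a CV feeds into a generic two-group sample size approximation for detecting a given percentage difference in failure load, a sketch follows; the formula is a standard textbook approximation and the target difference is an assumption, not a value from the paper.

```python
import math
from statistics import NormalDist

def n_per_group(cv_percent, detectable_diff_percent, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of means:
    n = 2 * (z_(1-alpha/2) + z_power)^2 * (CV / delta)^2,
    with CV and the detectable difference delta both expressed as % of the mean."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (cv_percent / detectable_diff_percent) ** 2
    return math.ceil(n)

# Example: CV of 8.6% (femoral failure load, heavy rats) and an assumed target
# of detecting a 10% difference in means between two groups.
print(n_per_group(8.6, 10))  # roughly 12 animals per group
```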

  15. Antenatal HIV Testing in Sub-Saharan Africa During the Implementation of the Millennium Development Goals: A Systematic Review Using the PEN-3 Cultural Model.

    Science.gov (United States)

    Blackstone, Sarah R; Nwaozuru, Ucheoma; Iwelunmor, Juliet

    2018-01-01

    This study systematically explored the barriers and facilitators to routine antenatal HIV testing from the perspective of pregnant women in sub-Saharan Africa during the implementation period of the Millennium Development Goals. Articles published between 2000 and 2015 were selected after reviewing the title, abstract, and references. Twenty-seven studies published in 11 African countries were eligible for the current study and reviewed. The most common barriers identified include communication with male partners, patient convenience and accessibility, health system and health-care provider issues, fear of disclosure, HIV-related stigma, the burden of other responsibilities at home, and the perception of antenatal care as a "woman's job." Routine testing among pregnant women is crucial for the eradication of infant and child HIV infections. Further understanding the interplay of social and cultural factors, particularly the role of women in intimate relationships and the influence of men on antenatal care seeking behaviors, is necessary to continue the work of the Millennium Development Goals.

  16. HIV testing and counseling among female sex workers: a systematic review

    NARCIS (Netherlands)

    Tokar, A.T.; Broerse, J.E.W.; Blanchard, J.; Roura, M.

    2018-01-01

    HIV testing uptake continues to be low among female sex workers (FSWs). We synthesize evidence on barriers and facilitators to HIV testing among FSWs, as well as frequencies of testing, willingness to test, and return rates to collect results. We systematically searched the MEDLINE/PubMed, EMBASE,

  17. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are reported from testing the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  18. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given....

  19. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was first of all to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly...

  20. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

    The Large Electron-Positron (LEP) collider near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  1. A SYSTEMATIC STUDY OF SOFTWARE QUALITY MODELS

    OpenAIRE

    Dr. Vilas M. Thakare; Ashwin B. Tomar

    2011-01-01

    This paper aims to provide a basis for software quality model research, through a systematic study of papers. It identifies nearly seventy software quality research papers from journals and classifies papers as per research topic, estimation approach, study context and data set. The paper results combined with other knowledge provide support for recommendations in future software quality model research, to increase the area of search for relevant studies, carefully select the papers within a set ...

  2. Systematization of Angra-1 operation attendance - Maintenance and periodic testings

    International Nuclear Information System (INIS)

    Furieri, E.B.; Carvalho Bruno, N. de; Salaverry, N.A.

    1988-01-01

    An analysis of maintenance, its types, and its functions for the safety of nuclear power plants is presented. Programs and present trends in reactor maintenance, as well as the maintenance program and periodic tests of Angra I, are analysed. The need for safety analysis and for a systematization of maintenance attendance is discussed, together with periodic testing and the attendance of international experience. (M.C.K.) [pt

  3. A Unified Framework for Systematic Model Improvement

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A unified framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation (SDE) modelling, statistical tests and multivariate nonparametric regression...

  4. Thermal sensation models: a systematic comparison.

    Science.gov (United States)

    Koelblen, B; Psikuta, A; Bogdan, A; Annaheim, S; Rossi, R M

    2017-05-01

    Thermal sensation models, capable of predicting human perception of thermal surroundings, are commonly used to assess given indoor conditions. These models differ in many aspects, such as the number and type of input conditions, the range of conditions in which the models can be applied, and the complexity of equations. Moreover, the models are associated with various thermal sensation scales. In this study, a systematic comparison of seven existing thermal sensation models has been performed with regard to exposures including various air temperatures, clothing thermal insulation, and metabolic rate values, after a careful investigation of the models' range of applicability. Thermo-physiological data needed as input for some of the models were obtained from a mathematical model for human physiological responses. The comparison showed differences between the models' predictions for the analyzed conditions, mostly higher than typical intersubject differences in votes. Therefore, it can be concluded that the choice of model strongly influences the assessment of indoor spaces. The issue of comparing different thermal sensation scales has also been discussed.

  5. Personal utility in genomic testing: a systematic literature review.

    Science.gov (United States)

    Kohler, Jennefer N; Turbitt, Erin; Biesecker, Barbara B

    2017-06-01

    Researchers and clinicians refer to outcomes of genomic testing that extend beyond clinical utility as 'personal utility'. No systematic delineation of personal utility exists, making it challenging to appreciate its scope. Identifying empirical elements of personal utility reported in the literature offers an inventory that can be subsequently ranked for its relative value by those who have undergone genomic testing. A systematic review was conducted of the peer-reviewed literature reporting non-health-related outcomes of genomic testing from 1 January 2003 to 5 August 2016. Inclusion criteria specified English language, date of publication, and presence of empirical evidence. Identified outcomes were iteratively coded into unique domains. The search returned 551 abstracts from which 31 studies met the inclusion criteria. Study populations and type of genomic testing varied. Coding resulted in 15 distinct elements of personal utility, organized into three domains related to personal outcomes: affective, cognitive, and behavioral; and one domain related to social outcomes. The domains of personal utility may inform pre-test counseling by helping patients anticipate potential value of test results beyond clinical utility. Identified elements may also inform investigations into the prevalence and importance of personal utility to future test users.

  6. Composite Material Testing Data Reduction to Adjust for the Systematic 6-DOF Testing Machine Aberrations

    Science.gov (United States)

    Athanasios Iliopoulos; John G. Michopoulos; John G. C. Hermanson

    2012-01-01

    This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in reference is a custom-made 6-DoF system called NRL66.3, developed at the Naval...

  7. Systematic model building with flavor symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Plentinger, Florian

    2009-12-19

    The observation of neutrino masses and lepton mixing has highlighted the incompleteness of the Standard Model of particle physics. In conjunction with this discovery, new questions arise: why are the neutrino masses so small, what is the form of their mass hierarchy, why is the mixing in the quark and lepton sectors so different, and what is the structure of the Higgs sector. In order to address these issues and to predict future experimental results, different approaches are considered. One particularly interesting possibility are Grand Unified Theories such as SU(5) or SO(10). GUTs are vertical symmetries since they unify the SM particles into multiplets and usually predict new particles which can naturally explain the smallness of the neutrino masses via the seesaw mechanism. On the other hand, horizontal symmetries, i.e., flavor symmetries acting on the generation space of the SM particles, are also promising. They can serve as an explanation for the quark and lepton mass hierarchies as well as for the different mixings in the quark and lepton sectors. In addition, flavor symmetries are significantly involved in the Higgs sector and predict certain forms of mass matrices. This high predictivity makes GUTs and flavor symmetries interesting for both theorists and experimentalists. These extensions of the SM can also be combined with theories such as supersymmetry or extra dimensions. In addition, they usually have implications for the observed matter-antimatter asymmetry of the universe or can provide a dark matter candidate. In general, they also predict the lepton flavor violating rare decays μ → eγ, τ → μγ, and τ → eγ, which are strongly bounded by experiments but might be observed in the future. In this thesis, we combine all of these approaches, i.e., GUTs, the seesaw mechanism and flavor symmetries. Moreover, our aim is to develop and perform a systematic model building approach with flavor symmetries and

  8. Systematic Unit Testing in a Read-eval-print Loop

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2010-01-01

    Lisp programmers constantly carry out experiments in a read-eval-print loop. The experimental activities convince the Lisp programmers that new or modified pieces of programs work as expected. But the experiments typically do not represent systematic and comprehensive unit testing efforts. Rather, the experiments are quick and dirty one-shot validations which do not add lasting value to the software which is being developed. In this paper we propose a tool that is able to collect, organize, and re-validate test cases, which are entered as expressions in a read-eval-print loop. The process of collecting the expressions and their results imposes only little extra work on the programmer. The use of the tool provides for creation of test repositories, and it is intended to catalyze a much more systematic approach to unit testing in a read-eval-print loop. In the paper we also discuss...
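
    The tool described is Lisp-specific. A rough analog of the same idea in Python is the standard doctest module, which likewise turns interactive read-eval-print experiments into re-runnable unit tests; the function below is purely illustrative.

```python
def celsius_to_fahrenheit(c):
    """Convert a temperature from Celsius to Fahrenheit.

    The examples below are copies of interactive REPL experiments; doctest
    re-evaluates them and compares the printed results, turning one-shot
    explorations into a persistent test repository.

    >>> celsius_to_fahrenheit(0)
    32.0
    >>> celsius_to_fahrenheit(100)
    212.0
    """
    return c * 9 / 5 + 32

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```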

  9. Systematic comparison of model polymer nanocomposite mechanics.

    Science.gov (United States)

    Xiao, Senbo; Peter, Christine; Kremer, Kurt

    2016-09-13

    Polymer nanocomposites render a range of outstanding materials, from natural products such as silk, sea shells and bones to synthesized nanoclay or carbon nanotube reinforced polymer systems. In contrast to the fast expanding interest in this type of material, the fundamental mechanisms of their mixing, phase behavior and reinforcement, especially for higher nanoparticle content as relevant for bio-inorganic composites, are still not fully understood. Although polymer nanocomposites exhibit diverse morphologies, qualitatively their mechanical properties are believed to be governed by a few parameters, namely their internal polymer network topology, nanoparticle volume fraction, particle surface properties and so on. Relating material mechanics to such elementary parameters is the purpose of this work. By taking a coarse-grained molecular modeling approach, we study a range of different polymer nanocomposites. We vary polymer nanoparticle connectivity, surface geometry and volume fraction to systematically study rheological/mechanical properties. Our models cover different materials and reproduce key characteristics of real nanocomposites, such as phase separation and mechanical reinforcement. The results shed light on establishing elementary structure, property and function relationships of polymer nanocomposites.

  10. Systematic simulations of modified gravity: chameleon models

    Energy Technology Data Exchange (ETDEWEB)

    Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  11. Systematic simulations of modified gravity: chameleon models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2013-01-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  12. Accuracy of spinal orthopaedic tests: a systematic review

    Directory of Open Access Journals (Sweden)

    Gemmell Hugh

    2006-10-01

    Background: The purpose of this systematic review was to critically appraise the literature on the accuracy of orthopaedic tests for the spine. Methods: Multiple orthopaedic texts were reviewed to produce a comprehensive list of spine orthopaedic test names and synonyms. A search was conducted in MEDLINE, MANTIS, CINAHL, AMED and the Cochrane Library for relevant articles from inception up to December 2005. The studies were evaluated using the tool for quality assessment of diagnostic accuracy studies (QUADAS). Results: Twenty-one papers met the inclusion criteria. The QUADAS scores ranged from 4 to 12 of a possible 14. Twenty-nine percent of the studies achieved a score of 10 or more. The papers covered a wide range of tests for spine conditions. Conclusion: The literature on orthopaedic tests for the spine was lacking in both quantity and quality. There is a lack of high quality research regarding the accuracy of spinal orthopaedic tests. Due to this lack of evidence it is suggested that over-reliance on single orthopaedic tests is not appropriate.

  13. Ship Model Testing

    Science.gov (United States)

    2016-01-15

    ...zero degrees angle of attack than the conventional foil at eight degrees angle of attack. This increase in lift is believed to be limited to low... Bureau of Shipping (ABS) supported this effort through the purchase of the 60 specimens used in this thesis. Metal Shark Boats also provided aluminum... strength of welded aluminum panels. Metal Shark Boats, again, provided the necessary test panels for this effort. The optical extensometer was not...

  14. Systematic evaluation of atmospheric chemistry-transport model CHIMERE

    Science.gov (United States)

    Khvorostyanov, Dmitry; Menut, Laurent; Mailler, Sylvain; Siour, Guillaume; Couvidat, Florian; Bessagnet, Bertrand; Turquety, Solene

    2017-04-01

    Regional-scale atmospheric chemistry-transport models (CTMs) are used to develop air quality regulatory measures, to support environmentally sensitive decisions in industry, and to address a variety of scientific questions involving atmospheric composition. Model performance evaluation with measurement data is critical to understanding their limits and the degree of confidence in model results. The CHIMERE CTM (http://www.lmd.polytechnique.fr/chimere/) is a French national tool for operational forecast and decision support and is widely used in the international research community in various areas of atmospheric chemistry and physics, climate, and environment (http://www.lmd.polytechnique.fr/chimere/CW-articles.php). This work presents the model evaluation framework applied systematically to new CHIMERE CTM versions in the course of continuous model development. The framework uses three of the four CTM evaluation types identified by the Environmental Protection Agency (EPA) and the American Meteorological Society (AMS): operational, diagnostic, and dynamic. It allows comparison of the overall model performance in subsequent model versions (operational evaluation), identification of specific processes and/or model inputs that could be improved (diagnostic evaluation), and testing of the model sensitivity to changes in air quality, such as emission reductions and meteorological events (dynamic evaluation). The observation datasets currently used for the evaluation are: EMEP (surface concentrations), AERONET (optical depths), and WOUDC (ozone sounding profiles). The framework is implemented as an automated processing chain and allows interactive exploration of the results via a web interface.
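
    As an illustration of the 'operational evaluation' step, the sketch below computes the kind of summary statistics (mean bias, root-mean-square error, correlation) commonly used to compare modelled and observed surface concentrations. The data are synthetic and the metric choice is generic rather than specific to the CHIMERE processing chain.

```python
import numpy as np

def operational_scores(model, obs):
    """Basic operational-evaluation statistics for paired model/observation series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)                  # mean bias
    rmse = np.sqrt(np.mean((model - obs) ** 2))  # root-mean-square error
    corr = np.corrcoef(model, obs)[0, 1]         # Pearson correlation
    return {"bias": bias, "rmse": rmse, "corr": corr}

# Synthetic example: daily surface ozone (ug/m3) at one station.
obs = np.array([62.0, 75.0, 81.0, 55.0, 68.0, 90.0, 73.0])
mod = np.array([58.0, 79.0, 85.0, 60.0, 64.0, 95.0, 70.0])
print(operational_scores(mod, obs))
```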

  15. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming and is sometimes simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to form a general picture of the applicability of these models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH key words. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered years after 1986, and studies were designed retrospectively and prospectively. IVF prediction models accounted for the largest share of papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be applied clinically if it can be statistically evaluated and has good validation for treatment success. To achieve better results, the estimates of the treatment success rate needed by physicians and couples should be based on history, examination and clinical tests. Models must be checked for theoretical approach and appropriate validation. The advantages of applying prediction models are reductions in cost and time, avoidance of painful treatment for patients, assessment of the treatment approach for physicians, and support for decision making by health managers. The careful selection of an approach for designing and using these models is therefore unavoidable. PMID:27141461

  16. The science of systematic reviewing studies of diagnostic tests

    NARCIS (Netherlands)

    Oosterhuis, W. P.; Niessen, R. W.; Bossuyt, P. M.

    2000-01-01

    BACKGROUND: Systematic reviews have gradually replaced single studies as the highest level of documented effectiveness of health care interventions. Systematic reviewing is a new scientific method, concerned with the development and application of methods for identifying relevant literature,

  17. Probabilistic modeling of systematic errors in two-hybrid experiments.

    Science.gov (United States)

    Sontag, David; Singh, Rohit; Berger, Bonnie

    2007-01-01

    We describe a novel probabilistic approach to estimating errors in two-hybrid (2H) experiments. Such experiments are frequently used to elucidate protein-protein interaction networks in a high-throughput fashion; however, a significant challenge with these is their relatively high error rate, specifically, a high false-positive rate. We describe a comprehensive error model for 2H data, accounting for both random and systematic errors. The latter arise from limitations of the 2H experimental protocol: in theory, the reporting mechanism of a 2H experiment should be activated if and only if the two proteins being tested truly interact; in practice, even in the absence of a true interaction, it may be activated by some proteins - either by themselves or through promiscuous interaction with other proteins. We describe a probabilistic relational model that explicitly models the above phenomenon and use Markov Chain Monte Carlo (MCMC) algorithms to compute both the probability of an observed 2H interaction being true as well as the probability of individual proteins being self-activating/promiscuous. This is the first approach that explicitly models systematic errors in protein-protein interaction data; in contrast, previous work on this topic has modeled errors as being independent and random. By explicitly modeling the sources of noise in 2H systems, we find that we are better able to make use of the available experimental data. In comparison with Bader et al.'s method for estimating confidence in 2H predicted interactions, the proposed method performed 5-10% better overall, and in particular regimes improved prediction accuracy by as much as 76%. http://theory.csail.mit.edu/probmod2H

  18. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly, he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answers could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape, it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example, there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing dynamic behaviour of an LEM over realistically long timescales. However, challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  19. Systematic identification of crystallization kinetics within a generic modelling framework

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli Bin; Meisler, Kresten Troelstrup; Gernaey, Krist

    2012-01-01

    A systematic development of constitutive models within a generic modelling framework has been developed for use in design, analysis and simulation of crystallization operations. The framework contains a tool for model identification connected with a generic crystallizer modelling tool-box, a tool...

  20. Superheater hydraulic model test plan

    Energy Technology Data Exchange (ETDEWEB)

    Gabler, M.; Oliva, R.M.

    1973-10-01

    The plan for conducting a hydraulic test on a full scale model of the AI Steam Generator Module design is presented. The model will incorporate all items necessary to simulate the hydraulic performance characteristics of the superheater but will utilize materials other than the 2-1/4 Cr - 1 Mo in its construction in order to minimize costs and expedite schedule. Testing will be performed in the Rockwell International Rocketdyne High Flow Test Facility which is capable of flowing up to 32,00 gpm of water at ambient temperatures. All necessary support instrumentation is also available at this facility.

  1. Development and pilot test of a process to identify research needs from a systematic review.

    Science.gov (United States)

    Saldanha, Ian J; Wilson, Lisa M; Bennett, Wendy L; Nicholson, Wanda K; Robinson, Karen A

    2013-05-01

    To ensure appropriate allocation of research funds, we need methods for identifying high-priority research needs. We developed and pilot tested a process to identify needs for primary clinical research using a systematic review in gestational diabetes mellitus. We conducted eight steps: abstract research gaps from a systematic review using the Population, Intervention, Comparison, Outcomes, and Settings (PICOS) framework; solicit feedback from the review authors; translate gaps into researchable questions using the PICOS framework; solicit feedback from multidisciplinary stakeholders at our institution; establish consensus among multidisciplinary external stakeholders on the importance of the research questions using the Delphi method; prioritize outcomes; develop conceptual models to highlight research needs; and evaluate the process. We identified 19 research questions. During the Delphi method, external stakeholders established consensus for 16 of these 19 questions (15 with "high" and 1 with "medium" clinical benefit/importance). We pilot tested an eight-step process to identify clinically important research needs. Before wider application of this process, it should be tested using systematic reviews of other diseases. Further evaluation should include assessment of the usefulness of the research needs generated using this process for primary researchers and funders.

  2. Maturity Models in Supply Chain Sustainability: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Elisabete Correia

    2017-01-01

    A systematic literature review of supply chain maturity models with sustainability concerns is presented. The objective is to give insights into methodological issues related to maturity models, namely the research objectives; the research methods used to develop, validate and test them; the scope; and the main characteristics associated with their design. The literature review was performed based on journal articles and conference papers from 2000 to 2015 using the SCOPUS, Emerald Insight, EBSCO and Web of Science databases. Most of the analysed papers have as their main objective the development of maturity models and their validation. The case study is the methodology most widely used by researchers to develop and validate maturity models. From the sustainability perspective, the scope of the analysed maturity models is the Triple Bottom Line (TBL) and the environmental dimension, focusing on a specific process (eco-design and new product development) and without a broad SC perspective. The dominant characteristics associated with the design of the maturity models are maturity grids and a continuous representation. In addition, the results do not allow identifying a trend for a specific number of maturity levels. The comprehensive review, analysis, and synthesis of the maturity model literature represent an important contribution to the organization of this research area, making it possible to clarify some of the confusion that exists about concepts, approaches and components of maturity models in sustainability. Various aspects associated with the maturity models (i.e., research objectives, research methods, scope and characteristics of the design of models) are explored to contribute to the evolution and significance of this multidimensional area.

  3. Conceptual Model for Systematic Construction Waste Management

    OpenAIRE

    Abd Rahim Mohd Hilmi Izwan; Kasim Narimah

    2017-01-01

    Development of the construction industry has generated construction waste, which contributes to environmental issues. Weak compliance with construction waste management, especially on construction sites, has also contributed to the large amounts of waste ending up in landfills and illegal dumping areas. This is a sign that construction projects need systematic construction waste management. To date, a comprehensive set of criteria for construction waste management, particularly for const...

  4. Simulation models in population breast cancer screening : A systematic review

    NARCIS (Netherlands)

    Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H

    The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for

  5. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well

  6. Systematic review of model-based cervical screening evaluations.

    Science.gov (United States)

    Mendes, Diana; Bains, Iren; Vanni, Tazio; Jit, Mark

    2015-05-01

    Optimising population-based cervical screening policies is becoming more complex due to the expanding range of screening technologies available and the interplay with vaccine-induced changes in epidemiology. Mathematical models are increasingly being applied to assess the impact of cervical cancer screening strategies. We systematically reviewed MEDLINE®, Embase, Web of Science®, EconLit, Health Economic Evaluation Database, and The Cochrane Library databases in order to identify the mathematical models of human papillomavirus (HPV) infection and cervical cancer progression used to assess the effectiveness and/or cost-effectiveness of cervical cancer screening strategies. Key model features and conclusions relevant to decision-making were extracted. We found 153 articles meeting our eligibility criteria published up to May 2013. Most studies (72/153) evaluated the introduction of a new screening technology, with particular focus on the comparison of HPV DNA testing and cytology (n = 58). Twenty-eight of forty of these analyses supported HPV DNA primary screening implementation. A few studies analysed more recent technologies - rapid HPV DNA testing (n = 3), HPV DNA self-sampling (n = 4), and genotyping (n = 1) - and were also supportive of their introduction. However, no study was found on emerging molecular markers and their potential utility in future screening programmes. Most evaluations (113/153) were based on models simulating aggregate groups of women at risk of cervical cancer over time without accounting for HPV infection transmission. Calibration to country-specific outcome data is becoming more common, but has not yet become standard practice. Models of cervical screening are increasingly used, and allow extrapolation of trial data to project the population-level health and economic impact of different screening policies. However, post-vaccination analyses have rarely incorporated transmission dynamics. Model calibration to country

  7. Systematic approach to verification and validation: High explosive burn models

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code
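
    The abstract describes scripts that read experiment meta-data from a database file header and use it to generate a hydro-code input file. The sketch below illustrates that idea only; the header keys, template fields and file names are hypothetical, since neither the HED file format nor the hydro-code input syntax is specified in the abstract.

```python
# Hypothetical sketch: turn "# key: value" header meta-data into a simulation input deck.
def read_header(path: str) -> dict:
    """Collect leading '# key: value' lines from a data file into a dict."""
    meta = {}
    with open(path) as fh:
        for line in fh:
            if not line.startswith("#"):
                break
            key, _, value = line.lstrip("#").partition(":")
            meta[key.strip()] = value.strip()
    return meta

INPUT_TEMPLATE = """explosive       = {explosive}
density         = {density}
impact_velocity = {impact_velocity}
"""  # placeholder input-deck format, not a real hydro-code syntax

def write_input_deck(meta: dict, out_path: str) -> None:
    with open(out_path, "w") as fh:
        fh.write(INPUT_TEMPLATE.format(**meta))

# Example (hypothetical file names):
# write_input_deck(read_header("shot_042.dat"), "shot_042.in")
```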

  8. Systematic reviews of animal models: methodology versus epistemology.

    Science.gov (United States)

    Greek, Ray; Menache, Andre

    2013-01-01

    Systematic reviews are currently favored methods of evaluating research in order to reach conclusions regarding medical practice. The need for such reviews is necessitated by the fact that no research is perfect and experts are prone to bias. By combining many studies that fulfill specific criteria, one hopes that the strengths can be multiplied and thus reliable conclusions attained. Potential flaws in this process include the assumptions that underlie the research under examination. If the assumptions, or axioms, upon which the research studies are based, are untenable either scientifically or logically, then the results must be highly suspect regardless of the otherwise high quality of the studies or the systematic reviews. We outline recent criticisms of animal-based research, namely that animal models are failing to predict human responses. It is this failure that is purportedly being corrected via systematic reviews. We then examine the assumption that animal models can predict human outcomes to perturbations such as disease or drugs, even under the best of circumstances. We examine the use of animal models in light of empirical evidence comparing human outcomes to those from animal models, complexity theory, and evolutionary biology. We conclude that even if legitimate criticisms of animal models were addressed, through standardization of protocols and systematic reviews, the animal model would still fail as a predictive modality for human response to drugs and disease. Therefore, systematic reviews and meta-analyses of animal-based research are poor tools for attempting to reach conclusions regarding human interventions.

  9. NET model coil test possibilities

    International Nuclear Information System (INIS)

    Erb, J.; Gruenhagen, A.; Herz, W.; Jentzsch, K.; Komarek, P.; Lotz, E.; Malang, S.; Maurer, W.; Noether, G.; Ulbricht, A.; Vogt, A.; Zahn, G.; Horvath, I.; Kwasnitza, K.; Marinucci, C.; Pasztor, G.; Sborchia, C.; Weymuth, P.; Peters, A.; Roeterdink, A.

    1987-11-01

    A single full size coil for NET/INTOR represents an investment of the order of 40 MUC (Million Unit Costs). Before such an amount of money, or even more for the 16 TF coils, is invested, as many risks as possible must be eliminated by a comprehensive development programme. In the course of such a programme a coil technology verification test should finally prove the feasibility of NET/INTOR TF coils. This study report deals almost exclusively with such a verification test by model coil testing. These coils will be built out of two Nb3Sn conductors based on two concepts already under development and investigation. Two possible coil arrangements are discussed: a cluster facility, where two model coils made from the two Nb3Sn TF conductors are used and the already tested LCT coils produce a background field; and a solenoid arrangement, where, in addition to the two TF model coils, another model coil made from a PF conductor for the central PF coils of NET/INTOR is used instead of the LCT background coils. Technical advantages and disadvantages are worked out in order to compare and judge both facilities. Cost estimates and time schedules broaden the basis for a decision about the realisation of such a facility. (orig.) [de]

  10. Systematic experimental based modeling of a rotary piezoelectric ultrasonic motor

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, Rouzbeh; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    In this paper, a new method for equivalent circuit modeling of a traveling wave ultrasonic motor is presented. The free stator of the motor is modeled by an equivalent circuit containing complex circuit elements. A systematic approach for identifying the elements of the equivalent circuit is sugg...

  11. Models as instruments for optimizing hospital processes: a systematic review

    NARCIS (Netherlands)

    van Sambeek, J. R. C.; Cornelissen, F. A.; Bakker, P. J. M.; Krabbendam, J. J.

    2010-01-01

    PURPOSE: The purpose of this article is to find decision-making models for the design and control of processes regarding patient flows, considering various problem types, and to find out how usable these models are for managerial decision making. DESIGN/METHODOLOGY/APPROACH: A systematic review of

  12. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems where the Original UNIFAC...

  13. Item response theory analysis of cognitive tests in people with dementia: a systematic review.

    Science.gov (United States)

    McGrory, Sarah; Doherty, Jason M; Austin, Elizabeth J; Starr, John M; Shenkin, Susan D

    2014-02-19

    Performance on psychometric tests is key to diagnosis and monitoring treatment of dementia. Results are often reported as a total score, but there is additional information in individual items of tests, which vary in their difficulty and discriminatory value. Item difficulty refers to the ability level at which the probability of responding correctly is 50%. Discrimination is an index of how well an item can differentiate between patients of varying levels of severity. Item response theory (IRT) analysis can use this information to examine and refine measures of cognitive functioning. This systematic review aimed to identify all published literature which had applied IRT to instruments assessing global cognitive function in people with dementia. A systematic review was carried out across Medline, Embase, PsycINFO and CINAHL articles. Search terms relating to IRT and dementia were combined to find all IRT analyses of global functioning scales of dementia. Of 384 articles identified, four studies met the inclusion criteria, including a total of 2,920 people with dementia from six centers in two countries. These studies used three cognitive tests (MMSE, ADAS-Cog, BIMCT) and three IRT methods (Item Characteristic Curve analysis, Samejima's graded response model, the 2-Parameter Model). Memory items were most difficult. Naming the date in the MMSE and memory items, specifically word recall, of the ADAS-cog were most discriminatory. Four published studies were identified which used IRT on global cognitive tests in people with dementia. This technique increased the interpretative power of the cognitive scales, and could be used to provide clinicians with key items from a larger test battery which would have high predictive value. There is a need for further studies using IRT in a wider range of tests involving people with dementia of different etiology and severity.
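
    The two item properties central to this review, difficulty and discrimination, can be made concrete with the two-parameter logistic (2PL) item response function mentioned in the abstract. The sketch below uses purely illustrative parameter values; it only shows how difficulty b shifts the response curve along the ability scale and discrimination a controls its steepness.

```python
import math

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability of a correct response at ability theta.
    a = discrimination (slope), b = difficulty (the ability where P = 0.5)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative items: an easy, weakly discriminating item vs a hard, highly discriminating one.
easy_item = dict(a=0.8, b=-1.5)
hard_item = dict(a=2.0, b=1.0)

for theta in (-2.0, 0.0, 2.0):
    print(theta,
          round(p_correct_2pl(theta, **easy_item), 2),
          round(p_correct_2pl(theta, **hard_item), 2))
```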

  14. Models as instruments for optimizing hospital processes: a systematic review.

    Science.gov (United States)

    van Sambeek, J R C; Cornelissen, F A; Bakker, P J M; Krabbendam, J J

    2010-01-01

    The purpose of this article is to find decision-making models for the design and control of processes regarding patient flows, considering various problem types, and to find out how usable these models are for managerial decision making. A systematic review of the literature was carried out. Relevant literature from three databases was selected based on inclusion and exclusion criteria and the results were analyzed. A total of 68 articles were selected. Of these, 31 contained computer simulation models, ten contained descriptive models, and 27 contained analytical models. The review showed that descriptive models are only applied to process design problems, and that analytical and computer simulation models are applied to all types of problems to approximately the same extent. Only a few models have been validated in practice, and it seems that most models are not used for their intended purpose: to support management in decision making. The comparability of the relevant databases appears to be limited and there is an insufficient number of suitable keywords and MeSH headings, which makes searching systematically within the broad field of health care management relatively hard to accomplish. The findings give managers insight into the characteristics of various types of decision-support models and into the kinds of situations in which they are used. This is the first time literature on various kinds of models for supporting managerial decision making in hospitals has been systematically collected and assessed.

  15. Systematic development of reduced reaction mechanisms for dynamic modeling

    Science.gov (United States)

    Frenklach, M.; Kailasanath, K.; Oran, E. S.

    1986-01-01

    A method for systematically developing a reduced chemical reaction mechanism for dynamic modeling of chemically reactive flows is presented. The method is based on the postulate that if a reduced reaction mechanism faithfully describes the time evolution of both thermal and chain reaction processes characteristic of a more complete mechanism, then the reduced mechanism will describe the chemical processes in a chemically reacting flow with approximately the same degree of accuracy. Here this postulate is tested by producing a series of mechanisms of reduced accuracy, which are derived from a full detailed mechanism for methane-oxygen combustion. These mechanisms were then tested in a series of reactive flow calculations in which a large-amplitude sinusoidal perturbation is applied to a system that is initially quiescent and whose temperature is high enough to start ignition processes. Comparison of the results for systems with and without convective flow show that this approach produces reduced mechanisms that are useful for calculations of explosions and detonations. Extensions and applicability to flames are discussed.

  16. Glucose challenge test for detecting gestational diabetes mellitus: a systematic review

    NARCIS (Netherlands)

    van Leeuwen, M.; Louwerse, M. D.; Opmeer, B. C.; Limpens, J.; Serlie, M. J.; Reitsma, J. B.; Mol, B. W. J.

    2012-01-01

    Background The best strategy to identify women with gestational diabetes mellitus (GDM) is unclear. Objectives To perform a systematic review to calculate summary estimates of the sensitivity and specificity of the 50-g glucose challenge test for GDM. Search strategy Systematic search of MEDLINE,

  17. Meta-epidemiologic analysis indicates that MEDLINE searches are sufficient for diagnostic test accuracy systematic reviews

    NARCIS (Netherlands)

    van Enst, Wynanda A.; Scholten, Rob J. P. M.; Whiting, Penny; Zwinderman, Aeilko H.; Hooft, Lotty

    2014-01-01

    To investigate how the summary estimates in diagnostic test accuracy (DTA) systematic reviews are affected when searches are limited to MEDLINE. A systematic search was performed to identify DTA reviews that had conducted exhaustive searches and included a meta-analysis. Primary studies included in

  18. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    Models are playing important roles in design and analysis of chemicals/bio-chemicals based products and the processes that manufacture them. Model-based methods and tools have the potential to decrease the number of experiments, which can be expensive and time consuming, and point to candidates…, where the experimental effort could be focused. In this project a general modelling framework for systematic model building through modelling templates, which supports the reuse of existing models via its new model import and export capabilities, has been developed. The new feature for model transfer… has been developed by establishing a connection with an external modelling environment for code generation. The main contribution of this thesis is the creation of modelling templates and their connection with other modelling tools within a modelling framework. The goal was to create a user...

  19. Background model systematics for the Fermi GeV excess

    Energy Technology Data Exchange (ETDEWEB)

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power-law of break energy E_break = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying, in the case of b b̄ final states, a dark matter mass of m_χ = 49 (+6.4/−5.4) GeV.
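
    The broken power-law spectrum quoted for the excess (break energy E_break = 2.1 ± 0.2 GeV) can be written down compactly, as in the sketch below. Only the break energy is taken from the abstract; the spectral indices below and above the break, and the normalisation, are illustrative assumptions.

```python
def broken_power_law(E_GeV: float, E_break: float = 2.1,
                     gamma_low: float = 1.4, gamma_high: float = 2.6,
                     norm: float = 1.0) -> float:
    """dN/dE for a broken power law. gamma_low/gamma_high and norm are
    illustrative assumptions; only E_break ~ 2.1 GeV comes from the abstract."""
    gamma = gamma_low if E_GeV < E_break else gamma_high
    return norm * (E_GeV / E_break) ** (-gamma)

for E in (0.5, 2.1, 10.0, 100.0):
    print(E, broken_power_law(E))
```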

  20. Prediction models in women with postmenopausal bleeding: a systematic review

    NARCIS (Netherlands)

    van Hanegem, Nehalennia; Breijer, Maria C.; Opmeer, Brent C.; Mol, Ben W. J.; Timmermans, Anne

    2012-01-01

    Postmenopausal bleeding is associated with an elevated risk of having endometrial cancer. The aim of this review is to give an overview of existing prediction models on endometrial cancer in women with postmenopausal bleeding. In a systematic search of the literature, we identified nine prognostic

  1. The Social Relations Model in Family Studies: A Systematic Review

    Science.gov (United States)

    Eichelsheim, Veroni I.; Dekovic, Maja; Buist, Kirsten L.; Cook, William L.

    2009-01-01

    The Social Relations Model (SRM) allows for examination of family relations on three different levels: the individual level (actor and partner effects), the dyadic level (relationship effects), and the family level (family effect). The aim of this study was to present a systematic review of SRM family studies and identify general patterns in the…

  2. Test model of WWER core

    International Nuclear Information System (INIS)

    Tikhomirov, A. V.; Gorokhov, A. K.

    2007-01-01

    The objective of this paper is the creation of a precision test model for WWER RP neutron-physics calculations. The model is considered as a tool for verification of deterministic computer codes that enables a reduction of the conservatism of design calculations and enhances WWER RP competitiveness. Precision calculations were performed using the code MCNP5 /1/ (Monte Carlo method). The engineering computer package Sapfir_95&RC_VVER /2/, which is certified for design calculations of WWER RU neutron-physics characteristics, is used in the comparative analysis of the results. The object of simulation is the first fuel loading of the Volgodon NPP RP. Peculiarities of the transition from 2D geometry to 3D geometry in the MCNP5 calculation are shown on the full-scale model. All core components, as well as the radial and face reflectors and the automatic regulation control rods of the control and protection system, are represented in detailed description according to the design. The first stage of application of the model is an assessment of the accuracy of the core power calculation. At the second stage, the control and protection system control rod worth was assessed. Full-scale RP representation in a calculation using the code MCNP5 is time consuming, which calls for parallelization of the computational problem on a multiprocessing computer (Authors)

  3. Causal judgment from contingency information: a systematic test of the pCI rule.

    Science.gov (United States)

    White, Peter A

    2004-04-01

    Contingency information is information about the occurrence or nonoccurrence of an effect when a possible cause is present or absent. Under the evidential evaluation model, instances of contingency information are transformed into evidence and causal judgment is based on the proportion of relevant instances evaluated as confirmatory for the candidate cause. In this article, two experiments are reported that were designed to test systematic manipulations of the proportion of confirming instances in relation to other variables: the proportion of instances on which the candidate cause is present, the proportion of instances in which the effect occurs when the cause is present, and the objective contingency. Results showed that both unweighted and weighted versions of the proportion-of-confirmatory-instances rule successfully predicted the main features of the results, with the weighted version proving more successful. Other models, including the power PC theory, failed to predict the results.
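
    The proportion-of-confirmatory-instances rule described in the abstract can be illustrated with a standard 2x2 contingency table (a: cause present, effect occurs; b: cause present, effect absent; c: cause absent, effect occurs; d: cause absent, effect absent). In the unweighted version, cells a and d count as confirmatory and b and c as disconfirmatory; the weighted version multiplies each cell by a weight. The cell counts and weights below are illustrative, not values from the experiments.

```python
def pci(a: int, b: int, c: int, d: int,
        weights: tuple = (1.0, 1.0, 1.0, 1.0)) -> float:
    """Weighted proportion of confirmatory instances.
    Cells a and d are treated as confirmatory, b and c as disconfirmatory;
    weights = (w_a, w_b, w_c, w_d) are illustrative."""
    wa, wb, wc, wd = weights
    confirmatory = wa * a + wd * d
    total = wa * a + wb * b + wc * c + wd * d
    return confirmatory / total

# Unweighted: 16 confirmatory instances out of 24 -> ~0.67.
print(pci(a=12, b=4, c=4, d=4))
# Weighted variant giving cell d less influence than cell a (weights are assumptions).
print(pci(a=12, b=4, c=4, d=4, weights=(1.0, 1.0, 1.0, 0.5)))
```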

  4. Standardized Tests of Handwriting Readiness: A Systematic Review of the Literature

    Science.gov (United States)

    van Hartingsveldt, Margo J.; de Groot, Imelda J. M.; Aarts, Pauline B. M.; Nijhuis-van der Sanden, Maria W. G.

    2011-01-01

    Aim: To establish if there are psychometrically sound standardized tests or test items to assess handwriting readiness in 5- and 6-year-old children on the levels of occupations activities/tasks and performance. Method: Electronic databases were searched to identify measurement instruments. Tests were included in a systematic review if: (1)…

  5. Model test of boson mappings

    International Nuclear Information System (INIS)

    Navratil, P.; Dobes, J.

    1992-01-01

    Methods of boson mapping are tested in calculations for a simple model system of four protons and four neutrons in single-j distinguishable orbits. Two-body terms in the boson images of the fermion operators are considered. Effects of the seniority v=4 states are thus included. The treatment of unphysical states and the influence of boson space truncation are particularly studied. Both the Dyson boson mapping and the seniority boson mapping as dictated by the similarity transformed Dyson mapping do not seem to be simply amenable to truncation. This situation improves when the one-body form of the seniority image of the quadrupole operator is employed. Truncation of the boson space is addressed by using the effective operator theory with a notable improvement of results

  6. Diagnostic test accuracy of glutamate dehydrogenase for Clostridium difficile: Systematic review and meta-analysis.

    Science.gov (United States)

    Arimoto, Jun; Horita, Nobuyuki; Kato, Shingo; Fuyuki, Akiko; Higurashi, Takuma; Ohkubo, Hidenori; Endo, Hiroki; Takashi, Nonaka; Kaneko, Takeshi; Nakajima, Atsushi

    2016-07-15

    We performed this systematic review and meta-analysis to assess the diagnostic accuracy of detecting glutamate dehydrogenase (GDH) for Clostridium difficile infection (CDI) based on the hierarchical model. Two investigators electronically searched four databases. Reference tests were the stool cell cytotoxicity neutralization assay (CCNA) and stool toxigenic culture (TC). To assess the overall accuracy, we calculated the diagnostic odds ratio (DOR) using a DerSimonian-Laird random-effects model and the area under the hierarchical summary receiver operating characteristic curve (AUC) using Holling's proportional hazard models. The summary estimates of the sensitivity and the specificity were obtained using the bivariate model. According to 42 reports consisting of 3055 reference-positive comparisons and 26188 reference-negative comparisons, the DOR was 115 (95%CI: 77-172, I² = 12.0%) and the AUC was 0.970 (95%CI: 0.958-0.982). The summary estimates of sensitivity and specificity were 0.911 (95%CI: 0.871-0.940) and 0.912 (95%CI: 0.892-0.928). The positive and negative likelihood ratios were 10.4 (95%CI 8.4-12.7) and 0.098 (95%CI 0.066-0.142), respectively. Detecting GDH for the diagnosis of CDI had both high sensitivity and high specificity. Considering its low cost and prevalence, it is appropriate as a screening test for CDI.
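
    The summary measures in this abstract are tied together by simple identities: the positive and negative likelihood ratios follow directly from the pooled sensitivity and specificity, and a diagnostic odds ratio can be formed from the two likelihood ratios. The sketch below checks the reported numbers; note that the reported DOR of 115 was obtained by pooling per-study DORs, so it need not equal the value derived from the pooled sensitivity and specificity.

```python
sens, spec = 0.911, 0.912          # pooled estimates reported in the abstract

lr_pos = sens / (1 - spec)         # ~10.4, matching the reported positive likelihood ratio
lr_neg = (1 - sens) / spec         # ~0.098, matching the reported negative likelihood ratio
dor_from_pooled = lr_pos / lr_neg  # ~106; the reported DOR of 115 comes from pooling per-study DORs

print(round(lr_pos, 1), round(lr_neg, 3), round(dor_from_pooled))
```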

  7. Economic Evaluations of Pharmacogenetic and Pharmacogenomic Screening Tests: A Systematic Review. Second Update of the Literature.

    Directory of Open Access Journals (Sweden)

    Elizabeth J J Berm

    Full Text Available Due to the extended application of pharmacogenetic and pharmacogenomic screening (PGx) tests it is important to assess whether they provide good value for money. This review provides an update of the literature. A literature search was performed in PubMed and papers published between August 2010 and September 2014, investigating the cost-effectiveness of PGx screening tests, were included. Papers from 2000 until July 2010 were included via two previous systematic reviews. Studies' overall quality was assessed with the Quality of Health Economic Studies (QHES) instrument. We found 38 studies, which combined with the previous 42 studies resulted in a total of 80 included studies. An average QHES score of 76 was found. Since 2010, more studies were funded by pharmaceutical companies. Most recent studies performed cost-utility analysis, univariate and probabilistic sensitivity analyses, and discussed limitations of their economic evaluations. Most studies indicated favorable cost-effectiveness. The majority of evaluations did not provide information regarding the intrinsic value of the PGx test. There were considerable differences in the costs for PGx testing. Reporting of the direction and magnitude of bias on the cost-effectiveness estimates, as well as motivation for the chosen economic model and perspective, was frequently missing. Application of PGx tests was mostly found to be a cost-effective or cost-saving strategy. We found that only a minority of recent pharmacoeconomic evaluations assessed the intrinsic value of the PGx tests. There was an increase in the number of studies and in the reporting of quality-associated characteristics. To improve future evaluations, scenario analyses including a broad range of PGx test costs, and equal costs of comparator drugs to assess the intrinsic value of the PGx tests, are recommended. In addition, robust clinical evidence regarding the efficacy of PGx tests remains of utmost importance.

  8. Hospitality and Tourism Online Review Research: A Systematic Analysis and Heuristic-Systematic Model

    Directory of Open Access Journals (Sweden)

    Sunyoung Hlee

    2018-04-01

    Full Text Available With the tremendous growth and potential of online consumer reviews, online reviews of hospitality and tourism are now playing a significant role in consumer attitudes and buying behaviors. This study reviewed and analyzed hospitality- and tourism-related articles published in academic journals. A systematic approach was used to analyze 55 research articles between January 2008 and December 2017. This study presented a brief synthesis of research by investigating content-related characteristics of hospitality and tourism online reviews (HTORs) in different market segments. Two research questions were addressed. Building upon our literature analysis, we used the heuristic-systematic model (HSM) to summarize and classify the characteristics affecting consumer perception in previous HTOR studies. We believe that the framework helps researchers to identify research topics in the extended HTOR literature and to point out possible directions for future studies.

  9. Systematic approach in protection and ergonomics testing personal protective equipment

    NARCIS (Netherlands)

    Hartog. E.A. den

    2009-01-01

    In the area of personal protection against chemical and biological (CB) agents there is a strong focus on testing the materials against the relevant threats. The testing programs in this area are elaborate and are aimed to guarantee that the material protects according to specifications. This

  10. How is genetic testing evaluated? A systematic review of the literature.

    Science.gov (United States)

    Pitini, Erica; De Vito, Corrado; Marzuillo, Carolina; D'Andrea, Elvira; Rosso, Annalisa; Federici, Antonio; Di Maria, Emilio; Villari, Paolo

    2018-02-08

    Given the rapid development of genetic tests, an assessment of their benefits, risks, and limitations is crucial for public health practice. We performed a systematic review aimed at identifying and comparing the existing evaluation frameworks for genetic tests. We searched PUBMED, SCOPUS, ISI Web of Knowledge, Google Scholar, Google, and gray literature sources for any documents describing such frameworks. We identified 29 evaluation frameworks published between 2000 and 2017, mostly based on the ACCE Framework (n = 13 models), or on the HTA process (n = 6), or both (n = 2). Others refer to the Wilson and Jungner screening criteria (n = 3) or to a mixture of different criteria (n = 5). Due to the widespread use of the ACCE Framework, the most frequently used evaluation criteria are analytic and clinical validity, clinical utility and ethical, legal and social implications. Less attention is given to the context of implementation. An economic dimension is always considered, but not in great detail. Consideration of delivery models, organizational aspects, and consumer viewpoint is often lacking. A deeper analysis of such context-related evaluation dimensions may strengthen a comprehensive evaluation of genetic tests and support the decision-making process.

  11. Business model framework applications in health care: A systematic review.

    Science.gov (United States)

    Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl

    2017-11-01

    It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.

  12. Simulation models in population breast cancer screening: A systematic review.

    Science.gov (United States)

    Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H

    2015-08-01

    The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed which incorporated model type; input parameters; modeling approach, transparency of input data sources/assumptions, sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized controlled trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model) with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) as compared with the 10% MR (95% CI: -2% to 21%) estimated from optimal RCTs. Only recently have potential harms due to regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed a high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. 46 CFR 154.431 - Model test.

    Science.gov (United States)

    2010-10-01

    ...(c). (b) Analyzed data of a model test for the primary and secondary barrier of the membrane tank... Model test. (a) The primary and secondary barrier of a membrane tank, including the corners and joints...

  14. Systematic reduction of a detailed atrial myocyte model

    Science.gov (United States)

    Lombardo, Daniel M.; Rappel, Wouter-Jan

    2017-09-01

    Cardiac arrhythmias are a major health concern and often involve poorly understood mechanisms. Mathematical modeling is able to provide insights into these mechanisms which might result in better treatment options. A key element of this modeling is a description of the electrophysiological properties of cardiac cells. A number of electrophysiological models have been developed, ranging from highly detailed and complex models, containing numerous parameters and variables, to simplified models in which variables and parameters no longer directly correspond to electrophysiological quantities. In this study, we present a systematic reduction of the complexity of the detailed model of Koivumaki et al. using the recently developed manifold boundary approximation method. We reduce the original model, containing 42 variables and 37 parameters, to a model with only 11 variables and 5 parameters and show that this reduced model can accurately reproduce the action potential shape and restitution curve of the original model. The reduced model contains only five currents and all variables and parameters can be directly linked to electrophysiological quantities. Due to its reduction in complexity, simulation times of our model are decreased more than three-fold. Furthermore, fitting the reduced model to clinical data is much more efficient, a potentially important step towards patient-specific modeling.

  15. Exploring sources of heterogeneity in systematic reviews of diagnostic tests

    NARCIS (Netherlands)

    Lijmer, Jeroen G.; Bossuyt, Patrick M. M.; Heisterkamp, Siem H.

    2002-01-01

    It is indispensable for any meta-analysis that potential sources of heterogeneity are examined, before one considers pooling the results of primary studies into summary estimates with enhanced precision. In reviews of studies on the diagnostic accuracy of tests, variability beyond chance can be

  16. Accuracy of clinical tests in the diagnosis of anterior cruciate ligament injury: A systematic review

    NARCIS (Netherlands)

    M.S. Swain (Michael S.); N. Henschke (Nicholas); S.J. Kamper (Steven); A.S. Downie (Aron S.); B.W. Koes (Bart); C. Maher (Chris)

    2014-01-01

    Background: Numerous clinical tests are used in the diagnosis of anterior cruciate ligament (ACL) injury but their accuracy is unclear. The purpose of this study is to evaluate the diagnostic accuracy of clinical tests for the diagnosis of ACL injury. Methods: Study Design: Systematic

  17. In-vitro orthodontic bond strength testing : A systematic review and meta-analysis

    NARCIS (Netherlands)

    Finnema, K.J.; Ozcan, M.; Post, W.J.; Ren, Y.J.; Dijkstra, P.U.

    INTRODUCTION: The aims of this study were to systematically review the available literature regarding in-vitro orthodontic shear bond strength testing and to analyze the influence of test conditions on bond strength. METHODS: Our data sources were Embase and Medline. Relevant studies were selected

  18. Accuracy of diagnostic tests for clinically suspected upper extremity deep vein thrombosis: a systematic review

    NARCIS (Netherlands)

    Di Nisio, M.; van Sluis, G. L.; Bossuyt, P. M. M.; Büller, H. R.; Porreca, E.; Rutjes, A. W. S.

    2010-01-01

    Background: The best available test for the diagnosis of upper extremity deep venous thrombosis (UEDVT) is contrast venography. The aim of this systematic review was to assess whether the diagnostic accuracy of other tests for clinically suspected UEDVT is high enough to justify their use in

  19. A Model for Quantifying Sources of Variation in Test-day Milk Yield ...

    African Journals Online (AJOL)

    A cow's test-day milk yield is influenced by several systematic environmental effects, which have to be removed when estimating the genetic potential of an animal. The present study quantified the variation due to test date and month of test in test-day lactation yield records using full and reduced models. The data consisted ...

  20. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  1. Phosphate Kinetic Models in Hemodialysis: A Systematic Review.

    Science.gov (United States)

    Laursen, Sisse H; Vestergaard, Peter; Hejlesen, Ole K

    2018-01-01

    Understanding phosphate kinetics in dialysis patients is important for the prevention of hyperphosphatemia and related complications. One approach to gain new insights into phosphate behavior is physiologic modeling. Various models that describe and quantify intra- and/or interdialytic phosphate kinetics have been proposed, but there is a dearth of comprehensive comparisons of the available models. The objective of this analysis was to provide a systematic review of existing published models of phosphate metabolism in the setting of maintenance hemodialysis therapy. Systematic review. Hemodialysis patients. Studies published in peer-reviewed journals in English about phosphate kinetic modeling in the setting of hemodialysis therapy. Modeling equations from specific reviewed studies. Changes in plasma phosphate or serum phosphate concentrations. Of 1,964 nonduplicate studies evaluated, 11 were included, comprising 9 different phosphate models with 1-, 2-, 3-, or 4-compartment assumptions. Between 2 and 11 model parameters were included in the models studied. Quality scores of the studies using the Newcastle-Ottawa Scale ranged from 2 to 11 (scale, 0-14). 2 studies were considered low quality, 6 were considered medium quality, and 3 were considered high quality. Only English-language studies were included. Many parameters known to influence phosphate balance are not included in existing phosphate models that do not fully reflect the physiology of phosphate metabolism in the setting of hemodialysis. Moreover, models have not been sufficiently validated for their use as a tool to simulate phosphate kinetics in hemodialysis therapy. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
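
    As an illustration of the simplest class of models covered by this review, a two-compartment description of intradialytic phosphate kinetics can be written as a pair of coupled differential equations: an extracellular pool cleared by the dialyzer and an intracellular pool that refills it. The sketch below uses a simple explicit Euler integration; all parameter values are illustrative assumptions, not values taken from the reviewed models.

```python
# Minimal two-compartment intradialytic phosphate sketch (all parameters are assumed values).
def simulate(dt_min: float = 1.0, duration_min: float = 240.0):
    V_e, V_i = 15.0, 25.0        # extracellular / intracellular distribution volumes (L), assumed
    K_d = 0.15                   # dialyzer phosphate clearance (L/min), assumed
    k_ie = 0.08                  # intercompartmental transfer coefficient (L/min), assumed
    C_e, C_i = 1.8, 1.8          # initial plasma and intracellular concentrations (mmol/L), assumed
    t, series = 0.0, [(0.0, C_e)]
    while t < duration_min:
        flux = k_ie * (C_i - C_e)              # mobilisation from the intracellular pool (mmol/min)
        dCe = (flux - K_d * C_e) / V_e         # dialytic removal from the extracellular pool
        dCi = -flux / V_i
        C_e, C_i, t = C_e + dCe * dt_min, C_i + dCi * dt_min, t + dt_min
        series.append((t, C_e))
    return series

print(round(simulate()[-1][1], 2))   # plasma phosphate after a 4-hour session (illustrative)
```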

  2. A systematic review of current osteoporotic metaphyseal fracture animal models.

    Science.gov (United States)

    Wong, R M Y; Choy, M H V; Li, M C M; Leung, K-S; K-H Chow, S; Cheung, W-H; Cheng, J C Y

    2018-01-01

    The treatment of osteoporotic fractures is a major challenge, and the enhancement of healing is critical as a major goal in modern fracture management. Most osteoporotic fractures occur at the metaphyseal bone region but few models exist and the healing is still poorly understood. A systematic review was conducted to identify and analyse the appropriateness of current osteoporotic metaphyseal fracture animal models. A literature search was performed on the Pubmed, Embase, and Web of Science databases, and relevant articles were selected. A total of 19 studies were included. Information on the animal, induction of osteoporosis, fracture technique, site and fixation, healing results, and utility of the model were extracted. Fracture techniques included drill hole defects (3 of 19), bone defects (3 of 19), partial osteotomy (1 of 19), and complete osteotomies (12 of 19). Drill hole models and incomplete osteotomy models are easy to perform and allow the study of therapeutic agents but do not represent the usual clinical setting. Additionally, biomaterials can be filled into drill hole defects for analysis. Complete osteotomy models are most commonly used and are best suited for the investigation of therapeutic drugs or noninvasive interventions. The metaphyseal defect models allow the study of biomaterials, which are associated with complex and comminuted osteoporotic fractures. For a clinically relevant model, we propose that an animal model should satisfy the following criteria to study osteoporotic fracture healing: 1) induction of osteoporosis, 2) complete osteotomy or defect at the metaphysis unilaterally, and 3) internal fixation. Cite this article : R. M. Y. Wong, M. H. V. Choy, M. C. M. Li, K-S. Leung, S. K-H. Chow, W-H. Cheung, J. C. Y. Cheng. A systematic review of current osteoporotic metaphyseal fracture animal models. Bone Joint Res 2018;7:6-11. DOI: 10.1302/2046-3758.71.BJR-2016-0334.R2. © 2018 Wong et al.

  3. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world
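
    The systematic errors discussed here arise largely when measured gas volumes are reported without normalising them to dry gas at standard temperature and pressure. The ideal-gas correction that removes the ambient temperature, pressure and water-vapour effects is sketched below; the example numbers are illustrative only.

```python
def gas_volume_at_stp(v_measured_ml: float, t_ambient_c: float,
                      p_ambient_kpa: float, p_water_vapour_kpa: float) -> float:
    """Convert a measured (wet) gas volume to dry volume at 0 degC and 101.325 kPa,
    assuming ideal-gas behaviour."""
    T_STP, P_STP = 273.15, 101.325
    t_ambient_k = t_ambient_c + 273.15
    return v_measured_ml * (p_ambient_kpa - p_water_vapour_kpa) / P_STP * T_STP / t_ambient_k

# Example: 500 mL of water-saturated gas read at 25 degC and 98 kPa (illustrative values).
print(round(gas_volume_at_stp(500.0, 25.0, 98.0, 3.17), 1))
```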

  4. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van

    2007-01-01

    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension

  5. [Reliability and validity of the modified Allen test: a systematic review and metanalysis].

    Science.gov (United States)

    Romeu-Bordas, Óscar; Ballesteros-Peña, Sendoa

    2017-01-01

    The objective was to evaluate the reliability and validity of the modified Allen test in screening for collateral circulation deficits in the palm and for predicting distal hand ischemia. We performed a systematic review of the literature indexed in 6 databases. We developed a search strategy to locate studies comparing the Allen test to Doppler ultrasound to detect circulation deficits in the hand, studies assessing the incidence of ischemic events on arterial puncture after an abnormal Allen test result, and studies of Allen test interobserver agreement. Fourteen articles met the inclusion criteria. Nine assessed the validity of the test as a screening tool for detecting collateral circulation deficits. From data published in 3 studies that had followed comparable designs we calculated a sensitivity of 77% and a specificity of 93% for the Allen test. In the four studies that assessed the ability of the test to predict ischemia, no ischemic hand events following arterial puncture were reported in patients with abnormal Allen test results. A single study assessing the test's reliability reported an interobserver agreement rate of 71.5%. This systematic review and meta-analysis leads to the conclusion that the Allen test does not have sufficient diagnostic validity to serve as a screening tool for collateral circulation deficits in the hand. Nor is it a good predictor of hand ischemia after arterial puncture. Moreover, its reliability is limited. There is insufficient evidence to support its systematic use before arterial puncture.

  6. Systematic testing of flood adaptation options in urban areas through simulations

    Science.gov (United States)

    Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten

    2016-04-01

    While models can quantify flood risk in great detail, the results are subject to a number of deep uncertainties. Climate dependent drivers such as sea level and rainfall intensities, population growth and economic development all have a strong influence on future flood risk, but future developments can only be estimated coarsely. In such a situation, robust decision making frameworks call for the systematic evaluation of mitigation measures against ensembles of potential futures. We have coupled the urban development software DAnCE4Water and the 1D-2D hydraulic simulation package MIKE FLOOD to create a framework that allows for such systematic evaluations, considering mitigation measures under a variety of climate futures and urban development scenarios. A wide spectrum of mitigation measures can be considered in this setup, ranging from structural measures such as modifications of the sewer network over local retention of rainwater and the modification of surface flow paths to policy measures such as restrictions on urban development in flood prone areas or master plans that encourage compact development. The setup was tested in a 300 ha residential catchment in Melbourne, Australia. The results clearly demonstrate the importance of considering a range of potential futures in the planning process. For example, local rainwater retention measures strongly reduce flood risk in a scenario with a moderate increase in rain intensities and moderate urban growth, but their performance varies strongly, yielding very little improvement in situations with pronounced climate change. The systematic testing of adaptation measures further allows for the identification of so-called adaptation tipping points, i.e. levels for the drivers of flood risk where the desired level of flood risk is exceeded despite the implementation of (a combination of) mitigation measures. Assuming a range of development rates for the drivers of flood risk, such tipping points can be translated into
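
    Computationally, the adaptation tipping point described at the end of the abstract is the first level of a driver (e.g. the percentage increase in rainfall intensity) at which simulated flood risk exceeds the acceptable level despite a given set of measures. A schematic scan is sketched below; the risk function and threshold are placeholders standing in for the coupled DAnCE4Water/MIKE FLOOD simulations, not a model of them.

```python
# Schematic tipping-point scan; flood_risk() stands in for a full 1D-2D simulation run.
def flood_risk(rain_increase_pct: float, with_retention: bool) -> float:
    """Placeholder risk score (higher = worse); purely illustrative numbers."""
    base = 1.0 + 0.04 * rain_increase_pct
    return base * (0.6 if with_retention else 1.0)

def tipping_point(levels, acceptable_risk: float = 1.5, with_retention: bool = True):
    """Return the first driver level at which risk exceeds the acceptable threshold."""
    for level in levels:
        if flood_risk(level, with_retention) > acceptable_risk:
            return level
    return None   # no tipping point within the scanned range

print(tipping_point(range(0, 51, 5)))   # e.g. 40 (% rainfall increase), illustrative
```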

  7. Testing of constitutive models in LAME.

    Energy Technology Data Exchange (ETDEWEB)

    Hammerand, Daniel Carl; Scherzinger, William Mark

    2007-09-01

    Constitutive models for computational solid mechanics codes are in LAME--the Library of Advanced Materials for Engineering. These models describe complex material behavior and are used in our finite deformation solid mechanics codes. To ensure the correct implementation of these models, regression tests have been created for constitutive models in LAME. A selection of these tests is documented here. Constitutive models are an important part of any solid mechanics code. If an analysis code is meant to provide accurate results, the constitutive models that describe the material behavior need to be implemented correctly. Ensuring the correct implementation of constitutive models is the goal of a testing procedure that is used with the Library of Advanced Materials for Engineering (LAME) (see [1] and [2]). A test suite for constitutive models can serve three purposes. First, the test problems provide the constitutive model developer a means to test the model implementation. This is an activity that is always done by any responsible constitutive model developer. Retaining the test problem in a repository where the problem can be run periodically is an excellent means of ensuring that the model continues to behave correctly. A second purpose of a test suite for constitutive models is that it gives application code developers confidence that the constitutive models work correctly. This is extremely important since any analyst that uses an application code for an engineering analysis will associate a constitutive model in LAME with the application code, not LAME. Therefore, ensuring the correct implementation of constitutive models is essential for application code teams. A third purpose of a constitutive model test suite is that it provides analysts with example problems that they can look at to understand the behavior of a specific model. Since the choice of a constitutive model, and the properties that are used in that model, have an enormous effect on the results of an

  8. Systematic integration of experimental data and models in systems biology.

    Science.gov (United States)

    Li, Peter; Dada, Joseph O; Jameson, Daniel; Spasic, Irena; Swainston, Neil; Carroll, Kathleen; Dunn, Warwick; Khan, Farid; Malys, Naglis; Messiha, Hanan L; Simeonidis, Evangelos; Weichart, Dieter; Winder, Catherine; Wishart, Jill; Broomhead, David S; Goble, Carole A; Gaskell, Simon J; Kell, Douglas B; Westerhoff, Hans V; Mendes, Pedro; Paton, Norman W

    2010-11-29

    The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components and their biochemical reactions, and corresponding parameters. Automating the assembly and use of systems biology models is dependent upon data integration processes involving the interoperation of data and analytical resources. Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories, the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Distributed information about metabolic reactions that have been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.
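
    The workflows described here assemble an SBML model programmatically from network and kinetic data. A minimal sketch of that kind of programmatic assembly, using the python-libsbml bindings (assumed to be installed) and a single made-up compartment and species, is shown below; the real workflows instead draw their components and parameters from the yeast genome-scale model and the SABIO-RK and quantitative-data repositories.

```python
import libsbml  # python-libsbml, assumed available

doc = libsbml.SBMLDocument(3, 1)          # SBML Level 3 Version 1 container
model = doc.createModel()
model.setId("glycolysis_sketch")          # illustrative model id

comp = model.createCompartment()          # one made-up compartment
comp.setId("cytosol")
comp.setSize(1.0)
comp.setConstant(True)

glc = model.createSpecies()               # one made-up species
glc.setId("glucose")
glc.setCompartment("cytosol")
glc.setInitialConcentration(5.0)
glc.setConstant(False)
glc.setBoundaryCondition(False)
glc.setHasOnlySubstanceUnits(False)

print(libsbml.writeSBMLToString(doc)[:120])  # serialised SBML, truncated for display
```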

  9. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses, and more refined models have to be based on experimental testing. … The paper presents an overview of tests reported in the literature and gives examples of the authors' own test results…

  10. Electronic searching of the literature for systematic reviews of screening and diagnostic tests for preterm birth.

    Science.gov (United States)

    Honest, Honest; Bachmann, Lucas M; Khan, Khalid

    2003-03-26

    Published systematic reviews on prediction of preterm birth have tended to focus on a limited number of tests, and their search strategies have often been relatively simple. Evaluation of all available tests in a systematic review will require a broad search strategy. To describe a case study of electronic searching for a systematic review of accuracy studies evaluating all tests for predicting preterm birth. The search strategy, developed to capture the literature concerning all the tests en masse, consisted of formulation of an appropriate combination of search terms, pilot searches to refine the search term combination, selection of relevant databases, and citation retrieval from the refined searches for selection of potentially relevant papers. Electronic searches were carried out on general bibliographic databases (Biosis, Embase, Medline, Pascal and Scisearch) and specialised databases (Database of Abstracts of Reviews of Effectiveness, Medion, National Research Register, Cochrane Controlled Trial Register and Cochrane Database of Systematic Reviews). A total of 30076 citations were identified. Of these, 8855 (29%) citations were duplicates either within a database or across databases. Of the remaining 21221 citations, 3333 were considered potentially relevant to the review after assessment by two reviewers. These citations covered 19 different tests for predicting preterm birth. This case study suggests that, with a concerted effort to organise and manage the electronic searching, it is feasible to undertake broad searches for systematic reviews with multiple questions.

  11. Clinical tests to diagnose lumbar spondylolysis and spondylolisthesis: A systematic review.

    Science.gov (United States)

    Alqarni, Abdullah M; Schneiders, Anthony G; Cook, Chad E; Hendrick, Paul A

    2015-08-01

    The aim of this paper was to systematically review the diagnostic ability of clinical tests to detect lumbar spondylolysis and spondylolisthesis. A systematic literature search of six databases, with no language restrictions, from 1950 to 2014 was concluded on February 1, 2014. Clinical tests were required to be compared against imaging reference standards and report, or allow computation of, common diagnostic values. The systematic search yielded a total of 5164 articles with 57 retained for full-text examination, from which 4 met the full inclusion criteria for the review. Study heterogeneity precluded a meta-analysis of included studies. Fifteen different clinical tests were evaluated for their ability to diagnose lumbar spondylolisthesis and one test for its ability to diagnose lumbar spondylolysis. The one-legged hyperextension test demonstrated low to moderate sensitivity (50%-73%) and low specificity (17%-32%) to diagnose lumbar spondylolysis, while the lumbar spinous process palpation test was the optimal diagnostic test for lumbar spondylolisthesis, returning high specificity (87%-100%) and moderate to high sensitivity (60%-88%) values. Lumbar spondylolysis and spondylolisthesis are identifiable causes of low back pain (LBP) in athletes. There appears to be utility to lumbar spinous process palpation for the diagnosis of lumbar spondylolisthesis; however, the one-legged hyperextension test has virtually no value in diagnosing patients with spondylolysis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Geochemical Testing And Model Development - Residual Tank Waste Test Plan

    International Nuclear Information System (INIS)

    Cantrell, K.J.; Connelly, M.P.

    2010-01-01

    This Test Plan describes the testing and chemical analyses for release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  13. GEOCHEMICAL TESTING AND MODEL DEVELOPMENT - RESIDUAL TANK WASTE TEST PLAN

    Energy Technology Data Exchange (ETDEWEB)

    CANTRELL KJ; CONNELLY MP

    2010-03-09

    This Test Plan describes the testing and chemical analyses for release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  14. Atomic Action Refinement in Model Based Testing

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.

    2007-01-01

    In model based testing (MBT) test cases are derived from a specification of the system that we want to test. In general the specification is more abstract than the implementation. This may result in 1) test cases that are not executable, because their actions are too abstract (the implementation

  15. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
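
    The item-selection idea summarised here (maximise test information at the current ability estimate while honouring content constraints) can be illustrated with a deliberately simplified greedy sketch; the 2PL item parameters and the single content constraint below are hypothetical, and the sketch does not reproduce the shadow-test assembly the model actually uses.

      import numpy as np

      def item_information(theta: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
          """Fisher information of 2PL items at ability theta."""
          p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          return a ** 2 * p * (1.0 - p)

      # Hypothetical item pool: discrimination a, difficulty b, and a content label per item.
      a = np.array([1.2, 0.8, 1.5, 1.0, 1.7])
      b = np.array([-0.5, 0.0, 0.3, 1.0, -1.2])
      content = np.array(["algebra", "geometry", "algebra", "geometry", "algebra"])

      def select_next_item(theta, administered, max_algebra=2):
          """Pick the most informative unused item, honouring a toy content constraint."""
          info = item_information(theta, a, b)
          n_algebra = sum(content[i] == "algebra" for i in administered)
          candidates = [i for i in range(len(a))
                        if i not in administered
                        and not (content[i] == "algebra" and n_algebra >= max_algebra)]
          return max(candidates, key=lambda i: info[i])

      print(select_next_item(theta=0.2, administered=[0]))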

  16. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  17. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  18. Impact of opportunistic testing in a systematic cervical cancer screening program: a nationwide registry study.

    Science.gov (United States)

    Tranberg, Mette; Larsen, Mette Bach; Mikkelsen, Ellen M; Svanholm, Hans; Andersen, Berit

    2015-07-21

    Systematic screening for precancerous cervical lesions has resulted in decreased incidence and mortality of cervical cancer. However, even in systematic screening programs, many women are still tested opportunistically. This study aimed to determine the spread of opportunistic testing in a systematic cervical cancer screening program, to assess the impact of opportunistic testing in terms of detecting cytological abnormalities, and to examine the associations between sociodemography and opportunistic testing. A nationwide registry study was undertaken including women aged 23-49 years (n = 807,624) with a cervical cytology between 2010 and 2013. The women were categorised into: 1) screening after invitation; 2) routine opportunistic testing, if they were either tested more than 9 months after the latest invitation or between 2.5 years and 3 years after the latest cervical cytology; and 3) sporadic opportunistic testing, if they were tested less than 2.5 years after the latest cervical cytology. Cytological diagnoses of women in each of the categories were identified, and prevalence proportion differences (PPD) and 95% confidence intervals (CIs) were used to explore group differences. Associations between sociodemography and undergoing opportunistic testing were established by multinomial logistic regression. In total, 28.8% of the cervical cytologies were due to either routine (20.7%) or sporadic (8.1%) opportunistic testing. Among women undergoing routine opportunistic testing, a larger proportion had high-grade squamous intraepithelial abnormalities than invited women (PPD: 0.6%, 95% CI: 0.03-1.17%). A similar proportion of cytological abnormalities among women undergoing sporadic opportunistic testing and invited women was found. In multivariate analyses, younger age, being single or a social welfare recipient and residence region (North Denmark) were especially associated with opportunistic testing (routine or sporadic). One fourth of cervical cytologies in this study were
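
    A rough sketch of the categorisation rules described above (the 9-month and 2.5-3-year thresholds) is given below; the function and field names are hypothetical and not taken from the study.

      from datetime import date, timedelta
      from typing import Optional

      def classify_cytology(test_date: date,
                            last_invitation: Optional[date],
                            last_cytology: Optional[date]) -> str:
          """Assign a cervical cytology to a screening category using the interval
          rules summarised in the abstract (hypothetical, simplified helper)."""
          if last_cytology is not None:
              since_cytology = test_date - last_cytology
              if since_cytology < timedelta(days=int(2.5 * 365)):
                  return "sporadic opportunistic"     # < 2.5 years after latest cytology
              if since_cytology <= timedelta(days=3 * 365):
                  return "routine opportunistic"      # 2.5-3 years after latest cytology
          if last_invitation is not None and test_date - last_invitation > timedelta(days=9 * 30):
              return "routine opportunistic"          # > 9 months after latest invitation
          return "screening after invitation"

      # Example: tested one month after an invitation, no previous cytology on record.
      print(classify_cytology(date(2012, 5, 1), date(2012, 4, 1), None))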

  19. Tin Whisker Testing and Modeling

    Science.gov (United States)

    2015-11-01

    Abbreviations: Center for Advanced Life Cycle Engineering, University of Maryland; CTE (Coefficient of Thermal Expansion); DAU (Defense Acquisition University); PCB (Printed Circuit Board, synonymous with PWB); PWB (Printed Wiring Board, synonymous with PCB); PCTC (simulated power cycling thermal cycling). The report addresses DoD-focused tin whisker risk assessments and whisker growth mechanisms (long-term testing, corrosion/oxidation in humidity, and thermal cycling).

  20. HIV Testing among Men Who Have Sex with Men (MSM): Systematic Review of Qualitative Evidence

    Science.gov (United States)

    Lorenc, Theo; Marrero-Guillamon, Isaac; Llewellyn, Alexis; Aggleton, Peter; Cooper, Chris; Lehmann, Angela; Lindsay, Catriona

    2011-01-01

    We conducted a systematic review of qualitative evidence relating to the views and attitudes of men who have sex with men (MSM) concerning testing for HIV. Studies conducted in high-income countries (Organisation for Economic Co-operation and Development members) since 1996 were included. Seventeen studies were identified, most of gay or bisexual…

  1. Diagnostic accuracy of scapular physical examination tests for shoulder disorders: a systematic review.

    Science.gov (United States)

    Wright, Alexis A; Wassinger, Craig A; Frank, Mason; Michener, Lori A; Hegedus, Eric J

    2013-09-01

    To systematically review and critique the evidence regarding the diagnostic accuracy of physical examination tests for the scapula in patients with shoulder disorders. A systematic, computerised literature search of PubMED, EMBASE, CINAHL and the Cochrane Library databases (from database inception through January 2012) using keywords related to diagnostic accuracy of physical examination tests of the scapula. The Quality Assessment of Diagnostic Accuracy Studies tool was used to critique the quality of each paper. Eight articles met the inclusion criteria; three were considered to be of high quality. Of the three high-quality studies, two were in reference to a 'diagnosis' of shoulder pain. Only one high-quality article referenced specific shoulder pathology of acromioclavicular dislocation with reported sensitivity of 71% and 41% for the scapular dyskinesis and SICK scapula test, respectively. Overall, no physical examination test of the scapula was found to be useful in differentially diagnosing pathologies of the shoulder.

  2. HIV Testing and Counseling Among Female Sex Workers: A Systematic Literature Review.

    Science.gov (United States)

    Tokar, Anna; Broerse, Jacqueline E W; Blanchard, James; Roura, Maria

    2018-02-20

    HIV testing uptake continues to be low among Female Sex Workers (FSWs). We synthesize evidence on barriers and facilitators to HIV testing among FSWs, as well as frequencies of testing, willingness to test, and return rates to collect results. We systematically searched the MEDLINE/PubMed, EMBASE, and SCOPUS databases for articles published in English between January 2000 and November 2017. Out of 5036 references screened, we retained 36 papers. The two barriers to HIV testing most commonly reported were financial and time costs (including low income, transportation costs, time constraints, and formal/informal payments) and the stigma and discrimination ascribed to HIV-positive people and sex workers. Social support facilitated testing, with consistently higher uptake amongst married FSWs and women who were encouraged to test by peers and managers. The consistent finding that social support facilitated HIV testing calls for its inclusion into current HIV testing strategies addressed at FSWs.

  3. Coronary heart disease policy models: a systematic review

    Directory of Open Access Journals (Sweden)

    Capewell Simon

    2006-08-01

    Full Text Available Abstract Background The prevention and treatment of coronary heart disease (CHD) is complex. A variety of models have therefore been developed to try and explain past trends and predict future possibilities. The aim of this systematic review was to evaluate the strengths and limitations of existing CHD policy models. Methods A search strategy was developed, piloted and run in MEDLINE and EMBASE electronic databases, supplemented by manually searching reference lists of relevant articles and reviews. Two reviewers independently checked the papers for inclusion and appraisal. All CHD modelling studies were included which addressed a defined population and reported on one or more key outcomes (deaths prevented, life years gained, mortality, incidence, prevalence, disability or cost of treatment). Results In total, 75 articles describing 42 models were included; 12 (29%) of the 42 models were micro-simulation, 8 (19%) cell-based, and 8 (19%) life table analyses, while 14 (33%) used other modelling methods. Outcomes most commonly reported were cost-effectiveness (36%), numbers of deaths prevented (33%), life-years gained (23%) or CHD incidence (23%). Among the 42 models, 29 (69%) included one or more risk factors for primary prevention, while 8 (19%) just considered CHD treatments. Only 5 (12%) were comprehensive, considering both risk factors and treatments. The six best-developed models are summarised in this paper; all are considered in detail in the appendices. Conclusion Existing CHD policy models vary widely in their depth, breadth, quality, utility and versatility. Few models have been calibrated against observed data, replicated in different settings or adequately validated. Before being accepted as a policy aid, any CHD model should provide an explicit statement of its aims, assumptions, outputs, strengths and limitations.

  4. Systematic evaluation of non-animal test methods for skin sensitisation safety assessment

    OpenAIRE

    Reisinger, Kerstin; Hoffmann, Sebastian; Alépée, Nathalie; Ashikaga, Takao; Barroso, Joao; Elcombe, Cliff; Gellatly, Nicola; Galbiati, Valentina; Gibbs, Susan; Groux, Hervé; Hibatallah, Jalila; Keller, Donald; Kern, Petra; Klaric, Martina; Kolle, Susanne

    2015-01-01

    The need for non-animal data to assess skin sensitisation properties of substances, especially cosmetics ingredients, has spawned the development of many in vitro methods. As it is widely believed that no single method can provide a solution, the Cosmetics Europe Skin Tolerance Task Force has defined a three-phase framework for the development of a non-animal testing strategy for skin sensitisation potency prediction. The results of the first phase - systematic evaluation of 16 test methods -...

  5. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
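
    As a generic illustration of the fine-grained testing advocated here (not code from any particular climate model), a small numerical kernel such as a running mean can be covered by fast unit tests in the style of pytest.

      import numpy as np
      import pytest

      def running_mean(field: np.ndarray, window: int) -> np.ndarray:
          """Simple 1-D running mean, a stand-in for a small numerical kernel."""
          if window < 1 or window > field.size:
              raise ValueError("window must be between 1 and len(field)")
          kernel = np.ones(window) / window
          return np.convolve(field, kernel, mode="valid")

      def test_constant_field_is_unchanged():
          field = np.full(10, 3.0)
          assert np.allclose(running_mean(field, 4), 3.0)

      def test_output_length():
          field = np.arange(10.0)
          assert running_mean(field, 4).size == 10 - 4 + 1

      def test_invalid_window_raises():
          with pytest.raises(ValueError):
              running_mean(np.arange(5.0), 0)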

  6. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  7. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  8. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  9. Field testing of bioenergetic models

    International Nuclear Information System (INIS)

    Nagy, K.A.

    1985-01-01

    Doubly labeled water provides a direct measure of the rate of carbon dioxide production by free-living animals. With appropriate conversion factors, based on chemical composition of the diet and assimilation efficiency, field metabolic rate (FMR), in units of energy expenditure, and field feeding rate can be estimated. Validation studies indicate that doubly labeled water measurements of energy metabolism are accurate to within 7% in reptiles, birds, and mammals. This paper discusses the use of doubly labeled water to generate empirical models for FMR and food requirements for a variety of animals

  10. Systematic Testing should not be a Topic in the Computer Science Curriculum!

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2003-01-01

    In this paper we argue that treating "testing" as an isolated topic is a wrong approach in computer science and software engineering teaching. Instead testing should pervade practical topics and exercises in the computer science curriculum to teach students the importance of producing software...... of high quality. We point out that we, as teachers, are partly to blame that many software products are of low quality. We describe a set of teaching guidelines that conveys our main pedagogical point to the students: that systematic testing is important, rewarding, and fun, and that testing should...

  11. Diagnostic accuracy of xpert test in tuberculosis detection: A systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Ravdeep Kaur

    2016-01-01

    Full Text Available Background: The World Health Organization (WHO) recommends the use of the Xpert MTB/RIF assay for rapid diagnosis of tuberculosis (TB) and detection of rifampicin resistance. This systematic review was done to assess the diagnostic accuracy and cost-effectiveness of the Xpert MTB/RIF assay. Methods: A systematic literature search was conducted in the following databases: Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews, MEDLINE, PUBMED, Scopus, Science Direct and Google Scholar for relevant studies published between 2010 and December 2014. Studies given in the systematic reviews were accessed separately and used for analysis. Selection of studies, data extraction and assessment of quality of included studies were performed independently by two reviewers. Studies evaluating the diagnostic accuracy of the Xpert MTB/RIF assay among adult or predominantly adult patients (≥14 years), presumed to have pulmonary TB with or without HIV infection, were included in the review. Also, studies that had assessed the diagnostic accuracy of the Xpert MTB/RIF assay using sputum and other respiratory specimens were included. Results: The included studies had a low risk of any form of bias, showing that findings are of high scientific validity and credibility. Quantitative analysis of 37 included studies shows that Xpert MTB/RIF is an accurate diagnostic test for TB and detection of rifampicin resistance. Conclusion: The Xpert MTB/RIF assay is a robust, sensitive and specific test for accurate diagnosis of tuberculosis as compared to conventional tests like culture and microscopic examination.

  12. A systematic review of tests for lymph node status in primary endometrial cancer

    Directory of Open Access Journals (Sweden)

    Zamora Javier

    2008-05-01

    Full Text Available Abstract Background The lymph node status of a patient is a key determinant in staging, prognosis and adjuvant treatment of endometrial cancer. Despite this, the potential additional morbidity associated with lymphadenectomy makes its role controversial. This study systematically reviews the accuracy literature on sentinel node biopsy, ultrasound scanning, magnetic resonance imaging (MRI) and computed tomography (CT) for determining lymph node status in endometrial cancer. Methods Relevant articles were identified from MEDLINE (1966–2006), EMBASE (1980–2006), MEDION, the Cochrane library, hand searching of reference lists from primary articles and reviews, conference abstracts and contact with experts in the field. The review included 18 relevant primary studies (693 women). Data were extracted for study characteristics and quality. Bivariate random-effect model meta-analysis was used to estimate the diagnostic accuracy of the various index tests. Results MRI (pooled positive LR 26.7, 95% CI 10.6–67.6, and negative LR 0.29, 95% CI 0.17–0.49) and successful sentinel node biopsy (pooled positive LR 18.9, 95% CI 6.7–53.2, and negative LR 0.22, 95% CI 0.1–0.48) were the most accurate tests. CT was not as accurate a test (pooled positive LR 3.8, 95% CI 2.0–7.3, and negative LR 0.62, 95% CI 0.45–0.86). There was only one study that reported the use of ultrasound scanning. Conclusion MRI and sentinel node biopsy have shown better diagnostic accuracy in confirming lymph node status among women with primary endometrial cancer than CT scanning, although the comparisons made are indirect and hence subject to bias. MRI should be used in preference, in light of the ASTEC trial, because of its non-invasive nature.
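
    The likelihood ratios quoted above follow directly from sensitivity and specificity; the short check below is illustrative only (the figures are hypothetical round numbers, and it does not reproduce the bivariate random-effects meta-analysis used in the review).

      def likelihood_ratios(sensitivity: float, specificity: float):
          """Positive and negative likelihood ratios from sensitivity and specificity."""
          positive_lr = sensitivity / (1.0 - specificity)
          negative_lr = (1.0 - sensitivity) / specificity
          return positive_lr, negative_lr

      # Hypothetical round numbers: 78% sensitivity and 97% specificity give
      # LR+ = 26.0 and LR- of roughly 0.23, the same order of magnitude as the pooled MRI estimates.
      print(likelihood_ratios(0.78, 0.97))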

  13. A Systematic Literature Review of Agile Maturity Model Research

    Directory of Open Access Journals (Sweden)

    Vaughan Henriques

    2017-02-01

    Full Text Available Background/Aim/Purpose: A commonly implemented software process improvement framework is the capability maturity model integrated (CMMI). Existing literature indicates higher levels of CMMI maturity could result in a loss of agility due to its organizational focus. To maintain agility, research has focussed attention on agile maturity models. The objective of this paper is to find the common research themes and conclusions in agile maturity model research. Methodology: This research adopts a systematic approach to agile maturity model research, using Google Scholar, Science Direct, and IEEE Xplore as sources. In total, 531 articles were initially found matching the search criteria, which were filtered to 39 articles by applying specific exclusion criteria. Contribution: The article highlights the trends in agile maturity model research, specifically bringing to light the lack of research providing validation of such models. Findings: Two major themes emerge, being the coexistence of agile and CMMI and the development of agile principle-based maturity models. The research trend indicates an increase in agile maturity model articles, particularly in the latter half of the last decade, with concentrations of research coinciding with version updates of CMMI. While there is general consensus around higher CMMI maturity levels being incompatible with true agility, there is evidence of the two coexisting when agile is introduced into already highly matured environments. Future Research: Future research direction for this topic should include how to attain higher levels of CMMI maturity using only agile methods, how governance is addressed in agile environments, and whether existing agile maturity models relate to improved project success.

  14. Biglan Model Test Based on Institutional Diversity.

    Science.gov (United States)

    Roskens, Ronald W.; Creswell, John W.

    The Biglan model, a theoretical framework for empirically examining the differences among subject areas, classifies according to three dimensions: adherence to common set of paradigms (hard or soft), application orientation (pure or applied), and emphasis on living systems (life or nonlife). Tests of the model are reviewed, and a further test is…

  15. TESTING GARCH-X TYPE MODELS

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2017-01-01

    We present novel theory for testing for reduction of GARCH-X type models with an exogenous (X) covariate to standard GARCH type models. To deal with the problems of potential nuisance parameters on the boundary of the parameter space as well as lack of identification under the null, we exploit...... a noticeable property of specific zero-entries in the inverse information of the GARCH-X type models. Specifically, we consider sequential testing based on two likelihood ratio tests and as demonstrated the structure of the inverse information implies that the proposed test neither depends on whether...

  16. Nucleic acid amplification tests in the diagnosis of tuberculous pleuritis: a systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Riley Lee W

    2004-02-01

    Full Text Available Abstract Background Conventional tests for tuberculous pleuritis have several limitations. A variety of new, rapid tests such as nucleic acid amplification tests – including polymerase chain reaction – have been evaluated in recent times. We conducted a systematic review to determine the accuracy of nucleic acid amplification (NAA) tests in the diagnosis of tuberculous pleuritis. Methods A systematic review and meta-analysis of 38 English and Spanish articles (with 40 studies), identified via searches of six electronic databases, hand searching of selected journals, and contact with authors, experts, and test manufacturers. Sensitivity, specificity, and other measures of accuracy were pooled using random effects models. Summary receiver operating characteristic curves were used to summarize overall test performance. Heterogeneity in study results was formally explored using subgroup analyses. Results Of the 40 studies included, 26 used in-house ("home-brew") tests, and 14 used commercial tests. Commercial tests had a low overall sensitivity (0.62; 95% confidence interval [CI] 0.43, 0.77) and high specificity (0.98; 95% CI 0.96, 0.98). The positive and negative likelihood ratios for commercial tests were 25.4 (95% CI 16.2, 40.0) and 0.40 (95% CI 0.24, 0.67), respectively. All commercial tests had consistently high specificity estimates; the sensitivity estimates, however, were heterogeneous across studies. With the in-house tests, both sensitivity and specificity estimates were significantly heterogeneous. Clinically meaningful summary estimates could not be determined for in-house tests. Conclusions Our results suggest that commercial NAA tests may have a potential role in confirming (ruling in) tuberculous pleuritis. However, these tests have low and variable sensitivity and, therefore, may not be useful in excluding (ruling out) the disease. NAA test results, therefore, cannot replace conventional tests; they need to be interpreted in parallel

  17. Systematic Analysis of Hollow Fiber Model of Tuberculosis Experiments.

    Science.gov (United States)

    Pasipanodya, Jotam G; Nuermberger, Eric; Romero, Klaus; Hanna, Debra; Gumbo, Tawanda

    2015-08-15

    The in vitro hollow fiber system model of tuberculosis (HFS-TB), in tandem with Monte Carlo experiments, was introduced more than a decade ago. Since then, it has been used to perform a large number of tuberculosis pharmacokinetics/pharmacodynamics (PK/PD) studies that have not been subjected to systematic analysis. We performed a literature search to identify all HFS-TB experiments published between 1 January 2000 and 31 December 2012. There was no exclusion of articles by language. Bias minimization was according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Steps for reporting systematic reviews were followed. There were 22 HFS-TB studies published, of which 12 were combination therapy studies and 10 were monotherapy studies. There were 4 stand-alone Monte Carlo experiments that utilized quantitative output from the HFS-TB. All experiments reported drug pharmacokinetics, which recapitulated those encountered in humans. HFS-TB studies included log-phase growth studies under ambient air, semidormant bacteria at pH 5.8, and nonreplicating persisters at low oxygen tension of ≤ 10 parts per billion. The studies identified antibiotic exposures associated with optimal kill of Mycobacterium tuberculosis and suppression of acquired drug resistance (ADR) and informed predictions about optimal clinical doses, expected performance of standard doses and regimens in patients, and expected rates of ADR, as well as a proposal of new susceptibility breakpoints. The HFS-TB model offers the ability to perform PK/PD studies including humanlike drug exposures, to identify bactericidal and sterilizing effect rates, and to identify exposures associated with suppression of drug resistance. Because of the ability to perform repetitive sampling from the same unit over time, the HFS-TB vastly improves statistical power and facilitates the execution of time-to-event analyses and repeated event analyses, as well as dynamic system pharmacology mathematical
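
    The pairing of HFS-TB output with Monte Carlo experiments amounts to simulating pharmacokinetic variability across a virtual population and asking how often a PK/PD target is attained; a minimal sketch of that idea, with entirely hypothetical parameter values rather than those of any study in the review, is shown below.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical population pharmacokinetics: log-normal clearance (L/h), fixed daily dose.
      n_patients = 10_000
      dose_mg = 600.0
      clearance = rng.lognormal(mean=np.log(3.5), sigma=0.3, size=n_patients)

      # Steady-state AUC over a 24-h dosing interval equals dose / clearance.
      auc_0_24 = dose_mg / clearance          # mg*h/L

      # Hypothetical PK/PD target of the kind derived from hollow-fiber output.
      mic = 0.5                               # mg/L
      target = 100.0                          # required AUC/MIC ratio
      attainment = np.mean(auc_0_24 / mic >= target)
      print(f"Fraction of simulated patients attaining the target: {attainment:.2f}")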

  18. Testing Expected Shortfall Models for Derivative Positions

    NARCIS (Netherlands)

    Kerkhof, F.L.J.; Melenberg, B.; Schumacher, J.M.

    2003-01-01

    In this paper we test several risk management models for computing expected shortfall for one-period hedge errors of hedged derivatives positions. Contrary to value-at-risk, expected shortfall cannot be tested using the standard binomial test, since we need information of the distribution in the

  19. Anaerobic exercise testing in rehabilitation : A systematic review of available tests and protocols

    NARCIS (Netherlands)

    Krops, Leonie A.; Albada, Trijntje; van der Woude, Lucas H. V.; Hijmans, Juha M.; Dekker, Rienk

    Objective: Anaerobic capacity assessment in rehabilitation has received increasing scientific attention in recent years. However, anaerobic capacity is not tested consistently in clinical rehabilitation practice. This study reviews tests and protocols for anaerobic capacity in adults with various

  20. Teaching through Entry Test & Summarization - An Effective Classroom Teaching Model in Higher Education Training

    OpenAIRE

    Aithal P. S.

    2015-01-01

    Systematic teaching through a long-time tested model will certainly improve the effectiveness of the teaching-learning process in higher education. Teaching through Entry Test & Summarization is an effective model, named the 'Aithal model of effective classroom teaching', for higher education training; developed by Prof. Aithal, it combines both positive and negative motivation, integrated into a best practice. According to this model, each class of one hour duration starts with silent prayer for one minute ...

  1. The Couplex test cases: models and lessons

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeat, A. [Lyon-1 Univ., MCS, 69 - Villeurbanne (France); Kern, M. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Schumacher, S.; Talandier, J. [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France)

    2003-07-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  2. The Couplex test cases: models and lessons

    International Nuclear Information System (INIS)

    Bourgeat, A.; Kern, M.; Schumacher, S.; Talandier, J.

    2003-01-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  3. Tree-Based Global Model Tests for Polytomous Rasch Models

    Science.gov (United States)

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  4. The Application of Adaptive Behaviour Models: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Jessica A. Price

    2018-01-01

    Full Text Available Adaptive behaviour has been viewed broadly as an individual’s ability to meet the standards of social responsibilities and independence; however, this definition has been a source of debate amongst researchers and clinicians. Based on the rich history and the importance of the construct of adaptive behaviour, the current study aimed to provide a comprehensive overview of the application of adaptive behaviour models to assessment tools, through a systematic review. A plethora of assessment measures for adaptive behaviour have been developed in order to adequately assess the construct; however, it appears that the only definition on which authors seem to agree is that adaptive behaviour is what adaptive behaviour scales measure. The importance of the construct for diagnosis, intervention and planning has been highlighted throughout the literature. It is recommended that researchers and clinicians critically review what measures of adaptive behaviour they are utilising, and it is suggested that the definition and theory are revisited.

  5. Systematic evaluation of non-animal test methods for skin sensitisation safety assessment.

    Science.gov (United States)

    Reisinger, Kerstin; Hoffmann, Sebastian; Alépée, Nathalie; Ashikaga, Takao; Barroso, Joao; Elcombe, Cliff; Gellatly, Nicola; Galbiati, Valentina; Gibbs, Susan; Groux, Hervé; Hibatallah, Jalila; Keller, Donald; Kern, Petra; Klaric, Martina; Kolle, Susanne; Kuehnl, Jochen; Lambrechts, Nathalie; Lindstedt, Malin; Millet, Marion; Martinozzi-Teissier, Silvia; Natsch, Andreas; Petersohn, Dirk; Pike, Ian; Sakaguchi, Hitoshi; Schepky, Andreas; Tailhardat, Magalie; Templier, Marie; van Vliet, Erwin; Maxwell, Gavin

    2015-02-01

    The need for non-animal data to assess skin sensitisation properties of substances, especially cosmetics ingredients, has spawned the development of many in vitro methods. As it is widely believed that no single method can provide a solution, the Cosmetics Europe Skin Tolerance Task Force has defined a three-phase framework for the development of a non-animal testing strategy for skin sensitisation potency prediction. The results of the first phase – systematic evaluation of 16 test methods – are presented here. This evaluation involved generation of data on a common set of ten substances in all methods and systematic collation of information including the level of standardisation, existing test data, potential for throughput, transferability and accessibility in cooperation with the test method developers. A workshop was held with the test method developers to review the outcome of this evaluation and to discuss the results. The evaluation informed the prioritisation of test methods for the next phase of the non-animal testing strategy development framework. Ultimately, the testing strategy – combined with bioavailability and skin metabolism data and exposure consideration – is envisaged to allow establishment of a data integration approach for skin sensitisation safety assessment of cosmetic ingredients.

  6. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE) trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have...... a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub disciplines such as modeling of requirements, use cases......, scenarios, behavior, architecture, etc. In this paper we present a method that utilizes the formalism of timed automatons with formal and statistical model checking techniques to apply TD-MBSE to the modeling of system architecture and behavior. The results obtained from applying it to an industrial case...

  7. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following this mo...

  8. Model tests for prestressed concrete pressure vessels

    International Nuclear Information System (INIS)

    Stoever, R.

    1975-01-01

    Investigations with models of reactor pressure vessels are used to check results of three dimensional calculation methods and to predict the behaviour of the prototype. Model tests with 1:50 elastic pressure vessel models and with a 1:5 prestressed concrete pressure vessel are described and experimental results are presented. (orig.) [de

  9. Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Niko Speybroeck

    2013-11-01

    Full Text Available Background: The emergence and evolution of socioeconomic inequalities in health involves multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models were used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of using simulation models for studying health inequalities, and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding and debate about existing and new socioeconomic inequalities of health frameworks.
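
    A toy version of such an agent-based model (entirely hypothetical parameters; two socioeconomic groups whose members' drinking risk depends on a baseline, the current prevalence among peers, and an SES-related stress term) might look like the following sketch.

      import random

      random.seed(1)

      class Agent:
          def __init__(self, low_ses: bool):
              self.low_ses = low_ses
              self.abuses_alcohol = False

      def step(population, baseline=0.01, peer_weight=0.05, ses_stress=0.02):
          """One time step: risk depends on a baseline, peer prevalence, and SES-related stress."""
          prevalence = sum(a.abuses_alcohol for a in population) / len(population)
          for agent in population:
              risk = baseline + peer_weight * prevalence + (ses_stress if agent.low_ses else 0.0)
              if not agent.abuses_alcohol and random.random() < risk:
                  agent.abuses_alcohol = True

      population = [Agent(low_ses=(i % 2 == 0)) for i in range(1000)]
      for _ in range(50):
          step(population)

      low = [a for a in population if a.low_ses]
      high = [a for a in population if not a.low_ses]
      print("low-SES prevalence:", sum(a.abuses_alcohol for a in low) / len(low))
      print("high-SES prevalence:", sum(a.abuses_alcohol for a in high) / len(high))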

  10. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  11. A simple parametric model selection test

    OpenAIRE

    Susanne M. Schennach; Daniel Wilhelm

    2014-01-01

    We propose a simple model selection test for choosing among two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one, or neither is allowed to be correctly specified or misspecified; they may be nested, non-nested, strictly non-nested or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case, the same test statistic to...

  12. Stem cells in animal asthma models: a systematic review.

    Science.gov (United States)

    Srour, Nadim; Thébaud, Bernard

    2014-12-01

    Asthma control frequently falls short of the goals set in international guidelines. Treatment options for patients with poorly controlled asthma despite inhaled corticosteroids and long-acting β-agonists are limited, and new therapeutic options are needed. Stem cell therapy is promising for a variety of disorders but there has been no human clinical trial of stem cell therapy for asthma. We aimed to systematically review the literature regarding the potential benefits of stem cell therapy in animal models of asthma to determine whether a human trial is warranted. The MEDLINE and Embase databases were searched for original studies of stem cell therapy in animal asthma models. Nineteen studies were selected. They were found to be heterogeneous in their design. Mesenchymal stromal cells were used before sensitization with an allergen, before challenge with the allergen and after challenge, most frequently with ovalbumin, and mainly in BALB/c mice. Stem cell therapy resulted in a reduction of bronchoalveolar lavage fluid inflammation and eosinophilia as well as Th2 cytokines such as interleukin-4 and interleukin-5. Improvement in histopathology such as peribronchial and perivascular inflammation, epithelial thickness, goblet cell hyperplasia and smooth muscle layer thickening was universal. Several studies showed a reduction in airway hyper-responsiveness. Stem cell therapy decreases eosinophilic and Th2 inflammation and is effective in several phases of the allergic response in animal asthma models. Further study is warranted, up to human clinical trials. Copyright © 2014 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  13. HIV Testing Among Internet-Using MSM in the United States: Systematic Review.

    Science.gov (United States)

    Noble, Meredith; Jones, Amanda M; Bowles, Kristina; DiNenno, Elizabeth A; Tregear, Stephen J

    2017-02-01

    Regular HIV testing enables early identification and treatment of HIV among at-risk men who have sex with men (MSM). Characterizing HIV testing needs for Internet-using MSM informs development of Internet-facilitated testing interventions. In this systematic review we analyze HIV testing patterns among Internet-using MSM in the United States who report, through participation in an online study or survey, their HIV status as negative or unknown and identify demographic or behavioral risk factors associated with testing. We systematically searched multiple electronic databases for relevant English-language articles published between January 1, 2005 and December 16, 2014. Using meta-analysis, we summarized the proportion of Internet-using MSM who had ever tested for HIV and the proportion who tested in the 12 months preceding participation in the online study or survey. We also identified factors predictive of these outcomes using meta-regression and narrative synthesis. Thirty-two studies that enrolled 83,186 MSM met our inclusion criteria. Among the studies reporting data for each outcome, 85 % (95 % CI 82-87 %) of participants had ever tested, and 58 % (95 % CI 53-63 %) had tested in the year preceding enrollment in the study, among those for whom those data were reported. Age over 30 years, at least a college education, use of drugs, and self-identification as being homosexual or gay were associated with ever having tested for HIV. A large majority of Internet-using MSM indicated they had been tested for HIV at some point in the past. A smaller proportion-but still a majority-reported they had been tested within the year preceding study or survey participation. MSM who self-identify as heterosexual or bisexual, are younger, or who use drugs (including non-injection drugs) may be less likely to have ever tested for HIV. The overall findings of our systematic review are encouraging; however, a subpopulation of MSM may benefit from targeted outreach. These

  14. Physical examination tests for the diagnosis of femoroacetabular impingement. A systematic review.

    Science.gov (United States)

    Pacheco-Carrillo, Aitana; Medina-Porqueres, Ivan

    2016-09-01

    Numerous clinical tests have been proposed to diagnose FAI, but little is known about their diagnostic accuracy. To summarize and evaluate research on the accuracy of physical examination tests for diagnosis of FAI. A search of the PubMed, SPORTDiscus and CINAHL databases was performed. Studies were considered eligible if they compared the results of physical examination tests to those of a reference standard. Methodological quality and internal validity assessment was performed by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The systematic search strategy revealed 298 potential articles, five of which met the inclusion criteria. After assessment using the QUADAS score, four of the five articles were of high quality. Clinical tests included were the Impingement sign, IROP test (Internal Rotation Over Pressure), FABER test (Flexion-Abduction-External Rotation), Stinchfield/RSRL (Resisted Straight Leg Raise) test, Scour test, Maximal squat test, and the Anterior Impingement test. The IROP test, impingement sign, and FABER test showed the most sensitive values to identify FAI. The diagnostic accuracy of physical examination tests to assess FAI is limited due to its heterogeneity. There is a strong need for sound research of high methodological quality in this area. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. A numerical test method of California bearing ratio on graded crushed rocks using particle flow modeling

    OpenAIRE

    Jiang, Yingjun; Wong, Louis Ngai Yuen; Ren, Jiaolong

    2015-01-01

    In order to better understand the mechanical properties of graded crushed rocks (GCRs) and to optimize the relevant design, a numerical test method based on the particle flow modeling technique PFC2D is developed for the California bearing ratio (CBR) test on GCRs. The effects of different testing conditions and micro-mechanical parameters used in the model on the CBR numerical results have been systematically studied. The reliability of the numerical technique is verified. The numerical resu...

  16. User testing of an adaptation of fishbone diagrams to depict results of systematic reviews

    Directory of Open Access Journals (Sweden)

    Gerald Gartlehner

    2017-12-01

    Full Text Available Abstract Background Summary of findings tables in systematic reviews are highly informative but require epidemiological training to be interpreted correctly. The usage of fishbone diagrams as graphical displays could offer researchers an effective approach to simplify content for readers with limited epidemiological training. In this paper we demonstrate how fishbone diagrams can be applied to systematic reviews and present the results of an initial user testing. Methods Findings from two systematic reviews were graphically depicted in the form of the fishbone diagram. To test the utility of fishbone diagrams compared with summary of findings tables, we developed and pilot-tested an online survey using Qualtrics. Respondents were randomized to the fishbone diagram or a summary of findings table presenting the same body of evidence. They answered questions in both open-ended and closed-answer formats; all responses were anonymous. Measures of interest focused on first and second impressions, the ability to find and interpret critical information, as well as user experience with both displays. We asked respondents about the perceived utility of fishbone diagrams compared to summary of findings tables. We analyzed quantitative data by conducting t-tests and comparing descriptive statistics. Results Based on real world systematic reviews, we provide two different fishbone diagrams to show how they might be used to display complex information in a clear and succinct manner. User testing on 77 students with basic epidemiological training revealed that participants preferred summary of findings tables over fishbone diagrams. Significantly more participants liked the summary of findings table than the fishbone diagram (71.8% vs. 44.8%; p < .01); significantly more participants found the fishbone diagram confusing (63.2% vs. 35.9%, p < .05) or indicated that it was difficult to find information (65.8% vs. 45%; p < .01). However, more than half
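
    The group comparisons reported above can be reproduced in outline with a standard two-proportion test; the counts below are back-calculated under the assumption of roughly equal group sizes (about 38-39 per arm), which the abstract does not state explicitly.

      from statsmodels.stats.proportion import proportions_ztest

      # Assumed split of the 77 randomized participants (the abstract does not report group sizes).
      n_table, n_fishbone = 39, 38

      # "Liked the display": 71.8% of the summary-of-findings group vs. 44.8% of the fishbone group.
      liked = [round(0.718 * n_table), round(0.448 * n_fishbone)]
      stat, p_value = proportions_ztest(count=liked, nobs=[n_table, n_fishbone])
      print(f"z = {stat:.2f}, p = {p_value:.3f}")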

  17. Reliability of physical functioning tests in patients with low back pain: a systematic review.

    Science.gov (United States)

    Denteneer, Lenie; Van Daele, Ulrike; Truijen, Steven; De Hertogh, Willem; Meirte, Jill; Stassijns, Gaetane

    2018-01-01

    The aim of this study was to provide a comprehensive overview of physical functioning tests in patients with low back pain (LBP) and to investigate their reliability. A systematic computerized search was finalized in four different databases on June 24, 2017: PubMed, Web of Science, Embase, and MEDLINE. Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines were followed during all stages of this review. Clinical studies that investigate the reliability of physical functioning tests in patients with LBP were eligible. The methodological quality of the included studies was assessed with the use of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist. To come to final conclusions on the reliability of the identified clinical tests, the current review assessed three factors, namely, outcome assessment, methodological quality, and consistency of description. A total of 20 studies were found eligible and 38 clinical tests were identified. Good overall test-retest reliability was concluded for the extensor endurance test (intraclass correlation coefficient [ICC]=0.93-0.97), the flexor endurance test (ICC=0.90-0.97), the 5-minute walking test (ICC=0.89-0.99), the 50-ft walking test (ICC=0.76-0.96), the shuttle walk test (ICC=0.92-0.99), the sit-to-stand test (ICC=0.91-0.99), and the loaded forward reach test (ICC=0.74-0.98). For inter-rater reliability, only one test, namely, the Biering-Sörensen test (ICC=0.88-0.99), could be concluded to have an overall good inter-rater reliability. None of the identified clinical tests could be concluded to have a good intrarater reliability. Further investigation should focus on a better overall study methodology and the use of identical protocols for the description of clinical tests. The assessment of reliability is only a first step in the recommendation process for the use of clinical tests. In future research, the identified clinical tests in the

  18. Barriers to workplace HIV testing in South Africa: a systematic review of the literature.

    Science.gov (United States)

    Weihs, Martin; Meyer-Weitz, Anna

    2016-01-01

    Low workplace HIV testing uptake makes effective management of HIV and AIDS difficult for South African organisations. Identifying barriers to workplace HIV testing is therefore crucial to inform urgently needed interventions aimed at increasing workplace HIV testing. This study reviewed literature on workplace HIV testing barriers in South Africa. PubMed, ScienceDirect, PsycInfo and SA Publications were systematically searched. Studies needed to include measures to assess perceived or real barriers to participate in HIV Counselling and Testing (HCT) at the workplace or discuss perceived or real barriers of HIV testing at the workplace based on collected data, provide qualitative or quantitative evidence related to the research topic, and needed to refer to workplaces in South Africa. Barriers were defined as any factor on an economic, social, personal, environmental or organisational level preventing employees from participating in workplace HIV testing. Four peer-reviewed studies were included, two with quantitative and two with qualitative study designs. The overarching barriers across the studies were fear of compromised confidentiality, being stigmatised or discriminated against in the event of testing HIV positive or being observed participating in HIV testing, and a low personal risk perception. Furthermore, it appeared that an awareness of an HIV-positive status hindered HIV testing at the workplace. Further research evidence on South African workplace barriers to HIV testing will enhance related interventions. This systematic review only found very little and contextualised evidence about workplace HCT barriers in South Africa, making it difficult to generalise and not really sufficient to inform new interventions aimed at increasing workplace HCT uptake.

  19. Physical examination tests for screening and diagnosis of cervicogenic headache: A systematic review.

    Science.gov (United States)

    Rubio-Ochoa, J; Benítez-Martínez, J; Lluch, E; Santacruz-Zaragozá, S; Gómez-Contreras, P; Cook, C E

    2016-02-01

    It has been suggested that differential diagnosis of headaches should consist of a robust subjective examination and a detailed physical examination of the cervical spine. Cervicogenic headache (CGH) is a form of headache that involves referred pain from the neck. To our knowledge, no studies have summarized the reliability and diagnostic accuracy of physical examination tests for CGH. The aim of this study was to summarize the reliability and diagnostic accuracy of physical examination tests used to diagnose CGH. A systematic review following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines was performed in four electronic databases (MEDLINE, Web of Science, Embase and Scopus). Full-text reports concerning physical tests for the diagnosis of CGH, which reported the clinimetric properties for assessment of CGH, were included and screened for methodological quality. Quality Appraisal for Reliability Studies (QAREL) and Quality Assessment of Studies of Diagnostic Accuracy (QUADAS-2) scores were completed to assess article quality. Eight articles were retrieved for quality assessment and data extraction. Studies investigating diagnostic reliability of physical examination tests for CGH scored poorer on methodological quality (higher risk of bias) than those of diagnostic accuracy. There is sufficient evidence showing high levels of reliability and diagnostic accuracy of the selected physical examination tests for the diagnosis of CGH. The cervical flexion-rotation test (CFRT) exhibited both the highest reliability and the strongest diagnostic accuracy for the diagnosis of CGH. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center. The test data quantify the effectiveness of the SLS IOP suppression system and improve the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series require identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  1. Physical examination tests for the diagnosis of posterior cruciate ligament rupture: a systematic review.

    Science.gov (United States)

    Kopkow, Christian; Freiberg, Alice; Kirschner, Stephan; Seidler, Andreas; Schmitt, Jochen

    2013-11-01

    Systematic literature review. To summarize and evaluate research on the accuracy of physical examination tests for diagnosis of posterior cruciate ligament (PCL) tear. Rupture of the PCL is a severe knee injury that can lead to delayed rehabilitation, instability, or chronic knee pathologies. To our knowledge, there is currently no systematic review of studies on the diagnostic accuracy of clinical examination tests to evaluate the integrity of the PCL. A comprehensive systematic literature search was conducted in MEDLINE from 1946, Embase from 1974, and the Allied and Complementary Medicine Database from 1985 until April 30, 2012. Studies were considered eligible if they compared the results of physical examination tests performed in the context of a PCL physical examination to those of a reference standard (arthroscopy, arthrotomy, magnetic resonance imaging). Methodological quality assessment was performed by 2 independent reviewers using the revised version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. The search strategy revealed 1307 articles, of which 11 met the inclusion criteria for this review. In these studies, 11 different physical examination tests were identified. Due to differences in study types, patient populations, and methodological quality, meta-analysis was not indicated. Presently, most physical examination tests have not been evaluated sufficiently to allow confidence in their ability to either confirm or rule out a PCL tear. The diagnostic accuracy of physical examination tests to assess the integrity of the PCL is largely unknown. There is a strong need for further research in this area. Level of evidence: Diagnosis, level 3a.

  2. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review.

    Science.gov (United States)

    Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju

    2018-03-01

    This study aimed to evaluate the components of test-retest reliability including time interval, sample size, and statistical methods used in patient-reported outcome measures in older people and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 studies presented models for ICC calculations and 30 studies reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.90) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items. Particularly, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in
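
    As an aside on the statistic named in this record: a test-retest ICC can be computed from the two-way ANOVA mean squares. The following Python sketch is a minimal, hypothetical illustration of the ICC(2,1) form (two-way random effects, absolute agreement, single measurement); the example scores are invented, confidence intervals are omitted for brevity, and nothing here is taken from the reviewed studies.

      import numpy as np

      def icc_2_1(scores: np.ndarray) -> float:
          """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

          `scores` is an (n_subjects, k_sessions) array, e.g. test and retest columns.
          """
          n, k = scores.shape
          grand_mean = scores.mean()
          subject_means = scores.mean(axis=1)
          session_means = scores.mean(axis=0)

          # Mean squares from the two-way ANOVA decomposition
          ms_rows = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)   # between subjects
          ms_cols = n * np.sum((session_means - grand_mean) ** 2) / (k - 1)   # between sessions
          sse = np.sum((scores - subject_means[:, None] - session_means[None, :] + grand_mean) ** 2)
          ms_err = sse / ((n - 1) * (k - 1))                                   # residual

          # Shrout & Fleiss ICC(2,1) formula
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

      # Hypothetical test-retest scores for 6 subjects (columns: test, retest)
      data = np.array([[12, 13], [15, 14], [20, 21], [8, 9], [17, 16], [11, 12]], dtype=float)
      print(f"ICC(2,1) = {icc_2_1(data):.3f}")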

  3. Systematic review: Health care transition practice service models.

    Science.gov (United States)

    Betz, Cecily L; O'Kane, Lisa S; Nehring, Wendy M; Lobo, Marie L

    2016-01-01

    Nearly 750,000 adolescents and emerging adults with special health care needs (AEA-SHCN) enter into adulthood annually. The linkages to ensure the seamless transfer of care from pediatric to adult care and transition to adulthood for AEA-SHCN have yet to be realized. The purpose of this systematic review was to investigate the state of the science of health care transition (HCT) service models as described in quantitative investigations. A four-tier screening approach was used to obtain reviewed articles published from 2004 to 2013. A total of 17 articles were included in this review. Transfer of care was the most prominent intervention feature. Overall, using the Effective Public Health Practice Project criteria, the studies were rated as weak. Limitations included lack of control groups, rigorous designs and methodology, and incomplete intervention descriptions. As the findings indicate, HCT is an emerging field of practice that is largely in the exploratory stage of model development. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. A systematic review of the diagnostic performance of orthopedic physical examination tests of the hip

    Science.gov (United States)

    2013-01-01

    Background Previous reviews of the diagnostic performances of physical tests of the hip in orthopedics have drawn limited conclusions because of the low to moderate quality of primary studies published in the literature. This systematic review aims to build on these reviews by assessing a broad range of hip pathologies, and employing a more selective approach to the inclusion of studies in order to accurately gauge diagnostic performance for the purposes of making recommendations for clinical practice and future research. It specifically identifies tests which demonstrate strong and moderate diagnostic performance. Methods A systematic search of Medline, Embase, Embase Classic and CINAHL was conducted to identify studies of hip tests. Our selection criteria included an analysis of internal and external validity. We reported diagnostic performance in terms of sensitivity, specificity, predictive values and likelihood ratios. Likelihood ratios were used to identify tests with strong and moderate diagnostic utility. Results Only a small proportion of tests reported in the literature have been assessed in methodologically valid primary studies. 16 studies were included in our review, producing 56 independent test-pathology combinations. Two tests demonstrated strong clinical utility, the patellar-pubic percussion test for excluding radiologically occult hip fractures (negative LR 0.05, 95% Confidence Interval [CI] 0.03-0.08) and the hip abduction sign for diagnosing sarcoglycanopathies in patients with known muscular dystrophies (positive LR 34.29, 95% CI 10.97-122.30). Fifteen tests demonstrated moderate diagnostic utility for diagnosing and/or excluding hip fractures, symptomatic osteoarthritis and loosening of components post-total hip arthroplasty. Conclusions We have identified a number of tests demonstrating strong and moderate diagnostic performance. These findings must be viewed with caution as there are concerns over the methodological quality of the primary
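
    The likelihood ratios reported in this record follow directly from sensitivity and specificity. As a hedged Python illustration (the 2x2 counts below are made up, not data from the review), the sketch computes sensitivity, specificity and both likelihood ratios for one test-pathology combination; by common convention a positive LR of roughly 10 or more, or a negative LR of roughly 0.1 or less, is read as strong diagnostic utility.

      def diagnostic_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity and likelihood ratios from a 2x2 table of
          index test result vs. reference standard (hypothetical counts below)."""
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          return {
              "sensitivity": sens,
              "specificity": spec,
              "positive_LR": sens / (1 - spec),
              "negative_LR": (1 - sens) / spec,
          }

      # Made-up counts for one test-pathology combination
      for name, value in diagnostic_metrics(tp=45, fp=5, fn=3, tn=120).items():
          print(f"{name}: {value:.2f}")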

  5. Social Media Interventions to Promote HIV Testing, Linkage, Adherence, and Retention: Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Cao, Bolin; Gupta, Somya; Wang, Jiangtao; Hightow-Weidman, Lisa B; Muessig, Kathryn E; Tang, Weiming; Pan, Stephen; Pendse, Razia; Tucker, Joseph D

    2017-11-24

    Social media is increasingly used to deliver HIV interventions for key populations worldwide. However, little is known about the specific uses and effects of social media on human immunodeficiency virus (HIV) interventions. This systematic review examines the effectiveness of social media interventions to promote HIV testing, linkage, adherence, and retention among key populations. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist and Cochrane guidelines for this review and registered it on the International Prospective Register of Systematic Reviews, PROSPERO. We systematically searched six databases and three conference websites using search terms related to HIV, social media, and key populations. We included studies where (1) the intervention was created or implemented on social media platforms, (2) the study population included men who have sex with men (MSM), transgender individuals, people who inject drugs (PWID), and/or sex workers, and (3) outcomes included promoting HIV testing, linkage, adherence, and/or retention. Meta-analyses were conducted using Review Manager, version 5.3. Pooled relative risk (RR) and 95% confidence intervals were calculated with random-effects models. Among 981 manuscripts identified, 26 studies met the inclusion criteria. We found 18 studies from high-income countries, 8 from middle-income countries, and none from low-income countries. Eight were randomized controlled trials, and 18 were observational studies. All studies (n=26) included MSM; five studies also included transgender individuals. Twenty-one studies focused on HIV testing, four on HIV testing and linkage to care, and one on antiretroviral therapy adherence. Social media interventions were used to do the following: build online interactive communities to encourage HIV testing/adherence (10 studies), provide HIV testing services (9 studies), disseminate HIV information (9 studies), and develop intervention materials (1 study). Of the
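
    The pooled relative risks mentioned here come from a random-effects meta-analysis. As a rough, hypothetical Python sketch of that calculation (not a reproduction of the review's Review Manager analysis), the code below pools log relative risks with the DerSimonian-Laird estimator of between-study variance; the 2x2 counts are invented.

      import math

      def pool_relative_risks(studies):
          """DerSimonian-Laird random-effects pooling of relative risks.

          Each study is (events_intervention, n_intervention, events_control, n_control).
          Returns (pooled RR, lower 95% CI, upper 95% CI).
          """
          log_rr, var = [], []
          for a, n1, c, n2 in studies:
              log_rr.append(math.log((a / n1) / (c / n2)))
              var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)       # variance of log RR

          w = [1 / v for v in var]                               # fixed-effect weights
          fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
          q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
          df = len(studies) - 1
          c_factor = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
          tau2 = max(0.0, (q - df) / c_factor)                   # between-study variance

          w_star = [1 / (v + tau2) for v in var]                 # random-effects weights
          pooled = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
          se = math.sqrt(1 / sum(w_star))
          return math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

      # Hypothetical 2x2 data from three studies
      rr, lo, hi = pool_relative_risks([(30, 100, 20, 100), (45, 150, 30, 150), (12, 80, 10, 85)])
      print(f"Pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")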

  6. Social Media Interventions to Promote HIV Testing, Linkage, Adherence, and Retention: Systematic Review and Meta-Analysis

    Science.gov (United States)

    Gupta, Somya; Wang, Jiangtao; Hightow-Weidman, Lisa B; Muessig, Kathryn E; Tang, Weiming; Pan, Stephen; Pendse, Razia; Tucker, Joseph D

    2017-01-01

    Background Social media is increasingly used to deliver HIV interventions for key populations worldwide. However, little is known about the specific uses and effects of social media on human immunodeficiency virus (HIV) interventions. Objective This systematic review examines the effectiveness of social media interventions to promote HIV testing, linkage, adherence, and retention among key populations. Methods We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist and Cochrane guidelines for this review and registered it on the International Prospective Register of Systematic Reviews, PROSPERO. We systematically searched six databases and three conference websites using search terms related to HIV, social media, and key populations. We included studies where (1) the intervention was created or implemented on social media platforms, (2) the study population included men who have sex with men (MSM), transgender individuals, people who inject drugs (PWID), and/or sex workers, and (3) outcomes included promoting HIV testing, linkage, adherence, and/or retention. Meta-analyses were conducted using Review Manager, version 5.3. Pooled relative risk (RR) and 95% confidence intervals were calculated with random-effects models. Results Among 981 manuscripts identified, 26 studies met the inclusion criteria. We found 18 studies from high-income countries, 8 from middle-income countries, and none from low-income countries. Eight were randomized controlled trials, and 18 were observational studies. All studies (n=26) included MSM; five studies also included transgender individuals. Twenty-one studies focused on HIV testing, four on HIV testing and linkage to care, and one on antiretroviral therapy adherence. Social media interventions were used to do the following: build online interactive communities to encourage HIV testing/adherence (10 studies), provide HIV testing services (9 studies), disseminate HIV information (9 studies), and develop

  7. Modeling Systematicity and Individuality in Nonlinear Second Language Development: The Case of English Grammatical Morphemes

    Science.gov (United States)

    Murakami, Akira

    2016-01-01

    This article introduces two sophisticated statistical modeling techniques that allow researchers to analyze systematicity, individual variation, and nonlinearity in second language (L2) development. Generalized linear mixed-effects models can be used to quantify individual variation and examine systematic effects simultaneously, and generalized…

  8. Empirical tests of natural selection-based evolutionary accounts of ADHD: a systematic review.

    Science.gov (United States)

    Thagaard, Marthe S; Faraone, Stephen V; Sonuga-Barke, Edmund J; Østergaard, Søren D

    2016-10-01

    ADHD is a prevalent and highly heritable mental disorder associated with significant impairment, morbidity and increased rates of mortality. This combination of high prevalence and high morbidity/mortality seen in ADHD and other mental disorders presents a challenge to natural selection-based models of human evolution. Several hypotheses have been proposed in an attempt to resolve this apparent paradox. The aim of this study was to review the evidence for these hypotheses. We conducted a systematic review of the literature on empirical investigations of natural selection-based evolutionary accounts for ADHD in adherence with the PRISMA guideline. The PubMed, Embase, and PsycINFO databases were screened for relevant publications, by combining search terms covering evolution/selection with search terms covering ADHD. The search identified 790 records. Of these, 15 full-text articles were assessed for eligibility, and three were included in the review. Two of these reported on the evolution of the seven-repeat allele of the ADHD-associated dopamine receptor D4 gene, and one reported on the results of a simulation study of the effect of suggested ADHD-traits on group survival. The authors of the three studies interpreted their findings as favouring the notion that ADHD-traits may have been associated with increased fitness during human evolution. However, we argue that none of the three studies really tap into the core symptoms of ADHD, and that their conclusions therefore lack validity for the disorder. This review indicates that the natural selection-based accounts of ADHD have not been subjected to empirical test and therefore remain hypothetical.

  9. Modeling a Small Punch Testing Device

    Directory of Open Access Journals (Sweden)

    S. Habibi

    2014-04-01

    Full Text Available A small punch test (SPT) on a miniature sample is implemented in order to estimate the ultimate load of ductile CrMoV steel. The objective of this study is to model the ultimate tensile strength and the ultimate indentation load as functions of the geometrical parameters of the SPT, using experimental data. A comparison of the model obtained with two existing models (the European code of practice and the method of Norris and Parker) allows the design and dimensioning of an indentation device that meets the practical constraints. Implemented as a Matlab program, the model allows the investigation of new combinations of test variables.

  10. A systematic approach for model verification: application on seven published activated sludge models.

    Science.gov (United States)

    Hauduc, H; Rieger, L; Takács, I; Héduit, A; Vanrolleghem, P A; Gillot, S

    2010-01-01

    The quality of simulation results can be significantly affected by errors in the published model (typing, inconsistencies, gaps or conceptual errors) and/or in the underlying numerical model description. Seven of the most commonly used activated sludge models have been investigated to point out the typing errors, inconsistencies and gaps in the model publications: ASM1; ASM2d; ASM3; ASM3 + Bio-P; ASM2d + TUD; New General; UCTPHO+. A systematic approach to verify models by tracking typing errors and inconsistencies in model development and software implementation is proposed. Then, stoichiometry and kinetic rate expressions are checked for each model and the errors found are reported in detail. An attached spreadsheet (see http://www.iwaponline.com/wst/06104/0898.pdf) provides corrected matrices with the calculations of all stoichiometric coefficients for the discussed biokinetic models and gives an example of proper continuity checks.
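
    The continuity checks mentioned at the end of this record reduce to a simple matrix condition: for every process in the stoichiometric (Gujer) matrix, the stoichiometric coefficients weighted by the composition factors of a conserved quantity must sum to zero. The Python sketch below illustrates the check on a deliberately tiny, hypothetical two-process example expressed in COD units (with dissolved oxygen counted as negative COD, as in the ASM convention); it is not one of the seven published models.

      import numpy as np

      # Rows: processes; columns: components (substrate, biomass, oxygen) - hypothetical example
      Y = 0.67  # biomass yield on substrate
      stoichiometry = np.array([
          [-1.0 / Y, 1.0, -(1.0 - Y) / Y],   # aerobic growth on substrate
          [0.0, -1.0, -1.0],                 # endogenous respiration of biomass
      ])

      # Composition (conservation) factors in g COD per g component;
      # dissolved oxygen counts as negative COD.
      composition = np.array([1.0, 1.0, -1.0])

      def continuity_errors(nu: np.ndarray, iota: np.ndarray) -> np.ndarray:
          """Per-process continuity error: sum_j nu_ij * iota_j should be ~0 for a conserved quantity."""
          return nu @ iota

      for i, err in enumerate(continuity_errors(stoichiometry, composition)):
          status = "OK" if abs(err) < 1e-9 else "CHECK"
          print(f"process {i}: continuity error = {err:+.3e} [{status}]")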

  11. A Model of Consultation in Prostate Cancer Care: Evidence From a Systematic Review.

    Science.gov (United States)

    Paterson, Catherine; Nabi, Ghulam

    There has been an evolution of various consultation models in the literature. Men affected by prostate cancer can experience a range of unmet supportive care needs. Thus, effective consultations are paramount in the delivery of supportive care to optimize tailored self-management plans at the individual level of need. The aim of this study is to critically appraise existing models of consultation and make recommendations for a model of consultation within the scope of clinical practice for prostate cancer care. A systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Statement Guidelines. Electronic databases were searched using a wide range of keywords and free text items to increase the sensitivity and inclusiveness of the searches. Findings were integrated in a narrative synthesis. A total of 1829 articles were retrieved and 17 papers were included. Beneficial features ranged across a number of models that included a person-centered consultation, shared management plans, and safety netting. None of the reviewed models of consultation are suitable for use in prostate cancer care because of a range of limitations and the clinical context in which models were developed. A Cancer Care Consultation Model was informed from critical appraisal of the evidence and expert clinical and service user comment. Further research is needed to empirically test consultation models in routine clinical practice, specifically for advanced cancer specialist nurses. The Prostate Cancer Model of Consultation can be used to structure clinical consultations to target self-management care plans at the individual level of need over the cancer care continuum.

  12. An Approach to Model Based Testing of Multiagent Systems

    Directory of Open Access Journals (Sweden)

    Shafiq Ur Rehman

    2015-01-01

    Full Text Available Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to designing multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agents and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
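
    The final step described in this record, generating test paths from a protocol graph under a coverage criterion, can be sketched generically. The Python code below applies a simple edge-coverage criterion to a small, hypothetical protocol graph of percepts, messages and actions; the node names and the criterion are illustrative assumptions, not the Prometheus artifacts or the prototype tool from the paper.

      from collections import deque

      # Hypothetical protocol graph: node -> list of successor nodes
      protocol_graph = {
          "percept:order_received": ["msg:request_quote"],
          "msg:request_quote": ["msg:quote_ok", "msg:quote_rejected"],
          "msg:quote_ok": ["action:ship_order"],
          "msg:quote_rejected": ["action:notify_customer"],
          "action:ship_order": [],
          "action:notify_customer": [],
      }

      def shortest_path(graph, start, goal):
          """Breadth-first search for one shortest path from start to goal."""
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == goal:
                  return path
              for nxt in graph[path[-1]]:
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(path + [nxt])
          return None

      def edge_coverage_paths(graph, start):
          """One test path per edge: reach the edge's source from start, then traverse the edge."""
          paths = []
          for src, targets in graph.items():
              for dst in targets:
                  prefix = shortest_path(graph, start, src)
                  if prefix is not None:
                      paths.append(prefix + [dst])
          return paths

      for p in edge_coverage_paths(protocol_graph, "percept:order_received"):
          print(" -> ".join(p))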

  13. Systematic review of studies on cost-effectiveness of cystic fibrosis carrier testing

    Directory of Open Access Journals (Sweden)

    Ernesto Andrade-Cerquera

    2016-10-01

    Full Text Available Introduction: Cystic fibrosis is considered the most common autosomal disease with multisystem complications in the non-Hispanic white population. Objective: To review the available evidence on the cost-effectiveness of cystic fibrosis carrier testing compared to no intervention. Materials and methods: The databases MEDLINE, Embase, NHS, EBM Reviews - Cochrane Database of Systematic Reviews, LILACS, Health Technology Assessment, Genetests.org, Genetsickkids.org and Web of Science were used to conduct a systematic review of the cost-effectiveness of performing the genetic test in cystic fibrosis patients. Cost-effectiveness studies were included without language or publication date restrictions. Results: Only 13 studies were relevant for full review. Prenatal, preconception and mixed screening strategies were found. The health perspective was the most frequently used; the discount rates applied were heterogeneous, ranging between 3.5% and 5%; the main analysis unit was the cost per detected carrier couple, followed by the cost per averted birth with cystic fibrosis. It was evident that the most cost-effective strategy was preconception screening combined with prenatal testing. Conclusions: Marked methodological heterogeneity was found, which made the results incomparable and led to the conclusion that there are different approaches to this genetic test.

  14. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  15. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.; NASA/Fermilab Astrophysics Center, Batavia, IL)

    1987-01-01

    Theoretical prejudice and inflationary models of the very early universe strongly favor the flat, Einstein-de Sitter model of the universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the universe which possess a smooth component of energy density. The kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings, is studied in detail. The observational tests which can be used to discriminate between these models are also discussed. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations. 58 references
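
    The kinematic tests listed in this record all derive from the comoving distance integral of a flat model whose smooth component has a constant equation of state w. The Python sketch below numerically evaluates the magnitude-redshift relation for a few illustrative values of w (w = -1 corresponds to a cosmological term, w = -1/3 to a network of light strings, w = 1/3 to relativistic particles); the density parameter and Hubble constant are assumed values for illustration, not figures from the paper.

      import numpy as np

      C_KM_S = 299_792.458  # speed of light [km/s]

      def comoving_distance(z, omega_m, w, h0=70.0, steps=10_000):
          """Comoving distance [Mpc] in a flat universe with matter plus one smooth component.

          The smooth component has density parameter 1 - omega_m and constant equation of state w.
          """
          zs = np.linspace(0.0, z, steps)
          e_z = np.sqrt(omega_m * (1 + zs) ** 3 + (1 - omega_m) * (1 + zs) ** (3 * (1 + w)))
          integrand = 1.0 / e_z
          dz = zs[1] - zs[0]
          integral = dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid rule
          return (C_KM_S / h0) * integral

      def distance_modulus(z, omega_m, w):
          """Magnitude-redshift relation m - M for a flat model (luminosity distance in Mpc)."""
          d_l = (1 + z) * comoving_distance(z, omega_m, w)
          return 5 * np.log10(d_l) + 25

      for w in (-1.0, -1.0 / 3.0, 1.0 / 3.0):
          print(f"w = {w:+.2f}: mu(z=1) = {distance_modulus(1.0, omega_m=0.3, w=w):.2f}")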

  16. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-de Sitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations

  17. A Systematic Review of Point of Care Testing for Chlamydia trachomatis, Neisseria gonorrhoeae, and Trichomonas vaginalis

    Directory of Open Access Journals (Sweden)

    Sasha Herbst de Cortina

    2016-01-01

    Full Text Available Objectives. Systematic review of point of care (POC) diagnostic tests for sexually transmitted infections: Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Trichomonas vaginalis (TV). Methods. Literature search on PubMed for articles from January 2010 to August 2015, including original research in English on POC diagnostics for sexually transmitted CT, NG, and/or TV. Results. We identified 33 publications with original research on POC diagnostics for CT, NG, and/or TV. Thirteen articles evaluated test performance, yielding at least one test for each infection with sensitivity and specificity ≥90%. Each infection also had currently available tests with sensitivities <60%. Three articles analyzed cost effectiveness, and five publications discussed acceptability and feasibility. POC testing was acceptable to both providers and patients and was also demonstrated to be cost effective. Fourteen proof of concept articles introduced new tests. Conclusions. Highly sensitive and specific POC tests are available for CT, NG, and TV, but improvement is possible. Future research should focus on acceptability, feasibility, and cost of POC testing. While pregnant women specifically have not been studied, the results available in nonpregnant populations are encouraging for the ability to test and treat women in antenatal care to prevent adverse pregnancy and neonatal outcomes.

  18. The psychological impact of predictive genetic testing for Huntington's disease: a systematic review of the literature.

    Science.gov (United States)

    Crozier, S; Robertson, N; Dale, M

    2015-02-01

    Huntington's disease (HD) is a neurodegenerative genetic condition for which a predictive genetic test by mutation analysis has been available since 1993. However, whilst revealing the future presence of the disease, testing may have an adverse psychological impact given that the disease is progressive, incurable and ultimately fatal. This review seeks to systematically explore the psychological impact of genetic testing for individuals undergoing pre-symptomatic mutation analysis. Three databases (Medline, PsycInfo and Scopus) were interrogated for studies utilising standardised measures to assess psychological impact following predictive genetic testing for HD. From 100 papers initially identified, eight articles were eligible for inclusion. Psychological impact of predictive genetic testing was not found to be associated with test result. No detrimental effect of predictive genetic testing on non-carriers was found, although the process was not found to be psychologically neutral. Fluctuation in levels of distress was found over time for carriers and non-carriers alike. Methodological weaknesses of published literature were identified, notably the needs of individuals not requesting genetic testing, as well as inadequate support for individuals registering elevated distress and declining post-test follow-up. Further assessment of these vulnerable individuals is warranted to establish the extent and type of future psychological support.

  19. Systematic flood modelling to support flood-proof urban design

    Science.gov (United States)

    Bruwier, Martin; Mustafa, Ahmed; Aliaga, Daniel; Archambeau, Pierre; Erpicum, Sébastien; Nishida, Gen; Zhang, Xiaowei; Pirotton, Michel; Teller, Jacques; Dewals, Benjamin

    2017-04-01

    Urban flood risk is influenced by many factors such as hydro-meteorological drivers, existing drainage systems as well as vulnerability of population and assets. The urban fabric itself also has a complex influence on inundation flows. In this research, we performed a systematic analysis of how various characteristics of urban patterns control inundation flow within the urban area and upstream of it. An urban generator tool was used to generate over 2,250 synthetic urban networks of 1 km². This tool is based on the procedural modelling presented by Parish and Müller (2001), which was adapted to generate a broader variety of urban networks. Nine input parameters were used to control the urban geometry. Three of them define the average length, orientation and curvature of the streets. Two orthogonal major roads, whose width constitutes the fourth input parameter, work as constraints to generate the urban network. The width of secondary streets is given by the fifth input parameter. Each parcel generated by the street network, based on a parcel mean area parameter, can be either a park or a building parcel depending on the park ratio parameter. Three setback parameters constrain the exact location of the building within a building parcel. For each synthetic urban network, detailed two-dimensional inundation maps were computed with a hydraulic model. The computational efficiency was enhanced by means of a porosity model. This enables the use of a coarser computational grid, while preserving information on the detailed geometry of the urban network (Sanders et al. 2008). These porosity parameters reflect not only the void fraction, which influences the storage capacity of the urban area, but also the influence of buildings on flow conveyance (dynamic effects). A sensitivity analysis was performed based on the inundation maps to highlight the respective impact of each input parameter characterizing the urban networks. The findings of the study pinpoint

  20. Molecular Sieve Bench Testing and Computer Modeling

    Science.gov (United States)

    Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.

    1995-01-01

    The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to changes in the parameters which influence the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain the test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow models, with a linear driving force for the 5X sorbent and pore diffusion for silica gel, are then applied to the test data. A more complex, two-dimensional non-Darcian model has also been developed for simulation of the test data. This model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.

  1. [Rational structures in health education models: basics and systematization].

    Science.gov (United States)

    Sánchez Moreno, A; Ramos García, E; Sánchez Estévez, V; Marset Campos, P

    1995-01-01

    The different Health Education (HE) models that have appeared in the scientific literature are analyzed from a general and systematic point of view, in an attempt to resolve the confusion produced by their great diversity. Given the relevance of this topic to Health Promotion activities in Primary Health Care, a thorough reappraisal is urgently needed because of the heterogeneity of the scientific papers dealing with it. The curriculum, as the confluence of thought and action in Health Education, is the basic concept through which it is possible to integrate both scientific logics, the biological one and that of the social sciences. Of particular importance are the different paradigms that have emerged in the field of HE since the beginning of the present century: a first generation with a "normative" point of view, a second generation built on positivistic bases, and a third generation of a hermeneutic and critical nature. This third generation of HE paradigms has distanced itself from behaviourist and cognitive perspectives, becoming more critical and participative. The principal scientific contributors to the field of HE, international as well as Spanish, are studied and classified. The main conclusions drawn from this Health Education paradigm controversy concern two aspects: 1) planning, programming and evaluating activities, and 2) models and qualitative and quantitative methodologies. Emphasis is given to the need to include community participation in all phases of the process in critical HE methodologies. The critical paradigm is postulated as the only one able to integrate the other scientific approaches in Health Education.

  2. Endogenous opioid antagonism in physiological experimental pain models: a systematic review.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available Opioid antagonists are pharmacological tools applied as an indirect measure to detect activation of the endogenous opioid system (EOS) in experimental pain models. The objective of this systematic review was to examine the effect of mu-opioid-receptor (MOR) antagonists in placebo-controlled, double-blind studies using 'inhibitory' or 'sensitizing' physiological test paradigms in healthy human subjects. The databases PubMed and Embase were searched according to predefined criteria. Out of a total of 2,142 records, 63 studies (1,477 subjects [male/female ratio = 1.5]) were considered relevant. Twenty-five studies utilized 'inhibitory' test paradigms (ITP) and 38 studies utilized 'sensitizing' test paradigms (STP). The ITP-studies were characterized as conditioning modulation models (22 studies) and repetitive transcranial magnetic stimulation models (rTMS; 3 studies), and the STP-studies as secondary hyperalgesia models (6 studies), 'pain' models (25 studies), summation models (2 studies), nociceptive reflex models (3 studies) and miscellaneous models (2 studies). A consistent reversal of analgesia by a MOR-antagonist was demonstrated in 10 of the 25 ITP-studies, including stress-induced analgesia and rTMS. In the remaining 14 conditioning modulation studies, either absence of effects or ambiguous effects of MOR-antagonists were observed. In the STP-studies, no effect of the opioid blockade could be demonstrated in 5 out of 6 secondary hyperalgesia studies. The direction of MOR-antagonist-dependent effects upon pain ratings, threshold assessments and somatosensory evoked potentials (SSEP) did not appear consistent in 28 out of 32 'pain' model studies. In conclusion, only in 2 experimental human pain models, i.e., stress-induced analgesia and rTMS, did administration of a MOR-antagonist demonstrate a consistent effect, presumably mediated by EOS-dependent mechanisms of analgesia and hyperalgesia.

  3. Accuracy tests of the tessellated SLBM model

    International Nuclear Information System (INIS)

    Ramirez, A L; Myers, S C

    2007-01-01

    We have compared the Seismic Location Base Model (SLBM) tessellated model (version 2.0 Beta, posted July 3, 2007) with the GNEMRE Unified Model. The comparison is done layer by layer, for both layer depths and layer velocities. The SLBM earth model is defined on a tessellation that spans the globe at a constant resolution of about 1 degree (Ballard, 2007). For the tests, we used the earth model in the file 'unified( ) iasp.grid'. This model contains the top 8 layers of the Unified Model (UM) embedded in a global IASP91 grid. Our test queried the same set of nodes included in the UM model file. To query the model stored in memory, we used some of the functionality built into the SLBMInterface object. We used the method getInterpolatedPoint() to return desired values for each layer at user-specified points. The values returned include: depth to the top of each layer, layer velocity, layer thickness and (for the upper-mantle layer) velocity gradient. The SLBM earth model has an extra middle crust layer whose values are used when Pg/Lg phases are being calculated. This extra layer was not accessed by our tests. Figures 1 to 8 compare the layer depths, P velocities and P gradients in the UM and SLBM models. The figures show results for the three sediment layers, three crustal layers and the upper mantle layer defined in the UM model. Each layer in the models (sediment1, sediment2, sediment3, upper crust, middle crust, lower crust and upper mantle) is shown on a separate figure. The upper mantle P velocity and gradient distributions are shown in Figures 7 and 8. The left and center images in the top row of each figure are the renderings of depth to the top of the specified layer for the UM and SLBM models. When a layer has zero thickness, its depth is the same as that of the layer above. The right image in the top row is the difference in layer depth between the UM and SLBM renderings. The left and center images in the bottom row of the figures are

  4. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
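
    The distinction this record draws between ordinary unit tests and model validation tests can be made concrete with a toy example: a unit test checks a property of the code itself, while a validation test compares simulated output against an experimentally observed value within a stated tolerance. The Python sketch below is generic and hypothetical; it does not use the OpenWorm code base, and the simulated function, observed value and tolerance are invented.

      import unittest

      def simulate_resting_potential() -> float:
          """Toy stand-in for a neuron model; returns a membrane resting potential in mV (hypothetical)."""
          return -64.2

      class TestUnitBehaviour(unittest.TestCase):
          def test_returns_a_float(self):
              # Ordinary unit test: a property of the code, independent of the biology.
              self.assertIsInstance(simulate_resting_potential(), float)

      class TestModelValidation(unittest.TestCase):
          # Hypothetical experimental value and tolerance; a real project would pull these
          # from a curated data source.
          OBSERVED_MV = -65.0
          TOLERANCE_MV = 2.0

          def test_resting_potential_matches_experiment(self):
              # Model validation test: simulated output must fall within the experimental tolerance.
              simulated = simulate_resting_potential()
              self.assertAlmostEqual(simulated, self.OBSERVED_MV, delta=self.TOLERANCE_MV)

      if __name__ == "__main__":
          unittest.main()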

  5. Variable amplitude fatigue, modelling and testing

    International Nuclear Information System (INIS)

    Svensson, Thomas.

    1993-01-01

    Problems related to metal fatigue modelling and testing are treated here in four different papers. In the first paper different views of the subject are summarised in a literature survey. In the second paper a new model for fatigue life is investigated. Experimental results are established which are promising for further development of the model. In the third paper a method is presented that generates a stochastic process suitable for fatigue testing. The process is designed to resemble certain fatigue-related features of service life processes. In the fourth paper fatigue problems in transport vibrations are treated

  6. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    Science.gov (United States)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.

  7. Tritium transfer in pigs - A model test

    Energy Technology Data Exchange (ETDEWEB)

    Melintescu, A.; Galeriu, D. [Horia Hulubei National Inst. for Physics and Nuclear Engineering, Dept. of Life and Environmental Physics, 407 Atomistilor St., Bucharest-Magurele, RO-077125 (Romania)

    2008-07-15

    Within the framework of the IAEA EMRAS (Environmental Modelling for Radiation Safety) programme, a scenario for model testing was developed, starting from unpublished data for a sow fed OBT for 84 days. The scenario includes model predictions for the dynamics of tritium in urine and faeces and of HTO and OBT in organs at sacrifice. Two inter-comparison exercises have been carried out, and most of the models succeeded in giving predictions within a factor of 3 to 5, except for faeces. An analysis of the models' structure, performance and limits has been carried out in order to build a model of moderate complexity with reliable predictive power, which can also be applied to human dosimetry when OBT data are missing. (authors)

  8. Damage modeling in Small Punch Test specimens

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Cuesta, I.I.; Peñuelas, I.

    2016-01-01

    Ductile damage modeling within the Small Punch Test (SPT) is extensively investigated. The capabilities of the SPT to reliably estimate fracture and damage properties are thoroughly discussed and emphasis is placed on the use of notched specimens. First, different notch profiles are analyzed... Furthermore, Gurson-Tvergaard-Needleman model predictions from a top-down approach are employed to gain insight into the mechanisms governing crack initiation and subsequent propagation in small punch experiments. An accurate assessment of micromechanical toughness parameters from the SPT

  9. VALIDITY OF FIELD TESTS TO ESTIMATE CARDIORESPIRATORY FITNESS IN CHILDREN AND ADOLESCENTS: A SYSTEMATIC REVIEW

    Science.gov (United States)

    Batista, Mariana Biagi; Romanzini, Catiana Leila Possamai; Castro-Piñero, José; Ronque, Enio Ricardo Vaz

    2017-01-01

    ABSTRACT Objective: To systematically review the literature to verify the validity of field tests to evaluate cardiorespiratory fitness (CRF) in children and adolescents. Data sources: The electronic search was conducted in the databases Medline (PubMed), SPORTDiscus, Scopus and Web of Science, in addition to the Latin American databases LILACS and SciELO. The search comprised the period from the inception of each database until February 2015, in English and Portuguese. All stages of the process were performed in accordance with the PRISMA flow diagram. Data synthesis: After confirming the inclusion criteria, eligibility, and quality of the studies, 43 studies were analyzed in full; 38 were obtained through the electronic database searches and 5 through private libraries and the references of other articles. Of the total studies, only 13 were considered high quality according to the adopted criteria. The most commonly investigated test in the literature was the 20-meter shuttle run (SR-20 m), accounting for 23 studies, followed by tests of distances between 550 meters and 1 mile in 9 studies, timed tests of 6, 9, and 12 minutes, also 9 studies, and finally bench protocols and new test proposals represented in 7 studies. Conclusions: The SR-20 m test, together with Barnett's equation for estimating VO2 peak, seems to be the most appropriate for evaluating the CRF of young people. As an alternative for evaluating CRF, the 1-mile test is indicated, with the equation proposed by Cureton for estimating VO2 peak. PMID:28977338

  10. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs, itself not without issues, may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
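
    As a hedged Python sketch of the kind of agent-based model discussed in this record, the code below implements a bare-bones routine-activity-style simulation: offenders and targets move randomly on a grid, and a crime event is recorded whenever an offender and a target meet on a cell with no guardian present. Every parameter and rule is illustrative; this is not a model taken from the reviewed studies.

      import random

      GRID = 20          # grid is GRID x GRID cells
      STEPS = 500        # simulation length
      N_OFFENDERS, N_TARGETS, N_GUARDIANS = 10, 30, 15

      def random_cell():
          return (random.randrange(GRID), random.randrange(GRID))

      def step(position):
          """Random walk: move one cell in a random direction, staying on the grid."""
          dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)])
          x, y = position
          return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

      def run(seed=1):
          random.seed(seed)
          offenders = [random_cell() for _ in range(N_OFFENDERS)]
          targets = [random_cell() for _ in range(N_TARGETS)]
          guardians = [random_cell() for _ in range(N_GUARDIANS)]
          crimes = 0
          for _ in range(STEPS):
              offenders = [step(p) for p in offenders]
              targets = [step(p) for p in targets]
              guardians = [step(p) for p in guardians]
              guarded = set(guardians)
              # Routine activity theory: crime requires a motivated offender and a
              # suitable target converging in space, absent a capable guardian.
              for o in offenders:
                  if o in targets and o not in guarded:
                      crimes += 1
          return crimes

      print(f"Simulated crime events over {STEPS} steps: {run()}")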

  11. Testing of a steel containment vessel model

    International Nuclear Information System (INIS)

    Luk, V.K.; Hessheimer, M.F.; Matsumoto, T.; Komine, K.; Costello, J.F.

    1997-01-01

    A mixed-scale containment vessel model, with 1:10 in containment geometry and 1:4 in shell thickness, was fabricated to represent an improved, boiling water reactor (BWR) Mark II containment vessel. A contact structure, installed over the model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. This paper describes the pretest preparations and the conduct of the high pressure test of the model performed on December 11-12, 1996. 4 refs., 2 figs

  12. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2009-01-01

    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not yet widely used in practice and testing is still the state of the art in falsification. We show how user-defined abstractions can be integrated into a Java PathFinder setting with tools like AspectJ or Javassist and discuss implications of the remaining weaknesses of these tools. We believe that a principled engineering approach to designing

  13. Binomial test models and item difficulty

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1979-01-01

    In choosing a binomial test model, it is important to know exactly what conditions are imposed on item difficulty. In this paper these conditions are examined for both a deterministic and a stochastic conception of item responses. It appears that they are more restrictive than is generally
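
    A binomial test model, in its simplest form, treats the number-correct score on n items as a binomial variable with a single per-item success probability, which is exactly where the restrictive conditions on item difficulty discussed in this record enter. The Python sketch below is a minimal illustration under that constant-probability assumption; the item count, success probability and pass mark are invented.

      from math import comb

      def score_distribution(n_items: int, p_correct: float) -> list:
          """Binomial test model with a constant per-item success probability:
          probability of each number-correct score x = 0..n_items."""
          return [comb(n_items, x) * p_correct**x * (1 - p_correct) ** (n_items - x)
                  for x in range(n_items + 1)]

      def prob_passing(n_items: int, p_correct: float, cutoff: int) -> float:
          """Probability that an examinee with this success probability reaches the cutoff score."""
          return sum(score_distribution(n_items, p_correct)[cutoff:])

      # Illustrative values: a 20-item test, a 0.7 chance of success per item, pass mark 14
      print(f"P(score >= 14) = {prob_passing(20, 0.7, 14):.3f}")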

  14. Shallow foundation model tests in Europe

    Czech Academy of Sciences Publication Activity Database

    Feda, Jaroslav; Simonini, P.; Arslan, U.; Georgiodis, M.; Laue, J.; Pinto, I.

    1999-01-01

    Vol. 2, No. 4 (1999), pp. 447-475. ISSN 1436-6517. [Int. Conf. on Soil - Structure Interaction in Urban Civ. Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords: shallow foundations * model tests * sandy subsoil * bearing capacity * settlement Subject RIV: JM - Building Engineering

  15. Testing spatial heterogeneity with stock assessment models

    DEFF Research Database (Denmark)

    Jardim, Ernesto; Eero, Margit; Silva, Alexandra

    2018-01-01

    This paper describes a methodology that combines meta-population theory and stock assessment models to gain insights about spatial heterogeneity of the meta-population in an operational time frame. The methodology was tested with stochastic simulations for different degrees of connectivity between

  16. Turbulence Modeling Validation, Testing, and Development

    Science.gov (United States)

    Bardina, J. E.; Huang, P. G.; Coakley, T. J.

    1997-01-01

    The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models with experimental results. Four popular turbulence models have been tested and validated against experimental data of ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model developments. The results are shown in the following four sections: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.

  17. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  18. Blast Testing and Modelling of Composite Structures

    DEFF Research Database (Denmark)

    Giversen, Søren

    The motivation for this work is based on a desire to find lightweight alternatives to high-strength steel as the material to use for armouring in military vehicles. With the use of high-strength steel, an increase in the level of armouring has a significant impact on the vehicle weight... The set-up proved functional and provided consistent data on the panel response. The tests revealed that the sandwich panels did not provide a decrease in panel deflection compared with the monolithic laminates, which was expected due to their higher flexural rigidity. This was found to be because membrane effects... a pressure distribution on selected surfaces, based on experimental pressure measurement data, and (ii) with a designed 3-step numerical load model, where the blast pressure and the FSI (Fluid Structure Interaction) between the pressure wave and the modelled panel are modelled numerically. The tested

  19. HIV testing and counselling for migrant populations living in high-income countries: a systematic review.

    Science.gov (United States)

    Alvarez-del Arco, Debora; Monge, Susana; Azcoaga, Amaya; Rio, Isabel; Hernando, Victoria; Gonzalez, Cristina; Alejos, Belen; Caro, Ana Maria; Perez-Cachafeiro, Santiago; Ramirez-Rubio, Oriana; Bolumar, Francisco; Noori, Teymur; Del Amo, Julia

    2013-12-01

    The barriers to HIV testing and counselling that migrants encounter can jeopardize proactive HIV testing, which relies on HIV testing being linked to care. We analyse available evidence on HIV testing and counselling strategies targeting migrants and ethnic minorities in high-income countries. Systematic literature review of the five main databases of articles in English from Europe, North America and Australia between 2005 and 2009. Of 1034 abstracts, 37 articles were selected. Migrants, mainly from HIV-endemic countries, are at risk of HIV infection and its consequences. The HIV prevalence among migrants is higher than that of the general population, and migrants have a higher frequency of delayed HIV diagnosis. For migrants from countries with low HIV prevalence and for ethnic minorities, socio-economic vulnerability puts them at risk of acquiring HIV. Migrants face specific legal and administrative impediments to accessing HIV testing (in some countries, undocumented migrants are not entitled to health care), as well as cultural and linguistic barriers, racism and xenophobia. Migrants and ethnic minorities fear stigma from their communities, yet community acceptance is key for well-being. Migrants and ethnic minorities should be offered HIV testing, but the barriers highlighted in this review may deter programs from achieving the final goal, which is linking migrants and ethnic minorities to HIV clinical care from a public health perspective.

  20. Training of nurses in point-of-care testing: a systematic review of the literature.

    Science.gov (United States)

    Liikanen, Eeva; Lehto, Liisa

    2013-08-01

    To review and describe the training of nurses in point-of-care testing. Point-of-care tests are usually carried out by nurses. They are used in many healthcare units. Through training, nurses are able to improve their competence in performing point-of-care testing. Systematic review. A literature search of electronic data was undertaken in autumn 2011 using the CINAHL, The Cochrane Library, Medline (Ovid) and Scopus databases. From the available literature, six specific initiatives were analysed. The studies were performed on three continents and in five healthcare settings. Three of the interventions were related to glucose point-of-care testing. The training approaches involved seven aspects. The interventions were diverse, broad and multifaceted, but they appeared to be successful. The strength of the interventions lay in the involvement of laboratory staff. Quantitative synthesis of the data was not undertaken because of the different designs of the studies. Training can improve nurses' competence, and many methods are available. There are very few studies of training nurses in point-of-care testing, although in-depth descriptions of interventions in different settings would be valuable. Nurses can be trained using a variety of methods in different healthcare settings. To save resources, especially in large hospitals and sparsely populated areas, distance learning is worth considering. However, if training is delivered with the support of laboratory professionals, nurses subsequently perform good-quality point-of-care testing. © 2013 John Wiley & Sons Ltd.

  1. A systematic review on the microscopic agglutination test seroepidemiology of bovine leptospirosis in Latin America.

    Science.gov (United States)

    Pinto, Priscila da Silva; Libonati, Hugo; Penna, Bruno; Lilenbaum, Walter

    2016-02-01

    The diagnosis of leptospirosis commonly relies on serology, which raises three recurrent issues: the sampling, the antigen panel, and the cutoff point. We propose a systematic review of bovine leptospirosis in Latin America in order to provide a better understanding of how the research has evolved and of the seroepidemiology of the infection in that region. Internet databases were consulted during 2014. Inclusion criteria for analysis were a serosurvey using the microscopic agglutination test (MAT), a relevant number of animals, the presence in the antigen panel of at least one representative of serogroup Sejroe, and a cutoff point of ≥100. A total of 242 articles referring to cattle, leptospir*, and one region of Latin America were found. Only 105 articles regarding serosurveys using MAT were identified across several countries, and 61 (58.1 %) met all the inclusion criteria. In conclusion, this systematic review demonstrated a high prevalence of the infection (75.0 % at herd level and 44.2 % at animal level), with a predominance of strains of serogroup Sejroe (80.3 %). More studies are clearly needed in several countries, as is greater standardization across studies, especially with regard to the cutoff point adopted in serological tests.

  2. Testing for thrombophilia in mesenteric venous thrombosis - Retrospective original study and systematic review.

    Science.gov (United States)

    Zarrouk, M; Salim, S; Elf, J; Gottsäter, A; Acosta, S

    2017-02-01

    The aim was to perform a local study of risk factors and thrombophilia in mesenteric venous thrombosis (MVT), and to review the literature concerning thrombophilia testing in MVT. The local study comprised patients hospitalized for surgical or medical treatment of MVT at our center between 2000 and 2015. A systematic review of observational studies was also performed. In the local study, the most frequently identified risk factor was the Factor V Leiden mutation. The systematic review included 14 original studies. The highest pooled percentages of inherited thrombophilic factors were for the Factor V Leiden mutation, 9% (CI 2.9-16.1), and the prothrombin gene mutation, 7% (CI 2.7-11.8). The highest pooled percentage of an acquired thrombophilic factor was for the JAK2 V617F mutation, 14% (CI -1.9 to 28.1). The wide range in the frequency of inherited and acquired thrombophilic factors in different populations indicates the need to relate these factors to background population-based data in order to estimate their overrepresentation in MVT. There is a need to develop guidelines for when and how thrombophilia testing should be performed in MVT. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Testing the PRISMA-Equity 2012 reporting guideline: the perspectives of systematic review authors.

    Directory of Open Access Journals (Sweden)

    Belinda J Burford

    Full Text Available Reporting guidelines can be used to encourage standardised and comprehensive reporting of health research. In light of the global commitment to health equity, we have previously developed and published a reporting guideline for equity-focused systematic reviews (PRISMA-E 2012). The objectives of this study were to explore the utility of the equity extension items included in PRISMA-E 2012 from a systematic review author perspective, including facilitators and barriers to its use. This will assist in designing dissemination and knowledge translation strategies. We conducted a survey of systematic review authors to expose them to the new items in PRISMA-E 2012, establish the extent to which they had historically addressed those items in their own reviews, and gather feedback on the usefulness of the new items. Data were analysed using Microsoft Excel 2008 and Stata (version 11.2 for Mac). Of 151 respondents completing the survey, 18.5% (95% CI: 12.7% to 25.7%) had not heard of the PRISMA statement before, although 83.4% (95% CI: 77.5% to 89.3%) indicated that they plan to use PRISMA-E 2012 in the future, depending on the focus of their review. Most (68.9%; 95% CI: 60.8% to 76.2%) thought that using PRISMA-E 2012 would lead them to conduct their reviews differently. Important facilitators to using PRISMA-E 2012 identified by respondents were journal endorsement and incorporation of the elements of the guideline into systematic review software. Barriers identified were lack of time, word limits and the availability of equity data in primary research. This study has been the first to 'road-test' the new PRISMA-E 2012 reporting guideline and the findings are encouraging. They confirm the acceptability and potential utility of the guideline to assist review authors in reporting on equity in their reviews. The uptake and impact of PRISMA-E 2012 over time on design, conduct and reporting of primary research and systematic reviews should continue to be

  4. Prospective Tests on Biological Models of Acupuncture

    Directory of Open Access Journals (Sweden)

    Charles Shang

    2009-01-01

    Full Text Available The biological effects of acupuncture include the regulation of a variety of neurohumoral factors and growth control factors. In science, models or hypotheses with confirmed predictions are considered more convincing than models solely based on retrospective explanations. Literature review showed that two biological models of acupuncture have been prospectively tested with independently confirmed predictions: The neurophysiology model on the long-term effects of acupuncture emphasizes the trophic and anti-inflammatory effects of acupuncture. Its prediction on the peripheral effect of endorphin in acupuncture has been confirmed. The growth control model encompasses the neurophysiology model and suggests that a macroscopic growth control system originates from a network of organizers in embryogenesis. The activity of the growth control system is important in the formation, maintenance and regulation of all the physiological systems. Several phenomena of acupuncture such as the distribution of auricular acupuncture points, the long-term effects of acupuncture and the effect of multimodal non-specific stimulation at acupuncture points are consistent with the growth control model. The following predictions of the growth control model have been independently confirmed by research results in both acupuncture and conventional biomedical sciences: (i) Acupuncture has extensive growth control effects. (ii) Singular point and separatrix exist in morphogenesis. (iii) Organizers have high electric conductance, high current density and high density of gap junctions. (iv) A high density of gap junctions is distributed as separatrices or boundaries at body surface after early embryogenesis. (v) Many acupuncture points are located at transition points or boundaries between different body domains or muscles, coinciding with the connective tissue planes. (vi) Some morphogens and organizers continue to function after embryogenesis. Current acupuncture research suggests a

  5. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
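
    The three figures of merit lend themselves to a simple scoring harness. Below is a minimal sketch, assuming a particular data layout, of how a calibration technique could be scored against the synthetic surrogate data described above; the function and variable names, and the use of a CV(RMSE)-style fit metric in the spirit of ASHRAE Guideline 14, are illustrative assumptions rather than part of the original report.

    ```python
    import numpy as np

    def cv_rmse(measured, predicted):
        """Coefficient of variation of the RMSE, a common 'goodness of fit' metric."""
        resid = np.asarray(measured) - np.asarray(predicted)
        return np.sqrt(np.mean(resid ** 2)) / np.mean(measured)

    def score_calibration(true_params, calib_params,
                          true_savings, predicted_savings,
                          surrogate_bills, calibrated_bills):
        """Return the three figures of merit named in the abstract (hypothetical data layout)."""
        return {
            # 1) accuracy of the post-retrofit energy savings prediction
            "savings_error": abs(predicted_savings - true_savings) / true_savings,
            # 2) closure on the 'true' input parameter values behind the surrogate data
            "param_closure": np.linalg.norm(np.asarray(calib_params) - np.asarray(true_params))
                             / np.linalg.norm(true_params),
            # 3) goodness of fit to the (surrogate) utility bill data
            "bill_cv_rmse": cv_rmse(surrogate_bills, calibrated_bills),
        }
    ```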

  6. Parametric Testing of Launch Vehicle FDDR Models

    Science.gov (United States)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar

    2011-01-01

    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and to initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe, how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
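
    A hedged sketch of the parameter-generation and clustering steps described above: discrete fault-injection factors are covered combinatorially (here simplified to a full cross of two factors rather than an n-factor covering array), continuous dispersions are drawn by Monte Carlo, and the simulation outputs are clustered. All factor names, ranges and the toy run_simulation stand-in are assumptions for illustration; the real ERIS model and its PT tooling are not reproduced here.

    ```python
    import itertools
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Discrete fault-injection factors (hypothetical); a simplified full cross stands in
    # for the n-factor combinatorial exploration described in the abstract.
    fault_factors = {"valve_stuck": [0, 1], "sensor_bias": [-1, 0, 1]}
    fault_combos = list(itertools.product(*fault_factors.values()))

    def run_simulation(valve_stuck, sensor_bias, thrust_scale, isp_scale):
        # Stand-in for one simulation run; returns a small vector of summary outputs.
        return np.array([thrust_scale * (1.0 - 0.1 * valve_stuck),
                         isp_scale + 0.05 * sensor_bias])

    runs = []
    for valve_stuck, sensor_bias in fault_combos:
        # A few Monte Carlo draws of the continuous dispersions per discrete combination.
        for _ in range(5):
            thrust_scale = rng.normal(1.0, 0.02)
            isp_scale = rng.normal(1.0, 0.01)
            runs.append(run_simulation(valve_stuck, sensor_bias, thrust_scale, isp_scale))

    # Multivariate clustering of the simulation outputs to find structure automatically.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.vstack(runs))
    print(labels)
    ```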

  7. Reliability of physical examination tests for the diagnosis of knee disorders: Evidence from a systematic review.

    Science.gov (United States)

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Desmeules, François

    2016-12-01

    Clinicians often rely on physical examination tests to guide them in the diagnostic process of knee disorders. However, reliability of these tests is often overlooked and may influence the consistency of results and overall diagnostic validity. Therefore, the objective of this study was to systematically review evidence on the reliability of physical examination tests for the diagnosis of knee disorders. A structured literature search was conducted in databases up to January 2016. Included studies needed to report reliability measures of at least one physical test for any knee disorder. Methodological quality was evaluated using the QAREL checklist. A qualitative synthesis of the evidence was performed. Thirty-three studies were included with a mean QAREL score of 5.5 ± 0.5. Based on low to moderate quality evidence, the Thessaly test for meniscal injuries reached moderate inter-rater reliability (k = 0.54). Based on moderate to excellent quality evidence, the Lachman for anterior cruciate ligament injuries reached moderate to excellent inter-rater reliability (k = 0.42 to 0.81). Based on low to moderate quality evidence, the Tibiofemoral Crepitus, Joint Line and Patellofemoral Pain/Tenderness, Bony Enlargement and Joint Pain on Movement tests for knee osteoarthritis reached fair to excellent inter-rater reliability (k = 0.29 to 0.93). Based on low to moderate quality evidence, the Lateral Glide, Lateral Tilt, Lateral Pull and Quality of Movement tests for patellofemoral pain reached moderate to good inter-rater reliability (k = 0.49 to 0.73). Many physical tests appear to reach good inter-rater reliability, but this is based on low-quality and conflicting evidence. High-quality research is required to evaluate the reliability of knee physical examination tests. Copyright © 2016 Elsevier Ltd. All rights reserved.
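
    The k values quoted above are inter-rater kappa coefficients. As a reminder of the statistic being pooled, here is a minimal sketch of Cohen's kappa for two raters' categorical test calls; the example ratings are invented, not taken from any included study.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters' categorical judgements (e.g. positive/negative test calls)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
        return (observed - expected) / (1 - expected)

    # e.g. two clinicians scoring the same 10 knees as positive (+) or negative (-); hypothetical data
    print(cohens_kappa("++-+--+-++", "+++---+-+-"))
    ```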

  8. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part II. general guidance and tips

    International Nuclear Information System (INIS)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho; Lee, June Young

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies.
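
    As a simplified illustration of the kind of pooling involved (not the full bivariate or HSROC model, which additionally estimates the correlation between logit-sensitivity and logit-specificity and is typically fitted with dedicated packages such as R's mada or lme4), the sketch below pools sensitivity and specificity separately on the logit scale with a DerSimonian-Laird random-effects estimator. The 2x2 counts are hypothetical.

    ```python
    import numpy as np

    def logit(p):
        return np.log(p / (1 - p))

    def dersimonian_laird(effects, variances):
        """Random-effects pooling of per-study effects (here logit-sensitivities or logit-specificities)."""
        effects, variances = np.asarray(effects), np.asarray(variances)
        w = 1 / variances
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)
        tau2 = max(0.0, (q - (len(effects) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_star = 1 / (variances + tau2)
        return np.sum(w_star * effects) / np.sum(w_star)

    # Hypothetical 2x2 counts per study: (TP, FN, TN, FP), with a 0.5 continuity correction.
    studies = [(45, 5, 80, 20), (30, 10, 60, 15), (55, 8, 90, 25)]
    sens_logits, sens_vars, spec_logits, spec_vars = [], [], [], []
    for tp, fn, tn, fp in studies:
        tp, fn, tn, fp = (x + 0.5 for x in (tp, fn, tn, fp))
        sens_logits.append(logit(tp / (tp + fn))); sens_vars.append(1 / tp + 1 / fn)
        spec_logits.append(logit(tn / (tn + fp))); spec_vars.append(1 / tn + 1 / fp)

    pooled_sens = 1 / (1 + np.exp(-dersimonian_laird(sens_logits, sens_vars)))
    pooled_spec = 1 / (1 + np.exp(-dersimonian_laird(spec_logits, spec_vars)))
    print(pooled_sens, pooled_spec)
    ```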

  9. Internet-Based Direct-to-Consumer Genetic Testing: A Systematic Review

    Science.gov (United States)

    Rubinelli, Sara; Ceretti, Elisabetta; Gelatti, Umberto

    2015-01-01

    Background Direct-to-consumer genetic tests (DTC-GT) are easily purchased through the Internet, independent of a physician referral or approval for testing, allowing the retrieval of genetic information outside the clinical context. There is a broad debate about the validity of these tests, their impact on individuals, and what people know and perceive about them. Objective The aim of this review was to collect evidence on DTC-GT from a comprehensive perspective that unravels the complexity of the phenomenon. Methods A systematic search was carried out through PubMed, Web of Knowledge, and Embase, in addition to Google Scholar according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist with the key term “Direct-to-consumer genetic test.” Results In the final sample, 118 articles were identified. Articles were summarized in five categories according to their focus on (1) knowledge of, attitude toward use of, and perception of DTC-GT (n=37), (2) the impact of genetic risk information on users (n=37), (3) the opinion of health professionals (n=20), (4) the content of websites selling DTC-GT (n=16), and (5) the scientific evidence and clinical utility of the tests (n=14). Most of the articles analyzed the attitude, knowledge, and perception of DTC-GT, highlighting an interest in using DTC-GT, along with the need for a health care professional to help interpret the results. The articles investigating the content of the websites selling these tests agree that the information provided by the companies about genetic testing is not completely comprehensive for the consumer. Although risk information can modify consumers’ health behavior, there are surprisingly few studies carried out on actual consumers, and these do not confirm the overall concerns about the possible impact of DTC-GT. Data from studies that investigate the quality of the tests offered confirm that they are not informative, have little predictive

  10. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden)]; Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)]

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  11. Temperature Buffer Test. Final THM modelling

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel

    2012-01-01

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  12. Metal hypersensitivity testing in patients undergoing joint replacement: a systematic review.

    Science.gov (United States)

    Granchi, D; Cenni, E; Giunti, A; Baldini, N

    2012-08-01

    We report a systematic review and meta-analysis of the peer-reviewed literature focusing on metal sensitivity testing in patients undergoing total joint replacement (TJR). Our purpose was to assess the risk of developing metal hypersensitivity post-operatively and its relationship with outcome and to investigate the advantages of performing hypersensitivity testing. We undertook a comprehensive search of the citations quoted in PubMed and EMBASE: 22 articles (comprising 3634 patients) met the inclusion criteria. The frequency of positive tests increased after TJR, especially in patients with implant failure or a metal-on-metal coupling. The probability of developing a metal allergy was higher post-operatively (odds ratio (OR) 1.52 (95% confidence interval (CI) 1.06 to 2.31)), and the risk was further increased when failed implants were compared with stable TJRs (OR 2.76 (95% CI 1.14 to 6.70)). Hypersensitivity testing was not able to discriminate between stable and failed TJRs, as its predictive value was not statistically proven. However, it is generally thought that hypersensitivity testing should be performed in patients with a history of metal allergy and in failed TJRs, especially with metal-on-metal implants and when the cause of the loosening is doubtful.
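
    The pooled effect sizes above are odds ratios with 95% confidence intervals. A minimal sketch of the per-study calculation from a 2x2 table follows; the counts are hypothetical and are not taken from the review.

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """OR and 95% CI from a 2x2 table: a/b = events/non-events in one group, c/d in the other."""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo, hi = (math.exp(math.log(or_) + s * z * se_log_or) for s in (-1, 1))
        return or_, lo, hi

    # e.g. positive metal-sensitivity tests in failed vs stable TJRs (hypothetical counts)
    print(odds_ratio_ci(30, 70, 15, 85))
    ```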

  13. Satellite data for systematic validation of wave model results in the Black Sea

    Science.gov (United States)

    Behrens, Arno; Staneva, Joanna

    2017-04-01

    With regard to the availability of traditional in situ wave measurements recorded by waverider buoys, the Black Sea is a data-sparse, semi-enclosed sea. The only possibility for systematic validation of wave model results in such a regional area is the use of satellite data. In the frame of the COPERNICUS Marine Evolution System for the Black Sea, which requires wave predictions, the third-generation spectral wave model WAM is used. The operational system is demonstrated based on four years of systematic comparisons with satellite data. The aim of this investigation was to answer two questions: is the wave model able to provide a reliable description of the wave conditions in the Black Sea, and are the satellite measurements suitable for validation purposes on such a regional scale? Detailed comparisons between measured data and computed model results for the Black Sea, including yearly statistics, have been carried out for about 300 satellite overflights per year. The results are discussed in terms of the different verification schemes needed to assess the forecasting skill of the operational system. The good agreement between measured and modeled data supports the expectation that the wave model provides reasonable results and that the satellite data are of good quality and offer an appropriate validation alternative to buoy measurements. This is the required step towards further use of those satellite data for assimilation into the wave fields to improve the wave predictions. Additional support for the good quality of the wave predictions is provided by comparisons between ADCP measurements, available for a short period in February 2012, and the corresponding model results at a location near the Bulgarian coast in the western Black Sea. Sensitivity tests with different wave model options and different driving wind fields have been carried out to identify the model configuration that provides the best wave predictions. In addition to the comparisons between measured
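
    The yearly statistics referred to above typically include bias, RMSE, scatter index and correlation between collocated model and satellite significant wave heights. A minimal sketch of such a computation follows; the function and variable names are assumptions and no actual Black Sea data are used.

    ```python
    import numpy as np

    def validation_stats(model_hs, satellite_hs):
        """Common wave-validation metrics for collocated significant wave heights."""
        model_hs, satellite_hs = np.asarray(model_hs), np.asarray(satellite_hs)
        diff = model_hs - satellite_hs
        bias = np.mean(diff)
        rmse = np.sqrt(np.mean(diff ** 2))
        scatter_index = rmse / np.mean(satellite_hs)
        corr = np.corrcoef(model_hs, satellite_hs)[0, 1]
        return {"bias": bias, "rmse": rmse, "scatter_index": scatter_index, "r": corr}

    # e.g. a handful of collocated significant wave heights in metres (invented values)
    print(validation_stats([1.2, 0.8, 2.1, 1.6], [1.1, 0.9, 2.3, 1.5]))
    ```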

  14. Business model stress testing : A practical approach to test the robustness of a business model

    NARCIS (Netherlands)

    Haaker, T.I.; Bouwman, W.A.G.A.; Janssen, W; de Reuver, G.A.

    Business models and business model innovation are increasingly gaining attention in practice as well as in academic literature. However, the robustness of business models (BM) is seldom tested vis-à-vis the fast and unpredictable changes in digital technologies, regulation and markets. The

  15. Preliminary Test for Constitutive Models of CAP

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The development project for the domestic design code was launched to support the safety and performance analysis of pressurized light water reactors. As part of this project, the CAP (Containment Analysis Package) code has been under development for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drop) for the assessment of containment-specific phenomena, and features assessment capabilities for both multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and has made significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations; these equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN2.0 and CONTEMPT-LT, were reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; the test results and future plans to improve the level of execution are discussed in this paper.

  16. Preliminary Test for Constitutive Models of CAP

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of)]; Ha, Sang Jun; Choi, Hoon [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)]

    2010-05-15

    The development project for the domestic design code was launched to support the safety and performance analysis of pressurized light water reactors. As part of this project, the CAP (Containment Analysis Package) code has been under development for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drop) for the assessment of containment-specific phenomena, and features assessment capabilities for both multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and has made significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations; these equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN2.0 and CONTEMPT-LT, were reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; the test results and future plans to improve the level of execution are discussed in this paper.

  17. Divergence-based tests for model diagnostic

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Esteban, M. D.; Morales, D.; Marhuenda, Y.

    2008-01-01

    Roč. 78, č. 13 (2008), s. 1702-1710 ISSN 0167-7152 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MTM2006-05693 Institutional research plan: CEZ:AV0Z10750506 Keywords: goodness of fit * divergence statistics * GLM * model checking * bootstrap Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.445, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/hobza-divergence-based%20tests%20for%20model%20diagnostic.pdf

  18. Which physical examination tests provide clinicians with the most value when examining the shoulder? Update of a systematic review with meta-analysis of individual tests.

    Science.gov (United States)

    Hegedus, Eric J; Goode, Adam P; Cook, Chad E; Michener, Lori; Myer, Cortney A; Myer, Daniel M; Wright, Alexis A

    2012-11-01

    To update our previously published systematic review and meta-analysis by subjecting the literature on shoulder physical examination (ShPE) to careful analysis in order to determine each test's clinical utility. This review is an update of previous work; therefore, the terms in the Medline and CINAHL search strategies remained the same, with the exception that the search was confined to the dates November 2006 through February 2012 (the previous study covered 1966 to October 2006). Further, the original search was expanded, without date restrictions, to include two new databases: EMBASE and the Cochrane Library. The Quality Assessment of Diagnostic Accuracy Studies, version 2 (QUADAS 2) tool was used to critique the quality of each new paper. Where appropriate, data from the prior review and this review were combined to perform meta-analysis using the updated hierarchical summary receiver operating characteristic and bivariate models. Since the publication of the 2008 review, 32 additional studies were identified and critiqued. For subacromial impingement, the meta-analysis revealed that the pooled sensitivity and specificity for the Neer test were 72% and 60%, respectively, for the Hawkins-Kennedy test were 79% and 59%, respectively, and for the painful arc were 53% and 76%, respectively. Also from the meta-analysis, regarding superior labral anterior to posterior (SLAP) tears, the test with the best sensitivity (52%) was the relocation test; the test with the best specificity (95%) was Yergason's test; and the test with the best positive likelihood ratio (2.81) was the compression-rotation test. Regarding new (to this series of reviews) ShPE tests, where meta-analysis was not possible because of lack of sufficient studies or heterogeneity between studies, there are some individual tests that warrant further investigation. A highly specific test (specificity >80%, LR+ ≥ 5.0) from a low-bias study is the passive distraction test for a SLAP lesion. This test may
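
    The diagnostic metrics quoted above (sensitivity, specificity, positive likelihood ratio) are related by a simple calculation; the sketch below shows it, using an illustrative sensitivity/specificity pairing rather than the figures for any single test in the review.

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """Positive and negative likelihood ratios from a test's sensitivity and specificity."""
        lr_pos = sensitivity / (1 - specificity)
        lr_neg = (1 - sensitivity) / specificity
        return lr_pos, lr_neg

    # e.g. a hypothetical test with 52% sensitivity and 95% specificity
    print(likelihood_ratios(0.52, 0.95))
    ```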

  19. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but unpredictable, aerodynamically caused model behavior can sometimes produce large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion of aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that result from unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.
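
    The two-stage detector described above, a fast trip level for severe peaks plus a slower check that distinguishes decaying transients from sustained (e.g. aeroelastic) overloads, can be sketched in software form as below. This is only a conceptual Python illustration of the logic, assuming sampled load data; the actual device is an analog circuit, and all thresholds and window lengths here are invented.

    ```python
    import numpy as np

    def overload_monitor(load_samples, fast_limit, sustained_limit, window=50):
        """Two-stage trip logic: immediate trip on severe peaks, delayed trip on overloads
        that do not decay within a moving-average window."""
        history = []
        for i, sample in enumerate(load_samples):
            if abs(sample) > fast_limit:
                return i, "fast trip"            # severe peak: stop immediately
            history.append(abs(sample))
            if len(history) >= window and np.mean(history[-window:]) > sustained_limit:
                return i, "sustained trip"       # overload did not decay within the window
        return None, "no trip"

    # e.g. a synthetic load trace with a brief startup transient that decays (no trip expected)
    trace = np.concatenate([np.full(30, 0.8), np.full(500, 0.2)])
    print(overload_monitor(trace, fast_limit=1.0, sustained_limit=0.5))
    ```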

  20. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist.

    Directory of Open Access Journals (Sweden)

    Karel G M Moons

    2014-10-01

    Full Text Available Carl Moons and colleagues provide a checklist and background explanation for critically appraising and extracting data from systematic reviews of prognostic and diagnostic prediction modelling studies. Please see later in the article for the Editors' Summary.

  1. Meta-epidemiologic analysis indicates that MEDLINE searches are sufficient for diagnostic test accuracy systematic reviews.

    Science.gov (United States)

    van Enst, Wynanda A; Scholten, Rob J P M; Whiting, Penny; Zwinderman, Aeilko H; Hooft, Lotty

    2014-11-01

    To investigate how the summary estimates in diagnostic test accuracy (DTA) systematic reviews are affected when searches are limited to MEDLINE. A systematic search was performed to identify DTA reviews that had conducted exhaustive searches and included a meta-analysis. Primary studies included in selected reviews were assessed to determine whether they were indexed on MEDLINE. The effect of omitting non-MEDLINE studies from meta-analyses was investigated by calculating summary relative diagnostic odds ratios (RDORs) = DOR(MEDLINE only) / DOR(all studies). We also calculated the summary difference in sensitivity and specificity between all studies and only MEDLINE-indexed studies. Ten reviews contributing 15 meta-analyses met inclusion criteria for quantitative analysis. The RDOR comparing MEDLINE-only studies with all studies was 1.04 (95% confidence interval [CI], 0.95, 1.15). Summary estimates of sensitivity and specificity remained almost unchanged (difference in sensitivity: -0.08%; 95% CI -1% to 1%; difference in specificity: -0.1%; 95% CI -0.8% to 1%). Restricting to studies indexed on MEDLINE did not influence the summary estimates of the meta-analyses in our sample. In certain circumstances, for instance, when resources are limited, it may be appropriate to restrict searches to MEDLINE. However, the impact on individual reviews cannot be predicted. Copyright © 2014 Elsevier Inc. All rights reserved.
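
    The RDOR defined above is simply a ratio of two diagnostic odds ratios. A minimal sketch with hypothetical pooled 2x2 counts (the real study computes the DORs from full meta-analyses, not single tables):

    ```python
    def diagnostic_odds_ratio(tp, fn, tn, fp):
        """DOR = (TP/FN) / (FP/TN); a 0.5 correction avoids division by zero in sparse tables."""
        tp, fn, tn, fp = (x + 0.5 for x in (tp, fn, tn, fp))
        return (tp / fn) / (fp / tn)

    # Relative DOR: DOR restricted to MEDLINE-indexed studies divided by DOR from all studies.
    dor_medline_only = diagnostic_odds_ratio(120, 30, 200, 40)   # hypothetical counts
    dor_all_studies = diagnostic_odds_ratio(150, 40, 260, 55)    # hypothetical counts
    print(dor_medline_only / dor_all_studies)
    ```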

  2. Prediction of pre-eclampsia: a protocol for systematic reviews of test accuracy

    Directory of Open Access Journals (Sweden)

    Khan Khalid S

    2006-10-01

    Full Text Available Abstract Background Pre-eclampsia, a syndrome of hypertension and proteinuria, is a major cause of maternal and perinatal morbidity and mortality. Accurate prediction of pre-eclampsia is important, since high risk women could benefit from intensive monitoring and preventive treatment. However, decision making is currently hampered due to lack of precise and up to date comprehensive evidence summaries on estimates of risk of developing pre-eclampsia. Methods/Design A series of systematic reviews and meta-analyses will be undertaken to determine, among women in early pregnancy, the accuracy of various tests (history, examinations and investigations) for predicting pre-eclampsia. We will search Medline, Embase, Cochrane Library, MEDION, citation lists of review articles and eligible primary articles and will contact experts in the field. Reviewers working independently will select studies, extract data, and assess study validity according to established criteria. Language restrictions will not be applied. Bivariate meta-analysis of sensitivity and specificity will be considered for tests whose studies allow generation of 2 × 2 tables. Discussion The results of the test accuracy reviews will be integrated with results of effectiveness reviews of preventive interventions to assess the impact of test-intervention combinations for prevention of pre-eclampsia.

  3. Routinization of HIV Testing in an Inpatient Setting: A Systematic Process for Organizational Change.

    Science.gov (United States)

    Mignano, Jamie L; Miner, Lucy; Cafeo, Christina; Spencer, Derek E; Gulati, Mangla; Brown, Travis; Borkoski, Ruth; Gibson-Magri, Kate; Canzoniero, Jenna; Gottlieb, Jonathan E; Rowen, Lisa

    2016-01-01

    In 2006, the U.S. Centers for Disease Control and Prevention released revised recommendations for routinization of HIV testing in healthcare settings. Health professionals have been challenged to incorporate these guidelines. In March 2013, a routine HIV testing initiative was launched at a large urban academic medical center in a high prevalence region. The goal was to routinize HIV testing by achieving a 75% offer and 75% acceptance rate and promoting linkage to care in the inpatient setting. A systematic six-step organizational change process included stakeholder buy-in, identification of an interdisciplinary leadership team, infrastructure development, staff education, implementation, and continuous quality improvement. Success was measured by monitoring the percentage of offered and accepted HIV tests from March to December 2013. The targeted offer rate was exceeded consistently once nurses became part of the consent process (September 2013). Fifteen persons were newly diagnosed with HIV. Seventy-eight persons were identified as previously diagnosed with HIV, but not engaged in care. Through this process, patients who may have remained undiagnosed or out-of-care were identified and linked to care. The authors propose that this process can be replicated in other settings. Increasing identification and treatment will improve the individual patient's health and reduce community disease burden.

  4. The role of Pap test screening against cervical cancer: a systematic review and meta-analysis.

    Science.gov (United States)

    Meggiolaro, A; Unim, B; Semyonov, L; Miccoli, S; Maffongelli, E; La Torre, G

    2016-01-01

    The first aim of this article is to quantify the role of the Pap test in cervical cancer prevention, updating the pool of available studies included in a previous meta-analysis. The second aim is to investigate potential sources of meta-analysis heterogeneity. Further evidence on cost-effectiveness is provided regarding the age and the best time interval at which to perform Pap test screening. The article search was conducted using four medical electronic databases: PubMed, Google Scholar, ISI Web, and Scopus. Papers published until 30 November 2013 were included; the search on Google Scholar was limited to the first 10 pages of results for each study design. A systematic review/meta-analysis was performed according to the PRISMA Statement, and the Newcastle-Ottawa Scale and the Jadad scale were adopted for quality assessment of the articles. From 4143 screened articles, 34 met the eligibility criteria and 30 case-control studies were included in the meta-analysis, which was carried out using StatsDirect 2.8.0. Heterogeneity was investigated with qualitative and quantitative approaches in sensitivity analyses. Despite considerable heterogeneity (Cochran Q = 504.466, df = 29), a protective effect of the Pap test was identified (OR = 0.33; 95% CI = 0.268-0.408), and the protective role of the Pap test against cervical cancer was confirmed especially among women <40 years. Annual screening still remains the most cost-effective preventive strategy.
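
    The Cochran Q statistic quoted above, and the related I² index, quantify between-study heterogeneity. A minimal sketch of how they are computed from study-level log odds ratios and within-study variances (values are invented, not taken from the review):

    ```python
    import numpy as np

    def heterogeneity(log_ors, variances):
        """Cochran's Q and the I^2 statistic for a set of study-level log odds ratios."""
        log_ors, variances = np.asarray(log_ors), np.asarray(variances)
        w = 1 / variances
        pooled = np.sum(w * log_ors) / np.sum(w)
        q = float(np.sum(w * (log_ors - pooled) ** 2))
        df = len(log_ors) - 1
        i2 = (max(0.0, (q - df) / q) * 100) if q > 0 else 0.0
        return q, df, i2

    # hypothetical log-ORs and within-study variances for a handful of case-control studies
    print(heterogeneity([-1.1, -0.9, -1.4, -0.6], [0.04, 0.09, 0.05, 0.12]))
    ```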

  5. Interventions to Improve Follow-Up of Laboratory Test Results Pending at Discharge: A Systematic Review.

    Science.gov (United States)

    Whitehead, Nedra S; Williams, Laurina; Meleth, Sreelatha; Kennedy, Sara; Epner, Paul; Singh, Hardeep; Wooldridge, Kathleene; Dalal, Anuj K; Walz, Stacy E; Lorey, Tom; Graber, Mark L

    2018-02-28

    Failure to follow up test results pending at discharge (TPAD) from hospitals or emergency departments is a major patient safety concern. The purpose of this review is to systematically evaluate the effectiveness of interventions to improve follow-up of laboratory TPAD. We conducted literature searches in PubMed, CINAHL, Cochrane, and EMBASE using search terms for relevant health care settings, transition of patient care, laboratory tests, communication, and pending or missed tests. We solicited unpublished studies from the clinical laboratory community and excluded articles that did not address transitions between settings, did not include an intervention, or were not related to laboratory TPAD. We also excluded letters, editorials, commentaries, abstracts, case reports, and case series. Of the 9,592 abstracts retrieved, 8 met the inclusion criteria and reported the successful communication of TPAD. A team member abstracted predetermined data elements from each study, and a senior scientist reviewed the abstraction. Two experienced reviewers independently appraised the quality of each study using published LMBP™ A-6 scoring criteria. We assessed the body of evidence using the A-6 methodology, and the evidence suggested that electronic tools or one-on-one education increased documentation of pending tests in discharge summaries. We also found that automated notifications improved awareness of TPAD. The interventions were supported by suggestive evidence; this type of evidence is below the level of evidence required for LMBP™ recommendations. We encourage additional research into the impact of these interventions on key processes and health outcomes. © 2018 Society of Hospital Medicine.

  6. Noninvasive Tests Do Not Accurately Differentiate Nonalcoholic Steatohepatitis From Simple Steatosis: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Verhaegh, Pauline; Bavalia, Roisin; Winkens, Bjorn; Masclee, Ad; Jonkers, Daisy; Koek, Ger

    2017-08-22

    Non-alcoholic fatty liver disease is a rapidly increasing health problem. Liver biopsy analysis is the most sensitive test to differentiate between non-alcoholic steatohepatitis (NASH) and simple steatosis (SS), but non-invasive methods are needed. We performed a systematic review and meta-analysis of non-invasive tests for differentiating NASH from SS, focusing on blood markers. We performed a systematic search of the PubMed, Medline and Embase (1990-2016) databases using defined keywords, limited to full-text papers in English and human adults, and identified 2608 articles. Two independent reviewers screened the articles and identified 122 eligible articles that used liver biopsy as reference standard. If at least 2 studies were available, pooled sensitivity (sens_p) and specificity (spec_p) values were determined using the Meta-Analysis Package for R (metafor). In the 122 studies analyzed, 219 different blood markers (107 single markers and 112 scoring systems) were identified to differentiate NASH from simple steatosis, and 22 other diagnostic tests were studied. Markers identified related to several pathophysiological mechanisms. The markers analyzed in the largest proportions of studies were alanine aminotransferase (sens_p, 63.5% and spec_p, 74.4%) within routine biochemical tests, adiponectin (sens_p, 72.0% and spec_p, 75.7%) within inflammatory markers, CK18-M30 (sens_p, 68.4% and spec_p, 74.2%) within markers of cell death or proliferation and homeostatic model assessment of insulin resistance (sens_p, 69.0% and spec_p, 72.7%) within the metabolic markers. Two scoring systems could also be pooled: the NASH test (differentiated NASH from borderline NASH plus simple steatosis with 22.9% sens_p and 95.3% spec_p) and the GlycoNASH test (67.1% sens_p and 63.8% spec_p). In the meta-analysis, we found no test to differentiate NASH from SS with a high level of pooled sensitivity and specificity (≥80%). However, some blood markers, when included in

  7. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.

    2002-07-01

    This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with movement of a sand bank or with general subsidence. The scour protection in the tests is made out of stone material. Two different fractions have been used: 4 mm and 40 mm. Tests with current, with waves and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy was selected because the scour hole often starts to develop immediately after the structure has been placed. It is therefore difficult to establish a scour protection at the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure have been used for the tests, a Monopile and a Tripod foundation. A test with protection mats around the Monopile model was also carried out. The following main conclusions have emerged from the model tests with flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on sand bed was 1.6 times the cylinder diameter, 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees, 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present test virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with the protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for

  8. Alternative test models for skin aging research.

    Science.gov (United States)

    Nakamura, Motoki; Haarmann-Stemmann, Thomas; Krutmann, Jean; Morita, Akimichi

    2018-02-25

    Increasing ethical concerns regarding animal experimentation have led to the development of various alternative methods based on the 3Rs (Refinement, Reduction, and Replacement), first described by Russell and Burch in 1959. Cosmetic and skin aging research is particularly susceptible to concerns related to animal testing. In addition to animal welfare reasons, there are scientific and economic reasons to reduce and avoid animal experiments. Importantly, animal experiments may not reflect findings in humans, mainly because of differences in architecture and immune responses between animal and human skin. Here we review the shift from animal testing to the development and application of alternative, non-animal-based methods, and the necessity and benefits of this shift. Some specific alternatives to animal models are discussed, including biochemical approaches, two-dimensional and three-dimensional cell cultures, and volunteer studies, as well as future directions, including genome-based research and the development of in silico computer simulations of skin models. Among the in vitro methods, three-dimensional reconstructed skin models are highly popular and useful alternatives to animal models but still have many limitations. With careful selection and skillful handling, these alternative methods will become indispensable for modern dermatology and skin aging research. This article is protected by copyright. All rights reserved.

  9. BIOMOVS test scenario model comparison using BIOPATH

    International Nuclear Information System (INIS)

    Grogan, H.A.; Van Dorp, F.

    1986-07-01

    This report presents the results for the irrigation test scenario of the BIOMOVS intercomparison study, calculated with the computer code BIOPATH. This scenario defines a constant release of Tc-99 and Np-237 into groundwater that is used for irrigation. The system of compartments used to model the biosphere is based upon an area in northern Switzerland and is essentially the same as that used in Projekt Gewaehr to assess the radiological impact of a high level waste repository. Two separate irrigation methods are considered, namely ditch and overhead irrigation. Their influence on the resultant activities calculated in the groundwater, soil and different food products, as a function of time, is evaluated. The sensitivity of the model to parameter variations is analysed, which allows a deeper understanding of the model chain. These results are assessed subjectively in a first effort to realistically quantify the uncertainty associated with each calculated activity. (author)
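
    Compartment codes such as BIOPATH propagate activity through a chain of first-order transfers driven by a source term. The sketch below integrates a toy three-compartment chain (groundwater, irrigated soil, crop) under a constant release; all compartment names and transfer rates are illustrative assumptions and do not reproduce the BIOPATH parameterisation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical first-order transfer rates (1/yr) and a constant source term (Bq/yr).
    k_gw_to_soil, k_soil_to_crop, k_soil_loss, release = 0.2, 0.01, 0.05, 1.0

    def dcdt(t, c):
        gw, soil, crop = c
        return [release - k_gw_to_soil * gw,
                k_gw_to_soil * gw - (k_soil_to_crop + k_soil_loss) * soil,
                k_soil_to_crop * soil]

    sol = solve_ivp(dcdt, (0, 200), [0.0, 0.0, 0.0])
    print(sol.y[:, -1])   # activities in each compartment after 200 years
    ```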

  10. Thermal modelling of Advanced LIGO test masses

    International Nuclear Information System (INIS)

    Wang, H; Dovale Álvarez, M; Mow-Lowry, C M; Freise, A; Blair, C; Brooks, A; Kasprzack, M F; Ramette, J; Meyers, P M; Kaufer, S; O’Reilly, B

    2017-01-01

    High-reflectivity fused silica mirrors are at the epicentre of today’s advanced gravitational wave detectors. In these detectors, the mirrors interact with high power laser beams. As a result of finite absorption in the high-reflectivity coatings, the mirrors suffer from a variety of thermal effects that impact the detectors’ performance. We propose a model of the Advanced LIGO mirrors that introduces an empirical term to account for the radiative heat transfer between the mirror and its surroundings. The mechanical mode frequency is used as a probe for the overall temperature of the mirror. The thermal transient after power build-up in the optical cavities is used to refine and test the model. The model provides a coating absorption estimate of 1.5–2.0 ppm and estimates that 0.3 to 1.3 ppm of the circulating light is scattered onto the ring heater. (paper)
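
    As a rough illustration of the scale of the effect being modelled, a linearised radiative balance relates the absorbed coating power to the steady-state temperature rise of the mirror. The numbers below (circulating power, radiating area, emissivity) are illustrative assumptions, not the Advanced LIGO values, and the model in the paper is considerably more detailed.

    ```python
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def temperature_rise(circulating_power_w, absorption_ppm, area_m2,
                         emissivity=0.9, t_ambient=295.0):
        """Linearised radiative balance: the absorbed coating power is re-radiated
        from the mirror surface, giving a small steady-state temperature rise."""
        absorbed = circulating_power_w * absorption_ppm * 1e-6
        return absorbed / (4 * emissivity * SIGMA * area_m2 * t_ambient ** 3)

    # e.g. ~100 kW circulating power, ~2 ppm coating absorption, ~0.1 m^2 radiating area
    print(temperature_rise(1e5, 2.0, 0.1))   # roughly a few tenths of a kelvin
    ```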

  11. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  12. Statistical tests of simple earthquake cycle models

    Science.gov (United States)

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ~ 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
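
    The hypothesis test used above is the two-sample Kolmogorov-Smirnov test. A minimal sketch with synthetic stand-ins for the observed and model-predicted slip-rate distributions (the study itself uses 15 sets of geodetic and geologic observations, which are not reproduced here):

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: observed slip-rate ratios vs. ratios predicted by a candidate
    # viscoelastic earthquake-cycle model; the model is rejected if the distributions differ.
    observed = rng.lognormal(mean=0.0, sigma=0.3, size=15)
    predicted = rng.lognormal(mean=0.2, sigma=0.3, size=15)

    stat, p_value = ks_2samp(observed, predicted)
    if p_value < 0.05:
        print(f"model rejected at alpha = 0.05 (KS statistic = {stat:.2f}, p = {p_value:.3f})")
    else:
        print("model not rejected at alpha = 0.05")
    ```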

  13. A Systematic Review of the Anxiolytic-Like Effects of Essential Oils in Animal Models

    Directory of Open Access Journals (Sweden)

    Damião Pergentino de Sousa

    2015-10-01

    Full Text Available The clinical efficacy of standardized essential oils (such as Lavandula officinalis) in treating anxiety disorders strongly suggests that these natural products are an important candidate source for new anxiolytic drugs. A systematic review of essential oils, their bioactive constituents, and anxiolytic-like activity is conducted. The essential oil with the best profile is Lavandula angustifolia, which has already been tested in controlled clinical trials with positive results. Citrus aurantium using different routes of administration also showed significant effects in several animal models, and was corroborated by different research groups. Other promising essential oils are Citrus sinensis and bergamot oil, which showed certain clinical anxiolytic actions; along with Achillea wilhemsii, Alpinia zerumbet, Citrus aurantium, and Spiranthera odoratissima, which, like Lavandula angustifolia, appear to exert anxiolytic-like effects without GABA/benzodiazepine activity, thus differing in their mechanisms of action from the benzodiazepines. The anxiolytic activity of 25 compounds commonly found in essential oils is also discussed.

  14. Low levels of HIV test coverage in clinical settings in the UK: a systematic review of adherence to 2008 guidelines

    NARCIS (Netherlands)

    Elmahdi, Rahma; Gerver, Sarah M.; Gomez Guillen, Gabriela; Fidler, Sarah; Cooke, Graham; Ward, Helen

    2014-01-01

    To quantify the extent to which guideline recommendations for routine testing for HIV are adhered to outside of genitourinary medicine (GUM), sexual health (SH) and antenatal clinics. A systematic review of published data on testing levels following publication of 2008 guidelines was undertaken.

  15. Prostate specific antigen testing policy worldwide varies greatly and seems not to be in accordance with guidelines : A systematic review

    NARCIS (Netherlands)

    Van der Meer, Saskia; Löwik, Sabine; Hirdes, Willem H.; Nijman, Rien M.; Van der Meer, Klaas; Hoekstra-Weebers, Josette E. H. M.; Blanker, Marco H.

    2012-01-01

    Background: Prostate specific antigen (PSA) testing is widely used, but guidelines on follow-up are unclear. Methods: We performed a systematic review of the literature to determine follow-up policy after PSA testing by general practitioners (GPs) and non-urologic hospitalists, the use of a cut-off

  16. Côte de Resyste -- Automated Model Based Testing

    NARCIS (Netherlands)

    Tretmans, G.J.; Brinksma, Hendrik; Schweizer, M.

    2002-01-01

    Systematic testing is very important for assessing and improving the quality of embedded software. Yet, testing turns out to be expensive, laborious, time-consuming and error-prone. The project Côte de Resyste has been working since 1998 on methods, techniques and tools for automating specification

  17. TorX: Automated Model-Based Testing

    NARCIS (Netherlands)

    Tretmans, G.J.; Brinksma, Hendrik; Hartman, A.; Dussa-Ziegler, K.

    2003-01-01

    Systematic testing is very important for assessing and improving the quality of software systems. Yet, testing turns out to be expensive, laborious, time-consuming and error-prone. The Dutch research and development project Côte de Resyste worked on methods, techniques and tools for automating

  18. Systematic approach for the identification of process reference models

    CSIR Research Space (South Africa)

    Van Der Merwe, A

    2009-02-01

    Full Text Available Process models are used in different application domains to capture knowledge on the process flow. Process reference models (PRM) are used to capture reusable process models, which should simplify the identification process of process models...

  19. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty
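
    Step (3) of the scope above, calibrating the SCM against the liquid-release data by inverse modeling, amounts to adjusting seepage-relevant parameters until simulated seepage rates match the observed ones. The sketch below illustrates that generic idea with a least-squares fit; the power-law forward model and all numbers are hypothetical stand-ins, not the SCM or its data.

    ```python
    # Generic illustration of inverse calibration: adjust model parameters so
    # that simulated seepage rates match observed rates in a least-squares sense.
    # The forward model and all numbers here are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import least_squares

    release_rate = np.array([1.0, 2.0, 4.0, 8.0])          # liquid-release rates (arbitrary units)
    observed_seepage = np.array([0.05, 0.22, 0.70, 1.90])  # observed seepage rates (arbitrary units)

    def forward_model(params, release):
        """Toy forward model: seepage grows nonlinearly with release rate."""
        a, b = params
        return a * release ** b

    def residuals(params):
        return forward_model(params, release_rate) - observed_seepage

    fit = least_squares(residuals, x0=[0.1, 1.0])
    print("calibrated parameters:", fit.x)
    print("goodness of fit (sum of squared residuals):", 2 * fit.cost)
    ```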

  20. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-02-17

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  1. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...... linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...
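
    As a rough numerical illustration of the A-Not A result mentioned above (d' appearing as a probit-scale contrast between the response rates for A and Not-A samples), the sketch below computes d' directly from hit and false-alarm proportions. The counts are invented, and this is only a sketch, not a substitute for the sensR package, which also provides standard errors, power and sample-size calculations.

    ```python
    # Minimal sketch of the Thurstonian d' for a monadic A-Not A test.
    # d' is the difference of the probit-transformed hit and false-alarm rates,
    # which is exactly the contrast a probit-link GLM would estimate.
    # The counts below are hypothetical.
    from scipy.stats import norm

    n_A, hits = 100, 68             # "A" answers among A samples
    n_notA, false_alarms = 100, 35  # "A" answers among Not-A samples

    hit_rate = hits / n_A
    fa_rate = false_alarms / n_notA

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    print(f"d' = {d_prime:.3f}")
    ```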

  2. A systematic review of current immunological tests for the diagnosis of cattle brucellosis.

    Science.gov (United States)

    Ducrotoy, Marie J; Muñoz, Pilar M; Conde-Álvarez, Raquel; Blasco, José M; Moriyón, Ignacio

    2018-03-01

    Brucellosis is a worldwide extended zoonosis with a heavy economic and public health impact. Cattle, sheep and goats are infected by smooth Brucella abortus and Brucella melitensis, and represent a common source of the human disease. Brucellosis diagnosis in these animals is largely based on detection of a specific immunoresponse. We review here the immunological tests used for the diagnosis of cattle brucellosis. First, we discuss how the diagnostic sensitivity (DSe) and specificity (DSp) balance should be adjusted for brucellosis diagnosis, and the difficulties that brucellosis tests specifically present for the estimation of DSe/DSp in frequentist (gold standard) and Bayesian analyses. Then, we present a systematic review (PubMed, GoogleScholar and CABdirect) of works (154 out of 991; years 1960-August 2017) identified (by title and Abstract content) as DSe and DSp studies of smooth lipopolysaccharide, O-polysaccharide-core, native hapten and protein diagnostic tests. We summarize data of gold standard studies (n = 23) complying with strict inclusion and exclusion criteria with regard to test methodology and definition of the animals studied (infected and S19 or RB51 vaccinated cattle, and Brucella-free cattle affected or not by false positive serological reactions). We also discuss some studies (smooth lipopolysaccharide tests, protein antibody and delayed type hypersensitivity [skin] tests) that do not meet the criteria and yet fill some of the gaps in information. We review Bayesian studies (n = 5) and report that in most cases priors and assumptions on conditional dependence/independence are not coherent with the variable serological picture of the disease in different epidemiological scenarios and the bases (antigen, isotype and immunoglobulin properties involved) of brucellosis tests, practical experience and the results of gold standard studies. We conclude that very useful lipopolysaccharide (buffered plate antigen and indirect ELISA) and

  3. A systematic study of multiple minerals precipitation modelling in wastewater treatment.

    Science.gov (United States)

    Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J

    2015-11-15

    Mineral solids precipitation is important in wastewater treatment. However, approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed to integrate into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach was noted to have the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h⁻¹ with confidence intervals of 10-80% relative. Confidence regions for the kinetic rate coefficients were often asymmetric with model-data residuals increasing more gradually with larger coefficient values. This suggests that a large kinetic coefficient could be used when actual measured data is lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be independently fitted (keeping all other model parameters constant). Implementation was therefore relatively flexible, and would be readily expandable to include other
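
    As a rough sketch of what the kinetic part of such a semi-mechanistic model can look like, the snippet below evaluates a generic saturation-driven precipitation rate of the form r = k·X·(S - 1)^n, with the saturation ratio S computed from an ion activity product and a solubility product. The functional form, the 1:1:1 stoichiometry and every number are illustrative assumptions, not values from the study.

    ```python
    # Generic sketch of a saturation-driven precipitation rate term of the kind
    # used in semi-mechanistic multi-mineral models.  All numbers are hypothetical
    # and the mineral stoichiometry is simplified to 1:1:1 (struvite-like).

    def saturation_ratio(c_mg, c_nh4, c_po4, k_sp):
        """Saturation ratio S = (ion activity product / Ksp)**(1/3) for a 1:1:1 solid,
        treating concentrations as activities for simplicity."""
        iap = c_mg * c_nh4 * c_po4
        return (iap / k_sp) ** (1.0 / 3.0)

    def precipitation_rate(c_mg, c_nh4, c_po4, x_solid, k_cryst=1.0, n=1.0, k_sp=1e-13):
        """Kinetic rate r = k_cryst * x_solid * (S - 1)**n, zero when undersaturated."""
        s = saturation_ratio(c_mg, c_nh4, c_po4, k_sp)
        return k_cryst * x_solid * max(s - 1.0, 0.0) ** n

    # Example evaluation with made-up molar concentrations and solid concentration.
    print(precipitation_rate(c_mg=2e-3, c_nh4=30e-3, c_po4=5e-3, x_solid=0.1))
    ```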

  4. Direct-to-consumer genetic testing: a systematic review of european guidelines, recommendations, and position statements.

    Science.gov (United States)

    Rafiq, Muhammad; Ianuale, Carolina; Ricciardi, Walter; Boccia, Stefania

    2015-10-01

    Personalized healthcare is expected to yield promising results, with a paradigm shift toward more personalization in the practice of medicine. This emerging field has wide-ranging implications for all the stakeholders. Commercial tests in the form of multiplex genetic profiles are currently being provided to consumers, without the physicians' consultation, through the Internet, referred to as direct-to-consumer genetic tests (DTC GT). The objective was to review all the existing European guidelines on DTC GT, and its associated interventions, to list all the supposed benefits and harms, issues and concerns, and recommendations. We conducted a systematic review of position statements, policies, guidelines, and recommendations, produced by professional organizations or other relevant bodies for use of DTC GT in Europe. Seventeen documents met the inclusion criteria, which were subjected to thematic analysis, and the texts were coded for statements related to use of DTC GT. Professional societies and associations are currently more suggestive of potential disadvantages of DTC GT, recommending improved genetic literacy of both populations and health professionals, and implementation research on the genetic tests to integrate public health genomics into healthcare systems.

  5. Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.

    Science.gov (United States)

    Shao, Lijing

    2014-03-21

    The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. It constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits of the LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They are all improved by significant factors of tens to hundreds compared with existing limits. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.

  6. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff then a process energy dependence of this cutoff must be admitted. Precision data from at least two energy scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e⁺e⁻ colliders. We pay attention to the special role of tau polarization experiments that can be sensitive to the 'Higgs mass' for a sample of ∼10⁸ produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions

  7. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made such a design diagram available for the trunk of Dolos breakwaters without superstructures (Burcharth et al. 1992......). To extend the design diagram to cover Dolos breakwaters with a superstructure, 2-D model tests of a Dolos breakwater with a wave wall are included in the project Rubble Mound Breakwater Failure Modes sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92......-0042. Furthermore, Task IA will give the design diagram for Tetrapod breakwaters without a superstructure. The more complete research results on Dolosse can certainly give some insight into the behaviour of the Tetrapod armour layer of breakwaters with a superstructure. The main part of the experiment

  8. Uncertainty Analysis of Resistance Tests in Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University

    Directory of Open Access Journals (Sweden)

    Cihad DELEN

    2015-12-01

    Full Text Available In this study, a set of systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) has been analysed in order to determine the associated uncertainties. Experiments conducted within the framework of mathematical and physical rules for the solution of engineering problems, along with the related measurements and calculations, involve uncertainty. To judge the reliability of the obtained values, the existing uncertainties should be expressed as quantities; the results of a measurement system carry little universal value if its uncertainty is unknown. On the other hand, resistance is one of the most important parameters that should be considered in the process of ship design. Ship resistance during the design phase cannot be determined precisely and reliably because of the uncertainty sources involved in determining the resistance value. This may make it harder to meet the required specifications in the later design steps. The uncertainty arising from the resistance test has been estimated and compared for a displacement-type ship and for high-speed marine vehicles according to the ITTC 2002 and ITTC 2014 guidelines related to uncertainty analysis methods. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.
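
    As an illustration of the kind of arithmetic behind an ITTC-style uncertainty budget, the sketch below combines a random (precision) component estimated from repeat runs with an assumed systematic (bias) component by root-sum-square and applies a coverage factor of 2. The repeat-run values and the single lumped bias term are placeholder assumptions, not the laboratory's actual budget.

    ```python
    # Simplified sketch of combining uncertainty components for a model resistance
    # test in the GUM/ITTC spirit.  Repeat-run values and the bias estimate are invented.
    import math
    import statistics

    repeat_runs_N = [45.2, 45.6, 44.9, 45.4, 45.1]   # total resistance from repeat runs [N]

    mean_rt = statistics.mean(repeat_runs_N)
    # Random (precision, Type A) standard uncertainty of the mean.
    u_random = statistics.stdev(repeat_runs_N) / math.sqrt(len(repeat_runs_N))

    # Systematic (bias, Type B) standard uncertainty, e.g. from dynamometer
    # calibration; a single lumped placeholder value here.
    u_systematic = 0.20  # [N]

    u_combined = math.sqrt(u_random**2 + u_systematic**2)
    U_expanded = 2.0 * u_combined   # coverage factor k = 2 (approx. 95% level)

    print(f"R_T = {mean_rt:.2f} N +/- {U_expanded:.2f} N (k = 2)")
    ```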

  9. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Finsterle, S.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  10. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  11. Model-independent tests of cosmic gravity.

    Science.gov (United States)

    Linder, Eric V

    2011-12-28

    Gravitation governs the expansion and fate of the universe, and the growth of large-scale structure within it, but has not been tested in detail on these cosmic scales. The observed acceleration of the expansion may provide signs of gravitational laws beyond general relativity (GR). Since the form of any such extension is not clear, from either theory or data, we adopt a model-independent approach to parametrizing deviations to the Einstein framework. We explore the phase space dynamics of two key post-GR functions and derive a classification scheme, and an absolute criterion on accuracy necessary for distinguishing classes of gravity models. Future surveys will be able to constrain the post-GR functions' amplitudes and forms to the required precision, and hence reveal new aspects of gravitation.

  12. Systematic prediction error correction: a novel strategy for maintaining the predictive abilities of multivariate calibration models.

    Science.gov (United States)

    Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L

    2011-01-07

    The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data.
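
    SPEC itself is not detailed in the abstract, but one of the baselines it is compared against, univariate slope and bias correction (SBC), is simple enough to sketch: predictions of the old calibration model are regressed against reference values for a few standardization samples measured under the new conditions, and the fitted line corrects subsequent predictions. The example below is a generic illustration with invented numbers.

    ```python
    # Minimal sketch of univariate slope and bias correction (SBC) for maintaining
    # a multivariate calibration model after an instrument or condition change.
    # All numbers are hypothetical.
    import numpy as np

    # Predictions of the old calibration model for a few standardization samples
    # measured under the new conditions, and their known reference values.
    predicted = np.array([2.10, 4.35, 6.20, 8.55])
    reference = np.array([2.00, 4.00, 6.00, 8.00])

    # Fit reference = slope * predicted + bias.
    slope, bias = np.polyfit(predicted, reference, deg=1)

    def correct(y_pred):
        """Apply the slope/bias correction to new predictions."""
        return slope * np.asarray(y_pred) + bias

    print(correct([3.2, 7.1]))
    ```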

  13. Extensive and systematic rewiring of histone post-translational modifications in cancer model systems.

    Science.gov (United States)

    Noberini, Roberta; Osti, Daniela; Miccolo, Claudia; Richichi, Cristina; Lupia, Michela; Corleone, Giacomo; Hong, Sung-Pil; Colombo, Piergiuseppe; Pollo, Bianca; Fornasari, Lorenzo; Pruneri, Giancarlo; Magnani, Luca; Cavallaro, Ugo; Chiocca, Susanna; Minucci, Saverio; Pelicci, Giuliana; Bonaldi, Tiziana

    2018-03-29

    Histone post-translational modifications (PTMs) generate a complex combinatorial code that regulates gene expression and nuclear functions, and whose deregulation has been documented in different types of cancers. Therefore, the availability of relevant culture models that can be manipulated and that retain the epigenetic features of the tissue of origin is absolutely crucial for studying the epigenetic mechanisms underlying cancer and testing epigenetic drugs. In this study, we took advantage of quantitative mass spectrometry to comprehensively profile histone PTMs in patient tumor tissues, primary cultures and cell lines from three representative tumor models, breast cancer, glioblastoma and ovarian cancer, revealing an extensive and systematic rewiring of histone marks in cell culture conditions, which includes a decrease of H3K27me2/me3, H3K79me1/me2 and H3K9ac/K14ac, and an increase of H3K36me1/me2. While some changes occur in short-term primary cultures, most of them are instead time-dependent and appear only in long-term cultures. Remarkably, such changes mostly revert in cell line- and primary cell-derived in vivo xenograft models. Taken together, these results support the use of xenografts as the most representative models of in vivo epigenetic processes, suggesting caution when using cultured cells, in particular cell lines and long-term primary cultures, for epigenetic investigations.

  14. Combinatorial QSAR modeling of chemical toxicants tested against Tetrahymena pyriformis.

    Science.gov (United States)

    Zhu, Hao; Tropsha, Alexander; Fourches, Denis; Varnek, Alexandre; Papa, Ester; Gramatica, Paola; Oberg, Tomas; Dao, Phuong; Cherkasov, Artem; Tetko, Igor V

    2008-04-01

    Selecting most rigorous quantitative structure-activity relationship (QSAR) approaches is of great importance in the development of robust and predictive models of chemical toxicity. To address this issue in a systematic way, we have formed an international virtual collaboratory consisting of six independent groups with shared interests in computational chemical toxicology. We have compiled an aqueous toxicity data set containing 983 unique compounds tested in the same laboratory over a decade against Tetrahymena pyriformis. A modeling set including 644 compounds was selected randomly from the original set and distributed to all groups that used their own QSAR tools for model development. The remaining 339 compounds in the original set (external set I) as well as 110 additional compounds (external set II) published recently by the same laboratory (after this computational study was already in progress) were used as two independent validation sets to assess the external predictive power of individual models. In total, our virtual collaboratory has developed 15 different types of QSAR models of aquatic toxicity for the training set. The internal prediction accuracy for the modeling set ranged from 0.76 to 0.93 as measured by the leave-one-out cross-validation correlation coefficient (Q²_abs). The prediction accuracy for the external validation sets I and II ranged from 0.71 to 0.85 (linear regression coefficient R²_abs,I) and from 0.38 to 0.83 (linear regression coefficient R²_abs,II), respectively. The use of an applicability domain threshold implemented in most models generally improved the external prediction accuracy but at the same time led to a decrease in chemical space coverage. Finally, several consensus models were developed by averaging the predicted aquatic toxicity for every compound using all 15 models, with or without taking into account their respective applicability domains. We find that consensus models afford higher prediction accuracy for the
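
    The consensus step described above, averaging the predictions of all models and optionally restricting the average to models whose applicability domain covers the compound, can be sketched as follows; the prediction matrix and the domain mask are invented placeholders.

    ```python
    # Sketch of consensus QSAR prediction: average per-compound predictions over
    # several models, ignoring models whose applicability domain excludes the
    # compound.  The prediction matrix and domain information here are invented.
    import numpy as np

    # rows = models, columns = compounds
    predictions = np.array([
        [0.52, 1.10, -0.30],
        [0.48, 1.25, -0.10],
        [0.61, 0.95,  np.nan],   # NaN marks "outside this model's applicability domain"
    ])

    # Simple consensus: mean of the available (in-domain) predictions per compound.
    consensus = np.nanmean(predictions, axis=0)
    coverage = np.mean(~np.isnan(predictions), axis=0)  # fraction of applicable models

    print("consensus predictions:", consensus)
    print("chemical space coverage per compound:", coverage)
    ```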

  15. Fracture strength of implant abutments after fatigue testing: A systematic review and a meta-analysis.

    Science.gov (United States)

    Coray, Rafaela; Zeltner, Marco; Özcan, Mutlu

    2016-09-01

    The use of implants and their respective suprastructures to replace missing teeth has become a common therapeutic option in dentistry. Prior to their clinical application, all implant components have to demonstrate suitable durability in laboratory studies. Fatigue tests utilising cyclic loading typically simulate masticatory function in vitro. The objectives of this systematic review were to assess the loading conditions used for fatigue testing of implant abutments and to compare the fracture strength of different types of implant abutment and abutment-connection types after cyclic loading. Original scientific papers published in MEDLINE (PubMed) and the Embase database in English between 01/01/1970 and 12/31/2014 on cyclic loading on implant abutments were included in this systematic review. The following MeSH terms, search terms and their combinations were used: "in vitro" or "ex vivo" or experimental or laboratory, "dental implants", "implants, experimental", "dental prosthesis, implant-supported", "fatigue", "dental abutments", "cyclic loading", "cyclic fatigue", "mechanical fatigue", "fatigue resistance", "bending moments", and "fracture". Two reviewers performed screening and data abstraction. Only studies that reported static fracture values before and after fatigue cycling of implant abutments were included, which allowed comparison of the aging effect of cyclic loading. Data (N) were analyzed using a weighted linear regression analysis (α=0.05). The selection process resulted in a final sample of 7 studies. In general, the loading conditions of the fatigue tests revealed heterogeneity in the sample, but a meta-analysis could be performed for the following parameters: (a) abutment material, (b) implant-abutment connection, and (c) number of fatigue cycles. Mean fracture strength of titanium (508.9±334.6 N) and zirconia abutments (698.6±452.6 N) did not show a significant difference after cyclic loading (p>0.05). Internal implant-abutment connections

  16. Tests for predicting complications of pre-eclampsia: A protocol for systematic reviews

    Directory of Open Access Journals (Sweden)

    O'Brien Shaughn

    2008-08-01

    Full Text Available Abstract Background Pre-eclampsia is associated with several complications. Early prediction of complications and timely management is needed for clinical care of these patients to avert fetal and maternal mortality and morbidity. There is a need to identify the best testing strategies in pre-eclampsia to identify the women at increased risk of complications. We aim to determine the accuracy of various tests to predict complications of pre-eclampsia by systematic quantitative reviews. Method We performed an extensive search in MEDLINE (1951–2004) and EMBASE (1974–2004) and will also include manual searches of bibliographies of primary and review articles. An initial search has revealed 19500 citations. Two reviewers will independently select studies and extract data on study characteristics, quality and accuracy. Accuracy data will be used to construct 2 × 2 tables. Data synthesis will involve assessment for heterogeneity and appropriate pooling of results to produce summary Receiver Operating Characteristic (ROC) curves and summary likelihood ratios. Discussion This review will generate predictive information and integrate that with therapeutic effectiveness to determine the absolute benefit and harm of available therapy in reducing complications in women with pre-eclampsia.
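
    The planned 2 × 2 tables directly yield the familiar accuracy measures. As a reminder of the arithmetic, the sketch below computes sensitivity, specificity and likelihood ratios from hypothetical cell counts.

    ```python
    # Sensitivity, specificity and likelihood ratios from a single 2x2 table of a
    # predictive test against the complication of interest.  Counts are invented.
    tp, fp, fn, tn = 30, 20, 10, 140

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity

    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
    print(f"LR+ = {lr_positive:.2f}, LR- = {lr_negative:.2f}")
    ```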

  17. Should we systematically test patients with clinically isolated syndrome for auto-antibodies?

    Science.gov (United States)

    Negrotto, Laura; Tur, Carmen; Tintoré, Mar; Arrambide, Georgina; Sastre-Garriga, Jaume; Río, Jordi; Comabella, Manuel; Nos, Carlos; Galán, Ingrid; Vidal-Jordana, Angela; Simon, Eva; Castilló, Joaquín; Palavra, Filipe; Mitjana, Raquel; Auger, Cristina; Rovira, Àlex; Montalban, Xavier

    2015-12-01

    Several autoimmune diseases (ADs) can mimic multiple sclerosis (MS). For this reason, testing for auto-antibodies (auto-Abs) is often included in the diagnostic work-up of patients with a clinically isolated syndrome (CIS). The purpose was to study how useful it was to systematically determine antinuclear-antibodies, anti-SSA and anti-SSB in a non-selected cohort of CIS patients, regarding the identification of other ADs that could represent an alternative diagnosis. From a prospective CIS cohort, we selected 772 patients in which auto-Ab levels were tested within the first year from CIS. Baseline characteristics of auto-Ab positive and negative patients were compared. A retrospective revision of clinical records was then performed in the auto-Ab positive patients to identify those who developed ADs during follow-up. One or more auto-Ab were present in 29.4% of patients. Only 1.8% of patients developed other ADs during a mean follow-up of 6.6 years. In none of these cases the concurrent AD was considered the cause of the CIS. In all cases the diagnosis of the AD resulted from the development of signs and/or symptoms suggestive of each disease. Antinuclear-antibodies, anti-SSA and anti-SSB should not be routinely determined in CIS patients but only in those presenting symptoms suggestive of other ADs. © The Author(s), 2015.

  18. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  19. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...

  20. A new model test in high energy physics in frequentist and bayesian statistical formalisms

    International Nuclear Information System (INIS)

    Kamenshchikov, A.

    2017-01-01

    The problem of testing a new physical model using observed experimental data is a typical one for modern experiments in high energy physics (HEP). A solution of the problem may be provided with two alternative statistical formalisms, namely frequentist and Bayesian, which are widely spread in contemporary HEP searches. A characteristic experimental situation is modeled from general considerations, and both approaches are utilized in order to test a new model. The results are juxtaposed, demonstrating their consistency in this work. The effect of the treatment of systematic uncertainties in the statistical analysis is also considered.
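
    A toy version of the situation described, testing a new-signal hypothesis against observed data in both formalisms, is a single-bin Poisson counting experiment with a known expected background and a fixed hypothesized signal. The sketch below computes a frequentist p-value and a simple-versus-simple Bayes factor for invented numbers and ignores systematic uncertainties entirely, so it illustrates the two viewpoints rather than the analysis in the paper.

    ```python
    # Toy comparison of frequentist and Bayesian tests of a new-signal hypothesis
    # in a single-bin counting experiment.  Background, signal and observed count
    # are invented, and systematic uncertainties are ignored.
    from scipy.stats import poisson

    background = 10.0    # expected background events
    signal = 8.0         # expected signal events under the new model
    n_observed = 19      # observed event count

    # Frequentist: p-value for observing >= n_observed under background only.
    p_value = poisson.sf(n_observed - 1, background)

    # Bayesian (simple vs simple): Bayes factor in favour of signal + background.
    bayes_factor = poisson.pmf(n_observed, background + signal) / poisson.pmf(n_observed, background)

    print(f"p-value (background only) = {p_value:.4f}")
    print(f"Bayes factor (S+B vs B)   = {bayes_factor:.2f}")
    ```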

  1. Experimentally testing the standard cosmological model

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))

    1990-11-01

    The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since that relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z⁰ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for ν-masses may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.

  2. Deterministic Modeling of the High Temperature Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full core solver used in this study and is based on the Green’s Function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the

  3. Impact of systematic HIV testing on case finding and retention in care at a primary care clinic in South Africa.

    Science.gov (United States)

    Clouse, Kate; Hanrahan, Colleen F; Bassett, Jean; Fox, Matthew P; Sanne, Ian; Van Rie, Annelies

    2014-12-01

    Systematic, opt-out HIV counselling and testing (HCT) may diagnose individuals at lower levels of immunodeficiency but may impact loss to follow-up (LTFU) if healthier people are less motivated to engage and remain in HIV care. We explored LTFU and patient clinical outcomes under two different HIV testing strategies. We compared patient characteristics and retention in care between adults newly diagnosed with HIV by either voluntary counselling and testing (VCT) plus targeted provider-initiated counselling and testing (PITC) or systematic HCT at a primary care clinic in Johannesburg, South Africa. One thousand one hundred and forty-four adults were newly diagnosed by VCT/PITC and 1124 by systematic HCT. Two-thirds of diagnoses were in women. Median CD4 count at HIV diagnosis (251 vs. 264 cells/μl, P = 0.19) and proportion of individuals eligible for antiretroviral therapy (ART) (67.2% vs. 66.7%, P = 0.80) did not differ by HCT strategy. Within 1 year of HIV diagnosis, half were LTFU: 50.5% under VCT/PITC and 49.6% under systematic HCT (P = 0.64). The overall hazard of LTFU was not affected by testing policy (aHR 0.98, 95%CI: 0.87-1.10). Independent of HCT strategy, males, younger adults and those ineligible for ART were at higher risk of LTFU. Implementation of systematic HCT did not increase baseline CD4 count. Overall retention in the first year after HIV diagnosis was low (37.9%), especially among those ineligible for ART, but did not differ by testing strategy. Expansion of HIV testing should coincide with effective strategies to increase retention in care, especially among those not yet eligible for ART at initial diagnosis. © 2014 John Wiley & Sons Ltd.

  4. Systematic and reliable multiscale modelling of lithium batteries

    Science.gov (United States)

    Atalay, Selcuk; Schmuck, Markus

    2017-11-01

    Motivated by the increasing interest in lithium batteries as energy storage devices (e.g. cars/bicycles/public transport, social robot companions, mobile phones, and tablets), we investigate three basic cells: (i) a single intercalation host; (ii) a periodic arrangement of intercalation hosts; and (iii) a rigorously upscaled formulation of (ii) as initiated in. By systematically accounting for Li transport and interfacial reactions in (i)-(iii), we compute the associated characteristic current-voltage curves and power densities. Finally, we discuss the influence of how the intercalation particles are arranged. Our findings are expected to improve the understanding of how microscopic properties affect the battery behaviour observed on the macroscale and, at the same time, the upscaled formulation (iii) serves as an efficient computational tool. This work has been supported by EPSRC, UK, through the Grant No. EP/P011713/1.

  5. KIDMED TEST; PREVALENCE OF LOW ADHERENCE TO THE MEDITERRANEAN DIET IN CHILDREN AND YOUNG; A SYSTEMATIC REVIEW.

    Science.gov (United States)

    García Cabrera, S; Herrera Fernández, N; Rodríguez Hernández, C; Nissensohn, M; Román-Viñas, B; Serra-Majem, L

    2015-12-01

    During the last decades, a quick and important modification of the dietary habits has been observed in the Mediterranean countries, especially among young people. Several authors have evaluated the pattern of adherence to the Mediterranean Diet in this population group by using the KIDMED test. The purpose of this study was to evaluate the adherence to the Mediterranean Diet among children and adolescents by using the KIDMED test through a systematic review and meta-analysis. The PubMed database was accessed until January 2014. Only cross-sectional studies evaluating children and young people were included. A random effects model was considered. Eighteen cross-sectional studies were included. The population age ranged from 2 to 25 years. The total sample included 24 067 people. The overall percentage of high adherence to the Mediterranean Diet was 10% (95% CI 0.07-0.13), while low adherence was 21% (95% CI 0.14-0.27). In the low adherence group, further analyses were performed by defined subgroups, finding differences for the age of the population and the geographical area. The results obtained showed important differences between high and low adherence to the Mediterranean Diet levels, although successive subgroup analyses were performed. There is a clear trend towards the abandonment of the Mediterranean lifestyle. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  6. HCV Core Antigen Testing for Diagnosis of HCV Infection: A systematic review and meta-analysis

    Science.gov (United States)

    Freiman, J. Morgan; Tran, Trang M.; Schumacher, Samuel G; White, Laura F.; Ongarello, Stefano; Cohn, Jennifer; Easterbrook, Philippa J.; Linas, Benjamin P.; Denkinger, Claudia M.

    2017-01-01

    Background: Diagnosis of chronic Hepatitis C Virus (HCV) infection requires both a positive HCV antibody screen and a confirmatory nucleic acid test (NAT). HCV core antigen (HCVcAg) is a potential alternative to NAT. Purpose: This systematic review evaluated the accuracy of diagnosis of active HCV infection among adults and children for five HCVcAg tests compared to NAT. Data Sources: EMBASE, PubMed, Web of Science, Scopus, and Cochrane from 1990 through March 31, 2016. Study Selection: Cohort, cross-sectional, and randomized controlled trials were included without language restriction. Data Extraction: Two independent reviewers extracted data and assessed quality using an adapted Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Data Synthesis: 44 studies evaluated 5 index tests. Studies for the ARCHITECT had the highest quality, while those for the Ortho ELISA were the lowest. From bivariate analyses, the sensitivity and specificity with 95% CI were: ARCHITECT 93.4% (90.1, 96.4) and 98.8% (97.4, 99.5), Ortho ELISA 93.2% (81.6, 97.7) and 99.2% (87.9, 100), and Hunan Jynda 59.5% (46.0, 71.7) and 82.9% (58.6, 94.3). Insufficient data were available for a meta-analysis for Lumipulse and Lumispot. In three quantitative studies using ARCHITECT, HCVcAg correlated closely with HCV RNA above 3000 IU/mL. Limitations: There were insufficient data on covariates such as HIV or HBV status for sub-group analyses. Few studies reported genotypes of isolates and there were scant data for genotypes 4, 5, and 6. Most studies were conducted in high resource settings within reference laboratories. Conclusions: HCVcAg assays with signal amplification have high sensitivity, high specificity, and good correlation with HCV RNA above 3000 IU/mL. HCVcAg assays have the potential to replace NAT in high HCV prevalence settings. PMID:27322622

  7. The Chain Information Model: a systematic approach for food product development

    NARCIS (Netherlands)

    Benner, M.

    2005-01-01

    The chain information model has been developed to increase the success rate of new food products. The uniqueness of this approach is that it approaches the problem from a chain perspective and starts with the consumer. The model can be used to analyse the production chain in a systematic way. This

  8. In vitro biofilm models to study dental caries: a systematic review

    NARCIS (Netherlands)

    Maske, T.T.; Sande, F.H. van de; Arthur, R.A.; Huysmans, M.C.; Cenci, M.S.

    2017-01-01

    The aim of this systematic review is to characterize and discuss key methodological aspects of in vitro biofilm models for caries-related research and to verify the reproducibility and dose-response of models considering the response to anti-caries and/or antimicrobial substances. Inclusion criteria

  9. Multivariate models of subjective caregiver burden in dementia: A systematic review

    NARCIS (Netherlands)

    van der Lee, J.H.; Bakker, T.J.E.M.; Duivenvoorden, H.J.; Dröes, R.M.

    2014-01-01

    Background: Burden in dementia caregivers is a complex and multidimensional construct. Several models of burden and other representations of burden like depression or mental health are described in literature. To clarify the state of science, we systematically reviewed complex models that include

  10. A digital tool set for systematic model design in process-engineering education

    NARCIS (Netherlands)

    Schaaf, van der H.; Tramper, J.; Hartog, R.J.M.; Vermuë, M.H.

    2006-01-01

    One of the objectives of the process technology curriculum at Wageningen University is that students learn how to design mathematical models in the context of process engineering, using a systematic problem analysis approach. Students find it difficult to learn to design a model and little material

  11. Experimental Tests of the Algebraic Cluster Model

    Science.gov (United States)

    Gai, Moshe

    2018-02-01

    The Algebraic Cluster Model (ACM) of Bijker and Iachello, which was proposed already in 2000, has recently been applied to ¹²C and ¹⁶O with much success. We review the current status in ¹²C with the outstanding observation of the ground state rotational band composed of the spin-parity states 0⁺, 2⁺, 3⁻, 4± and 5⁻. The observation of the 4± parity doublet is a characteristic of a (tri-atomic) molecular configuration in which the three alpha particles are arranged in an equilateral triangular configuration of a symmetric spinning top. We discuss future measurements with electron scattering, ¹²C(e,e’), to test the predicted B(Eλ) of the ACM.

  12. Physical examination tests of the shoulder: a systematic review and meta-analysis of diagnostic test performance.

    Science.gov (United States)

    Gismervik, Sigmund Ø; Drogset, Jon O; Granviken, Fredrik; Rø, Magne; Leivseth, Gunnar

    2017-01-25

    Physical examination tests of the shoulder (PETS) are clinical examination maneuvers designed to aid the assessment of shoulder complaints. Despite more than 180 PETS described in the literature, evidence of their validity and usefulness in diagnosing the shoulder is questioned. This meta-analysis aims to use diagnostic odds ratio (DOR) to evaluate how much PETS shift overall probability and to rank the test performance of single PETS in order to aid the clinician's choice of which tests to use. This study adheres to the principles outlined in the Cochrane guidelines and the PRISMA statement. A fixed effect model was used to assess the overall diagnostic validity of PETS by pooling DOR for different PETS with similar biomechanical rationale when possible. Single PETS were assessed and ranked by DOR. Clinical performance was assessed by sensitivity, specificity, accuracy and likelihood ratio. Six thousand nine-hundred abstracts and 202 full-text articles were assessed for eligibility; 20 articles were eligible and data from 11 articles could be included in the meta-analysis. All PETS for SLAP (superior labral anterior posterior) lesions pooled gave a DOR of 1.38 [1.13, 1.69]. The Supraspinatus test for any full thickness rotator cuff tear obtained the highest DOR of 9.24 (sensitivity was 0.74, specificity 0.77). Compression-Rotation test obtained the highest DOR (6.36) among single PETS for SLAP lesions (sensitivity 0.43, specificity 0.89) and Hawkins test obtained the highest DOR (2.86) for impingement syndrome (sensitivity 0.58, specificity 0.67). No single PETS showed superior clinical test performance. The clinical performance of single PETS is limited. However, when the different PETS for SLAP lesions were pooled, we found a statistical significant change in post-test probability indicating an overall statistical validity. We suggest that clinicians choose their PETS among those with the highest pooled DOR and to assess validity to their own specific clinical
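
    The pooling described above operates on the log diagnostic odds ratio. A minimal fixed-effect (inverse-variance) version of that calculation is sketched below with hypothetical 2 × 2 tables, using DOR = (TP·TN)/(FP·FN) and its usual log-scale standard error.

    ```python
    # Fixed-effect (inverse-variance) pooling of diagnostic odds ratios from
    # several studies of the same physical examination test.  Counts are invented.
    import math

    # (TP, FP, FN, TN) per study
    studies = [(40, 10, 12, 60), (25, 8, 15, 70), (55, 20, 9, 90)]

    log_dors, weights = [], []
    for tp, fp, fn, tn in studies:
        dor = (tp * tn) / (fp * fn)
        se_log_dor = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
        log_dors.append(math.log(dor))
        weights.append(1 / se_log_dor**2)

    pooled_log_dor = sum(w * x for w, x in zip(weights, log_dors)) / sum(weights)
    print(f"pooled DOR = {math.exp(pooled_log_dor):.2f}")
    ```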

  13. Experimental tests of the standard model

    International Nuclear Information System (INIS)

    Nodulman, L.

    1998-01-01

    The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the "Universal Fermi Interaction" description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabibbo mixing, now generalized so that all charged current decays of any flavor are covered

  14. Experimental tests of the standard model.

    Energy Technology Data Exchange (ETDEWEB)

    Nodulman, L.

    1998-11-11

    The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the "Universal Fermi Interaction" description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabibbo mixing, now generalized so that all charged current decays of any flavor are covered.

  15. Computerized Classification Testing with the Rasch Model

    Science.gov (United States)

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
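
    As a sketch of how an SPRT-based classification decision can work under the Rasch model, the snippet below accumulates the log likelihood ratio of two fixed ability values straddling the cutoff and compares it with Wald's boundaries. Item difficulties, responses and error rates are all invented.

    ```python
    # Sketch of a Sequential Probability Ratio Test (SPRT) classification decision
    # under the Rasch model.  Item difficulties, responses and error rates are invented.
    import math

    def rasch_p(theta, b):
        """Probability of a correct response under the Rasch model."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    theta0, theta1 = -0.5, 0.5        # abilities just below / above the cutoff
    alpha, beta = 0.05, 0.05          # nominal error rates
    upper = math.log((1 - beta) / alpha)   # classify as "master" (theta1) above this
    lower = math.log(beta / (1 - alpha))   # classify as "non-master" (theta0) below this

    difficulties = [-1.0, -0.3, 0.0, 0.4, 0.9, 1.2]
    responses =    [1,     1,    1,   0,   1,   1]   # 1 = correct, 0 = incorrect

    log_lr = 0.0
    for b, x in zip(difficulties, responses):
        p1, p0 = rasch_p(theta1, b), rasch_p(theta0, b)
        log_lr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if log_lr >= upper:
            decision = "master"
            break
        if log_lr <= lower:
            decision = "non-master"
            break
    else:
        decision = "undecided (item pool exhausted)"
    print("classification:", decision)
    ```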

  16. The reliability of physical examination tests for the diagnosis of anterior cruciate ligament rupture--A systematic review.

    Science.gov (United States)

    Lange, Toni; Freiberg, Alice; Dröge, Patrik; Lützner, Jörg; Schmitt, Jochen; Kopkow, Christian

    2015-06-01

    Systematic literature review. Despite their frequent application in routine care, a systematic review on the reliability of clinical examination tests to evaluate the integrity of the ACL is missing. To summarize and evaluate intra- and interrater reliability research on physical examination tests used for the diagnosis of ACL tears. A comprehensive systematic literature search was conducted in MEDLINE, EMBASE and AMED until May 30th 2013. Studies were included if they assessed the intra- and/or interrater reliability of physical examination tests for the integrity of the ACL. Methodological quality was evaluated with the Quality Appraisal of Reliability Studies (QAREL) tool by two independent reviewers. 110 hits were achieved of which seven articles finally met the inclusion criteria. These studies examined the reliability of four physical examination tests. Intrarater reliability was assessed in three studies and ranged from fair to almost perfect (Cohen's k = 0.22-1.00). Interrater reliability was assessed in all included studies and ranged from slight to almost perfect (Cohen's k = 0.02-0.81). The Lachman test is the physical tests with the highest intrarater reliability (Cohen's k = 1.00), the Lachman test performed in prone position the test with the highest interrater reliability (Cohen's k = 0.81). Included studies were partly of low methodological quality. A meta-analysis could not be performed due to the heterogeneity in study populations, reliability measures and methodological quality of included studies. Systematic investigations on the reliability of physical examination tests to assess the integrity of the ACL are scarce and of varying methodological quality. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. A systematic review of the diagnostic accuracy of provocative tests of the neck for diagnosing cervical radiculopathy.

    NARCIS (Netherlands)

    Rubinstein, S.M.; Pool, J.J.; van Tulder, M.W.; Riphagen, II; de Vet, H.C.W.

    2007-01-01

    Clinical provocative tests of the neck, which position the neck and arm in order to aggravate or relieve arm symptoms, are commonly used in clinical practice in patients with a suspected cervical radiculopathy. Their diagnostic accuracy, however, has never been examined in a systematic review. A

  18. Reduction in uptake of PSA tests following decision aids: systematic review of current aids and their evaluations.

    NARCIS (Netherlands)

    Evans, R.; Edwards, A.; Brett, J.; Bradburn, M.; Watson, E.; Austoker, J.; Elwyn, G.

    2005-01-01

    A man's decision to have a prostate-specific antigen (PSA) test should be an informed one. We undertook a systematic review to identify and appraise PSA decision aids and evaluations. We searched 15 electronic databases and hand-searched key journals. We also contacted key authors and organisations.

  19. A systematic review of randomized controlled trials testing the efficacy of psychosocial interventions for gastrointestinal cancers.

    Science.gov (United States)

    Steel, Jennifer L; Bress, Kathryn; Popichak, Lydia; Evans, Jonathan S; Savkova, Alexandra; Biala, Michelle; Ordos, Josh; Carr, Brian I

    2014-06-01

    Psychological morbidity in those diagnosed with cancer has been shown to result in poorer quality of life and increase the risk of mortality. As a result, researchers have designed and tested psychosocial interventions to improve quality of life and survival of patients diagnosed with cancer. A systematic review of the literature was performed to describe the psychosocial interventions that have been tested in patients with gastrointestinal cancers. Databases such as MEDLINE, PsychINFO, PubMed, MedLine, and Cochrane Reviews were searched. The searches were inclusive of studies published in English between 1966 and October 2013. Raters conducted full-text review of the resulting articles for the following eligibility criteria: (1) participants were 18 years or older, (2) the majority of patients in the sample were diagnosed with a gastrointestinal cancer, (3) the trial was testing a psychosocial intervention, and (4) random assignment to one or more interventions versus a usual care, placebo, attention control, or waiting-list control condition. The interventions that were eligible for this review included psychosocial or behavioral intervention (e.g., cognitive behavioral therapy, problem solving, educational, and collaborative care), physical activity, and/or psychopharmacologic treatment (e.g., selective serotonin reuptake inhibitor). Interventions that included dietary changes were not included in the present review. Study quality was also assessed using the Physiotherapy Evidence Database (PEDro) system. The review identified eight studies testing psychosocial interventions in patients with gastrointestinal cancers. Findings of these studies suggested that the interventions were effective in reducing psychological and physical symptoms associated with the cancer, improving quality of life, and reducing immune system dysregulation, and one study demonstrated an improvement in survival. Two studies reported no

  20. A Systematic Literature Review of Agile Maturity Model Research

    OpenAIRE

    Vaughan Henriques; Maureen Tanner

    2017-01-01

    Background/Aim/Purpose: A commonly implemented software process improvement framework is the capability maturity model integrated (CMMI). Existing literature indicates higher levels of CMMI maturity could result in a loss of agility due to its organizational focus. To maintain agility, research has focussed attention on agile maturity models. The objective of this paper is to find the common research themes and conclusions in agile maturity model research. Methodology: This research adop...

  1. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  2. Incentivizing Blood Donation: Systematic Review and Meta-Analysis to Test Titmuss’ Hypotheses

    Science.gov (United States)

    2013-01-01

    Objectives: Titmuss hypothesized that paying blood donors would reduce the quality of the blood donated and would be economically inefficient. We report here the first systematic review to test these hypotheses, reporting on both financial and nonfinancial incentives. Method: Studies deemed eligible for inclusion were peer-reviewed, experimental studies that presented data on the quantity (as a proxy for efficiency) and quality of blood donated in at least two groups: those donating blood when offered an incentive, and those donating blood with no offer of an incentive. The following were searched: MEDLINE, EMBASE and PsycINFO using OVID SP, CINAHL via EBSCO and CENTRAL, the Cochrane Library, Econlit via EBSCO, JSTOR Health and General Science Collection, and Google. Results: The initial search yielded 1100 abstracts, which resulted in 89 full papers being assessed for eligibility, of which seven studies, reported in six papers, met the inclusion criteria. The included studies involved 93,328 participants. Incentives had no impact on the likelihood of donation (OR = 1.22 CI 95% 0.91–1.63; p = .19). There was no difference between financial and nonfinancial incentives in the quantity of blood donated. Of the two studies that assessed quality of blood, one found no effect and the other found an adverse effect from the offer of a free cholesterol test (β = 0.011 p < .05). Conclusion: The limited evidence suggests that Titmuss’ hypothesis of the economic inefficiency of incentives is correct. There is insufficient evidence to assess their likely impact on the quality of the blood provided. PMID:24001244

  3. Incentivizing blood donation: systematic review and meta-analysis to test Titmuss' hypotheses.

    Science.gov (United States)

    Niza, Claudia; Tung, Burcu; Marteau, Theresa M

    2013-09-01

    Titmuss hypothesized that paying blood donors would reduce the quality of the blood donated and would be economically inefficient. We report here the first systematic review to test these hypotheses, reporting on both financial and nonfinancial incentives. Studies deemed eligible for inclusion were peer-reviewed, experimental studies that presented data on the quantity (as a proxy for efficiency) and quality of blood donated in at least two groups: those donating blood when offered an incentive, and those donating blood with no offer of an incentive. The following were searched: MEDLINE, EMBASE and PsycINFO using OVID SP, CINAHL via EBSCO and CENTRAL, the Cochrane Library, Econlit via EBSCO, JSTOR Health and General Science Collection, and Google. The initial search yielded 1100 abstracts, which resulted in 89 full papers being assessed for eligibility, of which seven studies, reported in six papers, met the inclusion criteria. The included studies involved 93,328 participants. Incentives had no impact on the likelihood of donation (OR = 1.22 CI 95% 0.91-1.63; p = .19). There was no difference between financial and nonfinancial incentives in the quantity of blood donated. Of the two studies that assessed quality of blood, one found no effect and the other found an adverse effect from the offer of a free cholesterol test (β = 0.011 p < .05). The limited evidence suggests that Titmuss' hypothesis of the economic inefficiency of incentives is correct. There is insufficient evidence to assess their likely impact on the quality of the blood provided. PsycINFO Database Record (c) 2013 APA, all rights reserved.
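
    For readers unfamiliar with the pooling behind figures such as OR = 1.22 (95% CI 0.91-1.63), a minimal DerSimonian-Laird random-effects sketch on log odds ratios; the 2x2 counts below are invented and do not come from the reviewed studies:

      import math

      # Invented counts: (donated with incentive, offered incentive, donated without incentive, no incentive)
      studies = [(120, 400, 100, 400), (60, 150, 55, 150), (200, 800, 210, 800)]

      log_or, var = [], []
      for a, n1, c, n2 in studies:
          b, d = n1 - a, n2 - c
          log_or.append(math.log((a * d) / (b * c)))
          var.append(1 / a + 1 / b + 1 / c + 1 / d)

      w = [1 / v for v in var]                                      # fixed-effect weights
      fixed = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
      q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_or))  # Cochran's Q

      k = len(studies)
      c_dl = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
      tau2 = max(0.0, (q - (k - 1)) / c_dl)                         # between-study variance

      w_re = [1 / (v + tau2) for v in var]                          # random-effects weights
      pooled = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)
      se = math.sqrt(1 / sum(w_re))
      print("OR = %.2f (95%% CI %.2f-%.2f)"
            % (math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)))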

  4. Systematic modeling for free stators of rotary - Piezoelectric ultrasonic motors

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, Rouzbeh; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    An equivalent circuit model with complex elements is presented in this paper to describe the free stator model of traveling wave piezoelectric motors. The mechanical, dielectric and piezoelectric losses associated with the vibrator are considered by introducing the imaginary part to the equivalent...... the measurements of a recently developed piezoelectric motor and a well known USR60....

  5. Model uncertainty and systematic risk in US banking

    NARCIS (Netherlands)

    Baele, L.T.M.; De Bruyckere, Valerie; De Jonghe, O.G.; Vander Vennet, Rudi

    This paper uses Bayesian Model Averaging to examine the driving factors of equity returns of US Bank Holding Companies. BMA has as an advantage over OLS that it accounts for the considerable uncertainty about the correct set (model) of bank risk factors. We find that out of a broad set of 12 risk

  6. Bayesian Network Models in Cyber Security: A Systematic Review

    NARCIS (Netherlands)

    Chockalingam, S.; Pieters, W.; Herdeiro Teixeira, A.M.; van Gelder, P.H.A.J.M.; Lipmaa, Helger; Mitrokotsa, Aikaterini; Matulevicius, Raimundas

    2017-01-01

    Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security especially due to their capability to overcome data limitations. This is also instantiated by the growth of BN models development in cyber security. However, a comprehensive comparison and analysis of these

  7. Measurement properties of the upright motor control test for adults with stroke: a systematic review.

    Science.gov (United States)

    Gorgon, Edward James R; Lazaro, Rolando T

    2016-01-01

    The Upright Motor Control Test (UMCT) has been used in clinical practice and research to assess functional strength of the hemiparetic lower limb in adults with stroke. It is unclear if evidence is sufficient to warrant its use. The purpose of this systematic review was to synthesize available evidence on the measurement properties of the UMCT for stroke rehabilitation. Electronic databases that indexed biomedical literature were systematically searched from inception until October 2015 (week 4): Embase, PubMed, Web of Science, CINAHL, PEDro, Cochrane Library, Scopus, ScienceDirect, SPORTDiscus, LILACS, DOAJ, and Google Scholar. All studies that had used the UMCT in the time period covered underwent hand searching for any additional study. Observational studies involving adults with stroke that explored any measurement property of the UMCT were included. The COnsensus-based Standards for the selection of health Measurement INstruments was used to assess the methodological quality of included studies. The CanChild Outcome Measures Rating Form was used for extracting data on measurement properties and clinical utility. The search yielded three methodologic studies that addressed criterion-related validity and construct validity. Two studies of fair methodological quality demonstrated moderate-level evidence that Knee Extension and Knee Flexion subtest scores were predictive of community-level and household-level ambulation. One study of fair methodological quality provided limited-level evidence for the correlation of Knee Extension subtest scores with a laboratory measure of ground reaction forces. No published studies formally assessed reliability, responsiveness, or clinical utility. Limited information on responsiveness and clinical utility dimensions could be inferred from the included studies. The UMCT is a practical assessment tool for voluntary control or functional strength of the hemiparetic lower limb in standing in adults with stroke. Although different

  8. Seismic response analysis and test of CHASNUPP steam generator lower part model

    International Nuclear Information System (INIS)

    Han Liangbi; Xu Jinkang; Zhan Jingxian; He Yinbiao; Wang Peizhu

    1997-12-01

    The seismic response analysis and test of the CHASNUPP steam generator lower part model have been performed. The lower part model consists of a tube sheet, 441 U-shaped rods (modeled on U-shaped tubes), 9 tie rods, a tube bundle wrapper, a lower shell and some position bolts between the lower shell and the wrapper. The analytical and experimental data show that the steam generator lower part model is stiffer than the pure tube bundle model. The lower part model moves in a systematic manner under earthquake conditions. The fundamental natural frequency of this model is higher than that of the tube bundle model and lower than that of the lower shell. The global frequency components are absolutely dominant and the local frequency components are insignificant. The experimental data are in good agreement with the results from the finite element method. The effectiveness of the mathematical model is verified

  9. Retention in HIV Care between Testing and Treatment in Sub-Saharan Africa: A Systematic Review

    Science.gov (United States)

    Rosen, Sydney; Fox, Matthew P.

    2011-01-01

    Background Improving the outcomes of HIV/AIDS treatment programs in resource-limited settings requires successful linkage of patients testing positive for HIV to pre–antiretroviral therapy (ART) care and retention in pre-ART care until ART initiation. We conducted a systematic review of pre-ART retention in care in Africa. Methods and Findings We searched PubMed, ISI Web of Knowledge, conference abstracts, and reference lists for reports on the proportion of adult patients retained between any two points between testing positive for HIV and initiating ART in sub-Saharan African HIV/AIDS care programs. Results were categorized as Stage 1 (from HIV testing to receipt of CD4 count results or clinical staging), Stage 2 (from staging to ART eligibility), or Stage 3 (from ART eligibility to ART initiation). Medians (ranges) were reported for the proportions of patients retained in each stage. We identified 28 eligible studies. The median proportion retained in Stage 1 was 59% (35%–88%); Stage 2, 46% (31%–95%); and Stage 3, 68% (14%–84%). Most studies reported on only one stage; none followed a cohort of patients through all three stages. Enrollment criteria, terminology, end points, follow-up, and outcomes varied widely and were often poorly defined, making aggregation of results difficult. Synthesis of findings from multiple studies suggests that fewer than one-third of patients testing positive for HIV and not yet eligible for ART when diagnosed are retained continuously in care, though this estimate should be regarded with caution because of review limitations. Conclusions Studies of retention in pre-ART care report substantial loss of patients at every step, starting with patients who do not return for their initial CD4 count results and ending with those who do not initiate ART despite eligibility. Better health information systems that allow patients to be tracked between service delivery points are needed to properly evaluate pre-ART loss to care, and

  10. Introducing malaria rapid diagnostic tests in private medicine retail outlets: A systematic literature review.

    Science.gov (United States)

    Visser, Theodoor; Bruxvoort, Katia; Maloney, Kathleen; Leslie, Toby; Barat, Lawrence M; Allan, Richard; Ansah, Evelyn K; Anyanti, Jennifer; Boulton, Ian; Clarke, Siân E; Cohen, Jessica L; Cohen, Justin M; Cutherell, Andrea; Dolkart, Caitlin; Eves, Katie; Fink, Günther; Goodman, Catherine; Hutchinson, Eleanor; Lal, Sham; Mbonye, Anthony; Onwujekwe, Obinna; Petty, Nora; Pontarollo, Julie; Poyer, Stephen; Schellenberg, David; Streat, Elizabeth; Ward, Abigail; Wiseman, Virginia; Whitty, Christopher J M; Yeung, Shunmay; Cunningham, Jane; Chandler, Clare I R

    2017-01-01

    Many patients with malaria-like symptoms seek treatment in private medicine retail outlets (PMR) that distribute malaria medicines but do not traditionally provide diagnostic services, potentially leading to overtreatment with antimalarial drugs. To achieve universal access to prompt parasite-based diagnosis, many malaria-endemic countries are considering scaling up malaria rapid diagnostic tests (RDTs) in these outlets, an intervention that may require legislative changes and major investments in supporting programs and infrastructures. This review identifies studies that introduced malaria RDTs in PMRs and examines study outcomes and success factors to inform scale up decisions. Published and unpublished studies that introduced malaria RDTs in PMRs were systematically identified and reviewed. Literature published before November 2016 was searched in six electronic databases, and unpublished studies were identified through personal contacts and stakeholder meetings. Outcomes were extracted from publications or provided by principal investigators. Six published and six unpublished studies were found. Most studies took place in sub-Saharan Africa and were small-scale pilots of RDT introduction in drug shops or pharmacies. None of the studies assessed large-scale implementation in PMRs. RDT uptake varied widely from 8%-100%. Provision of artemisinin-based combination therapy (ACT) for patients testing positive ranged from 30%-99%, and was more than 85% in five studies. Of those testing negative, provision of antimalarials varied from 2%-83% and was less than 20% in eight studies. Longer provider training, lower RDT retail prices and frequent supervision appeared to have a positive effect on RDT uptake and provider adherence to test results. Performance of RDTs by PMR vendors was generally good, but disposal of medical waste and referral of patients to public facilities were common challenges. Expanding services of PMRs to include malaria diagnostic services may hold

  11. Introducing malaria rapid diagnostic tests in private medicine retail outlets: A systematic literature review.

    Directory of Open Access Journals (Sweden)

    Theodoor Visser

    Full Text Available Many patients with malaria-like symptoms seek treatment in private medicine retail outlets (PMR) that distribute malaria medicines but do not traditionally provide diagnostic services, potentially leading to overtreatment with antimalarial drugs. To achieve universal access to prompt parasite-based diagnosis, many malaria-endemic countries are considering scaling up malaria rapid diagnostic tests (RDTs) in these outlets, an intervention that may require legislative changes and major investments in supporting programs and infrastructures. This review identifies studies that introduced malaria RDTs in PMRs and examines study outcomes and success factors to inform scale up decisions. Published and unpublished studies that introduced malaria RDTs in PMRs were systematically identified and reviewed. Literature published before November 2016 was searched in six electronic databases, and unpublished studies were identified through personal contacts and stakeholder meetings. Outcomes were extracted from publications or provided by principal investigators. Six published and six unpublished studies were found. Most studies took place in sub-Saharan Africa and were small-scale pilots of RDT introduction in drug shops or pharmacies. None of the studies assessed large-scale implementation in PMRs. RDT uptake varied widely from 8%-100%. Provision of artemisinin-based combination therapy (ACT) for patients testing positive ranged from 30%-99%, and was more than 85% in five studies. Of those testing negative, provision of antimalarials varied from 2%-83% and was less than 20% in eight studies. Longer provider training, lower RDT retail prices and frequent supervision appeared to have a positive effect on RDT uptake and provider adherence to test results. Performance of RDTs by PMR vendors was generally good, but disposal of medical waste and referral of patients to public facilities were common challenges. Expanding services of PMRs to include malaria diagnostic

  12. USING OF BYOD MODEL FOR TESTING OF EDUCATIONAL ACHIEVEMENTS ON THE BASIS OF GOOGLE SEARCH SERVICES

    Directory of Open Access Journals (Sweden)

    Tetiana Bondarenko

    2016-04-01

    Full Text Available This article presents a technology for using learners' own mobile devices to test educational achievements, based on the BYOD model. The proposed technology builds on Google cloud services and provides comprehensive support for the testing system: creating appropriate forms, storing the results in cloud storage, processing test results and managing the testing system through Google Calendar. A number of software products based on cloud technologies that allow the BYOD model to be used for testing educational achievement are described, and their strengths and weaknesses are identified. The article also describes the stages of the process of testing students' academic achievements on the basis of Google search services with the BYOD model. The proposed approaches extend the space and time of testing, make the test procedure more flexible and systematic, and add elements of a computer game to the testing procedure. The BYOD model opens up broad prospects for implementing ICT in all forms of the learning process, and particularly in the testing of educational achievement, given the limited computing resources in education.

  13. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  14. Crash test for groundwater recharge models: The effects of model complexity and calibration period on groundwater recharge predictions

    Science.gov (United States)

    Moeck, Christian; Von Freyberg, Jana; Schrimer, Maria

    2016-04-01

    An important question in recharge impact studies is how model choice, structure and calibration period affect recharge predictions. It is still unclear if a certain model type or structure is less affected by running the model on time periods with different hydrological conditions compared to the calibration period. This aspect, however, is crucial to ensure reliable predictions of groundwater recharge. In this study, we quantify and compare the effect of groundwater recharge model choice, model parametrization and calibration period in a systematic way. This analysis was possible thanks to a unique data set from a large-scale lysimeter in a pre-alpine catchment where daily long-term recharge rates are available. More specifically, the following issues are addressed: We systematically evaluate how the choice of hydrological models influences predictions of recharge. We assess how different parameterizations of models due to parameter non-identifiability affect predictions of recharge by applying a Monte Carlo approach. We systematically assess how the choice of calibration periods influences predictions of recharge within a differential split sample test focusing on the model performance under extreme climatic and hydrological conditions. Results indicate that all applied models (simple lumped to complex physically based models) were able to simulate the observed recharge rates for five different calibration periods. However, there was a marked impact of the calibration period when the complete 20 years validation period was simulated. Both, seasonal and annual differences between simulated and observed daily recharge rates occurred when the hydrological conditions were different to the calibration period. These differences were, however, less distinct for the physically based models, whereas the simpler models over- or underestimate the observed recharge depending on the considered season. It is, however, possible to reduce the differences for the simple models by
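
    A toy version of the differential split-sample test described here, using an invented one-parameter recharge model and the Nash-Sutcliffe efficiency as the score (everything below is illustrative and not taken from the authors' models):

      import numpy as np

      rng = np.random.default_rng(0)

      def nse(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the observed mean."""
          return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def simulate(coeff, precip):
          """Toy recharge model: a fixed fraction of precipitation becomes recharge."""
          return coeff * precip

      # Synthetic daily precipitation and 'observed' recharge: a wet year followed by a dry year
      precip = np.concatenate([rng.gamma(2.0, 4.0, 365), rng.gamma(2.0, 1.5, 365)])
      observed = 0.3 * precip + rng.normal(0.0, 0.5, precip.size)
      wet = np.arange(precip.size) < 365

      # Differential split-sample test: calibrate on one climate, validate on the contrasting one
      for label, cal in [("calibrate wet / validate dry", wet), ("calibrate dry / validate wet", ~wet)]:
          grid = np.linspace(0.05, 0.6, 200)   # simple grid-search calibration
          best = grid[np.argmax([nse(observed[cal], simulate(c, precip[cal])) for c in grid])]
          print(label, "NSE =", round(nse(observed[~cal], simulate(best, precip[~cal])), 3))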

  15. Do negative screening test results cause false reassurance? A systematic review.

    Science.gov (United States)

    Cooper, Grace C; Harvie, Michelle N; French, David P

    2017-11-01

    It has been suggested that receiving a negative screening test result may cause false reassurance or have a 'certificate of health effect'. False reassurance in those receiving a negative screening test result may result in them wrongly believing themselves to be at lower risk of the disease, and consequently less likely to engage in health-related behaviours that would lower their risk. The present systematic review aimed to identify the evidence regarding false reassurance effects due to negative screening test results in adults (over 18 years) screened for the presence of a disease or its precursors, where disease or precursors are linked to lifestyle behaviours. MEDLINE and PsycINFO were searched for trials that compared a group who had received negative screening results to an unscreened control group. The following outcomes were considered as markers of false reassurance: perceived risk of disease; anxiety and worry about disease; health-related behaviours or intention to change health-related behaviours (i.e., smoking, diet, physical activity, and alcohol consumption); self-rated health status. Nine unique studies were identified, reporting 55 measures in relation to the outcomes considered. Outcomes were measured at various time points from immediately following screening to up to 11 years after screening. Despite considerable variation in outcome measures used and timing of measurements, effect sizes for comparisons between participants who received negative screening test results and control participants were typically small with few statistically significant differences. There was evidence of high risk of bias, and measures of behaviours employed were often not valid. The limited evidence base provided little evidence of false reassurance following a negative screening test results on any of four outcomes examined. False reassurance should not be considered a significant harm of screening, but further research is warranted. Statement of contribution

  16. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification

    Science.gov (United States)

    Sager, Jennifer E.; Yu, Jingjing; Ragueneau-Majlessi, Isabelle

    2015-01-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms “PBPK” and “physiologically based pharmacokinetic model” to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines. PMID:26296709
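
    As a point of reference for what these models compute, a deliberately minimal one-compartment oral-dosing sketch (a full PBPK model replaces this with organ-level compartments and physiological flows); all parameter values are invented:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Invented parameters: absorption rate (1/h), clearance (L/h), volume of distribution (L), oral dose (mg)
      ka, cl, vd, dose = 1.0, 5.0, 40.0, 100.0

      def pk(t, y):
          """y[0]: drug amount in the gut (mg); y[1]: plasma concentration (mg/L)."""
          gut, conc = y
          return [-ka * gut,                     # first-order absorption out of the gut
                  (ka * gut - cl * conc) / vd]   # input minus clearance, per litre of distribution volume

      sol = solve_ivp(pk, (0, 24), [dose, 0.0], t_eval=np.linspace(0, 24, 97))
      peak = sol.y[1].argmax()
      print("Cmax (mg/L):", round(sol.y[1][peak], 2), "at t (h):", round(sol.t[peak], 2))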

  17. Models of care in nursing: a systematic review.

    Science.gov (United States)

    Fernandez, Ritin; Johnson, Maree; Tran, Duong Thuy; Miranda, Charmaine

    2012-12-01

    This review investigated the effect of the various models of nursing care delivery using the diverse levels of nurses on patient and nursing outcomes. All published studies that investigated patient and nursing outcomes were considered. Studies were included if the nursing delivery models only included nurses with varying skill levels. A literature search was performed using the following databases: Medline (1985-2011), CINAHL (1985-2011), EMBASE (1985 to current) and the Cochrane Controlled Studies Register (Issue 3, 2011 of Cochrane Library). In addition, the reference lists of relevant studies and conference proceedings were also scrutinised. Two reviewers independently assessed the eligibility of the studies for inclusion in the review, the methodological quality and extracted details of eligible studies. Data were analysed using the RevMan software (Nordic Cochrane Centre, Copenhagen, Denmark). Fourteen studies were included in this review. The results reveal that implementation of the team nursing model of care resulted in significantly decreased incidence of medication errors and adverse intravenous outcomes, as well as lower pain scores among patients; however, there was no effect of this model of care on the incidence of falls. Wards that used a hybrid model demonstrated significant improvement in quality of patient care, but no difference in incidence of pressure areas or infection rates. There were no significant differences in nursing outcomes relating to role clarity, job satisfaction and nurse absenteeism rates between any of the models of care. Based on the available evidence, a predominance of team nursing within the comparisons is suggestive of its popularity. Patient outcomes, nurse satisfaction, absenteeism and role clarity/confusion did not differ across model comparisons. Little benefit was found within primary nursing comparisons and the cost effectiveness of team nursing over other models remains debatable. Nonetheless, team nursing does

  18. A Systematic Modelling Framework for Phase Transfer Catalyst Systems

    DEFF Research Database (Denmark)

    Anantpinijwatna, Amata; Sales-Cruz, Mauricio; Hyung Kim, Sun

    2016-01-01

    ... in an aqueous phase. These reacting systems are receiving increased attention as novel organic synthesis options due to their flexible operation, higher product yields, and ability to avoid hazardous or expensive solvents. Major considerations in the design and analysis of PTC systems are physical and chemical ... equilibria, as well as kinetic mechanisms and rates. This paper presents a modelling framework for design and analysis of PTC systems that requires a minimum amount of experimental data to develop and employ the necessary thermodynamic and reaction models and embeds them into a reactor model for simulation ... The application of the framework is made to two cases in order to highlight the performance and issues of activity coefficient models for predicting design and operation and the effects when different organic solvents are employed ...

  19. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering.""-Dev G.

  20. Comparison of a full systematic review versus rapid review approaches to assess a newborn screening test for tyrosinemia type 1.

    Science.gov (United States)

    Taylor-Phillips, Sian; Geppert, Julia; Stinton, Chris; Freeman, Karoline; Johnson, Samantha; Fraser, Hannah; Sutcliffe, Paul; Clarke, Aileen

    2017-12-01

    Rapid reviews are increasingly used to replace/complement systematic reviews to support evidence-based decision-making. Little is known about how this expedited process affects results. To assess differences between rapid and systematic review approaches for a case study of test accuracy of succinylacetone for detecting tyrosinemia type 1. Two reviewers conducted an "enhanced" rapid review then a systematic review. The enhanced rapid review involved narrower searches, a single reviewer checking 20% of titles/abstracts and data extraction, and quality assessment using an unadjusted QUADAS-2. Two reviewers performed the systematic review with a tailored QUADAS-2. Post hoc analysis examined rapid reviewing with a single reviewer (basic rapid review). Ten papers were included. Basic rapid reviews would have missed 1 or 4 of these (dependent on which reviewer). Enhanced rapid and systematic reviews identified all 10 papers; one paper was only identified in the rapid review through reference checking. Two thousand one hundred seventy-six fewer title/abstracts and 129 fewer full texts were screened during the enhanced rapid review than the systematic review. The unadjusted QUADAS-2 generated more "unclear" ratings than the adjusted QUADAS-2 [29/70 (41.4%) versus 16/70 (22.9%)], and fewer "high" ratings [22/70 (31.4%) versus 42/70 (60.0%)]. Basic rapid reviews contained important inaccuracies in data extraction, which were detected by a second reviewer in the enhanced rapid and systematic reviews. Enhanced rapid reviews with 20% checking by a second reviewer may be an appropriate tool for policymakers to expeditiously assess evidence. Basic rapid reviews (single reviewer) have higher risks of important inaccuracies and omissions. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Assessing the LWR codes capability to address SFR BDBAs: Modeling of the ABCOVE tests

    International Nuclear Information System (INIS)

    Garcia, M.; Herranz, L. E.

    2012-01-01

    The present paper is aimed at assessing the current capability of LWR codes to model aerosol transport within an SFR containment under BDBA conditions. Through a systematic application of the ASTEC and MELCOR codes to relevant ABCOVE tests, insights have been gained into the drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that differences in boundary conditions between LWR and SFR containments under BDBA can be accommodated to some extent.

  2. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.
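
    A toy scalar example of the central point, namely that a systematic forecast bias (for example from a model calibrated to pre-change land cover) propagates into the analysis produced by state-estimation data assimilation; all numbers are arbitrary:

      def kalman_update(forecast, f_var, obs, o_var):
          """Scalar Kalman analysis step: weight forecast and observation by their precisions."""
          gain = f_var / (f_var + o_var)
          return forecast + gain * (obs - forecast)

      truth = 10.0
      obs = truth + 0.3                      # a slightly noisy observation
      for bias in (0.0, 2.0):                # unbiased model vs. model biased by land cover change
          forecast = truth + bias
          analysis = kalman_update(forecast, f_var=1.0, obs=obs, o_var=1.0)
          print(f"model bias {bias:+.1f} -> analysis error {analysis - truth:+.2f}")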

  3. POC CD4 Testing Improves Linkage to HIV Care and Timeliness of ART Initiation in a Public Health Approach: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Lara Vojnov

    Full Text Available CD4 cell count is an important test in HIV programs for baseline risk assessment, monitoring of ART where viral load is not available, and, in many settings, antiretroviral therapy (ART) initiation decisions. However, access to CD4 testing is limited, in part due to the centralized conventional laboratory network. Point of care (POC) CD4 testing has the potential to address some of the challenges of centralized CD4 testing and delays in delivery of timely testing and ART initiation. We conducted a systematic review and meta-analysis to identify the extent to which POC improves linkages to HIV care and timeliness of ART initiation. We searched two databases and four conference sites between January 2005 and April 2015 for studies reporting test turnaround times, proportion of results returned, and retention associated with the use of point-of-care CD4. Random effects models were used to estimate pooled risk ratios, pooled proportions, and 95% confidence intervals. We identified 30 eligible studies, most of which were completed in Africa. Test turnaround times were reduced with the use of POC CD4. The time from HIV diagnosis to CD4 test was reduced from 10.5 days with conventional laboratory-based testing to 0.1 days with POC CD4 testing. Retention along several steps of the treatment initiation cascade was significantly higher with POC CD4 testing, notably from HIV testing to CD4 testing, receipt of results, and pre-CD4 test retention (all p<0.001). Furthermore, retention between CD4 testing and ART initiation increased with POC CD4 testing compared to conventional laboratory-based testing (p = 0.01). We also carried out a non-systematic review of the literature observing that POC CD4 increased the projected life expectancy, was cost-effective, and acceptable. POC CD4 technologies reduce the time and increase patient retention along the testing and treatment cascade compared to conventional laboratory-based testing. POC CD4 is, therefore, a useful tool

  4. Bayesian Test of Significance for Conditional Independence: The Multinomial Model

    Directory of Open Access Journals (Sweden)

    Pablo de Morais Andrade

    2014-03-01

    Full Text Available Conditional independence tests have received special attention lately in machine learning and computational intelligence related literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for discrete datasets. The full Bayesian significance test is a powerful Bayesian test for precise hypotheses, as an alternative to the frequentist's significance tests (characterized by the calculation of the p-value).
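
    The paper's contribution is the Full Bayesian Significance Test itself, which is not reproduced here; for contrast, a frequentist stratified G-squared test of "X independent of Y given Z" on an invented three-way table can be sketched as follows:

      import numpy as np
      from scipy.stats import chi2, chi2_contingency

      # Invented 2x2 tables of (X, Y) counts, one per stratum of the conditioning variable Z
      strata = [np.array([[30, 10], [12, 28]]),
                np.array([[22, 18], [20, 20]])]

      stat, dof = 0.0, 0
      for table in strata:
          g2, _, df, _ = chi2_contingency(table, correction=False, lambda_="log-likelihood")
          stat += g2     # stratum-specific G-squared statistics add up
          dof += df

      p_value = chi2.sf(stat, dof)
      print(f"G2 = {stat:.2f}, df = {dof}, p = {p_value:.3f}")   # small p: reject conditional independence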

  5. Primary care models for treating opioid use disorders: What actually works? A systematic review.

    Directory of Open Access Journals (Sweden)

    Pooja Lagisetty

    Full Text Available Primary care-based models for Medication-Assisted Treatment (MAT) have been shown to reduce mortality for Opioid Use Disorder (OUD) and have equivalent efficacy to MAT in specialty substance treatment facilities. The objective of this study is to systematically analyze current evidence-based, primary care OUD MAT interventions and identify program structures and processes associated with improved patient outcomes in order to guide future policy and implementation in primary care settings. PubMed, EMBASE, CINAHL, and PsychInfo. We included randomized controlled or quasi experimental trials and observational studies evaluating OUD treatment in primary care settings treating adult patient populations and assessed structural domains using an established systems engineering framework. We included 35 interventions (10 RCTs and 25 quasi-experimental interventions) that all tested MAT, buprenorphine or methadone, in primary care settings across 8 countries. Most included interventions used joint multi-disciplinary (specialty addiction services combined with primary care) and coordinated care by physician and non-physician provider delivery models to provide MAT. Despite large variability in reported patient outcomes, processes, and tasks/tools used, similar key design factors arose among successful programs including integrated clinical teams with support staff who were often advanced practice clinicians (nurses and pharmacists) as clinical care managers, incorporating patient "agreements," and using home inductions to make treatment more convenient for patients and providers. The findings suggest that multidisciplinary and coordinated care delivery models are an effective strategy to implement OUD treatment and increase MAT access in primary care, but research directly comparing specific structures and processes of care models is still needed.

  6. Modeling Information Accumulation in Psychological Tests Using Item Response Times

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2015-01-01

    In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…

  7. Diagnostic test systematic reviews: bibliographic search filters ("Clinical Queries") for diagnostic accuracy studies perform well.

    Science.gov (United States)

    Kastner, Monika; Wilczynski, Nancy L; McKibbon, Ann K; Garg, Amit X; Haynes, R Brian

    2009-09-01

    Systematic reviews of health care topics are valuable summaries of all pertinent studies on focused questions. However, finding all relevant primary studies for systematic reviews remains challenging. To determine the performance of the Clinical Queries sensitive search filter for diagnostic accuracy studies for retrieving studies for systematic reviews. We compared the yield of the sensitive Clinical Queries diagnosis search filter for MEDLINE and EMBASE to retrieve studies in diagnostic accuracy systematic reviews reported in ACP Journal Club in 2006. Twelve of 22 diagnostic accuracy reviews (452 included studies) met the inclusion criteria. After excluding 11 studies not in MEDLINE or EMBASE, 95% of articles (417 of 441) were captured by the sensitive Clinical Queries diagnosis search filter (MEDLINE and EMBASE combined). Of 24 studies not retrieved by the filter, 22 were not diagnostic accuracy studies. Reanalysis of the Clinical Queries filter without these 22 nondiagnosis articles increased its performance to 99% (417 of 419). We found no substantive impact of the two articles missed by the Clinical Queries filter on the conclusions of the systematic reviews in which they were cited. The sensitive Clinical Queries diagnostic search filter captured 99% of articles and 100% of substantive articles indexed in MEDLINE and EMBASE in diagnostic accuracy systematic reviews.
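
    The reported percentages are simple retrieval proportions; spelled out (illustrative only):

      retrieved, indexed, non_diagnosis = 417, 441, 22
      print(f"{retrieved / indexed:.1%} of indexed included studies retrieved")                            # about 94.6%, reported as 95%
      print(f"{retrieved / (indexed - non_diagnosis):.1%} after excluding the 22 non-diagnosis articles")  # about 99.5%, reported as 99%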

  8. Business process maturity models : A systematic literature review

    NARCIS (Netherlands)

    Tarhan, Ayca; Turetken, Oktay; Reijers, Hajo A.

    2016-01-01

    Context The number of maturity models proposed in the area of Business Process Management (BPM) has increased considerably in the last decade. However, there are a number of challenges, such as the limited empirical studies on their validation and a limited extent of actionable properties of these

  9. Putting hydrological modelling practice to the test

    NARCIS (Netherlands)

    Melsen, Lieke Anna

    2017-01-01

    Six steps can be distinguished in the process of hydrological modelling: the perceptual model (deciding on the processes), the conceptual model (deciding on the equations), the procedural model (get the code to run on a computer), calibration (identify the parameters), evaluation (confronting

  10. The Use of Decision-Analytic Models in Atopic Eczema: A Systematic Review and Critical Appraisal.

    Science.gov (United States)

    McManus, Emma; Sach, Tracey; Levell, Nick

    2018-01-01

    The objective of this systematic review was to identify and assess the quality of published economic decision-analytic models within atopic eczema against best practice guidelines, with the intention of informing future decision-analytic models within this condition. A systematic search of the following online databases was performed: MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Central Register of Controlled Trials, Database of Abstracts of Reviews of Effects, Cochrane Database of Systematic Reviews, NHS Economic Evaluation Database, EconLit, Scopus, Health Technology Assessment, Cost-Effectiveness Analysis Registry and Web of Science. Papers were eligible for inclusion if they described a decision-analytic model evaluating both the costs and benefits associated with an intervention or prevention for atopic eczema. Data were extracted using a standardised form by two independent reviewers, whilst quality was assessed using the model-specific Philips criteria. Twenty-four models were identified, evaluating either preventions (n = 12) or interventions (n = 12): 14 reported using a Markov modelling approach, four utilised decision trees and one a discrete event simulation, whilst five did not specify the approach. The majority, 22 studies, reported that the intervention was dominant or cost effective, given the assumptions and analytical perspective taken. Notably, the models tended to be short-term (16 used a time horizon of ≤1 year), often providing little justification for the limited time horizon chosen. The methodological and reporting quality of the studies was generally weak, with only seven studies fulfilling more than 50% of their applicable Philips criteria. This is the first systematic review of decision models in eczema. Whilst the majority of models reported favourable outcomes in terms of the cost effectiveness of the new intervention, the usefulness of these findings for decision-making is
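
    A skeletal Markov cohort model of the sort these studies build (state vector, transition matrix, per-cycle costs and utilities accumulated over a one-year horizon); the three states and every number below are invented for illustration, not taken from any reviewed model:

      import numpy as np

      states = ["controlled", "flare", "dead"]
      # Invented weekly transition probabilities (each row sums to 1)
      P = np.array([[0.90, 0.09, 0.01],
                    [0.40, 0.59, 0.01],
                    [0.00, 0.00, 1.00]])
      cost = np.array([5.0, 60.0, 0.0])        # cost per state per cycle
      qaly = np.array([0.95, 0.70, 0.0]) / 52  # utility per weekly cycle

      cohort = np.array([1.0, 0.0, 0.0])       # whole cohort starts in the controlled state
      total_cost = total_qaly = 0.0
      for _ in range(52):                      # one-year horizon, as in many of the reviewed models
          cohort = cohort @ P
          total_cost += cohort @ cost
          total_qaly += cohort @ qaly

      print(f"expected cost: {total_cost:.0f}, expected QALYs: {total_qaly:.3f}")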

  11. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust to violations of the homogeneity-of-variance and no-serial-correlation assumptions of the classical F-test. Under certain null and ...
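
    The classical benchmark mentioned here is the overall F-test that all slope parameters are simultaneously zero; a quick sketch on synthetic data with statsmodels (the robust alternative the paper develops is not reproduced):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 3))
      y = 1.0 + X @ np.array([0.5, 0.0, -0.3]) + rng.normal(scale=1.0, size=100)

      fit = sm.OLS(y, sm.add_constant(X)).fit()
      print("F =", round(fit.fvalue, 2), "p =", round(fit.f_pvalue, 4))   # H0: all slope parameters are zero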

  12. Systematic design of a sawtooth period feedback controller using a Kadomtsev-Porcelli sawtooth model

    NARCIS (Netherlands)

    Witvoet, G.; Baar, M.R. de; Westerhof, E.; Steinbuch, M.; Doelman, N.J.

    2011-01-01

    A systematic methodology for structured design of feedback controllers for the sawtooth period is presented, based on dedicated identification of the sawtooth dynamics. Therefore, a combined Kadomtsev-Porcelli model of a sawtoothing plasma actuated by an electron cyclotron current drive system has

  13. The Psychology Department Model Advisement Procedure: A Comprehensive, Systematic Approach to Career Development Advisement

    Science.gov (United States)

    Howell-Carter, Marya; Nieman-Gonder, Jennifer; Pellegrino, Jennifer; Catapano, Brittani; Hutzel, Kimberly

    2016-01-01

    The MAP (Model Advisement Procedure) is a comprehensive, systematic approach to developmental student advisement. The MAP was implemented to improve advisement consistency, improve student preparation for internships/senior projects, increase career exploration, reduce career uncertainty, and, ultimately, improve student satisfaction with the…

  14. Systematic literature review and meta-analysis of diagnostic test accuracy in Alzheimer's disease and other dementia using autopsy as standard of truth.

    Science.gov (United States)

    Cure, Sandrine; Abrams, Keith; Belger, Mark; Dell'agnello, Grazzia; Happich, Michael

    2014-01-01

    Early diagnosis of Alzheimer's disease (AD) is crucial to implement the latest treatment strategies and management of AD symptoms. Diagnostic procedures play a major role in this detection process but evidence on their respective accuracy is still limited. To conduct a systematic literature review on the sensitivity and specificity of different test modalities to identify AD patients and perform meta-analyses on the test accuracy values of studies focusing on autopsy-confirmation as the standard of truth. The systematic review identified all English papers published between 1984 and 2011 on diagnostic imaging tests and cerebrospinal fluid biomarkers including results on the newest technologies currently investigated in this area. Meta-analyses using bivariate fixed and random-effect models and a hierarchical summary receiver operating characteristic (HSROC) random-effect model were applied. Out of the 1,189 records, 20 publications were identified to report the accuracy of diagnostic tests in distinguishing autopsy-confirmed AD patients from other dementia types and healthy controls. Looking at all tests and comparator populations together, sensitivity was calculated at 85.4% (95% confidence interval [CI]: 80.9%-90.0%) and specificity at 77.7% (95% CI: 70.2%-85.1%). The area under the HSROC curve was 0.88. Sensitivity and specificity values were higher for imaging procedures, and slightly lower for CSF biomarkers. Test-specific random-effect models could not be calculated due to the small number of studies. The review and meta-analysis point to a slight advantage of imaging procedures in correctly detecting AD patients but also highlight the limited evidence on autopsy-confirmations and heterogeneity in study designs.
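
    A much-simplified illustration of how summary estimates like 85.4% sensitivity arise: pool logit-transformed per-study sensitivities with inverse-variance (fixed-effect) weights. This ignores the sensitivity-specificity correlation that the bivariate and HSROC models account for, and the counts are invented:

      import math

      studies = [(45, 5), (80, 20), (30, 8)]           # invented (true positives, false negatives) per study

      logits, weights = [], []
      for tp, fn in studies:
          tp, fn = tp + 0.5, fn + 0.5                  # continuity correction
          sens = tp / (tp + fn)
          logits.append(math.log(sens / (1 - sens)))
          weights.append(1 / (1 / tp + 1 / fn))        # inverse variance of the logit

      pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
      se = math.sqrt(1 / sum(weights))
      inv_logit = lambda x: 1 / (1 + math.exp(-x))
      print(f"pooled sensitivity {inv_logit(pooled):.1%} "
            f"(95% CI {inv_logit(pooled - 1.96 * se):.1%}-{inv_logit(pooled + 1.96 * se):.1%})")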

  15. Topic Modeling in Sentiment Analysis: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Toqir Ahmad Rana

    2016-06-01

    Full Text Available With the expansion and acceptance of the World Wide Web, sentiment analysis has become a progressively popular research area in information retrieval and web data analysis. Due to the huge amount of user-generated content on blogs, forums, social media, etc., sentiment analysis has attracted researchers in both academia and industry, since it deals with the extraction of opinions and sentiments. In this paper, we present a review of topic modeling, especially LDA-based techniques, in sentiment analysis. We present a detailed analysis of diverse approaches and techniques and compare the accuracy of the different systems. The results of the different approaches have been summarized, analyzed and presented in a structured fashion. This is an effort to explore different topic modeling techniques in the context of sentiment analysis and to provide a comprehensive comparison among them.
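
    A minimal LDA example of the kind surveyed, using scikit-learn on a four-document toy corpus (the corpus, topic count and term cut-off are arbitrary):

      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer

      docs = [
          "the battery life of this phone is great",
          "terrible battery drains in an hour",
          "the hotel room was clean and the staff friendly",
          "rude staff and a dirty room ruined the stay",
      ]

      vec = CountVectorizer(stop_words="english")
      counts = vec.fit_transform(docs)

      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
      terms = vec.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = topic.argsort()[-4:][::-1]             # four highest-weight terms per topic
          print(f"topic {k}:", ", ".join(terms[i] for i in top))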

  16. Primary care clinicians' attitudes towards point-of-care blood testing: a systematic review of qualitative studies.

    Science.gov (United States)

    Jones, Caroline H D; Howick, Jeremy; Roberts, Nia W; Price, Christopher P; Heneghan, Carl; Plüddemann, Annette; Thompson, Matthew

    2013-08-14

    Point-of-care blood tests are becoming increasingly available and could replace current venipuncture and laboratory testing for many commonly used tests. However, at present very few have been implemented in most primary care settings. Understanding the attitudes of primary care clinicians towards these tests may help to identify the barriers and facilitators to their wider adoption. We aimed to systematically review qualitative studies of primary care clinicians' attitudes to point-of-care blood tests. We systematically searched Medline, Embase, ISI Web of Knowledge, PsycINFO and CINAHL for qualitative studies of primary care clinicians' attitudes towards point-of-care blood tests in high income countries. We conducted a thematic synthesis of included studies. Our search identified seven studies, including around two hundred participants from Europe and Australia. The synthesis generated three main themes: the impact of point-of-care testing on decision-making, diagnosis and treatment; impact on clinical practice more broadly; and impact on patient-clinician relationships and perceived patient experience. Primary care clinicians believed point-of-care testing improved diagnostic certainty, targeting of treatment, self-management of chronic conditions, and clinician-patient communication and relationships. There were concerns about test accuracy, over-reliance on tests, undermining of clinical skills, cost, and limited usefulness. We identified several perceived benefits and barriers regarding point-of-care tests in primary care. These imply that if point-of-care tests are to become more widely adopted, primary care clinicians require evidence of their accuracy, rigorous testing of the impact of introduction on patient pathways and clinical practice, and consideration of test funding.

  17. Method matters: systematic effects of testing procedure on visual working memory sensitivity.

    Science.gov (United States)

    Makovski, Tal; Watson, Leah M; Koutstaal, Wilma; Jiang, Yuhong V

    2010-11-01

    Visual working memory (WM) is traditionally considered a robust form of visual representation that survives changes in object motion, observer's position, and other visual transients. This article presents data that are inconsistent with the traditional view. We show that memory sensitivity is dramatically influenced by small variations in the testing procedure, supporting the idea that representations in visual WM are susceptible to interference from testing. In the study, participants were shown an array of colors to remember. After a short retention interval, memory for one of the items was tested with either a same-different task or a 2-alternative-forced-choice (2AFC) task. Memory sensitivity was much lower in the 2AFC task than in the same-different task. This difference was found regardless of encoding similarity or of whether visual WM required a fine or coarse memory resolution. The 2AFC disadvantage was reduced when participants were informed shortly before testing which item would be probed. The 2AFC disadvantage diminished in perceptual tasks and was not found in tasks probing visual long-term memory. These results support memory models that acknowledge the labile nature of visual WM and have implications for the format of visual WM and its assessment. (c) 2010 APA, all rights reserved
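
    The sensitivity comparison rests on standard signal detection conversions; a generic sketch (not the paper's analysis, with invented rates):

      from scipy.stats import norm

      def dprime_yes_no(hit_rate, fa_rate):
          """Sensitivity from hit and false-alarm rates (equal-variance yes/no model)."""
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      def dprime_2afc(prop_correct):
          """Sensitivity inferred from two-alternative forced-choice proportion correct."""
          return 2 ** 0.5 * norm.ppf(prop_correct)

      print(round(dprime_yes_no(0.80, 0.30), 2))   # about 1.37
      print(round(dprime_2afc(0.75), 2))           # about 0.95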

  18. Systematic Methods and Tools for Computer Aided Modelling

    OpenAIRE

    Fedorova, Marina; Gani, Rafiqul; Sin, Gürkan

    2015-01-01

    Models play important roles in the design and analysis of chemical and biochemical products and of the processes that manufacture them. Model-based methods and tools have the potential to reduce the number of experiments, which can be expensive and time-consuming, and to select the candidates on which the experimental effort should be focused. In this project, a general modelling framework was developed for systematic model building by means of model templates. The modelling framework supports a n...

  19. Non-invasive prenatal testing for aneuploidy: a systematic review of Internet advertising to potential users by commercial companies and private health providers.

    OpenAIRE

    Skirton, H; Goldsmith, L; Jackson, L; Lewis, C; Chitty, LS

    2015-01-01

    BACKGROUND: The development of non-invasive prenatal testing has increased accessibility of fetal testing. Companies are now advertising prenatal testing for aneuploidy via the Internet. OBJECTIVES: The aim of this systematic review of websites advertising non-invasive prenatal testing for aneuploidy was to explore the nature of the information being provided to potential users. METHODS: We systematically searched two Internet search engines for relevant websites using the following terms: 'p...

  20. Understanding in vivo modelling of depression in non-human animals: a systematic review protocol

    DEFF Research Database (Denmark)

    Bannach-Brown, Alexandra; Liao, Jing; Wegener, Gregers

    2016-01-01

    The aim of this study is to systematically collect all published preclinical non-human animal literature on depression to provide an unbiased overview of existing knowledge. A systematic search will be carried out in PubMed and Embase. Studies will be included if they use non-human animal experimental model(s) to induce or mimic a depressive-like phenotype. Data that will be extracted include the model or method of induction; species and gender of the animals used; the behavioural, anatomical, electrophysiological, neurochemical or genetic outcome measure(s) used; risk of bias ... meta-analysis of the preclinical studies modelling depression-like behaviours and phenotypes in animals.

  1. A Systematic Approach to Modelling Change Processes in Construction Projects

    Directory of Open Access Journals (Sweden)

    Ibrahim Motawa

    2012-11-01

    Modelling change processes within construction projects is essential to implement changes efficiently. Incomplete information on the project variables at the early stages of projects leads to inadequate knowledge of future states and imprecision arising from ambiguity in project parameters. This lack of knowledge is considered among the main sources of changes in construction. Change identification and evaluation, in addition to predicting its impacts on project parameters, can help in minimising the disruptive effects of changes. This paper presents a systematic approach to modelling the change process within construction projects that helps improve change identification and evaluation. The approach represents the key decisions required to implement changes. The requirements of an effective change process are presented first. The variables defined for efficient change assessment and diagnosis are then presented. Assessment of construction changes requires an analysis of the project characteristics that lead to change and also an analysis of the relationship between change causes and effects. The paper concludes that, at the early stages of a project, projects with a high likelihood of change occurrence should have a control mechanism over the project characteristics that have a high influence on the project. It also concludes, regarding the relationship between change causes and effects, that the multiple causes of change should be modelled in a way that enables the change effects to be evaluated more accurately. The proposed approach is the framework for tackling such conclusions and can be used for evaluating change cases depending on the available information at the early stages of construction projects.

  2. Genetics of borderline personality disorder: systematic review and proposal of an integrative model.

    Science.gov (United States)

    Amad, Ali; Ramoz, Nicolas; Thomas, Pierre; Jardri, Renaud; Gorwood, Philip

    2014-03-01

    Borderline personality disorder (BPD) is one of the most common mental disorders and is characterized by a pervasive pattern of emotional lability, impulsivity, interpersonal difficulties, identity disturbances, and disturbed cognition. Here, we performed a systematic review of the literature concerning the genetics of BPD, including familial and twin studies, association studies, and gene-environment interaction studies. Moreover, meta-analyses were performed when at least two case-control studies testing the same polymorphism were available. For each gene variant, a pooled odds ratio (OR) was calculated using fixed or random effects models. Familial and twin studies largely support the potential role of a genetic vulnerability at the root of BPD, with an estimated heritability of approximately 40%. Moreover, there is evidence for both gene-environment interactions and correlations. However, association studies for BPD are sparse, making it difficult to draw clear conclusions. According to our meta-analysis, no significant associations were found for the serotonin transporter gene, the tryptophan hydroxylase 1 gene, or the serotonin 1B receptor gene. We hypothesize that such a discrepancy (negative association studies but high heritability of the disorder) could be understandable through a paradigm shift, in which "plasticity" genes (rather than "vulnerability" genes) would be involved. Such a framework postulates a balance between positive and negative events, which interact with plasticity genes in the genesis of BPD. Copyright © 2014 Elsevier Ltd. All rights reserved.
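
    As an illustration of the pooling step described here, the sketch below (with made-up case-control counts, not the reviewed data) combines per-study log odds ratios by inverse-variance weighting, in both a fixed-effect and a DerSimonian-Laird random-effects form. It is a generic meta-analysis sketch, not the authors' code.

```python
# Generic inverse-variance pooling of odds ratios (illustrative counts only).
import math

# Each study: (cases_exposed, cases_unexposed, controls_exposed, controls_unexposed)
studies = [(30, 70, 25, 75), (45, 55, 40, 60), (12, 38, 15, 35)]

log_or, var = [], []
for a, b, c, d in studies:
    log_or.append(math.log((a * d) / (b * c)))
    var.append(1 / a + 1 / b + 1 / c + 1 / d)          # Woolf variance of log OR

w_fixed = [1 / v for v in var]
pooled_fixed = sum(w * y for w, y in zip(w_fixed, log_or)) / sum(w_fixed)

# DerSimonian-Laird between-study variance tau^2
q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, log_or))
c_term = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(studies) - 1)) / c_term)

w_rand = [1 / (v + tau2) for v in var]
pooled_rand = sum(w * y for w, y in zip(w_rand, log_or)) / sum(w_rand)

print("fixed-effect pooled OR: ", math.exp(pooled_fixed))
print("random-effects pooled OR:", math.exp(pooled_rand))
```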

  3. TURBHO - Higher order turbulence modeling for industrial applications. Design document: Module Test Phase (MTP). Software engineering module: Testing; TURBHO. Turbulenzmodellierung hoeherer Ordnung fuer industrielle Anwendungen. Design document: Module Test Phase (MTP). Software engineering module: testing

    Energy Technology Data Exchange (ETDEWEB)

    Grotjans, H.

    1998-11-19

    In the current Software Engineering Module (SEM-4) new physical model implementations have been tested and additional complex test cases have been investigated with the available models. For all validation test cases it has been shown that the computed results are grid independent. This has been done by systematic grid refinement studies. No grid independence has been shown so far for the Aerospatiale-A airfoil, the draft tube flow, the transonic bump flow and the impinging jet flow. Most of the main objectives of the current SEM, cf. Chapter 1, are fulfilled. These are the verification of the alternative pressure-strain term (SSG model), the implementation of a swirl correction for the standard k-ε turbulence model and the assembling of additional test cases. However, few results are available so far for the industrial test cases. These have to be provided in the remaining time of this project. The implementation of the low-Reynolds model has not been completed in this SEM as the other topics were preferred for completion. In addition to the planned items, two models have been implemented and tested. These are the wall distance equation, which is considered an important part of a low-Reynolds model implementation, and the k-ω turbulence model. (orig.)
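
    For reference, the standard k-ε model mentioned above (before the swirl correction or the SSG pressure-strain alternative) is usually written as below; the constants are the common textbook values and may differ from those used in the project.

```latex
\nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
\frac{Dk}{Dt} = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon, \qquad
\frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
\frac{\partial \varepsilon}{\partial x_j}\right]
+ C_{\varepsilon 1}\frac{\varepsilon}{k} P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k},
```

    with the standard constants C_μ = 0.09, σ_k = 1.0, σ_ε = 1.3, C_ε1 = 1.44 and C_ε2 = 1.92.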

  4. Facilitators and barriers for HIV-testing in Zambia: A systematic review of multi-level factors.

    Directory of Open Access Journals (Sweden)

    Shan Qiao

    It was estimated that 1.2 million people were living with HIV/AIDS in Zambia by 2015. Zambia has developed and implemented diverse programs to reduce the prevalence in the country. HIV testing is a critical step in HIV treatment and prevention, especially among key populations. However, there has been no systematic review to date that traces the trend of HIV-testing studies in Zambia since the 1990s or synthesises the key factors associated with HIV-testing practices in the country. Therefore, this study conducted a systematic review, searching all English-language literature published prior to November 2016 in six electronic databases, and retrieved 32 articles that met our inclusion criteria. The results indicated that higher education was a common facilitator of HIV testing, while misconceptions about HIV testing and the fear of negative consequences were the major barriers to using the testing services. Other factors, such as demographic characteristics, marital dynamics, partner relationships, and relationships with the health care services, also greatly affected participants' decision making. The findings indicated that 1) individualized strategies and comprehensive services are needed for diverse key populations; 2) capacity building for healthcare providers is critical for effectively implementing the task-shifting strategy; 3) HIV testing services need to adapt to the social context of Zambia, where HIV-related stigma and discrimination are still persistent and overwhelming; and 4) family-based education and intervention should involve improving gender equity.

  5. Reliability of specific physical examination tests for the diagnosis of shoulder pathologies: a systematic review and meta-analysis.

    Science.gov (United States)

    Lange, Toni; Matthijs, Omer; Jain, Nitin B; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian

    2017-03-01

    Shoulder pain in the general population is common, and to identify the aetiology of shoulder pain, history, motion and muscle testing, and physical examination tests are usually performed. The aim of this systematic review was to summarise and evaluate intrarater and inter-rater reliability of physical examination tests in the diagnosis of shoulder pathologies. A comprehensive systematic literature search was conducted using MEDLINE, EMBASE, Allied and Complementary Medicine Database (AMED) and Physiotherapy Evidence Database (PEDro) through 20 March 2015. Methodological quality was assessed using the Quality Appraisal of Reliability Studies (QAREL) tool by 2 independent reviewers. The search strategy revealed 3259 articles, of which 18 finally met the inclusion criteria. These studies evaluated the reliability of 62 tests and test variations used as specific physical examination tests for the diagnosis of shoulder pathologies. Methodological quality ranged from 2 to 7 positive criteria of the 11 items of the QAREL tool. This review identified a lack of high-quality studies evaluating inter-rater as well as intrarater reliability of specific physical examination tests for the diagnosis of shoulder pathologies. In addition, reliability measures differed between included studies, hindering proper cross-study comparisons. PROSPERO CRD42014009018. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  6. Diagnostic validity of physical examination tests for common knee disorders: An overview of systematic reviews and meta-analysis.

    Science.gov (United States)

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Roy, Jean-Sébastien; Desmeules, François

    2017-01-01

    More evidence on the diagnostic validity of physical examination tests for knee disorders is needed to reduce reliance on frequently used and costly imaging tests. To conduct a systematic review of systematic reviews (SR) and meta-analyses (MA) evaluating the diagnostic validity of physical examination tests for knee disorders. A structured literature search was conducted in five databases until January 2016. Methodological quality was assessed using the AMSTAR. Seventeen reviews were included with mean AMSTAR score of 5.5 ± 2.3. Based on six SR, only the Lachman test for ACL injuries is diagnostically valid when individually performed (Likelihood ratio (LR+):10.2, LR-:0.2). Based on two SR, the Ottawa Knee Rule is a valid screening tool for knee fractures (LR-:0.05). Based on one SR, the EULAR criteria had a post-test probability of 99% for the diagnosis of knee osteoarthritis. Based on two SR, a complete physical examination performed by a trained health provider was found to be diagnostically valid for ACL, PCL and meniscal injuries as well as for cartilage lesions. When individually performed, common physical tests are rarely able to rule in or rule out a specific knee disorder, except the Lachman for ACL injuries. There is low-quality evidence concerning the validity of combining history elements and physical tests. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Facilitators and barriers for HIV-testing in Zambia: A systematic review of multi-level factors.

    Science.gov (United States)

    Qiao, Shan; Zhang, Yao; Li, Xiaoming; Menon, J Anitha

    2018-01-01

    It was estimated that 1.2 million people were living with HIV/AIDS in Zambia by 2015. Zambia has developed and implemented diverse programs to reduce the prevalence in the country. HIV testing is a critical step in HIV treatment and prevention, especially among key populations. However, there has been no systematic review to date that traces the trend of HIV-testing studies in Zambia since the 1990s or synthesises the key factors associated with HIV-testing practices in the country. Therefore, this study conducted a systematic review, searching all English-language literature published prior to November 2016 in six electronic databases, and retrieved 32 articles that met our inclusion criteria. The results indicated that higher education was a common facilitator of HIV testing, while misconceptions about HIV testing and the fear of negative consequences were the major barriers to using the testing services. Other factors, such as demographic characteristics, marital dynamics, partner relationships, and relationships with the health care services, also greatly affected participants' decision making. The findings indicated that 1) individualized strategies and comprehensive services are needed for diverse key populations; 2) capacity building for healthcare providers is critical for effectively implementing the task-shifting strategy; 3) HIV testing services need to adapt to the social context of Zambia, where HIV-related stigma and discrimination are still persistent and overwhelming; and 4) family-based education and intervention should involve improving gender equity.

  8. Effects of Photobiomodulation Therapy on Oxidative Stress in Muscle Injury Animal Models: A Systematic Review

    OpenAIRE

    dos Santos, Solange Almeida; Serra, Andrey Jorge; Stancker, Tatiane Garcia; Simões, Maíra Cecília Brandão; dos Santos Vieira, Marcia Ataíze; Leal-Junior, Ernesto Cesar; Prokic, Marko; Vasconsuelo, Andrea; Santos, Simone Silva; de Carvalho, Paulo de Tarso Camillo

    2017-01-01

    This systematic review was performed to identify the role of photobiomodulation therapy in experimental muscle injury models in which oxidative stress is induced. EMBASE, PubMed, and CINAHL were searched for studies published from January 2006 to January 2016 in the areas of laser and oxidative stress. Any animal model using photobiomodulation therapy to modulate oxidative stress was included in the analysis. Eight studies were selected from 68 original articles targeted on laser irradiation and oxi...

  9. Test Driven Development of Scientific Models

    Science.gov (United States)

    Clune, Thomas L.

    2014-01-01

    Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By their own testimony, many developers find TDD to be addictive after only a few days of exposure and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software as well as tools and implementation approaches that should address those challenges.
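
    As a concrete, purely hypothetical illustration of the test-first cycle described above, the pytest-style sketch below writes a failing test for a small numerical routine first and only then the minimal implementation that makes it pass. The function name, reference values and tolerances are invented for illustration and are not taken from the talk.

```python
# Hypothetical TDD illustration: the test is written first, the code after.
import math
import pytest

# --- step 1: the failing test, written before any implementation exists ------
def test_saturation_vapor_pressure_at_reference_points():
    # Illustrative reference points (hPa) used to drive the implementation.
    assert sat_vapor_pressure(0.0) == pytest.approx(6.11, rel=1e-2)
    assert sat_vapor_pressure(20.0) == pytest.approx(23.4, rel=1e-2)

# --- step 2: the minimal implementation that makes the test pass -------------
def sat_vapor_pressure(temp_c: float) -> float:
    """Magnus-type approximation of saturation vapour pressure in hPa."""
    return 6.11 * math.exp(17.62 * temp_c / (243.12 + temp_c))
```

    Running `pytest` on this file executes the test suite; in the TDD rhythm the test would first fail (red), the implementation would then be added (green), and the code refactored with the test kept passing.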

  10. Decision modelling of non-pharmacological interventions for individuals with dementia: a systematic review of methodologies

    DEFF Research Database (Denmark)

    Sopina, Liza; Sørensen, Jan

    2018-01-01

    Objectives: The main objective of this study is to conduct a systematic review to identify and discuss methodological issues surrounding decision modelling for economic evaluation of non-pharmacological interventions (NPIs) in dementia. Methods: A systematic search was conducted for published ... of challenging methodological issues were identified, including the use of the MMSE score as the main outcome measure, the limited number of strategies compared, restricted time horizons, and limited or dated data on dementia onset, progression and mortality. Only one of the three tertiary prevention studies explicitly ... impact of NPIs for dementia in future decision models. It is also important to account for the effects of pharmacological therapies alongside the NPIs in economic evaluations. Access to more localised and up-to-date data on dementia onset, progression and mortality is a priority for accurate prediction.

  11. Modelling of the spallation reaction: analysis and testing of nuclear models

    International Nuclear Information System (INIS)

    Toccoli, C.

    2000-01-01

    The spallation reaction is considered as a two-step process. First, a very quick stage (10⁻²²–10⁻²⁹ s) corresponding to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second, a slower stage (10⁻¹⁸–10⁻¹⁹ s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, ³He, ⁴He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; this simulation is based on the two-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with the results of calculation is presented, and a preliminary estimate of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excitation of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)
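
    For orientation, the Weisskopf-Ewing evaporation picture referred to above gives the emission rate of a particle b with kinetic energy ε from a nucleus at excitation E* schematically as below; this is the generic textbook form, and the exact expression implemented in HETC may differ in details.

```latex
\Gamma_b(\varepsilon)\,d\varepsilon \;\propto\; (2 s_b + 1)\, m_b\, \varepsilon\,
\sigma_{\mathrm{inv}}(\varepsilon)\,
\frac{\rho_f\!\left(E^* - B_b - \varepsilon\right)}{\rho_i\!\left(E^*\right)}\, d\varepsilon ,
```

    where s_b and m_b are the spin and mass of the emitted particle, B_b its separation energy, σ_inv the inverse (capture) cross-section, and ρ_i, ρ_f the level densities of the parent and daughter nuclei.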

  12. Validity and Reliability of Published Comprehensive Theory of Mind Tests for Normal Preschool Children: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Seyyede Zohreh Ziatabar Ahmadi

    2015-12-01

    Objective: Theory of mind (ToM), or mindreading, is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; and without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. Method: We searched MEDLINE (PubMed interface), Web of Science, ScienceDirect, PsycINFO, and also evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. Also, we manually studied the reference lists of all final searched articles and carried out a search of their references. Inclusion criteria were as follows: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children; and exclusion criteria were as follows: studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description of the structure, validity or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). Result: In primary searching, we found 1237 articles across the databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. Conclusion: There were a few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The defined ToM tests were different in populations, tasks, mode of presentations, scoring, mode of responses, times and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that the researchers and clinicians select the ToM tests according to their psychometric

  13. Validity and Reliability of Published Comprehensive Theory of Mind Tests for Normal Preschool Children: A Systematic Review.

    Science.gov (United States)

    Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan

    2015-09-01

    Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; and without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. We searched MEDLINE (PubMed interface), Web of Science, ScienceDirect, PsycINFO, and also evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. Also, we manually studied the reference lists of all final searched articles and carried out a search of their references. Inclusion criteria were as follows: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children; and exclusion criteria were as follows: studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description of the structure, validity or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). In primary searching, we found 1237 articles across the databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. There were a few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The defined ToM tests were different in populations, tasks, mode of presentations, scoring, mode of responses, times and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that the researchers and clinicians select the ToM tests according to their psychometric characteristics, validity and reliability.

  14. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  15. Scaling analysis in modeling transport and reaction processes a systematic approach to model building and the art of approximation

    CERN Document Server

    Krantz, William B

    2007-01-01

    This book is unique as the first effort to expound on the subject of systematic scaling analysis. Not written for a specific discipline, the book targets any reader interested in transport phenomena and reaction processes. The book is logically divided into chapters on the use of systematic scaling analysis in fluid dynamics, heat transfer, mass transfer, and reaction processes. An integrating chapter is included that considers more complex problems involving combined transport phenomena. Each chapter includes several problems that are explained in considerable detail. These are followed by several worked examples for which the general outline for the scaling is given. Each chapter also includes many practice problems. This book is based on recognizing the value of systematic scaling analysis as a pedagogical method for teaching transport and reaction processes and as a research tool for developing and solving models and in designing experiments. Thus, the book can serve as both a textbook and a reference book...
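
    A minimal worked example of the kind of scaling the book systematizes (a generic illustration, not an example taken from the book): nondimensionalizing transient one-dimensional diffusion exposes the characteristic time over which the process equilibrates.

```latex
\frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2},
\qquad x^* = \frac{x}{L},\;\; t^* = \frac{t}{\tau},\;\; c^* = \frac{c}{c_0}
\;\;\Longrightarrow\;\;
\frac{c_0}{\tau}\,\frac{\partial c^*}{\partial t^*}
  = \frac{D c_0}{L^2}\,\frac{\partial^2 c^*}{\partial x^{*2}} ,
```

    so requiring the two dimensionless groups to balance gives the diffusive time scale τ ~ L²/D, the kind of estimate that guides both model simplification and experiment design.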

  16. Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.

    Science.gov (United States)

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew

    2017-09-01

    Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and the key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. The articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on numbers of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved moderate quality rating on a modified AMSTAR (A Measurement Tool used to Assess systematic Reviews) checklist. All the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggest an increased interest in simulation modelling in healthcare.

  17. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    This paper develops a roundness measurement model with multiple systematic errors, which takes eccentricity, probe offset, the radius of the probe tip, and tilt error into account for roundness measurement of cylindrical components. The effects of these systematic errors and of the component radius on the roundness measurement are analysed. The proposed method is implemented on an instrument with a high-precision rotating spindle. The effectiveness of the proposed method is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
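
    The conventional limacon model that the paper extends can be illustrated with a small least-squares sketch: eccentricity appears as first-order harmonics of the measured radius, and removing the fitted limacon leaves the roundness profile. The data and numbers below are synthetic and the sketch only shows the traditional first-order separation, not the authors' multi-error formulation.

```python
# Conventional limacon (first-order) separation of eccentricity, illustrative only.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
# Synthetic measurement (um): mean radius + eccentricity + a small 5-lobe form error
r = 37000.0 + 8.0 * np.cos(theta) - 3.0 * np.sin(theta) + 0.5 * np.cos(5 * theta)

# Least-squares fit of r(theta) ~ R + a*cos(theta) + b*sin(theta)
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
(R, a, b), *_ = np.linalg.lstsq(A, r, rcond=None)

roundness_profile = r - A @ np.array([R, a, b])      # residual form error
print("mean radius (um):", R)
print("eccentricity (um):", np.hypot(a, b))
print("peak-to-valley roundness (um):", np.ptp(roundness_profile))
```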

  18. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.
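
    A simplified sketch of how a parametrized bias model of this kind is applied during sensor registration: each radar's slant-range, azimuth and altitude measurements are corrected with its estimated bias terms before tracks are fused. The structure, field names and values below are illustrative assumptions, not the paper's exact model.

```python
# Illustrative (not the paper's) bias-correction step for one SSR measurement.
from dataclasses import dataclass
import math

@dataclass
class RadarBiases:
    range_offset_m: float        # constant range (clock/delay) bias
    range_gain: float            # multiplicative range bias, ~1.0
    azimuth_offset_rad: float    # antenna-alignment bias
    altitude_offset_m: float     # atmosphere-related altitude bias

def correct_measurement(rng_m, az_rad, alt_m, b: RadarBiases):
    """Remove the estimated systematic errors from a raw (range, azimuth, altitude) plot."""
    rng_c = (rng_m - b.range_offset_m) / b.range_gain
    az_c = (az_rad - b.azimuth_offset_rad) % (2.0 * math.pi)
    alt_c = alt_m - b.altitude_offset_m
    return rng_c, az_c, alt_c

biases = RadarBiases(120.0, 1.0005, math.radians(0.05), -60.0)
print(correct_measurement(80_000.0, math.radians(42.0), 10_500.0, biases))
```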

  19. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

    The spallation reaction is considered as a 2-step process. First a very quick stage (10⁻²², 10⁻²⁹ s) which corresponds to the individual interaction between the incident projectile and nucleons, this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted, the nucleus is left in a strongly excited level. Secondly a slower stage (10⁻¹⁸, 10⁻¹⁹ s) during which the nucleus is expected to de-excite completely. This de-excitation is performed by evaporation of light particles (n, p, d, t, ³He, ⁴He) or/and fission or/and fragmentation. The HETC code has been designed to simulate spallation reactions, this simulation is based on the 2-steps process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model), the evaporation model relies on the statistical theory of Weiskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology about the comparison of relevant experimental data with results of calculation is presented and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions, the emission of pions is over-estimated and corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact to determine the excited level of the nucleus at the end of the first step and indirectly on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high energy light particles is under-estimated. (A.C.)

  20. A Validation Process for the Groundwater Flow and Transport Model of the Faultless Nuclear Test at Central Nevada Test Area

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed Hassan

    2003-01-01

    Many sites of groundwater contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This has created a need for tools and approaches that can be used to build confidence in model predictions and make it apparent to regulators, policy makers, and the public that these models are sufficient for decision making. This confidence building is a long-term iterative process, and it is this process that should be termed "model validation." Model validation is a process, not an end result. That is, the process of model validation cannot always assure acceptable prediction or quality of the model. Rather, it provides a safeguard against faulty models or inadequately developed and tested models. Therefore, development of a systematic approach for evaluating and validating subsurface predictive models and guiding field activities for data collection and long-term monitoring is strongly needed. This report presents a review of model validation studies that pertain to groundwater flow and transport modeling. Definitions, literature debates, previously proposed validation strategies, and conferences and symposia that focused on subsurface model validation are reviewed and discussed. The review is general in nature, but the focus of the discussion is on site-specific, predictive groundwater models that are used for making decisions regarding remediation activities and site closure. An attempt is made to compile most of the published studies on groundwater model validation and assemble what has been proposed or used for validating subsurface models. The aim is to provide a reasonable starting point to aid the development of the validation plan for the groundwater flow and transport model of the Faultless nuclear test conducted at the Central Nevada Test Area (CNTA). The review of previous studies on model validation shows that there does not exist a set of specific procedures and tests that can be easily adapted and

  1. What women want. Women's preferences for the management of low-grade abnormal cervical screening tests: a systematic review

    DEFF Research Database (Denmark)

    Frederiksen, Maria Eiholm; Lynge, E; Rebolj, M

    2012-01-01

    Please cite this paper as: Frederiksen M, Lynge E, Rebolj M. What women want. Women's preferences for the management of low-grade abnormal cervical screening tests: a systematic review. BJOG 2011; DOI: 10.1111/j.1471-0528.2011.03130.x. Background If human papillomavirus (HPV) testing will replace cytology in primary cervical screening, the frequency of low-grade abnormal screening tests will double. Several available alternatives for the follow-up of low-grade abnormal screening tests have similar outcomes. In this situation, women's preferences have been proposed as a guide for management ... Pap smears. In all but two studies testing other situations, women more often expressed a preference for active follow-up than for observation; however, women appeared to be somewhat more willing to accept observation if reassured of the low risk of cervical cancer. Conclusions Even for low...

  2. Storytelling to Enhance Teaching and Learning: The Systematic Design, Development, and Testing of Two Online Courses

    Science.gov (United States)

    Hirumi, Atsusi; Sivo, Stephen; Pounds, Kelly

    2012-01-01

    Storytelling may be a powerful instructional approach for engaging learners and facilitating e-learning. However, relatively little is known about how to apply story within the context of systematic instructional design processes and claims for the effectiveness of storytelling in training and education have been primarily anecdotal and…

  3. A verification system survival probability assessment model test methods

    International Nuclear Information System (INIS)

    Jia Rui; Wu Qiang; Fu Jiwei; Cao Leituan; Zhang Junnan

    2014-01-01

    Owing to the limitations of funding and test conditions, large complex systems are often tested with only a small number of sub-samples. Under such single-sample conditions, making an accurate evaluation of performance is important for the reinforcement of complex systems, and the technical maturity of an assessment model can be significantly improved if the model can be experimentally validated. This paper presents a test method for verifying a system survival probability assessment model: sample test results from the test system are used to verify the correctness of the assessment model and of the a priori information. (authors)

  4. A maximin model for test design with practical constraints

    NARCIS (Netherlands)

    van der Linden, Willem J.; Boekkooi-Timminga, Ellen

    1987-01-01

    A "maximin" model for item response theory based test design is proposed. In this model only the relative shape of the target test information function is specified. It serves as a constraint subject to which a linear programming algorithm maximizes the information in the test. In the practice of

  5. 2-D Model Test Study of the Suape Breakwater, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.

    This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...
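
    For orientation, hydraulic model tests of this kind are normally run under Froude scaling; with the 1:35 length scale quoted in the report, the other scale ratios follow directly. These are the generic Froude relations, not values taken from the report itself.

```latex
\lambda_L = 35, \qquad
\lambda_t = \lambda_v = \sqrt{\lambda_L} \approx 5.9, \qquad
\lambda_F = \lambda_L^{3} = 42\,875 ,
```

    so wave heights and lengths scale with λ_L, wave periods and velocities with √λ_L, and forces (for equal fluid density) with λ_L³.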

  6. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models: Stop Developing, Start Evaluating.

    Science.gov (United States)

    van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik

    2017-04-01

    Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Four databases were searched for HEEs of PM-based strategies. Two reviewers independently selected eligible articles. A checklist was compiled to score items focusing on general characteristics of HEEs of PMs, model characteristics and quality of HEEs, evidence on PMs typically used in the HEEs, and the specific challenges in performing HEEs of PMs. After screening 791 abstracts, 171 full texts, and reference checking, 40 eligible HEEs evaluating 60 PMs were identified. In these HEEs, PM strategies were compared with current practice (n = 32; 80%), to other stratification methods for patient management (n = 19; 48%), to an extended PM (n = 9; 23%), or to alternative PMs (n = 5; 13%). The PMs guided decisions on treatment (n = 42; 70%), further testing (n = 18; 30%), or treatment prioritization (n = 4; 7%). For 36 (60%) PMs, only a single decision threshold was evaluated. Costs of risk prediction were ignored for 28 (46%) PMs. Uncertainty in outcomes was assessed using probabilistic sensitivity analyses in 22 (55%) HEEs. Despite the huge number of PMs in the medical literature, HEE of PMs remains rare. In addition, we observed great variety in their quality and methodology, which may complicate interpretation of HEE results and implementation of PMs in practice. Guidance on HEE of PMs could encourage and standardize their application and enhance methodological quality, thereby improving adequate use of PM strategies. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. A SysML Test Model and Test Suite for the ETCS Ceiling Speed Monitor

    DEFF Research Database (Denmark)

    Braunstein, Cécile; Peleska, Jan; Schulze, Uwe

    2014-01-01

    dedicated to the publication of models that are of interest for the model-based testing (MBT) community, and may serve as benchmarks for comparing MBT tool capabilities. The model described here is of particular interest for analysing the capabilities of equivalence class testing strategies. The CSM application inputs velocity values from a domain which could not be completely enumerated for test purposes with reasonable effort. We describe a novel method for equivalence class testing that – despite the conceptually infinite cardinality of the input domains – is capable of producing finite test suites that are exhaustive under certain hypotheses about the internal structure of the system under test.
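
    A toy illustration of the input-equivalence-class idea for a ceiling speed monitor: the conceptually continuous velocity domain is cut into classes on which the guarded behaviour is uniform, and one representative plus the boundaries of each class are tested. The thresholds and status names below are invented and are not the ETCS values or the paper's test-generation method.

```python
# Toy input-equivalence-class partitioning for a speed monitor (invented thresholds).
V_ALLOWED = 160.0                  # permitted speed (km/h), hypothetical
V_WARNING = V_ALLOWED + 4.0
V_INTERVENTION = V_ALLOWED + 7.5

def monitor_status(v: float) -> str:
    """Guarded behaviour is constant on each class of the velocity domain."""
    if v <= V_ALLOWED:
        return "NORMAL"
    if v <= V_WARNING:
        return "OVERSPEED_WARNING"
    if v <= V_INTERVENTION:
        return "SERVICE_BRAKE"
    return "EMERGENCY_BRAKE"

# One representative per class plus the class boundaries gives a finite suite.
classes = [(0.0, V_ALLOWED), (V_ALLOWED, V_WARNING),
           (V_WARNING, V_INTERVENTION), (V_INTERVENTION, 300.0)]
test_points = sorted({lo for lo, _ in classes} | {hi for _, hi in classes}
                     | {(lo + hi) / 2.0 for lo, hi in classes})
for v in test_points:
    print(f"v = {v:6.2f} km/h -> {monitor_status(v)}")
```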

  8. Systematic Multi‐Scale Model Development Strategy for the Fragrance Spraying Process and Transport

    DEFF Research Database (Denmark)

    Heitzig, M.; Rong, Y.; Gregson, C.

    2012-01-01

    The fast and efficient development and application of reliable models with appropriate degree of detail to predict the behavior of fragrance aerosols are challenging problems of high interest to the related industries. A generic modeling template for the systematic derivation of specific fragrance aerosol models is proposed. The main benefits of the fragrance spraying template are the speed-up of the model development/derivation process, the increase in model quality, and the provision of structured domain knowledge where needed. The fragrance spraying template is integrated in a generic computer-aided modeling framework, which is structured based on workflows for different general modeling tasks. The benefits of the fragrance spraying template are highlighted by a case study related to the derivation of a fragrance aerosol model that is able to reflect measured dynamic droplet size distribution profiles...

  9. A Lagrange Multiplier Test for Testing the Adequacy of the Constant Conditional Correlation GARCH Model

    DEFF Research Database (Denmark)

    Catani, Paul; Teräsvirta, Timo; Yin, Meiqun

    A Lagrange multiplier test for testing the parametric structure of a constant conditional correlation generalized autoregressive conditional heteroskedasticity (CCC-GARCH) model is proposed. The test is based on decomposing the CCC-GARCH model multiplicatively into two components, one of which...
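
    For readers unfamiliar with the model under test, the CCC-GARCH specification being checked can be written in standard notation as below; the multiplicative decomposition used to build the LM statistic itself is not reproduced here.

```latex
\varepsilon_t = H_t^{1/2} z_t, \qquad
H_t = D_t R D_t, \qquad
D_t = \operatorname{diag}\!\left(h_{1t}^{1/2}, \ldots, h_{Nt}^{1/2}\right), \qquad
h_{it} = \omega_i + \alpha_i \varepsilon_{i,t-1}^2 + \beta_i h_{i,t-1},
```

    where R is the constant conditional correlation matrix and z_t is an i.i.d. random vector with zero mean and identity covariance.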

  10. A Functional Test Platform for the Community Land Model

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yang [ORNL; Thornton, Peter E [ORNL; King, Anthony Wayne [ORNL; Steed, Chad A [ORNL; Gu, Lianhong [ORNL; Schuchart, Joseph [ORNL

    2014-01-01

    A functional test platform is presented to create direct linkages between site measurements and the process-based ecosystem model within the Community Earth System Model (CESM). The platform consists of three major parts: 1) interactive user interfaces, 2) a functional test model, and 3) observational datasets. It provides much-needed integration interfaces for both field experimentalists and ecosystem modelers to improve the model's representation of ecosystem processes within the CESM framework without large software overhead.

  11. Conquering systematics in the timing of the pulsar triple system J0337+1715: Towards a unique and robust test of the strong equivalence principle

    Science.gov (United States)

    Gusinskaia, N. V.; Archibald, A. M.; Hessels, J. W. T.; Lorimer, D. R.; Ransom, S. M.; Stairs, I. H.; Lynch, R. S.

    2017-12-01

    PSR J0337+1715 is a millisecond radio pulsar in a hierarchical stellar triple system containing two white dwarfs. The pulsar orbits the inner white dwarf every 1.6 days. In turn, this inner binary system orbits the outer white dwarf every 327 days. The gravitational influence of the outer white dwarf strongly accelerates the inner binary, making this system an excellent laboratory in which to test the strong equivalence principle (SEP) of general relativity – especially because the neutron star has significant gravitational self-binding energy. This system has been intensively monitored using three radio telescopes: Arecibo, Green Bank and Westerbork. Using the more than 25000 pulse times of arrival (TOAs) collected to date, we have modeled the system using direct 3-body numerical integration. Here we present our efforts to quantify the effects of systematics in the TOAs and timing residuals, which can limit the precision to which we can test the SEP in this system. In this work we describe Fourier-based techniques that we apply to the residuals in order to isolate the effects of systematics that could masquerade as an SEP violation. We also demonstrate that tidal effects are insignificant in the modeling.
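
    One simple form of the Fourier-based inspection described here is a periodogram of the unevenly sampled timing residuals, which makes any power concentrated near the inner- or outer-orbital periods easy to spot. The sketch below uses synthetic residuals and scipy's Lomb-Scargle routine; it is only a schematic of the approach, not the group's pipeline or data.

```python
# Schematic periodogram of unevenly sampled timing residuals (synthetic data).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t_days = np.sort(rng.uniform(0.0, 1200.0, 2500))          # irregular TOA epochs
# Synthetic residuals: white noise plus a small signal near the outer orbit (~327 d)
resid_us = (1.0 * rng.standard_normal(t_days.size)
            + 0.4 * np.sin(2.0 * np.pi * t_days / 327.0))

periods = np.linspace(2.0, 600.0, 4000)                    # trial periods in days
ang_freqs = 2.0 * np.pi / periods
power = lombscargle(t_days, resid_us - resid_us.mean(), ang_freqs)

print("strongest periodicity near %.1f days" % periods[np.argmax(power)])
```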

  12. Testing the race model inequality : a nonparametric approach

    NARCIS (Netherlands)

    Maris, G.K.J.; Maris, E.

    2003-01-01

    This paper introduces a nonparametric procedure for testing the race model explanation of the redundant signals effect. The null hypothesis is the race model inequality derived from the race model by Miller (Cognitive Psychol. 14 (1982) 247). The construction of a nonparametric test is made possible
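
    The null hypothesis being tested is Miller's race model inequality, which bounds the redundant-signals response-time distribution by the sum of the two single-signal distributions:

```latex
F_{AB}(t) \;\le\; F_A(t) + F_B(t) \qquad \text{for all } t,
```

    where F_AB, F_A and F_B are the cumulative response-time distributions in the redundant-signal and the two single-signal conditions, respectively.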

  13. Rapid fetal fibronectin testing to predict preterm birth in women with symptoms of premature labour: a systematic review and cost analysis.

    Science.gov (United States)

    Deshpande, S N; van Asselt, A D I; Tomini, F; Armstrong, N; Allen, A; Noake, C; Khan, K; Severens, J L; Kleijnen, J; Westwood, M E

    2013-09-01

    Premature birth is defined as birth before 37 completed weeks' gestation. Not all pregnant women showing symptoms of preterm labour will go on to deliver before 37 weeks' gestation. Hence, addition of fetal fibronectin (fFN) testing to the diagnostic workup of women with suspected preterm labour may help to identify those women who do not require active management, and thus avoid unnecessary interventions, hospitalisations and associated costs. To assess the clinical effectiveness and cost-effectiveness of rapid fFN testing in predicting preterm birth (PTB) in symptomatic women. Bibliographic databases (including EMBASE, Cochrane Database of Systematic Reviews and Cochrane Central Register of Controlled Trials) were searched from 2000 to September/November 2011. Trial registers were also searched. Systematic review methods followed published guidance; we assessed clinical effectiveness and updated a previous systematic review of test accuracy. Risk of bias was assessed using the Cochrane tool (randomised controlled trials; RCTs) and a modification of QUADAS-2 (diagnostic test accuracy studies; DTAs). Summary risk ratios or weighted mean differences were calculated using random-effects models. Summary sensitivity and specificity were estimated using a bivariate summary receiver operating characteristic model. Heterogeneity was investigated using subgroup and sensitivity analyses. Health economic analysis focused on cost consequences. The time horizon was hospital admission for observation. A main structural assumption was that, compared with usual care, fFN testing does not increase adverse events or negative pregnancy outcomes. Five RCTs and 15 new DTAs were identified. No RCT reported significant effects of fFN testing on maternal or neonatal outcomes. One study reported a subgroup analysis of women with a negative fFN test observed for > 6 hours, which showed a reduction in length of hospital stay where results were known to clinicians. Combining data from new studies and the

  14. Life course socio-economic position and quality of life in adulthood: a systematic review of life course models

    Science.gov (United States)

    2012-01-01

    Background A relationship between current socio-economic position and subjective quality of life has been demonstrated, using wellbeing, life and needs satisfaction approaches. Less is known regarding the influence of different life course socio-economic trajectories on later quality of life. Several conceptual models have been proposed to help explain potential life course effects on health, including accumulation, latent, pathway and social mobility models. This systematic review aimed to assess whether evidence supported an overall relationship between life course socio-economic position and quality of life during adulthood and if so, whether there was support for one or more life course models. Methods A review protocol was developed detailing explicit inclusion and exclusion criteria, search terms, data extraction items and quality appraisal procedures. Literature searches were performed in 12 electronic databases during January 2012 and the references and citations of included articles were checked for additional relevant articles. Narrative synthesis was used to analyze extracted data and studies were categorized based on the life course model analyzed. Results Twelve studies met the eligibility criteria and used data from 10 datasets and five countries. Study quality varied and heterogeneity between studies was high. Seven studies assessed social mobility models, five assessed the latent model, two assessed the pathway model and three tested the accumulation model. Evidence indicated an overall relationship, but mixed results were found for each life course model. Some evidence was found to support the latent model among women, but not men. Social mobility models were supported in some studies, but overall evidence suggested little to no effect. Few studies addressed accumulation and pathway effects and study heterogeneity limited synthesis. Conclusions To improve potential for synthesis in this area, future research should aim to increase study

  15. A Model for Random Student Drug Testing

    Science.gov (United States)

    Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

    2011-01-01

    The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

  16. Modeling Reliability Growth in Accelerated Stress Testing

    Science.gov (United States)

    2013-12-01

  17. Variable influenza vaccine effectiveness by subtype: a systematic review and meta-analysis of test-negative design studies.

    Science.gov (United States)

    Belongia, Edward A; Simpson, Melissa D; King, Jennifer P; Sundaram, Maria E; Kelley, Nicholas S; Osterholm, Michael T; McLean, Huong Q

    2016-08-01

    Influenza vaccine effectiveness (VE) can vary by type and subtype. Over the past decade, the test-negative design has emerged as a valid method for estimation of VE. In this design, VE is calculated as 100% × (1 - odds ratio) for vaccine receipt in influenza cases versus test-negative controls. We did a systematic review and meta-analysis to estimate VE by type and subtype. In this systematic review and meta-analysis, we searched PubMed and Embase from Jan 1, 2004, to March 31, 2015. Test-negative design studies of influenza VE were eligible if they enrolled outpatients on the basis of predefined illness criteria, reported subtype-level VE by season, used PCR to confirm influenza, and adjusted for age. We excluded studies restricted to hospitalised patients or special populations, duplicate reports, interim reports superseded by a final report, studies of live-attenuated vaccine, and studies of prepandemic seasonal vaccine against H1N1pdm09. Two reviewers independently assessed titles and abstracts to identify articles for full review. Discrepancies in inclusion and exclusion criteria and VE estimates were adjudicated by consensus. Outcomes were VE against H3N2, H1N1pdm09, H1N1 (pre-2009), and type B. We calculated pooled VE using a random-effects model. We identified 3368 unduplicated publications, selected 142 for full review, and included 56 in the meta-analysis. Pooled VE was 33% (95% CI 26-39; I(2)=44·4) for H3N2, 54% (46-61; I(2)=61·3) for type B, 61% (57-65; I(2)=0·0) for H1N1pdm09, and 67% (29-85; I(2)=57·6) for H1N1; VE was 73% (61-81; I(2)=31·4) for monovalent vaccine against H1N1pdm09. VE against H3N2 for antigenically matched viruses was 33% (22-43; I(2)=56·1) and for variant viruses was 23% (2-40; I(2)=55·6). Among older adults (aged >60 years), pooled VE was 24% (-6 to 45; I(2)=17·6) for H3N2, 63% (33-79; I(2)=0·0) for type B, and 62% (36-78; I(2)=0·0) for H1N1pdm09. Influenza vaccines provided substantial protection against H1N1pdm
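
    The VE definition used throughout the review translates directly into a small calculation from a two-by-two table of cases and test-negative controls. The counts below are invented, the confidence interval uses the usual log-odds-ratio normal approximation, and the odds ratio is unadjusted, whereas the studies in the review adjust at least for age (typically via logistic regression).

```python
# Test-negative design VE from a 2x2 table (illustrative counts only, unadjusted).
import math

vacc_cases, unvacc_cases = 120, 300          # influenza-positive patients
vacc_ctrls, unvacc_ctrls = 500, 700          # influenza-negative controls

odds_ratio = (vacc_cases * unvacc_ctrls) / (unvacc_cases * vacc_ctrls)
se_log_or = math.sqrt(1/vacc_cases + 1/unvacc_cases + 1/vacc_ctrls + 1/unvacc_ctrls)
or_lo, or_hi = (math.exp(math.log(odds_ratio) + s * 1.96 * se_log_or) for s in (-1, 1))

ve = 100.0 * (1.0 - odds_ratio)              # VE = 100% x (1 - OR)
print(f"VE = {ve:.0f}% (95% CI {100*(1-or_hi):.0f}% to {100*(1-or_lo):.0f}%)")
```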

  18. Measuring the quality and quantity of professional intrapartum support: testing a computerised systematic observation tool in the clinical setting.

    Science.gov (United States)

    Ross-Davie, Mary C; Cheyne, Helen; Niven, Catherine

    2013-08-14

    Continuous support in labour has a significant impact on a range of clinical outcomes, though whether the quality and quantity of support behaviours affect the strength of this impact has not yet been established. To identify the quality and quantity of support, a reliable means of measurement is needed. To this end, a new computerised systematic observation tool, the 'SMILI' (Supportive Midwifery in Labour Instrument), was developed. The aim of the study was to test the validity and usability of the 'Supportive Midwifery in Labour Instrument' (SMILI) and to test the feasibility and acceptability of the systematic observation approach in the clinical intrapartum setting. Systematic observation was combined with a postnatal questionnaire and the collection of data about clinical processes and outcomes for each observed labour. The setting for the study was four National Health Service maternity units in Scotland, UK. Participants in this study were forty-five midwives and forty-four women. The SMILI was used by trained midwife observers to record labour care provided by midwives. Observations were undertaken for an average of two hours and seventeen minutes during the active first stage of labour and, in 18 cases, the observation included the second stage of labour. Content validity of the instrument was tested by the observers, noting the extent to which the SMILI facilitated the recording of all key aspects of labour care and interactions. Construct validity was tested through exploration of correlations between the data recorded and women's feelings about the support they received. Feasibility and usability data were recorded following each observation by the observer. Internal reliability and construct validity were tested through statistical analysis of the data. One hundred and four hours of labour care were observed and recorded using the SMILI during forty-nine labour episodes. The SMILI was found to be a valid and reliable instrument in the intrapartum setting

  19. Syndemics of psychosocial problems and HIV risk: A systematic review of empirical tests of the disease interaction concept.

    Science.gov (United States)

    Tsai, Alexander C; Burns, Bridget F O

    2015-08-01

    In the theory of syndemics, diseases co-occur in particular temporal or geographical contexts due to harmful social conditions (disease concentration) and interact at the level of populations and individuals, with mutually enhancing deleterious consequences for health (disease interaction). This theory has widespread adherents in the field, but the extent to which there is empirical support for the concept of disease interaction remains unclear. In January 2015 we systematically searched 7 bibliographic databases and tracked citations to highly cited publications associated with the theory of syndemics. Of the 783 records, we ultimately included 34 published journal articles, 5 dissertations, and 1 conference abstract. Most studies were based on a cross-sectional design (32 [80%]), were conducted in the U.S. (32 [80%]), and focused on men who have sex with men (21 [53%]). The most frequently studied psychosocial problems were related to mental health (33 [83%]), substance abuse (36 [90%]), and violence (27 [68%]); while the most frequently studied outcome variables were HIV transmission risk behaviors (29 [73%]) or HIV infection (9 [23%]). To test the disease interaction concept, 11 (28%) studies used some variation of a product term, with less than half of these (5/11 [45%]) providing sufficient information to interpret interaction both on an additive and on a multiplicative scale. The most frequently used specification (31 [78%]) to test the disease interaction concept was the sum score corresponding to the total count of psychosocial problems. Although the count variable approach does not test hypotheses about interactions between psychosocial problems, these studies were much more likely than others (14/31 [45%] vs. 0/9 [0%]; χ2 = 6.25, P = 0.01) to incorporate language about "synergy" or "interaction" that was inconsistent with the statistical models used. Therefore, more evidence is needed to assess the extent to which diseases interact, either at the
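
    The distinction the authors draw between additive and multiplicative scales can be made explicit with standard epidemiological formulas: given odds ratios for each exposure alone and for the joint exposure, the multiplicative test compares OR11 against the product OR10 × OR01, while the additive test uses the relative excess risk due to interaction (RERI). The numbers in the worked example are invented.

```latex
\mathrm{RERI} = OR_{11} - OR_{10} - OR_{01} + 1, \qquad
\text{multiplicative interaction ratio} = \frac{OR_{11}}{OR_{10}\, OR_{01}} .
```

    For example, OR10 = 2, OR01 = 3 and OR11 = 5 give RERI = 1 (positive interaction on the additive scale) while 5/(2 × 3) ≈ 0.83 < 1 (negative interaction on the multiplicative scale), which is why reporting both scales matters.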

  20. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  1. Testing Software Development Project Productivity Model

    Science.gov (United States)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in producing reliable software development estimates, with existing estimation models often underestimating software development effort by as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with the resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and a theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control

  2. FROM ATOMISTIC TO SYSTEMATIC COARSE-GRAINED MODELS FOR MOLECULAR SYSTEMS

    KAUST Repository

    Harmandaris, Vagelis

    2017-10-03

    The development of systematic (rigorous) coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of methods for obtaining optimal parametrized coarse-grained models, starting from detailed atomistic representation for high dimensional molecular systems. Different methods are described based on (a) structural properties (inverse Boltzmann approaches), (b) forces (force matching), and (c) path-space information (relative entropy). Next, we present a detailed investigation concerning the application of these methods in systems under equilibrium and non-equilibrium conditions. Finally, we present results from the application of these methods to model molecular systems.
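
    As one concrete example of the structure-based route mentioned above, iterative Boltzmann inversion refines a pair potential until the coarse-grained model reproduces a target radial distribution function g(r). The sketch below is schematic, assuming tabulated g(r) arrays and a user-supplied coarse-grained simulation routine; it is not code from the cited work.

```python
# Schematic iterative Boltzmann inversion (IBI) update for a pair potential.
# Assumes g_target(r) from an atomistic reference simulation and a routine
# that runs a coarse-grained simulation and returns g_cg(r).
import numpy as np

kB_T = 2.494  # kJ/mol at ~300 K (assumed units)

def initial_potential(g_target, eps=1e-12):
    """Potential of mean force as the usual starting guess: V0(r) = -kT ln g_target(r)."""
    return -kB_T * np.log(g_target + eps)

def ibi_update(V, g_cg, g_target, alpha=0.2, eps=1e-12):
    """One IBI step: V_new(r) = V(r) + alpha * kT * ln(g_cg(r) / g_target(r))."""
    return V + alpha * kB_T * np.log((g_cg + eps) / (g_target + eps))

# Typical loop (run_cg_simulation is a placeholder for the user's MD engine):
# V = initial_potential(g_target)
# for it in range(30):
#     g_cg = run_cg_simulation(V)
#     V = ibi_update(V, g_cg, g_target)
```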

  3. Systematic model development for partial nitrification of landfill leachate in a SBR

    DEFF Research Database (Denmark)

    Ganigue, R.; Volcke, E.I.P.; Puig, S.

    2010-01-01

    This study deals with partial nitrification in a sequencing batch reactor (PN-SBR) treating raw urban landfill leachate. In order to enhance process insight (e.g. quantify interactions between aeration, CO2 stripping, alkalinity, pH, nitrification kinetics), a mathematical model has been set up. Following a systematic procedure, the model was successfully constructed, calibrated and validated using data from short-term (one cycle) operation of the PN-SBR. The evaluation of the model revealed a good fit to the main physical-chemical measurements (ammonium, nitrite, nitrate and inorganic carbon ...

  4. A Systematic Review of Behavioral Interventions to Reduce Condomless Sex and Increase HIV Testing for Latino MSM.

    Science.gov (United States)

    Pérez, Ashley; Santamaria, E Karina; Operario, Don

    2017-12-15

    Latino men who have sex with men (MSM) in the United States are disproportionately affected by HIV, and there have been calls to improve availability of culturally sensitive HIV prevention programs for this population. This article provides a systematic review of intervention programs to reduce condomless sex and/or increase HIV testing among Latino MSM. We searched four electronic databases using a systematic review protocol, screened 1777 unique records, and identified ten interventions analyzing data from 2871 Latino MSM. Four studies reported reductions in condomless anal intercourse, and one reported reductions in number of sexual partners. All studies incorporated surface structure cultural features such as bilingual study recruitment, but the incorporation of deep structure cultural features, such as machismo and sexual silence, was lacking. There is a need for rigorously designed interventions that incorporate deep structure cultural features in order to reduce HIV among Latino MSM.

  5. Acute kidney injury following liver transplantation: a systematic review of published predictive models.

    Science.gov (United States)

    Caragata, R; Wyssusek, K H; Kruger, P

    2016-03-01

    Acute kidney injury is a frequent postoperative complication amongst liver transplant recipients and is associated with increased morbidity and mortality. This systematic review analysed the existing predictive models, in order to solidify current understanding. Articles were selected for inclusion if they described the primary development of a clinical prediction model (either an algorithm or risk score) to predict AKI post liver transplantation. The database search yielded a total of seven studies describing the primary development of a prediction model or risk score for the development of AKI following liver transplantation. The models span thirteen years of clinical research and highlight a gradual change in the definitions of AKI, emphasising the need to employ standardised definitions for subsequent studies. Collectively, the models identify a diverse range of predictive factors with several common trends. They emphasise the impact of preoperative renal dysfunction, liver disease severity and aetiology, metabolic risk factors as well as intraoperative variables including measures of haemodynamic instability and graft quality. Although several of the models address postoperative parameters, their utility in predictive modelling seems to be of questionable relevance. The common risk factors identified within this systematic review provide a minimum list of variables, which future studies should address. Research in this area would benefit from prospective, multi-site studies with larger cohorts as well as the subsequent internal and external validation of predictive models. Ultimately, the ability to identify patients at high risk of post-transplant AKI may enable early intervention and perhaps prevention.

  6. Decoding β-decay systematics: A global statistical model for β- half-lives

    International Nuclear Information System (INIS)

    Costiris, N. J.; Mavrommatis, E.; Gernoth, K. A.; Clark, J. W.

    2009-01-01

    Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the half-lives of nuclear ground states that decay 100% by the β - mode. More specifically, fully connected, multilayer feed-forward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for β-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.
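
    A minimal sketch of the type of model described here — a fully connected feed-forward network mapping nuclear descriptors to log half-lives — is given below, assuming scikit-learn and hypothetical toy inputs (Z and N as features, log10 of the half-life as target). The cited work trains with Levenberg-Marquardt and Bayesian regularization, which scikit-learn does not provide, so this is only a structural analogue.

```python
# Structural analogue of a global statistical model for beta-minus half-lives:
# a small feed-forward network mapping (Z, N) to log10(T1/2).
# Toy data; the cited study uses Levenberg-Marquardt + Bayesian regularization
# rather than the solvers available in scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: proton and neutron numbers; y: log10 half-life in seconds (made-up values).
X = np.array([[27, 33], [29, 37], [35, 53], [55, 87]], dtype=float)
y = np.array([2.5, 1.2, -0.7, -1.5])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), alpha=1e-3,  # L2 penalty stands in for regularization
                 max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print(model.predict(X_test))
```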

  7. Modellering, test og fortolkning af indirekte revisionsbeviser [Modelling, testing and interpretation of indirect audit evidence]

    DEFF Research Database (Denmark)

    Holm, Claus

    1999-01-01

    will be affected by the credibility of the source and the relevance of the report. This contribution shows how the normative theory behind the Bayesian multi-stage model makes it possible to derive and test hypotheses. A 2×2 experimental design was tested on 89 auditors, with three main results, namely (1) ..., when it is compared with the normative performance standard that they themselves set.

  8. Port Adriano, 2D-Model tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Meinert, Palle; Andersen, Thomas Lykke

    the crown wall have been measured. The model has been subjected to irregular waves corresponding to typical conditions offshore from the intended prototype location. Characteristic situations have been video recorded. The stability of the toe has been investigated. The wave-generated forces on the caisson...

  9. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two ... These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.

  10. Tests for the Assessment of Sport-Specific Performance in Olympic Combat Sports: A Systematic Review With Practical Recommendations.

    Science.gov (United States)

    Chaabene, Helmi; Negra, Yassine; Bouguezzi, Raja; Capranica, Laura; Franchini, Emerson; Prieske, Olaf; Hbacha, Hamdi; Granacher, Urs

    2018-01-01

    The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and to critically appraise the methodological quality, validation data, and feasibility of the sport-specific performance assessment in Olympic combat sports like amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google-Scholar, and Science-Direct up to October 2017. Studies in combat sports were included that reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) contained sample sizes sport-specific tests (intraclass correlation coefficient [ICC] = 0.43-1.00). Content validity was addressed in all included studies, criterion validity (only the concurrent aspect of it) in approximately half of the studies with correlation coefficients ranging from r = -0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and methodological limitations of the sport-specific test. In 28% of the included studies, insufficient information or a complete lack of information was provided in the respective field of the test application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Additional research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports.

  11. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  12. Evaluation models and criteria of the quality of hospital websites: a systematic review study

    OpenAIRE

    Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar

    2017-01-01

    Introduction Hospital websites are important tools in establishing communication and exchanging information between patients and staff, and thus should enjoy an acceptable level of quality. The aim of this study was to identify proper models and criteria to evaluate the quality of hospital websites. Methods This research was a systematic review study. The international databases such as Science Direct, Google Scholar, PubMed, Proquest, Ovid, Elsevier, Springer, and EBSCO together with regiona...

  13. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model to the original series. It is found by simulation that the positive size distortion present in these tests is a function of the kurtosis of the GARCH process. Adjusting the size by numerical methods is considered. The possibility of testing the constancy of the unconditional variance before fitting a GARCH model to the data is discussed. The power of the ensuing test is vastly superior to that of the misspecification test and the size distortion minimal. The test has reasonable power already in very short time series. It would thus serve as a test of constant variance in conditional mean models.
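
    For readers who want to experiment with the kind of model under discussion, the sketch below fits a standard GARCH(1,1) to a return series using the Python arch package. It illustrates the baseline model being tested, not the specific constancy test proposed in the paper, and the data are simulated placeholders.

```python
# Fit a GARCH(1,1) to a placeholder daily return series with the `arch` package.
# This shows the baseline model; the constancy test described in the record
# would be applied to the series before/after such a fit.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(1)
returns = pd.Series(rng.standard_t(df=7, size=1500) * 0.8)  # simulated returns in percent

am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())

# Kurtosis of the series, relevant because the size distortion of the
# misspecification tests is reported to depend on it.
print("excess kurtosis:", returns.kurtosis())
```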

  14. The Systematic Guideline Review: Method, rationale, and test on chronic heart failure

    Directory of Open Access Journals (Sweden)

    Hutchinson Allen

    2009-05-01

    Full Text Available Background: Evidence-based guidelines have the potential to improve healthcare. However, their de novo development requires substantial resources – especially for complex conditions, and adaptation may be biased by contextually influenced recommendations in source guidelines. In this paper we describe a new approach to guideline development – the systematic guideline review method (SGR) – and its application in the development of an evidence-based guideline for family physicians on chronic heart failure (CHF). Methods: A systematic search for guidelines was carried out. Evidence-based guidelines on CHF management in adults in ambulatory care published in English or German between the years 2000 and 2004 were included. Guidelines on acute or right heart failure were excluded. Eligibility was assessed by two reviewers, methodological quality of selected guidelines was appraised using the AGREE instrument, and a framework of relevant clinical questions for diagnostics and treatment was derived. Data were extracted into evidence tables, systematically compared by means of a consistency analysis and synthesized in a preliminary draft. Most relevant primary sources were re-assessed to verify the cited evidence. Evidence and recommendations were summarized in a draft guideline. Results: Of 16 included guidelines five were of good quality. A total of 35 recommendations were systematically compared: 25/35 were consistent, 9/35 inconsistent, and 1/35 un-rateable (derived from a single guideline). Of the 25 consistencies, 14 were based on consensus, seven on evidence and four differed in grading. Major inconsistencies were found in 3/9 of the inconsistent recommendations. We re-evaluated the evidence for 17 recommendations (evidence-based, differing evidence levels and minor inconsistencies) – the majority was congruent. Incongruity was found where the stated evidence could not be verified in the cited primary sources, or where the evaluation in the

  15. Model tests in RAMONA and NEPTUN

    International Nuclear Information System (INIS)

    Hoffmann, H.; Ehrhard, P.; Weinberg, D.; Carteciano, L.; Dres, K.; Frey, H.H.; Hayafune, H.; Hoelle, C.; Marten, K.; Rust, K.; Thomauske, K.

    1995-01-01

    In order to demonstrate passive decay heat removal (DHR) in an LMR such as the European Fast Reactor, the RAMONA and NEPTUN facilities, with water as a coolant medium, were used to measure transient flow data corresponding to a transition from forced convection (under normal operation) to natural convection under DHR conditions. The facilities were 1:20 and 1:5 models, respectively, of a pool-type reactor including the IHXs, pumps, and immersed coolers. Important results: The decay heat can be removed from all parts of the primary system by natural convection, even if the primary fluid circulation through the IHX is interrupted. This result could be transferred to liquid metal cooling by experiments in models with thermohydraulic similarity. (orig.)

  16. Quake's motion modeled in relay test

    International Nuclear Information System (INIS)

    Calhoun, H.J.

    1978-01-01

    Relays in safety-related functions at nuclear plants can now be tested seismically at lower frequencies and with more meaningful force magnitudes than ever before. A massive, computer-controlled machine shakes the relays with a complex set of frequencies to determine their fragility. NRC has examined the technique by which relay movement is programmed, and has accepted it as an adequate means to do so. The machine, installed last fall, will improve the ability of manufacturers to produce more rugged, vibration-resistant relays, thus increasing system reliability.

  17. Measurement of physical performance by field tests in programs of cardiac rehabilitation: a systematic review and meta-analysis.

    Science.gov (United States)

    Travensolo, Cristiane; Goessler, Karla; Poton, Roberto; Pinto, Roberta Ramos; Polito, Marcos Doederlein

    2018-04-13

    The literature concerning the effects of cardiac rehabilitation (CR) on field test results is inconsistent. To perform a systematic review with meta-analysis on field test results after programs of CR. Studies published in PubMed and Web of Science databases until May 2016 were analyzed. The standardised difference in means corrected for bias (Hedges' g) was used as the effect size (g) to measure the amount of change in field-test performance after the CR period. Potential differences between subgroups were analyzed by Q-test based on ANOVA. Fifteen studies published between 1996 and 2016 were included in the review, with 932 patients and ages ranging from 54.4 to 75.3 years. Fourteen studies used the six-minute walk test to evaluate exercise capacity and one study used the Shuttle Walk Test. The random-effects Hedges' g was 0.617 (P<0.001), representing a drop of 20% in the performance of the field test after CR. The meta-regression showed a significant association (P=0.01) with aerobic exercise duration, i.e., for each 1-min increase in aerobic exercise duration, there is a 0.02 increase in effect size for performance in the field test. Field tests can detect physical modification after CR, and a longer duration of aerobic exercise during CR was associated with a better result. Copyright © 2018 Sociedade Portuguesa de Cardiologia. Published by Elsevier España, S.L.U. All rights reserved.
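
    The bias-corrected standardised mean difference used here (Hedges' g) is straightforward to compute; the sketch below shows the usual formula with the small-sample correction factor, using made-up pre/post summary statistics rather than data from the reviewed trials.

```python
# Hedges' g: standardized mean difference with small-sample bias correction.
# Numbers below are illustrative, not taken from the reviewed studies.
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Bias-corrected standardized difference in means (group 1 minus group 2)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))  # pooled SD
    d = (mean1 - mean2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)             # small-sample correction factor
    return j * d

# Example: 6-minute walk distance (m) after vs. before rehabilitation.
print(round(hedges_g(mean1=480, sd1=70, n1=40, mean2=440, sd2=75, n2=40), 3))
```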

  18. A magnetorheological actuation system: test and model

    International Nuclear Information System (INIS)

    John, Shaju; Chaudhuri, Anirban; Wereley, Norman M

    2008-01-01

    Self-contained actuation systems, based on frequency rectification of the high frequency motion of an active material, can produce high force and stroke output. Magnetorheological (MR) fluids are active fluids whose rheological properties can be altered by the application of a magnetic field. By using MR fluids as the energy transmission medium in such hybrid devices, a valving system with no moving parts can be implemented and used to control the motion of an output cylinder shaft. The MR fluid based valves are configured in the form of an H-bridge to produce bi-directional motion in an output cylinder by alternately applying magnetic fields in the two opposite arms of the bridge. The rheological properties of the MR fluid are modeled using both Bingham plastic and bi-viscous models. In this study, the primary actuation is performed using a compact terfenol-D rod driven pump and frequency rectification of the rod motion is done using passive reed valves. The pump and reed valve configuration along with MR fluidic valves form a compact hydraulic actuation system. Actuator design, analysis and experimental results are presented in this paper. A time domain model of the actuator is developed and validated using experimental data
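
    The two constitutive descriptions mentioned for the MR fluid can be written in a few lines; the sketch below gives shear stress as a function of shear rate for a Bingham-plastic and a bi-viscous idealisation, with illustrative parameter values (yield stress, viscosities) that are not taken from the paper.

```python
# Bingham-plastic and bi-viscous approximations of MR fluid shear stress.
# Parameter values are illustrative only.
import numpy as np

def bingham(gamma_dot, tau_y, mu):
    """tau = tau_y*sign(gamma_dot) + mu*gamma_dot (post-yield flow only)."""
    return tau_y * np.sign(gamma_dot) + mu * gamma_dot

def biviscous(gamma_dot, tau_y, mu, mu_pre):
    """High pre-yield viscosity mu_pre below the yield point, mu above it."""
    gamma_y = tau_y / (mu_pre - mu)          # shear rate where the two branches meet
    pre = mu_pre * gamma_dot
    post = tau_y * np.sign(gamma_dot) + mu * gamma_dot
    return np.where(np.abs(gamma_dot) < gamma_y, pre, post)

gamma_dot = np.linspace(-200.0, 200.0, 5)     # 1/s
print(bingham(gamma_dot, tau_y=5e3, mu=0.3))                      # Pa
print(biviscous(gamma_dot, tau_y=5e3, mu=0.3, mu_pre=50.0))       # Pa
```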

  19. Minimal Requirements for Primary HIV Latency Models Based on a Systematic Review.

    Science.gov (United States)

    Bonczkowski, Pawel; De Scheerder, Marie-Angélique; De Spiegelaere, Ward; Vandekerckhove, Linos

    2016-01-01

    Due to the scarcity of HIV-1 latently infected cells in patients, in vitro primary latency models are now commonly used to study the HIV-1 reservoir. To this end, a number of experimental systems have been developed. Most of these models differ based on the nature of the primary CD4+ T-cell type, the used HIV strains, activation methods, and latency assessment strategies. Despite these differences, most models share some common characteristics. Here, we provide a systematic review covering the primary HIV latency models that have been used to date with the aim to compare these models and identify minimal requirements for such experiments. A systematic search on PubMed and Web of Science databases generated a short list of 17 unique publications that propose new in vitro latency models. Based on the described methods, we propose and discuss a generalized workflow, visualizing all the necessary steps to perform such an in vitro study, with the key choices and validation steps that need to be made; from cell type selection until the model readout.

  20. Theoretical Tools and Software for Modeling, Simulation and Control Design of Rocket Test Facilities

    Science.gov (United States)

    Richter, Hanz

    2004-01-01

    A rocket test stand and associated subsystems are complex devices whose operation requires that certain preparatory calculations be carried out before a test. In addition, real-time control calculations must be performed during the test, and further calculations are carried out after a test is completed. The latter may be required in order to evaluate if a particular test conformed to specifications. These calculations are used to set valve positions, pressure setpoints, control gains and other operating parameters so that a desired system behavior is obtained and the test can be successfully carried out. Currently, calculations are made in an ad-hoc fashion and involve trial-and-error procedures that may involve activating the system with the sole purpose of finding the correct parameter settings. The goals of this project are to develop mathematical models, control methodologies and associated simulation environments to provide a systematic and comprehensive prediction and real-time control capability. The models and controller designs are expected to be useful in two respects: 1) As a design tool, a model is the only way to determine the effects of design choices without building a prototype, which is, in the context of rocket test stands, impracticable; 2) As a prediction and tuning tool, a good model allows to set system parameters off-line, so that the expected system response conforms to specifications. This includes the setting of physical parameters, such as valve positions, and the configuration and tuning of any feedback controllers in the loop.
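
    As a toy illustration of the "prediction and tuning" use described above, the sketch below simulates a first-order pressure response to a valve command and sweeps a proportional gain offline against a setpoint; the plant model, numbers, and gains are hypothetical and unrelated to any actual test stand.

```python
# Toy offline tuning sweep: first-order pressure plant driven by a valve
# command with a proportional controller. All parameters are hypothetical.
def simulate(kp, setpoint=500.0, tau=4.0, gain=600.0, dt=0.01, t_end=30.0):
    """Integrate dp/dt = (gain*u - p)/tau with valve opening u = clip(kp*(setpoint - p), 0, 1)."""
    p = 0.0
    for _ in range(int(t_end / dt)):
        u = min(max(kp * (setpoint - p), 0.0), 1.0)   # valve opening fraction
        p += dt * (gain * u - p) / tau
    return p

# Sweep controller gains before running the real test to pick an acceptable one.
for kp in (0.005, 0.02, 0.08):
    p_final = simulate(kp)
    print(f"kp={kp:.3f}  final pressure={p_final:6.1f}  steady-state error={500.0 - p_final:6.1f}")
```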

  1. Systematic review and proposal of a field-based physical fitness-test battery in preschool children: the PREFIT battery.

    Science.gov (United States)

    Ortega, Francisco B; Cadenas-Sánchez, Cristina; Sánchez-Delgado, Guillermo; Mora-González, José; Martínez-Téllez, Borja; Artero, Enrique G; Castro-Piñero, Jose; Labayen, Idoia; Chillón, Palma; Löf, Marie; Ruiz, Jonatan R

    2015-04-01

    Physical fitness is a powerful health marker in childhood and adolescence, and it is reasonable to think that it might be just as important in younger children, i.e. preschoolers. At the moment, researchers, clinicians and sport practitioners do not have enough information about which fitness tests are more reliable, valid and informative from the health point of view to be implemented in preschool children. Our aim was to systematically review the studies conducted in preschool children using field-based fitness tests, and examine their (1) reliability, (2) validity, and (3) relationship with health outcomes. Our ultimate goal was to propose a field-based physical fitness-test battery to be used in preschool children. PubMed and Web of Science. Studies conducted in healthy preschool children that included field-based fitness tests. When using PubMed, we included Medical Subject Heading (MeSH) terms to enhance the power of the search. A set of fitness-related terms were combined with 'child, preschool' [MeSH]. The same strategy and terms were used for Web of Science (except for the MeSH option). Since no previous reviews with a similar aim were identified, we searched for all articles published up to 1 April 2014 (no starting date). A total of 2,109 articles were identified, of which 22 articles were finally selected for this review. Most studies focused on reliability of the fitness tests (n = 21, 96%), while very few focused on validity (0 criterion-related validity and 4 (18%) convergent validity) or relationship with health outcomes (0 longitudinal and 1 (5%) cross-sectional study). Motor fitness, particularly balance, was the most studied fitness component, while cardiorespiratory fitness was the least studied. After analyzing the information retrieved in the current systematic review about fitness testing in preschool children, we propose the PREFIT battery, field-based FITness testing in PREschool children. The PREFIT battery is composed of the following

  2. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2014-01-01

    Full Text Available Model criticism is an important stage of model building, and goodness-of-fit tests thus provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to assess the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure which is as yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derived the asymptotic distribution of the mixture test and studied its size and power using Monte Carlo simulations.
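
    A standard building block for such residual-based diagnostics is the Ljung-Box statistic computed from residual autocorrelations. The sketch below applies it to the residuals of a deliberately misspecified ARMA fit using statsmodels, with simulated data and illustrative lag choices; it is not the specific mixed ACF/PACF test proposed in the article.

```python
# Ljung-Box portmanteau check on ARMA residuals (illustrative; not the mixed
# ACF/PACF test proposed in the article).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(42)
# Simulate an AR(1) series, then (mis)fit a pure MA(1) model to it.
e = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + e[t]

fit = ARIMA(y, order=(0, 0, 1)).fit()
lb = acorr_ljungbox(fit.resid, lags=[5, 10, 20], return_df=True)
print(lb)  # small p-values indicate dependence left unexplained by the model
```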

  3. Systematic Pilot Testing as a Step in the Instructional Design Process of Corporate Training and Development.

    Science.gov (United States)

    White, Bethany S.; Branch, Robert Maribe

    2001-01-01

    Discussion of pilot testing instructional materials focuses on a survey that determined the extent to which pilot tests are conducted in identified corporate training environments and ascertains reasons pilot tests were not implemented. Considers factors that influence the decision to pilot test products and suggest further research. (Author/LRW)

  4. Modeling and testing of geometric processing model based on double baselines stereo photogrammetric system

    Science.gov (United States)

    Li, Yingbo; Zhao, Sisi; Hu, Bin; Zhao, Haibo; He, Jinping; Zhao, Xuemin

    2017-10-01

    Aimed at the key problems of 1:5000 scale space stereo mapping and the limited surveying capability over urban areas, and given that the performance indices and surveying systems of existing domestic optical mapping satellites cannot meet the demands of large-scale stereo mapping, it is urgent to develop a very high accuracy space photogrammetric satellite system supporting a 1:5000 scale (or larger). A new surveying system, a double-baseline stereo photogrammetric mode combining a linear-array sensor and an area-array sensor, was proposed, which aims at solving the problems of barriers, distortions and radiation differences in complex ground-object mapping encountered by existing space stereo mapping technology. Based on the collinearity equations, a double-baseline stereo photogrammetric method and a combined adjustment model were presented, systematic error compensation for this model was analyzed, and the positioning precision of double-baseline stereo photogrammetry was studied on both simulated images and images acquired under laboratory conditions. The laboratory tests showed that the camera geometric calibration accuracy is better than 1 μm and the height positioning accuracy is better than 1.5 GSD with GCPs. The results showed that the mode combining one linear-array sensor and one area-array sensor had higher positioning precision. Exploring this new system for 1:5000 scale very high accuracy space stereo mapping can provide new technologies and strategies for achieving domestic very high accuracy space stereo mapping.
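
    The collinearity condition that underpins the adjustment model can be sketched compactly: given camera position, rotation and focal length, a ground point maps to image coordinates as below. The numbers are arbitrary, and the sketch ignores lens distortion and the pushbroom scanning geometry of a linear-array sensor.

```python
# Collinearity equations: project a ground point into image coordinates.
# Arbitrary example values; ignores distortion and pushbroom scan geometry.
import numpy as np

def project(ground_pt, cam_pos, R, f):
    """x = -f*(r11*dX + r12*dY + r13*dZ)/(r31*dX + r32*dY + r33*dZ), likewise for y."""
    d = R @ (np.asarray(ground_pt) - np.asarray(cam_pos))
    x = -f * d[0] / d[2]
    y = -f * d[1] / d[2]
    return x, y

R = np.eye(3)                                  # camera looking straight down, no rotation
print(project(ground_pt=[100.0, 50.0, 10.0],   # metres
              cam_pos=[0.0, 0.0, 500_000.0],   # ~500 km altitude
              R=R, f=0.7))                     # focal length in metres
```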

  5. Inhibition in speed and concentration tests: The Poisson inhibition model

    NARCIS (Netherlands)

    Smit, J.C.; Ven, A.H.G.S. van der

    1995-01-01

    A new model is presented to account for the reaction time fluctuations in concentration tests. The model is a natural generalization of an earlier model, the so-called Poisson-Erlang model, published by Pieters & van der Ven (1982). First, a description is given of the type of tasks for which the

  6. Test Procedures to Assess Somatosensory Abnormalities in Individuals with Peripheral Joint Pain: A Systematic Review of Psychometric Properties.

    Science.gov (United States)

    Alqarni, Abdullah Mohammad; Manlapaz, Donald; Baxter, David; Tumilty, Steve; Mani, Ramakrishnan

    2018-01-19

    Test procedures that were developed to assess somatosensory abnormalities should possess optimal psychometric properties (PMPs) to be used in clinical practice. The aim of this systematic review was to evaluate the literature to assess the level of evidence for PMPs of test procedures investigated in individuals with peripheral joint pain (PJP). A comprehensive electronic literature search was conducted in 7 databases from inception to March 2016. The Quality Appraisal for Reliability Studies (QAREL) checklist and the Consensus-based Standards for the Selection of Health Status Measurement Instruments (COSMIN) tool were used to assess risk for bias of the included studies. Level of evidence was evaluated based on the methodological quality and the quality of the measurement properties. Forty-one studies related to PJP were included. The majority of included studies were considered to be of insufficient methodological quality, and the level of evidence for PMPs varied across different test procedures. The level of evidence for PMPs varied across different test procedures in different types of PJP. Hand-held pressure algometry is the only test procedure that showed moderate positive evidence of intrarater reliability, agreement, and responsiveness, simultaneously, when it was investigated in patients with chronic knee osteoarthritis. This systematic review identified that the level of evidence for PMPs varied across different testing procedures to assess somatosensory abnormalities for different PJP populations. Further research with standardized protocols is recommended to further investigate the predictive ability and responsiveness of reported test procedures in order to warrant their extended utility in clinical practice. © 2018 World Institute of Pain.

  7. Petroleum reservoir data for testing simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, J.M.; Harrison, W.

    1980-09-01

    This report consists of reservoir pressure and production data for 25 petroleum reservoirs. Included are 5 data sets for single-phase (liquid) reservoirs, 1 data set for a single-phase (liquid) reservoir with pressure maintenance, 13 data sets for two-phase (liquid/gas) reservoirs and 6 for two-phase reservoirs with pressure maintenance. Also given are ancillary data for each reservoir that could be of value in the development and validation of simulation models. A bibliography is included that lists the publications from which the data were obtained.

  8. Upgraded Analytical Model of the Cylinder Test

    Energy Technology Data Exchange (ETDEWEB)

    Souers, P. Clark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Lauderbach, Lisa [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Garza, Raul [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Ferranti, Louis [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Vitello, Peter [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center

    2013-03-15

    A Gurney-type equation was previously corrected for wall thinning and angle of tilt, and now we have added shock wave attenuation in the copper wall and air gap energy loss. Extensive calculations were undertaken to calibrate the two new energy loss mechanisms across all explosives. The corrected Gurney equation is recommended for cylinder use over the original 1943 form. The effect of these corrections is to add more energy to the adiabat values from a relative volume of 2 to 7, with low energy explosives having the largest correction. The data was pushed up to a relative volume of about 15 and the JWL parameter ω was obtained directly. Finally, the total detonation energy density was locked to the v = 7 adiabat energy density, so that the Cylinder test gives all necessary values needed to make a JWL.
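
    For context, the uncorrected 1943-style Gurney relation for a cylindrical charge relates wall velocity to the metal-to-charge mass ratio. The sketch below evaluates it for illustrative values of the Gurney constant and M/C; it does not include the wall-thinning, tilt, shock-attenuation or air-gap corrections discussed in the record.

```python
# Classic Gurney relation for a cylinder: v = sqrt(2E) / sqrt(M/C + 1/2).
# Illustrative values only; none of the corrections described above are applied.
import math

def gurney_cylinder_velocity(sqrt_2E_km_s, m_over_c):
    """Wall velocity (km/s) for metal/charge mass ratio M/C and Gurney constant sqrt(2E)."""
    return sqrt_2E_km_s / math.sqrt(m_over_c + 0.5)

for m_over_c in (1.0, 2.0, 3.0):
    v = gurney_cylinder_velocity(sqrt_2E_km_s=2.4, m_over_c=m_over_c)  # ~2.4 km/s is a typical order of magnitude
    print(f"M/C = {m_over_c:.1f} -> wall velocity ~ {v:.2f} km/s")
```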

  9. Upgraded Analytical Model of the Cylinder Test

    Energy Technology Data Exchange (ETDEWEB)

    Souers, P. Clark; Lauderbach, Lisa; Garza, Raul; Ferranti, Louis; Vitello, Peter

    2013-03-15

    A Gurney-type equation was previously corrected for wall thinning and angle of tilt, and now we have added shock wave attenuation in the copper wall and air gap energy loss. Extensive calculations were undertaken to calibrate the two new energy loss mechanisms across all explosives. The corrected Gurney equation is recommended for cylinder use over the original 1943 form. The effect of these corrections is to add more energy to the adiabat values from a relative volume of 2 to 7, with low energy explosives having the largest correction. The data was pushed up to a relative volume of about 15 and the JWL parameter ω was obtained directly. The total detonation energy density was locked to the v=7 adiabat energy density, so that the Cylinder test gives all necessary values needed to make a JWL.

  10. Mathematical Modeling in Tobacco Control Research: Initial Results From a Systematic Review.

    Science.gov (United States)

    Feirman, Shari P; Donaldson, Elisabeth; Glasser, Allison M; Pearson, Jennifer L; Niaura, Ray; Rose, Shyanika W; Abrams, David B; Villanti, Andrea C

    2016-03-01

    The US Food and Drug Administration has expressed interest in using mathematical models to evaluate potential tobacco policies. The goal of this systematic review was to synthesize data from tobacco control studies that employ mathematical models. We searched five electronic databases on July 1, 2013 to identify published studies that used a mathematical model to project a tobacco-related outcome and developed a data extraction form based on the ISPOR-SMDM Modeling Good Research Practices. We developed an organizational framework to categorize these studies and identify models employed across multiple papers. We synthesized results qualitatively, providing a descriptive synthesis of included studies. The 263 studies in this review were heterogeneous with regard to their methodologies and aims. We used the organizational framework to categorize each study according to its objective and map the objective to a model outcome. We identified two types of study objectives (trend and policy/intervention) and three types of model outcomes (change in tobacco use behavior, change in tobacco-related morbidity or mortality, and economic impact). Eighteen models were used across 118 studies. This paper extends conventional systematic review methods to characterize a body of literature on mathematical modeling in tobacco control. The findings of this synthesis can inform the development of new models and the improvement of existing models, strengthening the ability of researchers to accurately project future tobacco-related trends and evaluate potential tobacco control policies and interventions. These findings can also help decision-makers to identify and become oriented with models relevant to their work. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Model Modification in Covariance Structure Modeling: A Comparison among Likelihood Ratio, Lagrange Multiplier, and Wald Tests.

    Science.gov (United States)

    Chou, Chih-Ping; Bentler, P. M.

    1990-01-01

    The empirical performance under null and alternative hypotheses of the likelihood ratio difference test (LRDT), the Lagrange Multiplier test (evaluating the impact of model modification with a specific model), and the Wald test (using a general model) was compared. The new tests for covariance structure analysis performed as well as did the LRDT. (RLC)
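
    The likelihood ratio difference test compared here reduces to a chi-square test on twice the difference in log-likelihoods between nested models. The sketch below shows the generic computation, with made-up log-likelihoods and degrees of freedom standing in for fitted covariance structure models.

```python
# Likelihood ratio difference test between two nested models.
# The log-likelihoods and df below are placeholders, not results from the study.
from scipy import stats

def lr_difference_test(loglik_restricted, loglik_general, df_diff):
    """Chi-square test of the restricted model against the more general one."""
    stat = 2.0 * (loglik_general - loglik_restricted)
    p = stats.chi2.sf(stat, df_diff)
    return stat, p

stat, p = lr_difference_test(loglik_restricted=-1045.2, loglik_general=-1040.1, df_diff=3)
print(f"LR difference = {stat:.2f}, df = 3, p = {p:.4f}")
```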

  12. Clinician-friendly physical performance tests in athletes part 3: a systematic review of measurement properties and correlations to injury for tests in the upper extremity.

    Science.gov (United States)

    Tarara, Daniel T; Fogaca, Lucas K; Taylor, Jeffrey B; Hegedus, Eric J

    2016-05-01

    In parts 1 and 2 of this systematic review, the methodological quality as well as the quality of the measurement properties of physical performance tests (PPTs) of the lower extremity in athletes was assessed. In this study, part 3, PPTs of the upper extremity in athletes are examined. Database and hand searches were conducted to identify primary literature addressing the use of upper extremity PPTs in athletes. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed and the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist was used to critique the methodological quality of each paper. The Terwee Scale was used to analyse the quality of the measurement properties of each test. 11 articles that examined 6 PPTs were identified. The 6 PPTs were: closed kinetic chain upper extremity stability test (CKCUEST), seated shot put (2 hands), unilateral seated shot put, medicine ball throw, modified push-up test and 1-arm hop test. Best evidence synthesis provided moderate positive evidence for the CKCUEST and unilateral seated shot put. Limited positive evidence was available for the medicine ball throw and 1-arm hop test. There are a limited number of upper extremity PPTs used as part of musculoskeletal screening examinations, or as outcome measures in athletic populations. The CKCUEST and unilateral seated shot put are 2 promising PPTs based on moderate evidence. However, the utility of the PPTs in injured populations is unsubstantiated in literature and warrants further investigation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. The effects of HIV self-testing on the uptake of HIV testing and linkage to antiretroviral treatment among adults in Africa: a systematic review protocol.

    Science.gov (United States)

    Njau, Bernard; Damian, Damian J; Abdullahi, Leila; Boulle, Andrew; Mathews, Catherine

    2016-04-05

    HIV is still a global public health problem. More than 75 % of HIV-infected people are in Africa, and most of them are unaware of their HIV status, which is a barrier to accessing antiretroviral treatment. Our review aims, firstly, to determine whether HIV self-testing is an effective method to increase the uptake of testing, the yield of new HIV-positive diagnoses, and the linkage to antiretroviral treatment. Secondly, we aim to review the factors that facilitate or impede the uptake of HIV self-testing. Participants will be adults living in Africa. For the first aim, the intervention will be HIV self-testing either alone or in addition to HIV testing standard of care. The comparison will be HIV testing standard of care. The primary outcomes will be (i) uptake of HIV testing and (ii) yield of new HIV-positive diagnoses. The secondary outcomes will be (a) linkage to antiretroviral (ARV) treatment and (b) incidence of social harms. For the second aim, we will review barriers and facilitators to the uptake of self-testing. We will search PubMed, Cochrane Central Register of Controlled Trials, Scopus, Web of Science, WHOLIS, Africa Wide, and CINAHL for eligible studies from 1998, with no language limits. We will check reference lists of included studies for other eligible reports. Eligible studies will include experimental and observational studies. Two authors will independently screen the search output, select studies, and extract data, resolving discrepancies by consensus and discussion. Two authors will use Cochrane risk of bias tools for experimental studies, the Newcastle-Ottawa Quality Assessment Scale for observational studies, and the Critical Appraisal Skills Programme (CASP) quality assessment tool for qualitative studies. Innovative and cost-effective community-based HIV testing strategies, such as self-testing, will contribute to universal coverage of HIV testing in Africa. The findings from this systematic review will guide development of self-testing

  14. A systematic review of repetitive functional task practice with modelling of resource use, costs and effectiveness.

    Science.gov (United States)

    French, B; Leathley, M; Sutton, C; McAdam, J; Thomas, L; Forster, A; Langhorne, P; Price, C; Walker, A; Watkins, C

    2008-07-01

    To determine whether repetitive functional task practice (RFTP) after stroke improves limb-specific or global function or activities of daily living and whether treatment effects are dependent on the amount of practice, or the type or timing of the intervention. Also to provide estimates of the cost-effectiveness of RFTP. The main electronic databases were searched from inception to week 4, September 2006. Searches were also carried out on non-English-language databases and for unpublished trials up to May 2006. Standard quantitative methods were used to conduct the systematic review. The measures of efficacy of RFTP from the data synthesis were used to inform an economic model. The model used a pre-existing data set and tested the potential impact of RFTP on cost. An incremental cost per quality-adjusted life-year (QALY) gained for RFTP was estimated from the model. Sensitivity analyses around the assumptions made for the model were used to test the robustness of the estimates. Thirty-one trials with 34 intervention-control pairs and 1078 participants were included. Overall, it was found that some forms of RFTP resulted in improvement in global function, and in both arm and lower limb function. Overall standardised mean difference in data suitable for pooling was 0.38 [95% confidence interval (CI) 0.09 to 0.68] for global motor function, 0.24 (95% CI 0.06 to 0.42) for arm function and 0.28 (95% CI 0.05 to 0.51) for functional ambulation. Results suggest that training may be sufficient to have an impact on activities of daily living. Retention effects of training persist for up to 6 months, but whether they persist beyond this is unclear. There was little or no evidence that treatment effects overall were modified by time since stroke or dosage of task practice, but results for upper limb function were modified by type of intervention. The economic modelling suggested that RFTP was cost-effective. Given a threshold for cost-effectiveness of 20,000 pounds per QALY
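
    The cost-effectiveness summary used here, an incremental cost per QALY gained, is a simple ratio. The sketch below computes it for hypothetical intervention and comparator arms and checks it against a willingness-to-pay threshold of £20,000 per QALY, mirroring the threshold mentioned in the record; the costs and QALYs are invented.

```python
# Incremental cost-effectiveness ratio (ICER) for an intervention vs. control.
# Costs and QALYs are hypothetical placeholders.
def icer(cost_int, qaly_int, cost_ctrl, qaly_ctrl):
    """Incremental cost per QALY gained."""
    return (cost_int - cost_ctrl) / (qaly_int - qaly_ctrl)

ratio = icer(cost_int=1800.0, qaly_int=0.71, cost_ctrl=1500.0, qaly_ctrl=0.68)
threshold = 20_000.0  # GBP per QALY, as in the record
verdict = "cost-effective" if ratio < threshold else "not cost-effective"
print(f"ICER = £{ratio:,.0f} per QALY -> {verdict} at £20,000/QALY")
```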

  15. Steel Containment Vessel Model Test: Results and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J.F.; Hashimote, T.; Hessheimer, M.F.; Luk, V.K.

    1999-03-01

    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.

  16. Using Built-In Domain-Specific Modeling Support to Guide Model-Based Test Generation

    Directory of Open Access Journals (Sweden)

    Teemu Kanstrén

    2012-02-01

    Full Text Available We present a model-based testing approach to support automated test generation with domain-specific concepts. This involves a language expert, who is an expert at building test models, and domain experts, who are experts in the domain of the system under test. First, we provide a framework to support the language expert in building test models using a full (Java) programming language with the help of simple but powerful modeling elements of the framework. Second, based on the model built with this framework, the toolset automatically forms a domain-specific modeling language that can be used to further constrain and guide test generation from these models by a domain expert. This makes it possible to generate a large set of test cases covering the full model, to cover chosen (constrained) parts of the model, or to manually define specific test cases on top of the model while using concepts familiar to the domain experts.

  17. Model tests on dynamic performance of RC shear walls

    International Nuclear Information System (INIS)

    Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.

    1991-01-01

    For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essentially important to properly evaluate its restoring force characteristics under dynamic loading condition and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain the basic information on the dynamic restoring force characteristics and damping performance of shear walls, the dynamic test using a large shaking table, static displacement control test and the pseudo-dynamic test on the models of a shear wall were conducted. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models mainly due to the effect of loading rate. (K.I.)

  18. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review

    Directory of Open Access Journals (Sweden)

    Julia Ratter

    2014-09-01

    [Ratter J, Radlinger L, Lucas C (2014) Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review. Journal of Physiotherapy 60: 144–150]

  19. Routine testing for blood-borne viruses in prisons: a systematic review.

    Science.gov (United States)

    Rumble, Caroline; Pevalin, David J; O'Moore, Éamonn

    2015-12-01

    People in prison have a higher burden of blood-borne virus (BBV) infection than the general population, and prisons present an opportunity to test for BBVs in high-risk, underserved groups. Changes to the BBV testing policies in English prisons have recently been piloted. This review will enable existing evidence to inform policy revisions. We describe components of routine HIV, hepatitis B and C virus testing policies in prisons and quantify testing acceptance, coverage, result notification and diagnosis. We searched five databases for studies of both opt-in (testing offered to all and the individual chooses to have the test or not) and opt-out (the individual is informed the test will be performed unless they actively refuse) prison BBV testing policies. Forty-four studies published between 1989 and 2013 met the inclusion criteria. Of these, 82% were conducted in the USA, 91% included HIV testing and most tested at the time of incarceration. HIV testing acceptance rates ranged from 22 to 98% and testing coverage from 3 to 90%. Mixed results were found for equity in uptake. Six studies reported reasons for declining a test including recent testing and fear. While the quality of evidence is mixed, this review suggests that reasonable rates of uptake can be achieved with opt-in and, even better, with opt-out HIV testing policies. Little evidence was found relating to hepatitis testing. Policies need to specify exclusion criteria and consider consent processes, type of test and timing of the testing offer to balance acceptability, competence and availability of individuals. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association.

  20. Clinical uncertainties, health service challenges, and ethical complexities of HIV "test-and-treat": a systematic review.

    Science.gov (United States)

    Kulkarni, Sonali P; Shah, Kavita R; Sarma, Karthik V; Mahajan, Anish P

    2013-06-01

    Despite the HIV "test-and-treat" strategy's promise, questions about its clinical rationale, operational feasibility, and ethical appropriateness have led to vigorous debate in the global HIV community. We performed a systematic review of the literature published between January 2009 and May 2012 using PubMed, SCOPUS, Global Health, Web of Science, BIOSIS, Cochrane CENTRAL, EBSCO Africa-Wide Information, and EBSCO CINAHL Plus databases to summarize clinical uncertainties, health service challenges, and ethical complexities that may affect the test-and-treat strategy's success. A thoughtful approach to research and implementation to address clinical and health service questions and meaningful community engagement regarding ethical complexities may bring us closer to safe, feasible, and effective test-and-treat implementation.

  1. Clinical Uncertainties, Health Service Challenges, and Ethical Complexities of HIV “Test-and-Treat”: A Systematic Review

    Science.gov (United States)

    Shah, Kavita R.; Sarma, Karthik V.; Mahajan, Anish P.

    2013-01-01

    Despite the HIV “test-and-treat” strategy’s promise, questions about its clinical rationale, operational feasibility, and ethical appropriateness have led to vigorous debate in the global HIV community. We performed a systematic review of the literature published between January 2009 and May 2012 using PubMed, SCOPUS, Global Health, Web of Science, BIOSIS, Cochrane CENTRAL, EBSCO Africa-Wide Information, and EBSCO CINAHL Plus databases to summarize clinical uncertainties, health service challenges, and ethical complexities that may affect the test-and-treat strategy’s success. A thoughtful approach to research and implementation to address clinical and health service questions and meaningful community engagement regarding ethical complexities may bring us closer to safe, feasible, and effective test-and-treat implementation. PMID:23597344

  2. Are chiropractic tests for the lumbo-pelvic spine reliable and valid? A systematic critical literature review

    DEFF Research Database (Denmark)

    Hestbaek, L; Leboeuf-Yde, C

    2000-01-01

    OBJECTIVE: To systematically review the peer-reviewed literature about the reliability and validity of chiropractic tests used to determine the need for spinal manipulative therapy of the lumbo-pelvic spine, taking into account the quality of the studies. DATA SOURCES: The CHIROLARS database...... evaluated in relation to reliability and validity. Only tests for palpation for pain had consistently acceptable results. Motion palpation of the lumbar spine might be valid but showed poor reliability, whereas motion palpation of the sacroiliac joints seemed to be slightly reliable but was not shown....... Documentation of applied kinesiology was not available. Palpation for muscle tension, palpation for misalignment, and visual inspection were either undocumented, unreliable, or not valid. CONCLUSION: The detection of the manipulative lesion in the lumbo-pelvic spine depends on valid and reliable tests. Because...

  3. Coordinating the Provision of Health Services in Humanitarian Crises: a Systematic Review of Suggested Models.

    Science.gov (United States)

    Lotfi, Tamara; Bou-Karroum, Lama; Darzi, Andrea; Hajjar, Rayan; El Rahyel, Ahmed; El Eid, Jamale; Itani, Mira; Brax, Hneine; Akik, Chaza; Osman, Mona; Hassan, Ghayda; El-Jardali, Fadi; Akl, Elie

    2016-08-03

    Our objective was to identify published models of coordination between entities funding or delivering health services in humanitarian crises, whether the coordination took place during or after the crises. We included reports describing models of coordination in sufficient detail to allow reproducibility. We also included reports describing implementation of identified models, as case studies. We searched Medline, PubMed, EMBASE, Cochrane Central Register of Controlled Trials, CINAHL, PsycINFO, and the WHO Global Health Library. We also searched websites of relevant organizations. We followed standard systematic review methodology. Our search captured 14,309 citations. The screening process identified 34 eligible papers describing five models of coordination for delivering health services: the "Cluster Approach" (with 16 case studies), the 4Ws "Who is Where, When, doing What" mapping tool (with four case studies), the "Sphere Project" (with two case studies), the "5x5" model (with one case study), and the "model of information coordination" (with one case study). The 4Ws and the 5x5 focus on coordination of services for mental health; the remaining models do not focus on a specific health topic. The Cluster Approach appears to be the most widely used. One case study was a mixed implementation of the Cluster Approach and the Sphere model. We identified no model of coordination for the funding of health services. This systematic review identified five proposed coordination models that have been implemented by entities funding or delivering health services in humanitarian crises. There is a need to compare the effect of these different models on outcomes such as availability of and access to health services.

  4. Economic Evaluations of Multicomponent Disease Management Programs with Markov Models: A Systematic Review.

    Science.gov (United States)

    Kirsch, Florian

    2016-12-01

    Disease management programs (DMPs) for chronic diseases are being increasingly implemented worldwide. To present a systematic overview of the economic effects of DMPs with Markov models. The quality of the models is assessed, the method by which the DMP intervention is incorporated into the model is examined, and the differences in the structure and data used in the models are considered. A literature search was conducted; the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement was followed to ensure systematic selection of the articles. Study characteristics (e.g., results, the intensity of the DMP and usual care, model design, time horizon, discount rates, utility measures, and cost of illness) were extracted from the reviewed studies. Model quality was assessed by two researchers with two different appraisals: one proposed by Philips et al. (Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006;24:355-71) and the other proposed by Caro et al. (Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health 2014;17:174-82). A total of 16 studies (9 on chronic heart disease, 2 on asthma, and 5 on diabetes) met the inclusion criteria. Five studies reported cost savings and 11 studies reported additional costs. In the quality assessment, the overall score of the models ranged from 39% to 65% on one appraisal and from 34% to 52% on the other. Eleven models integrated effectiveness derived from a clinical trial or a meta-analysis of complete DMPs and only five models combined intervention effects from different sources into a DMP. The main limitations of the models are poor reporting practice and the variation in the selection of input parameters. Eleven of the 14 studies reported cost-effectiveness results of less than $30,000 per quality-adjusted life-year and
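
    To make the structure of such evaluations concrete, the following is a minimal Python sketch of a generic three-state cohort Markov model with discounted costs and QALYs; all states, transition probabilities, costs and utilities are hypothetical and are not taken from any of the reviewed studies.

    ```python
    import numpy as np

    # hypothetical three-state cohort Markov model: well, chronic disease, dead
    transition = np.array([[0.90, 0.08, 0.02],
                           [0.00, 0.93, 0.07],
                           [0.00, 0.00, 1.00]])   # rows = from-state, columns = to-state
    annual_cost = np.array([200.0, 3500.0, 0.0])  # cost per state per one-year cycle
    utility     = np.array([0.95, 0.70, 0.0])     # QALY weight per state
    discount    = 0.03

    cohort = np.array([1.0, 0.0, 0.0])            # everyone starts in "well"
    total_cost = total_qaly = 0.0
    for year in range(20):                        # 20 one-year cycles
        total_cost += np.dot(cohort, annual_cost) / (1 + discount) ** year
        total_qaly += np.dot(cohort, utility) / (1 + discount) ** year
        cohort = cohort @ transition

    print(f"discounted cost = {total_cost:.0f}, discounted QALYs = {total_qaly:.2f}")
    ```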

  5. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    DEFF Research Database (Denmark)

    H. Hjort, Ulrik; Rasmussen, Jacob Illum; Larsen, Kim Guldstrand

    2009-01-01

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates

  6. A Bootstrap Cointegration Rank Test for Panels of VAR Models

    DEFF Research Database (Denmark)

    Callot, Laurent

    functions of the individual Cointegrated VARs (CVAR) models. A bootstrap based procedure is used to compute empirical distributions of the trace test statistics for these individual models. From these empirical distributions two panel trace test statistics are constructed. The satisfying small sample...

  7. Primary Health Care Models Addressing Health Equity for Immigrants: A Systematic Scoping Review.

    Science.gov (United States)

    Batista, Ricardo; Pottie, Kevin; Bouchard, Louise; Ng, Edward; Tanuseputro, Peter; Tugwell, Peter

    2018-02-01

    To examine two healthcare models, specifically "Primary Medical Care" (PMC) and "Primary Health Care" (PHC) in the context of immigrant populations' health needs. We conducted a systematic scoping review of studies that examined primary care provided to immigrants. We categorized studies into two models, PMC and PHC. We used subjects of access barriers and preventive interventions to analyze the potential of PMC/PHC to address healthcare inequities. From 1385 articles, 39 relevant studies were identified. In the context of immigrant populations, the PMC model was found to be more oriented to implement strategies that improve quality of care of the acute and chronically ill, while PHC models focused more on health promotion and strategies to address cultural and access barriers to care, and preventive strategies to address social determinants of health. Primary Health Care models may be better equipped to address social determinants of health, and thus have more potential to reduce immigrant populations' health inequities.

  8. Glide back booster wind tunnel model testing

    Science.gov (United States)

    Pricop, M. V.; Cojocaru, M. G.; Stoica, C. I.; Niculescu, M. L.; Neculaescu, A. M.; Persinaru, A. G.; Boscoianu, M.

    2017-07-01

    Affordable space access requires partial or ideally full launch vehicle reuse, which is in line with clean environment requirements. Although the idea is old, its practical use is difficult, requiring very large technology investment for qualification. Rocket gliders like the Space Shuttle have been successfully operated, but the price, and correspondingly the energy footprint, were found not to be sustainable. For medium launchers, there is finally a very promising platform in Falcon 9. For very small launchers the situation is more complex, because the performance index (payload to start mass) is already small compared with medium and heavy launchers. For partially reusable micro launchers this index is even smaller. However, the challenge has to be taken on because it is likely that, in a multiyear effort, technology will enable the performance recovery needed to make such a system economically and environmentally feasible. The current paper is devoted to a small unitary glide back booster which is foreseen to be assembled in a number of possible configurations. Although the level of analysis is not deep, the solution is analyzed from the aerodynamic point of view. A wind tunnel model is designed, with an active canard, to enable a more efficient wind tunnel campaign, as a national level premiere.

  9. Collider tests of the Renormalizable Coloron Model

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yang; Dobrescu, Bogdan A.

    2018-04-01

    The coloron, a massive version of the gluon present in gauge extensions of QCD, has been searched for at the LHC as a dijet or top quark pair resonance. We point out that in the Renormalizable Coloron Model (ReCoM) with a minimal field content to break the gauge symmetry, a color-octet scalar and a singlet scalar are naturally lighter than the coloron because they are pseudo Nambu-Goldstone bosons. Consequently, the coloron may predominantly decay into scalar pairs, leading to novel signatures at the LHC. When the color-octet scalar is lighter than the singlet, or when the singlet mass is above roughly 1 TeV, the signatures consist of multi-jet resonances of multiplicity up to 12, including topologies with multi-prong jet substructure, slightly displaced vertices, and sometimes a top quark pair. When the singlet is the lightest ReCoM boson and lighter than about 1 TeV, its main decays ($W^+W^-$, $\\gamma Z$, $ZZ$) arise at three loops. The LHC signatures then involve two or four boosted electroweak bosons, often originating from highly displaced vertices, plus one or two pairs of prompt jets or top quarks.

  10. Economic contract theory tests models of mutualism.

    Science.gov (United States)

    Weyl, E Glen; Frederickson, Megan E; Yu, Douglas W; Pierce, Naomi E

    2010-09-07

    Although mutualisms are common in all ecological communities and have played key roles in the diversification of life, our current understanding of the evolution of cooperation applies mostly to social behavior within a species. A central question is whether mutualisms persist because hosts have evolved costly punishment of cheaters. Here, we use the economic theory of employment contracts to formulate and distinguish between two mechanisms that have been proposed to prevent cheating in host-symbiont mutualisms, partner fidelity feedback (PFF) and host sanctions (HS). Under PFF, positive feedback between host fitness and symbiont fitness is sufficient to prevent cheating; in contrast, HS posits the necessity of costly punishment to maintain mutualism. A coevolutionary model of mutualism finds that HS are unlikely to evolve de novo, and published data on legume-rhizobia and yucca-moth mutualisms are consistent with PFF and not with HS. Thus, in systems considered to be textbook cases of HS, we find poor support for the theory that hosts have evolved to punish cheating symbionts; instead, we show that even horizontally transmitted mutualisms can be stabilized via PFF. PFF theory may place previously underappreciated constraints on the evolution of mutualism and explain why punishment is far from ubiquitous in nature.

  11. Standardized tests of handwriting readiness: a systematic review of the literature

    NARCIS (Netherlands)

    Hartingsveldt, M.J. van; Groot, I.J.M. de; Aarts, P.B.M.; Nijhuis-Van der Sanden, M.W.G.

    2011-01-01

    AIM: To establish if there are psychometrically sound standardized tests or test items to assess handwriting readiness in 5- and 6-year-old children on the levels of occupations, activities/tasks, and performance. METHOD: Electronic databases were searched to identify measurement instruments. Tests

  12. Item Response Theory Models for Performance Decline during Testing

    Science.gov (United States)

    Jin, Kuan-Yu; Wang, Wen-Chung

    2014-01-01

    Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…

  13. Conformance test development with the Java modeling language

    DEFF Research Database (Denmark)

    Søndergaard, Hans; Korsholm, Stephan E.; Ravn, Anders P.

    2017-01-01

    In order to claim conformance with a Java Specification Request, a Java implementation has to pass all tests in an associated Technology Compatibility Kit (TCK). This paper presents a model-based development of a TCK test suite and a test execution tool for the draft Safety-Critical Java (SCJ) pr...

  14. Testing for spatial error dependence in probit models

    NARCIS (Netherlands)

    Amaral, P. V.; Anselin, L.; Arribas-Bel, D.

    2013-01-01

    In this note, we compare three test statistics that have been suggested to assess the presence of spatial error autocorrelation in probit models. We highlight the differences between the tests proposed by Pinkse and Slade (J Econom 85(1):125-254, 1998), Pinkse (Asymptotics of the Moran test and a

  15. Essential value of cocaine and food in rats: tests of the exponential model of demand.

    Science.gov (United States)

    Christensen, Chesley J; Silberberg, Alan; Hursh, Steven R; Huntsberry, Mary E; Riley, Anthony L

    2008-06-01

    To provide a prospective test of the predictive adequacy of the exponential model of demand (Hursh and Silberberg, Psych Rev 115(1):186-198, 2008). In Experiment 1, to measure the 'essential value' (the propensity to defend consumption with changes in price) of cocaine and food in a demand analysis (functional relation between price and consumption) by means of the exponential model; in Experiment 2, to test whether the model's systematic underestimation of cocaine consumption in Experiment 1 was due to weight loss; and in Experiment 3, to evaluate the effects of cocaine on the essential value of food. In Experiment 1, demand curves for food and cocaine were determined by measuring consumption of these goods in a multiple schedule over a range of fixed ratios; in Experiment 2, a demand curve for only cocaine was determined; and in Experiment 3, demand for food was determined in the absence of cocaine. In Experiment 1, the exponential equation accommodated high portions of variance for both curves, but systematically underestimated cocaine demand; in Experiment 2, this predictive underestimation of the equation was eliminated; and in Experiment 3, the essential value of food was greater than in Experiment 1. The exponential model of demand accommodated the data variance for all cocaine and food demand curves. Compared to food, cocaine is a good of lower essential value.
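
    The exponential demand equation referred to above has the form log10(Q) = log10(Q0) + k(e^(-alpha*Q0*C) - 1), where C is price, Q0 is demand at zero price, k is a range constant and alpha is the essential-value parameter. The Python sketch below fits that equation to hypothetical consumption data with k held fixed; the numbers are illustrative only and are not the experimental data from the cited study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def log_demand(price, q0, alpha, k=2.0):
        # Hursh & Silberberg (2008): log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * C) - 1)
        return np.log10(q0) + k * (np.exp(-alpha * q0 * price) - 1.0)

    # hypothetical consumption (reinforcers earned per session) at each fixed-ratio price
    price = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
    log_q = np.log10(np.array([95.0, 88.0, 75.0, 55.0, 25.0, 6.0]))

    # with p0 of length 2, only Q0 and alpha are fitted; the range constant k stays fixed
    (q0_hat, alpha_hat), _ = curve_fit(log_demand, price, log_q, p0=[100.0, 1e-4])
    print(f"Q0 = {q0_hat:.1f}, essential-value parameter alpha = {alpha_hat:.2e}")
    ```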

  16. Specification test for Markov models with measurement errors.

    Science.gov (United States)

    Kim, Seonjin; Zhao, Zhibiao

    2014-09-01

    Most existing works on specification testing assume that we have direct observations from the model of interest. We study specification testing for Markov models based on contaminated observations. The evolving model dynamics of the unobservable Markov chain is implicitly coded into the conditional distribution of the observed process. To test whether the underlying Markov chain follows a parametric model, we propose measuring the deviation between nonparametric and parametric estimates of conditional regression functions of the observed process. Specifically, we construct a nonparametric simultaneous confidence band for conditional regression functions and check whether the parametric estimate is contained within the band.
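
    As a rough illustration of the idea of comparing nonparametric and parametric estimates of a conditional regression function, the sketch below simulates a contaminated AR(1) chain, forms a Nadaraya-Watson estimate of E[Y_t | Y_{t-1}] and measures its sup-norm deviation from the conditional mean implied by the null model. It deliberately omits the measurement-error adjustment and the simultaneous confidence band construction that the cited test develops; all data are synthetic.

    ```python
    import numpy as np

    def nadaraya_watson(grid, x, y, h):
        """Kernel (Nadaraya-Watson) estimate of the conditional mean E[Y_t | Y_{t-1} = x]."""
        w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
        return (w @ y) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    n = 2000
    latent = np.zeros(n)
    for t in range(1, n):                               # latent AR(1) Markov chain
        latent[t] = 0.6 * latent[t - 1] + rng.standard_normal()
    observed = latent + 0.3 * rng.standard_normal(n)    # contaminated observations

    x, y = observed[:-1], observed[1:]
    grid = np.linspace(-2.0, 2.0, 41)
    nonparametric = nadaraya_watson(grid, x, y, h=0.3)
    parametric = 0.6 * grid      # conditional mean under the hypothesised AR(1) dynamics

    # sup-norm deviation; the cited test would compare this against a confidence band
    deviation = float(np.max(np.abs(nonparametric - parametric)))
    print(f"maximum deviation on the grid = {deviation:.3f}")
    ```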

  17. Testing of materials and scale models for impact limiters

    International Nuclear Information System (INIS)

    Maji, A.K.; Satpathi, D.; Schryer, H.L.

    1991-01-01

    Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and on the design and testing of scale models made out of these "impact limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation, and compared with static test results. A scale model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper.

  18. A Novel, Physics-Based Data Analytics Framework for Reducing Systematic Model Errors

    Science.gov (United States)

    Wu, W.; Liu, Y.; Vandenberghe, F. C.; Knievel, J. C.; Hacker, J.

    2015-12-01

    Most climate and weather models exhibit systematic biases, such as underpredicted diurnal temperatures in the WRF (Weather Research and Forecasting) model. General approaches to alleviate the systematic biases include improving model physics and numerics, improving data assimilation, and bias correction through post-processing. In this study, we developed a novel, physics-based data analytics framework in post-processing by taking advantage of ever-growing high-resolution (spatial and temporal) observational and modeling data. In the framework, a spatiotemporal PCA (Principal Component Analysis) is first applied to the observational data to filter out noise and information on scales that a model may not be able to resolve. The filtered observations are then used to establish regression relationships with archived model forecasts in the same spatiotemporal domain. The regressions along with the model forecasts predict the projected observations in the forecasting period. The pre-regression PCA procedure strengthens the regressions and enhances predictive skill. We then combine the projected observations with the past observations to apply PCA iteratively to derive the final forecasts. This post-regression PCA reconstructs variances and scales of information that are lost in the regression. The framework was examined and validated with 24 days of 5-minute observational data and archives from the WRF model at 27 stations near Dugway Proving Ground, Utah. The validation shows significant bias reduction in the diurnal cycle of predicted surface air temperature compared to the direct output from the WRF model. Additionally, unlike other post-processing bias correction schemes, the data analytics framework does not require long-term historical data and model archives. A week or two of data is enough to take into account changes in weather regimes. The program, written in Python, is also computationally efficient.
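
    One possible reading of the pre-regression PCA plus regression steps is sketched below in Python with scikit-learn; the array shapes, station count and noise levels are hypothetical, and the iterative post-regression PCA step is omitted.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # hypothetical arrays: rows = 5-minute time steps, columns = 27 stations
    obs_hist  = rng.random((500, 27))                                  # past observations
    fcst_hist = obs_hist + 0.5 + 0.1 * rng.standard_normal((500, 27))  # biased archived forecasts
    fcst_new  = rng.random((48, 27)) + 0.5                             # new raw forecasts

    # 1) pre-regression PCA: keep the leading modes of the observations, filtering
    #    noise and scales the model cannot resolve
    pca = PCA(n_components=5).fit(obs_hist)
    obs_filtered = pca.inverse_transform(pca.transform(obs_hist))

    # 2) regress the filtered observations on the archived model forecasts
    reg = LinearRegression().fit(fcst_hist, obs_filtered)

    # 3) project the new forecasts into observation space (bias-corrected output);
    #    the iterative post-regression PCA reconstruction is omitted in this sketch
    corrected = reg.predict(fcst_new)
    print(corrected.shape)
    ```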

  19. Bankruptcy risk model and empirical tests.

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M; Urosevic, Branko; Stanley, H Eugene

    2010-10-26

    We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor--the debt-to-asset ratio R--in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes's theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees--although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers.
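
    The two computational ingredients described here, estimating a Pareto/Zipf tail exponent and applying Bayes' theorem to relate the two conditional probabilities, can be illustrated with the short Python sketch below; the debt-to-asset ratios are simulated, not the firm data analyzed in the study.

    ```python
    import numpy as np

    def hill_exponent(x, xmin):
        """Maximum-likelihood (Hill) estimate of a Pareto/Zipf tail exponent."""
        tail = x[x >= xmin]
        return 1.0 + tail.size / np.sum(np.log(tail / xmin))

    rng = np.random.default_rng(0)
    # hypothetical debt-to-asset ratios R for all firms and for the bankrupt subset
    r_all      = rng.pareto(3.0, 50_000) + 1.0
    r_bankrupt = rng.pareto(2.0, 2_737) + 1.0

    print("tail exponent, all firms:     ", hill_exponent(r_all, xmin=2.0))
    print("tail exponent, bankrupt firms:", hill_exponent(r_bankrupt, xmin=2.0))

    # Bayes' theorem: P(bankrupt | R >= r) = P(R >= r | bankrupt) P(bankrupt) / P(R >= r)
    p_bankrupt = r_bankrupt.size / r_all.size
    r = 2.5
    posterior = np.mean(r_bankrupt >= r) * p_bankrupt / np.mean(r_all >= r)
    print(f"P(bankrupt | R >= {r}) = {posterior:.3f}")
    ```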

  20. Distress in unaffected individuals who decline, delay or remain ineligible for genetic testing for hereditary diseases: a systematic review.

    Science.gov (United States)

    Heiniger, Louise; Butow, Phyllis N; Price, Melanie A; Charles, Margaret

    2013-09-01

    Reviews on the psychosocial aspects of genetic testing for hereditary diseases typically focus on outcomes for carriers and non-carriers of genetic mutations. However, the majority of unaffected individuals from high-risk families do not undergo predictive testing. The aim of this review was to examine studies on psychosocial distress in unaffected individuals who delay, decline or remain ineligible for predictive genetic testing. Systematic searches of Medline, CINAHL, PsychINFO, PubMed and handsearching of related articles published between 1990 and 2012 identified 23 articles reporting 17 different studies that were reviewed and subjected to quality assessment. Findings suggest that definitions of delaying and declining are not always straightforward, and few studies have investigated psychological distress among individuals who remain ineligible for testing. Findings related to distress in delayers and decliners have been mixed, but there is evidence to suggest that cancer-related distress is lower in those who decline genetic counselling and testing, compared with testers, and that those who remain ineligible for testing experience more anxiety than tested individuals. Psychological, personality and family history vulnerability factors were identified for decliners and individuals who are ineligible for testing. The small number of studies and methodological limitations preclude definitive conclusions. Nevertheless, subgroups of those who remain untested appear to be at increased risk for psychological morbidity. As the majority of unaffected individuals do not undergo genetic testing, further research is needed to better understand the psychological impact of being denied the option of testing, declining and delaying testing. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Do the tuberculin skin test and the QuantiFERON-TB Gold in-tube test agree in detecting latent tuberculosis among high-risk contacts? A systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Erfan Ayubi

    2015-10-01

    OBJECTIVES: The QuantiFERON-TB Gold in-tube test (QFT-GIT) and the tuberculin skin test (TST) are used to diagnose latent tuberculosis infection (LTBI). However, conclusive evidence regarding the agreement of these two tests among high-risk contacts is lacking. This systematic review and meta-analysis aimed to estimate the agreement between the TST and the QFT-GIT using kappa statistics. METHODS: According to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, scientific databases including PubMed, Scopus, and Ovid were searched using a targeted search strategy to identify relevant studies published as of June 2015. Two researchers reviewed the eligibility of studies and extracted data from them. The pooled kappa estimate was determined using a random-effects model. Subgroup analysis, Egger's test, and sensitivity analysis were also performed. RESULTS: A total of 6,744 articles were retrieved in the initial search, of which 24 studies had data suitable for meta-analysis. The pooled kappa coefficient and prevalence-adjusted bias-adjusted kappa were 0.40 (95% confidence interval [CI], 0.34 to 0.45) and 0.45 (95% CI, 0.38 to 0.49), respectively. The results of the subgroup analysis found that age group, quality of the study, location, and the TST cutoff point affected heterogeneity for the kappa estimate. No publication bias was found (Begg's test, p=0.53; Egger's test, p=0.32). CONCLUSIONS: The agreement between the QFT-GIT and the TST in diagnosing LTBI among high-risk contacts was found to range from fair to moderate.
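
    As an illustration of how a pooled kappa can be obtained, the sketch below computes Cohen's kappa from per-study 2x2 TST/QFT-GIT agreement tables and pools the estimates with a DerSimonian-Laird random-effects model; the tables and within-study variances are hypothetical, not data from the included studies.

    ```python
    import numpy as np

    def cohens_kappa(a, b, c, d):
        """Cohen's kappa from a 2x2 agreement table:
        a = TST+/QFT+, b = TST+/QFT-, c = TST-/QFT+, d = TST-/QFT-."""
        n = a + b + c + d
        p_obs = (a + d) / n
        p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
        return (p_obs - p_exp) / (1 - p_exp)

    # hypothetical per-study 2x2 tables and assumed within-study variances of kappa
    tables    = [(40, 10, 15, 85), (25, 20, 10, 95), (60, 30, 20, 140)]
    variances = np.array([0.004, 0.006, 0.003])
    kappas    = np.array([cohens_kappa(*t) for t in tables])

    # DerSimonian-Laird random-effects pooling
    w = 1.0 / variances
    q = np.sum(w * (kappas - np.sum(w * kappas) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (kappas.size - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * kappas) / np.sum(w_star)
    print(f"pooled kappa = {pooled:.2f}")
    ```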

  2. Molecular structure based property modeling: Development/ improvement of property models through a systematic property-data-model analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol Shivajirao; Sarup, Bent; Sin, Gürkan

    2013-01-01

    The objective of this work is to develop a method for performing property-data-model analysis so that efficient use of knowledge of properties could be made in the development/improvement of property prediction models. The method includes: (i) analysis of property data and its consistency check; (ii) selection of the most appropriate form of the property model; (iii) selection of the data-set for performing parameter regression and uncertainty analysis; and (iv) analysis of model prediction errors to take necessary corrective steps to improve the accuracy and the reliability of property predictions. In addition, a method for selecting a minimum data-set for the parameter regression is also discussed for the cases where it is preferred to retain some data-points from the total data-set to test the reliability of predictions for validation purposes.

  3. A systematic review of qualitative findings on factors enabling and deterring uptake of HIV testing in Sub-Saharan Africa.

    Science.gov (United States)

    Musheke, Maurice; Ntalasha, Harriet; Gari, Sara; McKenzie, Oran; Bond, Virginia; Martin-Hilber, Adriane; Merten, Sonja

    2013-03-11

    Despite Sub-Saharan Africa (SSA) being the epicenter of the HIV epidemic, uptake of HIV testing is not optimal. While qualitative studies have been undertaken to investigate factors influencing uptake of HIV testing, systematic reviews to provide a more comprehensive understanding are lacking. Using Noblit and Hare's meta-ethnography method, we synthesised published qualitative research to understand factors enabling and deterring uptake of HIV testing in SSA. We identified 5,686 citations out of which 56 were selected for full text review and synthesised 42 papers from 13 countries using Malpass' notion of first-, second-, and third-order constructs. The predominant factors enabling uptake of HIV testing are deterioration of physical health and/or death of sexual partner or child. The roll-out of various HIV testing initiatives such as 'opt-out' provider-initiated HIV testing and mobile HIV testing has improved uptake of HIV testing by being conveniently available and attenuating fear of HIV-related stigma and financial costs. Other enabling factors are availability of treatment and social network influence and support. Major barriers to uptake of HIV testing comprise perceived low risk of HIV infection, perceived health workers' inability to maintain confidentiality and fear of HIV-related stigma. While the increasingly wider availability of life-saving treatment in SSA is an incentive to test, the perceived psychological burden of living with HIV inhibits uptake of HIV testing. Other barriers are direct and indirect financial costs of accessing HIV testing, and gender inequality which undermines women's decision making autonomy about HIV testing. Despite differences across SSA, the findings suggest comparable factors influencing HIV testing. Improving uptake of HIV testing requires addressing perception of low risk of HIV infection and perceived inability to live with HIV. There is also a need to continue addressing HIV-related stigma, which is intricately

  4. A Double Parametric Bootstrap Test for Topic Models

    OpenAIRE

    Seto, Skyler; Tan, Sarah; Hooker, Giles; Wells, Martin T.

    2017-01-01

    Non-negative matrix factorization (NMF) is a technique for finding latent representations of data. The method has been applied to corpora to construct topic models. However, NMF has likelihood assumptions which are often violated by real document corpora. We present a double parametric bootstrap test for evaluating the fit of an NMF-based topic model based on the duality of the KL divergence and Poisson maximum likelihood estimation. The test correctly identifies whether a topic model based o...
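
    A single-level parametric bootstrap of a Poisson deviance statistic for an NMF topic model, a simplified cousin of the double bootstrap proposed in the paper, might look like the Python sketch below; the document-term counts, rank and number of bootstrap replicates are arbitrary illustrative choices.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def poisson_deviance(x, mu):
        eps = 1e-12
        return 2.0 * np.sum(x * np.log((x + eps) / (mu + eps)) - (x - mu))

    def fitted_rates(x, k):
        nmf = NMF(n_components=k, beta_loss="kullback-leibler", solver="mu",
                  init="nndsvda", max_iter=400)
        return nmf.fit_transform(x) @ nmf.components_

    rng = np.random.default_rng(0)
    docs = rng.poisson(2.0, size=(100, 200)).astype(float)   # hypothetical document-term counts

    rates = fitted_rates(docs, k=5)
    observed_stat = poisson_deviance(docs, rates)

    # parametric bootstrap: simulate counts from the fitted Poisson rates, refit, recompute
    boot_stats = []
    for _ in range(20):
        sim = rng.poisson(rates).astype(float)
        boot_stats.append(poisson_deviance(sim, fitted_rates(sim, k=5)))
    p_value = np.mean(np.array(boot_stats) >= observed_stat)
    print(f"bootstrap p-value = {p_value:.2f}")
    ```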

  5. Matrix diffusion model. In situ tests using natural analogues

    International Nuclear Information System (INIS)

    Rasilainen, K.

    1997-11-01

    Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories

  6. Matrix diffusion model. In situ tests using natural analogues

    Energy Technology Data Exchange (ETDEWEB)

    Rasilainen, K. [VTT Energy, Espoo (Finland)

    1997-11-01

    Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories. 98 refs. The thesis includes also eight previous publications by author.

  7. Eating disorders among fashion models: a systematic review of the literature.

    Science.gov (United States)

    Zancu, Simona Alexandra; Enea, Violeta

    2017-09-01

    In the light of recent concerns regarding the eating disorders among fashion models and professional regulations of fashion model occupation, an examination of the scientific evidence on this issue is necessary. The article reviews findings on the prevalence of eating disorders and body image concerns among professional fashion models. A systematic literature search was conducted using ProQUEST, EBSCO, PsycINFO, SCOPUS, and Gale Canage electronic databases. A very low number of studies conducted on fashion models and eating disorders resulted between 1980 and 2015, with seven articles included in this review. Overall, results of these studies do not indicate a higher prevalence of eating disorders among fashion models compared to non-models. Fashion models have a positive body image and generally do not report more dysfunctional eating behaviors than controls. However, fashion models are on average slightly underweight with significantly lower BMI than controls, and give higher importance to appearance and thin body shape, and thus have a higher prevalence of partial-syndrome eating disorders than controls. Despite public concerns, research on eating disorders among professional fashion models is extremely scarce and results cannot be generalized to all models. The existing research fails to clarify the matter of eating disorders among fashion models and given the small number of studies, further research is needed.

  8. A comprehensive and systematic evaluation framework for a parsimonious daily rainfall field model

    Science.gov (United States)

    Bennett, Bree; Thyer, Mark; Leonard, Michael; Lambert, Martin; Bates, Bryson

    2018-01-01

    The spatial distribution of rainfall has a significant influence on catchment dynamics and the generation of streamflow time series. However, there are few stochastic models that can simulate long sequences of stochastic rainfall fields continuously in time and space. To address this issue, the first goal of this study was to present a new parsimonious stochastic model that produces daily rainfall fields across the catchment. To achieve parsimony, the model used the latent-variable approach (because this parsimoniously simulates rainfall occurrences as well as amounts) and several other assumptions (including contemporaneous and separable spatiotemporal covariance structures). The second goal was to develop a comprehensive and systematic evaluation (CASE) framework to identify model strengths and weaknesses. This included quantitative performance categorisation that provided a systematic, succinct and transparent method to assess and summarise model performance over a range of statistics, sites, scales and seasons. The model is demonstrated using a case study from the Onkaparinga catchment in South Australia. The model showed many strengths in reproducing the observed rainfall characteristics with the majority of statistics classified as either statistically indistinguishable from the observed or within 5% of the observed across the majority of sites and seasons. These included rainfall occurrences/amounts, wet/dry spell distributions, annual volumes/extremes and spatial patterns, which are important from a hydrological perspective. One of the few weaknesses of the model was that the total annual rainfall in dry years (lower 5%) was overestimated by 15% on average over all sites. An advantage of the CASE framework was that it was able to identify the source of this overestimation was poor representation of the annual variability of rainfall occurrences. Given the strengths of this continuous daily rainfall field model it has a range of potential hydrological
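
    The latent-variable approach mentioned above can be illustrated with a small Python sketch in which a spatially correlated, lag-1 autocorrelated Gaussian field drives both rainfall occurrence (threshold exceedance) and amount (a transform of the exceedance); the covariance, threshold and transform parameters are illustrative assumptions, not the values fitted in the case study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_days, n_sites = 365, 10

    # hypothetical separable spatial covariance: exponential decay with inter-site distance
    site_dist = np.abs(np.subtract.outer(np.arange(n_sites), np.arange(n_sites)))
    chol = np.linalg.cholesky(np.exp(-site_dist / 3.0))

    # latent Gaussian field, contemporaneous in space with lag-1 temporal autocorrelation
    rho = 0.4
    z = np.zeros((n_days, n_sites))
    z[0] = chol @ rng.standard_normal(n_sites)
    for t in range(1, n_days):
        z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho ** 2) * (chol @ rng.standard_normal(n_sites))

    # latent-variable rule: rain occurs where z exceeds a threshold; the amount is a
    # power transform of the exceedance (all parameter values are illustrative only)
    threshold, scale, power = 0.8, 4.0, 1.5
    rain = scale * np.clip(z - threshold, 0.0, None) ** power
    print("wet-day fraction:", float((rain > 0).mean()))
    ```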

  9. Testing search strategies for systematic reviews in the Medline literature database through PubMed.

    Science.gov (United States)

    Volpato, Enilze S N; Betini, Marluci; El Dib, Regina

    2014-04-01

    A high-quality electronic search is essential in ensuring accuracy and completeness in retrieved records for conducting a systematic review. We analysed the available sample of search strategies to identify the best method for searching in Medline through PubMed, considering the use or not of parentheses, double quotation marks, truncation, and use of a simple search or search history. In our cross-sectional study of search strategies, we selected and analysed the available searches performed during evidence-based medicine classes and in systematic reviews conducted in the Botucatu Medical School, UNESP, Brazil. We analysed 120 search strategies. With regard to phrase searches, there was no difference between the results with and without parentheses, nor between simple searches and search history tools, in 100% of the sample analysed (P = 1.0). The number of results retrieved by the searches analysed was smaller when using double quotation marks and when using truncation compared with the standard strategy (P = 0.04 and P = 0.08, respectively). There is no need to use parentheses in phrase searching to retrieve studies; however, we recommend the use of double quotation marks when an investigator attempts to retrieve articles in which a term appears exactly as proposed in the search form. Furthermore, we do not recommend the use of truncation in search strategies in Medline via PubMed. Although the results of simple searches and search history tools were the same, we recommend using the latter.
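
    The kind of comparison described, counting the records returned by otherwise identical queries that differ only in quotation marks or truncation, can be scripted, for example with Biopython's Entrez interface as sketched below; the example queries and e-mail address are placeholders, and the counts returned reflect whatever is in PubMed at run time rather than the figures reported in the study.

    ```python
    from Bio import Entrez  # Biopython

    Entrez.email = "your.name@example.org"   # NCBI requires an e-mail address; placeholder

    def pubmed_count(term):
        """Number of PubMed records matching a query string."""
        handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
        record = Entrez.read(handle)
        handle.close()
        return int(record["Count"])

    # hypothetical variants of one query, differing only in the features studied above
    for query in ('low back pain manipulation',
                  '"low back pain" manipulation',   # double quotation marks (phrase search)
                  'low back pain manipulat*'):      # truncation
        print(query, '->', pubmed_count(query))
    ```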

  10. A Dutch test with the NewProd-model

    NARCIS (Netherlands)

    Bronnenberg, J.J.A.M.; van Engelen, M.L.

    1988-01-01

    The paper contains a report of a test of Cooper's NewProd model for predicting success and failure of product development projects. Based on Canadian data, the model has been shown to make predictions which are 84% correct. Having reservations on the reliability and validity of the model on

  11. A test of 3 models of Kirtland's warbler habitat suitability

    Science.gov (United States)

    Mark D. Nelson; Richard R. Buech

    1996-01-01

    We tested 3 models of Kirtland's warbler (Dendroica kirtlandii) habitat suitability during a period when we believe there was a surplus of good quality breeding habitat. A jack pine canopy-cover model was superior to 2 jack pine stem-density models in predicting Kirtland's warbler habitat use and non-use. Estimated density of birds in high...

  12. Systematic review and overview of health economic evaluation models in obesity prevention and therapy.

    Science.gov (United States)

    Schwander, Bjoern; Hiligsmann, Mickaël; Nuijten, Mark; Evers, Silvia

    2016-10-01

    Given the increasing clinical and economic burden of obesity, it is of major importance to identify cost-effective approaches for obesity management. Areas covered: This study aims to systematically review and compile an overview of published decision models for health economic assessments (HEA) in obesity, in order to summarize and compare their key characteristics as well as to identify, inform and guide future research. Of the 4,293 abstracts identified, 87 papers met our inclusion criteria. A wide range of different methodological approaches have been identified. Of the 87 papers, 69 (79%) applied unique /distinctive modelling approaches. Expert commentary: This wide range of approaches suggests the need to develop recommendations /minimal requirements for model-based HEA of obesity. In order to reach this long-term goal, further research is required. Valuable future research steps would be to investigate the predictiveness, validity and quality of the identified modelling approaches.

  13. Selection and measurement of control antidepressants in clinical tests for Chinese: A systematic review.

    Science.gov (United States)

    Liu, Hao; Yang, Zhi-Min; Geng, Ying; Yang, Huan; Zhao, De-Heng; Xiao, Wei-Dong; Wang, Gao-Hua

    2017-10-01

    The study aims to help domestic application units and research institutions improve the quality of antidepressant clinical tests by studying and analyzing the current status of, and problems in, selecting control drugs in domestic antidepressant clinical tests, and by illustrating some key issues that should be noted when selecting the control drug in such research. Considering the current domestic and overseas status of control drug selection in antidepressant clinical tests, various considerations and misunderstandings regarding control drug selection in domestic antidepressant clinical tests were clarified and described, and possible factors that may influence the absolute effect of antidepressants were analyzed. Furthermore, problems that should be noted in selecting control drugs for antidepressant clinical tests, especially the placebo control, were stated. In antidepressant clinical research, selecting placebo controls conforms to ethical and safety requirements. To verify the absolute effect of a test drug, a placebo control should be set or three-arm tests should be conducted as far as possible. Possible factors that may affect the absolute effect of the test drug, including illness severity of the subjects at baseline and study scale, should be taken into consideration. Application units and research institutions should consider the selection of subjects, control the failure rate, strengthen safety risk control, and intensify quality control to further improve the overall quality and research level of domestic antidepressant clinical tests.

  14. The Validity and Responsiveness of Isometric Lower Body Multi-Joint Tests of Muscular Strength: a Systematic Review.

    Science.gov (United States)

    Drake, David; Kennedy, Rodney; Wallace, Eric

    2017-12-01

    Researchers and practitioners working in sports medicine and science require valid tests to determine the effectiveness of interventions and enhance understanding of mechanisms underpinning adaptation. Such decision making is influenced by the supportive evidence describing the validity of tests within current research. The objective of this study is to review the validity of lower body isometric multi-joint tests ability to assess muscular strength and determine the current level of supporting evidence. Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines were followed in a systematic fashion to search, assess and synthesize existing literature on this topic. Electronic databases such as Web of Science, CINAHL and PubMed were searched up to 18 March 2015. Potential inclusions were screened against eligibility criteria relating to types of test, measurement instrument, properties of validity assessed and population group and were required to be published in English. The Consensus-based Standards for the Selection of health Measurement Instruments (COSMIN) checklist was used to assess methodological quality and measurement property rating of included studies. Studies rated as fair or better in methodological quality were included in the best evidence synthesis. Fifty-nine studies met the eligibility criteria for quality appraisal. The ten studies that rated fair or better in methodological quality were included in the best evidence synthesis. The most frequently investigated lower body isometric multi-joint tests for validity were the isometric mid-thigh pull and isometric squat. The validity of each of these tests was strong in terms of reliability and construct validity. The evidence for responsiveness of tests was found to be moderate for the isometric squat test and unknown for the isometric mid-thigh pull. No tests using the isometric leg press met the criteria for inclusion in the best evidence synthesis. Researchers and

  15. Development of dynamic Bayesian models for web application test management

    Science.gov (United States)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
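
    A dynamic Bayesian network unrolled over test runs can be reduced, in the simplest case, to a two-state hidden Markov model updated by a forward pass; the Python sketch below shows that minimal case with hypothetical transition, emission and prior probabilities, and is not the network structure used by the authors.

    ```python
    import numpy as np

    # hypothetical two-state hidden variable per time slice: module "ok" or "faulty"
    prior      = np.array([0.8, 0.2])            # P(state at slice 0)
    transition = np.array([[0.9, 0.1],           # P(state_t | state_{t-1}): ok -> ok/faulty
                           [0.3, 0.7]])          #                           faulty -> ok/faulty
    emission   = np.array([[0.95, 0.05],         # P(test passes/fails | ok)
                           [0.40, 0.60]])        # P(test passes/fails | faulty)

    def forward_filter(outcomes):
        """Forward pass over the unrolled network: belief about the hidden state per slice."""
        belief = prior.copy()
        for obs in outcomes:                     # obs: 0 = test passed, 1 = test failed
            belief = belief @ transition         # predict the next slice
            belief = belief * emission[:, obs]   # update with the observed test outcome
            belief = belief / belief.sum()
        return belief

    print(forward_filter([0, 1, 1, 0, 1]))       # P(ok), P(faulty) after five test runs
    ```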

  16. In vitro biofilm models to study dental caries: a systematic review.

    Science.gov (United States)

    Maske, T T; van de Sande, F H; Arthur, R A; Huysmans, M C D N J M; Cenci, M S

    2017-09-01

    The aim of this systematic review is to characterize and discuss key methodological aspects of in vitro biofilm models for caries-related research and to verify the reproducibility and dose-response of models considering the response to anti-caries and/or antimicrobial substances. Inclusion criteria were divided into Part I (PI): an in vitro biofilm model that produces a cariogenic biofilm and/or caries-like lesions and allows pH fluctuations; and Part II (PII): models showing an effect of anti-caries and/or antimicrobial substances. Within PI, 72.9% consisted of dynamic biofilm models, while 27.1% consisted of batch models. Within PII, 75.5% corresponded to dynamic models, whereas 24.5% corresponded to batch models. Respectively, 20.4 and 14.3% of the studies reported dose-response validations and reproducibility, and 32.7% were classified as having a high risk of bias. Several in vitro biofilm models are available for caries-related research; however, most models lack validation by dose-response and reproducibility experiments for each proposed protocol.

  17. Systematic narrative review of decision frameworks to select the appropriate modelling approaches for health economic evaluations.

    Science.gov (United States)

    Tsoi, B; O'Reilly, D; Jegathisawaran, J; Tarride, J-E; Blackhouse, G; Goeree, R

    2015-06-17

    In constructing or appraising a health economic model, an early consideration is whether the modelling approach selected is appropriate for the given decision problem. Frameworks and taxonomies that distinguish between modelling approaches can help make this decision more systematic and this study aims to identify and compare the decision frameworks proposed to date on this topic area. A systematic review was conducted to identify frameworks from peer-reviewed and grey literature sources. The following databases were searched: OVID Medline and EMBASE; Wiley's Cochrane Library and Health Economic Evaluation Database; PubMed; and ProQuest. Eight decision frameworks were identified, each focused on a different set of modelling approaches and employing a different collection of selection criterion. The selection criteria can be categorized as either: (i) structural features (i.e. technical elements that are factual in nature) or (ii) practical considerations (i.e. context-dependent attributes). The most commonly mentioned structural features were population resolution (i.e. aggregate vs. individual) and interactivity (i.e. static vs. dynamic). Furthermore, understanding the needs of the end-users and stakeholders was frequently incorporated as a criterion within these frameworks. There is presently no universally-accepted framework for selecting an economic modelling approach. Rather, each highlights different criteria that may be of importance when determining whether a modelling approach is appropriate. Further discussion is thus necessary as the modelling approach selected will impact the validity of the underlying economic model and have downstream implications on its efficiency, transparency and relevance to decision-makers.

  18. Are chiropractic tests for the lumbo-pelvic spine reliable and valid? A systematic critical literature review

    DEFF Research Database (Denmark)

    Hestbaek, L; Leboeuf-Yde, C

    2000-01-01

    OBJECTIVE: To systematically review the peer-reviewed literature about the reliability and validity of chiropractic tests used to determine the need for spinal manipulative therapy of the lumbo-pelvic spine, taking into account the quality of the studies. DATA SOURCES: The CHIROLARS database...... was searched for the years 1976 to 1995 with the following index terms: "chiropractic tests," "chiropractic adjusting technique," "motion palpation," "movement palpation," "leg length," "applied kinesiology," and "sacrooccipital technique." In addition, a manual search was performed at the libraries...... of the Nordic Institute of Chiropractic and Clinical Biomechanics, Odense, Denmark, and the Anglo-European College of Chiropractic, Bournemouth, United Kingdom. STUDY SELECTION: Studies pertaining to intraexaminer reliability, interexaminer reliability, and/or validity of chiropractic evaluation of the lumbo...

  19. Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement.

    Science.gov (United States)

    McInnes, Matthew D F; Moher, David; Thombs, Brett D; McGrath, Trevor A; Bossuyt, Patrick M; Clifford, Tammy; Cohen, Jérémie F; Deeks, Jonathan J; Gatsonis, Constantine; Hooft, Lotty; Hunt, Harriet A; Hyde, Christopher J; Korevaar, Daniël A; Leeflang, Mariska M G; Macaskill, Petra; Reitsma, Johannes B; Rodin, Rachel; Rutjes, Anne W S; Salameh, Jean-Paul; Stevens, Adrienne; Takwoingi, Yemisi; Tonelli, Marcello; Weeks, Laura; Whiting, Penny; Willis, Brian H

    2018-01-23

    Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. The 27-item

  20. Reference value for the 6-minute walk test in children and adolescents : a systematic review

    NARCIS (Netherlands)

    Mylius, C. F.; Paap, D.; Takken, T.

    2016-01-01

    Introduction: The 6-minute walk test is a submaximal exercise test used to quantify the functional exercise capacity in clinical populations. It measures the distance walked within a period of 6-minutes. Obtaining reference values in the pediatric population is especially demanding due to factors as

  1. Systematic Testing should not be a Topic in the Computer Science Curriculum!

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2003-01-01

    In this paper we argue that treating "testing" as an isolated topic is a wrong approach in computer science and software engineering teaching. Instead testing should pervade practical topics and exercises in the computer science curriculum to teach students the importance of producing software...

  2. Specificity, sensitivity, and predictive values of clinical tests of the sacroiliac joint: a systematic review of the literature.

    Science.gov (United States)

    Stuber, Kent Jason

    2007-03-01

    To determine which physical examination tests have the highest sensitivity, specificity, and predictive values for determining the presence of sacroiliac joint injuries and/or dysfunction when compared with the gold standard of a sacroiliac joint block. A systematic search of the literature was conducted for articles that evaluated clinical sacroiliac joint tests for sensitivity, specificity, and predictive value when compared to sacroiliac joint block. The search was conducted using several online databases: Medline, Embase, Cinahl, AMED, and the Index to Chiropractic Literature. Reference and journal searching and contact with several experts in the area was also employed. Studies selected for inclusion were evaluated with a data extraction sheet and assessed for methodological quality using an assessment tool based on accepted principles of evaluation. Article results were compared, no attempt to formally combine the results into a meta-analysis was made. Seven papers were identified for inclusion in the review, two of which dealt with the same study, thus six studies were to be assessed although one paper could not be obtained. The most recently published article had the highest methodological quality. Study designs rarely incorporated randomized, placebo controlled, double blinded study designs or confirmatory sacroiliac joint blocks. There was considerable inconsistency between studies in design and outcome measurement, making comparison difficult. Five tests were found to have sensitivity and specificity over 60% each in at least one study with at least moderately high methodological quality. Using several tests and requiring a minimum number to be positive yielded adequate sensitivity and specificity for identifying sacroiliac joint injury when compared with sacroiliac joint block. Practitioners may consider using the distraction test, compression test, thigh thrust/posterior shear, sacral thrust, and resisted hip abduction as these were the only tests to
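
    For reference, sensitivity, specificity and predictive values are computed from a 2x2 table against the reference standard (here, the sacroiliac joint block), as in the short Python sketch below; the counts are hypothetical.

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity and predictive values from a 2x2 table in which a
        positive sacroiliac joint block is treated as the reference standard."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # hypothetical counts for a single provocation test judged against the joint block
    print(diagnostic_metrics(tp=18, fp=9, fn=7, tn=26))
    ```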

  3. A systematic literature review of open source software quality assessment models.

    Science.gov (United States)

    Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    Many open source software (OSS) quality assessment models are proposed and available in the literature. However, there is little or no adoption of these models in practice. In order to guide the formulation of newer models so that they can be acceptable to practitioners, there is a need for clear discrimination of the existing models based on their specific properties. Based on this, the aim of this study is to perform a systematic literature review to investigate the properties of the existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google Search were performed so as to retrieve all relevant primary studies in this regard. Journal and conference papers between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected. To select these models, we developed assessment criteria to evaluate the quality of the existing studies. Quality assessment models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded category, community-only attribute, non-community attribute, and non-quality-in-use models. Our study shows that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that the majority (47%) of the existing models do not specify any domain of application. In conclusion, our study will be a valuable contribution to the community and will help quality assessment model developers in formulating newer models, as well as practitioners (software evaluators) in selecting suitable OSS from among alternatives.

  4. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  5. Systematic iteration between model and methodology: A proposed approach to evaluating unintended consequences.

    Science.gov (United States)

    Morell, Jonathan A

    2017-09-18

    This article argues that evaluators could better deal with unintended consequences if they improved their methods of systematically and methodically combining empirical data collection and model building over the life cycle of an evaluation. This process would be helpful because it can increase the timespan from when the need for a change in methodology is first suspected to the time when the new element of the methodology is operational. The article begins with an explanation of why logic models are so important in evaluation, and why the utility of models is limited if they are not continually revised based on empirical evaluation data. It sets the argument within the larger context of the value and limitations of models in the scientific enterprise. Following will be a discussion of various issues that are relevant to model development and revision. What is the relevance of complex system behavior for understanding predictable and unpredictable unintended consequences, and the methods needed to deal with them? How might understanding of unintended consequences be improved with an appreciation of generic patterns of change that are independent of any particular program or change effort? What are the social and organizational dynamics that make it rational and adaptive to design programs around single-outcome solutions to multi-dimensional problems? How does cognitive bias affect our ability to identify likely program outcomes? Why is it hard to discern change as a result of programs being embedded in multi-component, continually fluctuating, settings? The last part of the paper outlines a process for actualizing systematic iteration between model and methodology, and concludes with a set of research questions that speak to how the model/data process can be made efficient and effective. Copyright © 2017. Published by Elsevier Ltd.

  6. A new fit-for-purpose model testing framework: Decision Crash Tests

    Science.gov (United States)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have noted that the Klemeš Crash Tests (KCTs), the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) renamed KCTs, have yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing whether the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions, and ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building

  7. Provider-initiated testing and counselling programmes in sub-Saharan Africa: a systematic review of their operational implementation.

    Science.gov (United States)

    Roura, Maria; Watson-Jones, Deborah; Kahawita, Tanya M; Ferguson, Laura; Ross, David A

    2013-02-20

    The routine offer of an HIV test during patient-provider encounters is gaining momentum within HIV treatment and prevention programmes. This review examined the operational implementation of provider-initiated testing and counselling (PITC) programmes in sub-Saharan Africa. PUBMED, EMBASE, Global Health, COCHRANE Library and JSTOR databases were searched systematically for articles published in English between January 2000 and November 2010. Grey literature was explored through the websites of international and nongovernmental organizations. Eligibility of studies was based on predetermined criteria applied during independent screening by two researchers. We retained 44 studies out of 5088 references screened. PITC polices have been effective at identifying large numbers of previously undiagnosed individuals. However, the translation of policy guidance into practice has had mixed results, and in several studies of routine programmes the proportion of patients offered an HIV test was disappointingly low. There were wide variations in the rates of acceptance of the test and poor linkage of those testing positive to follow-up assessments and antiretroviral treatment. The challenges encountered encompass a range of areas from logistics, to data systems, human resources and management, reflecting some of the weaknesses of health systems in the region. The widespread adoption of PITC provides an unprecedented opportunity for identifying HIV-positive individuals who are already in contact with health services and should be accompanied by measures aimed at strengthening health systems and fostering the normalization of HIV at community level. The resources and effort needed to do this successfully should not be underestimated.

  8. Systematical calculation of α decay half-lives with a generalized liquid drop model

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Xiaojun [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Zhang, Hongfei, E-mail: zhanghongfei@lzu.edu.cn [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Zhang, Haifei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Royer, G. [Laboratoire Subatech, UMR, IN2P3/CNRS, Université – Ecole des Mines, 44 Nantes (France); Li, Junqing [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China)

    2014-01-15

    A systematic calculation of α decay half-lives is presented for even–even nuclei between Te and Z=118 isotopes. The potential energy governing α decay has been determined within a liquid drop model including proximity effects between the α particle and the daughter nucleus and taking into account the experimental Q value. The α decay half-lives have been deduced from the WKB barrier penetration probability. The α decay half-lives obtained agree reasonably well with the experimental data.
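
    The half-life calculation described above can be sketched, in much simplified form, as a WKB penetration integral through a pure Coulomb barrier; the generalized liquid drop potential with proximity effects used in the paper is not reproduced, and the example nucleus, radius constant and assault frequency below are only illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative WKB estimate of an alpha-decay half-life for a pure Coulomb
# barrier.  Example: 212Po -> 208Pb + alpha, Q ~ 8.95 MeV.
hbar_c = 197.327      # MeV fm
amu = 931.494         # MeV
Z_alpha, Z_daughter = 2, 82
A_alpha, A_daughter = 4, 208
Q = 8.95              # MeV
e2 = 1.44             # e^2 / (4 pi eps0), in MeV fm

mu = amu * A_alpha * A_daughter / (A_alpha + A_daughter)   # reduced mass, MeV
R_in = 1.2 * (A_alpha ** (1 / 3) + A_daughter ** (1 / 3))  # touching radius, fm
R_out = Z_alpha * Z_daughter * e2 / Q                      # outer turning point, fm

def integrand(r):
    # sqrt(2 mu (V(r) - Q)) / hbar, with V(r) the Coulomb potential
    V = Z_alpha * Z_daughter * e2 / r
    return np.sqrt(2 * mu * max(V - Q, 0.0)) / hbar_c

action, _ = quad(integrand, R_in, R_out, limit=200)
P = np.exp(-2 * action)              # WKB barrier penetration probability
nu = 1e21                            # assault frequency (s^-1), rough guess
half_life = np.log(2) / (nu * P)
print(f"penetrability = {P:.3e}, half-life ~ {half_life:.3e} s")
```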

  9. Systematic Wind Farm Measurement Data Filtering Tool for Wake Model Calibration

    DEFF Research Database (Denmark)

    Rethore, Pierre-Elouan Mikael; Johansen, Nicholas Alan; Frandsen, Sten Tronæs

    A set of systematic methods for characterizing the sensors of a wind farm and using these sensors to filter more accurately large volumes of measurement data is proposed. These methods are based on the experience accumulated while processing datasets from two large offshore wind farms in Denmark....... Both wake model developers and wind farm operators seeking to determine how the wind farm operates under specific conditions can find these methods valuable. The methods are general and can be applied successfully to any wind farm by taking into consideration the specific aspects of each wind farm....

  10. Deformation modeling and the strain transient dip test

    International Nuclear Information System (INIS)

    Jones, W.B.; Rohde, R.W.; Swearengen, J.C.

    1980-01-01

    Recent efforts in material deformation modeling reveal a trend toward unifying creep and plasticity with a single rate-dependent formulation. While such models can describe actual material deformation, most require a number of different experiments to generate model parameter information. Recently, however, a new model has been proposed in which most of the requisite constants may be found by examining creep transients brought about through abrupt changes in creep stress (strain transient dip test). The critical measurement in this test is the absence of a resolvable creep rate after a stress drop. As a consequence, the result is extraordinarily sensitive to strain resolution as well as machine mechanical response. This paper presents the design of a machine in which these spurious effects have been minimized and discusses the nature of the strain transient dip test using the example of aluminum. It is concluded that the strain transient dip test is not useful as the primary test for verifying any micromechanical model of deformation. Nevertheless, if a model can be developed which is verifiable by other experiments, data from a dip test machine may be used to generate model parameters.

  11. Diagnostic Accuracy of Molecular Amplification Tests for Human African Trypanosomiasis-Systematic Review

    NARCIS (Netherlands)

    Mugasa, Claire M.; Adams, Emily R.; Boer, Kimberly R.; Dyserinck, Heleen C.; Büscher, Philippe; Schallig, Henk D. H. F.; Leeflang, Mariska M. G.

    2012-01-01

    Background: A range of molecular amplification techniques have been developed for the diagnosis of Human African Trypanosomiasis (HAT); however, careful evaluation of these tests must precede implementation to ensure their high clinical accuracy. Here, we investigated the diagnostic accuracy of

  12. TURBHO - Higher order turbulence modeling for industrial applications. Design document: Module Test Phase (MTP). Software engineering module: Additional physical models

    Energy Technology Data Exchange (ETDEWEB)

    Grotjans, H.

    1998-04-01

    In the current Software Engineering Module (SEM2) three additional test cases have been investigated, as listed in Chapter 2. For all test cases it has been shown that the computed results are grid independent. This has been done by systematic grid refinement studies. The main objective of the current SEM2 was the verification and validation of the new wall function implementation for the k-ε model and the SMC model. Analytical relations and experimental data have been used for comparison of the computational results. The agreement of the results is good. Therefore, the correct implementation of the new wall function has been demonstrated. As the results in this report have shown, a consistent grid refinement can be done for any test case. This is an important improvement for industrial applications, as no model-specific requirements must be considered during grid generation. (orig.)

  13. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches.

    Directory of Open Access Journals (Sweden)

    Mathieu Andraud

    Full Text Available Dengue is a vector-borne disease recognized as the major arbovirose, with four immunologically distant dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies for the four different dengue serotypes. We aimed to review deterministic models of dengue transmission, in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure, and were linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we generally categorised forty-two published models of interest into single-serotype and multi-serotype models. The multi-serotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant to advise health policy in the future. The limited understanding of the factors which influence dengue transmission as well as limited data availability remain important concerns when applying dengue models to real-world decision problems.
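
    For readers unfamiliar with the vector-host formulation discussed above, the following is a minimal single-serotype SIR-host/SI-vector model, integrated numerically; the parameter values are invented for illustration and are not taken from any of the reviewed models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal single-serotype vector-host dengue model (SIR hosts, SI vectors);
# parameter values are illustrative only.
b      = 0.5     # mosquito biting rate (per day)
beta_h = 0.4     # transmission probability vector -> host per bite
beta_v = 0.4     # transmission probability host -> vector per bite
gamma  = 1 / 7   # host recovery rate (per day)
mu_v   = 1 / 14  # vector mortality rate (per day)
m      = 2.0     # vectors per host

def rhs(t, y):
    S_h, I_h, R_h, S_v, I_v = y
    infect_h = b * beta_h * m * I_v * S_h     # new host infections
    infect_v = b * beta_v * I_h * S_v         # new vector infections
    return [-infect_h,
            infect_h - gamma * I_h,
            gamma * I_h,
            mu_v - infect_v - mu_v * S_v,     # vector births balance deaths
            infect_v - mu_v * I_v]

y0 = [0.999, 0.001, 0.0, 1.0, 0.0]            # proportions of hosts / vectors
sol = solve_ivp(rhs, (0, 365), y0, max_step=1.0)

R0 = np.sqrt((b**2 * beta_h * beta_v * m) / (gamma * mu_v))
print("basic reproduction number R0 =", R0)
print("final epidemic size (fraction of hosts recovered) =", sol.y[2, -1])
```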

  14. A systematic evaluation of immunoassay point-of-care testing to define impact on patients' outcomes.

    Science.gov (United States)

    Pecoraro, Valentina; Banfi, Giuseppe; Germagnoli, Luca; Trenti, Tommaso

    2017-07-01

    Background Point-of-care testing has been developed to provide rapid test results. Most published studies focus on analytical performance, neglecting its impact on patient outcomes. Objective To review the analytical performance and accuracy of point-of-care testing specifically planned for immunoassay and to evaluate the impact of faster results on patient management. Methods A search of electronic databases for studies reporting immunoassay results obtained in both point-of-care testing and central laboratory scenarios was performed. Data were extracted concerning the study details, and the methodological quality was assessed. The analytical characteristics and diagnostic accuracy of six points-of-care testing: troponin, procalcitonin, parathyroid hormone, brain natriuretic peptide, C-reactive protein and neutrophil gelatinase-associated lipocalin were evaluated. Results A total of 116 scientific papers were analysed. Studies measuring procalcitonin, parathyroid hormone and neutrophil gelatinase-associated lipocalin reported a limited impact on diagnostic decisions. Seven studies measuring C-reactive protein claimed a significant reduction of antibiotic prescription. Several authors evaluated brain natriuretic peptide or troponin reporting faster decision-making without any improvement in clinical outcome. Forty-four per cent of studies reported analytical data, showing satisfactory correlations between results obtained through point-of-care testing and central laboratory setting. Half of studies defined the diagnostic accuracy of point-of-care testing as acceptable for troponin (median sensitivity and specificity: 74% and 94%, respectively), brain natriuretic peptide (median sensitivity and specificity: 82% and 88%, respectively) and C-reactive protein (median sensitivity and specificity 85%). Conclusions Point-of-care testing immunoassay results seem to be reliable and accurate for troponin, brain natriuretic peptide and C-reactive protein. The satisfactory

  15. TESTING THE RELIABILITY OF CLUSTER MASS INDICATORS WITH A SYSTEMATICS LIMITED DATA SET

    International Nuclear Information System (INIS)

    Juett, Adrienne M.; Mushotzky, Richard; Davis, David S.

    2010-01-01

    We present the mass-X-ray observable scaling relationships for clusters of galaxies using the XMM-Newton cluster catalog of Snowden et al. Our results are roughly consistent with previous observational and theoretical work, with one major exception. We find two to three times the scatter around the best-fit mass scaling relationships as expected from cluster simulations or seen in other observational studies. We suggest that this is a consequence of using hydrostatic mass, as opposed to virial mass, and is due to the explicit dependence of the hydrostatic mass on the gradients of the temperature and gas density profiles. We find a larger range of slope in the cluster temperature profiles at r_500 than previous observational studies. Additionally, we find only a weak dependence of the gas mass fraction on cluster mass, consistent with a constant. Our average gas mass fraction results argue for a closer study of the systematic errors due to instrumental calibration and analysis method variations. We suggest that a more careful study of the differences between various observational results and with cluster simulations is needed to understand sources of bias and scatter in cosmological studies of galaxy clusters.

  16. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected returns models are widely used in the international academic literature. However, these methods have not been used in Brazil in a systematic way; empirical studies on Brazilian stock market data generally concentrate only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor and 4-factor models using a predictive methodology considering two steps – time-series and cross-sectional regressions – with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model over the 3-factor model, and of the 3-factor model over the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use a predictive methodology, the size and momentum effects do not seem to exist in the Brazilian capital markets, but there is evidence of a value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly due to the originality of the methodology in the local market and the fact that this subject is still incipient and controversial in the Brazilian academic environment.
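
    The two-step predictive methodology referred to above can be sketched as follows: time-series regressions to estimate factor betas, then date-by-date cross-sectional regressions whose average slopes and time-series standard errors give the Fama-MacBeth estimates. The factors and returns below are simulated placeholders, not Brazilian market data.

```python
import numpy as np

# Two-step test in the spirit of Fama and MacBeth (1973), on simulated data.
rng = np.random.default_rng(1)
T, N, K = 240, 25, 3                         # months, portfolios, factors
factors = rng.normal(0.005, 0.03, (T, K))    # e.g. market, SMB, HML proxies
true_betas = rng.normal(1.0, 0.4, (N, K))
returns = factors @ true_betas.T + rng.normal(0, 0.02, (T, N))

# Step 1: time-series regressions, one per portfolio, to estimate betas.
X_ts = np.column_stack([np.ones(T), factors])
betas = np.linalg.lstsq(X_ts, returns, rcond=None)[0][1:].T   # (N, K)

# Step 2: cross-sectional regression at each date; collect the risk premia.
X_cs = np.column_stack([np.ones(N), betas])
lambdas = np.array([np.linalg.lstsq(X_cs, returns[t], rcond=None)[0]
                    for t in range(T)])                        # (T, K+1)

# Fama-MacBeth estimates and standard errors (time series of the slopes).
lam_mean = lambdas.mean(axis=0)
lam_se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)
for name, m, s in zip(["const"] + [f"factor{k+1}" for k in range(K)],
                      lam_mean, lam_se):
    print(f"{name}: premium = {m:+.4f}, t-stat = {m / s:+.2f}")
```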

  17. Prostate specific antigen testing policy worldwide varies greatly and seems not to be in accordance with guidelines: a systematic review

    Directory of Open Access Journals (Sweden)

    Van der Meer Saskia

    2012-10-01

    Full Text Available Abstract Background Prostate specific antigen (PSA testing is widely used, but guidelines on follow-up are unclear. Methods We performed a systematic review of the literature to determine follow-up policy after PSA testing by general practitioners (GPs and non-urologic hospitalists, the use of a cut-off value for this policy, the reasons for repeating a PSA test after an initial normal result, the existence of a general cut-off value below which a PSA result is considered normal, and the time frame for repeating a test. Data sources. MEDLINE, Embase, PsychInfo and the Cochrane library from January 1950 until May 2011. Study eligibility criteria. Studies describing follow-up policy by GPs or non-urologic hospitalists after a primary PSA test, excluding urologists and patients with prostate cancer. Studies written in Dutch, English, French, German, Italian or Spanish were included. Excluded were studies describing follow-up policy by urologists and follow-up of patients with prostate cancer. The quality of each study was structurally assessed. Results Fifteen articles met the inclusion criteria. Three studies were of high quality. Follow-up differed greatly both after a normal and an abnormal PSA test result. Only one study described the reasons for not performing follow-up after an abnormal PSA result. Conclusions Based on the available literature, we cannot adequately assess physicians’ follow-up policy after a primary PSA test. Follow-up after a normal or raised PSA test by GPs and non-urologic hospitalists seems to a large extent not in accordance with the guidelines.

  18. Systematic model for lean product development implementation in an automotive related company

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-07-01

    Full Text Available Lean product development is a major innovative business strategy that employs sets of practices to achieve efficient, innovative and sustainable product development. Despite the many benefits of and high hopes for the lean strategy, many companies are still struggling and unable to either achieve or sustain substantial positive results from their lean implementation efforts. As a first step towards addressing this issue, this paper proposes a systematic model that considers the administrative and implementation limitations of lean thinking practices in the product development process. The model, which is based on the integration of fuzzy Shannon's entropy and the Modified Technique for Order Preference by Similarity to the Ideal Solution (M-TOPSIS) for implementing lean product development practices with respect to different criteria, including management and leadership, financial capabilities, skills and expertise, and organisational culture, provides a guide or roadmap for product development managers on the lean implementation route.
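
    A rough, crisp (non-fuzzy) sketch of the two ingredients named above is given below: Shannon-entropy criterion weights followed by a standard TOPSIS ranking. The paper's fuzzy entropy and modified TOPSIS (M-TOPSIS) variants are not reproduced, and the decision matrix is invented.

```python
import numpy as np

# Rows: candidate lean practices to implement; columns: management/leadership,
# finance, skills/expertise, organisational culture (all treated as benefits).
D = np.array([[7.0, 5.0, 8.0, 6.0],
              [6.0, 8.0, 5.0, 7.0],
              [8.0, 6.0, 6.0, 5.0],
              [5.0, 7.0, 7.0, 8.0]])

# Shannon-entropy weights: criteria with more dispersion get more weight.
P = D / D.sum(axis=0)
entropy = -(P * np.log(P)).sum(axis=0) / np.log(len(D))
weights = (1 - entropy) / (1 - entropy).sum()

# Standard TOPSIS: distance to the ideal and anti-ideal alternatives.
R = D / np.linalg.norm(D, axis=0)            # vector-normalised matrix
V = R * weights
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("criterion weights:", np.round(weights, 3))
print("ranking (best first):", np.argsort(-closeness))
```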

  19. [Model for the systematic classification of outcomes in health promotion and prevention].

    Science.gov (United States)

    Cloetta, Bernhard; Spencer, Brenda; Spörri, Adrian; Ruckstuhl, Brigitte; Broesskamp-Stone, Ursel; Ackermann, Günter

    2005-01-01

    Successful demonstration of the effects of health promotion calls for systematic documentation and comparison of the outcome of different measures and projects. A model has been developed in the form of an outcome categorisation system for this purpose and is presented here. The model includes four categories covering the intermediate outcomes of health promotion measures, and three categories for outcomes at the level of health determinants (conditions necessary for health). Each category includes three to four sub-categories, for which examples of possible indicators are presented. The model can be applied both in the planning and in the evaluation stage of a project. This makes it possible for health promotion agencies and institutions responsible for funding and promotion to obtain a general overview of the outcome of their work.

  20. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was models. Only 2 models presented recommended regression equations. There was significant heterogeneity in discriminative ability of models with respect to age (P prediction models that had sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Measuring Fit of Sequence Data to Phylogenetic Model: Gain of Power Using Marginal Tests

    Science.gov (United States)

    Waddell, Peter J.; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (1978) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (p~0.5), but the marginalized tests do. Tests on pair-wise frequency (F) matrices strongly (p < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (p < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, then the likelihood ratio test regains power, and it too rejects the evolutionary model with p << 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published analyses may really be far larger than the analytical methods (e.g., bootstrap) report.
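
    The G statistic mentioned above is, in its generic form, a log-likelihood ratio between observed and expected counts. The sketch below computes it for invented site-pattern counts against model-expected frequencies; in a real analysis the expected frequencies and the degrees of freedom would come from the fitted tree and substitution model.

```python
import numpy as np
from scipy.stats import chi2

# Generic G (log-likelihood ratio) goodness-of-fit statistic comparing observed
# site-pattern counts with counts expected under a fitted model.  The numbers
# below are invented for illustration.
observed = np.array([480, 130, 95, 60, 45, 30, 25, 135])          # pattern counts
expected_freq = np.array([0.47, 0.14, 0.10, 0.06, 0.05, 0.03, 0.03, 0.12])
expected = expected_freq * observed.sum()

mask = observed > 0
G = 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))
df = len(observed) - 1        # in practice, minus additional fitted parameters
p_value = chi2.sf(G, df)
print(f"G = {G:.2f}, df = {df}, p = {p_value:.4f}")
```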

  2. A permutation test for the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias

    2010-01-01

    of such experiments is whether the observed redundancy gains can be explained by parallel processing of the two stimuli in a race-like fashion. To test the parallel processing model, Miller derived the well-known race model inequality which has become a routine test for behavioral data in experiments with redundant...... signals. Several statistical procedures have been used for testing the race model inequality. However, the commonly employed procedure does not control the Type I error. In this article a permutation test is described that keeps the Type I error at the desired level. Simulations show that the power...... of the test is reasonable even for small samples. The scripts discussed in this article may be downloaded as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental....
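
    A minimal check of Miller's race model inequality for a single participant is sketched below (the article's permutation machinery, which controls the Type I error, is not reproduced); the reaction times are simulated.

```python
import numpy as np

# Race model inequality: F_redundant(t) <= F_A(t) + F_B(t) for all t.
rng = np.random.default_rng(2)
rt_A = rng.normal(420, 60, 200)          # single-signal condition A (ms)
rt_B = rng.normal(430, 60, 200)          # single-signal condition B (ms)
rt_AB = rng.normal(370, 55, 200)         # redundant-signals condition (ms)

def ecdf(sample, t):
    # empirical distribution function of `sample` evaluated at the points t
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

# Evaluate the inequality on a grid of quantiles of the redundant-condition RTs.
probe = np.quantile(rt_AB, np.linspace(0.05, 0.95, 10))
violation = ecdf(rt_AB, probe) - (ecdf(rt_A, probe) + ecdf(rt_B, probe))
print("max violation of the race model inequality:", violation.max())
print("race model violated descriptively:", violation.max() > 0)
```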

  3. THE MISHKIN TEST: AN ANALYSIS OF MODEL EXTENSIONS

    Directory of Open Access Journals (Sweden)

    Diana MURESAN

    2015-04-01

    Full Text Available This paper reviews empirical research that applies the Mishkin test to examine the existence of the accruals anomaly using alternative approaches. The Mishkin test is a test used in macro-econometrics for the rational expectations hypothesis, which tests for market efficiency. Starting with Sloan (1996), the model has been applied in the accruals anomaly literature. Since Sloan (1996), the model has seen various improvements and has been the subject of many debates in the literature regarding its efficacy. Nevertheless, the current evidence strengthens the pervasiveness of the model. The analysis of the extended studies on the Mishkin test highlights that adding variables enhances the results, providing insightful information about the occurrence of the accruals anomaly.

  4. Collision Detection Modelling for Store Release Testing at AMRL

    National Research Council Canada - National Science Library

    Leung, Sunny

    2000-01-01

    ... of stores from aircraft. A computer program called CDM, written in Java and Java 3D programming language has been developed to visualise, model, and provide collision detection, for an aerodynamic grid test...

  5. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.

  6. Testing algorithms for a passenger train braking performance model.

    Science.gov (United States)

    2011-09-01

    "The Federal Railroad Administrations Office of Research and Development funded a project to establish performance model to develop, analyze, and test positive train control (PTC) braking algorithms for passenger train operations. With a good brak...

  7. An Innovative Physical Model for Testing Bucket Foundations

    DEFF Research Database (Denmark)

    Foglia, Aligi; Ibsen, Lars Bo; Andersen, Lars Vabbersgaard

    2012-01-01

    Pa), 20 (kPa), and 30 (kPa) respectively. The comparison between the tests conducted at stress level of 0 (kPa), and the tests with stress level increased, shows remarkable differences. The relationship between scaled overturning moment and rotation is well represented by a power law. The exponent...... of the power law is consistent among all tests carried out with stress level increased. Besides, attention is given to the instantaneous centre of rotation distribution. To validate the mode, the tests are compared with a large scale test by means of a scaling moment. The validation of the model is only...

  8. Regional estimation of groundwater arsenic concentrations through systematical dynamic-neural modeling

    Science.gov (United States)

    Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min

    2013-08-01

    Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, and over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network and four statistical techniques: the Nonlinear Autoregressive with eXogenous input (NARX) network, Gamma test, cross-validation, Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (Root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R² > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the
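
    A much-simplified stand-in for the NARX component is sketched below: a neural network regression on lagged values of the target and of the easily measured covariates. The data are synthetic, and the remaining SDM components (Gamma test, Bayesian regularization, indicator kriging) are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600
t = np.arange(n)
# Slowly varying synthetic covariates (alkalinity, Ca2+, pH) and a target that
# depends on them; none of this is the Taiwanese field data.
alk = 200 + 30 * np.sin(2 * np.pi * t / 180) + rng.normal(0, 5, n)
ca = 40 + 8 * np.cos(2 * np.pi * t / 120) + rng.normal(0, 2, n)
ph = 7.5 + 0.3 * np.sin(2 * np.pi * t / 90) + rng.normal(0, 0.05, n)
arsenic = 50 + 0.4 * alk - 1.5 * ca + 20 * (ph - 7) + rng.normal(0, 5, n)

def lagged(series, lags):
    # columns are the series values at times t-lags, ..., t-1 for each target t
    return np.column_stack([series[lag: len(series) - (lags - lag)]
                            for lag in range(lags)])

lags = 2
X = np.column_stack([lagged(arsenic, lags),              # autoregressive terms
                     lagged(alk, lags), lagged(ca, lags), lagged(ph, lags)])
y = arsenic[lags:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"hold-out RMSE: {rmse:.1f} (same units as the As series)")
```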

  9. Instrumentation and testing of a prestressed concrete containment vessel model

    International Nuclear Information System (INIS)

    Hessheimer, M.F.; Pace, D.W.; Klamerus, E.W.

    1997-01-01

    Static overpressurization tests of two scale models of nuclear containment structures - a steel containment vessel (SCV) representative of an improved, boiling water reactor (BWR) Mark II design and a prestressed concrete containment vessel (PCCV) for pressurized water reactors (PWR) - are being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the U.S. Nuclear Regulatory Commission. This paper discusses plans for instrumentation and testing of the PCCV model. 6 refs., 2 figs., 2 tabs

  10. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    OpenAIRE

    Jump, David

    2014-01-01

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing...

  11. Calibration of a Chemistry Test Using the Rasch Model

    Directory of Open Access Journals (Sweden)

    Nancy Coromoto Martín Guaregua

    2011-11-01

    Full Text Available The Rasch model was used to calibrate a general chemistry test for the purpose of analyzing the advantages and information the model provides. The sample was composed of 219 college freshmen. Of the 12 questions used, good fit was achieved in 10. The evaluation shows that although there are items of variable difficulty, there are gaps on the scale; in order to make the test complete, it will be necessary to design new items to fill in these gaps.
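
    A minimal joint maximum-likelihood sketch of the dichotomous Rasch model is shown below, on simulated responses with the same nominal dimensions as the study (219 persons, 12 items); a real calibration would also report item fit statistics, which are not computed here.

```python
import numpy as np

# Dichotomous Rasch model: P(correct) = sigmoid(theta_person - b_item).
rng = np.random.default_rng(4)
n_persons, n_items = 219, 12
theta_true = rng.normal(0, 1, n_persons)
b_true = np.linspace(-1.5, 1.5, n_items)
prob = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.uniform(size=prob.shape) < prob).astype(float)   # simulated responses

# Joint maximum likelihood by simple gradient ascent.
theta = np.zeros(n_persons)
b = np.zeros(n_items)
lr = 0.05
for _ in range(2000):
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    resid = X - p
    theta += lr * resid.sum(axis=1)     # d logL / d theta_i
    b -= lr * resid.sum(axis=0)         # d logL / d b_j has opposite sign
    theta -= theta.mean()               # identification: centre the abilities

print("estimated item difficulties:", np.round(b, 2))
print("true item difficulties:     ", np.round(b_true, 2))
```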

  12. ESTAR model with multiple fixed points. Testing and Estimation

    OpenAIRE

    I A Venetis; I Paya; D Peel

    2009-01-01

    In this paper we propose a globally stationary augmentation of the Exponential Smooth Transition Autoregressive (ESTAR) model that allows for multiple fixed points in the transition function. An F-type test statistic for the null of nonstationarity against such globally stationary nonlinear alternative is developed. The test statistic is based on the standard approximation of the nonlinear function under the null hypothesis by a Taylor series expansion. The model is applied to the U.S real in...

  13. A general diagnostic model applied to language testing data.

    Science.gov (United States)

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.

  14. Improved animal models for testing gene therapy for atherosclerosis.

    Science.gov (United States)

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A

    2014-04-01

    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long

  15. Transition between process models (BPMN and service models (WS-BPEL and other standards: A systematic review

    Directory of Open Access Journals (Sweden)

    Marko Jurišić

    2011-12-01

    Full Text Available BPMN and BPEL have become de facto standards for modeling business processes and implementing business processes via Web services. There is a quintessential problem of discrepancy between these two approaches, as they are applied in different phases of the lifecycle and their fundamental concepts are different: BPMN is a graph-based language, while BPEL is basically a block-based programming language. This paper presents basic concepts, gives an overview of research and ideas which emerged during the last two years, and presents the state of the art and possible future research directions. A systematic literature review was performed and a critical review was given regarding the potential of the proposed solutions.

  16. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part I. general guidance and tips

    International Nuclear Information System (INIS)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho; Lee, June Young

    2015-01-01

    In the field of diagnostic test accuracy (DTA), the use of systematic review and meta-analyses is steadily increasing. By means of objective evaluation of all available primary studies, these two processes generate an evidence-based systematic summary regarding a specific research topic. The methodology for systematic review and meta-analysis in DTA studies differs from that in therapeutic/interventional studies, and its content is still evolving. Here we review the overall process from a practical standpoint, which may serve as a reference for those who implement these methods

  17. Linearity and Misspecification Tests for Vector Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    Teräsvirta, Timo; Yang, Yukai

    The purpose of the paper is to derive Lagrange multiplier and Lagrange multiplier type specification and misspecification tests for vector smooth transition regression models. We report results from simulation studies in which the size and power properties of the proposed asymptotic tests in small...

  18. An integrated service excellence model for military test and ...

    African Journals Online (AJOL)

    The purpose of this article is to introduce an Integrated Service Excellence Model (ISEM) for empowering the leadership core of the capital-intensive military test and evaluation facilities to provide strategic military test and evaluation services and to continuously improve service excellence by ensuring that all activities ...

  19. Design Of Computer Based Test Using The Unified Modeling Language

    Science.gov (United States)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    Admission selection at Politeknik Negeri Bengkalis, through the interest and talent search (PMDK), the Joint Selection admission test for state Polytechnics (SB-UMPN) and the independent admission route (UM-Polbeng), has been conducted using a Paper-Based Test (PBT). The Paper-Based Test model has several weaknesses: it wastes paper, the questions can leak to the public, and the test results can be manipulated. This research aimed to design a Computer-Based Test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, particular attention was paid to protecting the test questions before they are shown, through an encryption and decryption process using the RSA cryptography algorithm. The questions drawn from the question bank are then randomized using the Fisher-Yates shuffle method. The network architecture used in the Computer-Based Test application is a client-server model on a Local Area Network (LAN). The result of the design is a Computer-Based Test application for the admission selection of Politeknik Negeri Bengkalis.
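
    The Fisher-Yates shuffle mentioned above can be sketched in a few lines; the question bank below is hypothetical.

```python
import secrets

def fisher_yates_shuffle(items):
    """Uniformly shuffle a question list in place (Fisher-Yates / Knuth)."""
    for i in range(len(items) - 1, 0, -1):
        j = secrets.randbelow(i + 1)     # unbiased random index in [0, i]
        items[i], items[j] = items[j], items[i]
    return items

# Hypothetical question bank; each examinee gets an independently shuffled order.
question_bank = [f"Q{n:03d}" for n in range(1, 21)]
print(fisher_yates_shuffle(question_bank[:]))
```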

  20. 1g Model Tests with Foundations in Sand

    DEFF Research Database (Denmark)

    Krabbenhøft, Sven; Damkilde, Lars; Clausen, Johan

    2010-01-01

    This paper presents the results of a series of 1g model tests with both a circular and a strip foundation on dense sand. The test results have been compared with the results from finite element calculations based on a non-linear Mohr-Coulomb yield criterion taking into account the dependence

  1. Aligned rank tests for the linear model with heteroscedastic errors

    NARCIS (Netherlands)

    Albers, Willem/Wim; Akritas, Michael G.

    1993-01-01

    We consider the problem of testing subhypotheses in a heteroscedastic linear regression model. The proposed test statistics are based on the ranks of scaled residuals obtained under the null hypothesis. Any estimator that is √n-consistent under the null hypothesis can be used to form the residuals.

  2. Early Response to treatment in Eating Disorders: A Systematic Review and a Diagnostic Test Accuracy Meta-Analysis.

    Science.gov (United States)

    Nazar, Bruno Palazzo; Gregor, Louise Kathrine; Albano, Gaia; Marchica, Angelo; Coco, Gianluca Lo; Cardi, Valentina; Treasure, Janet

    2017-03-01

    Early response to eating disorders treatment is thought to predict a later favourable outcome. A systematic review of the literature and meta-analyses examined the robustness of this concept. The criteria used across studies to define early response were summarised following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Diagnostic Test Accuracy methodology was used to estimate the size of the effect. Findings from 24 studies were synthesized and data from 14 studies were included in the meta-analysis. In Anorexia Nervosa, the odds ratio of early response predicting remission was 4.85(95%CI: 2.94-8.01) and the summary Area Under the Curve (AUC) = .77. In Bulimia Nervosa, the odds ratio was 2.75(95%CI:1.24-6.09) and AUC = .67. For Binge Eating Disorder, the odds ratio was 5.01(95%CI: 3.38-7.42) and AUC = .71. Early behaviour change accurately predicts later symptom remission for Anorexia Nervosa and Binge Eating Disorder but there is less predictive accuracy for Bulimia Nervosa. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.
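
    As a simple illustration of pooling study-level effects, the sketch below applies generic inverse-variance (fixed-effect) pooling to invented 2x2 counts of early response versus remission; the review's actual diagnostic test accuracy meta-analysis uses more elaborate bivariate models.

```python
import numpy as np

# Each row is one hypothetical study.  Columns: early responders remitted,
# early responders not remitted, non-early responders remitted, not remitted.
studies = np.array([[30, 10, 15, 25],
                    [22,  8, 12, 20],
                    [40, 15, 18, 30]], dtype=float)

a, b, c, d = studies.T
log_or = np.log((a * d) / (b * c))           # study-level log odds ratios
var = 1 / a + 1 / b + 1 / c + 1 / d          # approximate variances
w = 1 / var
pooled = np.sum(w * log_or) / np.sum(w)      # inverse-variance pooled estimate
se = np.sqrt(1 / np.sum(w))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```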

  3. Model-Driven Test Generation of Distributed Systems

    Science.gov (United States)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin

    2012-01-01

    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.

  4. A systematic review of the diagnostic accuracy of automated tests for cognitive impairment.

    Science.gov (United States)

    Aslam, Rabeea'h W; Bates, Vickie; Dundar, Yenal; Hounsome, Juliet; Richardson, Marty; Krishan, Ashma; Dickson, Rumona; Boland, Angela; Fisher, Joanne; Robinson, Louise; Sikdar, Sudip

    2018-04-01

    The aim of this review is to determine whether automated computerised tests accurately identify patients with progressive cognitive impairment and, if so, to investigate their role in monitoring disease progression and/or response to treatment. Six electronic databases (Medline, Embase, Cochrane, Institute for Scientific Information, PsycINFO, and ProQuest) were searched from January 2005 to August 2015 to identify papers for inclusion. Studies assessing the diagnostic accuracy of automated computerised tests for mild cognitive impairment (MCI) and early dementia against a reference standard were included. Where possible, sensitivity, specificity, positive predictive value, negative predictive value, and likelihood ratios were calculated. The Quality Assessment of Diagnostic Accuracy Studies tool was used to assess risk of bias. Sixteen studies assessing 11 diagnostic tools for MCI and early dementia were included. No studies were eligible for inclusion in the review of tools for monitoring progressive disease and response to treatment. The overall quality of the studies was good. However, the wide range of tests assessed and the non-standardised reporting of diagnostic accuracy outcomes meant that statistical analysis was not possible. Some tests have shown promising results for identifying MCI and early dementia. However, concerns over small sample sizes, lack of replicability of studies, and lack of evidence available make it difficult to make recommendations on the clinical use of the computerised tests for diagnosing, monitoring progression, and treatment response for MCI and early dementia. Research is required to establish stable cut-off points for automated computerised tests used to diagnose patients with MCI or early dementia. © 2018 The Authors. International Journal of Geriatric Psychiatry Published by John Wiley & Sons Ltd.

  5. Intra- and inter-rater reliability of movement and palpation tests in patients with neck pain: A systematic review.

    Science.gov (United States)

    Jonsson, Anders; Rasmussen-Barr, Eva

    2018-03-01

    Neck pain is common and often becomes chronic. Various clinical tests of the cervical spine are used to direct and evaluate treatment. This systematic review aimed to identify studies examining the intra- and/or interrater reliability of tests used in clinical examination of patients with neck pain. A database search up to April 2016 was conducted in PubMed, CINAHL, and AMED. The Quality Appraisal of Reliability Studies Checklist (QAREL) was used to assess risk of bias. Eleven studies were included, comprising tests of active and passive movement and pain evaluating participants with ongoing neck pain. One study was assessed with a low risk of bias, three with medium risk, while the rest were assessed with high risk of bias. The results showed differing reliabilities for the included tests ranging from poor to almost perfect. In conclusion, active movement and pain for pain or mobility overall presented acceptable to very good reliability (Kappa >0.40); while passive intervertebral tests had lower Kappa values, suggesting poor reliability. It may be a coincidence that the studies indicating very good reliability tended to be of higher quality (low to moderate risk of bias), while studies finding poor reliability tended to be of lower quality (high risk of bias). Regardless, the current recommendation from this review would suggest the clinical use of tests with acceptable reliability and avoiding the use of tests that have been shown to not be reliable. Finally, it is critical that all future reliability studies are of higher quality with low risk of bias.

  6. Numerical Well Testing Interpretation Model and Applications in Crossflow Double-Layer Reservoirs by Polymer Flooding

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2014-01-01

    Full Text Available This work presents a numerical well testing interpretation model and analysis techniques to evaluate the formation by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs by polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effects, and skin factors. The type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from the low permeability layer to the high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs by polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparing the interpreted permeability with the initial layered permeability before polymer flooding. Comparison of the analysis of the numerical solution based on the flow mechanism with observed polymer flooding field test data highlights the potential of this interpretation method for formation evaluation and enhanced oil recovery (EOR).

  7. What Makes Hydrologic Models Differ? Using SUMMA to Systematically Explore Model Uncertainty and Error

    Science.gov (United States)

    Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.

    2017-12-01

    Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of

  8. Supervised and unsupervised self-testing for HIV in high- and low-risk populations: a systematic review.

    Science.gov (United States)

    Pant Pai, Nitika; Sharma, Jigyasa; Shivkumar, Sushmita; Pillay, Sabrina; Vadnais, Caroline; Joseph, Lawrence; Dheda, Keertan; Peeling, Rosanna W

    2013-01-01

    Stigma, discrimination, lack of privacy, and long waiting times partly explain why six out of ten individuals living with HIV do not access facility-based testing. By circumventing these barriers, self-testing offers potential for more people to know their sero-status. Recent approval of an in-home HIV self test in the US has sparked self-testing initiatives, yet data on acceptability, feasibility, and linkages to care are limited. We systematically reviewed evidence on supervised (self-testing and counselling aided by a health care professional) and unsupervised (performed by self-tester with access to phone/internet counselling) self-testing strategies. Seven databases (Medline [via PubMed], Biosis, PsycINFO, Cinahl, African Medicus, LILACS, and EMBASE) and conference abstracts of six major HIV/sexually transmitted infections conferences were searched from 1st January 2000-30th October 2012. 1,221 citations were identified and 21 studies included for review. Seven studies evaluated an unsupervised strategy and 14 evaluated a supervised strategy. For both strategies, data on acceptability (range: 74%-96%), preference (range: 61%-91%), and partner self-testing (range: 80%-97%) were high. A high specificity (range: 99.8%-100%) was observed for both strategies, while a lower sensitivity was reported in the unsupervised (range: 92.9%-100%; one study) versus supervised (range: 97.4%-97.9%; three studies) strategy. Regarding feasibility of linkage to counselling and care, 96% (n = 102/106) of individuals testing positive for HIV stated they would seek post-test counselling (unsupervised strategy, one study). No extreme adverse events were noted. The majority of data (n = 11,019/12,402 individuals, 89%) were from high-income settings and 71% (n = 15/21) of studies were cross-sectional in design, thus limiting our analysis. Both supervised and unsupervised testing strategies were highly acceptable, preferred, and more likely to result in partner self-testing. However, no

  9. The microelectronics and photonics test bed (MPTB) space, ground test and modeling experiments

    International Nuclear Information System (INIS)

    Campbell, A.

    1999-01-01

    This paper is an overview of the MPTB (microelectronics and photonics test bed) experiment, a combination of a space experiment, ground test and modeling programs looking at the response of advanced electronic and photonic technologies to the natural radiation environment of space. (author)

  10. Health Belief Model Scale for Cervical Cancer and Pap Smear Test: psychometric testing.

    Science.gov (United States)

    Guvenc, Gulten; Akyuz, Aygul; Açikel, Cengiz Han

    2011-02-01

    This study is a report of the development and psychometric testing of the Health Belief Model Scale for Cervical Cancer and the Pap Smear Test. While the Champion Health Belief Model scales have been tested extensively for breast cancer and breast cancer screening, evaluation of these scales in explaining the beliefs of women with regard to cervical cancer and the Pap Smear Test has only received limited attention. This methodological research was carried out in Turkey in 2007. The data were collected from 237 randomly selected women who met the criteria for inclusion and agreed to participate in this study. The Champion Health Belief Model scales were translated into Turkish, adapted for cervical cancer, validated by professional experts, translated back into English and pilot-tested. Factor analysis yielded five factors: Pap smear benefits and health motivation, Pap smear barriers, seriousness, susceptibility and health motivation. Cronbach's alpha reliability coefficients for the five subscales ranged from 0·62 to 0·86, and test-retest reliability coefficients ranged from 0·79 to 0·87 for the subscales. The Health Belief Model Scale for Cervical Cancer and the Pap Smear Test was found to be a valid and reliable tool in assessing the women's health beliefs. Understanding the beliefs of women in respect of cervical cancer and the Pap Smear Test will help healthcare professionals to develop more effective cervical cancer screening programmes. © 2010 Blackwell Publishing Ltd.
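
    For readers unfamiliar with the reliability statistic reported above, Cronbach's alpha for a subscale is computed from the item variances and the variance of the summed scale. A minimal Python sketch with made-up item scores (not the study's data) is:

        # Cronbach's alpha for one subscale from an items-by-respondents matrix.
        # The scores below are invented for illustration only.
        import numpy as np

        items = np.array([   # rows = respondents, columns = items of one subscale
            [4, 5, 4, 3],
            [2, 3, 2, 2],
            [5, 5, 4, 5],
            [3, 4, 3, 3],
            [1, 2, 2, 1],
        ])

        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
        alpha = k / (k - 1) * (1 - item_vars / total_var)
        print(f"Cronbach's alpha = {alpha:.2f}")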

  11. Towards Universal Voluntary HIV Testing and Counselling: A Systematic Review and Meta-Analysis of Community-Based Approaches

    Science.gov (United States)

    Suthar, Amitabh B.; Ford, Nathan; Bachanas, Pamela J.; Wong, Vincent J.; Rajan, Jay S.; Saltzman, Alex K.; Ajose, Olawale; Fakoya, Ade O.; Granich, Reuben M.; Negussie, Eyerusalem K.; Baggaley, Rachel C.

    2013-01-01

    Background Effective national and global HIV responses require a significant expansion of HIV testing and counselling (HTC) to expand access to prevention and care. Facility-based HTC, while essential, is unlikely to meet national and global targets on its own. This article systematically reviews the evidence for community-based HTC. Methods and Findings PubMed was searched on 4 March 2013, clinical trial registries were searched on 3 September 2012, and Embase and the World Health Organization Global Index Medicus were searched on 10 April 2012 for studies including community-based HTC (i.e., HTC outside of health facilities). Randomised controlled trials, and observational studies were eligible if they included a community-based testing approach and reported one or more of the following outcomes: uptake, proportion receiving their first HIV test, CD4 value at diagnosis, linkage to care, HIV positivity rate, HTC coverage, HIV incidence, or cost per person tested (outcomes are defined fully in the text). The following community-based HTC approaches were reviewed: (1) door-to-door testing (systematically offering HTC to homes in a catchment area), (2) mobile testing for the general population (offering HTC via a mobile HTC service), (3) index testing (offering HTC to household members of people with HIV and persons who may have been exposed to HIV), (4) mobile testing for men who have sex with men, (5) mobile testing for people who inject drugs, (6) mobile testing for female sex workers, (7) mobile testing for adolescents, (8) self-testing, (9) workplace HTC, (10) church-based HTC, and (11) school-based HTC. The Newcastle-Ottawa Quality Assessment Scale and the Cochrane Collaboration's “risk of bias” tool were used to assess the risk of bias in studies with a comparator arm included in pooled estimates.  117 studies, including 864,651 participants completing HTC, met the inclusion criteria. The percentage of people offered community-based HTC who accepted HTC

  12. Health literacy and public health: A systematic review and integration of definitions and models

    Science.gov (United States)

    2012-01-01

    Background Health literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy. Methods A systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model. Results The review resulted in 17 definitions of health literacy and 12 conceptual models. Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion setting, respectively. Conclusions Based upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings. PMID:22276600

  13. Health literacy and public health: A systematic review and integration of definitions and models

    LENUS (Irish Health Repository)

    Sorensen, Kristine

    2012-01-25

    Abstract Background Health literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy. Methods A systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model. Results The review resulted in 17 definitions of health literacy and 12 conceptual models. Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion setting, respectively. Conclusions Based upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings.

  14. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    Science.gov (United States)

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.
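
    As one concrete example of the automation mentioned above, a single calibration step such as kinetic parameter estimation can be cast as a nonlinear least-squares problem. The sketch below fits hypothetical Monod growth-rate data (invented values, not taken from any of the protocols) with SciPy:

        # Hypothetical sketch of automating one calibration step: estimating Monod
        # kinetic parameters (mu_max, Ks) from measured specific growth rates.
        import numpy as np
        from scipy.optimize import curve_fit

        def monod(S, mu_max, Ks):
            return mu_max * S / (Ks + S)

        S_obs = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # substrate, mg/L
        mu_obs = np.array([0.9, 1.6, 2.4, 3.1, 3.6, 3.9])        # growth rate, 1/d

        (mu_max_hat, Ks_hat), _ = curve_fit(monod, S_obs, mu_obs, p0=[4.0, 20.0])
        print(f"mu_max ~ {mu_max_hat:.2f} 1/d, Ks ~ {Ks_hat:.1f} mg/L")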

  15. Testing and reference model analysis of FTTH system

    Science.gov (United States)

    Feng, Xiancheng; Cui, Wanlong; Chen, Ying

    2009-08-01

    With the rapid development of the Internet and broadband access networks, technologies such as xDSL, FTTx+LAN and WLAN are finding wider application, and new network services emerge in an endless stream, in particular network gaming, meeting TV and video on demand. FTTH supports all present and future services with enormous bandwidth, including traditional telecommunication, data and TV services as well as future digital TV and VOD. With its huge bandwidth, FTTH is regarded as the final solution for the broadband network and the ultimate goal of optical access network development. Fiber to the Home (FTTH) will be the goal of broadband access over telecommunications cable. In accordance with the development trend of telecommunication services, to enhance the capacity of the integrated access network and to achieve triple play (voice, data, image), the optical fiber can be extended to an FTTH system at the end user by using EPON technology, building on the existing Fiber to the Curb (FTTC), Fiber to the Zone (FTTZ) and Fiber to the Building (FTTB) user optical cable networks. The article first introduces the basic components of an FTTH system; it then explains the reference model and reference points for testing the FTTH system. Finally, using the test connection diagram, the testing process and the expected results, it analyses SNI interface testing, PON interface testing, Ethernet performance testing, UNI interface testing, Ethernet functional testing, PON functional testing, equipment functional testing, telephone functional testing, operational support capability testing and other tests of the FTTH system. ...

  16. Feasibility and Safety of Cardiopulmonary Exercise Testing in Multiple Sclerosis: A Systematic Review

    NARCIS (Netherlands)

    van den Akker, L.E.; Heine, M.; van der Veldt, N.; Dekker, J.; de Groot, V.; Beckerman, H.

    2015-01-01

    Objective To investigate the feasibility and safety of cardiopulmonary exercise testing (CPET) in patients with multiple sclerosis (MS). Data Sources PubMed, EMBASE, CINAHL, SPORTDiscus, PsycINFO, ERIC, and the Psychology and Behavioral Sciences Collection were searched up to October 2014.

  17. Feasibility and Safety of Cardiopulmonary Exercise Testing in Multiple Sclerosis: A Systematic Review

    NARCIS (Netherlands)

    Van Den Akker, Lizanne Eva; Heine, M; van der Veldt, Nikki; Dekker, Joost; de Groot, Vincent; Beckerman, Heleen

    2015-01-01

    OBJECTIVE: To investigate the feasibility and safety of cardiopulmonary exercise testing (CPET) in patients with multiple sclerosis (MS). DATA SOURCES: PubMed, EMBASE, CINAHL, SPORTDiscus, PsycINFO, ERIC, and the Psychology and Behavioral Sciences Collection were searched up to October 2014.

  18. Value of physical tests in diagnosing cervical radiculopathy : a systematic review

    NARCIS (Netherlands)

    Thoomes, Erik J; van Geest, Sarita; van der Windt, Danielle A; Falla, Deborah; Verhagen, Arianne P; Koes, Bart W; Thoomes-de Graaf, Marloes; Kuijper, Barbara; Scholten-Peeters, Wendy Gm; Vleggeert-Lankamp, Carmen L

    Background context In clinical practice, the diagnosis of cervical radiculopathy is based on information from the patient history, physical examination and diagnostic imaging. Various physical tests may be performed, but their diagnostic accuracy is unknown. Purpose To summarize and update the

  19. A Practical Methodology for the Systematic Development of Multiple Choice Tests.

    Science.gov (United States)

    Blumberg, Phyllis; Felner, Joel

    Using Guttman's facet design analysis, four parallel forms of a multiple-choice test were developed. A mapping sentence, logically representing the universe of content of a basic cardiology course, specified the facets of the course and the semantic structural units linking them. The facets were: cognitive processes, disease priority, specific…

  20. Wind Loads on Ships and Offshore Structures Determined by Model Tests, CFD and Full-Scale Measurements

    DEFF Research Database (Denmark)

    Aage, Christian

    1998-01-01

    Wind loads on ships and offshore structures have until recently been determined only by model tests, or by statistical methods based on model tests. With the development of Computational Fluid Dynamics (CFD) there is now a realistic computational alternative. In principle, both methods should be validated systematically against full-scale measurements, but due to the great practical difficulties involved, this is almost never done. In this investigation, wind loads on a seagoing ferry and on a semisubmersible platform have been determined by model tests and by CFD. On the ferry, full-scale measurements have been carried out as well. The CFD method also offers the possibility of a computational estimate of scale effects related to wind tunnel model testing. An example of such an estimate on the ferry is discussed. This work has been published in more detail in Proceedings of BOSS'97, Aage et al...

  1. A systematic procedure for the incorporation of common cause events into risk and reliability models

    International Nuclear Information System (INIS)

    Fleming, K.N.; Mosleh, A.; Deremer, R.K.

    1986-01-01

    Common cause events are an important class of dependent events with respect to their contribution to system unavailability and to plant risk. Unfortunately, these events have not been treated with any kind of consistency in applied risk studies over the past decade. Many probabilistic risk assessments (PRAs) have not included these events at all, and those that have did not employ the kind of systematic procedures that are needed to achieve consistency, accuracy, and credibility in this area of PRA methodology. In this paper, the authors report on the progress recently made in the development of a systematic approach for incorporating common cause events into applied risk and reliability evaluations. This approach takes advantage of experience from recently completed PRAs and is the result of a project, sponsored by the Electric Power Research Institute (EPRI), in which procedures for dependent events analysis are being developed. Described in this paper is a general framework for system-level common cause failure (CCF) analysis and its application to a three-train auxiliary feedwater system. Within this general framework, three parametric CCF models are compared, including the basic parameter (BP), multiple Greek letter (MGL), and binomial failure rate (BFR) models. Pitfalls of not following the recommended procedure are discussed, and some old issues, such as the benefits of redundancy and diversity, are reexamined. (orig.)
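
    To make the parametric CCF models concrete, the sketch below evaluates one common textbook form of the multiple Greek letter (MGL) parameterization for a three-train system that fails only when all three trains fail; the beta and gamma values are illustrative assumptions, not results from the paper.

        # Hedged sketch of the MGL parameterization for a three-train system
        # (common textbook form; parameter values are invented for illustration).
        Qt = 1e-3      # total failure probability per train
        beta = 0.10    # fraction of failures shared with at least one other train
        gamma = 0.50   # fraction of shared failures involving all three trains

        Q1 = (1 - beta) * Qt                 # independent single-train failure
        Q2 = 0.5 * beta * (1 - gamma) * Qt   # failure of a specific pair of trains
        Q3 = beta * gamma * Qt               # common cause failure of all three trains

        # System fails via three independent failures, an independent failure plus
        # one of the three possible pair failures, or the triple common cause event.
        Q_system = Q1**3 + 3 * Q1 * Q2 + Q3
        print(f"approximate system unavailability = {Q_system:.2e}")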

  2. Mathematical Models in Humanitarian Supply Chain Management: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Muhammad Salman Habib

    2016-01-01

    In the past decade the humanitarian supply chain (HSC) has attracted the attention of researchers due to the increasing frequency of disasters. The uncertainty in time, location, and severity of a disaster during the predisaster phase and poor conditions of available infrastructure during the postdisaster phase make HSC operations difficult to handle. In order to overcome the difficulties during these phases, we need to assure that HSC operations are designed in an efficient manner to minimize human and economic losses. In recent times, several mathematical optimization techniques and algorithms have been developed to increase the efficiency of HSC operations. These techniques and algorithms developed for the field of HSC motivate the need for a systematic literature review. Owing to the importance of mathematical modelling techniques, this paper presents a review of the mathematical contributions made in the last decade in the field of HSC. A systematic literature review methodology is used for this paper due to its transparent procedure. There are two objectives of this study: the first one is to conduct an up-to-date survey of mathematical models developed in the HSC area and the second one is to highlight the potential research areas which require the attention of researchers.

  3. A comprehensive model for executing knowledge management audits in organizations: a systematic review.

    Science.gov (United States)

    Shahmoradi, Leila; Ahmadi, Maryam; Sadoughi, Farahnaz; Piri, Zakieh; Gohari, Mahmood Reza

    2015-01-01

    A knowledge management audit (KMA) is the first phase in knowledge management implementation. Incomplete or incomprehensive execution of the KMA has caused many knowledge management programs to fail. A study was undertaken to investigate how KMAs are performed systematically in organizations and present a comprehensive model for performing KMAs based on a systematic review. Studies were identified by searching electronic databases such as Emerald, LISA, and the Cochrane library and e-journals such as the Oxford Journal and hand searching of printed journals, theses, and books in the Tehran University of Medical Sciences digital library. The sources used in this study consisted of studies available through the digital library of the Tehran University of Medical Sciences that were published between 2000 and 2013, including both Persian- and English-language sources, as well as articles explaining the steps involved in performing a KMA. A comprehensive model for KMAs is presented in this study. To successfully execute a KMA, it is necessary to perform the appropriate preliminary activities in relation to the knowledge management infrastructure, determine the knowledge management situation, and analyze and use the available data on this situation.

  4. A Preliminary Field Test of an Employee Work Passion Model

    Science.gov (United States)

    Zigarmi, Drea; Nimon, Kim; Houson, Dobie; Witt, David; Diehl, Jim

    2011-01-01

    Four dimensions of a process model for the formulation of employee work passion, derived from Zigarmi, Nimon, Houson, Witt, and Diehl (2009), were tested in a field setting. A total of 447 employees completed questionnaires that assessed the internal elements of the model in a corporate work environment. Data from the measurements of work affect,…

  5. Testing static tradeoff theory against pecking order models of capital ...

    African Journals Online (AJOL)

    We test two models with the purpose of finding the best empirical explanation for corporate financing choice of a cross section of 27 Nigerian quoted companies. The models were developed to represent the Static tradeoff Theory and the Pecking order Theory of capital structure with a view to make comparison between ...

  6. Unit-Weighted Scales Imply Models that Should Be Tested!

    Science.gov (United States)

    Beauducel, Andre; Leue, Anja

    2013-01-01

    In several studies unit-weighted sum scales based on the unweighted sum of items are derived from the pattern of salient loadings in confirmatory factor analysis. The problem of this procedure is that the unit-weighted sum scales imply a model other than the initially tested confirmatory factor model. In consequence, it remains generally unknown…

  7. Testing Affine Term Structure Models in Case of Transaction Costs

    NARCIS (Netherlands)

    Driessen, J.J.A.G.; Melenberg, B.; Nijman, T.E.

    1999-01-01

    In this paper we empirically analyze the impact of transaction costs on the performance of affine interest rate models. We test the implied (no arbitrage) Euler restrictions, and we calculate the specification error bound of Hansen and Jagannathan to measure the extent to which a model is

  8. Testing static tradeoff theiry against pecking order models of capital ...

    African Journals Online (AJOL)

    We test two models with the purpose of finding the best empirical explanation for corporate financing choice of a cross section of 27 Nigerian quoted companies. The models were developed to represent the Static tradeoff Theory and the Pecking order Theory of capital structure with a view to make comparison between ...

  9. Data Modeling for Measurements in the Metrology and Testing Fields

    CERN Document Server

    Pavese, Franco

    2009-01-01

    Offers a comprehensive set of modeling methods for data and uncertainty analysis. This work develops methods and computational tools to address general models that arise in practice, allowing for a more valid treatment of calibration and test data and providing an understanding of complex situations in measurement science

  10. Animal models for testing anti-prion drugs.

    Science.gov (United States)

    Fernández-Borges, Natalia; Elezgarai, Saioa R; Eraña, Hasier; Castilla, Joaquín

    2013-01-01

    Prion diseases belong to a group of fatal infectious diseases with no effective therapies available. Throughout the last 35 years, fewer than 50 different drugs have been tested in different experimental animal models without encouraging results. An important limitation when searching for new drugs is the availability of appropriate models of the disease. The three different possible origins of prion diseases require the existence of different animal models for testing anti-prion compounds. Wild-type mice, over-expressing transgenic mice and other more sophisticated animal models have been used to evaluate a diversity of compounds, some of which had previously been tested in different in vitro experimental models. The complexity of prion diseases will require more pre-screening studies, reliable sporadic (or spontaneous) animal models and accurate chemical modifications of the selected compounds before having an effective therapy against human prion diseases. This review is intended to present the most relevant animal models that have been used in the search for new anti-prion therapies and to describe some possible procedures when handling chemical compounds presumed to have anti-prion activity prior to testing them in animal models.

  11. Modelling of wetting tests for a natural pyroclastic soil

    Directory of Open Access Journals (Sweden)

    Moscariello Mariagiovanna

    2016-01-01

    The so-called wetting-induced collapse is one of the most common problems associated with unsaturated soils. This paper applies the Modified Pastor-Zienkiewicz model (MPZ) to analyse the wetting behaviour of undisturbed specimens of an unsaturated air-fall volcanic (pyroclastic) soil originating from the explosive activity of the Somma-Vesuvius volcano (Southern Italy). Standard oedometric tests, suction-controlled oedometric tests and suction-controlled isotropic tests are considered. The results of the constitutive modelling show a satisfactory capability of the MPZ to simulate the variations of soil void ratio upon wetting, with negligible differences between the measured and the computed values.

  12. Modelling, simulation and visualisation for electromagnetic non-destructive testing

    International Nuclear Information System (INIS)

    Ilham Mukriz Zainal Abidin; Abdul Razak Hamzah

    2010-01-01

    This paper reviews the state of the art and recent developments in modelling, simulation and visualisation for the eddy current non-destructive testing (NDT) technique. Simulation and visualisation have aided the design and development of electromagnetic sensors and of imaging techniques and systems for electromagnetic non-destructive testing (ENDT), as well as feature extraction and inverse problems for quantitative non-destructive testing (QNDT). After reviewing the state of the art of electromagnetic modelling and simulation, case studies of research and development in the eddy current NDT technique via magnetic field mapping and thermography for eddy current distribution are discussed. (author)

  13. Testing Process Factor Analysis Models Using the Parametric Bootstrap.

    Science.gov (United States)

    Zhang, Guangjian

    2018-01-01

    Process factor analysis (PFA) is a latent variable model for intensive longitudinal data. It combines P-technique factor analysis and time series analysis. A goodness-of-fit test for PFA is currently unavailable. In this paper, we propose a parametric bootstrap method for assessing model fit in PFA. We illustrate the test with an empirical data set in which 22 participants rated their affect every day over a period of 90 days. We also explore the Type I error and power of the parametric bootstrap test with simulated data.
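
    The general logic of a parametric bootstrap fit test can be sketched independently of PFA: fit the working model, simulate replicate data sets from the fitted model, refit and recompute the fit statistic, and compare the observed statistic with its bootstrap distribution. The Python sketch below applies this recipe to a deliberately simple model (a fitted normal distribution with a Kolmogorov-Smirnov statistic) rather than to a PFA model, and the data are simulated:

        # Generic parametric bootstrap p-value for a goodness-of-fit statistic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.gamma(shape=2.0, scale=1.0, size=90)   # made-up "observed" series

        # Step 1: fit the working model and compute the observed fit statistic.
        mu, sigma = data.mean(), data.std(ddof=1)
        stat_obs = stats.kstest(data, "norm", args=(mu, sigma)).statistic

        # Step 2: simulate from the fitted model, refit, recompute the statistic.
        B = 1000
        stat_boot = np.empty(B)
        for b in range(B):
            sim = rng.normal(mu, sigma, size=data.size)
            m, s = sim.mean(), sim.std(ddof=1)
            stat_boot[b] = stats.kstest(sim, "norm", args=(m, s)).statistic

        # Step 3: bootstrap p-value = fraction of simulated statistics at least
        # as extreme as the observed one.
        p_value = (stat_boot >= stat_obs).mean()
        print(f"parametric bootstrap p-value = {p_value:.3f}")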

  14. Testing and Modeling of Mechanical Characteristics of Resistance Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2003-01-01

    The dynamic mechanical response of a resistance welding machine is very important to weld quality in resistance welding, especially in projection welding when collapse or deformation of the work piece occurs. It is mainly governed by the mechanical parameters of the machine. In this paper, a mathematical model for characterizing the dynamic mechanical responses of the machine and a special test set-up, called the breaking test set-up, are developed. Based on the model and the test results, the mechanical parameters of the machine are determined, including the equivalent mass, damping coefficient, and stiffness...
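
    The equivalent mass, damping coefficient and stiffness mentioned above suggest a single-degree-of-freedom representation of the moving machine head. A hedged sketch of such a model, with illustrative parameter values rather than those identified in the paper, is:

        # Single-degree-of-freedom sketch: m*x'' + c*x' + k*x = F(t), integrated
        # numerically. Parameter values are invented for illustration only.
        from scipy.integrate import solve_ivp

        m, c, k = 50.0, 800.0, 4.0e5   # kg, N*s/m, N/m
        F0 = 2000.0                    # N, sudden loss of supporting force (step input)

        def rhs(t, y):
            x, v = y
            return [v, (F0 - c * v - k * x) / m]

        sol = solve_ivp(rhs, [0.0, 0.1], [0.0, 0.0], max_step=1e-4)
        print(f"electrode displacement after 0.1 s ~ {sol.y[0, -1] * 1e3:.2f} mm")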

  15. Testing and modelling autoregressive conditional heteroskedasticity of streamflow processes

    Directory of Open Access Journals (Sweden)

    W. Wang

    2005-01-01

    Conventional streamflow models operate under the assumption of constant variance or season-dependent variances (e.g. ARMA (AutoRegressive Moving Average) models for deseasonalized streamflow series and PARMA (Periodic AutoRegressive Moving Average) models for seasonal streamflow series). However, with the McLeod-Li test and Engle's Lagrange multiplier test, clear evidence is found for the existence of autoregressive conditional heteroskedasticity (i.e. the ARCH (AutoRegressive Conditional Heteroskedasticity) effect), a nonlinear phenomenon of the variance behaviour, in the residual series from linear models fitted to daily and monthly streamflow processes of the upper Yellow River, China. It is shown that the major cause of the ARCH effect is the seasonal variation in variance of the residual series. However, while the seasonal variation in variance can fully explain the ARCH effect for monthly streamflow, it is only a partial explanation for daily flow. It is also shown that while the periodic autoregressive moving average model is adequate in modelling monthly flows, no model is adequate in modelling daily streamflow processes because none of the conventional time series models takes the seasonal variation in variance, as well as the ARCH effect in the residuals, into account. Therefore, an ARMA-GARCH (Generalized AutoRegressive Conditional Heteroskedasticity) error model is proposed to capture the ARCH effect present in daily streamflow series, as well as to preserve the seasonal variation in variance in the residuals. The ARMA-GARCH error model combines an ARMA model for modelling the mean behaviour and a GARCH model for modelling the variance behaviour of the residuals from the ARMA model. Since the GARCH model is not widely used in statistical hydrology, the work can be a useful addition in terms of the statistical modelling of daily streamflow processes for the hydrological community.
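
    The two-stage structure described above can be sketched in Python using statsmodels and the arch package: fit an ARMA model for the mean, test its residuals for ARCH effects with Engle's Lagrange multiplier test, and then fit a GARCH(1,1) model to those residuals. The series below is simulated, not the Yellow River data, and the model orders are illustrative:

        # Two-stage ARMA-GARCH sketch on a simulated AR(1) series.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.stats.diagnostic import het_arch
        from arch import arch_model

        rng = np.random.default_rng(1)
        n = 500
        e = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.6 * y[t - 1] + e[t]          # toy stationary series

        mean_fit = ARIMA(y, order=(1, 0, 1)).fit()  # ARMA(1,1) for the mean behaviour
        resid = mean_fit.resid

        lm_stat, lm_pvalue, _, _ = het_arch(resid, nlags=12)  # Engle's LM test
        print(f"ARCH LM test: stat={lm_stat:.1f}, p={lm_pvalue:.3f}")

        # GARCH(1,1) for the variance behaviour of the ARMA residuals.
        garch_fit = arch_model(resid, vol="GARCH", p=1, q=1, mean="Zero").fit(disp="off")
        print(garch_fit.params)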

  16. Estrogen receptor testing and 10-year mortality from breast cancer: A model for determining testing strategy

    Directory of Open Access Journals (Sweden)

    Christopher Naugler

    2012-01-01

    Background: The use of adjuvant tamoxifen therapy in the treatment of estrogen receptor (ER)-expressing breast carcinomas represents a major advance in personalized cancer treatment. Because there is no benefit (and indeed there is increased morbidity and mortality) associated with the use of tamoxifen therapy in ER-negative breast cancer, its use is restricted to women with ER-expressing cancers. However, correctly classifying cancers as ER positive or negative has been challenging given the high reported false negative test rates for ER expression in surgical specimens. In this paper I model practice recommendations using published information from clinical trials to address the question of whether there is a false negative test rate above which it is more efficacious to forgo ER testing and instead treat all patients with tamoxifen regardless of ER test results. Methods: I used data from randomized clinical trials to model two different hypothetical treatment strategies: (1) the current strategy of treating only ER-positive women with tamoxifen and (2) an alternative strategy where all women are treated with tamoxifen regardless of ER test results. The variables used in the model are literature-derived survival rates of the different combinations of ER positivity and treatment with tamoxifen, varying true ER positivity rates and varying false negative ER testing rates. The outcome variable was hypothetical 10-year survival. Results: The model predicted that there will be a range of true ER rates and false negative test rates above which it would be more efficacious to treat all women with breast cancer with tamoxifen and forgo ER testing. This situation occurred with high true positive ER rates and false negative ER test rates in the range of 20-30%. Conclusions: It is hoped that this model will provide an example of the potential importance of diagnostic error on clinical outcomes and furthermore will give an example of how the effect of that
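
    The comparison of the two strategies reduces to an expected-survival calculation over the combinations of ER status, test result and treatment. The sketch below reproduces that arithmetic with placeholder survival probabilities, not the literature-derived values used in the paper, and shows how a crossover point can appear as the false negative rate rises:

        # Expected 10-year survival: "test, then treat ER-positive only" versus
        # "treat all". All survival inputs are placeholders for illustration.
        def expected_survival(p_er_pos, fn_rate,
                              s_pos_tam=0.75,    # ER-positive, treated with tamoxifen
                              s_pos_untam=0.65,  # ER-positive, untreated (missed by test)
                              s_neg_tam=0.55,    # ER-negative, treated (harm, no benefit)
                              s_neg_untam=0.60,  # ER-negative, untreated
                              fp_rate=0.0):      # assume a perfectly specific test
            # Strategy 1: treat only women whose ER test is positive.
            test_first = (p_er_pos * ((1 - fn_rate) * s_pos_tam + fn_rate * s_pos_untam)
                          + (1 - p_er_pos) * (fp_rate * s_neg_tam
                                              + (1 - fp_rate) * s_neg_untam))
            # Strategy 2: treat everyone without testing.
            treat_all = p_er_pos * s_pos_tam + (1 - p_er_pos) * s_neg_tam
            return test_first, treat_all

        for fn in (0.0, 0.1, 0.2, 0.3):
            s1, s2 = expected_survival(p_er_pos=0.8, fn_rate=fn)
            better = "treat all" if s2 > s1 else "test first"
            print(f"FN rate {fn:.0%}: test-first {s1:.3f}, treat-all {s2:.3f} -> {better}")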

  17. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review

    NARCIS (Netherlands)

    Ratter, Julia; Radlinger, Lorenz; Lucas, Cees

    2014-01-01

    Are submaximal and maximal exercise tests reliable, valid and acceptable in people with chronic pain, fibromyalgia and fatigue disorders? Systematic review of studies of the psychometric properties of exercise tests. People older than 18 years with chronic pain, fibromyalgia and chronic fatigue

  18. The Osteoporosis Self-Assessment Tool versus alternative tests for selecting postmenopausal women for bone mineral density assessment: a comparative systematic review of accuracy

    DEFF Research Database (Denmark)

    Rud, B; Hilden, J; Hyldstrup, L

    2008-01-01

    We performed a systematic review of studies comparing the Osteoporosis Self-Assessment Tool (OST) and other tests used to select women for bone mineral density (BMD) assessment. In comparative meta-analyses, we found that the accuracy of OST was similar to other tests that are based on information...

  19. Systematics of nuclear densities, deformations and excitation energies within the context of the generalized rotation-vibration model

    Energy Technology Data Exchange (ETDEWEB)

    Chamon, L.C., E-mail: luiz.chamon@dfn.if.usp.b [Departamento de Fisica Nuclear, Instituto de Fisica da Universidade de Sao Paulo, Caixa Postal 66318, 05315-970, Sao Paulo, SP (Braz