WorldWideScience

Sample records for model testing systematic

  1. Testing flow diversion in animal models: a systematic review.

    Science.gov (United States)

    Fahed, Robert; Raymond, Jean; Ducroux, Célina; Gentric, Jean-Christophe; Salazkin, Igor; Ziegler, Daniela; Gevry, Guylaine; Darsaut, Tim E

    2016-04-01

    Flow diversion (FD) is increasingly used to treat intracranial aneurysms. We sought to systematically review published studies to assess the quality of reporting and summarize the results of FD in various animal models. Databases were searched to retrieve all animal studies on FD from 2000 to 2015. Extracted data included species and aneurysm models, aneurysm and neck dimensions, type of flow diverter, occlusion rates, and complications. Articles were evaluated using a checklist derived from the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Forty-two articles reporting the results of FD in nine different aneurysm models were included. The rabbit elastase-induced aneurysm model was the most commonly used, with 3-month occlusion rates of 73.5% (95%CI [61.9-82.6%]). FD of surgical sidewall aneurysms, constructed in rabbits or canines, resulted in high occlusion rates (100% [65.5-100%]). FD resulted in modest occlusion rates (15.4% [8.9-25.1%]) when tested in six complex canine aneurysm models designed to reproduce more difficult clinical contexts (large necks, bifurcation, or fusiform aneurysms). Adverse events, including branch occlusion, were rarely reported. There were no hemorrhagic complications. Articles complied with 20.8 ± 3.9 of 41 ARRIVE items; only a small number used randomization (3/42 articles [7.1%]) or a control group (13/42 articles [30.9%]). Preclinical studies on FD have shown varied results. Occlusion of elastase-induced aneurysms was common after FD; the model is not challenging, but it is standardized in many laboratories. Failures of FD can be reproduced in less standardized but more challenging surgical canine constructions. The quality of reporting could be improved.

  2. Systematic reviews of diagnostic test accuracy.

    Science.gov (United States)

    Leeflang, Mariska M G; Deeks, Jonathan J; Gatsonis, Constantine; Bossuyt, Patrick M M

    2008-12-16

    More and more systematic reviews of diagnostic test accuracy studies are being published, but they can be methodologically challenging. In this paper, the authors present some of the recent developments in the methodology for conducting systematic reviews of diagnostic test accuracy studies. Restrictive electronic search filters are discouraged, as is the use of summary quality scores. Methods for meta-analysis should take into account the paired nature of the estimates and their dependence on threshold. Authors of these reviews are advised to use the hierarchical summary receiver-operating characteristic or the bivariate model for the data analysis. Challenges that remain are the poor reporting of original diagnostic test accuracy studies and difficulties with the interpretation of the results of diagnostic test accuracy research.

  3. Testing Scientific Software: A Systematic Literature Review.

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M

    2014-10-01

    Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
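
    A classic answer to the oracle problem mentioned above is metamorphic testing: instead of checking a computation against a known exact output, one checks relations that must hold between the outputs of related inputs. A minimal sketch (the `simulate` function is a hypothetical stand-in for a scientific routine, not code from any of the surveyed studies):

```python
import math

def simulate(x):
    # Stand-in for a scientific computation whose exact output
    # is hard to verify directly (the "oracle problem").
    return math.sin(x)

def metamorphic_check(x, tol=1e-12):
    # Metamorphic relation: sin(pi - x) == sin(x).
    # The relation is checkable even when the exact value of
    # simulate(x) has no independent oracle.
    return abs(simulate(math.pi - x) - simulate(x)) < tol

# Run the relation over many inputs instead of comparing
# any single output to a precomputed "correct" answer.
results = [metamorphic_check(0.1 * i) for i in range(100)]
print(all(results))  # a faithful implementation satisfies the relation
```

A fault injected into `simulate` (say, a wrong sign in a series expansion) would typically break the relation for many inputs, which is the practical value of this style of test.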

  4. Systematic Model-in-the-Loop Test of Embedded Control Systems

    Science.gov (United States)

    Krupp, Alexander; Müller, Wolfgang

    Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides the lack of methodical support for natural language formalization, no standardized and accepted means exists for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.

  5. Facilitators and barriers to chlamydia testing in general practice for young people using a theoretical model (COM-B): a systematic review protocol

    Science.gov (United States)

    McDonagh, Lorraine K; Saunders, John M; Cassell, Jackie; Bastaki, Hamad; Hartney, Thomas; Rait, Greta

    2017-01-01

    Introduction Chlamydia is a key health concern with high economic and social costs. There were over 200 000 chlamydia diagnoses made in England in 2015. The burden of chlamydia is greatest among young people, where the highest prevalence rates are found. Annual testing for sexually active young people is recommended; however, many of those at risk do not receive testing. General practice has been identified as an ideal setting for testing, yet efforts to increase testing in this setting have not been effective. One theoretical model which may provide insight into the underpinnings of chlamydia testing is the Capability, Opportunity and Motivation Model of Behaviour (COM-B model). The aims of this systematic review are to: (1) identify barriers and facilitators to chlamydia testing for young people in general practice and (2) use a theoretical model to conduct a behavioural analysis of chlamydia testing behaviour. Methods and analysis Qualitative, quantitative and mixed methods studies published after 2000 will be included. Seven databases (MEDLINE, PubMed, EMBASE, Informit, PsycInfo, Scopus, Web of Science) will be searched to identify peer-reviewed publications which examined barriers and facilitators to chlamydia testing in general practice. Risk of bias will be assessed using the Critical Appraisal Skills Programme. Data regarding study design and key findings will be extracted. The data will be analysed using thematic analysis and the resultant factors will be mapped onto the COM-B model components. All findings will be reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Ethics and dissemination Ethical approval is not required. The results will be disseminated via submission for publication to a peer-reviewed journal when complete and for presentation at national and international conferences.
The review findings will be used to inform the development of interventions to facilitate effective and efficient

  6. Systematic reviews of diagnostic test accuracy

    DEFF Research Database (Denmark)

    Leeflang, Mariska M G; Deeks, Jonathan J; Gatsonis, Constantine

    2008-01-01

    More and more systematic reviews of diagnostic test accuracy studies are being published, but they can be methodologically challenging. In this paper, the authors present some of the recent developments in the methodology for conducting systematic reviews of diagnostic test accuracy studies. Restrictive electronic search filters are discouraged, as is the use of summary quality scores. Methods for meta-analysis should take into account the paired nature of the estimates and their dependence on threshold. Authors of these reviews are advised to use the hierarchical summary receiver...

  7. CCDs at ESO: A Systematic Testing Program

    Science.gov (United States)

    Abbott, T. M. C.; Warmels, R. H.

    ESO currently offers a stable of 12 CCDs for use by visiting astronomers. It is incumbent upon ESO to ensure that these devices perform according to their advertised specifications (Abbott 1994). We describe a systematic, regular testing program for CCDs which is now being applied at La Silla. These tests are designed to expose failures which may not have catastrophic effects but which may compromise observations. The results of these tests are stored in an archive, accessible to visiting astronomers, and will be subject to trend analysis. The tests are integrated in the CCD reduction package of the Munich Image Data Analysis System (ESO-MIDAS).

  8. Systematic Multiscale Modeling of Polymers

    Science.gov (United States)

    Faller, Roland; Huang, David; Bayramoglu, Beste; Moule, Adam

    2011-03-01

    The systematic coarse-graining of heterogeneous soft matter systems is an area of current research. We show how the Iterative Boltzmann Inversion systematically develops models for polymers in different environments. We present the scheme and a few applications. We study polystyrene in various environments and compare the different models from the melt, the solution and polymer brushes to validate accuracy and efficiency. We then apply the technique to a complex system needed as active layer in polymer-based solar cells. Nano-scale morphological information is difficult to obtain experimentally. On the other hand, atomistic computer simulations are only feasible to studying systems not much larger than an exciton diffusion length. Thus, we develop a coarse-grained (CG) simulation model, in which collections of atoms from an atomistic model are mapped onto a smaller number of ``superatoms.'' We study mixtures of poly(3-hexylthiophene) and C60 . By comparing the results of atomistic and CG simulations, we demonstrate that the model, parametrized at one temperature and two mixture compositions, accurately reproduces the system structure at other points of the phase diagram. We use the CG model to characterize the microstructure as a function of polymer:fullerene mole fraction and polymer chain length for systems approaching the scale of photovoltaic devices.
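
    The Iterative Boltzmann Inversion named above refines a coarse-grained pair potential by comparing the radial distribution function (RDF) of the current CG model with the atomistic target, via U_{i+1}(r) = U_i(r) + kT ln(g_i(r)/g_target(r)). A schematic single update step; the Gaussian-bump RDFs are synthetic illustrations, not real polystyrene or fullerene data:

```python
import numpy as np

kT = 1.0  # energy in units of k_B * T
r = np.linspace(0.6, 2.5, 200)

# Synthetic radial distribution functions (illustrative only):
# the target would come from an atomistic reference simulation,
# the current one from the CG model at iteration i.
g_target = 1.0 + 0.5 * np.exp(-((r - 1.0) ** 2) / 0.02)
g_current = 1.0 + 0.8 * np.exp(-((r - 1.0) ** 2) / 0.02)

def ibi_update(U, g_cur, g_tar, kT=1.0):
    # U_{i+1}(r) = U_i(r) + kT * ln(g_i(r) / g_target(r)).
    # Where the CG model is over-structured (g_cur > g_tar) the
    # potential becomes more repulsive, and vice versa.
    return U + kT * np.log(g_cur / g_tar)

U0 = -kT * np.log(g_target)        # Boltzmann-inverted initial guess
U1 = ibi_update(U0, g_current, g_target)
# The over-structured peak region receives a repulsive correction:
print(U1[np.argmax(g_current)] > U0[np.argmax(g_current)])
```

In a real workflow this update is alternated with CG simulations that regenerate g_i(r) until the target RDF is matched.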

  9. Model-Based Security Testing

    CERN Document Server

    Schieferdecker, Ina; Schneider, Martin; 10.4204/EPTCS.80.1

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing,...

  10. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  11. Systematic reviews and meta-analyses of diagnostic test accuracy.

    Science.gov (United States)

    Leeflang, M M G

    2014-02-01

    Systematic reviews of diagnostic test accuracy summarize the accuracy, e.g. the sensitivity and specificity, of diagnostic tests in a systematic and transparent way. The aim of such a review is to investigate whether a test is sufficiently specific or sensitive to fit its role in practice, to compare the accuracy of two or more diagnostic tests, or to investigate where existing variation in results comes from. The search strategy should be broad and preferably fully reported, to enable readers to assess its completeness. Included studies usually have a cross-sectional design in which the tests of interest, ideally both the index test and its comparator, are evaluated against the reference standard. They should be a reflection of the situation that the review question refers to. The quality of included studies is assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 checklist, containing items such as a consecutive and all-inclusive patient selection process, blinding of index test and reference standard assessment, a valid reference standard, and complete verification of all included participants. Studies recruiting cases separately from (healthy) controls are regarded as bearing a high risk of bias. For meta-analysis, the bivariate model or the hierarchical summary receiver operating characteristic model is used. These models take into account potential threshold effects and the correlation between sensitivity and specificity. They also allow addition of covariates for investigation of potential sources of heterogeneity. Finally, the results from the meta-analyses should be explained and interpreted for the reader, to be well understood.
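
    As a rough illustration of why the paired estimates are pooled jointly, the sketch below computes per-study logit sensitivity and specificity and summarizes them with a joint mean and between-study covariance. This moment-based shortcut is only a caricature of the actual bivariate model, which fits a hierarchical binomial-normal likelihood; the 2x2 counts here are invented for illustration:

```python
import numpy as np

# Per-study 2x2 counts: (TP, FN, TN, FP) — illustrative numbers,
# not drawn from any real review.
studies = np.array([
    [90, 10, 80, 20],
    [45,  5, 70, 30],
    [60, 15, 55, 10],
    [30,  5, 40, 10],
])

tp, fn, tn, fp = studies.T
# Logit-transformed sensitivity and specificity with a 0.5
# continuity correction; the paired estimates are kept together,
# as the bivariate approach requires.
logit_sens = np.log((tp + 0.5) / (fn + 0.5))
logit_spec = np.log((tn + 0.5) / (fp + 0.5))

pairs = np.column_stack([logit_sens, logit_spec])
mean = pairs.mean(axis=0)           # crude pooled logits
cov = np.cov(pairs, rowvar=False)   # between-study (co)variation

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

pooled_sens, pooled_spec = expit(mean)
print(round(float(pooled_sens), 3), round(float(pooled_spec), 3))
```

The off-diagonal of `cov` is the reason the bivariate model exists at all: sensitivity and specificity vary together across studies (typically negatively, through the threshold), so pooling them separately discards information.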

  12. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. 
Making the results from routine

  13. Hypnosis Versus Systematic Desensitization in the Treatment of Test Anxiety

    Science.gov (United States)

    Melnick, Joseph; Russell, Ronald W.

    1976-01-01

    This study compared the effectiveness of systematic desensitization and the directed experience hypnotic technique in reducing self-reported test anxiety and increasing the academic performance of test-anxious undergraduates (N=36). The results are discussed as evidence for systematic desensitization as the more effective treatment in reducing…

  14. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2-project DIAMONDS.

  15. The effect of uncertainty and systematic errors in hydrological modelling

    Science.gov (United States)

    Steinsland, I.; Engeland, K.; Johansen, S. S.; Øverleir-Petersen, A.; Kolberg, S. A.

    2014-12-01

    The aims of hydrological model identification and calibration are to find the best possible set of process parametrizations and parameter values that transform inputs (e.g. precipitation and temperature) to outputs (e.g. streamflow). These models enable us to make predictions of streamflow. Several sources of uncertainty have the potential to hamper a robust model calibration and identification. In order to grasp the interaction between model parameters, inputs and streamflow, it is important to account for both systematic and random errors in inputs (e.g. precipitation and temperatures) and streamflows. By random errors we mean errors that are independent from time step to time step, whereas by systematic errors we mean errors that persist for a longer period. Both random and systematic errors are important in the observation and interpolation of precipitation and temperature inputs. Important random errors come from the measurements themselves and from the network of gauges. Important systematic errors originate from the under-catch in precipitation gauges and from unknown spatial trends that are approximated in the interpolation. For streamflow observations, the water level recordings might give random errors, whereas the rating curve contributes mainly a systematic error. In this study we want to answer the question "What is the effect of random and systematic errors in inputs and observed streamflow on estimated model parameters and streamflow predictions?". To answer it, we systematically test the effect of including uncertainties in inputs and streamflow during model calibration and simulation in the distributed HBV model operating on daily time steps for the Osali catchment in Norway. The case study is based on observations with carefully quantified uncertainty, and increased uncertainties and systematic errors are introduced realistically, for example by removing a precipitation gauge from the network. We find that the systematic errors in
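
    The random-versus-systematic distinction drawn above can be made concrete numerically: independent day-to-day noise largely cancels in long-term totals, while a persistent gauge under-catch biases every accumulated quantity. A toy sketch with synthetic precipitation (invented numbers, not the Osali data):

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 3650
true_precip = rng.gamma(shape=0.5, scale=4.0, size=n_days)  # mm/day, synthetic

# Random error: independent multiplicative noise from day to day.
random_err = true_precip * rng.normal(1.0, 0.2, size=n_days)
# Systematic error: persistent 10% gauge under-catch.
systematic_err = true_precip * 0.9

true_total = true_precip.sum()
random_bias = abs(random_err.sum() - true_total) / true_total
systematic_bias = abs(systematic_err.sum() - true_total) / true_total

# The random error shrinks in the decade-long total;
# the systematic error does not.
print(random_bias < systematic_bias)
```

This is exactly why the abstract treats the two error types separately: a calibration scheme can average over the first kind but will silently absorb the second into its parameter estimates.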

  16. Systematic tests for position-dependent additive shear bias

    Science.gov (United States)

    van Uitert, Edo; Schneider, Peter

    2016-11-01

    We present new tests to identify stationary position-dependent additive shear biases in weak gravitational lensing data sets. These tests are important diagnostics for currently ongoing and planned cosmic shear surveys, as such biases induce coherent shear patterns that can mimic and potentially bias the cosmic shear signal. The central idea of these tests is to determine the average ellipticity of all galaxies with shape measurements in a grid in the pixel plane. The distribution of the absolute values of these averaged ellipticities can be compared to randomised catalogues; a difference points to systematics in the data. In addition, we introduce a method to quantify the spatial correlation of the additive bias, which suppresses the contribution from cosmic shear and therefore eases the identification of a position-dependent additive shear bias in the data. We apply these tests to the publicly available shear catalogues from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) and the Kilo Degree Survey (KiDS) and find evidence for a small but non-negligible residual additive bias at small scales. As this residual bias is smaller than the error on the shear correlation signal at those scales, it is highly unlikely that it causes a significant bias in the published cosmic shear results of CFHTLenS. In CFHTLenS, the amplitude of this systematic signal is consistent with zero in fields where the number of stars used to model the point spread function (PSF) is higher than average, suggesting that the position-dependent additive shear bias originates from undersampled PSF variations across the image.
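
    The central idea of the test, gridding the shape catalogue, averaging ellipticities per cell, and comparing against a randomised catalogue, is easy to prototype. A toy sketch with an additive bias injected into one quadrant of a synthetic field (illustrative only, not CFHTLenS or KiDS data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_gal = 64_000
x, y = rng.uniform(0, 1, n_gal), rng.uniform(0, 1, n_gal)
e1 = rng.normal(0.0, 0.2, n_gal)  # intrinsic ellipticity scatter

# Inject a position-dependent additive bias in one quadrant,
# mimicking e.g. an uncorrected PSF residual pattern.
bias_region = (x < 0.5) & (y < 0.5)
e1 = e1 + 0.05 * bias_region

def max_cell_mean(xs, ys, es, n_grid=8):
    # Average ellipticity in each grid cell of the pixel plane;
    # return the largest |cell mean|.
    ix = np.minimum((xs * n_grid).astype(int), n_grid - 1)
    iy = np.minimum((ys * n_grid).astype(int), n_grid - 1)
    cell = ix * n_grid + iy
    sums = np.bincount(cell, weights=es, minlength=n_grid**2)
    counts = np.bincount(cell, minlength=n_grid**2)
    return np.max(np.abs(sums / np.maximum(counts, 1)))

observed = max_cell_mean(x, y, e1)
# Randomised catalogue: shuffling ellipticities over positions
# destroys position dependence but keeps the 1-point distribution.
shuffled = max_cell_mean(x, y, rng.permutation(e1))
print(observed > shuffled)  # position-dependent bias stands out
```

In practice one compares the full distribution of cell averages against many randomised realisations rather than a single maximum, but the shuffling trick is the same.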

  17. Multivariate Model for Test Response Analysis

    NARCIS (Netherlands)

    Krishnan, Shaji; Kerkhoff, Hans G.

    2010-01-01

    A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage

  18. Multivariate model for test response analysis

    NARCIS (Netherlands)

    Krishnan, S.; Kerkhoff, H.G.

    2010-01-01

    A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage attai

  20. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  1. A Method for Systematic Improvement of Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    A systematic framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation modelling, statistical tests and nonparametric modelling and provides features...

  2. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  3. Systematic Unit Testing in a Read-eval-print Loop

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2010-01-01

    Lisp programmers constantly carry out experiments in a read-eval-print loop.  The experimental activities convince the Lisp programmers that new or modified pieces of programs work as expected.  But the experiments typically do not represent systematic and comprehensive unit testing efforts...... how to use a test repository for other purposes than testing.  As a concrete contribution we show how to use test cases as examples in library interface documentation.  It is hypothesized---but not yet validated---that the tool will motivate the Lisp programmer to take the transition from casual...

  4. [Systematic review of diagnostic tests accuracy: a narrative review].

    Science.gov (United States)

    de Oliveira, Glória Maria; Camargo, Fábio Trinca; Gonçalves, Eduardo Costa; Duarte, Carlos Vinicius Nascimento; Guimarães, Carlos Alberto

    2010-04-01

    The aim of this study is to perform a narrative review of systematic reviews of diagnostic test accuracy. We undertook a search using The Cochrane Methodology Reviews (Cochrane Reviews of Diagnostic Test Accuracy), Medline and LILACS up to October 2009. Reference lists of included studies were also hand searched. The following search strategy was constructed by using a combination of subject headings and text words: 1. Cochrane Methodology Reviews: accuracy study "Methodology" 2. In Pubmed: "Meta-Analysis" [Publication Type] AND "Evidence-Based Medicine" [Mesh] AND "Sensitivity and Specificity" [Mesh] 3. LILACS: (revisao sistematica) or "literatura de REVISAO como assunto" [Descritor de assunto] and (sistematica) or "SISTEMATICA" [Descritor de assunto] and (acuracia) or "SENSIBILIDADE e especificidade" [Descritor de assunto]. In summary, the methodological planning and preparation of systematic reviews of therapeutic interventions predate those used for systematic reviews of diagnostic test accuracy. There are more sources of heterogeneity in the design of diagnostic test studies, which impairs the synthesis - meta-analysis - of the results. To work around this problem, there are currently uniform requirements for diagnostic test manuscripts submitted to leading biomedical journals.

  5. A Unified Framework for Systematic Model Improvement

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A unified framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation (SDE) modelling, statistical tests and multivariate nonparametric regression...

  6. Testing agile requirements models

    Institute of Scientific and Technical Information of China (English)

    BOTASCHANJAN Jewgenij; PISTER Markus; RUMPE Bernhard

    2004-01-01

    This paper discusses a model-based approach to validate software requirements in agile development processes by simulation and in particular automated testing. The use of models as a central development artifact needs to be added to the portfolio of software engineering techniques, to further increase the efficiency and flexibility of development, beginning already in the early requirements definition phase. Testing requirements is one of the most important techniques for giving feedback and increasing the quality of the result. Therefore testing of artifacts should be introduced as early as possible, even in the requirements definition phase.

  7. Systematic approach to MIS model creation

    Directory of Open Access Journals (Sweden)

    Macura Perica

    2004-01-01

    In this paper, by applying the basic principles of the general theory of systems (the systematic approach), we formulate a model of a marketing information system. The basis for the research was the basic characteristics of the systematic approach and of the marketing system. The informational basis for management of the marketing system, i.e. of the marketing instruments, was presented by listing the most important information for decision making per individual marketing-mix instrument. In the projected model of the marketing information system, information listed in this way forms the basis for establishing databases, i.e. bases of information on: product, price, distribution, promotion. This paper gives the basic preconditions for the formulation and functioning of the model. The model is presented by explicating the elements of its structure (environment, database operators, information-system analysts, decision makers, i.e. managers, as well as input, process, output and feedback) and the relations between these elements which are necessary for its optimal functioning. Beside that, the basic elements for implementation of the model into a business system are given, as well as the conditions for its efficient functioning and development.

  8. A Systematic Approach to Multiple Breath Nitrogen Washout Test Quality

    Science.gov (United States)

    Klingel, Michelle; Pizarro, Maria Ester; Hall, Graham L.; Ramsey, Kathryn; Foong, Rachel; Saunders, Clare; Robinson, Paul D.; Webster, Hailey; Hardaker, Kate; Kane, Mica; Ratjen, Felix

    2016-01-01

    Background Accurate estimates of multiple breath washout (MBW) outcomes require correct operation of the device, appropriate distraction of the subject to ensure they breathe in a manner representative of their relaxed tidal breathing pattern, and appropriate interpretation of the acquired data. Based on available recommendations for an acceptable MBW test, we aimed to develop a protocol to systematically evaluate MBW measurements against these criteria. Methods 50 MBW test occasions were systematically reviewed by an experienced MBW operator for technical elements and for whether the breathing pattern was representative of relaxed tidal breathing. The impact of qualitative and quantitative criteria on inter-observer agreement was assessed across eight MBW operators (n = 20 test occasions, compared using a kappa statistic). Results Using qualitative criteria, 46/168 trials were rejected: 16.6% were technically unacceptable and 10.7% were excluded due to an inappropriate breathing pattern. Reviewer agreement was good using qualitative criteria (κ = 0.53–0.83) and further improved with quantitative criteria (κ = 0.73–0.97), but at the cost of excluding further test occasions in this retrospective data analysis. Conclusions The application of the systematic review improved inter-observer agreement but did not affect reported MBW outcomes. PMID:27304432
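    Inter-observer agreement in the study above is quantified with a kappa statistic, which corrects raw percent agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two reviewers (the ratings below are invented for illustration, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label independently
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two reviewers classifying ten hypothetical MBW trials as acceptable/rejected
a = ["acc", "acc", "rej", "acc", "rej", "acc", "acc", "rej", "acc", "acc"]
b = ["acc", "acc", "rej", "acc", "acc", "acc", "acc", "rej", "acc", "acc"]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

    Raw agreement here is 0.90, but chance agreement is 0.62, so κ ≈ 0.74, which falls in the range the study reports as good agreement.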

  9. A Systematic Approach to Multiple Breath Nitrogen Washout Test Quality.

    Directory of Open Access Journals (Sweden)

    Renee Jensen

    Full Text Available Accurate estimates of multiple breath washout (MBW) outcomes require correct operation of the device, appropriate distraction of the subject to ensure they breathe in a manner representative of their relaxed tidal breathing pattern, and appropriate interpretation of the acquired data. Based on available recommendations for an acceptable MBW test, we aimed to develop a protocol to systematically evaluate MBW measurements against these criteria. 50 MBW test occasions were systematically reviewed by an experienced MBW operator for technical elements and for whether the breathing pattern was representative of relaxed tidal breathing. The impact of qualitative and quantitative criteria on inter-observer agreement was assessed across eight MBW operators (n = 20 test occasions, compared using a kappa statistic). Using qualitative criteria, 46/168 trials were rejected: 16.6% were technically unacceptable and 10.7% were excluded due to an inappropriate breathing pattern. Reviewer agreement was good using qualitative criteria (κ = 0.53–0.83) and further improved with quantitative criteria (κ = 0.73–0.97), but at the cost of excluding further test occasions in this retrospective data analysis. The application of the systematic review improved inter-observer agreement but did not affect reported MBW outcomes.

  10. Remote control missile model test

    Science.gov (United States)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric body/tail-fin data base was gathered through tests of an innovative missile model design, which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but they can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analyses of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.

  11. Personal utility in genomic testing: a systematic literature review.

    Science.gov (United States)

    Kohler, Jennefer N; Turbitt, Erin; Biesecker, Barbara B

    2017-06-01

    Researchers and clinicians refer to outcomes of genomic testing that extend beyond clinical utility as 'personal utility'. No systematic delineation of personal utility exists, making it challenging to appreciate its scope. Identifying empirical elements of personal utility reported in the literature offers an inventory that can be subsequently ranked for its relative value by those who have undergone genomic testing. A systematic review was conducted of the peer-reviewed literature reporting non-health-related outcomes of genomic testing from 1 January 2003 to 5 August 2016. Inclusion criteria specified English language, date of publication, and presence of empirical evidence. Identified outcomes were iteratively coded into unique domains. The search returned 551 abstracts from which 31 studies met the inclusion criteria. Study populations and type of genomic testing varied. Coding resulted in 15 distinct elements of personal utility, organized into three domains related to personal outcomes: affective, cognitive, and behavioral; and one domain related to social outcomes. The domains of personal utility may inform pre-test counseling by helping patients anticipate potential value of test results beyond clinical utility. Identified elements may also inform investigations into the prevalence and importance of personal utility to future test users.

  12. Using data assimilation for systematic model improvement

    Science.gov (United States)

    Lang, Matthew S.; van Leeuwen, Peter Jan; Browne, Phil

    2016-04-01

    In Numerical Weather Prediction, parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model, as the parameter values used in them cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation, such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential data assimilation to estimate the errors in the numerical model at each space-time point for each model equation. These errors are then fitted to predetermined functional forms of the missing physics or parameterisations, based upon prior information. The method picks out the functional form, or the combination of functional forms, that best fits the error structure. The prior information typically takes the form of expert knowledge. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. It is also demonstrated that state augmentation is not successful. The results indicate that this new method is a powerful tool for systematic model improvement.
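    The core idea above — fit the assimilation-diagnosed model error to candidate functional forms and keep the best-fitting one — can be sketched as follows. The missing sin(x) process, the noise level, and the candidate set are all illustrative assumptions, not the setup used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 200)

# Model-error increments diagnosed at each grid point: here the truth
# contains a sin(x) process the model lacks, plus observation-like noise.
error = 0.8 * np.sin(x) + rng.normal(0.0, 0.1, x.size)

# Candidate functional forms of the missing physics (prior expert knowledge)
candidates = {"sin(x)": np.sin(x), "cos(x)": np.cos(x), "linear": x}

def fit(basis, err):
    """Least-squares amplitude and residual norm for one candidate form."""
    amplitude = basis @ err / (basis @ basis)
    return amplitude, np.linalg.norm(err - amplitude * basis)

best = min(candidates, key=lambda name: fit(candidates[name], error)[1])
print(best)  # → sin(x)
```

    The candidate whose least-squares residual is smallest identifies the structure of the missing physics; the fitted amplitude plays the role of the estimated parameter.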

  13. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was, in the first place, to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly...

  14. Systematic tests for position-dependent additive shear bias

    CERN Document Server

    van Uitert, Edo

    2016-01-01

    We present new tests to identify stationary position-dependent additive shear biases in weak gravitational lensing data sets. These tests are important diagnostics for currently ongoing and planned cosmic shear surveys, as such biases induce coherent shear patterns that can mimic and potentially bias the cosmic shear signal. The central idea of these tests is to determine the average ellipticity of all galaxies with shape measurements in a grid in the pixel plane. The distribution of the absolute values of these averaged ellipticities can be compared to randomized catalogues; a difference points to systematics in the data. In addition, we introduce a method to quantify the spatial correlation of the additive bias, which suppresses the contribution from cosmic shear and therefore eases the identification of a position-dependent additive shear bias in the data. We apply these tests to the publicly available shear catalogues from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) and the Kilo Degree Su...
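    The central diagnostic described above — grid the pixel plane, average the ellipticities of all galaxies per cell, and compare with randomized catalogues — can be sketched in one dimension with synthetic data (bias amplitude, shape noise, and grid size are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0.0, 1.0, n)      # galaxy positions along one pixel coordinate
e = rng.normal(0.0, 0.3, n)       # one ellipticity component (shape noise)
e[x < 0.5] += 0.05                # injected position-dependent additive bias
e -= e.mean()                     # remove any global, position-independent offset

def mean_abs_cell_ellipticity(x, e, bins=10):
    """Average |mean ellipticity| over a grid of cells in the pixel plane."""
    idx = np.minimum((x * bins).astype(int), bins - 1)
    sums = np.bincount(idx, weights=e, minlength=bins)
    counts = np.bincount(idx, minlength=bins)
    return np.mean(np.abs(sums / counts))

observed = mean_abs_cell_ellipticity(x, e)
# Randomized catalogue: shuffling positions destroys any spatial coherence
randomized = mean_abs_cell_ellipticity(rng.permutation(x), e)
print(observed > 3 * randomized)  # → True
```

    In the biased catalogue the per-cell averages stay coherently offset from zero, while shuffling the positions reduces them to shape-noise scatter; a clear excess over the randomized case flags a position-dependent additive bias.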

  15. Systematic Unit Testing in a Read-eval-print Loop

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2010-01-01

    Lisp programmers constantly carry out experiments in a read-eval-print loop. The experimental activities convince the Lisp programmers that new or modified pieces of programs work as expected. But the experiments typically do not represent systematic and comprehensive unit testing efforts. Rather, the experiments are quick and dirty one-shot validations which do not add lasting value to the software that is being developed. In this paper we propose a tool that is able to collect, organize, and re-validate test cases which are entered as expressions in a read-eval-print loop. The process of collecting the expressions and their results imposes only little extra work on the programmer. The use of the tool provides for creation of test repositories, and it is intended to catalyze a much more systematic approach to unit testing in a read-eval-print loop. In the paper we also discuss...

  16. Status of computerized cognitive testing in aging: a systematic review.

    Science.gov (United States)

    Wild, Katherine; Howieson, Diane; Webbe, Frank; Seelye, Adriana; Kaye, Jeffrey

    2008-11-01

    Early detection of cognitive decline in the elderly has become of heightened importance in parallel with the recent advances in therapeutics. Computerized assessment might be uniquely suited to early detection of changes in cognition in the elderly. We present here a systematic review of the status of computer-based cognitive testing, focusing on detection of cognitive decline in the aging population. All studies purporting to assess or detect age-related changes in cognition or early dementia/mild cognitive impairment by means of computerized testing were included. Each test battery was rated on availability of normative data, level of evidence for test validity and reliability, comprehensiveness, and usability. All published studies relevant to a particular computerized test were read by a minimum of two reviewers, who completed rating forms containing the above mentioned criteria. Of the 18 test batteries identified from the initial search, 11 were appropriate to cognitive testing in the elderly and were subjected to systematic review. Of those 11, five were either developed specifically for application with the elderly or have been used extensively with that population. Even within the computerized testing genre, great variability existed in manner of administration, ranging from fully examiner-administered to fully self-administered. All tests had at least minimal reliability and validity data, commonly reported in peer-reviewed articles. However, level of rigor of validity testing varied widely. All test batteries exhibited some of the strengths of computerized cognitive testing: standardization of administration and stimulus presentation, accurate measures of response latencies, automated comparison in real time with an individual's prior performance as well as with age-related norms, and efficiencies of staffing and cost. Some, such as the Mild Cognitive Impairment Screen, adapted complicated scoring algorithms to enhance the information gathered from

  17. Systematic model building with flavor symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Plentinger, Florian

    2009-12-19

    The observation of neutrino masses and lepton mixing has highlighted the incompleteness of the Standard Model of particle physics. In conjunction with this discovery, new questions arise: why are the neutrino masses so small, what form does their mass hierarchy take, why is the mixing in the quark and lepton sectors so different, and what is the structure of the Higgs sector? In order to address these issues and to predict future experimental results, different approaches are considered. One particularly interesting possibility is Grand Unified Theories (GUTs) such as SU(5) or SO(10). GUTs are vertical symmetries, since they unify the SM particles into multiplets and usually predict new particles which can naturally explain the smallness of the neutrino masses via the seesaw mechanism. On the other hand, horizontal symmetries, i.e., flavor symmetries, acting on the generation space of the SM particles, are also promising. They can serve as an explanation for the quark and lepton mass hierarchies as well as for the different mixings in the quark and lepton sectors. In addition, flavor symmetries are significantly involved in the Higgs sector and predict certain forms of mass matrices. This high predictivity makes GUTs and flavor symmetries interesting for both theorists and experimentalists. These extensions of the SM can also be combined with theories such as supersymmetry or extra dimensions. In addition, they usually have implications for the observed matter-antimatter asymmetry of the universe or can provide a dark matter candidate. In general, they also predict the lepton-flavor-violating rare decays μ → eγ, τ → μγ, and τ → eγ, which are strongly bounded by experiments but might be observed in the future. In this thesis, we combine all of these approaches, i.e., GUTs, the seesaw mechanism and flavor symmetries. Moreover, our aim is to develop and perform a systematic model building approach with flavor symmetries and...

  18. Knowledge tests in patient education: a systematic review.

    Science.gov (United States)

    Kesänen, Jukka; Leino-Kilpi, Helena; Arifulla, Dinah; Siekkinen, Mervi; Valkeapää, Kirsi

    2014-06-01

    This study describes knowledge tests in patient education through a systematic review of the Medline, Cinahl, PsycINFO, and ERIC databases, conducted under the guidance of the PRISMA Statement. Forty-nine knowledge tests were identified. Their contents were health-problem related, focusing on biophysiological and functional knowledge. The mean number of items was 20, with true-false or multiple-choice scales. Most of the tests were purposely designed for the studies included in the review. The most frequently reported quality assessments of knowledge tests were content validity and internal consistency. The outcome measurement of patient education needs comprehensive, validated knowledge tests that cover multidimensional aspects of knowledge. Besides measuring the outcomes of patient education, knowledge tests could be used for several purposes in patient education: to guide the content of education as checklists, to monitor the learning process, and as educational tools. There is a need for more efficient content- and health-problem-specific knowledge-test assessments.

  19. A Systematic Review of the Testing Effect in Learning

    Directory of Open Access Journals (Sweden)

    Raquel Eloisa Eisenkraemer

    2013-09-01

    Full Text Available The retrieval of a given piece of information from memory increases the long-term retention of that information, a phenomenon often called “testing effect”. The current study aimed to select and review articles on the testing effect to verify the extent and importance of this phenomenon, bringing the main results of recent research. To accomplish this, a systematic review of articles on this subject published between 2006 and 2012 was conducted, a period in which there was an acute increase in the amount of publications on this subject. The articles were searched in the databases Web of Science, PubMed and PsycINFO. The results, which were organized according to test format (recall and recognition tests, demonstrated that tests can be remarkably beneficial to the retention of long-term memories. A theoretical explanation regarding the cognitive processes involved in this phenomenon still needs to be developed and tested. Such explanation would have important implications for the development of efficient educational practices.

  20. A Precision test of Hipparcos systematics towards the Hyades

    CERN Document Server

    Narayanan, V K; Narayanan, Vijay K.; Gould, Andrew

    1999-01-01

    We propose and apply a test that can detect any systematic errors in the Hipparcos parallaxes towards the Hyades cluster at the level of 0.3 mas. We show that the statistical parallax method subsumes the classical moving cluster methods and provides more accurate and robust estimates of the distance and the first two moments of the velocity distribution of the Hyades cluster, namely its bulk space velocity and the velocity dispersion tensor. We predict the parallaxes of Hyades cluster members using the common cluster space velocity derived from the statistical parallax method and their individual Hipparcos proper motions. We show that the parallaxes determined in this manner (pi_pm) are consistent at the 1-sigma level with the parallaxes (pi_orb) of three Hyades spectroscopic binary systems with orbital solutions. We find that <pi_orb - pi_pm> = 0.49 +- 0.47 mas, where the error is dominated by the errors in the orbital parallaxes. A reduction in this error would allow a test of the systematic errors in the Hipparcos paralla...

  1. Caffeine challenge test and panic disorder: a systematic literature review.

    Science.gov (United States)

    Vilarim, Marina Machado; Rocha Araujo, Daniele Marano; Nardi, Antonio Egidio

    2011-08-01

    This systematic review aimed to examine the results of studies that have investigated the induction of panic attacks and/or the anxiogenic effect of the caffeine challenge test in patients with panic disorder. The literature search was performed in PubMed, Biblioteca Virtual em Saúde and the ISI Web of Knowledge. The words used for the search were caffeine, caffeine challenge test, panic disorder, panic attacks and anxiety disorder. In total, we selected eight randomized, double-blind studies where caffeine was administered orally, and none of them controlled for confounding factors in the analysis. The percentage of loss during follow-up ranged between 14.3% and 73.1%. The eight studies all showed a positive association between caffeine and anxiogenic effects and/or panic disorder.

  2. Recent tests of realistic models

    Energy Technology Data Exchange (ETDEWEB)

    Brida, Giorgio; Degiovanni, Ivo Pietro; Genovese, Marco; Gramegna, Marco; Piacentini, Fabrizio; Schettini, Valentina; Traina, Paolo, E-mail: m.genovese@inrim.i [Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, 10135 Torino (Italy)

    2009-06-01

    In this article we present recent activities of our laboratories on testing specific hidden-variable models; in particular, we discuss realizations of the Alicki-van Ryn test and tests of SED and of Santos' models.

  3. Systematic evaluation of atmospheric chemistry-transport model CHIMERE

    Science.gov (United States)

    Khvorostyanov, Dmitry; Menut, Laurent; Mailler, Sylvain; Siour, Guillaume; Couvidat, Florian; Bessagnet, Bertrand; Turquety, Solene

    2017-04-01

    Regional-scale atmospheric chemistry-transport models (CTMs) are used to develop air quality regulatory measures, to support environmentally sensitive decisions in industry, and to address a variety of scientific questions involving atmospheric composition. Model performance evaluation against measurement data is critical to understand a model's limits and the degree of confidence in its results. The CHIMERE CTM (http://www.lmd.polytechnique.fr/chimere/) is a French national tool for operational forecasting and decision support and is widely used in the international research community in various areas of atmospheric chemistry and physics, climate, and environment (http://www.lmd.polytechnique.fr/chimere/CW-articles.php). This work presents the model evaluation framework applied systematically to new CHIMERE CTM versions in the course of continuous model development. The framework uses three of the four CTM evaluation types identified by the Environmental Protection Agency (EPA) and the American Meteorological Society (AMS): operational, diagnostic, and dynamic. It allows the overall performance of subsequent model versions to be compared (operational evaluation), specific processes and/or model inputs that could be improved to be identified (diagnostic evaluation), and the model sensitivity to changes in air quality, such as emission reductions and meteorological events, to be tested (dynamic evaluation). The observation datasets currently used for the evaluation are EMEP (surface concentrations), AERONET (optical depths), and WOUDC (ozone sounding profiles). The framework is implemented as an automated processing chain and allows interactive exploration of the results via a web interface.

  4. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples face problems that affect their marital life. Infertility treatment is expensive and time consuming, and is sometimes simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviews previous studies to form a general picture of the applicability of these models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH key words. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered the years after 1986, and studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, physicians' and couples' estimates of treatment success should be based on history, examination, and clinical tests. Models must be checked for theoretical soundness and appropriate validation. The advantages of applying prediction models are decreased cost and time, avoidance of painful treatment of patients, assessment of the treatment approach for physicians, and support for decision making by health managers. The selection of an appropriate approach for designing and using these models is therefore essential. PMID:27141461

  5. Background model systematics for the Fermi GeV excess

    CERN Document Server

    Calore, Francesca; Weniger, Christoph

    2014-01-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes $2^\\circ<|b|<20^\\circ$ and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform ene...

  6. Psychometric properties of 2-minute walk test: a systematic review.

    Science.gov (United States)

    Pin, Tamis W

    2014-09-01

    To systematically review the psychometric evidence on the 2-minute walk test (2MWT). Electronic searches of databases including MEDLINE, CINAHL, Academic Search Premier, SPORTDiscus, PsycINFO, EMBASE, the Cochrane Library, and DARE were conducted up to February 2014 using a combination of subject headings and free text. Studies were included if psychometric properties of the 2MWT were (1) evaluated; (2) written up as full reports; and (3) published in English-language peer-reviewed journals. A modified consensus-based standard for the selection of health measurement instruments checklist was used to rate the methodological quality of the included studies. A quality assessment for statistical outcomes was used to assess the measurement properties of the 2MWT. Best-evidence synthesis was collated from 25 studies of 14 patient groups. Only 1 study was found that examined the 2MWT in the pediatric population. The testing procedures of the 2MWT varied across the included studies. Reliability, validity (construct and criterion), and responsiveness of the 2MWT also varied across different patient groups. Moderate to strong evidence was found for the reliability, convergent validity, discriminative validity, and responsiveness of the 2MWT in frail elderly patients. Moderate to strong evidence for reliability, convergent validity, and responsiveness was found in adults with lower limb amputations. Moderate to strong evidence for validity (convergent and discriminative) was found in adults who received rehabilitation after hip fractures or cardiac surgery. Limited evidence for the psychometric properties of the 2MWT was found in other population groups because of methodological flaws. There is inadequate breadth and depth of psychometric evidence on the 2MWT for clinical and research purposes, specifically minimal clinically important changes and responsiveness. More good-quality studies are needed, especially in the pediatric population. Consensus on standardized testing procedures of...

  7. Ship Model Testing

    Science.gov (United States)

    2016-01-15

    Report text garbled in extraction; recoverable details: facility additions include a new material testing machine with an environmental chamber and a new dual-fuel test bed for the Haeberle Laboratory, along with upgrades to existing equipment and planned purchases of additional data acquisition equipment (FARO laser scanner, data telemetry, and velocity profiler). Other apparatus listed: analyzer, material tester, universal tester, laser scanner, and 3D printer.

  8. Systematic identification of crystallization kinetics within a generic modelling framework

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli Bin; Meisler, Kresten Troelstrup; Gernaey, Krist

    2012-01-01

    A systematic approach to developing constitutive models within a generic modelling framework is presented for use in the design, analysis and simulation of crystallization operations. The framework contains a tool for model identification connected with a generic crystallizer modelling toolbox, a tool...

  9. Testing and validating environmental models

    Science.gov (United States)

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series

  10. Conceptual Model for Systematic Construction Waste Management

    OpenAIRE

    Abd Rahim Mohd Hilmi Izwan; Kasim Narimah

    2017-01-01

    Development of the construction industry generates construction waste, which contributes to environmental issues. Weak compliance with construction waste management, especially on construction sites, has also contributed to the large volumes of waste ending up in landfills and illegal dumping areas. This indicates that construction projects need systematic construction waste management. To date, a comprehensive set of criteria for construction waste management, particularly for const...

  11. Maturity Models in Supply Chain Sustainability: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Elisabete Correia

    2017-01-01

    Full Text Available A systematic literature review of supply chain maturity models with sustainability concerns is presented. The objective is to give insights into methodological issues related to maturity models, namely the research objectives; the research methods used to develop, validate and test them; the scope; and the main characteristics associated with their design. The literature review was performed on journal articles and conference papers from 2000 to 2015 using the SCOPUS, Emerald Insight, EBSCO and Web of Science databases. Most of the analysed papers have as their main objective the development of maturity models and their validation. The case study is the methodology most widely used by researchers to develop and validate maturity models. From the sustainability perspective, the scope of the analysed maturity models is the Triple Bottom Line (TBL) and the environmental dimension, focusing on a specific process (eco-design and new product development) and lacking a broad SC perspective. The dominant characteristics associated with the design of the maturity models are maturity grids and a continuous representation. In addition, the results do not allow identifying a trend towards a specific number of maturity levels. The comprehensive review, analysis, and synthesis of the maturity model literature represent an important contribution to the organization of this research area, making it possible to clarify some of the confusion that exists about the concepts, approaches and components of maturity models in sustainability. Various aspects associated with the maturity models (i.e., research objectives, research methods, scope and characteristics of the design of models) are explored to contribute to the evolution and significance of this multidimensional area.

  12. Methods Used in Economic Evaluations of Chronic Kidney Disease Testing — A Systematic Review

    Science.gov (United States)

    Sutton, Andrew J.; Breheny, Katie; Deeks, Jon; Khunti, Kamlesh; Sharpe, Claire; Ottridge, Ryan S.; Stevens, Paul E.; Cockwell, Paul; Kalra, Philp A.; Lamb, Edmund J.

    2015-01-01

    Background The prevalence of chronic kidney disease (CKD) is high in general populations around the world. Targeted testing and screening for CKD are often conducted to help identify individuals that may benefit from treatment to ameliorate or prevent their disease progression. Aims This systematic review examines the methods used in economic evaluations of testing and screening in CKD, with a particular focus on whether test accuracy has been considered, and how analysis has incorporated issues that may be important to the patient, such as the impact of testing on quality of life and the costs they incur. Methods Articles that described model-based economic evaluations of patient testing interventions focused on CKD were identified through the searching of electronic databases and the hand searching of the bibliographies of the included studies. Results The initial electronic searches identified 2,671 papers of which 21 were included in the final review. Eighteen studies focused on proteinuria, three evaluated glomerular filtration rate testing and one included both tests. The full impact of inaccurate test results was frequently not considered in economic evaluations in this setting as a societal perspective was rarely adopted. The impact of false positive tests on patients in terms of the costs incurred in re-attending for repeat testing, and the anxiety associated with a positive test was almost always overlooked. In one study where the impact of a false positive test on patient quality of life was examined in sensitivity analysis, it had a significant impact on the conclusions drawn from the model. Conclusion Future economic evaluations of kidney function testing should examine testing and monitoring pathways from the perspective of patients, to ensure that issues that are important to patients, such as the possibility of inaccurate test results, are properly considered in the analysis. PMID:26465773

  14. Effectiveness of Structured Psychodrama and Systematic Desensitization in Reducing Test Anxiety.

    Science.gov (United States)

    Kipper, David A.; Giladi, Daniel

    1978-01-01

    Students with examination anxiety took part in a study of the effectiveness of two kinds of treatment, structured psychodrama and systematic desensitization, in reducing test anxiety. Results showed that subjects in both treatment groups significantly reduced their test-anxiety scores. Structured psychodrama is as effective as systematic desensitization in…

  15. Towards systematic exploration of multi-Higgs-doublet models

    CERN Document Server

    Ivanov, I P

    2015-01-01

    Conservative bSM models with a rich scalar sector, such as multi-Higgs-doublet models, can easily accommodate the SM-like properties of the 125 GeV scalar observed at the LHC. Possessing a variety of bSM signals, they are worth investigating in fuller detail. The systematic study of these models is hampered by the highly multi-dimensional parameter space and by mathematical challenges. I outline some directions along which multi-Higgs-doublet models in the vicinity of a large discrete symmetry can be systematically explored.

  16. Model testing of Wave Dragon

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    Prior to this project, a scale model 1:50 of the wave energy converter (WEC) Wave Dragon was built by the Danish Maritime Institute and tested in a wave tank at Aalborg University (AAU). The test programs investigated the movements of the floating structure, mooring forces and forces in the reflectors. The first tests were followed by tests establishing the efficiency in different sea states. The scale model has also been extensively tested in the EU Joule Craft project JOR-CT98-7027 (Low-Pressure Turbine and Control Equipment for Wave Energy Converters /Wave Dragon) at University College Cork, Hydraulics and Maritime Research Centre, Ireland. The results of the previous model tests have formed the basis for a redesign of the WEC. In this project a reconstruction of the scale 1:50 model and sequential tests of changes to the model geometry and mass distribution parameters will be performed. AAU will make the modifications to the model based on the revised Loewenmark design and perform the tests in their wave tank. Grid connection requirements have been established. A hydro turbine with no movable parts besides the rotor has been developed and a scale model 1:3.5 tested, with a high efficiency over the whole head range. The turbine itself could also be used in river systems with low head and variable flow, an area of interest for many countries around the world. Finally, a regulation strategy for the turbines has been developed, which is essential for the future deployment of Wave Dragon. The video includes the following: 1. Title, 2. Introduction of the Wave Dragon, 3. Model test series H, Hs = 3 m, Rc = 3 m, 4. Model test series H, Hs = 5 m, Rc = 4 m, 5. Model test series I, Hs = 7 m, Rc = 1.25 m, 6. Model test series I, Hs = 7 m, Rc = 4 m, 7. Rolling title. On this VCD additional versions of the video can be found in the directory 'addvideo' for playing the video on PCs. These versions are: Model testing of Wave Dragon, DVD version

  17. Systematic reviews of animal models: methodology versus epistemology.

    Science.gov (United States)

    Greek, Ray; Menache, Andre

    2013-01-01

    Systematic reviews are currently favored methods of evaluating research in order to reach conclusions regarding medical practice. The need for such reviews is necessitated by the fact that no research is perfect and experts are prone to bias. By combining many studies that fulfill specific criteria, one hopes that the strengths can be multiplied and thus reliable conclusions attained. Potential flaws in this process include the assumptions that underlie the research under examination. If the assumptions, or axioms, upon which the research studies are based, are untenable either scientifically or logically, then the results must be highly suspect regardless of the otherwise high quality of the studies or the systematic reviews. We outline recent criticisms of animal-based research, namely that animal models are failing to predict human responses. It is this failure that is purportedly being corrected via systematic reviews. We then examine the assumption that animal models can predict human outcomes to perturbations such as disease or drugs, even under the best of circumstances. We examine the use of animal models in light of empirical evidence comparing human outcomes to those from animal models, complexity theory, and evolutionary biology. We conclude that even if legitimate criticisms of animal models were addressed, through standardization of protocols and systematic reviews, the animal model would still fail as a predictive modality for human response to drugs and disease. Therefore, systematic reviews and meta-analyses of animal-based research are poor tools for attempting to reach conclusions regarding human interventions.

  19. Chapter 4: effective search strategies for systematic reviews of medical tests.

    Science.gov (United States)

    Relevo, Rose

    2012-06-01

    This article discusses techniques that are appropriate when developing search strategies for systematic reviews of medical tests. This includes general advice for searching for systematic reviews and issues specific to systematic reviews of medical tests. Diagnostic search filters are currently not sufficiently developed for use when searching for systematic reviews. Instead, authors should construct a highly sensitive search strategy that uses both controlled vocabulary and text words. A comprehensive search should include multiple databases and sources of grey literature. A list of subject-specific databases is included in this article.

  20. Systematic approach to verification and validation: High explosive burn models

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code

  1. Diagnostic test accuracy: methods for systematic review and meta-analysis.

    Science.gov (United States)

    Campbell, Jared M; Klugar, Miloslav; Ding, Sandrine; Carmody, Dennis P; Hakonsen, Sasja J; Jadotte, Yuri T; White, Sarahlouise; Munn, Zachary

    2015-09-01

    Systematic reviews are carried out to provide an answer to a clinical question based on all available evidence (published and unpublished), to critically appraise the quality of studies, and account for and explain variations between the results of studies. The Joanna Briggs Institute specializes in providing methodological guidance for the conduct of systematic reviews and has developed methods and guidance for reviewers conducting systematic reviews of studies of diagnostic test accuracy. Diagnostic tests are used to identify the presence or absence of a condition for the purpose of developing an appropriate treatment plan. Owing to demands for improvements in speed, cost, ease of performance, patient safety, and accuracy, new diagnostic tests are continuously developed, and there are often several tests available for the diagnosis of a particular condition. In order to provide the evidence necessary for clinicians and other healthcare professionals to make informed decisions regarding the optimum test to use, primary studies need to be carried out on the accuracy of diagnostic tests and the results of these studies synthesized through systematic review. The Joanna Briggs Institute and its international collaboration have updated, revised, and developed new guidance for systematic reviews, including systematic reviews of diagnostic test accuracy. This methodological article summarizes that guidance and provides detailed advice on the effective conduct of systematic reviews of diagnostic test accuracy.
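
    The per-study accuracy measures that such reviews extract and later pool can be sketched in a few lines. The 2x2 counts and the helper name below are invented for illustration and are not taken from the guidance or from any study in this listing:

```python
# Core 2x2 accuracy measures that diagnostic test accuracy reviews
# extract from each primary study before pooling (illustrative values,
# not from any study in this listing).

def accuracy_measures(tp, fp, fn, tn):
    """Return sensitivity, specificity and likelihood ratios
    for one study's 2x2 table."""
    sensitivity = tp / (tp + fn)           # P(test+ | disease present)
    specificity = tn / (tn + fp)           # P(test- | disease absent)
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return sensitivity, specificity, lr_pos, lr_neg

# Hypothetical study: 90 true positives, 10 false negatives,
# 20 false positives, 180 true negatives.
sens, spec, lrp, lrn = accuracy_measures(tp=90, fp=20, fn=10, tn=180)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"LR+={lrp:.1f} LR-={lrn:.2f}")
```

    A meta-analysis would then combine these per-study pairs (typically on the logit scale, with a hierarchical model) rather than averaging them directly.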

  2. Systematic experimental based modeling of a rotary piezoelectric ultrasonic motor

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, Rouzbeh; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    In this paper, a new method for equivalent circuit modeling of a traveling wave ultrasonic motor is presented. The free stator of the motor is modeled by an equivalent circuit containing complex circuit elements. A systematic approach for identifying the elements of the equivalent circuit...
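
    As a rough illustration of what "an equivalent circuit containing complex circuit elements" means here (a sketch under assumptions, not the paper's model): a piezoelectric stator is commonly approximated by a Butterworth-Van Dyke circuit, a series R-L-C "motional" branch in parallel with a clamped capacitance C0. All element values below are invented:

```python
# Butterworth-Van Dyke-style equivalent circuit sketch for a
# piezoelectric stator: series R-L-C (motional branch) in parallel
# with clamped capacitance C0. Values are illustrative only.
import math

R, L, C, C0 = 50.0, 0.10, 2.5e-9, 10e-9   # ohm, H, F, F

def impedance(f):
    """Complex impedance of the equivalent circuit at frequency f (Hz)."""
    w = 2 * math.pi * f
    z_motional = complex(R, w * L - 1 / (w * C))   # R + j(wL - 1/wC)
    z_c0 = complex(0.0, -1 / (w * C0))             # clamped capacitance
    return z_motional * z_c0 / (z_motional + z_c0)

# |Z| is minimal near the series resonance f_s = 1/(2*pi*sqrt(L*C)),
# which is where a traveling wave motor is typically driven.
f_s = 1 / (2 * math.pi * math.sqrt(L * C))
freqs = [f_s * (0.9 + 0.01 * k) for k in range(21)]
f_min = min(freqs, key=lambda f: abs(impedance(f)))
print(f"series resonance ~ {f_s:.0f} Hz, |Z| minimum near {f_min:.0f} Hz")
```

    Identifying the element values (R, L, C, C0) from measured admittance curves is the kind of systematic fitting step the paper's approach addresses.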

  3. Testing the model for testing competency.

    Science.gov (United States)

    Keating, Sarah B; Rutledge, Dana N; Sargent, Arlene; Walker, Polly

    2003-05-01

    The pilot study to demonstrate the utility of the CBRDM in the practice setting was successful. Using a matrix evaluation tool based on the model's competencies, evaluators were able to observe specific performance behaviors of senior nursing students and new graduates at either the novice or competent levels. The study faced the usual perils of pilot studies, including small sample size, a limited number of items from the total CBRDM, restricted financial resources, inexperienced researchers, unexpected barriers, and untested evaluation tools. It was understood from the beginning of the study that the research would be based on a program evaluation model, analyzing both processes and outcomes. However, the meager data findings led to the desire to continue to study use of the model for practice setting job expectations, career planning for nurses, and curriculum development for educators. Although the California Strategic Planning Committee for Nursing no longer has funding, we hope that others interested in role differentiation issues will take the results of this study and test the model in other practice settings. Its ability to measure higher levels of competency as well as novice and competent should be studied, i.e., proficient, expert, and advanced practice. The CBRDM may be useful in evaluating student and nurse performance, defining role expectations, and identifying the preparation necessary for the roles. The initial findings related to the two functions as leader and teacher in the care provider and care coordinator roles led to much discussion about helping students and nurses develop competence. Additional discussion focused on the roles as they apply to settings such as critical care or primary health care. The model is useful for all of nursing as it continues to define its levels of practice and their relationship to on-the-job performance, curriculum development, and career planning.

  4. Molecular testing for Lynch syndrome in people with colorectal cancer: systematic reviews and economic evaluation.

    Science.gov (United States)

    Snowsill, Tristan; Coelho, Helen; Huxley, Nicola; Jones-Hughes, Tracey; Briscoe, Simon; Frayling, Ian M; Hyde, Chris

    2017-09-01

    Inherited mutations in deoxyribonucleic acid (DNA) mismatch repair (MMR) genes lead to an increased risk of colorectal cancer (CRC), gynaecological cancers and other cancers, known as Lynch syndrome (LS). Risk-reducing interventions can be offered to individuals with known LS-causing mutations. The mutations can be identified by comprehensive testing of the MMR genes, but this would be prohibitively expensive in the general population. Tumour-based tests - microsatellite instability (MSI) and MMR immunohistochemistry (IHC) - are used in CRC patients to identify individuals at high risk of LS for genetic testing. MLH1 (MutL homologue 1) promoter methylation and BRAF V600E testing can be conducted on tumour material to rule out certain sporadic cancers. The aims were to investigate whether testing for LS in CRC patients using MSI or IHC (with or without MLH1 promoter methylation testing and BRAF V600E testing) is clinically effective (in terms of identifying Lynch syndrome and improving outcomes for patients) and represents a cost-effective use of NHS resources. Systematic reviews were conducted of the published literature on diagnostic test accuracy studies of MSI and/or IHC testing for LS, end-to-end studies of screening for LS in CRC patients and economic evaluations of screening for LS in CRC patients. A model-based economic evaluation was conducted to extrapolate long-term outcomes from the results of the diagnostic test accuracy review. The model was extended from a model previously developed by the authors. Ten studies were identified that evaluated the diagnostic test accuracy of MSI and/or IHC testing for identifying LS in CRC patients. For MSI testing, sensitivity ranged from 66.7% to 100.0% and specificity ranged from 61.1% to 92.5%. For IHC, sensitivity ranged from 80.8% to 100.0% and specificity ranged from 80.5% to 91.9%. When tumours showing low levels of MSI were treated as a positive result, the sensitivity of MSI testing increased but specificity fell. No end

  5. Systematic development of reduced reaction mechanisms for dynamic modeling

    Science.gov (United States)

    Frenklach, M.; Kailasanath, K.; Oran, E. S.

    1986-01-01

    A method for systematically developing a reduced chemical reaction mechanism for dynamic modeling of chemically reactive flows is presented. The method is based on the postulate that if a reduced reaction mechanism faithfully describes the time evolution of both thermal and chain reaction processes characteristic of a more complete mechanism, then the reduced mechanism will describe the chemical processes in a chemically reacting flow with approximately the same degree of accuracy. Here this postulate is tested by producing a series of mechanisms of reduced accuracy, which are derived from a full detailed mechanism for methane-oxygen combustion. These mechanisms were then tested in a series of reactive flow calculations in which a large-amplitude sinusoidal perturbation is applied to a system that is initially quiescent and whose temperature is high enough to start ignition processes. Comparison of the results for systems with and without convective flow show that this approach produces reduced mechanisms that are useful for calculations of explosions and detonations. Extensions and applicability to flames are discussed.
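
    The postulate above can be illustrated with a toy example (rate constants invented, not a combustion mechanism): a two-step chain A -> B -> C is reduced to a one-step mechanism A -> C, which is accurate when the intermediate B is consumed much faster than it is produced:

```python
# Toy sketch of mechanism reduction: a two-step chain A -> B -> C is
# replaced by a one-step reduced mechanism A -> C. When k2 >> k1 the
# intermediate B stays near quasi-steady state (B ~= k1*A/k2) and the
# reduced mechanism reproduces the full time evolution.

k1, k2 = 1.0, 100.0      # rate constants, 1/s (k2 >> k1)
dt, t_end = 1e-4, 2.0    # explicit-Euler step and end time, s

# Full mechanism: dA/dt = -k1*A, dB/dt = k1*A - k2*B, dC/dt = k2*B
A, B, C = 1.0, 0.0, 0.0
t = 0.0
while t < t_end:
    dA = -k1 * A
    dB = k1 * A - k2 * B
    dC = k2 * B
    A, B, C = A + dt * dA, B + dt * dB, C + dt * dC
    t += dt

# Reduced mechanism: dA/dt = -k1*A, dC/dt = k1*A (B eliminated)
Ar, Cr = 1.0, 0.0
t = 0.0
while t < t_end:
    dAr = -k1 * Ar
    Ar, Cr = Ar + dt * dAr, Cr + dt * (k1 * Ar)
    t += dt

print(f"C_full={C:.4f}  C_reduced={Cr:.4f}  difference={abs(C - Cr):.4f}")
```

    Real reduced combustion mechanisms involve many more species and must also reproduce thermal feedback, but the acceptance criterion is the same kind of agreement shown here.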

  6. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly, he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answers could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape, it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example, there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing the dynamic behaviour of an LEM over realistically long timescales. However, challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  7. Systematic parameter inference in stochastic mesoscopic modeling

    Science.gov (United States)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
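
    The response-surface idea can be illustrated far more simply than the paper's gPC-plus-compressive-sensing machinery: fit a low-order polynomial surrogate of a target property over sampled parameter values, then query the surrogate instead of rerunning simulations. The sample values below are synthetic (generated from a known quadratic), and the helper is a hypothetical sketch, not the authors' code:

```python
# Minimal response-surface sketch: fit target-property samples y(x) at
# a few parameter values x with a quadratic y ~ c0 + c1*x + c2*x^2 by
# solving the 3x3 normal equations with Gaussian elimination.

def fit_quadratic(xs, ys):
    rows = [[1.0, x, x * x] for x in xs]           # design matrix rows
    # Normal equations: (X^T X) c = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    m = [xtx[i] + [xty[i]] for i in range(3)]      # augmented matrix
    for col in range(3):                           # elimination + pivoting
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    coeffs = [0.0] * 3                             # back substitution
    for i in reversed(range(3)):
        coeffs[i] = (m[i][3] - sum(m[i][j] * coeffs[j]
                                   for j in range(i + 1, 3))) / m[i][i]
    return coeffs

# Synthetic "simulation" samples: a viscosity-like property versus one
# force-field parameter, generated from a known quadratic (no noise).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 + 0.5 * x + 0.25 * x * x for x in xs]
c0, c1, c2 = fit_quadratic(xs, ys)
print(f"c0={c0:.2f} c1={c1:.2f} c2={c2:.2f}")
```

    The paper's contribution is doing this in high-dimensional parameter space with an orthogonal (gPC) basis and recovering the few dominant coefficients from under-determined samples via compressive sensing, rather than dense least squares as here.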

  8. Systematic review of the performance of rapid rifampicin resistance testing for drug-resistant tuberculosis.

    Directory of Open Access Journals (Sweden)

    Matthew Arentz

    Full Text Available INTRODUCTION: Rapid tests for rifampicin resistance may be useful for identifying isolates at high risk of drug resistance, including multidrug-resistant TB (MDR-TB). However, choice of diagnostic test and prevalence of rifampicin resistance may both impact a diagnostic strategy for identifying drug-resistant TB. We performed a systematic review to evaluate the performance of WHO-endorsed rapid tests for rifampicin resistance detection. METHODS: We searched MEDLINE, Embase and the Cochrane Library through January 1, 2012. For each rapid test, we determined pooled sensitivity and specificity estimates using a hierarchical random effects model. Predictive values of the tests were determined at different prevalence rates of rifampicin resistance and MDR-TB. RESULTS: We identified 60 publications involving six different tests (INNO-LiPA Rif. TB assay, Genotype MTBDR assay, Genotype MTBDRplus assay, Colorimetric Redox Indicator (CRI) assay, Nitrate Reductase Assay (NRA) and MODS): for all tests, negative predictive values were high when rifampicin resistance prevalence was ≤ 30%. However, positive predictive values were considerably reduced for the INNO-LiPA Rif. TB assay, the MTBDRplus assay and MODS when rifampicin resistance prevalence was < 5%. LIMITATIONS: In many studies, it was unclear whether patient selection or index test performance could have introduced bias. In addition, we were unable to evaluate critical concentration thresholds for the colorimetric tests. DISCUSSION: Rapid tests for rifampicin resistance alone cannot accurately predict rifampicin resistance or MDR-TB in areas with a low prevalence of rifampicin resistance. However, in areas with a high prevalence of rifampicin resistance and MDR-TB, these tests may be a valuable component of an MDR-TB management strategy.
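
    The prevalence dependence reported above follows directly from Bayes' rule: for fixed sensitivity and specificity, the positive predictive value collapses as prevalence falls. The accuracy figures below are illustrative placeholders, not the review's pooled estimates:

```python
# Why positive predictive value (PPV) collapses at low prevalence:
# PPV and NPV follow from sensitivity, specificity and prevalence via
# Bayes' rule. Sensitivity/specificity here are illustrative values,
# not pooled estimates from the review.

def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence                  # true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)      # false-positive fraction
    fn = (1 - sensitivity) * prevalence            # false-negative fraction
    tn = specificity * (1 - prevalence)            # true-negative fraction
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

sens, spec = 0.95, 0.98
for prev in (0.30, 0.05, 0.01):
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence={prev:.0%}  PPV={ppv:.2f}  NPV={npv:.3f}")
```

    With these inputs the PPV drops from above 0.9 at 30% prevalence to roughly a third at 1% prevalence, while the NPV stays near 1, mirroring the review's conclusion.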

  9. Silo model tests with sand

    DEFF Research Database (Denmark)

    Munch-Andersen, Jørgen

    Tests have been carried out in a large silo model with Leighton Buzzard Sand. Normal pressures and shear stresses have been measured during tests carried out with different inlet and outlet geometries. The filling method is a very important parameter for the strength of the mass and thereby the pressures...

  11. Background model systematics for the Fermi GeV excess

    Energy Technology Data Exchange (ETDEWEB)

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power law with break energy E(break) = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of b b̄ final states a dark matter mass of m(χ) = 49 +6.4/−5.4 GeV.

  12. The Social Relations Model in Family Studies: A Systematic Review

    Science.gov (United States)

    Eichelsheim, Veroni I.; Dekovic, Maja; Buist, Kirsten L.; Cook, William L.

    2009-01-01

    The Social Relations Model (SRM) allows for examination of family relations on three different levels: the individual level (actor and partner effects), the dyadic level (relationship effects), and the family level (family effect). The aim of this study was to present a systematic review of SRM family studies and identify general patterns in the…

  13. Understanding in vivo modelling of depression in non-human animals: a systematic review protocol

    DEFF Research Database (Denmark)

    Bannach-Brown, Alexandra; Liao, Jing; Wegener, Gregers

    2016-01-01

The aim of this study is to systematically collect all published preclinical non-human animal literature on depression to provide an unbiased overview of existing knowledge. A systematic search will be carried out in PubMed and Embase. Studies will be included if they use non-human animal experimental model(s) to induce or mimic a depressive-like phenotype. Data that will be extracted include the model or method of induction; species and gender of the animals used; the behavioural, anatomical, electrophysiological, neurochemical or genetic outcome measure(s) used; risk of bias/quality of reporting; and any intervention(s) tested. There were no exclusion criteria based on language or date of publication. Automation techniques will be used, where appropriate, to reduce the human reviewer time. Meta-analyses will be conducted if feasible. This broad systematic review aims to gain a better...

  14. Systematic approach in protection and ergonomics testing personal protective equipment

    NARCIS (Netherlands)

Hartog, E.A. den

    2009-01-01

In the area of personal protection against chemical and biological (CB) agents there is a strong focus on testing the materials against the relevant threats. The testing programs in this area are elaborate and aim to guarantee that the material protects according to specifications. This...

  15. Silo model tests with sand

    DEFF Research Database (Denmark)

    Munch-Andersen, Jørgen

Tests have been carried out in a large silo model with Leighton Buzzard Sand. Normal pressures and shear stresses have been measured during tests carried out with inlet and outlet geometry. The filling method is a very important parameter for the strength of the mass and thereby the pressures, as well as the flow pattern during discharge of the silo. During discharge a mixed flow pattern has been identified...

  16. Diagnostic test accuracy of glutamate dehydrogenase for Clostridium difficile: Systematic review and meta-analysis.

    Science.gov (United States)

    Arimoto, Jun; Horita, Nobuyuki; Kato, Shingo; Fuyuki, Akiko; Higurashi, Takuma; Ohkubo, Hidenori; Endo, Hiroki; Takashi, Nonaka; Kaneko, Takeshi; Nakajima, Atsushi

    2016-07-15

We performed this systematic review and meta-analysis to assess the diagnostic accuracy of detecting glutamate dehydrogenase (GDH) for Clostridium difficile infection (CDI) based on the hierarchical model. Two investigators electronically searched four databases. Reference tests were stool cell cytotoxicity neutralization assay (CCNA) and stool toxigenic culture (TC). To assess the overall accuracy, we calculated the diagnostic odds ratio (DOR) using a DerSimonian-Laird random-effects model and the area under the hierarchical summary receiver operating characteristic curve (AUC) using Holling's proportional hazard models. The summary estimates of the sensitivity and the specificity were obtained using the bivariate model. According to 42 reports consisting of 3055 reference-positive comparisons and 26,188 reference-negative comparisons, the DOR was 115 (95%CI: 77-172, I(2) = 12.0%) and the AUC was 0.970 (95%CI: 0.958-0.982). The summary estimates of sensitivity and specificity were 0.911 (95%CI: 0.871-0.940) and 0.912 (95%CI: 0.892-0.928), respectively. The positive and negative likelihood ratios were 10.4 (95%CI 8.4-12.7) and 0.098 (95%CI 0.066-0.142), respectively. Detecting GDH for the diagnosis of CDI had both high sensitivity and specificity. Considering its low cost and prevalence, it is appropriate as a screening test for CDI.
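As a consistency check, the likelihood ratios reported above follow directly from the pooled sensitivity and specificity. A quick sketch (note the pooled DOR of 115 was fitted separately by the random-effects model, so it differs slightly from the value implied by the summary estimates alone):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and the implied diagnostic odds ratio."""
    lr_pos = sensitivity / (1.0 - specificity)   # P(T+|D+) / P(T+|D-)
    lr_neg = (1.0 - sensitivity) / specificity   # P(T-|D+) / P(T-|D-)
    return lr_pos, lr_neg, lr_pos / lr_neg

# Summary estimates for GDH detection reported in the review
lr_pos, lr_neg, dor = likelihood_ratios(0.911, 0.912)
print(round(lr_pos, 1), round(lr_neg, 3))  # prints: 10.4 0.098, matching the abstract
```

The implied DOR here is about 106, close to (but not identical with) the separately pooled 115.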

  17. Economic Evaluations of Pharmacogenetic and Pharmacogenomic Screening Tests: A Systematic Review. Second Update of the Literature.

    Directory of Open Access Journals (Sweden)

    Elizabeth J J Berm

Full Text Available Due to the extended application of pharmacogenetic and pharmacogenomic screening (PGx) tests it is important to assess whether they provide good value for money. This review provides an update of the literature. A literature search was performed in PubMed and papers published between August 2010 and September 2014, investigating the cost-effectiveness of PGx screening tests, were included. Papers from 2000 until July 2010 were included via two previous systematic reviews. Studies' overall quality was assessed with the Quality of Health Economic Studies (QHES) instrument. We found 38 studies, which combined with the previous 42 studies resulted in a total of 80 included studies. An average QHES score of 76 was found. Since 2010, more studies were funded by pharmaceutical companies. Most recent studies performed cost-utility analysis, univariate and probabilistic sensitivity analyses, and discussed limitations of their economic evaluations. Most studies indicated favorable cost-effectiveness. The majority of evaluations did not provide information regarding the intrinsic value of the PGx test. There were considerable differences in the costs for PGx testing. Reporting of the direction and magnitude of bias on the cost-effectiveness estimates as well as motivation for the chosen economic model and perspective were frequently missing. Application of PGx tests was mostly found to be a cost-effective or cost-saving strategy. We found that only a minority of recent pharmacoeconomic evaluations assessed the intrinsic value of the PGx tests. There was an increase in the number of studies and in the reporting of quality-associated characteristics. To improve future evaluations, scenario analysis including a broad range of PGx test costs and equal costs of comparator drugs, to assess the intrinsic value of the PGx tests, is recommended. In addition, robust clinical evidence regarding PGx tests' efficacy remains of utmost importance.

  18. Economic Evaluations of Pharmacogenetic and Pharmacogenomic Screening Tests: A Systematic Review. Second Update of the Literature.

    Science.gov (United States)

    Berm, Elizabeth J J; Looff, Margot de; Wilffert, Bob; Boersma, Cornelis; Annemans, Lieven; Vegter, Stefan; Boven, Job F M van; Postma, Maarten J

    2016-01-01

Due to the extended application of pharmacogenetic and pharmacogenomic screening (PGx) tests it is important to assess whether they provide good value for money. This review provides an update of the literature. A literature search was performed in PubMed and papers published between August 2010 and September 2014, investigating the cost-effectiveness of PGx screening tests, were included. Papers from 2000 until July 2010 were included via two previous systematic reviews. Studies' overall quality was assessed with the Quality of Health Economic Studies (QHES) instrument. We found 38 studies, which combined with the previous 42 studies resulted in a total of 80 included studies. An average QHES score of 76 was found. Since 2010, more studies were funded by pharmaceutical companies. Most recent studies performed cost-utility analysis, univariate and probabilistic sensitivity analyses, and discussed limitations of their economic evaluations. Most studies indicated favorable cost-effectiveness. The majority of evaluations did not provide information regarding the intrinsic value of the PGx test. There were considerable differences in the costs for PGx testing. Reporting of the direction and magnitude of bias on the cost-effectiveness estimates as well as motivation for the chosen economic model and perspective were frequently missing. Application of PGx tests was mostly found to be a cost-effective or cost-saving strategy. We found that only a minority of recent pharmacoeconomic evaluations assessed the intrinsic value of the PGx tests. There was an increase in the number of studies and in the reporting of quality-associated characteristics. To improve future evaluations, scenario analysis including a broad range of PGx test costs and equal costs of comparator drugs, to assess the intrinsic value of the PGx tests, is recommended. In addition, robust clinical evidence regarding PGx tests' efficacy remains of utmost importance.

  19. Computational Modeling of Simulation Tests.

    Science.gov (United States)

    1980-06-01

G. Leigh, W. Chown, and B. Harrison, Eric H. Wang Civil Engineering Research Facility, University of New Mexico, Albuquerque, June 1980.

  20. Systematic improvement of molecular representations for machine learning models

    CERN Document Server

    Huang, Bing

    2016-01-01

    The predictive accuracy of Machine Learning (ML) models of molecular properties depends on the choice of the molecular representation. We introduce a hierarchy of representations based on uniqueness and target similarity criteria. To systematically control target similarity, we rely on interatomic many body expansions including Bonding, Angular, and higher order terms (BA). Addition of higher order contributions systematically increases similarity to the potential energy function as well as predictive accuracy of the resulting ML models. Numerical evidence is presented for the performance of BAML models trained on molecular properties pre-calculated at electron-correlated and density functional theory level of theory for thousands of small organic molecules. Properties studied include enthalpies and free energies of atomization, heatcapacity, zero-point vibrational energies, dipole-moment, polarizability, HOMO/LUMO energies and gap, ionization potential, electron affinity, and electronic excitations. After tr...

  1. An attempt to lower sources of systematic measurement error using Hierarchical Generalized Linear Modeling (HGLM).

    Science.gov (United States)

    Sideridis, George D; Tsaousis, Ioannis; Katsis, Athanasios

    2014-01-01

The purpose of the present studies was to test the effects of systematic sources of measurement error on the parameter estimates of scales using the Rasch model. Studies 1 and 2 tested the effects of mood and affectivity. Study 3 evaluated the effects of fatigue. Lastly, studies 4 and 5 tested the effects of motivation on a number of parameters of the Rasch model (e.g., ability estimates). Results indicated that (a) the parameters of interest and the psychometric properties of the scales were substantially distorted in the presence of all systematic sources of error, and (b) the use of HGLM provides a way of adjusting the parameter estimates in the presence of these sources of error. It is concluded that validity in measurement requires a thorough evaluation of potential sources of error and appropriate adjustments based on each occasion.
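For reference, the dichotomous Rasch model that these studies perturb gives the probability of a correct response as a logistic function of the difference between person ability θ and item difficulty b. A minimal sketch:

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

A constant systematic shift in responses (e.g., from mood or fatigue) is indistinguishable from a shift in θ, which is one way such error sources can distort ability and difficulty estimates until they are explicitly adjusted for.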

  2. Effects of waveform model systematics on the interpretation of GW150914

    CERN Document Server

    Abbott, B P; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N; Aguiar, O D; Aiello, L; Ain, A; Ajith, P; Allen, B; Allocca, A; Altin, P A; Ananyeva, A; Anderson, S B; Anderson, W G; Appert, S; Arai, K; Araya, M C; Areeda, J S; Arnaud, N; Arun, K G; Ascenzi, S; Ashton, G; Ast, M; Aston, S M; Astone, P; Aufmuth, P; Aulbert, C; Avila-Alvarez, A; Babak, S; Bacon, P; Bader, M K M; Baker, P T; Baldaccini, F; Ballardin, G; Ballmer, S W; Barayoga, J C; Barclay, S E; Barish, B C; Barker, D; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barta, D; Bartlett, J; Bartos, I; Bassiri, R; Basti, A; Batch, J C; Baune, C; Bavigadda, V; Bazzan, M; Beer, C; Bejger, M; Belahcene, I; Belgin, M; Bell, A S; Berger, B K; Bergmann, G; Berry, C P L; Bersanetti, D; Bertolini, A; Betzwieser, J; Bhagwat, S; Bhandare, R; Bilenko, I A; Billingsley, G; Billman, C R; Birch, J; Birney, R; Birnholtz, O; Biscans, S; Bisht, A; Bitossi, M; Biwer, C; Bizouard, M A; Blackburn, J K; Blackman, J; Blair, C D; Blair, D G; Blair, R M; Bloemen, S; Bock, O; Boer, M; Bogaert, G; Bohe, A; Bondu, F; Bonnand, R; Boom, B A; Bork, R; Boschi, V; Bose, S; Bouffanais, Y; Bozzi, A; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brau, J E; Briant, T; Brillet, A; Brinkmann, M; Brisson, V; Brockill, P; Broida, J E; Brooks, A F; Brown, D A; Brown, D D; Brown, N M; Brunett, S; Buchanan, C C; Buikema, A; Bulik, T; Bulten, H J; Buonanno, A; Buskulic, D; Buy, C; Byer, R L; Cabero, M; Cadonati, L; Cagnoli, G; Cahillane, C; Bustillo, J Calder'on; Callister, T A; Calloni, E; Camp, J B; Cannon, K C; Cao, H; Cao, J; Capano, C D; Capocasa, E; Carbognani, F; Caride, S; Diaz, J Casanueva; Casentini, C; Caudill, S; Cavagli`a, M; Cavalier, F; Cavalieri, R; Cella, G; Cepeda, C B; Baiardi, L Cerboni; Cerretani, G; Cesarini, E; Chamberlin, S J; Chan, M; Chao, S; Charlton, P; Chassande-Mottin, E; Cheeseboro, B D; Chen, H 
Y; Chen, Y; Cheng, H -P; Chincarini, A; Chiummo, A; Chmiel, T; Cho, H S; Cho, M; Chow, J H; Christensen, N; Chu, Q; Chua, A J K; Chua, S; Chung, S; Ciani, G; Clara, F; Clark, J A; Cleva, F; Cocchieri, C; Coccia, E; Cohadon, P -F; Colla, A; Collette, C G; Cominsky, L; Constancio, M; Conti, L; Cooper, S J; Corbitt, T R; Cornish, N; Corsi, A; Cortese, S; Costa, C A; Coughlin, M W; Coughlin, S B; Coulon, J -P; Countryman, S T; Couvares, P; Covas, P B; Cowan, E E; Coward, D M; Cowart, M J; Coyne, D C; Coyne, R; Creighton, J D E; Creighton, T D; Cripe, J; Crowder, S G; Cullen, T J; Cumming, A; Cunningham, L; Cuoco, E; Canton, T Dal; Danilishin, S L; D'Antonio, S; Danzmann, K; Dasgupta, A; Costa, C F Da Silva; Dattilo, V; Dave, I; Davier, M; Davies, G S; Davis, D; Daw, E J; Day, B; Day, R; De, S; DeBra, D; Debreczeni, G; Degallaix, J; De Laurentis, M; Del'eglise, S; Del Pozzo, W; Denker, T; Dent, T; Dergachev, V; De Rosa, R; DeRosa, R T; DeSalvo, R; Devenson, J; Devine, R C; Dhurandhar, S; D'iaz, M C; Di Fiore, L; Di Giovanni, M; Di Girolamo, T; Di Lieto, A; Di Pace, S; Di Palma, I; Di Virgilio, A; Doctor, Z; Dolique, V; Donovan, F; Dooley, K L; Doravari, S; Dorrington, I; Douglas, R; 'Alvarez, M Dovale; Downes, T P; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Ducrot, M; Dwyer, S E; Edo, T B; Edwards, M C; Effler, A; Eggenstein, H -B; Ehrens, P; Eichholz, J; Eikenberry, S S; Eisenstein, R A; Essick, R C; Etienne, Z; Etzel, T; Evans, M; Evans, T M; Everett, R; Factourovich, M; Fafone, V; Fair, H; Fairhurst, S; Fan, X; Farinon, S; Farr, B; Farr, W M; Fauchon-Jones, E J; Favata, M; Fays, M; Fehrmann, H; Fejer, M M; Galiana, A Fern'andez; Ferrante, I; Ferreira, E C; Ferrini, F; Fidecaro, F; Fiori, I; Fiorucci, D; Fisher, R P; Flaminio, R; Fletcher, M; Fong, H; Forsyth, S S; Fournier, J -D; Frasca, S; Frasconi, F; Frei, Z; Freise, A; Frey, R; Frey, V; Fries, E M; Fritschel, P; Frolov, V V; Fulda, P; Fyffe, M; Gabbard, H; Gadre, B U; Gaebel, S M; Gair, J R; Gammaitoni, L; 
Gaonkar, S G; Garufi, F; Gaur, G; Gayathri, V; Gehrels, N; Gemme, G; Genin, E; Gennai, A; George, J; Gergely, L; Germain, V; Ghonge, S; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S; Giaime, J A; Giardina, K D; Giazotto, A; Gill, K; Glaefke, A; Goetz, E; Goetz, R; Gondan, L; Gonz'alez, G; Castro, J M Gonzalez; Gopakumar, A; Gorodetsky, M L; Gossan, S E; Gosselin, M; Gouaty, R; Grado, A; Graef, C; Granata, M; Grant, A; Gras, S; Gray, C; Greco, G; Green, A C; Groot, P; Grote, H; Grunewald, S; Guidi, G M; Guo, X; Gupta, A; Gupta, M K; Gushwa, K E; Gustafson, E K; Gustafson, R; Hacker, J J; Hall, B R; Hall, E D; Hammond, G; Haney, M; Hanke, M M; Hanks, J; Hanna, C; Hannam, M D; Hanson, J; Hardwick, T; Harms, J; Harry, G M; Harry, I W; Hart, M J; Hartman, M T; Haster, C -J; Haughian, K; Healy, J; Heidmann, A; Heintze, M C; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Hennig, J; Henry, J; Heptonstall, A W; Heurs, M; Hild, S; Hoak, D; Hofman, D; Holt, K; Holz, D E; Hopkins, P; Hough, J; Houston, E A; Howell, E J; Hu, Y M; Huerta, E A; Huet, D; Hughey, B; Husa, S; Huttner, S H; Huynh-Dinh, T; Indik, N; Ingram, D R; Inta, R; Isa, H N; Isac, J -M; Isi, M; Isogai, T; Iyer, B R; Izumi, K; Jacqmin, T; Jani, K; Jaranowski, P; Jawahar, S; Jim'enez-Forteza, F; Johnson, W W; Jones, D I; Jones, R; Jonker, R J G; Ju, L; Junker, J; Kalaghatgi, C V; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Karki, S; Karvinen, K S; Kasprzack, M; Katsavounidis, E; Katzman, W; Kaufer, S; Kaur, T; Kawabe, K; K'ef'elian, F; Keitel, D; Kelley, D B; Kennedy, R; Key, J S; Khalili, F Y; Khan, I; Khan, S; Khan, Z; Khazanov, E A; Kijbunchoo, N; Kim, Chunglee; Kim, J C; Kim, Whansun; Kim, W; Kim, Y -M; Kimbrell, S J; King, E J; King, P J; Kirchhoff, R; Kissel, J S; Klein, B; Kleybolte, L; Klimenko, S; Koch, P; Koehlenbeck, S M; Koley, S; Kondrashov, V; Kontos, A; Korobko, M; Korth, W Z; Kowalska, I; Kozak, D B; Kr"amer, C; Kringel, V; Krishnan, B; Kr'olak, A; Kuehn, G; Kumar, P; Kumar, R; Kuo, L; 
Kutynia, A; Lackey, B D; Landry, M; Lang, R N; Lange, J; Lantz, B; Lanza, R K; Lartaux-Vollard, A; Lasky, P D; Laxen, M; Lazzarini, A; Lazzaro, C; Leaci, P; Leavey, S; Lebigot, E O; Lee, C H; Lee, H K; Lee, H M; Lee, K; Lehmann, J; Lenon, A; Leonardi, M; Leong, J R; Leroy, N; Letendre, N; Levin, Y; Li, T G F; Libson, A; Littenberg, T B; Liu, J; Lockerbie, N A; Lombardi, A L; London, L T; Lord, J E; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J D; Lovelace, G; L"uck, H; Lundgren, A P; Lynch, R; Ma, Y; Macfoy, S; Machenschalk, B; MacInnis, M; Macleod, D M; Magana-Sandoval, F; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Mandic, V; Mangano, V; Mansell, G L; Manske, M; Mantovani, M; Marchesoni, F; Marion, F; M'arka, S; M'arka, Z; Markosyan, A S; Maros, E; Martelli, F; Martellini, L; Martin, I W; Martynov, D V; Mason, K; Masserot, A; Massinger, T J; Masso-Reid, M; Mastrogiovanni, S; Matichard, F; Matone, L; Mavalvala, N; Mazumder, N; McCarthy, R; McClelland, D E; McCormick, S; McGrath, C; McGuire, S C; McIntyre, G; McIver, J; McManus, D J; McRae, T; McWilliams, S T; Meacher, D; Meadors, G D; Meidam, J; Melatos, A; Mendell, G; Mendoza-Gandara, D; Mercer, R A; Merilh, E L; Merzougui, M; Meshkov, S; Messenger, C; Messick, C; Metzdorff, R; Meyers, P M; Mezzani, F; Miao, H; Michel, C; Middleton, H; Mikhailov, E E; Milano, L; Miller, A L; Miller, A; Miller, B B; Miller, J; Millhouse, M; Minenkov, Y; Ming, J; Mirshekari, S; Mishra, C; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moggi, A; Mohan, M; Mohapatra, S R P; Montani, M; Moore, B C; Moore, C J; Moraru, D; Moreno, G; Morriss, S R; Mours, B; Mow-Lowry, C M; Mueller, G; Muir, A W; Mukherjee, Arunava; Mukherjee, D; Mukherjee, S; Mukund, N; Mullavey, A; Munch, J; Muniz, E A M; Murray, P G; Mytidis, A; Napier, K; Nardecchia, I; Naticchioni, L; Nelemans, G; Nelson, T J N; Neri, M; Nery, M; Neunzert, A; Newport, J M; Newton, G; Nguyen, T T; Nielsen, A B; Nissanke, S; Nitz, A; Noack, A; Nocera, F; 
Nolting, D; Normandin, M E N; Nuttall, L K; Oberling, J; Ochsner, E; Oelker, E; Ogin, G H; Oh, J J; Oh, S H; Ohme, F; Oliver, M; Oppermann, P; Oram, Richard J; O'Reilly, B; O'Shaughnessy, R; Ottaway, D J; Overmier, H; Owen, B J; Pace, A E; Page, J; Pai, A; Pai, S A; Palamos, J R; Palashov, O; Palomba, C; Pal-Singh, A; Pan, H; Pankow, C; Pannarale, F; Pant, B C; Paoletti, F; Paoli, A; Papa, M A; Paris, H R; Parker, W; Pascucci, D; Pasqualetti, A; Passaquieti, R; Passuello, D; Patricelli, B; Pearlstone, B L; Pedraza, M; Pedurand, R; Pekowsky, L; Pele, A; Penn, S; Perez, C J; Perreca, A; Perri, L M; Pfeiffer, H P; Phelps, M; Piccinni, O J; Pichot, M; Piergiovanni, F; Pierro, V; Pillant, G; Pinard, L; Pinto, I M; Pitkin, M; Poe, M; Poggiani, R; Popolizio, P; Post, A; Powell, J; Prasad, J; Pratt, J W W; Predoi, V; Prestegard, T; Prijatelj, M; Principe, M; Privitera, S; Prodi, G A; Prokhorov, L G; Puncken, O; Punturo, M; Puppo, P; P"urrer, M; Qi, H; Qin, J; Qiu, S; Quetschke, V; Quintero, E A; Quitzow-James, R; Raab, F J; Rabeling, D S; Radkins, H; Raffai, P; Raja, S; Rajan, C; Rakhmanov, M; Rapagnani, P; Raymond, V; Razzano, M; Re, V; Read, J; Regimbau, T; Rei, L; Reid, S; Reitze, D H; Rew, H; Reyes, S D; Rhoades, E; Ricci, F; Riles, K; Rizzo, M; Robertson, N A; Robie, R; Robinet, F; Rocchi, A; Rolland, L; Rollins, J G; Roma, V J; Romano, J D; Romano, R; Romie, J H; Rosi'nska, D; Rowan, S; R"udiger, A; Ruggi, P; Ryan, K; Sachdev, S; Sadecki, T; Sadeghian, L; Sakellariadou, M; Salconi, L; Saleem, M; Salemi, F; Samajdar, A; Sammut, L; Sampson, L M; Sanchez, E J; Sandberg, V; Sanders, J R; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Sauter, O; Savage, R L; Sawadsky, A; Schale, P; Scheuer, J; Schmidt, E; Schmidt, J; Schmidt, P; Schnabel, R; Schofield, R M S; Sch"onbeck, A; Schreiber, E; Schuette, D; Schutz, B F; Schwalbe, S G; Scott, J; Scott, S M; Sellers, D; Sengupta, A S; Sentenac, D; Sequino, V; Sergeev, A; Setyawati, Y; Shaddock, D A; Shaffer, T J; Shahriar, M S; 
Shapiro, B; Shawhan, P; Sheperd, A; Shoemaker, D H; Shoemaker, D M; Siellez, K; Siemens, X; Sieniawska, M; Sigg, D; Silva, A D; Singer, A; Singer, L P; Singh, A; Singh, R; Singhal, A; Sintes, A M; Slagmolen, B J J; Smith, B; Smith, J R; Smith, R J E; Son, E J; Sorazu, B; Sorrentino, F; Souradeep, T; Spencer, A P; Srivastava, A K; Staley, A; Steinke, M; Steinlechner, J; Steinlechner, S; Steinmeyer, D; Stephens, B C; Stevenson, S P; Stone, R; Strain, K A; Straniero, N; Stratta, G; Strigin, S E; Sturani, R; Stuver, A L; Summerscales, T Z; Sun, L; Sunil, S; Sutton, P J; Swinkels, B L; Szczepa'nczyk, M J; Tacca, M; Talukder, D; Tanner, D B; T'apai, M; Taracchini, A; Taylor, R; Theeg, T; Thomas, E G; Thomas, M; Thomas, P; Thorne, K A; Thrane, E; Tippens, T; Tiwari, S; Tiwari, V; Tokmakov, K V; Toland, K; Tomlinson, C; Tonelli, M; Tornasi, Z; Torrie, C I; T"oyr"a, D; Travasso, F; Traylor, G; Trifir`o, D; Trinastic, J; Tringali, M C; Trozzo, L; Tse, M; Tso, R; Turconi, M; Tuyenbayev, D; Ugolini, D; Unnikrishnan, C S; Urban, A L; Usman, S A; Vahlbruch, H; Vajente, G; Valdes, G; van Bakel, N; van Beuzekom, M; Brand, J F J van den; Broeck, C Van Den; Vander-Hyde, D C; van der Schaaf, L; van Heijningen, J V; van Veggel, A A; Vardaro, M; Varma, V; Vass, S; Vas'uth, M; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Venugopalan, G; Verkindt, D; Vetrano, F; Vicer'e, A; Viets, A D; Vinciguerra, S; Vine, D J; Vinet, J -Y; Vitale, S; Vo, T; Vocca, H; Vorvick, C; Voss, D V; Vousden, W D; Vyatchanin, S P; Wade, A R; Wade, L E; Wade, M; Walker, M; Wallace, L; Walsh, S; Wang, G; Wang, H; Wang, M; Wang, Y; Ward, R L; Warner, J; Was, M; Watchi, J; Weaver, B; Wei, L -W; Weinert, M; Weinstein, A J; Weiss, R; Wen, L; Wessels, P; Westphal, T; Wette, K; Whelan, J T; Whiting, B F; Whittle, C; Williams, D; Williams, R D; Williamson, A R; Willis, J L; Willke, B; Wimmer, M H; Winkler, W; Wipf, C C; Wittel, H; Woan, G; Woehler, J; Worden, J; Wright, J L; Wu, D S; Wu, G; Yam, W; 
Yamamoto, H; Yancey, C C; Yap, M J; Yu, Hang; Yu, Haocun; Yvert, M; zny, A Zadro; Zangrando, L; Zanolin, M; Zendri, J -P; Zevin, M; Zhang, L; Zhang, M; Zhang, T; Zhang, Y; Zhao, C; Zhou, M; Zhou, Z; Zhu, S J; Zhu, X J; Zucker, M E; Zweizig, J; Boyle, M; Chu, T; Hemberger, D; Hinder, I; Kidder, L E; Ossokine, S; Scheel, M; Szilagyi, B; Teukolsky, S; Vano-Vinuales, A

    2016-01-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analyses on mock signals from numerical simulations of a serie...

  3. Testing Geological Models with Terrestrial Antineutrino Flux Measurements

    CERN Document Server

    Dye, Steve

    2009-01-01

    Uranium and thorium are the main heat producing elements in the earth. Their quantities and distributions, which specify the flux of detectable antineutrinos generated by the beta decay of their daughter isotopes, remain unmeasured. Geological models of the continental crust and the mantle predict different quantities and distributions of uranium and thorium. Many of these differences are resolvable with precision measurements of the terrestrial antineutrino flux. This precision depends on both statistical and systematic uncertainties. An unavoidable background of antineutrinos from nuclear reactors typically dominates the systematic uncertainty. This report explores in detail the capability of various operating and proposed geo-neutrino detectors for testing geological models.

  4. Business model framework applications in health care: A systematic review.

    Science.gov (United States)

    Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl

    2017-01-01

    It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.

  5. In-vitro orthodontic bond strength testing : A systematic review and meta-analysis

    NARCIS (Netherlands)

    Finnema, K.J.; Ozcan, M.; Post, W.J.; Ren, Y.J.; Dijkstra, P.U.

    2010-01-01

INTRODUCTION: The aims of this study were to systematically review the available literature regarding in-vitro orthodontic shear bond strength testing and to analyze the influence of test conditions on bond strength. METHODS: Our data sources were Embase and Medline. Relevant studies were selected b...

  6. Accuracy of clinical tests in the diagnosis of anterior cruciate ligament injury: A systematic review

    NARCIS (Netherlands)

    M.S. Swain (Michael S.); N. Henschke (Nicholas); S.J. Kamper (Steven); A.S. Downie (Aron S.); B.W. Koes (Bart); C. Maher (Chris)

    2014-01-01

Background: Numerous clinical tests are used in the diagnosis of anterior cruciate ligament (ACL) injury but their accuracy is unclear. The purpose of this study is to evaluate the diagnostic accuracy of clinical tests for the diagnosis of ACL injury. Methods: Study Design: Systematic rev...

  7. Improved Systematic Pointing Error Model for the DSN Antennas

    Science.gov (United States)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for corrections of their systematic pointing errors; they achieved significant improvement in performance at Ka-band (32 GHz) and X-band (8.4 GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translates to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
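The idea of extending a physical pointing model with higher-order terms can be sketched as a linear least-squares fit over basis functions of azimuth and elevation. This is a simplified illustration only; the actual DSN model terms and coefficients are not reproduced here, and the basis below mixes invented stand-ins for physical terms with one low-order harmonic.

```python
import numpy as np

def fit_pointing_model(az, el, errors, basis):
    """Fit observed pointing errors as a linear combination of basis terms."""
    design = np.column_stack([term(az, el) for term in basis])
    coeffs, *_ = np.linalg.lstsq(design, errors, rcond=None)
    return coeffs

# A few traditional-style physical terms plus one harmonic term (all illustrative)
basis = [
    lambda az, el: np.ones_like(az),         # fixed encoder offset
    lambda az, el: np.cos(el),               # tilt-like term
    lambda az, el: np.sin(az) * np.cos(el),  # tilt-like term
    lambda az, el: np.cos(2 * az),           # low-order harmonic term
]
```

Adding well-chosen higher-order terms lets the same least-squares machinery absorb residual systematics that the purely physical terms cannot represent, which is the mechanism behind the factor-of-two-to-three improvement described above.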

  8. Analysis and Correction of Systematic Height Model Errors

    Science.gov (United States)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As is now standard, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites, and in some cases caused by a small base length, such an image orientation does not reach the achievable height model accuracy. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has limited calibration accuracy and only a 4 Hz attitude recording, which may be insufficient. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. A tendency toward systematic deformation was found in a Pléiades tri-stereo combination with small base length; the small base length magnifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, preventing correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached with the support of reference height models. 
As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are
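The tilt-and-bias removal described above, a least-squares plane fit to the differences between the DHM and a reference surface model, can be sketched as follows. The grid, heights, and tilt below are synthetic stand-ins, not SRTM or AW3D30 data.

```python
import numpy as np

# Hypothetical DHM and reference DSM on the same grid (synthetic example)
rng = np.random.default_rng(1)
ny, nx = 50, 50
y, x = np.mgrid[0:ny, 0:nx]
reference = rng.normal(300.0, 5.0, (ny, nx))           # stand-in for e.g. SRTM
tilt = 0.02 * x - 0.015 * y + 1.5                      # systematic tilt + bias
dhm = reference + tilt + rng.normal(0, 0.3, (ny, nx))  # "satellite" DHM

# Least-squares plane fit to the height differences, then removal of the tilt
diff = (dhm - reference).ravel()
A = np.column_stack([np.ones(diff.size), x.ravel(), y.ravel()])
coeffs, *_ = np.linalg.lstsq(A, diff, rcond=None)
corrected = dhm - (A @ coeffs).reshape(ny, nx)

residual_bias = float(np.mean(corrected - reference))
print("residual bias after leveling:", residual_bias)
```

A sparse, poorly distributed set of GCPs would replace the dense `diff` grid here, which is exactly why the abstract notes that GCP-only leveling can fail.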

  9. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As is now standard, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites, and in some cases caused by a small base length, such an image orientation does not reach the achievable height model accuracy. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has limited calibration accuracy and only a 4 Hz attitude recording, which may be insufficient. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. A tendency toward systematic deformation was found in a Pléiades tri-stereo combination with small base length; the small base length magnifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, preventing correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached with the support of reference height models. As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS

  10. Laboratory Tests of Chameleon Models

    CERN Document Server

    Brax, Philippe; Davis, Anne-Christine; Shaw, Douglas

    2009-01-01

    We present a cursory overview of chameleon models of dark energy and their laboratory tests with an emphasis on optical and Casimir experiments. Optical experiments measuring the ellipticity of an initially polarised laser beam are sensitive to the coupling of chameleons to photons. The next generation of Casimir experiments may be able to unravel the nature of the scalar force mediated by the chameleon between parallel plates.

  11. Seven years of workplace drug testing in Italy: A systematic review and meta-analysis.

    Science.gov (United States)

    Rosso, Gian Luca; Montomoli, Cristina; Morini, Luca; Candura, Stefano M

    2017-06-01

    In Italy, Workplace Drug Testing (WDT) has been compulsory by law for specific categories of workers since 2008, offering the opportunity to compare studies conducted within a single regulatory framework. The aims of this paper are to estimate the overall prevalence of WDT positivity (at screening survey) among Italian workers and to evaluate the percentage of true and false positives at confirmation analysis. A systematic review and meta-analysis of the scientific literature on WDT in Italy from January 2008 to March 2015 was carried out, according to the MOOSE guidelines. A random effects model was utilized to calculate pooled prevalence. Potential sources of heterogeneity were explored using sensitivity testing and subgroup analysis. The overall meta-analytical prevalence of positivity at WDT among Italian workers was 1.4% [95% confidence interval (CI) = 1.1-1.7%]. It was significantly lower among workers screened with an on-site test (1%; 95% CI = 0.5-1.5%) than with a bench-top test (1.7%; 95% CI = 1.3-2.1%). Nine studies provided data on false positives at the screening test, with a combined prevalence estimate, calculated on positive cases, of 30% (95% CI = 16-44%). In Italy, the number of true positives at first-level workplace drug testing is low, while the frequency of false positives is relatively high. A revision of the Italian legislation on the subject seems advisable. Copyright © 2017 John Wiley & Sons, Ltd.
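A random-effects pooled prevalence of the kind reported above can be sketched with a DerSimonian-Laird estimate on logit-transformed prevalences. The per-study counts below are invented for illustration only; they are not the Italian WDT data.

```python
import math

# Hypothetical per-study data: (positives, tested) -- illustrative numbers only
studies = [(12, 900), (30, 2000), (8, 700), (25, 1500), (15, 1200)]

# Logit-transformed prevalences with approximate within-study variances
thetas, variances = [], []
for pos, n in studies:
    p = pos / n
    thetas.append(math.log(p / (1 - p)))
    variances.append(1 / pos + 1 / (n - pos))  # approximate var of logit(p)

# DerSimonian-Laird estimate of the between-study variance tau^2
w = [1 / v for v in variances]
theta_fixed = sum(wi * t for wi, t in zip(w, thetas)) / sum(w)
q = sum(wi * (t - theta_fixed) ** 2 for wi, t in zip(w, thetas))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate, back-transformed to a prevalence
w_re = [1 / (v + tau2) for v in variances]
theta_re = sum(wi * t for wi, t in zip(w_re, thetas)) / sum(w_re)
pooled = 1 / (1 + math.exp(-theta_re))
print(f"pooled prevalence: {pooled:.3%}")
```

With homogeneous studies tau^2 collapses to zero and the estimate reduces to the fixed-effect (inverse-variance) pooled prevalence.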

  12. [Reliability and validity of the modified Allen test: a systematic review and meta-analysis].

    Science.gov (United States)

    Romeu-Bordas, Óscar; Ballesteros-Peña, Sendoa

    2017-01-01

    The objective was to evaluate the reliability and validity of the modified Allen test in screening for collateral circulation deficits in the palm and for predicting distal hand ischemia. We performed a systematic review of the literature indexed in 6 databases. We developed a search strategy to locate studies comparing the Allen test to Doppler ultrasound to detect circulation deficits in the hand, studies assessing the incidence of ischemic events on arterial puncture after an abnormal Allen test result, and studies of Allen test interobserver agreement. Fourteen articles met the inclusion criteria. Nine assessed the validity of the test as a screening tool for detecting collateral circulation deficits. From data published in 3 studies that had followed comparable designs we calculated a sensitivity of 77% and a specificity of 93% for the Allen test. Four studies that assessed the ability of the test to predict ischemia reported no ischemic hand events following arterial puncture in patients with abnormal Allen test results. A single study assessing the test's reliability reported an interobserver agreement rate of 71.5%. This systematic review and meta-analysis supports the conclusion that the Allen test does not have sufficient diagnostic validity to serve as a screening tool for collateral circulation deficits in the hand. Nor is it a good predictor of hand ischemia after arterial puncture. Moreover, its reliability is limited. There is insufficient evidence to support its systematic use before arterial puncture.
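The diagnostic values above follow from a standard 2x2 table against the reference standard. A minimal sketch, with counts chosen to reproduce the review's 77% sensitivity and 93% specificity (the counts themselves and the 10% prevalence are illustrative, not from the included studies):

```python
# Hypothetical 2x2 counts for a screening test vs. a Doppler reference standard
tp, fn, tn, fp = 77, 23, 93, 7  # chosen so that sens = 77%, spec = 93%

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Predictive values depend on prevalence; shown for an assumed 10% prevalence
prev = 0.10
ppv = sensitivity * prev / (sensitivity * prev + (1 - specificity) * (1 - prev))
npv = specificity * (1 - prev) / ((1 - sensitivity) * prev + specificity * (1 - prev))
print(f"sens={sensitivity:.0%} spec={specificity:.0%} PPV={ppv:.1%} NPV={npv:.1%}")
```

The prevalence dependence of PPV/NPV is one reason a test with seemingly reasonable sensitivity and specificity can still perform poorly as a screening tool.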

  13. A systematically tested intervention for managing reactive depression.

    Science.gov (United States)

    Smith, Carol E; Leenerts, Mary Hobbs; Gajewski, Byron J

    2003-01-01

    Patients and family caregivers repeatedly experience reactive depression that leads to medication errors, mismanagement of chronic disease, and poor self-care. These problems place them at high risk for malnutrition, infection, heart disease, and psychiatric sequelae. A secondary data analysis compared findings across a series of studies to evaluate the acceptability, effectiveness, and cost of a therapeutic writing intervention to reduce reactive depression, a common and frequently recurring adverse symptom. Secondary analysis of data from the series of studies was conducted. Data came from patients requiring lifelong, daily central intravenous catheter infusion of home total parenteral nutrition necessitated by nonmalignant bowel disease and their family caregivers who assist with this complex home care. Variables combined across the studies were pre- and postintervention scores from the Center for Epidemiological Studies-Depression Scale (CES-D), the number of weeks patients wrote in their diaries (adherence), and the written content in the diaries. Content analysis was used to analyze written data. The intervention materials and nurses' time spent were averaged across studies to determine costs. The weighted average baseline CES-D scores across studies for patients (17.94) and caregivers (15.75) showed the presence of depression. After journal writing had been used for an average of 10.4 weeks across studies, the effect sizes of the between-group (d = .27) and within-group (d = .65) patient scores indicated moderate to large improvement in depression. Themes from written diaries showed that missing out on activities, financial worries, strain related to the severe illness, and the complexity of home care were related to depression across the studies. The intervention was acceptable to participants, effective for managing reactive depression, and low in cost. The next steps will address testing for the longitudinal effects of the intervention.
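The within-group effect sizes reported above are standardized mean differences (Cohen's d). A minimal sketch for the paired pre/post case, using invented CES-D scores (not the study data):

```python
import math

# Hypothetical pre/post CES-D scores for one group (paired, within-group d)
pre = [18, 20, 16, 19, 17, 21, 15, 22, 18, 16]
post = [14, 15, 13, 16, 12, 17, 11, 18, 14, 13]

# Paired differences: negative values mean depression scores improved
diffs = [b - a for a, b in zip(pre, post)]
mean_diff = sum(diffs) / len(diffs)
sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (len(diffs) - 1))

d = mean_diff / sd_diff  # Cohen's d for paired samples
print(f"within-group effect size d = {d:.2f}")
```

A between-group d would instead divide the difference of group means by a pooled standard deviation across the two groups.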

  14. Systematic testing of flood adaptation options in urban areas through simulations

    Science.gov (United States)

    Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten

    2016-04-01

    While models can quantify flood risk in great detail, the results are subject to a number of deep uncertainties. Climate-dependent drivers such as sea level and rainfall intensities, population growth and economic development all have a strong influence on future flood risk, but future developments can only be estimated coarsely. In such a situation, robust decision-making frameworks call for the systematic evaluation of mitigation measures against ensembles of potential futures. We have coupled the urban development software DAnCE4Water and the 1D-2D hydraulic simulation package MIKE FLOOD to create a framework that allows for such systematic evaluations, considering mitigation measures under a variety of climate futures and urban development scenarios. A wide spectrum of mitigation measures can be considered in this setup, ranging from structural measures such as modifications of the sewer network, through local retention of rainwater and the modification of surface flow paths, to policy measures such as restrictions on urban development in flood-prone areas or master plans that encourage compact development. The setup was tested in a 300 ha residential catchment in Melbourne, Australia. The results clearly demonstrate the importance of considering a range of potential futures in the planning process. For example, local rainwater retention measures strongly reduce flood risk in a scenario with a moderate increase in rain intensities and moderate urban growth, but their performance varies strongly, yielding very little improvement in situations with pronounced climate change. The systematic testing of adaptation measures further allows for the identification of so-called adaptation tipping points, i.e. levels of the drivers of flood risk at which the desired level of flood risk is exceeded despite the implementation of (a combination of) mitigation measures. Assuming a range of development rates for the drivers of flood risk, such tipping points can be translated into
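The tipping-point idea described above, scanning a driver of flood risk until the acceptable risk level is exceeded despite a mitigation measure, can be sketched with a toy risk function. The formula, thresholds, and retention capacity below are invented stand-ins; a real study would call the coupled DAnCE4Water/MIKE FLOOD simulation at each step.

```python
# Sketch of an adaptation tipping-point scan (hypothetical flood-risk model)

def flood_risk(rain_factor, retention_capacity_mm):
    """Toy stand-in for a coupled 1D-2D simulation: risk grows with rainfall
    intensity and is reduced by local retention (illustrative formula only)."""
    effective_rain = max(0.0, 60.0 * rain_factor - retention_capacity_mm)
    return effective_rain / 100.0  # normalised damage proxy

ACCEPTABLE_RISK = 0.25
tipping_point = None
for step in range(10, 21):  # rainfall intensity factors 1.0 .. 2.0
    factor = step / 10.0
    if flood_risk(factor, retention_capacity_mm=50.0) > ACCEPTABLE_RISK:
        tipping_point = factor  # first driver level where mitigation fails
        break

print("adaptation tipping point at rain factor:", tipping_point)
```

Combined with assumed development rates for the driver, such a factor can then be read off as a point in time at which additional measures become necessary.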

  15. Proceedings Tenth Workshop on Model Based Testing

    OpenAIRE

    Pakulin, Nikolay; Petrenko, Alexander K.; Schlingloff, Bernd-Holger

    2015-01-01

    The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but it is also capable of finding errors in the models themselves. The intent of this workshop is to bring together researchers and users of model...
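The core loop described above, deriving test sequences from a behavioral model and checking that the implementation conforms to it, can be sketched with a tiny state machine. The turnstile example below is a generic illustration, not taken from the workshop proceedings.

```python
# Minimal model-based testing sketch: a state-machine model of the required
# behavior drives test generation and conformance checking.

MODEL = {  # (state, event) -> next state
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

class Turnstile:
    """The 'implementation' under test (here trivially conformant)."""
    def __init__(self):
        self.state = "locked"
    def handle(self, event):
        self.state = MODEL[(self.state, event)]
        return self.state

def generate_tests(events, depth):
    """Enumerate all event sequences up to a given depth from the model."""
    seqs = [[]]
    for _ in range(depth):
        seqs = [s + [e] for s in seqs for e in events]
    return seqs

sequences = generate_tests(("coin", "push"), 3)
failures = 0
for seq in sequences:
    sut, state = Turnstile(), "locked"
    for event in seq:
        state = MODEL[(state, event)]   # expected behavior from the model
        if sut.handle(event) != state:  # conformance check against the model
            failures += 1
            break
print("sequences tested:", len(sequences), "failures:", failures)
```

A deliberately wrong transition in `Turnstile.handle` would be flagged by the same loop, and a surprising model-derived expectation can equally reveal an error in the model itself.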

  16. Systematic Uncertainties in High-Energy Hadronic Interaction Models

    Science.gov (United States)

    Zha, M.; Knapp, J.; Ostapchenko, S.

    2003-07-01

    Hadronic interaction models for cosmic ray energies are uncertain since our knowledge of hadronic interactions is extrapolated from accelerator experiments at much lower energies. At present most high-energy models are based on the Gribov-Regge theory of multi-Pomeron exchange, which provides a theoretical framework to evaluate cross-sections and particle production. While experimental data constrain some of the model parameters, others are not well determined and are therefore a source of systematic uncertainties. In this paper we evaluate the variation of results obtained with the QGSJET model when modifying parameters relating to three major sources of uncertainty: the form of the parton structure function, the role of diffractive interactions, and the string hadronisation. Results on inelastic cross-sections, on secondary particle production and on the air shower development are discussed.

  17. Clinical tests to diagnose lumbar spondylolysis and spondylolisthesis: A systematic review.

    Science.gov (United States)

    Alqarni, Abdullah M; Schneiders, Anthony G; Cook, Chad E; Hendrick, Paul A

    2015-08-01

    The aim of this paper was to systematically review the diagnostic ability of clinical tests to detect lumbar spondylolysis and spondylolisthesis. A systematic literature search of six databases, with no language restrictions, from 1950 to 2014 was concluded on February 1, 2014. Clinical tests were required to be compared against imaging reference standards and to report, or allow computation of, common diagnostic values. The systematic search yielded a total of 5164 articles, with 57 retained for full-text examination, from which 4 met the full inclusion criteria for the review. Study heterogeneity precluded a meta-analysis of included studies. Fifteen different clinical tests were evaluated for their ability to diagnose lumbar spondylolisthesis and one test for its ability to diagnose lumbar spondylolysis. The one-legged hyperextension test demonstrated low to moderate sensitivity (50%-73%) and low specificity (17%-32%) to diagnose lumbar spondylolysis, while the lumbar spinous process palpation test was the optimal diagnostic test for lumbar spondylolisthesis, returning high specificity (87%-100%) and moderate to high sensitivity (60%-88%) values. Lumbar spondylolysis and spondylolisthesis are identifiable causes of low back pain in athletes. There appears to be utility to lumbar spinous process palpation for the diagnosis of lumbar spondylolisthesis; however, the one-legged hyperextension test has virtually no value in diagnosing patients with spondylolysis.

  18. Effects of waveform model systematics on the interpretation of GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. 
J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. 
J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. 
A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. 
J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  19. Modelling the transmission of healthcare associated infections: a systematic review

    Science.gov (United States)

    2013-01-01

    Background Dynamic transmission models are increasingly being used to improve our understanding of the epidemiology of healthcare-associated infections (HCAI). However, there has been no recent comprehensive review of this emerging field. This paper summarises how mathematical models have informed the field of HCAI and how methods have developed over time. Methods MEDLINE, EMBASE, Scopus, CINAHL plus and Global Health databases were systematically searched for dynamic mathematical models of HCAI transmission and/or the dynamics of antimicrobial resistance in healthcare settings. Results In total, 96 papers met the eligibility criteria. The main research themes considered were evaluation of infection control effectiveness (64%), variability in transmission routes (7%), the impact of movement patterns between healthcare institutes (5%), the development of antimicrobial resistance (3%), and strain competitiveness or co-colonisation with different strains (3%). Methicillin-resistant Staphylococcus aureus was the most commonly modelled HCAI (34%), followed by vancomycin-resistant enterococci (16%). Other common HCAIs, e.g. Clostridium difficile, were rarely investigated (3%). Very few models have been published on HCAI from low- or middle-income countries. The first HCAI models looked at antimicrobial resistance in hospital settings using compartmental deterministic approaches. Stochastic models (which include the role of chance in the transmission process) are becoming increasingly common. Model calibration (inference of unknown parameters by fitting models to data) and sensitivity analysis are comparatively uncommon, occurring in 35% and 36% of studies respectively, but their application is increasing. Only 5% of models compared their predictions to external data. Conclusions Transmission models have been used to understand complex systems and to predict the impact of control policies. 
Methods have generally improved, with an increased use of stochastic models, and
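The stochastic compartmental approach mentioned above, where chance events drive transmission in a small population, can be sketched with a simple SIS-type ward model of the kind used for pathogens such as MRSA. The rates, ward size, and time horizon below are illustrative assumptions, not parameters from any reviewed study.

```python
import random

# Sketch of a stochastic ward transmission model (SIS-type, daily time steps)

def simulate_ward(beta=0.08, gamma=0.05, n_patients=20, i0=2, days=200, seed=42):
    """beta: daily transmission rate, gamma: daily clearance probability."""
    rng = random.Random(seed)
    infected = i0
    history = [infected]
    for _ in range(days):
        susceptible = n_patients - infected
        # Chance events: each susceptible may acquire, each carrier may clear
        new_inf = sum(rng.random() < beta * infected / n_patients
                      for _ in range(susceptible))
        cleared = sum(rng.random() < gamma for _ in range(infected))
        infected = max(0, min(n_patients, infected + new_inf - cleared))
        history.append(infected)
    return history

# Stochastic variability across runs: some runs go extinct, others persist
runs = [simulate_ward(seed=s)[-1] for s in range(50)]
print("mean colonised patients at day 200:", sum(runs) / len(runs))
```

The deterministic counterpart would replace the per-patient coin flips with the ODE dI/dt = beta*S*I/N - gamma*I; the stochastic version additionally captures extinction, which matters in small ward populations.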

  20. Measurement properties of maximal cardiopulmonary exercise tests protocols in persons after stroke: a systematic review.

    NARCIS (Netherlands)

    Wittink, H.; Verschuren, O.; Terwee, C.; Groot, J. de; Kwakkel, G.; Port, I. van de

    2017-01-01

    Objective: To systematically review and critically appraise the literature on measurement properties of cardiopulmonary exercise test protocols for measuring aerobic capacity, VO2max, in persons after stroke. Data sources: PubMed, Embase and Cinahl were searched from inception up to 15 June 2016. A

  1. Systematic integration of experimental data and models in systems biology

    Directory of Open Access Journals (Sweden)

    Simeonidis Evangelos

    2010-11-01

    Full Text Available Abstract Background The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components and their biochemical reactions, and corresponding parameters. Automating the assembly and use of systems biology models is dependent upon data integration processes involving the interoperation of data and analytical resources. Results Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories, the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Conclusions Distributed information about metabolic reactions that have been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.

  2. Which Psychological Factors are Related to HIV Testing? A Quantitative Systematic Review of Global Studies.

    Science.gov (United States)

    Evangeli, Michael; Pady, Kirsten; Wroe, Abigail L

    2016-04-01

    Deciding to test for HIV is necessary for receiving HIV treatment and care among those who are HIV-positive. This article presents a systematic review of quantitative studies on relationships between psychological (cognitive and affective) variables and HIV testing. Sixty-two studies were included (fifty-six cross-sectional). Most measured lifetime testing. HIV knowledge, risk perception and stigma were the most commonly measured psychological variables. Meta-analysis was carried out on the relationships between HIV knowledge and testing, and HIV risk perception and testing. Both relationships were positive and significant, representing small effects (HIV knowledge: d = 0.22, 95 % CI 0.14-0.31). Other psychological variables associated with testing included: perceived testing benefits, testing fear, perceived behavioural control/self-efficacy, knowledge of testing sites, prejudiced attitudes towards people living with HIV, and knowing someone with HIV. Research and practice implications are outlined.
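A pooled effect such as the d = 0.22 reported above is typically obtained by inverse-variance weighting of the per-study effect sizes. The sketch below shows the fixed-effect version; the three study values are hypothetical, not taken from the review (which may well have pooled under a random-effects model):

```python
import math

def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect sizes.
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    d = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical per-study Cohen's d values and sampling variances:
d, ci = pool_fixed_effect([0.15, 0.30, 0.22], [0.01, 0.02, 0.015])
print(f"pooled d = {d:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```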

  3. Systematic assignment of thermodynamic constraints in metabolic network models

    Directory of Open Access Journals (Sweden)

    Heinemann Matthias

    2006-11-01

    Full Text Available Abstract Background The availability of genome sequences for many organisms enabled the reconstruction of several genome-scale metabolic network models. Currently, significant efforts are put into the automated reconstruction of such models. For this, several computational tools have been developed that particularly assist in identifying and compiling the organism-specific lists of metabolic reactions. In contrast, the last step of the model reconstruction process, which is the definition of the thermodynamic constraints in terms of reaction directionalities, still needs to be done manually. No computational method exists that allows for an automated and systematic assignment of reaction directions in genome-scale models. Results We present an algorithm that – based on thermodynamics, network topology and heuristic rules – automatically assigns reaction directions in metabolic models such that the reaction network is thermodynamically feasible with respect to the production of energy equivalents. It first exploits all available experimentally derived Gibbs energies of formation to identify irreversible reactions. As these thermodynamic data are not available for all metabolites, in a next step, further reaction directions are assigned on the basis of network topology considerations and thermodynamics-based heuristic rules. Briefly, the algorithm identifies reaction subsets from the metabolic network that are able to convert low-energy co-substrates into their high-energy counterparts and thus net produce energy. Our algorithm aims at disabling such thermodynamically infeasible cyclic operation of reaction subnetworks by assigning reaction directions based on a set of thermodynamics-derived heuristic rules. We demonstrate our algorithm on a genome-scale metabolic model of E. coli. The introduced systematic direction assignment yielded 130 irreversible reactions (out of 920 total reactions), which corresponds to about 70% of all irreversible
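The first step of the algorithm, fixing directions from experimentally derived Gibbs energies, can be sketched as a sign test on ΔG'° together with its uncertainty. The threshold logic and the example values below are illustrative assumptions only; the published method additionally uses network topology and heuristic rules:

```python
# Sketch of thermodynamics-based direction assignment (illustrative values).
# A reaction is fixed as irreversible only if the estimated Gibbs reaction
# energy cannot change sign anywhere within its uncertainty range.

def assign_direction(dg_prime, uncertainty):
    """Return 'forward', 'backward', or 'reversible' from ΔG'° ± uncertainty (kJ/mol)."""
    if dg_prime + uncertainty < 0:
        return "forward"      # ΔG'° < 0 over the whole range
    if dg_prime - uncertainty > 0:
        return "backward"     # ΔG'° > 0 over the whole range
    return "reversible"       # sign uncertain; leave the direction unconstrained

reactions = {
    "reaction_A": (-17.0, 5.0),   # strongly exergonic -> forward
    "reaction_B": (  6.5, 8.0),   # sign uncertain -> reversible
    "reaction_C": ( 12.0, 3.0),   # endergonic -> backward
}
for name, (dg, u) in reactions.items():
    print(name, assign_direction(dg, u))
```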

  4. Diagnostic accuracy of xpert test in tuberculosis detection: A systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Ravdeep Kaur

    2016-01-01

    Full Text Available Background: World Health Organization (WHO) recommends the use of Xpert MTB/RIF assay for rapid diagnosis of tuberculosis (TB) and detection of rifampicin resistance. This systematic review was conducted to assess the diagnostic accuracy and cost-effectiveness of the Xpert MTB/RIF assay. Methods: A systematic literature search was conducted in the following databases: Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews, MEDLINE, PUBMED, Scopus, Science Direct and Google Scholar for relevant studies published between 2010 and December 2014. Studies given in the systematic reviews were accessed separately and used for analysis. Selection of studies, data extraction and assessment of quality of included studies were performed independently by two reviewers. Studies evaluating the diagnostic accuracy of Xpert MTB/RIF assay among adult or predominantly adult patients (≥14 years), presumed to have pulmonary TB with or without HIV infection were included in the review. Also, studies that had assessed the diagnostic accuracy of Xpert MTB/RIF assay using sputum and other respiratory specimens were included. Results: The included studies had a low risk of any form of bias, showing that findings are of high scientific validity and credibility. Quantitative analysis of 37 included studies shows that Xpert MTB/RIF is an accurate diagnostic test for TB and detection of rifampicin resistance. Conclusion: Xpert MTB/RIF assay is a robust, sensitive and specific test for accurate diagnosis of tuberculosis as compared to conventional tests like culture and microscopic examination.

  5. Diagnostic Accuracy of Xpert Test in Tuberculosis Detection: A Systematic Review and Meta-analysis

    Science.gov (United States)

    Kaur, Ravdeep; Kachroo, Kavita; Sharma, Jitendar Kumar; Vatturi, Satyanarayana Murthy; Dang, Amit

    2016-01-01

    Background: World Health Organization (WHO) recommends the use of Xpert MTB/RIF assay for rapid diagnosis of tuberculosis (TB) and detection of rifampicin resistance. This systematic review was conducted to assess the diagnostic accuracy and cost-effectiveness of the Xpert MTB/RIF assay. Methods: A systematic literature search was conducted in the following databases: Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews, MEDLINE, PUBMED, Scopus, Science Direct and Google Scholar for relevant studies published between 2010 and December 2014. Studies given in the systematic reviews were accessed separately and used for analysis. Selection of studies, data extraction and assessment of quality of included studies were performed independently by two reviewers. Studies evaluating the diagnostic accuracy of Xpert MTB/RIF assay among adult or predominantly adult patients (≥14 years), presumed to have pulmonary TB with or without HIV infection were included in the review. Also, studies that had assessed the diagnostic accuracy of Xpert MTB/RIF assay using sputum and other respiratory specimens were included. Results: The included studies had a low risk of any form of bias, showing that findings are of high scientific validity and credibility. Quantitative analysis of 37 included studies shows that Xpert MTB/RIF is an accurate diagnostic test for TB and detection of rifampicin resistance. Conclusion: Xpert MTB/RIF assay is a robust, sensitive and specific test for accurate diagnosis of tuberculosis as compared to conventional tests like culture and microscopic examination. PMID:27013842

  6. A systematic review of predictive modeling for bronchiolitis.

    Science.gov (United States)

    Luo, Gang; Nkoy, Flory L; Gesteland, Per H; Glasgow, Tiffany S; Stone, Bryan L

    2014-10-01

    Bronchiolitis is the most common cause of illness leading to hospitalization in young children. At present, many bronchiolitis management decisions are made subjectively, leading to significant practice variation among hospitals and physicians caring for children with bronchiolitis. To standardize care for bronchiolitis, researchers have proposed various models to predict the disease course to help determine a proper management plan. This paper reviews the existing state of the art of predictive modeling for bronchiolitis. Predictive modeling for respiratory syncytial virus (RSV) infection is covered whenever appropriate, as RSV accounts for about 70% of bronchiolitis cases. A systematic review was conducted through a PubMed search up to April 25, 2014. The literature on predictive modeling for bronchiolitis was retrieved using a comprehensive search query, which was developed through an iterative process. Search results were limited to human subjects, the English language, and children (birth to 18 years). The literature search returned 2312 references in total. After manual review, 168 of these references were determined to be relevant and are discussed in this paper. We identify several limitations and open problems in predictive modeling for bronchiolitis, and provide some preliminary thoughts on how to address them, with the hope to stimulate future research in this domain. Many problems remain open in predictive modeling for bronchiolitis. Future studies will need to address them to achieve optimal predictive models. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. The Utility of Brief Cognitive Tests for Patients With Type 2 Diabetes Mellitus: A Systematic Review.

    Science.gov (United States)

    Dong, YanHong; Kua, Zhong Jie; Khoo, Eric Yin Hao; Koo, Edward H; Merchant, Reshma A

    2016-10-01

    Type 2 diabetes mellitus (T2DM) is associated with an increased risk for mild cognitive impairment and dementia in both middle-aged and older individuals. Brief cognitive tests can potentially serve as a reliable and cost-effective approach to detect cognitive decrements in clinical practice. This systematic review examined the utility of brief cognitive tests in studies with patients with T2DM. This systematic review was conducted according to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. "PubMed," "PsychINFO," "ScienceDirect," and "ProQuest" electronic databases were searched to identify articles published from January 1, 2005 to December 31, 2015. The search yielded 22 studies, with only 8 using brief tests as a cognitive screening tool, whereas the majority used these tests as a measure of global cognitive function. With regard to the cognitive screening studies, most failed to fulfil standard criteria for reporting diagnostic test accuracy, such as the Standards for Reporting of Diagnostic Accuracy for dementia and cognitive impairment. Moreover, few studies reported discriminant indices such as sensitivity, specificity, and positive and negative predictive values of brief cognitive tests in detecting cognitive impairment in patients with T2DM. Among studies that used brief cognitive tests as a measure of global cognitive function, patients with diabetes tended to perform worse than patients without diabetes. Processing speed appeared to be particularly impaired among patients with diabetes; therefore, measures of processing speed such as the Digit Symbol Substitution Test may add value to brief cognitive tests such as the Montreal Cognitive Assessment. The Montreal Cognitive Assessment supplemented by the Digit Symbol Substitution Test shows initial promise in screening for cognitive impairment in T2DM. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc.

  8. A systematic review of animal models for Staphylococcus aureus osteomyelitis

    Directory of Open Access Journals (Sweden)

    W Reizner

    2014-03-01

    Full Text Available Staphylococcus aureus (S. aureus) osteomyelitis is a significant complication for orthopaedic patients undergoing surgery, particularly with fracture fixation and arthroplasty. Given the difficulty in studying S. aureus infections in human subjects, animal models serve an integral role in exploring the pathogenesis of osteomyelitis, and aid in determining the efficacy of prophylactic and therapeutic treatments. Animal models should mimic the clinical scenarios seen in patients as closely as possible to permit the experimental results to be translated to the corresponding clinical care. To help understand existing animal models of S. aureus, we conducted a systematic search of PubMed and Ovid MEDLINE to identify in vivo animal experiments that have investigated the management of S. aureus osteomyelitis in the context of fractures and metallic implants. In this review, experimental studies are categorised by animal species and are further classified by the setting of the infection. Study methods are summarised and the relevant advantages and disadvantages of each species and model are discussed. While no ideal animal model exists, the understanding of a model’s strengths and limitations should assist clinicians and researchers to appropriately select an animal model to translate the conclusions to the clinical setting.

  9. ANAEROBIC EXERCISE TESTING IN REHABILITATION : A SYSTEMATIC REVIEW OF AVAILABLE TESTS AND PROTOCOLS

    NARCIS (Netherlands)

    Krops, Leonie A.; Albada, Trijntje; van der Woude, Lucas H. V.; Hijmans, Juha M.; Dekker, Rienk

    Objective: Anaerobic capacity assessment in rehabilitation has received increasing scientific attention in recent years. However, anaerobic capacity is not tested consistently in clinical rehabilitation practice. This study reviews tests and protocols for anaerobic capacity in adults with various

  10. Testing Strategies for Model-Based Development

    Science.gov (United States)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
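The conformance-testing half of this approach can be sketched as back-to-back testing: run the model (acting as an executable specification) and the generated code on the same automatically generated inputs and check behavioural equivalence. Both functions below are hypothetical stand-ins, not artifacts from the report:

```python
import random

# Back-to-back (conformance) testing sketch. "model" plays the role of an
# executable specification; "generated_code" stands in for code generated
# from it. The saturation behaviour is an invented example.

def model(x):
    """Specification: saturate the input to the range [0, 100]."""
    return max(0, min(100, x))

def generated_code(x):
    """Implementation under test (here deliberately equivalent)."""
    if x < 0:
        return 0
    return x if x <= 100 else 100

def conformance_test(n=1000, seed=42):
    """Compare model and implementation on n generated inputs."""
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.randint(-1000, 1000)
        assert generated_code(x) == model(x), f"divergence at input {x}"
    return True

print("conformant:", conformance_test())
```

In the report's terms, the generated inputs would instead be derived from structural coverage criteria on the model rather than drawn at random.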

  11. Nucleic acid amplification tests in the diagnosis of tuberculous pleuritis: a systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Riley Lee W

    2004-02-01

    Full Text Available Abstract Background Conventional tests for tuberculous pleuritis have several limitations. A variety of new, rapid tests such as nucleic acid amplification tests – including polymerase chain reaction – have been evaluated in recent times. We conducted a systematic review to determine the accuracy of nucleic acid amplification (NAA) tests in the diagnosis of tuberculous pleuritis. Methods A systematic review and meta-analysis of 38 English and Spanish articles (with 40 studies), identified via searches of six electronic databases, hand searching of selected journals, and contact with authors, experts, and test manufacturers. Sensitivity, specificity, and other measures of accuracy were pooled using random effects models. Summary receiver operating characteristic curves were used to summarize overall test performance. Heterogeneity in study results was formally explored using subgroup analyses. Results Of the 40 studies included, 26 used in-house ("home-brew") tests, and 14 used commercial tests. Commercial tests had a low overall sensitivity (0.62; 95% confidence interval [CI] 0.43, 0.77) and high specificity (0.98; 95% CI 0.96, 0.98). The positive and negative likelihood ratios for commercial tests were 25.4 (95% CI 16.2, 40.0) and 0.40 (95% CI 0.24, 0.67), respectively. All commercial tests had consistently high specificity estimates; the sensitivity estimates, however, were heterogeneous across studies. With the in-house tests, both sensitivity and specificity estimates were significantly heterogeneous. Clinically meaningful summary estimates could not be determined for in-house tests. Conclusions Our results suggest that commercial NAA tests may have a potential role in confirming (ruling in) tuberculous pleuritis. However, these tests have low and variable sensitivity and, therefore, may not be useful in excluding (ruling out) the disease. NAA test results, therefore, cannot replace conventional tests; they need to be interpreted in parallel
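The likelihood ratios quoted above relate to sensitivity and specificity as LR+ = sens/(1 − spec) and LR− = (1 − sens)/spec. Plugging in the pooled commercial-test estimates gives values close to, but not identical with, the review's directly pooled ratios, since pooling likelihood ratios across studies need not equal the ratio of the pooled sensitivity and specificity:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Pooled commercial-test estimates from the abstract:
lr_pos, lr_neg = likelihood_ratios(0.62, 0.98)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
```

The high LR+ (a positive result strongly raises the probability of disease) and only moderate LR− match the abstract's conclusion: useful for ruling in, weak for ruling out.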

  12. Procedure for the systematic orientation of digitised cranial models. Design and validation.

    Science.gov (United States)

    Bailo, M; Baena, S; Marín, J J; Arredondo, J M; Auría, J M; Sánchez, B; Tardío, E; Falcón, L

    2015-12-01

    Comparison of bony pieces requires that they are oriented systematically to ensure that homologous regions are compared. Few orientation methods are highly accurate; this is particularly true for methods applied to three-dimensional models obtained by surface scanning, a technique whose special features make it a powerful tool in forensic contexts. The aim of this study was to develop and evaluate a systematic, assisted orientation method for aligning three-dimensional cranial models relative to the Frankfurt Plane, which would produce accurate orientations independent of operator and anthropological expertise. The study sample comprised four crania of known age and sex. All the crania were scanned and reconstructed using an Eva Artec™ portable 3D surface scanner and, subsequently, the positions of certain characteristic landmarks were determined by three different operators using the Rhinoceros 3D surface modelling software. Intra-observer analysis showed a tendency for orientation to be more accurate when using the assisted method than when using conventional manual orientation. Inter-observer analysis showed that experienced evaluators achieved results at least as accurate, if not more accurate, using the assisted method than those obtained using manual orientation, while inexperienced evaluators achieved more accurate orientation using the assisted method. The method tested is an innovative system capable of providing very precise, systematic and automated spatial orientation of virtual cranial models relative to standardised anatomical planes, independent of the operator and operator experience.

  13. Systematic multiscale models for deep convection on mesoscales

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Rupert [Freie Universitaet Berlin and Potsdam Institute for Climate Impact Research, FB Mathematik and Informatik, Berlin (Germany); Majda, Andrew J. [New York University, Courant Institute of Mathematical Sciences, New York, NY (United States)

    2006-11-15

    This paper builds on recent developments of a unified asymptotic approach to meteorological modeling [ZAMM, 80: 765-777, 2000; SIAM Proc. App. Math. 116, 227-289, 2004], which was used successfully in the development of systematic multiscale models for the tropics in Majda and Klein [J. Atmosph. Sci. 60: 393-408, 2003], Majda and Biello [PNAS, 101: 4736-4741, 2004], and Biello and Majda [J. Atmosph. Sci. 62: 1694-1720, 2005]. Here we account for typical bulk microphysics parameterizations of moist processes within this framework. The key steps are careful nondimensionalization of the bulk microphysics equations and the choice of appropriate distinguished limits for the various nondimensional small parameters that appear. We are then in a position to study scale interactions in the atmosphere involving moist physics. We demonstrate this by developing two systematic multiscale models that are motivated by our interest in mesoscale organized convection. The emphasis here is on multiple length scales but common time scales. The first of these models describes the short-time evolution of slender, deep convective hot towers with horizontal scale ∼1 km interacting with the linearized momentum balance on length and time scales of (10 km/3 min). We expect this model to describe how convective inhibition may be overcome near the surface, how the onset of deep convection triggers convective-scale gravity waves, and that it will also yield new insight into how such local convective events may conspire to create larger-scale strong storms. The second model addresses the next larger range of length and time scales (10 km, 100 km, and 20 min) and exhibits mathematical features that are strongly reminiscent of mesoscale organized convection. In both cases, the asymptotic analysis reveals how the stiffness of condensation/evaporation processes induces highly nonlinear dynamics. Besides providing new theoretical insights, the derived models may also serve as a

  14. Systematic review of cardiopulmonary exercise testing post stroke: Are we adhering to practice recommendations?

    Science.gov (United States)

    van de Port, Ingrid G L; Kwakkel, Gert; Wittink, Harriet

    2015-11-01

    To systematically review the use of cardiopulmonary exercise testing in people who have survived a stroke. The following questions are addressed: (i) What are the testing procedures used? (ii) What are the patient, safety and outcome characteristics in the cardiopulmonary exercise testing procedures? (iii) Which criteria are used to determine maximum oxygen uptake (VO2peak/max) in the cardiopulmonary exercise testing procedures? Systematic review of studies of cardiopulmonary exercise testing in stroke survivors. PubMed, EMBASE, and CINAHL were searched from inception until January 2014. MeSH headings and keywords used were: oxygen capacity, oxygen consumption, oxygen uptake, peak VO2, max VO2, aerobic fitness, physical fitness, aerobic capacity, physical endurance and stroke. Search and selection were performed independently by 2 reviewers. Sixty studies were scrutinized, including 2,104 stroke survivors. Protocols included treadmill (n = 21), bicycle (n = 33), stepper (n = 3) and arm (n = 1) ergometry. Five studies reported 11 adverse events (1%). Secondary outcomes were reported in few studies, which hampered interpretation of the patient's effort, and hence the value of the VO2peak. Most studies did not adhere, or insufficiently adhered, to the existing cardiopulmonary exercise testing guidelines for exercise testing. Thus, the results of cardiopulmonary exercise testing protocols in stroke patients cannot be compared.

  15. A Systematic Literature Review of Agile Maturity Model Research

    Directory of Open Access Journals (Sweden)

    Vaughan Henriques

    2017-02-01

    Full Text Available Background/Aim/Purpose: A commonly implemented software process improvement framework is the capability maturity model integrated (CMMI). Existing literature indicates higher levels of CMMI maturity could result in a loss of agility due to its organizational focus. To maintain agility, research has focussed attention on agile maturity models. The objective of this paper is to find the common research themes and conclusions in agile maturity model research. Methodology: This research adopts a systematic approach to agile maturity model research, using Google Scholar, Science Direct, and IEEE Xplore as sources. In total, 531 articles were initially found matching the search criteria, which were filtered to 39 articles by applying specific exclusion criteria. Contribution: The article highlights the trends in agile maturity model research, specifically bringing to light the lack of research providing validation of such models. Findings: Two major themes emerge: the coexistence of agile and CMMI, and the development of agile-principle-based maturity models. The research trend indicates an increase in agile maturity model articles, particularly in the latter half of the last decade, with concentrations of research coinciding with version updates of CMMI. While there is general consensus around higher CMMI maturity levels being incompatible with true agility, there is evidence of the two coexisting when agile is introduced into already highly matured environments. Future Research: Future research directions for this topic include how to attain higher levels of CMMI maturity using only agile methods, how governance is addressed in agile environments, and whether existing agile maturity models relate to improved project success.

  16. Current Developments in Dementia Risk Prediction Modelling: An Updated Systematic Review.

    Directory of Open Access Journals (Sweden)

    Eugene Y H Tang

    Full Text Available Accurate identification of individuals at high risk of dementia influences clinical care, inclusion criteria for clinical trials and development of preventative strategies. Numerous models have been developed for predicting dementia. To evaluate these models we undertook a systematic review in 2010 and updated this in 2014 due to the increase in research published in this area. Here we include a critique of the variables selected for inclusion and an assessment of model prognostic performance. Our previous systematic review was updated with a search from January 2009 to March 2014 in electronic databases (MEDLINE, Embase, Scopus, Web of Science). Articles examining risk of dementia in non-demented individuals and including measures of sensitivity, specificity or the area under the curve (AUC) or c-statistic were included. In total, 1,234 articles were identified from the search; 21 articles met inclusion criteria. New developments in dementia risk prediction include the testing of non-APOE genes, use of non-traditional dementia risk factors, incorporation of diet, physical function and ethnicity, and model development in specific subgroups of the population including individuals with diabetes and those with different educational levels. Four models have been externally validated. Three studies considered time or cost implications of computing the model. There is no one model that is recommended for dementia risk prediction in population-based settings. Further, it is unlikely that one model will fit all. Consideration of the optimal features of new models should focus on methodology (setting/sample, model development and testing in a replication cohort) and the acceptability and cost of attaining the risk variables included in the prediction score. Further work is required to validate existing models or develop new ones in different populations as well as determine the ethical implications of dementia risk prediction, before applying the particular
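The c-statistic (AUC) used to appraise these prediction models is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case, and can be computed directly as a Mann-Whitney statistic. A small sketch with made-up predicted risks:

```python
# Mann-Whitney estimate of the c-statistic (AUC); ties count as 1/2.
# The risk values below are invented for illustration.

def c_statistic(risks_cases, risks_noncases):
    """Fraction of case/non-case pairs correctly ordered by predicted risk."""
    wins = 0.0
    for rc in risks_cases:
        for rn in risks_noncases:
            if rc > rn:
                wins += 1.0
            elif rc == rn:
                wins += 0.5
    return wins / (len(risks_cases) * len(risks_noncases))

cases = [0.82, 0.55, 0.67, 0.91]     # predicted risks, developed dementia
noncases = [0.12, 0.40, 0.55, 0.30]  # predicted risks, did not
print(f"c-statistic = {c_statistic(cases, noncases):.3f}")
```

A c-statistic of 0.5 means the model ranks cases no better than chance; 1.0 means perfect discrimination.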

  17. Linear Logistic Test Modeling with R

    Directory of Open Access Journals (Sweden)

    Purya Baghaei

    2014-01-01

    Full Text Available The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The applications of the model in test validation, hypothesis testing, cross-cultural studies of test bias, rule-based item generation, and investigating construct-irrelevant factors which contribute to item difficulty are explained. The model is applied to an English as a foreign language reading comprehension test and the results are discussed.
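The linear constraint at the heart of the LLTM is beta_i = Σ_k q_ik · eta_k: each item's Rasch difficulty is a weighted sum of basic parameters for the cognitive operations the item requires. eRm estimates eta by conditional maximum likelihood; the Python sketch below (design matrix and values hypothetical) only illustrates the linear structure by recovering eta from known item difficulties with least squares:

```python
import numpy as np

# LLTM structure sketch: item difficulty beta_i = sum_k q_ik * eta_k.
# Q (the weight/design matrix) and eta below are invented for illustration.

Q = np.array([
    [1, 0],   # item 1 requires operation A once
    [0, 1],   # item 2 requires operation B once
    [1, 1],   # item 3 requires both operations
    [2, 1],   # item 4 requires A twice and B once
], dtype=float)

eta_true = np.array([0.5, -0.3])   # difficulty contributed by each operation
beta = Q @ eta_true                # implied Rasch item difficulties

# Recover the basic parameters from the item difficulties:
eta_hat, *_ = np.linalg.lstsq(Q, beta, rcond=None)
print("recovered basic parameters:", np.round(eta_hat, 3))
```

In a real analysis eta would be estimated directly from item response data, not from previously estimated difficulties; the least-squares step here only demonstrates that the linear constraint is identifiable when Q has full column rank.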

  18. HIV Testing Among Internet-Using MSM in the United States: Systematic Review.

    Science.gov (United States)

    Noble, Meredith; Jones, Amanda M; Bowles, Kristina; DiNenno, Elizabeth A; Tregear, Stephen J

    2017-02-01

    Regular HIV testing enables early identification and treatment of HIV among at-risk men who have sex with men (MSM). Characterizing HIV testing needs for Internet-using MSM informs development of Internet-facilitated testing interventions. In this systematic review we analyze HIV testing patterns among Internet-using MSM in the United States who report, through participation in an online study or survey, their HIV status as negative or unknown, and identify demographic or behavioral risk factors associated with testing. We systematically searched multiple electronic databases for relevant English-language articles published between January 1, 2005 and December 16, 2014. Using meta-analysis, we summarized the proportion of Internet-using MSM who had ever tested for HIV and the proportion who tested in the 12 months preceding participation in the online study or survey. We also identified factors predictive of these outcomes using meta-regression and narrative synthesis. Thirty-two studies that enrolled 83,186 MSM met our inclusion criteria. Among the studies reporting data for each outcome, 85 % (95 % CI 82-87 %) of participants had ever tested, and 58 % (95 % CI 53-63 %) had tested in the year preceding enrollment in the study. Age over 30 years, at least a college education, use of drugs, and self-identification as being homosexual or gay were associated with ever having tested for HIV. A large majority of Internet-using MSM indicated they had been tested for HIV at some point in the past. A smaller proportion-but still a majority-reported they had been tested within the year preceding study or survey participation. MSM who self-identify as heterosexual or bisexual, are younger, or who use drugs (including non-injection drugs) may be less likely to have ever tested for HIV. The overall findings of our systematic review are encouraging; however, a subpopulation of MSM may benefit from targeted outreach. These

  19. Using Laser Scanners to Augment the Systematic Error Pointing Model

    Science.gov (United States)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  20. Testing Linear Models for Ability Parameters in Item Response Models

    NARCIS (Netherlands)

    Glas, Cees A.W.; Hendrawan, Irene

    2005-01-01

    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: a likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum likelihood framework.

  1. Field accuracy of fourth-generation rapid diagnostic tests for acute HIV-1: a systematic review

    OpenAIRE

    2015-01-01

    Introduction: Fourth-generation HIV-1 rapid diagnostic tests (RDTs) detect HIV-1 p24 antigen to screen for acute HIV-1. However, diagnostic accuracy during clinical use may be suboptimal. Methods: Clinical sensitivity and specificity of fourth-generation RDTs for acute HIV-1 were collated from field evaluation studies in adults identified by a systematic literature search. Results: Four studies with 17 381 participants from Australia, Swaziland, the United Kingdom and Malawi were identified. ...

  2. Testing linearity against nonlinear moving average models

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.

    1998-01-01

    Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small sample performance of the proposed tests are evaluated in a Monte Carlo study and compared

  3. Software Testing Method Based on Model Comparison

    Institute of Scientific and Technical Information of China (English)

    XIE Xiao-dong; LU Yan-sheng; MAO Cheng-yin

    2008-01-01

    A model comparison based software testing method (MCST) is proposed. In this method, the requirements and programs of the software under test are transformed into the same form and described by the same model description language (MDL). Then, the requirements are transformed into a specification model and the programs into an implementation model. Thus, the elements and structures of the two models are compared, and the differences between them are obtained. Based on the differences, a test suite is generated. Different MDLs can be chosen for the software under test. The usage of two classical MDLs in MCST, the equivalence classes model and the extended finite state machine (EFSM) model, is described with example applications. The results show that the test suites generated by MCST are more efficient and smaller than those of some other testing methods, such as the path-coverage testing method, the object state diagram testing method, etc.

  4. Barriers to workplace HIV testing in South Africa: a systematic review of the literature.

    Science.gov (United States)

    Weihs, Martin; Meyer-Weitz, Anna

    2016-01-01

    Low workplace HIV testing uptake makes effective management of HIV and AIDS difficult for South African organisations. Identifying barriers to workplace HIV testing is therefore crucial to inform urgently needed interventions aimed at increasing workplace HIV testing. This study reviewed literature on workplace HIV testing barriers in South Africa. Pubmed, ScienceDirect, PsycInfo and SA Publications were systematically searched. Studies needed to include measures to assess perceived or real barriers to participation in HIV Counselling and Testing (HCT) at the workplace, or to discuss perceived or real barriers to HIV testing at the workplace based on collected data; they also needed to provide qualitative or quantitative evidence related to the research topic and to refer to workplaces in South Africa. Barriers were defined as any factor at the economic, social, personal, environmental or organisational level preventing employees from participating in workplace HIV testing. Four peer-reviewed studies were included, two with quantitative and two with qualitative study designs. The overarching barriers across the studies were fear of compromised confidentiality, being stigmatised or discriminated against in the event of testing HIV positive or being observed participating in HIV testing, and a low personal risk perception. Furthermore, it appeared that an awareness of an HIV-positive status hindered HIV testing at the workplace. Further research evidence on South African workplace barriers to HIV testing will enhance related interventions. This systematic review found only very limited and contextualised evidence about workplace HCT barriers in South Africa, making it difficult to generalise and insufficient to inform new interventions aimed at increasing workplace HCT uptake.

  5. Using Islands to Systematically Compare Satellite Observations to Models and Theory

    Science.gov (United States)

    Sherwood, S. C.; Robinson, F.; Gerstle, D.; Liu, C.; Kirshbaum, D. J.; Hernandez-Deckers, D.; Li, Y.

    2012-12-01

    Satellite observations are our most voluminous, and perhaps most important, source of information on atmospheric convective behavior. However, testing models is quite difficult, especially with satellites in low Earth orbits, due to several problems including infrequent sampling, the chaotic nature of convection (which means actual storms will always differ from modeled ones even with perfect models), model initialization, and uncertain boundary conditions. This talk presents work using forcing by islands of different sizes as a strategy for overcoming these problems. We examine the systematic dependence of different characteristics of convection on island size, as a target for simple theories of convection and the sea breeze, and for CRMs (cloud resolving models). We find some nonintuitive trends of behavior with size -- some of which we can reproduce with the WRF CRM, and some of which we cannot.

  6. Physical examination tests for screening and diagnosis of cervicogenic headache: A systematic review.

    Science.gov (United States)

    Rubio-Ochoa, J; Benítez-Martínez, J; Lluch, E; Santacruz-Zaragozá, S; Gómez-Contreras, P; Cook, C E

    2016-02-01

    It has been suggested that differential diagnosis of headaches should consist of a robust subjective examination and a detailed physical examination of the cervical spine. Cervicogenic headache (CGH) is a form of headache that involves referred pain from the neck. To our knowledge, no studies have summarized the reliability and diagnostic accuracy of physical examination tests for CGH. The aim of this study was to summarize the reliability and diagnostic accuracy of physical examination tests used to diagnose CGH. A systematic review following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines was performed in four electronic databases (MEDLINE, Web of Science, Embase and Scopus). Full-text reports concerning physical tests for the diagnosis of CGH that reported clinimetric properties were included and screened for methodological quality. Quality Appraisal for Reliability Studies (QAREL) and Quality Assessment of Studies of Diagnostic Accuracy (QUADAS-2) scores were completed to assess article quality. Eight articles were retrieved for quality assessment and data extraction. Studies investigating diagnostic reliability of physical examination tests for CGH scored more poorly on methodological quality (higher risk of bias) than those investigating diagnostic accuracy. There is sufficient evidence showing high levels of reliability and diagnostic accuracy of the selected physical examination tests for the diagnosis of CGH. The cervical flexion-rotation test (CFRT) exhibited both the highest reliability and the strongest diagnostic accuracy for the diagnosis of CGH. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Factors affecting the clinical use of non-invasive prenatal testing: a mixed methods systematic review.

    Science.gov (United States)

    Skirton, Heather; Patch, Christine

    2013-06-01

    Non-invasive prenatal testing has been in clinical use for a decade; however, there is evidence that this technology will be more widely applied within the next few years. Guidance is therefore required to ensure that the procedure is offered in a way that is evidence based and ethically and clinically acceptable. We conducted a systematic review of the current relevant literature to ascertain the factors that should be considered when offering non-invasive prenatal testing in a clinical setting. We undertook a systematic search of relevant databases, journals and reference lists, and from an initial list of 298 potential papers, identified 11 that were directly relevant to the study. Original data were extracted and presented in a table, and the content of all papers was analysed and presented in narrative form. Four main themes emerged: perceived attributes of the test, regulation and ethical issues, non-invasive prenatal testing in practice and economic considerations. However, there was a basic difference in the approach of actual or potential service users, who were very positive about the benefits of the technology, compared with other research participants, who were concerned with the potential moral and ethical outcomes of using this testing method. Recommendations for the appropriate use of non-invasive prenatal testing are made.

  8. Stem cells in animal asthma models: a systematic review.

    Science.gov (United States)

    Srour, Nadim; Thébaud, Bernard

    2014-12-01

    Asthma control frequently falls short of the goals set in international guidelines. Treatment options for patients with poorly controlled asthma despite inhaled corticosteroids and long-acting β-agonists are limited, and new therapeutic options are needed. Stem cell therapy is promising for a variety of disorders but there has been no human clinical trial of stem cell therapy for asthma. We aimed to systematically review the literature regarding the potential benefits of stem cell therapy in animal models of asthma to determine whether a human trial is warranted. The MEDLINE and Embase databases were searched for original studies of stem cell therapy in animal asthma models. Nineteen studies were selected. They were found to be heterogeneous in their design. Mesenchymal stromal cells were used before sensitization with an allergen, before challenge with the allergen and after challenge, most frequently with ovalbumin, and mainly in BALB/c mice. Stem cell therapy resulted in a reduction of bronchoalveolar lavage fluid inflammation and eosinophilia as well as Th2 cytokines such as interleukin-4 and interleukin-5. Improvement in histopathology such as peribronchial and perivascular inflammation, epithelial thickness, goblet cell hyperplasia and smooth muscle layer thickening was universal. Several studies showed a reduction in airway hyper-responsiveness. Stem cell therapy decreases eosinophilic and Th2 inflammation and is effective in several phases of the allergic response in animal asthma models. Further study is warranted, up to human clinical trials. Copyright © 2014 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  9. Physical examination tests for the diagnosis of posterior cruciate ligament rupture: a systematic review.

    Science.gov (United States)

    Kopkow, Christian; Freiberg, Alice; Kirschner, Stephan; Seidler, Andreas; Schmitt, Jochen

    2013-11-01

    Systematic literature review. To summarize and evaluate research on the accuracy of physical examination tests for diagnosis of posterior cruciate ligament (PCL) tear. Rupture of the PCL is a severe knee injury that can lead to delayed rehabilitation, instability, or chronic knee pathologies. To our knowledge, there is currently no systematic review of studies on the diagnostic accuracy of clinical examination tests to evaluate the integrity of the PCL. A comprehensive systematic literature search was conducted in MEDLINE from 1946, Embase from 1974, and the Allied and Complementary Medicine Database from 1985 until April 30, 2012. Studies were considered eligible if they compared the results of physical examination tests performed in the context of a PCL physical examination to those of a reference standard (arthroscopy, arthrotomy, magnetic resonance imaging). Methodological quality assessment was performed by 2 independent reviewers using the revised version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. The search strategy revealed 1307 articles, of which 11 met the inclusion criteria for this review. In these studies, 11 different physical examination tests were identified. Due to differences in study types, patient populations, and methodological quality, meta-analysis was not indicated. Presently, most physical examination tests have not been evaluated sufficiently to allow confidence in their ability to either confirm or rule out a PCL tear. The diagnostic accuracy of physical examination tests to assess the integrity of the PCL is largely unknown. There is a strong need for further research in this area. Level of evidence: diagnosis, level 3a.

  10. Implementing HIV Testing in Substance Use Treatment Programs: A Systematic Review.

    Science.gov (United States)

    Simeone, Claire A; Seal, Stella M; Savage, Christine

    People who use drugs are at increased risk for HIV acquisition, poor engagement in health care, and late screening for HIV with advanced HIV at diagnosis and increased HIV-related morbidity, mortality, and health care costs. This systematic review evaluates current evidence about the effectiveness and feasibility of implementing HIV testing in U.S. substance use treatment programs. The literature search identified 535 articles. Full text review was limited to articles that explicitly addressed strategies to implement HIV testing in substance use programs: 17 met criteria and were included in the review; nine used quantitative, qualitative, or mixed-method designs to describe or quantify HIV testing rates, acceptance by clients and staff, and cost-effectiveness; eight organization surveys described barriers and facilitators to testing implementation. The evidence supported the effectiveness and feasibility of rapid, routine, and streamlined HIV testing in substance use treatment programs. Primary challenges included organizational support and sustainable funding.

  11. Evaluating the psychological effects of genetic testing in symptomatic patients: a systematic review.

    Science.gov (United States)

    Vansenne, Fleur; Bossuyt, Patrick M M; de Borgie, Corianne A J M

    2009-10-01

    Most research on the effects of genetic testing is performed in individuals at increased risk for a specific disease (presymptomatic subjects), not in patients already affected by disease. Whether results of these studies in presymptomatic subjects can be applied to patients is unclear. We performed a systematic review to evaluate the effects of genetic testing in patients and to describe the methodological instruments used. In total, 2611 articles were retrieved and 16 studies were included. Studies showed great variety in designs, methods, and patient outcomes. In total, 2868 participants were enrolled, of whom 62% were patients. Patients appeared to have a lower perceived general health and higher levels of anxiety and depression than presymptomatic subjects before genetic testing. In the long term no psychological impairment was shown. We conclude that patients differ from presymptomatic subjects and may be more vulnerable to negative effects of genetic testing. Conclusions from earlier research on presymptomatic genetic testing cannot be generalized to patients, and more standardized research is needed.

  12. Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Niko Speybroeck

    2013-11-01

    Background: The emergence and evolution of socioeconomic inequalities in health involve multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models have been used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of simulation models for studying health inequalities and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding of, and debate about, existing and new frameworks for socioeconomic inequalities in health.
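    A minimal sketch of the kind of agent-based model the abstract describes can make the mechanism concrete. The paper's actual model, parameters, and peer-group structure are not specified in the abstract; everything below (class names, risk parameters, the global peer group) is illustrative only.

```python
import random

random.seed(42)

class Agent:
    def __init__(self, ses):
        self.ses = ses              # socioeconomic status: 0 = low, 1 = high
        self.abuses_alcohol = False

def step(agents, base_risk=0.02, ses_penalty=0.03, peer_effect=0.05):
    """One time step: an agent's risk of starting alcohol abuse rises with
    low SES (direct effect) and with the prevalence of abuse among peers
    (here simplified to the whole population: an indirect, reciprocal effect)."""
    prevalence = sum(a.abuses_alcohol for a in agents) / len(agents)
    for a in agents:
        risk = base_risk + ses_penalty * (1 - a.ses) + peer_effect * prevalence
        if not a.abuses_alcohol and random.random() < risk:
            a.abuses_alcohol = True

agents = [Agent(ses=i % 2) for i in range(1000)]  # half low, half high SES
for _ in range(20):
    step(agents)

low = sum(a.abuses_alcohol for a in agents if a.ses == 0)
high = sum(a.abuses_alcohol for a in agents if a.ses == 1)
print(low > high)  # an SES gradient in abuse emerges from simple local rules
```

Even this toy version illustrates the point made in the Results: a population-level inequality emerges from the interaction of an individual-level determinant (SES) with a group-level feedback (peer prevalence), neither of which encodes the gradient directly.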

  13. A systematic review of usability test metrics for mobile video streaming apps

    Science.gov (United States)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.

    2016-08-01

    This paper presents the results of a systematic review of usability test metrics for mobile video streaming apps. In the study, 238 studies were found, but only 51 relevant papers were eventually selected for the review. The study reveals that time taken for video streaming and video quality were the two most popular metrics used in usability tests for mobile video streaming apps. In addition, most of the studies concentrated on the usability of mobile TV, as users are switching from traditional TV to mobile TV.

  14. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van

    2007-01-01

    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension kinematics test.

  15. Propfan test assessment testbed aircraft flutter model test report

    Science.gov (United States)

    Jenness, C. M. J.

    1987-01-01

    The PropFan Test Assessment (PTA) program includes flight tests of a propfan power plant mounted on the left wing of a modified Gulfstream II testbed aircraft. A static balance boom is mounted on the right wing tip for lateral balance. Flutter analyses indicate that these installations reduce the wing flutter stabilizing speed and that torsional stiffening and the installation of a flutter stabilizing tip boom are required on the left wing for adequate flutter safety margins. Wind tunnel tests of a 1/9th scale high speed flutter model of the testbed aircraft were conducted. The test program included the design, fabrication, and testing of the flutter model and the correlation of the flutter test data with analysis results. Excellent correlations with the test data were achieved in posttest flutter analysis using actual model properties. It was concluded that the flutter analysis method used was capable of accurate flutter predictions for both the (symmetric) twin propfan configuration and the (unsymmetric) single propfan configuration. The flutter analysis also revealed that the differences between the tested model configurations and the current aircraft design caused the (scaled) model flutter speed to be significantly higher than that of the aircraft, at least for the single propfan configuration without a flutter boom. Verification of the aircraft final design should, therefore, be based on flutter predictions made with the test validated analysis methods.

  16. A test of systematic coarse-graining of molecular dynamics simulations: thermodynamic properties.

    Science.gov (United States)

    Fu, Chia-Chun; Kulkarni, Pandurang M; Shell, M Scott; Leal, L Gary

    2012-10-28

    Coarse-graining (CG) techniques have recently attracted great interest for providing descriptions at a mesoscopic level of resolution that preserve fluid thermodynamic and transport behaviors with a reduced number of degrees of freedom and hence less computational effort. One fundamental question arises: how well and to what extent can a "bottom-up" developed mesoscale model recover the physical properties of a molecular scale system? To answer this question, we explore systematically the properties of a CG model that is developed to represent an intermediate mesoscale model between the atomistic and continuum scales. This CG model aims to reduce the computational cost relative to a full atomistic simulation, and we assess to what extent it is possible to preserve both the thermodynamic and transport properties of an underlying reference all-atom Lennard-Jones (LJ) system. In this paper, only the thermodynamic properties are considered in detail. The transport properties will be examined in subsequent work. To coarse-grain, we first use the iterative Boltzmann inversion (IBI) to determine a CG potential for a (1-φ)N mesoscale particle system, where φ is the degree of coarse-graining, so as to reproduce the radial distribution function (RDF) of an N atomic particle system. Even though the uniqueness theorem guarantees a one to one relationship between the RDF and an effective pairwise potential, we find that RDFs are insensitive to the long-range part of the IBI-determined potentials, which provides some significant flexibility in further matching other properties. We then propose a reformulation of IBI as a robust minimization procedure that enables simultaneous matching of the RDF and the fluid pressure. We find that this new method mainly changes the attractive tail region of the CG potentials, and it improves the isothermal compressibility relative to pure IBI. We also find that there are optimal interaction cutoff lengths for the CG system, as a function of
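    The iterative Boltzmann inversion step described above can be sketched numerically. The standard IBI update is V_{i+1}(r) = V_i(r) + kT ln(g_i(r)/g_target(r)); the damping factor, reduced units, and synthetic RDF below are assumptions for illustration, standing in for RDFs that would normally come from the reference all-atom and CG simulations.

```python
import numpy as np

KBT = 1.0  # thermal energy in reduced LJ units (assumption)

def ibi_update(V, g_current, g_target, damping=0.2, eps=1e-12):
    """One iterative Boltzmann inversion step:
    V_{i+1}(r) = V_i(r) + damping * kT * ln(g_i(r) / g_target(r)).
    A damping factor < 1 is common practice to stabilize convergence;
    eps guards against log(0) where the RDF vanishes."""
    correction = KBT * np.log((g_current + eps) / (g_target + eps))
    return V + damping * correction

# Toy illustration: start from the potential of mean force of a synthetic RDF
r = np.linspace(0.9, 3.0, 200)
g_target = 1.0 + 0.3 * np.exp(-((r - 1.1) ** 2) / 0.02)  # synthetic RDF
V0 = -KBT * np.log(g_target)                              # PMF initial guess

# Where the "simulated" RDF already matches the target, the update is zero
V1 = ibi_update(V0, g_target, g_target)
print(np.allclose(V1, V0))  # True
```

In a real workflow each update is followed by a CG simulation with the new potential to measure g_i(r); the abstract's reformulated IBI additionally constrains the fluid pressure during this minimization, which pure RDF matching leaves underdetermined.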

  17. Testing the effectiveness of simplified search strategies for updating systematic reviews.

    Science.gov (United States)

    Rice, Maureen; Ali, Muhammad Usman; Fitzpatrick-Lewis, Donna; Kenny, Meghan; Raina, Parminder; Sherifali, Diana

    2017-08-01

    The objective of the study was to test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews. We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three relevant performance measures were estimated: sensitivity, precision, and number needed to read (NNR). The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened, resulting in final inclusion of 163 randomized controlled trials. Compared with the reference searches, the SSS resulted in 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. During analysis, we found that the SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than for the reference searches. For broader topics, the sensitivity of the SSS was 80% while precision and NNR were 5.4 times better than for the reference searches. The SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic review, particularly considering budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable when updating a scoping or rapid review. Copyright © 2017 Elsevier Inc. All rights reserved.
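    The three performance measures can be sketched with their standard information-retrieval definitions (the abstract does not spell these out, so the definitions and the toy numbers below are assumptions): sensitivity is the fraction of relevant records a search retrieves, precision is the fraction of retrieved records that are relevant, and NNR = 1/precision is the number of records screened per relevant hit.

```python
def search_performance(retrieved, relevant):
    """Standard search-filter metrics (definitions assumed, not stated in
    the abstract):
      sensitivity = relevant records retrieved / all relevant records
      precision   = relevant records retrieved / all records retrieved
      NNR         = 1 / precision (records screened per relevant hit)"""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    sensitivity = len(hits) / len(relevant)
    precision = len(hits) / len(retrieved)
    return {"sensitivity": sensitivity,
            "precision": precision,
            "nnr": 1.0 / precision if precision else float("inf")}

# Hypothetical update search: 10 eligible trials exist; a simplified
# strategy retrieves 50 records containing 8 of them.
relevant = range(10)
retrieved = list(range(8)) + list(range(100, 142))  # 8 hits + 42 misses
m = search_performance(retrieved, relevant)
print(round(m["sensitivity"], 2), round(m["nnr"], 2))  # 0.8 6.25
```

This makes the trade-off in the abstract explicit: a simplified strategy can cut NNR severalfold (fewer records screened per included trial) at the cost of missing a fraction of eligible studies.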

  18. Patch-test results in children and adolescents: systematic review of a 15-year period.

    Science.gov (United States)

    Rodrigues, Dulcilea Ferraz; Goulart, Eugênio Marcos Andrade

    2016-01-01

    The number of studies on patch-test results in children and adolescents has gradually increased in recent years, thus stimulating reviews. This paper is a systematic review of a 15-year period devoted to studying the issue. Variations pertaining to the number and age groups of tested children and/or adolescents, the number of subjects with atopy/atopic dermatitis history, the quantity, type and concentrations of the tested substances, the test technique and type of data regarding clinical relevance, must all be considered in evaluating these studies, as they make it harder to formulate conclusions. The most common allergens in children were nickel, thimerosal, cobalt, fragrance, lanolin and neomycin. In adolescents, they were nickel, thimerosal, cobalt, fragrance, potassium dichromate, and Myroxylon pereirae. Knowledge of this matter aids health professionals in planning preventive programs aimed at improving children's quality of life and ensuring that their future prospects are not undermined.

  19. Testing of constitutive models in LAME.

    Energy Technology Data Exchange (ETDEWEB)

    Hammerand, Daniel Carl; Scherzinger, William Mark

    2007-09-01

    Constitutive models for computational solid mechanics codes are provided in LAME--the Library of Advanced Materials for Engineering. These models describe complex material behavior and are used in our finite deformation solid mechanics codes. To ensure the correct implementation of these models, regression tests have been created for constitutive models in LAME. A selection of these tests is documented here. Constitutive models are an important part of any solid mechanics code. If an analysis code is meant to provide accurate results, the constitutive models that describe the material behavior need to be implemented correctly. Ensuring the correct implementation of constitutive models is the goal of a testing procedure that is used with the Library of Advanced Materials for Engineering (LAME) (see [1] and [2]). A test suite for constitutive models can serve three purposes. First, the test problems provide the constitutive model developer a means to test the model implementation. This is an activity that is always done by any responsible constitutive model developer. Retaining the test problem in a repository where the problem can be run periodically is an excellent means of ensuring that the model continues to behave correctly. A second purpose of a test suite for constitutive models is that it gives application code developers confidence that the constitutive models work correctly. This is extremely important since any analyst that uses an application code for an engineering analysis will associate a constitutive model in LAME with the application code, not LAME. Therefore, ensuring the correct implementation of constitutive models is essential for application code teams. A third purpose of a constitutive model test suite is that it provides analysts with example problems that they can look at to understand the behavior of a specific model, since the choice of a constitutive model, and the properties that are used in that model, have an enormous effect on the results of an analysis.

  20. A Systematic Review of Point of Care Testing for Chlamydia trachomatis, Neisseria gonorrhoeae, and Trichomonas vaginalis

    Science.gov (United States)

    Herbst de Cortina, Sasha; Bristow, Claire C.; Joseph Davey, Dvora; Klausner, Jeffrey D.

    2016-01-01

    Objectives. Systematic review of point of care (POC) diagnostic tests for sexually transmitted infections: Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Trichomonas vaginalis (TV). Methods. Literature search on PubMed for articles from January 2010 to August 2015, including original research in English on POC diagnostics for sexually transmitted CT, NG, and/or TV. Results. We identified 33 publications with original research on POC diagnostics for CT, NG, and/or TV. Thirteen articles evaluated test performance, yielding at least one test for each infection with sensitivity and specificity ≥90%. Each infection also had currently available tests with sensitivities <60%. Three articles analyzed cost effectiveness, and five publications discussed acceptability and feasibility. POC testing was acceptable to both providers and patients and was also demonstrated to be cost effective. Fourteen proof of concept articles introduced new tests. Conclusions. Highly sensitive and specific POC tests are available for CT, NG, and TV, but improvement is possible. Future research should focus on acceptability, feasibility, and cost of POC testing. While pregnant women specifically have not been studied, the results available in nonpregnant populations are encouraging for the ability to test and treat women in antenatal care to prevent adverse pregnancy and neonatal outcomes. PMID:27313440

  1. The psychological impact of predictive genetic testing for Huntington's disease: a systematic review of the literature.

    Science.gov (United States)

    Crozier, S; Robertson, N; Dale, M

    2015-02-01

    Huntington's disease (HD) is a neurodegenerative genetic condition for which a predictive genetic test by mutation analysis has been available since 1993. However, whilst revealing the future presence of the disease, testing may have an adverse psychological impact given that the disease is progressive, incurable and ultimately fatal. This review seeks to systematically explore the psychological impact of genetic testing for individuals undergoing pre-symptomatic mutation analysis. Three databases (Medline, PsycInfo and Scopus) were interrogated for studies utilising standardised measures to assess psychological impact following predictive genetic testing for HD. From 100 papers initially identified, eight articles were eligible for inclusion. Psychological impact of predictive genetic testing was not found to be associated with test result. No detrimental effect of predictive genetic testing on non-carriers was found, although the process was not found to be psychologically neutral. Fluctuation in levels of distress was found over time for carriers and non-carriers alike. Methodological weaknesses in the published literature were identified, notably neglect of individuals not requesting genetic testing, as well as inadequate support for individuals registering elevated distress and declining post-test follow-up. Further assessment of these vulnerable individuals is warranted to establish the extent and type of future psychological support.

  2. GEOCHEMICAL TESTING AND MODEL DEVELOPMENT - RESIDUAL TANK WASTE TEST PLAN

    Energy Technology Data Exchange (ETDEWEB)

    CANTRELL KJ; CONNELLY MP

    2010-03-09

    This Test Plan describes the testing and chemical analyses release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  3. Partial continuation model and its application in mitigating systematic errors of double-differenced GPS measurements

    Institute of Scientific and Technical Information of China (English)

    GUO Jianfeng; OU Jikun; REN Chao

    2005-01-01

    Based on the so-called partial continuation model with exact finite measurements, a new stochastic assessment procedure is introduced. For every satellite pair, the temporal correlation coefficient is estimated using the original double-differenced (DD) GPS measurements. The Durbin-Watson test is then applied to test a specific hypothesis on the temporal correlation coefficient. If the test is significant at the chosen significance level, a data transformation is required; the transformed measurements are free of time correlation. For purposes of illustration, two static GPS baseline data sets are analyzed in detail. The experimental results demonstrate that the proposed procedure can effectively mitigate the impact of systematic errors on DD GPS measurements.
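
    The decision rule above can be illustrated with the Durbin-Watson statistic itself. This is a generic sketch on synthetic residual series, not the paper's GPS data or its decorrelating transformation.

```python
import numpy as np

def durbin_watson(e):
    """Durbin-Watson statistic: ~2 for uncorrelated residuals,
    well below 2 for positive first-order serial correlation."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(1)
white = rng.normal(size=500)        # uncorrelated residual series
ar1 = np.empty(500)                 # AR(1) series with rho = 0.8
ar1[0] = white[0]
for t in range(1, 500):
    ar1[t] = 0.8 * ar1[t - 1] + white[t]

print(round(durbin_watson(white), 2))   # near 2: no transformation needed
print(round(durbin_watson(ar1), 2))     # far below 2: decorrelate first
```

    A DD series whose DW statistic falls in the significant region would, per the paper's procedure, be transformed to remove the temporal correlation before further processing.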

  4. The status of computerized cognitive testing in aging: A systematic review

    Science.gov (United States)

    Wild, Katherine; Howieson, Diane; Webbe, Frank; Seelye, Adriana; Kaye, Jeffrey

    2008-01-01

    Background Early detection of cognitive decline in the elderly has become of heightened importance in parallel with the recent advances in therapeutics. Computerized assessment may be uniquely suited to early detection of changes in cognition in the elderly. We present here a systematic review of the status of computer-based cognitive testing focusing on detection of cognitive decline in the aging population. Methods All studies purporting to assess or detect age-related changes in cognition or early dementia/mild cognitive impairment (MCI) by means of computerized testing were included. Each test battery was rated on availability of normative data, level of evidence for test validity and reliability, comprehensiveness, and usability. All published studies relevant to a particular computerized test were read by a minimum of two reviewers, who completed rating forms containing the above-mentioned criteria. Results Of the 18 test batteries identified from the initial search, eleven were appropriate to cognitive testing in the elderly and were subjected to systematic review. Of those 11, five were either developed specifically for application with the elderly or have been used extensively with that population. Even within the computerized testing genre, great variability existed in manner of administration, ranging from fully examiner administered to fully self-administered. All tests had at least minimal reliability and validity data, commonly reported in peer-reviewed articles. However, level of rigor of validity testing varied widely. Conclusion All test batteries exhibited some of the strengths of computerized cognitive testing: standardization of administration and stimulus presentation, accurate measures of response latencies, automated comparison in real-time with an individual’s prior performance as well as with age-related norms, and efficiencies of staffing and cost. Some, such as the MCIS, adapted complicated scoring algorithms to enhance the information

  5. The Neuman Systems Model Institute: testing middle-range theories.

    Science.gov (United States)

    Gigliotti, Eileen

    2003-07-01

    The credibility of the Neuman systems model can only be established through the generation and testing of Neuman systems model-derived middle-range theories. However, due to the number and complexity of Neuman systems model concepts/concept interrelations and the diversity of middle-range theory concepts linked to these Neuman systems model concepts by researchers, no explicit middle-range theories have yet been derived from the Neuman systems model. This article describes the development of an organized program for the systematic study of the Neuman systems model. Preliminary work, already accomplished, is detailed, and a tentative plan for the completion of further preliminary work as well as beginning the actual research conduction phase is proposed.

  6. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following...... are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...

  7. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.

  8. Bayesian analysis of an anisotropic universe model: systematics and polarization

    CERN Document Server

    Groeneboom, Nicolaas E; Wehus, Ingunn Kathrine; Eriksen, Hans Kristian

    2009-01-01

    We revisit the anisotropic universe model previously developed by Ackerman, Carroll and Wise (ACW), and generalize both the theoretical and computational framework to include polarization and various forms of systematic effects. We apply our new tools to simulated WMAP data in order to understand the potential impact of asymmetric beams, noise mis-estimation and potential Zodiacal light emission. We find that none of these has any significant impact on the results. We next show that the previously reported ACW signal is also present in the 1-year WMAP temperature sky map presented by Liu & Li, where data cuts are more aggressive. Finally, we reanalyze the 5-year WMAP data taking into account a previously neglected (-i)^{l-l'}-term in the signal covariance matrix. We still find a strong detection of a preferred direction in the temperature map. Including multipoles up to l=400, the anisotropy amplitude for the W-band is found to be g = 0.29 +- 0.031, nonzero at 9 sigma. However, the corresponding preferred direc...

  9. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.

  10. Systematic flood modelling to support flood-proof urban design

    Science.gov (United States)

    Bruwier, Martin; Mustafa, Ahmed; Aliaga, Daniel; Archambeau, Pierre; Erpicum, Sébastien; Nishida, Gen; Zhang, Xiaowei; Pirotton, Michel; Teller, Jacques; Dewals, Benjamin

    2017-04-01

    Urban flood risk is influenced by many factors such as hydro-meteorological drivers, existing drainage systems as well as the vulnerability of population and assets. The urban fabric itself also has a complex influence on inundation flows. In this research, we performed a systematic analysis of how various characteristics of urban patterns control inundation flow within the urban area and upstream of it. An urban generator tool was used to generate over 2,250 synthetic urban networks of 1 km2. This tool is based on the procedural modelling presented by Parish and Müller (2001), which was adapted to generate a broader variety of urban networks. Nine input parameters were used to control the urban geometry. Three of them define the average length, orientation and curvature of the streets. Two orthogonal major roads, whose width constitutes the fourth input parameter, act as constraints on the generated urban network. The width of secondary streets is given by the fifth input parameter. Each parcel generated by the street network, based on a parcel mean area parameter, can be either a park or a building parcel depending on the park ratio parameter. Three setback parameters constrain the exact location of the building within a building parcel. For each synthetic urban network, detailed two-dimensional inundation maps were computed with a hydraulic model. The computational efficiency was enhanced by means of a porosity model. This enables the use of a coarser computational grid, while preserving information on the detailed geometry of the urban network (Sanders et al. 2008). These porosity parameters reflect not only the void fraction, which influences the storage capacity of the urban area, but also the influence of buildings on flow conveyance (dynamic effects). A sensitivity analysis was performed based on the inundation maps to highlight the respective impact of each input parameter characterizing the urban networks. The findings of the study pinpoint

  11. Systematic Construction of Bell-Like Inequalities and Proposal of a New Type of Test

    Science.gov (United States)

    Tanimura, Shogo

    2013-09-01

    Although it is usually argued that the violation of the Bell inequality is a manifestation of the nonlocality of the quantum state, we show that it is a manifestation of the noncommutativity of quantum observables that are defined at the same location. From this point of view we devise a method for the systematic construction of generalized Bell inequalities and derive a new inequality that belongs to a different type from the traditional Bell inequality. The new inequality provides a more severe and fairer test of quantum mechanics.
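
    For context, the traditional CHSH form of the Bell inequality (not the paper's new inequality) can be checked numerically: for the spin singlet the quantum correlation at analyzer angles a, b is E(a, b) = -cos(a - b), while any local hidden-variable model bounds |S| <= 2.

```python
import numpy as np

def E(a, b):
    # Singlet-state correlation of spin measurements at angles a and b
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2          # Alice's two analyzer settings
b, b2 = np.pi / 4, -np.pi / 4   # Bob's two analyzer settings

# CHSH combination; local hidden-variable models satisfy |S| <= 2
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(round(abs(S), 4))  # 2.8284 = 2*sqrt(2), the Tsirelson bound
```

    The quantum value 2*sqrt(2) exceeds the classical bound of 2, which is the violation the generalized constructions in this record build upon.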

  12. Used Fuel Testing Transportation Model

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Steven B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Best, Ralph E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Maheras, Steven J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jensen, Philip J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); England, Jeffery L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); LeDuc, Dan [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2014-09-25

    This report identifies shipping packages/casks that might be used by the Used Nuclear Fuel Disposition Campaign Program (UFDC) to ship fuel rods and pieces of fuel rods taken from high-burnup used nuclear fuel (UNF) assemblies to and between research facilities for purposes of evaluation and testing. Also identified are the actions that would need to be taken, if any, to obtain U.S. Nuclear Regulatory (NRC) or other regulatory authority approval to use each of the packages and/or shipping casks for this purpose.

  13. Used Fuel Testing Transportation Model

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Steven B.; Best, Ralph E.; Maheras, Steven J.; Jensen, Philip J.; England, Jeffery L.; LeDuc, Dan

    2014-09-24

    This report identifies shipping packages/casks that might be used by the Used Nuclear Fuel Disposition Campaign Program (UFDC) to ship fuel rods and pieces of fuel rods taken from high-burnup used nuclear fuel (UNF) assemblies to and between research facilities for purposes of evaluation and testing. Also identified are the actions that would need to be taken, if any, to obtain U.S. Nuclear Regulatory (NRC) or other regulatory authority approval to use each of the packages and/or shipping casks for this purpose.

  14. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  15. Colour Reconnection - Models and Tests

    CERN Document Server

    Christiansen, Jesper R

    2015-01-01

    Recent progress on colour reconnection within the Pythia framework is presented. A new model is introduced, based on the SU(3) structure of QCD and a minimization of the potential string energy. The inclusion of the epsilon structure of SU(3) gives a new baryon production mechanism and makes it possible simultaneously to describe hyperon production at both $e^+e^-$ and pp colliders. Finally, predictions for $e^+e^-$ colliders, both past and potential future ones, are presented.

  16. Model Based Testing for Agent Systems

    Science.gov (United States)

    Zhang, Zhiyong; Thangarajah, John; Padgham, Lin

    Although agent technology is gaining world wide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and which require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent based systems. The testing framework is a model based approach using the design models of the Prometheus agent development methodology. In this paper we focus on model based unit testing and identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, present a brief overview of the unit testing process and an example. Although we use the design artefacts from Prometheus the approach is suitable for any plan and event based agent system.

  17. TESTING MONETARY EXCHANGE RATE MODELS WITH PANEL COINTEGRATION TESTS

    Directory of Open Access Journals (Sweden)

    Szabo Andrea

    2015-07-01

    Full Text Available The monetary exchange rate models explain the long run behaviour of the nominal exchange rate. Their central assertion is that there is a long run equilibrium relationship between the nominal exchange rate and monetary macro-fundamentals. Although these models are essential tools of international macroeconomics, their empirical validity is ambiguous. Previously, time series testing was prevalent in the literature, but it did not bring convincing results. The power of the unit root and cointegration tests is too low to reject the null hypothesis of no cointegration between the variables. This power can be enhanced by arranging our data in a panel data set, which allows us to analyse several time series simultaneously and enables us to increase the number of observations. We conducted a weak empirical test of the monetary exchange rate models by testing the existence of cointegration between the variables in three panels. We investigated 6, 10 and 15 OECD countries during the following periods: 1976Q1-2011Q4, 1985Q1-2011Q4 and 1996Q1-2011Q4. We tested the reduced form of the monetary exchange rate models in three specifications: two restricted models and an unrestricted model. Since cointegration can only be interpreted among non-stationary processes, we investigate the order of integration of our variables with the IPS, Fisher-ADF and Fisher-PP panel unit root tests and the Hadri panel stationarity test. All the variables can be unit root processes; therefore we analyze the cointegration with the Pedroni and Kao panel cointegration tests. The restricted models performed better than the unrestricted one, and we obtained the best results with the 1985Q1-2011Q4 panel. The Kao test rejects the null hypothesis of no cointegration between the variables in all the specifications and all the panels, but the Pedroni test does not show such a positive picture. Hence we found only moderate support for the monetary exchange rate models.
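
    The two-step logic behind such cointegration tests can be sketched for a single (non-panel) pair of series; panel variants such as the Pedroni and Kao tests pool regressions of this kind across countries. The data and parameters below are synthetic illustrations, not the OECD panels used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
# A shared stochastic trend, standing in for a monetary fundamental
trend = np.cumsum(rng.normal(size=T))
x = trend + rng.normal(size=T)          # observed fundamental
y = 1.5 * trend + rng.normal(size=T)    # nominal exchange rate

# Step 1: estimate the long-run relation y = a + b*x by OLS
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)

# Step 2: Dickey-Fuller-style check on the residuals:
# delta e_t = gamma * e_{t-1} + u_t; gamma clearly below zero suggests
# stationary residuals, i.e. cointegration between y and x.
de = np.diff(resid)
lag = resid[:-1]
gamma = (lag @ de) / (lag @ lag)
print(round(b, 2), round(gamma, 2))
```

    With cointegrated series the residual regression pulls gamma well below zero; for two independent random walks gamma would hover near zero and the null of no cointegration could not be rejected.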

  18. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
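
    The core of the linear logistic test model can be stated compactly: item difficulties are linear combinations of basic-operation parameters, beta = Q @ eta, substituted into the Rasch response function. The Q matrix and eta values below are hypothetical, and this sketch does not use the eRm package itself.

```python
import numpy as np

# Hypothetical LLTM design: 3 items, 2 cognitive operations.
# Q[i, j] = number of times operation j is required by item i.
Q = np.array([[1, 0],
              [1, 1],
              [2, 1]], dtype=float)
eta = np.array([0.5, 1.2])   # basic-parameter (operation) difficulties
beta = Q @ eta               # item difficulties implied by the constraints

def p_correct(theta, beta):
    """Rasch probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

print(np.round(beta, 2))                 # [0.5 1.7 2.2]
print(np.round(p_correct(1.0, beta), 3))
```

    Estimating eta rather than each beta separately is what gives the LLTM its parsimony and its substantive interpretation of item difficulty.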

  19. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    Directory of Open Access Journals (Sweden)

    Tara L Crewe

    Full Text Available The use of counts of unmarked migrating animals to monitor long term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced the accuracy and precision of population trends by simulating migration count data with a known constant rate of population change and by allowing daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. 
Rather, to improve inference drawn about long term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling to reduce the probability that individuals will
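
    The compounding effect described above follows from simple arithmetic. Under a geometric stopover model with daily "survival" probability phi (a hypothetical simplification, not the paper's full simulation), each arriving bird contributes 1/(1 - phi) expected detection-days:

```python
def expected_daily_count(n_arriving, phi):
    """Expected daily count when each arrival stays another day with
    probability phi: mean stopover length is 1 / (1 - phi) days, so a
    stable population inflates counts as phi grows."""
    return n_arriving / (1.0 - phi)

n = 100  # constant number of new arrivals per day (no real trend)
print(round(expected_daily_count(n, 0.5), 1))  # 200.0 detections/day
print(round(expected_daily_count(n, 0.8), 1))  # 500.0 detections/day
```

    A linear increase of phi over time therefore mimics a population increase even when arrivals are constant, which is exactly the false-trend bias the authors simulate.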

  20. Systematic review and retrospective validation of prediction models for weight loss after bariatric surgery.

    Science.gov (United States)

    Sharples, Alistair J; Mahawar, Kamal; Cheruvu, Chandra V N

    2017-08-12

    Patients often have less than realistic expectations of the weight loss they are likely to achieve after bariatric surgery. It would be useful to have a well-validated prediction tool that could give patients a realistic estimate of their expected weight loss. To perform a systematic review of the literature to identify existing prediction models and attempt to validate these models. University hospital, United Kingdom. A systematic review was performed. All English language studies were included if they used data to create a prediction model for postoperative weight loss after bariatric surgery. These models were then tested on patients undergoing bariatric surgery between January 1, 2013 and December 31, 2014 within our unit. An initial literature search produced 446 results, of which only 4 were included in the final review. Our study population included 317 patients. Mean preoperative body mass index was 46.1 ± 7.1. For 257 (81.1%) patients, 12-month follow-up was available, and mean body mass index and percentage excess weight loss at 12 months was 33.0 ± 6.7 and 66.1% ± 23.7%, respectively. All 4 of the prediction models significantly overestimated the amount of weight loss achieved by patients. The best performing prediction model in our series produced a correlation coefficient (R(2)) of .61 and an area under the curve of .71 on receiver operating curve analysis. All prediction models overestimated weight loss after bariatric surgery in our cohort. There is a need to develop better procedures and patient-specific models for better patient counselling. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  1. Endogenous opioid antagonism in physiological experimental pain models: a systematic review.

    Science.gov (United States)

    Werner, Mads U; Pereira, Manuel P; Andersen, Lars Peter H; Dahl, Jørgen B

    2015-01-01

    Opioid antagonists are pharmacological tools applied as an indirect measure to detect activation of the endogenous opioid system (EOS) in experimental pain models. The objective of this systematic review was to examine the effect of mu-opioid-receptor (MOR) antagonists in placebo-controlled, double-blind studies using 'inhibitory' or 'sensitizing' physiological test paradigms in healthy human subjects. The databases PubMed and Embase were searched according to predefined criteria. Out of a total of 2,142 records, 63 studies (1,477 subjects [male/female ratio = 1.5]) were considered relevant. Twenty-five studies utilized 'inhibitory' test paradigms (ITP) and 38 studies utilized 'sensitizing' test paradigms (STP). The ITP-studies were characterized as conditioning modulation models (22 studies) and repetitive transcranial magnetic stimulation models (rTMS; 3 studies), and the STP-studies as secondary hyperalgesia models (6 studies), 'pain' models (25 studies), summation models (2 studies), nociceptive reflex models (3 studies) and miscellaneous models (2 studies). A consistent reversal of analgesia by a MOR-antagonist was demonstrated in 10 of the 25 ITP-studies, including stress-induced analgesia and rTMS. In the remaining 14 conditioning modulation studies, either absence of effects or ambiguous effects of MOR-antagonists were observed. In the STP-studies, no effect of the opioid blockade could be demonstrated in 5 out of 6 secondary hyperalgesia studies. The direction of MOR-antagonist dependent effects upon pain ratings, threshold assessments and somatosensory evoked potentials (SSEP) did not appear consistent in 28 out of 32 'pain' model studies. In conclusion, only in 2 experimental human pain models, i.e., stress-induced analgesia and rTMS, did administration of a MOR-antagonist demonstrate a consistent effect, presumably mediated by EOS-dependent mechanisms of analgesia and hyperalgesia.

  2. HIV testing and counselling for migrant populations living in high-income countries: a systematic review.

    Science.gov (United States)

    Alvarez-del Arco, Debora; Monge, Susana; Azcoaga, Amaya; Rio, Isabel; Hernando, Victoria; Gonzalez, Cristina; Alejos, Belen; Caro, Ana Maria; Perez-Cachafeiro, Santiago; Ramirez-Rubio, Oriana; Bolumar, Francisco; Noori, Teymur; Del Amo, Julia

    2013-12-01

    The barriers to HIV testing and counselling that migrants encounter can jeopardize proactive HIV testing that relies on the fact that HIV testing must be linked to care. We analyse available evidence on HIV testing and counselling strategies targeting migrants and ethnic minorities in high-income countries. Systematic literature review of the five main databases of articles in English from Europe, North America and Australia between 2005 and 2009. Of 1034 abstracts, 37 articles were selected. Migrants, mainly from HIV-endemic countries, are at risk of HIV infection and its consequences. The HIV prevalence among migrants is higher than the general population's, and migrants have higher frequency of delayed HIV diagnosis. For migrants from countries with low HIV prevalence and for ethnic minorities, socio-economic vulnerability puts them at risk of acquiring HIV. Migrants have specific legal and administrative impediments to accessing HIV testing-in some countries, undocumented migrants are not entitled to health care-as well as cultural and linguistic barriers, racism and xenophobia. Migrants and ethnic minorities fear stigma from their communities, yet community acceptance is key for well-being. Migrants and ethnic minorities should be offered HIV testing, but the barriers highlighted in this review may deter programs from achieving the final goal, which is linking migrants and ethnic minorities to HIV clinical care under the public health perspective.

  3. Cost Modeling for SOC Modules Testing

    Directory of Open Access Journals (Sweden)

    Balwinder Singh

    2013-08-01

    Full Text Available The complexity of system designs is increasing rapidly as the number of transistors on integrated circuits (ICs) doubles as per Moore's law. Testing these complex VLSI circuits, in which a whole system is integrated into a single chip called a System on Chip (SOC), is a major challenge, and the cost of testing an SOC increases with its complexity. Cost modeling plays a vital role in reducing test cost and time to market. This paper presents cost modeling for SOC module testing, covering both analog and digital modules. The test cost parameters and equations are taken from previous work. Mathematical relations are developed for SOC test cost modeling, and the resulting equations are implemented in a graphical user interface (GUI) in MATLAB that can be used as a cost estimation tool. A case study calculates the cost of SOC testing with Logic Built-In Self-Test (LBIST) and Memory Built-In Self-Test (MBIST). VLSI test engineers can use such cost estimation tools for test planning.
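
    A minimal sketch of the kind of linear test-cost relation such tools evaluate; the parameter names, the cost form, and the numbers are illustrative assumptions, not the paper's actual equations or its MATLAB GUI.

```python
def soc_test_cost(ate_cost_per_s, test_time_s, n_chips,
                  bist_overhead_per_chip, fixed_dev_cost):
    """Total test cost = one-off test-development cost plus, per chip,
    ATE (tester) time cost and BIST silicon-area overhead."""
    per_chip = ate_cost_per_s * test_time_s + bist_overhead_per_chip
    return fixed_dev_cost + n_chips * per_chip

# Example trade-off: LBIST/MBIST adds area overhead and development
# cost but shortens tester time; at high volume it wins.
no_bist = soc_test_cost(0.05, 4.0, 1_000_000, 0.00, 50_000)
with_bist = soc_test_cost(0.05, 1.0, 1_000_000, 0.02, 80_000)
print(round(no_bist), round(with_bist))  # 250000 150000
```

    Varying the volume shows the crossover point at which built-in self-test becomes cheaper than pure external testing, the sort of question a cost estimation GUI is meant to answer.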

  4. Biglan Model Test Based on Institutional Diversity.

    Science.gov (United States)

    Roskens, Ronald W.; Creswell, John W.

    The Biglan model, a theoretical framework for empirically examining the differences among subject areas, classifies according to three dimensions: adherence to common set of paradigms (hard or soft), application orientation (pure or applied), and emphasis on living systems (life or nonlife). Tests of the model are reviewed, and a further test is…

  5. Graphical Models and Computerized Adaptive Testing.

    Science.gov (United States)

    Mislevy, Robert J.; Almond, Russell G.

    This paper synthesizes ideas from the fields of graphical modeling and education testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and disentangling evidence from complex performances. IRT-CAT can offer…

  6. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE) trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have a sim...

  7. Port Adriano, 2D-Model Tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Andersen, Thomas Lykke; Jensen, Palle Meinert

    This report presents the results of 2D physical model tests (length scale 1:50) carried out in a wave flume at the Dept. of Civil Engineering, Aalborg University (AAU).

  8. Testing the PRISMA-Equity 2012 reporting guideline: the perspectives of systematic review authors.

    Directory of Open Access Journals (Sweden)

    Belinda J Burford

    Full Text Available Reporting guidelines can be used to encourage standardised and comprehensive reporting of health research. In light of the global commitment to health equity, we have previously developed and published a reporting guideline for equity-focused systematic reviews (PRISMA-E 2012). The objectives of this study were to explore the utility of the equity extension items included in PRISMA-E 2012 from a systematic review author perspective, including facilitators and barriers to its use. This will assist in designing dissemination and knowledge translation strategies. We conducted a survey of systematic review authors to expose them to the new items in PRISMA-E 2012, establish the extent to which they had historically addressed those items in their own reviews, and gather feedback on the usefulness of the new items. Data were analysed using Microsoft Excel 2008 and Stata (version 11.2 for Mac). Of 151 respondents completing the survey, 18.5% (95% CI: 12.7% to 25.7%) had not heard of the PRISMA statement before, although 83.4% (95% CI: 77.5% to 89.3%) indicated that they plan to use PRISMA-E 2012 in the future, depending on the focus of their review. Most (68.9%; 95% CI: 60.8% to 76.2%) thought that using PRISMA-E 2012 would lead them to conduct their reviews differently. Important facilitators to using PRISMA-E 2012 identified by respondents were journal endorsement and incorporation of the elements of the guideline into systematic review software. Barriers identified were lack of time, word limits and the availability of equity data in primary research. This study has been the first to 'road-test' the new PRISMA-E 2012 reporting guideline and the findings are encouraging. They confirm the acceptability and potential utility of the guideline to assist review authors in reporting on equity in their reviews. The uptake and impact of PRISMA-E 2012 over time on design, conduct and reporting of primary research and systematic reviews should continue to be

  9. Testing the PRISMA-Equity 2012 reporting guideline: the perspectives of systematic review authors.

    Science.gov (United States)

    Burford, Belinda J; Welch, Vivian; Waters, Elizabeth; Tugwell, Peter; Moher, David; O'Neill, Jennifer; Koehlmoos, Tracey; Petticrew, Mark

    2013-01-01

    Reporting guidelines can be used to encourage standardised and comprehensive reporting of health research. In light of the global commitment to health equity, we have previously developed and published a reporting guideline for equity-focused systematic reviews (PRISMA-E 2012). The objectives of this study were to explore the utility of the equity extension items included in PRISMA-E 2012 from a systematic review author perspective, including facilitators and barriers to its use. This will assist in designing dissemination and knowledge translation strategies. We conducted a survey of systematic review authors to expose them to the new items in PRISMA-E 2012, establish the extent to which they had historically addressed those items in their own reviews, and gather feedback on the usefulness of the new items. Data were analysed using Microsoft Excel 2008 and Stata (version 11.2 for Mac). Of 151 respondents completing the survey, 18.5% (95% CI: 12.7% to 25.7%) had not heard of the PRISMA statement before, although 83.4% (95% CI: 77.5% to 89.3%) indicated that they plan to use PRISMA-E 2012 in the future, depending on the focus of their review. Most (68.9%; 95% CI: 60.8% to 76.2%) thought that using PRISMA-E 2012 would lead them to conduct their reviews differently. Important facilitators to using PRISMA-E 2012 identified by respondents were journal endorsement and incorporation of the elements of the guideline into systematic review software. Barriers identified were lack of time, word limits and the availability of equity data in primary research. This study has been the first to 'road-test' the new PRISMA-E 2012 reporting guideline and the findings are encouraging. They confirm the acceptability and potential utility of the guideline to assist review authors in reporting on equity in their reviews. The uptake and impact of PRISMA-E 2012 over time on design, conduct and reporting of primary research and systematic reviews should continue to be examined.

  10. Testing Models for Structure Formation

    CERN Document Server

    Kaiser, N

    1993-01-01

    I review a number of tests of theories for structure formation. Large-scale flows and IRAS galaxies indicate a high density parameter $\Omega \simeq 1$, in accord with inflationary predictions, but it is not clear how this meshes with the uniformly low values obtained from virial analysis on scales $\sim 1$ Mpc. Gravitational distortion of faint galaxies behind clusters allows one to construct maps of the mass surface density, and this should shed some light on the large vs small-scale $\Omega$ discrepancy. Power spectrum analysis reveals too red a spectrum (compared to standard CDM) on scales $\lambda \sim 10$-$100\,h^{-1}$ Mpc, but the gaussian fluctuation hypothesis appears to be in good shape. These results suggest that the problem for CDM lies not in the very early universe --- the inflationary predictions of $\Omega = 1$ and gaussianity both seem to be OK; furthermore, the COBE result severely restricts modifications such as tilting the primordial spectrum --- but in the assumed matter content. The power s...

  11. The Couplex test cases: models and lessons

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeat, A. [Lyon-1 Univ., MCS, 69 - Villeurbanne (France); Kern, M. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Schumacher, S.; Talandier, J. [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France)

    2003-07-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  12. Impact of presymptomatic genetic testing on young adults: a systematic review.

    Science.gov (United States)

    Godino, Lea; Turchetti, Daniela; Jackson, Leigh; Hennessy, Catherine; Skirton, Heather

    2016-04-01

    Presymptomatic and predictive genetic testing should involve a considered choice, which is particularly true when testing is undertaken in early adulthood. Young adults are at a key life stage as they may be developing a career, forming partnerships and potentially becoming parents: presymptomatic testing may affect many facets of their future lives. The aim of this integrative systematic review was to assess factors that influence young adults' or adolescents' choices to have a presymptomatic genetic test and the emotional impact of those choices. Peer-reviewed papers published between January 1993 and December 2014 were searched using eight databases. Of 3373 studies identified, 29 were reviewed in full text: 11 met the inclusion criteria. Thematic analysis was used to identify five major themes: period before testing, experience of genetic counselling, parental involvement in decision-making, impact of test result communication, and living with genetic risk. Many participants grew up with little or no information concerning their genetic risk. The experience of genetic counselling was either reported as an opportunity for discussing problems or associated with feelings of disempowerment. Emotional outcomes of disclosure did not directly correlate with test results: some mutation carriers were relieved to know their status, however, the knowledge they may have passed on the mutation to their children was a common concern. Parents appeared to have exerted pressure on their children during the decision-making process about testing and risk reduction surgery. Health professionals should take into account all these issues to effectively assist young adults in making decisions about presymptomatic genetic testing.

  13. Reliability of physical examination tests for the diagnosis of knee disorders: Evidence from a systematic review.

    Science.gov (United States)

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Desmeules, François

    2016-12-01

    Clinicians often rely on physical examination tests to guide them in the diagnostic process of knee disorders. However, reliability of these tests is often overlooked and may influence the consistency of results and overall diagnostic validity. Therefore, the objective of this study was to systematically review evidence on the reliability of physical examination tests for the diagnosis of knee disorders. A structured literature search was conducted in databases up to January 2016. Included studies needed to report reliability measures of at least one physical test for any knee disorder. Methodological quality was evaluated using the QAREL checklist. A qualitative synthesis of the evidence was performed. Thirty-three studies were included with a mean QAREL score of 5.5 ± 0.5. Based on low to moderate quality evidence, the Thessaly test for meniscal injuries reached moderate inter-rater reliability (k = 0.54). Based on moderate to excellent quality evidence, the Lachman test for anterior cruciate ligament injuries reached moderate to excellent inter-rater reliability (k = 0.42 to 0.81). Based on low to moderate quality evidence, the Tibiofemoral Crepitus, Joint Line and Patellofemoral Pain/Tenderness, Bony Enlargement and Joint Pain on Movement tests for knee osteoarthritis reached fair to excellent inter-rater reliability (k = 0.29 to 0.93). Based on low to moderate quality evidence, the Lateral Glide, Lateral Tilt, Lateral Pull and Quality of Movement tests for patellofemoral pain reached moderate to good inter-rater reliability (k = 0.49 to 0.73). Many physical tests appear to reach good inter-rater reliability, but this is based on low-quality and conflicting evidence. High-quality research is required to evaluate the reliability of knee physical examination tests.
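    The kappa values (k) reported in this record can be reproduced from a two-rater agreement table with Cohen's formula; a minimal sketch with hypothetical counts, not the reviewed studies' data:

```python
def cohens_kappa(table):
    """Cohen's kappa for two raters from a square agreement table.

    table[i][j] = number of cases rater A scored category i and
    rater B scored category j.
    """
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    # Expected agreement under chance, from the marginal totals.
    expected = sum(
        sum(table[i]) * sum(row[i] for row in table)
        for i in range(len(table))
    ) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical counts: both raters call a test positive in 20 cases,
# negative in 15, and disagree on the remaining 15.
print(round(cohens_kappa([[20, 5], [10, 15]]), 2))  # → 0.4
```

A kappa of 0.4 sits at the boundary usually labelled "moderate" agreement, matching the interpretive bands used in the review.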

  14. Tests examining skill outcomes in sport: a systematic review of measurement properties and feasibility.

    Science.gov (United States)

    Robertson, Samuel J; Burnett, Angus F; Cochrane, Jodie

    2014-04-01

    A high level of participant skill is influential in determining the outcome of many sports. Thus, tests assessing skill outcomes in sport are commonly used by coaches and researchers to estimate an athlete's ability level, to evaluate the effectiveness of interventions or for the purpose of talent identification. The objective of this systematic review was to examine the methodological quality, measurement properties and feasibility characteristics of sporting skill outcome tests reported in the peer-reviewed literature. A search of both SPORTDiscus and MEDLINE databases was undertaken. Studies that examined tests of sporting skill outcomes were reviewed. Only studies that investigated measurement properties of the test (reliability or validity) were included. A total of 22 studies met the inclusion/exclusion criteria. A customised checklist of assessment criteria, based on previous research, was utilised for the purpose of this review. A range of sports were the subject of the 22 studies included in this review, with considerations relating to methodological quality being generally well addressed by authors. A range of methods and statistical procedures were used by researchers to determine the measurement properties of their skill outcome tests. The majority (95%) of the reviewed studies investigated test-retest reliability, and where relevant, inter- and intra-rater reliability was also determined. Content validity was examined in 68% of the studies, with most tests investigating multiple skill domains relevant to the sport. Only 18% of studies assessed all three reviewed forms of validity (content, construct and criterion), with just 14% investigating the predictive validity of the test. Test responsiveness was reported in only 9% of studies, whilst feasibility received varying levels of attention. In organised sport, further tests may exist which have not been investigated in this review. This could be due to such tests firstly not being published in the peer-reviewed literature.

  15. Systematic review of risk adjustment models of hospital length of stay (LOS).

    Science.gov (United States)

    Lu, Mingshan; Sajobi, Tolulope; Lucyk, Kelsey; Lorenzetti, Diane; Quan, Hude

    2015-04-01

    Policy decisions in health care, such as hospital performance evaluation and performance-based budgeting, require an accurate prediction of hospital length of stay (LOS). This paper provides a systematic review of risk adjustment models for hospital LOS, and focuses primarily on studies that use administrative data. MEDLINE, EMBASE, Cochrane, PubMed, and EconLit were searched for studies that tested the performance of risk adjustment models in predicting hospital LOS. We included studies that tested models developed for the general inpatient population, and excluded those that analyzed risk factors only correlated with LOS, impact analyses, or those that used disease-specific scales and indexes to predict LOS. Our search yielded 3973 abstracts, of which 37 were included. These studies used various disease groupers and severity/morbidity indexes to predict LOS. Few models were developed specifically for explaining hospital LOS; most focused primarily on explaining resource spending and the costs associated with hospital LOS, and applied these models to hospital LOS. We found a large variation in predictive power across different LOS predictive models. The best model performance for most studies fell in the range of 0.30-0.60, approximately. The current risk adjustment methodologies for predicting LOS are still limited in terms of models, predictors, and predictive power. One possible approach to improving the performance of LOS risk adjustment models is to include more disease-specific variables, such as disease-specific or condition-specific measures, and functional measures. For this approach, however, more comprehensive and standardized data are urgently needed. In addition, statistical methods and evaluation tools more appropriate to LOS should be tested and adopted.

  16. Internet-Based Direct-to-Consumer Genetic Testing: A Systematic Review.

    Science.gov (United States)

    Covolo, Loredana; Rubinelli, Sara; Ceretti, Elisabetta; Gelatti, Umberto

    2015-12-14

    Direct-to-consumer genetic tests (DTC-GT) are easily purchased through the Internet, independent of a physician referral or approval for testing, allowing the retrieval of genetic information outside the clinical context. There is a broad debate about the testing validity, their impact on individuals, and what people know and perceive about them. The aim of this review was to collect evidence on DTC-GT from a comprehensive perspective that unravels the complexity of the phenomenon. A systematic search was carried out through PubMed, Web of Knowledge, and Embase, in addition to Google Scholar according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist with the key term "Direct-to-consumer genetic test." In the final sample, 118 articles were identified. Articles were summarized in five categories according to their focus on (1) knowledge of, attitude toward use of, and perception of DTC-GT (n=37), (2) the impact of genetic risk information on users (n=37), (3) the opinion of health professionals (n=20), (4) the content of websites selling DTC-GT (n=16), and (5) the scientific evidence and clinical utility of the tests (n=14). Most of the articles analyzed the attitude, knowledge, and perception of DTC-GT, highlighting an interest in using DTC-GT, along with the need for a health care professional to help interpret the results. The articles investigating the content analysis of the websites selling these tests are in agreement that the information provided by the companies about genetic testing is not completely comprehensive for the consumer. Given that risk information can modify consumers' health behavior, there are surprisingly few studies carried out on actual consumers and they do not confirm the overall concerns on the possible impact of DTC-GT. Data from studies that investigate the quality of the tests offered confirm that they are not informative, have little predictive power, and do not measure genetic risk

  17. FABASOFT BEST PRACTICES AND TEST METRICS MODEL

    Directory of Open Access Journals (Sweden)

    Nadica Hrgarek

    2007-06-01

    Software companies have to face serious problems about how to measure the progress of test activities and the quality of software products in order to estimate test completion criteria and whether the shipment milestone will be reached on time. Measurement is a key activity in the testing life cycle and requires an established, managed and well documented test process, defined software quality attributes, quantitative measures, and the use of test management and bug tracking tools. Test metrics are a subset of software metrics (product metrics, process metrics) and enable the measurement and quality improvement of the test process and/or software product. The goal of this paper is to briefly present Fabasoft best practices and lessons learned during functional and system testing of big complex software products, and to describe a simple test metrics model applied to the software test process with the purpose of better controlling software projects and measuring and increasing software quality.

  18. A systematic review of perceived risks, psychological and behavioral impacts of genetic testing.

    Science.gov (United States)

    Heshka, Jodi T; Palleschi, Crystal; Howley, Heather; Wilson, Brenda; Wells, Philip S

    2008-01-01

    Genetic testing may enable early disease detection, targeted surveillance, and result in effective prevention strategies. Knowledge of genetic risk may also enable behavioral change. However, the impact of carrier status from the psychological, behavioral, and perceived-risk perspectives is not well understood. We conducted a systematic review to summarize the available literature on these elements. An extensive literature review was performed to identify studies that measured the perceived risk, psychological, and/or behavioral impacts of genetic testing on individuals. The search was not limited to specific diseases but excluded the impacts of testing for single gene disorders. A total of 35 articles and 30 studies were included. The studies evaluated hereditary nonpolyposis colorectal carcinoma, hereditary breast and ovarian cancer, and Alzheimer disease. For affective outcomes, the majority of the studies reported negative effects on carriers but these were short-lived. For behavioral outcomes, an increase in screening behavior of varying rates was demonstrated in carriers but the change in behaviors was less than expected. With respect to perceived risk, there were generally no differences between carriers and noncarriers by 12 months after genetic testing, and over time risk perception decreased. Overall, predispositional genetic testing has no significant impact on psychological outcomes, little effect on behavior, and did not change perceived risk. It seems as though better patient education strategies are required. Our data would suggest that better knowledge among carriers would not have significant psychological impacts and, therefore, it is worth pursuing improved educational strategies.

  19. Systematic model researches on the stability limits of the DVL series of float designs

    Science.gov (United States)

    Sottorf, W.

    1949-01-01

    To determine the trim range in which a seaplane can take off without porpoising, stability tests were made of a Plexiglas model, composed of float, wing, and tailplane, which corresponded to a full-size research airplane. The model and full-size stability limits are in good agreement. After all structural parts pertaining to the air frame were removed gradually, the aerodynamic forces replaced by weight forces, and the moment of inertia and position of the center of gravity changed, no marked change of limits of the stable zone was noticeable. The latter, therefore, is for practical purposes affected only by hydrodynamic phenomena. The stability limits of the DVL family of floats were determined by a systematic investigation independent of any particular sea-plane design, thus a seaplane may be designed to give a run free from porpoising.

  20. How effective is drug testing as a workplace safety strategy? A systematic review of the evidence.

    Science.gov (United States)

    Pidd, Ken; Roche, Ann M

    2014-10-01

    The growing prevalence of workplace drug testing and the narrow scope of previous reviews of the evidence base necessitate a comprehensive review of research concerning the efficacy of drug testing as a workplace strategy. A systematic qualitative review of relevant research published between January 1990 and January 2013 was undertaken. Inclusion criteria were studies that evaluated the effectiveness of drug testing in deterring employee drug use or reducing workplace accident or injury rates. Methodological adequacy was assessed using a published assessment tool specifically designed to assess the quality of intervention studies. A total of 23 studies were reviewed and assessed, six of which reported on the effectiveness of testing in reducing employee drug use and 17 of which reported on occupational accident or injury rates. No studies involved randomised control trials. Only one study was assessed as demonstrating strong methodological rigour. That study found random alcohol testing reduced fatal accidents in the transport industry. The majority of studies reviewed contained methodological weaknesses, including inappropriate study design, limited sample representativeness, the use of ecological data to evaluate individual behaviour change, and failure to adequately control for potentially confounding variables. This latter finding is consistent with previous reviews and indicates the evidence base for the effectiveness of testing in improving workplace safety is at best tenuous. Better dissemination of the current evidence in relation to workplace drug testing is required to support evidence-informed policy and practice. There is also a pressing need for more methodologically rigorous research to evaluate the efficacy and utility of drug testing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Estimating systematic continuous-time trends in recidivism using a non-gaussian panel data model

    NARCIS (Netherlands)

    Koopman, S.J.; Ooms, M.; Montfort, van K.; Geest, van der W.

    2008-01-01

    We model panel data of crime careers of juveniles from a Dutch Judicial Juvenile Institution. The data are decomposed into a systematic and an individual-specific component, of which the systematic component reflects the general time-varying conditions including the criminological climate. Within a

  2. The Diagnostic Validity of Clinical Tests in Temporomandibular Internal Derangement: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Chaput, Eve; Gross, Anita; Stewart, Ryan; Nadeau, Gordon; Goldsmith, Charlie H

    2012-01-01

    To assess the diagnostic validity of clinical tests for temporomandibular internal derangement relative to magnetic resonance imaging (MRI). MEDLINE and Embase were searched from 1994 through 2009. Independent reviewers conducted study selection, risk-of-bias assessment using the Quality Assessment of studies of Diagnostic Accuracy included in Systematic reviews checklist (QUADAS; ≥9/14), and data abstraction. Overall quality of evidence was profiled using Grading of Recommendations Assessment, Development, and Evaluation (GRADE). Agreement was measured using quadratic weighted kappa (κw). Positive (+) and negative (-) likelihood ratios (LR) with 95% CIs were calculated and pooled using the DerSimonian-Laird method and a random-effects model when homogeneous (I² ≥ 0.40, Q-test p ≤ 0.10). We selected 8 of 36 studies identified. There is very low quality evidence that deflection (+LR: 6.37 [95% CI, 2.13-19.03]) and crepitation (+LR: 5.88 [95% CI, 1.95-17.76]) as single tests, and crepitation, deflection, pain, and limited mouth opening as a cluster of tests (+LR: 6.37 [95% CI, 2.13-19.03]; -LR: 0.27 [95% CI, 0.11-0.64]), are the most valuable for ruling in internal derangement without reduction, while the test cluster of click, deviation, and pain rules out internal derangement with reduction (-LR: 0.09 [95% CI, 0.01-0.72]). No single test or cluster of tests was conclusive and of significant value for ruling in internal derangement with reduction. Findings of this review will assist clinicians in deciding which diagnostic tests to use when internal derangement is suspected. The literature search revealed a lack of high-quality studies; further research with adequate description of patient populations, blinded assessments, and both sagittal and coronal MRI planes is therefore recommended.
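    Likelihood ratios like those pooled in this record follow directly from a test's sensitivity and specificity (+LR = sensitivity/(1 − specificity), −LR = (1 − sensitivity)/specificity); a minimal sketch with generic numbers, not the study's data:

```python
def likelihood_ratios(sensitivity, specificity):
    """Return (+LR, -LR): how much a positive or negative test result
    shifts the odds of the target disorder being present."""
    positive_lr = sensitivity / (1 - specificity)
    negative_lr = (1 - sensitivity) / specificity
    return positive_lr, negative_lr

# A hypothetical test with 80% sensitivity and 90% specificity:
plr, nlr = likelihood_ratios(0.80, 0.90)
print(round(plr, 1), round(nlr, 2))  # → 8.0 0.22
```

By convention, +LR above roughly 5-10 (as for deflection and crepitation here) is considered useful for ruling a disorder in, and −LR below about 0.1 for ruling it out.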

  3. Reliability and minimal detectable change of the weight-bearing lunge test: A systematic review.

    Science.gov (United States)

    Powden, Cameron J; Hoch, Johanna M; Hoch, Matthew C

    2015-08-01

    Ankle dorsiflexion range of motion (DROM) is often a point of emphasis during the rehabilitation of lower extremity pathologies. With the growing popularity of weight-bearing DROM assessments, several versions of the weight-bearing lunge (WBLT) test have been developed and numerous reliability studies have been conducted. The purpose of this systematic review was to critically appraise and synthesize the studies which examined the reliability and responsiveness of the WBLT to assess DROM. A systematic search of PubMed and EBSCO Host databases from inception to September 2014 was conducted to identify studies whose primary aim was assessing the reliability of the WBLT. The Quality Appraisal of Reliability Studies assessment tool was utilized to determine the quality of included studies. Relative reliability was examined through intraclass correlation coefficients (ICC) and responsiveness was evaluated through minimal detectable change (MDC). A total of 12 studies met the eligibility criteria and were included. Nine included studies assessed inter-clinician reliability and 12 included studies assessed intra-clinician reliability. There was strong evidence that inter-clinician reliability (ICC = 0.80-0.99) as well as intra-clinician reliability (ICC = 0.65-0.99) of the WBLT is good. Additionally, average MDC scores of 4.6° or 1.6 cm for inter-clinician and 4.7° or 1.9 cm for intra-clinician were found, indicating the minimal change in DROM needed to be outside the error of the WBLT. This systematic review determined that the WBLT, regardless of method, can be used clinically to assess DROM as it provides consistent results between one or more clinicians and demonstrates reasonable responsiveness. Copyright © 2015 Elsevier Ltd. All rights reserved.
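    The MDC values quoted in this record are conventionally derived from the ICC and the sample standard deviation (SEM = SD·√(1 − ICC), MDC95 = 1.96·√2·SEM); the SD below is a made-up figure for illustration only:

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence.

    SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM.
    """
    sem = sd * math.sqrt(1 - icc)
    return 1.96 * math.sqrt(2) * sem

# Hypothetical figures: DROM standard deviation of 4.0 degrees, ICC of 0.90.
print(round(mdc95(4.0, 0.90), 1))  # → 3.5
```

A change in DROM smaller than the MDC cannot be distinguished from measurement error of the WBLT, which is why the review reports MDC alongside the ICCs.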

  4. The Vanishing Tetrad Test: Another Test of Model Misspecification

    Science.gov (United States)

    Roos, J. Micah

    2014-01-01

    The Vanishing Tetrad Test (VTT) (Bollen, Lennox, & Dahly, 2009; Bollen & Ting, 2000; Hipp, Bauer, & Bollen, 2005) is an extension of the Confirmatory Tetrad Analysis (CTA) proposed by Bollen and Ting (Bollen & Ting, 1993). VTT is a powerful tool for detecting model misspecification and can be particularly useful in cases in which…

  6. Modeling and Testing of Hydrodynamic Damping Model for a Complex-shaped Remotely-operated Vehicle for Control

    Institute of Scientific and Technical Information of China (English)

    Cheng Chin; Michael Lau

    2012-01-01

    In this paper, numerical modeling and model testing of a complex-shaped remotely-operated vehicle (ROV) were shown. The paper emphasized the systematic modeling of hydrodynamic damping using the computational fluid dynamics software ANSYS-CFX™ on the complex-shaped ROV, a practice that is not commonly applied. For initial design and prototype testing during the developmental stage, small-scale testing using a free-decaying experiment was used to verify the theoretical models obtained from ANSYS-CFX™. Simulation results are shown to coincide with the experimental tests. The proposed method could determine the hydrodynamic damping coefficients of the ROV.
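    The record does not detail how damping is extracted from the free-decaying experiment; a standard reduction for such tests is the logarithmic decrement, sketched here with hypothetical peak amplitudes (not necessarily the authors' exact procedure):

```python
import math

def damping_ratio_from_decay(peaks):
    """Estimate the damping ratio from successive free-decay peak
    amplitudes via the logarithmic decrement:
    delta = ln(x_i / x_{i+1}), zeta = delta / sqrt(4*pi^2 + delta^2)."""
    decrements = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(decrements) / len(decrements)  # average over cycles
    return delta / math.sqrt(4 * math.pi**2 + delta**2)

# Hypothetical decaying oscillation peaks from a small-scale free-decay test.
zeta = damping_ratio_from_decay([1.00, 0.60, 0.36, 0.216])
print(round(zeta, 3))  # → 0.081
```

Fitting the same decay record also yields the damped natural frequency, and together these give the linear damping coefficient that a CFD prediction can be checked against.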

  7. Prognostic value of quantitative sensory testing in low back pain: a systematic review of the literature

    Directory of Open Access Journals (Sweden)

    Marcuzzi A

    2016-09-01

    Anna Marcuzzi,1,2 Catherine M Dean,1,2 Paul J Wrigley,3,4 Rosemary J Chakiath,3,4 Julia M Hush1,2 1Discipline of Physiotherapy, Department of Health Professions, Faculty of Medicine and Health Sciences, 2The Centre for Physical Health, Macquarie University, Sydney, 3Pain Management Research Institute, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, 4Sydney Medical School – Northern, University of Sydney, Sydney, NSW, Australia Abstract: Quantitative sensory testing (QST) measures have recently been shown to predict outcomes in various musculoskeletal and pain conditions. The aim of this systematic review was to summarize the emerging body of evidence investigating the prognostic value of QST measures in people with low back pain (LBP). The protocol for this review was prospectively registered on the International Prospective Register of Systematic Reviews. An electronic search of six databases was conducted from inception to October 2015. Experts in the field were contacted to retrieve additional unpublished data. Studies were included if they were prospective longitudinal in design, assessed at least one QST measure in people with LBP, assessed LBP status at follow-up, and reported the association of QST data with LBP status at follow-up. Statistical pooling of results was not possible due to heterogeneity between studies. Of 6,408 references screened after duplicates were removed, three studies were finally included. None of them reported a significant association between the QST measures assessed and the LBP outcome. Three areas at high risk of bias were identified which potentially compromise the validity of these results. Due to the paucity of available studies and the methodological shortcomings identified, it remains unknown whether QST measures are predictive of outcome in LBP. Keywords: prognosis, quantitative sensory testing, low back pain, cohort studies, pain, sensory testing

  8. A hybrid variational-ensemble data assimilation scheme with systematic error correction for limited-area ocean models

    Science.gov (United States)

    Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel

    2016-10-01

    A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal parts of the background error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model implemented in the western Mediterranean Sea. An extensive data set collected during the Recognized Environmental Picture Experiments conducted in June 2014 by the Centre for Maritime Research and Experimentation has been used for assimilation and validation. The hybrid scheme is used both to correct the systematic error introduced in the system from the external forcing (initialisation, lateral and surface open boundary conditions) and model parameterisation, and to improve the representation of small-scale errors in the background error covariance matrix. An ensemble system, generated through perturbation of the assimilated observations, is run offline for further use in the hybrid scheme. Results of four different experiments have been compared. The reference experiment uses the classical stationary formulation of the background error covariance matrix and has no systematic error correction. The other three experiments include, in different combinations, systematic error correction and a hybrid background error covariance matrix combining the static and the ensemble-derived errors of the day. Results show that the hybrid scheme, when used in conjunction with the systematic error correction, reduces the mean absolute error of the temperature and salinity misfits by 55 and 42 % respectively, versus statistics arising from standard climatological covariances without systematic error correction.
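    A hybrid background error covariance of the kind described here is commonly formed as a weighted blend of a static climatological matrix and the sample covariance of ensemble perturbations; the abstract does not give the weighting used, so the blending coefficient below is an assumed illustration:

```python
import numpy as np

def hybrid_covariance(b_static, ensemble, alpha=0.5):
    """Blend a static background-error covariance with the sample
    covariance of ensemble perturbations: B = a*B_static + (1-a)*B_ens.

    ensemble: array of shape (n_members, n_state).
    alpha: weight on the static part (assumed value, not from the study).
    """
    perturbations = ensemble - ensemble.mean(axis=0)
    b_ens = perturbations.T @ perturbations / (len(ensemble) - 1)
    return alpha * b_static + (1 - alpha) * b_ens

rng = np.random.default_rng(0)
members = rng.standard_normal((20, 4))   # toy 20-member ensemble, 4 state variables
b = hybrid_covariance(np.eye(4), members)
print(b.shape, np.allclose(b, b.T))  # → (4, 4) True
```

The static part keeps the matrix full-rank and well-conditioned, while the ensemble part contributes the flow-dependent "errors of the day" that the abstract refers to.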

  9. Accuracy of Onsite Tests to Detect Asymptomatic Bacteriuria in Pregnancy: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Rogozińska, Ewelina; Formina, Sandra; Zamora, Javier; Mignini, Luciano; Khan, Khalid S

    2016-09-01

    To estimate the accuracy of onsite tests to detect asymptomatic bacteriuria among pregnant women. We searched MEDLINE, EMBASE, Web of Science, Scopus, and Latin-American Literature from inception until June 2015 without language restrictions. The ClinicalTrials.gov register database was screened to identify any recently completed studies. Two independent reviewers selected studies that recruited asymptomatic pregnant women to evaluate the accuracy of onsite tests in detecting the presence of bacteria in the urine using urine culture as a reference standard. Women's characteristics, study design, urine sample collection, and handling were extracted along with the test accuracy data. Where possible, we pooled the data using a bivariate, hierarchical random-effects model. Of 1,360 screened references, 27 articles (13,641 women) with test accuracy data on nine tests met the inclusion criteria. The most commonly evaluated test was urine dipstick. The pooled sensitivity and specificity of nitrites detected by dipstick to detect asymptomatic bacteriuria were 0.55 (95% confidence interval [CI] 0.42-0.67) and 0.99 (95% CI 0.98-0.99), respectively. The Griess test to detect nitrites had a sensitivity of 0.65 (95% CI 0.50-0.78) and specificity of 0.99 (95% CI 0.98-1.00). Dipslide with Gram staining had a pooled sensitivity of 0.86 (95% CI 0.80-0.91) and specificity of 0.97 (95% CI 0.93-0.99). The specificity of onsite tests is high; however, the sensitivity is not, with the result that they will fail to detect a substantial number of cases of asymptomatic bacteriuria. PROSPERO International prospective register of systematic reviews, http://www.crd.york.ac.uk/PROSPERO/, CRD42015027905.
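    Sensitivity and specificity such as those pooled in this record come from each study's 2×2 table against the urine-culture reference standard; the counts below are hypothetical, chosen only to mirror the dipstick-nitrite estimates:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Test accuracy from a 2x2 table against a reference standard."""
    sensitivity = tp / (tp + fn)   # proportion of true positives detected
    specificity = tn / (tn + fp)   # proportion of true negatives ruled out
    return sensitivity, specificity

# Hypothetical counts: 100 women with bacteriuria, 1,000 without.
sens, spec = sensitivity_specificity(tp=55, fp=10, fn=45, tn=990)
print(round(sens, 2), round(spec, 2))  # → 0.55 0.99
```

With a sensitivity of 0.55, such a test misses 45 of the 100 true cases in this toy table, which is the review's point about onsite tests failing to detect a substantial number of cases.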

  10. Critical appraisal of speech in noise tests: a systematic review and survey

    Directory of Open Access Journals (Sweden)

    Suhani Sharma

    2016-12-01

    Full Text Available Speech-in-noise tests, which measure the perception of speech in the presence of noise, are now an important part of the audiologic test battery and of hearing research as well. Various tests are available to estimate the perception of speech in the presence of noise, for example, the Connected Sentence Test, the Hearing in Noise Test, Words in Noise, the Quick Speech-in-Noise test, the Bamford-Kowal-Bench Speech-in-Noise test, and Listening in Spatialized Noise-Sentences. These tests differ in terms of target age, measure, procedure, speech material, noise, normative data, etc. Because of the variety of tests available to estimate speech-in-noise abilities, audiologists often select tests based on their availability, ease of administration, the time required to run the test, the age of the patient, hearing status, the type of hearing disorder, and the type of amplification device in use. A critical appraisal of these speech-in-noise tests is required for evidence-based selection of tests for use in audiology clinics. In this article, speech-in-noise tests were critically appraised for their conceptual model, measurement model, normative data, reliability, validity, responsiveness, item/instrument bias, respondent burden and administrative burden. Selection of a standard speech-in-noise test based on this critical appraisal will also allow an easy comparison of the speech-in-noise ability of any hearing-impaired individual or group across audiology clinics and research centers. This article also describes the survey that was done to grade the speech-in-noise tests on the various appraisal characteristics.

  11. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part II. general guidance and tips

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho [Dept. of Radiology, and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Lee, June Young [Dept. of Biostatistics, Korea University College of Medicine, Seoul (Korea, Republic of)

    2015-12-15

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies.
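    As a rough illustration of why such pooling needs more than simple averaging, the sketch below pools proportions on the logit scale with DerSimonian-Laird random-effects weights. This is a deliberate univariate simplification for illustration only: the bivariate and HSROC models recommended in the article additionally model the correlation between sensitivity and specificity, which this sketch ignores.

    ```python
    import math

    def pool_logit(proportions, sample_sizes):
        """DerSimonian-Laird random-effects pooling of per-study proportions
        (e.g. sensitivities) on the logit scale; a univariate simplification
        of the bivariate model described in the article."""
        # Per-study logits and approximate within-study variances (1/a + 1/b)
        y, v = [], []
        for p, n in zip(proportions, sample_sizes):
            a, b = p * n, (1 - p) * n
            y.append(math.log(a / b))
            v.append(1 / a + 1 / b)
        w = [1 / vi for vi in v]
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        # DerSimonian-Laird between-study variance estimate
        q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)
        w_star = [1 / (vi + tau2) for vi in v]
        pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
        return 1 / (1 + math.exp(-pooled))  # back-transform to a proportion
    ```

    Working on the logit scale keeps the pooled estimate inside (0, 1), and the between-study variance term widens the weights when studies disagree, which is exactly the heterogeneity the hierarchical models handle more fully.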

  12. Cost Modeling for SOC Modules Testing

    OpenAIRE

    Balwinder Singh; Arun Khosla; Sukhleen B. Narang

    2013-01-01

    The complexity of system design is increasing very rapidly as the number of transistors on Integrated Circuits (IC) doubles as per Moore's law. There is a big challenge in testing these complex VLSI circuits, in which the whole system is integrated into a single chip called a System on Chip (SOC). The cost of testing the SOC is also increasing with complexity. Cost modeling plays a vital role in the reduction of test cost and time to market. This paper includes the cost modeling of SOC module testing...

  13. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  14. Prognostic value of quantitative sensory testing in low back pain: a systematic review of the literature.

    Science.gov (United States)

    Marcuzzi, Anna; Dean, Catherine M; Wrigley, Paul J; Chakiath, Rosemary J; Hush, Julia M

    2016-01-01

    Quantitative sensory testing (QST) measures have recently been shown to predict outcomes in various musculoskeletal and pain conditions. The aim of this systematic review was to summarize the emerging body of evidence investigating the prognostic value of QST measures in people with low back pain (LBP). The protocol for this review was prospectively registered on the International Prospective Register of Systematic Reviews. An electronic search of six databases was conducted from inception to October 2015. Experts in the field were contacted to retrieve additional unpublished data. Studies were included if they were prospective longitudinal in design, assessed at least one QST measure in people with LBP, assessed LBP status at follow-up, and reported the association of QST data with LBP status at follow-up. Statistical pooling of results was not possible due to heterogeneity between studies. Of 6,408 references screened after duplicates were removed, three studies were finally included. None of them reported a significant association between the QST measures assessed and the LBP outcome. Three areas at high risk of bias were identified which potentially compromise the validity of these results. Due to the paucity of available studies and the methodological shortcomings identified, it remains unknown whether QST measures are predictive of outcome in LBP.

  15. Systematic Geometric Error Modeling for Workspace Volumetric Calibration of a 5-axis Turbine Blade Grinding Machine

    Institute of Scientific and Technical Information of China (English)

    Abdul Wahid Khan; Chen Wuyi

    2010-01-01

    A systematic geometric model has been presented for calibration of a newly designed 5-axis turbine blade grinding machine. This machine is designed to serve a specific purpose: to attain high accuracy and high efficiency grinding of turbine blades by eliminating the hand grinding process. Although its topology is RPPPR (P: prismatic; R: rotary), its design is quite distinct from competitive machine tools. As error quantification is the only way to investigate, maintain and improve its accuracy, calibration is recommended for its performance assessment and acceptance testing. A systematic geometric error modeling technique is implemented and 52 position-dependent and position-independent errors are identified while considering the machine as five rigid bodies and eliminating the set-up errors of workpiece and cutting tool. Of these, 39 are found to be influential errors and are accommodated for finding the resultant effect between the cutting tool and the workpiece in the workspace volume. Rigid body kinematics techniques and homogeneous transformation matrices are used for error synthesis.
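    The homogeneous-transformation-matrix error synthesis mentioned above can be sketched, under the usual small-angle approximation, as a product of per-axis 4 × 4 error matrices multiplied through the kinematic chain. The axis count and error magnitudes below are illustrative assumptions, not data for the machine in the article:

    ```python
    import numpy as np

    def error_htm(dx, dy, dz, ax, ay, az):
        """Small-angle homogeneous transformation matrix for one axis's
        geometric errors: translational (dx, dy, dz) and angular (ax, ay, az,
        in radians)."""
        return np.array([
            [1.0, -az,  ay,  dx],
            [ az, 1.0, -ax,  dy],
            [-ay,  ax, 1.0,  dz],
            [0.0, 0.0, 0.0, 1.0],
        ])

    # Resultant tool-to-workpiece deviation: multiply the per-axis error HTMs
    # through the kinematic chain (three illustrative axes shown).
    T = error_htm(1e-3, 0, 0, 0, 0, 1e-4) \
        @ error_htm(0, 2e-3, 0, 1e-4, 0, 0) \
        @ error_htm(0, 0, 0.5e-3, 0, 2e-4, 0)
    deviation = (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
    ```

    To first order the translational errors simply add along the chain, while the angular errors contribute cross terms that grow with the lever arms, which is why a full 5-axis model must carry all position-dependent terms.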

  16. Prediction of pre-eclampsia: a protocol for systematic reviews of test accuracy

    Directory of Open Access Journals (Sweden)

    Khan Khalid S

    2006-10-01

    Full Text Available Abstract Background: Pre-eclampsia, a syndrome of hypertension and proteinuria, is a major cause of maternal and perinatal morbidity and mortality. Accurate prediction of pre-eclampsia is important, since high-risk women could benefit from intensive monitoring and preventive treatment. However, decision making is currently hampered by the lack of precise, up-to-date, comprehensive evidence summaries on estimates of the risk of developing pre-eclampsia. Methods/Design: A series of systematic reviews and meta-analyses will be undertaken to determine, among women in early pregnancy, the accuracy of various tests (history, examinations and investigations) for predicting pre-eclampsia. We will search Medline, Embase, Cochrane Library, MEDION, citation lists of review articles and eligible primary articles, and will contact experts in the field. Reviewers working independently will select studies, extract data, and assess study validity according to established criteria. Language restrictions will not be applied. Bivariate meta-analysis of sensitivity and specificity will be considered for tests whose studies allow generation of 2 × 2 tables. Discussion: The results of the test accuracy reviews will be integrated with the results of effectiveness reviews of preventive interventions to assess the impact of test-intervention combinations for prevention of pre-eclampsia.

  17. The Alcock-Paczyński test with Baryon Acoustic Oscillations: systematic effects for future surveys

    Science.gov (United States)

    Lepori, Francesca; Di Dio, Enea; Viel, Matteo; Baccigalupi, Carlo; Durrer, Ruth

    2017-02-01

    We investigate the Alcock-Paczyński (AP) test applied to the Baryon Acoustic Oscillation (BAO) feature in the galaxy correlation function. By using a general formalism that includes relativistic effects, we quantify the importance of the linear redshift space distortions and gravitational lensing corrections to the galaxy number density fluctuation. We show that redshift space distortions significantly affect the shape of the correlation function, both in radial and transverse directions, causing different values of galaxy bias to induce offsets up to 1% in the AP test. On the other hand, we find that the lensing correction around the BAO scale modifies the amplitude but not the shape of the correlation function and therefore does not introduce any systematic effect. Furthermore, we investigate in detail how the AP test is sensitive to redshift binning: a window function in the transverse direction suppresses correlations and shifts the peak position toward smaller angular scales. We determine the correction that should be applied in order to account for this effect when performing the test with data from three future planned galaxy redshift surveys: Euclid, the Dark Energy Spectroscopic Instrument (DESI) and the Square Kilometer Array (SKA).

  18. Routinization of HIV Testing in an Inpatient Setting: A Systematic Process for Organizational Change.

    Science.gov (United States)

    Mignano, Jamie L; Miner, Lucy; Cafeo, Christina; Spencer, Derek E; Gulati, Mangla; Brown, Travis; Borkoski, Ruth; Gibson-Magri, Kate; Canzoniero, Jenna; Gottlieb, Jonathan E; Rowen, Lisa

    2016-01-01

    In 2006, the U.S. Centers for Disease Control and Prevention released revised recommendations for routinization of HIV testing in healthcare settings. Health professionals have been challenged to incorporate these guidelines. In March 2013, a routine HIV testing initiative was launched at a large urban academic medical center in a high prevalence region. The goal was to routinize HIV testing by achieving a 75% offer and 75% acceptance rate and promoting linkage to care in the inpatient setting. A systematic six-step organizational change process included stakeholder buy-in, identification of an interdisciplinary leadership team, infrastructure development, staff education, implementation, and continuous quality improvement. Success was measured by monitoring the percentage of offered and accepted HIV tests from March to December 2013. The targeted offer rate was exceeded consistently once nurses became part of the consent process (September 2013). Fifteen persons were newly diagnosed with HIV. Seventy-eight persons were identified as previously diagnosed with HIV, but not engaged in care. Through this process, patients who may have remained undiagnosed or out-of-care were identified and linked to care. The authors propose that this process can be replicated in other settings. Increasing identification and treatment will improve the individual patient's health and reduce community disease burden.

  19. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  1. Rapid antigen group A streptococcus test to diagnose pharyngitis: a systematic review and meta-analysis.

    Directory of Open Access Journals (Sweden)

    Emily H Stewart

    Full Text Available BACKGROUND: Pharyngitis management guidelines include estimates of the test characteristics of rapid antigen streptococcus tests (RAST) using a non-systematic approach. OBJECTIVE: To examine the sensitivity and specificity, and sources of variability, of RAST for diagnosing group A streptococcal (GAS) pharyngitis. DATA SOURCES: MEDLINE, Cochrane Reviews, Centre for Reviews and Dissemination, Scopus, SciELO, CINAHL, guidelines, 2000-2012. STUDY SELECTION: Culture as reference standard, all languages. DATA EXTRACTION AND SYNTHESIS: Study characteristics, quality. MAIN OUTCOME(S) AND MEASURE(S): Sensitivity, specificity. RESULTS: We included 59 studies encompassing 55,766 patients. Forty-three studies (18,464 patients) fulfilled the higher quality definition (at least 50 patients, prospective data collection, and no significant biases) and 16 (35,634 patients) did not. For the higher quality immunochromatographic methods in children (10,325 patients), heterogeneity was high for sensitivity (inconsistency [I²] 88%) and specificity (I² 86%). For enzyme immunoassay in children (342 patients), the pooled sensitivity was 86% (95% CI, 79-92%) and the pooled specificity was 92% (95% CI, 88-95%). For the higher quality immunochromatographic methods in the adult population (1,216 patients), the pooled sensitivity was 91% (95% CI, 87 to 94%) and the pooled specificity was 93% (95% CI, 92 to 95%); however, heterogeneity was modest for sensitivity (I² 61%) and specificity (I² 72%). For enzyme immunoassay in the adult population (333 patients), the pooled sensitivity was 86% (95% CI, 81-91%) and the pooled specificity was 97% (95% CI, 96 to 99%); however, heterogeneity was high for sensitivity and specificity (both, I² 88%). CONCLUSIONS: RAST immunochromatographic methods appear to be very sensitive and highly specific to diagnose group A streptococcal pharyngitis among adults but not in children. We could not identify sources of variability among higher quality studies. The

  2. Tests of pattern separation and pattern completion in humans-A systematic review.

    Science.gov (United States)

    Liu, Kathy Y; Gould, Rebecca L; Coulson, Mark C; Ward, Emma V; Howard, Robert J

    2016-06-01

    To systematically review the characteristics, validity and outcome measures of tasks that have been described in the literature as assessing pattern separation and pattern completion in humans. Electronic databases were searched for articles. Parameters for task validity were obtained from two reviews that described optimal task design factors to evaluate pattern separation and pattern completion processes. These were that pattern separation should be tested during an encoding task using abstract, never-before-seen visual stimuli, and pattern completion during a retrieval task using partial cues; parametric alteration of the degree of interference of stimuli or degradation of cues should be used to generate a corresponding gradient in behavioral output; studies should explicitly identify the specific memory domain under investigation (sensory/perceptual, temporal, spatial, affect, response, or language) and account for the contribution of other potential attributes involved in performance of the task. A systematic, qualitative assessment of validity in relation to these parameters was performed, along with a review of general validity and task outcome measures. Sixty-two studies were included. The majority of studies investigated pattern separation and most tasks were performed on young, healthy adults. Pattern separation and pattern completion were most frequently tested during a retrieval task using familiar or recognizable visual stimuli and cues. Not all studies parametrically altered the degree of stimulus interference or cue degradation, or controlled for potential confounding factors. This review found evidence that some of the parameters for task validity have been followed in some human studies of pattern separation and pattern completion, but no study was judged to have adequately met all the parameters for task validity. The contribution of these parameters and other task design factors towards an optimal behavioral paradigm is discussed and

  3. Dynamic testing of learning potential in adults with cognitive impairments: A systematic review of methodology and predictive value.

    NARCIS (Netherlands)

    Boosman, H.; Bovend'Eerdt, T.J.; Visser-Meily, J.M.; Nijboer, T.C.W.; Van heugten, C.M.

    2016-01-01

    Dynamic testing includes procedures that examine the effects of brief training on test performance where pre- to post-training change reflects patients' learning potential. The objective of this systematic review was to provide clinicians and researchers insight into the concept and methodology of

  4. Satellite data for systematic validation of wave model results in the Black Sea

    Science.gov (United States)

    Behrens, Arno; Staneva, Joanna

    2017-04-01

    The Black Sea is, with regard to the availability of traditional in situ wave measurements recorded by waverider buoys, a data-sparse semi-enclosed sea. The only possibility for systematic validation of wave model results in such a regional area is the use of satellite data. In the frame of the Copernicus Marine Environment Monitoring Service for the Black Sea, which requires wave predictions, the third-generation spectral wave model WAM is used. The operational system is demonstrated on the basis of four years of systematic comparisons with satellite data. The aim of this investigation was to answer two questions: is the wave model able to provide a reliable description of the wave conditions in the Black Sea, and are the satellite measurements suitable for validation purposes on such a regional scale? Detailed comparisons between measured data and computed model results for the Black Sea, including yearly statistics, have been made for about 300 satellite overflights per year. The results are discussed in terms of the different verification schemes needed to assess the forecasting skill of the operational system. The good agreement between measured and modelled data supports the expectation that the wave model provides reasonable results and that the satellite data are of good quality and offer an appropriate validation alternative to buoy measurements. This is the required step towards further use of those satellite data for assimilation into the wave fields to improve the wave predictions. Additional support for the good quality of the wave predictions is provided by comparisons between ADCP measurements, available for a short period in February 2012, and the corresponding model results at a location near the Bulgarian coast in the western Black Sea. Sensitivity tests with different wave model options and different driving wind fields have been performed to identify the model configuration that provides the best wave predictions. In addition to the comparisons between measured
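    The yearly verification statistics referred to above typically include the bias, root-mean-square error, and scatter index of the modelled wave parameters against observations. A minimal sketch of these standard measures (the function name and inputs are assumptions for illustration, not the study's code):

    ```python
    import math

    def validation_stats(model, obs):
        """Bias, RMSE and scatter index of model values (e.g. significant
        wave height) against observations (e.g. satellite altimeter data)."""
        n = len(obs)
        bias = sum(m - o for m, o in zip(model, obs)) / n
        rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
        # Scatter index: RMSE relative to the observed mean, common in wave validation
        scatter_index = rmse / (sum(obs) / n)
        return bias, rmse, scatter_index
    ```

    Computed per year over the matched model-satellite pairs, these three numbers summarize systematic offset, total error, and relative error respectively.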

  5. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist.

    Directory of Open Access Journals (Sweden)

    Karel G M Moons

    2014-10-01

    Full Text Available Carl Moons and colleagues provide a checklist and background explanation for critically appraising and extracting data from systematic reviews of prognostic and diagnostic prediction modelling studies. Please see later in the article for the Editors' Summary.

  6. A Systematic Review of Agent-Based Modelling and Simulation Applications in the Higher Education Domain

    Science.gov (United States)

    Gu, X.; Blackmore, K. L.

    2015-01-01

    This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…

  7. Standardized Tests and Froebel's Original Kindergarten Model

    Science.gov (United States)

    Jeynes, William H.

    2006-01-01

    The author argues that American educators rely on standardized tests at too early an age when administered in kindergarten, particularly given the original intent of kindergarten as envisioned by its founder, Friedrich Froebel. The author examines the current use of standardized tests in kindergarten and the Froebel model, including his emphasis…

  8. Horns Rev II, 2-D Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), on behalf of Energy E2 A/S, part of DONG Energy A/S, Denmark. The objective of the tests was to investigate the combined influence of the pile...

  9. Sample Size Determination for Rasch Model Tests

    Science.gov (United States)

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of the error of the first kind (Type I probability), the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
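    For the general idea of choosing a sample size that controls the Type II error probability, a sketch using the normal-approximation formula for a two-sided z-test is given below. This is a generic power calculation, not the Rasch-specific procedure developed in the paper, and the effect-size parameterisation is an illustrative assumption:

    ```python
    from math import ceil
    from statistics import NormalDist

    def sample_size(delta, sigma, alpha=0.05, power=0.80):
        """Smallest n for which a two-sided z-test of an effect of size delta
        (per-observation standard deviation sigma) attains the given power,
        i.e. holds the Type II error probability at 1 - power."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for Type I level
        z_beta = NormalDist().inv_cdf(power)           # quantile for the target power
        return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)
    ```

    For example, detecting a half-standard-deviation effect at the conventional 5% level with 80% power requires roughly 32 observations under this approximation; the paper's contribution is to derive the analogous numbers for tests of the Rasch model itself.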

  10. Systematic two-dimensional cascade tests. Volume 3: Slotted double circular-arc hydrofoils

    Science.gov (United States)

    Columbo, R. M.; Murrin, T. A.

    1972-01-01

    Performance parameters are presented for cascades of slotted double circular-arc hydrofoils tested over a range of systematically introduced variables in a rectilinear cascade tunnel which uses water as the test medium. Cascade configurations included various combinations of an inlet flow angle of 50, 60 and 70 deg; a cascade solidity of 0.75, 1.00 and 1.50; a hydrofoil camber angle of 20, 30, 40 and 45 deg; and angles of incidence between positive and negative stall. The slot was positioned at the 45 percent chord station and the slot exit width was 0.047-in. Tests were also performed with the slot positioned at the 35 percent chord station and with slot widths of 0.063 and 0.094-in. These data were correlated to indicate the effects of slot location and slot width on minimum loss incidence and deviation angles. In addition, a comparison is presented of the performance parameters for cascades of slotted and unslotted hydrofoils.

  11. Direct-to-consumer genetic testing: a systematic review of european guidelines, recommendations, and position statements.

    Science.gov (United States)

    Rafiq, Muhammad; Ianuale, Carolina; Ricciardi, Walter; Boccia, Stefania

    2015-10-01

    Personalized healthcare is expected to yield promising results, with a paradigm shift toward more personalization in the practice of medicine. This emerging field has wide-ranging implications for all the stakeholders. Commercial tests in the form of multiplex genetic profiles are currently being provided to consumers through the Internet, without physician consultation; these are referred to as direct-to-consumer genetic tests (DTC GT). The objective was to review all the existing European guidelines on DTC GT and its associated interventions, to list all the supposed benefits and harms, issues and concerns, and recommendations. We conducted a systematic review of position statements, policies, guidelines, and recommendations, produced by professional organizations or other relevant bodies, for use of DTC GT in Europe. Seventeen documents met the inclusion criteria; these were subjected to thematic analysis, and the texts were coded for statements related to use of DTC GT. Professional societies and associations are currently more suggestive of potential disadvantages of DTC GT, recommending improved genetic literacy of both populations and health professionals, and implementation research on the genetic tests to integrate public health genomics into healthcare systems.

  12. Clinimetric properties of the Timed Up and Go Test for patients with stroke: a systematic review.

    Science.gov (United States)

    Hafsteinsdóttir, Thóra B; Rensink, Marijke; Schuurmans, Marieke

    2014-01-01

    To systematically review and summarize the clinimetric properties, including reliability, validity, and responsiveness, the procedures used, and the meanings of the scores in the Timed Up and Go Test (TUG). The TUG is a performance test that identifies problems with functional mobility in patients with stroke. MEDLINE and the Cochrane Central Register of Controlled Trials were searched from 1991 to January 2013. Studies were included if (1) the participants were adults with stroke; (2) the research design was cross-sectional, descriptive, or longitudinal and examined the clinimetric properties, including reliability, validity, and sensitivity to change, and procedural differences in the TUG; and (3) the study was published in English from 1991 to January 2013. Thirteen studies met the inclusion criteria. Of these, 4 showed the TUG to have good convergent validity, as it had significant correlations with various instruments. Three studies that investigated the test-retest reliability showed the TUG to have excellent intrarater and interrater reliability (intraclass correlation coefficient [ICC] ≥ 0.95). The 3 studies that investigated whether the TUG could predict falls after stroke showed inconclusive results. Three studies showed the TUG to be sensitive to change, and 1 study showed the TUG to be responsive in moderate- and fast-walking patients with stroke. However, there were wide variations in the procedures and instructions used. The TUG can be recommended for measuring basic mobility skills after stroke in patients who are able to walk. However, the procedures and instructions should be described more clearly.

  13. Hydrocarbon Fuel Thermal Performance Modeling based on Systematic Measurement and Comprehensive Chromatographic Analysis

    Science.gov (United States)

    2016-07-31

    Technical note covering 04 January 2016 - 31 July 2016; approved for public release, distribution unlimited.

  14. Statistical Inference Models for Image Datasets with Systematic Variations.

    Science.gov (United States)

    Kim, Won Hwa; Bendlin, Barbara B; Chung, Moo K; Johnson, Sterling C; Singh, Vikas

    2015-06-01

    Statistical analysis of longitudinal or cross-sectional brain imaging data to identify effects of neurodegenerative diseases is a fundamental task in various studies in neuroscience. However, when there are systematic variations in the images due to parameter changes, such as changes in the scanner protocol, hardware changes, or when combining data from multi-site studies, the statistical analysis becomes problematic. Motivated by this scenario, the goal of this paper is to develop a unified statistical solution to the problem of systematic variations in statistical image analysis. Based in part on recent literature in harmonic analysis on diffusion maps, we propose an algorithm which compares operators that are resilient to the systematic variations. These operators are derived from the empirical measurements of the image data and provide an efficient surrogate for capturing the actual changes across images. We also establish a connection between our method and the design of wavelets in non-Euclidean space. To evaluate the proposed ideas, we present various experimental results on detecting changes in simulations as well as show how the method offers improved statistical power in the analysis of real longitudinal PIB-PET imaging data acquired from participants at risk for Alzheimer's disease (AD).

  16. Orthodontic measurements on digital study models compared with plaster models: a systematic review.

    Science.gov (United States)

    Fleming, P S; Marinho, V; Johal, A

    2011-02-01

    The aim of this study is to evaluate the validity of the use of digital models to assess tooth size, arch length, irregularity index, arch width and crowding versus measurements generated on hand-held plaster models with digital callipers in patients with and without malocclusion. Studies comparing linear and angular measurements obtained on digital and standard plaster models were identified by searching multiple databases including MEDLINE, LILACS, BBO, ClinicalTrials.gov, the National Research Register and Pro-Quest Dissertation Abstracts and Thesis database, without restrictions relating to publication status or language of publication. Two authors were involved in study selection, quality assessment and the extraction of data. Items from the Quality Assessment of Studies of Diagnostic Accuracy included in Systematic Reviews checklist were used to assess the methodological quality of included studies. No meta-analysis was conducted. Comparisons between measurements of digital and plaster models made directly within studies were reported, and the difference between the (repeated) measurement means for digital and plaster models were considered as estimates. Seventeen relevant studies were included. Where reported, overall, the absolute mean differences between direct and indirect measurements on plaster and digital models were minor and clinically insignificant. Orthodontic measurements with digital models were comparable to those derived from plaster models. The use of digital models as an alternative to conventional measurement on plaster models may be recommended, although the evidence identified in this review is of variable quality. © 2010 John Wiley & Sons A/S.

  17. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction in forging is still limited. The theoretical friction models presently applied in process analysis are not satisfactory compared to the advanced and detailed studies now possible with plastic FEM analyses, and more refined models have to be based on experimental testing...

  18. Testing inequality constrained hypotheses in SEM Models

    NARCIS (Netherlands)

    Van de Schoot, R.; Hoijtink, H.J.A.; Dekovic, M.

    2010-01-01

    Researchers often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model. It is currently not possible to test these so-called informative hypotheses in structural equation modeling software. We offer a solution to this problem...

  19. Modeling Answer Changes on Test Items

    Science.gov (United States)

    van der Linden, Wim J.; Jeon, Minjeong

    2012-01-01

    The probability of test takers changing answers upon review of their initial choices is modeled. The primary purpose of the model is to check erasures on answer sheets recorded by an optical scanner for numbers and patterns that may be indicative of irregular behavior, such as teachers or school administrators changing answer sheets after their…
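    The screening idea — flagging answer sheets whose wrong-to-right erasure counts are improbable under benign answer changing — can be illustrated with a simple binomial tail test. The model in the paper is more elaborate; the baseline rate `p` here is an assumed placeholder, not a value from the paper:

```python
from math import comb

def wtr_tail_prob(n_erasures, k_wtr, p=0.25):
    """P(X >= k_wtr) for X ~ Binomial(n_erasures, p): the chance of seeing
    at least k_wtr wrong-to-right changes if answer changes hit the correct
    option only at an assumed baseline rate p (p = 0.25 is a placeholder,
    not a value from the paper)."""
    return sum(comb(n_erasures, i) * p ** i * (1 - p) ** (n_erasures - i)
               for i in range(k_wtr, n_erasures + 1))

# A sheet with 15 of 20 erasures going wrong-to-right is highly improbable
# under the neutral baseline, so it would be flagged for review.
flagged = wtr_tail_prob(20, 15) < 0.001
```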

  20. Modeling Nonignorable Missing Data in Speeded Tests

    Science.gov (United States)

    Glas, Cees A. W.; Pimentel, Jonald L.

    2008-01-01

    In tests with time limits, items at the end are often not reached. Usually, the pattern of missing responses depends on the ability level of the respondents; therefore, missing data are not ignorable in statistical inference. This study models data using a combination of two item response theory (IRT) models: one for the observed response data and…

  1. A Systematic Review of the Anxiolytic-Like Effects of Essential Oils in Animal Models

    Directory of Open Access Journals (Sweden)

    Damião Pergentino de Sousa

    2015-10-01

    The clinical efficacy of standardized essential oils (such as Lavandula officinalis) in treating anxiety disorders strongly suggests that these natural products are an important candidate source for new anxiolytic drugs. A systematic review of essential oils, their bioactive constituents, and anxiolytic-like activity is conducted. The essential oil with the best profile is Lavandula angustifolia, which has already been tested in controlled clinical trials with positive results. Citrus aurantium using different routes of administration also showed significant effects in several animal models, and was corroborated by different research groups. Other promising essential oils are Citrus sinensis and bergamot oil, which showed certain clinical anxiolytic actions; along with Achillea wilhemsii, Alpinia zerumbet, Citrus aurantium, and Spiranthera odoratissima, which, like Lavandula angustifolia, appear to exert anxiolytic-like effects without GABA/benzodiazepine activity, thus differing in their mechanisms of action from the benzodiazepines. The anxiolytic activity of 25 compounds commonly found in essential oils is also discussed.

  2. Model Checking and Model-based Testing in the Railway Domain

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    This chapter describes some approaches and emerging trends for verification and model-based testing of railway control systems. We describe state-of-the-art methods and associated tools for verifying interlocking systems and their configuration data, using bounded model checking and k-induction. Using real-world models of novel Danish interlocking systems, it is exemplified how this method scales up and is suitable for industrial application. For verification of the integrated HW/SW system performing the interlocking control tasks, a model-based hardware-in-the-loop testing approach is presented... with good test strength are explained. Interlocking systems represent just one class of many others, where concrete system instances are created from generic representations, using configuration data for determining the behaviour of the instances. We explain how the systematic transition from generic...

  3. Systematic approach for the identification of process reference models

    CSIR Research Space (South Africa)

    Van Der Merwe, A

    2009-02-01

    Full Text Available Process models are used in different application domains to capture knowledge on the process flow. Process reference models (PRM) are used to capture reusable process models, which should simplify the identification process of process models...

  4. An approach to model based testing of multiagent systems.

    Science.gov (United States)

    Ur Rehman, Shafiq; Nadeem, Aamer

    2015-01-01

    Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to designing multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
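    The path-generation step can be sketched for one common coverage criterion, all-edges coverage: for every edge of the protocol graph, emit a test path that reaches the edge's source state and traverses it. The protocol graph below is a toy example, not taken from any Prometheus design artifact:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """BFS shortest node sequence from start to goal, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def edge_coverage_paths(graph, start):
    """One test path per edge: reach the edge's source state, traverse it."""
    paths = []
    for src, targets in graph.items():
        for dst in targets:
            prefix = shortest_path(graph, start, src)
            if prefix is not None:
                paths.append(prefix + [dst])
    return paths

# Toy protocol graph (states and message/percept transitions); names invented.
protocol = {
    "start": ["request"],
    "request": ["inform", "refuse"],
    "inform": ["done"],
    "refuse": [],
}
test_paths = edge_coverage_paths(protocol, "start")
```

    Stronger criteria (e.g. covering edge pairs) would generate more paths from the same graph; the structure of the algorithm stays the same.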

  5. An Approach to Model Based Testing of Multiagent Systems

    Directory of Open Access Journals (Sweden)

    Shafiq Ur Rehman

    2015-01-01

    Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to designing multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.

  6. Testing Rare-Variant Association without Calling Genotypes Allows for Systematic Differences in Sequencing between Cases and Controls.

    Directory of Open Access Journals (Sweden)

    Yi-Juan Hu

    2016-05-01

    Next-generation sequencing of DNA provides an unprecedented opportunity to discover rare genetic variants associated with complex diseases and traits. However, the common practice of first calling underlying genotypes and then treating the called values as known is prone to false positive findings, especially when genotyping errors are systematically different between cases and controls. This happens whenever cases and controls are sequenced at different depths, on different platforms, or in different batches. In this article, we provide a likelihood-based approach to testing rare variant associations that directly models sequencing reads without calling genotypes. We consider the (weighted) burden test statistic, which is the (weighted) sum of the score statistics for assessing effects of individual variants on the trait of interest. Because variant locations are unknown, we develop a simple, computationally efficient screening algorithm to estimate the loci that are variants. Because our burden statistic may not have mean zero after screening, we develop a novel bootstrap procedure for assessing the significance of the burden statistic. We demonstrate through extensive simulation studies that the proposed tests are robust to a wide range of differential sequencing qualities between cases and controls, and are at least as powerful as the standard genotype calling approach when the latter controls type I error. An application to the UK10K data reveals novel rare variants in gene BTBD18 associated with childhood onset obesity. The relevant software is freely available.
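    The shape of a weighted burden statistic can be sketched as follows. The paper works directly from sequencing reads and uses a tailored bootstrap; this simplified sketch works from called genotype dosages and uses a permutation test instead, with entirely invented toy data:

```python
import random

def burden_statistic(genotypes, phenotype, weights):
    """(Weighted) burden statistic: the weighted sum over variants of the
    score statistic sum_i G_ij * (y_i - ybar) for a quantitative trait."""
    n = len(phenotype)
    ybar = sum(phenotype) / n
    return sum(w * sum(genotypes[i][j] * (phenotype[i] - ybar)
                       for i in range(n))
               for j, w in enumerate(weights))

def permutation_pvalue(genotypes, phenotype, weights, n_perm=500, seed=0):
    """Two-sided permutation p-value for the burden statistic."""
    rng = random.Random(seed)
    observed = abs(burden_statistic(genotypes, phenotype, weights))
    perm, hits = list(phenotype), 0
    for _ in range(n_perm):
        rng.shuffle(perm)
        if abs(burden_statistic(genotypes, perm, weights)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Toy data: five carriers of a rare variant with elevated trait values.
genos = [[1, 0]] * 5 + [[0, 0]] * 5
pheno = [2.0] * 5 + [0.0] * 5
stat = burden_statistic(genos, pheno, [1.0, 1.0])
pval = permutation_pvalue(genos, pheno, [1.0, 1.0])
```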

  7. Improved testing inference in mixed linear models

    CERN Document Server

    Melo, Tatiane F N; Cribari-Neto, Francisco; 10.1016/j.csda.2008.12.007

    2011-01-01

    Mixed linear models are commonly used in repeated measures studies. They account for the dependence amongst observations obtained from the same experimental unit. Oftentimes, the number of observations is small, and it is thus important to use inference strategies that incorporate small sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test and also to a test obtained from a modified profile likelihood function. Our results generalize those in Zucker et al. (Journal of the Royal Statistical Society B, 2000, 62, 827-838) by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for random effects nonlinear covariance matrix structure. We report numerical evidence which shows that the proposed tests display superior finite sample behavior relative to the standard likelihood ratio test. An application is also presente...
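    The mechanics of a Bartlett correction are simple once the correction factor is known: the LR statistic is rescaled so its mean matches the reference chi-square distribution. The sketch below assumes the factor is given (in the paper it is derived analytically for mixed linear models) and uses df = 2, where the chi-square survival function has a closed form; the numbers are illustrative only:

```python
from math import exp

def chi2_sf_df2(x):
    """Survival function of the chi-square distribution with 2 df."""
    return exp(-x / 2.0)

def bartlett_corrected_pvalue(lr_stat, bartlett_factor):
    """Scale the LR statistic by the Bartlett factor c = E[LR]/q before
    referring it to the chi-square reference distribution (df = 2 here for
    a closed-form survival function)."""
    return chi2_sf_df2(lr_stat / bartlett_factor)

p_raw = chi2_sf_df2(6.0)                      # uncorrected LR test
p_adj = bartlett_corrected_pvalue(6.0, 1.25)  # small-sample corrected
```

    With a factor above 1 the corrected p-value is larger, counteracting the liberal behavior of the uncorrected test in small samples.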

  8. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    -friendly system, which will make the model development process easier and faster and provide the way for unified and consistent model documentation. The modeller can use the template for their specific problem or to extend and/or adopt a model. This is based on the idea of model reuse, which emphasizes the use...... and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer-aided methods and tools that include procedures to perform model translation, model analysis, model verification/validation, model solution and model documentation; 4) model transfer – export/import to/from other applications for further extension and application – several types of formats, such as XML...

  9. An information theory approach to minimise correlated systematic uncertainty in modelling resonance parameters

    Energy Technology Data Exchange (ETDEWEB)

    Krishna Kumar, P.T. [Research Laboratory for Nuclear Reactors, Tokyo Institute of Technology, 2-12-1, O-Okayama, Meguro-Ku, Tokyo 152-8550 (Japan)], E-mail: gstptk@yahoo.co.in; Sekimoto, Hiroshi [Research Laboratory for Nuclear Reactors, Tokyo Institute of Technology, 2-12-1, O-Okayama, Meguro-Ku, Tokyo 152-8550 (Japan)], E-mail: hsekimot@nr.titech.ac.jp

    2009-02-15

    Covariance matrix elements depict the statistical and systematic uncertainties in reactor parameter measurements. Efforts have so far been devoted only to minimising the statistical uncertainty by repeated measurements, but the dominant systematic uncertainty has either been neglected or randomized. In recent years, efforts have been devoted to simulating the resonance parameter uncertainty information through covariance matrices in the code SAMMY. But the code does not have any provision to check the reliability of the simulated covariance data. We propose a new approach, called entropy-based information theory, to reduce the systematic uncertainty in the correlation matrix elements so that resonance parameters with minimum systematic uncertainty can be modelled. We apply our information theory approach to generating the resonance parameters of ¹⁵⁶Gd with reduced systematic uncertainty and demonstrate the superiority of our technique over the principal component analysis method.

  10. Tests for predicting complications of pre-eclampsia: A protocol for systematic reviews

    Directory of Open Access Journals (Sweden)

    O'Brien Shaughn

    2008-08-01

    Abstract Background Pre-eclampsia is associated with several complications. Early prediction of complications and timely management is needed in the clinical care of these patients to avert fetal and maternal mortality and morbidity. There is a need to identify the best testing strategies in pre-eclampsia to identify the women at increased risk of complications. We aim to determine the accuracy of various tests to predict complications of pre-eclampsia by systematic quantitative reviews. Method We performed extensive searches in MEDLINE (1951–2004) and EMBASE (1974–2004), and will also include manual searches of bibliographies of primary and review articles. An initial search has revealed 19500 citations. Two reviewers will independently select studies and extract data on study characteristics, quality and accuracy. Accuracy data will be used to construct 2 × 2 tables. Data synthesis will involve assessment for heterogeneity and appropriate pooling of results to produce summary Receiver Operating Characteristic (ROC) curves and summary likelihood ratios. Discussion This review will generate predictive information and integrate it with therapeutic effectiveness to determine the absolute benefit and harm of available therapy in reducing complications in women with pre-eclampsia.
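    The accuracy measures this protocol plans to pool all derive from the 2 × 2 table. As a sketch (with hypothetical counts, not study data), sensitivity, specificity, and the likelihood ratios are:

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sens": sens,
        "spec": spec,
        "LR+": sens / (1 - spec),  # odds multiplier for a positive test
        "LR-": (1 - sens) / spec,  # odds multiplier for a negative test
    }

# Hypothetical counts for one predictive test, not data from the review.
summary = diagnostic_summary(tp=40, fp=10, fn=20, tn=130)
```

    Pooling across studies then combines these per-study estimates, after checking heterogeneity, into summary likelihood ratios and a summary ROC curve.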

  11. Low levels of HIV test coverage in clinical settings in the UK: a systematic review of adherence to 2008 guidelines

    OpenAIRE

    Elmahdi, Rahma; Gerver, Sarah M.; Gomez Guillen, Gabriela; Fidler, Sarah; Cooke, Graham; Ward, Helen

    2014-01-01

    Objectives To quantify the extent to which guideline recommendations for routine testing for HIV are adhered to outside of genitourinary medicine (GUM), sexual health (SH) and antenatal clinics. Methods A systematic review of published data on testing levels following publication of 2008 guidelines was undertaken. Medline, Embase and conference abstracts were searched according to a predefined protocol. We included studies reporting the number of HIV tests administered in those eligible for g...

  12. A test of systematic coarse-graining of molecular dynamics simulations: Transport properties.

    Science.gov (United States)

    Fu, Chia-Chun; Kulkarni, Pandurang M; Shell, M Scott; Leal, L Gary

    2013-09-07

    To what extent can a "bottom-up" mesoscale fluid model developed through systematic coarse-graining techniques recover the physical properties of a molecular scale system? In a previous paper [C.-C. Fu, P. M. Kulkarni, M. S. Shell, and L. G. Leal, J. Chem. Phys. 137, 164106 (2012)], we addressed this question for thermodynamic properties through the development of coarse-grained (CG) fluid models using modified iterative Boltzmann inversion methods that reproduce correct pair structure and pressure. In the present work we focus on the dynamic behavior. Unlike the radial distribution function and the pressure, dynamical properties such as the self-diffusion coefficient and viscosity in a CG model cannot be matched during coarse-graining by modifying the pair interaction. Instead, removed degrees of freedom require a modification of the equations of motion to simulate their implicit effects on dynamics. A simple but approximate approach is to introduce a friction coefficient, γ, and random forces for the remaining degrees of freedom, in which case γ becomes an additional parameter in the coarse-grained model that can be tuned. We consider the non-Galilean-invariant Langevin and the Galilean-invariant dissipative particle dynamics (DPD) thermostats with CG systems in which we can systematically tune the fraction φ of removed degrees of freedom. Between these two choices, only DPD allows both the viscosity and diffusivity to match a reference Lennard-Jones liquid with a single value of γ for each degree of coarse-graining φ. This friction constant is robust to the pressure correction imposed on the effective CG potential, increases approximately linearly with φ, and also depends on the interaction cutoff length, rcut, of the pair interaction potential. Importantly, we show that the diffusion constant and viscosity are constrained by a simple scaling law that leads to a specific choice of DPD friction coefficient for a given degree of coarse-graining. Moreover, we
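    The key constraint mentioned above — that the random-force amplitude is tied to the friction coefficient γ by fluctuation-dissipation, σ² = 2γk_BT — can be made concrete in a minimal pairwise force sketch. This is a simplified illustration (pair weight functions set to 1, single pair), not the production DPD scheme of the paper:

```python
import random

def dpd_pair_force(gamma, kT, dt, e_ij, v_ij, rng=random):
    """Dissipative plus random DPD force on one pair, projected along the
    unit vector e_ij (pair weight functions set to 1 for brevity). The
    fluctuation-dissipation theorem ties the noise amplitude to the
    friction: sigma^2 = 2 * gamma * kT."""
    sigma = (2.0 * gamma * kT) ** 0.5
    v_rel = sum(e * v for e, v in zip(e_ij, v_ij))    # relative velocity
    f_diss = -gamma * v_rel                           # drag term
    f_rand = sigma * rng.gauss(0.0, 1.0) / dt ** 0.5  # thermal noise
    return [(f_diss + f_rand) * e for e in e_ij]

force = dpd_pair_force(gamma=4.5, kT=1.0, dt=0.01,
                       e_ij=[1.0, 0.0, 0.0], v_ij=[0.5, 0.0, 0.0])
```

    Because both force terms act along the interparticle vector, momentum is conserved pairwise, which is what makes DPD Galilean-invariant and able to reproduce hydrodynamics, unlike a Langevin thermostat.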

  13. Systematic comparison of trip distribution laws and models

    CERN Document Server

    Lenormand, Maxime; Ramasco, José J

    2016-01-01

    Trip distribution laws are basic for the travel demand characterization needed in transport and urban planning. Several approaches have been considered in the last years. One of them is the so-called gravity law, in which the number of trips is assumed to be related to the population at origin and destination and to decrease with the distance. The mathematical expression of this law resembles Newton's law of gravity, which explains its name. Another popular approach is inspired by the theory of intervening opportunities and has taken concrete form in the so-called radiation models. Individuals are supposed to travel until they find a job opportunity, so the population and jobs spatial distributions naturally lead to a trip flow network. In this paper, we perform a thorough comparison between the gravity and the radiation approaches in their ability at estimating commuting flows. We test the gravity and the radiation laws against empirical trip data at different scales and coming from different countries. Diff...
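    The two laws being compared have compact closed forms. A common textbook version of each is sketched below with arbitrary illustrative numbers (the paper tests several variants, e.g. different deterrence functions for the gravity law):

```python
def gravity_trips(pop_i, pop_j, distance, k=1.0, beta=2.0):
    """Gravity law: trips grow with the origin and destination populations
    and decay with distance (power-law deterrence with exponent beta)."""
    return k * pop_i * pop_j / distance ** beta

def radiation_trips(trips_out, m_i, n_j, s_ij):
    """Radiation model: m_i and n_j are the origin/destination populations
    and s_ij is the population within the circle of radius d_ij centred on
    the origin (excluding the two endpoints)."""
    return trips_out * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

flow_gravity = gravity_trips(10000, 5000, 10.0)        # arbitrary units
flow_radiation = radiation_trips(1000, 10000, 5000, 20000)
```

    Note the structural difference: the radiation model is parameter-free given the populations, while the gravity law needs the deterrence exponent (or an exponential decay rate) fitted to data.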

  14. Graded CTL Model Checking for Test Generation

    CERN Document Server

    Napoli, Margherita

    2011-01-01

    Recently there has been great attention from the scientific community towards the use of the model-checking technique as a tool for test generation in the simulation field. This paper aims to provide a useful means to get more insights along these lines. By applying recent results in the field of graded temporal logics, we present a new efficient model-checking algorithm for Hierarchical Finite State Machines (HSM), a well-established formalism long and widely used for representing hierarchical models of discrete systems. Performing model checking against specifications expressed using graded temporal logics has the peculiarity of returning more counterexamples within a unique run. We think that this can greatly improve the efficacy of automatically getting test cases. In particular, we verify two different models of HSM against branching-time temporal properties.
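    The payoff described above — several counterexamples per model-checking run, each usable as a test case — can be illustrated on a flat (non-hierarchical) toy state machine. This sketch enumerates up to k paths violating an invariant; it is an invented illustration, not the paper's HSM algorithm:

```python
from collections import deque

def counterexamples(transitions, init, bad, k=3, max_len=10):
    """Breadth-first enumeration of up to k distinct finite paths from init
    that reach a state in `bad`, i.e. k counterexamples to the invariant
    AG(not bad). Returning several counterexamples in a single run mirrors
    the benefit attributed to graded temporal logics (this toy checker is
    flat, not hierarchical)."""
    found, queue = [], deque([[init]])
    while queue and len(found) < k:
        path = queue.popleft()
        if path[-1] in bad:
            found.append(path)
            continue
        if len(path) < max_len:
            for nxt in transitions.get(path[-1], []):
                queue.append(path + [nxt])
    return found

fsm = {"s0": ["s1", "s2"], "s1": ["err"], "s2": ["err"], "err": []}
cexs = counterexamples(fsm, "s0", {"err"}, k=2)
```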

  15. Pharmacogenomic Testing for Psychotropic Medication Selection: A Systematic Review of the Assurex GeneSight Psychotropic Test

    Science.gov (United States)

    Brener, Stacey; Holubowich, Corinne

    2017-01-01

    Background A large proportion of the Ontario population lives with a diagnosed mental illness. Nearly 5% of Ontarians have major depressive disorder, and another 5% have another type of depressive disorder, bipolar disorder, schizophrenia, anxiety, or some other disorder not otherwise specified. Medications are commonly used to treat mental illness, but choosing the right medication for each patient is challenging, and more than 40% of patients discontinue their medication within 90 days because of adverse effects or lack of response. The Assurex GeneSight Psychotropic test is a pharmacogenomic panel that provides clinicians with a report to guide medication selection that is unique to each patient based on their individual genetic profile. However, it is uncertain whether guided treatment using GeneSight is effective compared with unguided treatment (usual care). Methods We performed a systematic review to identify English-language studies published before February 22, 2016, that compared GeneSight-guided care and usual care among people with mood disorders, anxiety, or schizophrenia. Primary outcomes of interest were prevention of suicide, remission of depression symptoms, response to depression therapy, depression score, and quality of life. Secondary outcomes of interest were impact on therapeutic decisions and patient and clinician satisfaction. Risk of bias was evaluated, and the quality of the evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group criteria. Results Four studies met the inclusion criteria. These studies used a version of GeneSight that included the CYP2D6, CYP2C19, CYP1A2, SLC6A4, and HTR2A genes; one of the studies also included CYP2C9. Patients who received the GeneSight test to guide psychotropic medication selection had improved response to depression treatment, greater improvements in measures of depression, and greater patient and clinician satisfaction compared with

  16. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through suffi...


  18. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2009-01-01

    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not widely used in practice yet...... and testing is still state of the art in falsification. We show how user-defined abstractions can be integrated into a Java PathFinder setting with tools like AspectJ or Javassist and discuss implications of remaining weaknesses of these tools. We believe that a principled engineering approach to designing...... and implementing abstractions will improve the applicability of model checking in practice....

  19. CD4 enumeration technologies: a systematic review of test performance for determining eligibility for antiretroviral therapy.

    Directory of Open Access Journals (Sweden)

    Rosanna W Peeling

    Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. Studies were identified by searching the electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as % coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which less than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts 350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and 7-68% for downward classification, resulting in overtreatment. Less than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained. A wide range of bias and percent misclassification around treatment thresholds were reported for the CD4 enumeration technologies included in this review, with few studies reporting assay precision. The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.
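    How assay bias and imprecision turn into the misclassification rates reported here can be sketched under a normal measurement-error assumption. All the numbers below are hypothetical, chosen only to show the mechanics:

```python
from math import erf, sqrt

def p_measured_above(true_cd4, threshold, bias, sd):
    """P(measured > threshold) when measured = true + N(bias, sd^2):
    how assay bias and imprecision translate into misclassification
    around a treatment threshold. All numbers here are hypothetical."""
    z = (threshold - (true_cd4 + bias)) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # 1 - Phi(z)

# A patient truly at 330 cells/ul measured on an assay with +20 bias and
# SD 40 has an even chance of being classified above the 350 threshold.
p_up = p_measured_above(330.0, 350.0, bias=20.0, sd=40.0)
```

    With the bias removed, the same patient's upward-misclassification probability drops, which is why the review treats bias and precision as separate, jointly important outcomes.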

  20. CD4 Enumeration Technologies: A Systematic Review of Test Performance for Determining Eligibility for Antiretroviral Therapy

    Science.gov (United States)

    Peeling, Rosanna W.; Sollis, Kimberly A.; Glover, Sarah; Crowe, Suzanne M.; Landay, Alan L.; Cheng, Ben; Barnett, David; Denny, Thomas N.; Spira, Thomas J.; Stevens, Wendy S.; Crowley, Siobhan; Essajee, Shaffiq; Vitoria, Marco; Ford, Nathan

    2015-01-01

    Background Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. Methods and Findings Studies were identified by searching electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as % coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which less than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts 350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and 7-68% for downward classification resulting in overtreatment. Less than half of these studies reported within laboratory precision or reproducibility of the CD4 values obtained. Conclusions A wide range of bias and percent misclassification around treatment thresholds were reported on the CD4 enumeration technologies included in this review, with few studies reporting assay precision. The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed. PMID:25790185

  1. Understanding Systematics in ZZ Ceti Model Fitting to Enable Differential Seismology

    Science.gov (United States)

    Fuchs, J. T.; Dunlap, B. H.; Clemens, J. C.; Meza, J. A.; Dennihy, E.; Koester, D.

    2017-03-01

    We are conducting a large spectroscopic survey of over 130 Southern ZZ Cetis with the Goodman Spectrograph on the SOAR Telescope. Because it employs a single instrument with high UV throughput, this survey will both improve the signal-to-noise of the sample of SDSS ZZ Cetis and provide a uniform dataset for model comparison. We are paying special attention to systematics in the spectral fitting and quantify three of those systematics here. We show that relative positions in the log g -Teff plane are consistent for these three systematics.

  2. Understanding Systematics in ZZ Ceti Model Fitting to Enable Differential Seismology

    CERN Document Server

    Fuchs, J T; Clemens, J C; Meza, J A; Dennihy, E; Koester, D

    2016-01-01

    We are conducting a large spectroscopic survey of over 130 Southern ZZ Cetis with the Goodman Spectrograph on the SOAR Telescope. Because it employs a single instrument with high UV throughput, this survey will both improve the signal-to-noise of the sample of SDSS ZZ Cetis and provide a uniform dataset for model comparison. We are paying special attention to systematics in the spectral fitting and quantify three of those systematics here. We show that relative positions in the $\\log{g}$-$T_{\\rm eff}$ plane are consistent for these three systematics.

  3. Impact of systematic HIV testing on case finding and retention in care at a primary care clinic in South Africa.

    Science.gov (United States)

    Clouse, Kate; Hanrahan, Colleen F; Bassett, Jean; Fox, Matthew P; Sanne, Ian; Van Rie, Annelies

    2014-12-01

    Systematic, opt-out HIV counselling and testing (HCT) may diagnose individuals at lower levels of immunodeficiency but may impact loss to follow-up (LTFU) if healthier people are less motivated to engage and remain in HIV care. We explored LTFU and patient clinical outcomes under two different HIV testing strategies. We compared patient characteristics and retention in care between adults newly diagnosed with HIV by either voluntary counselling and testing (VCT) plus targeted provider-initiated counselling and testing (PITC) or systematic HCT at a primary care clinic in Johannesburg, South Africa. One thousand one hundred and forty-four adults were newly diagnosed by VCT/PITC and 1124 by systematic HCT. Two-thirds of diagnoses were in women. Median CD4 count at HIV diagnosis (251 vs. 264 cells/μl, P = 0.19) and proportion of individuals eligible for antiretroviral therapy (ART) (67.2% vs. 66.7%, P = 0.80) did not differ by HCT strategy. Within 1 year of HIV diagnosis, half were LTFU: 50.5% under VCT/PITC and 49.6% under systematic HCT (P = 0.64). The overall hazard of LTFU was not affected by testing policy (aHR 0.98, 95%CI: 0.87-1.10). Independent of HCT strategy, males, younger adults and those ineligible for ART were at higher risk of LTFU. Implementation of systematic HCT did not increase baseline CD4 count. Overall retention in the first year after HIV diagnosis was low (37.9%), especially among those ineligible for ART, but did not differ by testing strategy. Expansion of HIV testing should coincide with effective strategies to increase retention in care, especially among those not yet eligible for ART at initial diagnosis. © 2014 John Wiley & Sons Ltd.

  4. On the systematic errors of cosmological-scale gravity tests using redshift-space distortion: non-linear effects and the halo bias

    Science.gov (United States)

    Ishikawa, Takashi; Totani, Tomonori; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari

    2014-10-01

    Redshift-space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, fσ8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realizations of 3.4 × 10^8 comoving h^-3 Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z = 0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 × 10^11-2.0 × 10^13 h^-1 M⊙. We find that the systematic error of fσ8 is greatly reduced to the ~5 per cent level when a recently proposed analytical formula of RSD that takes into account the higher-order coupling between the density and velocity fields is adopted, with a scale-dependent parametric bias model. Dependence of the systematic error on the halo mass, the redshift and the maximum wavenumber used in the analysis is discussed. We also find that the Wilson-Hilferty transformation is useful to improve the accuracy of likelihood analysis when only a small number of modes are available in power spectrum measurements.

  5. Unit testing, model validation, and biological simulation

    Science.gov (United States)

    Watts, Mark D.; Ghayoomie, S. Vahid; Larson, Stephen D.; Gerkin, Richard C.

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software that expresses scientific models. PMID:27635225
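The distinction the abstract draws between conventional unit tests and model validation tests can be illustrated with a minimal sketch in Python's `unittest` framework. The simulation function, the quantity simulated, and the experimental bounds below are hypothetical and are not taken from OpenWorm:

```python
import unittest

def simulate_resting_potential():
    """Stand-in for a model simulation; a real project would run a
    C. elegans neuron model here. Returns membrane potential in mV."""
    return -68.0  # hypothetical simulated value

class RestingPotentialValidation(unittest.TestCase):
    """A model validation test: instead of pinning an exact software-level
    return value, it checks the simulation against an experimentally
    plausible range, so it can fail for scientific reasons."""

    def test_resting_potential_in_observed_range(self):
        v = simulate_resting_potential()
        # Hypothetical experimental bounds; real bounds come from data.
        self.assertGreaterEqual(v, -80.0)
        self.assertLessEqual(v, -50.0)
```

Run with `python -m unittest`; a conventional unit test of the same code would instead assert an exact value, which is appropriate for software correctness but not for scientific validity.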

  6. A Specification Test of Stochastic Diffusion Models

    Institute of Scientific and Technical Information of China (English)

    Shu-lin ZHANG; Zheng-hong WEI; Qiu-xiang BI

    2013-01-01

    In this paper, we propose a hypothesis testing approach to checking model mis-specification in continuous-time stochastic diffusion models. The key idea behind the development of our test statistic is rooted in the generalized information equality in the context of martingale estimating equations. We propose a bootstrap resampling method to implement the proposed diagnostic procedure numerically. Through intensive simulation studies, we show that our approach performs well in terms of type I error control, power, and computational efficiency.

  7. Testing cosmological models with COBE data

    Energy Technology Data Exchange (ETDEWEB)

    Torres, S. [Observatorio Astronomico, Bogotá (Colombia); Centro Internacional de Fisica, Bogotá (Colombia)]; Cayon, L. [Lawrence Berkeley Laboratory and Center for Particle Astrophysics, Berkeley (United States)]; Martinez-Gonzalez, E.; Sanz, J. L. [Santander, Univ. de Cantabria (Spain). Instituto de Fisica, Consejo Superior de Investigaciones Cientificas]

    1997-02-01

    The authors test cosmological models with Ω < 1 using the COBE two-year cross-correlation function by means of a maximum-likelihood test with Monte Carlo realizations of several Ω models. Assuming a Harrison-Zel'dovich primordial power spectrum with amplitude ∝ Q, it is found that there is a large region in the (Ω, Q) parameter space that fits the data equally well. They find that the flatness of the universe is not implied by the data. A summary of other analyses of COBE data to constrain the shape of the primordial spectrum is presented.

  8. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang

    2014-01-01

    This book presents a scientific discussion of the state-of-the-art techniques and designs for the modeling, testing, and performance analysis of data converters. The focus is on sustainable data conversion: sustainability has become a public issue that industries and users cannot ignore, and devising environmentally friendly solutions for designing, modeling, and testing data converters is now a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.

  9. Experimental Concepts for Testing Seismic Hazard Models

    Science.gov (United States)

    Marzocchi, W.; Jordan, T. H.

    2015-12-01

    Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.

  10. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs (not without its own issues) may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.

  11. [Thurstone model application to difference sensory tests].

    Science.gov (United States)

    Angulo, Ofelia; O'Mahony, Michael

    2009-12-01

    Part of understanding why judges perform better on some difference tests than others requires an understanding of how information coming from the mouth to the brain is processed. For some tests it is processed more efficiently than others. This is described by what has been called Thurstonian modeling. This brief review introduces the concepts and ideas involved in Thurstonian modeling as applied to sensory difference measurement. It summarizes the literature concerned with the theorizing and confirmation of Thurstonian models. It introduces the important concept of stimulus variability and the fundamental measure of sensory difference: d'. It indicates how the paradox of discriminatory non-discriminators, which had puzzled researchers for years, can be simply explained using the model. It considers how memory effects and the complex interactions in the mouth can reduce d' by increasing the variance of sensory distributions.
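For the simplest yes-no protocol, the sensory difference measure d' described above can be computed from hit and false-alarm rates with the inverse normal CDF. This sketch uses only Python's standard library, and the example rates are hypothetical; more complex protocols such as the triangle or duo-trio test require different Thurstonian models:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection index of sensory difference for a yes-no task:
    d' = z(hit rate) - z(false-alarm rate), where z is the inverse
    standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates: judges call "different" on 84% of truly different
# pairs and on 16% of identical pairs.
# d_prime(0.84, 0.16) ≈ 1.99
```

A d' of 0 means the judge cannot distinguish the stimuli at all; larger d' means a larger perceived sensory difference relative to the variance of the sensory distributions.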

  12. Systematic study of polycrystalline flow during tension test of sheet 304 austenitic stainless steel at room temperature

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Andrade, Juan D., E-mail: jdma@correo.azc.uam.mx [Departamento de Materiales, División de Ciencias Básicas e Ingeniería, Universidad Autónoma Metropolitana Unidad Azcapotzalco, Av. San Pablo No. 180, Colonia Reynosa Tamaulipas, C.P. 02200, México Distrito Federal (Mexico)

    2013-12-16

    Through systematic study, a mapping of the polycrystalline flow of sheet 304 austenitic stainless steel (ASS) during tension tests at constant crosshead velocity at room temperature was obtained. The main result establishes that the trajectory of crystals in the polycrystalline spatially extended system (PCSES) during the irreversible deformation process obeys a hyperbolic motion: the ratio between the expansion velocity of the field and the velocity of the field source is not constant, and the field lines of the crystal trajectories become curved; such accelerated motion is called hyperbolic. This behavior is assisted by dislocation dynamics and a self-accommodation process between crystals in the PCSES. Furthermore, by applying the quantum-mechanical and relativistic model proposed by Muñoz-Andrade, the activation energy for polycrystalline flow during the tension test of 304 ASS was calculated globally for each instant. In conclusion, it was established that mapping the polycrystalline flow is fundamental for describing in an integral way the phenomenology and mechanics of irreversible deformation processes.

  13. Electroweak tests of the Standard Model

    CERN Document Server

    Erler, Jens

    2012-01-01

    Electroweak precision tests of the Standard Model of the fundamental interactions are reviewed ranging from the lowest to the highest energy experiments. Results from global fits are presented with particular emphasis on the extraction of fundamental parameters such as the Fermi constant, the strong coupling constant, the electroweak mixing angle, and the mass of the Higgs boson. Constraints on physics beyond the Standard Model are also discussed.

  14. Tests of the Electroweak Standard Model

    CERN Document Server

    Erler, Jens

    2012-01-01

    Electroweak precision tests of the Standard Model of the fundamental interactions are reviewed ranging from the lowest to the highest energy experiments. Results from global fits are presented with particular emphasis on the extraction of fundamental parameters such as the Fermi constant, the strong coupling constant, the electroweak mixing angle, and the mass of the Higgs boson. Constraints on physics beyond the Standard Model are also discussed.

  15. Testing mechanistic models of growth in insects

    OpenAIRE

    Maino, James L.; Kearney, Michael R.

    2015-01-01

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory to many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models.

  16. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  17. The reliability of physical examination tests for the diagnosis of anterior cruciate ligament rupture--A systematic review.

    Science.gov (United States)

    Lange, Toni; Freiberg, Alice; Dröge, Patrik; Lützner, Jörg; Schmitt, Jochen; Kopkow, Christian

    2015-06-01

    Systematic literature review. Despite their frequent application in routine care, a systematic review on the reliability of clinical examination tests to evaluate the integrity of the ACL is missing. To summarize and evaluate intra- and interrater reliability research on physical examination tests used for the diagnosis of ACL tears. A comprehensive systematic literature search was conducted in MEDLINE, EMBASE and AMED until May 30th 2013. Studies were included if they assessed the intra- and/or interrater reliability of physical examination tests for the integrity of the ACL. Methodological quality was evaluated with the Quality Appraisal of Reliability Studies (QAREL) tool by two independent reviewers. The search yielded 110 hits, of which seven articles finally met the inclusion criteria. These studies examined the reliability of four physical examination tests. Intrarater reliability was assessed in three studies and ranged from fair to almost perfect (Cohen's k = 0.22-1.00). Interrater reliability was assessed in all included studies and ranged from slight to almost perfect (Cohen's k = 0.02-0.81). The Lachman test is the physical test with the highest intrarater reliability (Cohen's k = 1.00); the Lachman test performed in prone position is the test with the highest interrater reliability (Cohen's k = 0.81). Included studies were partly of low methodological quality. A meta-analysis could not be performed due to the heterogeneity in study populations, reliability measures and methodological quality of included studies. Systematic investigations on the reliability of physical examination tests to assess the integrity of the ACL are scarce and of varying methodological quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
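Cohen's kappa, the agreement statistic used throughout this review, corrects observed rater agreement for the agreement expected by chance from the raters' marginal rates. A minimal implementation, with hypothetical example data not drawn from the included studies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters rating the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the marginal rates.
    (Undefined when p_e == 1, i.e. both raters use a single category.)"""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

For example, two examiners who agree on 35 of 50 tests, but whose marginal rates already predict 50% chance agreement, get kappa = 0.4, considerably less impressive than the 70% raw agreement might suggest.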

  18. Rank-Defect Adjustment Model for Survey-Line Systematic Errors in Marine Survey Net

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, the structure of systematic and random errors in a marine survey net is discussed in detail, and an adjustment method for observations of the marine survey net is studied, in which the rank-defect characteristic is identified for the first time. On the basis of the survey-line systematic error model, the formulae of the rank-defect adjustment model are deduced according to modern adjustment theory. An example calculation with real observed data is carried out to demonstrate the efficiency of this adjustment model. Moreover, it is proved that the semi-systematic error correction method currently used in marine gravimetry in China is a special case of the adjustment model presented in this paper.

  19. Incentivizing Blood Donation: Systematic Review and Meta-Analysis to Test Titmuss’ Hypotheses

    Science.gov (United States)

    2013-01-01

    Objectives: Titmuss hypothesized that paying blood donors would reduce the quality of the blood donated and would be economically inefficient. We report here the first systematic review to test these hypotheses, reporting on both financial and nonfinancial incentives. Method: Studies deemed eligible for inclusion were peer-reviewed, experimental studies that presented data on the quantity (as a proxy for efficiency) and quality of blood donated in at least two groups: those donating blood when offered an incentive, and those donating blood with no offer of an incentive. The following were searched: MEDLINE, EMBASE and PsycINFO using OVID SP, CINAHL via EBSCO and CENTRAL, the Cochrane Library, Econlit via EBSCO, JSTOR Health and General Science Collection, and Google. Results: The initial search yielded 1100 abstracts, which resulted in 89 full papers being assessed for eligibility, of which seven studies, reported in six papers, met the inclusion criteria. The included studies involved 93,328 participants. Incentives had no impact on the likelihood of donation (OR = 1.22, 95% CI 0.91–1.63; p = .19). There was no difference between financial and nonfinancial incentives in the quantity of blood donated. Of the two studies that assessed quality of blood, one found no effect and the other found an adverse effect from the offer of a free cholesterol test (β = 0.011, p < .05). Conclusion: The limited evidence suggests that Titmuss' hypothesis of the economic inefficiency of incentives is correct. There is insufficient evidence to assess their likely impact on the quality of the blood provided. PMID:24001244

  20. Incentivizing blood donation: systematic review and meta-analysis to test Titmuss' hypotheses.

    Science.gov (United States)

    Niza, Claudia; Tung, Burcu; Marteau, Theresa M

    2013-09-01

    Titmuss hypothesized that paying blood donors would reduce the quality of the blood donated and would be economically inefficient. We report here the first systematic review to test these hypotheses, reporting on both financial and nonfinancial incentives. Studies deemed eligible for inclusion were peer-reviewed, experimental studies that presented data on the quantity (as a proxy for efficiency) and quality of blood donated in at least two groups: those donating blood when offered an incentive, and those donating blood with no offer of an incentive. The following were searched: MEDLINE, EMBASE and PsycINFO using OVID SP, CINAHL via EBSCO and CENTRAL, the Cochrane Library, Econlit via EBSCO, JSTOR Health and General Science Collection, and Google. The initial search yielded 1100 abstracts, which resulted in 89 full papers being assessed for eligibility, of which seven studies, reported in six papers, met the inclusion criteria. The included studies involved 93,328 participants. Incentives had no impact on the likelihood of donation (OR = 1.22, 95% CI 0.91-1.63; p = .19). There was no difference between financial and nonfinancial incentives in the quantity of blood donated. Of the two studies that assessed quality of blood, one found no effect and the other found an adverse effect from the offer of a free cholesterol test (β = 0.011, p < .05). The limited evidence suggests that Titmuss' hypothesis of the economic inefficiency of incentives is correct. There is insufficient evidence to assess their likely impact on the quality of the blood provided. PsycINFO Database Record (c) 2013 APA, all rights reserved.
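The pooled effect quoted in the abstract (an odds ratio of 1.22 whose 95% confidence interval spans 1) is a meta-analytic estimate; for a single study, an odds ratio and Wald-type confidence interval can be computed from a 2×2 table as below. The counts are made up, and a real meta-analysis would additionally weight studies, e.g. by inverse variance or Mantel-Haenszel:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = donors / non-donors offered an incentive;
    c, d = donors / non-donors in the control group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

When the interval contains 1, as in the review's pooled result, the data are consistent with the incentive having no effect on the odds of donating.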

  1. Timed Up and Go test and risk of falls in older adults: a systematic review.

    Science.gov (United States)

    Beauchet, O; Fantino, B; Allali, G; Muir, S W; Montero-Odasso, M; Annweiler, C

    2011-12-01

    To assess the association and the predictive ability of the Timed Up and Go test (TUG) on the occurrence of falls among people aged 65 and older. A systematic English Medline literature search was conducted on November 30, 2009 with no limit of date using the following Medical Subject Heading (MeSH) terms "Aged OR aged, 80 and over" AND "Accidental falls" combined with the terms "Timed Up and Go" OR "Get Up and Go". The search also included the Cochrane library and the reference lists of the retrieved articles. Of the 92 selected studies, 11 met the selection criteria and were included in the final analysis. Fall rate ranged from 7.5 to 60.0% in the selected studies. The cut-off time separating non-fallers and fallers varied from 10 to 32.6 seconds. All retrospective studies showed a significant positive association between the time taken to perform the TUG and a history of falls with the highest odds ratio (OR) calculated at 42.3 [5.1 - 346.9]. In contrast, only one prospective study found a significant association with the occurrence of future falls. This association with incident falls was lower than in retrospective studies. Although retrospective studies found that the TUG time performance is associated with a past history of falls, its predictive ability for future falls remains limited. In addition, standardization of testing conditions combined with a control of the significant potential confounders (age, female gender and comorbidities) would provide better information about the TUG predictive value for future falls in older adults.

  2. Modeling nonignorable missing data in speeded tests

    NARCIS (Netherlands)

    Glas, Cees A.W.; Pimentel, Jonald L.

    2008-01-01

    In tests with time limits, items at the end are often not reached. Usually, the pattern of missing responses depends on the ability level of the respondents; therefore, missing data are not ignorable in statistical inference. This study models data using a combination of two item response theory (IRT) models.

  3. Mechanism test bed. Flexible body model report

    Science.gov (United States)

    Compton, Jimmy

    1991-01-01

    The Space Station Mechanism Test Bed is a six degree-of-freedom motion simulation facility used to evaluate docking and berthing hardware mechanisms. A generalized rigid body math model was developed which allowed the computation of vehicle relative motion in six DOF due to forces and moments from mechanism contact, attitude control systems, and gravity. No vehicle size limitations were imposed in the model. The equations of motion were based on Hill's equations for translational motion with respect to a nominal circular earth orbit and Newton-Euler equations for rotational motion. This rigid body model and supporting software were being refined.

  4. Testing models of tree canopy structure

    Energy Technology Data Exchange (ETDEWEB)

    Martens, S.N. (Los Alamos National Laboratory, NM (United States))

    1994-06-01

    Models of tree canopy structure are difficult to test because of a lack of data which are suitably detailed. Previously, I have made three-dimensional reconstructions of individual trees from measured data. These reconstructions have been used to test assumptions about the dispersion of canopy elements in two- and three-dimensional space. Lacunarity analysis has also been used to describe the texture of the reconstructed canopies. Further tests have been made of models of tree branching structure. Results using probability distribution functions for branching measured from real trees show that branching in Juglans is not Markovian. Specific constraints or rules are necessary to achieve simulations of branching structure which are faithful to the originally measured trees.

  5. Modeling of novel diagnostic strategies for active tuberculosis - a systematic review: current practices and recommendations.

    Directory of Open Access Journals (Sweden)

    Alice Zwerling

    The field of diagnostics for active tuberculosis (TB) is rapidly developing. TB diagnostic modeling can help to inform policy makers and support complicated decisions on diagnostic strategy, with important budgetary implications. Demand for TB diagnostic modeling is likely to increase, and an evaluation of current practice is important. We aimed to systematically review all studies employing mathematical modeling to evaluate the cost-effectiveness or epidemiological impact of novel diagnostic strategies for active TB. Pubmed, personal libraries and reference lists were searched to identify eligible papers. We extracted data on a wide variety of model structures, parameter choices, sensitivity analyses and study conclusions, which were discussed during a meeting of content experts. From 5619 records a total of 36 papers were included in the analysis. Sixteen papers included population impact/transmission modeling, 5 were health systems models, and 24 included estimates of cost-effectiveness. Transmission and health systems models included specific structure to explore the importance of the diagnostic pathway (n = 4), key determinants of diagnostic delay (n = 5), operational context (n = 5), and the pre-diagnostic infectious period (n = 1). The majority of models implemented sensitivity analysis, although only 18 studies described multi-way sensitivity analysis of more than 2 parameters simultaneously. Among the models used to make cost-effectiveness estimates, the most frequently studied diagnostic assays included Xpert MTB/RIF (n = 7) and alternative nucleic acid amplification tests (NAATs) (n = 4). Most (n = 16) of the cost-effectiveness models compared new assays to an existing baseline and generated an incremental cost-effectiveness ratio (ICER). Although models have addressed a small number of important issues, many decisions regarding the implementation of TB diagnostics are being made without the full benefits of insight from mathematical modeling.
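The ICER the review refers to is a simple ratio of incremental cost to incremental effect, but the direction of the comparison matters. A minimal sketch with made-up costs and effects:

```python
def icer(cost_new, effect_new, cost_base, effect_base):
    """Incremental cost-effectiveness ratio: additional cost per additional
    unit of health effect (e.g. per DALY averted) of the new strategy
    relative to the baseline. Assumes the new strategy is both more costly
    and more effective; in the other quadrants one strategy simply
    dominates and no ratio is needed."""
    return (cost_new - cost_base) / (effect_new - effect_base)

# Hypothetical: a new assay costs $1000 and averts 12 DALYs per cohort,
# versus $400 and 10 DALYs for the baseline algorithm.
# icer(1000, 12, 400, 10) → $300 per additional DALY averted
```

The resulting figure is then compared against a willingness-to-pay threshold to decide whether the new diagnostic is cost-effective.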

  6. Introducing malaria rapid diagnostic tests in private medicine retail outlets: A systematic literature review

    Science.gov (United States)

    Visser, Theodoor; Bruxvoort, Katia; Maloney, Kathleen; Leslie, Toby; Barat, Lawrence M.; Allan, Richard; Ansah, Evelyn K.; Anyanti, Jennifer; Boulton, Ian; Clarke, Siân E.; Cohen, Jessica L.; Cohen, Justin M.; Cutherell, Andrea; Dolkart, Caitlin; Eves, Katie; Fink, Günther; Goodman, Catherine; Hutchinson, Eleanor; Lal, Sham; Mbonye, Anthony; Onwujekwe, Obinna; Petty, Nora; Pontarollo, Julie; Poyer, Stephen; Schellenberg, David; Streat, Elizabeth; Ward, Abigail; Wiseman, Virginia; Whitty, Christopher J. M.; Yeung, Shunmay; Cunningham, Jane; Chandler, Clare I. R.

    2017-01-01

    Background Many patients with malaria-like symptoms seek treatment in private medicine retail outlets (PMR) that distribute malaria medicines but do not traditionally provide diagnostic services, potentially leading to overtreatment with antimalarial drugs. To achieve universal access to prompt parasite-based diagnosis, many malaria-endemic countries are considering scaling up malaria rapid diagnostic tests (RDTs) in these outlets, an intervention that may require legislative changes and major investments in supporting programs and infrastructures. This review identifies studies that introduced malaria RDTs in PMRs and examines study outcomes and success factors to inform scale up decisions. Methods Published and unpublished studies that introduced malaria RDTs in PMRs were systematically identified and reviewed. Literature published before November 2016 was searched in six electronic databases, and unpublished studies were identified through personal contacts and stakeholder meetings. Outcomes were extracted from publications or provided by principal investigators. Results Six published and six unpublished studies were found. Most studies took place in sub-Saharan Africa and were small-scale pilots of RDT introduction in drug shops or pharmacies. None of the studies assessed large-scale implementation in PMRs. RDT uptake varied widely from 8%-100%. Provision of artemisinin-based combination therapy (ACT) for patients testing positive ranged from 30%-99%, and was more than 85% in five studies. Of those testing negative, provision of antimalarials varied from 2%-83% and was less than 20% in eight studies. Longer provider training, lower RDT retail prices and frequent supervision appeared to have a positive effect on RDT uptake and provider adherence to test results. Performance of RDTs by PMR vendors was generally good, but disposal of medical waste and referral of patients to public facilities were common challenges. Conclusions Expanding services of PMRs to

  7. Introducing malaria rapid diagnostic tests in private medicine retail outlets: A systematic literature review.

    Science.gov (United States)

    Visser, Theodoor; Bruxvoort, Katia; Maloney, Kathleen; Leslie, Toby; Barat, Lawrence M; Allan, Richard; Ansah, Evelyn K; Anyanti, Jennifer; Boulton, Ian; Clarke, Siân E; Cohen, Jessica L; Cohen, Justin M; Cutherell, Andrea; Dolkart, Caitlin; Eves, Katie; Fink, Günther; Goodman, Catherine; Hutchinson, Eleanor; Lal, Sham; Mbonye, Anthony; Onwujekwe, Obinna; Petty, Nora; Pontarollo, Julie; Poyer, Stephen; Schellenberg, David; Streat, Elizabeth; Ward, Abigail; Wiseman, Virginia; Whitty, Christopher J M; Yeung, Shunmay; Cunningham, Jane; Chandler, Clare I R

    2017-01-01

    Many patients with malaria-like symptoms seek treatment in private medicine retail outlets (PMR) that distribute malaria medicines but do not traditionally provide diagnostic services, potentially leading to overtreatment with antimalarial drugs. To achieve universal access to prompt parasite-based diagnosis, many malaria-endemic countries are considering scaling up malaria rapid diagnostic tests (RDTs) in these outlets, an intervention that may require legislative changes and major investments in supporting programs and infrastructures. This review identifies studies that introduced malaria RDTs in PMRs and examines study outcomes and success factors to inform scale up decisions. Published and unpublished studies that introduced malaria RDTs in PMRs were systematically identified and reviewed. Literature published before November 2016 was searched in six electronic databases, and unpublished studies were identified through personal contacts and stakeholder meetings. Outcomes were extracted from publications or provided by principal investigators. Six published and six unpublished studies were found. Most studies took place in sub-Saharan Africa and were small-scale pilots of RDT introduction in drug shops or pharmacies. None of the studies assessed large-scale implementation in PMRs. RDT uptake varied widely from 8%-100%. Provision of artemisinin-based combination therapy (ACT) for patients testing positive ranged from 30%-99%, and was more than 85% in five studies. Of those testing negative, provision of antimalarials varied from 2%-83% and was less than 20% in eight studies. Longer provider training, lower RDT retail prices and frequent supervision appeared to have a positive effect on RDT uptake and provider adherence to test results. Performance of RDTs by PMR vendors was generally good, but disposal of medical waste and referral of patients to public facilities were common challenges. 
Expanding services of PMRs to include malaria diagnostic services may hold

  8. Testing mechanistic models of growth in insects.

    Science.gov (United States)

    Maino, James L; Kearney, Michael R

    2015-11-22

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory to many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature, confirming that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg⁻¹) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes.
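
    The contrast drawn in this abstract, between near-exponential insect growth and the decelerating trajectories of classical growth models, can be illustrated numerically. The sketch below (illustrative parameter values, not those fitted in the paper) integrates two toy growth laws with forward Euler: a von Bertalanffy-type law, in which surface-limited assimilation (∝ m^(2/3)) is offset by mass-proportional maintenance, and a law in which net assimilation scales with mass itself.

```python
def simulate(rate_fn, m0=1.0, dt=0.01, steps=1000):
    """Forward-Euler integration of dm/dt = rate_fn(m)."""
    m = m0
    trajectory = [m]
    for _ in range(steps):
        m += rate_fn(m) * dt
        trajectory.append(m)
    return trajectory

# von Bertalanffy-type: assimilation limited by surface area (~ m^(2/3)),
# maintenance proportional to mass -> growth decelerates toward m = 27,
# the equilibrium of 3*m^(2/3) = m with these illustrative coefficients
von_bert = simulate(lambda m: 3.0 * m ** (2 / 3) - 1.0 * m)

# mass-proportional net assimilation -> near-exponential growth
exponential = simulate(lambda m: 0.5 * m)
```

    Plotting the two trajectories shows the qualitative difference the abstract describes: the first curve flattens out while the second keeps accelerating.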

  9. A systematic review of the diagnostic accuracy of provocative tests of the neck for diagnosing cervical radiculopathy

    OpenAIRE

    2006-01-01

    Clinical provocative tests of the neck, which position the neck and arm in order to aggravate or relieve arm symptoms, are commonly used in clinical practice in patients with a suspected cervical radiculopathy. Their diagnostic accuracy, however, has never been examined in a systematic review. A comprehensive search was conducted in order to identify all possible studies fulfilling the inclusion criteria. A study was included if: (1) any provocative test of the neck for diagnosing cervical rad...

  10. Testing LSST Dither Strategies for Survey Uniformity and Large-scale Structure Systematics

    Science.gov (United States)

    Awan, Humna; Gawiser, Eric; Kurczynski, Peter; Jones, R. Lynne; Zhan, Hu; Padilla, Nelson D.; Muñoz Arancibia, Alejandra M.; Orsi, Alvaro; Cora, Sofía A.; Yoachim, Peter

    2016-09-01

    The Large Synoptic Survey Telescope (LSST) will survey the southern sky from 2022-2032 with unprecedented detail. Since the observing strategy can lead to artifacts in the data, we investigate the effects of telescope-pointing offsets (called dithers) on the r-band coadded 5σ depth yielded after the 10-year survey. We analyze this survey depth for several geometric patterns of dithers (e.g., random, hexagonal lattice, spiral) with amplitudes as large as the radius of the LSST field of view, implemented on different timescales (per season, per night, per visit). Our results illustrate that per night and per visit dither assignments are more effective than per season assignments. Also, we find that some dither geometries (e.g., hexagonal lattice) are particularly sensitive to the timescale on which the dithers are implemented, while others like random dithers perform well on all timescales. We then model the propagation of depth variations to artificial fluctuations in galaxy counts, which are a systematic for LSS studies. We calculate the bias in galaxy counts caused by the observing strategy, accounting for photometric calibration uncertainties, dust extinction, and magnitude cuts; uncertainties in this bias limit our ability to account for structure induced by the observing strategy. We find that after 10 years of the LSST survey, the best dither strategies lead to uncertainties in this bias that are smaller than the minimum statistical floor for a galaxy catalog as deep as r < 25.7 after the first year of survey.
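
    Two of the dither geometries named above (random and hexagonal lattice) are easy to prototype. The sketch below is illustrative only, not the LSST Operations Simulator implementation; the 1.75-degree field-of-view radius is an assumed value. A per-visit random dither is drawn uniformly by area from the field-of-view disc, and a hexagonal lattice of offsets is clipped to the same disc.

```python
import math
import random

FOV_RADIUS_DEG = 1.75  # assumed LSST field-of-view radius, in degrees

def random_dither(rng, max_amp=FOV_RADIUS_DEG):
    """One offset drawn uniformly (by area) from a disc of radius max_amp."""
    r = max_amp * math.sqrt(rng.random())   # sqrt makes the sampling area-uniform
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return r * math.cos(theta), r * math.sin(theta)

def hex_lattice_dithers(spacing, max_amp=FOV_RADIUS_DEG):
    """Hexagonal-lattice offsets clipped to a disc of radius max_amp."""
    points = []
    n = int(max_amp / spacing) + 1
    for row in range(-n, n + 1):
        y = row * spacing * math.sqrt(3.0) / 2.0
        x_shift = spacing / 2.0 if row % 2 else 0.0  # stagger alternate rows
        for col in range(-n, n + 1):
            x = x_shift + col * spacing
            if math.hypot(x, y) <= max_amp:
                points.append((x, y))
    return points
```

    A per-night or per-season strategy would simply reuse one such offset for all visits in that period, while a per-visit strategy draws a fresh one each time.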

  11. Interpretation of test data with dynamic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Biba, P. [Southern California Edison, San Clemente, CA (United States). San Onofre Nuclear Generating Station

    1999-11-01

    The in-service testing of many important-to-safety components, such as valves and pumps, is often performed while the plant is shut down or the particular system is in a test mode. Thus the test conditions may differ from the actual operating conditions under which the components would be required to operate. In addition, the components must function under various postulated accident scenarios, which cannot be duplicated during plant normal operation. This paper deals with the method of interpreting the test data with a dynamic model, which allows the evaluation of the many factors affecting system performance, in order to assure component and system operability.

  12. The Chain Information Model: a systematic approach for food product development

    NARCIS (Netherlands)

    Benner, M.

    2005-01-01

    The chain information model has been developed to increase the success rate of new food products. The uniqueness of this approach is that it approaches the problem from a chain perspective and starts with the consumer. The model can be used to analyse the production chain in a systematic way. This

  13. A Demonstration of a Systematic Item-Reduction Approach Using Structural Equation Modeling

    Science.gov (United States)

    Larwin, Karen; Harvey, Milton

    2012-01-01

    Establishing model parsimony is an important component of structural equation modeling (SEM). Unfortunately, little attention has been given to developing systematic procedures to accomplish this goal. To this end, the current study introduces an innovative application of the jackknife approach first presented in Rensvold and Cheung (1999). Unlike…

  14. The Chain Information Model: a systematic approach for food product development

    NARCIS (Netherlands)

    Benner, M.

    2005-01-01

    The chain information model has been developed to increase the success rate of new food products. The uniqueness of this approach is that it approaches the problem from a chain perspective and starts with the consumer. The model can be used to analyse the production chain in a systematic way. This r

  15. A Digital Tool Set for Systematic Model Design in Process-Engineering Education

    Science.gov (United States)

    van der Schaaf, Hylke; Tramper, Johannes; Hartog, Rob J.M.; Vermue, Marian

    2006-01-01

    One of the objectives of the process technology curriculum at Wageningen University is that students learn how to design mathematical models in the context of process engineering, using a systematic problem analysis approach. Students find it difficult to learn to design a model and little material exists to meet this learning objective. For these…

  16. A digital tool set for systematic model design in process-engineering education

    NARCIS (Netherlands)

    Schaaf, van der H.; Tramper, J.; Hartog, R.J.M.; Vermuë, M.H.

    2006-01-01

    One of the objectives of the process technology curriculum at Wageningen University is that students learn how to design mathematical models in the context of process engineering, using a systematic problem analysis approach. Students find it difficult to learn to design a model and little material

  17. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e. m(

  18. Parametric Testing of Launch Vehicle FDDR Models

    Science.gov (United States)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar

    2011-01-01

    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we use multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
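
    The paper's own generator is not public, but the combination of Monte Carlo sampling with n-factor combinatorial coverage that the abstract describes can be sketched as follows (a simplified illustration with hypothetical parameter names): random full parameter vectors are drawn, and a vector is kept only if it covers at least one not-yet-covered n-way value combination, until every combination is covered.

```python
import itertools
import random

def covering_suite(params, n=2, seed=0, max_iter=10000):
    """Monte Carlo construction of a test suite with n-factor combinatorial
    coverage: params maps each parameter name to its list of values."""
    rng = random.Random(seed)
    names = sorted(params)
    # every n-way value combination that must appear in some test vector
    uncovered = set()
    for combo in itertools.combinations(names, n):
        for values in itertools.product(*(params[k] for k in combo)):
            uncovered.add((combo, values))
    suite = []
    for _ in range(max_iter):
        if not uncovered:
            break
        vec = {k: rng.choice(params[k]) for k in names}
        hit = {(c, tuple(vec[k] for k in c))
               for c in itertools.combinations(names, n)}
        if hit & uncovered:          # keep only vectors that add coverage
            suite.append(vec)
            uncovered -= hit
    return suite
```

    For realistic parameter spaces such a pairwise (n = 2) suite is far smaller than the exhaustive cross product, which is what makes testing a large simulation tractable.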

  19. Systematic assignment of thermodynamic constraints in metabolic network models

    NARCIS (Netherlands)

    Kümmel, Anne; Panke, Sven; Heinemann, Matthias

    2006-01-01

    Background: The availability of genome sequences for many organisms enabled the reconstruction of several genome-scale metabolic network models. Currently, significant efforts are put into the automated reconstruction of such models. For this, several computational tools have been developed that par

  20. Systematic evaluation of land use regression models for NO₂

    NARCIS (Netherlands)

    Wang, M.; Beelen, R.M.J.; Eeftens, M.R.; Meliefste, C.; Hoek, G.; Brunekreef, B.

    2012-01-01

    Land use regression (LUR) models have become popular to explain the spatial variation of air pollution concentrations. Independent evaluation is important. We developed LUR models for nitrogen dioxide (NO₂) using measurements conducted at 144 sampling sites in The Netherlands. Sites were randomly

  1. Model uncertainty and systematic risk in US banking

    NARCIS (Netherlands)

    Baele, L.T.M.; De Bruyckere, Valerie; De Jonghe, O.G.; Vander Vennet, Rudi

    2015-01-01

    This paper uses Bayesian Model Averaging to examine the driving factors of equity returns of US Bank Holding Companies. BMA has the advantage over OLS that it accounts for the considerable uncertainty about the correct set (model) of bank risk factors. We find that out of a broad set of 12 risk fa

  2. Systematic modeling for free stators of rotary piezoelectric ultrasonic motors

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, Rouzbeh; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    An equivalent circuit model with complex elements is presented in this paper to describe the free stator model of traveling wave piezoelectric motors. The mechanical, dielectric and piezoelectric losses associated with the vibrator are considered by introducing the imaginary part to the equivalent...

  3. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
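
    The "goodness of fit" criteria in standards such as ASHRAE Guideline 14 are commonly expressed as the normalized mean bias error (NMBE) and the coefficient of variation of the RMSE (CV(RMSE)) between model predictions and the utility data. A minimal sketch of the two metrics, shown here in simplified form without the degrees-of-freedom adjustment the guideline applies:

```python
import math

def nmbe(measured, predicted):
    """Normalized mean bias error, in percent. Sign convention here:
    positive when the model under-predicts on average."""
    n = len(measured)
    mean = sum(measured) / n
    bias = sum(m - p for m, p in zip(measured, predicted)) / n
    return 100.0 * bias / mean

def cvrmse(measured, predicted):
    """Coefficient of variation of the root-mean-square error, in percent."""
    n = len(measured)
    mean = sum(measured) / n
    mse = sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n
    return 100.0 * math.sqrt(mse) / mean
```

    The surrogate-data method described above would compute these against simulator-generated bills rather than real ones, which is precisely what lets closure on the "true" inputs be checked as well.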

  4. Comparison of a full systematic review versus rapid review approaches to assess a newborn screening test for tyrosinemia type 1.

    Science.gov (United States)

    Taylor-Phillips, Sian; Geppert, Julia; Stinton, Chris; Freeman, Karoline; Johnson, Samantha; Fraser, Hannah; Sutcliffe, Paul; Clarke, Aileen

    2017-07-13

    Rapid reviews are increasingly used to replace/complement systematic reviews to support evidence-based decision-making. Little is known about how this expedited process affects results. To assess differences between rapid and systematic review approaches for a case study of test accuracy of succinylacetone for detecting tyrosinemia type 1. Two reviewers conducted an "enhanced" rapid review then a systematic review. The enhanced rapid review involved narrower searches, a single reviewer checking 20% of titles/abstracts and data extraction, and quality assessment using an unadjusted QUADAS-2. Two reviewers performed the systematic review with a tailored QUADAS-2. Post hoc analysis examined rapid reviewing with a single reviewer (basic rapid review). Ten papers were included. Basic rapid reviews would have missed 1 or 4 of these (dependent on which reviewer). Enhanced rapid and systematic reviews identified all 10 papers; one paper was only identified in the rapid review through reference checking. Two thousand one hundred seventy-six fewer title/abstracts and 129 fewer full texts were screened during the enhanced rapid review than the systematic review. The unadjusted QUADAS-2 generated more "unclear" ratings than the adjusted QUADAS-2 [29/70 (41.4%) versus 16/70 (22.9%)], and fewer "high" ratings [22/70 (31.4%) versus 42/70 (60.0%)]. Basic rapid reviews contained important inaccuracies in data extraction, which were detected by a second reviewer in the enhanced rapid and systematic reviews. Enhanced rapid reviews with 20% checking by a second reviewer may be an appropriate tool for policymakers to expeditiously assess evidence. Basic rapid reviews (single reviewer) have higher risks of important inaccuracies and omissions. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Risk models to predict hypertension: a systematic review.

    Directory of Open Access Journals (Sweden)

    Justin B Echouffo-Tcheugui

    BACKGROUND: As well as being a risk factor for cardiovascular disease, hypertension is also a health condition in its own right. Risk prediction models may be of value in identifying those individuals at risk of developing hypertension who are likely to benefit most from interventions. METHODS AND FINDINGS: To synthesize existing evidence on the performance of these models, we searched MEDLINE and EMBASE; examined bibliographies of retrieved articles; contacted experts in the field; and searched our own files. Dual review of identified studies was conducted. Included studies had to report on the development, validation, or impact analysis of a hypertension risk prediction model. For each publication, information was extracted on study design and characteristics, predictors, model discrimination, calibration and reclassification ability, validation and impact analysis. Eleven studies reporting on 15 different hypertension prediction risk models were identified. Age, sex, body mass index, diabetes status, and blood pressure variables were the most common predictor variables included in models. Most risk models had acceptable-to-good discriminatory ability (C-statistic > 0.70) in the derivation sample. Calibration was less commonly assessed, but overall acceptable. Two hypertension risk models, the Framingham and Hopkins, have been externally validated, displaying acceptable-to-good discrimination, with C-statistics ranging from 0.71 to 0.81. Lack of individual-level data precluded analyses of the risk models in subgroups. CONCLUSIONS: The discrimination ability of existing hypertension risk prediction tools is acceptable, but the impact of using these tools on prescriptions and outcomes of hypertension prevention is unclear.
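
    The C-statistic reported throughout this review has a simple rank-based interpretation: the probability that a randomly chosen person who developed hypertension was assigned a higher predicted risk than a randomly chosen person who did not. A minimal sketch of that definition (the O(n²) pairwise form, for illustration):

```python
def c_statistic(scores, outcomes):
    """C-statistic (equivalently, AUC): the fraction of case/non-case pairs
    in which the case received the higher predicted risk; ties count half."""
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                concordant += 1.0
            elif c == k:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))
```

    A value of 0.5 corresponds to no discrimination and 1.0 to perfect separation, which is why the 0.70-0.81 range above is read as acceptable-to-good.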

  6. Excised Abdominoplasty Material as a Systematic Plastic Surgical Training Model

    Directory of Open Access Journals (Sweden)

    M. Erol Demirseren

    2012-01-01

    Achieving a level of technical skill and confidence in surgical operations is the main goal of plastic surgical training. Operating rooms were accepted as the practical teaching venues of the traditional apprenticeship model. However, the increased patient population, time constraints, and ethical and legal considerations have made practical work outside the operating room a must for plastic surgical training. There are several plastic surgical teaching models and simulators which are very useful in such practical training and in the evaluation of plastic surgery residents. The full thickness skin with its vascular network excised in abdominoplasty procedures is an easily obtainable real human tissue which could be used as a training model in plastic surgery.

  7. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  8. A Systematic Modelling Framework for Phase Transfer Catalyst Systems

    DEFF Research Database (Denmark)

    Anantpinijwatna, Amata; Sales-Cruz, Mauricio; Hyung Kim, Sun

    2016-01-01

    in an aqueous phase. These reacting systems are receiving increased attention as novel organic synthesis options due to their flexible operation, higher product yields, and ability to avoid hazardous or expensive solvents. Major considerations in the design and analysis of PTC systems are physical and chemical equilibria, as well as kinetic mechanisms and rates. This paper presents a modelling framework for design and analysis of PTC systems that requires a minimum amount of experimental data to develop and employ the necessary thermodynamic and reaction models and embeds them into a reactor model for simulation. The application of the framework is made to two cases in order to highlight the performance and issues of activity coefficient models for predicting design and operation and the effects when different organic solvents are employed.

  9. A Blind Test of Hapke's Photometric Model

    Science.gov (United States)

    Helfenstein, P.; Shepard, M. K.

    2003-01-01

    Hapke's bidirectional reflectance equation is a versatile analytical tool for predicting (i.e. forward modeling) the photometric behavior of a particulate surface from the observed optical and structural properties of its constituents. Remote sensing applications of Hapke's model, however, generally seek to predict the optical and structural properties of particulate soil constituents from the observed photometric behavior of a planetary surface (i.e. inverse modeling). Our confidence in the latter approach can be established only if we ruthlessly test and optimize it. Here, we summarize preliminary results from a blind test of the Hapke model using laboratory measurements obtained with the Bloomsburg University Goniometer (B.U.G.). The first author selected eleven well-characterized powder samples and measured the spectrophotometric behavior of each. A subset of twenty undisclosed examples of the photometric measurement sets was sent to the second author, who fit the data using the Hapke model and attempted to interpret their optical and mechanical properties from photometry alone.

  10. Testing the Correlated Random Coefficient Model*

    Science.gov (United States)

    Heckman, James J.; Schmierer, Daniel; Urzua, Sergio

    2010-01-01

    The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline-pretreatment levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. In it, we develop new tests to gauge the empirical relevance of the correlated random coefficient model to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains. PMID:21057649

  11. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and to model the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degree of process complexity. Two different numerical codes, Code{sub B}right and Abaqus, have been used. The modelling performed by UPC-Cimne using Code{sub B}right, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  12. Vertical transmission and fetal damage in animal models of congenital toxoplasmosis: A systematic review.

    Science.gov (United States)

    Vargas-Villavicencio, José Antonio; Besné-Mérida, Alejandro; Correa, Dolores

    2016-06-15

    In humans, the probability of congenital infection and fetal damage due to Toxoplasma gondii is dependent on the gestation period at which primary infection occurs. Many animal models have been used for vaccine or drug testing, or for studies on host or parasite factors that affect transmission or fetal pathology, but few works have directly tested fetal infection and damage rates across gestation. The purpose of this work was therefore to perform a systematic review of the literature to determine whether there is a model which reflects these changes as they occur in humans. We looked for papers appearing between 1970 and 2014 in major databases like Medline and Scopus, as well as gray literature. From almost 11,000 citations obtained, only 49 papers fulfilled the criteria of having data for all independent variables and at least one dependent datum for control (untreated) groups. Some interesting findings could be extracted. For example, pigs seem resistant and sheep susceptible to congenital infection. Also, oocysts cause more congenitally infected offspring than tissue cysts, bradyzoites or tachyzoites. In spite of these interesting findings, very few results on vertical transmission or fetal damage rates were similar to those described for humans, and only for one of the gestation thirds, not all. Moreover, in most designs tissue cysts - with an unknown number of bradyzoites - were used, so the actual dose could not be established. The meta-analysis could not be performed, mainly because of great heterogeneity in experimental conditions. Nevertheless, the results gathered suggest that a model could be designed to represent the increase in vertical transmission and decrease in fetal damage found in humans under natural conditions.

  13. Prognostic Models in Adults Undergoing Physical Therapy for Rotator Cuff Disorders: Systematic Review.

    Science.gov (United States)

    Braun, Cordula; Hanchard, Nigel C; Batterham, Alan M; Handoll, Helen H; Betthäuser, Andreas

    2016-07-01

    Rotator cuff-related disorders represent the largest subgroup of shoulder complaints. Despite the availability of various conservative and surgical treatment options, the precise indications for these options remain unclear. The purpose of this systematic review was to synthesize the available research on prognostic models for predicting outcomes in adults undergoing physical therapy for painful rotator cuff disorders. The MEDLINE, EMBASE, CINAHL, Cochrane CENTRAL, and PEDro databases and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) up to October 2015 were searched. The review included primary studies exploring prognostic models in adults undergoing physical therapy, with or without other conservative measures, for painful rotator cuff disorders. Primary outcomes were pain, disability, and adverse events. Inclusion was limited to prospective investigations of prognostic factors elicited at the baseline assessment. Study selection was independently performed by 2 reviewers. A pilot-tested form was used to extract data on key aspects of study design, characteristics, analyses, and results. Risk of bias and applicability were independently assessed by 2 reviewers using the Prediction Study Risk of Bias Assessment tool (PROBAST). Five studies were included in the review. These studies were extremely heterogeneous in many aspects of design, conduct, and analysis. The findings were analyzed narratively. All included studies were rated as at high risk of bias, and none of the resulting prognostic models was found to be usable in clinical practice. There are no prognostic models ready to inform clinical practice in the context of the review question, highlighting the need for further research on prognostic models for predicting outcomes in adults who undergo physical therapy for painful rotator cuff disorders. The design and conduct of future studies should be receptive to developing methods. 

  14. Physical model tests for floating wind turbines

    DEFF Research Database (Denmark)

    Bredmose, Henrik; Mikkelsen, Robert Flemming; Borg, Michael

    Floating offshore wind turbines are relevant at sites where the depth is too large for the installation of a bottom-fixed substructure. While 3200 bottom-fixed offshore turbines have been installed in Europe (EWEA 2016), only a handful of floating wind turbines exist worldwide, and it is still an open question which floater concept is the most economically feasible. The design of the floaters for the floating turbines relies heavily on numerical modelling. While several coupled models exist, data sets for their validation are scarce. Validation, however, is important since the turbine behaviour is complex due to the combined actions of aero- and hydrodynamic loads, mooring loads and blade pitch control. The present talk outlines two recent test campaigns with a floating wind turbine in waves and wind. Two floaters were tested, a compact TLP floater designed at DTU (Bredmose et al 2015, Pegalajar...

  15. Associations between psychological variables and pain in experimental pain models. A systematic review.

    Science.gov (United States)

    Hansen, M S; Horjales-Araujo, E; Dahl, J B

    2015-10-01

    The association between pain and psychological characteristics has been widely debated. Thus, it remains unclear whether an individual's psychological profile influences a particular pain experience, or if previous pain experience contributes to a certain psychological profile. Translational studies performed in healthy volunteers may provide knowledge concerning psychological factors in healthy individuals as well as basic pain physiology. The aim of this review was to investigate whether psychological vulnerability or specific psychological variables in healthy volunteers are predictive of the level of pain following experimental pain models. A systematic search of the databases PubMed, Embase, the Cochrane Library, and ClinicalTrials.gov was performed during September 2014. All trials investigating the association between psychological variables and experimental pain in healthy volunteers were considered for inclusion. Twenty-nine trials met the inclusion criteria, with a total of 2637 healthy volunteers. The included trials investigated a total of 45 different psychological tests and 27 different types of pain models. The retrieved trials did not present a sufficiently homogenous group to perform meta-analysis. The collected results were diverse. A total of 16 trials suggested that psychological factors may predict the level of pain, seven studies found divergent results, and six studies found no significant association between psychological variables and experimental pain. Psychological factors may have predictive value when investigating experimental pain. However, due to substantial heterogeneity and methodological shortcomings of the published literature, firm conclusions are not possible. © 2015 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  16. External Validity and Model Validity: A Conceptual Approach for Systematic Review Methodology

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2014-01-01

    Full Text Available Background. Evidence rankings do not consider equally internal (IV), external (EV), and model validity (MV) for clinical studies, including complementary and alternative medicine/integrative health care (CAM/IHC) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature that has been published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias-scoring from inception to Jan 2013. Tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool for evaluating the EV/MV quality of research that is more sensitive to CAM/IHC research. Conclusion. Improved reporting on EV can help produce information that will guide policy makers, public health researchers, and other scientists in the selection, development, and improvement of research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about “real-world” consequences of health interventions. It is hoped that this novel tool, which considers IV, EV, and MV on an equal footing, will better guide clinical decision making.

  17. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  18. Interdependence of Model Systematic Biases in the Tropical Atlantic and the Tropical Pacific

    Science.gov (United States)

    Demissie, Teferi; Shonk, Jon; Toniazzo, Thomas; Woolnough, Steve; Guilyardi, Eric

    2017-04-01

    The tropical climatology represented in simulations with General Circulation Models (GCMs) is affected by significant systematic biases despite the huge investments in model development over the past 20 years. In this study, coupled seasonal hindcasts performed with EC-Earth and ECMWF System 4 are analyzed to understand the development of systematic biases in the tropical Atlantic and Pacific oceans. These models use similar atmosphere and ocean components (IFS and NEMO, respectively). We focus on hindcasts initialized in February and May. We discuss possible mechanisms for the evolution and origin of rapidly developing systematic biases over the tropical Atlantic during boreal spring. In addition, we look for evidence of the interrelation of systematic biases in the Atlantic and Pacific, and investigate whether the errors in one ocean basin affect those in the other. We perform an upper-atmosphere wave analysis by Fourier filtering for certain ranges of temporal frequencies and zonal wavenumbers. Our results indicate common systematic biases in EC-Earth and System 4 that are purely attributable to the atmosphere component. Biases develop in the Atlantic basin independently of external influences, while a possible effect of such biases on the eastern Pacific is found.

  19. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made such a design diagram available for the trunk of a Dolos breakwater without superstructures (Burcharth et al. 1992......). To extend the design diagram to cover Dolos breakwaters with superstructures, 2-D model tests of a Dolos breakwater with a wave wall were included in the project Rubble Mound Breakwater Failure Modes sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92...... was on the Dolos breakwater with a high superstructure, where there was almost no overtopping. This case is believed to be the most dangerous one. The test of the Dolos breakwater with a low superstructure was also performed. The objective of the last part of the experiment is to investigate the influence...

  20. Damage modeling in Small Punch Test specimens

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Cuesta, I.I.; Peñuelas, I.

    2016-01-01

    Ductile damage modeling within the Small Punch Test (SPT) is extensively investigated. The capabilities of the SPT to reliably estimate fracture and damage properties are thoroughly discussed and emphasis is placed on the use of notched specimens. First, different notch profiles are analyzed...... and constraint conditions quantified. The role of the notch shape is comprehensively examined from both triaxiality and notch fabrication perspectives. Afterwards, a methodology is presented to extract the micromechanical-based ductile damage parameters from the load-displacement curve of notched SPT samples...

  1. POC CD4 Testing Improves Linkage to HIV Care and Timeliness of ART Initiation in a Public Health Approach: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Lara Vojnov

    Full Text Available CD4 cell count is an important test in HIV programs for baseline risk assessment, monitoring of ART where viral load is not available, and, in many settings, antiretroviral therapy (ART) initiation decisions. However, access to CD4 testing is limited, in part due to the centralized conventional laboratory network. Point-of-care (POC) CD4 testing has the potential to address some of the challenges of centralized CD4 testing and delays in delivery of timely testing and ART initiation. We conducted a systematic review and meta-analysis to identify the extent to which POC improves linkages to HIV care and timeliness of ART initiation. We searched two databases and four conference sites between January 2005 and April 2015 for studies reporting test turnaround times, proportion of results returned, and retention associated with the use of point-of-care CD4. Random effects models were used to estimate pooled risk ratios, pooled proportions, and 95% confidence intervals. We identified 30 eligible studies, most of which were completed in Africa. Test turnaround times were reduced with the use of POC CD4. The time from HIV diagnosis to CD4 test was reduced from 10.5 days with conventional laboratory-based testing to 0.1 days with POC CD4 testing. Retention along several steps of the treatment initiation cascade was significantly higher with POC CD4 testing, notably from HIV testing to CD4 testing, receipt of results, and pre-CD4 test retention (all p<0.001). Furthermore, retention between CD4 testing and ART initiation increased with POC CD4 testing compared to conventional laboratory-based testing (p = 0.01). We also carried out a non-systematic review of the literature, observing that POC CD4 increased the projected life expectancy, was cost-effective, and acceptable. POC CD4 technologies reduce the time and increase patient retention along the testing and treatment cascade compared to conventional laboratory-based testing.
POC CD4 is, therefore, a useful tool
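The pooling step described above (random-effects meta-analysis of risk ratios) can be sketched with the DerSimonian-Laird estimator. The study-level risk ratios and confidence intervals below are invented for illustration; they are not data from the review.

```python
import math

# Hypothetical per-study results: (risk ratio, 95% CI lower, 95% CI upper).
studies = [
    (1.30, 1.10, 1.55),
    (1.15, 0.95, 1.40),
    (1.45, 1.20, 1.75),
]

# Work on the log scale; recover each study's standard error from its CI.
log_rr = [math.log(rr) for rr, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for rr, lo, hi in studies]

# Fixed-effect (inverse-variance) weights and Cochran's Q statistic.
w = [1 / s**2 for s in se]
fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))

# DerSimonian-Laird between-study variance tau^2 (truncated at zero).
k = len(studies)
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights, pooled log risk ratio, and its 95% CI.
w_re = [1 / (s**2 + tau2) for s in se]
pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
print(round(math.exp(pooled), 2), [round(x, 2) for x in ci])
```

With heterogeneity (Q above k-1), the random-effects CI is wider than the fixed-effect one, which is why reviews like this report it for diverse study settings.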

  2. A Systematic Evaluation Model for Solar Cell Technologies

    Directory of Open Access Journals (Sweden)

    Chang-Fu Hsu

    2014-01-01

    Full Text Available Coal, petroleum, natural gas, and nuclear energy are currently the primary electricity sources. However, with the depletion of fossil fuels, global warming, nuclear crises, and increasing environmental consciousness, the demand for renewable energy resources has skyrocketed. Solar energy is one of the most popular renewable energy resources for meeting global energy demands. Even though there are abundant studies on various solar technology developments, there is a lack of studies on solar technology evaluation and selection. Therefore, this research develops a model using interpretive structural modeling (ISM), the benefits, opportunities, costs, and risks concept (BOCR), and fuzzy analytic network process (FANP) to aggregate experts' opinions in evaluating currently available solar cell technologies. A case study in a photovoltaics (PV) firm is used to examine the practicality of the proposed model in selecting the most suitable technology for the firm in manufacturing new products.

  3. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting’s behavior under known forces and moments, but unpredictable, aerodynamically caused model behavior can sometimes produce large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that are the result of unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.

  4. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.

    2002-07-01

    This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with movement of a sand bank or with general subsidence. The scour protection in the tests is made of stone material. Two different fractions have been used: 4 mm and 40 mm. Tests with current, with waves, and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy was selected because scour holes often start to develop immediately after the structure has been placed; it is therefore difficult to establish scour protection on the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structures were used for the tests, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions have emerged from the model tests with flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on sand bed was 1.6 times the cylinder diameter, 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees, 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present test virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with the protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for
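The reported scour-hole geometry is internally consistent: a depth of 1.6 diameters over an upstream extension of 2.8 diameters corresponds to a slope of roughly 30 degrees, as a quick arithmetic check (illustrative only, with a normalized diameter) shows:

```python
import math

D = 1.0                  # cylinder diameter (normalized)
depth = 1.6 * D          # maximum scour depth in steady current
extension = 2.8 * D      # minimum upstream extension of the scour hole

# Slope of the scour-hole side: arctan(depth / horizontal extension).
slope_deg = math.degrees(math.atan(depth / extension))
print(round(slope_deg, 1))  # approximately 30 degrees, as stated
```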

  5. PENGARUH DPR, GRE, DAN SYSTEMATIC RISK TERHADAP PER: UJI KONSISTENSI MODEL

    Directory of Open Access Journals (Sweden)

    Fanny Rifqi El Fuad

    2012-09-01

    Full Text Available The study aims to analyze the valuation of stock prices listed on the Jakarta Stock Exchange by employing a modelling approach based on the price earnings ratio (PER) and factors that are assumed to explain its changes. These factors are the dividend payout ratio (DPR), the growth rate of earnings (GRE), and systematic risk. The results showed that among these three variables, the DPR is the only factor that consistently influences the variation in the value of PER across the three cross-section regression models built for each year from 2000 to 2002. The next analysis was conducted using a simple regression between DPR, the independent variable, and PER, the dependent variable. This analysis revealed a significant result with a consistent coefficient and a high intercept. This research also aims to test the consistency of the cross-section regression models. It showed that the theoretical value of PER (the earnings multiplier) obtained from a cross-section regression can be used to determine the intrinsic value of a stock only if the regression model was built under a market situation similar to that prevailing during the valuation. Without this assumption fulfilled, an investor cannot compare the theoretical PER values of the various models built using the same sample and method.
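The cross-section regression described above can be sketched as a simple OLS fit of PER on DPR. The firm-level numbers below are made up for illustration; the study's own data are not reproduced here.

```python
# Simple one-variable OLS: PER regressed on DPR across firms in one year.
def ols_simple(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical firm-level cross-section data (illustrative values only).
dpr = [0.20, 0.35, 0.40, 0.55, 0.60]   # dividend payout ratios
per = [8.0, 10.5, 11.0, 14.0, 15.5]    # observed price-earnings ratios

a, b = ols_simple(dpr, per)
# Theoretical PER (earnings multiplier) for a firm with DPR = 0.50
print(round(a + b * 0.50, 2))
```

Comparing such fitted ("theoretical") PER values against observed PER is how an intrinsic-value judgment would be made, subject to the same-market-situation caveat in the abstract.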

  6. A systematic review of health manpower forecasting models.

    NARCIS (Netherlands)

    Martins-Coelho, G.; Greuningen, M. van; Barros, H.; Batenburg, R.

    2011-01-01

    Context: Health manpower planning (HMP) aims at matching health manpower (HM) supply to the population’s health requirements. To achieve this, HMP needs information on future HM supply and requirement (S&R). This is estimated by several different forecasting models (FMs). In this paper, we review

  7. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed...... as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...... linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...
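The Thurstonian link can be sketched for the 2-AFC case, where the standard psychometric function is pc = Φ(d'/√2), so d' = √2·Φ⁻¹(pc). The paper itself works in R (sensR); this stdlib-Python sketch is only illustrative, and the delta-method standard error is an assumption of this sketch, not taken from the paper.

```python
from statistics import NormalDist

def dprime_2afc(correct: int, total: int) -> float:
    """Estimate d' from 2-AFC data via d' = sqrt(2) * Phi^-1(pc)."""
    pc = correct / total
    if pc <= 0.5:              # at or below chance: no detectable difference
        return 0.0
    return 2 ** 0.5 * NormalDist().inv_cdf(pc)

def dprime_se_2afc(correct: int, total: int) -> float:
    """Delta-method standard error: sqrt(2) * sqrt(pc(1-pc)/n) / phi(z)."""
    pc = correct / total
    z = NormalDist().inv_cdf(pc)
    return 2 ** 0.5 * (pc * (1 - pc) / total) ** 0.5 / NormalDist().pdf(z)

# 75 correct out of 100 2-AFC trials -> d' of about 0.95
print(round(dprime_2afc(75, 100), 3), round(dprime_se_2afc(75, 100), 3))
```

The triangle and 3-AFC tests use different, non-linear psychometric functions, which is exactly why the GLM framing in the paper is useful: the link function absorbs the test-specific decision rule.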

  8. Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.

    Science.gov (United States)

    Shao, Lijing

    2014-03-21

    The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. It constitutes the first detailed and systematic test of the pure-gravity sector of minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits of LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They are all improved over existing ones by significant factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.

  9. Dynamic model of Fast Breeder Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Vaidyanathan, G., E-mail: vaidya@igcar.gov.i [Fast Reactor Technology Group, Indira Gandhi Center for Atomic Research, Kalpakkam (India); Kasinathan, N.; Velusamy, K. [Fast Reactor Technology Group, Indira Gandhi Center for Atomic Research, Kalpakkam (India)

    2010-04-15

    Fast Breeder Test Reactor (FBTR) is a 40 MWt/13.2 MWe sodium cooled reactor operating since 1985. It is a loop type reactor. As part of the safety analysis the response of the plant to various transients is needed. In this connection a computer code named DYNAM was developed to model the reactor core, the intermediate heat exchanger, steam generator, piping, etc. This paper deals with the mathematical model of the various components of FBTR, the numerical techniques to solve the model, and comparison of the predictions of the code with plant measurements. Also presented is the benign response of the plant to a station blackout condition, which brings out the role of the various reactivity feedback mechanisms combined with a gradual coast down of reactor sodium flow.

  10. Statistical tests of simple earthquake cycle models

    Science.gov (United States)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≲ 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
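The two-sample Kolmogorov-Smirnov comparison used above can be illustrated with a toy example: the statistic is the maximum vertical distance between the two empirical CDFs, compared against the asymptotic critical value 1.358·√((n+m)/nm) at α = 0.05. The slip-rate-style numbers below are invented, not the study's data.

```python
import bisect
import math

def ks_statistic(a, b):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for t in a + b:   # the ECDF gap can only be maximal at a sample point
        fa = bisect.bisect_right(a, t) / len(a)
        fb = bisect.bisect_right(b, t) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Hypothetical observed vs. model-predicted slip rates (mm/yr, illustrative).
observed = [2.1, 3.5, 5.0, 7.2, 8.8, 10.1, 12.4]
modeled = [2.0, 3.6, 4.9, 7.5, 9.0, 10.3, 12.0]

d = ks_statistic(observed, modeled)
n, m = len(observed), len(modeled)
# Asymptotic critical value at alpha = 0.05; the model is rejected if d exceeds it.
d_crit = 1.358 * math.sqrt((n + m) / (n * m))
print(round(d, 3), round(d_crit, 3), d > d_crit)
```

In practice `scipy.stats.ks_2samp` computes the statistic and an exact or asymptotic p-value; the hand-rolled version here just makes the decision rule explicit.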

  11. Testing the Model of Oscillating Magnetic Traps

    Science.gov (United States)

    Szaforz, Ż.; Tomczak, M.

    2015-01-01

    The aim of this paper is to test the model of oscillating magnetic traps (the OMT model), proposed by Jakimiec and Tomczak ( Solar Phys. 261, 233, 2010). This model describes the process of excitation of quasi-periodic pulsations (QPPs) observed during solar flares. In the OMT model energetic electrons are accelerated within a triangular, cusp-like structure situated between the reconnection point and the top of a flare loop as seen in soft X-rays. We analyzed QPPs in hard X-ray light curves for 23 flares as observed by Yohkoh. Three independent methods were used. We also used hard X-ray images to localize magnetic traps and soft X-ray images to diagnose thermal plasmas inside the traps. We found that the majority of the observed pulsation periods correlates with the diameters of oscillating magnetic traps, as was predicted by the OMT model. We also found that the electron number density of plasma inside the magnetic traps at the time of pulsation disappearance is strongly connected with the pulsation period. We conclude that the observations are consistent with the predictions of the OMT model for the analyzed set of flares.

  12. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  13. Topic Modeling in Sentiment Analysis: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Toqir Ahmad Rana

    2016-06-01

    Full Text Available With the expansion and acceptance of the World Wide Web, sentiment analysis has become an increasingly popular research area in information retrieval and web data analysis. Due to the huge amount of user-generated content on blogs, forums, social media, etc., sentiment analysis has attracted researchers both in academia and industry, since it deals with the extraction of opinions and sentiments. In this paper, we have presented a review of topic modeling, especially LDA-based techniques, in sentiment analysis. We have presented a detailed analysis of diverse approaches and techniques, and compared the accuracy of different systems among them. The results of different approaches have been summarized, analyzed and presented in a sophisticated fashion. This review explores different topic modeling techniques in the context of sentiment analysis and provides a comprehensive comparison among them.
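The LDA machinery that the reviewed techniques build on can be conveyed with a tiny collapsed Gibbs sampler. This is a pedagogical sketch with made-up review snippets, not a substitute for the tuned implementations in libraries such as gensim or scikit-learn.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA; returns per-topic word counts."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]   # topic labels
    ndk = [[0] * n_topics for _ in docs]                       # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]          # topic-word counts
    nk = [0] * n_topics                                        # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                      # remove current assignment
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional: p(topic t) ∝ (n_dt + α)(n_tw + β)/(n_t + Vβ)
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k                      # resample and restore counts
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return nkw

# Hypothetical product-review snippets (illustrative only).
docs = [
    "good battery great screen".split(),
    "battery screen amazing display".split(),
    "terrible service rude staff".split(),
    "staff service awful slow".split(),
]
topic_word = lda_gibbs(docs)
for k, counts in enumerate(topic_word):
    print(k, sorted(counts, key=counts.get, reverse=True)[:3])
```

Sentiment-oriented variants reviewed in the paper (e.g., joint sentiment-topic models) extend exactly this sampler with an additional sentiment label per word or document.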

  14. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification.

    Science.gov (United States)

    Sager, Jennifer E; Yu, Jingjing; Ragueneau-Majlessi, Isabelle; Isoherranen, Nina

    2015-11-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms "PBPK" and "physiologically based pharmacokinetic model" to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines.

  15. Reliability of specific physical examination tests for the diagnosis of shoulder pathologies: a systematic review and meta-analysis.

    Science.gov (United States)

    Lange, Toni; Matthijs, Omer; Jain, Nitin B; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian

    2017-03-01

    Shoulder pain in the general population is common, and to identify the aetiology of shoulder pain, history, motion and muscle testing, and physical examination tests are usually performed. The aim of this systematic review was to summarise and evaluate intrarater and inter-rater reliability of physical examination tests in the diagnosis of shoulder pathologies. A comprehensive systematic literature search was conducted using MEDLINE, EMBASE, Allied and Complementary Medicine Database (AMED) and Physiotherapy Evidence Database (PEDro) through 20 March 2015. Methodological quality was assessed using the Quality Appraisal of Reliability Studies (QAREL) tool by 2 independent reviewers. The search strategy revealed 3259 articles, of which 18 finally met the inclusion criteria. These studies evaluated the reliability of 62 tests and test variations used in the physical examination for the diagnosis of shoulder pathologies. Methodological quality ranged from 2 to 7 positive criteria of the 11 items of the QAREL tool. This review identified a lack of high-quality studies evaluating inter-rater as well as intrarater reliability of specific physical examination tests for the diagnosis of shoulder pathologies. In addition, reliability measures differed between included studies, hindering proper cross-study comparisons. PROSPERO CRD42014009018. Published by the BMJ Publishing Group Limited.

  16. Diagnostic validity of physical examination tests for common knee disorders: An overview of systematic reviews and meta-analysis.

    Science.gov (United States)

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Roy, Jean-Sébastien; Desmeules, François

    2017-01-01

    More evidence on the diagnostic validity of physical examination tests for knee disorders is needed to reduce reliance on frequently used and costly imaging tests. The objective was to conduct a systematic review of systematic reviews (SR) and meta-analyses (MA) evaluating the diagnostic validity of physical examination tests for knee disorders. A structured literature search was conducted in five databases until January 2016. Methodological quality was assessed using the AMSTAR. Seventeen reviews were included with a mean AMSTAR score of 5.5 ± 2.3. Based on six SR, only the Lachman test for ACL injuries is diagnostically valid when individually performed (likelihood ratio (LR+): 10.2; LR−: 0.2). Based on two SR, the Ottawa Knee Rule is a valid screening tool for knee fractures (LR−: 0.05). Based on one SR, the EULAR criteria had a post-test probability of 99% for the diagnosis of knee osteoarthritis. Based on two SR, a complete physical examination performed by a trained health provider was found to be diagnostically valid for ACL, PCL and meniscal injuries as well as for cartilage lesions. When individually performed, common physical tests are rarely able to rule in or rule out a specific knee disorder, except the Lachman for ACL injuries. There is low-quality evidence concerning the validity of combining history elements and physical tests. Copyright © 2016 Elsevier Ltd. All rights reserved.
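The quoted likelihood ratios translate into post-test probabilities via Bayes' rule on the odds scale: post-test odds = pre-test odds × LR. The 30% pre-test probability below is an assumed illustrative figure, not one reported in the review.

```python
def post_test_probability(pretest_p: float, lr: float) -> float:
    """Apply a likelihood ratio to a pre-test probability via the odds form."""
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Lachman test figures quoted above: LR+ = 10.2, LR- = 0.2.
p_positive = post_test_probability(0.30, 10.2)  # after a positive Lachman
p_negative = post_test_probability(0.30, 0.2)   # after a negative Lachman
print(round(p_positive, 2), round(p_negative, 2))
```

A positive test moves a 30% suspicion of ACL injury above 80%, while a negative test drops it below 10%, which is what makes the Lachman useful for both ruling in and ruling out.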

  17. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-02-17

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  18. Prostate cancer antigen 3 test for prostate biopsy decision: a systematic review and meta analysis

    Institute of Scientific and Technical Information of China (English)

    Luo Yong; Gou Xin; Huang Peng; Mou Chan

    2014-01-01

    Background: The specificity of prostate-specific antigen (PSA) for early interventions in prostate cancer (PCa) is not satisfactory. It is likely that prostate cancer antigen 3 (PCA3) can be used to predict biopsy outcomes more accurately than PSA for the early detection of PCa. We systematically reviewed the literature and subsequently performed a meta-analysis. Methods: A bibliographic search of the Embase, Medline, Web of Science, NCBI, PubMed and CNKI databases, and those of health technology assessment agencies, for studies published before April 2013 was conducted. The key words used were "prostatic neoplasms", "prostate", "prostate carcinoma/cancer/tumor/PCa", and the free terms "upm3", "pca3", "dd3", "aptima pca3" and "prostate cancer antigen 3". All patients were adults. The intervention was detection of PCA3 in urine samples for PCa diagnosis. We checked quality based on the QUADAS criteria, collected data, and performed a meta-analysis to synthesize results. Twenty-four studies of diagnostic tests with moderate to high quality were selected. Results: Sensitivity was between 46.9% and 82.3%; specificity was from 55% to 92%; the positive predictive value had a range of 39.0%-86.0%; and the negative predictive value was 61.0%-89.7%. The meta-analysis showed heterogeneity between studies. The global sensitivity value was 0.82 (95% CI 0.72-0.90); specificity was 0.962 (95% CI 0.73-0.99); the positive likelihood ratio was 2.39 (95% CI 2.10-2.71); the negative likelihood ratio was 0.51 (95% CI 0.46-0.86); the diagnostic odds ratio was 4.89 (95% CI 3.94-6.06); and the AUC of the SROC curve was 0.7441. Conclusion: PCA3 can be used for early diagnosis of PCa and to avoid unnecessary biopsies.
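    The pooled quantities reported above (sensitivity, specificity, likelihood ratios, diagnostic odds ratio) are all derived from 2×2 contingency counts. A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-accuracy measures from a 2x2 contingency table."""
    sens = tp / (tp + fn)              # sensitivity (true positive rate)
    spec = tn / (tn + fp)              # specificity (true negative rate)
    lr_pos = sens / (1.0 - spec)       # positive likelihood ratio
    lr_neg = (1.0 - sens) / spec       # negative likelihood ratio
    dor = lr_pos / lr_neg              # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return sens, spec, lr_pos, lr_neg, dor

# Hypothetical counts for illustration only:
sens, spec, lr_pos, lr_neg, dor = diagnostic_metrics(tp=82, fp=25, fn=18, tn=75)
```

    In a meta-analysis these per-study measures are then pooled, here with a bivariate or SROC model, rather than averaged naively.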

  19. Prediction models for cardiovascular disease risk in the general population : Systematic review

    NARCIS (Netherlands)

    Damen, Johanna A A G; Hooft, Lotty; Schuit, Ewoud; Debray, Thomas P A; Collins, Gary S.; Tzoulaki, Ioanna; Lassale, Camille M.; Siontis, George C M; Chiocchia, Virginia; Roberts, Corran; Schlüssel, Michael Maia; Gerry, Stephen; Black, James A.; Heus, Pauline; Van Der Schouw, Yvonne T.; Peelen, Linda M.; Moons, Karel G M

    2016-01-01

    OBJECTIVE: To provide an overview of prediction models for risk of cardiovascular disease (CVD) in the general population. DESIGN: Systematic review. DATA SOURCES: Medline and Embase until June 2013. ELIGIBILITY CRITERIA FOR STUDY SELECTION: Studies describing the development or external validation

  20. A Systematic Approach to Modelling Change Processes in Construction Projects

    Directory of Open Access Journals (Sweden)

    Ibrahim Motawa

    2012-11-01

    Full Text Available Modelling change processes within construction projects is essential to implement changes efficiently. Incomplete information on the project variables at the early stages of projects leads to inadequate knowledge of future states and imprecision arising from ambiguity in project parameters. This lack of knowledge is considered among the main sources of changes in construction. Change identification and evaluation, in addition to predicting its impacts on project parameters, can help in minimising the disruptive effects of changes. This paper presents a systematic approach to modelling the change process within construction projects that helps improve change identification and evaluation. The approach represents the key decisions required to implement changes. The requirements of an effective change process are presented first. The variables defined for efficient change assessment and diagnosis are then presented. Assessment of construction changes requires an analysis of the project characteristics that lead to change and also an analysis of the relationship between change causes and effects. The paper concludes that, at the early stages of a project, projects with a high likelihood of change occurrence should have a control mechanism over the project characteristics that have a high influence on the project. It also concludes that, regarding the relationship between change causes and effects, the multiple causes of change should be modelled in a way that enables evaluating the change effects more accurately. The proposed approach is the framework for tackling such conclusions and can be used for evaluating change cases depending on the available information at the early stages of construction projects.

  1. Model Tests of Pile Defect Detection

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The pile, an important foundation type, is widely used in engineering practice. Defects of different types and damages of different degrees easily occur during pile construction, so detecting pile defects is very important. Some difficult problems remain in pile defect detection. Based on stress wave theory, some of these typical difficult problems were studied through model tests. Analyses of the test results were carried out and some significant results for the low-strain method were obtained: when a pile has a gradually decreasing cross-section part, the amplitude of the reflective signal originating from the defect depends on the rate of cross-section reduction β. No apparent signal reflected from the necking appears on the velocity response curve when the value of β is less than about 3.5%.

  2. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made available such a design diagram for the trunk of a Dolos breakwater without superstructures (Burcharth et al. 1992). To extend the design diagram to cover Dolos breakwaters with a superstructure, 2-D model tests of a Dolos breakwater with a wave wall are included in the project Rubble Mound Breakwater Failure Modes sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92-0042. Furthermore, Task IA will give the design diagram for Tetrapod breakwaters without a superstructure. The more complete research results on Dolosse can certainly give some insight into the behaviour of the Tetrapod armour layer of breakwaters with a superstructure. The main part of the experiment...

  3. Validity and Reliability of Published Comprehensive Theory of Mind Tests for Normal Preschool Children: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Seyyede Zohreh Ziatabar Ahmadi

    2015-12-01

    Full Text Available Objective: Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; and without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. Method: We searched MEDLINE (PubMed interface), Web of Science, Science direct, PsycINFO, and also evidence base Medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of ‘Theory of Mind’ AND test AND children. Also, we manually studied the reference lists of all final searched articles and carried out a search of their references. Inclusion criteria were as follows: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were as follows: studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description of the structure, validity or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). Result: In primary searching, we found 1237 articles in the databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. Conclusion: There were a few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The defined ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, times and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that researchers and clinicians select ToM tests according to their psychometric characteristics.

  4. Validity and Reliability of Published Comprehensive Theory of Mind Tests for Normal Preschool Children: A Systematic Review

    Science.gov (United States)

    Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan

    2015-01-01

    Objective: Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; and without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. Method: We searched MEDLINE (PubMed interface), Web of Science, Science direct, PsycINFO, and also evidence base Medicine (The Cochrane Library) databases from 1990 to June 2015. Search strategy was Latin transcription of ‘Theory of Mind’ AND test AND children. Also, we manually studied the reference lists of all final searched articles and carried out a search of their references. Inclusion criteria were as follows: Valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children; and exclusion criteria were as follows: the studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description about structure, validity or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). Result: In primary searching, we found 1237 articles in total databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. Conclusion: There were a few valid, reliable and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The defined ToM tests were different in populations, tasks, mode of presentations, scoring, mode of responses, times and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that the researchers and clinicians select the ToM tests according to their psychometric characteristics

  5. Testing the stability and reliability of starspot modelling.

    Science.gov (United States)

    Kovari, Zs.; Bartus, J.

    1997-07-01

    Since the mid-70s, different starspot modelling techniques have been used to describe the observed spot variability on active stars. Spot positions and temperatures are calculated by applying surface integration techniques or solving analytic equations on observed photometric data. Artificial spotted light curves were generated, using the analytic expressions of Budding (1977Ap&SS..48..207B), to test how different constraints, such as the intrinsic scatter of the observed data or the angle of inclination, affect the spot solutions. Interactions between the different parameters, such as inclination, latitude and spot size, were also investigated. The results of re-modelling the generated data were scrutinized statistically. It was found that (1) 0.002-0.005 mag of photometric accuracy is required to recover geometrical spot parameters within an acceptable error box; (2) even a 0.03-0.05 mag error in the unspotted brightness substantially affects the recovery of the original spot distribution; (3) especially at low inclination, under- or overestimation of the inclination by 10° leads to an important systematic error in spot latitude and size; (4) when the angle of inclination i ≲ 20°, photometric spot modelling is unable to provide satisfactory information on spot location and size.

  6. Systematic U(1)B-L extensions of loop-induced neutrino mass models with dark matter

    Science.gov (United States)

    Ho, Shu-Yu; Toma, Takashi; Tsumura, Koji

    2016-08-01

    We study the gauged U(1)B-L extensions of models for neutrino masses and dark matter. In this class of models, tiny neutrino masses are radiatively induced through loop diagrams, while the stability of the dark matter is guaranteed by a remnant of the gauge symmetry. Depending on how lepton number conservation is violated, these models are systematically classified. We present complete lists for the one-loop Z2 and the two-loop Z3 radiative seesaw models as examples of the classification. The anomaly cancellation conditions in these models are also discussed.

  7. Diagnostic accuracy of molecular amplification tests for human African trypanosomiasis--systematic review.

    Directory of Open Access Journals (Sweden)

    Claire M Mugasa

    2012-01-01

    Full Text Available BACKGROUND: A range of molecular amplification techniques have been developed for the diagnosis of Human African Trypanosomiasis (HAT); however, careful evaluation of these tests must precede implementation to ensure their high clinical accuracy. Here, we investigated the diagnostic accuracy of molecular amplification tests for HAT, the quality of the articles, and reasons for variation in accuracy. METHODOLOGY: Data from studies assessing diagnostic molecular amplification tests were extracted and pooled to calculate accuracy. Articles were included if they reported sensitivity and specificity or data from which these values could be calculated. Study quality was assessed using QUADAS, and selected studies were analysed using the bivariate random effects model. RESULTS: 16 articles evaluating molecular amplification tests fulfilled the inclusion criteria: PCR (n = 12), NASBA (n = 2), LAMP (n = 1) and a study comparing PCR and NASBA (n = 1). Fourteen articles, including 19 different studies, were included in the meta-analysis. Summary sensitivity for PCR on blood was 99.0% (95% CI 92.8 to 99.9) and the specificity was 97.7% (95% CI 93.0 to 99.3). Differences in study design and readout method did not significantly change estimates, although use of satellite DNA as a target significantly lowers specificity. Sensitivity and specificity of PCR on CSF for staging varied from 87.6% to 100%, and 55.6% to 82.9%, respectively. CONCLUSION: Here, PCR seems to have sufficient accuracy to replace microscopy where facilities allow, although this conclusion is based on multiple reference standards and a patient population that was not always representative. Future studies should, therefore, include patients for whom PCR may become the test of choice and consider well-designed diagnostic accuracy studies to provide extra evidence on the value of PCR in practice. Another use of PCR for control of disease could be to screen samples collected from rural areas and test in

  8. Model-independent tests of cosmic gravity.

    Science.gov (United States)

    Linder, Eric V

    2011-12-28

    Gravitation governs the expansion and fate of the universe, and the growth of large-scale structure within it, but has not been tested in detail on these cosmic scales. The observed acceleration of the expansion may provide signs of gravitational laws beyond general relativity (GR). Since the form of any such extension is not clear, from either theory or data, we adopt a model-independent approach to parametrizing deviations to the Einstein framework. We explore the phase space dynamics of two key post-GR functions and derive a classification scheme, and an absolute criterion on accuracy necessary for distinguishing classes of gravity models. Future surveys will be able to constrain the post-GR functions' amplitudes and forms to the required precision, and hence reveal new aspects of gravitation.

  9. Genetics of borderline personality disorder: systematic review and proposal of an integrative model.

    Science.gov (United States)

    Amad, Ali; Ramoz, Nicolas; Thomas, Pierre; Jardri, Renaud; Gorwood, Philip

    2014-03-01

    Borderline personality disorder (BPD) is one of the most common mental disorders and is characterized by a pervasive pattern of emotional lability, impulsivity, interpersonal difficulties, identity disturbances, and disturbed cognition. Here, we performed a systematic review of the literature concerning the genetics of BPD, including familial and twin studies, association studies, and gene-environment interaction studies. Moreover, meta-analyses were performed when at least two case-control studies testing the same polymorphism were available. For each gene variant, a pooled odds ratio (OR) was calculated using fixed or random effects models. Familial and twin studies largely support the potential role of a genetic vulnerability at the root of BPD, with an estimated heritability of approximately 40%. Moreover, there is evidence for both gene-environment interactions and correlations. However, association studies for BPD are sparse, making it difficult to draw clear conclusions. According to our meta-analysis, no significant associations were found for the serotonin transporter gene, the tryptophan hydroxylase 1 gene, or the serotonin 1B receptor gene. We hypothesize that such a discrepancy (negative association studies but high heritability of the disorder) could be understandable through a paradigm shift, in which "plasticity" genes (rather than "vulnerability" genes) would be involved. Such a framework postulates a balance between positive and negative events, which interact with plasticity genes in the genesis of BPD.

  10. A new model test in high energy physics in frequentist and Bayesian statistical formalisms

    CERN Document Server

    Kamenshchikov, Andrey

    2016-01-01

    Testing a new physical model against observed experimental data is a typical problem for modern experiments in high energy physics (HEP). A solution may be provided by two alternative statistical formalisms, frequentist and Bayesian, both widely used in contemporary HEP searches. A characteristic experimental situation is modeled from general considerations, and both approaches are applied to test a new model. The results are juxtaposed, which demonstrates their consistency in this work. The effect of systematic uncertainty treatment in the statistical analysis is also considered.

  11. A new model test in high energy physics in frequentist and Bayesian statistical formalisms

    Science.gov (United States)

    Kamenshchikov, A.

    2017-01-01

    Testing a new physical model against observed experimental data is a typical problem for modern experiments in high energy physics (HEP). A solution may be provided by two alternative statistical formalisms, frequentist and Bayesian, both widely used in contemporary HEP searches. A characteristic experimental situation is modeled from general considerations, and both approaches are applied to test a new model. The results are juxtaposed, which demonstrates their consistency in this work. The effect of systematic uncertainty treatment in the statistical analysis is also considered.
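    The frequentist/Bayesian contrast described above can be illustrated on the simplest HEP setting, a Poisson counting experiment. This is a generic sketch, not the paper's actual analysis; the event counts and rates are assumed for illustration:

```python
import math

def poisson_pmf(k, mu):
    """P(N = k) for a Poisson-distributed count with mean mu."""
    return math.exp(-mu) * mu**k / math.factorial(k)

def frequentist_p_value(n_obs, b):
    """One-sided p-value: probability of observing >= n_obs events
    under the background-only hypothesis with expected background b."""
    return 1.0 - sum(poisson_pmf(k, b) for k in range(n_obs))

def likelihood_ratio(n_obs, b, s):
    """Likelihood of signal+background relative to background-only;
    with equal prior model probabilities this equals the posterior odds."""
    return poisson_pmf(n_obs, b + s) / poisson_pmf(n_obs, b)

# Assumed scenario: 12 events observed, 5 expected from background,
# and a new model predicting 6 additional signal events.
p = frequentist_p_value(12, 5.0)       # small p disfavours background-only
odds = likelihood_ratio(12, 5.0, 6.0)  # odds > 1 favour the new model
```

    In a realistic analysis, systematic uncertainties enter as nuisance parameters that are profiled (frequentist) or marginalized (Bayesian) before these quantities are quoted.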

  12. Predicting hydrophobic solvation by molecular simulation: 1. Testing united-atom alkane models.

    Science.gov (United States)

    Jorge, Miguel; Garrido, Nuno M; Simões, Carlos J V; Silva, Cândida G; Brito, Rui M M

    2017-03-05

    We present a systematic test of the performance of three popular united-atom force fields (OPLS-UA, GROMOS and TraPPE) at predicting hydrophobic solvation, more precisely at describing the solvation of alkanes in alkanes. Gibbs free energies of solvation were calculated for 52 solute/solvent pairs from Molecular Dynamics simulations and thermodynamic integration, making use of the IBERCIVIS volunteer computing platform. Our results show that all force fields yield good predictions when both solute and solvent are small linear or branched alkanes (up to pentane). However, as the size of the alkanes increases, all models tend to deviate increasingly from experimental data in a systematic fashion. Furthermore, our results confirm that specific interaction parameters for cyclic alkanes in the united-atom representation are required to account for the additional excluded volume within the ring. Overall, the TraPPE model performs best for all alkanes, but systematically underpredicts the magnitude of solvation free energies by about 6% (RMSD of 1.2 kJ/mol). Conversely, both GROMOS and OPLS-UA systematically overpredict solvation free energies (by ∼13% and ∼15%, respectively). The systematic trends suggest that all models can be improved by a slight adjustment of their Lennard-Jones parameters. © 2016 Wiley Periodicals, Inc.
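    The error measures quoted above (an RMSD in kJ/mol and a mean signed percentage deviation) can be computed as follows. The two arrays are hypothetical stand-ins, not the paper's 52 solute/solvent pairs:

```python
import math

def rmsd(pred, exp):
    """Root-mean-square deviation between predicted and experimental values."""
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(pred, exp)) / len(pred))

def mean_signed_pct_dev(pred, exp):
    """Mean signed percentage deviation; for negative solvation free
    energies, a positive value means the model underpredicts the magnitude."""
    return 100.0 * sum((p - e) / abs(e) for p, e in zip(pred, exp)) / len(pred)

# Hypothetical solvation free energies in kJ/mol (model vs experiment):
dg_pred = [-10.0, -12.0, -14.0]
dg_exp  = [-10.5, -12.5, -15.0]
```

    A systematic (one-signed) mean deviation alongside a small RMSD is exactly the pattern that motivates the paper's suggestion of a uniform Lennard-Jones adjustment.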

  13. Modeling and testing of ethernet transformers

    Science.gov (United States)

    Bowen, David

    2011-12-01

    Twisted-pair Ethernet is now the standard home and office last-mile network technology. For decades, the IEEE standard that defines Ethernet has required electrical isolation between the twisted-pair cable and the Ethernet device. So, for decades, every Ethernet interface has used magnetic-core Ethernet transformers to isolate Ethernet devices and keep users safe in the event of a potentially dangerous fault on the network media. The current state-of-the-art Ethernet transformers are miniature (explored which are capable of exceptional miniaturization or on-chip fabrication. This dissertation thoroughly explores the performance of current commercial Ethernet transformers, both to increase understanding of the devices' behavior and to outline performance parameters for replacement devices. Lumped-element and distributed circuit models are derived; testing schemes are developed and used to extract model parameters from commercial Ethernet devices. Transfer relation measurements of the commercial Ethernet transformers are compared against the models' behavior, and it is found that the tuned, distributed models produce the best transfer-relation match to the measured data. Process descriptions and testing results for fabricated thin-film dielectric-core toroid transformers are presented. The best results were found for a 32-turn transformer loaded with 100 Ω, the impedance of twisted-pair cable. This transformer gave a flat response from about 10 MHz to 40 MHz with a height of approximately 0.45. For the fabricated transformer structures, theoretical methods to determine resistance, capacitance and inductance are presented. A special analytical and numerical analysis of the fabricated transformer inductance is presented. Planar cuts of magnetic slope fields around the dielectric-core toroid are shown that describe the effect of core height and winding density on flux uniformity without a magnetic core.

  14. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team ["Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA" (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA ["Seepage Model for PA Including Drift Collapse" (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see "Drift-Scale Coupled Processes (DST and TH Seepage) Models" (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  15. Bond Graph Modeling and Validation of an Energy Regenerative System for Emulsion Pump Tests

    Directory of Open Access Journals (Sweden)

    Yilei Li

    2014-01-01

    Full Text Available The test system for emulsion pumps is facing serious challenges due to its huge energy consumption and waste. To address this energy issue, a novel energy regenerative system (ERS) for emulsion pump tests is briefly introduced first. Modelling such an ERS, which spans multiple energy domains, needs a unified and systematic approach, and bond graph modeling is well suited for this task. The bond graph model of this ERS is developed by first considering the separate components before assembling them together, and so is the state-space equation. Both numerical simulation and experiments are carried out to validate the bond graph model of this ERS. Moreover, the simulation and experiment results show that this ERS not only satisfies the test requirements but could also save at least 25% of the energy consumed by the original test system, demonstrating that it is a promising method of energy regeneration for emulsion pump tests.
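    Once a bond graph has been reduced to state-space form dx/dt = A x + B u, it can be simulated directly. A minimal forward-Euler sketch; the one-state system below is an arbitrary illustration, not the ERS model itself:

```python
def simulate(A, B, u, x0, dt, steps):
    """Forward-Euler integration of the linear state-space model
    dx/dt = A x + B u with a constant scalar input u."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return x

# Single-state decay example: dx/dt = -x, x(0) = 1, integrated to t = 1,
# which should approach exp(-1) ~ 0.368 as dt shrinks.
x_final = simulate(A=[[-1.0]], B=[0.0], u=0.0, x0=[1.0], dt=0.001, steps=1000)
```

    In practice a stiff multi-domain model would use an implicit or adaptive integrator, but the state-space form extracted from the bond graph is the same.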

  16. Survival and testing parameters of zirconia-based crowns under cyclic loading in an aqueous environment: A systematic review.

    Science.gov (United States)

    Elshiyab, Shareen Hayel; Nawafleh, Noor; George, Roy

    2017-03-19

    To study the hypothesis that in vitro fatigue testing variables in an aqueous environment affect the survival results of zirconia-based restorations, and to evaluate the level of agreement between in vitro and previous in vivo data. An electronic search of the literature was conducted in PubMed and Scopus to identify in vitro studies testing zirconia-based crowns using cyclic loading in an aqueous environment. Only studies that complied with the inclusion criteria were included. Data extracted were used for survival analysis and assessment of in vitro parameters for fatigue testing of implant- and tooth-supported crowns. Using "Assessing the Methodological Quality of Systematic Reviews" (AMSTAR), recent in vivo systematic review studies were assessed prior to consideration for comparison with the current in vitro data. After applying the inclusion criteria, only 25 articles were included. The 5-year cumulative survival rate of zirconia-based implant-supported crowns was lower than that of tooth-supported crowns (84% and 88.8%, respectively). Tooth-supported crowns subjected to wet fatigue showed a lower 5-year cumulative survival rate compared to thermocycling (62.8% and 92.6%, respectively). Monolithic crowns showed higher fracture resistance than bi-layered structures (pressed or hand-layered). Only in vivo systematic reviews that complied with the AMSTAR assessment criteria were used for comparison with the in vitro data. As for fatigue testing parameters, differences in experimental settings were evident and affected the outcomes. Crown survival depends on the type of support, the type of fatigue test conducted, the crown structure, and the veneering method. In vitro fatigue testing protocols are highly variable, which introduces a need for international standardization to allow more valid comparability of data. © 2017 John Wiley & Sons Australia, Ltd.

  17. What women want. Women's preferences for the management of low-grade abnormal cervical screening tests: a systematic review

    DEFF Research Database (Denmark)

    Frederiksen, Maria Eiholm; Lynge, E; Rebolj, M

    2012-01-01

    Please cite this paper as: Frederiksen M, Lynge E, Rebolj M. What women want. Women's preferences for the management of low-grade abnormal cervical screening tests: a systematic review. BJOG 2011; DOI: 10.1111/j.1471-0528.2011.03130.x. Background If human papillomavirus (HPV) testing replaces cytology in primary cervical screening, the frequency of low-grade abnormal screening tests will double. Several available alternatives for the follow-up of low-grade abnormal screening tests have similar outcomes. In this situation, women's preferences have been proposed as a guide for management. Selection criteria Studies asking women to state a preference between active follow-up and observation for the management of low-grade abnormalities on screening cytology or HPV tests. Data collection and analysis Information on study design, participants and outcomes was retrieved using a prespecified form...

  18. Finds in Testing Experiments for Model Evaluation

    Institute of Scientific and Technical Information of China (English)

    WU Ji; JIA Xiaoxia; LIU Chang; YANG Haiyan; LIU Chao

    2005-01-01

    To evaluate the fault location and failure prediction models, simulation-based and code-based experiments were conducted to collect the required failure data. The PIE model was applied to simulate failures in the simulation-based experiment. Based on syntax- and semantic-level fault injections, a hybrid fault injection model is presented. To analyze the injected faults, the difficulty to inject (DTI) and difficulty to detect (DTD) metrics are introduced and are measured from the programs used in the code-based experiment. Three interesting results were obtained from the experiments: 1) failures simulated by the PIE model without consideration of program and testing features are unreliably predicted; 2) there is no obvious correlation between the DTI and DTD parameters; 3) the DTD for syntax-level faults changes in a different pattern from that for semantic-level faults as the DTI increases. The results show that these parameters have a strong effect on the failures simulated, and that the measurement of DTD is not strict.

  19. Morphology of rain water channelization in systematically varied model sandy soils

    OpenAIRE

    Wei, Y.; Cejas, C. M.; Barrois, R.; Dreyfus, R.; Durian, D. J.

    2014-01-01

    We visualize the formation of fingered flow in dry model sandy soils under different raining conditions using a quasi-2d experimental set-up, and systematically determine the impact of soil grain diameter and surface wetting property on water channelization phenomenon. The model sandy soils we use are random closely-packed glass beads with varied diameters and surface treatments. For hydrophilic sandy soils, our experiments show that rain water infiltrates into a shallow top layer of soil and...

  20. Combinatorial QSAR modeling of chemical toxicants tested against Tetrahymena pyriformis.

    Science.gov (United States)

    Zhu, Hao; Tropsha, Alexander; Fourches, Denis; Varnek, Alexandre; Papa, Ester; Gramatica, Paola; Oberg, Tomas; Dao, Phuong; Cherkasov, Artem; Tetko, Igor V

    2008-04-01

    Selecting the most rigorous quantitative structure-activity relationship (QSAR) approaches is of great importance in the development of robust and predictive models of chemical toxicity. To address this issue in a systematic way, we have formed an international virtual collaboratory consisting of six independent groups with shared interests in computational chemical toxicology. We have compiled an aqueous toxicity data set containing 983 unique compounds tested in the same laboratory over a decade against Tetrahymena pyriformis. A modeling set including 644 compounds was selected randomly from the original set and distributed to all groups that used their own QSAR tools for model development. The remaining 339 compounds in the original set (external set I) as well as 110 additional compounds (external set II) published recently by the same laboratory (after this computational study was already in progress) were used as two independent validation sets to assess the external predictive power of individual models. In total, our virtual collaboratory has developed 15 different types of QSAR models of aquatic toxicity for the training set. The internal prediction accuracy for the modeling set ranged from 0.76 to 0.93 as measured by the leave-one-out cross-validation correlation coefficient (Q²abs). The prediction accuracy for the external validation sets I and II ranged from 0.71 to 0.85 (linear regression coefficient R²abs,I) and from 0.38 to 0.83 (linear regression coefficient R²abs,II), respectively. The use of an applicability domain threshold implemented in most models generally improved the external prediction accuracy but at the same time led to a decrease in chemical space coverage. Finally, several consensus models were developed by averaging the predicted aquatic toxicity for every compound using all 15 models, with or without taking into account their respective applicability domains. We find that consensus models afford higher prediction accuracy for the
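
    The consensus step described here (averaging per-compound predictions over the individual models, optionally restricted to models whose applicability domain covers the compound) can be sketched as follows; the toxicity values are invented for illustration:

```python
import numpy as np

# Hypothetical predictions from three QSAR models for four compounds;
# NaN marks a compound outside a model's applicability domain.
preds = np.array([
    [0.52, 0.48, np.nan, 0.55],    # model A
    [0.49, 0.51, 0.60,   np.nan],  # model B
    [0.55, np.nan, 0.58, 0.50],    # model C
])

# Consensus = mean over models that consider the compound in-domain
consensus = np.nanmean(preds, axis=0)
coverage  = np.mean(~np.isnan(preds), axis=0)  # fraction of models in-domain
print(consensus, coverage)
```

    Averaging only over in-domain models trades chemical-space coverage for accuracy, exactly the tension the abstract notes.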

  1. Developing population models: A systematic approach for pesticide risk assessment using herbaceous plants as an example.

    Science.gov (United States)

    Schmolke, Amelie; Kapo, Katherine E; Rueda-Cediel, Pamela; Thorbek, Pernille; Brain, Richard; Forbes, Valery

    2017-12-01

    Population models are used as tools in species management and conservation and are increasingly recognized as important tools in pesticide risk assessments. A wide variety of population model applications and resources on modeling techniques, evaluation and documentation can be found in the literature. In this paper, we add to these resources by introducing a systematic, transparent approach to developing population models. The decision guide that we propose is intended to help model developers systematically address data availability for their purpose and the steps that need to be taken in any model development. The resulting conceptual model includes the necessary complexity to address the model purpose on the basis of current understanding and available data. We provide specific guidance for the development of population models for herbaceous plant species in pesticide risk assessment and demonstrate the approach with an example of a conceptual model developed following the decision guide for herbicide risk assessment of Mead's milkweed (Asclepias meadii), a species listed as threatened under the US Endangered Species Act. The decision guide specific to herbaceous plants demonstrates the details, but the general approach can be adapted for other species groups and management objectives. Population models provide a tool to link population-level dynamics, species and habitat characteristics as well as information about stressors in a single approach. Developing such models in a systematic, transparent way will increase their applicability and credibility, reduce development efforts, and result in models that are readily available for use in species management and risk assessments. Copyright © 2017 Elsevier B.V. All rights reserved.
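
    As a concrete illustration of the kind of conceptual model such a decision guide leads to, a stage-structured matrix projection model is a common endpoint for long-lived perennial plants. The vital rates below are purely hypothetical placeholders, not data for Asclepias meadii:

```python
import numpy as np

# Hypothetical annual projection matrix for stages (seedling, juvenile, adult):
# row i, column j = per-capita contribution of stage j to stage i next year.
A = np.array([
    [0.00, 0.00, 2.50],   # fecundity: seeds per adult that become seedlings
    [0.30, 0.40, 0.00],   # seedling survival/growth into the juvenile stage
    [0.00, 0.35, 0.90],   # juvenile -> adult growth, adult survival
])

eigvals = np.linalg.eigvals(A)
# For a non-negative primitive matrix, the dominant (Perron) eigenvalue is
# real and positive: the asymptotic population growth rate lambda.
lam = max(eigvals.real)
print(f"lambda = {lam:.3f}")  # >1 growing population, <1 declining
```

    Stressor effects (e.g. herbicide exposure) enter such a model by perturbing individual vital rates and observing the change in lambda.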

  2. Real-time screening tests for functional alignment of the trunk and lower extremities in adolescent – a systematic review

    DEFF Research Database (Denmark)

    Junge, Tina; Wedderkopp, Niels; Juul-Kristensen, Birgit

    2012-01-01

    mechanisms resulting in ACL injuries (Hewett, 2010). Prevention may therefore depend on identifying these potential injury risk factors. Screening tools must thus include patterns of typical movements in sport and leisure-time activities, consisting of high-load and multi-directional tests, focusing... of knee alignment, there is a further need to evaluate the reliability and validity of real-time functional alignment tests before they can be used as screening tools for prevention of knee injuries among adolescents. Still, the next step in this systematic review is to evaluate the quality and feasibility...

  3. Integrated outburst detector sensor-model tests

    Institute of Scientific and Technical Information of China (English)

    DZIURZYŃSKI Wacław; WASILEWSKI Stanisław

    2011-01-01

    Outbursts of methane and rocks are, similarly to rock bursts, among the biggest hazards in deep mines and are equally difficult to predict. The violent process of the outburst itself, along with the scale and range of hazards following the rapid discharge of gas and rocks, requires solutions which would enable quick and unambiguous detection of the hazard, immediate power supply cut-off, and evacuation of personnel from potentially hazardous areas. For this purpose, an integrated outburst detector was developed. The sensor was equipped with three measuring and detection elements: a chamber for constant measurement of methane concentration, a pressure sensor, and a microphone. Tests of the sensor model were carried out to estimate the parameters which characterize the dynamic properties of the sensor. Given the impossibility of carrying out a full-scale experimental outburst, the sensor was tested during methane and coal dust explosions in the testing gallery at KD Barbara. The obtained results proved that the applied solutions were appropriate.

  4. Applying Model Checking to Generate Model-Based Integration Tests from Choreography Models

    Science.gov (United States)

    Wieczorek, Sebastian; Kozyura, Vitaly; Roth, Andreas; Leuschel, Michael; Bendisposto, Jens; Plagge, Daniel; Schieferdecker, Ina

    Choreography models describe the communication protocols between services. Testing of service choreographies is an important task for the quality assurance of service-based systems as used e.g. in the context of service-oriented architectures (SOA). The formal modeling of service choreographies enables a model-based integration testing (MBIT) approach. We present MBIT methods for our service choreography modeling approach called Message Choreography Models (MCM). For the model-based testing of service choreographies, MCMs are translated into Event-B models and used as input for our test generator which uses the model checker ProB.
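
    The idea of deriving integration tests by exhaustively exploring a choreography model can be illustrated without the MCM/Event-B/ProB toolchain: treat the choreography as a labelled transition system and enumerate message paths to its final states. The two-service protocol below is a made-up example, not MCM syntax:

```python
from collections import deque

# Toy choreography: states and message-labelled transitions between
# a Buyer and a Seller service (hypothetical protocol).
transitions = {
    "Init":      [("Buyer->Seller: order",   "Ordered")],
    "Ordered":   [("Seller->Buyer: confirm", "Confirmed"),
                  ("Seller->Buyer: reject",  "Rejected")],
    "Confirmed": [("Buyer->Seller: pay",     "Paid")],
}
final_states = {"Paid", "Rejected"}

def generate_tests(start="Init"):
    """Breadth-first exploration: every message path reaching a final
    state becomes one integration test case."""
    tests, queue = [], deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if state in final_states:
            tests.append(path)
            continue
        for msg, nxt in transitions.get(state, []):
            queue.append((nxt, path + [msg]))
    return tests

tests = generate_tests()
for t in tests:
    print(" -> ".join(t))
```

    A model checker adds to this sketch the systematic handling of infinite or very large state spaces and of coverage criteria, but the test cases it emits are paths of exactly this kind.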

  5. Deterministic Modeling of the High Temperature Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281-group energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U-235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the

  6. Experimentally testing the standard cosmological model

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))

    1990-11-01

    The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high-precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since it relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z⁰ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for ν-masses may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.

  7. Ablative Rocket Deflector Testing and Computational Modeling

    Science.gov (United States)

    Allgood, Daniel C.; Lott, Jeffrey W.; Raines, Nickey

    2010-01-01

    A deflector risk mitigation program was recently conducted at the NASA Stennis Space Center. The primary objective was to develop a database that characterizes the behavior of industry-grade refractory materials subjected to rocket plume impingement conditions commonly experienced on static test stands. The program consisted of short and long duration engine tests where the supersonic exhaust flow from the engine impinged on an ablative panel. Quasi time-dependent erosion depths and patterns generated by the plume impingement were recorded for a variety of different ablative materials. The erosion behavior was found to be highly dependent on the material's composition and corresponding thermal properties. For example, in the case of the HP CAST 93Z ablative material, the erosion rate actually decreased under continued thermal heating conditions due to the formation of a low thermal conductivity "crystallization" layer. The "crystallization" layer produced near the surface of the material provided an effective insulation from the hot rocket exhaust plume. To gain further insight into the complex interaction of the plume with the ablative deflector, computational fluid dynamic modeling was performed in parallel to the ablative panel testing. The results from the current study demonstrated that locally high heating occurred due to shock reflections. These localized regions of shock-induced heat flux resulted in non-uniform erosion of the ablative panels. In turn, it was observed that the non-uniform erosion exacerbated the localized shock heating causing eventual plume separation and reversed flow for long duration tests under certain conditions. Overall, the flow simulations compared very well with the available experimental data obtained during this project.

  8. Uptake and yield of HIV testing and counselling among children and adolescents in sub-Saharan Africa: a systematic review

    Directory of Open Access Journals (Sweden)

    Darshini Govindasamy

    2015-10-01

    Introduction: In recent years, children and adolescents have emerged as a priority for HIV prevention and care services. We conducted a systematic review to investigate the acceptability, yield and prevalence of HIV testing and counselling (HTC) strategies in children and adolescents (5 to 19 years) in sub-Saharan Africa. Methods: An electronic search was conducted in MEDLINE, EMBASE, Global Health and conference abstract databases. Studies reporting on HTC acceptability, yield and prevalence and published between January 2004 and September 2014 were included. Pooled proportions for these three outcomes were estimated using a random effects model. A quality assessment was conducted on included studies. Results and discussion: A total of 16,380 potential citations were identified, of which 21 studies (23 entries) were included. Most studies were conducted in Kenya (n=5) and Uganda (n=5) and judged to provide moderate (n=15) to low quality (n=7) evidence, with data not disaggregated by age. Seven studies reported on provider-initiated testing and counselling (PITC), with the remainder reporting on family-centred (n=5), home-based (n=5), outreach (n=5) and school-linked HTC among primary schoolchildren (n=1). PITC among inpatients had the highest acceptability (86.3%; 95% confidence interval [CI]: 65.5 to 100%), yield (12.2%; 95% CI: 6.1 to 18.3%) and prevalence (15.4%; 95% CI: 5.0 to 25.7%). Family-centred HTC had lower acceptance compared to home-based HTC (51.7%; 95% CI: 10.4 to 92.9% vs. 84.9%; 95% CI: 74.4 to 95.4%) yet higher prevalence (8.4%; 95% CI: 3.4 to 13.5% vs. 3.0%; 95% CI: 1.0 to 4.9%). School-linked HTC showed poor acceptance and low prevalence. Conclusions: While PITC may have high test acceptability, priority should be given to evaluating strategies beyond healthcare settings (e.g. home-based HTC among families) to identify individuals earlier in their disease progression. Data on linkage to care and cost-effectiveness of HTC strategies are needed to
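
    Pooled proportions of this kind come from a random effects meta-analysis; a bare-bones DerSimonian-Laird sketch on invented study counts (real analyses of proportions usually work on logit- or arcsine-transformed scales, omitted here for brevity):

```python
import numpy as np

# Hypothetical per-study HTC acceptability: acceptors / individuals offered
events = np.array([120, 260, 45, 530])
totals = np.array([140, 300, 90, 620])

p   = events / totals
var = p * (1 - p) / totals           # within-study variance of each proportion
w   = 1 / var                        # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-study variance tau^2
p_fixed = np.sum(w * p) / np.sum(w)
Q    = np.sum(w * (p - p_fixed) ** 2)
df   = len(p) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)              # random-effects weights
p_pooled = np.sum(w_re * p) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled proportion: {p_pooled:.3f} "
      f"(95% CI {p_pooled - 1.96*se:.3f} to {p_pooled + 1.96*se:.3f})")
```

    The tau^2 term is what widens the confidence intervals when studies disagree, which is why heterogeneous acceptability estimates like those above carry such broad CIs.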

  9. On the Systematic Errors of Cosmological-Scale Gravity Tests using Redshift Space Distortion: Non-linear Effects and the Halo Bias

    CERN Document Server

    Ishikawa, Takashi; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari

    2013-01-01

    Redshift space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must be carefully examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, f\sigma_8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realisations of 3.4 \times 10^8 comoving h^{-3}Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z=0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 \times 10^{11} -- 2.0 \times 10^{13} h^{-1} M_\odot. We find that the systematic error of f\sigma_8 is greatly reduced to the ~4 per cent level when a recently proposed analytical formula of RSD that takes into account the higher-order coupling between the density and velocity fields is ado...

  10. [Application analysis of Nursing Care Systematization according to Horta's Conceptual Model].

    Science.gov (United States)

    da Cunha, Sandra Maria Botelho; Barros, Alba Lúcia Botura Leite

    2005-01-01

    The purpose of this study was to analyse the implementation of Nursing Care Systematization in the medical-surgical units of a private hospital. Results evidenced that Horta's Conceptual Model was present in only part of the nursing history instrument, that the remaining phases of the nursing process were not inter-related, and that there was a lack of coherence between the prescribed actions and the patient's health condition. From the results of the study it can be concluded that the model used for Nursing Care Systematization is eclectic, and therefore does not follow Horta's conceptual model alone; the totality of the data had not been collected in some phases of the nursing process; there is no correlation of the phases in the majority of the analyzed patient records; and the diagnostic and planning phases do not comprise the phases of the nursing process as proposed by Horta.

  11. Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.

    Science.gov (United States)

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew

    2017-05-30

    Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and the key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. The articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on numbers of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved a moderate quality rating on a modified AMSTAR (A Measurement Tool used to Assess systematic Reviews) checklist. All the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggests an increased interest in simulation modelling in healthcare.

  12. Developing Risk Prediction Models for Postoperative Pancreatic Fistula: a Systematic Review of Methodology and Reporting Quality.

    Science.gov (United States)

    Wen, Zhang; Guo, Ya; Xu, Banghao; Xiao, Kaiyin; Peng, Tao; Peng, Minhao

    2016-04-01

    Postoperative pancreatic fistula is still a major complication after pancreatic surgery, despite improvements in surgical technique and perioperative management. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for predicting postoperative pancreatic fistula. We conducted a systematic search of PubMed and EMBASE databases to identify articles published before January 1, 2015, which described the development of models to predict the risk of postoperative pancreatic fistula. We extracted information on developing a prediction model including study design, sample size and number of events, definition of postoperative pancreatic fistula, risk predictor selection, missing data, model-building strategies, and model performance. Seven studies developing seven risk prediction models were included. In three studies (42%), the number of events per variable was less than 10. The number of candidate risk predictors ranged from 9 to 32. Five studies (71%) reported using univariate screening, which is not recommended in building a multivariate model, to reduce the number of risk predictors. Six risk prediction models (86%) were developed by categorizing all continuous risk predictors. The treatment and handling of missing data were not mentioned in any of the studies. We found use of inappropriate methods that could endanger the development of the model, including univariate pre-screening of variables, categorization of continuous risk predictors, and inadequate model validation. The use of inappropriate methods affects the reliability and the accuracy of the probability estimates of predicting postoperative pancreatic fistula.
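
    The events-per-variable (EPV) criterion the review applies is simple to check; a sketch with hypothetical numbers (the rule of thumb is EPV >= 10 for logistic models):

```python
# Rule-of-thumb check used in the review: events per variable (EPV) >= 10
def events_per_variable(n_events: int, n_candidate_predictors: int) -> float:
    """EPV = outcome events divided by candidate predictors considered."""
    return n_events / n_candidate_predictors

# Hypothetical study: 48 fistula events, 9 candidate predictors
epv = events_per_variable(48, 9)
print(f"EPV = {epv:.1f}",
      "OK" if epv >= 10 else "below recommended threshold")
```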

  13. Clinical information modeling processes for semantic interoperability of electronic health records: systematic review and inductive analysis.

    Science.gov (United States)

    Moreno-Conde, Alberto; Moner, David; Cruz, Wellington Dimas da; Santos, Marcelo R; Maldonado, José Alberto; Robles, Montserrat; Kalra, Dipak

    2015-07-01

    This systematic review aims to identify and compare the existing processes and methodologies that have been published in the literature for defining clinical information models (CIMs) that support the semantic interoperability of electronic health record (EHR) systems. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) systematic review methodology, the authors reviewed published papers between 2000 and 2013 that covered the semantic interoperability of EHRs, found by searching the PubMed, IEEE Xplore, and ScienceDirect databases. Additionally, after selection of a final group of articles, an inductive content analysis was done to summarize the steps and methodologies followed in order to build the CIMs described in those articles. Three hundred and seventy-eight articles were screened and thirty-six were selected for full review. The articles selected for full review were analyzed to extract relevant information for the analysis and characterized according to the steps the authors had followed for clinical information modeling. Most of the reviewed papers lack a detailed description of the modeling methodologies used to create CIMs. A representative example is the lack of description related to the definition of terminology bindings and the publication of the generated models. However, this systematic review confirms that most clinical information modeling activities follow very similar steps for the definition of CIMs. Having a robust and shared methodology could improve their correctness, reliability, and quality. Independently of implementation technologies and standards, it is possible to find common patterns in methods for developing CIMs, suggesting the viability of defining a unified good practice methodology to be used by any clinical information modeler. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. The Systematic Guideline Review: Method, rationale, and test on chronic heart failure

    Directory of Open Access Journals (Sweden)

    Hutchinson Allen

    2009-05-01

    Abstract Background Evidence-based guidelines have the potential to improve healthcare. However, their de novo development requires substantial resources – especially for complex conditions – and adaptation may be biased by contextually influenced recommendations in source guidelines. In this paper we describe a new approach to guideline development – the systematic guideline review method (SGR) – and its application in the development of an evidence-based guideline for family physicians on chronic heart failure (CHF). Methods A systematic search for guidelines was carried out. Evidence-based guidelines on CHF management in adults in ambulatory care published in English or German between the years 2000 and 2004 were included. Guidelines on acute or right heart failure were excluded. Eligibility was assessed by two reviewers, methodological quality of selected guidelines was appraised using the AGREE instrument, and a framework of relevant clinical questions for diagnostics and treatment was derived. Data were extracted into evidence tables, systematically compared by means of a consistency analysis, and synthesized in a preliminary draft. The most relevant primary sources were re-assessed to verify the cited evidence. Evidence and recommendations were summarized in a draft guideline. Results Of 16 included guidelines, five were of good quality. A total of 35 recommendations were systematically compared: 25/35 were consistent, 9/35 inconsistent, and 1/35 un-rateable (derived from a single guideline). Of the 25 consistencies, 14 were based on consensus, seven on evidence, and four differed in grading. Major inconsistencies were found in 3/9 of the inconsistent recommendations. We re-evaluated the evidence for 17 recommendations (evidence-based, differing evidence levels and minor inconsistencies) – the majority was congruent. Incongruity was found where the stated evidence could not be verified in the cited primary sources, or where the evaluation in the

  15. Solar system tests of brane world models

    CERN Document Server

    Boehmer, Christian G; Lobo, Francisco S N

    2008-01-01

    The classical tests of general relativity (perihelion precession, deflection of light, and the radar echo delay) are considered for the Dadhich, Maartens, Papadopoulos and Rezania (DMPR) solution of the spherically symmetric static vacuum field equations in brane world models. For this solution the metric in the vacuum exterior to a brane world star is similar to the Reissner-Nordstrom form of classical general relativity, with the role of the charge played by the tidal effects arising from projections of the fifth dimension. The existing observational solar system data on the perihelion shift of Mercury, on the light bending around the Sun (obtained using long-baseline radio interferometry), and ranging to Mars using the Viking lander, constrain the numerical values of the bulk tidal parameter and of the brane tension.

  16. Solar system tests of brane world models

    Energy Technology Data Exchange (ETDEWEB)

    Boehmer, Christian G [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Harko, Tiberiu [Department of Physics and Center for Theoretical and Computational Physics, University of Hong Kong, Pok Fu Lam Road (Hong Kong); Lobo, Francisco S N [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 2EG (United Kingdom)], E-mail: c.boehmer@ucl.ac.uk, E-mail: harko@hkucc.hku.hk, E-mail: francisco.lobo@port.ac.uk

    2008-02-21

    The classical tests of general relativity (perihelion precession, deflection of light and the radar echo delay) are considered for the Dadhich, Maartens, Papadopoulos and Rezania (DMPR) solution of the spherically symmetric static vacuum field equations in brane world models. For this solution the metric in the vacuum exterior to a brane world star is similar to the Reissner-Nordström form of classical general relativity, with the role of the charge played by the tidal effects arising from projections of the fifth dimension. The existing observational solar system data on the perihelion shift of Mercury, on the light bending around the Sun (obtained using long-baseline radio interferometry), and ranging to Mars using the Viking lander, constrain the numerical values of the bulk tidal parameter and of the brane tension.

  17. Scaling analysis in modeling transport and reaction processes a systematic approach to model building and the art of approximation

    CERN Document Server

    Krantz, William B

    2007-01-01

    This book is unique as the first effort to expound on the subject of systematic scaling analysis. Not written for a specific discipline, the book targets any reader interested in transport phenomena and reaction processes. The book is logically divided into chapters on the use of systematic scaling analysis in fluid dynamics, heat transfer, mass transfer, and reaction processes. An integrating chapter is included that considers more complex problems involving combined transport phenomena. Each chapter includes several problems that are explained in considerable detail. These are followed by several worked examples for which the general outline for the scaling is given. Each chapter also includes many practice problems. This book is based on recognizing the value of systematic scaling analysis as a pedagogical method for teaching transport and reaction processes and as a research tool for developing and solving models and in designing experiments. Thus, the book can serve as both a textbook and a reference boo...

  18. Systematic Multi‐Scale Model Development Strategy for the Fragrance Spraying Process and Transport

    DEFF Research Database (Denmark)

    Heitzig, M.; Rong, Y.; Gregson, C.;

    2012-01-01

    The fast and efficient development and application of reliable models with an appropriate degree of detail to predict the behavior of fragrance aerosols are challenging problems of high interest to the related industries. A generic modeling template for the systematic derivation of specific fragrance aerosol models is proposed. The main benefits of the fragrance spraying template are the speed-up of the model development/derivation process, the increase in model quality, and the provision of structured domain knowledge where needed. The fragrance spraying template is integrated in a generic computer-aided modeling framework, which is structured based on workflows for different general modeling tasks. The benefits of the fragrance spraying template are highlighted by a case study related to the derivation of a fragrance aerosol model that is able to reflect measured dynamic droplet size distribution profiles...

  19. Two Bayesian tests of the GLOMOsys Model.

    Science.gov (United States)

    Field, Sarahanne M; Wagenmakers, Eric-Jan; Newell, Ben R; Zeelenberg, René; van Ravenzwaaij, Don

    2016-12-01

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of 2 preregistered replication attempts of 1 experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model.
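
    The Bayes factors reported above come from Bayesian hypothesis tests of a mean difference. As a sketch (not the authors' analysis), the BIC approximation BF01 ≈ exp((BIC1 − BIC0)/2) can be computed for two groups simulated under the null:

    ```python
    import numpy as np

    def bic(rss, n, k):
        # Gaussian-model BIC up to a constant: n*ln(RSS/n) + k*ln(n)
        return n * np.log(rss / n) + k * np.log(n)

    def bf01_two_sample(x, y):
        """Approximate Bayes factor BF01 (evidence for the null of equal
        means) via the BIC approximation BF01 ~ exp((BIC1 - BIC0) / 2)."""
        data = np.concatenate([x, y])
        n = data.size
        rss0 = np.sum((data - data.mean()) ** 2)                           # H0: one common mean
        rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)   # H1: two group means
        return np.exp((bic(rss1, n, 2) - bic(rss0, n, 1)) / 2)

    rng = np.random.default_rng(1)
    primed = rng.normal(0.0, 1.0, 100)   # simulated "primed" group, no true effect
    control = rng.normal(0.0, 1.0, 100)  # simulated control group
    bf01 = bf01_two_sample(primed, control)
    print(bf01)  # BF01 > 1 favours the null; > 10 is conventionally "strong"
    ```

    The BIC route is only one of several default Bayes factors; preregistered replications typically use a default prior on effect size instead.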

  20. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the (fo

  2. Systematic review and proposal of a field-based physical fitness-test battery in preschool children: the PREFIT battery.

    Science.gov (United States)

    Ortega, Francisco B; Cadenas-Sánchez, Cristina; Sánchez-Delgado, Guillermo; Mora-González, José; Martínez-Téllez, Borja; Artero, Enrique G; Castro-Piñero, Jose; Labayen, Idoia; Chillón, Palma; Löf, Marie; Ruiz, Jonatan R

    2015-04-01

    Physical fitness is a powerful health marker in childhood and adolescence, and it is reasonable to think that it might be just as important in younger children, i.e. preschoolers. At the moment, researchers, clinicians and sport practitioners do not have enough information about which fitness tests are more reliable, valid and informative from the health point of view to be implemented in preschool children. Our aim was to systematically review the studies conducted in preschool children using field-based fitness tests, and examine their (1) reliability, (2) validity, and (3) relationship with health outcomes. Our ultimate goal was to propose a field-based physical fitness-test battery to be used in preschool children. PubMed and Web of Science. Studies conducted in healthy preschool children that included field-based fitness tests. When using PubMed, we included Medical Subject Heading (MeSH) terms to enhance the power of the search. A set of fitness-related terms were combined with 'child, preschool' [MeSH]. The same strategy and terms were used for Web of Science (except for the MeSH option). Since no previous reviews with a similar aim were identified, we searched for all articles published up to 1 April 2014 (no starting date). A total of 2,109 articles were identified, of which 22 articles were finally selected for this review. Most studies focused on reliability of the fitness tests (n = 21, 96%), while very few focused on validity (0 criterion-related validity and 4 (18%) convergent validity) or relationship with health outcomes (0 longitudinal and 1 (5%) cross-sectional study). Motor fitness, particularly balance, was the most studied fitness component, while cardiorespiratory fitness was the least studied. After analyzing the information retrieved in the current systematic review about fitness testing in preschool children, we propose the PREFIT battery, field-based FITness testing in PREschool children. The PREFIT battery is composed of the following

  3. Designing healthy communities: Testing the walkability model

    Directory of Open Access Journals (Sweden)

    Adriana A. Zuniga-Teran

    2017-03-01

    Research from multiple domains has provided insights into how neighborhood design can be improved to have a more favorable effect on physical activity, a concept known as walkability. The relevant research findings/hypotheses have been integrated into a Walkability Framework, which organizes the design elements into nine walkability categories. The purpose of this study was to test whether this conceptual framework can be used as a model to measure the interactions between the built environment and physical activity. We explored correlations between the walkability categories and physical activity reported through a survey of residents of Tucson, Arizona (n=486). The results include significant correlations between the walkability categories and physical activity, as well as between the walkability categories and the two motivations for walking (recreation and transportation). To our knowledge, this is the first study that reports links between walkability and walking for recreation. Additionally, the use of the Walkability Framework allowed us to identify the walkability categories most strongly correlated with the two motivations for walking. The results of this study support the use of the Walkability Framework as a model to measure the built environment in relation to its ability to promote physical activity.

  4. Tests of Local Lorentz Invariance Violation of Gravity in the Standard-Model Extension with Pulsars

    CERN Document Server

    Shao, Lijing

    2014-01-01

    Standard-model extension (SME) is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model (SM) and general relativity (GR). In the pure-gravity sector of minimal SME (mSME), nine coefficients describe dominant observable deviations from GR. We systematically implemented twenty-seven tests from thirteen pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. It constitutes the first detailed and systematic test of the pure-gravity sector of mSME with state-of-the-art pulsar observations. No deviation from GR was detected. The limits of LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for convenience of further studies. They are all improved by significant factors of tens to hundreds over existing limits. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.

  5. Pharmacological and methodological aspects of the separation-induced vocalization test in guinea pig pups; a systematic review and meta-analysis.

    Science.gov (United States)

    Groenink, Lucianne; Verdouw, P Monika; Bakker, Brenda; Wever, Kimberley E

    2015-04-15

    The separation-induced vocalization test in guinea pig pups is one of many tests that have been used to screen for anxiolytic-like properties of drugs. The test is based on the cross-species phenomenon that infants emit distress calls when placed in social isolation. Here we report a systematic review and meta-analysis of pharmacological intervention in the separation-induced vocalization test in guinea pig pups. Electronic databases were searched for original research articles, yielding 32 studies that met inclusion criteria. We extracted data on pharmacological intervention, animal and methodological characteristics, and study quality indicators. Meta-analysis showed that the different drug classes in clinical use for the treatment of anxiety disorders have comparable effects on vocalization behaviour, irrespective of their mechanism of action. Of the experimental drugs, nociceptin (NOP) receptor agonists proved very effective in this test. Analysis further indicated that the commonly used read-outs, total number and total duration of vocalizations, are equally valid. With regard to methodological characteristics, repeated testing of pups as well as selecting pups with moderate or high levels of vocalization were associated with larger treatment effects. Finally, reporting of study methodology, randomization and blinding was poor, and Egger's test for small study effects showed that publication bias likely occurred. This review illustrates the value of systematic reviews and meta-analyses in improving translational value and methodological aspects of animal models. It further shows the urgent need to implement existing publication guidelines to maximize the output and impact of experimental animal studies.
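
    Egger's test mentioned above regresses each study's standardized effect on its precision; an intercept far from zero signals funnel-plot asymmetry. A minimal sketch on invented study data (constructed so that smaller studies show larger effects):

    ```python
    import numpy as np
    from scipy import stats

    def eggers_test(effects, ses):
        """Egger's regression test: regress effect/SE on 1/SE; an intercept
        far from zero suggests small-study effects such as publication bias."""
        effects, ses = np.asarray(effects, float), np.asarray(ses, float)
        res = stats.linregress(1.0 / ses, effects / ses)
        return res.intercept, res.intercept_stderr

    # invented data: the smaller the study (larger SE), the larger the effect
    effects = [0.9, 0.7, 0.55, 0.4, 0.35, 0.3]
    ses = [0.45, 0.35, 0.25, 0.15, 0.10, 0.05]
    intercept, intercept_se = eggers_test(effects, ses)
    print(intercept, intercept_se)  # positive intercept: asymmetric funnel
    ```

    In practice the intercept is tested against its standard error with a t distribution on k − 2 degrees of freedom.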

  6. Application of a systematic finite-element model modification technique to dynamic analysis of structures

    Science.gov (United States)

    Robinson, J. C.

    1982-01-01

    A systematic finite-element model modification technique has been applied to two small problems and a model of the main wing box of a research drone aircraft. The procedure determines the sensitivity of the eigenvalues and eigenvector components to specific structural changes, calculates the required changes and modifies the finite-element model. Good results were obtained where large stiffness modifications were required to satisfy large eigenvalue changes. Sensitivity matrix conditioning problems required the development of techniques to insure existence of a solution and accelerate its convergence. A method is proposed to assist the analyst in selecting stiffness parameters for modification.
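
    The sensitivity computation described above can be illustrated for eigenvalues: with mass-normalized modes, d(lambda_i)/dp = phi_i^T (dK/dp) phi_i when only the stiffness depends on the design parameter p. A toy 2-DOF spring-mass example (not the drone wing-box model):

    ```python
    import numpy as np

    # 2-DOF spring-mass chain with unit masses; parameter of interest is k2.
    k1, k2 = 2.0, 1.0
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    lam, phi = np.linalg.eigh(K)          # M = I, so an ordinary eigenproblem
    dK_dk2 = np.array([[ 1.0, -1.0],
                       [-1.0,  1.0]])     # dK/dk2
    # First-order eigenvalue sensitivities (dM/dk2 = 0 here)
    sens = np.array([phi[:, i] @ dK_dk2 @ phi[:, i] for i in range(2)])
    print(sens)  # predicted eigenvalue shift per unit change in k2
    ```

    Stacking such sensitivities over many parameters gives the (possibly ill-conditioned) sensitivity matrix that the modification procedure inverts.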

  7. Screening to prevent spontaneous preterm birth: systematic reviews of accuracy and effectiveness literature with economic modelling.

    Science.gov (United States)

    Honest, H; Forbes, C A; Durée, K H; Norman, G; Duffy, S B; Tsourapas, A; Roberts, T E; Barton, P M; Jowett, S M; Hyde, C J; Khan, K S

    2009-09-01

    To identify combinations of tests and treatments to predict and prevent spontaneous preterm birth. Searches were run on the following databases up to September 2005 inclusive: MEDLINE, EMBASE, DARE, the Cochrane Library (CENTRAL and Cochrane Pregnancy and Childbirth Group trials register) and MEDION. We also contacted experts including the Cochrane Pregnancy and Childbirth Group and checked reference lists of review articles and papers that were eligible for inclusion. Two series of systematic reviews were performed: (1) accuracy of tests for the prediction of spontaneous preterm birth in asymptomatic women in early pregnancy and in women symptomatic with threatened preterm labour in later pregnancy; (2) effectiveness of interventions with potential to reduce cases of spontaneous preterm birth in asymptomatic women in early pregnancy and to reduce spontaneous preterm birth or improve neonatal outcome in women with a viable pregnancy symptomatic of threatened preterm labour. For the health economic evaluation, a model-based analysis incorporated the combined effect of tests and treatments and their cost-effectiveness. Of the 22 tests reviewed for accuracy, the quality of studies and accuracy of tests was generally poor. Only a few tests had LR+ > 5. In asymptomatic women these were ultrasonographic cervical length measurement and cervicovaginal prolactin and fetal fibronectin screening for predicting spontaneous preterm birth before 34 weeks. In this group, tests with LR- preterm labour, tests with LR+ > 5 were absence of fetal breathing movements, cervical length and funnelling, amniotic fluid interleukin-6 (IL-6), serum CRP for predicting birth within 2-7 days of testing, and matrix metalloprotease-9, amniotic fluid IL-6, cervicovaginal fetal fibronectin and cervicovaginal human chorionic gonadotrophin (hCG) for predicting birth before 34 or 37 weeks. In this group, tests with LR- preterm birth. Smoking cessation programmes, progesterone, periodontal therapy and
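
    The LR+ > 5 criterion above follows from the standard likelihood-ratio definitions; the sensitivity and specificity figures below are hypothetical, not values from the review:

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
        return sensitivity / (1.0 - specificity), (1.0 - sensitivity) / specificity

    # hypothetical screening test: 60% sensitive, 90% specific
    lr_pos, lr_neg = likelihood_ratios(0.60, 0.90)
    print(lr_pos, lr_neg)  # LR+ = 6 clears the rule-in threshold of 5
    ```

    An LR+ above 5 substantially raises post-test odds on a positive result; a useful rule-out test instead needs an LR- well below 1.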

  8. The utility of repeat enzyme immunoassay testing for the diagnosis of Clostridium difficile infection: A systematic review of the literature

    Directory of Open Access Journals (Sweden)

    P S Garimella

    2012-01-01

    Over the last 20 years, the prevalence of healthcare-associated Clostridium difficile (C. diff) disease has increased. While multiple tests are available for the diagnosis of C. diff infection, enzyme immunoassay (EIA) testing for toxin is the most used. Repeat EIA testing, although of limited utility, is common in medical practice. To assess the utility of repeat EIA testing to diagnose C. diff infections. Systematic literature review. Eligible studies performed >1 EIA test for C. diff toxin and were published in English. Electronic searches of MEDLINE and EMBASE were performed and bibliographies of review articles and conference abstracts were hand searched. Of 805 citations identified, 32 were reviewed in detail and nine were included in the final review. All studies except one were retrospective chart reviews. Seven studies had data on number of participants (32,526), and the overall reporting of test setting and patient characteristics was poor. The prevalence of C. diff infection ranged from 9.1% to 18.5%. The yield of the first EIA test ranged from 8.4% to 16.6%, dropping to 1.5-4.7% with a second test. The utility of repeat testing was evident in outbreak settings, where the yield of repeat testing was 5%. Repeat C. diff testing for hospitalized patients has low clinical utility and may be considered in outbreak settings or when the pre-test probability of disease is high. Future studies should aim to identify patients with a likelihood of disease and determine the utility of repeat testing compared with empiric treatment.
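
    The falling yield of a second EIA test is what Bayes' rule predicts: a first negative result lowers the probability going into the repeat test. A sketch with assumed (not study-derived) toxin-EIA characteristics:

    ```python
    def post_test_probability(pre_p, sens, spec, positive):
        """Update disease probability for one test result via likelihood ratios."""
        lr = sens / (1.0 - spec) if positive else (1.0 - sens) / spec
        odds = pre_p / (1.0 - pre_p) * lr
        return odds / (1.0 + odds)

    # assumed characteristics for illustration: sensitivity 0.80, specificity 0.97
    p0 = 0.12                                             # pre-test probability
    p1 = post_test_probability(p0, 0.80, 0.97, positive=False)
    p2 = post_test_probability(p1, 0.80, 0.97, positive=False)
    print(p0, p1, p2)  # each negative result leaves less to find on a repeat test
    ```

    The same arithmetic explains why repeat testing regains value in outbreaks: a higher pre-test probability leaves a larger residual probability after one negative result.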

  10. Dynamic testing of learning potential in adults with cognitive impairments: A systematic review of methodology and predictive value.

    Science.gov (United States)

    Boosman, Hileen; Bovend'Eerdt, Thamar J H; Visser-Meily, Johanna M A; Nijboer, Tanja C W; van Heugten, Caroline M

    2016-09-01

    Dynamic testing includes procedures that examine the effects of brief training on test performance, where pre- to post-training change reflects patients' learning potential. The objective of this systematic review was to provide clinicians and researchers with insight into the concept and methodology of dynamic testing and to explore its predictive validity in adult patients with cognitive impairments. The following electronic databases were searched: PubMed, PsycINFO, and Embase/Medline. Of 1141 potentially relevant articles, 24 studies met the inclusion criteria. The mean methodological quality score was 4.6 of 8. Eleven different dynamic tests were used. The majority of studies used dynamic versions of the Wisconsin Card Sorting Test. The training mostly consisted of a combination of performance feedback, reinforcement, expanded instruction, or strategy training. Learning potential was quantified using numerical (post-test score, difference score, gain score, regression residuals) and categorical (groups) indices. In five of six longitudinal studies, learning potential significantly predicted rehabilitation outcome. Three of four studies supported the added value of dynamic testing over conventional testing in predicting rehabilitation outcome. This review provides preliminary support that dynamic tests can provide a valuable addition to conventional tests to assess patients' abilities. Although promising, there was a large variability in methods used for dynamic testing and, therefore, it remains unclear which dynamic testing methods are most appropriate for patients with cognitive impairments. More research is warranted to further evaluate and refine dynamic testing methodology and to further elucidate its predictive validity concerning rehabilitation outcomes relative to other cognitive and functional status indices.

  11. The systematic study of the stability of forecasts in the rate- and state-dependent model.

    Science.gov (United States)

    De Gaetano, D.; McCloskey, J.; Nalbant, S.

    2012-04-01

    Numerous observations have shown a general spatial correlation between positive Coulomb failure stress changes due to an earthquake and the locations of aftershocks. However, this correlation does not give any indication of the rate, from which we can infer the magnitude distribution using the Gutenberg-Richter law. Dieterich's rate- and state-dependent model can be used to obtain a forecast of the observed aftershock rate for the space and time evolution of seismicity caused by stress changes applied to an infinite population of nucleating patches. The seismicity rate changes in this model depend on eight parameters: the stressing rate, the amplitude of the stress perturbation, the physical constitutive properties of faults, the spatial parameters (location and radii of the cells), the start and duration of each of the temporal windows, as well as the background seismicity rate. The background seismicity is obtained from the epidemic type aftershock sequence model. We use the 1992 Landers earthquake as a case study, using the Southern California Earthquake Data Centre (SCEDC) catalogue, to examine whether Dieterich's rate- and state-dependent model can forecast the aftershock seismicity rate. A systematic study is performed over a range of values of all the parameters to test the forecasting ability of this model. The results obtained suggest variable success in forecasting when varying the values for the parameters, with the spatial and temporal parameters being the most sensitive. Dieterich's rate- and state-dependent model is compared with a well-studied null hypothesis, the Omori-Utsu law. This law describes the aftershock rate as a power law in time following the main shock and depends on only three parameters: the aftershock productivity, the elapsed time since the main shock and the constant time shift, all of which can be estimated in the early part of the aftershock sequence and then extrapolated to give a long term rate forecast. All parameters are estimated using maximum likelihood.
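
    The Omori-Utsu null model mentioned above fits in a few lines; the parameter values here are illustrative, not fitted to the Landers sequence:

    ```python
    import numpy as np

    def omori_utsu_rate(t, K, c, p):
        """Modified Omori (Omori-Utsu) aftershock rate n(t) = K / (c + t)**p,
        with productivity K, time offset c and decay exponent p."""
        return K / (c + t) ** p

    # illustrative parameters only
    t_days = np.array([0.1, 1.0, 10.0, 100.0])
    rates = omori_utsu_rate(t_days, K=200.0, c=0.05, p=1.1)
    print(rates)  # aftershock rate decays as a power law in time
    ```

    In practice K, c and p are obtained by maximizing the point-process log-likelihood over the early aftershock catalogue and then extrapolated forward.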

  12. Logarithmic discretization and systematic derivation of shell models in two-dimensional turbulence.

    Science.gov (United States)

    Gürcan, Ö D; Morel, P; Kobayashi, S; Singh, Rameswar; Xu, S; Diamond, P H

    2016-09-01

    A detailed systematic derivation of a logarithmically discretized model for two-dimensional turbulence is given, starting from the basic fluid equations and proceeding with a particular form of discretization of the wave-number space. We show that it is possible to keep all or a subset of the interactions, either local or disparate scale, and recover various limiting forms of shell models used in plasma and geophysical turbulence studies. The method makes no use of the conservation laws even though it respects the underlying conservation properties of the fluid equations. It gives a family of models ranging from shell models with nonlocal interactions to anisotropic shell models depending on the way the shells are constructed. Numerical integration of the model shows that energy and enstrophy equipartition seem to dominate over the dual cascade, which is a common problem of two-dimensional shell models.
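
    The logarithmic discretization underlying such shell models places shell n at wavenumber k_n = k_0 λ^n; a minimal sketch (λ = 2 chosen for illustration, other ratios such as the golden mean are also used):

    ```python
    import numpy as np

    k0, lam, N = 1.0, 2.0, 10          # base wavenumber, shell ratio, shell count
    k = k0 * lam ** np.arange(N)       # k_n = k0 * lam**n
    print(k)                           # ten shells span three decades
    print(np.diff(np.log(k)))          # constant spacing log(lam) in log k
    ```

    Equal spacing in log k is what lets a handful of shells cover the many decades of scales in a turbulent cascade; the interactions kept between neighbouring shells then determine which limiting shell model is recovered.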

  13. Crash test for groundwater recharge models: The effects of model complexity and calibration period on groundwater recharge predictions

    Science.gov (United States)

    Moeck, Christian; Von Freyberg, Jana; Schirmer, Mario

    2016-04-01

    An important question in recharge impact studies is how model choice, structure and calibration period affect recharge predictions. It is still unclear if a certain model type or structure is less affected by running the model on time periods with different hydrological conditions compared to the calibration period. This aspect, however, is crucial to ensure reliable predictions of groundwater recharge. In this study, we quantify and compare the effect of groundwater recharge model choice, model parametrization and calibration period in a systematic way. This analysis was possible thanks to a unique data set from a large-scale lysimeter in a pre-alpine catchment where daily long-term recharge rates are available. More specifically, the following issues are addressed: We systematically evaluate how the choice of hydrological models influences predictions of recharge. We assess how different parameterizations of models due to parameter non-identifiability affect predictions of recharge by applying a Monte Carlo approach. We systematically assess how the choice of calibration periods influences predictions of recharge within a differential split sample test focusing on the model performance under extreme climatic and hydrological conditions. Results indicate that all applied models (simple lumped to complex physically based models) were able to simulate the observed recharge rates for five different calibration periods. However, there was a marked impact of the calibration period when the complete 20 years validation period was simulated. Both seasonal and annual differences between simulated and observed daily recharge rates occurred when the hydrological conditions were different to the calibration period. These differences were, however, less distinct for the physically based models, whereas the simpler models over- or underestimate the observed recharge depending on the considered season. It is, however, possible to reduce the differences for the simple models by
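
    The skill comparison behind a (differential) split-sample test is commonly scored with the Nash-Sutcliffe efficiency; a minimal sketch on synthetic recharge series (not the lysimeter data), where the model is accurate in the calibration climate but biased in a drier validation climate:

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(0)
    obs_cal = rng.gamma(2.0, 1.0, 365)            # "wet" calibration period
    obs_val = 0.3 * rng.gamma(2.0, 1.0, 365)      # drier validation period
    sim_cal = obs_cal + rng.normal(0, 0.3, 365)   # small errors where calibrated
    sim_val = obs_val + 0.4                       # systematic bias outside calibration
    print(nse(obs_cal, sim_cal), nse(obs_val, sim_val))
    ```

    A marked drop in efficiency from the calibration to the validation period is exactly the signature of calibration-period dependence discussed above.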

  14. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  15. Are chiropractic tests for the lumbo-pelvic spine reliable and valid? A systematic critical literature review

    DEFF Research Database (Denmark)

    Hestbaek, L; Leboeuf-Yde, C

    2000-01-01

    OBJECTIVE: To systematically review the peer-reviewed literature about the reliability and validity of chiropractic tests used to determine the need for spinal manipulative therapy of the lumbo-pelvic spine, taking into account the quality of the studies. DATA SOURCES: The CHIROLARS database was searched for the years 1976 to 1995 with the following index terms: "chiropractic tests," "chiropractic adjusting technique," "motion palpation," "movement palpation," "leg length," "applied kinesiology," and "sacrooccipital technique." In addition, a manual search was performed at the libraries. Documentation of applied kinesiology was not available. Palpation for muscle tension, palpation for misalignment, and visual inspection were either undocumented, unreliable, or not valid. CONCLUSION: The detection of the manipulative lesion in the lumbo-pelvic spine depends on valid and reliable tests. Because...

  16. Clinical uncertainties, health service challenges, and ethical complexities of HIV "test-and-treat": a systematic review.

    Science.gov (United States)

    Kulkarni, Sonali P; Shah, Kavita R; Sarma, Karthik V; Mahajan, Anish P

    2013-06-01

    Despite the HIV "test-and-treat" strategy's promise, questions about its clinical rationale, operational feasibility, and ethical appropriateness have led to vigorous debate in the global HIV community. We performed a systematic review of the literature published between January 2009 and May 2012 using PubMed, SCOPUS, Global Health, Web of Science, BIOSIS, Cochrane CENTRAL, EBSCO Africa-Wide Information, and EBSCO CINAHL Plus databases to summarize clinical uncertainties, health service challenges, and ethical complexities that may affect the test-and-treat strategy's success. A thoughtful approach to research and implementation to address clinical and health service questions and meaningful community engagement regarding ethical complexities may bring us closer to safe, feasible, and effective test-and-treat implementation.

  18. Computerized classification testing with the Rasch model

    NARCIS (Netherlands)

    Eggen, Theo J.H.M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the
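
    The sequential-testing idea can be sketched as a Wald SPRT on the Rasch model: accumulate the log-likelihood ratio of two ability levels bracketing the cut score and stop at Wald's bounds. The ability levels and item difficulties below are invented for illustration, not taken from the cited work:

    ```python
    import math

    def rasch_p(theta, b):
        """Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def sprt_classify(responses, item_bs, theta0, theta1, alpha=0.05, beta=0.05):
        """Wald SPRT between H0: theta = theta0 (fail) and H1: theta = theta1 (pass)."""
        lo, hi = math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)
        llr = 0.0
        for x, b in zip(responses, item_bs):
            p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
            llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
            if llr >= hi:
                return "pass"
            if llr <= lo:
                return "fail"
        return "undecided"  # keep testing, or force a decision at test end

    # six correct answers on average-difficulty items cross the upper bound
    print(sprt_classify([1] * 6, [0.0] * 6, theta0=-0.5, theta1=0.5))  # → pass
    ```

    In an adaptive variant the next item difficulty is also chosen to maximize the expected information about the classification, which is what makes such CCTs shorter than fixed-length tests.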

  19. Standardized tests of handwriting readiness: a systematic review of the literature

    NARCIS (Netherlands)

    Hartingsveldt, M.J. van; Groot, I.J.M. de; Aarts, P.B.M.; Nijhuis-Van der Sanden, M.W.G.

    2011-01-01

    AIM: To establish if there are psychometrically sound standardized tests or test items to assess handwriting readiness in 5- and 6-year-old children on the levels of occupations activities/tasks and performance. METHOD: Electronic databases were searched to identify measurement instruments. Tests we

  20. Bacteriophage-based tests for the detection of Mycobacterium tuberculosis in clinical specimens: a systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Pascopella Lisa

    2005-07-01

    Abstract Background Sputum microscopy, the most important conventional test for tuberculosis, is specific in settings with a high burden of tuberculosis and low prevalence of non-tuberculous mycobacteria. However, the test lacks sensitivity. Although bacteriophage-based tests for tuberculosis have shown promising results, their overall accuracy has not been systematically evaluated. Methods We did a systematic review and meta-analysis of published studies to evaluate the accuracy of phage-based tests for the direct detection of M. tuberculosis in clinical specimens. To identify studies, we searched Medline, EMBASE, Web of Science and BIOSIS, and contacted authors, experts and test manufacturers. Thirteen studies, all based on the phage amplification method, met our inclusion criteria. Overall accuracy was evaluated using forest plots, summary receiver operating characteristic (SROC) curves, and subgroup analyses. Results The data suggest that phage-based assays have high specificity (range 0.83 to 1.00), but modest and variable sensitivity (range 0.21 to 0.88). The sensitivity ranged between 0.29 and 0.87 among smear-positive, and 0.13 to 0.78 among smear-negative specimens. The specificity ranged between 0.60 and 0.88 among smear-positive and 0.89 to 0.99 among smear-negative specimens. SROC analyses suggest that overall accuracy of phage-based assays is slightly higher than smear microscopy in direct head-to-head comparisons. Conclusion Phage-based assays have high specificity but lower and variable sensitivity. Their performance characteristics are similar to sputum microscopy. Phage assays cannot replace conventional diagnostic tests such as microscopy and culture at this time. Further research is required to identify methods that can enhance the sensitivity of phage-based assays without compromising the high specificity.
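
    SROC analyses of this kind are often the Moses-Littenberg construction; whether this review used exactly that variant is not stated here, so the sketch below is generic and the study points are invented:

    ```python
    import numpy as np

    def logit(p):
        return np.log(p / (1.0 - p))

    # Per study: D = logit(TPR) - logit(FPR) is the log diagnostic odds ratio,
    # S = logit(TPR) + logit(FPR) is a proxy for the positivity threshold.
    tpr = np.array([0.30, 0.45, 0.60, 0.75, 0.88])  # invented sensitivities
    fpr = np.array([0.01, 0.03, 0.06, 0.10, 0.17])  # invented 1 - specificities
    D, S = logit(tpr) - logit(fpr), logit(tpr) + logit(fpr)
    slope, intercept = np.polyfit(S, D, 1)          # regress D on S
    print(np.exp(intercept))  # summary diagnostic odds ratio at S = 0
    ```

    Back-transforming the fitted line traces the summary ROC curve across thresholds; hierarchical models (bivariate or HSROC) are now preferred over this simple regression.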

  1. Friction at seismic slip rates: testing thermal weakening models experimentally

    Science.gov (United States)

    Nielsen, S. B.; Spagnuolo, E.; Violay, M.; Di Toro, G.

    2013-12-01

Recent experiments systematically explore rock friction under crustal earthquake conditions (fast slip rate of 1 … We design an efficient and accurate wavenumber approximation for a solution of the temperature evolution on the fault. Finally, we propose a compact and practical model based on a small number of memory variables for the implementation of thermal weakening friction in seismic fault simulations.

  2. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B

    2009-01-01

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering." -Dev G.

  3. Impact of moisture divergence on systematic errors in precipitation around the Tibetan Plateau in a general circulation model

    Science.gov (United States)

    Zhang, Yi; Li, Jian

    2016-11-01

Current state-of-the-art atmospheric general circulation models tend to strongly overestimate the amount of precipitation around steep mountains, a stubborn systematic error that causes climate drift and hinders model performance. In this study, two contrasting model tests are performed to investigate the sensitivity of precipitation around steep slopes. The first model solves a true moisture advection equation, whereas the second solves an artificial advection equation with an additional moisture divergence term. It is shown that the orographic precipitation can be largely impacted by this term. Excessive (insufficient) precipitation amounts at the high (low) parts of the steep slopes decrease (increase) when the moisture divergence term is added. The precipitation changes between the two models are primarily attributed to large-scale precipitation, which is directly associated with water vapor saturation and condensation. Numerical weather prediction experiments using these two models suggest that precipitation differences between the models emerge shortly after the model startup. The implications of the results are also discussed.
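The contrast between the two model tests can be written compactly. A sketch, assuming q denotes specific humidity, **v** the wind field and S_q the net moisture source from physics (symbols are mine, not the article's):

```latex
% True (advective) form of moisture transport solved by the first model:
\frac{\partial q}{\partial t} + \mathbf{v}\cdot\nabla q = S_q
% Artificial equation of the second model, with an additional moisture
% divergence term q\,\nabla\cdot\mathbf{v}, equivalent to the flux form:
\frac{\partial q}{\partial t} + \mathbf{v}\cdot\nabla q
  + q\,\nabla\cdot\mathbf{v}
  = \frac{\partial q}{\partial t} + \nabla\cdot\left(q\,\mathbf{v}\right)
  = S_q
```

Around steep slopes, where ∇·**v** is large, the extra term q ∇·**v** changes the local moisture budget, which is consistent with the precipitation sensitivity reported above.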

  4. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review

    Directory of Open Access Journals (Sweden)

    Julia Ratter

    2014-09-01

[Ratter J, Radlinger L, Lucas C (2014) Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review. Journal of Physiotherapy 60: 144–150]

  5. Oro-facial functions in experimental models of cerebral palsy: a systematic review.

    Science.gov (United States)

    Lacerda, D C; Ferraz-Pereira, K N; Bezerra de Morais, A T; Costa-de-Santana, B J R; Quevedo, O G; Manhães-de-Castro, R; Toscano, A E

    2017-04-01

Children who suffer from cerebral palsy (CP) often present comorbidities in the form of oro-facial dysfunctions. Studies in animals have contributed to the development of potential therapies aimed at minimising the chronic disability of the syndrome. The aim was to systematically review the scientific literature regarding the possible effects that experimental models of CP can have on oro-facial functions. Two independent authors conducted a systematic review in the electronic databases Medline, Scopus, CINAHL, Web of Science and Lilacs, using MeSH and DeCS terms in animal models. The motor and sensory parameters of sucking, chewing and swallowing were considered as primary outcomes; reactivity to odour, controlled salivation, postural control, head mobility during feeding and the animal's ability to acquire food were secondary outcomes. Ten studies were included in the present review. Most studies used rabbits as experimental models of CP, which was induced by either hypoxia-ischemia, inflammation or intraventricular haemorrhage. Oro-facial functions were altered in all experimental models of CP; overall, we found more modifications in hypoxia-ischemia models. On the other hand, the inflammation model was more effective at reproducing greater impairment in the coordination of sucking and swallowing. All of the CP experimental models that were assessed modified oral functions in different animal species. However, further studies should be conducted in order to clarify the mechanisms underlying oro-facial damage in order to optimise treatment strategies for children who suffer from CP. © 2017 John Wiley & Sons Ltd.

  6. Direct-to-consumer genomic testing from the perspective of the health professional: a systematic review of the literature.

    Science.gov (United States)

    Goldsmith, Lesley; Jackson, Leigh; O'Connor, Anita; Skirton, Heather

    2013-04-01

Since the 1990s, there has been a rapid expansion in the number and type of genetic tests available via health professionals; the last 10 years, however, have seen certain types of genetic and genomic tests become available direct-to-consumer. The aim of this systematic review was to explore the topic of direct-to-consumer genetic testing from the health professional perspective. Search terms used to identify studies were 'direct-to-consumer', 'personal genom*', 'health* professional*', 'physician*', 'genomic' and 'genetic' in five bibliographic databases, together with citation searching. Eight quantitative papers were reviewed. Findings indicate a low level of awareness and experience of direct-to-consumer testing in health professionals. Inconsistent levels of knowledge and understanding were also found, with two studies showing significant effects for gender and age. Concerns about clinical utility and lack of counselling were identified. Health professionals specialising in genetics were most likely to express concerns. There was also evidence of perceived increased workload for health professionals post-testing. However, some health professionals rated such tests clinically useful and cited benefits such as the increased opportunity for early screening. Despite limited awareness, knowledge and experience of actual cases, we concluded that the concerns and potential benefits expressed may be warranted. It may be useful to explore the attitudes and experiences of health professionals in more depth using a qualitative approach. Finally, it is essential that health professionals receive sufficient education and guidelines to equip them to help patients presenting with the results of these tests.

  7. Decoding Beta-Decay Systematics: A Global Statistical Model for Beta^- Halflives

    CERN Document Server

    Costiris, N J; Gernoth, K A; Clark, J W

    2008-01-01

Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the halflives of nuclear ground states that decay 100% by the beta^- mode. More specifically, fully-connected, multilayer feedforward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as th...

  8. Systematic Selection of Key Logistic Regression Variables for Risk Prediction Analyses: A Five-Factor Maximum Model.

    Science.gov (United States)

    Hewett, Timothy E; Webster, Kate E; Hurd, Wendy J

    2017-08-16

The evolution of clinical practice and medical technology has yielded an increasing number of clinical measures and tests to assess a patient's progression and return-to-sport readiness after injury. The plethora of available tests may be burdensome to clinicians in the absence of evidence that demonstrates the utility of a given measurement. Thus, there is a critical need to identify a discrete number of metrics to capture during clinical assessment to effectively and concisely guide patient care. The data sources included PubMed and PubMed Central articles on the topic. We present a systematic approach to injury risk analyses and how this concept may be used in algorithms for risk analyses for primary anterior cruciate ligament (ACL) injury in healthy athletes and patients after ACL reconstruction. In this article, we present the five-factor maximum model, which states that in any predictive model, a maximum of 5 variables will contribute in a meaningful manner to any risk factor analysis. We demonstrate how this model already exists for prevention of primary ACL injury, how this model may guide development of the second ACL injury risk analysis, and how the five-factor maximum model may be applied across the injury spectrum for development of the injury risk analysis.
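The five-factor maximum model caps a risk prediction at five logistic regression variables. A sketch of what such a capped model computes, with made-up coefficients and predictor values (none are from the article):

```python
import math

def acl_risk(x, coef, intercept):
    """Predicted injury probability from a logistic regression model
    limited to at most five predictors (the five-factor maximum)."""
    assert len(coef) <= 5, "five-factor maximum: no more than 5 variables"
    z = intercept + sum(b * xi for b, xi in zip(coef, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients and measurements -- purely illustrative
coef = [0.8, 0.5, -0.3, 0.4, 0.2]
p = acl_risk([1.2, 0.5, 1.0, 0.0, 2.0], coef, intercept=-3.0)
```

The cap is a modeling discipline, not a library feature: with more than five predictors, the claim is that additional variables no longer contribute meaningfully to the risk analysis.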

  9. A systematic approach for comparing modeled biospheric carbon fluxes across regional scales

    Directory of Open Access Journals (Sweden)

    D. N. Huntzinger

    2011-06-01

Given the large differences between biospheric model estimates of regional carbon exchange, there is a need to understand and reconcile the predicted spatial variability of fluxes across models. This paper presents a set of quantitative tools that can be applied to systematically compare flux estimates despite the inherent differences in model formulation. The presented methods include variogram analysis, variable selection, and geostatistical regression. These methods are evaluated in terms of their ability to assess and identify differences in spatial variability in flux estimates across North America among a small subset of models, as well as differences in the environmental drivers that best explain the spatial variability of predicted fluxes. The examined models are the Simple Biosphere model (SiB 3.0), the Carnegie Ames Stanford Approach (CASA), and CASA coupled with the Global Fire Emissions Database (CASA GFEDv2), and the analyses are performed on model-predicted net ecosystem exchange, gross primary production, and ecosystem respiration. Variogram analysis reveals consistent seasonal differences in spatial variability among modeled fluxes at a 1° × 1° spatial resolution. However, significant differences are observed in the overall magnitude of the carbon flux spatial variability across models, in both net ecosystem exchange and component fluxes. Results of the variable selection and geostatistical regression analyses suggest fundamental differences between the models in terms of the factors that explain the spatial variability of predicted flux. For example, carbon flux is more strongly correlated with percent land cover in CASA GFEDv2 than in SiB or CASA. Some of the differences in spatial patterns of estimated flux can be linked back to differences in model formulation, and would have been difficult to identify simply by comparing net fluxes between models. Overall, the systematic approach presented here provides a set of tools for comparing
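Of the listed tools, the variogram is the most mechanical to illustrate. A toy empirical semivariogram over a 1-D transect of flux values (hypothetical numbers, not model output):

```python
def empirical_variogram(values, max_lag):
    """Empirical semivariance gamma(h): the mean of 0.5*(z_i - z_{i+h})**2
    over all pairs of points separated by lag h in a 1-D series."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [0.5 * (values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gamma[h] = sum(diffs) / len(diffs)
    return gamma

# Toy 1-D transect of flux estimates (hypothetical numbers)
flux = [1.0, 1.2, 0.9, 1.4, 1.1, 0.8, 1.3]
g = empirical_variogram(flux, max_lag=3)
```

Comparing the shape and magnitude of gamma(h) between two models is the 1-D analogue of the spatial-variability comparison the paper performs on gridded 2-D fluxes.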

  10. A systematic review of repetitive functional task practice with modelling of resource use, costs and effectiveness.

    Science.gov (United States)

    French, B; Leathley, M; Sutton, C; McAdam, J; Thomas, L; Forster, A; Langhorne, P; Price, C; Walker, A; Watkins, C

    2008-07-01

To determine whether repetitive functional task practice (RFTP) after stroke improves limb-specific or global function or activities of daily living, and whether treatment effects are dependent on the amount of practice, or the type or timing of the intervention. Also to provide estimates of the cost-effectiveness of RFTP. The main electronic databases were searched from inception to week 4, September 2006. Searches were also carried out on non-English-language databases and for unpublished trials up to May 2006. Standard quantitative methods were used to conduct the systematic review. The measures of efficacy of RFTP from the data synthesis were used to inform an economic model. The model used a pre-existing data set and tested the potential impact of RFTP on cost. An incremental cost per quality-adjusted life-year (QALY) gained for RFTP was estimated from the model. Sensitivity analyses around the assumptions made for the model were used to test the robustness of the estimates. Thirty-one trials with 34 intervention-control pairs and 1078 participants were included. Overall, it was found that some forms of RFTP resulted in improvement in global function, and in both arm and lower limb function. Overall standardised mean difference in data suitable for pooling was 0.38 [95% confidence interval (CI) 0.09 to 0.68] for global motor function, 0.24 (95% CI 0.06 to 0.42) for arm function and 0.28 (95% CI 0.05 to 0.51) for functional ambulation. Results suggest that training may be sufficient to have an impact on activities of daily living. Retention effects of training persist for up to 6 months, but whether they persist beyond this is unclear. There was little or no evidence that treatment effects overall were modified by time since stroke or dosage of task practice, but results for upper limb function were modified by type of intervention. The economic modelling suggested that RFTP was cost-effective given a threshold for cost-effectiveness of £20,000 per QALY.
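The cost-per-QALY figure behind such a statement is an incremental cost-effectiveness ratio (ICER). A sketch with invented costs and QALYs, compared against the £20,000 threshold mentioned above:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the new intervention relative to usual care."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical figures, not the review's results
ratio = icer(cost_new=12000.0, cost_old=10500.0,
             qaly_new=6.30, qaly_old=6.20)
cost_effective = ratio <= 20000.0   # UK-style threshold of 20,000 pounds/QALY
```

Sensitivity analyses of the kind the review describes amount to recomputing this ratio while varying the cost and effect inputs over plausible ranges.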

  11. A systematic approach to obtain validated partial least square models for predicting lipoprotein subclasses from serum NMR spectra.

    Science.gov (United States)

    Mihaleva, Velitchka V; van Schalkwijk, Daniël B; de Graaf, Albert A; van Duynhoven, John; van Dorsten, Ferdinand A; Vervoort, Jacques; Smilde, Age; Westerhuis, Johan A; Jacobs, Doris M

    2014-01-07

A systematic approach is described for building validated PLS models that predict cholesterol and triglyceride concentrations in lipoprotein subclasses in fasting serum from a normolipidemic, healthy population. The PLS models were built on diffusion-edited (1)H NMR spectra and calibrated on HPLC-derived lipoprotein subclasses. The PLS models were validated using an independent test set. In addition to total VLDL, LDL, and HDL lipoproteins, statistically significant PLS models were obtained for 13 subclasses, including 5 VLDLs (particle size 64-31.3 nm), 4 LDLs (particle size 28.6-20.7 nm) and 4 HDLs (particle size 13.5-9.8 nm). The best models were obtained for triglycerides in VLDL (0.82 < Q(2) < 0.92) and HDL (0.69 < Q(2) < 0.79) subclasses and for cholesterol in HDL subclasses (0.68 < Q(2) < 0.96). Larger variations in the model performance were observed for triglycerides in LDL subclasses and cholesterol in VLDL and LDL subclasses. The potential of the NMR-PLS model was assessed by comparing the LPD of 52 subjects before and after a 4-week treatment with dietary supplements that were hypothesized to change blood lipids. The supplements induced significant (p < 0.001) changes on multiple subclasses, all of which clearly exceeded the prediction errors.

  12. TURBHO - Higher order turbulence modeling for industrial applications. Design document: Module Test Phase (MTP). Software engineering module: Testing; TURBHO. Turbulenzmodellierung hoeherer Ordnung fuer industrielle Anwendungen. Design document: Module Test Phase (MTP). Software engineering module: testing

    Energy Technology Data Exchange (ETDEWEB)

    Grotjans, H.

    1998-11-19

In the current Software Engineering Module (SEM-4) new physical model implementations have been tested and additional complex test cases have been investigated with the available models. For all validation test cases it has been shown that the computed results are grid independent. This has been done by systematic grid refinement studies. No grid independence has been shown so far for the Aerospatiale-A airfoil, the draft tube flow, the transonic bump flow and the impinging jet flow. Most of the main objectives of the current SEM, cf. Chapter 1, are fulfilled. These are the verification of the alternative pressure-strain term (SSG model), the implementation of a swirl correction for the standard κ-ε turbulence model and the assembling of additional test cases. However, few results are available so far for the industrial test cases. These have to be provided in the remaining time of this project. The implementation of the low-Reynolds model has not been completed in this SEM as the other topics were preferred for completion. In addition to the planned items, two models have been implemented and tested. These are the wall distance equation, which is considered to be an important part of a low-Reynolds model implementation, and the κ-ω turbulence model. (orig.)

  13. A systematic evaluation of a multidisciplinary social work-lawyer elder mistreatment intervention model.

    Science.gov (United States)

    Rizzo, Victoria M; Burnes, David; Chalfy, Amy

    2015-01-01

    This study introduces a conceptually based, systematic evaluation process employing multivariate techniques to evaluate a multidisciplinary social work-lawyer intervention model (JASA-LEAP). Logistic regression analyses were used with a random sample of case records (n = 250) from three intervention sites. Client retention, program fidelity, and exposure to multidisciplinary services were significantly related to reduction in mistreatment risk at case closure. Female gender, married status, and living with perpetrator significantly predicted unfavorable outcomes. This study extends the elder mistreatment program evaluation literature beyond descriptive/bivariate evaluation strategies. Findings suggest that a multidisciplinary social work-lawyer elder mistreatment intervention model is a successful approach.

  14. Microscopic Calibration and Validation of Car-Following Models -- A Systematic Approach

    CERN Document Server

    Treiber, Martin

    2014-01-01

Calibration and validation techniques are crucial in assessing the descriptive and predictive power of car-following models and their suitability for analyzing traffic flow. Using real and generated floating-car and trajectory data, we systematically investigate the following aspects: data requirements and preparation; the conceptual approach, including local maximum-likelihood and global LSE calibration with several objective functions; the influence of the data sampling rate and measuring errors; the effect of data smoothing on the calibration result; and model performance in terms of fitting quality, robustness, parameter orthogonality, completeness and plausible parameter values.
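Global LSE calibration can be illustrated on a deliberately minimal car-following relation. This is a sketch with a hypothetical linear gap-speed model and synthetic data, not Treiber's actual setup:

```python
def calibrate_gap_model(speeds, gaps):
    """Global least-squares calibration of a minimal equilibrium relation
    gap = s0 + T * speed (minimum gap s0, time gap T) via closed-form OLS."""
    n = len(speeds)
    mv = sum(speeds) / n
    mg = sum(gaps) / n
    T = (sum((v - mv) * (g - mg) for v, g in zip(speeds, gaps))
         / sum((v - mv) ** 2 for v in speeds))
    s0 = mg - T * mv
    sse = sum((g - (s0 + T * v)) ** 2 for v, g in zip(speeds, gaps))
    return s0, T, sse

# Synthetic "floating-car" observations: speed (m/s) vs. gap (m)
speeds = [5.0, 10.0, 15.0, 20.0]
gaps = [12.0, 20.0, 27.0, 36.0]
s0, T, sse = calibrate_gap_model(speeds, gaps)
```

A real calibration replaces the closed-form fit with numerical minimization of an objective function over the full trajectory, which is where the paper's questions about sampling rate, smoothing and parameter orthogonality arise.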

  15. Beta-2 receptor antagonists for traumatic brain injury: a systematic review of controlled trials in animal models.

    Science.gov (United States)

    Ker, K; Perel, P; Blackhall, K

    2009-01-01

A systematic review and meta-analysis of controlled trials was undertaken to assess the effects of beta-2 receptor antagonists in animal models of traumatic brain injury (TBI). Database and reference list searches were performed to identify eligible studies. Outcome data were extracted on functional status, as measured by the grip test or neurological severity score (NSS), and cerebral edema, as measured by brain water content (BWC). Data were pooled using the random-effects model. Seventeen controlled trials involving 817 animals were identified. Overall methodological quality was poor. Results from the grip test suggest that the treatment group maintained grip for a longer period than the control group; pooled weighted mean difference (WMD) = 8.28 (95% CI 5.78 to 10.78). The treatment group was found to have a lower NSS (i.e., better neurological function); pooled WMD = -3.28 (95% CI -4.72 to -1.85). Analysis of the cerebral edema data showed that the treatment group had a lower BWC than the control; pooled WMD = -0.42 (95% CI -0.59 to -0.26). There was evidence of statistical heterogeneity between comparisons for all outcomes. Evidence for small study effects was found for the grip test and BWC outcomes. The evidence from animal models of TBI suggests that beta-2 receptor antagonists can improve functional outcome and lessen cerebral edema. However, the poor methodological quality of the included studies and presence of small study effects may have influenced these findings.
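The pooled WMDs above come from a random-effects model; the DerSimonian-Laird estimator is the standard choice. A self-contained sketch with invented per-trial effects and variances:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) with a 95% CI,
    as used to pool weighted mean differences across trials."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-trial variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-trial mean differences and variances (not the review's data)
est, ci = dersimonian_laird([8.0, 9.5, 7.2], [1.2, 2.0, 0.8])
```

When between-trial heterogeneity (tau²) is large, as the review's heterogeneity findings suggest, the random-effects CI widens relative to a fixed-effect analysis.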

  16. Economic Evaluations of Multicomponent Disease Management Programs with Markov Models: A Systematic Review.

    Science.gov (United States)

    Kirsch, Florian

    2016-12-01

Disease management programs (DMPs) for chronic diseases are being increasingly implemented worldwide. To present a systematic overview of the economic effects of DMPs with Markov models. The quality of the models is assessed, the method by which the DMP intervention is incorporated into the model is examined, and the differences in the structure and data used in the models are considered. A literature search was conducted; the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement was followed to ensure systematic selection of the articles. Study characteristics, e.g. results, the intensity of the DMP and usual care, model design, time horizon, discount rates, utility measures, and cost of illness, were extracted from the reviewed studies. Model quality was assessed by two researchers with two different appraisals: one proposed by Philips et al. (Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006;24:355-71) and the other proposed by Caro et al. (Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health 2014;17:174-82). A total of 16 studies (9 on chronic heart disease, 2 on asthma, and 5 on diabetes) met the inclusion criteria. Five studies reported cost savings and 11 studies reported additional costs. In the quality assessment, the overall score of the models ranged from 39% to 65% on the Philips et al. appraisal and from 34% to 52% on the Caro et al. appraisal. Eleven models integrated effectiveness derived from a clinical trial or a meta-analysis of complete DMPs and only five models combined intervention effects from different sources into a DMP. The main limitations of the models are bad reporting practice and the variation in the selection of input parameters. 
Eleven of the 14 studies reported cost-effectiveness results of less than $30,000 per quality-adjusted life-year and
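The economic engine of such studies is a cohort Markov model: a cohort is pushed through a transition matrix each cycle while discounted costs and QALYs accumulate. A sketch with illustrative states and numbers (not from any reviewed study):

```python
def markov_cohort(trans, costs, utilities, cycles, disc=0.03):
    """Discounted total cost and QALYs for a three-state cohort Markov
    model, e.g. states [well, sick, dead] (illustrative numbers only)."""
    state = [1.0, 0.0, 0.0]            # whole cohort starts in "well"
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t    # discount factor for cycle t
        total_cost += d * sum(p * c for p, c in zip(state, costs))
        total_qaly += d * sum(p * u for p, u in zip(state, utilities))
        state = [sum(state[i] * trans[i][j] for i in range(3))
                 for j in range(3)]
    return total_cost, total_qaly

trans = [[0.85, 0.10, 0.05],   # well -> well / sick / dead
         [0.00, 0.80, 0.20],   # sick
         [0.00, 0.00, 1.00]]   # dead is absorbing
cost, qaly = markov_cohort(trans, costs=[500.0, 4000.0, 0.0],
                           utilities=[0.95, 0.60, 0.0], cycles=10)
```

A DMP intervention would typically enter such a model by modifying the transition probabilities (e.g. fewer well-to-sick transitions) or the per-state costs, which is the incorporation step the review scrutinizes.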

  17. Systematic Testing should not be a Topic in the Computer Science Curriculum!

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2003-01-01

In this paper we argue that treating "testing" as an isolated topic is the wrong approach in computer science and software engineering teaching. Instead, testing should pervade practical topics and exercises in the computer science curriculum to teach students the importance of producing software...

  18. Reference value for the 6-minute walk test in children and adolescents : A systematic review

    NARCIS (Netherlands)

    Mylius, C F; Paap, D; Takken, T

    2016-01-01

INTRODUCTION: The 6-minute walk test is a submaximal exercise test used to quantify the functional exercise capacity in clinical populations. It measures the distance walked within a period of 6 minutes. Obtaining reference values in the pediatric population is especially demanding due to factors such as

  19. Criterion-concurrent validity of spinal mobility tests in ankylosing spondylitis: a systematic review of the literature.

    Science.gov (United States)

    Castro, Marcelo P; Stebbings, Simon M; Milosavljevic, Stephan; Bussey, Melanie D

    2015-02-01

    To examine the level of evidence for criterion-concurrent validity of spinal mobility assessments in patients with ankylosing spondylitis (AS). Guidelines proposed in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses were used to undertake a search strategy involving 3 sets of keywords: accura*, truth, valid*; ankylosing spondylitis, spondyloarthritis, spondyloarthropathy, spondylarthritis; mobility, spinal measure*, (a further 16 keywords with similar meaning were used). Seven databases were searched from their inception to February 2014: AMED, Embase, ProQuest, PubMed, Science Direct, Scopus, and Web of Science. The Quality Assessment of Diagnostic Accuracy Studies (with modifications) was used to assess the quality of articles reviewed. An article was considered high quality when it received "yes" in at least 9 of the 13 items. From the 741 records initially identified, 10 articles were retained for our systematic review. Only 1 article was classified as high quality, and this article suggests that 3 variants of the Schober test (original, modified, and modified-modified) poorly reflect lumbar range of motion where radiographs were used as the reference standard. The level of evidence considering criterion-concurrent validity of clinical tests used to assess spinal mobility in patients with AS is low. Clinicians should be aware that current practice when measuring spinal mobility in AS may not accurately reflect true spinal mobility.

  20. Putting hydrological modelling practice to the test

    NARCIS (Netherlands)

    Melsen, Lieke Anna

    2017-01-01

Six steps can be distinguished in the process of hydrological modelling: the perceptual model (deciding on the processes), the conceptual model (deciding on the equations), the procedural model (getting the code to run on a computer), calibration (identifying the parameters), evaluation (confronting output

  1. Fitness tests and occupational tasks of military interest: a systematic review of correlations.

    Science.gov (United States)

    Hauschild, Veronique D; DeGroot, David W; Hall, Shane M; Grier, Tyson L; Deaver, Karen D; Hauret, Keith G; Jones, Bruce H

    2017-02-01

Physically demanding occupations (ie, military, firefighter, law enforcement) often use fitness tests for job selection or retention. Despite numerous individual studies, the relationship of these tests to job performance is not always clear. This review examined the relationship by aggregating previously reported correlations between different fitness tests and common occupational tasks. Search criteria were applied to PUBMED, EBSCO, EMBASE and military sources; scoring yielded 27 original studies providing 533 Pearson correlation coefficients (r) between fitness tests and 12 common physical job task categories. Fitness tests were grouped into predominant health-related fitness components and body regions: cardiorespiratory endurance (CRe); upper body, lower body and trunk muscular strength and muscular endurance (UBs, LBs, TRs, UBe, LBe, TRe); and flexibility (FLX). Meta-analyses provided pooled r's between each fitness component and task category. The CRe tests had the strongest pooled correlations with most tasks (eight pooled r values 0.80-0.52). Next were LBs (six pooled r values >0.50) and UBe (four pooled r values >0.50). UBs and LBe correlated strongly to three tasks. TRs, TRe and FLX did not strongly correlate to tasks. Employers can maximise the relevancy of assessing workforce health by using fitness tests with strong correlations between fitness components and job performance, especially those that are also indicators for injury risk. Potentially useful field-expedient tests include timed runs (CRe), jump tests (LBs) and push-ups (UBe). Impacts of gender and physiological characteristics (eg, lean body mass) should be considered in future study and when implementing tests.
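Pooling Pearson correlations across studies is commonly done on Fisher's z scale, weighting by n − 3. A sketch with hypothetical study values (the method is standard; the numbers are not from the review):

```python
import math

def pooled_r(rs, ns):
    """Pool Pearson correlations via Fisher's z transform,
    weighting each study by its sample size minus 3."""
    zs = [math.atanh(r) for r in rs]       # r -> z
    w = [n - 3 for n in ns]
    z_bar = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    return math.tanh(z_bar)               # z -> r

# Hypothetical correlations between a timed run and one task category
r = pooled_r([0.60, 0.72, 0.55], [40, 85, 60])
```

The z transform stabilizes the variance of r, so the weighted average behaves better than averaging the raw correlations, especially near |r| = 1.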

  2. Value of physical tests in diagnosing cervical radiculopathy: a systematic review.

    Science.gov (United States)

    Thoomes, Erik J; van Geest, Sarita; van der Windt, Danielle A; Falla, Deborah; Verhagen, Arianne P; Koes, Bart W; Thoomes-de Graaf, Marloes; Kuijper, Barbara; Scholten-Peeters, Wendy Gm; Vleggeert-Lankamp, Carmen L

    2017-08-21

Background context: In clinical practice, the diagnosis of cervical radiculopathy is based on information from the patient history, physical examination and diagnostic imaging. Various physical tests may be performed, but their diagnostic accuracy is unknown. Purpose: To summarize and update the evidence on diagnostic performance of tests carried out during a physical examination for the diagnosis of cervical radiculopathy. Study design: Review of the accuracy of diagnostic tests. Study sample: Diagnostic studies comparing results of tests performed during a physical examination in diagnosing cervical radiculopathy with a reference standard of imaging or surgical findings. Outcome measures: Sensitivity, specificity and likelihood ratios are presented, together with pooled results for sensitivity and specificity. Methods: A literature search up to March 2016 was performed in CENTRAL, PubMed (MEDLINE), EMBASE, CINAHL, Web of Science and Google Scholar. Methodological quality of studies was assessed using the QUADAS-2. Results: Five diagnostic accuracy studies were identified. Only Spurling's test was evaluated in more than one study, showing high specificity ranging from 0.89 to 1.00 (95% CI: 0.59-1.00); sensitivity varied from 0.38 to 0.97 (95% CI: 0.21-0.99). No studies were found that assessed the diagnostic accuracy of widely used neurological tests such as key muscle strength, tendon reflexes and sensory impairments. Conclusions: There is limited evidence for accuracy of physical examination tests for the diagnosis of cervical radiculopathy. When consistent with the patient history, clinicians may use a combination of Spurling's, axial traction and an Arm Squeeze test to increase the likelihood of a cervical radiculopathy; whereas a negative combined neurodynamic testing and an Arm Squeeze test could be used to rule out the disorder. Copyright © 2017 Elsevier Inc. All rights reserved.
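Sensitivity and specificity convert into the likelihood ratios reported here, which in turn update a pre-test probability through odds. A sketch using illustrative mid-range figures for Spurling's test (the exact values vary across the reviewed studies):

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sens / (1.0 - spec), (1.0 - sens) / spec

def post_test_prob(pre, lr):
    """Update a pre-test probability with a likelihood ratio via odds."""
    odds = pre / (1.0 - pre) * lr
    return odds / (1.0 + odds)

# Illustrative figures within the review's reported ranges
lr_pos, lr_neg = likelihood_ratios(sens=0.60, spec=0.95)
p = post_test_prob(0.30, lr_pos)   # probability after a positive Spurling's
```

This is why a high-specificity test like Spurling's is useful for ruling radiculopathy in: a large LR+ moves a moderate pre-test probability sharply upward, while the modest LR- explains why a negative result alone rules little out.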

  3. Hydraulic model tests on modified Wave Dragon. Phase 3

    Energy Technology Data Exchange (ETDEWEB)

    Hald, T.; Lynggaard, J.

    2002-11-01

The purpose of this report is to describe the model tests conducted with a newly designed 2nd-generation WD model, as well as the obtained model test results. Tests are conducted as sequential reconstruction followed by physical model tests. All details concerning the reconstruction are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: 'Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequential tests of changes to the model geometry and mass distribution parameters', sponsored by the Danish Energy Agency (DEA) wave energy programme. The tests will establish a well-documented basis for the development of a 1:4.5 scale prototype planned for testing in Nissum Bredning, a sea inlet on the Danish west coast. (au)
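Hydraulic scale models like these are typically run under Froude similarity, so velocities and wave periods scale with the square root of the length ratio. A sketch of the standard scaling laws for fresh water (the specific factors are illustrative, not from the report):

```python
import math

def froude_scale(length_ratio):
    """Froude scaling factors for a hydraulic model at scale 1:L:
    velocities and times scale by sqrt(L), forces by L**3."""
    return {"length": length_ratio,
            "velocity": math.sqrt(length_ratio),
            "time": math.sqrt(length_ratio),
            "force": length_ratio ** 3}

# Going from the 1:50 lab model toward the planned 1:4.5 prototype
lab = froude_scale(50.0)
proto = froude_scale(4.5)
period_factor = math.sqrt(50.0 / 4.5)   # wave-period ratio between the scales
```

In practice this is how results measured on the 1:50 model (wave periods, overtopping rates, mooring forces) would be translated to the 1:4.5 Nissum Bredning prototype.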

  4. Real-time screening tests for functional alignment of the trunk and lower extremities in adolescent – a systematic review

    DEFF Research Database (Denmark)

    Junge, Tina; Wedderkopp, N; Juul-Kristensen, B

    2012-01-01

    Introduction: The presence of neuromuscular imbalance (Hewett, 2010) or functional malalignment may be part of the increased rate of adolescent knee injuries, specifically of the anterior cruciate ligament (ACL). Abnormal lateral trunk and knee abduction moments are possibly linked to the biomechanical mechanisms resulting in ACL injuries (Hewett, 2010). Prevention may therefore depend on identifying these potential injury risk factors. Screening tools must thus include patterns of typical movements in sport and leisure-time activities, consisting of high-load and multi-directional tests, focusing ... of knee alignment, there is a further need to evaluate the reliability and validity of real-time functional alignment tests before they can be used as screening tools for prevention of knee injuries among adolescents. Still, the next step in this systematic review is to evaluate the quality and feasibility ...


  5. The clinimetric properties of performance-based gross motor tests used for children with developmental coordination disorder: a systematic review.

    Science.gov (United States)

    Slater, Leanne M; Hillier, Susan L; Civetta, Lauren R

    2010-01-01

    Performance-based measures of gross motor skills are required for children with developmental coordination disorder to quantify motor ability and objectify change. Information related to psychometrics, clinical utility, feasibility, and client appropriateness and acceptability is needed so that clinicians and researchers are assured that they have chosen the most appropriate and robust tool. This review identified performance-based measures of gross motor skills for this population, and the research evidence for their clinimetric properties through a systematic literature search. Seven measures met the inclusion criteria and were appraised for their clinimetric properties. The Movement Assessment Battery for Children and the Test for Gross Motor Development (second version) scored highest on appraisal. The 2 highest scoring measures are recommended in the first instance for clinicians wishing to evaluate gross motor performance in children with developmental coordination disorder. However, both measures require further testing to increase confidence in their validity for this population.

  6. Testing AGN feedback models in galaxy evolution

    Science.gov (United States)

    Shin, Min-Su

    Galaxy formation and evolution are among the most challenging problems in astrophysics. A single galaxy has various components (stars, atomic and molecular gas, a supermassive black hole, and dark matter) and has interacted with its cosmic environment throughout its history. A key issue in understanding galaxy evolution is to find the dominant physical processes in the interactions between the components of a galaxy and between a galaxy and its environment. AGN feedback has been proposed as a key process to suppress late star formation in massive elliptical galaxies and as a general consequence of galaxy mergers and interactions. In this thesis, I investigate feedback effects from active galactic nuclei (AGN) using a new simulation code and data from the Sloan Digital Sky Survey. In the first chapter, I test purely mechanical AGN feedback models via a nuclear wind around the central SMBH in elliptical galaxies by comparing simulation results to four well-defined observational constraints: the mass ratio between the SMBH and its host galaxy, the lifetime of the quasar phase, the X-ray luminosity from the hot interstellar medium, and the mass fraction of young stars. Even though purely mechanical AGN feedback is commonly assumed in cosmological simulations, I find that it is inadequate, and cannot reproduce all four observational constraints simultaneously. This result suggests that both mechanical and radiative feedback modes are important physical processes. In the second chapter, I simulate the coevolution of the SMBH and its host galaxy under different environments, represented by different amounts of gas stripping. Though the connection between environment and galaxy evolution has been well studied, how environment affects the growth of the SMBH remains an open question. I find that strong gas stripping, which satellite galaxies might experience, strongly suppresses SMBH mass accretion and AGN activity. Moreover, the suppression of the SMBH growth is

  7. Measuring and modelling the effects of systematic non-adherence to mass drug administration.

    Science.gov (United States)

    Dyson, Louise; Stolk, Wilma A; Farrell, Sam H; Hollingsworth, T Déirdre

    2017-03-01

    It is well understood that the success or failure of a mass drug administration campaign critically depends on the level of coverage achieved. To that end coverage levels are often closely scrutinised during campaigns and the response to underperforming campaigns is to attempt to improve coverage. Modelling work has indicated, however, that the quality of the coverage achieved may also have a significant impact on the outcome. If the coverage achieved is likely to miss similar people every round then this can have a serious detrimental effect on the campaign outcome. We begin by reviewing the current modelling descriptions of this effect and introduce a new modelling framework that can be used to simulate a given level of systematic non-adherence. We formalise the likelihood that people may miss several rounds of treatment using the correlation in the attendance of different rounds. Using two very simplified models of the infection of helminths and non-helminths, respectively, we demonstrate that the modelling description used and the correlation included between treatment rounds can have a profound effect on the time to elimination of disease in a population. It is therefore clear that more detailed coverage data is required to accurately predict the time to disease elimination. We review published coverage data in which individuals are asked how many previous rounds they have attended, and show how this information may be used to assess the level of systematic non-adherence. We note that while the coverages in the data found range from 40.5% to 95.5%, still the correlations found lie in a fairly narrow range (between 0.2806 and 0.5351). This indicates that the level of systematic non-adherence may be similar even in data from different years, countries, diseases and administered drugs. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
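
The correlation-based description reviewed above can be illustrated with a toy simulation. The scheme below is a deliberate simplification assumed for illustration (each round, an individual repeats the previous round's behaviour with probability rho, otherwise attends independently with the coverage probability); it is not the authors' model, and the parameter values are merely chosen within the reported ranges.

```python
import random

def simulate_attendance(n_people=100_000, n_rounds=5, coverage=0.65, rho=0.4, seed=1):
    """Simulate MDA attendance where rho is the chance an individual simply
    repeats their previous round's behaviour: a simple way to induce
    between-round correlation while keeping marginal coverage fixed."""
    rng = random.Random(seed)
    never_treated = 0
    for _ in range(n_people):
        prev = rng.random() < coverage          # round 1: plain Bernoulli
        attended_any = prev
        for _ in range(n_rounds - 1):
            if rng.random() < rho:              # systematic component
                cur = prev
            else:                               # independent component
                cur = rng.random() < coverage
            attended_any |= cur
            prev = cur
        if not attended_any:
            never_treated += 1
    return never_treated / n_people

# With correlation, far more people miss every round than under independence
print(simulate_attendance(rho=0.0))   # ~ (1 - 0.65)**5 ≈ 0.005
print(simulate_attendance(rho=0.4))   # roughly an order of magnitude larger
```

Even a moderate between-round correlation multiplies the never-treated fraction severalfold, which is why the authors argue that coverage level alone is insufficient to predict time to elimination.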

  8. Analytical Scenario of Software Testing Using Simplistic Cost Model

    Directory of Open Access Journals (Sweden)

    RAJENDER BATHLA

    2012-02-01

    Full Text Available Software can be tested either manually or automatically. The two approaches are complementary: automated testing can perform a huge number of tests in a short period, whereas manual testing uses the knowledge of the testing engineer to target testing to the parts of the system that are assumed to be more error-prone. Despite this complementarity, tools for manual and automatic testing are usually different, leading to decreased productivity and reliability of the testing process. AutoTest is a testing tool that provides a "best of both worlds" strategy: it integrates developers' test cases into an automated process of systematic contract-driven testing. This allows it to combine the benefits of both approaches while keeping a simple interface, and to treat the two types of tests in a unified fashion: evaluation of results is the same, coverage measures are added up, and both types of tests can be saved in the same format. The objective of this paper is to discuss the importance of automation tools in relation to software testing techniques in software engineering. In this paper we provide an introduction to software testing and describe CASE tools. The solution of this problem leads to the new approach of software development known as software testing in the IT world. Software test automation is the process of automating the steps of manual test cases using an automation tool or utility to shorten the testing life cycle with respect to time.
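
The contract-driven idea described above can be sketched outside its original setting. The following is a hypothetical Python analogue, not the AutoTest tool itself: explicit pre- and postconditions act as the test oracle, so randomly generated inputs can drive the automated phase with no hand-written expected values.

```python
import random

def contract_sqrt(x):
    """Integer square root with an explicit contract; contract-driven testing
    checks such pre/postconditions automatically on generated inputs."""
    assert x >= 0, "precondition: non-negative input"
    r = int(x ** 0.5)
    while (r + 1) ** 2 <= x:   # correct any float rounding downward error
        r += 1
    while r ** 2 > x:          # correct any float rounding upward error
        r -= 1
    assert r ** 2 <= x < (r + 1) ** 2, "postcondition: floor of sqrt"
    return r

# Automated phase: hammer the routine with random inputs; the contracts
# themselves serve as the oracle.
rng = random.Random(0)
for _ in range(1000):
    contract_sqrt(rng.randrange(10 ** 9))
print("all contracts held")
```

A developer's manual test case (say, `contract_sqrt(10) == 3`) can be run through the same harness, giving the unified treatment of both test types that the abstract describes.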

  9. Parents' Attitudes toward Genetic Testing of Children for Health Conditions: A Systematic Review.

    Science.gov (United States)

    Lim, Qishan; McGill, Brittany C; Quinn, Veronica F; Tucker, Katherine M; Mizrahi, David; Farkas Patenaude, Andrea; Warby, Meera; Cohn, Richard J; Wakefield, Claire E

    2017-02-07

    This review assessed parents' attitudes toward childhood genetic testing for health conditions, with a focus on perceived advantages and disadvantages. We also evaluated the factors that influence parents' attitudes toward childhood genetic testing. We searched Medline, Medline In-Process, EMBASE, PsycINFO, Social Work Abstracts and CINAHL. We screened 945 abstracts and identified 21 studies representing the views of 3934 parents. Parents reported largely positive attitudes toward childhood genetic testing across different genetic tests with varying medical utility. Parents perceived a range of advantages and disadvantages of childhood genetic testing. Childhood genetic testing was viewed by most as beneficial. Parents' education level, genetic status, sex and socio-demographic status were associated with reported attitudes. This yielded some conflicting findings, indicating the need for further research. Genetic counseling remains essential to support this population in making well-informed decisions. Targeted interventions tailored to specific families with different socio-demographic characteristics may be useful. Further research on the long-term impact of childhood genetic testing on families is warranted.

  10. Construct validity of clinical spinal mobility tests in ankylosing spondylitis: a systematic review and meta-analysis.

    Science.gov (United States)

    Castro, Marcelo P; Stebbings, Simon M; Milosavljevic, Stephan; Bussey, Melanie D

    2016-07-01

    The study aimed to determine, using systematic review and meta-analysis, the level of evidence supporting the construct validity of spinal mobility tests for assessing patients with ankylosing spondylitis. Following the guidelines proposed in the Preferred Reporting Items for Systematic reviews and Meta-Analyses, three sets of keywords were used for data searching: (i) ankylosing spondylitis, spondyloarthritis, spondyloarthropathy, spondylarthritis; (ii) accuracy, association, construct, correlation, Outcome Measures in Rheumatoid Arthritis Clinical Trials, OMERACT, truth, validity; (iii) mobility, Bath Ankylosing Spondylitis Metrology Index-BASMI, radiography, spinal measures, cervical rotation, Schober (a further 19 keywords were used). Initially, 2558 records were identified, and from these, 21 studies were retained. Fourteen of these studies were considered high level of evidence. Compound indexes of spinal mobility showed mostly substantial to excellent levels of agreement with global structural damage. Individual mobility tests for the cervico-thoracic spine showed only moderate agreements with cervical structural damage, and considering structural damage at the lumbar spine, the original Schober was the only test that presented consistently substantial levels of agreement. Three studies assessed the construct validity of mobility measures for inflammation and low to fair levels of agreement were observed. Two meta-analyses were conducted, with assessment of agreement between BASMI and two radiological indexes of global structural damage. The spinal mobility indexes and the original Schober test show acceptable construct validity for inferring the extent of structural damage when assessing patients with ankylosing spondylitis. Spinal mobility measures do not reflect levels of inflammation at either the sacroiliac joints and/or the spine.

  11. Is the Timed Up and Go test a useful predictor of risk of falls in community dwelling older adults: a systematic review and meta-analysis.

    Science.gov (United States)

    Barry, Emma; Galvin, Rose; Keogh, Claire; Horgan, Frances; Fahey, Tom

    2014-02-01

    The Timed Up and Go test (TUG) is a commonly used screening tool to assist clinicians to identify patients at risk of falling. The purpose of this systematic review and meta-analysis is to determine the overall predictive value of the TUG in community-dwelling older adults. A literature search was performed to identify all studies that validated the TUG test. The methodological quality of the selected studies was assessed using the QUADAS-2 tool, a validated tool for the quality assessment of diagnostic accuracy studies. A TUG score of ≥13.5 seconds was used to identify individuals at higher risk of falling. All included studies were combined using a bivariate random effects model to generate pooled estimates of sensitivity and specificity at ≥13.5 seconds. Heterogeneity was assessed using the variance of logit transformed sensitivity and specificity. Twenty-five studies were included in the systematic review and 10 studies were included in meta-analysis. The TUG test was found to be more useful at ruling in rather than ruling out falls in individuals classified as high risk (>13.5 sec), with a higher pooled specificity (0.74, 95% CI 0.52-0.88) than sensitivity (0.31, 95% CI 0.13-0.57). Logistic regression analysis indicated that the TUG score is not a significant predictor of falls (OR = 1.01, 95% CI 1.00-1.02, p = 0.05). The Timed Up and Go test has limited ability to predict falls in community dwelling elderly and should not be used in isolation to identify individuals at high risk of falls in this setting.
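
As an aside on the method, the pooling of accuracy estimates on the logit scale mentioned above can be illustrated very roughly. This is a naive unweighted sketch, not the bivariate random-effects model the authors actually fitted, and the per-study values are invented.

```python
import math

def pooled_logit(values):
    """Average proportions on the logit scale, then transform back.
    (A crude stand-in for bivariate random-effects pooling.)"""
    logits = [math.log(v / (1.0 - v)) for v in values]
    mean = sum(logits) / len(logits)
    return 1.0 / (1.0 + math.exp(-mean))

# Invented per-study specificities at the >=13.5 s cutoff
print(round(pooled_logit([0.65, 0.74, 0.80]), 2))  # 0.73
```

Working on the logit scale keeps pooled estimates inside (0, 1) and makes the normality assumptions of the random-effects model more plausible than pooling raw proportions.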

  12. A systematic review of p16/Ki-67 immuno-testing for triage of low grade cervical cytology.

    Science.gov (United States)

    Kisser, A; Zechmeister-Koss, I

    2015-01-01

    Screening for cervical cancer precursors by Papanicolaou cytology is a public health success story; however, its low sensitivity entails unnecessary referrals to colposcopy of healthy women with equivocal (ASCUS) or mild dysplasia (LSIL) cytology. We assessed the accuracy of p16/Ki-67 immuno-testing for triage of low grade cervical cytology. We systematically searched Medline, Embase, CRD and Cochrane databases, and handsearched key references. Eligible studies included women with ASCUS or LSIL cervical cytology who had undergone p16/Ki-67 testing and subsequent verification by colposcopy-directed biopsies and histologic analysis. We extracted data on patient characteristics, test conduct and diagnostic accuracy measures, and assessed the methodological quality of the studies. R software was used to perform a bivariate analysis of test performance data. Five eligible studies were identified. Four of the studies had a high risk of bias. In the LSIL subgroup, the sensitivity of p16/Ki-67 testing ranged from 0.86 to 0.98, compared with 0.92-0.96 for high-risk HPV testing (hrHPV); specificity ranged from 0.43 to 0.68 versus 0.19 to 0.37, respectively. In the ASCUS subgroup, sensitivity ranged from 0.64 to 0.92 (p16/Ki-67 test) versus 0.91 to 0.97 (hrHPV); specificity ranged from 0.53 to 0.81 versus 0.26 to 0.44, respectively. p16/Ki-67 testing cannot be recommended for triage of women with ASCUS or LSIL cytology due to insufficient high-quality evidence. Further studies on test performance and the impact of p16/Ki-67-based triage on health outcomes are needed for a definitive evaluation of its clinical utility. © 2014 Royal College of Obstetricians and Gynaecologists.

  13. The Internationalization of Testing and New Models of Test Delivery on the Internet

    Science.gov (United States)

    Bartram, Dave

    2006-01-01

    The Internet has opened up a whole new set of opportunities for advancing the science of psychometrics and the technology of testing. It has also created some new challenges for those of us involved in test design and testing. In particular, we are seeing impacts from internationalization of testing and new models for test delivery. These are…

  14. The diagnostic accuracy of the Kemp’s test: a systematic review

    Science.gov (United States)

    Stuber, Kent; Lerede, Caterina; Kristmanson, Kevyn; Sajko, Sandy; Bruno, Paul

    2014-01-01

    Background: The objective of this review was to evaluate the existing literature regarding the accuracy of the Kemp’s test in the diagnosis of facet joint pain compared to a reference standard. Methods: Several databases were searched. All diagnostic accuracy studies comparing the Kemp’s test with an acceptable reference standard were included. Included studies were scored for quality and internal validity. Results: Five articles met the inclusion criteria of this review. Two studies had a low risk of bias, and three had a low concern regarding applicability. Pooling of data from studies using similar methods revealed that the test’s negative predictive value was the only diagnostic accuracy measure above 50% (56.8%, 59.9%). Conclusions: Currently, the literature supporting the use of the Kemp’s test is limited and indicates that it has poor diagnostic accuracy. It is debatable whether clinicians should continue to use this test to diagnose facet joint pain. PMID:25202153
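
The negative predictive value highlighted above depends on pre-test prevalence, unlike sensitivity and specificity. The sketch below makes that dependence explicit; the input values are hypothetical, not figures from the pooled studies.

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV from sensitivity, specificity and pre-test prevalence
    (Bayes' theorem written out via the four cell probabilities)."""
    tp = sens * prevalence
    fp = (1.0 - spec) * (1.0 - prevalence)
    fn = (1.0 - sens) * prevalence
    tn = spec * (1.0 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical accuracy; NPV falls as facet joint pain becomes more prevalent
for prev in (0.3, 0.5, 0.7):
    ppv, npv = predictive_values(sens=0.6, spec=0.5, prevalence=prev)
    print(prev, round(ppv, 2), round(npv, 2))
```

This prevalence dependence is one reason predictive values reported in one clinical setting do not transfer directly to another.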

  15. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1992-01-01

    Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]. A mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are
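
The idea of matching test information functions can be sketched with a simple greedy heuristic. This is a generic illustration under the two-parameter logistic (2PL) model with invented item parameters, not Adema's actual mathematical programming model or heuristic.

```python
import math

def item_info(a, b, theta):
    """Fisher information of a 2PL item (discrimination a, difficulty b) at theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def split_items(items, theta=0.0):
    """Greedy heuristic: sort items by information at theta, then always give
    the next item to the test with less accumulated information, so the two
    test information functions end up approximately equal at theta."""
    tests = ([], [])
    totals = [0.0, 0.0]
    for a, b in sorted(items, key=lambda ab: -item_info(ab[0], ab[1], theta)):
        k = 0 if totals[0] <= totals[1] else 1
        tests[k].append((a, b))
        totals[k] += item_info(a, b, theta)
    return tests, totals

items = [(1.2, 0.0), (0.8, -0.5), (1.0, 0.3), (1.5, 0.1), (0.9, -0.2), (1.1, 0.4)]
(test_a, test_b), totals = split_items(items)
print(round(totals[0], 3), round(totals[1], 3))  # roughly balanced TIFs at theta = 0
```

A full solution would match the TIFs at several ability points simultaneously, which is where the mathematical programming formulation earns its keep.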

  16. TESTING FOR VARYING DISPERSION IN DISCRETE EXPONENTIAL FAMILY NONLINEAR MODELS

    Institute of Scientific and Technical Information of China (English)

    Lin Jinguan; Wei Bocheng; Zhang Nansong

    2003-01-01

    It is necessary to test for varying dispersion in generalized nonlinear models. Wei et al. (1998) developed a likelihood ratio test, a score test and their adjustments to test for varying dispersion in continuous exponential family nonlinear models. This paper discusses the same type of problem in the framework of general discrete exponential family nonlinear models. Two types of varying dispersion, a random coefficients model and a random effects model, are proposed, and the corresponding score test statistics are constructed and expressed in simple, easy-to-use matrix formulas.
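
For readers unfamiliar with score tests: the generic form, which requires fitting only the null model, is sketched below. This is the textbook Rao score statistic, not the specific matrix statistics derived in the paper.

```latex
% H_0 : \lambda = 0 (no varying dispersion); \hat\beta_0 is the null-model fit
% of the log-likelihood \ell(\beta, \lambda).
S \;=\; U(\hat\beta_0)^{\top}\, I(\hat\beta_0)^{-1}\, U(\hat\beta_0),
\qquad
U(\hat\beta_0) \;=\;
\left. \frac{\partial \ell(\beta,\lambda)}{\partial \lambda}
\right|_{\beta=\hat\beta_0,\ \lambda=0}
```

Under the null hypothesis, S is asymptotically chi-squared with degrees of freedom equal to the dimension of lambda, which is what makes the matrix formulas in the paper easy to use in practice.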

  17. Testing the Reliability of Cluster Mass Indicators with a Systematics Limited Dataset

    CERN Document Server

    Juett, Adrienne M; Mushotzky, Richard

    2009-01-01

    We present the mass-X-ray observable scaling relationships for clusters of galaxies using the XMM-Newton cluster catalog of Snowden et al. Our results are roughly consistent with previous observational and theoretical work, with one major exception. We find 2-3 times the scatter around the best fit mass scaling relationships as expected from cluster simulations or seen in other observational studies. We suggest that this is a consequence of using hydrostatic mass, as opposed to virial mass, and is due to the explicit dependence of the hydrostatic mass on the gradients of the temperature and gas density profiles. We find a larger range of slope in the cluster temperature profiles at r_{500} than previous observational studies. Additionally, we find only a weak dependence of the gas mass fraction on cluster mass, consistent with a constant. Our average gas mass fraction results argue for a closer study of the systematic errors due to instrumental calibration and analysis method variations. We suggest that a mor...
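
The stated dependence of hydrostatic mass on the temperature and gas density gradients can be seen from the standard hydrostatic equilibrium estimate (the usual textbook form, not an equation quoted from the paper):

```latex
M_{\mathrm{HSE}}(<r) \;=\; -\,\frac{k_B\, T(r)\, r}{G\, \mu\, m_p}
\left( \frac{\mathrm{d}\ln \rho_g}{\mathrm{d}\ln r}
     + \frac{\mathrm{d}\ln T}{\mathrm{d}\ln r} \right)
```

Measurement scatter in either logarithmic gradient near r_{500} propagates linearly into the mass estimate, consistent with the enlarged scatter the authors report relative to virial-mass studies.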

  18. What do Cochrane systematic reviews say about the clinical effectiveness of screening and diagnostic tests for cancer?

    Science.gov (United States)

    Bueno, André Tito Pereira; Capelasso, Vladimir Lisboa; Pacheco, Rafael Leite; Latorraca, Carolina de Oliveira Cruz; Castria, Tiago Biachi de; Pachito, Daniela Vianna; Riera, Rachel

    2017-01-01

    The purpose of screening tests for cancer is to detect it at an early stage in order to increase the chances of treatment. However, their unrestrained use may lead to unnecessary examinations, overdiagnosis and higher costs. It is thus necessary to evaluate their clinical effects in terms of benefits and harm. Review of Cochrane systematic reviews, carried out in the Discipline of Evidence-Based Medicine, Escola Paulista de Medicina, Universidade Federal de São Paulo. Cochrane reviews on the clinical effectiveness of cancer screening procedures were included. Study titles and abstracts were independently assessed by two authors. Conflicts were resolved by another two authors. Findings were summarized and discussed. Seventeen reviews were selected: fifteen on screening for specific cancers (bladder, breast, colorectal, hepatic, lung, nasopharyngeal, esophageal, oral, prostate, testicular and uterine) and two others on cancer in general. The quality of evidence of the findings varied among the reviews. Only two reviews resulted in high-quality evidence: screening using low-dose computed tomography scans for high-risk individuals seems to reduce lung cancer mortality; and screening using flexible sigmoidoscopy and fecal occult blood tests seems to reduce colorectal cancer mortality. The evidence found through Cochrane reviews did not support most of the commonly used screening tests for cancer. It is recommended that patients should be informed of the possibilities of false positives and false negatives before they undergo the tests. Further studies to fully assess the effectiveness of cancer screening tests and adverse outcomes are required.

  19. Provider-initiated testing and counselling programmes in sub-Saharan Africa: a systematic review of their operational implementation.

    Science.gov (United States)

    Roura, Maria; Watson-Jones, Deborah; Kahawita, Tanya M; Ferguson, Laura; Ross, David A

    2013-02-20

    The routine offer of an HIV test during patient-provider encounters is gaining momentum within HIV treatment and prevention programmes. This review examined the operational implementation of provider-initiated testing and counselling (PITC) programmes in sub-Saharan Africa. PUBMED, EMBASE, Global Health, COCHRANE Library and JSTOR databases were searched systematically for articles published in English between January 2000 and November 2010. Grey literature was explored through the websites of international and nongovernmental organizations. Eligibility of studies was based on predetermined criteria applied during independent screening by two researchers. We retained 44 studies out of 5088 references screened. PITC polices have been effective at identifying large numbers of previously undiagnosed individuals. However, the translation of policy guidance into practice has had mixed results, and in several studies of routine programmes the proportion of patients offered an HIV test was disappointingly low. There were wide variations in the rates of acceptance of the test and poor linkage of those testing positive to follow-up assessments and antiretroviral treatment. The challenges encountered encompass a range of areas from logistics, to data systems, human resources and management, reflecting some of the weaknesses of health systems in the region. The widespread adoption of PITC provides an unprecedented opportunity for identifying HIV-positive individuals who are already in contact with health services and should be accompanied by measures aimed at strengthening health systems and fostering the normalization of HIV at community level. The resources and effort needed to do this successfully should not be underestimated.

  20. The effectiveness of psychoeducation and systematic desensitization to reduce test anxiety among first-year pharmacy students.

    Science.gov (United States)

    Rajiah, Kingston; Saravanan, Coumaravelou

    2014-11-15

    To analyze the effect of a psychological intervention on reducing performance anxiety and the consequences of the intervention on first-year pharmacy students. In this experimental study, 236 first-year undergraduate pharmacy students from a private university in Malaysia were approached between weeks 5 and 7 of their first semester to participate in the study. Completed responses for the Westside Test Anxiety Scale (WTAS), the Kessler Perceived Distress Scale (PDS), and the Academic Motivation Scale (AMS) were received from 225 students. Of these, 42 exhibited moderate to high test anxiety according to the WTAS (score ranging from 30 to 39) and were randomly placed into either an experimental group (n=21) or a waiting-list control group (n=21). The prevalence of test anxiety among pharmacy students in this study was lower than among other university students in previous studies. The anxiety-management intervention of psychoeducation and systematic desensitization reduced lack of motivation and psychological distress and improved grade point average (GPA). The psychological intervention significantly reduced scores for test anxiety, psychological distress, and lack of motivation, and helped improve students' GPA.

  1. Eating disorders among fashion models: a systematic review of the literature.

    Science.gov (United States)

    Zancu, Simona Alexandra; Enea, Violeta

    2016-06-02

    In the light of recent concerns regarding eating disorders among fashion models and professional regulation of the fashion model occupation, an examination of the scientific evidence on this issue is necessary. The article reviews findings on the prevalence of eating disorders and body image concerns among professional fashion models. A systematic literature search was conducted using ProQUEST, EBSCO, PsycINFO, SCOPUS, and Gale Cengage electronic databases. The search returned very few studies of fashion models and eating disorders published between 1980 and 2015, with seven articles included in this review. Overall, the results of these studies do not indicate a higher prevalence of eating disorders among fashion models compared to non-models. Fashion models have a positive body image and generally do not report more dysfunctional eating behaviors than controls. However, fashion models are on average slightly underweight, with significantly lower BMI than controls, give higher importance to appearance and a thin body shape, and thus have a higher prevalence of partial-syndrome eating disorders than controls. Despite public concerns, research on eating disorders among professional fashion models is extremely scarce and the results cannot be generalized to all models. The existing research fails to clarify the matter of eating disorders among fashion models and, given the small number of studies, further research is needed.

  2. State of the art hydraulic turbine model test

    Science.gov (United States)

    Fabre, Violaine; Duparchy, Alexandre; Andre, Francois; Larroze, Pierre-Yves

    2016-11-01

    Model tests are essential in hydraulic turbine development and related fields. The methods and technologies used to perform these tests show constant progress and provide access to further information. In addition, due to its contractual nature, the demand on testing evolves continuously in terms of quantity and accuracy. Keeping in mind that the principal aim of model testing is the transposition of the model measurements to the real machine, the measurements should be performed accurately, and a critical analysis of the model test results is required to distinguish transposable hydraulic phenomena from test rig interactions. Although resonance effects are known and described in the IEC standard, their identification is difficult. Drawing on extensive model-testing experience, we illustrate with a few examples how to identify potential problems induced by the test rig. This paper contains some of our best practices for obtaining the most accurate, relevant, and rig-independent measurements.

  3. Optimization models for flight test scheduling

    Science.gov (United States)

    Holian, Derreck

    As threats around the world increase with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous act of testing and deployment is due to the contracted procurement process, intended to provide a rapid Initial Operating Capability (IOC) release of the 5th-generation fighter. For this reason, many factors go into the determination of what is to be tested, in what order, and at which time, due to the military requirements. A certain system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in the determination of what testing can be achieved on an aircraft at a point in time. Furthermore, it will define the optimum allocation of test points to aircraft and determine a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented. Generic, representative data is used for example and proof-of-concept purposes.
Apart from the data correlation algorithms, the optimization associated

  4. Systematic coarse-grained modeling of complexation between small interfering RNA and polycations

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Zonghui [Graduate Program in Applied Physics, Northwestern University, Evanston, Illinois 60208 (United States); Luijten, Erik, E-mail: luijten@northwestern.edu [Graduate Program in Applied Physics, Northwestern University, Evanston, Illinois 60208 (United States); Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208 (United States); Department of Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, Illinois 60208 (United States); Department of Physics and Astronomy, Northwestern University, Evanston, Illinois 60208 (United States)

    2015-12-28

    All-atom molecular dynamics simulations can provide insight into the properties of polymeric gene-delivery carriers by elucidating their interactions and detailed binding patterns with nucleic acids. However, to explore nanoparticle formation through complexation of these polymers and nucleic acids and study their behavior at experimentally relevant time and length scales, a reliable coarse-grained model is needed. Here, we systematically develop such a model for the complexation of small interfering RNA (siRNA) and grafted polyethyleneimine copolymers, a promising candidate for siRNA delivery. We compare the predictions of this model with all-atom simulations and demonstrate that it is capable of reproducing detailed binding patterns, charge characteristics, and water release kinetics. Since the coarse-grained model accelerates the simulations by one to two orders of magnitude, it will make it possible to quantitatively investigate nanoparticle formation involving multiple siRNA molecules and cationic copolymers.

  5. Clinical application of micronucleus test in exfoliated buccal cells: A systematic review and metanalysis.

    Science.gov (United States)

    Bolognesi, Claudia; Bonassi, Stefano; Knasmueller, Siegfried; Fenech, Michael; Bruzzone, Marco; Lando, Cecilia; Ceppi, Marcello

    2015-01-01

    The micronucleus assay in uncultured exfoliated buccal mucosa cells, involving minimally invasive sampling, has been successfully applied to evaluate inhalation and local exposure to genotoxic agents and the impact of nutrition and lifestyle factors. The potential use of the assay in clinics to monitor the development of local oral lesions, and as an early biomarker for tumors and various chronic disorders, has also been investigated. A systematic review of the literature was carried out focusing on the clinical application of the assay. The literature search, updated to January 2015, retrieved 42 eligible articles. Fifty-three percent of the investigations relate to oral, head and neck cancer and premalignant oral diseases. Our analysis indicates the potential usefulness of the MN assay in exfoliated buccal cells for prescreening and for the follow-up of precancerous oral lesions. A significant excess of MN in patients compared with matched controls was observed for subgroups of oral and neck cancer (meta-MR of 2.40, 95% CI: 2.02-2.85) and leukoplakia (meta-MR 1.88, 95% CI: 1.51-2.35). The meta-analysis of studies available on other tumors (meta-MR 2.00; 95% CI: 1.66-2.41) indicates that the MN frequency in buccal cells could reflect the chromosomal instability of other organs. Increased MN frequency was also observed in small studies on patients with chronic diseases, Alzheimer's disease, and Down syndrome. The cytome approach, which provides information on genotoxic, cytotoxic and cytostatic effects, suggests that the predictive value of the assay could be improved; this deserves further investigation.

  6. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Brorsen, Michael

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), Denmark. The starting point for the present report is the previously carried out run-up tests described in Lykke Andersen & Frigaard, 2006......-shaped access platforms on piles. The model tests include mainly regular waves and a few irregular wave tests. These tests were conducted at Aalborg University from 9 November 2006 to 17 November 2006....

  7. Testing for Causality in Variance Using Multivariate GARCH Models

    OpenAIRE

    Christian M. Hafner; Herwartz, Helmut

    2008-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causality in var...

  8. Testing for causality in variance using multivariate GARCH models

    OpenAIRE

    Hafner, Christian; Herwartz, H.

    2004-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual-based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causa...

  9. In vitro biofilm models to study dental caries: a systematic review.

    Science.gov (United States)

    Maske, T T; van de Sande, F H; Arthur, R A; Huysmans, M C D N J M; Cenci, M S

    2017-09-01

    The aim of this systematic review is to characterize and discuss key methodological aspects of in vitro biofilm models for caries-related research and to verify the reproducibility and dose-response of models considering the response to anti-caries and/or antimicrobial substances. Inclusion criteria were divided into Part I (PI): an in vitro biofilm model that produces a cariogenic biofilm and/or caries-like lesions and allows pH fluctuations; and Part II (PII): models showing an effect of anti-caries and/or antimicrobial substances. Within PI, 72.9% consisted of dynamic biofilm models, while 27.1% consisted of batch models. Within PII, 75.5% corresponded to dynamic models, whereas 24.5% corresponded to batch models. Respectively, 20.4 and 14.3% of the studies reported dose-response validations and reproducibility, and 32.7% were classified as having a high risk of bias. Several in vitro biofilm models are available for caries-related research; however, most models lack validation by dose-response and reproducibility experiments for each proposed protocol.

  10. A Validation Process for the Groundwater Flow and Transport Model of the Faultless Nuclear Test at Central Nevada Test Area

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed Hassan

    2003-01-01

    Many sites of groundwater contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This has created a need for tools and approaches that can be used to build confidence in model predictions and make it apparent to regulators, policy makers, and the public that these models are sufficient for decision making. This confidence building is a long-term iterative process, and it is this process that should be termed "model validation." Model validation is a process, not an end result. That is, the process of model validation cannot always assure acceptable prediction or quality of the model. Rather, it provides a safeguard against faulty models or inadequately developed and tested models. Therefore, development of a systematic approach for evaluating and validating subsurface predictive models and guiding field activities for data collection and long-term monitoring is strongly needed. This report presents a review of model validation studies that pertain to groundwater flow and transport modeling. Definitions, literature debates, previously proposed validation strategies, and conferences and symposia that focused on subsurface model validation are reviewed and discussed. The review is general in nature, but the discussion focuses on site-specific, predictive groundwater models used for making decisions regarding remediation activities and site closure. An attempt is made to compile most of the published studies on groundwater model validation and assemble what has been proposed or used for validating subsurface models. The aim is to provide a reasonable starting point to aid the development of the validation plan for the groundwater flow and transport model of the Faultless nuclear test conducted at the Central Nevada Test Area (CNTA). The review of previous studies on model validation shows that there does not exist a set of specific procedures and tests that can be easily adapted and

  11. Economic Evaluations of Pharmacogenetic and Pharmacogenomic Screening Tests: A Systematic Review. Second Update of the Literature

    NARCIS (Netherlands)

    Berm, Elizabeth J J; Looff, Margot de; Wilffert, Bob; Boersma, Cornelis; Annemans, Lieven; Vegter, Stefan; van Boven, Job FM; Postma, Maarten J

    2016-01-01

    Objective Due to extended application of pharmacogenetic and pharmacogenomic screening (PGx) tests it is important to assess whether they provide good value for money. This review provides an update of the literature. Methods A literature search was performed in PubMed and papers published between

  12. Walking tests for stroke survivors: a systematic review of their measurement properties

    NARCIS (Netherlands)

    Bloemendaal, M.; Water, A.T. van de; Port, I.G. van de

    2012-01-01

    PURPOSE: To provide an overview of walking tests including their measurement properties that have been used in stroke survivors. METHOD: Electronic databases were searched using specific search strategies. Retrieved studies were selected by using specified inclusion criteria. A modified consensus-ba

  13. A Practical Methodology for the Systematic Development of Multiple Choice Tests.

    Science.gov (United States)

    Blumberg, Phyllis; Felner, Joel

    Using Guttman's facet design analysis, four parallel forms of a multiple-choice test were developed. A mapping sentence, logically representing the universe of content of a basic cardiology course, specified the facets of the course and the semantic structural units linking them. The facets were: cognitive processes, disease priority, specific…

  14. Equation-free analysis of agent-based models and systematic parameter determination

    Science.gov (United States)

    Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.

    2016-12-01

    Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic, can be applied to any 'black-box' simulator, and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order-of-magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABM models, revealing parameter dependence, bifurcation and stability analysis of these complex systems, giving a deep understanding of the dynamical behaviour of the models in a way that is not otherwise easily obtainable.
In each case we demonstrate our systematic parameter determination stage for
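The equation-free recipe this abstract describes — wrap a stochastic "black-box" micro-simulator in a coarse time-stepper, average an ensemble to estimate the macroscopic map, and continue its fixed point in a parameter — can be sketched with a toy model. Everything below is an illustrative assumption (a binomial agent-flip micro-rule with a logistic probability), not the paper's ABMs or its C3R algorithm:

```python
import random
import statistics

def micro_step(x, r, n_agents=1000):
    # Toy stochastic micro-simulator: n_agents independent agents switch "on"
    # with probability r*x*(1-x); the macroscopic state is the "on" fraction.
    p = r * x * (1.0 - x)
    on = sum(1 for _ in range(n_agents) if random.random() < p)
    return on / n_agents

def coarse_map(x, r, ensemble=10):
    # Equation-free coarse time-stepper: average an ensemble of micro
    # realizations to estimate the macroscopic map F(x; r).
    return statistics.mean(micro_step(x, r) for _ in range(ensemble))

def macro_fixed_point(r, x0, tol=5e-3, itmax=50):
    # Damped fixed-point iteration on the noisy macroscopic map.
    x = x0
    for _ in range(itmax):
        x_new = 0.5 * x + 0.5 * coarse_map(x, r)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

random.seed(0)
# Naive continuation in r: reuse the previous solution as the next initial guess.
branch, x = [], 0.6
for r in (2.2, 2.4, 2.6, 2.8):
    x = macro_fixed_point(r, x0=x)
    branch.append((r, x))
# For this micro-rule the exact macroscopic fixed point is x* = 1 - 1/r.
```

A real EF tool would replace the damped iteration with Newton or pseudo-arclength continuation and, as the paper stresses, would need outlier handling to keep the noisy ensemble averages from derailing the corrector.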

  15. Reference Values for the Six-Minute Walk Test in Healthy Children and Adolescents: a Systematic Review

    Directory of Open Access Journals (Sweden)

    Lucas de Assis Pereira Cacau

    Full Text Available Abstract Objective: The aim of the study is to compare the available reference values and the six-minute walk test equations in healthy children/adolescents. Our systematic review was planned and performed in accordance with the PRISMA guidelines. We included all studies that established reference values for the six-minute walk test in healthy children/adolescents. Methods: To perform this review, a search was performed in PubMed, EMBASE (via SCOPUS) and Cochrane (LILACS), Bibliographic Index Spanish in Health Sciences, Organization Collection Pan-American Health Organization, Publications of the World Health Organization and Scientific Electronic Library Online (SciELO) via Virtual Health Library until June 2015, without language restriction. Results: The initial search identified 276 abstracts. Twelve studies met the inclusion criteria and were fully reviewed and approved by both reviewers. None of the selected studies presented a sample size calculation. Most of the studies recruited children and adolescents from school. Six studies reported the use of random samples. Most studies used a corridor of 30 meters. All studies followed the American Thoracic Society guidelines to perform the six-minute walk test. The walked distance varied by 159 meters among the studies. Of the 12 included studies, 7 (58%) reported descriptive data and 6 (50%) established a reference equation for the walked distance in the six-minute walk test. Conclusion: The reference value for the six-minute walk test in children and adolescents varied substantially across studies in different countries. A reference equation was not provided in all studies, but the ones available took into account well-established variables in the context of exercise performance, such as height, heart rate, age and weight. Countries that have not established reference values for the six-minute walk test should be encouraged to do so because it would help their clinicians and researchers have a more precise

  16. Reference Values for the Six-Minute Walk Test in Healthy Children and Adolescents: a Systematic Review

    Science.gov (United States)

    Cacau, Lucas de Assis Pereira; de Santana-Filho, Valter Joviniano; Maynard, Luana G.; Gomes Neto, Mansueto; Fernandes, Marcelo; Carvalho, Vitor Oliveira

    2016-01-01

    Objective The aim of the study is to compare the available reference values and the six-minute walk test equations in healthy children/adolescents. Our systematic review was planned and performed in accordance with the PRISMA guidelines. We included all studies that established reference values for the six-minute walk test in healthy children/adolescents. Methods To perform this review, a search was performed in PubMed, EMBASE (via SCOPUS) and Cochrane (LILACS), Bibliographic Index Spanish in Health Sciences, Organization Collection Pan-American Health Organization, Publications of the World Health Organization and Scientific Electronic Library Online (SciELO) via Virtual Health Library until June 2015, without language restriction. Results The initial search identified 276 abstracts. Twelve studies met the inclusion criteria and were fully reviewed and approved by both reviewers. None of the selected studies presented a sample size calculation. Most of the studies recruited children and adolescents from school. Six studies reported the use of random samples. Most studies used a corridor of 30 meters. All studies followed the American Thoracic Society guidelines to perform the six-minute walk test. The walked distance varied by 159 meters among the studies. Of the 12 included studies, 7 (58%) reported descriptive data and 6 (50%) established a reference equation for the walked distance in the six-minute walk test. Conclusion The reference value for the six-minute walk test in children and adolescents varied substantially across studies in different countries. A reference equation was not provided in all studies, but the ones available took into account well-established variables in the context of exercise performance, such as height, heart rate, age and weight. Countries that have not established reference values for the six-minute walk test should be encouraged to do so because it would help their clinicians and researchers have a more precise interpretation of the test

  17. Platelet-reactivity tests identify patients at risk of secondary cardiovascular events: a systematic review and meta-analysis.

    Science.gov (United States)

    Wisman, P P; Roest, M; Asselbergs, F W; de Groot, P G; Moll, F L; van der Graaf, Y; de Borst, G J

    2014-05-01

    Antiplatelet therapy is the standard treatment for the prevention of cardiovascular events (CVEs). High on-treatment platelet reactivity (HPR) is a risk factor for secondary CVEs in patients prescribed aspirin and/or clopidogrel. The present review and meta-analysis was aimed at assessing the ability of individual platelet-function tests to reliably identify patients at risk of developing secondary CVEs. A systematic literature search was conducted to identify studies on platelet-reactivity measurements and CVEs. The main inclusion criteria were: (i) prospective study design; (ii) study medication, including aspirin and/or clopidogrel; and (iii) a platelet-function test being performed at baseline, before follow-up started. Of 3882 identified studies, 102 (2.6%; reporting on 44 098 patients) were included in the meta-analysis. With regard to high on-aspirin platelet reactivity (HAPR), 22 different tests were discussed in 55 studies (22 441 patients). Pooled analysis showed that HAPR was diagnosed in 22.2% of patients, and was associated with an increased CVE risk (relative risk [RR] 2.09; 95% confidence interval [CI] 1.77-2.47). Eleven HAPR tests independently showed a significantly increased CVE risk in patients with HAPR as compared with those with normal on-aspirin platelet reactivity. As regards high on-clopidogrel platelet reactivity (HCPR), 59 studies (34 776 patients) discussed 15 different tests, and reported that HCPR was present in 40.4% of patients and was associated with an increased CVE risk (RR 2.80; 95% CI 2.40-3.27). Ten tests showed a significantly increased CVE risk. Patients with HPR are suboptimally protected against future cardiovascular complications. Furthermore, not all of the numerous platelet tests proved to be able to identify patients at increased cardiovascular risk. © 2014 International Society on Thrombosis and Haemostasis.
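The pooled relative risks quoted in this abstract are the product of standard inverse-variance meta-analysis on the log scale. A minimal sketch of that computation follows; the per-study numbers are invented for illustration and are not taken from the review:

```python
import math

# (RR, lower 95% CI bound, upper 95% CI bound) per study — illustrative only.
studies = [
    (2.1, 1.5, 2.9),
    (1.8, 1.2, 2.7),
    (2.6, 1.9, 3.6),
]

def pool_relative_risks(studies):
    # Fixed-effect inverse-variance pooling on the log-RR scale.
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE backed out of the CI
        w = 1.0 / se ** 2
        num += w * math.log(rr)
        den += w
    log_pooled = num / den
    half_width = 1.96 * math.sqrt(1.0 / den)
    return (math.exp(log_pooled),
            (math.exp(log_pooled - half_width), math.exp(log_pooled + half_width)))

pooled_rr, pooled_ci = pool_relative_risks(studies)
```

A review like this one would typically use a random-effects variant when between-study heterogeneity is substantial, but the log-scale weighting idea is the same.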

  18. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  19. Systematic Review and Meta-Analysis of Bone Marrow-Derived Mononuclear Cells in Animal Models of Ischemic Stroke.

    Science.gov (United States)

    Vahidy, Farhaan S; Rahbar, Mohammad H; Zhu, Hongjian; Rowan, Paul J; Bambhroliya, Arvind B; Savitz, Sean I

    2016-06-01

    Bone marrow-derived mononuclear cells (BMMNCs) offer the promise of augmenting poststroke recovery. There is mounting evidence of safety and efficacy of BMMNCs from preclinical studies of ischemic stroke; however, their pooled effects have not been described. Using Preferred Reporting Items for Systematic Review and Meta-Analysis guidelines, we conducted a systematic review of preclinical literature for intravenous use of BMMNCs, followed by meta-analyses of histological and behavioral outcomes. Studies were selected based on predefined criteria. Data were abstracted by 2 independent investigators. After quality assessment, the pooled effects were generated using mixed-effect models. The impact of possible biases on estimated effect size was evaluated. The standardized mean difference and 95% confidence interval for reduction in lesion volume was significantly beneficial for BMMNC treatment (standardized mean difference: -3.3; 95% confidence interval, -4.3 to -2.3; n=113 each for BMMNCs and controls). BMMNC-treated animals (n=161) also had improved function measured by the cylinder test (standardized mean difference: -2.4; 95% confidence interval, -3.1 to -1.6), as compared with controls (n=205). A trend for benefit was observed for the adhesive removal test and neurological deficit score. Study quality score (median: 6; Q1-Q3: 5-7) was correlated with year of publication. There was funnel plot asymmetry; however, the pooled effects were robust to the correction of this bias and remained significant in favor of BMMNC treatment. BMMNCs demonstrate beneficial effects across histological and behavioral outcomes in animal ischemic stroke models. Although study quality has improved over time, a considerable degree of heterogeneity calls for standardization in the conduct and reporting of experimentation. © 2016 American Heart Association, Inc.
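The standardized mean differences pooled in this abstract are computed per study from each group's mean, standard deviation, and sample size. A minimal sketch (Cohen's d with a pooled SD; the lesion-volume numbers are hypothetical, not from the review):

```python
import math

def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    # Cohen's d: difference in group means divided by the pooled standard
    # deviation of the two groups.
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical lesion volumes (treated vs control); a negative d favors
# treatment because smaller lesions are better.
d = standardized_mean_difference(mean_t=120.0, sd_t=30.0, n_t=10,
                                 mean_c=200.0, sd_c=35.0, n_c=10)
```

Expressing every study's outcome on this unitless scale is what lets the review pool lesion volumes measured with different protocols into one effect estimate.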

  20. Methods Used in Economic Evaluations of Tuberculin Skin Tests and Interferon Gamma Release Assays for the Screening of Latent Tuberculosis Infection: A Systematic Review.

    Science.gov (United States)

    Koufopoulou, Maria; Sutton, Andrew John; Breheny, Katie; Diwakar, Lavanya

    2016-01-01

    Latent tuberculosis infection (LTBI) provides a constant pool of new active tuberculosis cases; a third of the earth's population is estimated to be infected with LTBI. The objective of this systematic review was to assess the quality and summarize the available evidence from published economic evaluations reporting on the cost-effectiveness of tuberculin skin tests (TSTs) compared with interferon gamma release assays (IGRAs) for the screening of LTBI. An extensive systematic review of the published literature was conducted. A two-step process was adopted to identify relevant articles: information was extracted into evidence tables and then analyzed. The quality of the publications was assessed using a 10-item checklist specific for economic evaluations. Twenty-eight studies were identified for inclusion in this review. Most of the studies found IGRAs to be more cost-effective than TSTs; however, the conclusions from the studies varied significantly. Most studies scored highly on the checklist although only one fulfilled all the stipulated criteria. A wide variety of methodological approaches were documented; identified differences included the type of economic evaluation and model, time horizon, perspective, and outcomes measures. The lack of consistent methods across studies makes it difficult to draw any firm conclusions about the most cost-effective option between TSTs and IGRAs. This problem can be solved by improving the quality of economic evaluation studies in the field of LTBI screening, through adherence to quality checklists. Copyright © 2016. Published by Elsevier Inc.

  1. Neuropsychological assessment without upper limb involvement: a systematic review of oral versions of the Trail Making Test and Symbol-Digit Modalities Test.

    Science.gov (United States)

    Jaywant, Abhishek; Barredo, Jennifer; Ahern, David C; Resnik, Linda

    2016-10-18

    The Trail Making Test (TMT) and written version of the Symbol Digit Modalities Test (SDMT) assess attention, processing speed, and executive functions but their utility is limited in populations with upper limb dysfunction. Oral versions of the TMT and SDMT exist, but a systematic review of their psychometric properties and clinical utility has not been conducted, which was the goal of this study. Searches were conducted in PubMed and PsycINFO, test manuals, and the reference lists of included articles. Four measures were identified: the SDMT-oral, oral TMT-A, oral TMT-B, and the Mental Alternation Test (MAT). Two investigators independently reviewed abstracts to identify peer-reviewed articles that reported on these measures in adult populations. From each article, one investigator extracted information on reliability, validity, responsiveness, minimum detectable change, normative data, and demographic influences. A second investigator verified the accuracy of the data in a random selection of 10% of papers. The quality of the evidence for each psychometric property was rated on a 4-point scale (unknown, poor, adequate, excellent). Results showed excellent evidence for the SDMT-oral, adequate evidence for the oral TMT-B and MAT, and adequate to poor evidence for the oral TMT-A. These findings inform the clinical assessment of attention, processing speed, and executive functions in individuals with upper limb disability.

  2. Supervised and unsupervised self-testing for HIV in high- and low-risk populations: a systematic review.

    Directory of Open Access Journals (Sweden)

    Nitika Pant Pai

    Full Text Available BACKGROUND: Stigma, discrimination, lack of privacy, and long waiting times partly explain why six out of ten individuals living with HIV do not access facility-based testing. By circumventing these barriers, self-testing offers potential for more people to know their sero-status. Recent approval of an in-home HIV self test in the US has sparked self-testing initiatives, yet data on acceptability, feasibility, and linkages to care are limited. We systematically reviewed evidence on supervised (self-testing and counselling aided by a health care professional) and unsupervised (performed by self-tester with access to phone/internet counselling) self-testing strategies. METHODS AND FINDINGS: Seven databases (Medline [via PubMed], Biosis, PsycINFO, Cinahl, African Medicus, LILACS, and EMBASE) and conference abstracts of six major HIV/sexually transmitted infections conferences were searched from 1st January 2000-30th October 2012. 1,221 citations were identified and 21 studies included for review. Seven studies evaluated an unsupervised strategy and 14 evaluated a supervised strategy. For both strategies, data on acceptability (range: 74%-96%), preference (range: 61%-91%), and partner self-testing (range: 80%-97%) were high. A high specificity (range: 99.8%-100%) was observed for both strategies, while a lower sensitivity was reported in the unsupervised (range: 92.9%-100%; one study) versus supervised (range: 97.4%-97.9%; three studies) strategy. Regarding feasibility of linkage to counselling and care, 96% (n = 102/106) of individuals testing positive for HIV stated they would seek post-test counselling (unsupervised strategy, one study). No extreme adverse events were noted. The majority of data (n = 11,019/12,402 individuals, 89%) were from high-income settings and 71% (n = 15/21) of studies were cross-sectional in design, thus limiting our analysis.
CONCLUSIONS: Both supervised and unsupervised testing strategies were highly acceptable, preferred, and more

  3. Supervised and Unsupervised Self-Testing for HIV in High- and Low-Risk Populations: A Systematic Review

    Science.gov (United States)

    Pant Pai, Nitika; Sharma, Jigyasa; Shivkumar, Sushmita; Pillay, Sabrina; Vadnais, Caroline; Joseph, Lawrence; Dheda, Keertan; Peeling, Rosanna W.

    2013-01-01

    Background Stigma, discrimination, lack of privacy, and long waiting times partly explain why six out of ten individuals living with HIV do not access facility-based testing. By circumventing these barriers, self-testing offers potential for more people to know their sero-status. Recent approval of an in-home HIV self test in the US has sparked self-testing initiatives, yet data on acceptability, feasibility, and linkages to care are limited. We systematically reviewed evidence on supervised (self-testing and counselling aided by a health care professional) and unsupervised (performed by self-tester with access to phone/internet counselling) self-testing strategies. Methods and Findings Seven databases (Medline [via PubMed], Biosis, PsycINFO, Cinahl, African Medicus, LILACS, and EMBASE) and conference abstracts of six major HIV/sexually transmitted infections conferences were searched from 1st January 2000–30th October 2012. 1,221 citations were identified and 21 studies included for review. Seven studies evaluated an unsupervised strategy and 14 evaluated a supervised strategy. For both strategies, data on acceptability (range: 74%–96%), preference (range: 61%–91%), and partner self-testing (range: 80%–97%) were high. A high specificity (range: 99.8%–100%) was observed for both strategies, while a lower sensitivity was reported in the unsupervised (range: 92.9%–100%; one study) versus supervised (range: 97.4%–97.9%; three studies) strategy. Regarding feasibility of linkage to counselling and care, 96% (n = 102/106) of individuals testing positive for HIV stated they would seek post-test counselling (unsupervised strategy, one study). No extreme adverse events were noted. The majority of data (n = 11,019/12,402 individuals, 89%) were from high-income settings and 71% (n = 15/21) of studies were cross-sectional in design, thus limiting our analysis. Conclusions Both supervised and unsupervised testing strategies were highly acceptable
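The sensitivity and specificity ranges reported for the self-testing strategies come from comparing each self-test result against a reference assay in a 2x2 table. A minimal sketch with hypothetical counts (not from the review):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # tp/fn: reference-positive people the self-test did/did not detect;
    # tn/fp: reference-negative people correctly/incorrectly flagged.
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one self-testing study.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=998, fp=2)
# sens = 0.97 (within the reported 92.9%-100% band for unsupervised testing);
# spec = 0.998
```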

  4. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

    The spallation reaction is considered as a two-step process. First, a very quick stage (10^-22 to 10^-29 s) corresponding to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second, a slower stage (10^-18 to 10^-19 s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, ^3He, ^4He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; this simulation is based on the two-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with calculation results is presented, and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excitation level of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  5. A Lagrange Multiplier Test for Testing the Adequacy of the Constant Conditional Correlation GARCH Model

    DEFF Research Database (Denmark)

    Catani, Paul; Teräsvirta, Timo; Yin, Meiqun

    A Lagrange multiplier test for testing the parametric structure of a constant conditional correlation generalized autoregressive conditional heteroskedasticity (CCC-GARCH) model is proposed. The test is based on decomposing the CCC-GARCH model multiplicatively into two components, one of which...

  6. A systematic literature review of open source software quality assessment models.

    Science.gov (United States)

    Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    Many open source software (OSS) quality assessment models are proposed and available in the literature. However, there is little or no adoption of these models in practice. To guide the formulation of newer models so that they can be accepted by practitioners, there is a need for clear discrimination among the existing models based on their specific properties. The aim of this study is therefore to perform a systematic literature review investigating the properties of the existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google Search were performed to retrieve all relevant primary studies. Journal and conference papers between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected. To select these models we developed assessment criteria to evaluate the quality of the existing studies. Quality assessment models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded category, community-only attribute, non-community attribute, and non-quality-in-use models. Our study reflects that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that 47% of the existing models do not specify any domain of application. In conclusion, our study is a valuable contribution to the community and helps quality assessment model developers in formulating newer models, and also practitioners (software evaluators) in selecting suitable OSS from among alternatives.

  7. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik;

    2015-01-01

In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two...... provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inference...

  8. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    DEFF Research Database (Denmark)

    Tønnesen, Hanne; Christensen, Mette E; Groene, Oliver;

    2007-01-01

The first step of handling health promotion (HP) in Diagnosis Related Groups (DRGs) is a systematic documentation and registration of the activities in the medical records. So far the possibility and tradition for systematic registration of clinical HP activities in the medical records and in pat...... of two parts; the first part includes motivational counselling (7 codes) and the second part comprehends intervention, rehabilitation and after-treatment (8 codes). The objective was to evaluate in an international study the usefulness, applicability and sufficiency of a simple model for the systematic...

  9. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part I. general guidance and tips

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho [Dept. of Radiology, and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Lee, June Young [Dept. of Biostatistics, Korea University College of Medicine, Seoul (Korea, Republic of)

    2015-12-15

    In the field of diagnostic test accuracy (DTA), the use of systematic review and meta-analyses is steadily increasing. By means of objective evaluation of all available primary studies, these two processes generate an evidence-based systematic summary regarding a specific research topic. The methodology for systematic review and meta-analysis in DTA studies differs from that in therapeutic/interventional studies, and its content is still evolving. Here we review the overall process from a practical standpoint, which may serve as a reference for those who implement these methods.

  10. Peak Vertical Ground Reaction Force during Two-Leg Landing: A Systematic Review and Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Wenxin Niu

    2014-01-01

Objectives. (1) To systematically review peak vertical ground reaction force (PvGRF) during two-leg drop landing from specific drop heights (DH), (2) to construct a mathematical model describing correlations between PvGRF and DH, and (3) to analyze the effects of some factors on the pooled PvGRF regardless of DH. Methods. A computerized bibliographical search was conducted to extract PvGRF data on a single foot when participants landed with both feet from various DHs. An innovative mathematical model was constructed to analyze the effects of gender, landing type, shoes, ankle stabilizers, surface stiffness and sampling frequency on PvGRF based on the pooled data. Results. Pooled PvGRF and DH data from 26 articles showed that a square-root function fits their relationship well. An experimental validation was also done on the regression equation for the medium frequency. PvGRF was not significantly affected by surface stiffness, but was significantly higher in men than women, in platform than suspended landing, in the barefoot than the shod condition, with an ankle stabilizer than in the control condition, and at higher than at lower sampling frequencies. Conclusions. PvGRF and the square root of DH showed a linear relationship. The mathematical modeling method combined with systematic review is helpful for analyzing the influencing factors during landing movement without considering DH.
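The square-root relationship reported above (PvGRF linear in √DH) can be sketched as an ordinary least-squares fit of PvGRF against the square root of drop height; the data points below are invented for illustration, not taken from the 26 pooled articles.

```python
import numpy as np

# Hypothetical pooled data: drop height (m) and peak vGRF (body weights).
dh = np.array([0.20, 0.30, 0.40, 0.50, 0.60, 0.80])
pvgrf = np.array([1.9, 2.3, 2.6, 2.9, 3.1, 3.6])

# The review found PvGRF to be linear in sqrt(DH), so fit
# PvGRF = a * sqrt(DH) + b by ordinary least squares.
a, b = np.polyfit(np.sqrt(dh), pvgrf, deg=1)

def predict_pvgrf(drop_height_m: float) -> float:
    """Predict peak vGRF (body weights) from drop height (m)."""
    return a * np.sqrt(drop_height_m) + b
```

Fitting on √DH rather than DH itself turns the square-root model into a simple linear regression.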

  11. Systematic problems with using dark matter simulations to model stellar halos

    Energy Technology Data Exchange (ETDEWEB)

    Bailin, Jeremy [Department of Physics and Astronomy, University of Alabama, Box 870324, Tuscaloosa, AL 35487-0324 (United States); Bell, Eric F.; Valluri, Monica [Department of Astronomy, University of Michigan, 830 Dennison Building, 500 Church Street, Ann Arbor, MI 48109 (United States); Stinson, Greg S. [Max-Planck-Institut für Astronomie (MPIA), Königstuhl 17, D-69117 Heidelberg (Germany); Debattista, Victor P. [Jeremiah Horrocks Institute, University of Central Lancashire, Preston PR1 2HE (United Kingdom); Couchman, H. M. P.; Wadsley, James, E-mail: jbailin@ua.edu [Department of Physics and Astronomy, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4M1 (Canada)

    2014-03-10

    The limits of available computing power have forced models for the structure of stellar halos to adopt one or both of the following simplifying assumptions: (1) stellar mass can be 'painted' onto dark matter (DM) particles in progenitor satellites; (2) pure DM simulations that do not form a luminous galaxy can be used. We estimate the magnitude of the systematic errors introduced by these assumptions using a controlled set of stellar halo models where we independently vary whether we look at star particles or painted DM particles, and whether we use a simulation in which a baryonic disk galaxy forms or a matching pure DM simulation that does not form a baryonic disk. We find that the 'painting' simplification reduces the halo concentration and internal structure, predominantly because painted DM particles have different kinematics from star particles even when both are buried deep in the potential well of the satellite. The simplification of using pure DM simulations reduces the concentration further, but increases the internal structure, and results in a more prolate stellar halo. These differences can be a factor of 1.5-7 in concentration (as measured by the half-mass radius) and 2-7 in internal density structure. Given this level of systematic uncertainty, one should be wary of overinterpreting differences between observations and the current generation of stellar halo models based on DM-only simulations when such differences are less than an order of magnitude.
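The concentration measure used above (the half-mass radius) can be sketched directly from particle positions and masses; the distribution below is synthetic, and the function is a generic illustration rather than the authors' analysis code.

```python
import numpy as np

def half_mass_radius(pos, mass):
    """Radius enclosing half the total mass, from particle positions
    (N x 3 array, centered on the halo) and particle masses."""
    r = np.linalg.norm(pos, axis=1)
    order = np.argsort(r)
    cumulative = np.cumsum(mass[order])
    # First radius at which the enclosed mass reaches half the total.
    idx = np.searchsorted(cumulative, 0.5 * cumulative[-1])
    return r[order][idx]

# Example: equal-mass particles uniformly filling a unit sphere;
# the half-mass radius should be near (1/2)**(1/3) ~ 0.794.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20000, 3))
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)  # unit directions
pts *= rng.random(20000)[:, None] ** (1 / 3)            # uniform in volume
r_half = half_mass_radius(pts, np.ones(20000))
```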

  12. 2-D Model Test Study of the Suape Breakwater, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.;

    This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...

  13. Airship Model Tests in the Variable Density Wind Tunnel

    Science.gov (United States)

    Abbott, Ira H

    1932-01-01

    This report presents the results of wind tunnel tests conducted to determine the aerodynamic characteristics of airship models. Eight Goodyear-Zeppelin airship models were tested in the original closed-throat tunnel. After the tunnel was rebuilt with an open throat a new model was tested, and one of the Goodyear-Zeppelin models was retested. The results indicate that much may be done to determine the drag of airships from evaluations of the pressure and skin-frictional drags on models tested at large Reynolds number.

  14. Systematic model for lean product development implementation in an automotive related company

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-07-01

Lean product development is a major innovative business strategy that employs sets of practices to achieve efficient, innovative and sustainable product development. Despite the many benefits of and high hopes for the lean strategy, many companies are still struggling, unable either to achieve or to sustain substantial positive results with their lean implementation efforts. As a first step towards addressing this issue, this paper proposes a systematic model that considers the administrative and implementation limitations of lean thinking practices in the product development process. The model, based on the integration of fuzzy Shannon's entropy and the Modified Technique for Order Preference by Similarity to the Ideal Solution (M-TOPSIS) for implementing lean product development practices with respect to different criteria, including management and leadership, financial capabilities, skills and expertise, and organizational culture, provides a guide or roadmap for product development managers on the lean implementation route.
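A crisp (non-fuzzy) simplification of the two steps named above can be sketched as follows: Shannon-entropy weighting of the criteria, then a TOPSIS ranking. The scores, the use of classical rather than fuzzy entropy, and plain TOPSIS rather than M-TOPSIS are all illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy criteria weights (crisp simplification).
    Rows = alternatives, columns = criteria; all entries positive."""
    P = X / X.sum(axis=0)                       # column-normalized shares
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P)).sum(axis=0)        # per-criterion entropy
    d = 1.0 - e                                  # degree of diversification
    return d / d.sum()

def topsis(X, w):
    """Closeness of each alternative to the ideal solution
    (all criteria treated as benefit criteria); higher is better."""
    V = w * X / np.linalg.norm(X, axis=0)        # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical scores of three lean-PD practices on the four criteria
# (management/leadership, finances, skills/expertise, culture).
X = np.array([[7.0, 5.0, 8.0, 6.0],
              [6.0, 8.0, 5.0, 7.0],
              [9.0, 6.0, 7.0, 8.0]])
w = entropy_weights(X)
closeness = topsis(X, w)
ranking = np.argsort(-closeness)   # best practice first
```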

  15. Systematic spectral analysis of GX 339-4: influence of Galactic background and reflection models

    CERN Document Server

    Clavel, M; Corbel, S; Coriat, M

    2016-01-01

    Black hole X-ray binaries display large outbursts, during which their properties are strongly variable. We develop a systematic spectral analysis of the 3-40 keV RXTE/PCA data in order to study the evolution of these systems and apply it to GX 339-4. Using the low count rate observations, we provide a precise model of the Galactic background at GX 339-4's location and discuss its possible impact on the source spectral parameters. At higher fluxes, the use of a Gaussian line to model the reflection component can lead to the detection of a high-temperature disk, in particular in the high-hard state. We demonstrate that this component is an artifact arising from an incomplete modeling of the reflection spectrum.

  16. Systematic Features of Axisymmetric Neutrino-Driven Core-Collapse Supernova Models in Multiple Progenitors

    CERN Document Server

    Nakamura, Ko; Kuroda, Takami; Kotake, Kei

    2014-01-01

We present an overview of axisymmetric core-collapse supernova simulations employing a neutrino transport scheme based on the isotropic diffusion source approximation. Studying 101 solar-metallicity progenitors covering zero-age main-sequence masses from 10.8 to 75.0 solar masses, we systematically investigate how differences in the structures of these multiple progenitors impact the hydrodynamic evolution. By following a long-term evolution over 1.0 s after bounce, most of the computed models exhibit neutrino-driven revival of the stalled bounce shock at about 200 - 800 ms postbounce, leading to the possibility of explosion. Pushing the boundaries of expectations from previous one-dimensional studies, our results show that the time of shock revival, the evolution of shock radii, and the diagnostic explosion energies are tightly correlated with the compactness parameter xi, which characterizes the structure of the progenitors. Compared to models with low xi, models with high xi undergo high ram pressure from the accreting ma...
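The compactness parameter can be sketched assuming the standard O'Connor & Ott definition, xi_M = (M/Msun) / (R(M)/1000 km), evaluated at a reference enclosed mass; the abstract does not state which definition or reference mass the study uses, and the profile values below are invented.

```python
import numpy as np

def compactness(mass_msun, radius_km, m_ref=2.5):
    """Compactness parameter xi_M = (M/Msun) / (R(M)/1000 km),
    with R(M) the radius enclosing m_ref solar masses in the
    progenitor (assumed definition, not stated in the abstract).
    Inputs are the enclosed-mass profile, sorted outward."""
    r_at_mref = np.interp(m_ref, mass_msun, radius_km)
    return m_ref / (r_at_mref / 1000.0)

# Illustrative profile: 2.5 Msun enclosed within 1.0e4 km gives
# xi_2.5 = 2.5 / (1.0e4 / 1000) = 0.25.
m_profile = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
r_profile = np.array([1e3, 2e3, 4e3, 7e3, 1e4, 2e4])
xi = compactness(m_profile, r_profile)
```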

  17. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches.

    Science.gov (United States)

    Andraud, Mathieu; Hens, Niel; Marais, Christiaan; Beutels, Philippe

    2012-01-01

Dengue is a vector-borne disease recognized as the major arbovirosis, with four immunologically distant dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies for the four different dengue serotypes. We aimed to review deterministic models of dengue transmission, in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure, and were linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we generally categorised forty-two published models of interest into single-serotype and multiserotype models. The multiserotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant for advising health policy in the future. The limited understanding of the factors which influence dengue transmission as well as limited data availability remain important concerns when applying dengue models to real-world decision problems.
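A minimal single-serotype vector-host model of the kind categorised above can be sketched with forward-Euler integration; all parameter values are illustrative and not drawn from any reviewed model.

```python
def simulate(days=200, dt=0.1):
    """Minimal single-serotype vector-host transmission model
    (Ross-Macdonald-style: SIR host, SI vector with turnover);
    populations are normalized fractions."""
    Sh, Ih, Rh = 0.99, 0.01, 0.0   # host: susceptible/infectious/recovered
    Sv, Iv = 1.0, 0.0              # vector: susceptible/infectious
    beta_hv = 0.30                 # vector -> host transmission rate (per day)
    beta_vh = 0.30                 # host -> vector transmission rate (per day)
    gamma = 1 / 7                  # host recovery rate (per day)
    mu_v = 1 / 14                  # vector birth = death rate (per day)
    for _ in range(int(days / dt)):
        # Derivatives from the current state (forward Euler step).
        dSh = -beta_hv * Sh * Iv
        dIh = beta_hv * Sh * Iv - gamma * Ih
        dRh = gamma * Ih
        dSv = mu_v - beta_vh * Sv * Ih - mu_v * Sv
        dIv = beta_vh * Sv * Ih - mu_v * Iv
        Sh, Ih, Rh = Sh + dt * dSh, Ih + dt * dIh, Rh + dt * dRh
        Sv, Iv = Sv + dt * dSv, Iv + dt * dIv
    return Sh, Ih, Rh, Sv, Iv

Sh, Ih, Rh, Sv, Iv = simulate()
```

Because transmission is routed host → vector → host, the basic reproduction number depends on the product beta_hv * beta_vh / (gamma * mu_v), which is above threshold for these values, so the sketch produces an epidemic.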

  18. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches.

    Directory of Open Access Journals (Sweden)

    Mathieu Andraud

Dengue is a vector-borne disease recognized as the major arbovirosis, with four immunologically distant dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies for the four different dengue serotypes. We aimed to review deterministic models of dengue transmission, in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure, and were linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we generally categorised forty-two published models of interest into single-serotype and multiserotype models. The multiserotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant for advising health policy in the future. The limited understanding of the factors which influence dengue transmission as well as limited data availability remain important concerns when applying dengue models to real-world decision problems.

  19. The utility of cardiac stress testing for detection of cardiovascular disease in breast cancer survivors: a systematic review

    Directory of Open Access Journals (Sweden)

    Kirkham AA

    2015-01-01

Amy A Kirkham,1 Sean A Virani,2 Kristin L Campbell1,3 (1Rehabilitation Sciences, 2Department of Medicine, 3Department of Physical Therapy, University of British Columbia, Vancouver, BC, Canada). Background: Heart function tests performed with myocardial stress, or “cardiac stress tests”, may be beneficial for detection of cardiovascular disease. Women who have been diagnosed with breast cancer are more likely to develop cardiovascular diseases than the general population, in part due to the direct toxic effects of cancer treatment on the cardiovascular system. The aim of this review was to determine the utility of cardiac stress tests for the detection of cardiovascular disease after cardiotoxic breast cancer treatment. Design: Systematic review. Methods: Medline and Embase were searched for studies utilizing heart function tests in breast cancer survivors. Studies utilizing a cardiac stress test and a heart function test performed at rest were included to determine whether stress provided added benefit to identifying cardiac abnormalities that were undetected at rest within each study. Results: Fourteen studies were identified. Overall, there was a benefit to utilizing stress tests over tests at rest in identifying evidence of cardiovascular disease in five studies, a possible benefit in five studies, and no benefit in four studies. The most common type of stress test was myocardial perfusion imaging, where reversible perfusion defects were detected under stress in individuals who had no defects at rest, in five of seven studies of long-term follow-up. Two studies demonstrated the benefit of stress echocardiography over resting echocardiography for detecting left ventricular dysfunction in anthracycline-treated breast cancer survivors. There was no benefit of stress cardiac magnetic resonance imaging in one study. Two studies showed a potential benefit of stress electrocardiography, whereas three others did not. Conclusion: The use of cardiac stress

  20. Predictive value of p16/Ki-67 immunocytochemistry for triage of women with abnormal Papanicolaou test in cervical cancer screening: a systematic review and meta-analysis.

    Science.gov (United States)

    Chen, Cheng-Chieh; Huang, Lee-Wen; Bai, Chyi-Huey; Lee, Chin-Cheng

    2016-01-01

The Papanicolaou (Pap) test is one screening strategy used to prevent cervical cancer in developed countries. p16/Ki-67 immunocytochemistry is a triage test performed on Pap smears in women with atypical squamous cells of undetermined significance (ASCUS) or low-grade squamous intraepithelial lesions. Our objective was to review studies investigating the diagnostic performance of the p16/Ki-67 dual stain for triage of women with abnormal Pap tests. We conducted a systematic review and meta-analysis of diagnostic test accuracy studies, following the protocol for systematic reviews of diagnostic accuracy studies. We searched PubMed, The Cochrane Library, BioMed Central, and ClinicalTrials.gov for relevant studies. We included research that assessed the accuracy of the p16/Ki-67 dual stain and high-risk human papillomavirus testing for triage of abnormal Pap smears. Review articles and studies that provided insufficient data to construct 2×2 tables were excluded. Data synthesis was conducted using a random-effects model; the outcomes were sensitivity and specificity. In seven studies encompassing 2628 patients, the pooled sensitivity and specificity of p16/Ki-67 for triage of abnormal Pap smear results were 0.91 (95% CI, 0.89 to 0.93) and 0.64 (95% CI, 0.62 to 0.66), respectively. No study used a case-control design. A subgroup analysis involving liquid-based cytology showed a sensitivity of 0.91 (95% CI, 0.89 to 0.93) and specificity of 0.64 (95% CI, 0.61 to 0.66). Our meta-analysis showed that p16/Ki-67 immunocytochemistry achieved high sensitivity and moderate specificity for detecting high-grade squamous intraepithelial lesions and cervical cancer. We suggest that the p16/Ki-67 dual stain might be a reliable ancillary method for identifying high-grade squamous intraepithelial lesions in women with abnormal Pap tests. No study in the meta-analysis examined the accuracy of the p16/Ki-67 dual stain for interpretation of glandular neoplasms.
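The random-effects pooling of per-study sensitivities can be sketched with generic DerSimonian-Laird pooling of logit-transformed proportions; this is an assumed method for illustration, not necessarily the exact model the authors fitted, and the counts below are hypothetical.

```python
import numpy as np

def pool_random_effects(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions
    (e.g. per-study sensitivities) on the logit scale."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                 # logit effect sizes
    v = 1 / events + 1 / (totals - events)  # within-study variances
    w = 1 / v                               # fixed-effect weights
    y_fe = (w * y).sum() / w.sum()
    q = (w * (y - y_fe) ** 2).sum()         # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1 / (v + tau2)                   # random-effects weights
    y_re = (w_re * y).sum() / w_re.sum()
    return 1 / (1 + np.exp(-y_re))          # back-transform to a proportion

# Hypothetical per-study true positives and diseased totals.
pooled_sens = pool_random_effects([90, 45, 180], [100, 50, 200])
```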

  1. Factors associated with functional capacity test results in patients with non-specific chronic low back pain: a systematic review.

    Science.gov (United States)

    van Abbema, Renske; Lakke, Sandra E; Reneman, Michiel F; van der Schans, Cees P; van Haastert, Corrien J M; Geertzen, Jan H B; Wittink, Harriët

    2011-12-01

    Functional capacity tests are standardized instruments to evaluate patients' capacities to execute work-related activities. Functional capacity test results are associated with biopsychosocial factors, making it unclear what is being measured in capacity testing. An overview of these factors was missing. The objective of this review was to investigate the level of evidence for factors that are associated with functional capacity test results in patients with non-specific chronic low back pain. A systematic literature review was performed identifying relevant studies from an electronic journal databases search. Candidate studies employed a cross-sectional or RCT design and were published between 1980 and October 2010. The quality of these studies was determined and level of evidence was reported for factors that were associated with capacity results in at least 3 studies. Twenty-two studies were included. The level of evidence was reported for lifting low, lifting high, carrying, and static lifting capacity. Lifting low test results were associated with self-reported disability and specific self-efficacy but not with pain duration. There was conflicting evidence for associations of lifting low with pain intensity, fear of movement/(re)injury, depression, gender and age. Lifting high was associated with gender and specific self-efficacy, but not with pain intensity or age. There is conflicting evidence for the association of lifting high with the factors self-reported disability, pain duration and depression. Carrying was associated with self-reported disability and not with pain intensity and there is conflicting evidence for associations with specific self-efficacy, gender and age. Static lifting was associated with fear of movement/(re)injury. Much heterogeneity was observed in investigated capacity tests and candidate associated factors. 
There was some evidence for biological and psychological factors that are or are not associated with capacity results but there

  2. A systematic review of the diagnostic accuracy of provocative tests of the neck for diagnosing cervical radiculopathy

    Science.gov (United States)

    Pool, Jan J. M.; van Tulder, Maurits W.; Riphagen, Ingrid I.; de Vet, Henrica C. W.

    2006-01-01

Clinical provocative tests of the neck, which position the neck and arm in order to aggravate or relieve arm symptoms, are commonly used in clinical practice in patients with a suspected cervical radiculopathy. Their diagnostic accuracy, however, has never been examined in a systematic review. A comprehensive search was conducted in order to identify all possible studies fulfilling the inclusion criteria. A study was included if: (1) any provocative test of the neck for diagnosing cervical radiculopathy was identified; (2) any reference standard was used; (3) sensitivity and specificity were reported or could be (re-)calculated; and, (4) the publication was a full report. Two reviewers independently selected studies, and assessed methodological quality. Only six studies met the inclusion criteria, which evaluated five provocative tests. In general, Spurling’s test demonstrated low to moderate sensitivity and high specificity, as did traction/neck distraction, and Valsalva’s maneuver. The upper limb tension test (ULTT) demonstrated high sensitivity and low specificity, while the shoulder abduction test demonstrated low to moderate sensitivity and moderate to high specificity. Common methodological flaws included lack of an optimal reference standard, disease progression bias, spectrum bias, and review bias. Limitations include few primary studies, substantial heterogeneity, and numerous methodological flaws among the studies; therefore, a meta-analysis was not conducted. This review suggests that, when consistent with the history and other physical findings, a positive Spurling’s, traction/neck distraction, and Valsalva’s might be indicative of a cervical radiculopathy, while a negative ULTT might be used to rule it out. However, the lack of evidence precludes any firm conclusions regarding their diagnostic value, especially when used in primary care. More high quality studies are necessary in order to resolve this issue. PMID:17013656
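The sensitivity and specificity that the review reports (or re-calculated) come from each study's 2×2 diagnostic table; a minimal sketch with invented counts in the ballpark the review describes for Spurling's test (low-to-moderate sensitivity, high specificity):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 diagnostic table:
    tp/fp/fn/tn = true positives, false positives, false negatives,
    true negatives against the reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts (not from any reviewed study): a moderately
# sensitive but highly specific provocative test.
sens, spec = diagnostic_accuracy(tp=15, fp=3, fn=15, tn=67)
```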

  3. Role of advanced neuroimaging, fluid biomarkers and genetic testing in the assessment of sport-related concussion: a systematic review.

    Science.gov (United States)

    McCrea, Michael; Meier, Timothy; Huber, Daniel; Ptito, Alain; Bigler, Erin; Debert, Chantel T; Manley, Geoff; Menon, David; Chen, Jen-Kai; Wall, Rachel; Schneider, Kathryn J; McAllister, Thomas

    2017-06-01

To conduct a systematic review of published literature on advanced neuroimaging, fluid biomarkers and genetic testing in the assessment of sport-related concussion (SRC). Computerised searches of Medline, PubMed, Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, Scopus and Cochrane Library from 1 January 2000 to 31 December 2016 were done. There were 3222 articles identified. In addition to medical subject heading terms, a study was included if (1) published in English, (2) represented original research, (3) involved human research, (4) pertained to SRC and (5) involved data from neuroimaging, fluid biomarkers or genetic testing collected within 6 months of injury. Ninety-eight studies qualified for review (76 neuroimaging, 16 biomarkers and 6 genetic testing). Separate reviews were conducted for neuroimaging, biomarkers and genetic testing. A standardised data extraction tool was used to document study design, population, tests employed and key findings. Reviewers used a modified quality assessment of studies of diagnostic accuracy studies (QUADAS-2) tool to rate the risk of bias, and a modified Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system to rate the overall level of evidence for each search. Results from the three respective reviews are compiled in separate tables and an interpretive summary of the findings is provided. Advanced neuroimaging, fluid biomarkers and genetic testing are important research tools, but require further validation to determine their ultimate clinical utility in the evaluation of SRC. Future research efforts should address current gaps that limit clinical translation. Ultimately, research on neurobiological and genetic aspects of SRC is predicted to have major translational significance to evidence-based approaches to clinical management of SRC, much like applied clinical research has had over the past 20 years.

  4. ADBT Frame Work as a Testing Technique: An Improvement in Comparison with Traditional Model Based Testing

    Directory of Open Access Journals (Sweden)

    Mohammed Akour

    2016-05-01

Software testing is an embedded activity in all software development life cycle phases. Due to the difficulties and high costs of software testing, many testing techniques have been developed with the common goal of testing software in the most optimal and cost-effective manner. Model-based testing (MBT) is used to direct testing activities such as test verification and selection. MBT is employed to encapsulate and understand the behavior of the system under test, which supports and helps software engineers to validate the system against various likely actions. The widespread usage of models has influenced the usage of MBT in the testing process, especially with UML. In this research, we propose an improved model-based testing strategy, which involves four different diagrams in the testing process. This paper also discusses and explains the activities in the proposed model with the finite state model (FSM). Comparisons with traditional model-based testing were made in terms of test case generation and results.

  5. Pharmacological and methodological aspects of the separation-induced vocalization test in guinea pig pups; a systematic review and meta-analysis

    NARCIS (Netherlands)

    Groenink, Lucianne; Verdouw, P Monika; Bakker, Brenda; Wever, Kimberley E

    2015-01-01

    The separation-induced vocalization test in guinea pig pups is one of many that has been used to screen for anxiolytic-like properties of drugs. The test is based on the cross-species phenomenon that infants emit distress calls when placed in social isolation. Here we report a systematic review and

  6. Pharmacological and methodological aspects of the separation-induced vocalization test in guinea pig pups; a systematic review and meta-analysis

    NARCIS (Netherlands)

    Groenink, L.; Verdouw, P.M.; Bakker, B.; Wever, K.E.

    2015-01-01

    The separation-induced vocalization test in guinea pig pups is one of many that has been used to screen for anxiolytic-like properties of drugs. The test is based on the cross-species phenomenon that infants emit distress calls when placed in social isolation. Here we report a systematic review and

  7. Observational Tests of Planet Formation Models

    CERN Document Server

    Sozzetti, A; Latham, D W; Carney, B W; Laird, J B; Stefanik, R P; Boss, A P; Charbonneau, D; O'Donovan, F T; Holman, M J; Winn, J N

    2007-01-01

    We summarize the results of two experiments to address important issues related to the correlation between planet frequencies and properties and the metallicity of the hosts. Our results can usefully inform formation, structural, and evolutionary models of gas giant planets.

  8. A Model for Random Student Drug Testing

    Science.gov (United States)

    Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

    2011-01-01

    The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

  9. The predictive value of skin prick testing for challenge-proven food allergy: a systematic review.

    Science.gov (United States)

    Peters, Rachel L; Gurrin, Lyle C; Allen, Katrina J

    2012-06-01

    Immunoglobulin E-mediated (IgE) food allergy affects 6-8% of children, and the prevalence is believed to be increasing. The gold standard of food allergy diagnosis is oral food challenges (OFCs); however, they are resource-consuming and potentially dangerous. Skin prick tests (SPTs) are able to detect the presence of allergen-specific IgE antibodies (sensitization), but they have low specificity for clinically significant food allergy. To reduce the need for OFCs, it has been suggested that children forgo an OFC if their SPT wheal size exceeds a cutoff that has a high predictability for food allergy. Although data for these studies are almost always gathered from high-risk populations, the 95% positive predictive values (PPVs) vary substantially between studies. SPT thresholds with a high probability of food allergy generated from these studies may not be generalizable to other populations, because of highly selective samples and variability in participant's age, test allergens, and food challenge protocol. Standardization of SPT devices and allergens, OFC protocols including standardized cessation criteria, and population-based samples would all help to improve generalizability of PPVs of SPTs.
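The dependence of positive predictive value on prevalence, which drives the generalizability concern above, follows directly from Bayes' rule; the operating point below (sensitivity and specificity of 0.90) is hypothetical.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same SPT cutoff yields very different PPVs in a high-risk
# clinic sample versus the general population.
high_risk = ppv(0.90, 0.90, prevalence=0.50)   # ~ 0.90
population = ppv(0.90, 0.90, prevalence=0.06)  # ~ 0.36
```

This is why 95% PPV thresholds derived from highly selected samples cannot be carried over to lower-prevalence populations unchanged.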

  10. Regression Test-Selection Technique Using Component Model Based Modification: Code to Test Traceability

    Directory of Open Access Journals (Sweden)

    Ahmad A. Saifan

    2016-04-01

Regression testing is a safeguarding procedure to validate and verify adapted software and guarantee that no errors have emerged. However, regression testing is very costly when testers need to re-execute all test cases against the modified software. This paper proposes a new approach in the regression test selection domain. The approach is based on meta-models (test models and structured models) to decrease the number of test cases used in the regression testing process. The approach has been evaluated using three Java applications. To measure the effectiveness of the proposed approach, we compare the results against the retest-all approach. The results show that our approach reduces the size of the test suite without a negative impact on the effectiveness of fault detection.
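The core selection idea (trace each test case to the components it exercises and re-run only tests touching modified components) can be sketched as follows; the traceability map and component names are hypothetical, not the paper's meta-model.

```python
def select_tests(traceability, modified):
    """Select only the test cases whose traced components intersect
    the set of modified components (a generic sketch of code-to-test
    traceability-based regression test selection)."""
    return sorted(test for test, components in traceability.items()
                  if components & modified)

# Hypothetical code-to-test traceability map.
traceability = {
    "test_login": {"auth", "session"},
    "test_report": {"reports", "db"},
    "test_export": {"reports", "io"},
}
# Only tests touching the modified component run, instead of retest-all.
selected = select_tests(traceability, modified={"reports"})
```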

  11. A comprehensive model for executing knowledge management audit