WorldWideScience

Sample records for meta-data based approach

  1. Data analysis with the DIANA meta-scheduling approach

    International Nuclear Information System (INIS)

    Anjum, A; McClatchey, R; Willers, I

    2008-01-01

    The concepts, design and evaluation of the Data Intensive and Network Aware (DIANA) meta-scheduling approach for solving the challenges of data analysis being faced by CERN experiments are discussed in this paper. Our results suggest that data analysis can be made robust by employing fault tolerant and decentralized meta-scheduling algorithms supported in our DIANA meta-scheduler. The DIANA meta-scheduler supports data intensive bulk scheduling, is network aware and follows a policy-centric meta-scheduling approach. In this paper, we demonstrate that a decentralized and dynamic meta-scheduling approach is an effective strategy to cope with increasing numbers of users, jobs and datasets. We present 'quality of service' related statistics for physics analysis through the application of a policy-centric fair-share scheduling model. The DIANA meta-schedulers create a peer-to-peer hierarchy of schedulers to accomplish resource management that is dynamic, changes with evolving loads and adapts to the volatile nature of the resources.

  2. Multivariate meta-analysis: a robust approach based on the theory of U-statistic.

    Science.gov (United States)

    Ma, Yan; Mazumdar, Madhu

    2011-10-30

    Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between them. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analyses with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and has been shown to perform as well as REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML because of non-normal data distribution is marginal and that the estimates from the MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.
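
    The record above contrasts REML, a multivariate method of moments (MMM), and a U-statistic-based estimator. For readers unfamiliar with moment-based pooling, the sketch below shows the simpler univariate method-of-moments (DerSimonian-Laird) random-effects estimator in Python; it illustrates the general idea only, not the multivariate U-statistic procedure proposed in the paper, and the effect sizes and variances are invented toy values.

```python
# Minimal univariate random-effects pooling sketch (DerSimonian-Laird moments
# estimator). Illustrative only: the record concerns a multivariate,
# U-statistic-based estimator, which is not reproduced here.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effects with a method-of-moments between-study variance."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances                            # fixed-effect weights
    theta_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - theta_fixed) ** 2)   # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_random = 1.0 / (variances + tau2)
    theta = np.sum(w_random * effects) / np.sum(w_random)
    se = np.sqrt(1.0 / np.sum(w_random))
    return theta, se, tau2

# Toy data: per-study effect estimates and their sampling variances.
theta, se, tau2 = dersimonian_laird([0.30, 0.15, 0.42, 0.05],
                                    [0.02, 0.03, 0.05, 0.04])
print(f"pooled effect {theta:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```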

  3. Simulation-based power calculations for planning a two-stage individual participant data meta-analysis.

    Science.gov (United States)

    Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D

    2018-05-18

    Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies the intervention effect on weight gain). Using our simulation-based approach, we estimated the power of the planned two-stage IPD meta-analysis to detect this interaction under a range of assumptions. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
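
    A minimal sketch of steps (i)-(iv) above, under assumed parameter values: simulate IPD for several trials, run a two-stage analysis (per-trial OLS for the treatment-BMI interaction, then fixed-effect inverse-variance pooling), and estimate power as the proportion of significant pooled estimates. The trial sizes, effect sizes and noise level are invented placeholders, not those of the pregnancy weight-gain example.

```python
# Hedged sketch of simulation-based power for a treatment-covariate interaction
# in a two-stage IPD meta-analysis. All numeric values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def one_rep(n_trials=14, n_per_trial=85, interaction=0.08, sigma=4.0):
    est, var = [], []
    for _ in range(n_trials):
        bmi = rng.normal(28, 5, n_per_trial)       # baseline covariate
        trt = rng.integers(0, 2, n_per_trial)      # randomised treatment
        bmi_c = bmi - bmi.mean()
        y = 10 - 1.5 * trt + interaction * trt * bmi_c + rng.normal(0, sigma, n_per_trial)
        X = np.column_stack([np.ones(n_per_trial), trt, bmi_c, trt * bmi_c])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]          # stage 1: per-trial OLS
        resid = y - X @ beta
        s2 = resid @ resid / (n_per_trial - X.shape[1])
        cov = s2 * np.linalg.inv(X.T @ X)
        est.append(beta[3])                                   # interaction estimate
        var.append(cov[3, 3])
    w = 1.0 / np.asarray(var)                                  # stage 2: pooling
    pooled = np.sum(w * np.asarray(est)) / np.sum(w)
    return abs(pooled / np.sqrt(1.0 / np.sum(w))) > 1.96

power = np.mean([one_rep() for _ in range(500)])               # step (iv)
print(f"estimated power: {power:.2f}")
```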

  4. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences, yet they are often treated as two unrelated topics in the literature. This book presents a unified framework for analyzing meta-analytic data within SEM and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the importance of these two techniques.

  5. Sequential sentinel SNP Regional Association Plots (SSS-RAP): an approach for testing independence of SNP association signals using meta-analysis data.

    Science.gov (United States)

    Zheng, Jie; Gaunt, Tom R; Day, Ian N M

    2013-01-01

    Genome-Wide Association Studies (GWAS) frequently incorporate meta-analysis within their framework. However, conditional analysis of individual-level data, which is an established approach for fine mapping of causal sites, is often precluded where only group-level summary data are available for analysis. Here, we present a numerical and graphical approach, "sequential sentinel SNP regional association plot" (SSS-RAP), which estimates regression coefficients (beta) with their standard errors using the meta-analysis summary results directly. Under an additive model, typical for genes with small effect, the effect for a sentinel SNP can be transformed to the predicted effect for a possibly dependent SNP through a 2×2 table of two-SNP haplotypes. The approach assumes Hardy-Weinberg equilibrium for test SNPs. SSS-RAP is available as a Web-tool (http://apps.biocompute.org.uk/sssrap/sssrap.cgi). To develop and illustrate SSS-RAP we analyzed lipid and ECG trait data from the British Women's Heart and Health Study (BWHHS), evaluated a meta-analysis for an ECG trait and presented several simulations. We compared results with existing approaches such as model selection methods and conditional analysis. Generally, findings were consistent. SSS-RAP represents a tool for testing independence of SNP association signals using meta-analysis data, and is also a convenient approach based on biological principles for fine mapping with group-level summary data. © 2012 Blackwell Publishing Ltd/University College London.
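
    A hedged illustration of the transformation described above: under Hardy-Weinberg equilibrium and an additive (dosage) model, the marginal effect expected at a test SNP, if the sentinel SNP were the sole causal site, follows from the two-SNP haplotype frequencies. This is a generic LD-projection sketch, not the SSS-RAP implementation itself, and the allele and haplotype frequencies are toy numbers.

```python
# Hedged sketch: predict the marginal effect at a test SNP implied by the
# effect at the sentinel SNP, using 2-SNP haplotype frequencies under HWE.
def predicted_effect(beta_sentinel, p_sentinel, p_test, p_hap_both):
    """
    beta_sentinel : marginal per-allele effect at the sentinel SNP
    p_sentinel    : frequency of the sentinel effect allele
    p_test        : frequency of the test effect allele
    p_hap_both    : frequency of the haplotype carrying both effect alleles
    """
    d = p_hap_both - p_sentinel * p_test       # LD coefficient D
    cov = 2.0 * d                               # covariance of allele dosages (HWE)
    var_test = 2.0 * p_test * (1.0 - p_test)    # dosage variance at the test SNP
    return beta_sentinel * cov / var_test

# Toy numbers: sentinel effect 0.20 per allele, moderate LD between the SNPs.
print(predicted_effect(beta_sentinel=0.20, p_sentinel=0.30,
                       p_test=0.25, p_hap_both=0.15))
```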

  6. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    Science.gov (United States)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples are drawn via finite element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance.
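
    A sketch of the surrogate-modelling idea in Python: fit a Gaussian-process regression meta-model to pre-sampled draping results (geometry parameters mapped to a formability indicator) and query it cheaply for new geometries. The parameter names and the toy "maximum shear angle" response are assumptions made for illustration, not the authors' actual draping database.

```python
# Hedged sketch: Gaussian-process surrogate for a formability indicator,
# trained on pretend geometry parameters and pretend FE draping results.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Pretend design table: (corner radius [mm], draw depth [mm]) for sampled runs.
X = rng.uniform([5.0, 10.0], [50.0, 80.0], size=(40, 2))
# Pretend formability response, e.g. maximum shear angle from FE draping runs.
y = 20.0 + 0.4 * X[:, 1] - 0.2 * X[:, 0] + rng.normal(0, 0.5, 40)

kernel = ConstantKernel(1.0) * RBF(length_scale=[10.0, 10.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Cheap prediction (with uncertainty) for a new doubly-curved region.
mean, std = gp.predict(np.array([[20.0, 55.0]]), return_std=True)
print(f"predicted max shear angle: {mean[0]:.1f} deg +/- {std[0]:.1f}")
```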

  7. An XML-based Schema-less Approach to Managing Diagnostic Data in Heterogeneous Formats

    Energy Technology Data Exchange (ETDEWEB)

    Naito, O. [Japan Atomic Energy Agency, Ibaraki (Japan)

    2009-07-01

    Managing diagnostic data in heterogeneous formats is always a nuisance, especially when a new diagnostic technique requires a new data structure that does not fit in the existing data format. Ideally, it is best to have an all-purpose schema that can specify any data structure. But devising such a schema is a difficult task and the resultant data management system tends to be large and complicated. As a complementary approach, we can think of a system that has no specific schema but requires each of the data to describe itself without assuming any prior information. In this paper, a very primitive implementation of such a system based on Extensible Markup Language (XML) is examined. The actual implementation is no more than the addition of a tiny XML meta-data file that describes the detailed format of the associated diagnostic data file. There are many ways to write and read such meta-data files. For example, if the data are in a standard format that is foreign to the existing system, just specify the name of the format and what interface to use for reading the data. If the data are in a non-standard arbitrary format, write what is written and how into the meta-data file at every occurrence of data output. And as a last resort, if the format of the data is too complicated, a code to read the data can be stored in the meta-data file. Of course, this schema-less approach has some drawbacks, two of which are the doubling of the number of files to be managed and the low performance of data handling, though the former can be a merit when it is necessary to update the meta-data leaving the body data intact. The important point is that the necessary information to read the data is decoupled from the data itself. The merits and demerits of this approach are discussed. This document is composed of an abstract followed by the presentation slides. (author)
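
    A minimal sketch of the schema-less idea: a tiny XML meta-data file describes how to read the accompanying diagnostic data file, and a generic loader dispatches on that description. The element and attribute names below are illustrative inventions, not the format actually used at JAEA.

```python
# Hedged sketch: an XML meta-data file that tells a generic loader how to
# read the associated diagnostic data file.
import io
import numpy as np
import xml.etree.ElementTree as ET

META = """<diagnostic name="bolometer" shot="12345">
  <data file="bolometer_12345.dat" format="ascii-columns"
        delimiter="," columns="time,channel1,channel2"/>
</diagnostic>"""

def load_from_meta(meta_xml, open_file=open):
    root = ET.fromstring(meta_xml)
    data = root.find("data")
    if data.get("format") == "ascii-columns":
        with open_file(data.get("file")) as fh:
            values = np.loadtxt(fh, delimiter=data.get("delimiter"))
        return dict(zip(data.get("columns").split(","), values.T))
    raise ValueError(f"no reader registered for format {data.get('format')!r}")

# Demo with an in-memory stand-in for the data file.
fake_file = lambda _name: io.StringIO("0.0,1.1,2.2\n0.1,1.3,2.0\n")
print(load_from_meta(META, open_file=fake_file)["channel1"])
```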

  8. Ontology-based meta-analysis of global collections of high-throughput public data.

    Directory of Open Access Journals (Sweden)

    Ilya Kupershmidt

    2010-09-01

    The investigation of the interconnections between the molecular and genetic events that govern biological systems is essential if we are to understand the development of disease and design effective novel treatments. Microarray and next-generation sequencing technologies have the potential to provide this information. However, taking full advantage of these approaches requires that biological connections be made across large quantities of highly heterogeneous genomic datasets. Leveraging the increasingly huge quantities of genomic data in the public domain is fast becoming one of the key challenges in the research community today. We have developed a novel data mining framework that enables researchers to use this growing collection of public high-throughput data to investigate any set of genes or proteins. The connectivity between molecular states across thousands of heterogeneous datasets from microarrays and other genomic platforms is determined through a combination of rank-based enrichment statistics, meta-analyses, and biomedical ontologies. We address data quality concerns through dataset replication and meta-analysis and ensure that the majority of the findings are derived using multiple lines of evidence. As an example of our strategy and the utility of this framework, we apply our data mining approach to explore the biology of brown fat within the context of the thousands of publicly available gene expression datasets. Our work presents a practical strategy for organizing, mining, and correlating global collections of large-scale genomic data to explore normal and disease biology. Using a hypothesis-free approach, we demonstrate how a data-driven analysis across very large collections of genomic data can reveal novel discoveries and evidence to support existing hypotheses.

  9. Ontology-based meta-analysis of global collections of high-throughput public data.

    Science.gov (United States)

    Kupershmidt, Ilya; Su, Qiaojuan Jane; Grewal, Anoop; Sundaresh, Suman; Halperin, Inbal; Flynn, James; Shekar, Mamatha; Wang, Helen; Park, Jenny; Cui, Wenwu; Wall, Gregory D; Wisotzkey, Robert; Alag, Satnam; Akhtari, Saeid; Ronaghi, Mostafa

    2010-09-29

    The investigation of the interconnections between the molecular and genetic events that govern biological systems is essential if we are to understand the development of disease and design effective novel treatments. Microarray and next-generation sequencing technologies have the potential to provide this information. However, taking full advantage of these approaches requires that biological connections be made across large quantities of highly heterogeneous genomic datasets. Leveraging the increasingly huge quantities of genomic data in the public domain is fast becoming one of the key challenges in the research community today. We have developed a novel data mining framework that enables researchers to use this growing collection of public high-throughput data to investigate any set of genes or proteins. The connectivity between molecular states across thousands of heterogeneous datasets from microarrays and other genomic platforms is determined through a combination of rank-based enrichment statistics, meta-analyses, and biomedical ontologies. We address data quality concerns through dataset replication and meta-analysis and ensure that the majority of the findings are derived using multiple lines of evidence. As an example of our strategy and the utility of this framework, we apply our data mining approach to explore the biology of brown fat within the context of the thousands of publicly available gene expression datasets. Our work presents a practical strategy for organizing, mining, and correlating global collections of large-scale genomic data to explore normal and disease biology. Using a hypothesis-free approach, we demonstrate how a data-driven analysis across very large collections of genomic data can reveal novel discoveries and evidence to support existing hypotheses.

  10. e-Government Maturity Model Based on Systematic Review and Meta-Ethnography Approach

    Directory of Open Access Journals (Sweden)

    Darmawan Napitupulu

    2016-11-01

    Maturity models based on e-Government portals have been developed by a number of researchers, both individually and institutionally, but they remain scattered across various journals and conference articles and differ from one another in focus, both in terms of stages and features. The aim of this research is to integrate a number of existing maturity models in order to build a generic maturity model for e-Government portals. The method used in this study is a systematic review with a meta-ethnography qualitative approach. Meta-ethnography, which is part of the systematic review method, is a technique for performing data integration to obtain theories and concepts with a new, deeper and more thorough level of understanding. The result is an e-Government portal maturity model that consists of 7 (seven) stages, namely web presence, interaction, transaction, vertical integration, horizontal integration, full integration, and open participation. These seven stages are synthesized from 111 key concepts drawn from 25 studies of e-Government portal maturity models. The resulting maturity model is more comprehensive and generic because it integrates the models (best practices) that exist today.

  11. Spatial Bayesian latent factor regression modeling of coordinate-based meta-analysis data.

    Science.gov (United States)

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D; Nichols, Thomas E

    2018-03-01

    Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the article are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to (i) identify areas of consistent activation; and (ii) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterized as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. © 2017, The International Biometric Society.
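
    A heavily simplified, one-dimensional sketch of the generative model described above: a study's foci are drawn from a Poisson process whose log intensity is a linear combination of basis functions with study-specific coefficients. The real model is three-dimensional with latent-factor structure on the basis coefficients; this only shows the basic doubly stochastic Poisson mechanism, and all parameter values are invented.

```python
# Hedged 1-D sketch of a doubly stochastic Poisson process whose log intensity
# is a linear combination of Gaussian basis functions.
import numpy as np

rng = np.random.default_rng(3)

grid = np.linspace(0, 100, 1001)                  # 1-D "brain" coordinate
centres = np.linspace(5, 95, 10)                  # basis-function centres
basis = np.exp(-0.5 * ((grid[:, None] - centres) / 8.0) ** 2)   # (1001, 10)

def simulate_study_foci():
    coeffs = rng.normal(-2.0, 0.5, size=centres.size)   # study-specific weights
    intensity = np.exp(basis @ coeffs)                   # foci per unit length
    # Discretised simulation: Poisson counts per grid cell, reported at cell centres.
    cell = grid[1] - grid[0]
    counts = rng.poisson(intensity * cell)
    return np.repeat(grid, counts)

foci = simulate_study_foci()
print(f"{foci.size} simulated activation foci, e.g. {np.round(foci[:5], 1)}")
```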

  12. Spatial Bayesian Latent Factor Regression Modeling of Coordinate-based Meta-analysis Data

    Science.gov (United States)

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D.; Nichols, Thomas E.

    2017-01-01

    Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the paper are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to 1) identify areas of consistent activation; and 2) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterised as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. PMID:28498564

  13. A META-COMPOSITE SOFTWARE DEVELOPMENT APPROACH FOR TRANSLATIONAL RESEARCH

    Science.gov (United States)

    Sadasivam, Rajani S.; Tanik, Murat M.

    2013-01-01

    Translational researchers conduct research in a highly data-intensive and continuously changing environment and need to use multiple, disparate tools to achieve their goals. These researchers would greatly benefit from meta-composite software development or the ability to continuously compose and recompose tools together in response to their ever-changing needs. However, the available tools are largely disconnected, and current software approaches are inefficient and ineffective in their support for meta-composite software development. Building on the composite services development approach, the de facto standard for developing integrated software systems, we propose a concept-map and agent-based meta-composite software development approach. A crucial step in composite services development is the modeling of users’ needs as processes, which can then be specified in an executable format for system composition. We have two key innovations. First, our approach allows researchers (who understand their needs best) instead of technicians to take a leadership role in the development of process models, reducing inefficiencies and errors. A second innovation is that our approach also allows for modeling of complex user interactions as part of the process, overcoming the technical limitations of current tools. We demonstrate the feasibility of our approach using a real-world translational research use case. We also present results of usability studies evaluating our approach for future refinements. PMID:23504436

  14. A meta-composite software development approach for translational research.

    Science.gov (United States)

    Sadasivam, Rajani S; Tanik, Murat M

    2013-06-01

    Translational researchers conduct research in a highly data-intensive and continuously changing environment and need to use multiple, disparate tools to achieve their goals. These researchers would greatly benefit from meta-composite software development or the ability to continuously compose and recompose tools together in response to their ever-changing needs. However, the available tools are largely disconnected, and current software approaches are inefficient and ineffective in their support for meta-composite software development. Building on the composite services development approach, the de facto standard for developing integrated software systems, we propose a concept-map and agent-based meta-composite software development approach. A crucial step in composite services development is the modeling of users' needs as processes, which can then be specified in an executable format for system composition. We have two key innovations. First, our approach allows researchers (who understand their needs best) instead of technicians to take a leadership role in the development of process models, reducing inefficiencies and errors. A second innovation is that our approach also allows for modeling of complex user interactions as part of the process, overcoming the technical limitations of current tools. We demonstrate the feasibility of our approach using a real-world translational research use case. We also present results of usability studies evaluating our approach for future refinements.

  15. A Karnaugh map based approach towards systemic reviews and meta-analysis.

    Science.gov (United States)

    Hassan, Abdul Wahab; Hassan, Ahmad Kamal

    2016-01-01

    Meta-analyses and systemic reviews have long helped us draw conclusions from numerous parallel or conflicting studies. Existing studies are presented in tabulated forms which contain appropriate information for specific cases, yet they are difficult to visualize. In meta-analysis, this can lead to absorption and subsumption errors, with the undesirable potential for subsequent misunderstandings in social and operational methodologies. The purpose of this study is to investigate an alternative forum for meta-data presentation that relies on humans' strong pictorial perception capability. Analysis of big data is assumed to be a complex and daunting task often reserved for the computational power of machines, yet there exist mapping tools which can analyze such data by hand. Data analysis at such a scale can benefit from statistical tools like Karnaugh maps, where all studies can be put together in a graph-based mapping. Such a formulation gives more control in observing patterns across the research community and in analyzing uncertainty and reliability metrics. We present a methodological process for converting a well-established study in health care to an equivalent binary representation, followed by furnishing the values onto a Karnaugh map. The data used herein are from Burns et al. (J Publ Health 34(1):138-148, 2011), consisting of retrospectively collected data sets from various studies on clinical coding data accuracy. Using a customized filtration process, a total of 25 studies were selected for review with no, partial, or complete knowledge of six independent variables, thus forming 64 independent cells on a Karnaugh map. The study concluded that this pictorial graphing, as expected, helped simplify the overview of meta-analysis and systemic reviews.
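
    A small sketch of the mapping step described above: each reviewed study is coded on six binary variables, giving an address in a 64-cell grid analogous to a six-variable Karnaugh map. The variable names and study codings below are invented placeholders, not those of Burns et al.; a true Karnaugh map would additionally order rows and columns in Gray code so that adjacent cells differ in a single variable.

```python
# Hedged sketch: place studies coded on six binary variables into a 64-cell grid.
import numpy as np

VARIABLES = ["setting", "coder_training", "code_standard",
             "audit", "sample_size_reported", "validation"]

studies = {                      # study id -> six binary codings (invented)
    "study_01": (1, 0, 1, 1, 0, 0),
    "study_02": (1, 1, 1, 0, 0, 1),
    "study_03": (0, 0, 1, 1, 1, 0),
}

grid = np.zeros(64, dtype=int)   # one cell per combination of the 6 variables
for coding in studies.values():
    index = int("".join(map(str, coding)), 2)    # binary address 0..63
    grid[index] += 1

for idx in np.flatnonzero(grid):
    print(f"cell {idx:06b}: {grid[idx]} study/studies")
```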

  16. Psychological approaches in the treatment of specific phobias: A meta-analysis

    NARCIS (Netherlands)

    Wolitzky-Taylor, K.B.; Horowitz, J.D.; Powers, M.B.; Telch, M.J.

    2008-01-01

    Data from 33 randomized treatment studies were subjected to a meta-analysis to address questions surrounding the efficacy of psychological approaches in the treatment of specific phobia. As expected, exposure-based treatments produced large effect sizes relative to no treatment.

  17. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    Science.gov (United States)

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded to exclude the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
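
    A sketch of the LR-pooling mechanics described above: approximate each study's log-likelihood for the effect by a normal curve centred on its estimate, sum the curves across studies, take the maximising value as the pooled estimate, and report the range of effects whose combined log-likelihood ratio stays within a support threshold of the maximum. The 1/32 likelihood threshold and the toy estimates are illustrative assumptions, not necessarily the cutoff defining the authors' intrinsic confidence interval.

```python
# Hedged sketch of likelihood-ratio pooling across studies on a grid of effects.
import numpy as np

# Per-study log hazard-ratio estimates and standard errors (toy values).
estimates = np.array([-0.15, -0.05, -0.22, -0.10])
std_errors = np.array([0.08, 0.10, 0.12, 0.07])

theta_grid = np.linspace(-0.6, 0.4, 2001)
# Normal approximation to each study's log-likelihood, summed across studies.
loglik = -0.5 * np.sum(((theta_grid[:, None] - estimates) / std_errors) ** 2, axis=1)
loglr = loglik - loglik.max()                   # combined log-likelihood ratio

pooled = theta_grid[np.argmax(loglr)]
support = theta_grid[loglr >= np.log(1 / 32)]   # effects not "ruled out" at 1/32
print(f"pooled log-HR {pooled:.3f}, "
      f"interval ({support.min():.3f}, {support.max():.3f})")
```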

  18. Meta-análise em pesquisas científicas: enfoque em metodologias (Meta-analysis in scientific research: a methodological approach)

    Directory of Open Access Journals (Sweden)

    P.A. Lovatto

    2007-07-01

    This text describes the basic principles of research synthesis with a focus on meta-analysis. The state of the art of meta-analysis is presented, recovering information on its evolution and on the basic methodologies for carrying it out. The historical antecedents of meta-analysis, the limits of classic approaches to literature review, its conceptual bases, objectives, and justifications are described, and a general methodology for conducting a meta-analysis is outlined. The criteria for defining objectives are presented, along with the procedures for systematizing information and managing the bibliographic database used in meta-analytic studies (selection, coding, and filtering of data). The most common analyses (graphical inspection, weighting), the choice of statistical model (a qualitative explanatory factor, a qualitative or quantitative effect), interfering factors, and post-analytical procedures (residual variation, heterogeneity among results) are also presented. In summary, this text shows that meta-analysis is superior to traditional forms of literature review because it estimates treatment effects with greater precision, adjusting them for experimental heterogeneity. However, meta-analysis demands careful systematization and analysis of the research results.

  19. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    Directory of Open Access Journals (Sweden)

    Mike W.-L. Cheung

    2016-05-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists – and probably the most crucial one – is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
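
    A minimal sketch of the split/analyze/meta-analyze workflow on simulated data: partition a large dataset, estimate the same regression slope in every partition, and pool the partition estimates with inverse-variance weights. The paper itself demonstrates the workflow with the metaSEM package in R; the Python version below is a simplified stand-in, and a random-effects step could replace the fixed-effect pooling shown.

```python
# Hedged sketch: split a big dataset, analyze each split, meta-analyze the results.
import numpy as np

rng = np.random.default_rng(7)
n, k = 200_000, 20                                     # big dataset, 20 splits
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)                       # true slope 0.3

est, var = [], []
for xi, yi in zip(np.array_split(x, k), np.array_split(y, k)):   # split
    X = np.column_stack([np.ones(xi.size), xi])
    beta = np.linalg.lstsq(X, yi, rcond=None)[0]                  # analyze
    resid = yi - X @ beta
    s2 = resid @ resid / (xi.size - 2)
    est.append(beta[1])
    var.append(s2 * np.linalg.inv(X.T @ X)[1, 1])

w = 1.0 / np.asarray(var)                                         # meta-analyze
pooled = np.sum(w * np.asarray(est)) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled slope {pooled:.4f} (SE {se:.4f}) vs true 0.3")
```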

  20. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    Science.gov (United States)

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.

  1. Options for a health system researcher to choose in Meta Review (MR) approaches - Meta Narrative (MN) and Meta Triangulation (MT)

    Directory of Open Access Journals (Sweden)

    Sanjeev Davey

    2015-01-01

    Two new approaches to systematic reviewing, the meta-narrative review (MNR), which a health researcher can use for topics that are conceptualized and studied differently by different types of researchers for policy decisions, and the meta-triangulation review (MTR), done to build theory for studying multifaceted phenomena characterized by expansive and contested research domains, are ready for adoption in the arena of health system research. A critical look at which meta-review approach is better, i.e. meta-narrative review or meta-triangulation review, can therefore give new insights to a health system researcher. A systematic review of two key words, "meta-narrative review" and "meta-triangulation review", in health system research was carried out in key search engines such as PubMed, the Cochrane Library, BioMed Central and Google Scholar, covering the 20 years up to 21 March 2014. Studies from both the developed and developing world were included in any form and scope to draw final conclusions; however, unpublished thesis data were not included in the systematic review. The meta-narrative review is a type of systematic review which can be used for a wide range of topics and questions involving judgments and inferences in public health. The meta-triangulation review, on the other hand, is a three-phased, qualitative meta-analysis process which can be used to explore variations in the assumptions of alternative paradigms, gain insights into these multiple paradigms at one point in time, and address emerging themes and the resulting theories.

  2. MetaDB a Data Processing Workflow in Untargeted MS-Based Metabolomics Experiments.

    Science.gov (United States)

    Franceschi, Pietro; Mylonas, Roman; Shahaf, Nir; Scholz, Matthias; Arapitsas, Panagiotis; Masuero, Domenico; Weingart, Georg; Carlin, Silvia; Vrhovsek, Urska; Mattivi, Fulvio; Wehrens, Ron

    2014-01-01

    Due to their sensitivity and speed, mass-spectrometry based analytical technologies are widely used in metabolomics to characterize biological phenomena. To address issues like metadata organization, quality assessment, data processing, data storage, and, finally, submission to public repositories, bioinformatic pipelines of a non-interactive nature are often employed, complementing the interactive software used for initial inspection and visualization of the data. These pipelines are often created as open-source software, allowing the complete and exhaustive documentation of each step and ensuring the reproducibility of the analysis of extensive and often expensive experiments. In this paper, we review the major steps which constitute such a data processing pipeline, discussing them in the context of an open-source software for untargeted MS-based metabolomics experiments recently developed at our institute. The software has been developed by integrating our metaMS R package with a user-friendly web-based application written in Grails. MetaMS takes care of data pre-processing and annotation, while the interface deals with the creation of the sample lists, the organization of the data storage, and the generation of survey plots for quality assessment. Experimental and biological metadata are stored in the ISA-Tab format, making the proposed pipeline fully integrated with the MetaboLights framework.

  3. Open Hypermedia as User Controlled Meta Data for the Web

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennert; Bouvin, Niels Olof

    2000-01-01

    This paper introduces an approach to utilise open hypermedia structures such as links, annotations, collections and guided tours as meta data for Web resources. The paper introduces an XML based data format, called Open Hypermedia Interchange Format - OHIF, for such hypermedia structures. OHIF resembles XLink with respect to its representation of out-of-line links, but it goes beyond XLink with a richer set of structuring mechanisms, including e.g. composites. Moreover, OHIF includes an addressing mechanism (LocSpecs) that goes beyond XPointer and URL in its ability to locate non-XML data, and it supports distributed open hypermedia linking between Web pages and WebDAV aware desktop applications. The paper describes the OHIF format and demonstrates how the Webvise system handles OHIF. Finally, it argues for better support for handling user controlled meta data, e.g. support for linking in non-XML data.

  4. Multi-site study of additive genetic effects on fractional anisotropy of cerebral white matter: comparing meta and mega analytical approaches for data pooling

    Science.gov (United States)

    Kochunov, Peter; Jahanshad, Neda; Sprooten, Emma; Nichols, Thomas E.; Mandl, René C.; Almasy, Laura; Booth, Tom; Brouwer, Rachel M.; Curran, Joanne E.; de Zubicaray, Greig I.; Dimitrova, Rali; Duggirala, Ravi; Fox, Peter T.; Hong, L. Elliot; Landman, Bennett A.; Lemaitre, Hervé; Lopez, Lorna; Martin, Nicholas G.; McMahon, Katie L.; Mitchell, Braxton D.; Olvera, Rene L.; Peterson, Charles P.; Starr, John M.; Sussmann, Jessika E.; Toga, Arthur W.; Wardlaw, Joanna M.; Wright, Margaret J.; Wright, Susan N.; Bastin, Mark E.; McIntosh, Andrew M.; Boomsma, Dorret I.; Kahn, René S.; den Braber, Anouk; de Geus, Eco JC; Deary, Ian J.; Hulshoff Pol, Hilleke E.; Williamson, Douglas E.; Blangero, John; van ’t Ent, Dennis; Thompson, Paul M.; Glahn, David C.

    2014-01-01

    Combining datasets across independent studies can boost statistical power by increasing the numbers of observations and can achieve more accurate estimates of effect sizes. This is especially important for genetic studies where a large number of observations are required to obtain sufficient power to detect and replicate genetic effects. There is a need to develop and evaluate methods for joint-analytical analyses of rich datasets collected in imaging genetics studies. The ENIGMA-DTI consortium is developing and evaluating approaches for obtaining pooled estimates of heritability through meta- and mega-genetic analytical approaches, to estimate the general additive genetic contributions to the intersubject variance in fractional anisotropy (FA) measured from diffusion tensor imaging (DTI). We used the ENIGMA-DTI data harmonization protocol for uniform processing of DTI data from multiple sites. We evaluated this protocol in five family-based cohorts providing data from a total of 2248 children and adults (ages: 9–85) collected with various imaging protocols. We used the imaging genetics analysis tool, SOLAR-Eclipse, to combine twin and family data from Dutch, Australian and Mexican-American cohorts into one large “mega-family”. We showed that heritability estimates may vary from one cohort to another. We used two meta-analytical (the sample-size and standard-error weighted) approaches and a mega-genetic analysis to calculate heritability estimates across populations. We performed leave-one-out analysis of the joint estimates of heritability, removing a different cohort each time to understand the estimate variability. Overall, meta- and mega-genetic analyses of heritability produced robust estimates of heritability. PMID:24657781
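
    A small sketch of the two meta-analytical weighting schemes named above, pooling per-cohort heritability estimates by sample size and by inverse squared standard error, plus a leave-one-out stability check. The cohort values are invented; the consortium's actual estimates come from SOLAR-Eclipse analyses of family data.

```python
# Hedged sketch: sample-size- and SE-weighted pooling of per-cohort heritability.
import numpy as np

cohorts = ["A", "B", "C", "D", "E"]
h2 = np.array([0.55, 0.62, 0.48, 0.70, 0.58])       # per-cohort heritability
se = np.array([0.08, 0.06, 0.10, 0.09, 0.07])       # standard errors
n = np.array([480, 600, 300, 350, 520])             # cohort sizes

def pool(values, weights):
    return np.sum(weights * values) / np.sum(weights)

print("sample-size weighted h2:", round(pool(h2, n), 3))
print("SE weighted h2:         ", round(pool(h2, 1 / se**2), 3))

for i, name in enumerate(cohorts):                   # leave-one-out stability
    keep = np.arange(len(cohorts)) != i
    print(f"without {name}: {pool(h2[keep], 1 / se[keep]**2):.3f}")
```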

  5. Multi-site study of additive genetic effects on fractional anisotropy of cerebral white matter: Comparing meta and megaanalytical approaches for data pooling.

    Science.gov (United States)

    Kochunov, Peter; Jahanshad, Neda; Sprooten, Emma; Nichols, Thomas E; Mandl, René C; Almasy, Laura; Booth, Tom; Brouwer, Rachel M; Curran, Joanne E; de Zubicaray, Greig I; Dimitrova, Rali; Duggirala, Ravi; Fox, Peter T; Hong, L Elliot; Landman, Bennett A; Lemaitre, Hervé; Lopez, Lorna M; Martin, Nicholas G; McMahon, Katie L; Mitchell, Braxton D; Olvera, Rene L; Peterson, Charles P; Starr, John M; Sussmann, Jessika E; Toga, Arthur W; Wardlaw, Joanna M; Wright, Margaret J; Wright, Susan N; Bastin, Mark E; McIntosh, Andrew M; Boomsma, Dorret I; Kahn, René S; den Braber, Anouk; de Geus, Eco J C; Deary, Ian J; Hulshoff Pol, Hilleke E; Williamson, Douglas E; Blangero, John; van 't Ent, Dennis; Thompson, Paul M; Glahn, David C

    2014-07-15

    Combining datasets across independent studies can boost statistical power by increasing the numbers of observations and can achieve more accurate estimates of effect sizes. This is especially important for genetic studies where a large number of observations are required to obtain sufficient power to detect and replicate genetic effects. There is a need to develop and evaluate methods for joint-analytical analyses of rich datasets collected in imaging genetics studies. The ENIGMA-DTI consortium is developing and evaluating approaches for obtaining pooled estimates of heritability through meta- and mega-genetic analytical approaches, to estimate the general additive genetic contributions to the intersubject variance in fractional anisotropy (FA) measured from diffusion tensor imaging (DTI). We used the ENIGMA-DTI data harmonization protocol for uniform processing of DTI data from multiple sites. We evaluated this protocol in five family-based cohorts providing data from a total of 2248 children and adults (ages: 9-85) collected with various imaging protocols. We used the imaging genetics analysis tool, SOLAR-Eclipse, to combine twin and family data from Dutch, Australian and Mexican-American cohorts into one large "mega-family". We showed that heritability estimates may vary from one cohort to another. We used two meta-analytical (the sample-size and standard-error weighted) approaches and a mega-genetic analysis to calculate heritability estimates across populations. We performed leave-one-out analysis of the joint estimates of heritability, removing a different cohort each time to understand the estimate variability. Overall, meta- and mega-genetic analyses of heritability produced robust estimates of heritability. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  7. A novel bi-level meta-analysis approach: applied to biological pathway analysis.

    Science.gov (United States)

    Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin

    2016-02-01

    The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present a real challenge in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework versus classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as against a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases, acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors in correctly identifying pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors (contact: sorin@wayne.edu). Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
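
    A hedged sketch of the additive p-value combination that the bi-level framework builds on: under the null hypothesis, independent p-values are Uniform(0,1), so by the Central Limit Theorem their mean is approximately normal, and an unusually small mean yields a small combined p-value. Applying the same combination first within experiments and then across experiments gives the bi-level flavour; the p-values below are toy numbers and this is not the authors' full procedure.

```python
# Hedged sketch: additive (CLT-based) p-value combination at two levels.
import numpy as np
from scipy.stats import norm

def additive_combine(pvalues):
    p = np.asarray(pvalues, dtype=float)
    k = p.size
    z = (p.mean() - 0.5) / np.sqrt(1.0 / (12.0 * k))   # CLT for mean of U(0,1)
    return norm.cdf(z)                                  # small mean -> small p

# Level 1: combine within each experiment; level 2: combine across experiments.
within = [additive_combine(p) for p in ([0.01, 0.04, 0.20],
                                        [0.03, 0.08, 0.15, 0.02],
                                        [0.30, 0.01, 0.05])]
print("per-experiment p-values:", np.round(within, 4))
print("combined across experiments:", round(additive_combine(within), 4))
```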

  8. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.

    Science.gov (United States)

    Tipton, Elizabeth; Shuster, Jonathan

    2017-10-15

    Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
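
    A simplified illustration of pooling Bland-Altman statistics across studies: combine the per-study bias with inverse-variance weights, pool the within-study variances of the differences, and form pooled limits of agreement. This fixed-weight sketch is not the random-effects, robust-variance estimator developed in the paper, and the study values are invented.

```python
# Hedged sketch: pooled bias, pooled SD of differences, and pooled limits of agreement.
import numpy as np

bias = np.array([0.8, 1.2, 0.5, 1.0])      # per-study mean difference
sd = np.array([2.1, 1.8, 2.5, 2.0])        # per-study SD of differences
n = np.array([40, 55, 30, 60])             # per-study sample sizes

w = n / sd**2                               # inverse-variance weights for the bias
pooled_bias = np.sum(w * bias) / np.sum(w)

# Pool the variance of individual differences (df-weighted average of variances).
pooled_var = np.sum((n - 1) * sd**2) / np.sum(n - 1)
loa = pooled_bias + np.array([-1.96, 1.96]) * np.sqrt(pooled_var)
print(f"pooled bias {pooled_bias:.2f}, limits of agreement ({loa[0]:.2f}, {loa[1]:.2f})")
```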

  9. Meta-Analysis for Primary and Secondary Data Analysis: The Super-Experiment Metaphor.

    Science.gov (United States)

    Jackson, Sally

    1991-01-01

    Considers the relation between meta-analysis statistics and analysis of variance statistics. Discusses advantages and disadvantages as a primary data analysis tool. Argues that the two approaches are partial paraphrases of one another. Advocates an integrative approach that introduces the best of meta-analytic thinking into primary analysis…

  10. The link between employee attitudes and employee effectiveness: Data matrix of meta-analytic estimates based on 1161 unique correlations

    Directory of Open Access Journals (Sweden)

    Michael M. Mackay

    2016-09-01

    This article offers a correlation matrix of meta-analytic estimates between various employee job attitudes (i.e., employee engagement, job satisfaction, job involvement, and organizational commitment) and indicators of employee effectiveness (i.e., focal performance, contextual performance, turnover intention, and absenteeism). The meta-analytic correlations in the matrix are based on over 1100 individual studies representing over 340,000 employees. Data were collected worldwide via employee self-report surveys. Structural path analyses based on the matrix, and the interpretation of the data, can be found in "Investigating the incremental validity of employee engagement in the prediction of employee effectiveness: a meta-analytic path analysis" (Mackay et al., 2016) [1]. Keywords: Meta-analysis, Job attitudes, Job performance, Employee engagement, Employee effectiveness

  11. Identifying novel glioma associated pathways based on systems biology level meta-analysis.

    Science.gov (United States)

    Hu, Yangfan; Li, Jinquan; Yan, Wenying; Chen, Jiajia; Li, Yin; Hu, Guang; Shen, Bairong

    2013-01-01

    With recent advances in high-throughput technologies, including genomics, proteomics, and metabolomics, integrating these "-omics" data to analyze complex disease has become a great challenge. Glioma is an extremely aggressive and lethal form of brain tumor, and thus the study of the molecular mechanisms underlying glioma remains very important. To date, most studies focus on detecting differentially expressed genes in glioma; however, meta-analysis for pathway analysis based on multiple microarray datasets has not been systematically pursued. In this study, we therefore developed a systems biology based approach by integrating three types of omics data to identify common pathways in glioma. Firstly, a meta-analysis was performed to study the overlap of signatures at different levels based on microarray gene expression data of glioma. Among these gene expression datasets, 12 pathways in the GeneGO database were found to be shared by four stages. Then, microRNA expression profiles and ChIP-seq data were integrated for further pathway enrichment analysis. As a result, we suggest that 5 of these pathways could serve as putative glioma-associated pathways. Among them, the pathway of TGF-beta-dependent induction of EMT via SMAD is of particular importance. Our results demonstrate that meta-analysis at the systems biology level provides a more useful approach to studying the molecular mechanisms of complex disease. The integration of different types of omics data, including gene expression microarrays, microRNA and ChIP-seq data, suggests some common pathways correlated with glioma. These findings offer potential candidates for targeted therapeutic intervention in glioma.

  12. Meta-analysis and other approaches for synthesizing structured and unstructured data in plant pathology.

    Science.gov (United States)

    Scherm, H; Thomas, C S; Garrett, K A; Olsen, J M

    2014-01-01

    The term data deluge is used widely to describe the rapidly accelerating growth of information in the technical literature, in scientific databases, and in informal sources such as the Internet and social media. The massive volume and increased complexity of information challenge traditional methods of data analysis but at the same time provide unprecedented opportunities to test hypotheses or uncover new relationships via mining of existing databases and literature. In this review, we discuss analytical approaches that are beginning to be applied to help synthesize the vast amount of information generated by the data deluge and thus accelerate the pace of discovery in plant pathology. We begin with a review of meta-analysis as an established approach for summarizing standardized (structured) data across the literature. We then turn to examples of synthesizing more complex, unstructured data sets through a range of data-mining approaches, including the incorporation of 'omics data in epidemiological analyses. We conclude with a discussion of methodologies for leveraging information contained in novel, open-source data sets through web crawling, text mining, and social media analytics, primarily in the context of digital disease surveillance. Rapidly evolving computational resources provide platforms for integrating large and complex data sets, motivating research that will draw on new types and scales of information to address big questions.

  13. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

    The present investigation concerns the evaluation, comparison and analysis of search options available within web-based meta-search engines. Sixty-four meta-search engines were identified; 19 that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied supported the AND operator, phrase searching, a setting for the number of results displayed, storage of previous search queries, and help tutorials. Nevertheless, none of them offered options for hypertext searching or for displaying the size of the pages retrieved. 94.7% supported features such as truncation, keyword-in-title and URL searching, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries and other internet search tools.

  14. Meta-DiSc: a software for meta-analysis of test accuracy data.

    Science.gov (United States)

    Zamora, Javier; Abraira, Victor; Muriel, Alfonso; Khan, Khalid; Coomarasamy, Arri

    2006-07-12

    Systematic reviews and meta-analyses of test accuracy studies are increasingly being recognised as central in guiding clinical practice. However, there is currently no dedicated and comprehensive software for meta-analysis of diagnostic data. In this article, we present Meta-DiSc, a Windows-based, user-friendly, freely available (for academic use) software that we have developed, piloted, and validated to perform diagnostic meta-analysis. Meta-DiSc a) allows exploration of heterogeneity, with a variety of statistics including chi-square, I-squared and Spearman correlation tests, b) implements meta-regression techniques to explore the relationships between study characteristics and accuracy estimates, c) performs statistical pooling of sensitivities, specificities, likelihood ratios and diagnostic odds ratios using fixed and random effects models, both overall and in subgroups and d) produces high quality figures, including forest plots and summary receiver operating characteristic curves that can be exported for use in manuscripts for publication. All computational algorithms have been validated through comparison with different statistical tools and published meta-analyses. Meta-DiSc has a Graphical User Interface with roll-down menus, dialog boxes, and online help facilities. Meta-DiSc is a comprehensive and dedicated test accuracy meta-analysis software. It has already been used and cited in several meta-analyses published in high-ranking journals. The software is publicly available at http://www.hrc.es/investigacion/metadisc_en.htm.
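
    A sketch of one computation of the kind Meta-DiSc performs: fixed-effect pooling of sensitivity on the logit scale with inverse-variance weights (the software additionally pools specificity, likelihood ratios and diagnostic odds ratios, fits random-effects models, and draws SROC curves). The 2×2 counts are toy data, and this Python sketch is not the software's own code.

```python
# Hedged sketch: fixed-effect pooling of sensitivity on the logit scale.
import numpy as np

# Per-study true positives and false negatives (diseased subjects only).
tp = np.array([45, 30, 60, 22])
fn = np.array([5, 10, 8, 6])

# Continuity-corrected logit sensitivity and its variance per study.
tp_c, fn_c = tp + 0.5, fn + 0.5
logit_sens = np.log(tp_c / fn_c)
var = 1.0 / tp_c + 1.0 / fn_c

w = 1.0 / var
pooled_logit = np.sum(w * logit_sens) / np.sum(w)
pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))
print(f"pooled sensitivity: {pooled_sens:.3f}")
```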

  15. MetaPhinder-Identifying Bacteriophage Sequences in Metagenomic Data Sets

    DEFF Research Database (Denmark)

    Jurtz, Vanessa Isabell; Villarroel, Julia; Lund, Ole

    2016-01-01

    ... and understand them. Here we present MetaPhinder, a method to identify assembled genomic fragments (i.e. contigs) of phage origin in metagenomic data sets. The method is based on a comparison to a database of whole genome bacteriophage sequences, integrating hits to multiple genomes to accommodate the mosaic genome structure of many bacteriophages. The method is demonstrated to outperform both BLAST methods based on single hits and methods based on k-mer comparisons. MetaPhinder is available as a web service at the Center for Genomic Epidemiology https://cge.cbs.dtu.dk/services/MetaPhinder/, while the source code can be downloaded from https://bitbucket.org/genomicepidemiology/metaphinder or https://github.com/vanessajurtz/MetaPhinder.

  16. Meta-Analysis for Sociology – A Measure-Driven Approach

    Science.gov (United States)

    Roelfs, David J.; Shor, Eran; Falzon, Louise; Davidson, Karina W.; Schwartz, Joseph E.

    2013-01-01

    Meta-analytic methods are becoming increasingly important in sociological research. In this article we present an approach for meta-analysis which is especially helpful for sociologists. Conventional approaches to meta-analysis often prioritize “concept-driven” literature searches. However, in disciplines with high theoretical diversity, such as sociology, this search approach might constrain the researcher’s ability to fully exploit the entire body of relevant work. We explicate a “measure-driven” approach, in which iterative searches and new computerized search techniques are used to increase the range of publications found (and thus the range of possible analyses) and to traverse time and disciplinary boundaries. We demonstrate this measure-driven search approach with two meta-analytic projects, examining the effects of various social variables on all-cause mortality. PMID:24163498

  17. Meta-shell Approach for Constructing Lightweight and High Resolution X-Ray Optics

    Science.gov (United States)

    McClelland, Ryan S.

    2016-01-01

    Lightweight and high resolution optics are needed for future space-based x-ray telescopes to achieve advances in high-energy astrophysics. Past missions such as Chandra and XMM-Newton have achieved excellent angular resolution using a full shell mirror approach. Other missions such as Suzaku and NuSTAR have achieved lightweight mirrors using a segmented approach. This paper describes a new approach, called meta-shells, which combines the fabrication advantages of segmented optics with the alignment advantages of full shell optics. Meta-shells are built by layering overlapping mirror segments onto a central structural shell. The resulting optic has the stiffness and rotational symmetry of a full shell, but with an order of magnitude greater collecting area. Several meta-shells so constructed can be integrated into a large x-ray mirror assembly by proven methods used for Chandra and XMM-Newton. The mirror segments are mounted to the meta-shell using a novel four point semi-kinematic mount. The four point mount deterministically locates the segment in its most performance sensitive degrees of freedom. Extensive analysis has been performed to demonstrate the feasibility of the four point mount and meta-shell approach. A mathematical model of a meta-shell constructed with mirror segments bonded at four points and subject to launch loads has been developed to determine the optimal design parameters, namely bond size, mirror segment span, and number of layers per meta-shell. The parameters of an example 1.3 m diameter mirror assembly are given including the predicted effective area. To verify the mathematical model and support opto-mechanical analysis, a detailed finite element model of a meta-shell was created. Finite element analysis predicts low gravity distortion and low thermal distortion. Recent results are discussed including Structural Thermal Optical Performance (STOP) analysis as well as vibration and shock testing of prototype meta-shells.

  18. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    Science.gov (United States)

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

    A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
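
    The core computation metaCCA builds on can be illustrated from covariance blocks alone. The sketch below is not the metaCCA package; it shows plain canonical correlation analysis computed from genotype, phenotype and cross covariance matrices, with the shrinkage step reduced to a simple ridge term and all matrices invented for illustration.

```python
# Illustrative sketch of canonical correlation analysis computed from covariance
# blocks only (no individual-level data), loosely mirroring the idea behind metaCCA.
# This is not the metaCCA implementation; the shrinkage here is a simple ridge term.
import numpy as np

def canonical_correlations(sxx, syy, sxy, shrink=0.01):
    """Canonical correlations from covariance blocks.

    sxx: genotype covariance (p x p), syy: phenotype covariance (q x q),
    sxy: cross-covariance (p x q), shrink: ridge-style regularisation weight.
    """
    p, q = sxy.shape
    sxx = (1 - shrink) * sxx + shrink * np.eye(p)   # simple shrinkage toward identity
    syy = (1 - shrink) * syy + shrink * np.eye(q)

    def inv_sqrt(m):
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    k = inv_sqrt(sxx) @ sxy @ inv_sqrt(syy)
    return np.linalg.svd(k, compute_uv=False)       # canonical correlations

# Toy example: 2 variants, 3 traits; covariances made up for illustration
sxx = np.array([[1.0, 0.2], [0.2, 1.0]])
syy = np.array([[1.0, 0.5, 0.1], [0.5, 1.0, 0.3], [0.1, 0.3, 1.0]])
sxy = np.array([[0.30, 0.25, 0.05], [0.10, 0.20, 0.15]])
print(canonical_correlations(sxx, syy, sxy))
```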

  19. Model and Interoperability using Meta Data Annotations

    Science.gov (United States)

    David, O.

    2011-12-01

    Software frameworks and architectures are in need of meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings, however the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach to meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to its originating code. Since models and
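
    OMS 3 itself attaches this metadata with Java annotations; as a rough illustration of the idea in another language, the sketch below mimics annotation-driven component metadata with Python decorators. The decorator names, fields and the example component are invented for illustration.

```python
# Illustrative sketch only: emulating annotation-based component metadata (as
# described for OMS 3, which uses Java annotations) with Python decorators.
# The decorator and attribute names here are invented for illustration.
def component(description):
    def wrap(cls):
        cls._meta = {"description": description, "inputs": {}, "outputs": {}}
        return cls
    return wrap

def declare(kind, name, units):
    def wrap(cls):
        cls._meta[kind][name] = {"units": units}
        return cls
    return wrap

@declare("outputs", "runoff", "mm/day")
@declare("inputs", "precipitation", "mm/day")
@declare("inputs", "soil_moisture", "fraction")
@component("Simple bucket runoff process")
class BucketRunoff:
    def execute(self, precipitation, soil_moisture):
        # Placeholder process logic
        return max(0.0, precipitation * soil_moisture)

# A framework can discover dependencies and document the component by inspecting
# the attached metadata, without invasive API calls in the component itself.
print(BucketRunoff._meta)
```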

  20. MetaSeq: privacy preserving meta-analysis of sequencing-based association studies.

    Science.gov (United States)

    Singh, Angad Pal; Zafer, Samreen; Pe'er, Itsik

    2013-01-01

    Human genetics recently transitioned from GWAS to studies based on NGS data. For GWAS, small effects dictated large sample sizes, typically made possible through meta-analysis by exchanging summary statistics across consortia. NGS studies test groupwise for association of multiple potentially causal alleles along each gene. They are subject to similar power constraints and are therefore likely to resort to meta-analysis as well. The problem arises when considering privacy of the genetic information during the data-exchange process. Many scoring schemes for NGS association rely on the frequency of each variant, thus requiring the exchange of the identity of the sequenced variants. Such variants are often rare, potentially revealing the identity of their carriers and jeopardizing privacy. We have thus developed MetaSeq, a protocol for meta-analysis of genome-wide sequencing data by multiple collaborating parties, scoring association for rare variants pooled per gene across all parties. We tackle the challenge of tallying frequency counts of rare, sequenced alleles for meta-analysis of sequencing data without disclosing the allele identity and counts, thereby protecting sample identity. This apparently paradoxical exchange of information is achieved through cryptographic means. The key idea is that parties encrypt the identity of genes and variants. When they transfer information about frequency counts in cases and controls, the exchanged data do not convey the identity of a mutation and therefore do not expose carrier identity. The exchange relies on a third party, trusted to follow the protocol although not trusted to learn about the raw data. We show the applicability of this method to publicly available exome-sequencing data from multiple studies, simulating phenotypic information for powerful meta-analysis. The MetaSeq software is publicly available as open source.
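
    The sketch below illustrates only the general idea of masking variant identity before counts are exchanged, using a keyed hash; it is a deliberate simplification, not the MetaSeq protocol, whose actual cryptographic scheme and trusted third-party role are described in the paper. All identifiers and the key handling shown are illustrative assumptions.

```python
# Simplified illustration of the general idea (not the MetaSeq protocol itself):
# a party masks gene/variant identifiers with a keyed hash before sharing
# frequency counts, so the shared table does not reveal which rare variant it is.
# Key management and the aggregation roles are deliberately simplified here.
import hmac
import hashlib

SHARED_KEY = b"agreed-out-of-band"   # in practice, managed cryptographically

def mask(identifier: str) -> str:
    return hmac.new(SHARED_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Local (private) counts of rare alleles per gene:variant in cases and controls
local_counts = {
    "BRCA2:c.5946delT": (3, 0),
    "TP53:c.743G>A":    (1, 1),
}

# What actually gets transferred: masked identifiers with counts only
shared = {mask(k): v for k, v in local_counts.items()}

# Another party holding the same key can aggregate matching masked entries
# without the table itself exposing the underlying variant names.
for masked_id, (cases, controls) in shared.items():
    print(masked_id[:16], cases, controls)
```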

  1. [The meta-analysis of data from individual patients].

    NARCIS (Netherlands)

    Rovers, M.M.; Reitsma, J.B.

    2012-01-01

    - An IPD (Individual Participant Data) meta-analysis requires collecting original individual patient data and calculating an estimated effect based on these data. - The use of individual patient data has various advantages: the original data and the results of published analyses are verified,

  2. MetaRanker 2.0: a web server for prioritization of genetic variation data.

    Science.gov (United States)

    Pers, Tune H; Dworzyński, Piotr; Thomas, Cecilia Engel; Lage, Kasper; Brunak, Søren

    2013-07-01

    MetaRanker 2.0 is a web server for prioritization of common and rare frequency genetic variation data. Based on heterogeneous data sets including genetic association data, protein-protein interactions, large-scale text-mining data, copy number variation data and gene expression experiments, MetaRanker 2.0 prioritizes the protein-coding part of the human genome to shortlist candidate genes for targeted follow-up studies. MetaRanker 2.0 is made freely available at www.cbs.dtu.dk/services/MetaRanker-2.0.

  3. Meta-analysis of Cancer Gene Profiling Data.

    Science.gov (United States)

    Roy, Janine; Winter, Christof; Schroeder, Michael

    2016-01-01

    The simultaneous measurement of thousands of genes gives the opportunity to personalize and improve cancer therapy. In addition, the integration of meta-data such as protein-protein interaction (PPI) information into the analyses helps in the identification and prioritization of genes from these screens. Here, we describe a computational approach that identifies genes prognostic for outcome by combining gene profiling data from any source with a network of known relationships between genes.

  4. Meta-Analyses of Human Cell-Based Cardiac Regeneration Therapies

    DEFF Research Database (Denmark)

    Gyöngyösi, Mariann; Wojakowski, Wojciech; Navarese, Eliano P

    2016-01-01

    In contrast to multiple publication-based meta-analyses involving clinical cardiac regeneration therapy in patients with recent myocardial infarction, a recently published meta-analysis based on individual patient data reported no effect of cell therapy on left ventricular function or clinical...

  5. A knowledge representation meta-model for rule-based modelling of signalling networks

    Directory of Open Access Journals (Sweden)

    Adrien Basso-Blandin

    2016-03-01

    Full Text Available The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise and their literatures and data are consequently fragmented, distributed and sometimes—at least apparently—inconsistent. This makes it extremely difficult to build significant explanatory models with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown some promise for the representation of the highly combinatorial systems typically found in signalling where many of the proteins are composed of multiple binding domains, capable of simultaneous interactions, and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction which is rarely available from any one single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers—each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model from which we can define an automated translation into executable Kappa programs.

  6. A meta-analysis of the price elasticity of gasoline demand. A SUR approach

    Energy Technology Data Exchange (ETDEWEB)

    Brons, Martijn; Rietveld, Piet [Department of Spatial Economics, Vrije Universiteit, De Boelelaan 1105, 1081 HV Amsterdam (Netherlands); Tinbergen Institute Amsterdam (TIA), Roetersstraat 31, 1018 WB Amsterdam (Netherlands); Nijkamp, Peter [Department of Spatial Economics, Vrije Universiteit, De Boelelaan 1105, 1081 HV Amsterdam (Netherlands); Tinbergen Institute Amsterdam (TIA), Roetersstraat 31, 1018 WB Amsterdam (Netherlands); The Netherlands Organisation of Scientific Research (NWO), postbus 93138 - 2509 AC Den Haag (Netherlands); Pels, Eric [Department of Spatial Economics, Vrije Universiteit, De Boelelaan 1105, 1081 HV Amsterdam (Netherlands)

    2008-09-15

    Automobile gasoline demand can be expressed as a multiplicative function of fuel efficiency, mileage per car and car ownership. This implies a linear relationship between the price elasticity of total fuel demand and the price elasticities of fuel efficiency, mileage per car and car ownership. In this meta-analytical study we aim to investigate and explain the variation in empirical estimates of the price elasticity of gasoline demand. A methodological novelty is that we use the linear relationship between the elasticities to develop a meta-analytical estimation approach based on a Seemingly Unrelated Regression (SUR) model with Cross Equation Restrictions. This approach enables us to combine observations of different elasticities and thus increase our sample size. Furthermore, it allows for a more detailed interpretation of our meta-regression results. The empirical results of the study demonstrate that the SUR approach leads to more precise results (i.e., lower standard errors) than a standard meta-analytical approach. We find that, with mean short run and long run price elasticities of - 0.34 and - 0.84, respectively, the demand for gasoline is not very price sensitive. Both in the short and the long run, the impact of a change in the gasoline price on demand is mainly driven by responses in fuel efficiency and mileage per car and to a slightly lesser degree by changes in car ownership. Furthermore, we find that study characteristics relating to the geographic area studied, the year of the study, the type of data used, the time horizon and the functional specification of the demand equation have a significant impact on the estimated value of the price elasticity of gasoline demand. (author)
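
    The linear relationship exploited by the cross-equation restrictions follows directly from the multiplicative demand identity; a sketch of the decomposition (notation chosen here for illustration) is:

```latex
% Writing gasoline demand G as the product of fuel intensity F (fuel per km,
% the inverse of fuel efficiency), mileage per car M, and car ownership C:
\[
  G = F \cdot M \cdot C
  \quad\Longrightarrow\quad
  \ln G = \ln F + \ln M + \ln C ,
\]
% so differentiating with respect to the log fuel price p gives
\[
  \eta_{G} \;=\; \frac{\partial \ln G}{\partial \ln p}
          \;=\; \eta_{F} + \eta_{M} + \eta_{C},
\]
% the linear restriction linking the elasticities across the SUR equations.
```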

  7. A meta-analysis of the price elasticity of gasoline demand. A SUR approach

    International Nuclear Information System (INIS)

    Brons, Martijn; Rietveld, Piet; Nijkamp, Peter; Pels, Eric

    2008-01-01

    Automobile gasoline demand can be expressed as a multiplicative function of fuel efficiency, mileage per car and car ownership. This implies a linear relationship between the price elasticity of total fuel demand and the price elasticities of fuel efficiency, mileage per car and car ownership. In this meta-analytical study we aim to investigate and explain the variation in empirical estimates of the price elasticity of gasoline demand. A methodological novelty is that we use the linear relationship between the elasticities to develop a meta-analytical estimation approach based on a Seemingly Unrelated Regression (SUR) model with Cross Equation Restrictions. This approach enables us to combine observations of different elasticities and thus increase our sample size. Furthermore, it allows for a more detailed interpretation of our meta-regression results. The empirical results of the study demonstrate that the SUR approach leads to more precise results (i.e., lower standard errors) than a standard meta-analytical approach. We find that, with mean short run and long run price elasticities of - 0.34 and - 0.84, respectively, the demand for gasoline is not very price sensitive. Both in the short and the long run, the impact of a change in the gasoline price on demand is mainly driven by responses in fuel efficiency and mileage per car and to a slightly lesser degree by changes in car ownership. Furthermore, we find that study characteristics relating to the geographic area studied, the year of the study, the type of data used, the time horizon and the functional specification of the demand equation have a significant impact on the estimated value of the price elasticity of gasoline demand. (author)

  8. Bayesian meta-analysis models for microarray data: a comparative study

    Directory of Open Access Journals (Sweden)

    Song Joon J

    2007-03-01

    Full Text Available Abstract Background With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods. Results Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve accuracy of results versus the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies; the percent of true genes omitted in meta-analysis versus separate studies, and the number of true discovered genes for fixed levels of Bayesian false discovery. We identified similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions: a treatment and control, and that the data sets

  9. Power to the People! Meta-algorithmic modelling in applied data science

    NARCIS (Netherlands)

    Spruit, M.; Jagesar, R.

    2016-01-01

    This position paper first defines the research field of applied data science at the intersection of domain expertise, data mining, and engineering capabilities, with particular attention to analytical applications. We then propose a meta-algorithmic approach for applied data science with societal

  10. Meta-Analyst: software for meta-analysis of binary, continuous and diagnostic data

    Directory of Open Access Journals (Sweden)

    Schmid Christopher H

    2009-12-01

    Full Text Available Abstract Background Meta-analysis is increasingly used as a key source of evidence synthesis to inform clinical practice. The theory and statistical foundations of meta-analysis continually evolve, providing solutions to many new and challenging problems. In practice, most meta-analyses are performed in general statistical packages or dedicated meta-analysis programs. Results Herein, we introduce Meta-Analyst, a novel, powerful, intuitive, and free meta-analysis program for the meta-analysis of a variety of problems. Meta-Analyst is implemented in C# atop the Microsoft .NET framework, and features a graphical user interface. The software performs several meta-analysis and meta-regression models for binary and continuous outcomes, as well as analyses for diagnostic and prognostic test studies in the frequentist and Bayesian frameworks. Moreover, Meta-Analyst includes a flexible tool to edit and customize generated meta-analysis graphs (e.g., forest plots) and provides output in many formats (images, Adobe PDF, Microsoft Word-ready RTF). The software architecture employed allows for rapid changes to be made to either the Graphical User Interface (GUI) or to the analytic modules. We verified the numerical precision of Meta-Analyst by comparing its output with that from standard meta-analysis routines in Stata over a large database of 11,803 meta-analyses of binary outcome data, and 6,881 meta-analyses of continuous outcome data from the Cochrane Library of Systematic Reviews. Results from analyses of diagnostic and prognostic test studies have been verified in a limited number of meta-analyses versus MetaDisc and MetaTest. Bayesian statistical analyses use the OpenBUGS calculation engine (and are thus as accurate as the standalone OpenBUGS software). Conclusion We have developed and validated a new program for conducting meta-analyses that combines the advantages of existing software for this task.

  11. A Bayesian approach to meta-analysis of plant pathology studies.

    Science.gov (United States)

    Mila, A L; Ngugi, H K

    2011-01-01

    Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard) which was evaluated only in seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework
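
    As a concrete illustration of the simple normal random-effects model with a noninformative prior described above, the sketch below runs a small Gibbs sampler on made-up log response ratios with known within-study variances. It is not the authors' code, and a weak inverse-gamma prior stands in here for the noninformative prior on the between-study variance.

```python
# Minimal sketch of a Bayesian random-effects meta-analysis of log response ratios,
# assuming a normal model, known within-study variances, and vague priors.
# Not the authors' implementation; the effect sizes below are made-up examples.
import numpy as np

rng = np.random.default_rng(1)
y = np.array([-0.42, -0.18, -0.55, -0.10, -0.33, -0.25, -0.47])  # log response ratios
v = np.array([0.04, 0.06, 0.05, 0.08, 0.03, 0.07, 0.05])         # within-study variances

n_iter, k = 20000, len(y)
mu, tau2 = 0.0, 0.1
mu_draws = []

for it in range(n_iter):
    # Study-level effects theta_i | y, mu, tau2
    prec = 1.0 / v + 1.0 / tau2
    mean = (y / v + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # Overall mean mu | theta, tau2 (flat prior)
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))
    # Between-study variance tau2 | theta, mu with a weak inverse-gamma(0.001, 0.001) prior
    a = 0.001 + k / 2.0
    b = 0.001 + 0.5 * np.sum((theta - mu) ** 2)
    tau2 = 1.0 / rng.gamma(a, 1.0 / b)
    mu_draws.append(mu)

burn = n_iter // 2
lo, hi = np.percentile(mu_draws[burn:], [2.5, 97.5])
print(f"posterior mean effect: {np.mean(mu_draws[burn:]):.3f}  95% CrI: ({lo:.3f}, {hi:.3f})")
```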

  12. Limitations in Using Multiple Imputation to Harmonize Individual Participant Data for Meta-Analysis.

    Science.gov (United States)

    Siddique, Juned; de Chavez, Peter J; Howe, George; Cruden, Gracelyn; Brown, C Hendricks

    2018-02-01

    Individual participant data (IPD) meta-analysis is a meta-analysis in which the individual-level data for each study are obtained and used for synthesis. A common challenge in IPD meta-analysis is when variables of interest are measured differently in different studies. The term harmonization has been coined to describe the procedure of placing variables on the same scale in order to permit pooling of data from a large number of studies. Using data from an IPD meta-analysis of 19 adolescent depression trials, we describe a multiple imputation approach for harmonizing 10 depression measures across the 19 trials by treating those depression measures that were not used in a study as missing data. We then apply diagnostics to address the fit of our imputation model. Even after reducing the scale of our application, we were still unable to produce accurate imputations of the missing values. We describe those features of the data that made it difficult to harmonize the depression measures and provide some guidelines for using multiple imputation for harmonization in IPD meta-analysis.
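
    A minimal sketch of the harmonization-by-imputation step is shown below, assuming scikit-learn's IterativeImputer as the imputation engine and toy data in which one trial serves as a bridge by administering both scales. Scale names and the data-generating model are invented; the study's actual imputation model also had to handle trial-level structure and many more measures, which is where the difficulties described above arose.

```python
# Minimal sketch of harmonization-by-imputation: a scale not administered in a
# trial is treated as missing for all of that trial's participants and imputed
# from the scales that were administered. Toy data; a real application would also
# need to respect trial structure, covariates, and imputation diagnostics.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 300
severity = rng.normal(size=n)                        # latent severity (simulation only)
df = pd.DataFrame({
    "trial": np.repeat(["A", "B", "C"], n // 3),
    "cdrs":  20 + 8 * severity + rng.normal(0, 2, n),   # depression scale 1
    "hamd":  15 + 6 * severity + rng.normal(0, 2, n),   # depression scale 2
})
df.loc[df.trial == "A", "hamd"] = np.nan             # trial A administered CDRS only
df.loc[df.trial == "B", "cdrs"] = np.nan             # trial B administered HAM-D only
# trial C administered both scales and so links the two metrics

imputer = IterativeImputer(max_iter=20, sample_posterior=True, random_state=0)
df[["cdrs", "hamd"]] = imputer.fit_transform(df[["cdrs", "hamd"]])
print(df.groupby("trial")[["cdrs", "hamd"]].mean().round(1))
```

    In a multiple-imputation workflow this step would be repeated with different random seeds to produce several completed data sets, which are analyzed separately and then combined.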

  13. The endoscopic endonasal approach is not superior to the microscopic transcranial approach for anterior skull base meningiomas-a meta-analysis.

    Science.gov (United States)

    Muskens, Ivo S; Briceno, Vanessa; Ouwehand, Tom L; Castlen, Joseph P; Gormley, William B; Aglio, Linda S; Zamanipoor Najafabadi, Amir H; van Furth, Wouter R; Smith, Timothy R; Mekary, Rania A; Broekman, Marike L D

    2018-01-01

    In the past decade, the endonasal transsphenoidal approach (eTSA) has become an alternative to the microsurgical transcranial approach (mTCA) for tuberculum sellae meningiomas (TSMs) and olfactory groove meningiomas (OGMs). The aim of this meta-analysis was to evaluate which approach offered the best surgical outcomes. A systematic review of the literature from 2004 and meta-analysis were conducted in accordance with the PRISMA guidelines. Pooled incidence was calculated for gross total resection (GTR), visual improvement, cerebrospinal fluid (CSF) leak, intraoperative arterial injury, and mortality, comparing eTSA and mTCA, with p-interaction values. Of 1684 studies, 64 case series were included in the meta-analysis. Using the fixed-effects model, the GTR rate was significantly higher among mTCA patients for OGM (eTSA: 70.9% vs. mTCA: 88.5%, p-interaction OGM (p-interaction = 0.33). CSF leak was significantly higher among eTSA patients for both OGM (eTSA: 25.1% vs. mTCA: 10.5%, p-interaction OGM resection (p-interaction = 0.10). Mortality was not significantly different between eTSA and mTCA patients for both TSM (p-interaction = 0.14) and OGM resection (p-interaction = 0.88). Random-effect models yielded similar results. In this meta-analysis, eTSA was not shown to be superior to mTCA for resection of both OGMs and TSMs.

  14. MetaRanker 2.0: a web server for prioritization of genetic variation data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Dworzynski, Piotr; Thomas, Cecilia Engel

    2013-01-01

    MetaRanker 2.0 is a web server for prioritization of common and rare frequency genetic variation data. Based on heterogeneous data sets including genetic association data, protein–protein interactions, large-scale text-mining data, copy number variation data and gene expression experiments, Meta...

  15. MetaComp: comprehensive analysis software for comparative meta-omics including comparative metagenomics.

    Science.gov (United States)

    Zhai, Peng; Yang, Longshu; Guo, Xiao; Wang, Zhe; Guo, Jiangtao; Wang, Xiaoqi; Zhu, Huaiqiu

    2017-10-02

    During the past decade, the development of high-throughput nucleic acid sequencing and mass spectrometry analysis techniques has enabled the characterization of microbial communities through metagenomics, metatranscriptomics, metaproteomics and metabolomics data. To reveal the diversity of microbial communities and the interactions between living conditions and microbes, it is necessary to introduce comparative analysis based upon integration of all four types of data mentioned above. Comparative meta-omics, especially comparative metagenomics, has been established as a routine process to highlight the significant differences in taxon composition and functional gene abundance among microbiota samples. Meanwhile, biologists are increasingly concerned about the correlations between meta-omics features and environmental factors, which may further decipher the adaptation strategy of a microbial community. We developed a comprehensive graphical analysis software package named MetaComp, comprising a series of statistical analysis approaches with visualized results for the comparison of metagenomic and other meta-omic data. The software can read files generated by a variety of upstream programs. After data loading, it offers analyses such as multivariate statistics; two-sample, multi-sample and two-group hypothesis testing; and a novel regression analysis of environmental factors. Here, the regression analysis regards meta-omic features as independent variables and environmental factors as dependent variables. Moreover, MetaComp can automatically choose an appropriate two-group sample test based upon the traits of the input abundance profiles. We further evaluate the performance of this automatic choice and demonstrate applications to metagenomic, metaproteomic and metabolomic samples. MetaComp, an integrative software package applicable to all meta-omics data, distills the influence of the living environment on the microbial community by regression analysis.
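
    The abstract describes the automatic choice of a two-group test only at a high level. The sketch below shows one plausible decision rule (a normality check followed by Welch's t-test or the Mann-Whitney U test, via SciPy); this rule is an assumption made for illustration and is not necessarily MetaComp's actual criterion.

```python
# One plausible rule for automatically choosing a two-group test from the traits
# of an abundance profile (an assumption for illustration, not MetaComp's actual
# decision logic): use Welch's t-test when both groups look roughly normal,
# otherwise fall back to the non-parametric Mann-Whitney U test.
import numpy as np
from scipy import stats

def auto_two_group_test(x, y, alpha=0.05):
    x, y = np.asarray(x, float), np.asarray(y, float)
    normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if normal:
        res = stats.ttest_ind(x, y, equal_var=False)
        return "welch_t", res.pvalue
    res = stats.mannwhitneyu(x, y, alternative="two-sided")
    return "mann_whitney_u", res.pvalue

# Toy relative abundances of one taxon in two sample groups
group1 = np.array([0.12, 0.10, 0.15, 0.11, 0.13, 0.09, 0.14])
group2 = np.array([0.22, 0.30, 0.19, 0.05, 0.41, 0.26, 0.33])
print(auto_two_group_test(group1, group2))
```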

  16. Understanding the experience of initiating community-based physical activity and social support by people with serious mental illness: a systematic review using a meta-ethnographic approach

    Directory of Open Access Journals (Sweden)

    Helen Quirk

    2017-10-01

    be performed by two reviewers. Data will be extracted by one reviewer, tabled, and checked for accuracy by the second reviewer. The meta-ethnography approach by Noblit and Hare (Meta-ethnography: synthesizing qualitative studies 11, 1988 will be used to synthesise the data. Discussion This systematic review is expected to provide new insights into the experience of community-based group physical activity initiation for adults who have a serious mental illness to inform person-centred improvements to the management of serious mental illness through physical activity. Systematic review registration The protocol has been registered on the International Prospective Register of Systematic Reviews (PROSPERO on 22/03/2017; (registration number CRD42017059948 .

  17. Understanding the experience of initiating community-based physical activity and social support by people with serious mental illness: a systematic review using a meta-ethnographic approach.

    Science.gov (United States)

    Quirk, Helen; Crank, Helen; Harrop, Deborah; Hock, Emma; Copeland, Robert

    2017-10-25

    People with long-term serious mental illness live with severe and debilitating symptoms that can negatively influence their health and quality of life, leading to outcomes such as premature mortality, morbidity and obesity. An interplay of social, behavioural, biological and psychological factors is likely to contribute to their poor physical health. Participating in regular physical activity could bring symptomatic improvements, weight loss benefits, enhanced wellbeing and when undertaken in a community-based group setting can yield additional, important social support benefits. Yet poor uptake of physical activity by people with serious mental illness is a problem. This review will systematically search, appraise and synthesise the existing evidence that has explored the experience of community-based physical activity initiation and key features of social support within these contexts by adults with schizophrenia, bipolar affective disorder, major depressive disorder or psychosis using the meta-ethnography approach. This new understanding may be key in designing more acceptable and effective community-based group PA programmes that meet patients' need and expectations. This will be a systematic review of qualitative studies using the meta-ethnography approach. The following databases will be searched: ASSIA, CINAHL, Cochrane Central Register of Controlled Trials, EMBASE, Health Technology Assessment Database, MEDLINE, PsycINFO, Sociological Abstracts, SPORTDiscus and Web of Science. Grey literature will also be sought. Eligible studies will use qualitative methodology; involve adults (≥18 years) with schizophrenia, bipolar affective disorder, major depressive disorder or psychosis; will report community-based group physical activity; and capture the experience of physical activity initiation and key features of social support from the perspective of the participant. Study selection and assessment of quality will be performed by two reviewers. Data will be

  18. MetaMetaDB: a database and analytic system for investigating microbial habitability.

    Directory of Open Access Journals (Sweden)

    Ching-chia Yang

    Full Text Available MetaMetaDB (http://mmdb.aori.u-tokyo.ac.jp/) is a database and analytic system for investigating microbial habitability, i.e., how a prokaryotic group can inhabit different environments. The interaction between prokaryotes and the environment is a key issue in microbiology because distinct prokaryotic communities maintain distinct ecosystems. Because 16S ribosomal RNA (rRNA) sequences play pivotal roles in identifying prokaryotic species, a system that comprehensively links diverse environments to 16S rRNA sequences of the inhabitant prokaryotes is necessary for the systematic understanding of the microbial habitability. However, existing databases are biased to culturable prokaryotes and exhibit limitations in the comprehensiveness of the data because most prokaryotes are unculturable. Recently, metagenomic and 16S rRNA amplicon sequencing approaches have generated abundant 16S rRNA sequence data that encompass unculturable prokaryotes across diverse environments; however, these data are usually buried in large databases and are difficult to access. In this study, we developed MetaMetaDB (Meta-Metagenomic DataBase), which comprehensively and compactly covers 16S rRNA sequences retrieved from public datasets. Using MetaMetaDB, users can quickly generate hypotheses regarding the types of environments a prokaryotic group may be adapted to. We anticipate that MetaMetaDB will improve our understanding of the diversity and evolution of prokaryotes.

  19. MetaMetaDB: a database and analytic system for investigating microbial habitability.

    Science.gov (United States)

    Yang, Ching-chia; Iwasaki, Wataru

    2014-01-01

    MetaMetaDB (http://mmdb.aori.u-tokyo.ac.jp/) is a database and analytic system for investigating microbial habitability, i.e., how a prokaryotic group can inhabit different environments. The interaction between prokaryotes and the environment is a key issue in microbiology because distinct prokaryotic communities maintain distinct ecosystems. Because 16S ribosomal RNA (rRNA) sequences play pivotal roles in identifying prokaryotic species, a system that comprehensively links diverse environments to 16S rRNA sequences of the inhabitant prokaryotes is necessary for the systematic understanding of the microbial habitability. However, existing databases are biased to culturable prokaryotes and exhibit limitations in the comprehensiveness of the data because most prokaryotes are unculturable. Recently, metagenomic and 16S rRNA amplicon sequencing approaches have generated abundant 16S rRNA sequence data that encompass unculturable prokaryotes across diverse environments; however, these data are usually buried in large databases and are difficult to access. In this study, we developed MetaMetaDB (Meta-Metagenomic DataBase), which comprehensively and compactly covers 16S rRNA sequences retrieved from public datasets. Using MetaMetaDB, users can quickly generate hypotheses regarding the types of environments a prokaryotic group may be adapted to. We anticipate that MetaMetaDB will improve our understanding of the diversity and evolution of prokaryotes.

  20. Individual patient data meta-analyses in head and neck carcinoma: what have we learnt?

    International Nuclear Information System (INIS)

    Pignon, J.P.; Baujat, B.; Bourhis, J.

    2005-01-01

    Carcinomas of the upper aero-digestive tract (oral cavity, oropharynx, hypopharynx, nasopharynx, larynx) are frequent tumors for which surgery and/or radiotherapy are the main therapeutic agents. The main results of meta-analyses based on the collection of individual patient data are reported: 1) The meta-analysis on chemotherapy, combining data from nearly 11,000 patients in 63 randomized trials, showed an absolute benefit of 4% at five years in overall survival in favor of chemotherapy (P < 0.0001). Most of the benefit was seen with concomitant radio-chemotherapy, however with a relatively large heterogeneity in this subgroup of trials. An update of this meta-analysis was performed including 24 additional trials, which confirmed the magnitude of the benefit due to concomitant chemotherapy (8% at 5 years). 2) The meta-analysis on larynx preservation evaluated induction chemotherapy in larynx and hypopharynx carcinomas: no significant difference was seen between the control arm with total laryngectomy and the larynx preservation approach. 3) The meta-analysis on chemotherapy in nasopharynx carcinomas used the data of 11 randomized trials (1979-2001) including 2722 patients comparing radiotherapy to radio-chemotherapy. The results showed an absolute benefit of 6% at five years in overall survival in favor of chemotherapy (P < 0.0001). Most of the benefit was seen with concomitant radio-chemotherapy. 4) Finally, a meta-analysis compared altered fractionated RT to conventional RT in 15 randomized trials comprising 6515 patients. The results showed a small but significant improvement in favor of altered fractionated RT for overall survival and local control, with an absolute benefit at five years of 3 and 6%, respectively. (author)

  1. A Database-Based and Web-Based Meta-CASE System

    Science.gov (United States)

    Eessaar, Erki; Sgirka, Rünno

    Each Computer Aided Software Engineering (CASE) system provides support to a software process or to specific tasks or activities that are part of a software process. Each meta-CASE system allows us to create new CASE systems. The creators of a new CASE system have to specify the abstract syntax of the language that is used in the system, as well as the functionality and non-functional properties of the new system. Many meta-CASE systems record their data directly in files. In this paper, we introduce a meta-CASE system, the enabling technology of which is an object-relational database system (ORDBMS). The system allows users to manage specifications of languages and create models by using these languages. The system has a web-based, form-based user interface. We have created a proof-of-concept prototype of the system by using the PostgreSQL ORDBMS and the PHP scripting language.

  2. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    Science.gov (United States)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution to facilitate data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.

  3. Preferred Reporting Items for Systematic Review and Meta-Analyses of individual participant data: the PRISMA-IPD Statement.

    Science.gov (United States)

    Stewart, Lesley A; Clarke, Mike; Rovers, Maroeska; Riley, Richard D; Simmonds, Mark; Stewart, Gavin; Tierney, Jayne F

    2015-04-28

    Systematic reviews and meta-analyses of individual participant data (IPD) aim to collect, check, and reanalyze individual-level data from all studies addressing a particular research question and are therefore considered a gold standard approach to evidence synthesis. They are likely to be used with increasing frequency as current initiatives to share clinical trial data gain momentum and may be particularly important in reviewing controversial therapeutic areas. To develop PRISMA-IPD as a stand-alone extension to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement, tailored to the specific requirements of reporting systematic reviews and meta-analyses of IPD. Although developed primarily for reviews of randomized trials, many items will apply in other contexts, including reviews of diagnosis and prognosis. Development of PRISMA-IPD followed the EQUATOR Network framework guidance and used the existing standard PRISMA Statement as a starting point to draft additional relevant material. A web-based survey informed discussion at an international workshop that included researchers, clinicians, methodologists experienced in conducting systematic reviews and meta-analyses of IPD, and journal editors. The statement was drafted and iterative refinements were made by the project, advisory, and development groups. The PRISMA-IPD Development Group reached agreement on the PRISMA-IPD checklist and flow diagram by consensus. Compared with standard PRISMA, the PRISMA-IPD checklist includes 3 new items that address (1) methods of checking the integrity of the IPD (such as pattern of randomization, data consistency, baseline imbalance, and missing data), (2) reporting any important issues that emerge, and (3) exploring variation (such as whether certain types of individual benefit more from the intervention than others). A further additional item was created by reorganization of standard PRISMA items relating to interpreting results. Wording

  4. Meta-IDBA: a de Novo assembler for metagenomic data.

    Science.gov (United States)

    Peng, Yu; Leung, Henry C M; Yiu, S M; Chin, Francis Y L

    2011-07-01

    Next-generation sequencing techniques allow us to generate reads from a microbial environment in order to analyze the microbial community. However, assembling a set of mixed reads from different species into contigs is a bottleneck of metagenomic research. Although there are many assemblers for reads from a single genome, there are no assemblers for reads in metagenomic data without reference genome sequences. Moreover, the performance of single-genome assemblers on metagenomic data is far from satisfactory, because of the existence of common regions in the genomes of subspecies and species, which makes the assembly problem much more complicated. We introduce the Meta-IDBA algorithm for assembling reads in metagenomic data, which contain multiple genomes from different species. There are two core steps in Meta-IDBA. It first tries to partition the de Bruijn graph into isolated components corresponding to different species, based on an important observation. Then, for each component, it captures the slight variants of the genomes of subspecies from the same species by multiple alignment and represents the genome of one species using a consensus sequence. Comparison of the performance of Meta-IDBA with existing assemblers, such as Velvet and ABySS, on different metagenomic datasets shows that Meta-IDBA can reconstruct longer contigs with similar accuracy. The Meta-IDBA toolkit is available at http://www.cs.hku.hk/~alse/metaidba. Contact: chin@cs.hku.hk.
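
    As a toy illustration of the first step only, the sketch below builds a small de Bruijn graph from reads and splits it into connected components, which in the idealised case separate reads from different species. It is far simpler than Meta-IDBA itself, which must also handle shared regions, sequencing errors and the subsequent consensus step; the reads and k-mer size are invented.

```python
# Toy sketch of the graph-partitioning idea only (not the Meta-IDBA implementation):
# build a de Bruijn graph from reads, then split it into connected components,
# which in the idealised case group reads from different species.
from collections import defaultdict

def de_bruijn_components(reads, k=5):
    edges = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            left, right = kmer[:-1], kmer[1:]
            edges[left].add(right)
            edges[right].add(left)          # undirected view, enough for components
    # Connected components via iterative depth-first search
    seen, components = set(), []
    for node in list(edges):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(edges[cur] - comp)
        seen |= comp
        components.append(comp)
    return components

reads = ["ATGGCGTACGT", "GCGTACGTTAG",      # reads overlapping one "genome"
         "TTTTCCCCAAAA", "CCCCAAAATTGG"]    # reads overlapping another
print([len(c) for c in de_bruijn_components(reads)])   # two separate components
```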

  5. Meta-Data Objects as the Basis for System Evolution

    CERN Document Server

    Estrella, Florida; Tóth, N; Kovács, Z; Le Goff, J M; Clatchey, Richard Mc; Toth, Norbert; Kovacs, Zsolt; Goff, Jean-Marie Le

    2001-01-01

    One of the main factors driving object-oriented software development in the Web age is the need for systems to evolve as user requirements change. A crucial factor in the creation of adaptable systems dealing with changing requirements is the suitability of the underlying technology in allowing the evolution of the system. A reflective system utilizes an open architecture where implicit system aspects are reified to become explicit first-class (meta-data) objects. These implicit system aspects are often fundamental structures which are inaccessible and immutable, and their reification as meta-data objects can serve as the basis for changes and extensions to the system, making it self-describing. To address the evolvability issue, this paper proposes a reflective architecture based on two orthogonal abstractions - model abstraction and information abstraction. In this architecture the modeling abstractions allow for the separation of the description meta-data from the system aspects they represent so that th...

  6. A type-driven approach to concrete meta programming.

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen)

    2005-01-01

    Applications that manipulate programs as data are called meta programs. Examples of meta programs are compilers, source-to-source translators and code generators. Meta programming can be supported by the ability to represent program fragments in concrete syntax instead of abstract

  7. Meta-analysis of cell-based CaRdiac stUdiEs (ACCRUE) in patients with acute myocardial infarction based on individual patient data

    DEFF Research Database (Denmark)

    Gyöngyösi, Mariann; Wojakowski, Wojciech; Lemarchand, Patricia

    2015-01-01

    RATIONALE: The meta-Analysis of Cell-based CaRdiac study is the first prospectively declared collaborative multinational database, including individual data of patients with ischemic heart disease treated with cell therapy. OBJECTIVE: We analyzed the safety and efficacy of intracoronary cell...... therapy after acute myocardial infarction (AMI), including individual patient data from 12 randomized trials (ASTAMI, Aalst, BOOST, BONAMI, CADUCEUS, FINCELL, REGENT, REPAIR-AMI, SCAMI, SWISS-AMI, TIME, LATE-TIME; n=1252). METHODS AND RESULTS: The primary end point was freedom from combined major adverse.......1), end-diastolic volume, or systolic volume were observed compared with controls. These results were not influenced by anterior AMI location, reduced baseline ejection fraction, or the use of MRI for assessing left ventricular parameters. CONCLUSIONS: This meta-analysis of individual patient data from...

  8. When Is Hub Gene Selection Better than Standard Meta-Analysis?

    Science.gov (United States)

    Langfelder, Peter; Mischel, Paul S.; Horvath, Steve

    2013-01-01

    Since hub nodes have been found to play important roles in many networks, highly connected hub genes are expected to play an important role in biology as well. However, the empirical evidence remains ambiguous. An open question is whether (or when) hub gene selection leads to more meaningful gene lists than a standard statistical analysis based on significance testing when analyzing genomic data sets (e.g., gene expression or DNA methylation data). Here we address this question for the special case when multiple genomic data sets are available. This is of great practical importance since for many research questions multiple data sets are publicly available. In this case, the data analyst can decide between a standard statistical approach (e.g., based on meta-analysis) and a co-expression network analysis approach that selects intramodular hubs in consensus modules. We assess the performance of these two types of approaches according to two criteria. The first criterion evaluates the biological insights gained and is relevant in basic research. The second criterion evaluates the validation success (reproducibility) in independent data sets and often applies in clinical diagnostic or prognostic applications. We compare meta-analysis with consensus network analysis based on weighted correlation network analysis (WGCNA) in three comprehensive and unbiased empirical studies: (1) Finding genes predictive of lung cancer survival, (2) finding methylation markers related to age, and (3) finding mouse genes related to total cholesterol. The results demonstrate that intramodular hub gene status with respect to consensus modules is more useful than a meta-analysis p-value when identifying biologically meaningful gene lists (reflecting criterion 1). However, standard meta-analysis methods perform as good as (if not better than) a consensus network approach in terms of validation success (criterion 2). The article also reports a comparison of meta-analysis techniques applied to
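
    The two ranking strategies being compared can be caricatured in a few lines of code. The sketch below contrasts a Stouffer-combined correlation Z-score (a simple stand-in for meta-analysis) with a crude consensus hub score (correlation of each gene with a principal-component "eigengene", averaged over data sets); it is a toy on random data and is greatly simplified relative to full WGCNA consensus-module analysis.

```python
# Toy contrast of the two gene-ranking strategies compared in the study:
# (1) a combined meta-analysis Z-score per gene, and (2) a simple consensus "hub"
# score. Greatly simplified relative to full WGCNA consensus-module analysis.
import numpy as np

rng = np.random.default_rng(3)
n_sets, n_samples, n_genes = 3, 50, 200
datasets = [rng.normal(size=(n_samples, n_genes)) for _ in range(n_sets)]
traits = [rng.normal(size=n_samples) for _ in range(n_sets)]

# (1) Meta-analysis: combine per-data-set correlation Z statistics (Stouffer's method)
def corr_z(x, y):
    r = np.array([np.corrcoef(x[:, j], y)[0, 1] for j in range(x.shape[1])])
    return np.arctanh(r) * np.sqrt(x.shape[0] - 3)      # Fisher transform -> approx. Z

meta_z = sum(corr_z(x, y) for x, y in zip(datasets, traits)) / np.sqrt(n_sets)

# (2) Hub-gene score: correlation of each gene with a module "eigengene"
# (here: the first principal component of all genes, standing in for one module)
def hub_score(x):
    xc = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    eigengene = u[:, 0]                                  # module eigengene proxy
    return np.array([abs(np.corrcoef(xc[:, j], eigengene)[0, 1]) for j in range(x.shape[1])])

consensus_hub = np.mean([hub_score(x) for x in datasets], axis=0)

print("top genes by |meta Z|:", np.argsort(-np.abs(meta_z))[:5])
print("top genes by hub score:", np.argsort(-consensus_hub)[:5])
```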

  9. When is hub gene selection better than standard meta-analysis?

    Directory of Open Access Journals (Sweden)

    Peter Langfelder

    Full Text Available Since hub nodes have been found to play important roles in many networks, highly connected hub genes are expected to play an important role in biology as well. However, the empirical evidence remains ambiguous. An open question is whether (or when) hub gene selection leads to more meaningful gene lists than a standard statistical analysis based on significance testing when analyzing genomic data sets (e.g., gene expression or DNA methylation data). Here we address this question for the special case when multiple genomic data sets are available. This is of great practical importance since for many research questions multiple data sets are publicly available. In this case, the data analyst can decide between a standard statistical approach (e.g., based on meta-analysis) and a co-expression network analysis approach that selects intramodular hubs in consensus modules. We assess the performance of these two types of approaches according to two criteria. The first criterion evaluates the biological insights gained and is relevant in basic research. The second criterion evaluates the validation success (reproducibility) in independent data sets and often applies in clinical diagnostic or prognostic applications. We compare meta-analysis with consensus network analysis based on weighted correlation network analysis (WGCNA) in three comprehensive and unbiased empirical studies: (1) Finding genes predictive of lung cancer survival, (2) finding methylation markers related to age, and (3) finding mouse genes related to total cholesterol. The results demonstrate that intramodular hub gene status with respect to consensus modules is more useful than a meta-analysis p-value when identifying biologically meaningful gene lists (reflecting criterion 1). However, standard meta-analysis methods perform as good as (if not better than) a consensus network approach in terms of validation success (criterion 2). The article also reports a comparison of meta-analysis techniques

  11. Meta-STEPP: subpopulation treatment effect pattern plot for individual patient data meta-analysis.

    Science.gov (United States)

    Wang, Xin Victoria; Cole, Bernard; Bonetti, Marco; Gelber, Richard D

    2016-09-20

    We have developed a method, called Meta-STEPP (subpopulation treatment effect pattern plot for meta-analysis), to explore treatment effect heterogeneity across covariate values in the meta-analysis setting for time-to-event data when the covariate of interest is continuous. Meta-STEPP forms overlapping subpopulations from individual patient data containing similar numbers of events with increasing covariate values, estimates subpopulation treatment effects using standard fixed-effects meta-analysis methodology, displays the estimated subpopulation treatment effect as a function of the covariate values, and provides a statistical test to detect possibly complex treatment-covariate interactions. Simulation studies show that this test has adequate type-I error rate recovery as well as power when reasonable window sizes are chosen. When applied to eight breast cancer trials, Meta-STEPP suggests that chemotherapy is less effective for tumors with high estrogen receptor expression compared with those with low expression. Copyright © 2016 John Wiley & Sons, Ltd.
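
    The windowing idea can be sketched as follows. This is a hedged toy illustration, not the published Meta-STEPP implementation: overlapping subpopulations are formed so that each contains a similar number of events, and a fixed-effects (inverse-variance) pooled effect is computed per window. A simple per-trial log odds ratio stands in for the Cox-model hazard ratio the method actually uses, and all data, window sizes and step sizes are made up.

```python
# Hedged sketch of the Meta-STEPP idea (not the authors' implementation).
import numpy as np

rng = np.random.default_rng(1)
n = 1200
data = {
    "trial": rng.integers(0, 8, n),
    "arm": rng.integers(0, 2, n),                      # 0 = control, 1 = treatment
    "covariate": rng.uniform(0, 100, n),               # e.g. estrogen receptor expression
    "event": rng.integers(0, 2, n),
}

def pooled_effect(idx):
    """Fixed-effects pooled log odds ratio over trials for the patients in idx."""
    effects, weights = [], []
    for t in np.unique(data["trial"][idx]):
        m = idx[data["trial"][idx] == t]
        a = np.sum((data["arm"][m] == 1) & (data["event"][m] == 1)) + 0.5
        b = np.sum((data["arm"][m] == 1) & (data["event"][m] == 0)) + 0.5
        c = np.sum((data["arm"][m] == 0) & (data["event"][m] == 1)) + 0.5
        d = np.sum((data["arm"][m] == 0) & (data["event"][m] == 0)) + 0.5
        effects.append(np.log((a * d) / (b * c)))
        weights.append(1 / (1 / a + 1 / b + 1 / c + 1 / d))
    effects, weights = np.array(effects), np.array(weights)
    return np.sum(weights * effects) / np.sum(weights)

order = np.argsort(data["covariate"])
events_per_window, step = 150, 50                      # window size / overlap (tuning choices)
event_positions = order[data["event"][order] == 1]     # events sorted by covariate value
for start in range(0, len(event_positions) - events_per_window + 1, step):
    window_events = event_positions[start:start + events_per_window]
    lo, hi = data["covariate"][window_events[0]], data["covariate"][window_events[-1]]
    idx = order[(data["covariate"][order] >= lo) & (data["covariate"][order] <= hi)]
    print(f"covariate {lo:5.1f}-{hi:5.1f}: pooled log OR = {pooled_effect(idx):+.2f}")
```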

  12. Meta-Analysis of the Reasoned Action Approach (RAA) to Understanding Health Behaviors.

    Science.gov (United States)

    McEachan, Rosemary; Taylor, Natalie; Harrison, Reema; Lawton, Rebecca; Gardner, Peter; Conner, Mark

    2016-08-01

    Reasoned action approach (RAA) includes subcomponents of attitude (experiential/instrumental), perceived norm (injunctive/descriptive), and perceived behavioral control (capacity/autonomy) to predict intention and behavior. To provide a meta-analysis of the RAA for health behaviors focusing on comparing the pairs of RAA subcomponents and differences between health protection and health-risk behaviors. The present research reports a meta-analysis of correlational tests of RAA subcomponents, examination of moderators, and combined effects of subcomponents on intention and behavior. Regressions were used to predict intention and behavior based on data from studies measuring all variables. Capacity and experiential attitude had large, and other constructs had small-medium-sized correlations with intention; all constructs except autonomy were significant independent predictors of intention in regressions. Intention, capacity, and experiential attitude had medium-large, and other constructs had small-medium-sized correlations with behavior; intention, capacity, experiential attitude, and descriptive norm were significant independent predictors of behavior in regressions. The RAA subcomponents have utility in predicting and understanding health behaviors.

  13. Rethinking Meta-Analysis: Applications for Air Pollution Data and Beyond

    Science.gov (United States)

    Goodman, Julie E; Petito Boyce, Catherine; Sax, Sonja N; Beyer, Leslie A; Prueitt, Robyn L

    2015-01-01

    Meta-analyses offer a rigorous and transparent systematic framework for synthesizing data that can be used for a wide range of research areas, study designs, and data types. Both the outcome of meta-analyses and the meta-analysis process itself can yield useful insights for answering scientific questions and making policy decisions. Development of the National Ambient Air Quality Standards illustrates many potential applications of meta-analysis. These applications demonstrate the strengths and limitations of meta-analysis, issues that arise in various data realms, how meta-analysis design choices can influence interpretation of results, and how meta-analysis can be used to address bias and heterogeneity. Reviewing available data from a meta-analysis perspective can provide a useful framework and impetus for identifying and refining strategies for future research. Moreover, increased pervasiveness of a meta-analysis mindset—focusing on how the pieces of the research puzzle fit together—would benefit scientific research and data syntheses regardless of whether or not a quantitative meta-analysis is undertaken. While an individual meta-analysis can only synthesize studies addressing the same research question, the results of separate meta-analyses can be combined to address a question encompassing multiple data types. This observation applies to any scientific or policy area where information from a variety of disciplines must be considered to address a broader research question. PMID:25969128

  14. Predictors of treatment dropout in self-guided web-based interventions for depression: an 'individual patient data' meta-analysis.

    Science.gov (United States)

    Karyotaki, E; Kleiboer, A; Smit, F; Turner, D T; Pastor, A M; Andersson, G; Berger, T; Botella, C; Breton, J M; Carlbring, P; Christensen, H; de Graaf, E; Griffiths, K; Donker, T; Farrer, L; Huibers, M J H; Lenndin, J; Mackinnon, A; Meyer, B; Moritz, S; Riper, H; Spek, V; Vernmark, K; Cuijpers, P

    2015-10-01

    It is well known that web-based interventions can be effective treatments for depression. However, dropout rates in web-based interventions are typically high, especially in self-guided web-based interventions. Rigorous empirical evidence regarding factors influencing dropout in self-guided web-based interventions is lacking due to small study sample sizes. In this paper we examined predictors of dropout in an individual patient data meta-analysis to gain a better understanding of who may benefit from these interventions. A comprehensive literature search for all randomized controlled trials (RCTs) of psychotherapy for adults with depression from 2006 to January 2013 was conducted. Next, we approached authors to collect the primary data of the selected studies. Predictors of dropout, such as socio-demographic, clinical, and intervention characteristics were examined. Data from 2705 participants across ten RCTs of self-guided web-based interventions for depression were analysed. The multivariate analysis indicated that male gender [relative risk (RR) 1.08], lower educational level (primary education, RR 1.26) and co-morbid anxiety symptoms (RR 1.18) significantly increased the risk of dropping out, while for every additional 4 years of age, the risk of dropping out significantly decreased (RR 0.94). Dropout can be predicted by several variables and is not randomly distributed. This knowledge may inform tailoring of online self-help interventions to prevent dropout in identified groups at risk.

  15. Distributed PACS using distributed file system with hierarchical meta data servers.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the separate PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. Because operations such as finding files or changing titles do not need to touch the image data, they can be performed at high speed. At the same time, because a distributed file system is used, access to the image files is also fast and highly fault tolerant. A further advantage of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata is centralized, the metadata server becomes the weak point of the system. To address this, hierarchical metadata servers are introduced, which both increases fault tolerance and improves the scalability of file access. To evaluate the proposed design, a prototype system using Gfarm was implemented, and the file search times of Gfarm and NFS were compared.
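
    The core storage split described above can be caricatured in a few lines. The sketch below is an assumed structure for illustration only (the hierarchical metadata-server layer and the Gfarm integration are omitted): metadata lives in a small searchable record, image data in a separate blob store, so search and rename operations never read pixel data.

```python
# Minimal sketch of the idea described above (assumed structure, not the authors' code):
# DICOM-like objects are split into a metadata record on a metadata server and a
# bulky image blob in a separate store.
from dataclasses import dataclass, field

@dataclass
class MetaDataServer:
    records: dict = field(default_factory=dict)         # uid -> {title, patient, blob_ref}

    def add(self, uid, title, patient, blob_ref):
        self.records[uid] = {"title": title, "patient": patient, "blob_ref": blob_ref}

    def find_by_patient(self, patient):                  # metadata-only operation
        return [uid for uid, r in self.records.items() if r["patient"] == patient]

    def rename(self, uid, new_title):                    # metadata-only operation
        self.records[uid]["title"] = new_title

class BlobStore:
    def __init__(self):
        self.blobs = {}

    def put(self, ref, data):
        self.blobs[ref] = data

    def get(self, ref):
        return self.blobs[ref]

meta, store = MetaDataServer(), BlobStore()
store.put("blob-001", b"\x00" * 1024)                    # stand-in for DICOM pixel data
meta.add(uid="1.2.840.1", title="CT chest", patient="P42", blob_ref="blob-001")
print(meta.find_by_patient("P42"))                       # fast: no image data is read
```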

  16. Is self-guided internet-based cognitive behavioural therapy (iCBT) harmful? An individual participant data meta-analysis.

    Science.gov (United States)

    Karyotaki, Eirini; Kemmeren, Lise; Riper, Heleen; Twisk, Jos; Hoogendoorn, Adriaan; Kleiboer, Annet; Mira, Adriana; Mackinnon, Andrew; Meyer, Björn; Botella, Cristina; Littlewood, Elizabeth; Andersson, Gerhard; Christensen, Helen; Klein, Jan P; Schröder, Johanna; Bretón-López, Juana; Scheider, Justine; Griffiths, Kathy; Farrer, Louise; Huibers, Marcus J H; Phillips, Rachel; Gilbody, Simon; Moritz, Steffen; Berger, Thomas; Pop, Victor; Spek, Viola; Cuijpers, Pim

    2018-03-15

    Little is known about potential harmful effects as a consequence of self-guided internet-based cognitive behaviour therapy (iCBT), such as symptom deterioration rates. Thus, safety concerns remain and hamper the implementation of self-guided iCBT into clinical practice. We aimed to conduct an individual participant data (IPD) meta-analysis to determine the prevalence of clinically significant deterioration (symptom worsening) in adults with depressive symptoms who received self-guided iCBT compared with control conditions. Several socio-demographic, clinical and study-level variables were tested as potential moderators of deterioration. Randomised controlled trials that reported results of self-guided iCBT compared with control conditions in adults with symptoms of depression were selected. Mixed effects models with participants nested within studies were used to examine possible clinically significant deterioration rates. Thirteen out of 16 eligible trials were included in the present IPD meta-analysis. Of the 3805 participants analysed, 7.2% showed clinically significant deterioration (5.8% and 9.1% of participants in the intervention and control groups, respectively). Participants in self-guided iCBT were less likely to deteriorate (OR 0.62). Self-guided iCBT has a lower rate of negative outcomes on symptoms than control conditions and could be a first-step treatment approach for adult depression as well as an alternative to watchful waiting in general practice.

  17. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users specify their requirements for a workflow management system.

  18. Prognostic meta-signature of breast cancer developed by two-stage mixture modeling of microarray data

    Directory of Open Access Journals (Sweden)

    Ghosh Debashis

    2004-12-01

    Full Text Available Abstract Background An increasing number of studies have profiled tumor specimens using distinct microarray platforms and analysis techniques. With the accumulating amount of microarray data, one of the most intriguing yet challenging tasks is to develop robust statistical models to integrate the findings. Results By applying a two-stage Bayesian mixture modeling strategy, we were able to assimilate and analyze four independent microarray studies to derive an inter-study validated "meta-signature" associated with breast cancer prognosis. Combining multiple studies (n = 305 samples) on a common probability scale, we developed a 90-gene meta-signature, which strongly associated with survival in breast cancer patients. Given the set of independent studies using different microarray platforms which included spotted cDNAs, Affymetrix GeneChip, and inkjet oligonucleotides, the individually identified classifiers yielded gene sets predictive of survival in each study cohort. The study-specific gene signatures, however, had minimal overlap with each other, and performed poorly in pairwise cross-validation. The meta-signature, on the other hand, accommodated such heterogeneity and achieved comparable or better prognostic performance when compared with the individual signatures. Further by comparing to a global standardization method, the mixture model based data transformation demonstrated superior properties for data integration and provided solid basis for building classifiers at the second stage. Functional annotation revealed that genes involved in cell cycle and signal transduction activities were over-represented in the meta-signature. Conclusion The mixture modeling approach unifies disparate gene expression data on a common probability scale allowing for robust, inter-study validated prognostic signatures to be obtained. With the emerging utility of microarrays for cancer prognosis, it will be important to establish paradigms to meta

  19. PanMetaDocs - A tool for collecting and managing the long tail of "small science data"

    Science.gov (United States)

    Klump, J.; Ulbricht, D.

    2011-12-01

    In the early days of thinking about cyberinfrastructure the focus was on "big science data". Today, the challenge is no longer to store several terabytes of data, but to manage data objects in a way that facilitates their re-use. Key to re-use by a data consumer is proper documentation of the data. Data consumers need discovery metadata to find the data they need and descriptive metadata to be able to use the data they retrieve. Data documentation thus faces the challenge of describing these objects extensively and completely while keeping the items easily accessible at a sustainable cost. However, data curation and documentation do not rank high in the everyday work of a scientist as a data producer. Data producers are often frustrated by being asked to provide metadata on their data over and over again, information that seemed obvious from the context of their work. A further challenge to data archives is the wide variety of metadata schemata in use, which creates maintenance and design challenges of its own. PanMetaDocs addresses these issues by allowing an uploaded file to be described by more than one metadata object. PanMetaDocs, which was developed from PanMetaWorks, is a PHP-based web application that can describe data with any XML-based metadata schema. Its user interface is browser based and was developed to collect metadata and data in collaborative scientific projects situated at one or more institutions. The metadata fields can be filled with static or dynamic content to reduce the number of fields requiring manual entry to a minimum and to make use of contextual information in a project setting. In the development of PanMetaDocs the business logic of PanMetaWorks is reused, except for the authentication and data management functions, which are delegated to the eSciDoc framework. The eSciDoc repository framework is designed as a service oriented architecture that can be controlled through a

  20. Homeopathy: meta-analyses of pooled clinical data.

    Science.gov (United States)

    Hahn, Robert G

    2013-01-01

    In the first decade of the evidence-based era, which began in the mid-1990s, meta-analyses were used to scrutinize homeopathy for evidence of beneficial effects in medical conditions. In this review, meta-analyses including pooled data from placebo-controlled clinical trials of homeopathy and the aftermath in the form of debate articles were analyzed. In 1997 Klaus Linde and co-workers identified 89 clinical trials that showed an overall odds ratio of 2.45 in favor of homeopathy over placebo. There was a trend toward smaller benefit from studies of the highest quality, but the 10 trials with the highest Jadad score still showed homeopathy had a statistically significant effect. These results challenged academics to perform alternative analyses that, to demonstrate the lack of effect, relied on extensive exclusion of studies, often to the degree that conclusions were based on only 5-10% of the material, or on virtual data. The ultimate argument against homeopathy is the 'funnel plot' published by Aijing Shang's research group in 2005. However, the funnel plot is flawed when applied to a mixture of diseases, because studies with expected strong treatment effects are, for ethical reasons, powered lower than studies with expected weak or unclear treatment effects. To conclude that homeopathy lacks clinical effect, more than 90% of the available clinical trials had to be disregarded. Alternatively, flawed statistical methods had to be applied. Future meta-analyses should focus on the use of homeopathy in specific diseases or groups of diseases instead of pooling data from all clinical trials. © 2013 S. Karger GmbH, Freiburg.

  1. An Innovative Approach for online Meta Search Engine Optimization

    OpenAIRE

    Manral, Jai; Hossain, Mohammed Alamgir

    2015-01-01

    This paper presents an approach to identify efficient techniques used in Web Search Engine Optimization (SEO). Understanding the SEO factors that can influence page ranking in a search engine is important for webmasters who wish to attract large numbers of users to their websites. Different from previous relevant research, in this study we developed an intelligent Meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters. The r...

  2. Meta-connectomics: human brain network and connectivity meta-analyses.

    Science.gov (United States)

    Crossley, N A; Fox, P T; Bullmore, E T

    2016-04-01

    Abnormal brain connectivity or network dysfunction has been suggested as a paradigm to understand several psychiatric disorders. We here review the use of novel meta-analytic approaches in neuroscience that go beyond a summary description of existing results by applying network analysis methods to previously published studies and/or publicly accessible databases. We define this strategy of combining connectivity with other brain characteristics as 'meta-connectomics'. For example, we show how network analysis of task-based neuroimaging studies has been used to infer functional co-activation from primary data on regional activations. This approach has been able to relate cognition to functional network topology, demonstrating that the brain is composed of cognitively specialized functional subnetworks or modules, linked by a rich club of cognitively generalized regions that mediate many inter-modular connections. Another major application of meta-connectomics has been efforts to link meta-analytic maps of disorder-related abnormalities or MRI 'lesions' to the complex topology of the normative connectome. This work has highlighted the general importance of network hubs as hotspots for concentration of cortical grey-matter deficits in schizophrenia, Alzheimer's disease and other disorders. Finally, we show how by incorporating cellular and transcriptional data on individual nodes with network models of the connectome, studies have begun to elucidate the microscopic mechanisms underpinning the macroscopic organization of whole-brain networks. We argue that meta-connectomics is an exciting field, providing robust and integrative insights into brain organization that will likely play an important future role in consolidating network models of psychiatric disorders.

  3. Testing moderation in network meta-analysis with individual participant data

    Science.gov (United States)

    Dagne, Getachew A.; Brown, C. Hendricks; Howe, George; Kellam, Sheppard G.; Liu, Lei

    2016-01-01

    Summary Meta-analytic methods for combining data from multiple intervention trials are commonly used to estimate the effectiveness of an intervention. They can also be extended to study comparative effectiveness, testing which of several alternative interventions is expected to have the strongest effect. This often requires network meta-analysis (NMA), which combines trials involving direct comparison of two interventions within the same trial and indirect comparisons across trials. In this paper, we extend existing network methods for main effects to examining moderator effects, allowing for tests of whether intervention effects vary for different populations or when employed in different contexts. In addition, we study how the use of individual participant data (IPD) may increase the sensitivity of NMA for detecting moderator effects, as compared to aggregate data NMA that employs study-level effect sizes in a meta-regression framework. A new network meta-analysis diagram is proposed. We also develop a generalized multilevel model for NMA that takes into account within- and between-trial heterogeneity, and can include participant-level covariates. Within this framework we present definitions of homogeneity and consistency across trials. A simulation study based on this model is used to assess effects on power to detect both main and moderator effects. Results show that power to detect moderation is substantially greater when applied to IPD as compared to study-level effects. We illustrate the use of this method by applying it to data from a classroom-based randomized study that involved two sub-trials, each comparing interventions that were contrasted with separate control groups. PMID:26841367
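
    For readers unfamiliar with how direct and indirect evidence are combined in NMA, the following minimal fixed-effects example (toy numbers, Bucher-style adjustment) illustrates the basic arithmetic; it is not the multilevel IPD model with moderators developed in the paper.

```python
# Minimal fixed-effects illustration of the direct/indirect combination underlying NMA.
import numpy as np

# Aggregate effects (e.g. standardized mean differences) and their variances (toy values).
d_AC, v_AC = 0.40, 0.02      # intervention A vs control C (direct)
d_BC, v_BC = 0.25, 0.03      # intervention B vs control C (direct)
d_AB, v_AB = 0.10, 0.05      # A vs B measured head-to-head (direct)

# Indirect A-vs-B estimate through the common comparator C (Bucher adjustment).
d_AB_ind = d_AC - d_BC
v_AB_ind = v_AC + v_BC

# Inverse-variance combination of direct and indirect evidence.
w_dir, w_ind = 1 / v_AB, 1 / v_AB_ind
d_AB_net = (w_dir * d_AB + w_ind * d_AB_ind) / (w_dir + w_ind)
v_AB_net = 1 / (w_dir + w_ind)

# A simple consistency check: direct vs indirect difference relative to its standard error.
z_inconsistency = (d_AB - d_AB_ind) / np.sqrt(v_AB + v_AB_ind)
print(f"network estimate A vs B: {d_AB_net:.3f} (var {v_AB_net:.4f}), "
      f"inconsistency z = {z_inconsistency:.2f}")
```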

  4. A Microsoft-Excel-based tool for running and critically appraising network meta-analyses--an overview and application of NetMetaXL.

    Science.gov (United States)

    Brown, Stephen; Hutton, Brian; Clifford, Tammy; Coyle, Doug; Grima, Daniel; Wells, George; Cameron, Chris

    2014-09-29

    The use of network meta-analysis has increased dramatically in recent years. WinBUGS, a freely available Bayesian software package, has been the most widely used software package to conduct network meta-analyses. However, the learning curve for WinBUGS can be daunting, especially for new users. Furthermore, critical appraisal of network meta-analyses conducted in WinBUGS can be challenging given its limited data manipulation capabilities and the fact that generation of graphical output from network meta-analyses often relies on different software packages than the analyses themselves. We developed a freely available Microsoft-Excel-based tool called NetMetaXL, programmed in Visual Basic for Applications, which provides an interface for conducting a Bayesian network meta-analysis using WinBUGS from within Microsoft Excel. This tool allows the user to easily prepare and enter data, set model assumptions, and run the network meta-analysis, with results being automatically displayed in an Excel spreadsheet. It also contains macros that use NetMetaXL's interface to generate evidence network diagrams, forest plots, league tables of pairwise comparisons, probability plots (rankograms), and inconsistency plots within Microsoft Excel. All figures generated are publication quality, thereby increasing the efficiency of knowledge transfer and manuscript preparation. We demonstrate the application of NetMetaXL using data from a network meta-analysis published previously which compares combined resynchronization and implantable defibrillator therapy in left ventricular dysfunction. We replicate results from the previous publication while demonstrating result summaries generated by the software. Use of the freely available NetMetaXL successfully demonstrated its ability to make running network meta-analyses more accessible to novice WinBUGS users by allowing analyses to be conducted entirely within Microsoft Excel. NetMetaXL also allows for more efficient and transparent

  5. A Microsoft-Excel-based tool for running and critically appraising network meta-analyses—an overview and application of NetMetaXL

    Science.gov (United States)

    2014-01-01

    Background The use of network meta-analysis has increased dramatically in recent years. WinBUGS, a freely available Bayesian software package, has been the most widely used software package to conduct network meta-analyses. However, the learning curve for WinBUGS can be daunting, especially for new users. Furthermore, critical appraisal of network meta-analyses conducted in WinBUGS can be challenging given its limited data manipulation capabilities and the fact that generation of graphical output from network meta-analyses often relies on different software packages than the analyses themselves. Methods We developed a freely available Microsoft-Excel-based tool called NetMetaXL, programmed in Visual Basic for Applications, which provides an interface for conducting a Bayesian network meta-analysis using WinBUGS from within Microsoft Excel. This tool allows the user to easily prepare and enter data, set model assumptions, and run the network meta-analysis, with results being automatically displayed in an Excel spreadsheet. It also contains macros that use NetMetaXL’s interface to generate evidence network diagrams, forest plots, league tables of pairwise comparisons, probability plots (rankograms), and inconsistency plots within Microsoft Excel. All figures generated are publication quality, thereby increasing the efficiency of knowledge transfer and manuscript preparation. Results We demonstrate the application of NetMetaXL using data from a network meta-analysis published previously which compares combined resynchronization and implantable defibrillator therapy in left ventricular dysfunction. We replicate results from the previous publication while demonstrating result summaries generated by the software. Conclusions Use of the freely available NetMetaXL successfully demonstrated its ability to make running network meta-analyses more accessible to novice WinBUGS users by allowing analyses to be conducted entirely within Microsoft Excel. NetMetaXL also allows

  6. Population pharmacokinetics analysis of olanzapine for Chinese psychotic patients based on clinical therapeutic drug monitoring data with assistance of meta-analysis.

    Science.gov (United States)

    Yin, Anyue; Shang, Dewei; Wen, Yuguan; Li, Liang; Zhou, Tianyan; Lu, Wei

    2016-08-01

    The aim of this study was to build an eligible population pharmacokinetic (PK) model for olanzapine in Chinese psychotic patients based on therapeutic drug monitoring (TDM) data, with assistance of meta-analysis, to facilitate individualized therapy. Population PK analysis for olanzapine was performed using NONMEM software (version 7.3.0). TDM data were collected from Guangzhou Brain Hospital (China). Because of the limitations of TDM data, model-based meta-analysis was performed to construct a structural model to assist the modeling of TDM data as prior estimates. After analyzing related covariates, a simulation was performed to predict concentrations for different types of patients under common dose regimens. A two-compartment model with first-order absorption and elimination was developed for olanzapine oral tablets, based on 23 articles with 390 data points. The model was then applied to the TDM data. Gender and smoking habits were found to be significant covariates that influence the clearance of olanzapine. To achieve a blood concentration of 20 ng/mL (the lower boundary of the recommended therapeutic range), simulation results indicated that the dose regimen of olanzapine should be 5 mg BID (twice a day), ≥ 5 mg QD (every day) plus 10 mg QN (every night), or >10 mg BID for female nonsmokers, male nonsmokers and male smokers, respectively. The population PK model, built using meta-analysis, could facilitate the modeling of TDM data collected from Chinese psychotic patients. The factors that significantly influence olanzapine disposition were determined and the final model could be used for individualized treatment.
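
    The structural model named above (two-compartment disposition with first-order absorption) can be written down directly. The sketch below simulates a twice-daily regimen; the parameter values are placeholders chosen for illustration, not the published olanzapine estimates.

```python
# Hedged sketch of a two-compartment model with first-order oral absorption.
import numpy as np
from scipy.integrate import solve_ivp

ka, CL, V, Q, Vp = 0.5, 16.0, 700.0, 30.0, 400.0     # 1/h, L/h, L, L/h, L (hypothetical values)

def rhs(t, y):
    A_depot, A_central, A_periph = y
    dA_depot = -ka * A_depot
    dA_central = ka * A_depot - (CL / V) * A_central - (Q / V) * A_central + (Q / Vp) * A_periph
    dA_periph = (Q / V) * A_central - (Q / Vp) * A_periph
    return [dA_depot, dA_central, dA_periph]

def simulate(dose_mg, dose_times_h, t_end_h=48.0, dt=0.25):
    y, t0, times, conc = np.zeros(3), 0.0, [], []
    for t_dose in list(dose_times_h) + [t_end_h]:
        if t_dose > t0:
            grid = np.arange(t0, t_dose + 1e-9, dt)
            sol = solve_ivp(rhs, (t0, t_dose), y, t_eval=grid)
            times.extend(sol.t)
            conc.extend(sol.y[1] / V)                 # central concentration = amount / V
            y, t0 = sol.y[:, -1], t_dose
        if t_dose < t_end_h:
            y[0] += dose_mg * 1000.0                  # oral dose enters the depot (ug)
    return np.array(times), np.array(conc)            # concentration in ug/L ~ ng/mL

t, c = simulate(dose_mg=5.0, dose_times_h=[0.0, 12.0, 24.0, 36.0])   # 5 mg BID for two days
print(f"trough at 48 h: {c[-1]:.1f} ng/mL (placeholder parameters)")
```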

  7. OHBM 2017: Practical intensity based meta-analysis

    OpenAIRE

    Maumet, Camille

    2017-01-01

    "Practical intensity-based meta-analysis" slides from my talk in the OHBM 2017 educational talk on Neuroimaging meta-analysis.http://www.humanbrainmapping.org/files/2017/ED Courses/Neuroimaging Meta-Analysis.pdf

  8. An efficient Bayesian meta-analysis approach for studying cross-phenotype genetic associations.

    Directory of Open Access Journals (Sweden)

    Arunabha Majumdar

    2018-02-01

    Full Text Available Simultaneous analysis of genetic associations with multiple phenotypes may reveal shared genetic susceptibility across traits (pleiotropy). For a locus exhibiting overall pleiotropy, it is important to identify which specific traits underlie this association. We propose a Bayesian meta-analysis approach (termed CPBayes) that uses summary-level data across multiple phenotypes to simultaneously measure the evidence of aggregate-level pleiotropic association and estimate an optimal subset of traits associated with the risk locus. This method uses a unified Bayesian statistical framework based on a spike and slab prior. CPBayes performs a fully Bayesian analysis by employing the Markov Chain Monte Carlo (MCMC) technique Gibbs sampling. It takes into account heterogeneity in the size and direction of the genetic effects across traits. It can be applied to both cohort data and separate studies of multiple traits having overlapping or non-overlapping subjects. Simulations show that CPBayes can produce higher accuracy in the selection of associated traits underlying a pleiotropic signal than the subset-based meta-analysis ASSET. We used CPBayes to undertake a genome-wide pleiotropic association study of 22 traits in the large Kaiser GERA cohort and detected six independent pleiotropic loci associated with at least two phenotypes. This includes a locus at chromosomal region 1q24.2 which exhibits an association simultaneously with the risk of five different diseases: Dermatophytosis, Hemorrhoids, Iron Deficiency, Osteoporosis and Peripheral Vascular Disease. We provide an R-package 'CPBayes' implementing the proposed method.
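
    The spike-and-slab idea can be illustrated with a closed-form caricature: for each trait, compare the marginal likelihood of its summary estimate under a point mass at zero (spike) and under a normal slab, and convert the result into a posterior inclusion probability. This hedged sketch ignores the Gibbs sampling, effect heterogeneity and sample-overlap handling of the actual CPBayes method; all numbers, the slab scale tau and the prior inclusion probability pi are made up.

```python
# Closed-form caricature of trait selection under a spike-and-slab prior (illustration only).
import numpy as np
from scipy.stats import norm

traits   = ["Dermatophytosis", "Hemorrhoids", "IronDeficiency", "Osteoporosis", "PVD", "Asthma"]
beta_hat = np.array([0.12, 0.10, 0.15, 0.09, 0.11, 0.01])   # summary log-odds ratios at one locus
se       = np.array([0.03, 0.04, 0.05, 0.03, 0.04, 0.03])   # their standard errors (made up)

tau = 0.10      # slab scale: prior SD of a true nonzero effect (assumption)
pi  = 0.25      # prior probability that any given trait is associated (assumption)

like_spike = norm.pdf(beta_hat, loc=0.0, scale=se)                        # true effect exactly zero
like_slab  = norm.pdf(beta_hat, loc=0.0, scale=np.sqrt(se**2 + tau**2))   # true effect from the slab
post_incl  = pi * like_slab / (pi * like_slab + (1 - pi) * like_spike)

for name, p in zip(traits, post_incl):
    print(f"{name:16s} posterior inclusion probability = {p:.2f}")
print("selected traits:", [t for t, p in zip(traits, post_incl) if p > 0.5])
```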

  9. Data management and data analysis techniques in pharmacoepidemiological studies using a pre-planned multi-database approach

    DEFF Research Database (Denmark)

    Bazelier, Marloes T.; Eriksson, Irene; de Vries, Frank

    2015-01-01

    pharmacoepidemiological multi-database studies published from 2007 onwards that combined data for a pre-planned common analysis or quantitative synthesis. Information was retrieved about study characteristics, methods used for individual-level analyses and meta-analyses, data management and motivations for performing...... meta-analysis (27%), while a semi-aggregate approach was applied in three studies (14%). Information on central programming or heterogeneity assessment was missing in approximately half of the publications. Most studies were motivated by improving power (86%). CONCLUSIONS: Pharmacoepidemiological multi...

  10. panMetaDocs and DataSync - providing a convenient way to share and publish research data

    Science.gov (United States)

    Ulbricht, D.; Klump, J. F.

    2013-12-01

    In recent years research institutions, geological surveys and funding organizations started to build infrastructures to facilitate the re-use of research data from previous work. At present, several intermeshed activities are coordinated to make data systems of the earth sciences interoperable and recorded data discoverable. Driven by governmental authorities, ISO19115/19139 emerged as metadata standards for discovery of data and services. Established metadata transport protocols like OAI-PMH and OGC-CSW are used to disseminate metadata to data portals. With the persistent identifiers like DOI and IGSN research data and corresponding physical samples can be given unambiguous names and thus become citable. In summary, these activities focus primarily on 'ready to give away'-data, already stored in an institutional repository and described with appropriate metadata. Many datasets are not 'born' in this state but are produced in small and federated research projects. To make access and reuse of these 'small data' easier, these data should be centrally stored and version controlled from the very beginning of activities. We developed DataSync [1] as supplemental application to the panMetaDocs [2] data exchange platform as a data management tool for small science projects. DataSync is a JAVA-application that runs on a local computer and synchronizes directory trees into an eSciDoc-repository [3] by creating eSciDoc-objects via eSciDocs' REST API. DataSync can be installed on multiple computers and is in this way able to synchronize files of a research team over the internet. XML Metadata can be added as separate files that are managed together with data files as versioned eSciDoc-objects. A project-customized instance of panMetaDocs is provided to show a web-based overview of the previously uploaded file collection and to allow further annotation with metadata inside the eSciDoc-repository. PanMetaDocs is a PHP based web application to assist the creation of metadata in

  11. Meta-analysis of Jelajah Alam Sekitar (JAS Approach Implementation in Learning Procces

    Directory of Open Access Journals (Sweden)

    S. Ngabekti

    2017-04-01

    Full Text Available Tracer studies on the Jelajah Alam Sekitar (JAS, or environment exploration) learning approach show that it is used in eight provinces in Indonesia and has been studied at levels ranging from primary school to college. The question addressed here is how effective the implementation of the JAS approach is in improving the learning process. This study uses a meta-analysis of data in the form of a descriptive, exploratory, qualitative review. Data were taken from theses and faculty research from the last 10 years. Data analysis was performed by calculating the percentage of identical findings for similar problems. The results show that a wide range of studies using different methods, such as descriptive qualitative studies, quasi-experiments, classroom action research (PTK) and R and D, produce evidence that the JAS approach is effective when applied in teaching, especially biology teaching with a variety of teaching materials. Various studies have shown that the JAS approach increases learning outcomes and differentiates learning outcomes between treatment and control groups, with the treatment group having the higher mean score. Student-centered learning models, strategies and methods are highly relevant to the implementation of the JAS approach and make it more concrete, for example cooperative learning, think-pair-share, role playing, group investigation, the 5E learning cycle and hands-on activities, so that the approach can be continuously assessed and developed within the paradigm of a competency-based curriculum.

  12. Meta-Analysis of Cell-based CaRdiac stUdiEs (ACCRUE) in patients with acute myocardial infarction based on individual patient data.

    Science.gov (United States)

    Gyöngyösi, Mariann; Wojakowski, Wojciech; Lemarchand, Patricia; Lunde, Ketil; Tendera, Michal; Bartunek, Jozef; Marban, Eduardo; Assmus, Birgit; Henry, Timothy D; Traverse, Jay H; Moyé, Lemuel A; Sürder, Daniel; Corti, Roberto; Huikuri, Heikki; Miettinen, Johanna; Wöhrle, Jochen; Obradovic, Slobodan; Roncalli, Jérome; Malliaras, Konstantinos; Pokushalov, Evgeny; Romanov, Alexander; Kastrup, Jens; Bergmann, Martin W; Atsma, Douwe E; Diederichsen, Axel; Edes, Istvan; Benedek, Imre; Benedek, Theodora; Pejkov, Hristo; Nyolczas, Noemi; Pavo, Noemi; Bergler-Klein, Jutta; Pavo, Imre J; Sylven, Christer; Berti, Sergio; Navarese, Eliano P; Maurer, Gerald

    2015-04-10

    The meta-Analysis of Cell-based CaRdiac study is the first prospectively declared collaborative multinational database, including individual data of patients with ischemic heart disease treated with cell therapy. We analyzed the safety and efficacy of intracoronary cell therapy after acute myocardial infarction (AMI), including individual patient data from 12 randomized trials (ASTAMI, Aalst, BOOST, BONAMI, CADUCEUS, FINCELL, REGENT, REPAIR-AMI, SCAMI, SWISS-AMI, TIME, LATE-TIME; n=1252). The primary end point was freedom from combined major adverse cardiac and cerebrovascular events (including all-cause death, AMI recurrence, stroke, and target vessel revascularization). The secondary end point was freedom from hard clinical end points (death, AMI recurrence, or stroke), assessed with random-effects meta-analyses and Cox regressions for interactions. Secondary efficacy end points included changes in end-diastolic volume, end-systolic volume, and ejection fraction, analyzed with random-effects meta-analyses and ANCOVA. We reported weighted mean differences between cell therapy and control groups. No effect of cell therapy on major adverse cardiac and cerebrovascular events (14.0% versus 16.3%; hazard ratio, 0.86; 95% confidence interval, 0.63-1.18) or death (1.4% versus 2.1%) or death/AMI recurrence/stroke (2.9% versus 4.7%) was identified in comparison with controls. No changes in ejection fraction (mean difference: 0.96%; 95% confidence interval, -0.2 to 2.1), end-diastolic volume, or systolic volume were observed compared with controls. These results were not influenced by anterior AMI location, reduced baseline ejection fraction, or the use of MRI for assessing left ventricular parameters. This meta-analysis of individual patient data from randomized trials in patients with recent AMI revealed that intracoronary cell therapy provided no benefit, in terms of clinical events or changes in left ventricular function. URL: http://www.clinicaltrials.gov. Unique

  13. Coordinate based random effect size meta-analysis of neuroimaging studies.

    Science.gov (United States)

    Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J

    2017-06-01

    Low power in neuroimaging studies can make them difficult to interpret, and Coordinate based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here a method of performing coordinate based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both coordinates and reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by a random effects meta-analyses of reported effects performed cluster-wise using standard statistical methods and taking account of censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating and even amplifying the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely. Copyright © 2017 Elsevier Inc. All rights reserved.
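
    The cluster-wise pooling step can be sketched with a standard DerSimonian-Laird random-effects meta-analysis of the standardized study effects. The sketch below uses toy values and ignores the censoring correction that ClusterZ applies to unreported effects.

```python
# Hedged sketch of a cluster-wise random-effects meta-analysis (toy values, no censoring handling).
import numpy as np
from scipy.stats import norm

# Per-study effect for one cluster: peak Z score divided by sqrt(N), with variance 1/N.
z = np.array([4.1, 3.6, 5.0, 3.2])          # reported peak Z scores (toy values)
n = np.array([20, 35, 28, 16])              # subjects per study
effect, var = z / np.sqrt(n), 1.0 / n

# DerSimonian-Laird between-study variance tau^2.
w = 1.0 / var
mu_fixed = np.sum(w * effect) / np.sum(w)
Q = np.sum(w * (effect - mu_fixed) ** 2)
tau2 = max(0.0, (Q - (len(z) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate and two-sided p-value for the cluster.
w_re = 1.0 / (var + tau2)
mu_re = np.sum(w_re * effect) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
p = 2 * norm.sf(abs(mu_re / se_re))
print(f"cluster effect = {mu_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.4f}, p = {p:.2g}")
```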

  14. Semantic Oriented Agent based Approach towards Engineering Data Management, Web Information Retrieval and User System Communication Problems

    OpenAIRE

    Ahmed, Zeeshan; Gerhard, Detlef

    2010-01-01

    The four pressing software problems raised by the software industry, i.e., User System Communication / Human Machine Interface, Meta Data extraction, Information processing and management, and Data representation, are discussed in this research paper. To contribute to the field, we propose and describe an intelligent semantic-oriented agent-based search engine, including the concepts of an intelligent graphical user interface, natural-language-based information processing, data management...

  15. Interpreting trial results following use of different intention-to-treat approaches for preventing attrition bias: a meta-epidemiological study protocol.

    Science.gov (United States)

    Dossing, Anna; Tarp, Simon; Furst, Daniel E; Gluud, Christian; Beyene, Joseph; Hansen, Bjarke B; Bliddal, Henning; Christensen, Robin

    2014-09-26

    When participants drop out of randomised clinical trials, as frequently happens, the intention-to-treat (ITT) principle does not apply, potentially leading to attrition bias. Data lost from patient dropout/lack of follow-up are statistically addressed by imputing, a procedure prone to bias. Deviations from the original definition of ITT are referred to as modified intention-to-treat (mITT). As yet, the impact of the potential bias associated with mITT has not been assessed. Our objective is to investigate potential bias and disadvantages of performing mITT and evaluate possible concerns when executing different mITT approaches in meta-analyses. Using meta-epidemiology on randomised trials considered less prone to bias (ie, good internal validity) and assessing biological or targeted agents in patients with rheumatoid arthritis, we will meta-analyse data from 10 biological and targeted drugs based on collections of trials that would correspond to 10 individual meta-analyses. This study will enhance transparency for evaluating mITT treatment effects described in meta-analyses. The intended audience will include healthcare researchers, policymakers and clinicians. Results of the study will be disseminated by peer-review publication. In PROSPERO CRD42013006702, 11. December 2013. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. A meta-BACI approach forevaluating management intervention on chronic wasting disease in mule deer

    OpenAIRE

    Conner, Mary M.; Miller, Michael W.; Ebinger, Michael R.; Burnham, Kenneth P.

    2007-01-01

    Advances in acquiring and analyzing the spatial attributes of data have greatly enhanced the potential utility of wildlife disease surveillance data for addressing problems of ecological or economic importance. We present an approach for using wildlife disease surveillance data to identify areas for (or of) intervention, to spatially delineate paired treatment and control areas, and then to analyze these nonrandomly selected sites in a meta-analysis framework via before–after–control–impact (BAC...

  17. Meta-control of combustion performance with a data mining approach

    Science.gov (United States)

    Song, Zhe

    Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermodynamics have limitations in finding optimal operating regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contains rich information about the process and, to some extent, represents a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science which finds patterns or models in large data sets. It has found many successful applications in business marketing, medical and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing the combustion performance. However, the philosophy, methods and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, so obtaining an accurate process model is nontrivial. The other is that a high-fidelity process model is usually highly nonlinear, so solving the optimization problem requires efficient heuristics. This dissertation sets out to address these two major challenges. The major contribution of this four-year research is a data-driven solution for optimizing the combustion process, in which a process model or knowledge is identified from the process data and optimization is then executed by evolutionary algorithms that search for optimal operating regions.
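
    The data-driven loop described above (fit a surrogate model to logged process data, then search it with an evolutionary algorithm) can be sketched as follows; the process variables, bounds and response surface below are hypothetical stand-ins, not the dissertation's actual combustion models.

```python
# Illustrative sketch only: surrogate model fit from logged data, then (mu + lambda) search.
import numpy as np

rng = np.random.default_rng(7)

# Logged process data: two controllable settings and the observed efficiency (synthetic).
X = rng.uniform([0.8, 150.0], [1.4, 300.0], size=(500, 2))     # air/fuel ratio, mill speed
y = 0.9 - 0.5 * (X[:, 0] - 1.05)**2 - 1e-6 * (X[:, 1] - 220)**2 + rng.normal(0, 0.01, 500)

# Surrogate: quadratic response surface fitted by least squares.
def features(X):
    return np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
predict = lambda X: features(np.atleast_2d(X)) @ coef

# (mu + lambda) evolutionary search over the operating region.
lo, hi = np.array([0.8, 150.0]), np.array([1.4, 300.0])
pop = rng.uniform(lo, hi, size=(20, 2))
for _ in range(100):
    children = np.clip(pop + rng.normal(0, [0.02, 5.0], pop.shape), lo, hi)
    both = np.vstack([pop, children])
    pop = both[np.argsort(-predict(both))][:20]                  # keep the best 20

best = pop[0]
print(f"suggested setpoint: air/fuel={best[0]:.2f}, mill speed={best[1]:.0f}, "
      f"predicted efficiency={predict(best)[0]:.3f}")
```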

  18. Uptake of systematic reviews and meta-analyses based on individual participant data in clinical practice guidelines: descriptive study

    NARCIS (Netherlands)

    Vale, C.L.; Rydzewska, L.H.; Rovers, M.M.; Emberson, J.R.; Gueyffier, F.; Stewart, L.A.

    2015-01-01

    OBJECTIVE: To establish the extent to which systematic reviews and meta-analyses of individual participant data (IPD) are being used to inform the recommendations included in published clinical guidelines. DESIGN: Descriptive study. SETTING: Database maintained by the Cochrane IPD Meta-analysis

  19. Anterior approach versus posterior approach for Pipkin I and II femoral head fractures: A systemic review and meta-analysis.

    Science.gov (United States)

    Wang, Chen-guang; Li, Yao-min; Zhang, Hua-feng; Li, Hui; Li, Zhi-jun

    2016-03-01

    We performed a meta-analysis, pooling the results from controlled clinical trials to compare the efficiency of anterior and posterior surgical approaches to Pipkin I and II fractures of the femoral head. Potential academic articles were identified from the Cochrane Library, Medline (1966-2015.5), PubMed (1966-2015.5), Embase (1980-2015.5) and ScienceDirect (1966-2015.5) databases. Gray studies were identified from the references of the included literature. Pooling of the data was performed and analyzed by RevMan software, version 5.1. Five case-control trials (CCTs) met the inclusion criteria. There were significant differences in the incidence of heterotopic ossification (HO) between the approaches, but no significant differences were found between the two groups regarding functional outcomes of the hip, general postoperative complications, osteonecrosis of the femoral head or post-traumatic arthritis. The present meta-analysis indicated that the posterior approach decreased the risk of heterotopic ossification compared with the anterior approach for the treatment of Pipkin I and II femoral head fractures. No other complications were related to anterior and posterior approaches. Future high-quality randomized, controlled trials (RCTs) are needed to determine the optimal surgical approach and to predict other postoperative complications. III. Copyright © 2016 IJS Publishing Group Limited. Published by Elsevier Ltd. All rights reserved.

  20. A web-based approach to data imputation

    KAUST Repository

    Li, Zhixu

    2013-10-24

    In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, Webput utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, for improving the accuracy and efficiency of WebPut. Moreover, several optimization techniques are also proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple-level and the database-level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques. © 2013 Springer Science+Business Media New York.
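
    The greedy scheduling idea can be sketched independently of the web-extraction machinery. In the toy below the retrieval step is replaced by a lookup table with placeholder values, and the confidence score is a simple assumed function of how many known attribute values a query uses; none of this is the actual WebPut implementation.

```python
# Toy sketch of greedy, confidence-ordered imputation (web extraction faked with a lookup table).
incomplete = [
    {"title": "WebPut", "venue": None, "year": 2013},
    {"title": "NetMetaXL", "venue": None, "year": None},
]

def candidate_queries(record, attr):
    """Build imputation queries from the record's known attributes; confidence grows
    with the number of known values the query uses (an assumed scoring rule)."""
    known = {k: v for k, v in record.items() if v is not None and k != attr}
    return [{"attr": attr, "terms": dict(list(known.items())[:k]), "confidence": k / len(record)}
            for k in range(1, len(known) + 1)]

def run_query(query):                          # stand-in for the web extraction step
    fake_web = {("WebPut", "venue"): "venue-from-web", ("NetMetaXL", "venue"): "venue-from-web-2",
                ("NetMetaXL", "year"): 2014}
    return fake_web.get((query["terms"].get("title"), query["attr"]))

# Greedy schedule: always issue the currently most confident query first, so the
# values it fills in can strengthen later queries.
pending = [(r, a) for r in incomplete for a, v in r.items() if v is None]
while pending:
    best_record, best_attr = max(
        pending, key=lambda ra: max(q["confidence"] for q in candidate_queries(*ra)))
    query = max(candidate_queries(best_record, best_attr), key=lambda q: q["confidence"])
    best_record[best_attr] = run_query(query)
    pending.remove((best_record, best_attr))

print(incomplete)
```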

  1. Modeling soil evaporation efficiency in a range of soil and atmospheric conditions using a meta-analysis approach

    OpenAIRE

    Merlin , O; Stefan , V ,; Amazirh , A; Chanzy , A; Ceschia , E; Er-Raki , S; Gentine , P; Tallec , T; Ezzahar , J; Bircher , S; Beringer , J; Khabba , S

    2016-01-01

    A meta-analysis data-driven approach is developed to represent the soil evaporative efficiency (SEE), defined as the ratio of actual to potential soil evaporation. The new model is tested across a bare soil database composed of more than 30 sites around the world, a clay fraction range of 0.02-0.56, a sand fraction range of 0.05-0.92, and about 30,000 acquisition times. SEE is modeled using a soil resistance ($r_{ss}$) formulation based on surface soil moisture ($\theta...

  2. Testing moderation in network meta-analysis with individual participant data.

    Science.gov (United States)

    Dagne, Getachew A; Brown, C Hendricks; Howe, George; Kellam, Sheppard G; Liu, Lei

    2016-07-10

    Meta-analytic methods for combining data from multiple intervention trials are commonly used to estimate the effectiveness of an intervention. They can also be extended to study comparative effectiveness, testing which of several alternative interventions is expected to have the strongest effect. This often requires network meta-analysis (NMA), which combines trials involving direct comparison of two interventions within the same trial and indirect comparisons across trials. In this paper, we extend existing network methods for main effects to examining moderator effects, allowing for tests of whether intervention effects vary for different populations or when employed in different contexts. In addition, we study how the use of individual participant data may increase the sensitivity of NMA for detecting moderator effects, as compared with aggregate data NMA that employs study-level effect sizes in a meta-regression framework. A new NMA diagram is proposed. We also develop a generalized multilevel model for NMA that takes into account within-trial and between-trial heterogeneity and can include participant-level covariates. Within this framework, we present definitions of homogeneity and consistency across trials. A simulation study based on this model is used to assess effects on power to detect both main and moderator effects. Results show that power to detect moderation is substantially greater when applied to individual participant data as compared with study-level effects. We illustrate the use of this method by applying it to data from a classroom-based randomized study that involved two sub-trials, each comparing interventions that were contrasted with separate control groups. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Mindfulness-Based Approaches in the Treatment of Disordered Gambling: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Maynard, Brandy R.; Wilson, Alyssa N.; Labuzienski, Elizabeth; Whiting, Seth W.

    2018-01-01

    Background and Aims: To examine the effects of mindfulness-based interventions on gambling behavior and symptoms, urges, and financial outcomes. Method: Systematic review and meta-analytic procedures were employed to search, select, code, and analyze studies conducted between 1980 and 2014, assessing the effects of mindfulness-based interventions…

  4. Document Clustering Approach for Meta Search Engine

    Science.gov (United States)

    Kumar, Naresh, Dr.

    2017-08-01

    The size of the WWW is growing exponentially with every change in technology. This results in a huge amount of information and long lists of URLs, and it is not possible to visit each page individually. If page-ranking algorithms are used properly, the user's search space can be restricted to a few pages of search results. However, the available literature shows that no single search system can provide high-quality results across all domains. This paper provides a solution to this problem by introducing a new meta search engine that determines the relevance of a query to each web page and clusters the results accordingly. The proposed approach reduces user effort and improves both the quality of the results and the performance of the meta search engine.

  5. A meta-frontier approach for causal inference in productivity analysis

    DEFF Research Database (Denmark)

    Henningsen, Arne; Mpeta, Daniel F.; Adem, Anwar S.

    (2012) and create a meta-frontier in order to estimate the effects of participation on the farms’ meta-technology ratio, their group technical efficiency, and their meta-technology technical efficiency. The empirical analysis uses a cross-sectional data set from sunflower farmers in Tanzania, where some...... by the contractor’s provision of (additional) extension service and seeds of high-yielding varieties to the contract farmers....

  6. Music-based interventions to reduce internalizing symptoms in children and adolescents: A meta-analysis.

    Science.gov (United States)

    Geipel, Josephine; Koenig, Julian; Hillecke, Thomas K; Resch, Franz; Kaess, Michael

    2018-01-01

    Existing systematic reviews provide evidence that music therapy is an effective intervention in the treatment of children and adolescents with psychopathology. The objective of the present review was to systematically review and quantify the effects of music-based interventions in reducing internalizing symptoms (i.e., depression and anxiety) in children and adolescents using a meta-analytical approach. Databases and journals were systematically screened for studies eligible for inclusion in meta-analysis on the effects of music-based interventions in reducing internalizing symptoms. A random-effect meta-analysis using standardized mean differences (SMD) was conducted. Five studies were included. Analysis of data from (randomized) controlled trials yielded a significant main effect (Hedges' g = -0.73; 95%CI [-1.42;-0.04], Z = 2.08, p = 0.04, k = 5), indicating a greater reduction of internalizing symptoms in youth receiving music-based interventions (n = 100) compared to different control group interventions (n = 95). The existing evidence is limited to studies of low power and methodological quality. Included studies were highly heterogeneous with respect to the nature of the intervention, the measurements applied, the samples studied, and the study design. Findings indicate that music-based interventions may be efficient in reducing the severity of internalizing symptoms in children and adolescents. While these results are encouraging with respect to the application of music-based intervention, rigorous research is necessary to replicate existing findings and provide a broader base of evidence. More research adopting well controlled study designs of high methodological quality is needed. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A resampling-based meta-analysis for detection of differential gene expression in breast cancer

    International Nuclear Information System (INIS)

    Gur-Dedeoglu, Bala; Konu, Ozlen; Kir, Serkan; Ozturk, Ahmet Rasit; Bozkurt, Betul; Ergul, Gulusan; Yulug, Isik G

    2008-01-01

    Accuracy in the diagnosis of breast cancer and classification of cancer subtypes has improved over the years with the development of well-established immunohistopathological criteria. More recently, diagnostic gene-sets at the mRNA expression level have been tested as better predictors of disease state. However, breast cancer is heterogeneous in nature; thus extraction of differentially expressed gene-sets that stably distinguish normal tissue from various pathologies poses challenges. Meta-analysis of high-throughput expression data using a collection of statistical methodologies leads to the identification of robust tumor gene expression signatures. A resampling-based meta-analysis strategy, which involves the use of resampling and application of distribution statistics in combination to assess the degree of significance in differential expression between sample classes, was developed. Two independent microarray datasets that contain normal breast, invasive ductal carcinoma (IDC), and invasive lobular carcinoma (ILC) samples were used for the meta-analysis. Expression of the genes, selected from the gene list for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes were tested on 10 independent primary IDC samples and matched non-tumor controls by real-time qRT-PCR. Other existing breast cancer microarray datasets were used in support of the resampling-based meta-analysis. The two independent microarray studies were found to be comparable, although differing in their experimental methodologies (Pearson correlation coefficient, R = 0.9389 and R = 0.8465 for ductal and lobular samples, respectively). The resampling-based meta-analysis has led to the identification of a highly stable set of genes for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes. The expression results of the selected genes obtained through real-time qRT-PCR supported the meta-analysis results. The
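
    A resampling scheme of the general kind described here can be sketched as follows (a hedged illustration, not the authors' exact procedure): a per-gene statistic is combined across datasets and referred to a null distribution obtained by permuting class labels within each dataset.

```python
# Hedged sketch of a resampling-based meta-analysis for differential expression (toy data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def toy_dataset(n_normal, n_tumor, n_genes=200):
    expr = rng.normal(size=(n_normal + n_tumor, n_genes))
    expr[n_normal:, :5] += 1.0                      # genes 0-4 are truly differential
    labels = np.array([0] * n_normal + [1] * n_tumor)
    return expr, labels

datasets = [toy_dataset(15, 20), toy_dataset(12, 25)]   # two independent studies

def combined_stat(datasets):
    """Mean absolute Welch t statistic per gene across datasets."""
    ts = [np.abs(stats.ttest_ind(e[l == 1], e[l == 0], equal_var=False).statistic)
          for e, l in datasets]
    return np.mean(ts, axis=0)

observed = combined_stat(datasets)

n_perm = 500
null = np.empty((n_perm, observed.size))
for b in range(n_perm):
    permuted = [(e, rng.permutation(l)) for e, l in datasets]   # permute labels within each study
    null[b] = combined_stat(permuted)

p_values = (1 + np.sum(null >= observed, axis=0)) / (1 + n_perm)
print("smallest permutation p-values:", np.sort(p_values)[:5])
print("top-ranked genes:", np.argsort(p_values)[:5])
```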

  8. A resampling-based meta-analysis for detection of differential gene expression in breast cancer

    Directory of Open Access Journals (Sweden)

    Ergul Gulusan

    2008-12-01

    Full Text Available Abstract Background Accuracy in the diagnosis of breast cancer and classification of cancer subtypes has improved over the years with the development of well-established immunohistopathological criteria. More recently, diagnostic gene-sets at the mRNA expression level have been tested as better predictors of disease state. However, breast cancer is heterogeneous in nature; thus extraction of differentially expressed gene-sets that stably distinguish normal tissue from various pathologies poses challenges. Meta-analysis of high-throughput expression data using a collection of statistical methodologies leads to the identification of robust tumor gene expression signatures. Methods A resampling-based meta-analysis strategy, which involves the use of resampling and application of distribution statistics in combination to assess the degree of significance in differential expression between sample classes, was developed. Two independent microarray datasets that contain normal breast, invasive ductal carcinoma (IDC), and invasive lobular carcinoma (ILC) samples were used for the meta-analysis. Expression of the genes selected from the gene list for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes was tested on 10 independent primary IDC samples and matched non-tumor controls by real-time qRT-PCR. Other existing breast cancer microarray datasets were used in support of the resampling-based meta-analysis. Results The two independent microarray studies were found to be comparable, although differing in their experimental methodologies (Pearson correlation coefficient, R = 0.9389 and R = 0.8465 for ductal and lobular samples, respectively). The resampling-based meta-analysis has led to the identification of a highly stable set of genes for classification of normal breast samples and breast tumors encompassing both the ILC and IDC subtypes. The expression results of the selected genes obtained through real

  9. MetaGOmics: A Web-Based Tool for Peptide-Centric Functional and Taxonomic Analysis of Metaproteomics Data.

    Science.gov (United States)

    Riffle, Michael; May, Damon H; Timmins-Schiffman, Emma; Mikan, Molly P; Jaschob, Daniel; Noble, William Stafford; Nunn, Brook L

    2017-12-27

    Metaproteomics is the characterization of all proteins being expressed by a community of organisms in a complex biological sample at a single point in time. Applications of metaproteomics range from the comparative analysis of environmental samples (such as ocean water and soil) to microbiome data from multicellular organisms (such as the human gut). Metaproteomics research is often focused on the quantitative functional makeup of the metaproteome and which organisms are making those proteins. That is: What are the functions of the currently expressed proteins? How much of the metaproteome is associated with those functions? And, which microorganisms are expressing the proteins that perform those functions? However, traditional protein-centric functional analysis is greatly complicated by the large size, redundancy, and lack of biological annotations for the protein sequences in the database used to search the data. To help address these issues, we have developed an algorithm and web application (dubbed "MetaGOmics") that automates the quantitative functional (using Gene Ontology) and taxonomic analysis of metaproteomics data and subsequent visualization of the results. MetaGOmics is designed to overcome the shortcomings of traditional proteomics analysis when used with metaproteomics data. It is easy to use, requires minimal input, and fully automates most steps of the analysis, including comparing the functional makeup between samples. MetaGOmics is freely available at https://www.yeastrc.org/metagomics/.
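
    As a toy illustration of the kind of peptide-centric aggregation such a tool performs (this is not the MetaGOmics implementation; the mappings and counts are invented), the sketch below distributes each peptide's spectral counts over the Gene Ontology terms annotated to the proteins that could have produced it and reports each term's share of the assigned counts.

      # Toy sketch of peptide-centric GO aggregation: each peptide's spectral counts are
      # distributed over the GO terms annotated to the proteins it maps to. All mappings
      # and counts are invented; this is not MetaGOmics code.
      from collections import defaultdict

      peptide_counts = {"PEPTIDEA": 12, "PEPTIDEB": 5, "PEPTIDEC": 20}   # spectral counts
      peptide_to_proteins = {"PEPTIDEA": ["P1"], "PEPTIDEB": ["P1", "P2"], "PEPTIDEC": ["P3"]}
      protein_to_go = {"P1": {"GO:0006096"}, "P2": {"GO:0006096", "GO:0055114"}, "P3": {"GO:0055114"}}

      go_counts = defaultdict(float)
      for pep, count in peptide_counts.items():
          # union of GO terms over every protein this peptide maps to
          terms = set().union(*(protein_to_go[p] for p in peptide_to_proteins[pep]))
          for term in terms:
              go_counts[term] += count / len(terms)   # split the count evenly across terms

      total = sum(go_counts.values())
      for term, count in sorted(go_counts.items()):
          print(f"{term}: {100 * count / total:.1f}% of assigned spectral counts")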

  10. MetaGOmics: A Web-Based Tool for Peptide-Centric Functional and Taxonomic Analysis of Metaproteomics Data

    Directory of Open Access Journals (Sweden)

    Michael Riffle

    2017-12-01

    Full Text Available Metaproteomics is the characterization of all proteins being expressed by a community of organisms in a complex biological sample at a single point in time. Applications of metaproteomics range from the comparative analysis of environmental samples (such as ocean water and soil) to microbiome data from multicellular organisms (such as the human gut). Metaproteomics research is often focused on the quantitative functional makeup of the metaproteome and which organisms are making those proteins. That is: What are the functions of the currently expressed proteins? How much of the metaproteome is associated with those functions? And, which microorganisms are expressing the proteins that perform those functions? However, traditional protein-centric functional analysis is greatly complicated by the large size, redundancy, and lack of biological annotations for the protein sequences in the database used to search the data. To help address these issues, we have developed an algorithm and web application (dubbed “MetaGOmics”) that automates the quantitative functional (using Gene Ontology) and taxonomic analysis of metaproteomics data and subsequent visualization of the results. MetaGOmics is designed to overcome the shortcomings of traditional proteomics analysis when used with metaproteomics data. It is easy to use, requires minimal input, and fully automates most steps of the analysis, including comparing the functional makeup between samples. MetaGOmics is freely available at https://www.yeastrc.org/metagomics/.

  11. Spatial Data Integration Using Ontology-Based Approach

    Science.gov (United States)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

    In today's world, the need for spatial data has become so crucial for various organizations that many of them have begun to produce spatial data for their own purposes. In some circumstances, the need to obtain integrated data in real time requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in this situation is the high degree of heterogeneity between different organizations' data. To address this issue, we introduce an ontology-based method that provides sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps: in the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and the ontology of each database is created. In the second step, the relative ontology is inserted into the database and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of the data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the data integration process easy; the data remain unchanged, so existing legacy applications can still be used.
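
    A minimal sketch of the ontology-mapping idea described above, under invented names: records from two databases with different schemas are rewritten into a shared ontology vocabulary so they can be merged. The source names, column names, and ontology concepts are all hypothetical.

      # Minimal sketch of ontology-based mediation between two spatial databases with
      # different schemas. Sources, columns, and ontology concepts are hypothetical.
      MAPPINGS = {
          "db_fire":  {"station_name": "onto:FacilityName", "x": "onto:Longitude", "y": "onto:Latitude"},
          "db_water": {"name": "onto:FacilityName", "lon": "onto:Longitude", "lat": "onto:Latitude"},
      }

      def to_ontology(source, row):
          """Rewrite a source-specific record into the shared ontology vocabulary."""
          return {MAPPINGS[source][col]: value for col, value in row.items() if col in MAPPINGS[source]}

      records = [
          ("db_fire",  {"station_name": "Station 12", "x": 51.389, "y": 35.689}),
          ("db_water", {"name": "Pump 3", "lon": 51.421, "lat": 35.701}),
      ]

      integrated = [to_ontology(source, row) for source, row in records]
      print(integrated)   # both sources now describe their records in the same terms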

  12. SPATIAL DATA INTEGRATION USING ONTOLOGY-BASED APPROACH

    Directory of Open Access Journals (Sweden)

    S. Hasani

    2015-12-01

    Full Text Available In today's world, the need for spatial data has become so crucial for various organizations that many of them have begun to produce spatial data for their own purposes. In some circumstances, the need to obtain integrated data in real time requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in this situation is the high degree of heterogeneity between different organizations' data. To address this issue, we introduce an ontology-based method that provides sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps: in the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and the ontology of each database is created. In the second step, the relative ontology is inserted into the database and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of the data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the data integration process easy; the data remain unchanged, so existing legacy applications can still be used.

  13. Fast and accurate taxonomic assignments of metagenomic sequences using MetaBin.

    Directory of Open Access Journals (Sweden)

    Vineet K Sharma

    Full Text Available Taxonomic assignment of sequence reads is a challenging task in metagenomic data analysis, for which the present methods mainly use either composition- or homology-based approaches. Though the homology-based methods are more sensitive and accurate, they suffer primarily from the time needed to generate the Blast alignments. We developed the MetaBin program and web server for better homology-based taxonomic assignments using an ORF-based approach. By implementing Blat as the faster alignment method in place of Blastx, the analysis time has been reduced by severalfold. It is benchmarked using both simulated and real metagenomic datasets, and can be used for both single and paired-end sequence reads of varying lengths (≥45 bp). To our knowledge, MetaBin is the only available program that can be used for the taxonomic binning of short reads (<100 bp) with high accuracy and high sensitivity using a homology-based approach. The MetaBin web server can be used to carry out the taxonomic analysis, by either submitting reads or Blastx output. It provides several options including construction of taxonomic trees, creation of a composition chart, functional analysis using COGs, and comparative analysis of multiple metagenomic datasets. The MetaBin web server and a standalone version for high-throughput analysis are available freely at http://metabin.riken.jp/.
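
    Homology-based binners of this kind typically have to reconcile multiple alignment hits per read, for example by walking up the taxonomy to the lowest common ancestor of the best hits. The sketch below shows that generic reconciliation step on toy lineages; it is not MetaBin's actual algorithm.

      # Generic lowest-common-ancestor (LCA) reconciliation of homology hits, as a
      # stand-in for the kind of logic a homology-based binner applies to several
      # alignments per read. Lineages are toy data; this is not MetaBin's implementation.
      def lowest_common_ancestor(lineages):
          """Longest shared prefix of several taxonomic lineages (root to leaf)."""
          shared = []
          for ranks in zip(*lineages):
              if len(set(ranks)) == 1:
                  shared.append(ranks[0])
              else:
                  break
          return shared

      # Lineages of the top alignment hits for one metagenomic read
      hits = [
          ["Bacteria", "Proteobacteria", "Gammaproteobacteria", "Enterobacterales", "Escherichia"],
          ["Bacteria", "Proteobacteria", "Gammaproteobacteria", "Enterobacterales", "Salmonella"],
          ["Bacteria", "Proteobacteria", "Gammaproteobacteria", "Pseudomonadales", "Pseudomonas"],
      ]

      print(" > ".join(lowest_common_ancestor(hits)))   # Bacteria > Proteobacteria > Gammaproteobacteria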

  14. Multiple Score Comparison: a network meta-analysis approach to comparison and external validation of prognostic scores

    Directory of Open Access Journals (Sweden)

    Sarah R. Haile

    2017-12-01

    Full Text Available Abstract Background Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Methods Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC), which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. Results We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. Conclusions We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties

  15. Big data analytics in immunology: a knowledge-based approach.

    Science.gov (United States)

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into an analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflows.

  16. Big Data Analytics in Immunology: A Knowledge-Based Approach

    Directory of Open Access Journals (Sweden)

    Guang Lan Zhang

    2014-01-01

    Full Text Available With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into an analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflows.

  17. Meta-analysis of randomized clinical trials in the era of individual patient data sharing.

    Science.gov (United States)

    Kawahara, Takuya; Fukuda, Musashi; Oba, Koji; Sakamoto, Junichi; Buyse, Marc

    2018-06-01

    Individual patient data (IPD) meta-analysis is considered to be a gold standard when the results of several randomized trials are combined. Recent initiatives on sharing IPD from clinical trials offer unprecedented opportunities for using such data in IPD meta-analyses. First, we discuss the evidence generated and the benefits obtained by a long-established prospective IPD meta-analysis in early breast cancer. Next, we discuss a data-sharing system that has been adopted by several pharmaceutical sponsors. We review a number of retrospective IPD meta-analyses that have already been proposed using this data-sharing system. Finally, we discuss the role of data sharing in IPD meta-analysis in the future. Treatment effects can be more reliably estimated in both types of IPD meta-analyses than with summary statistics extracted from published papers. Specifically, with rich covariate information available on each patient, prognostic and predictive factors can be identified or confirmed. Also, when several endpoints are available, surrogate endpoints can be assessed statistically. Although there are difficulties in conducting, analyzing, and interpreting retrospective IPD meta-analysis utilizing the currently available data-sharing systems, data sharing will play an important role in IPD meta-analysis in the future.

  18. The Effect of the Process Approach on Students’ Writing Success: A Meta-Analysis

    OpenAIRE

    Kansızoğlu, Hasan Basri; Bayrak Cömert, Özlem

    2017-01-01

    The aim of this study is to identify, by merging the results of a large number of studies conducted in the related literature, at which level the “writing as a process” approach affects students’ writing success. Additionally, this paper investigates whether the writing success level differs depending on certain study characteristics. Meta-analysis was preferred as the research method in this study and, among the studies associated with process-based writing practice, only the results...

  19. Variables As Currency: Linking Meta-Analysis Research and Data Paths in Sciences

    Directory of Open Access Journals (Sweden)

    Hua Qin

    2014-11-01

    Full Text Available Meta-analyses are studies that bring together data or results from multiple independent studies to produce new and over-arching findings. Current data curation systems only partially support meta-analytic research. Some important meta-analytic tasks, such as the selection of relevant studies for review and the integration of research datasets or findings, are not well supported in current data curation systems. To design tools and services that more fully support meta-analyses, we need a better understanding of meta-analytic research. This includes an understanding of both the practices of researchers who perform the analyses and the characteristics of the individual studies that are brought together. In this study, we make an initial contribution to filling this gap by developing a conceptual framework linking meta-analyses with data paths represented in published articles selected for the analysis. The framework focuses on key variables that represent primary/secondary datasets or derived socio-ecological data, contexts of use, and the data transformations that are applied. We introduce the notion of using variables and their relevant information (e.g., metadata and variable relationships) as a type of currency to facilitate synthesis of findings across individual studies and leverage larger bodies of relevant source data produced in small science research. Handling variables in this manner provides an equalizing factor between data from otherwise disparate data-producing communities. We conclude with implications for exploring data integration and synthesis issues as well as system development.

  20. Learning-based meta-algorithm for MRI brain extraction.

    Science.gov (United States)

    Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2011-01-01

    Multiple-segmentation-and-fusion method has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the subject through exemplars. We further develop a level-set based fusion method to combine multiple candidate extractions together with a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method is able to produce more accurate and robust brain extraction results, with a Jaccard index of 0.956 ± 0.010 over a total of 340 subjects under 6-fold cross-validation, compared to those obtained by BET and BSE even with their best parameter combinations.

  1. Meta-analysis of individual registry results enhances international registry collaboration.

    Science.gov (United States)

    Paxton, Elizabeth W; Mohaddes, Maziar; Laaksonen, Inari; Lorimer, Michelle; Graves, Stephen E; Malchau, Henrik; Namba, Robert S; Kärrholm, John; Rolfson, Ola; Cafri, Guy

    2018-03-28

    Background and purpose - Although common in medical research, meta-analysis has not been widely adopted in registry collaborations. A meta-analytic approach in which each registry conducts a standardized analysis on its own data followed by a meta-analysis to calculate a weighted average of the estimates allows collaboration without sharing patient-level data. The value of meta-analysis as an alternative to individual patient data analysis is illustrated in this study by comparing the risk of revision of porous tantalum cups versus other uncemented cups in primary total hip arthroplasties from Sweden, Australia, and a US registry (2003-2015). Patients and methods - For both individual patient data analysis and meta-analysis approaches a Cox proportional hazard model was fit for time to revision, comparing porous tantalum (n = 23,201) with other uncemented cups (n = 128,321). Covariates included age, sex, diagnosis, head size, and stem fixation. In the meta-analysis approach, treatment effect size (i.e., Cox model hazard ratio) was calculated within each registry and a weighted average for the individual registries' estimates was calculated. Results - Patient-level data analysis and meta-analytic approaches yielded the same results with the porous tantalum cups having a higher risk of revision than other uncemented cups (HR (95% CI) 1.6 (1.4-1.7) and HR (95% CI) 1.5 (1.4-1.7), respectively). Adding the US cohort to the meta-analysis led to greater generalizability, increased precision of the treatment effect, and similar findings (HR (95% CI) 1.6 (1.4-1.7)) with increased risk of porous tantalum cups. Interpretation - The meta-analytic technique is a viable option to address privacy, security, and data ownership concerns allowing more expansive registry collaboration, greater generalizability, and increased precision of treatment effects.
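
    The meta-analytic step described above amounts to pooling registry-level log hazard ratios by inverse-variance weighting, so that no patient-level data have to leave a registry. The sketch below illustrates that step with made-up registry estimates; it is not the study's code or data.

      # Sketch of the meta-analytic step: each registry fits its own Cox model and shares
      # only the hazard ratio and its confidence interval; the estimates are then combined
      # by inverse-variance weighting. Registry values below are hypothetical.
      import math

      registry_estimates = [(1.55, 1.30, 1.85), (1.48, 1.22, 1.80), (1.70, 1.35, 2.14)]  # (HR, lower, upper)

      log_hrs, weights = [], []
      for hr, lower, upper in registry_estimates:
          se = (math.log(upper) - math.log(lower)) / (2 * 1.96)   # SE of log HR from the CI width
          log_hrs.append(math.log(hr))
          weights.append(1.0 / se ** 2)

      pooled = sum(w * b for w, b in zip(weights, log_hrs)) / sum(weights)
      pooled_se = math.sqrt(1.0 / sum(weights))
      low, high = math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se)
      print(f"pooled HR = {math.exp(pooled):.2f} (95% CI {low:.2f}-{high:.2f})")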

  2. Development and validation of MIX: comprehensive free software for meta-analysis of causal research data

    Directory of Open Access Journals (Sweden)

    Ikeda Noriaki

    2006-10-01

    Full Text Available Abstract Background Meta-analysis has become a well-known method for synthesis of quantitative data from previously conducted research in applied health sciences. So far, meta-analysis has been particularly useful in evaluating and comparing therapies and in assessing causes of disease. Consequently, the number of software packages that can perform meta-analysis has increased over the years. Unfortunately, it can take a substantial amount of time to get acquainted with some of these programs and most contain little or no interactive educational material. We set out to create and validate an easy-to-use and comprehensive meta-analysis package that would be simple enough programming-wise to remain available as a free download. We specifically aimed at students and researchers who are new to meta-analysis, with important parts of the development oriented towards creating internal interactive tutoring tools and designing features that would facilitate usage of the software as a companion to existing books on meta-analysis. Results We took an unconventional approach and created a program that uses Excel as a calculation and programming platform. The main programming language was Visual Basic, as implemented in Visual Basic 6 and Visual Basic for Applications in Excel 2000 and higher. The development took approximately two years and resulted in the 'MIX' program, which can be downloaded from the program's website free of charge. Next, we set out to validate the MIX output with two major software packages as reference standards, namely STATA (metan, metabias, and metatrim) and Comprehensive Meta-Analysis Version 2. Eight meta-analyses that had been published in major journals were used as data sources. All numerical and graphical results from analyses with MIX were identical to their counterparts in STATA and CMA. The MIX program distinguishes itself from most other programs by the extensive graphical output, the click-and-go (Excel) interface, and the

  3. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data structure logistics, meaningful data transformations, normalization, and proper user interface designs. This chapter starts with a brief review of relational database basics and concentrates on issues associated with curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included. These examples are provided to help readers better understand the potential and importance of sound database design.

  4. Meta-analysis of crowdsourced data compendia suggests pan-disease transcriptional signatures of autoimmunity [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    William W. Lau

    2016-12-01

    Full Text Available Background: The proliferation of publicly accessible large-scale biological data together with increasing availability of bioinformatics tools have the potential to transform biomedical research. Here we report a crowdsourcing Jamboree that explored whether a team of volunteer biologists without formal bioinformatics training could use OMiCC, a crowdsourcing web platform that facilitates the reuse and (meta-)analysis of public gene expression data, to compile and annotate gene expression data, and design comparisons between disease and control sample groups. Methods: The Jamboree focused on several common human autoimmune diseases, including systemic lupus erythematosus (SLE), multiple sclerosis (MS), type I diabetes (DM1), and rheumatoid arthritis (RA), and the corresponding mouse models. Meta-analyses were performed in OMiCC using comparisons constructed by the participants to identify (1) gene expression signatures for each disease (disease versus healthy controls) at the gene expression and biological pathway levels, (2) conserved signatures across all diseases within each species (pan-disease signatures), and (3) conserved signatures between species for each disease and across all diseases (cross-species signatures). Results: A large number of differentially expressed genes were identified for each disease based on meta-analysis, with observed overlap among diseases both within and across species. Gene set/pathway enrichment of upregulated genes suggested conserved signatures (e.g., interferon) across all human and mouse conditions. Conclusions: Our Jamboree exercise provides evidence that when enabled by appropriate tools, a "crowd" of biologists can work together to accelerate the pace by which the increasingly large amounts of public data can be reused and meta-analyzed for generating and testing hypotheses. Our encouraging experience suggests that a similar crowdsourcing approach can be used to explore other biological questions.

  5. Metaprop: a Stata command to perform meta-analysis of binomial data.

    Science.gov (United States)

    Nyaga, Victoria N; Arbyn, Marc; Aerts, Marc

    2014-01-01

    Meta-analyses have become an essential tool in synthesizing evidence on clinical and epidemiological questions derived from a multitude of similar studies assessing the particular issue. Appropriate and accessible statistical software is needed to produce the summary statistic of interest. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. It builds further on the existing Stata procedure metan which is typically used to pool effects (risk ratios, odds ratios, differences of risks or means) but which is also used to pool proportions. Metaprop implements procedures which are specific to binomial data and allows computation of exact binomial and score test-based confidence intervals. It provides appropriate methods for dealing with proportions close to or at the margins where the normal approximation procedures often break down, by use of the binomial distribution to model the within-study variability or by allowing Freeman-Tukey double arcsine transformation to stabilize the variances. Metaprop was applied on two published meta-analyses: 1) prevalence of HPV-infection in women with a Pap smear showing ASC-US; 2) cure rate after treatment for cervical precancer using cold coagulation. The first meta-analysis showed a pooled HPV-prevalence of 43% (95% CI: 38%-48%). In the second meta-analysis, the pooled percentage of cured women was 94% (95% CI: 86%-97%). By using metaprop, no studies with 0% or 100% proportions were excluded from the meta-analysis. Furthermore, study specific and pooled confidence intervals always were within admissible values, contrary to the original publication, where metan was used.
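
    As a hypothetical illustration of the variance-stabilizing transformation mentioned above (a fixed-effect sketch on invented counts, not the Stata command itself), the code below pools proportions on the Freeman-Tukey double arcsine scale; studies with 0% or 100% events pose no problem on this scale. The sin^2 back-transformation used here is cruder than the one metaprop applies.

      # Fixed-effect pooling of proportions on the Freeman-Tukey double arcsine scale,
      # the transformation metaprop offers to stabilize variances. Counts are invented,
      # and the sin^2 back-transformation is a simplification of what metaprop does.
      import math

      studies = [(0, 25), (3, 40), (12, 60), (50, 50)]   # (events, sample size); includes 0% and 100%

      transformed, weights = [], []
      for events, n in studies:
          t = 0.5 * (math.asin(math.sqrt(events / (n + 1))) + math.asin(math.sqrt((events + 1) / (n + 1))))
          transformed.append(t)
          weights.append(4 * n + 2)                      # inverse of Var(t) = 1 / (4n + 2)

      t_bar = sum(w * t for w, t in zip(weights, transformed)) / sum(weights)
      se = math.sqrt(1.0 / sum(weights))
      back = lambda t: math.sin(t) ** 2                  # simplified back-transformation
      print(f"pooled proportion ~ {back(t_bar):.1%} "
            f"(95% CI {back(t_bar - 1.96 * se):.1%}-{back(t_bar + 1.96 * se):.1%})")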

  6. Conducting Meta-Analysis Using SAS

    CERN Document Server

    Arthur, Winfried; Huffcutt, Allen I; Arthur, Winfred

    2001-01-01

    Conducting Meta-Analysis Using SAS reviews the meta-analysis statistical procedure and shows the reader how to conduct one using SAS. It presents and illustrates the use of the PROC MEANS procedure in SAS to perform the data computations called for by the two most commonly used meta-analytic procedures, the Hunter & Schmidt and Glassian approaches. This book serves as both an operational guide and user's manual by describing and explaining the meta-analysis procedures and then presenting the appropriate SAS program code for computing the pertinent statistics. The practical, step-by-step instru

  7. Symmetric or asymmetric oil prices? A meta-analysis approach

    International Nuclear Information System (INIS)

    Perdiguero-García, Jordi

    2013-01-01

    The analysis of price asymmetries in the gasoline market is one of the most widely studied in energy economics. However, the great variation in the outcomes reported makes the drawing of any definitive conclusions difficult. Given this situation, a meta-analysis serves as an excellent tool to discover which characteristics of the various markets analyzed, and which specific features of these studies, might account for these differences. In adopting such an approach, this paper shows how the particular segment of the industry analyzed, the characteristics of the data, the years under review, the type of publication and the introduction of control variables might explain this heterogeneity in results. The paper concludes on these grounds that increased competition may significantly reduce the possibility of occurrence of asymmetric behavior. These results should therefore be taken into consideration in future studies of asymmetries in the oil industry. - Highlights: ► I study price asymmetries in the gasoline industry through a meta-regression analysis. ► The asymmetries are produced mainly in the retail market. ► The asymmetries are less frequent when we analyze recent cases. ► There may be some degree of publication bias. ► The level of competition may explain the patterns of asymmetry
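
    A meta-regression of this kind regresses each study's asymmetry estimate on study-level characteristics (market segment, data frequency, publication type, and so on), typically with inverse-variance weights. The sketch below shows the form of such a model on fabricated study-level data; the variables and values are not from the paper.

      # Illustrative meta-regression: each row is one primary study, the response is that
      # study's estimated asymmetry effect, and the moderators describe the study. All
      # numbers are fabricated; only the form of the analysis mirrors the record above.
      import numpy as np
      import statsmodels.api as sm

      effect   = np.array([0.40, 0.10, 0.35, 0.05, 0.25, 0.15])   # per-study asymmetry estimates
      variance = np.array([0.04, 0.02, 0.05, 0.01, 0.03, 0.02])   # their sampling variances
      retail   = np.array([1, 0, 1, 0, 1, 0])                     # 1 = retail segment studied
      weekly   = np.array([0, 1, 0, 1, 1, 0])                     # 1 = weekly (vs monthly) data

      X = sm.add_constant(np.column_stack([retail, weekly]))
      fit = sm.WLS(effect, X, weights=1.0 / variance).fit()       # inverse-variance weights
      print(fit.params)   # intercept plus shifts in asymmetry associated with each moderator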

  8. Meta-learning framework applied in bioinformatics inference system design.

    Science.gov (United States)

    Arredondo, Tomás; Ormazábal, Wladimir

    2015-01-01

    This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow where the user provides feedback with final classification decisions, which are stored in conjunction with analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several optimisation methods with various parameter settings. The resulting inference systems were also compared with other standard classification methods and showed accurate prediction capabilities.

  9. ArrayWiki: an enabling technology for sharing public microarray data repositories and meta-analyses

    Science.gov (United States)

    Stokes, Todd H; Torrance, JT; Li, Henry; Wang, May D

    2008-01-01

    Background A survey of microarray databases reveals that most of the repository contents and data models are heterogeneous (i.e., data obtained from different chip manufacturers), and that the repositories provide only basic biological keywords linking to PubMed. As a result, it is difficult to find datasets using research context or analysis parameters information beyond a few keywords. For example, to reduce the "curse-of-dimension" problem in microarray analysis, the number of samples is often increased by merging array data from different datasets. Knowing chip data parameters such as pre-processing steps (e.g., normalization, artefact removal, etc.), and knowing any previous biological validation of the dataset is essential due to the heterogeneity of the data. However, most of the microarray repositories do not have meta-data information in the first place, and do not have a mechanism to add or insert this information. Thus, there is a critical need to create "intelligent" microarray repositories that (1) enable update of meta-data with the raw array data, and (2) provide standardized archiving protocols to minimize bias from the raw data sources. Results To address the problems discussed, we have developed a community maintained system called ArrayWiki that unites disparate meta-data of microarray meta-experiments from multiple primary sources with four key features. First, ArrayWiki provides a user-friendly knowledge management interface in addition to a programmable interface using standards developed by Wikipedia. Second, ArrayWiki includes automated quality control processes (caCORRECT) and novel visualization methods (BioPNG, Gel Plots), which provide extra information about data quality unavailable in other microarray repositories. Third, it provides a user-curation capability through the familiar Wiki interface. Fourth, ArrayWiki provides users with simple text-based searches across all experiment meta-data, and exposes data to search engine crawlers

  10. Game-based digital interventions for depression therapy: a systematic review and meta-analysis.

    Science.gov (United States)

    Li, Jinhui; Theng, Yin-Leng; Foo, Schubert

    2014-08-01

    The aim of this study was to review the existing literature on game-based digital interventions for depression systematically and examine their effectiveness through a meta-analysis of randomized controlled trials (RCTs). Database searching was conducted using specific search terms and inclusion criteria. A standard meta-analysis was also conducted of available RCT studies with a random effects model. The standardized mean difference (Cohen's d) was used to calculate the effect size of each study. Nineteen studies were included in the review, and 10 RCTs (eight studies) were included in the meta-analysis. Four types of game interventions were identified (psycho-education and training, virtual reality exposure therapy, exercising, and entertainment), with various types of support delivered and populations targeted. The meta-analysis revealed a moderate effect size of the game interventions for depression therapy at posttreatment (d = -0.47 [95% CI -0.69 to -0.24]). A subgroup analysis showed that interventions based on psycho-education and training had a smaller effect than those based on the other forms, and that self-help interventions yielded better outcomes than supported interventions. A higher effect was achieved when a waiting list was used as the control. The review and meta-analysis support the effectiveness of game-based digital interventions for depression. More large-scale, high-quality RCT studies with sufficient long-term data for treatment evaluation are needed.

  11. Evaluating the Quality of Evidence from a Network Meta-Analysis

    Science.gov (United States)

    Salanti, Georgia; Del Giovane, Cinzia; Chaimani, Anna; Caldwell, Deborah M.; Higgins, Julian P. T.

    2014-01-01

    Systematic reviews that collate data about the relative effects of multiple interventions via network meta-analysis are highly informative for decision-making purposes. A network meta-analysis provides two types of findings for a specific outcome: the relative treatment effect for all pairwise comparisons, and a ranking of the treatments. It is important to consider the confidence with which these two types of results can enable clinicians, policy makers and patients to make informed decisions. We propose an approach to determining confidence in the output of a network meta-analysis. Our proposed approach is based on methodology developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group for pairwise meta-analyses. The suggested framework for evaluating a network meta-analysis acknowledges (i) the key role of indirect comparisons; (ii) the contributions of each piece of direct evidence to the network meta-analysis estimates of effect size; (iii) the importance of the transitivity assumption to the validity of network meta-analysis; and (iv) the possibility of disagreement between direct evidence and indirect evidence. We apply our proposed strategy to a systematic review comparing topical antibiotics without steroids for chronically discharging ears with underlying eardrum perforations. The proposed framework can be used to determine confidence in the results from a network meta-analysis. Judgements about evidence from a network meta-analysis can be different from those made about evidence from pairwise meta-analyses. PMID:24992266

  12. Gene-based meta-analysis of genome-wide association studies implicates new loci involved in obesity

    DEFF Research Database (Denmark)

    Hägg, Sara; Ganna, Andrea; Van Der Laan, Sander W

    2015-01-01

    ) approach to assign variants to genes and to calculate gene-based P-values based on simulations. The VEGAS method was applied to each cohort separately before a gene-based meta-analysis was performed. In Stage 1, two known (FTO and TMEM18) and six novel (PEX2, MTFR2, SSFA2, IARS2, CEP295 and TXNDC12) loci...

  13. Simulation-based training for nurses: Systematic review and meta-analysis.

    Science.gov (United States)

    Hegland, Pål A; Aarlie, Hege; Strømme, Hilde; Jamtvedt, Gro

    2017-07-01

    Simulation-based training is a widespread strategy to improve health-care quality. However, its effect on registered nurses has previously not been established in systematic reviews. The aim of this systematic review is to evaluate effect of simulation-based training on nurses' skills and knowledge. We searched CDSR, DARE, HTA, CENTRAL, CINAHL, MEDLINE, Embase, ERIC, and SveMed+ for randomised controlled trials (RCT) evaluating effect of simulation-based training among nurses. Searches were completed in December 2016. Two reviewers independently screened abstracts and full-text, extracted data, and assessed risk of bias. We compared simulation-based training to other learning strategies, high-fidelity simulation to other simulation strategies, and different organisation of simulation training. Data were analysed through meta-analysis and narrative syntheses. GRADE was used to assess the quality of evidence. Fifteen RCTs met the inclusion criteria. For the comparison of simulation-based training to other learning strategies on nurses' skills, six studies in the meta-analysis showed a significant, but small effect in favour of simulation (SMD -1.09, CI -1.72 to -0.47). There was large heterogeneity (I² = 85%). For the other comparisons, there was large between-study variation in results. The quality of evidence for all comparisons was graded as low. The effect of simulation-based training varies substantially between studies. Our meta-analysis showed a significant effect of simulation training compared to other learning strategies, but the quality of evidence was low indicating uncertainty. Other comparisons showed inconsistency in results. Based on our findings simulation training appears to be an effective strategy to improve nurses' skills, but further good-quality RCTs with adequate sample sizes are needed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Direct nitrous oxide emissions in Mediterranean climate cropping systems : Emission factors based on a meta-analysis of available measurement data

    NARCIS (Netherlands)

    Cayuela, Maria L.; Aguilera, Eduardo; Sanz-Cobena, Alberto; Adams, Dean C.; Abalos, Diego; Barton, Louise; Ryals, Rebecca; Silver, Whendee L.; Alfaro, Marta A.; Pappa, Valentini A.; Smith, Pete; Garnier, Josette; Billen, Gilles; Bouwman, Lex; Bondeau, Alberte; Lassaletta, Luis

    2017-01-01

    Many recent reviews and meta-analyses of N2O emissions do not include data from Mediterranean studies. In this paper we present a meta-analysis of the N2O emissions from Mediterranean cropping systems, and propose a more robust and reliable regional emission factor (EF) for N2O, distinguishing the

  15. MetaFIND: A feature analysis tool for metabolomics data

    Directory of Open Access Journals (Sweden)

    Cunningham Pádraig

    2008-11-01

    Full Text Available Abstract Background Metabolomics, or metabonomics, refers to the quantitative analysis of all metabolites present within a biological sample and is generally carried out using NMR spectroscopy or Mass Spectrometry. Such analysis produces a set of peaks, or features, indicative of the metabolic composition of the sample and may be used as a basis for sample classification. Feature selection may be employed to improve classification accuracy or aid model explanation by establishing a subset of class discriminating features. Factors such as experimental noise, choice of technique and threshold selection may adversely affect the set of selected features retrieved. Furthermore, the high dimensionality and multi-collinearity inherent within metabolomics data may exacerbate discrepancies between the set of features retrieved and those required to provide a complete explanation of metabolite signatures. Given these issues, the latter in particular, we present the MetaFIND application for 'post-feature selection' correlation analysis of metabolomics data. Results In our evaluation we show how MetaFIND may be used to elucidate metabolite signatures from the set of features selected by diverse techniques over two metabolomics datasets. Importantly, we also show how MetaFIND may augment standard feature selection and aid the discovery of additional significant features, including those which represent novel class discriminating metabolites. MetaFIND also supports the discovery of higher level metabolite correlations. Conclusion Standard feature selection techniques may fail to capture the full set of relevant features in the case of high dimensional, multi-collinear metabolomics data. We show that the MetaFIND 'post-feature selection' analysis tool may aid metabolite signature elucidation, feature discovery and inference of metabolic correlations.

  16. A Visual Analytics Approach for Station-Based Air Quality Data

    Directory of Open Access Journals (Sweden)

    Yi Du

    2016-12-01

    Full Text Available With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.

  17. A Visual Analytics Approach for Station-Based Air Quality Data.

    Science.gov (United States)

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-12-24

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.

  18. Advances in meta-analysis: examples from internal medicine to neurology.

    Science.gov (United States)

    Copetti, Massimiliano; Fontana, Andrea; Graziano, Giusi; Veneziani, Federica; Siena, Federica; Scardapane, Marco; Lucisano, Giuseppe; Pellegrini, Fabio

    2014-01-01

    We review the state of the art in meta-analysis and data pooling following the evolution of the statistical models employed. Starting from a classic definition of meta-analysis of published data, a set of apparent antinomies which characterized the development of the meta-analytic tools are reconciled in dichotomies where the second term represents a possible generalization of the first one. Particular attention is given to the generalized linear mixed models as an overall framework for meta-analysis. Bayesian meta-analysis is discussed as a further possibility of generalization for sensitivity analysis and the use of priors as a data augmentation approach. We provide relevant examples to underline how the need for adequate methods to solve practical issues in specific areas of research have guided the development of advanced methods in meta-analysis. We show how all the advances in meta-analysis naturally merge into the unified framework of generalized linear mixed models and reconcile apparently conflicting approaches. All these complex models can be easily implemented with the standard commercial software available. © 2013 S. Karger AG, Basel.

  19. A Meta-Heuristic Regression-Based Feature Selection for Predictive Analytics

    Directory of Open Access Journals (Sweden)

    Bharat Singh

    2014-11-01

    Full Text Available Selecting an optimal feature subset from a very large number of features in high-dimensional data is an NP-complete problem. Because conventional optimization techniques are unable to tackle large-scale feature selection problems, meta-heuristic algorithms are widely used. In this paper, we propose a particle swarm optimization technique that utilizes regression techniques for feature selection. We then use the selected features to classify the data. Classification accuracy is used as a criterion to evaluate classifier performance, and classification is accomplished through the use of k-nearest neighbour (KNN) and Bayesian techniques. Various high-dimensional data sets are used to evaluate the usefulness of the proposed approach. Results show that our approach gives better results when compared with other conventional feature selection algorithms.
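
    The sketch below illustrates the general idea of swarm-based feature selection on synthetic data: a binary particle swarm searches over feature subsets and each subset is scored by k-nearest-neighbour cross-validation accuracy. It is a simplified, generic binary PSO, not the authors' method; in particular the regression component of their proposal is omitted here and the fitness is plain KNN accuracy.

      # Generic binary PSO over feature subsets with a sigmoid transfer function; subset
      # quality is scored by 3-fold KNN cross-validation on a synthetic dataset.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(42)
      X, y = make_classification(n_samples=200, n_features=40, n_informative=8, random_state=42)

      def fitness(mask):
          if mask.sum() == 0:
              return 0.0
          knn = KNeighborsClassifier(n_neighbors=5)
          return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

      n_particles, n_features, n_iter = 15, X.shape[1], 20
      pos = rng.integers(0, 2, size=(n_particles, n_features))    # binary positions = feature subsets
      vel = rng.normal(0, 1, size=(n_particles, n_features))
      pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_fit.argmax()].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          prob = 1.0 / (1.0 + np.exp(-vel))                       # sigmoid transfer to [0, 1]
          pos = (rng.random(pos.shape) < prob).astype(int)
          fit = np.array([fitness(p) for p in pos])
          improved = fit > pbest_fit
          pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
          gbest = pbest[pbest_fit.argmax()].copy()

      print(f"selected {gbest.sum()} features, CV accuracy ~ {pbest_fit.max():.3f}")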

  20. A non-parametric meta-analysis approach for combining independent microarray datasets: application using two microarray datasets pertaining to chronic allograft nephropathy

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2008-02-01

    Full Text Available Abstract Background With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, often the sample size for each individual microarray experiment is small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with normal functioning allograft. Results The simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. For the application on the two CAN studies, we identified 309 distinct genes that expressed differently in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among those genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predicted results all conformed to their pathologist diagnosed class labels. Conclusion We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been
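
    To show the flavour of a distribution-free combination of independent studies (a hypothetical sketch on synthetic data, not the authors' exact algorithm), the code below ranks genes within each study by a Wilcoxon rank-sum p-value and then combines the per-study ranks by their geometric mean, so that genes consistently differential in both studies rise to the top.

      # Toy non-parametric combination of two studies: rank genes within each study by a
      # rank-sum p-value, then combine ranks across studies. Signal is planted in the
      # first 10 genes so the top of the combined ranking is interpretable.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_genes = 300

      def make_study(n_tumor, n_normal):
          tumor = rng.normal(size=(n_genes, n_tumor))
          tumor[:10] += 1.5                                 # plant signal in the first 10 genes
          return tumor, rng.normal(size=(n_genes, n_normal))

      def per_study_ranks(tumor, normal):
          """Rank genes (1 = most differential) by Wilcoxon rank-sum p-value."""
          pvals = np.array([stats.ranksums(t, n).pvalue for t, n in zip(tumor, normal)])
          return stats.rankdata(pvals)

      ranks1 = per_study_ranks(*make_study(15, 15))
      ranks2 = per_study_ranks(*make_study(12, 10))
      combined = np.sqrt(ranks1 * ranks2)                   # geometric mean of ranks
      print("top candidate genes:", np.argsort(combined)[:10])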

  1. A methodological systematic review of what's wrong with meta-ethnography reporting.

    Science.gov (United States)

    France, Emma F; Ring, Nicola; Thomas, Rebecca; Noyes, Jane; Maxwell, Margaret; Jepson, Ruth

    2014-11-19

    Syntheses of qualitative studies can inform health policy, services and our understanding of patient experience. Meta-ethnography is a systematic seven-phase interpretive qualitative synthesis approach well-suited to producing new theories and conceptual models. However, there are concerns about the quality of meta-ethnography reporting, particularly the analysis and synthesis processes. Our aim was to investigate the application and reporting of methods in recent meta-ethnography journal papers, focusing on the analysis and synthesis process and output. Methodological systematic review of health-related meta-ethnography journal papers published from 2012-2013. We searched six electronic databases, Google Scholar and Zetoc for papers using key terms including 'meta-ethnography.' Two authors independently screened papers by title and abstract with 100% agreement. We identified 32 relevant papers. Three authors independently extracted data and all authors analysed the application and reporting of methods using content analysis. Meta-ethnography was applied in diverse ways, sometimes inappropriately. In 13% of papers the approach did not suit the research aim. In 66% of papers reviewers did not follow the principles of meta-ethnography. The analytical and synthesis processes were poorly reported overall. In only 31% of papers reviewers clearly described how they analysed conceptual data from primary studies (phase 5, 'translation' of studies) and in only one paper (3%) reviewers explicitly described how they conducted the analytic synthesis process (phase 6). In 38% of papers we could not ascertain if reviewers had achieved any new interpretation of primary studies. In over 30% of papers seminal methodological texts which could have informed methods were not cited. We believe this is the first in-depth methodological systematic review of meta-ethnography conduct and reporting. Meta-ethnography is an evolving approach. Current reporting of methods, analysis and

  2. Individual-based versus aggregate meta-analysis in multi-database studies of pregnancy outcomes

    DEFF Research Database (Denmark)

    Selmer, Randi; Haglund, Bengt; Furu, Kari

    2016-01-01

    Purpose: Compare analyses of a pooled data set on the individual level with aggregate meta-analysis in a multi-database study. Methods: We reanalysed data on 2.3 million births in a Nordic register based cohort study. We compared estimated odds ratios (OR) for the effect of selective serotonin reuptake inhibitors (SSRI) and venlafaxine use in pregnancy on any cardiovascular birth defect and the rare outcome right ventricular outflow tract obstructions (RVOTO). Common covariates included maternal age, calendar year, birth order, maternal diabetes, and co-medication. Additional covariates were... ... covariates in the pooled data set, and 1.53 (1.19–1.96) after country-optimized adjustment. Country-specific adjusted analyses at the substance level were not possible for RVOTO. Conclusion: Results of fixed effects meta-analysis and individual-based analyses of a pooled dataset were similar in this study...

  3. Individualised medicine from the perspectives of patients using complementary therapies: a meta-ethnography approach.

    Science.gov (United States)

    Franzel, Brigitte; Schwiegershausen, Martina; Heusser, Peter; Berger, Bettina

    2013-06-03

    Personalised (or individualised) medicine in the days of genetic research refers to molecular biologic specifications in individuals and not to a response to individual patient needs in the sense of person-centred medicine. Studies suggest that patients often wish for authentically person-centred care and personal physician-patient interactions, and that they therefore choose Complementary and Alternative medicine (CAM) as a possibility to complement standard care and ensure a patient-centred approach. Therefore, to build on the findings documented in these qualitative studies, we investigated the various concepts of individualised medicine inherent in patients' reasons for using CAM. We used the technique of meta-ethnography, following a three-stage approach: (1) A comprehensive systematic literature search of 67 electronic databases and appraisal of eligible qualitative studies related to patients' reasons for seeking CAM was carried out. Eligibility for inclusion was determined using defined criteria. (2) A meta-ethnographic study was conducted according to Noblit and Hare's method for translating key themes in patients' reasons for using CAM. (3) A line-of-argument approach was used to synthesize and interpret key concepts associated with patients' reasoning regarding individualized medicine. (1) Of a total of 9,578 citations screened, 38 studies were appraised with a quality assessment checklist and a total of 30 publications were included in the study. (2) Reasons for CAM use evolved following a reciprocal translation. (3) The line-of-argument interpretations of patients' concepts of individualised medicine that emerged based on the findings of our multidisciplinary research team were "personal growth", "holism", "alliance", "integrative care", "self-activation" and "wellbeing". The results of this meta-ethnographic study demonstrate that patients' notions of individualised medicine differ from the current idea of personalised genetic medicine. Our study

  4. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    Science.gov (United States)

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
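
    To illustrate the general idea behind such simulation-based estimation, the sketch below implements plain ABC rejection sampling for a single study, assuming a normal generating model and hypothetical reported quartiles; it is a minimal illustration of the principle, not the authors' algorithm, priors, or tolerance settings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reported summaries from a single study (values are illustrative only).
n = 50
obs = np.array([12.0, 9.0, 16.0])   # median, first quartile, third quartile

def summaries(x):
    return np.array([np.median(x), np.percentile(x, 25), np.percentile(x, 75)])

accepted = []
tol = 1.5                            # acceptance tolerance on the summary distance
for _ in range(100_000):
    mu = rng.uniform(0.0, 30.0)      # flat priors over a plausible range (assumption)
    sigma = rng.uniform(0.1, 15.0)
    sim = rng.normal(mu, sigma, n)   # normal generating model; a skewed model works the same way
    if np.linalg.norm(summaries(sim) - obs) < tol:
        accepted.append((mu, sigma))

accepted = np.array(accepted)
if accepted.size:
    print("ABC estimate of the mean:", round(float(accepted[:, 0].mean()), 2))
    print("ABC estimate of the SD:  ", round(float(accepted[:, 1].mean()), 2))
    print("acceptance rate:", len(accepted) / 100_000)
```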

  5. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    Science.gov (United States)

    Cashman, Kevin D.; Ritz, Christian; Kiely, Mairead

    2017-01-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake. PMID:28481259

  6. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    Directory of Open Access Journals (Sweden)

    Kevin D. Cashman

    2017-05-01

    Full Text Available Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake.

  7. The Efficacy of Mindfulness-Based Interventions in Primary Care: A Meta-Analytic Review.

    Science.gov (United States)

    Demarzo, Marcelo M P; Montero-Marin, Jesús; Cuijpers, Pim; Zabaleta-del-Olmo, Edurne; Mahtani, Kamal R; Vellinga, Akke; Vicens, Caterina; López-del-Hoyo, Yolanda; García-Campayo, Javier

    2015-11-01

    Positive effects have been reported after mindfulness-based interventions (MBIs) in diverse clinical and nonclinical populations. Primary care is a key health care setting for addressing common chronic conditions, and an effective MBI designed for this setting could benefit countless people worldwide. Meta-analyses of MBIs have become popular, but little is known about their efficacy in primary care. Our aim was to investigate the application and efficacy of MBIs that address primary care patients. We performed a meta-analytic review of randomized controlled trials addressing the effect of MBIs in adult patients recruited from primary care settings. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and Cochrane guidelines were followed. Effect sizes were calculated with the Hedges g in random effects models. The meta-analyses were based on 6 trials having a total of 553 patients. The overall effect size of MBI compared with a control condition for improving general health was moderate (g = 0.48; P = .002), with moderate heterogeneity (I² = 59; P < .05). Although the number of randomized controlled trials applying MBIs in primary care is still limited, our results suggest that these interventions are promising for the mental health and quality of life of primary care patients. We discuss innovative approaches for implementing MBIs, such as complex intervention and stepped care. © 2015 Annals of Family Medicine, Inc.

  8. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Science.gov (United States)

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having closed-form expression of likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
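
    The sketch below illustrates only one ingredient of such a model: the marginal beta-binomial likelihood for a single outcome across studies, maximized numerically. The counts are hypothetical, and the correlation parameter and the composite-likelihood combination of the two outcomes described in the abstract are not shown.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

# Hypothetical counts for one margin (e.g., events y out of n subjects per study).
y = np.array([18, 25, 40, 12, 33], dtype=float)
n = np.array([20, 30, 50, 15, 40], dtype=float)

def neg_loglik(params):
    # Parameterize by logit(mean) and log(precision) for stable optimization.
    mu = 1.0 / (1.0 + np.exp(-params[0]))
    s = np.exp(params[1])
    a, b = mu * s, (1.0 - mu) * s
    # Beta-binomial log-pmf summed over studies.
    ll = (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
          + betaln(y + a, n - y + b) - betaln(a, b))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print("pooled marginal probability for this outcome:", round(float(mu_hat), 3))
```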

  9. Meta-analytic approaches to determine gender differences in the age-incidence characteristics of schizophrenia and related psychoses.

    Science.gov (United States)

    Jackson, Dan; Kirkbride, James; Croudace, Tim; Morgan, Craig; Boydell, Jane; Errazuriz, Antonia; Murray, Robin M; Jones, Peter B

    2013-03-01

    A recent systematic review and meta-analysis of the incidence and prevalence of schizophrenia and other psychoses in England investigated the variation in the rates of psychotic disorders. However, some of the questions of interest, and the data collected to answer these, could not be adequately addressed using established meta-analysis techniques. We developed a novel statistical method, which makes combined use of fractional polynomials and meta-regression. This was used to quantify the evidence of gender differences and a secondary peak onset in women, where the outcome of interest is the incidence of schizophrenia. Statistically significant and epidemiologically important effects were obtained using our methods. Our analysis is based on data from four studies that provide 50 incidence rates, stratified by age and gender. We describe several variations of our method, in particular those that might be used where more data is available, and provide guidance for assessing the model fit. Copyright © 2013 John Wiley & Sons, Ltd.
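
    As a rough illustration of combining a fractional polynomial with study-level weighting, the sketch below selects a first-degree fractional polynomial power for hypothetical age-specific incidence rates by weighted least squares on the log scale; it is not the authors' model, which additionally handles gender terms, study effects and meta-regression structure.

```python
import numpy as np

# Hypothetical age-specific incidence rates (per 100,000 person-years) with weights
# (e.g., case counts); in the actual method these come from the stratified study data.
age    = np.array([18, 23, 28, 33, 38, 43, 48, 53, 58, 63], dtype=float)
rate   = np.array([30, 45, 40, 32, 25, 20, 17, 15, 16, 14], dtype=float)
weight = np.array([50, 80, 75, 60, 55, 40, 35, 30, 25, 20], dtype=float)

def fp_term(x, p):
    # Fractional polynomial transform; power 0 conventionally means log(x).
    return np.log(x) if p == 0 else x ** p

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]      # conventional first-degree FP power set
best = None
for p in powers:
    X = np.column_stack([np.ones_like(age), fp_term(age / 10.0, p)])  # age scaled by 10
    W = np.diag(weight)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.log(rate))
    resid = np.log(rate) - X @ beta
    rss = float(resid.T @ W @ resid)
    if best is None or rss < best[0]:
        best = (rss, p, beta)

print("best FP1 power:", best[1], "coefficients:", np.round(best[2], 3))
```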

  10. Sinus tarsi approach (STA) versus extensile lateral approach (ELA) for treatment of closed displaced intra-articular calcaneal fractures (DIACF): A meta-analysis.

    Science.gov (United States)

    Bai, L; Hou, Y-L; Lin, G-H; Zhang, X; Liu, G-Q; Yu, B

    2018-04-01

    Whether the sinus tarsi approach (STA) or the extensile lateral approach (ELA) is preferable for treatment of closed displaced intra-articular calcaneal fractures (DIACF) is still debated; our aim was to compare the effects of the two approaches. A thorough search was carried out in the MEDLINE, EMBASE and Cochrane Library databases from inception to December 2016. Only prospective or retrospective comparative studies were selected in this meta-analysis. Two independent reviewers conducted the literature search, data extraction and quality assessment. The primary outcomes were anatomical restoration and prevalence of complications. Secondary outcomes included operation time and functional recovery. Four randomized controlled trials involving 326 patients and three cohort studies involving 206 patients were included. The STA technique for DIACFs led to a decline in both operation time and incidence of complications. There were no significant differences between the groups in American Orthopedic Foot and Ankle Society scores or in changes in Böhler angle. This meta-analysis suggests that the STA technique may reduce the operation time and incidence of complications. In conclusion, the STA technique is a reasonable, arguably optimal, choice for DIACF. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  11. metaBIT, an integrative and automated metagenomic pipeline for analysing microbial profiles from high-throughput sequencing shotgun data

    DEFF Research Database (Denmark)

    Louvel, Guillaume; Der Sarkissian, Clio; Hanghøj, Kristian Ebbesen

    2016-01-01

    -throughput DNA sequencing (HTS). Here, we develop metaBIT, an open-source computational pipeline automatizing routine microbial profiling of shotgun HTS data. Customizable by the user at different stringency levels, it performs robust taxonomy-based assignment and relative abundance calculation of microbial taxa......, as well as cross-sample statistical analyses of microbial diversity distributions. We demonstrate the versatility of metaBIT within a range of published HTS data sets sampled from the environment (soil and seawater) and the human body (skin and gut), but also from archaeological specimens. We present......-friendly profiling of the microbial DNA present in HTS shotgun data sets. The applications of metaBIT are vast, from monitoring of laboratory errors and contaminations, to the reconstruction of past and present microbiota, and the detection of candidate species, including pathogens....

  12. An Extended Petri-Net Based Approach for Supply Chain Process Enactment in Resource-Centric Web Service Environment

    Science.gov (United States)

    Wang, Xiaodong; Zhang, Xiaoyu; Cai, Hongming; Xu, Boyi

    Enacting a supply-chain process involves various partners and different IT systems. REST receives increasing attention for distributed systems with loosely coupled resources. Nevertheless, resource model incompatibilities and conflicts prevent effective process modeling and deployment in a resource-centric Web service environment. In this paper, a Petri-net based framework for supply-chain process integration is proposed. A resource meta-model is constructed to represent the basic information of resources. Then, based on the resource meta-model, XML schemas and documents are derived, which represent resources and their states in the Petri-net. Thereafter, XML-net, a high-level Petri-net, is employed for modeling the control and data flow of the process. From the process model in XML-net, RESTful services and choreography descriptions are deduced. In this way, a unified resource representation and RESTful service descriptions are provided for more effective cross-system integration. A case study is given to illustrate the approach, and its desirable features are discussed.

  13. The MetaLex Document Server : Legal Documents as Versioned Linked Data

    NARCIS (Netherlands)

    Hoekstra, R.; Aroyo, L.; Welty, C.; Alani, H.; Taylor, J.; Bernstein, A.; Kagal, L.; Noy, N.; Blomqvist, E.

    2011-01-01

    This paper introduces the MetaLex Document Server (MDS), an ongoing project to improve access to legal sources (regulations, court rulings) by means of a generic legal XML syntax (CEN MetaLex) and Linked Data. The MDS defines a generic conversion mechanism from legacy legal XML syntaxes to CEN

  14. Affective and cognitive meta-bases of attitudes: Unique effects on information interest and persuasion.

    Science.gov (United States)

    See, Ya Hui Michelle; Petty, Richard E; Fabrigar, Leandre R

    2008-06-01

    The authors investigated the predictive utility of people's subjective assessments of whether their evaluations are affect- or cognition driven (i.e., meta-cognitive bases) as separate from whether people's attitudes are actually affect- or cognition based (i.e., structural bases). Study 1 demonstrated that meta-bases uniquely predict interest in affective versus cognitive information above and beyond structural bases and other related variables (i.e., need for cognition and need for affect). In Study 2, meta-bases were shown to account for unique variance in attitude change as a function of appeal type. Finally, Study 3 showed that as people became more deliberative in their judgments, meta-bases increased in predictive utility, and structural bases decreased in predictive utility. These findings support the existence of meta-bases of attitudes and demonstrate that meta-bases are distinguishable from structural bases in their predictive utility. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  15. Effectiveness of problem-based learning in Chinese pharmacy education: a meta-analysis.

    Science.gov (United States)

    Zhou, Jiyin; Zhou, Shiwen; Huang, Chunji; Xu, Rufu; Zhang, Zuo; Zeng, Shengya; Qian, Guisheng

    2016-01-19

    This review provides a critical overview of problem-based learning (PBL) practices in Chinese pharmacy education. PBL has yet to be widely applied in pharmaceutical education in China. The results of those studies that have been conducted are published in Chinese and thus may not be easily accessible to international researchers. Therefore, this meta-analysis was carried out to review the effectiveness of PBL. Databases were searched for studies in accordance with the inclusion criteria. Two reviewers independently performed the study identification and data extraction. A meta-analysis was conducted using Revman 5.3 software. Sixteen randomized controlled trials were included. The meta-analysis revealed that PBL had a positive association with higher theoretical scores (SMD = 1.17, 95% CI [0.77, 1.57], P < 0.00001), and that PBL methods are superior to conventional teaching methods in improving students' learning interest, independent analysis skills, scope of knowledge, self-study, team spirit, and oral expression. This meta-analysis indicates that PBL pedagogy is superior to traditional lecture-based teaching in Chinese pharmacy education. PBL methods could be an optional, supplementary method of pharmaceutical teaching in China. However, Chinese pharmacy colleges and universities should revise PBL curricula according to their own needs, which would maximize the effectiveness of PBL.

  16. Development of The Students' Learning Process and Meta cognitive Strategies in Science on Nuclear Energy through Science, Technology and Society (STS) approach

    International Nuclear Information System (INIS)

    Siriuthen, Warawun; Yuenyong, Chokchai

    2009-07-01

    Full text: This research aimed to develop 48 Grade 10 students' learning processes and metacognitive strategies in the 'Nuclear Energy' topic through the Science, Technology and Society (STS) approach, which consists of five teaching stages: identification of social issues; identification of potential solutions; need for knowledge; decision-making; and socialization. The data were analyzed through rubric scores for the learning process and for metacognitive strategies, the latter comprising five strategies: recalling, planning, monitoring and maintaining, evaluating, and relating. The findings revealed that most students used the learning process at a high level. However, they performed at a very low level in almost all of the metacognitive strategies. The factors that potentially impeded their development of awareness of the learning process and metacognitive strategies were the characteristics of the content and students, learning processes, and student habits.

  17. Voxel-Based Morphometry ALE meta-analysis of Bipolar Disorder

    Science.gov (United States)

    Magana, Omar; Laird, Robert

    2012-03-01

    A meta-analysis was performed independently to examine changes in gray matter (GM) in patients with Bipolar disorder (BP). The meta-analysis was conducted in Talairach space using GingerALE to determine the voxels and their permutation. For data acquisition, published experiments and similar research studies were uploaded onto the online Voxel-Based Morphometry (VBM) database. By doing so, coordinates of activation locations were extracted from Bipolar disorder related journals utilizing Sleuth. Once the coordinates of the given experiments were selected and imported into GingerALE, a Gaussian was applied to all foci points to create the concentration points of GM in BP patients. The results included volume reductions and variations of GM between normal healthy controls and patients with Bipolar disorder. A significant amount of GM clusters were obtained in normal healthy controls over BP patients in the right precentral gyrus, right anterior cingulate, and the left inferior frontal gyrus. In future research, more published journals could be uploaded onto the database and another VBM meta-analysis could be performed including more activation coordinates or a variation of age groups.

  18. Graphics and Statistics for Cardiology: Data visualisation for meta-analysis.

    Science.gov (United States)

    Kiran, Amit; Crespillo, Abel Pérez; Rahimi, Kazem

    2017-01-01

    Graphical displays play a pivotal role in understanding data sets and disseminating results. For meta-analysis, they are instrumental in presenting findings from multiple studies. This report presents guidance to authors wishing to submit graphical displays as part of their meta-analysis to a clinical cardiology journal, such as Heart. When using graphical displays for meta-analysis, we recommend the following: (1) use a flow diagram to describe the number of studies returned from the initial search, the inclusion/exclusion criteria applied and the final number of studies used in the meta-analysis; (2) present results from the meta-analysis using a figure that incorporates a forest plot and underlying (tabulated) statistics, including a test for heterogeneity; (3) use displays such as the funnel plot (minimum 10 studies) and Galbraith plot to visually present the distribution of effect sizes or associations, in order to evaluate small-study effects and publication bias; (4) for meta-regression, the bubble plot is a useful display for assessing associations by study-level factors; (5) final checks on graphs, such as appropriate use of axis scale, line pattern, text size and graph resolution, should always be performed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
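
    A forest plot of the kind recommended above can be drawn with a few lines of plotting code; the sketch below uses matplotlib with purely illustrative odds ratios and confidence intervals (none of the numbers come from the report).

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical study-level odds ratios with 95% confidence intervals (illustrative only).
studies = ["Study A", "Study B", "Study C", "Study D", "Pooled"]
or_est  = np.array([0.80, 1.10, 0.65, 0.90, 0.85])
ci_low  = np.array([0.55, 0.80, 0.40, 0.70, 0.72])
ci_high = np.array([1.16, 1.51, 1.06, 1.16, 1.00])

y = np.arange(len(studies))[::-1]
fig, ax = plt.subplots(figsize=(6, 3))
# Horizontal CI bars with point estimates, plotted on a log scale as is usual for ratio measures.
ax.errorbar(or_est, y, xerr=[or_est - ci_low, ci_high - or_est],
            fmt="s", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")   # line of no effect
ax.set_xscale("log")
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Odds ratio (log scale)")
ax.set_title("Forest plot (illustrative data)")
plt.tight_layout()
plt.show()
```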

  19. MetaABC--an integrated metagenomics platform for data adjustment, binning and clustering.

    Science.gov (United States)

    Su, Chien-Hao; Hsu, Ming-Tsung; Wang, Tse-Yi; Chiang, Sufeng; Cheng, Jen-Hao; Weng, Francis C; Kao, Cheng-Yan; Wang, Daryi; Tsai, Huai-Kuang

    2011-08-15

    MetaABC is a metagenomic platform that integrates several binning tools coupled with methods for removing artifacts, analyzing unassigned reads and controlling sampling biases. It allows users to arrive at a better interpretation via a series of distinct combinations of analysis tools. After execution, MetaABC provides outputs in various visual formats such as tables, pie and bar charts as well as clustering result diagrams. MetaABC source code and documentation are available at http://bits2.iis.sinica.edu.tw/MetaABC/ (contact: dywang@gate.sinica.edu.tw; hktsai@iis.sinica.edu.tw). Supplementary data are available at Bioinformatics online.

  20. Behavioural effects of advanced cruise control use : a meta-analytic approach.

    NARCIS (Netherlands)

    Dragutinovic, N. Brookhuis, K.A. Hagenzieker, M.P. & Marchau, V.A.W.J.

    2006-01-01

    In this study, a meta-analytic approach was used to analyse effects of Advanced Cruise Control (ACC) on driving behaviour reported in seven driving simulator studies. The effects of ACC on three consistent outcome measures, namely, driving speed, headway and driver workload have been analysed. The

  1. Efficient Generation of Dancing Animation Synchronizing with Music Based on Meta Motion Graphs

    Science.gov (United States)

    Xu, Jianfeng; Takagi, Koichi; Sakazawa, Shigeyuki

    This paper presents a system for automatic generation of dancing animation that is synchronized with a piece of music by re-using motion capture data. Basically, the dancing motion is synthesized according to the rhythm and intensity features of music. For this purpose, we propose a novel meta motion graph structure to embed the necessary features including both rhythm and intensity, which is constructed on the motion capture database beforehand. In this paper, we consider two scenarios for non-streaming music and streaming music, where global search and local search are required respectively. In the case of the former, once a piece of music is input, the efficient dynamic programming algorithm can be employed to globally search a best path in the meta motion graph, where an objective function is properly designed by measuring the quality of beat synchronization, intensity matching, and motion smoothness. In the case of the latter, the input music is stored in a buffer in a streaming mode, then an efficient search method is presented for a certain amount of music data (called a segment) in the buffer with the same objective function, resulting in a segment-based search approach. For streaming applications, we define an additional property in the above meta motion graph to deal with the unpredictable future music, which guarantees that there is some motion to match the unknown remaining music. A user study with 60 subjects in total demonstrates that our system outperforms state-of-the-art techniques in both scenarios. Furthermore, our system improves the synthesis speed greatly (maximal speedup is more than 500 times), which is essential for mobile applications. We have implemented our system on commercially available smart phones and confirmed that it works well on these mobile phones.
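
    The global search described above can be illustrated with a toy dynamic program over a small graph: each node carries rhythm and intensity attributes, edges carry smoothness costs, and the best clip sequence for a fixed set of music segments is recovered by backtracking. All values, weights and cost terms below are hypothetical stand-ins for the paper's features and objective function.

```python
import numpy as np

# Toy meta motion graph: each node is a motion clip with hypothetical rhythm and
# intensity attributes; the matrix holds transition (smoothness) costs between clips.
rhythm    = np.array([0.9, 1.1, 0.5, 1.0])
intensity = np.array([0.2, 0.8, 0.5, 0.6])
smooth = np.array([[0.0, 0.4, 0.9, 0.3],
                   [0.4, 0.0, 0.7, 0.2],
                   [0.9, 0.7, 0.0, 0.5],
                   [0.3, 0.2, 0.5, 0.0]])
n = len(rhythm)

# Music features per beat segment: target rhythm and intensity (hypothetical).
music = [(1.0, 0.7), (0.9, 0.3), (0.6, 0.5), (1.1, 0.8)]
w_r, w_i, w_s = 1.0, 1.0, 0.5            # weights of the objective terms

def node_cost(j, seg):
    r, i = seg
    return w_r * abs(rhythm[j] - r) + w_i * abs(intensity[j] - i)

# Dynamic programming over segments: cost[t, j] = best cost of ending segment t at clip j.
T = len(music)
cost = np.full((T, n), np.inf)
back = np.zeros((T, n), dtype=int)
cost[0] = [node_cost(j, music[0]) for j in range(n)]
for t in range(1, T):
    for j in range(n):
        trans = cost[t - 1] + w_s * smooth[:, j]
        back[t, j] = int(np.argmin(trans))
        cost[t, j] = trans[back[t, j]] + node_cost(j, music[t])

# Trace back the globally optimal clip sequence.
path = [int(np.argmin(cost[-1]))]
for t in range(T - 1, 0, -1):
    path.append(back[t, path[-1]])
path.reverse()
print("selected clip sequence:", path, "total cost:", round(float(cost[-1].min()), 3))
```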

  2. Effectiveness of problem-based learning in Chinese dental education: a meta-analysis.

    Science.gov (United States)

    Huang, Beilei; Zheng, Liwei; Li, Chunjie; Li, Li; Yu, Haiyang

    2013-03-01

    This article provides a critical overview of problem-based learning (PBL) practice in dental education in China. Because the application of PBL has not been carried out on a large scale in Chinese dental education, this review was performed to investigate its effectiveness. Databases were searched for studies that met the inclusion criteria, with study identification and data extraction performed by two reviewers independently. Meta-analysis was done with Revman 5.1. Eleven randomized controlled trials were included. The meta-analysis found that PBL had a positive effect on gaining higher theoretical scores (SMD=0.88, 95% CI [0.46, 1.31], p<0.0001), but no significant effect on gaining higher pass rates (RR=1.06, 95% CI [0.97, 1.16], p=0.21). This meta-analysis suggests that the PBL pedagogy is considered superior to the traditional lecture-based teaching in this setting. PBL methods could be an optional supplementary method of dental teaching models in China. However, Chinese dental schools should devise PBL curricula according to their own conditions, and the effectiveness of PBL should be maximized within these limitations.

  3. Comparative effects of different dietary approaches on blood pressure in hypertensive and pre-hypertensive patients: A systematic review and network meta-analysis.

    Science.gov (United States)

    Schwingshackl, Lukas; Chaimani, Anna; Schwedhelm, Carolina; Toledo, Estefania; Pünsch, Marina; Hoffmann, Georg; Boeing, Heiner

    2018-05-02

    Pairwise meta-analyses have shown beneficial effects of individual dietary approaches on blood pressure but their comparative effects have not been established. Therefore we performed a systematic review of different dietary intervention trials and estimated the aggregate blood pressure effects through network meta-analysis including hypertensive and pre-hypertensive patients. PubMed, Cochrane CENTRAL, and Google Scholar were searched until June 2017. The inclusion criteria were defined as follows: i) Randomized trial with a dietary approach; ii) hypertensive and pre-hypertensive adult patients; and iii) minimum intervention period of 12 weeks. In order to determine the pooled effect of each intervention relative to each of the other intervention for both diastolic and systolic blood pressure (SBP and DBP), random effects network meta-analysis was performed. A total of 67 trials comparing 13 dietary approaches (DASH, low-fat, moderate-carbohydrate, high-protein, low-carbohydrate, Mediterranean, Palaeolithic, vegetarian, low-GI/GL, low-sodium, Nordic, Tibetan, and control) enrolling 17,230 participants were included. In the network meta-analysis, the DASH, Mediterranean, low-carbohydrate, Palaeolithic, high-protein, low-glycaemic index, low-sodium, and low-fat dietary approaches were significantly more effective in reducing SBP (-8.73 to -2.32 mmHg) and DBP (-4.85 to -1.27 mmHg) compared to a control diet. According to the SUCRAs, the DASH diet was ranked the most effective dietary approach in reducing SBP (90%) and DBP (91%), followed by the Palaeolithic, and the low-carbohydrate diet (ranked 3rd for SBP) or the Mediterranean diet (ranked 3rd for DBP). For most comparisons, the credibility of evidence was rated very low to moderate, with the exception for the DASH vs. the low-fat dietary approach for which the quality of evidence was rated high. The present network meta-analysis suggests that the DASH dietary approach might be the most effective dietary measure

  4. Conducting Meta-Analyses Based on p Values

    Science.gov (United States)

    van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.

    2016-01-01

    Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466

  5. Meta-data for a lot of LOD

    NARCIS (Netherlands)

    Rietveld, Laurens; Beek, Wouter; Hoekstra, Rinke; Schlobach, Stefan

    2017-01-01

    This paper introduces the LOD Laundromat meta-dataset, a continuously updated RDF meta-dataset that describes the documents crawled, cleaned and (re)published by the LOD Laundromat. This meta-dataset of over 110 million triples contains structural information for more than 650,000 documents (and

  6. Meta-analysis for evidence synthesis in plant pathology: an overview.

    Science.gov (United States)

    Madden, L V; Paul, P A

    2011-01-01

    Meta-analysis is the analysis of the results of multiple studies, which is typically performed in order to synthesize evidence from many possible sources in a formal probabilistic manner. In a simple sense, the outcome of each study becomes a single observation in the meta-analysis of all available studies. The methodology was developed originally in the social sciences by Smith, Glass, Rosenthal, Hunter, and Schmidt, based on earlier pioneering contributions in statistics by Fisher, Pearson, Yates, and Cochran, but this approach to research synthesis has now been embraced within many scientific disciplines. However, only a handful of articles have been published in plant pathology and related fields utilizing meta-analysis. After reviewing basic concepts and approaches, methods for estimating parameters and interpreting results are shown. The advantages of meta-analysis are presented in terms of prediction and risk analysis, and the high statistical power that can be achieved for detecting significant effects of treatments or significant relationships between variables. Based on power considerations, the fallacy of naïve counting of P values in a narrative review is demonstrated. Although there are many advantages to meta-analysis, results can be biased if the analysis is based on a nonrepresentative sample of study outcomes. Therefore, novel approaches for characterizing the upper bound on the bias are discussed, in order to show the robustness of meta-analysis to possible violation of assumptions.

  7. Meta-design Approach for Mobile Platforms Supporting Creative Tourism Experiences

    DEFF Research Database (Denmark)

    Tussyadiah, Iis P.

    2013-01-01

    with and interpretations of the local attributes of tourism destinations. The mobile system will integrate the sensory stimuli, the intense contact with cultural nuances and social network, the brand-based reputation, and the creative communities at tourism destinations into the iterative process of perception, conception......This paper conceptualizes the application of meta-design approach in the development of a mobile system supporting creative experiences for tourists. Specifically, for those working in creative industries, adaptive mobile system will facilitate effective tourists' interactions......, and expression of creative ideas among tourists. For destinations trying to highlight their unique characteristics as their value proposition, the development of such system may benefit them from a heightened sense of place due to on-going value co-creation. Individuals will benefit from such system from...

  8. Benchmarking energy performance of residential buildings using two-stage multifactor data envelopment analysis with degree-day based simple-normalization approach

    International Nuclear Information System (INIS)

    Wang, Endong; Shen, Zhigang; Alp, Neslihan; Barry, Nate

    2015-01-01

    Highlights: • Two-stage DEA model is developed to benchmark building energy efficiency. • Degree-day based simple normalization is used to neutralize the climatic noise. • Results of a real case study validated the benefits of this new model. - Abstract: Being able to identify detailed meta factors of energy performance is essential for creating effective residential energy-retrofitting strategies. Compared to other benchmarking methods, nonparametric multifactor DEA (data envelopment analysis) is capable of discriminating scale factors from management factors to reveal more details to better guide retrofitting practices. A two-stage DEA energy benchmarking method is proposed in this paper. This method includes (1) first-stage meta DEA which integrates the common degree day metrics for neutralizing noise energy effects of exogenous climatic variables; and (2) second-stage Tobit regression for further detailed efficiency analysis. A case study involving 3-year longitudinal panel data of 189 residential buildings indicated the proposed method has advantages over existing methods in terms of its efficiency in data processing and results interpretation. The results of the case study also demonstrated high consistency with existing linear regression based DEA.
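
    A minimal sketch of the first-stage idea is given below: an input-oriented CCR data envelopment analysis model solved as a linear program for each building, with degree-day-normalized energy use as one input. The building data are invented for illustration, and the second-stage Tobit regression of the efficiency scores is not shown.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical buildings: inputs = degree-day-normalized annual energy use and floor area;
# output = number of occupants.  All values are illustrative only.
X = np.array([[120.0, 1500.0], [95.0, 1200.0], [150.0, 1800.0], [80.0, 1000.0]])  # inputs
Y = np.array([[40.0], [38.0], [45.0], [30.0]])                                    # outputs
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    # Variables: [theta, lambda_1..lambda_n]; minimize theta (input contraction factor).
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(m):     # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):     # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for o in range(n):
    print(f"building {o}: technical efficiency = {ccr_efficiency(o):.3f}")
```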

  9. Individualised medicine from the perspectives of patients using complementary therapies: a meta-ethnography approach

    Science.gov (United States)

    2013-01-01

    Background Personalised (or individualised) medicine in the days of genetic research refers to molecular biologic specifications in individuals and not to a response to individual patient needs in the sense of person-centred medicine. Studies suggest that patients often wish for authentically person-centred care and personal physician-patient interactions, and that they therefore choose Complementary and Alternative medicine (CAM) as a possibility to complement standard care and ensure a patient-centred approach. Therefore, to build on the findings documented in these qualitative studies, we investigated the various concepts of individualised medicine inherent in patients’ reasons for using CAM. Methods We used the technique of meta-ethnography, following a three-stage approach: (1) A comprehensive systematic literature search of 67 electronic databases and appraisal of eligible qualitative studies related to patients’ reasons for seeking CAM was carried out. Eligibility for inclusion was determined using defined criteria. (2) A meta-ethnographic study was conducted according to Noblit and Hare's method for translating key themes in patients’ reasons for using CAM. (3) A line-of-argument approach was used to synthesize and interpret key concepts associated with patients’ reasoning regarding individualized medicine. Results (1) Of a total of 9,578 citations screened, 38 studies were appraised with a quality assessment checklist and a total of 30 publications were included in the study. (2) Reasons for CAM use evolved following a reciprocal translation. (3) The line-of-argument interpretations of patients’ concepts of individualised medicine that emerged based on the findings of our multidisciplinary research team were “personal growth”, “holism”, “alliance”, “integrative care”, “self-activation” and “wellbeing”. Conclusions The results of this meta-ethnographic study demonstrate that patients’ notions of individualised medicine

  10. MetaQTL: a package of new computational methods for the meta-analysis of QTL mapping experiments

    Directory of Open Access Journals (Sweden)

    Charcosset Alain

    2007-02-01

    Full Text Available Abstract Background Integration of multiple results from Quantitative Trait Loci (QTL) studies is a key point to understand the genetic determinism of complex traits. Up to now many efforts have been made by public database developers to facilitate the storage, compilation and visualization of multiple QTL mapping experiment results. However, studying the congruency between these results still remains a complex task. Presently, the few computational and statistical frameworks to do so are mainly based on empirical methods (e.g. consensus genetic maps are generally built by iterative projection). Results In this article, we present a new computational and statistical package, called MetaQTL, for carrying out whole-genome meta-analysis of QTL mapping experiments. Contrary to existing methods, MetaQTL offers a complete statistical process to establish a consensus model for both the marker and the QTL positions on the whole genome. First, MetaQTL implements a new statistical approach to merge multiple distinct genetic maps into a single consensus map which is optimal in terms of weighted least squares and can be used to investigate recombination rate heterogeneity between studies. Secondly, assuming that QTL can be projected on the consensus map, MetaQTL offers a new clustering approach based on a Gaussian mixture model to decide how many QTL underlie the distribution of the observed QTL. Conclusion We demonstrate using simulations that the usual model choice criteria from mixture model literature perform relatively well in this context. As expected, simulations also show that this new clustering algorithm leads to a reduction in the length of the confidence interval of QTL location provided that across studies there are enough observed QTL for each underlying true QTL location. The usefulness of our approach is illustrated on published QTL detection results of flowering time in maize. Finally, MetaQTL is freely available at http://bioinformatics.org/mqtl.
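
    The clustering step can be illustrated with a one-dimensional Gaussian mixture fitted to hypothetical projected QTL positions, with the number of components chosen by an information criterion (BIC is used here as a stand-in; MetaQTL considers several model-choice criteria and its own consensus-map machinery, which are not reproduced).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical projected QTL positions (in cM) on one chromosome, pooled from several
# mapping experiments; three underlying "true" locations are simulated.
qtl_cm = np.concatenate([rng.normal(35, 3, 12),
                         rng.normal(62, 4, 9),
                         rng.normal(95, 2, 7)]).reshape(-1, 1)

# Fit 1-D Gaussian mixtures with 1..6 components and keep the model with the lowest BIC,
# in the spirit of deciding how many QTL underlie the observed positions.
models = [GaussianMixture(n_components=k, random_state=0).fit(qtl_cm) for k in range(1, 7)]
bic = [m.bic(qtl_cm) for m in models]
best = models[int(np.argmin(bic))]
print("chosen number of QTL clusters:", best.n_components)
print("estimated locations (cM):", np.round(np.sort(best.means_.ravel()), 1))
```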

  11. Advanced approaches to characterize the human intestinal microbiota by computational meta-analysis

    NARCIS (Netherlands)

    Nikkilä, J.; Vos, de W.M.

    2010-01-01

    GOALS: We describe advanced approaches for the computational meta-analysis of a collection of independent studies, including over 1000 phylogenetic array datasets, as a means to characterize the variability of human intestinal microbiota. BACKGROUND: The human intestinal microbiota is a complex

  12. Towards Principles-Based Approaches to Governance of Health-related Research using Personal Data.

    Science.gov (United States)

    Laurie, Graeme; Sethi, Nayha

    2013-01-01

    Technological advances in the quality, availability and linkage potential of health data for research make the need to develop robust and effective information governance mechanisms more pressing than ever before; they also lead us to question the utility of governance devices used hitherto such as consent and anonymisation. This article assesses and advocates a principles-based approach, contrasting this with traditional rule-based approaches, and proposes a model of principled proportionate governance. It is suggested that the approach not only serves as the basis for good governance in contemporary data linkage but also that it provides a platform to assess legal reforms such as the draft Data Protection Regulation.

  13. Towards Principles-Based Approaches to Governance of Health-related Research using Personal Data

    Science.gov (United States)

    Laurie, Graeme; Sethi, Nayha

    2013-01-01

    Technological advances in the quality, availability and linkage potential of health data for research make the need to develop robust and effective information governance mechanisms more pressing than ever before; they also lead us to question the utility of governance devices used hitherto such as consent and anonymisation. This article assesses and advocates a principles-based approach, contrasting this with traditional rule-based approaches, and proposes a model of principled proportionate governance. It is suggested that the approach not only serves as the basis for good governance in contemporary data linkage but also that it provides a platform to assess legal reforms such as the draft Data Protection Regulation. PMID:24416087

  14. Placebo cohorts in phase-3 MS treatment trials - predictors for on-trial disease activity 1990-2010 based on a meta-analysis and individual case data.

    Directory of Open Access Journals (Sweden)

    Jan-Patrick Stellmann

    -effects meta-regression. These findings need further clarification based on individual case data.

  15. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

    The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  16. Providing the meta-model of development of competency using the meta-ethnography approach: Part 2. Synthesis of the available competency development models

    Directory of Open Access Journals (Sweden)

    Shahram Yazdani

    2016-12-01

    Full Text Available Background and Purpose: Considering the importance and necessity of competency-based education at a global level, and with respect to globalization and the requirement of minimum competencies in medical fields, medical education communities and organizations worldwide have tried to determine competencies and to present frameworks and education models that ensure the ability of all graduates. In the literature, we observed numerous competency development models that refer to the same issues with different terminologies. It seems that evaluation and synthesis of all these models can finally result in designing a comprehensive meta-model for competency development. Methods: Meta-ethnography is a useful method for synthesis of qualitative research that is used to develop models that interpret the results of several studies. As the aim of this study is ultimately to provide a competency development meta-model, a literature review was conducted in the previous part of the study to identify competency development models. Models obtained through the search were studied in detail; in this part, the key concepts of the models and overarching concepts were extracted, the models' concepts were reciprocally translated, and the available competency development models were synthesized. Results: The competency development meta-model is presented, together with a redefinition of the Dreyfus brothers' model. Conclusions: Given the importance of competency-based education at a global level and the need to review curricula and competency-based curriculum design, a competency development meta-model is required as a basis for curriculum development. As a variety of competency development models are available, this study attempted to synthesize them. Keywords: Meta-ethnography, Competency development, Meta-model, Qualitative synthesis

  17. Individual participant data meta-analysis of prognostic factor studies: state of the art?

    Directory of Open Access Journals (Sweden)

    Abo-Zaid Ghada

    2012-04-01

    Full Text Available Abstract Background Prognostic factors are associated with the risk of a subsequent outcome in people with a given disease or health condition. Meta-analysis using individual participant data (IPD, where the raw data are synthesised from multiple studies, has been championed as the gold-standard for synthesising prognostic factor studies. We assessed the feasibility and conduct of this approach. Methods A systematic review to identify published IPD meta-analyses of prognostic factors studies, followed by detailed assessment of a random sample of 20 articles published from 2006. Six of these 20 articles were from the IMPACT (International Mission for Prognosis and Analysis of Clinical Trials in traumatic brain injury collaboration, for which additional information was also used from simultaneously published companion papers. Results Forty-eight published IPD meta-analyses of prognostic factors were identified up to March 2009. Only three were published before 2000 but thereafter a median of four articles exist per year, with traumatic brain injury the most active research field. Availability of IPD offered many advantages, such as checking modelling assumptions; analysing variables on their continuous scale with the possibility of assessing for non-linear relationships; and obtaining results adjusted for other variables. However, researchers also faced many challenges, such as large cost and time required to obtain and clean IPD; unavailable IPD for some studies; different sets of prognostic factors in each study; and variability in study methods of measurement. The IMPACT initiative is a leading example, and had generally strong design, methodological and statistical standards. Elsewhere, standards are not always as high and improvements in the conduct of IPD meta-analyses of prognostic factor studies are often needed; in particular, continuous variables are often categorised without reason; publication bias and availability bias are rarely

  18. XML-based approaches for the integration of heterogeneous bio-molecular data.

    Science.gov (United States)

    Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David

    2009-10-15

    Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, bio-medical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting edge approaches for the appropriate management of heterogeneous biological data represented through XML. XML has succeeded in the integration of heterogeneous biomolecular information, and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, which makes effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgently needed to achieve seamless integration of the current biological resources.

  19. A Meta-Path-Based Prediction Method for Human miRNA-Target Association

    Directory of Open Access Journals (Sweden)

    Jiawei Luo

    2016-01-01

    Full Text Available MicroRNAs (miRNAs) are short noncoding RNAs that play important roles in regulating gene expression, and perturbed miRNAs are often associated with development and tumorigenesis as they have effects on their target mRNA. Predicting potential miRNA-target associations from multiple types of genomic data is a considerable problem in bioinformatics research. However, most of the existing methods did not fully use the experimentally validated miRNA-mRNA interactions. Here, we developed RMLM and RMLMSe to predict the relationship between miRNAs and their targets. RMLM and RMLMSe are global approaches as they can reconstruct the missing associations for all miRNA-target pairs simultaneously, and RMLMSe demonstrates that the integration of sequence information can improve the performance of RMLM. In RMLM, we use the RM measure to evaluate the relatedness between a miRNA and its target based on different meta-paths; logistic regression and the MLE method are employed to estimate the weight of different meta-paths. In RMLMSe, sequence information is utilized to improve the performance of RMLM. Here, we carry out fivefold cross-validation and pathway enrichment analysis to evaluate the performance of our methods. The fivefold experiments show that our methods have higher AUC scores compared with other methods and the integration of sequence information can improve the performance of miRNA-target association prediction.
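
    The meta-path idea can be sketched with adjacency matrices of a toy heterogeneous network: multiplying the matrices along a meta-path counts the paths linking a miRNA to a gene, and weighted path counts give candidate scores. The networks and weights below are invented; the paper estimates meta-path weights by logistic regression and MLE and uses a specific RM measure rather than raw path counts.

```python
import numpy as np

# Toy heterogeneous network with 3 miRNAs and 4 genes (all links are hypothetical).
# M: known miRNA-gene interactions, G: gene-gene functional links, S: miRNA-miRNA similarity.
M = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# Path counts along two meta-paths connecting a miRNA to a gene:
#   miRNA -> gene -> gene   (M G)   and   miRNA -> miRNA -> gene   (S M)
paths = {"miRNA-gene-gene": M @ G, "miRNA-miRNA-gene": S @ M}

# Combine meta-path scores with fixed illustrative weights and rank unobserved pairs.
w = {"miRNA-gene-gene": 0.6, "miRNA-miRNA-gene": 0.4}
score = sum(w[k] * v for k, v in paths.items())
candidates = [(i, j, score[i, j]) for i in range(M.shape[0])
              for j in range(M.shape[1]) if M[i, j] == 0]
for i, j, s_ in sorted(candidates, key=lambda t: -t[2]):
    print(f"miRNA {i} -> gene {j}: score {s_:.2f}")
```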

  20. A data fusion based approach for damage detection in linear systems

    Directory of Open Access Journals (Sweden)

    Ernesto Grande

    2014-07-01

    Full Text Available The aim of the present paper is to propose innovative approaches able to improve the capability of classical damage indicators in detecting the damage position in linear systems. In particular, starting from classical indicators based on the change of the flexibility matrix and on the change of the modal strain energy, the proposed approaches consider two data fusion procedures, both based on the Dempster-Shafer theory. Numerical applications are reported in the paper in order to assess the reliability of the proposed approaches, considering different damage scenarios, different sets of vibration modes, and the presence of errors affecting the considered modes of vibration.
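
    The fusion step rests on Dempster's rule of combination, sketched below for two hypothetical damage indicators assigning basic probability masses to candidate damaged elements; the numbers and the three-element frame are illustrative and unrelated to the paper's indicators.

```python
from itertools import product

# Frame of discernment: which structural element is damaged (toy 3-element system).
# Each indicator provides a basic probability assignment (BPA) over subsets of elements.
m1 = {frozenset({"e1"}): 0.55, frozenset({"e2"}): 0.25, frozenset({"e1", "e2", "e3"}): 0.20}
m2 = {frozenset({"e1"}): 0.45, frozenset({"e3"}): 0.30, frozenset({"e1", "e2", "e3"}): 0.25}

def dempster_combine(ma, mb):
    combined, conflict = {}, 0.0
    for (A, wa), (B, wb) in product(ma.items(), mb.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Normalize by the total non-conflicting mass (Dempster's rule of combination).
    return {A: w / (1.0 - conflict) for A, w in combined.items()}, conflict

fused, k = dempster_combine(m1, m2)
for A, w in sorted(fused.items(), key=lambda t: -t[1]):
    print(sorted(A), round(w, 3))
print("conflict mass:", round(k, 3))
```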

  1. An Introduction to MAMA (Meta-Analysis of MicroArray data) System.

    Science.gov (United States)

    Zhang, Zhe; Fenstermacher, David

    2005-01-01

    Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server-side for the storage of microarray datasets collected from various resources. The client-side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. MAMA implementation will integrate several analytical methods, including meta-analysis within an open-source framework offering other developers the flexibility to plug in additional statistical algorithms.

  2. Meta-analysis in clinical trials revisited.

    Science.gov (United States)

    DerSimonian, Rebecca; Laird, Nan

    2015-11-01

    In this paper, we revisit a 1986 article we published in this Journal, Meta-Analysis in Clinical Trials, where we introduced a random-effects model to summarize the evidence about treatment efficacy from a number of related clinical trials. Because of its simplicity and ease of implementation, our approach has been widely used (with more than 12,000 citations to date) and the "DerSimonian and Laird method" is now often referred to as the 'standard approach' or a 'popular' method for meta-analysis in medical and clinical research. The method is especially useful for providing an overall effect estimate and for characterizing the heterogeneity of effects across a series of studies. Here, we review the background that led to the original 1986 article, briefly describe the random-effects approach for meta-analysis, explore its use in various settings and trends over time and recommend a refinement to the method using a robust variance estimator for testing overall effect. We conclude with a discussion of repurposing the method for Big Data meta-analysis and Genome Wide Association Studies for studying the importance of genetic variants in complex diseases. Published by Elsevier Inc.
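
    The random-effects calculation itself is short enough to show in full. The sketch below applies the DerSimonian and Laird moment estimator to invented study effects and variances; it does not include the robust variance refinement recommended in the paper.

```python
import numpy as np

# Illustrative study effect estimates (e.g., log odds ratios) and their within-study variances.
y = np.array([-0.35, -0.10, -0.55, 0.05, -0.25])
v = np.array([0.040, 0.055, 0.090, 0.030, 0.070])

# Fixed-effect weights and Cochran's Q statistic.
w = 1.0 / v
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)
df = len(y) - 1

# DerSimonian-Laird moment estimator of the between-study variance tau^2.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects weights, pooled effect and its standard error.
w_re = 1.0 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled effect = {y_re:.3f} "
      f"(95% CI {y_re - 1.96 * se_re:.3f} to {y_re + 1.96 * se_re:.3f})")
```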

  3. Study of HTML Meta-Tags Utilization in Web-based Open-Access Journals

    Directory of Open Access Journals (Sweden)

    Pegah Pishva

    2007-04-01

    Full Text Available The present study investigates the extent of utilization of two meta tags – “keywords” and “descriptors” – in Web-based Open-Access Journals. A sample of 707 journals taken from DOAJ was used and analyzed for utilization of these meta tags. Findings demonstrated that these journals utilized the “keywords” and “descriptors” meta-tags in 33.1% and 29.9% of cases, respectively. It was further demonstrated that among the various subject classifications, “General Journals” had the highest and “Mathematics and Statistics Journals” the lowest utilization of the “keywords” meta-tag. Moreover, “General Journals” and “Chemistry journals”, with 55.6% and 15.4% utilization respectively, had the highest and the lowest “descriptors” meta-tag usage rates. Based on our findings, and when compared against other similar research, there has been no significant growth in the utilization of these meta tags.

  4. MetaGaAP: A Novel Pipeline to Estimate Community Composition and Abundance from Non-Model Sequence Data

    Directory of Open Access Journals (Sweden)

    Christopher Noune

    2017-02-01

    Full Text Available Next generation sequencing and bioinformatic approaches are increasingly used to quantify microorganisms within populations by analysis of ‘meta-barcode’ data. This approach relies on comparison of amplicon sequences of ‘barcode’ regions from a population with public-domain databases of reference sequences. However, for many organisms relevant ‘barcode’ regions may not have been identified and large databases of reference sequences may not be available. A workflow and software pipeline, ‘MetaGaAP,’ was developed to identify and quantify genotypes through four steps: shotgun sequencing and identification of polymorphisms in a metapopulation to identify custom ‘barcode’ regions of less than 30 polymorphisms within the span of a single ‘read’, amplification and sequencing of the ‘barcode’, generation of a custom database of polymorphisms, and quantitation of the relative abundance of genotypes. The pipeline and workflow were validated in a ‘wild type’ Alphabaculovirus isolate, Helicoverpa armigera single nucleopolyhedrovirus (HaSNPV-AC53 and a tissue-culture derived strain (HaSNPV-AC53-T2. The approach was validated by comparison of polymorphisms in amplicons and shotgun data, and by comparison of predicted dominant and co-dominant genotypes with Sanger sequences. The computational power required to generate and search the database effectively limits the number of polymorphisms that can be included in a barcode to 30 or less. The approach can be used in quantitative analysis of the ecology and pathology of non-model organisms.

  5. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2013-01-01

    Full Text Available In recent years, animation reconstruction of facial expressions has become a popular research field in computer science, and motion capture-based facial expression reconstruction is now emerging in this field. Based on the facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach, which aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships between a missing marker and its neighbors, we propose an improved version of a previous method that uses the motion of three muscles rather than one to recover the missing data. To reduce the noise, we initially apply preprocessing to eliminate impulsive noise, before our proposed third-order quasi-uniform B-spline-based fitting method is used to reduce the remaining noise. Our experiments showed that the principles that underlie this method are simple and straightforward, and it delivered acceptable precision during reconstruction.
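
    The denoising step can be illustrated with an ordinary cubic smoothing B-spline applied to a simulated noisy marker trajectory; this uses SciPy's generic smoothing spline as a stand-in and is not the authors' quasi-uniform formulation, and the trajectory and noise level are invented.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(3)

# Simulated 1-D trajectory of one facial marker coordinate with additive noise.
t = np.linspace(0.0, 2.0, 200)                  # seconds
truth = 0.5 * np.sin(2 * np.pi * 1.5 * t) + 0.1 * t
noisy = truth + rng.normal(0.0, 0.03, t.size)

# Cubic (k=3) smoothing B-spline; the smoothing factor s trades fidelity against smoothness.
tck = splrep(t, noisy, k=3, s=0.2)
smoothed = splev(t, tck)

print("RMS error before smoothing:", round(float(np.sqrt(np.mean((noisy - truth) ** 2))), 4))
print("RMS error after smoothing: ", round(float(np.sqrt(np.mean((smoothed - truth) ** 2))), 4))
```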

  6. MetaBAT: Metagenome Binning based on Abundance and Tetranucleotide Frequency

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dongwan; Froula, Jeff; Egan, Rob; Wang, Zhong

    2014-03-21

    Grouping large fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Here we developed automated metagenome binning software, called MetaBAT, which integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency. On synthetic datasets MetaBAT on average achieves 98% precision and 90% recall at the strain level with 281 near complete unique genomes. Applying MetaBAT to a human gut microbiome data set we recovered 176 genome bins with 92% precision and 80% recall. Further analyses suggest MetaBAT is able to recover genome fragments missed in reference genomes up to 19%, while 53 genome bins are novel. In summary, we believe MetaBAT is a powerful tool to facilitate comprehensive understanding of complex microbial communities.
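
    One ingredient of abundance-and-composition binning is the tetranucleotide frequency (TNF) vector of each contig. The sketch below simply counts overlapping 4-mers and normalises them; it omits the reverse-complement folding and the empirical probabilistic distance model used by MetaBAT itself.

from collections import Counter
from itertools import product

TETRAMERS = ["".join(p) for p in product("ACGT", repeat=4)]  # 256 possible 4-mers

def tetranucleotide_frequency(contig):
    """Return a 256-dimensional normalised 4-mer frequency vector for a contig."""
    counts = Counter(contig[i:i + 4] for i in range(len(contig) - 3))
    total = sum(counts[k] for k in TETRAMERS) or 1
    return [counts[k] / total for k in TETRAMERS]

vec = tetranucleotide_frequency("ACGTACGTGGCCTTAAACGT")
print(len(vec), sum(vec))   # 256 components summing to ~1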

  7. Meta-analytic structural equation modelling

    CERN Document Server

    Jak, Suzanne

    2015-01-01

    This book explains how to employ MASEM, the combination of meta-analysis (MA) and structural equation modelling (SEM). It shows how, by using MASEM, a single model can be tested to explain the relationships between a set of variables in several studies. This book gives an introduction to MASEM, with a focus on the state-of-the-art approach: the two-stage approach of Cheung and of Cheung & Chan. Both the fixed and the random approach to MASEM are illustrated with two applications to real data. All steps that have to be taken to perform the analyses are discussed extensively. All data and syntax files are available online, so that readers can replicate all analyses. By using SEM for meta-analysis, this book shows how to benefit from all available information from all available studies, even if few or none of the studies report about all relationships that feature in the full model of interest.
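
    The first stage of two-stage MASEM pools the correlation matrices of the individual studies before a structural model is fitted to the pooled matrix. The snippet below shows only a heavily simplified fixed-effects stand-in for that pooling step (a sample-size weighted average of study correlation matrices); the actual procedure uses GLS pooling and then fits the SEM, which is omitted here. The example matrices are invented.

import numpy as np

def pool_correlations(corr_matrices, sample_sizes):
    """Sample-size weighted pooling of study correlation matrices (simplified stage 1)."""
    corr = np.array(corr_matrices, dtype=float)          # shape (k, p, p)
    n = np.array(sample_sizes, dtype=float)
    weights = n / n.sum()
    return np.tensordot(weights, corr, axes=1)           # weighted average matrix

study1 = [[1.0, 0.30, 0.20], [0.30, 1.0, 0.40], [0.20, 0.40, 1.0]]
study2 = [[1.0, 0.25, 0.10], [0.25, 1.0, 0.35], [0.10, 0.35, 1.0]]
pooled = pool_correlations([study1, study2], sample_sizes=[150, 300])
print(np.round(pooled, 3))   # stage 2 would fit the SEM to this pooled matrix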

  8. The Role of Thought Suppression, Meta-Cognitive Factors and Negative Emotions in Prediction of Substance Dependency Disorder

    Directory of Open Access Journals (Sweden)

    Omid Saed

    2011-08-01

    Full Text Available Introduction: This study investigated the role of thought suppression, meta-cognitive factors, and negative emotions in predicting substance dependency disorder. Method: Subjects were 70 patients with substance dependence disorder and 70 normal individuals (total 140). Substance-dependent participants were recruited from outpatient treatment centers, and the normal sample was drawn from the general population; convenience sampling was used for both groups. All participants were assessed with the MCQ-30, the White Bear Suppression Inventory, and Beck’s Anxiety and Depression Questionnaires. For data analysis, discriminant analysis was used. Results: Negative meta-cognitive beliefs about worry, depression, and thought suppression were the most significant predictors of substance dependence disorder. Conclusion: Substance dependency disorder can be predicted from meta-cognitive beliefs, thought suppression, and negative emotion (especially depression). This model can inform prevention and psychotherapy approaches to substance dependency (based on cognitive and meta-cognitive therapies). In addition, the findings of this research can be applied in clinical and counseling settings to help substance-dependent clients.

  9. Capturing the experiences of patients across multiple complex interventions: a meta-qualitative approach.

    Science.gov (United States)

    Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn

    2015-09-08

    The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. We included 62 interviews from 44 patients and 18 non-clinical caregivers. Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. We identified 5 broad themes that capture the patients' experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients' experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach, thus offers an opportunity for cumulative learning at a system level in addition to informing intervention design and

  10. ETICS meta-data software editing - from check out to commit operations

    International Nuclear Information System (INIS)

    Begin, M-E; Sancho, G D-A; Ronco, S D; Gentilini, M; Ronchieri, E; Selmi, M

    2008-01-01

    People involved in modular projects need to improve the software build process, planning the correct execution order and detecting circular dependencies. The lack of suitable tools may cause delays in the development, deployment and maintenance of the software. Experience in such projects has shown that the use of version control and build systems is not able to support the development of the software efficiently, due to a large number of errors, each of which breaks the build process. Common causes of errors are, for example, the adoption of new libraries, library incompatibilities, and the extension of the current project to support new software modules. In this paper, we describe a possible solution implemented in ETICS, an integrated infrastructure for the automated configuration, build and test of Grid and distributed software. ETICS has defined meta-data software abstractions, from which it is possible to download, build and test software projects, setting for instance dependencies, environment variables and properties. Furthermore, the meta-data information is managed by ETICS following the version control system philosophy: there is a meta-data repository and a list of operations, such as check out and commit. All the information related to a specific software module is stored in the repository only when it is considered to be correct. By means of this solution, we introduce a degree of flexibility into the ETICS system, allowing users to work according to their needs. Moreover, by introducing this functionality, ETICS behaves like a version control system for the management of the meta-data.
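
    The check-out/commit style of meta-data handling described here can be imitated with a tiny in-memory repository: a working copy of the project meta-data is checked out, edited locally, and committed back as a new revision only if it passes validation. The class, field names, and validation rule below are invented for illustration and are not the actual ETICS data model.

import copy

class MetaDataRepository:
    """Toy version-control-like store for project meta-data (illustration only)."""

    def __init__(self):
        self._revisions = [{}]                       # revision 0 is empty

    def checkout(self):
        """Return an editable working copy of the latest committed meta-data."""
        return copy.deepcopy(self._revisions[-1])

    def commit(self, working_copy):
        """Store the working copy as a new revision only if it is considered correct."""
        if self._validate(working_copy):
            self._revisions.append(copy.deepcopy(working_copy))
            return len(self._revisions) - 1          # new revision number
        raise ValueError("meta-data rejected: missing dependencies or environment")

    @staticmethod
    def _validate(meta):
        return bool(meta.get("dependencies")) and "environment" in meta

repo = MetaDataRepository()
work = repo.checkout()
work.update({"dependencies": ["libfoo >= 1.2"], "environment": {"CC": "gcc"}})
print("committed as revision", repo.commit(work))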

  11. Hyperspectral Data for Mangrove Species Mapping: A Comparison of Pixel-Based and Object-Based Approach

    Directory of Open Access Journals (Sweden)

    Muhammad Kamal

    2011-10-01

    Full Text Available Visual image interpretation and digital image classification have been used to map and monitor mangrove extent and composition for decades. The presence of a high-spatial resolution hyperspectral sensor can potentially improve our ability to differentiate mangrove species. However, little research has explored the use of pixel-based and object-based approaches on high-spatial hyperspectral datasets for this purpose. This study assessed the ability of CASI-2 data for mangrove species mapping using pixel-based and object-based approaches at the mouth of the Brisbane River area, southeast Queensland, Australia. Three mapping techniques were used in this study: spectral angle mapper (SAM) and linear spectral unmixing (LSU) for the pixel-based approaches, and multi-scale segmentation for object-based image analysis (OBIA). The endmembers for the pixel-based approach were collected based on an existing vegetation community map. Nine targeted classes were mapped in the study area from each approach, including three mangrove species: Avicennia marina, Rhizophora stylosa, and Ceriops australis. The mapping results showed that SAM produced accurate class polygons with only a few unclassified pixels (overall accuracy 69%, Kappa 0.57), the LSU resulted in a patchy polygon pattern with many unclassified pixels (overall accuracy 56%, Kappa 0.41), and the object-based mapping produced the most accurate results (overall accuracy 76%, Kappa 0.67). Our results demonstrated that the object-based approach, which combined a rule-based and nearest-neighbor classification method, was the best classifier to map mangrove species and their adjacent environments.
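
    The spectral angle mapper itself is easy to sketch: each pixel spectrum is assigned to the endmember with which it forms the smallest angle, arccos(t·r / (|t||r|)). The code below is a generic illustration with random spectra and an arbitrary band count, not the CASI-2 processing chain used in the study.

import numpy as np

def spectral_angles(pixels, endmembers):
    """Angles (radians) between each pixel spectrum and each endmember spectrum."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
    cosines = np.clip(p @ e.T, -1.0, 1.0)
    return np.arccos(cosines)                  # shape (n_pixels, n_endmembers)

rng = np.random.default_rng(0)
pixels = rng.random((5, 30))                   # 5 pixels, 30 spectral bands
endmembers = rng.random((3, 30))               # 3 candidate classes
labels = spectral_angles(pixels, endmembers).argmin(axis=1)
print(labels)                                  # index of best-matching endmember per pixel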

  12. Analysis of survival data with dependent censoring copula-based approaches

    CERN Document Server

    Emura, Takeshi

    2018-01-01

    This book introduces readers to copula-based statistical methods for analyzing survival data involving dependent censoring. Primarily focusing on likelihood-based methods performed under copula models, it is the first book solely devoted to the problem of dependent censoring. The book demonstrates the advantages of the copula-based methods in the context of medical research, especially with regard to cancer patients’ survival data. Needless to say, the statistical methods presented here can also be applied to many other branches of science, especially in reliability, where survival analysis plays an important role. The book can be used as a textbook for graduate coursework or a short course aimed at (bio-) statisticians. To deepen readers’ understanding of copula-based approaches, the book provides an accessible introduction to basic survival analysis and explains the mathematical foundations of copula-based survival models.

  13. The use of meta-heuristics for airport gate assignment

    DEFF Research Database (Denmark)

    Cheng, Chun-Hung; Ho, Sin C.; Kwan, Cheuk-Lam

    2012-01-01

    proposed to generate good solutions within a reasonable timeframe. In this work, we attempt to assess the performance of three meta-heuristics, namely, genetic algorithm (GA), tabu search (TS), simulated annealing (SA) and a hybrid approach based on SA and TS. Flight data from Incheon International Airport...... are collected to carry out the computational comparison. Although the literature has documented these algorithms, this work may be a first attempt to evaluate their performance using a set of realistic flight data....
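
    As a toy illustration of one of the compared meta-heuristics, the sketch below applies simulated annealing to a miniature gate-assignment instance, minimising the number of time-overlapping flights placed at the same gate. The instance, cost function, and cooling schedule are invented for illustration and are far simpler than the models evaluated on the Incheon flight data.

import math
import random

flights = [(0, 45), (30, 90), (60, 120), (100, 160), (10, 70)]   # (arrival, departure) minutes
n_gates = 2

def conflicts(assignment):
    """Count pairs of overlapping flights assigned to the same gate."""
    c = 0
    for i in range(len(flights)):
        for j in range(i + 1, len(flights)):
            if assignment[i] == assignment[j]:
                a, b = flights[i], flights[j]
                if a[0] < b[1] and b[0] < a[1]:
                    c += 1
    return c

def simulated_annealing(iters=5000, t0=5.0, alpha=0.999):
    current = [random.randrange(n_gates) for _ in flights]
    best, temp = list(current), t0
    for _ in range(iters):
        cand = list(current)
        cand[random.randrange(len(flights))] = random.randrange(n_gates)
        delta = conflicts(cand) - conflicts(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = cand                      # accept improving or occasionally worse moves
        if conflicts(current) < conflicts(best):
            best = list(current)
        temp *= alpha                           # geometric cooling
    return best, conflicts(best)

print(simulated_annealing())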

  14. Interpreting trial results following use of different intention-to-treat approaches for preventing attrition bias

    DEFF Research Database (Denmark)

    Dossing, Anna; Tarp, Simon; Furst, Daniel E

    2014-01-01

    10 biological and targeted drugs based on collections of trials that would correspond to 10 individual meta-analyses. ETHICS AND DISSEMINATION: This study will enhance transparency for evaluating mITT treatment effects described in meta-analyses. The intended audience will include healthcare...... concerns when executing different mITT approaches in meta-analyses. METHODS AND ANALYSIS: Using meta-epidemiology on randomised trials considered less prone to bias (ie, good internal validity) and assessing biological or targeted agents in patients with rheumatoid arthritis, we will meta-analyse data from...

  15. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    Science.gov (United States)

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally
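
    Two of the simplest devices mentioned above, recovering a standard deviation algebraically from a reported standard error or confidence interval and approximating it from the range, can be written down directly. The range/4 rule of thumb is only one common approximation; the review discusses several refinements, so this sketch should not be read as the recommended method for any particular meta-analysis.

import math

def sd_from_se(se, n):
    """Algebraic recovery: SD = SE * sqrt(n) when the SE of the mean is reported."""
    return se * math.sqrt(n)

def sd_from_range(minimum, maximum):
    """Crude practical approximation: SD ~ range / 4."""
    return (maximum - minimum) / 4.0

def sd_from_ci(lower, upper, n, z=1.96):
    """Recover SD from a 95% confidence interval of the mean (normal approximation)."""
    se = (upper - lower) / (2 * z)
    return se * math.sqrt(n)

print(sd_from_se(0.8, 50), sd_from_range(3.0, 27.0), sd_from_ci(10.2, 13.8, 64))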

  16. Family-Based Interventions in Preventing Children and Adolescents from Using Tobacco: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Thomas, Roger E; Baker, Philip R A; Thomas, Bennett C

    2016-07-01

    Tobacco is the main preventable cause of death and disease worldwide. Adolescent smoking is increasing in many countries with poorer countries following the earlier experiences of affluent countries. Preventing adolescents from starting smoking is crucial to decreasing tobacco-related illness. To assess effectiveness of family-based interventions alone and combined with school-based interventions to prevent children and adolescents from initiating tobacco use. Fourteen bibliographic databases and the Internet, journals hand-searched, and experts consulted. Randomized controlled trials (RCTs) with children or adolescents and families, interventions to prevent starting tobacco use, and follow-up ≥6 months. Abstracts/titles independently assessed and data independently entered by 2 authors. Risk of bias was assessed with the Cochrane Risk-of-Bias tool. Twenty-seven RCTs were included. Nine trials of never-smokers compared with a control provided data for meta-analysis. Family intervention trials had significantly fewer students who started smoking. Meta-analysis of 2 RCTs of combined family and school interventions compared with school only, showed additional significant benefit. The common feature of effective high-intensity interventions was encouraging authoritative parenting. Only 14 RCTs provided data for meta-analysis (approximately a third of participants). Of the 13 RCTs that did not provide data for meta-analysis 8 compared a family intervention with no intervention and 1 reported significant effects, and 5 compared a family combined with school intervention with a school intervention only and none reported additional significant effects. There is moderate-quality evidence that family-based interventions prevent children and adolescents from starting to smoke. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  17. Cluster Analysis-Based Approaches for Geospatiotemporal Data Mining of Massive Data Sets for Identification of Forest Threats

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Richard T [ORNL; Hoffman, Forrest M [ORNL; Kumar, Jitendra [ORNL; HargroveJr., William Walter [USDA Forest Service

    2011-01-01

    We investigate methods for geospatiotemporal data mining of multi-year land surface phenology data (250 m Normalized Difference Vegetation Index (NDVI) values derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) in this study) for the conterminous United States (CONUS) as part of an early warning system for detecting threats to forest ecosystems. The approaches explored here are based on k-means cluster analysis of this massive data set, which provides a basis for defining the bounds of the expected or normal phenological patterns that indicate healthy vegetation at a given geographic location. We briefly describe the computational approaches we have used to make cluster analysis of such massive data sets feasible, describe approaches we have explored for distinguishing between normal and abnormal phenology, and present some examples in which we have applied these approaches to identify various forest disturbances in the CONUS.
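
    The core of this approach, clustering multi-year NDVI trajectories and flagging pixels that fall far from their nearest cluster centroid, can be sketched with scikit-learn. The synthetic data, the number of clusters, and the anomaly threshold below are placeholders rather than the settings used for the CONUS analysis.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic annual NDVI curves: 1000 pixels x 23 composite periods.
ndvi = np.clip(rng.normal(0.5, 0.1, size=(1000, 23)), 0.0, 1.0)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(ndvi)
# Distance of each pixel's trajectory to its nearest centroid ("normal" phenology).
dist = kmeans.transform(ndvi).min(axis=1)

threshold = np.percentile(dist, 99)        # arbitrary cut-off for "abnormal" phenology
anomalous_pixels = np.flatnonzero(dist > threshold)
print(len(anomalous_pixels), "pixels flagged as potentially disturbed")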

  18. A comparison of bivariate, multivariate random-effects, and Poisson correlated gamma-frailty models to meta-analyze individual patient data of ordinal scale diagnostic tests.

    Science.gov (United States)

    Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea

    2017-11-01

    Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. The effectiveness of evidence-based nursing on development of nursing students' critical thinking: A meta-analysis.

    Science.gov (United States)

    Cui, Chuyun; Li, Yufeng; Geng, Dongrong; Zhang, Hui; Jin, Changde

    2018-06-01

    The aim of this meta-analysis was to assess the effectiveness of evidence-based nursing (EBN) on the development of critical thinking for nursing students. A systematic literature review of original studies on randomized controlled trials was conducted. The relevant randomized controlled trials were retrieved from multiple electronic databases including Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, EMBASE, Web of Science, Cumulative Index to Nursing and Allied Health (CINAHL), Chinese BioMed Database (CBM), China National Knowledge Infrastructure (CNKI), and WanFang Database. In order to make a systematic evaluation, studies were selected according to inclusion and exclusion criteria, and their data were then extracted and quality assessed. The data extraction was completed by two independent reviewers, and the methodological quality assessment was completed by another two reviewers. All of the data were analyzed with the software RevMan5.3. A total of nine studies with 1079 nursing students were included in this systematic literature review. The result of this meta-analysis showed that evidence-based nursing was superior to traditional teaching in developing nursing students' critical thinking. The results of this meta-analysis indicate that evidence-based nursing could help nursing students promote the development of their critical thinking. Further studies of higher quality and with larger sample sizes should be analyzed in the future. Copyright © 2018. Published by Elsevier Ltd.

  20. An XML-based loose-schema approach to managing diagnostic data in heterogeneous formats

    Energy Technology Data Exchange (ETDEWEB)

    Naito, O., E-mail: naito.osamu@jaea.go.j [Japan Atomic Energy Agency, 801-1 Mukouyama, Naka, Ibaraki 311-0193 (Japan)

    2010-07-15

    An approach to managing diagnostic data in heterogeneous formats by using XML-based (eXtensible Markup Language) tag files is discussed. The tag file functions like header information in ordinary data formats but it is separate from the main body of data, human readable, and self-descriptive. Thus all the necessary information for reading the contents of data can be obtained without prior information or reading the data body itself. In this paper, modeling of diagnostic data and its representation in XML are studied and a very primitive implementation of this approach in C++ is presented. The overhead of manipulating XML in a proof-of-principle code was found to be small. The merits, demerits, and possible extensions of this approach are also discussed.

  1. An XML-based loose-schema approach to managing diagnostic data in heterogeneous formats

    International Nuclear Information System (INIS)

    Naito, O.

    2010-01-01

    An approach to managing diagnostic data in heterogeneous formats by using XML-based (eXtensible Markup Language) tag files is discussed. The tag file functions like header information in ordinary data formats but it is separate from the main body of data, human readable, and self-descriptive. Thus all the necessary information for reading the contents of data can be obtained without prior information or reading the data body itself. In this paper, modeling of diagnostic data and its representation in XML are studied and a very primitive implementation of this approach in C++ is presented. The overhead of manipulating XML in a proof-of-principle code was found to be small. The merits, demerits, and possible extensions of this approach are also discussed.
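
    A loose-schema tag file of this kind can be mimicked with the Python standard library: the XML document below describes, rather than contains, a separate binary data file, so a reader can learn the layout without opening the data body itself. The element and attribute names are invented for illustration and are not the format defined in the paper, whose reference implementation is in C++.

import xml.etree.ElementTree as ET

def write_tag_file(path, data_file, shape, dtype, description):
    """Write a human-readable XML 'tag' file describing a separate binary data file."""
    root = ET.Element("diagnostic_data")
    ET.SubElement(root, "source", {"file": data_file})
    ET.SubElement(root, "array", {"dtype": dtype, "shape": "x".join(map(str, shape))})
    ET.SubElement(root, "description").text = description
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

def read_tag_file(path):
    """Return the meta-data needed to interpret the data body, without reading it."""
    root = ET.parse(path).getroot()
    array = root.find("array").attrib
    return {
        "file": root.find("source").attrib["file"],
        "dtype": array["dtype"],
        "shape": tuple(int(s) for s in array["shape"].split("x")),
        "description": root.findtext("description"),
    }

write_tag_file("shot1234.xml", "shot1234.bin", (1024, 16), "float32", "Thomson scattering Te profile")
print(read_tag_file("shot1234.xml"))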

  2. Open Hypermedia as User Controlled Meta Data for the Web

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennert; Bouvin, Niels Olof

    2000-01-01

    segments. By means of the Webvise system, OHIF structures can be authored, imposed on Web pages, and finally linked on the Web as any ordinary Web resource. Following a link to an OHIF file automatically invokes a Webvise download of the meta data structures and the annotated Web content will be displayed...... in the browser. Moreover, the Webvise system provides support for users to create, manipulate, and share the OHIF structures together with custom-made web pages and MS Office 2000 documents on WebDAV servers. These Webvise facilities go beyond earlier open hypermedia systems in that they now allow fully...... distributed open hypermedia linking between Web pages and WebDAV aware desktop applications. The paper describes the OHIF format and demonstrates how the Webvise system handles OHIF. Finally, it argues for better support for handling user controlled meta data, e.g. support for linking in non-XML data

  3. Solving Large Clustering Problems with Meta-Heuristic Search

    DEFF Research Database (Denmark)

    Turkensteen, Marcel; Andersen, Kim Allan; Bang-Jensen, Jørgen

    In Clustering Problems, groups of similar subjects are to be retrieved from data sets. In this paper, Clustering Problems with the frequently used Minimum Sum-of-Squares Criterion are solved using meta-heuristic search. Tabu search has proved to be a successful methodology for solving optimization...... problems, but applications to large clustering problems are rare. The simulated annealing heuristic has mainly been applied to relatively small instances. In this paper, we implement tabu search and simulated annealing approaches and compare them to the commonly used k-means approach. We find that the meta-heuristic...

  4. Contour plot assessment of existing meta-analyses confirms robust association of statin use and acute kidney injury risk.

    Science.gov (United States)

    Chevance, Aurélie; Schuster, Tibor; Steele, Russell; Ternès, Nils; Platt, Robert W

    2015-10-01

    Robustness of an existing meta-analysis can justify decisions on whether to conduct an additional study addressing the same research question. We illustrate the graphical assessment of the potential impact of an additional study on an existing meta-analysis using published data on statin use and the risk of acute kidney injury. A previously proposed graphical augmentation approach is used to assess the sensitivity of the current test and heterogeneity statistics extracted from existing meta-analysis data. In addition, we extended the graphical augmentation approach to assess potential changes in the pooled effect estimate after updating a current meta-analysis and applied the three graphical contour definitions to data from meta-analyses on statin use and acute kidney injury risk. In the considered example data, the pooled effect estimates and heterogeneity indices proved to be considerably robust to the addition of a future study. Moreover, for some previously inconclusive meta-analyses, a study update might yield a statistically significant kidney injury risk increase associated with higher statin exposure. The illustrated contour approach should become a standard tool for the assessment of the robustness of meta-analyses. It can guide decisions on whether to conduct additional studies addressing a relevant research question. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of land-use change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool provided climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data are obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions. However, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
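
    A data-based surrogate of the kind described here, a feed-forward multilayer perceptron trained to reproduce the output of a process-based model from weather and groundwater inputs, can be sketched with scikit-learn. The predictors, target function, and network size below are synthetic placeholders standing in for the Hydrus1D-derived percolation data, not the configuration used in the study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Stand-in predictors: precipitation, potential evapotranspiration, groundwater depth.
X = rng.random((2000, 3))
# Stand-in target: "percolation" as an arbitrary nonlinear function plus noise.
y = 10 * X[:, 0] - 4 * X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
mlp = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
mlp.fit(X_train, y_train)
print("R^2 on held-out data:", round(mlp.score(X_test, y_test), 3))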

  6. Grey literature in meta-analyses.

    Science.gov (United States)

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.

  7. Reliability analysis - systematic approach based on limited data

    International Nuclear Information System (INIS)

    Bourne, A.J.

    1975-11-01

    The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate, and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas and the maximum use of limited reliability data. (author)

  8. Meta-Analysis of Placental Transcriptome Data Identifies a Novel Molecular Pathway Related to Preeclampsia.

    Science.gov (United States)

    van Uitert, Miranda; Moerland, Perry D; Enquobahrie, Daniel A; Laivuori, Hannele; van der Post, Joris A M; Ris-Stalpers, Carrie; Afink, Gijs B

    2015-01-01

    Studies using the placental transcriptome to identify key molecules relevant for preeclampsia are hampered by a relatively small sample size. In addition, they use a variety of bioinformatics and statistical methods, making comparison of findings challenging. To generate a more robust preeclampsia gene expression signature, we performed a meta-analysis on the original data of 11 placenta RNA microarray experiments, representing 139 normotensive and 116 preeclamptic pregnancies. Microarray data were pre-processed and analyzed using standardized bioinformatics and statistical procedures and the effect sizes were combined using an inverse-variance random-effects model. Interactions between genes in the resulting gene expression signature were identified by pathway analysis (Ingenuity Pathway Analysis, Gene Set Enrichment Analysis, Graphite) and protein-protein associations (STRING). This approach has resulted in a comprehensive list of differentially expressed genes that led to a 388-gene meta-signature of preeclamptic placenta. Pathway analysis highlights the involvement of the previously identified hypoxia/HIF1A pathway in the establishment of the preeclamptic gene expression profile, while analysis of protein interaction networks indicates CREBBP/EP300 as a novel element central to the preeclamptic placental transcriptome. In addition, there is an apparent high incidence of preeclampsia in women carrying a child with a mutation in CREBBP/EP300 (Rubinstein-Taybi Syndrome). The 388-gene preeclampsia meta-signature offers a vital starting point for further studies into the relevance of these genes (in particular CREBBP/EP300) and their concomitant pathways as biomarkers or functional molecules in preeclampsia. This will result in a better understanding of the molecular basis of this disease and opens up the opportunity to develop rational therapies targeting the placental dysfunction causal to preeclampsia.
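
    Per-gene effect sizes from the individual microarray experiments were combined with an inverse-variance random-effects model. A compact DerSimonian-Laird implementation of that combination step for a single gene is sketched below; the effect sizes and variances are made up, and the original analysis of course also covered pre-processing, multiple testing, and pathway analysis.

import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of one gene's study-level effect sizes."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                        # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                                 # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Toy log2 fold changes and variances for one gene across 5 studies.
print(random_effects_pool([0.8, 1.1, 0.4, 0.9, 0.6], [0.05, 0.04, 0.10, 0.06, 0.08]))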

  9. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    Science.gov (United States)

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Teaching tools in Evidence Based Practice: evaluation of reusable learning objects (RLOs for learning about Meta-analysis

    Directory of Open Access Journals (Sweden)

    Wharrad Heather

    2011-05-01

    Full Text Available Background: All healthcare students are taught the principles of evidence based practice on their courses. The ability to understand the procedures used in systematically reviewing evidence reported in studies, such as meta-analysis, is an important element of evidence based practice. Meta-analysis is a difficult statistical concept for healthcare students to understand, yet it is an important technique used in systematic reviews to pool data from studies to look at combined effectiveness of treatments. In other areas of the healthcare curricula, by supplementing lectures, workbooks and workshops with pedagogically designed, multimedia learning objects (known as reusable learning objects or RLOs), we have shown an improvement in students' perceived understanding in subjects they found difficult. In this study we describe the development and evaluation of two RLOs on meta-analysis. The RLOs supplement associated lectures and aim to improve students' understanding of meta-analysis in healthcare students. Methods: Following a quality-controlled design process, two RLOs were developed and delivered to two cohorts of students, a Master in Public Health course and a Postgraduate Diploma in Nursing course. Students' understanding of five key concepts of meta-analysis was measured before and after a lecture and again after RLO use. RLOs were also evaluated for their educational value, learning support, media attributes and usability using closed and open questions. Results: Students rated their understanding of meta-analysis as improved after a lecture and further improved after completing the RLOs (Wilcoxon paired test). Conclusions: Meta-analysis RLOs that are openly accessible and unrestricted by usernames and passwords provide flexible support for students who find the process of meta-analysis difficult.

  11. A Model-free Approach to Fault Detection of Continuous-time Systems Based on Time Domain Data

    Institute of Scientific and Technical Information of China (English)

    Ping Zhang; Steven X. Ding

    2007-01-01

    In this paper, a model-free approach is presented to design an observer-based fault detection system of linear continuous-time systems based on input and output data in the time domain. The core of the approach is to directly identify parameters of the observer-based residual generator based on a numerically reliable data equation obtained by filtering and sampling the input and output signals.

  12. A decision support system prototype including human factors based on the TOGA meta-theory approach

    International Nuclear Information System (INIS)

    Cappelli, M.; Memmi, F.; Gadomski, A. M.; Sepielli, M.

    2012-01-01

    The human contribution to the risk of operation of complex technological systems is often not negligible and sometimes tends to become significant, as shown by many reports on incidents and accidents that occurred in the past inside Nuclear Power Plants (NPPs). An error of a human operator of an NPP can derive from both omission and commission. For instance, complex commission errors can also lead to significant catastrophic technological accidents, as in the case of the Three Mile Island accident. Typically, the problem is analyzed by focusing on the single event chain that has provoked the incident or accident. What is needed is a general framework able to include as many parameters as possible, i.e. both technological and human factors. Such a general model could make it possible to envisage an omission or commission error before it happens or, alternatively, suggest preferred actions to take in order to neutralize the effect of the error before it becomes critical. In this paper, a preliminary Decision Support System (DSS) based on the so-called (-) TOGA meta-theory approach is presented. The application of such a theory to the management of nuclear power plants was presented at the previous ICAPP 2011. Here, a human factor simulator prototype is proposed in order to include the effect of human errors in the decision path. The DSS has been developed using a TRIGA research reactor as reference plant, and implemented using the LabVIEW programming environment and the Finite State Machine (FSM) model. The proposed DSS shows how to apply the Universal Reasoning Paradigm (URP) and the Universal Management Paradigm (UMP) to a real plant context. The DSS receives inputs from instrumentation data and gives as output a suggested decision. It is obtained as the result of an internal elaborating process based on a performance function. The latter describes the degree of satisfaction and efficiency, which are dependent on the level of responsibility related to

  13. A Novel Imbalanced Data Classification Approach Based on Logistic Regression and Fisher Discriminant

    Directory of Open Access Journals (Sweden)

    Baofeng Shi

    2015-01-01

    Full Text Available We introduce an imbalanced data classification approach based on logistic regression significant discriminant and the Fisher discriminant. First, a key-indicator extraction model based on logistic regression significant discriminant and correlation analysis is derived to extract features for customer classification. Secondly, a customer scoring model is established on the basis of linear weighting using the Fisher discriminant. Then, a customer rating model in which the number of customers across ratings follows a normal distribution is constructed. The performance of the proposed model and the classical SVM classification method is evaluated in terms of their ability to correctly classify customers as default or non-default. Empirical results using data on 2157 customers in financial engineering suggest that the proposed approach performs better than the SVM model in dealing with imbalanced data classification. Moreover, our approach helps banks and bond investors locate qualified customers.
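
    The two building blocks named here, selecting indicators by the significance of their logistic-regression coefficients and then scoring customers with a Fisher (linear) discriminant, can be sketched as follows. The synthetic data, the 0.05 significance cut-off, and the variable names are illustrative and do not reproduce the paper's credit-rating model.

import numpy as np
import statsmodels.api as sm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n, p = 2000, 8
X = rng.normal(size=(n, p))
# Imbalanced default labels (~5% positives) driven by the first two indicators.
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 3.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Step 1: keep indicators whose logistic-regression coefficients are significant.
fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
selected = np.flatnonzero(np.asarray(fit.pvalues)[1:] < 0.05)   # skip the intercept

# Step 2: linear (Fisher) discriminant scores on the selected indicators.
lda = LinearDiscriminantAnalysis().fit(X[:, selected], y)
scores = lda.decision_function(X[:, selected])
print("selected indicators:", selected, "score range:", scores.min().round(2), scores.max().round(2))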

  14. The influence of affective and cognitive arguments on message judgement and attitude change: The moderating effects of meta-bases and structural bases.

    Science.gov (United States)

    Keer, Mario; van den Putte, Bas; Neijens, Peter; de Wit, John

    2013-01-01

    This study investigated whether the efficacy of affective vs. cognitive persuasive messages was moderated by (1) individuals' subjective assessments of whether their attitudes were based on affect or cognition (i.e. meta-bases) and (2) the degree individuals' attitudes were correlated with affect and cognition (i.e. structural bases). Participants (N = 97) were randomly exposed to a message containing either affective or cognitive arguments discouraging binge drinking. The results demonstrated that meta-bases and not structural bases moderated the influence of argument type on message judgement. Affective (cognitive) messages were judged more positively when individuals' meta-bases were more affective (cognitive). In contrast, structural bases and not meta-bases moderated the influence of argument type on attitude and intention change following exposure to the message. Surprisingly, change was greater among individuals who read a message that mismatched their structural attitude base. Affective messages were more effective as attitudes were more cognition-based, and vice versa. Thus, although individuals prefer messages that match their meta-base, attitude and intention change regarding binge drinking are best established by mismatching their structural base.

  15. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    Science.gov (United States)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in its original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
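
    The central idea, a small index that maps logical array coordinates (time step, spatial block) to physical byte offsets so a query can seek directly into the native file, can be mimicked in a few lines. This sketch indexes a plain binary array on local disk and only illustrates the bookkeeping; it does not reproduce the Hadoop/MapReduce machinery or the MERRA file format, and the grid dimensions are arbitrary.

import numpy as np

LAT, LON = 180, 360
RECORD_BYTES = LAT * LON * 4                      # one float32 grid per time step

def build_index(n_timesteps):
    """Map each (time step, latitude band) to its byte offset in the binary file."""
    index = {}
    for t in range(n_timesteps):
        for band in range(0, LAT, 30):            # 30-degree latitude blocks
            offset = t * RECORD_BYTES + band * LON * 4
            index[(t, band)] = offset
    return index

def read_block(path, index, t, band):
    """Seek directly to one spatiotemporal block instead of scanning the whole file."""
    with open(path, "rb") as f:
        f.seek(index[(t, band)])
        data = np.frombuffer(f.read(30 * LON * 4), dtype=np.float32)
    return data.reshape(30, LON)

# Usage sketch (assumes 'temperature.bin' holds n_timesteps contiguous float32 grids):
# idx = build_index(n_timesteps=365)
# block = read_block("temperature.bin", idx, t=200, band=60)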

  16. Multivariate Meta-Analysis of Genetic Association Studies: A Simulation Study.

    Directory of Open Access Journals (Sweden)

    Binod Neupane

    Full Text Available In a meta-analysis with multiple end points of interest that are correlated between or within studies, the multivariate approach to meta-analysis has the potential to produce more precise estimates of effects by exploiting the correlation structure between end points. However, under the random-effects assumption the multivariate estimation is more complex (as it involves estimation of more parameters simultaneously than univariate estimation), and sometimes can produce unrealistic parameter estimates. The usefulness of the multivariate approach to meta-analysis of the effects of a genetic variant on two or more correlated traits is not well understood in the area of genetic association studies. In such studies, genetic variants are expected to roughly maintain Hardy-Weinberg equilibrium within studies, and also their effects on complex traits are generally very small to modest and could be heterogeneous across studies for genuine reasons. We carried out extensive simulation to explore the comparative performance of the multivariate approach with the most commonly used univariate inverse-variance weighted approach under the random-effects assumption in various realistic meta-analytic scenarios of genetic association studies of correlated end points. We evaluated the performance with respect to relative mean bias percentage, root mean square error (RMSE) of the estimate, and coverage probability of the corresponding 95% confidence interval of the effect for each end point. Our simulation results suggest that the multivariate approach performs similarly to or better than the univariate method when correlations between end points within or between studies are at least moderate and between-study variation is similar to or larger than average within-study variation for meta-analyses of 10 or more genetic studies. The multivariate approach produces estimates with smaller bias and RMSE especially for the end point that has randomly or informatively missing summary data in some individual studies, when

  17. Evaluation of Internet-Based Interventions on Waist Circumference Reduction: A Meta-Analysis.

    Science.gov (United States)

    Seo, Dong-Chul; Niu, Jingjing

    2015-07-21

    Internet-based interventions are more cost-effective than conventional interventions and can provide immediate, easy-to-access, and individually tailored support for behavior change. Waist circumference is a strong predictor of an increased risk for a host of diseases, such as hypertension, diabetes, and dyslipidemia, independent of body mass index. To date, no study has examined the effect of Internet-based lifestyle interventions on waist circumference change. This study aimed to systematically review the effect of Internet-based interventions on waist circumference change among adults. This meta-analysis reviewed randomized controlled trials (N=31 trials and 8442 participants) that used the Internet as a main intervention approach and reported changes in waist circumference. Internet-based interventions showed a significant reduction in waist circumference (mean change -2.99 cm, 95% CI -3.68 to -2.30, I(2)=93.3%) and significantly better effects on waist circumference loss (mean loss 2.38 cm, 95% CI 1.61-3.25, I(2)=97.2%) than minimal interventions such as information-only groups. Meta-regression results showed that baseline waist circumference, gender, and the presence of social support in the intervention were significantly associated with waist circumference reduction. Internet-based interventions have a significant and promising effect on waist circumference change. Incorporating social support into an Internet-based intervention appears to be useful in reducing waist circumference. Considerable heterogeneity exists among the effects of Internet-based interventions. The design of an intervention may have a significant impact on the effectiveness of the intervention.

  18. From MetaCognition to MetaPractition

    DEFF Research Database (Denmark)

    Alcock, Gordon Lindsay

    developed frameworks for both quantitative and qualitative Metacognitive and ‘Meta-practitive’ reflection. Designed to help students adapt to, and adopt, new learning strategies; accelerate their understanding and performance within a collaborative ‘Profession Bachelor’ and PBL culture, the author documents...... ‘Metacognitive’ learning portfolios in the initial ‘Learning to Learn’ (L2L) environment and a self-authoring, ‘Meta-practitive’ approach in the later stages of an ‘Architectural Technology’ degree....

  19. Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.

    Science.gov (United States)

    Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie

    2016-04-01

    Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied, in the context of chronic lung disease, to aggregated German sickness fund data (from 1999) covering over 7.3 million insured. To assess the gain in numerical efficiency, the computational time of the innovative approach was compared with corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, for patient-level data, computational time increased disproportionately with sample size. Furthermore, using patient-level data was accompanied by a substantial risk of model failure (about 80% for 6 million subjects). The gain in computational efficiency of the innovative COI method seems to be of practical relevance. Furthermore, it may yield more precise cost estimates.
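
    The shape of this prevalence-based, top-down workflow, aggregate the individual records, fit a regression on the aggregated data, predict costs with and without the disease indicator, and take the difference, can be outlined with pandas and statsmodels. The sketch below substitutes a weighted least-squares model with a simple age polynomial for the generalized additive model, and uses entirely synthetic data, so it illustrates the workflow rather than the published method; the error-propagation step is omitted.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200_000
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "sex": rng.integers(0, 2, n),
    "prevalent": (rng.random(n) < 0.07).astype(int),          # chronic disease indicator
})
df["cost"] = 800 + 12 * df["age"] + 900 * df["prevalent"] + rng.normal(0, 300, n)

# Step 1: aggregate individual records into strata (mean cost and cell size).
agg = df.groupby(["age", "sex", "prevalent"], as_index=False).agg(cost=("cost", "mean"), n=("cost", "size"))

# Step 2: fit a (weighted) regression on the aggregated data.
model = smf.wls("cost ~ age + I(age**2) + sex + prevalent", data=agg, weights=agg["n"]).fit()

# Steps 3-4: predict costs with and without prevalence and compare.
excess = model.predict(agg.assign(prevalent=1)) - model.predict(agg.assign(prevalent=0))
print("estimated mean excess cost per prevalent subject:", round(excess.mean(), 1))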

  20. Big Data-Based Approach to Detect, Locate, and Enhance the Stability of an Unplanned Microgrid Islanding

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Li, Yan; Zhang, Yingchen; Zhang, Jun Jason; Gao, David Wenzhong; Muljadi, Eduard; Gu, Yi

    2017-10-01

    In this paper, a big data-based approach is proposed for the security improvement of an unplanned microgrid islanding (UMI). The proposed approach contains two major steps: the first step is big data analysis of wide-area monitoring to detect a UMI and locate it; the second step is particle swarm optimization (PSO)-based stability enhancement for the UMI. First, an optimal synchrophasor measurement device selection (OSMDS) and matching pursuit decomposition (MPD)-based spatial-temporal analysis approach is proposed to significantly reduce the volume of data while keeping appropriate information from the synchrophasor measurements. Second, a random forest-based ensemble learning approach is trained to detect the UMI. When combined with grid topology, the UMI can be located. Then the stability problem of the UMI is formulated as an optimization problem and the PSO is used to find the optimal operational parameters of the UMI. An eigenvalue-based multiobjective function is proposed, which aims to improve the damping and dynamic characteristics of the UMI. Finally, the simulation results demonstrate the effectiveness and robustness of the proposed approach.
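
    The detection stage, training a random-forest ensemble on features extracted from synchrophasor measurements to label operating snapshots as islanded or grid-connected, can be sketched with scikit-learn. The features and labels below are randomly generated placeholders; the paper derives its inputs from the OSMDS/MPD spatial-temporal analysis of real measurements rather than from synthetic data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(11)
n = 5000
# Placeholder features: e.g. frequency deviation, rate of change of frequency, voltage angle spread.
X = rng.normal(size=(n, 3))
# Placeholder labels: 1 = unplanned islanding event, 0 = normal grid-connected operation.
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))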

  1. Effects of feedback in a computer-based learning environment on students’ learning outcomes: a meta-analysis

    NARCIS (Netherlands)

    van der Kleij, Fabienne; Feskens, Remco C.W.; Eggen, Theodorus Johannes Hendrikus Maria

    2015-01-01

    In this meta-analysis, we investigated the effects of methods for providing item-based feedback in a computer-based environment on students’ learning outcomes. From 40 studies, 70 effect sizes were computed, which ranged from −0.78 to 2.29. A mixed model was used for the data analysis. The results

  2. Hybrid Feature Selection Approach Based on GRASP for Cancer Microarray Data

    Directory of Open Access Journals (Sweden)

    Arpita Nagpal

    2017-01-01

    Full Text Available Microarray data usually contain a large number of genes but a small number of samples. Feature subset selection for microarray data aims at reducing the number of genes so that useful information can be extracted from the samples. Reducing the dimension of data sets further helps in improving the computational efficiency of the learning model. In this paper, we propose a modified algorithm that uses tabu search as the local search procedure within a Greedy Randomized Adaptive Search Procedure (GRASP) for high-dimensional microarray data sets. The proposed tabu-based Greedy Randomized Adaptive Search Procedure algorithm is named TGRASP. In TGRASP, a new parameter named Tabu Tenure has been introduced and the existing parameters, NumIter and size, have been modified. We observed that different parameter settings affect the quality of the optimum. The second proposed algorithm, known as FFGRASP (Firefly Greedy Randomized Adaptive Search Procedure), uses a firefly optimization algorithm in the local search optimization phase of the greedy randomized adaptive search procedure (GRASP). The firefly algorithm is one of the powerful algorithms for optimization of multimodal applications. Experimental results show that the proposed TGRASP and FFGRASP algorithms are much better than the existing algorithm with respect to three performance parameters, viz. accuracy, run time, and the number of selected features. We have also compared both approaches with a unified metric (Extended Adjusted Ratio of Ratios), which showed that the TGRASP approach outperforms the existing approach for six out of nine cancer microarray datasets and FFGRASP performs better on seven out of nine datasets.

  3. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion
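    A far simpler stand-in for the feature-extraction step: the sketch below thresholds a smoothed scalar field, labels connected components as "features", and records per-feature statistical moments, loosely mirroring the merge-tree meta-data described in the record. The synthetic field and the threshold choice are assumptions.

    ```python
    # Toy analogue of feature-based statistics (much simpler than merge trees):
    # threshold a scalar field, label connected components, record per-feature moments.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    temperature = ndimage.gaussian_filter(rng.random((128, 128)), sigma=4)

    threshold = np.percentile(temperature, 95)          # an arbitrary example threshold
    labels, n_features = ndimage.label(temperature > threshold)
    ids = np.arange(1, n_features + 1)

    stats = {
        "size": ndimage.sum(np.ones_like(temperature), labels, ids),
        "mean": ndimage.mean(temperature, labels, ids),
        "std":  ndimage.standard_deviation(temperature, labels, ids),
        "max":  ndimage.maximum(temperature, labels, ids),
    }
    for i in range(n_features):
        print(f"feature {ids[i]}: " + ", ".join(f"{k}={v[i]:.3f}" for k, v in stats.items()))
    ```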

  4. Laparoscopic splenectomy is a better surgical approach for spleen-relevant disorders: a comprehensive meta-analysis based on 15-year literatures.

    Science.gov (United States)

    Cheng, Ji; Tao, Kaixiong; Yu, Peiwu

    2016-10-01

    Currently, whether laparoscopic or open splenectomy is the gold standard option for spleen abnormalities remains controversial, and there is a deficiency of evidence concerning the surgical efficacy and safety of the two approaches. In order to appraise the potential of both approaches, we performed this comprehensive meta-analysis on the basis of 15 years of literature. Via searches of the PubMed, EMBASE, Web of Science, and Cochrane Library databases, 37 original articles were incorporated into our meta-analysis and subdivided into six sections. In accordance with the Cochrane Collaboration protocol, all statistical procedures were conducted in a standard manner. Publication bias was additionally evaluated by funnel plot and Egger's test. Irrespective of the diverse splenic disorders, laparoscopic splenectomy was superior to the open technique owing to lower estimated blood loss, shorter postoperative hospital stay, and lower complication rate (P < 0.05). Technically, laparoscopic splenectomy should be recommended as the preferred approach given its rapid recovery and minimal physical trauma, in addition to surgical efficacy comparable to that of the open procedure.

  5. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study

    Directory of Open Access Journals (Sweden)

    Tania Dehesh

    2015-01-01

    Full Text Available Background. Univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least square (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC) on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard models coefficients in a simulation study. Result. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggested the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.

  6. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study.

    Science.gov (United States)

    Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi

    2015-01-01

    Univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least square (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of four new approaches including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC) on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard models coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggested the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.
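    To make the pooling step concrete, here is a minimal fixed-effect multivariate GLS sketch: per-study coefficient vectors are combined with inverse-covariance weights, and a zero-correlation (ZC-style) covariance is built from standard errors alone. The coefficients and standard errors below are invented for illustration, and the CC/EC/MMC approximations of the record are not implemented.

    ```python
    # Fixed-effect multivariate GLS pooling of per-study coefficient vectors.
    import numpy as np

    def mgls_pool(coefs, covs):
        """coefs: list of (p,) arrays; covs: list of (p, p) covariance matrices."""
        weights = [np.linalg.inv(s) for s in covs]
        pooled_cov = np.linalg.inv(sum(weights))
        pooled = pooled_cov @ sum(w @ b for w, b in zip(weights, coefs))
        return pooled, pooled_cov

    def zero_correlation_cov(se):
        """ZC-style approximation: build the covariance from standard errors only."""
        return np.diag(np.asarray(se) ** 2)

    b1, b2 = np.array([0.50, -0.20]), np.array([0.35, -0.10])
    s1, s2 = zero_correlation_cov([0.10, 0.08]), zero_correlation_cov([0.15, 0.12])
    beta, cov = mgls_pool([b1, b2], [s1, s2])
    print("pooled coefficients:", beta)
    print("pooled standard errors:", np.sqrt(np.diag(cov)))
    ```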

  7. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior.

    Science.gov (United States)

    Hagger, Martin S; Chan, Derwin K C; Protogerou, Cleo; Chatzisarantis, Nikos L D

    2016-08-01

    Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs from social cognitive theories is important to test nomological validity, account for mediation effects, and evaluate unique effects of theory constructs independent of past behavior. We illustrate our points by conducting new analyses of two meta-analyses of a popular theory applied to health behaviors, the theory of planned behavior. We conducted meta-analytic path analyses of the theory in two behavioral contexts (alcohol and dietary behaviors) using data from the primary studies included in the original meta-analyses augmented to include intercorrelations among constructs and relations with past behavior missing from the original analysis. Findings supported the nomological validity of the theory and its hypotheses for both behaviors, confirmed important model processes through mediation analysis, demonstrated the attenuating effect of past behavior on theory relations, and provided estimates of the unique effects of theory constructs independent of past behavior. Our analysis illustrates the importance of conducting a simultaneous test of theory-stipulated effects in meta-analyses of social cognitive theories applied to health behavior. We recommend researchers adopt this analytic procedure when synthesizing evidence across primary tests of social cognitive theories in health. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Digital Simulation-Based Training: A Meta-Analysis

    Science.gov (United States)

    Gegenfurtner, Andreas; Quesada-Pallarès, Carla; Knogler, Maximilian

    2014-01-01

    This study examines how design characteristics in digital simulation-based learning environments moderate self-efficacy and transfer of learning. Drawing on social cognitive theory and the cognitive theory of multimedia learning, the meta-analysis psychometrically cumulated k = 15 studies of 25 years of research with a total sample size of…

  9. Meta-Learning Approach for Automatic Parameter Tuning: A Case Study with Educational Datasets

    Science.gov (United States)

    Molina, M. M.; Luna, J. M.; Romero, C.; Ventura, S.

    2012-01-01

    This paper proposes the use of a meta-learning approach for automatic parameter tuning of a well-known decision tree algorithm by using past information about algorithm executions. Fourteen educational datasets were analysed using various combinations of parameter values to examine the effects of the parameter values on classification accuracy.…

  10. Effects of Mindfulness-Based Stress Reduction on Depression in Adolescents and Young Adults: A Systematic Review and Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Xinli Chi

    2018-06-01

    Full Text Available Background: Mindfulness as a positive mental health intervention approach has been increasingly applied to address depression in young people. This systematic review and meta-analysis evaluated the effects of mindfulness-based stress reduction (MBSR) in the treatment of depression among adolescents and young adults. Methods: Electronic databases and references in articles were searched. Randomized controlled trials (RCTs) evaluating MBSR and reporting outcomes for depressive symptoms among young people aged 12 to 25 years were included. Data extraction and risk of bias assessment were conducted by two reviewers independently. Hedges' g with a 95% confidence interval was calculated to represent the intervention effect. Results: Eighteen RCTs featuring 2,042 participants were included in the meta-analysis. Relative to the control groups (e.g., no treatment, treatment as usual, or active control), MBSR had moderate effects in reducing depressive symptoms at the end of intervention (Hedges' g = −0.45). No statistically significant effects were found at follow-up (Hedges' g = −0.24) due to a lack of statistical power. Meta-regression found that the average treatment effect might be moderated by control condition, treatment duration, and participants' baseline depression. Conclusion: MBSR had moderate effects in reducing depression in young people at posttest. Future research is needed to assess the follow-up effects of MBSR on depressive symptoms among adolescents and young adults.
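    A minimal sketch of the effect-size machinery referenced in the record: Hedges' g per trial followed by a DerSimonian-Laird random-effects pooled estimate. The trial summary statistics below are made-up placeholders, not data from the review.

    ```python
    # Hedges' g per trial, then a DerSimonian-Laird random-effects pooled estimate.
    import numpy as np

    def hedges_g(m1, sd1, n1, m2, sd2, n2):
        sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp
        g = d * (1 - 3 / (4 * (n1 + n2) - 9))            # small-sample correction
        var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
        return g, var

    def dersimonian_laird(g, v):
        g, v = np.asarray(g), np.asarray(v)
        w = 1 / v
        fixed = np.sum(w * g) / np.sum(w)
        q = np.sum(w * (g - fixed) ** 2)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(g) - 1)) / c)          # between-study variance
        w_star = 1 / (v + tau2)
        pooled = np.sum(w_star * g) / np.sum(w_star)
        se = np.sqrt(1 / np.sum(w_star))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

    trials = [(-0.6, 1.0, 30, 0.0, 1.0, 32),             # (m1, sd1, n1, m2, sd2, n2)
              (-0.3, 0.9, 50, 0.1, 1.1, 48),
              (-0.5, 1.2, 25, -0.1, 1.0, 27)]
    effects = [hedges_g(*t) for t in trials]
    print(dersimonian_laird([e[0] for e in effects], [e[1] for e in effects]))
    ```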

  11. Comparative efficacy of pharmacological and non-pharmacological interventions for the acute treatment of adult outpatients with anorexia nervosa: study protocol for the systematic review and network meta-analysis of individual data.

    Science.gov (United States)

    Wade, Tracey D; Treasure, Janet; Schmidt, Ulrike; Fairburn, Christopher G; Byrne, Susan; Zipfel, Stephan; Cipriani, Andrea

    2017-01-01

    Outpatient treatment studies of anorexia nervosa (AN) are notoriously hard to conduct given the ambivalence of the patient group and high drop-out rates. It is therefore not surprising that previous meta-analyses of pharmacological and psychological treatments for outpatient treatment of adult AN have proved to be inconclusive. Network meta-analysis (NMA) has the potential to overcome the limitations of pairwise meta-analysis, as this approach can compare multiple treatments using both direct comparisons of interventions within randomized controlled trials (RCTs) and indirect comparisons across trials based on a common comparator. To date there is no published example of this approach with eating disorders, and the current study provides a protocol which will use NMA to advance knowledge about what outpatient therapy works best for which patients with AN by conducting both direct and indirect comparisons of different treatments and the moderating variables. Searches of electronic databases will be supplemented with manual searches for published, unpublished and ongoing RCTs in international registries, and clinical trials registries of regulatory agencies and pharmaceutical companies. Two reviewers will independently extract the data and where possible we will access individual data in order to examine moderators of treatment. Two primary outcomes will be selected: changes to body mass index and changes to global eating disorder psychopathology. The secondary outcome is the total number of patients who, at 12 months post-randomization, attained over the previous 28-day period: (i) BMI > 18.5, and (ii) global eating disorder psychopathology to within 1 SD of community norms. We will also provide a statistical evaluation of consistency, the agreement between direct and indirect evidence. Descriptive statistics across all eligible trials will be provided along with a network diagram, where the size of the nodes will reflect the amount of evidence accumulated for

  12. Plasma L-tryptophan concentration in major depressive disorder: new data and meta-analysis.

    Science.gov (United States)

    Ogawa, Shintaro; Fujii, Takashi; Koga, Norie; Hori, Hiroaki; Teraishi, Toshiya; Hattori, Kotaro; Noda, Takamasa; Higuchi, Teruhiko; Motohashi, Nobutaka; Kunugi, Hiroshi

    2014-09-01

    Tryptophan, an essential amino acid, is the precursor to serotonin and is metabolized mainly by the kynurenine pathway. Both serotonin and kynurenine have been implicated in the pathophysiology of major depressive disorder (MDD). However, plasma tryptophan concentration in patients with MDD has not unequivocally been reported to be decreased, which prompted us to perform a meta-analysis on previous studies and our own data. We searched the PubMed database for case-control studies published until August 31, 2013, using the search terms plasma AND tryptophan AND synonyms for MDD. An additional search was performed for the term amino acid instead of tryptophan. We obtained our own data in 66 patients with MDD (DSM-IV) and 82 controls who were recruited from March 2011 to July 2012. The majority of the patients were medicated (N = 53). Total plasma tryptophan concentrations were measured by the liquid chromatography/mass spectrometry method. We scrutinized 160 studies for eligibility. Original articles that were written in English and documented plasma tryptophan values in patients and controls were selected. We included 24 studies from the literature and our own data in the meta-analysis, which involved a total of 744 patients and 793 controls. Data on unmedicated patients (N = 156) and their comparison subjects (N = 203) were also extracted. To see the possible correlation between tryptophan concentrations and depression severity, meta-regression analysis was performed for 10 studies with the Hamilton Depression Rating Scale 17-item version score. In our case-control study, mean (SD) plasma tryptophan level was significantly decreased in the MDD patients versus the controls (53.9 [10.9] vs 57.2 [11.3] μmol/L; P = .03). The meta-analysis after adjusting for publication bias showed a significant decrease in patients with MDD with a modest effect size (Hedges g, -0.45). However, analysis on unmedicated subjects yielded a large effect (Hedges g, -0.84; P = .00015). We

  13. Empirical Comparison of Publication Bias Tests in Meta-Analysis.

    Science.gov (United States)

    Lin, Lifeng; Chu, Haitao; Murad, Mohammad Hassan; Hong, Chuan; Qu, Zhiyong; Cole, Stephen R; Chen, Yong

    2018-04-16

    Decision makers rely on meta-analytic estimates to trade off benefits and harms. Publication bias impairs the validity and generalizability of such estimates. The performance of various statistical tests for publication bias has been largely compared using simulation studies and has not been systematically evaluated in empirical data. This study compares seven commonly used publication bias tests (i.e., Begg's rank test, trim-and-fill, Egger's, Tang's, Macaskill's, Deeks', and Peters' regression tests) based on 28,655 meta-analyses available in the Cochrane Library. Egger's regression test detected publication bias more frequently than other tests (15.7% in meta-analyses of binary outcomes and 13.5% in meta-analyses of non-binary outcomes). The proportion of statistically significant publication bias tests was greater for larger meta-analyses, especially for Begg's rank test and the trim-and-fill method. The agreement among Tang's, Macaskill's, Deeks', and Peters' regression tests for binary outcomes was moderately strong (most κ's were around 0.6). Tang's and Deeks' tests had fairly similar performance (κ > 0.9). The agreement among Begg's rank test, the trim-and-fill method, and Egger's regression test was weak or moderate (κ < 0.5). Given the relatively low agreement between many publication bias tests, meta-analysts should not rely on a single test and may apply multiple tests with various assumptions. Non-statistical approaches to evaluating publication bias (e.g., searching clinical trials registries, records of drug approving agencies, and scientific conference proceedings) remain essential.
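    For concreteness, here is a minimal sketch of one of the tests compared above, Egger's regression test: regress each study's standard normal deviate on its precision and test whether the intercept differs from zero. The effect sizes and standard errors are illustrative numbers only.

    ```python
    # Egger's regression test sketch for small-study/publication-bias asymmetry.
    import numpy as np
    import statsmodels.api as sm

    effects = np.array([0.42, 0.35, 0.60, 0.15, 0.55, 0.28, 0.70, 0.10])   # e.g., log odds ratios
    ses     = np.array([0.10, 0.15, 0.30, 0.08, 0.25, 0.12, 0.35, 0.07])

    snd = effects / ses                 # standard normal deviate per study
    precision = 1.0 / ses
    model = sm.OLS(snd, sm.add_constant(precision)).fit()

    intercept, p_value = model.params[0], model.pvalues[0]
    print(f"Egger intercept = {intercept:.3f}, p = {p_value:.3f}")
    print("asymmetry suggested" if p_value < 0.10 else "no clear asymmetry")
    ```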

  14. Effects of Botanical Insecticides on Hymenopteran Parasitoids: a Meta-analysis Approach.

    Science.gov (United States)

    Monsreal-Ceballos, R J; Ruiz-Sánchez, E; Ballina-Gómez, H S; Reyes-Ramírez, A; González-Moreno, A

    2018-02-10

    Botanical insecticides (BIs) are considered a valuable alternative for plant protection in sustainable agriculture. The use of both BIs and parasitoids is presumed to be a mutually compatible pest management practice. However, there is controversy on this subject, as various studies have reported lethal and sublethal effects of BIs on hymenopteran parasitoids. To shed new light on this controversy, a meta-analytic approach of the effects of BIs on adult mortality, parasitism, and parasitoid emergence under laboratory conditions was performed. We show that BIs increased mortality, decreased parasitism, and decreased parasitoid emergence. Botanical insecticides derived from Nicotiana tabacum and Calceolaria andina were particularly lethal. Most of the parasitoid groups showed susceptibility to BIs, but the families Scelionidae and Ichneumonidae were not significantly affected. The negative effects of BIs were seen regardless of the type of exposure (topical, ingestion, or residual). In conclusion, this meta-analysis showed that under laboratory conditions, exposure of hymenopteran parasitoids to BIs had significant negative effects on adult mortality, parasitism, and parasitoid emergence.

  15. Foundation: Transforming data bases into knowledge bases

    Science.gov (United States)

    Purves, R. B.; Carnes, James R.; Cutts, Dannie E.

    1987-01-01

    One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.

  16. Meta-analyses of workplace physical activity and dietary behaviour interventions on weight outcomes

    NARCIS (Netherlands)

    Verweij, L.M.; Coffeng, J.; Mechelen, W. van; Proper, K.I.

    2011-01-01

    This meta-analytic review critically examines the effectiveness of workplace interventions targeting physical activity, dietary behaviour or both on weight outcomes. Data could be extracted from 22 studies published between 1980 and November 2009 for meta-analyses. The GRADE approach was used to

  17. A SQL-Database Based Meta-CASE System and its Query Subsystem

    Science.gov (United States)

    Eessaar, Erki; Sgirka, Rünno

    Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database management system (ORDBMS) as its basis. The use of ORDBMSs allows us to integrate different parts of the system and simplify the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate values of software measures. We illustrate the use of the system with the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system by using the PostgreSQL™ ORDBMS and the PHP scripting language.

  18. Meta-Analysis of Placental Transcriptome Data Identifies a Novel Molecular Pathway Related to Preeclampsia.

    Directory of Open Access Journals (Sweden)

    Miranda van Uitert

    Full Text Available Studies using the placental transcriptome to identify key molecules relevant for preeclampsia are hampered by a relatively small sample size. In addition, they use a variety of bioinformatics and statistical methods, making comparison of findings challenging. To generate a more robust preeclampsia gene expression signature, we performed a meta-analysis on the original data of 11 placenta RNA microarray experiments, representing 139 normotensive and 116 preeclamptic pregnancies. Microarray data were pre-processed and analyzed using standardized bioinformatics and statistical procedures and the effect sizes were combined using an inverse-variance random-effects model. Interactions between genes in the resulting gene expression signature were identified by pathway analysis (Ingenuity Pathway Analysis, Gene Set Enrichment Analysis, Graphite) and protein-protein associations (STRING). This approach has resulted in a comprehensive list of differentially expressed genes that led to a 388-gene meta-signature of the preeclamptic placenta. Pathway analysis highlights the involvement of the previously identified hypoxia/HIF1A pathway in the establishment of the preeclamptic gene expression profile, while analysis of protein interaction networks indicates CREBBP/EP300 as a novel element central to the preeclamptic placental transcriptome. In addition, there is an apparent high incidence of preeclampsia in women carrying a child with a mutation in CREBBP/EP300 (Rubinstein-Taybi Syndrome). The 388-gene preeclampsia meta-signature offers a vital starting point for further studies into the relevance of these genes (in particular CREBBP/EP300) and their concomitant pathways as biomarkers or functional molecules in preeclampsia. This will result in a better understanding of the molecular basis of this disease and opens up the opportunity to develop rational therapies targeting the placental dysfunction causal to preeclampsia.

  19. Physical activity and stroke: a meta-analysis of observational data

    NARCIS (Netherlands)

    Wendel-Vos, G.C.W.; Schuit, A.J.; Feskens, E.J.M.; Boshuizen, H.C.; Verschuren, W.M.M.; Saris, W.H.M.; Kromhout, D.

    2004-01-01

    Background Based on studies published so far, the protective effect of physical activity on stroke remains controversial. Specifically, there is a lack of insight into the sources of heterogeneity between studies. Methods Meta-analysis of observational studies was used to quantify the relationship

  20. Optimal patient education for cancer pain: a systematic review and theory-based meta-analysis.

    Science.gov (United States)

    Marie, N; Luckett, T; Davidson, P M; Lovell, M; Lal, S

    2013-12-01

    Previous systematic reviews have found patient education to be moderately efficacious in decreasing the intensity of cancer pain, but variation in results warrants analysis aimed at identifying which strategies are optimal. A systematic review and meta-analysis was undertaken using a theory-based approach to classifying and comparing educational interventions for cancer pain. The reference lists of previous reviews and MEDLINE, PsycINFO, and CENTRAL were searched in May 2012. Studies had to be published in a peer-reviewed English language journal and compare the effect on cancer pain intensity of education with usual care. Meta-analyses used standardized effect sizes (ES) and a random effects model. Subgroup analyses compared intervention components categorized using the Michie et al. (Implement Sci 6:42, 2011) capability, opportunity, and motivation behavior (COM-B) model. Fifteen randomized controlled trials met the criteria. As expected, meta-analysis identified a small-moderate ES favoring education versus usual care (ES, -0.27 [-0.47, -0.07]; P = 0.007) with substantial heterogeneity (I² = 71 %). Subgroup analyses based on the taxonomy found that interventions using "enablement" were efficacious (ES, -0.35 [-0.63, -0.08]; P = 0.01), whereas those lacking this component were not (ES, -0.18 [-0.46, 0.10]; P = 0.20). However, the subgroup effect was nonsignificant (P = 0.39), and heterogeneity was not reduced. Factoring in the variable of individualized versus non-individualized influenced neither efficacy nor heterogeneity. The current meta-analysis follows a trend in using theory to understand the mechanisms of complex interventions. We suggest that future efforts focus on interventions that target patient self-efficacy. Authors are encouraged to report comprehensive details of interventions and methods to inform synthesis, replication, and refinement.

  1. Detecting Activation in fMRI Data: An Approach Based on Sparse Representation of BOLD Signal

    Directory of Open Access Journals (Sweden)

    Blanca Guillen

    2018-01-01

    Full Text Available This paper proposes a simple yet effective approach for detecting activated voxels in fMRI data by exploiting the inherent sparsity property of the BOLD signal in the temporal and spatial domains. In the time domain, the approach combines the General Linear Model (GLM) with a Least Absolute Deviation (LAD) based regression method regularized by the l0 pseudonorm to promote sparsity in the parameter vector of the model. In the spatial domain, detection of activated regions is based on thresholding the spatial map of estimated parameters associated with a particular stimulus. The threshold is calculated by exploiting the sparseness of the BOLD signal in the spatial domain assuming a Laplacian distribution model. The proposed approach is validated using synthetic and real fMRI data. For synthetic data, results show that the proposed approach is able to detect most activated voxels without any false activation. For real data, the method is evaluated through comparison with the SPM software. Results indicate that this approach can effectively find activated regions that are similar to those found by SPM, but using a much simpler approach. This study may lead to the development of robust spatial approaches to further simplifying the complexity of classical schemes.
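    A simplified sketch of the two-domain idea, with assumptions stated: an ordinary least-squares GLM stands in for the paper's l0-regularized LAD fit, and the spatial threshold is the upper-tail quantile of a Laplace distribution whose scale is estimated from the beta map. The design matrix and "active voxel" layout are synthetic.

    ```python
    # Per-voxel GLM fit on synthetic data, then a Laplacian-model threshold on the
    # spatial map of stimulus betas (OLS replaces the paper's l0-regularized LAD fit).
    import numpy as np

    rng = np.random.default_rng(0)
    n_scans, n_voxels = 120, 2000
    stimulus = (np.arange(n_scans) % 20 < 10).astype(float)       # toy block design
    X = np.column_stack([np.ones(n_scans), stimulus])

    betas_true = np.zeros(n_voxels)
    betas_true[:50] = 1.5                                         # 50 "active" voxels
    Y = X @ np.vstack([np.zeros(n_voxels), betas_true]) + rng.normal(0, 1, (n_scans, n_voxels))

    # Voxel-wise GLM: solve X B = Y for all voxels at once.
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    stim_betas = B[1]

    # Laplacian threshold: scale b estimated by the mean absolute deviation from the
    # median (the Laplace MLE); keep voxels beyond the upper (1 - alpha) quantile.
    b_scale = np.mean(np.abs(stim_betas - np.median(stim_betas)))
    alpha = 0.001
    threshold = -b_scale * np.log(2 * alpha)
    active = np.flatnonzero(stim_betas > threshold)
    print(f"threshold = {threshold:.2f}, detected {active.size} active voxels")
    ```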

  2. MetaGO: Predicting Gene Ontology of Non-homologous Proteins Through Low-Resolution Protein Structure Prediction and Protein-Protein Network Mapping.

    Science.gov (United States)

    Zhang, Chengxin; Zheng, Wei; Freddolino, Peter L; Zhang, Yang

    2018-03-10

    Homology-based transferal remains the major approach to computational protein function annotation, but it becomes increasingly unreliable when the sequence identity between query and template decreases below 30%. We propose a novel pipeline, MetaGO, to deduce Gene Ontology attributes of proteins by combining sequence homology-based annotation with low-resolution structure prediction and comparison, and partner's homology-based protein-protein network mapping. The pipeline was tested on a large-scale set of 1000 non-redundant proteins from the CAFA3 experiment. Under the stringent benchmark conditions where templates with >30% sequence identity to the query are excluded, MetaGO achieves average F-measures of 0.487, 0.408, and 0.598, for Molecular Function, Biological Process, and Cellular Component, respectively, which are significantly higher than those achieved by other state-of-the-art function annotation methods. Detailed data analysis shows that the major advantage of MetaGO lies in the new functional homolog detections from partner's homology-based network mapping and structure-based local and global structure alignments, the confidence scores of which can be optimally combined through logistic regression. These data demonstrate the power of using a hybrid model incorporating protein structure and interaction networks to deduce new functional insights beyond traditional sequence homology-based referrals, especially for proteins that lack homologous function templates. The MetaGO pipeline is available at http://zhanglab.ccmb.med.umich.edu/MetaGO/. Copyright © 2018. Published by Elsevier Ltd.

  3. Novel Developments of the MetaCrop Information System for Facilitating Systems Biological Approaches

    Directory of Open Access Journals (Sweden)

    Hippe Klaus

    2010-12-01

    Full Text Available Crop plants play a major role in human and animal nutrition and increasingly contribute to the chemical and pharmaceutical industries and to renewable resources. In order to achieve important goals, such as the improvement of growth or yield, it is indispensable to understand biological processes at a detailed level. Therefore, the well-structured management of fine-grained information about metabolic pathways is of high interest. Thus, we developed the MetaCrop information system, a manually curated repository of high-quality information concerning the metabolism of crop plants. However, data access and the flexible export of MetaCrop information in standard exchange formats had to be improved. To automate and accelerate data access we designed a set of web services to be integrated into external software. These web services have already been used by an add-on for the visualisation toolkit VANTED. Furthermore, we developed an export feature for the MetaCrop web interface, thus enabling the user to compose individual metabolic models using SBML.

  4. iTemplate: A template-based eye movement data analysis approach.

    Science.gov (United States)

    Xiao, Naiqi G; Lee, Kang

    2018-02-08

    Current eye movement data analysis methods rely on defining areas of interest (AOIs). Due to the fact that AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for all individual stimuli. This change greatly reduces the error caused by the variance from manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for some advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
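    A minimal sketch of the registration idea, assuming landmark correspondences are available for each stimulus: estimate a least-squares affine transform to the template and map raw fixations into template space so one set of AOIs can serve every stimulus. The landmark and fixation coordinates below are hypothetical.

    ```python
    # Least-squares affine registration of fixation data onto a template.
    import numpy as np

    def fit_affine(src, dst):
        """2-D affine transform mapping src landmarks onto dst, via least squares."""
        src_h = np.column_stack([src, np.ones(len(src))])      # homogeneous coordinates
        params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # (3, 2) parameter matrix
        return params

    def apply_affine(params, points):
        points_h = np.column_stack([points, np.ones(len(points))])
        return points_h @ params

    # Hypothetical landmarks (pixels) on one stimulus and on the template.
    stim_landmarks = np.array([[210., 180.], [290., 182.], [250., 260.]])
    tmpl_landmarks = np.array([[200., 200.], [300., 200.], [250., 290.]])

    A = fit_affine(stim_landmarks, tmpl_landmarks)
    fixations = np.array([[220., 195.], [260., 240.], [300., 170.]])
    print(apply_affine(A, fixations))        # fixations expressed in template space
    ```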

  5. Global meta-analysis of transcriptomics studies.

    Directory of Open Access Journals (Sweden)

    José Caldas

    Full Text Available Transcriptomics meta-analysis aims at re-using existing data to derive novel biological hypotheses, and is motivated by the public availability of a large number of independent studies. Current methods are based on breaking down studies into multiple comparisons between phenotypes (e.g., disease vs. healthy), based on the studies' experimental designs, followed by computing the overlap between the resulting differential expression signatures. While useful, in this methodology each study yields multiple independent phenotype comparisons, and connections are established not between studies, but rather between subsets of the studies corresponding to phenotype comparisons. We propose a rank-based statistical meta-analysis framework that establishes global connections between transcriptomics studies without breaking down studies into sets of phenotype comparisons. By using a rank product method, our framework extracts global features from each study, corresponding to genes that are consistently among the most expressed or differentially expressed genes in that study. Those features are then statistically modelled via a term-frequency inverse-document-frequency (TF-IDF) model, which is then used for connecting studies. Our framework is fast and parameter-free; when applied to large collections of Homo sapiens and Streptococcus pneumoniae transcriptomics studies, it performs better than similarity-based approaches in retrieving related studies, using a Medical Subject Headings gold standard. Finally, we highlight via case studies how the framework can be used to derive novel biological hypotheses regarding related studies and the genes that drive those connections. Our proposed statistical framework shows that it is possible to perform a meta-analysis of transcriptomics studies with arbitrary experimental designs by deriving global expression features rather than decomposing studies into multiple phenotype comparisons.
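    To illustrate the pipeline on synthetic data, the sketch below scores genes with a rank-product-style statistic per study, keeps each study's consistently top genes as "terms", and connects studies through TF-IDF-weighted cosine similarity. It is a toy analogue, not the authors' framework; the number of studies, genes, and top-gene cutoff are arbitrary.

    ```python
    # Rank-product-style feature extraction per study, then TF-IDF study similarity.
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes = 500
    # Each "study" is an expression matrix (samples x genes) with an inflated block.
    studies = []
    for k in range(4):
        expr = rng.normal(size=(10, n_genes))
        expr[:, k * 20:k * 20 + 30] += 2.0          # overlapping blocks of high genes
        studies.append(expr)

    def top_genes(expr, n_top=40):
        # Rank 1 = highest expression within a sample; genes with the smallest
        # geometric-mean rank are consistently among the most expressed.
        ranks = (-expr).argsort(axis=1).argsort(axis=1) + 1
        rank_product = np.exp(np.log(ranks).mean(axis=0))
        return set(np.argsort(rank_product)[:n_top])

    features = [top_genes(e) for e in studies]

    # TF-IDF over binary gene "terms", then cosine similarity between studies.
    tf = np.array([[1.0 if g in f else 0.0 for g in range(n_genes)] for f in features])
    idf = np.log((len(studies) + 1) / (tf.sum(axis=0) + 1)) + 1      # smoothed IDF
    tfidf = tf * idf
    tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True)
    print(np.round(tfidf @ tfidf.T, 2))             # study-by-study similarity matrix
    ```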

  6. Meta-path based heterogeneous combat network link prediction

    Science.gov (United States)

    Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin

    2017-09-01

    The combat system-of-systems in high-tech informative warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate with observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all the six defined types. Next, an HCN link prediction model, based on meta-path features, is built to predict all types of links of the HCN simultaneously. Then, the solution algorithm for the HCN link prediction model is proposed, in which the prediction results are obtained by iteratively updating with the newly predicted results until the results in the HCN converge or reach a certain maximum iteration number. Finally, numerical experiments on the dataset of a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP, in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to those of the baseline methods.
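    As a loose analogue of the meta-path feature idea (not the HCNMP implementation, which handles six link types and iterates), the sketch below counts paths along a single hypothetical sensor-decider-influencer meta-path by multiplying type-to-type adjacency matrices and feeds those counts to a logistic-regression link predictor. The node types and link probabilities are invented.

    ```python
    # Meta-path path counts as features for link prediction in a toy heterogeneous network.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_s, n_d, n_i = 12, 6, 8                             # sensors, deciders, influencers
    SD = (rng.random((n_s, n_d)) < 0.3).astype(float)    # sensor -> decider links
    DI = (rng.random((n_d, n_i)) < 0.3).astype(float)    # decider -> influencer links
    SI = (rng.random((n_s, n_i)) < 0.2).astype(float)    # observed sensor -> influencer links

    # Meta-path S-D-I: number of length-2 paths from each sensor to each influencer.
    # Further meta-paths (e.g., adding a decider-decider matrix) would add more features.
    sdi_counts = SD @ DI

    X = sdi_counts.reshape(-1, 1)                        # one meta-path feature per node pair
    y = SI.reshape(-1)                                   # does the direct link exist?
    clf = LogisticRegression().fit(X, y)
    scores = clf.predict_proba(X)[:, 1].reshape(n_s, n_i)

    # Highest-scoring pairs that are not yet linked are the predicted missing links.
    candidates = np.argsort((scores * (1 - SI)), axis=None)[-3:]
    print("most likely missing links:", np.unravel_index(candidates, SI.shape))
    ```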

  7. Data management and data analysis techniques in pharmacoepidemiological studies using a pre-planned multi-database approach: a systematic literature review.

    Science.gov (United States)

    Bazelier, Marloes T; Eriksson, Irene; de Vries, Frank; Schmidt, Marjanka K; Raitanen, Jani; Haukka, Jari; Starup-Linde, Jakob; De Bruin, Marie L; Andersen, Morten

    2015-09-01

    To identify pharmacoepidemiological multi-database studies and to describe data management and data analysis techniques used for combining data. Systematic literature searches were conducted in PubMed and Embase complemented by a manual literature search. We included pharmacoepidemiological multi-database studies published from 2007 onwards that combined data for a pre-planned common analysis or quantitative synthesis. Information was retrieved about study characteristics, methods used for individual-level analyses and meta-analyses, data management and motivations for performing the study. We found 3083 articles by the systematic searches and an additional 176 by the manual search. After full-text screening of 75 articles, 22 were selected for final inclusion. The number of databases used per study ranged from 2 to 17 (median = 4.0). Most studies used a cohort design (82%) instead of a case-control design (18%). Logistic regression was most often used for individual-level analyses (41%), followed by Cox regression (23%) and Poisson regression (14%). As meta-analysis method, a majority of the studies combined individual patient data (73%). Six studies performed an aggregate meta-analysis (27%), while a semi-aggregate approach was applied in three studies (14%). Information on central programming or heterogeneity assessment was missing in approximately half of the publications. Most studies were motivated by improving power (86%). Pharmacoepidemiological multi-database studies are a well-powered strategy to address safety issues and have increased in popularity. To be able to correctly interpret the results of these studies, it is important to systematically report on database management and analysis techniques, including central programming and heterogeneity testing. © 2015 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd.

  8. Exercise-Based Cardiac Rehabilitation for Coronary Heart Disease: Cochrane Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Anderson, Lindsey; Oldridge, Neil; Thompson, David R; Zwisler, Ann-Dorthe; Rees, Karen; Martin, Nicole; Taylor, Rod S

    2016-01-05

    Although recommended in guidelines for the management of coronary heart disease (CHD), concerns have been raised about the applicability of evidence from existing meta-analyses of exercise-based cardiac rehabilitation (CR). The goal of this study is to update the Cochrane systematic review and meta-analysis of exercise-based CR for CHD. The Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, CINAHL, and Science Citation Index Expanded were searched to July 2014. Retrieved papers, systematic reviews, and trial registries were hand-searched. We included randomized controlled trials with at least 6 months of follow-up, comparing CR to no-exercise controls following myocardial infarction or revascularization, or with a diagnosis of angina pectoris or CHD defined by angiography. Two authors screened titles for inclusion, extracted data, and assessed risk of bias. Studies were pooled using random effects meta-analysis, and stratified analyses were undertaken to examine potential treatment effect modifiers. A total of 63 studies with 14,486 participants with median follow-up of 12 months were included. Overall, CR led to a reduction in cardiovascular mortality (relative risk: 0.74; 95% confidence interval: 0.64 to 0.86) and the risk of hospital admissions (relative risk: 0.82; 95% confidence interval: 0.70 to 0.96). There was no significant effect on total mortality, myocardial infarction, or revascularization. The majority of studies (14 of 20) showed higher levels of health-related quality of life in 1 or more domains following exercise-based CR compared with control subjects. This study confirms that exercise-based CR reduces cardiovascular mortality and provides important data showing reductions in hospital admissions and improvements in quality of life. These benefits appear to be consistent across patients and intervention types and were independent of study quality, setting, and publication date. Copyright © 2016 American College of Cardiology

  9. MetaPro-IQ: a universal metaproteomic approach to studying human and mouse gut microbiota.

    Science.gov (United States)

    Zhang, Xu; Ning, Zhibin; Mayne, Janice; Moore, Jasmine I; Li, Jennifer; Butcher, James; Deeke, Shelley Ann; Chen, Rui; Chiang, Cheng-Kang; Wen, Ming; Mack, David; Stintzi, Alain; Figeys, Daniel

    2016-06-24

    The gut microbiota has been shown to be closely associated with human health and disease. While next-generation sequencing can be readily used to profile the microbiota taxonomy and metabolic potential, metaproteomics is better suited for deciphering microbial biological activities. However, the application of gut metaproteomics has largely been limited due to the low efficiency of protein identification. Thus, a high-performance and easy-to-implement gut metaproteomic approach is required. In this study, we developed a high-performance and universal workflow for gut metaproteome identification and quantification (named MetaPro-IQ) by using the close-to-complete human or mouse gut microbial gene catalog as database and an iterative database search strategy. An average of 38 and 33 % of the acquired tandem mass spectrometry (MS) spectra was confidently identified for the studied mouse stool and human mucosal-luminal interface samples, respectively. In total, we accurately quantified 30,749 protein groups for the mouse metaproteome and 19,011 protein groups for the human metaproteome. Moreover, the MetaPro-IQ approach enabled comparable identifications with the matched metagenome database search strategy that is widely used but needs prior metagenomic sequencing. The response of gut microbiota to high-fat diet in mice was then assessed, which showed distinct metaproteome patterns for high-fat-fed mice and identified 849 proteins as significant responders to high-fat feeding in comparison to low-fat feeding. We present MetaPro-IQ, a metaproteomic approach for highly efficient intestinal microbial protein identification and quantification, which functions as a universal workflow for metaproteomic studies, and will thus facilitate the application of metaproteomics for better understanding the functions of gut microbiota in health and disease.

  10. An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS

    CERN Document Server

    Barczyc, M.; Caprini, M.; Da Silva Conceicao, J.; Dobson, M.; Flammer, J.; Burckhart-Chromek, D.; Caprini, M.; Conceicao, J.D.S.; Dobson, M.; Flammer, J.; Jones, R.; Kazarov, A.; Kolos, S.; Kazarov, A.; Kolos, S.; Liko, D.; Mapelli, L.; Soloviev, I.; Hart, R.; Amorim, A.; Mapelli, L.; Soloviev, I.; Amorim, A.; Klose, D.; Lima, J.; Lucio, L.; Pedro, L.; Wolters, H.; Badescu, E.; Alexandrov, I.; Kotov, V.; Mineev, M.; Ryabov, Yu.

    2003-01-01

    In the context of the ATLAS experiment there is growing evidence of the importance of different kinds of Meta-data including all the important details of the detector and data acquisition that are vital for the analysis of the acquired data. The Online BookKeeper (OBK) is a component of ATLAS online software that stores all information collected while running the experiment, including the Meta-data associated with the event acquisition, triggering and storage. The facilities for acquisition of control data within the on-line software framework, together with a fully functional Web interface, make the OBK a powerful tool containing all information needed for event analysis, including an electronic log book. In this paper we explain how OBK plays a role as one of the main collectors and managers of Meta-data produced on-line, and we also focus on the Web facilities already available. The usage of the web interface as an electronic run logbook is also explained, together with the future extensions. We describe...

  11. Modified temporal approach to meta-optimizing an extended Kalman filter's parameters

    CSIR Research Space (South Africa)

    Salmon

    2014-07-01

    Full Text Available 2014 IEEE International Geoscience and Remote Sensing Symposium, Québec, Canada, 13-18 July 2014. A modified temporal approach to meta-optimizing an Extended Kalman Filter's parameters. B. P. Salmon; W. Kleynhans; J. C. Olivier; W. C. Olding; K. J. Wessels; F. van den Bergh...

  12. Efficacy of computer technology-based HIV prevention interventions: a meta-analysis.

    Science.gov (United States)

    Noar, Seth M; Black, Hulda G; Pierce, Larson B

    2009-01-02

    To conduct a meta-analysis of computer technology-based HIV prevention behavioral interventions aimed at increasing condom use among a variety of at-risk populations. Systematic review and meta-analysis of existing published and unpublished studies testing computer-based interventions. Meta-analytic techniques were used to compute and aggregate effect sizes for 12 randomized controlled trials that met inclusion criteria. Variables that had the potential to moderate intervention efficacy were also tested. The overall mean weighted effect size for condom use was d = 0.259 (95% confidence interval = 0.201, 0.317; Z = 8.74, P < .001), and interventions also showed significant effects on other outcomes, including number of sexual partners and incident sexually transmitted diseases. In addition, interventions were significantly more efficacious when they were directed at men or women (versus mixed sex groups), utilized individualized tailoring, used a Stages of Change model, and had more intervention sessions. Computer technology-based HIV prevention interventions have similar efficacy to more traditional human-delivered interventions. Given their low cost to deliver, ability to customize intervention content, and flexible dissemination channels, they hold much promise for the future of HIV prevention.

  13. Meta-analysis data for 104 Energy-Economy Nexus papers

    Directory of Open Access Journals (Sweden)

    Vladimír Hajko

    2017-06-01

    Data cover papers indexed by Scopus, published in economic journals, written in English, after year 2000. In addition, papers were manually filtered to only those that deal with Energy-Economy Nexus investigation and have at least 10 citations (at the time of the query – November 2015). These data are to be used to conduct meta-analysis – the associated dataset was used in Hajko [1]. An early version of the dataset was used for multinomial logit estimation in a Master thesis by Kociánová [2].

  14. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
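    A minimal sketch of the qualitative first step (Morris-style elementary-effects screening): the toy model function stands in for the Xin'anjiang model, and the perturbation size and number of base points are arbitrary. The quantitative RSMSobol step would then be run only on the parameters with large mu*.

    ```python
    # One-at-a-time elementary effects, summarized by mu* (mean |effect|) and sigma.
    import numpy as np

    def model(x):
        # Hypothetical stand-in for the hydrological model response.
        return x[0] ** 2 + 2 * x[1] + 0.1 * x[2] * x[1] + 0.01 * x[3]

    def morris_screening(model, bounds, n_points=50, delta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        dim = lo.size
        ee = np.zeros((n_points, dim))
        for r in range(n_points):
            x = rng.uniform(lo, hi - delta * (hi - lo))       # keep perturbations in bounds
            base = model(x)
            for i in range(dim):
                x_pert = x.copy()
                x_pert[i] += delta * (hi[i] - lo[i])          # perturb one factor at a time
                ee[r, i] = (model(x_pert) - base) / delta
        return np.abs(ee).mean(axis=0), ee.std(axis=0)        # mu*, sigma

    lo, hi = np.zeros(4), 2.0 * np.ones(4)
    mu_star, sigma = morris_screening(model, (lo, hi))
    for i, (m, s) in enumerate(zip(mu_star, sigma)):
        print(f"parameter {i}: mu* = {m:.2f}, sigma = {s:.2f}")
    ```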

  15. Two-stage meta-analysis of survival data from individual participants using percentile ratios

    Science.gov (United States)

    Barrett, Jessica K; Farewell, Vern T; Siannis, Fotios; Tierney, Jayne; Higgins, Julian P T

    2012-01-01

    Methods for individual participant data meta-analysis of survival outcomes commonly focus on the hazard ratio as a measure of treatment effect. Recently, Siannis et al. (2010, Statistics in Medicine 29:3030–3045) proposed the use of percentile ratios as an alternative to hazard ratios. We describe a novel two-stage method for the meta-analysis of percentile ratios that avoids distributional assumptions at the study level. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22825835

  16. Algebraic Meta-Theory of Processes with Data

    Directory of Open Access Journals (Sweden)

    Daniel Gebler

    2013-07-01

    Full Text Available There exists a rich literature of rule formats guaranteeing different algebraic properties for formalisms with a Structural Operational Semantics. Moreover, there exist a few approaches for automatically deriving axiomatizations characterizing strong bisimilarity of processes. To our knowledge, this literature has never been extended to the setting with data (e.g., to model storage and memory). We show how the rule formats for algebraic properties can be exploited in a generic manner in the setting with data. Moreover, we introduce a new approach for deriving sound and ground-complete axiom schemata for a notion of bisimilarity with data, called stateless bisimilarity, based on intuitive auxiliary function symbols for handling the store component. We do restrict, however, the axiomatization to the setting where the store component is only given in terms of constants.

  17. AucPR: an AUC-based approach using penalized regression for disease prediction with high-dimensional omics data.

    Science.gov (United States)

    Yu, Wenbao; Park, Taesung

    2014-01-01

    It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method used for obtaining a linear combination that maximizes the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and the need for a prudent choice of the smoothing parameter. We apply the proposed AucPR for gene selection and classification using four real microarray data sets and synthetic data. Through numerical studies, AucPR is shown to perform better than penalized logistic regression and the nonparametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily implementable linear classifier, AucPR, for gene selection and disease prediction with high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
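    For orientation, the sketch below implements the classical parametric AUC maximizer that such approaches build on: under binormal assumptions, the AUC-optimal linear combination is proportional to (Sigma0 + Sigma1)^(-1) (mu1 - mu0). A ridge term on the pooled covariance is a crude stand-in for the lasso/elastic-net penalization of AucPR; this is not the AucPR algorithm itself, and all data are simulated.

    ```python
    # Parametric AUC-optimal marker combination under binormal assumptions,
    # with a ridge-stabilized pooled covariance for moderately high dimensions.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n, p = 100, 50
    X0 = rng.normal(0.0, 1.0, (n, p))                 # controls
    X1 = rng.normal(0.0, 1.0, (n, p))                 # cases
    X1[:, :5] += 0.8                                  # first five markers are informative

    def auc_direction(X0, X1, ridge=0.1):
        s_pooled = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        s_pooled += ridge * np.eye(X0.shape[1])       # shrinkage for numerical stability
        return np.linalg.solve(s_pooled, X1.mean(axis=0) - X0.mean(axis=0))

    beta = auc_direction(X0, X1)
    scores = np.concatenate([X0 @ beta, X1 @ beta])
    labels = np.concatenate([np.zeros(n), np.ones(n)])
    print("training AUC of the combined marker:", round(roc_auc_score(labels, scores), 3))
    ```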

  18. Psychotherapeutic Treatment for Anorexia Nervosa: A Systematic Review and Network Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Almut Zeeck

    2018-05-01

    Full Text Available Background: The aim of the study was to systematically review studies evaluating psychotherapeutic treatment approaches in anorexia nervosa and to compare their efficacy. Weight gain was chosen as the primary outcome criterion. We also aimed to compare treatment effects according to service level (inpatient vs. outpatient) and age group (adolescents vs. adults). Methods: The databases PubMed, Cochrane Library, Web of Science, Cinahl, and PsychInfo were used for a systematic literature search (until Feb 2017). Search terms were adapted for each database, combining versions of the search terms anorexia, treat*/therap* and controlled trial. Studies were selected using pre-defined inclusion and exclusion criteria. Data were extracted by two independent coders using piloted forms. Network meta-analyses were conducted on all RCTs. For a comparison of service levels and age groups, standard mean change (SMC) statistics were used and naturalistic, non-randomized studies were included. Results: Eighteen RCTs (trials on adults: 622 participants; trials on adolescents: 625 participants) were included in the network meta-analysis. SMC analyses were conducted with 38 studies (1,164 participants). While family-based approaches dominate interventions for adolescents, individual psychotherapy dominates in adults. There was no superiority of a specific approach. Weight gain was more rapid in adolescents and in inpatient treatment. Conclusions: Several specialized psychotherapeutic interventions have been developed and can be recommended for AN. However, adult and adolescent patients should be distinguished, as the groups differ in terms of treatment approaches considered suitable as well as treatment response. Future trials should replicate previous findings and be multi-center trials with large sample sizes to allow for subgroup analyses. Patient assessment should include variables that can be considered relevant moderators of treatment outcome. It is desirable to explore adaptive treatment

  19. Bio-optical data integration based on a 4 D database system approach

    Science.gov (United States)

    Imai, N. N.; Shimabukuro, M. H.; Carmo, A. F. C.; Alcantara, E. H.; Rodrigues, T. W. P.; Watanabe, F. S. Y.

    2015-04-01

    Bio-optical characterization of water bodies requires spatio-temporal data about Inherent Optical Properties and Apparent Optical Properties, which allow comprehension of the underwater light field and support the development of models for monitoring water quality. Measurements are taken to represent optical properties along a column of water, and the spectral data must then be related to depth. However, the spatial positions of measurement may differ because the collecting instruments vary. In addition, the records may not refer to the same wavelengths. An additional difficulty is that distinct instruments store data in different formats. A data integration approach is needed to make these large, multi-source data sets suitable for analysis. With such integration, semi-empirical models can be evaluated, even automatically, preceded by preliminary quality-control tasks. In this work we present a solution for this scenario based on a spatial (geographic) database approach, adopting an object-relational Database Management System (DBMS) because of its ability to represent all data collected in the field together with data obtained by laboratory analysis and Remote Sensing images taken at the time of field data collection. This data integration approach leads to a 4D representation, since its coordinate system includes 3D spatial coordinates - planimetric and depth - and the time when each datum was taken. The PostgreSQL DBMS, extended by the PostGIS module, was adopted to provide the ability to manage spatial/geospatial data. A prototype was developed that provides the main tools an analyst needs to prepare the data sets for analysis.

  20. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    . Their developments, however, are largely due to experiment based trial and error approaches and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply...... a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...

  1. Individual Participant Data Meta-Analysis of Mechanical Workplace Risk Factors and Low Back Pain

    Science.gov (United States)

    Shannon, Harry S.; Wells, Richard P.; Walter, Stephen D.; Cole, Donald C.; Côté, Pierre; Frank, John; Hogg-Johnson, Sheilah; Langlois, Lacey E.

    2012-01-01

    Objectives. We used individual participant data from multiple studies to conduct a comprehensive meta-analysis of mechanical exposures in the workplace and low back pain. Methods. We conducted a systematic literature search and contacted an author of each study to request their individual participant data. Because outcome definitions and exposure measures were not uniform across studies, we conducted 2 substudies: (1) to identify sets of outcome definitions that could be combined in a meta-analysis and (2) to develop methods to translate mechanical exposure onto a common metric. We used generalized estimating equation regression to analyze the data. Results. The odds ratios (ORs) for posture exposures ranged from 1.1 to 2.0. Force exposure ORs ranged from 1.4 to 2.1. The magnitudes of the ORs differed according to the definition of low back pain, and heterogeneity was associated with both study-level and individual-level characteristics. Conclusions. We found small to moderate ORs for the association of mechanical exposures and low back pain, although the relationships were complex. The presence of individual-level OR modifiers in such an area can be best understood by conducting a meta-analysis of individual participant data. PMID:22390445

  2. A novel approach to generating CER hypotheses based on mining clinical data.

    Science.gov (United States)

    Zhang, Shuo; Li, Lin; Yu, Yiqin; Sun, Xingzhi; Xu, Linhao; Zhao, Wei; Teng, Xiaofei; Pan, Yue

    2013-01-01

    Comparative effectiveness research (CER) is a scientific method for investigating the effectiveness of alternative intervention methods. In a CER study, clinical researchers typically start with a CER hypothesis and aim to evaluate it by applying a series of medical statistical methods. Traditionally, CER hypotheses are defined manually by clinical researchers. This makes the task of hypothesis generation very time-consuming and the quality of a hypothesis heavily dependent on the researchers' skills. Recently, with more electronic medical data being collected, it has become highly promising to apply computerized methods for discovering CER hypotheses from clinical data sets. In this poster, we propose a novel approach to automatically generating CER hypotheses based on mining clinical data, and present a case study showing that the approach can help clinical researchers identify potentially valuable hypotheses and eventually define high-quality CER studies.

  3. Alternative approach to nuclear data representation

    International Nuclear Information System (INIS)

    Pruet, J.; Brown, D.; Beck, B.; McNabb, D.P.

    2006-01-01

    This paper considers an approach for representing nuclear data that is qualitatively different from the approach currently adopted by the nuclear science community. Specifically, we examine a representation in which complicated data is described through collections of distinct and self-contained simple data structures. This structure-based representation is compared with the ENDF and ENDL formats, which can be roughly characterized as dictionary-based representations. A pilot data representation for replacing the format currently used at LLNL is presented. Examples are given as is a discussion of promises and shortcomings associated with moving from traditional dictionary-based formats to a structure-rich or class-like representation

  4. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
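
    A small sketch of the co-citation ranking idea described above: each candidate article is scored by how often it appears in a reference list together with one or more "known" articles, and everything above a selection threshold is screened. The toy citation index and threshold are invented for illustration.

```python
# Sketch of co-citation ranking: articles are scored by how often they are
# cited together with one or more "known" articles. Data are toy examples.
from collections import Counter

# Citation index: citing paper -> set of references it cites.
citation_index = {
    "P1": {"known1", "A", "B"},
    "P2": {"known1", "A", "C"},
    "P3": {"known2", "A", "D"},
    "P4": {"B", "C"},            # cites no known article
}
known = {"known1", "known2"}

co_citation = Counter()
for refs in citation_index.values():
    hits = len(refs & known)
    if hits:                      # reference list mentions a known article
        for article in refs - known:
            co_citation[article] += hits

THRESHOLD = 2                     # hypothetical selection threshold
to_screen = [a for a, score in co_citation.most_common() if score >= THRESHOLD]
print(co_citation)                # 'A' is co-cited most often (score 3)
print("screen for eligibility:", to_screen)
```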

  5. A Grey Theory Based Approach to Big Data Risk Management Using FMEA

    Directory of Open Access Journals (Sweden)

    Maisa Mendonça Silva

    2016-01-01

    Full Text Available Big data is the term used to denote enormous sets of data that differ from other classic databases in four main ways: (1) huge volume, (2) high velocity, (3) much greater variety, and (4) big value. In general, data are stored in a distributed fashion and on computing nodes, as a result of which big data may be more susceptible to attacks by hackers. This paper presents a risk model for big data, which comprises Failure Mode and Effects Analysis (FMEA) and Grey Theory, more precisely grey relational analysis. This approach has several advantages: it provides a structured approach in order to incorporate the impact of big data risk factors; it facilitates the assessment of risk by breaking down the overall risk to big data; and finally its efficient evaluation criteria can help enterprises reduce the risks associated with big data. In order to illustrate the applicability of our proposal in practice, a numerical example, with realistic data based on expert knowledge, was developed. The numerical example analyzes four dimensions, that is, managing identification and access, registering the device and application, managing the infrastructure, and data governance, and 20 failure modes concerning the vulnerabilities of big data. The results show that the most important aspect of risk to big data relates to data governance.
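
    A compact sketch of grey relational analysis applied to FMEA-style ratings is shown below; the failure modes, ratings, weights, and normalization choice are invented for illustration and may differ from the paper's exact scoring scheme.

```python
# Illustrative grey relational analysis (GRA) over FMEA risk-factor ratings.
# Failure modes, ratings and weights are invented; zeta is the usual 0.5.
import numpy as np

# Rows: failure modes; columns: severity, occurrence, detection (1-10 scale).
ratings = np.array([
    [7, 4, 6],   # weak access control
    [5, 6, 3],   # unregistered device
    [8, 3, 7],   # infrastructure misconfiguration
    [6, 5, 5],   # poor data governance
], dtype=float)
weights = np.array([0.4, 0.3, 0.3])

# Min-max normalise each factor; the reference (ideal) series is the
# lowest-risk rating observed for each factor.
norm = (ratings - ratings.min(axis=0)) / (ratings.max(axis=0) - ratings.min(axis=0))
reference = norm.min(axis=0)

delta = np.abs(norm - reference)          # deviation from the reference series
zeta = 0.5                                # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = coeff @ weights                   # grey relational grade per failure mode
ranking = np.argsort(grade)               # lower grade = farther from ideal = riskier
print(grade, ranking)
```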

  6. FC-TLBO: fully constrained meta-heuristic algorithm for abundance ...

    Indian Academy of Sciences (India)

    Omprakash Tembhurne

    hyperspectral unmixing; meta-heuristic approach; teaching-learning-based optimisation (TLBO). 1. ... area of research due to its real-time applications. Satellite .... describes the detailed methodology of proposed FC-TLBO. Section 4 contains ...

  7. Bayesian Meta-Analysis of Coefficient Alpha

    Science.gov (United States)

    Brannick, Michael T.; Zhang, Nanhua

    2013-01-01

    The current paper describes and illustrates a Bayesian approach to the meta-analysis of coefficient alpha. Alpha is the most commonly used estimate of the reliability or consistency (freedom from measurement error) for educational and psychological measures. The conventional approach to meta-analysis uses inverse variance weights to combine…

  8. A Knowledge Model Sharing Based Approach to Privacy-Preserving Data Mining

    OpenAIRE

    Hongwei Tian; Weining Zhang; Shouhuai Xu; Patrick Sharkey

    2012-01-01

    Privacy-preserving data mining (PPDM) is an important problem and is currently studied in three approaches: the cryptographic approach, the data publishing, and the model publishing. However, each of these approaches has some problems. The cryptographic approach does not protect privacy of learned knowledge models and may have performance and scalability issues. The data publishing, although is popular, may suffer from too much utility loss for certain types of data mining applications. The m...

  9. MetaMeta: integrating metagenome analysis tools to improve taxonomic profiling.

    Science.gov (United States)

    Piro, Vitor C; Matschkowski, Marcel; Renard, Bernhard Y

    2017-08-14

    Many metagenome analysis tools are presently available to classify sequences and profile environmental samples. In particular, taxonomic profiling and binning methods are commonly used for such tasks. Tools in these two categories make use of several techniques, e.g., read mapping, k-mer alignment, and composition analysis. Variations in the construction of the corresponding reference sequence databases are also common. In addition, different tools provide good results in different datasets and configurations. All this variation creates a complicated scenario for researchers deciding which methods to use. Installation, configuration and execution can also be difficult, especially when dealing with multiple datasets and tools. We propose MetaMeta: a pipeline to execute and integrate results from metagenome analysis tools. MetaMeta provides an easy workflow to run multiple tools with multiple samples, producing a single enhanced output profile for each sample. MetaMeta includes database generation, pre-processing, execution, and integration steps, allowing easy execution and parallelization. The integration relies on the co-occurrence of organisms from different methods as the main feature to improve community profiling while accounting for differences in their databases. In a controlled case with simulated and real data, we show that the integrated profiles of MetaMeta outperform the best single profile. Using the same input data, it provides more sensitive and reliable results, with the presence of each organism being supported by several methods. MetaMeta uses Snakemake and has six pre-configured tools, all available at the BioConda channel for easy installation (conda install -c bioconda metameta). The MetaMeta pipeline is open-source and can be downloaded at: https://gitlab.com/rki_bioinformatics .

  10. Transaction cost determinants and ownership-based entry mode choice: a meta-analytical review

    OpenAIRE

    Hongxin Zhao; Yadong Luo; Taewon Suh

    2004-01-01

    Entry mode choice is a critical ingredient of international entry strategies, and has been voluminously examined in the field. The findings, however, are very mixed, especially with respect to transaction-cost-related factors in determining the ownership-based entry mode choice. This study conducted a meta-analysis to quantitatively summarize the literature and empirically generalize more conclusive findings. Based on the 106 effect sizes of 38 empirical studies, the meta-analysis shows that ...

  11. Meta Analisis Model Pembelajaran Problem Based Learning dalam Meningkatkan Keterampilan Berpikir Kritis di Sekolah Dasar [A Meta-analysis of Problem-Based Learning Models in Increasing Critical Thinking Skills in Elementary Schools

    Directory of Open Access Journals (Sweden)

    Indri Anugraheni

    2018-01-01

    Full Text Available This study aims to analyze Problem-based Learning models intended to improve critical thinking skills in elementary school students. Problem-based learning models are learning processes in which students are open-minded, reflexive, active, reflective, and critical through real-world context activities. In this study the researcher used a meta-analysis method. First, the researcher formulated the research problem, then proceeded to review the existing relevant research for analysis. Data were collected by using a non-test technique, namely browsing electronic journals through Google Scholar and studying documentation in the library. Seven articles were found through Google Scholar and only one was found in the library. Based on the analysis of the results, the problem-based learning model can improve students' thinking ability from as little as 2.87% up to 33.56%, with an average of 14.18%. BAHASA INDONESIA ABSTRAK: This study aims to re-analyze the Problem Based Learning model for improving critical thinking skills in elementary schools. The Problem Based Learning model is a learning process in which students develop an open, reflexive, active, reflective, and critical mindset through real-world context activities. In this study the researcher used a meta-analysis method. First, the researcher formulated the research problem, then proceeded to search for existing, relevant research to be analyzed. Data were collected using a non-test technique, namely by browsing electronic journals through Google Scholar and through documentation study in the library. The search yielded 20 articles from journals and 3 from repositories. Based on the analysis, the Problem Based Learning model was able to improve students' thinking ability from a low of 2.87% to a high of 33.56%, with an average of 12.73%.

  12. Technical Efficiency in the Chilean Agribusiness Sector - a Stochastic Meta-Frontier Approach

    OpenAIRE

    Larkner, Sebastian; Brenes Muñoz, Thelma; Aedo, Edinson Rivera; Brümmer, Bernhard

    2013-01-01

    The Chilean economy is strongly export-oriented, which is also true for the Chilean agribusiness industry. This paper investigates the technical efficiency of the Chilean food processing industry between 2001 and 2007. We use a dataset of 2,471 firms in the food processing industry. The observations are from the 'Annual National Industrial Survey'. A stochastic meta-frontier approach is used in order to analyse the drivers of technical efficiency. We include variables capturing the effec...

  13. Applying the Business Process and Practice Alignment Meta-model: Daily Practices and Process Modelling

    Directory of Open Access Journals (Sweden)

    Ventura Martins Paula

    2017-03-01

    Full Text Available Background: Business Process Modelling (BPM) is one of the most important phases of information system design. Business Process (BP) meta-models allow capturing informational and behavioural aspects of business processes. Unfortunately, standard BP meta-modelling approaches focus just on process description, providing different BP models. It is not possible to compare these models and identify related daily practices in order to improve them. This lack of information implies that further research in BP meta-models is needed to reflect the evolution/change in BPs. Considering this limitation, this paper introduces a new BP meta-model, the Business Process and Practice Alignment Meta-model (BPPAMeta-model). Our intention is to present a meta-model that addresses features related to the alignment between daily work practices and BP descriptions. Objectives: This paper intends to present a meta-model which integrates daily work information into coherent and sound process definitions. Methods/Approach: The methodology employed in the research follows a design-science approach. Results: The results of the case study relate to the application of the proposed meta-model to align the specification of a BP model with work practice models. Conclusions: This meta-model can be used within the BPPAM methodology to specify or improve business process models based on work practice descriptions.

  14. Parents' experiences of neonatal transfer. A meta-study of qualitative research 2000-2017.

    Science.gov (United States)

    Aagaard, Hanne; Hall, Elisabeth O C; Ludvigsen, Mette S; Uhrenfeldt, Lisbeth; Fegran, Liv

    2018-02-15

    Transfers of critically ill neonates are frequent phenomena. Even though parents' participation is regarded as crucial in neonatal care, a transfer often means that parents and neonates are separated. A systematic review of the parents' experiences of neonatal transfer is lacking. This paper describes a meta-study addressing qualitative research about parents' experiences of neonatal transfer. Through deconstruction and reflections of theories, methods, and empirical data, the aim was to achieve a deeper understanding of theoretical, empirical, contextual, historical, and methodological issues of qualitative studies concerning parents' experiences of neonatal transfer over the course of this meta-study (2000-2017). Meta-theory and meta-method analyses showed that caring, transition, and family-centered care were main theoretical frames applied and that interviewing with a small number of participants was the preferred data collection method. The meta-data-analysis showed that transfer was a scary, unfamiliar, and threatening experience for the parents; they were losing familiar context, were separated from their neonate, and could feel their parenthood disrupted. We identified 'wavering and wandering' as a metaphoric representation of the parents' experiences. The findings add knowledge about meta-study as an approach for comprehensive qualitative research and point at the value of meta-theory and meta-method analyses. © 2018 John Wiley & Sons Ltd.

  15. Radiosurgery of Glomus Jugulare Tumors: A Meta-Analysis

    International Nuclear Information System (INIS)

    Guss, Zachary D.; Batra, Sachin; Limb, Charles J.; Li, Gordon; Sughrue, Michael E.; Redmond, Kristin; Rigamonti, Daniele; Parsa, Andrew T.; Chang, Steven; Kleinberg, Lawrence; Lim, Michael

    2011-01-01

    Purpose: During the past two decades, radiosurgery has arisen as a promising approach to the management of glomus jugulare. In the present study, we report on a systematic review and meta-analysis of the available published data on the radiosurgical management of glomus jugulare tumors. Methods and Materials: To identify eligible studies, systematic searches of all glomus jugulare tumors treated with radiosurgery were conducted in major scientific publication databases. The data search yielded 19 studies, which were included in the meta-analysis. The data from 335 glomus jugulare patients were extracted. The fixed effects pooled proportions were calculated from the data when Cochrane's statistic was statistically insignificant and the inconsistency among studies was 36 months. In these studies, 95% of patients achieved clinical control and 96% achieved tumor control. The gamma knife, linear accelerator, and CyberKnife technologies all exhibited high rates of tumor and clinical control. Conclusions: The present study reports the results of a meta-analysis for the radiosurgical management of glomus jugulare. Because of its high effectiveness, we suggest considering radiosurgery for the primary management of glomus jugulare tumors.

  16. Efficacy of Self-guided Internet-Based Cognitive Behavioral Therapy in the Treatment of Depressive Symptoms: A Meta-analysis of Individual Participant Data.

    Science.gov (United States)

    Karyotaki, Eirini; Riper, Heleen; Twisk, Jos; Hoogendoorn, Adriaan; Kleiboer, Annet; Mira, Adriana; Mackinnon, Andrew; Meyer, Björn; Botella, Cristina; Littlewood, Elizabeth; Andersson, Gerhard; Christensen, Helen; Klein, Jan P; Schröder, Johanna; Bretón-López, Juana; Scheider, Justine; Griffiths, Kathy; Farrer, Louise; Huibers, Marcus J H; Phillips, Rachel; Gilbody, Simon; Moritz, Steffen; Berger, Thomas; Pop, Victor; Spek, Viola; Cuijpers, Pim

    2017-04-01

    Self-guided internet-based cognitive behavioral therapy (iCBT) has the potential to increase access and availability of evidence-based therapy and reduce the cost of depression treatment. To estimate the effect of self-guided iCBT in treating adults with depressive symptoms compared with controls and evaluate the moderating effects of treatment outcome and response. A total of 13 384 abstracts were retrieved through a systematic literature search in PubMed, Embase, PsycINFO, and Cochrane Library from database inception to January 1, 2016. Randomized clinical trials in which self-guided iCBT was compared with a control (usual care, waiting list, or attention control) in individuals with symptoms of depression. Primary authors provided individual participant data from 3876 participants from 13 of 16 eligible studies. Missing data were handled using multiple imputations. Mixed-effects models with participants nested within studies were used to examine treatment outcomes and moderators. Outcomes included the Beck Depression Inventory, Center for Epidemiological Studies-Depression Scale, and 9-item Patient Health Questionnaire scores. Scales were standardized across the pool of the included studies. Of the 3876 study participants, the mean (SD) age was 42.0 (11.7) years, 2531 (66.0%) of 3832 were female, 1368 (53.1%) of 2574 completed secondary education, and 2262 (71.9%) of 3146 were employed. Self-guided iCBT was significantly more effective than controls on depressive symptoms severity (β = -0.21; Hedges g  = 0.27) and treatment response (β = 0.53; odds ratio, 1.95; 95% CI, 1.52-2.50; number needed to treat, 8). Adherence to treatment was associated with lower depressive symptoms (β = -0.19; P = .001) and greater response to treatment (β = 0.90; P treatment outcomes. Self-guided iCBT is effective in treating depressive symptoms. The use of meta-analyses of individual participant data provides substantial evidence for clinical and
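
    A hedged sketch of a one-stage individual-participant-data analysis with participants nested within studies, loosely in the spirit of the mixed-effects models described above; the variable names and simulated data are assumptions, not the trial data.

```python
# Sketch: one-stage IPD meta-analysis with a random intercept per study.
# Simulated data; variable names (study, treatment, score) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for study in range(10):
    study_effect = rng.normal(0, 0.3)
    for _ in range(200):
        treated = rng.integers(0, 2)
        score = study_effect - 0.27 * treated + rng.normal(0, 1)
        rows.append({"study": study, "treatment": treated, "score": score})
df = pd.DataFrame(rows)

# Random intercept for study; the treatment coefficient approximates the
# standardized effect of the intervention on symptom severity.
model = smf.mixedlm("score ~ treatment", data=df, groups=df["study"])
result = model.fit()
print(result.summary())
```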

  17. Explaining the heterogeneous scrapie surveillance figures across Europe: a meta-regression approach

    Directory of Open Access Journals (Sweden)

    Ru Giuseppe

    2007-06-01

    Full Text Available Abstract Background Two annual surveys, the abattoir and the fallen stock, monitor the presence of scrapie across Europe. A simple comparison between the prevalence estimates in different countries reveals that, in 2003, the abattoir survey appears to detect more scrapie in some countries. This is contrary to evidence suggesting the greater ability of the fallen stock survey to detect the disease. We applied meta-analysis techniques to study this apparent heterogeneity in the behaviour of the surveys across Europe. Furthermore, we conducted a meta-regression analysis to assess the effect of country-specific characteristics on the variability. We have chosen the odds ratios between the two surveys to inform the underlying relationship between them and to allow comparisons between the countries under the meta-regression framework. Baseline risks, those of the slaughtered populations across Europe, and country-specific covariates, available from the European Commission Report, were inputted in the model to explain the heterogeneity. Results Our results show the presence of significant heterogeneity in the odds ratios between countries and no reduction in the variability after adjustment for the different risks in the baseline populations. Three countries contributed the most to the overall heterogeneity: Germany, Ireland and The Netherlands. The inclusion of country-specific covariates did not, in general, reduce the variability except for one variable: the proportion of the total adult sheep population sampled as fallen stock by each country. A large residual heterogeneity remained in the model indicating the presence of substantial effect variability between countries. Conclusion The meta-analysis approach was useful to assess the level of heterogeneity in the implementation of the surveys and to explore the reasons for the variation between countries.
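
    A simplified sketch of the kind of meta-regression described above, reduced here to an inverse-variance weighted regression of study-level log odds ratios on a single country-level covariate; the numbers are invented, and the original analysis additionally modelled between-country heterogeneity.

```python
# Sketch: inverse-variance weighted meta-regression of log odds ratios
# (abattoir vs. fallen stock survey) on a country-level covariate.
# All numbers are invented for illustration.
import numpy as np
import statsmodels.api as sm

log_or = np.array([0.10, -0.35, 0.42, -0.05, 0.20, -0.50])      # per country
var_log_or = np.array([0.04, 0.06, 0.05, 0.03, 0.07, 0.08])
prop_fallen_sampled = np.array([0.9, 2.5, 0.5, 1.8, 1.2, 3.0])  # % of adult sheep

X = sm.add_constant(prop_fallen_sampled)
fit = sm.WLS(log_or, X, weights=1.0 / var_log_or).fit()
print(fit.params)     # intercept and covariate slope on the log-OR scale

# Residual (unexplained) heterogeneity can be judged from the weighted
# residual sum of squares against a chi-square with k - 2 df.
Q_res = float(np.sum((fit.resid ** 2) / var_log_or))
print("residual Q =", round(Q_res, 2), "on", len(log_or) - 2, "df")
```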

  18. A Component Based Approach to Scientific Workflow Management

    CERN Document Server

    Le Goff, Jean-Marie; Baker, Nigel; Brooks, Peter; McClatchey, Richard

    2001-01-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta-modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse.

  19. A component based approach to scientific workflow management

    International Nuclear Information System (INIS)

    Baker, N.; Brooks, P.; McClatchey, R.; Kovacs, Z.; LeGoff, J.-M.

    2001-01-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta-modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse

  20. An Approximation Approach for Solving the Subpath Planning Problem

    OpenAIRE

    Safilian, Masoud; Tashakkori, S. Mehdi; Eghbali, Sepehr; Safilian, Aliakbar

    2016-01-01

    The subpath planning problem is a branch of the path planning problem, which has widespread applications in automated manufacturing processes as well as vehicle and robot navigation. This problem is to find the shortest path or tour subject to travelling a set of given subpaths. The current approaches for dealing with the subpath planning problem are all based on meta-heuristic approaches. It is well-known that meta-heuristic based approaches have several deficiencies. To address them, we prop...

  1. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu

    2016-06-25

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to non-quantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing ones from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help by formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches a high imputation precision and recall, but on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.

  2. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu; Qin, Lu; Cheng, Hong; Zhang, Xiangliang; Zhou, Xiaofang

    2016-01-01

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to non-quantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing ones from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help by formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches a high imputation precision and recall, but on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.

  3. The Effects of the Constructivist Learning Approach on Student's Academic Achievement: A Meta-Analysis Study

    Science.gov (United States)

    Ayaz, Mehmet Fatih; Sekerci, Hanifi

    2015-01-01

    In this research, a meta-analysis study was conducted in order to determine the effects of the constructivist learning approach on students' academic achievement. Master's theses, doctoral dissertations and articles in national and international databases, which were produced between the years 2003-2014, are appropriate to the problem and which can be…

  4. Modeling soil evaporation efficiency in a range of soil and atmospheric conditions using a meta-analysis approach

    Science.gov (United States)

    Merlin, O.; Stefan, V. G.; Amazirh, A.; Chanzy, A.; Ceschia, E.; Er-Raki, S.; Gentine, P.; Tallec, T.; Ezzahar, J.; Bircher, S.; Beringer, J.; Khabba, S.

    2016-05-01

    A meta-analysis data-driven approach is developed to represent the soil evaporative efficiency (SEE), defined as the ratio of actual to potential soil evaporation. The new model is tested across a bare soil database composed of more than 30 sites around the world, a clay fraction range of 0.02-0.56, a sand fraction range of 0.05-0.92, and about 30,000 acquisition times. SEE is modeled using a soil resistance (rss) formulation based on surface soil moisture (θ) and two resistance parameters, rss,ref and θefolding. The data-driven approach aims to express both parameters as a function of observable data including meteorological forcing, the cut-off soil moisture value θ1/2 at which SEE = 0.5, and the first derivative of SEE at θ1/2, named Δθ1/2-1. An analytical relationship between (rss,ref; θefolding) and (θ1/2; Δθ1/2-1) is first built by running a soil energy balance model for two extreme conditions, with rss = 0 and rss ~ ∞, using meteorological forcing solely, and by approaching the middle point from the two (wet and dry) reference points. Two different methods are then investigated to estimate the pair (θ1/2; Δθ1/2-1), either from the time series of SEE and θ observations for a given site, or using the soil texture information for all sites. The first method is based on an algorithm specifically designed to accommodate strongly nonlinear SEE(θ) relationships and potentially large random deviations of observed SEE from the mean observed SEE(θ). The second method parameterizes θ1/2 as a multi-linear regression of clay and sand percentages, and sets Δθ1/2-1 to a constant mean value for all sites. The new model significantly outperformed the evaporation modules of ISBA (Interaction Sol-Biosphère-Atmosphère), H-TESSEL (Hydrology-Tiled ECMWF Scheme for Surface Exchange over Land), and CLM (Community Land Model). It has potential for integration in various land-surface schemes, and real calibration capabilities using combined thermal and microwave
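
    The paper's rss-based parameterization is not reproduced here; the sketch below only illustrates how the two observable quantities it relies on, the cut-off moisture θ1/2 and the slope of SEE there, pin down a curve, using an assumed logistic shape as a stand-in.

```python
# Illustrative sketch only: a logistic stand-in for SEE(theta) that passes
# through SEE = 0.5 at theta_half with slope `slope` there. This is NOT the
# paper's rss-based formulation; it only shows how (theta_half, slope)
# characterise the curve.
import numpy as np

def see_logistic(theta, theta_half=0.15, slope=8.0):
    """SEE(theta) with SEE(theta_half) = 0.5 and dSEE/dtheta = slope there."""
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (theta - theta_half)))

theta = np.linspace(0.0, 0.4, 9)
see = see_logistic(theta)

# Check the two defining properties numerically.
eps = 1e-6
print(see_logistic(0.15))                                                  # ~0.5
print((see_logistic(0.15 + eps) - see_logistic(0.15 - eps)) / (2 * eps))   # ~8.0
```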

  5. Rehabilitation in neuro-oncology: a meta-analysis of published data and a mono-institutional experience.

    Science.gov (United States)

    Formica, Vincenzo; Del Monte, Girolamo; Giacchetti, Ilaria; Grenga, Italia; Giaquinto, Salvatore; Fini, Massimo; Roselli, Mario

    2011-06-01

    Rehabilitation for cancer patients with central nervous system (CNS) involvement is rarely considered and data on its use are limited. The purpose of the present study is to collect all available published data on neuro-oncology rehabilitation and perform a meta-analysis where results were presented in a comparable manner. Moreover, the authors report results on cancer patients with neurological disabilities undergoing rehabilitation at their unit. A PubMed search was performed to identify studies regarding cancer patients with CNS involvement undergoing inpatient physical rehabilitation. Studies with a complete functional evaluation at admission and discharge were selected. As the most common evaluation scales were Functional Independence Measure (FIM) and Barthel Index (BI), only articles with complete FIM and/or BI data were selected for the meta-analysis. Moreover, 23 cancer patients suffering from diverse neurological disabilities underwent standard rehabilitation program between April 2005 and December 2007 at the San Raffaele Pisana Rehabilitation Center. Patient demographics and relevant clinical data were collected. Motricity Index, Trunk Control Test score, and BI were monitored during rehabilitation to assess patient progresses. BI results of patients in this study were included in the meta-analysis. The meta-analysis included results of a total of 994 patients. A statistically significant (P rehabilitation (standardized mean difference = 0.60 and 0.75, respectively). Functional status determined by either FIM or BI improved on average by 36%. Published data demonstrate that patients with brain tumors undergoing inpatient rehabilitation appear to make functional gains in line with those seen in similar patients with nonneoplastic conditions.

  6. AucPR: An AUC-based approach using penalized regression for disease prediction with high-dimensional omics data

    OpenAIRE

    Yu, Wenbao; Park, Taesung

    2014-01-01

    Motivation It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach, for high-dimensional data. Results We propose an AUC-based approach u...

  7. New Student-Centered and Data-Based Approaches to Hydrology Education

    Science.gov (United States)

    Bloeschl, G.; Troch, P. A. A.; Sivapalan, M.

    2014-12-01

    Hydrology as a science has evolved over the last century. The knowledge base has significantly expanded, and there are requirements to meet with the new expectations of a science where the connections between the parts are just as important as the parts themselves. In this new environment, what should we teach, and how should we teach it? Given the limited time we have in an undergraduate (and even graduate) curriculum, what should we include, and what should we leave out? What new material and new methods are essential, as compared to textbooks? Past practices have assumed certain basics as being essential to undergraduate teaching. Depending on the professor's background, these include basic process descriptions (infiltration, runoff generation, evaporation etc.) and basic techniques (unit hydrographs, flood frequency analysis, pumping tests). These are taught using idealized (textbook) examples and examined to test this basic competence. The main idea behind this "reductionist" approach to teaching is that the students will do the rest of the learning during practice and apprenticeship in their workplaces. Much of current hydrology teaching follows this paradigm, and the books provide the backdrop to this approach. Our view is that this approach is less than optimum, as it does not prepare the students to face up to the new challenges of the changing world. It is our view that the basics of hydrologic science are not just a collection of individual processes and techniques, but process interactions and underlying concepts or principles, and a collection of techniques that highlights these, combined with student-driven and data-based learning that enables the students to see the manifestations of these process interactions and principles in action in real world situations. While the actual number of items that can be taught in the classroom by this approach in a limited period of time may be lower than in the traditional approach, it will help the students make

  8. Meta-analytic framework for liquid association.

    Science.gov (United States)

    Wang, Lin; Liu, Silvia; Ding, Ying; Yuan, Shin-Sheng; Ho, Yen-Yi; Tseng, George C

    2017-07-15

    Although coexpression analysis via pair-wise expression correlation is popularly used to elucidate gene-gene interactions at the whole-genome scale, many complicated multi-gene regulations require more advanced detection methods. Liquid association (LA) is a powerful tool to detect the dynamic correlation of two gene variables depending on the expression level of a third variable (LA scouting gene). LA detection from a single transcriptomic study, however, is often unstable and not generalizable due to cohort bias, biological variation and limited sample size. With the rapid development of microarray and NGS technology, LA analysis combining multiple gene expression studies can provide more accurate and stable results. In this article, we proposed two meta-analytic approaches for LA analysis (MetaLA and MetaMLA) to combine multiple transcriptomic studies. To compensate for the demanding computation, we also proposed a two-step fast screening algorithm for more efficient genome-wide screening: bootstrap filtering and sign filtering. We applied the methods to five Saccharomyces cerevisiae datasets related to environmental changes. The fast screening algorithm reduced 98% of running time. When compared with single study analysis, MetaLA and MetaMLA provided stronger detection signal and more consistent and stable results. The top triplets are highly enriched in fundamental biological processes related to environmental changes. Our method can help biologists understand underlying regulatory mechanisms under different environmental exposure or disease states. A MetaLA R package, data and code for this article are available at http://tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
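
    A minimal sketch of the basic single-study liquid association statistic that approaches such as MetaLA build on: the mean triple product of the standardized gene pair and the normal-score-transformed scouting gene. The expression values are simulated, and the meta-analytic combination and the bootstrap/sign filtering steps are not shown.

```python
# Sketch: single-study liquid association LA(X, Y | Z) = E[X*Y*Z] with X, Y
# standardized and Z normal-score transformed. Simulated data; cross-study
# combination and filtering steps of MetaLA/MetaMLA are not shown here.
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(z):
    """Rank-based inverse normal transform of the scouting gene."""
    ranks = rankdata(z)
    return norm.ppf(ranks / (len(z) + 1))

def liquid_association(x, y, z):
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    zs = normal_scores(z)
    return float(np.mean(xs * ys * zs))

rng = np.random.default_rng(1)
n = 300
z = rng.normal(size=n)
# Correlation between x and y strengthens as z increases (a dynamic pattern).
x = rng.normal(size=n)
y = np.where(z > 0, x, -x) * 0.8 + rng.normal(scale=0.6, size=n)

print(liquid_association(x, y, z))   # clearly positive for this toy pattern
```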

  9. Parent-based adolescent sexual health interventions and effect on communication outcomes: a systematic review and meta-analyses.

    Science.gov (United States)

    Santa Maria, Diane; Markham, Christine; Bluethmann, Shirley; Mullen, Patricia Dolan

    2015-03-01

    Parent-based adolescent sexual health interventions aim to reduce sexual risk behaviors by bolstering parental protective behaviors. Few studies of theory use, methods, applications, delivery and outcomes of parent-based interventions have been conducted. A systematic search of databases for the period 1998-2013 identified 28 published trials of U.S. parent-based interventions to examine theory use, setting, reach, delivery mode, dose and effects on parent-child communication. Established coding schemes were used to assess use of theory and describe methods employed to achieve behavioral change; intervention effects were explored in meta-analyses. Most interventions were conducted with minority parents in group sessions or via self-paced activities; interventions averaged seven hours, and most used theory extensively. Meta-analyses found improvements in sexual health communication: Analysis of 11 controlled trials indicated a medium effect on increasing communication (Cohen's d, 0.5), and analysis of nine trials found a large effect on increasing parental comfort with communication (0.7); effects were positive regardless of delivery mode or intervention dose. Intervention participants were 68% more likely than controls to report increased communication and 75% more likely to report increased comfort. These findings point to gaps in the range of programs examined in published trials: for example, interventions for parents of sexual minority youth, programs for custodial grandparents and faith-based services. Yet they provide support for the effectiveness of parent-based interventions in improving communication. Innovative delivery approaches could extend programs' reach, and further research on sexual health outcomes would facilitate the meta-analysis of intervention effectiveness in improving adolescent sexual health behaviors. Copyright © 2015 by the Guttmacher Institute.

  10. Clustering-based approaches to SAGE data mining

    Directory of Open Access Journals (Sweden)

    Wang Haiying

    2008-07-01

    Full Text Available Abstract Serial analysis of gene expression (SAGE) is one of the most powerful tools for global gene expression profiling. It has led to several biological discoveries and biomedical applications, such as the prediction of new gene functions and the identification of biomarkers in human cancer research. Clustering techniques have become fundamental approaches in these applications. This paper reviews relevant clustering techniques specifically designed for this type of data. It places an emphasis on current limitations and opportunities in this area for supporting biologically meaningful data mining and visualisation.

  11. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research.

    Science.gov (United States)

    Schmucker, Christine M; Blümle, Anette; Schell, Lisa K; Schwarzer, Guido; Oeller, Patrick; Cabrera, Laura; von Elm, Erik; Briel, Matthias; Meerpohl, Joerg J

    2017-01-01

    A meta-analysis as part of a systematic review aims to provide a thorough, comprehensive and unbiased statistical summary of data from the literature. However, relevant study results could be missing from a meta-analysis because of selective publication and inadequate dissemination. If missing outcome data differ systematically from published ones, a meta-analysis will be biased with an inaccurate assessment of the intervention effect. As part of the EU-funded OPEN project (www.open-project.eu) we conducted a systematic review that assessed whether the inclusion of data that were not published at all and/or published only in the grey literature influences pooled effect estimates in meta-analyses and leads to different interpretation. Systematic review of published literature (methodological research projects). Four bibliographic databases were searched up to February 2016 without restriction of publication year or language. Methodological research projects were considered eligible for inclusion if they reviewed a cohort of meta-analyses which (i) compared pooled effect estimates of meta-analyses of health care interventions according to publication status of data or (ii) examined whether the inclusion of unpublished or grey literature data impacts the result of a meta-analysis. Seven methodological research projects including 187 meta-analyses comparing pooled treatment effect estimates according to different publication status were identified. Two research projects showed that published data showed larger pooled treatment effects in favour of the intervention than unpublished or grey literature data (Ratio of ORs 1.15, 95% CI 1.04-1.28 and 1.34, 95% CI 1.09-1.66). In the remaining research projects pooled effect estimates and/or overall findings were not significantly changed by the inclusion of unpublished and/or grey literature data. The precision of the pooled estimate was increased with narrower 95% confidence interval. Although we may anticipate that

  12. Preparing laboratory and real-world EEG data for large-scale analysis: A containerized approach

    Directory of Open Access Journals (Sweden)

    Nima eBigdely-Shamlo

    2016-03-01

    Full Text Available Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface (BCI) models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a containerized approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data Levels, each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at study-catalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).

  13. Machine learning approaches to analysing textual injury surveillance data: a systematic review.

    Science.gov (United States)

    Vallmuur, Kirsten

    2015-06-01

    To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Systematic review. The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliography of all relevant articles was examined and associated articles were identified using a snowballing technique. For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, a meta-analysis was not performed. Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold-standard data, content expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalizability, source data quality, complexity of models and integration of content and technical knowledge were discussed. The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years. With advances in data mining techniques, increased capacity for analysis of large databases, and involvement of computer scientists in the injury prevention field, along with more comprehensive use and description of quality
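
    The Bayesian text-classification methods mentioned above can be illustrated with a small scikit-learn sketch; the injury narratives, categories, and pipeline choices are invented for demonstration and are not the models used in the reviewed studies.

```python
# Sketch: Naive Bayes classification of short injury narratives into
# hypothetical categories. Texts and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

narratives = [
    "worker slipped on wet floor and fractured wrist",
    "fell from ladder while cleaning gutters",
    "hand caught in conveyor belt during maintenance",
    "finger lacerated by box cutter while opening stock",
    "tripped over cable and sprained ankle in office",
    "burned forearm on hot press in factory",
]
labels = ["fall", "fall", "machinery", "cut", "fall", "burn"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(narratives, labels)

print(model.predict(["slipped on icy step and hurt ankle"]))   # likely 'fall'
```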

  14. Dendritic Cell-Based Immunotherapies to Fight HIV: How Far from a Success Story? A Systematic Review and Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Antonio Victor Campos Coelho

    2016-11-01

    Full Text Available The scientific community still faces the challenge of developing strategies to cure HIV-1. One of these pursued strategies is the development of immunotherapeutic vaccines based on dendritic cells (DCs), pulsed with the virus, that aim to boost the HIV-1-specific immune response. We aimed to review reports of DC-based therapeutic vaccines and critically assess the evidence to gain insights for the improvement of these strategies. We performed a systematic review, followed by meta-analysis and meta-regression, of clinical trial reports. Twelve studies were selected for meta-analysis. The experimental vaccines had low efficiency, with an overall success rate around 38% (95% confidence interval = 26.7%–51.3%). Protocols differed according to antigen choice, DC culture method, and doses, although multivariate analysis did not show an influence of any of them on the overall success rate. The DC-based vaccines elicited at least some immunogenicity, which was sometimes associated with transient control of the plasma viral load. The protocols included both naïve and antiretroviral therapy (ART)-experienced individuals, and used different criteria for assessing vaccine efficacy. Although the vaccines did not work as expected, they are proof of concept that immune responses can be boosted against HIV-1. Protocol standardization and the use of auxiliary approaches, such as latent HIV-1 reservoir activation and patient genomics, are paramount for fine-tuning future HIV-1 cure strategies.

  15. A semantic-based approach for querying linked data using natural language

    KAUST Repository

    Paredes-Valverde, Mario Andrés; Valencia-García, Rafael; Rodriguez-Garcia, Miguel Angel; Colomo-Palacios, Ricardo; Alor-Hernández, Giner

    2016-01-01

    The semantic Web aims to provide Web information with a well-defined meaning and make it understandable not only by humans but also by computers, thus allowing the automation, integration and reuse of high-quality information across different applications. However, current information retrieval mechanisms for semantic knowledge bases are intended to be used only by expert users. In this work, we propose a natural language interface that allows non-expert users to access this kind of information by formulating queries in natural language. The present approach uses a domain-independent ontology model to represent the question's structure and context. This model also allows the answer type expected by the user to be determined, based on a proposed question classification. To prove the effectiveness of our approach, we have conducted an evaluation in the music domain using LinkedBrainz, an effort to provide the MusicBrainz information as structured data on the Web by means of Semantic Web technologies. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.74 to 0.82 for a corpus of questions generated by a group of real-world end users. © The Author(s) 2015.

  16. A semantic-based approach for querying linked data using natural language

    KAUST Repository

    Paredes-Valverde, Mario Andrés

    2016-01-11

    The semantic Web aims to provide Web information with a well-defined meaning and make it understandable not only by humans but also by computers, thus allowing the automation, integration and reuse of high-quality information across different applications. However, current information retrieval mechanisms for semantic knowledge bases are intended to be used only by expert users. In this work, we propose a natural language interface that allows non-expert users to access this kind of information by formulating queries in natural language. The present approach uses a domain-independent ontology model to represent the question's structure and context. This model also allows the answer type expected by the user to be determined, based on a proposed question classification. To prove the effectiveness of our approach, we have conducted an evaluation in the music domain using LinkedBrainz, an effort to provide the MusicBrainz information as structured data on the Web by means of Semantic Web technologies. Our proposal obtained encouraging results based on the F-measure metric, ranging from 0.74 to 0.82 for a corpus of questions generated by a group of real-world end users. © The Author(s) 2015.

  17. The effect of problem-based learning in nursing education: a meta-analysis.

    Science.gov (United States)

    Shin, In-Soo; Kim, Jung-Hee

    2013-12-01

    Problem-based learning (PBL) has been identified as an approach that improves the training of nurses by teaching them how to apply theory to clinical practice and by developing their problem-solving skills, which could be used to overcome environmental constraints within clinical practice. A consensus is emerging that there is a need for systematic reviews and meta-analyses regarding a range of selected topics in nursing education. The purpose of this study was to conduct a meta-analysis of the available literature in order to synthesize the effects of PBL in nursing education. Using a number of databases, we identified studies related to the effectiveness of PBL in nursing. An analysis was conducted on a range of outcome variables, including overall effect sizes and effects of evidence and evaluation levels, learning environment, and study characteristics. We found that the effect of PBL in nursing education is 0.70 standard deviations (medium-to-large effect size). We also found that PBL has positive effects on the outcome domains of satisfaction with training, clinical education, and skill course. These results may act as a guide for nurse educators with regard to the conditions under which PBL is more effective than traditional learning strategies.

  18. Unsupervised Approach Data Analysis Based on Fuzzy Possibilistic Clustering: Application to Medical Image MRI

    Directory of Open Access Journals (Sweden)

    Nour-Eddine El Harchaoui

    2013-01-01

    Full Text Available The analysis and processing of large data are a challenge for researchers. Several approaches have been used to model these complex data, based on mathematical theories: fuzzy, probabilistic, possibilistic, and evidence theories. In this work, we propose a new unsupervised classification approach that combines the fuzzy and possibilistic theories; our purpose is to overcome the problems of uncertain data in complex systems. We used the membership function of fuzzy c-means (FCM) to initialize the parameters of possibilistic c-means (PCM), in order to solve the problem of coinciding clusters that are generated by PCM and also to overcome the weakness of FCM to noise. To validate our approach, we used several validity indexes and compared them with other conventional classification algorithms: fuzzy c-means, possibilistic c-means, and possibilistic fuzzy c-means. The experiments were realized on different synthetic data sets and real brain MR images.
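
    A compact sketch of the idea of seeding possibilistic c-means with fuzzy c-means memberships; the code below is a generic textbook formulation written for illustration, not the authors' implementation, and the data are random.

```python
# Sketch: fuzzy c-means (FCM) memberships used to initialise the bandwidths
# of possibilistic c-means (PCM). Generic textbook updates; random toy data.
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    rng = np.random.default_rng(0)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                        # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)          # cluster centres
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1) + 1e-12
        U = 1.0 / (d2 ** (1 / (m - 1)) * (1.0 / d2 ** (1 / (m - 1))).sum(axis=0))
    return U, V, d2

def pcm_from_fcm(U, d2, m=2.0, K=1.0):
    Um = U ** m
    eta = K * (Um * d2).sum(axis=1) / Um.sum(axis=1)          # per-cluster bandwidth
    T = 1.0 / (1.0 + (d2 / eta[:, None]) ** (1 / (m - 1)))    # typicalities
    return T, eta

X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, (50, 2)) for mu in (0, 3)])
U, V, d2 = fcm(X, c=2)
T, eta = pcm_from_fcm(U, d2)
print(V)            # two centres near (0, 0) and (3, 3)
print(T.shape)      # (2, 100): typicality of each point to each cluster
```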

  19. Meta-analysis with R

    CERN Document Server

    Schwarzer, Guido; Rücker, Gerta

    2015-01-01

    This book provides a comprehensive introduction to performing meta-analysis using the statistical software R. It is intended for quantitative researchers and students in the medical and social sciences who wish to learn how to perform meta-analysis with R. As such, the book introduces the key concepts and models used in meta-analysis. It also includes chapters on the following advanced topics: publication bias and small study effects; missing data; multivariate meta-analysis; network meta-analysis; and meta-analysis of diagnostic studies.

  20. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis.

    Science.gov (United States)

    Neyeloff, Jeruza L; Fuchs, Sandra C; Moreira, Leila B

    2012-01-20

    Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. It is possible to conduct a meta-analysis using only Microsoft Excel. More importantly, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.
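
    The spreadsheets themselves are not reproduced here; as a rough, script-based equivalent of the inverse-variance fixed-effect pooling such a sheet performs for descriptive data (the event counts and sample sizes below are hypothetical), one could write:

```python
import numpy as np

def pool_prevalence_fixed(events, totals):
    """Fixed-effect inverse-variance pooling of study prevalences."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = events / totals
    var = p * (1 - p) / totals          # binomial variance of each prevalence
    w = 1.0 / var
    pooled = np.sum(w * p) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical studies: number of cases and sample sizes
print(pool_prevalence_fixed([30, 55, 12], [200, 400, 90]))
```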

  1. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis

    Directory of Open Access Journals (Sweden)

    Neyeloff Jeruza L

    2012-01-01

    Full Text Available Abstract Background Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. Findings We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. Conclusions It is possible to conduct a meta-analysis using only Microsoft Excel. More importantly, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.

  2. Design and rationale of a prospective, collaborative meta-analysis of all randomized controlled trials of angiotensin receptor antagonists in Marfan syndrome, based on individual patient data: A report from the Marfan Treatment Trialists' Collaboration

    Science.gov (United States)

    Pitcher, Alex; Emberson, Jonathan; Lacro, Ronald V.; Sleeper, Lynn A.; Stylianou, Mario; Mahony, Lynn; Pearson, Gail D.; Groenink, Maarten; Mulder, Barbara J.; Zwinderman, Aeilko H.; De Backer, Julie; De Paepe, Anne M.; Arbustini, Eloisa; Erdem, Guliz; Jin, Xu Yu; Flather, Marcus D.; Mullen, Michael J.; Child, Anne H.; Forteza, Alberto; Evangelista, Arturo; Chiu, Hsin-Hui; Wu, Mei-Hwan; Sandor, George; Bhatt, Ami B.; Creager, Mark A.; Devereux, Richard B.; Loeys, Bart; Forfar, J. Colin; Neubauer, Stefan; Watkins, Hugh; Boileau, Catherine; Jondeau, Guillaume; Dietz, Harry C.; Baigent, Colin

    2015-01-01

    Rationale A number of randomized trials are underway, which will address the effects of angiotensin receptor blockers (ARBs) on aortic root enlargement and a range of other end points in patients with Marfan syndrome. If individual participant data from these trials were to be combined, a meta-analysis of the resulting data, totaling approximately 2,300 patients, would allow estimation across a number of trials of the treatment effects both of ARB therapy and of β-blockade. Such an analysis would also allow estimation of treatment effects in particular subgroups of patients on a range of end points of interest and would allow a more powerful estimate of the effects of these treatments on a composite end point of several clinical outcomes than would be available from any individual trial. Design A prospective, collaborative meta-analysis based on individual patient data from all randomized trials in Marfan syndrome of (i) ARBs versus placebo (or open-label control) and (ii) ARBs versus β-blockers will be performed. A prospective study design, in which the principal hypotheses, trial eligibility criteria, analyses, and methods are specified in advance of the unblinding of the component trials, will help to limit bias owing to data-dependent emphasis on the results of particular trials. The use of individual patient data will allow for analysis of the effects of ARBs in particular patient subgroups and for time-to-event analysis for clinical outcomes. The meta-analysis protocol summarized in this report was written on behalf of the Marfan Treatment Trialists' Collaboration and finalized in late 2012, without foreknowledge of the results of any component trial, and will be made available online (http://www.ctsu.ox.ac.uk/research/meta-trials). PMID:25965707

  3. Diverging Conclusions from the Same Meta-Analysis in Drug Safety: Source of Data (Primary Versus Secondary) Takes a Toll.

    Science.gov (United States)

    Prada-Ramallal, Guillermo; Takkouche, Bahi; Figueiras, Adolfo

    2017-04-01

    Meta-analyses of observational studies represent an important tool for assessing efficacy and safety in the pharmacoepidemiologic field. The data from the individual studies are either primary (i.e., collected through interviews or self-administered questionnaires) or secondary (i.e., collected from databases that were established for other purposes). So far, the origin of the data (primary vs. secondary) has not been systematically assessed as a source of heterogeneity in pharmacoepidemiologic meta-analyses. The aim was to assess the impact of considering the source of exposure data as a criterion in sensitivity and subgroup analysis on the conclusions of drug safety meta-analyses. We selected meta-analyses published between 2013 and 2015 in which the intake of frequently used over-the-counter medicines was either the main exposure or a concomitant treatment and the outcome had short latency and induction periods. We stratified the results by origin of data (primary vs. secondary) and compared the new results to those presented originally in the meta-analyses. We used four meta-analyses that fulfilled our criteria of inclusion. The results were selective serotonin reuptake inhibitors and upper gastrointestinal bleeding: original estimate odds ratio (OR) = 1.71 [95% confidence interval (CI) 1.44-2.04], OR primary data = 1.19 (95% CI 0.90-1.58), OR secondary data = 1.81 (95% CI 1.50-2.17); proton pump inhibitors and cardiac events: original estimate hazard ratio (HR) = 1.35 (95% CI 1.18-1.54), HR primary data = 1.05 (95% CI 0.87-1.26), HR secondary data = 1.43 (95% CI 1.23-1.66); non-aspirin non-steroidal anti-inflammatory drugs and myocardial infarction: original estimate risk ratio (RR) = 1.08 (95% CI 0.95-1.22), RR primary data = 0.57 (95% CI 0.34-0.96), RR secondary data = 1.15 (95% CI 1.03-1.28); paracetamol during pregnancy and childhood asthma: original estimate OR = 1.32 (95% CI 1.14-1.52), OR primary data = 1.23 (95% CI 1.06-1.42), OR

  4. A MSFD complementary approach for the assessment of pressures, knowledge and data gaps in Southern European Seas: The PERSEUS experience

    DEFF Research Database (Denmark)

    Crise, A.; Kaberi, H.; Ruiz, José Luis Martinez

    2015-01-01

    The PERSEUS project aims to identify the most relevant pressures exerted on the ecosystems of the Southern European Seas (SES), highlighting knowledge and data gaps that endanger the achievement of SES Good Environmental Status (GES) as mandated by the Marine Strategy Framework Directive (MSFD)... A complementary approach has been adopted, by a meta-analysis of existing literature on pressure/impact/knowledge gaps summarized in tables related to the MSFD descriptors, discriminating open waters from coastal areas. A comparative assessment of the Initial Assessments (IAs) for five SES countries has also been... independently performed. The comparison between meta-analysis results and IAs shows similarities for coastal areas only. Major knowledge gaps have been detected for the biodiversity, marine food web, marine litter and underwater noise descriptors. The meta-analysis also allowed the identification of additional...

  5. panMetaDocs, eSciDoc, and DOIDB - an infrastructure for the curation and publication of file-based datasets for 'GFZ Data Services'

    Science.gov (United States)

    Ulbricht, Damian; Elger, Kirsten; Bertelmann, Roland; Klump, Jens

    2016-04-01

    With the foundation of DataCite in 2009 and the technical infrastructure installed in the last six years it has become very easy to create citable dataset DOIs. Nowadays, dataset DOIs are increasingly accepted and required by journals in reference lists of manuscripts. In addition, DataCite provides usage statistics [1] of assigned DOIs and offers a public search API to make research data count. By linking related information to the data, they become more useful for future generations of scientists. For this purpose, several identifier systems, as ISBN for books, ISSN for journals, DOI for articles or related data, Orcid for authors, and IGSN for physical samples can be attached to DOIs using the DataCite metadata schema [2]. While these are good preconditions to publish data, free and open solutions that help with the curation of data, the publication of research data, and the assignment of DOIs in one software seem to be rare. At GFZ Potsdam we built a modular software stack that is made of several free and open software solutions and we established 'GFZ Data Services'. 'GFZ Data Services' provides storage, a metadata editor for publication and a facility to moderate minted DOIs. All software solutions are connected through web APIs, which makes it possible to reuse and integrate established software. Core component of 'GFZ Data Services' is an eSciDoc [3] middleware that is used as central storage, and has been designed along the OAIS reference model for digital preservation. Thus, data are stored in self-contained packages that are made of binary file-based data and XML-based metadata. The eSciDoc infrastructure provides access control to data and it is able to handle half-open datasets, which is useful in embargo situations when a subset of the research data are released after an adequate period. The data exchange platform panMetaDocs [4] makes use of eSciDoc's REST API to upload file-based data into eSciDoc and uses a metadata editor [5] to annotate the files

  6. Unified Sequence-Based Association Tests Allowing for Multiple Functional Annotations and Meta-analysis of Noncoding Variation in Metabochip Data.

    Science.gov (United States)

    He, Zihuai; Xu, Bin; Lee, Seunggeun; Ionita-Laza, Iuliana

    2017-09-07

    Substantial progress has been made in the functional annotation of genetic variation in the human genome. Integrative analysis that incorporates such functional annotations into sequencing studies can aid the discovery of disease-associated genetic variants, especially those with unknown function and located outside protein-coding regions. Direct incorporation of one functional annotation as weight in existing dispersion and burden tests can suffer substantial loss of power when the functional annotation is not predictive of the risk status of a variant. Here, we have developed unified tests that can utilize multiple functional annotations simultaneously for integrative association analysis with efficient computational techniques. We show that the proposed tests significantly improve power when variant risk status can be predicted by functional annotations. Importantly, when functional annotations are not predictive of risk status, the proposed tests incur only minimal loss of power in relation to existing dispersion and burden tests, and under certain circumstances they can even have improved power by learning a weight that better approximates the underlying disease model in a data-adaptive manner. The tests can be constructed with summary statistics of existing dispersion and burden tests for sequencing data, therefore allowing meta-analysis of multiple studies without sharing individual-level data. We applied the proposed tests to a meta-analysis of noncoding rare variants in Metabochip data on 12,281 individuals from eight studies for lipid traits. By incorporating the Eigen functional score, we detected significant associations between noncoding rare variants in SLC22A3 and low-density lipoprotein and total cholesterol, associations that are missed by standard dispersion and burden tests. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
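
    As a simplified sketch of the kind of annotation-weighted analysis described above, assuming a single functional score per variant and a quantitative trait, the code below builds a weighted burden score and tests its association with the phenotype by linear regression; it illustrates the basic weighting idea only, not the unified multi-annotation tests or the summary-statistics meta-analysis developed in the paper:

```python
import numpy as np
from scipy import stats

def weighted_burden_test(genotypes, annotations, phenotype):
    """Annotation-weighted burden test (sketch).

    genotypes: (n_individuals, n_variants) minor-allele counts
    annotations: (n_variants,) functional scores used as weights
    phenotype: (n_individuals,) quantitative trait
    """
    weights = np.asarray(annotations, dtype=float)
    burden = genotypes @ weights                   # one weighted score per person
    slope, intercept, r, p_value, se = stats.linregress(burden, phenotype)
    return slope, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = rng.binomial(2, 0.02, size=(500, 30))      # simulated rare-variant genotypes
    w = rng.random(30)                             # hypothetical Eigen-like scores
    y = 0.3 * (G @ w) + rng.normal(size=500)       # simulated trait
    print(weighted_burden_test(G, w, y))
```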

  7. Case-fatality of COPD exacerbations: a meta-analysis and statistical modeling approach

    DEFF Research Database (Denmark)

    Hoogendoorn, M; Hoogenveen, R T; Rutten-van Mölken, M P

    2010-01-01

    %-confidence interval. The meta-analysis based on six studies that fulfilled the inclusion criteria resulted in a weighted average case-fatality rate of 15.6% (95%CI:10.9%-20.3%), ranging from 11.4% to 19.0% for the individual studies. A severe COPD exacerbation requiring hospitalization not only results...

  8. Meta-mining: a meta-learning framework to support the recommendation, planning and optimization of data mining workflows

    OpenAIRE

    Nguyen, Phong

    2015-01-01

    La fouille de données ou data mining peut être un processus extrêmement complexe dans lequel le data miner doit assembler dans un flux de travail un nombre d’opérateurs de traitement des données et d’analyse afin d’accomplir sa tâche. Afin de supporter le data miner dans la modélisation de son processus de découverte de connaissances, nous proposons un nouveau cadre de travail que nous appelons meta-mining ou méta-apprentissage orienté processus et qui étend de manière significative l’état de l’a...

  9. Data Hiding and Security for XML Database: A TRBAC- Based Approach

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wan-song; SUN Wei; LIU Da-xin

    2005-01-01

    In order to cope with varying protection granularity levels of XML (eXtensible Markup Language) documents, we propose a TXAC (Two-level XML Access Control) framework, in which an extended TRBAC (Temporal Role-Based Access Control) approach is proposed to deal with dynamic XML data. With its different system components, the TXAC algorithm evaluates access requests efficiently by applying an appropriate access control policy in a dynamic web environment. The method is a flexible and powerful security system offering a multi-level access control solution.
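
    As a loose illustration of the temporal role-based idea underlying TRBAC (the role names, permissions and daily activation window below are hypothetical and far simpler than the two-level XML policies in the paper), an access check might be sketched as:

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class TemporalRole:
    name: str
    start: time            # daily activation window
    end: time
    permissions: frozenset

def can_access(user_roles, requested_permission, now=None):
    """Grant access if any role is active at the current time and holds the permission."""
    now = now or datetime.now()
    t = now.time()
    for role in user_roles:
        if role.start <= t <= role.end and requested_permission in role.permissions:
            return True
    return False

if __name__ == "__main__":
    nurse_day = TemporalRole("nurse_day", time(8), time(20),
                             frozenset({"read:/patients", "write:/observations"}))
    print(can_access([nurse_day], "read:/patients",
                     datetime(2024, 1, 1, 10, 0)))   # True inside the window
```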

  10. Unique effects and moderators of effects of sources on self-efficacy: A model-based meta-analysis.

    Science.gov (United States)

    Byars-Winston, Angela; Diestelmann, Jacob; Savoy, Julia N; Hoyt, William T

    2017-11-01

    Self-efficacy beliefs are strong predictors of academic pursuits, performance, and persistence, and in theory are developed and maintained by 4 classes of experiences Bandura (1986) referred to as sources: performance accomplishments (PA), vicarious learning (VL), social persuasion (SP), and affective arousal (AA). The effects of sources on self-efficacy vary by performance domain and individual difference factors. In this meta-analysis (k = 61 studies of academic self-efficacy; N = 8,965), we employed B. J. Becker's (2009) model-based approach to examine cumulative effects of the sources as a set and unique effects of each source, controlling for the others. Following Becker's recommendations, we used available data to create a correlation matrix for the 4 sources and self-efficacy, then used these meta-analytically derived correlations to test our path model. We further examined moderation of these associations by subject area (STEM vs. non-STEM), grade, sex, and ethnicity. PA showed by far the strongest unique association with self-efficacy beliefs. Subject area was a significant moderator, with sources collectively predicting self-efficacy more strongly in non-STEM (k = 14) compared with STEM (k = 47) subjects (R2 = .37 and .22, respectively). Within studies of STEM subjects, grade level was a significant moderator of the coefficients in our path model, as were 2 continuous study characteristics (percent non-White and percent female). Practical implications of the findings and future research directions are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
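
    The core of the model-based approach is that path coefficients are computed from a meta-analytically derived correlation matrix rather than from raw data. As a minimal sketch (the correlation values below are invented, not the matrix estimated in the study), standardized path coefficients for self-efficacy regressed on the four sources can be obtained by solving the normal equations:

```python
import numpy as np

# Illustrative only: invented correlations among the four sources (PA, VL, SP, AA)
# and between each source and self-efficacy.
sources = ["PA", "VL", "SP", "AA"]
R_xx = np.array([
    [1.00, 0.45, 0.50, -0.30],
    [0.45, 1.00, 0.40, -0.20],
    [0.50, 0.40, 1.00, -0.25],
    [-0.30, -0.20, -0.25, 1.00],
])
r_xy = np.array([0.55, 0.35, 0.40, -0.35])

# Standardized path coefficients and variance explained
beta = np.linalg.solve(R_xx, r_xy)
r_squared = float(r_xy @ beta)

for name, b in zip(sources, beta):
    print(f"{name}: beta = {b:.3f}")
print(f"R^2 = {r_squared:.3f}")
```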

  11. Efficacy and safety of ifosfamide-based chemotherapy for osteosarcoma: a meta-analysis

    Directory of Open Access Journals (Sweden)

    Fan XL

    2015-11-01

    Full Text Available Xiao-Liang Fan,1,* Guo-Ping Cai,2,* Liu-Long Zhu,1 Guo-Ming Ding1 1Department of Orthopaedics, Hangzhou First People’s Hospital, Nanjing Medical University, Hangzhou, 2Department of Orthopaedics, Jinshan Hospital, Fudan University, Shanghai, People’s Republic of China *These authors contributed equally to this work Background: The efficacy of ifosfamide-based chemotherapy in the treatment of osteosarcoma has been investigated; however, results are inconsistent. Therefore, we reviewed the relevant studies and conducted a meta-analysis to assess the efficacy of ifosfamide-based chemotherapy in patients with osteosarcoma.Methods: A systematic literature search on PubMed, Embase, and Web of Science databases was performed. Eligible studies were clinical trials of patients with osteosarcoma who received ifosfamide-based chemotherapy. Hazard ratios (HRs were pooled to compare event-free survival (EFS and overall survival (OS. Risk ratios (RRs were pooled to compare good histologic response rates and adverse event incidence. Meta-analysis was performed using a fixed-effects model or a random-effects model according to heterogeneity.Results: A total of seven randomized controlled trials were included in this meta-analysis. Pooled results showed that ifosfamide-based chemotherapy significantly improved EFS (HR=0.72, 95% confidence interval [CI]: 0.63, 0.82; P=0.000 and OS (HR=0.83, 95% CI: 0.70, 0.99; P=0.034; furthermore, this form of chemotherapy increased good histologic response rate (RR=1.27, 95% CI: 1.10, 1.46; P=0.001. In addition, patients in the ifosfamide group exhibited a significantly higher incidence of fever (RR=2.23, 95% CI: 1.42, 3.50; P=0.000 and required more frequent platelet transfusion (RR=1.92, 95% CI: 1.23, 3.01; P=0.004.Conclusion: This meta-analysis confirmed that ifosfamide-based chemotherapy can significantly improve EFS and OS; this chemotherapy can also increase good histologic response rate in patients with osteosarcoma

  12. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to perform the sensitivity analysis of the results of simulation of thermal-hydraulic codes within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices that makes use of a meta-model. The paper also presents an application to a Large-Break Loss of Coolant Accident, LBLOCA, in the cold leg of a pressurized water reactor, PWR, addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
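
    As a small sketch of the sensitivity-analysis step described above, assuming uniform input distributions and a cheap surrogate standing in for the thermal-hydraulic meta-model, first-order Sobol' indices can be estimated with a pick-freeze scheme:

```python
import numpy as np

def first_order_sobol(model, bounds, n=10000, seed=0):
    """Pick-freeze estimator of first-order Sobol' indices for a cheap surrogate."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    A = rng.uniform(lo, hi, size=(n, d))
    B = rng.uniform(lo, hi, size=(n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(d):
        BAi = B.copy()
        BAi[:, i] = A[:, i]            # replace column i of B with column i of A
        yBAi = model(BAi)
        indices.append(np.mean(yA * (yBAi - yB)) / var_y)
    return np.array(indices)

# Stand-in surrogate for a thermal-hydraulic meta-model (purely illustrative)
def surrogate(x):
    return 3.0 * x[:, 0] + x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

print(first_order_sobol(surrogate, [(0, 1)] * 3))
```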

  13. Endoscopic versus surgical treatment of ampullary adenomas: a systematic review and meta-analysis

    Science.gov (United States)

    Mendonça, Ernesto Quaresma; Bernardo, Wanderley Marques; de Moura, Eduardo Guimarães Hourneaux; Chaves, Dalton Marques; Kondo, André; Pu, Leonardo Zorrón Cheng Tao; Baracat, Felipe Iankelevich

    2016-01-01

    The aim of this study is to address the outcomes of endoscopic resection compared with surgery in the treatment of ampullary adenomas. A systematic review and meta-analysis were performed according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations. For this purpose, the Medline, Embase, Cochrane, Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS), Scopus and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases were scanned. Studies included patients with ampullary adenomas and data considering endoscopic treatment compared with surgery. The entire analysis was based on a fixed-effects model. Five retrospective cohort studies were selected (466 patients). All five studies (466 patients) had complete primary resection data available and showed a difference that favored surgical treatment (risk difference [RD] = -0.24, 95% confidence interval [CI] = -0.44 to -0.04). Primary success data were identified in all five studies as well. Analysis showed that the surgical approach outperformed endoscopic treatment for this outcome (RD = -0.37, 95% CI = -0.50 to -0.24). Recurrence data were found in all studies (466 patients), with a benefit indicated for surgical treatment (RD = 0.10, 95% CI = -0.01 to 0.19). Three studies (252 patients) presented complication data, but analysis showed no difference between the approaches for this parameter (RD = -0.15, 95% CI = -0.53 to 0.23). Considering complete primary resection, primary success and recurrence outcomes, the surgical approach achieves significantly better results. Regarding complication data, this systematic review concludes that rates are not significantly different. PMID:26872081

  14. A time-series approach for clustering farms based on slaughterhouse health aberration data.

    Science.gov (United States)

    Hulsegge, B; de Greef, K H

    2018-05-01

    A large amount of data is collected routinely in meat inspection in pig slaughterhouses. A time series clustering approach is presented and applied that groups farms based on similar statistical characteristics of meat inspection data over time. A three-step characteristic-based clustering approach was used, based on the idea that the data contain more information than the incidence figures alone. A stratified subset containing 511,645 pigs was derived as a study set from 3.5 years of meat inspection data. The monthly averages of incidence of pleuritis and of pneumonia of 44 Dutch farms (delivering 5149 batches to 2 pig slaughterhouses) were subjected to 1) derivation of farm-level data characteristics, 2) factor analysis, and 3) clustering into groups of farms. The characteristic-based clustering was able to cluster farms for both lung aberrations. Three groups of data characteristics were informative, describing incidence, time pattern and degree of autocorrelation. The consistency of clustering similar farms was confirmed by repetition of the analysis in a larger dataset. The robustness of the clustering was tested on a substantially extended dataset. This confirmed the earlier results: three data distribution aspects make up the majority of distinction between groups of farms, and in these groups (clusters) the majority of the farms were allocated comparably to the earlier allocation (75% and 62% for pleuritis and pneumonia, respectively). The difference between pleuritis and pneumonia in their seasonal dependency was confirmed, supporting the biological relevance of the clustering. Comparison of the identified clusters of statistically comparable farms can be used to detect farm level risk factors causing the health aberrations beyond comparison on disease incidence and trend alone. Copyright © 2018 Elsevier B.V. All rights reserved.
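
    As a rough sketch of the three-step idea (characteristics, dimension reduction, clustering), assuming monthly incidence series per farm and using level, trend and lag-1 autocorrelation as hand-picked characteristics in place of the paper's factor analysis, one could write:

```python
import numpy as np
from sklearn.cluster import KMeans

def lag1_autocorrelation(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x * x)
    return float(np.sum(x[:-1] * x[1:]) / denom) if denom > 0 else 0.0

def farm_features(monthly_incidence):
    """Characteristics of one farm's series: overall level, trend, lag-1 autocorrelation."""
    y = np.asarray(monthly_incidence, dtype=float)
    slope = np.polyfit(np.arange(len(y)), y, 1)[0]
    return [y.mean(), slope, lag1_autocorrelation(y)]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Hypothetical monthly pleuritis incidence for 20 farms over 42 months
    farms = [rng.beta(2, 20, 42) for _ in range(20)]
    X = np.array([farm_features(f) for f in farms])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(labels)
```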

  15. Constraints on the nuclear equation of state from nuclear masses and radii in a Thomas-Fermi meta-modeling approach

    Science.gov (United States)

    Chatterjee, D.; Gulminelli, F.; Raduta, Ad. R.; Margueron, J.

    2017-12-01

    The question of correlations among empirical equation of state (EoS) parameters constrained by nuclear observables is addressed in a Thomas-Fermi meta-modeling approach. A recently proposed meta-modeling for the nuclear EoS in nuclear matter is augmented with a single finite size term to produce a minimal unified EoS functional able to describe the smooth part of the nuclear ground state properties. This meta-model can reproduce the predictions of a large variety of models, and interpolate continuously between them. An analytical approximation to the full Thomas-Fermi integrals is further proposed giving a fully analytical meta-model for nuclear masses. The parameter space is sampled and filtered through the constraint of nuclear mass reproduction with Bayesian statistical tools. We show that this simple analytical meta-modeling has a predictive power on masses, radii, and skins comparable to full Hartree-Fock or extended Thomas-Fermi calculations with realistic energy functionals. The covariance analysis on the posterior distribution shows that no physical correlation is present between the different EoS parameters. Concerning nuclear observables, a strong correlation between the slope of the symmetry energy and the neutron skin is observed, in agreement with previous studies.

  16. Protocol-developing meta-ethnography reporting guidelines (eMERGe).

    Science.gov (United States)

    France, E F; Ring, N; Noyes, J; Maxwell, M; Jepson, R; Duncan, E; Turley, R; Jones, D; Uny, I

    2015-11-25

    Designing and implementing high-quality health care services and interventions requires robustly synthesised evidence. Syntheses of qualitative research studies can provide evidence of patients' experiences of health conditions; intervention feasibility, appropriateness and acceptability to patients; and advance understanding of health care issues. The unique, interpretive, theory-based meta-ethnography synthesis approach is suited to conveying patients' views and developing theory to inform service design and delivery. However, meta-ethnography reporting is often poor quality, which discourages trust in, and use of, meta-ethnography findings. Users of evidence syntheses require reports that clearly articulate analytical processes and findings. Tailored research reporting guidelines can raise reporting standards but none exists for meta-ethnography. This study aims to create an evidence-based meta-ethnography reporting guideline articulating the methodological standards and depth of reporting required to improve reporting quality. The mixed-methods design of this National Institute of Health Research-funded study (http://www.stir.ac.uk/emerge/) follows good practice in research reporting guideline development comprising: (1) a methodological systematic review (PROSPERO registration: CRD42015024709) to identify recommendations and guidance in conducting/reporting meta-ethnography; (2) a review and audit of published meta-ethnographies to identify good practice principles and develop standards in conduct/reporting; (3) an online workshop and Delphi studies to agree guideline content with 45 international qualitative synthesis experts and 45 other stakeholders including patients; (4) development and wide dissemination of the guideline and its accompanying detailed explanatory document, a report template for National Institute of Health Research commissioned meta-ethnographies, and training materials on guideline use. Meta-ethnography, devised in the field of education

  17. Nodeomics: pathogen detection in vertebrate lymph nodes using meta-transcriptomics.

    Directory of Open Access Journals (Sweden)

    Nicola E Wittekindt

    Full Text Available The ongoing emergence of human infections originating from wildlife highlights the need for better knowledge of the microbial community in wildlife species where traditional diagnostic approaches are limited. Here we evaluate the microbial biota in healthy mule deer (Odocoileus hemionus) by analyses of lymph node meta-transcriptomes. cDNA libraries from five individuals and two pools of samples were prepared from retropharyngeal lymph node RNA enriched for polyadenylated RNA and sequenced using Roche-454 Life Sciences technology. Protein-coding and 16S ribosomal RNA (rRNA) sequences were taxonomically profiled using protein and rRNA specific databases. Representatives of all bacterial phyla were detected in the seven libraries based on protein-coding transcripts indicating that viable microbiota were present in lymph nodes. Residents of skin and rumen, and those ubiquitous in mule deer habitat dominated classifiable bacterial species. Based on detection of both rRNA and protein-coding transcripts, we identified two new proteobacterial species; a Helicobacter closely related to Helicobacter cetorum in the Helicobacter pylori/Helicobacter acinonychis complex and an Acinetobacter related to Acinetobacter schindleri. Among viruses, a novel gamma retrovirus and other members of the Poxviridae and Retroviridae were identified. We additionally evaluated bacterial diversity by amplicon sequencing the hypervariable V6 region of 16S rRNA and demonstrate that overall taxonomic diversity is higher with the meta-transcriptomic approach. These data provide the most complete picture to date of the microbial diversity within a wildlife host. Our research advances the use of meta-transcriptomics to study microbiota in wildlife tissues, which will facilitate detection of novel organisms with pathogenic potential to humans and animals.

  18. Spectral-spatial classification of hyperspectral data with mutual information based segmented stacked autoencoder approach

    Science.gov (United States)

    Paul, Subir; Nagesh Kumar, D.

    2018-04-01

    Hyperspectral (HS) data comprises continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of the HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of a linear, parametric dependency measure to take care of both linear and nonlinear inter-band dependency for spectral segmentation of the HS bands. Then morphological profiles are created corresponding to segmented spectral features to assimilate the spatial information in the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with a Gaussian kernel and Random Forest (RF), are used for classification of the three most popularly used HS datasets. Results of the numerical experiments carried out in this study have shown that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
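
    As a minimal sketch of the spectral-segmentation step, assuming bands are discretized into histogram bins and that segment boundaries are placed where the dependency between adjacent bands drops below a threshold (the thresholding rule here is a simplification of the paper's method), one could compute adjacent-band mutual information as follows:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def adjacent_band_mi(cube, bins=32):
    """Mutual information between each pair of adjacent spectral bands.

    cube: (rows, cols, bands) hyperspectral array."""
    bands = cube.reshape(-1, cube.shape[-1])
    digitized = np.stack(
        [np.digitize(b, np.histogram_bin_edges(b, bins)) for b in bands.T], axis=1
    )
    return np.array([
        mutual_info_score(digitized[:, i], digitized[:, i + 1])
        for i in range(digitized.shape[1] - 1)
    ])

def segment_bands(mi_values, threshold=None):
    """Place segment boundaries where adjacent-band dependency drops."""
    threshold = threshold if threshold is not None else mi_values.mean()
    return [i + 1 for i, mi in enumerate(mi_values) if mi < threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.random((10, 10, 40))            # stand-in for a real HS image
    print(segment_bands(adjacent_band_mi(cube)))
```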

  19. Meta-analysis on the effectiveness of team-based learning on medical education in China.

    Science.gov (United States)

    Chen, Minjian; Ni, Chunhui; Hu, Yanhui; Wang, Meilin; Liu, Lu; Ji, Xiaoming; Chu, Haiyan; Wu, Wei; Lu, Chuncheng; Wang, Shouyu; Wang, Shoulin; Zhao, Liping; Li, Zhong; Zhu, Huijuan; Wang, Jianming; Xia, Yankai; Wang, Xinru

    2018-04-10

    Team-based learning (TBL) has been adopted as a new medical pedagogical approach in China. However, there are no studies or reviews summarizing the effectiveness of TBL on medical education. This study aims to obtain an overall estimation of the effectiveness of TBL on outcomes of theoretical teaching of medical education in China. We retrieved the studies from inception through December, 2015. Chinese National Knowledge Infrastructure, Chinese Biomedical Literature Database, Chinese Wanfang Database, Chinese Scientific Journal Database, PubMed, EMBASE and Cochrane Database were searched. The quality of included studies was assessed by the Newcastle-Ottawa scale. Standardized mean difference (SMD) was applied for the estimation of the pooled effects. Heterogeneity was assessed using the I² statistic and further explored by meta-regression analysis. A total of 13 articles including 1545 participants eventually entered the meta-analysis. The quality scores of these studies ranged from 6 to 10. Altogether, TBL significantly increased students' theoretical examination scores when compared with lecture-based learning (LBL) (SMD = 2.46, 95% CI: 1.53-3.40). Additionally, TBL significantly increased students' learning attitude (SMD = 3.23, 95% CI: 2.27-4.20), and learning skill (SMD = 2.70, 95% CI: 1.33-4.07). The meta-regression results showed that randomization, education classification and gender diversity were the factors that caused heterogeneity. TBL in theoretical teaching of medical education seems to be more effective than LBL in improving the knowledge, attitude and skill of students in China, providing evidence for the implementation of TBL in medical education in China. Medical schools should implement TBL with consideration of practical teaching situations such as students' education level.

  20. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
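
    As a highly simplified sketch of the residual-monitoring idea (a k-sigma band over a trailing window stands in for the paper's piecewise linear engine model and detection logic; all signals below are synthetic), one could write:

```python
import numpy as np

def detect_anomalies(sensed, predicted, window=20, k=3.0):
    """Flag samples whose residual leaves a k-sigma band estimated
    from a trailing window of recent residuals."""
    residuals = np.asarray(sensed, float) - np.asarray(predicted, float)
    flags = np.zeros(len(residuals), dtype=bool)
    for i in range(window, len(residuals)):
        ref = residuals[i - window:i]
        mu, sigma = ref.mean(), ref.std() + 1e-9
        flags[i] = abs(residuals[i] - mu) > k * sigma
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.arange(200)
    model_output = np.sin(t / 20.0)                        # stand-in for the engine model
    measurement = model_output + rng.normal(0, 0.02, 200)
    measurement[150:] += 0.3                               # injected sensor bias fault
    print(np.where(detect_anomalies(measurement, model_output))[0][:5])
```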

  1. A statistical method to base nutrient recommendations on meta-analysis of intake and health-related status biomarkers.

    Directory of Open Access Journals (Sweden)

    Hilko van der Voet

    Full Text Available Nutrient recommendations in use today are often derived from relatively old data of few studies with few individuals. However, for many nutrients, including vitamin B-12, extensive data have now become available from both observational studies and randomized controlled trials, addressing the relation between intake and health-related status biomarkers. The purpose of this article is to provide new methodology for dietary planning based on dose-response data and meta-analysis. The methodology builds on existing work, and is consistent with current methodology and measurement error models for dietary assessment. The detailed purposes of this paper are twofold. Firstly, to define a Population Nutrient Level (PNL) for dietary planning in groups. Secondly, to show how data from different sources can be combined in an extended meta-analysis of intake-status datasets for estimating PNL as well as other nutrient intake values, such as the Average Nutrient Requirement (ANR) and the Individual Nutrient Level (INL). For this, a computational method is presented for comparing a bivariate lognormal distribution to a health criterion value. Procedures to meta-analyse available data in different ways are described. Example calculations on vitamin B-12 requirements were made for four models, assuming different ways of estimating the dose-response relation, and different values of the health criterion. Resulting estimates of ANRs, and to a lesser extent of INLs, were found to be sensitive to model assumptions, whereas estimates of PNLs were much less sensitive to these assumptions as they were closer to the average nutrient intake in the available data.
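
    As an illustrative sketch of the computational idea, assuming a lognormal intake distribution, a log-log linear intake-status relation and a Monte Carlo comparison against the health criterion (all parameter values below are invented, not the vitamin B-12 estimates from the paper), a Population Nutrient Level could be approximated as:

```python
import numpy as np

def fraction_below_criterion(mean_log_intake, sd_log_intake,
                             intercept, slope, resid_sd,
                             criterion, n=20000, seed=0):
    """Fraction of the population whose status biomarker falls below the
    health criterion, assuming a log-log linear intake-status relation."""
    rng = np.random.default_rng(seed)
    log_intake = rng.normal(mean_log_intake, sd_log_intake, n)
    log_status = intercept + slope * log_intake + rng.normal(0, resid_sd, n)
    return float(np.mean(np.exp(log_status) < criterion))

def population_nutrient_level(target_fraction=0.05, **kwargs):
    """Smallest mean intake for which the fraction below the criterion
    does not exceed the target (simple grid search on the log scale)."""
    for mu in np.linspace(0.0, 3.0, 301):
        if fraction_below_criterion(mu, **kwargs) <= target_fraction:
            return float(np.exp(mu))
    return None

# Invented illustrative parameters only
print(population_nutrient_level(sd_log_intake=0.4, intercept=4.5,
                                slope=0.35, resid_sd=0.3, criterion=100.0))
```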

  2. Individual patient data meta-analysis of acupuncture for chronic pain: protocol of the Acupuncture Trialists' Collaboration

    Directory of Open Access Journals (Sweden)

    Sherman Karen J

    2010-09-01

    analyses will investigate the impact of different sham techniques, styles of acupuncture or frequency and duration of treatment sessions. Discussion Individual patient data meta-analysis of high-quality trials will provide the most reliable basis for treatment decisions about acupuncture. Above all, however, we hope that our approach can serve as a model for future studies in acupuncture and other complementary therapies.

  3. Does the surgical approach for treating mandibular condylar fractures affect the rate of seventh cranial nerve injuries? A systematic review and meta-analysis based on a new classification for surgical approaches.

    Science.gov (United States)

    Al-Moraissi, Essam Ahmed; Louvrier, Aurélien; Colletti, Giacomo; Wolford, Larry M; Biglioli, Federico; Ragaey, Marwa; Meyer, Christophe; Ellis, Edward

    2018-03-01

    The purpose of this study was to determine the rate of facial nerve injury (FNI) when performing open reduction and internal fixation (ORIF) of mandibular condylar fractures by different surgical approaches. A systematic review and meta-analysis were performed that included several databases with specific keywords, a reference search, and a manual search for suitable articles. The inclusion criteria were all clinical trials, with the aim of assessing the rate of facial nerve injuries when ORIF of mandibular condylar fractures was performed using different surgical approaches. The main outcome variables were transient facial nerve injury (TFNI) and permanent facial nerve injury (PFNI) according to the fracture levels, namely: condylar head fractures (CHFs), condylar neck fractures (CNFs), and condylar base fractures (CBFs). For studies where there was no delineation between CNFs and CBFs, the fractures were defined as CNFs/CBFs. The dependent variables were the surgical approaches. A total of 3873 patients enrolled in 96 studies were included in this analysis. TFNI rates reported in the literature were as follows: A) For the transoral approach: a) for strictly intraoral 0.72% (1.3 in CNFs and 0% for CBFs); b) for the transbuccal trocar instrumentation 2.7% (4.2% in CNFs and 0% for CBFs); and c) for endoscopically assisted ORIF 4.2% (5% in CNFs, and 4% in CBFs). B) For low submandibular approach 15.3% (26.1% for CNFs, 11.8% for CBFs, and 13.7% for CNFs/CBFs). C) For the high submandibular/angular subparotid approach with masseter transection 0% in CBFs. D) For the high submandibular/angular transmassetric anteroparotid approach 0% (CNFs and CBFs). E) For the transparotid retromandibular approach a) with facial nerve preparation 14.4% (23.9% in CNFs, 11.8% in CBFs and 13.7% for CNFs/CBFs); b) without facial nerve preparation 19% (24.3% for CNFs and 10.5% for CBFs). F) For retromandibular transmassetric anteroparotid approach 3.4% in CNFs/CBFs. G) For retromandibular transmassetric anteroparotid

  4. ABERRANT RESTING-STATE BRAIN ACTIVITY IN POSTTRAUMATIC STRESS DISORDER: A META-ANALYSIS AND SYSTEMATIC REVIEW.

    Science.gov (United States)

    Koch, Saskia B J; van Zuiden, Mirjam; Nawijn, Laura; Frijling, Jessie L; Veltman, Dick J; Olff, Miranda

    2016-07-01

    About 10% of trauma-exposed individuals develop PTSD. Although a growing number of studies have investigated resting-state abnormalities in PTSD, inconsistent results suggest a need for a meta-analysis and a systematic review. We conducted a systematic literature search in four online databases using keywords for PTSD, functional neuroimaging, and resting-state. In total, 23 studies matched our eligibility criteria. For the meta-analysis, we included 14 whole-brain resting-state studies, reporting data on 663 participants (298 PTSD patients and 365 controls). We used the activation likelihood estimation approach to identify concurrence of whole-brain hypo- and hyperactivations in PTSD patients during rest. Seed-based studies could not be included in the quantitative meta-analysis. Therefore, a separate qualitative systematic review was conducted on nine seed-based functional connectivity studies. The meta-analysis showed consistent hyperactivity in the ventral anterior cingulate cortex and the parahippocampus/amygdala, but hypoactivity in the (posterior) insula, cerebellar pyramis and middle frontal gyrus in PTSD patients, compared to healthy controls. Partly concordant with these findings, the systematic review on seed-based functional connectivity studies showed enhanced salience network (SN) connectivity, but decreased default mode network (DMN) connectivity in PTSD. Combined, these altered resting-state connectivity and activity patterns could represent neurobiological correlates of increased salience processing and hypervigilance (SN), at the cost of awareness of internal thoughts and autobiographical memory (DMN) in PTSD. However, several discrepancies between findings of the meta-analysis and systematic review were observed, stressing the need for future studies on resting-state abnormalities in PTSD patients. © 2016 Wiley Periodicals, Inc.

  5. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically disperse locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  6. Vanadium Redox Flow Batteries Using meta-Polybenzimidazole-Based Membranes of Different Thicknesses.

    Science.gov (United States)

    Noh, Chanho; Jung, Mina; Henkensmeier, Dirk; Nam, Suk Woo; Kwon, Yongchai

    2017-10-25

    15, 25, and 35 μm thick meta-polybenzimidazole (PBI) membranes are doped with H2SO4 and tested in a vanadium redox flow battery (VRFB). Their performances are compared with those of Nafion membranes. Immersed in 2 M H2SO4, PBI absorbs about 2 mol of H2SO4 per mole of repeat unit. This results in low conductivity and low voltage efficiency (VE). In ex-situ tests, meta-PBI shows a negligible crossover of V3+ and V4+ ions, much lower than that of Nafion. This is due to electrostatic repulsive forces between vanadium cations and positively charged protonated PBI backbones, and the molecular sieving effect of PBI's nanosized pores. It turns out that charge efficiency (CE) of VRFBs using meta-PBI-based membranes is unaffected by or slightly increases with decreasing membrane thickness. Thick meta-PBI membranes require about 100 mV larger potentials to achieve the same charging current as thin meta-PBI membranes. This additional potential may increase side reactions or enable more vanadium ions to overcome the electrostatic energy barrier and to enter the membrane. On this basis, H2SO4-doped meta-PBI membranes should be thin to achieve high VE and CE. The energy efficiency of 15 μm thick PBI reaches 92%, exceeding that of Nafion 212 and 117 (N212 and N117) at 40 mA cm-2.

  7. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    Science.gov (United States)

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    A gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in a cellular context. Having immense importance in disease detection and drug finding, GRN has been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have been the focus of attraction in bioinformatics. Predominant use of meta-heuristic algorithms in training neural models has proved its excellence. Considering these facts, this paper is organized to survey neural modelling schemes of GRN and the efficacy of meta-heuristic algorithms towards parameter learning (i.e. weighting connections) within the model. This survey paper renders two different structure-related approaches to infer GRN, which are the global structure approach and the substructure approach. It also describes two neural modelling schemes: artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRN have been reviewed here.
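
    As a toy sketch of how a meta-heuristic can learn the weighting connections of a neurally modelled GRN (a simple (1+λ) evolution strategy and a sigmoidal recurrent model stand in for the many algorithms surveyed; all data are synthetic), one could write:

```python
import numpy as np

def simulate_grn(W, x0, steps=20):
    """Discrete-time recurrent model of gene expression: x(t+1) = sigmoid(W x(t))."""
    x = np.array(x0, dtype=float)
    traj = [x]
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-W @ x))
        traj.append(x)
    return np.array(traj)

def fit_weights(target_traj, x0, pop=30, gens=100, sigma=0.2, seed=0):
    """(1+lambda) evolution strategy standing in for the surveyed meta-heuristics."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    best_W = rng.normal(0, 0.5, (n, n))
    best_err = np.mean((simulate_grn(best_W, x0, len(target_traj) - 1) - target_traj) ** 2)
    for _ in range(gens):
        for _ in range(pop):
            cand = best_W + rng.normal(0, sigma, (n, n))
            err = np.mean((simulate_grn(cand, x0, len(target_traj) - 1) - target_traj) ** 2)
            if err < best_err:
                best_W, best_err = cand, err
    return best_W, best_err

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_W = rng.normal(0, 1, (3, 3))
    x0 = [0.2, 0.5, 0.8]
    target = simulate_grn(true_W, x0)
    W_hat, err = fit_weights(target, x0)
    print(f"final fit error: {err:.4f}")
```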

  8. Meta-Analysis of Comparing Personal and Environmental Factors Effective in Addiction Relapse (Iran, 2004 -2012

    Directory of Open Access Journals (Sweden)

    s Safari

    2014-12-01

    Full Text Available Objective: As a meta-analysis, this study aimed to integrate different studies and investigate the impact of individual and environmental factors on the recurrence of addiction in people who have quit. Method: This study is a meta-analysis that uses the Hunter and Schmidt approach. For this purpose, 28 out of 42 studies with acceptable methodologies were selected, upon which the meta-analysis was conducted. A meta-analysis checklist was the research instrument. Using summaries of the study results, the researchers manually calculated effect sizes and interpreted them based on the meta-analysis approach and Cohen’s table. Findings: Results revealed that the effect size of environmental factors on addiction relapse was 0.64, while that of individual factors was 0.41. Conclusion: According to Cohen’s table, the effect sizes are evaluated as moderate for individual factors and high for environmental factors on addiction relapse.

  9. Parent-only vs. parent-child (family-focused) approaches for weight loss in obese and overweight children: a systematic review and meta-analysis.

    Science.gov (United States)

    Jull, A; Chen, R

    2013-09-01

    Families are recommended as the agents of change for weight loss in overweight and obese children; family approaches are more effective than those that focus on the child alone. However, interventions that focus on parents alone have not been summarized. The objective of this review was to assess the effectiveness of interventions that compared a parent-only (PO) condition with a parent-child (PC) condition. Four trials using similar between-group background approaches to overweight and obese children's weight loss met the inclusion criteria, but only one trial reported sufficient data for meta-analysis. Further information was obtained from the authors. Meta-analysis showed no significant difference in z-BMI from baseline to end of treatment between the conditions (three trials) or to end of follow-up (two trials). The trials were at risk of bias and no single trial was at lower risk of bias than others. There is an absence of high quality evidence regarding the effect of parent-only interventions for weight loss in children compared to parent-child interventions, but current evidence suggests the need for further investigation. © 2013 The Authors. obesity reviews © 2013 International Association for the Study of Obesity.

  10. Long working hours and depressive symptoms: systematic review and meta-analysis of published studies and unpublished individual participant data

    OpenAIRE

    Virtanen, M.; Jokela, M.; Madsen, I. E.; Magnusson Hanson, L. L.; Lallukka, T.; Nyberg, S. T.; Alfredsson, L.; Batty, D.; Bjorner, J. B.; Borritz, M.; Burr, H.; Dragano, N.; Erbel, R.; Ferrie, J. E.; Heikkilä, K.

    2018-01-01

    Objectives This systematic review and meta-analysis combined published study-level data and unpublished individual-participant data with the aim of quantifying the relation between long working hours and the onset of depressive symptoms. Methods We searched PubMed and Embase for published prospective cohort studies and included available cohorts with unpublished individual-participant data. We used a random-effects meta-analysis to calculate summary estimates across studies. Results We identi...

  11. Evidence-based Neuro Linguistic Psychotherapy: a meta-analysis.

    Science.gov (United States)

    Zaharia, Cătălin; Reiner, Melita; Schütz, Peter

    2015-12-01

    Neuro Linguistic Programming (NLP) Framework has enjoyed enormous popularity in the field of applied psychology. NLP has been used in business, education, law, medicine and psychotherapy to identify people's patterns and alter their responses to stimuli, so they are better able to regulate their environment and themselves. NLP looks at achieving goals, creating stable relationships, eliminating barriers such as fears and phobias, building self-confidence, and self-esteem, and achieving peak performance. Neuro Linguistic Psychotherapy (NLPt) encompasses NLP as framework and set of interventions in the treatment of individuals with different psychological and/or social problems. We aimed systematically to analyse the available data regarding the effectiveness of Neuro Linguistic Psychotherapy (NLPt). The present work is a meta-analysis of studies, observational or randomized controlled trials, for evaluating the efficacy of Neuro Linguistic Programming in individuals with different psychological and/or social problems. The databases searched to identify studies in English and German language: CENTRAL in the Cochrane Library; PubMed; ISI Web of Knowledge (include results also from Medline and the Web of Science); PsycINFO (including PsycARTICLES); Psyndex; Deutschsprachige Diplomarbeiten der Psychologie (database of theses in Psychology in German language), Social SciSearch; National library of health and two NLP-specific research databases: one from the NLP Community (http://www.nlp.de/cgi-bin/research/nlprdb.cgi?action=res_entries) and one from the NLP Group (http://www.nlpgrup.com/bilimselarastirmalar/bilimsel-arastirmalar-4.html#Zweig154). From a total number of 425 studies, 350 were removed and considered not relevant based on the title and abstract. Included, in the final analysis, are 12 studies with numbers of participants ranging between 12 and 115 subjects. The vast majority of studies were prospective observational. The actual paper represents the first

  12. Meta-Analysis on Pharmacogenetics of Platinum-Based Chemotherapy in Non Small Cell Lung Cancer (NSCLC) Patients

    Science.gov (United States)

    Yin, Ji-Ye; Huang, Qiong; Zhao, Ying-Chun; Zhou, Hong-Hao; Liu, Zhao-Qian

    2012-01-01

    Aim To determine the pharmacogenetics of platinum-based chemotherapy in Non Small Cell Lung Cancer (NSCLC) patients. Methods Publications were selected from PubMed, Cochrane Library and ISI Web of Knowledge. A meta-analysis was conducted to determine the association between genetic polymorphisms and platinum-based chemotherapy by checking odds ratio (OR) and 95% confidence interval (CI). Results Data were extracted from 24 publications, which included 11 polymorphisms in 8 genes for meta-analysis. MDR1 C3435T (OR = 1.97, 95% CI: 1.11–3.50, P = 0.02), G2677A/T (OR = 2.61, 95% CI: 1.44–4.74, P = 0.002) and GSTP1 A313G (OR = 0.32, 95% CI: 0.17–0.58, P = 0.0002) were significantly correlated with platinum-based chemotherapy in Asian NSCLC patients. Conclusion Attention should be paid to MDR1 C3435T, G2677A/T and GSTP1 A313G for personalized chemotherapy treatment for NSCLC patients in Asian population in the future. PMID:22761669

  13. Prognostic factors of early metastasis and mortality in dogs with appendicular osteosarcoma after receiving surgery : an individual patient data meta-analysis

    NARCIS (Netherlands)

    Schmidt, A.F.; Nielen, M.; Klungel, O.H.; Hoes, A.W.; de Boer, A.; Groenwold, R.H.H.; Kirpensteijn, J.

    2013-01-01

    Recently an aggregated data meta-analysis showed that serum alkaline phosphatase (SALP) and proximal humerus location are predictors for shorter survival in canine osteosarcoma. To identify additional prognostic factors of mortality and metastasis an individual patient data meta-analysis (IPDMA) was

  14. Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-based Segmentation Approaches

    Directory of Open Access Journals (Sweden)

    Maggi Kelly

    2013-08-01

    Full Text Available Light detection and ranging (lidar) data is increasingly being used for ecosystem monitoring across geographic scales. This work concentrates on delineating individual trees in topographically-complex, mixed conifer forest across California's Sierra Nevada. We delineated individual trees using vector data and a 3D lidar point cloud segmentation algorithm, and using raster data with an object-based image analysis (OBIA) of a canopy height model (CHM). The two approaches are compared to each other and to ground reference data. We used high-density (9 pulses/m2), discrete lidar data and WorldView-2 imagery to delineate individual trees, and to classify them by species or species types. We also identified a new method to correct artifacts in a high-resolution CHM. Our main focus was to determine the difference between the two types of approaches and to identify the one that produces more realistic results. We compared the delineations via tree detection, tree heights, and the shape of the generated polygons. The tree height agreement was high between the two approaches and the ground data (r2: 0.93–0.96). Tree detection rates increased for more dominant trees (8–100 percent). The two approaches delineated tree boundaries that differed in shape: the lidar approach produced fewer, more complex, and larger polygons that more closely resembled real forest structure.
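
    As a small sketch of the raster-based side of such a comparison, assuming treetops are taken as local maxima of a smoothed canopy height model above a minimum height (the parameters below are illustrative, not those used in the study), one could write:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_tree_tops(chm, min_height=2.0, window=5, smooth_sigma=1.0):
    """Local-maxima treetop detection on a canopy height model (raster approach).

    chm: 2D array of canopy heights in metres."""
    smoothed = gaussian_filter(chm, smooth_sigma)        # suppress small CHM artifacts
    local_max = maximum_filter(smoothed, size=window) == smoothed
    tops = local_max & (smoothed >= min_height)
    rows, cols = np.nonzero(tops)
    return list(zip(rows.tolist(), cols.tolist()))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    chm = rng.random((100, 100)) * 0.5                   # stand-in for a real CHM
    yy, xx = np.mgrid[0:100, 0:100]
    for r, c in [(20, 30), (60, 70), (80, 15)]:          # three synthetic crowns
        chm += 15.0 * np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / 20.0)
    print(len(detect_tree_tops(chm)))
```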

  15. Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)

    Science.gov (United States)

    Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy

    2013-05-01

    GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.

  16. Tunable meta-atom using liquid metal embedded in stretchable polymer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peng; Yang, Siming; Wang, Qiugu; Jiang, Huawei; Song, Jiming; Dong, Liang, E-mail: ldong@iastate.edu [Department of Electrical and Computer Engineering, Iowa State University, Ames, Iowa 50011 (United States); Jain, Aditya [Department of Electrical and Computer Engineering, Iowa State University, Ames, Iowa 50011 (United States); Ames Laboratory, U.S. DOE and Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011 (United States); Koschny, Thomas; Soukoulis, Costas M. [Ames Laboratory, U.S. DOE and Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011 (United States)

    2015-07-07

    Reconfigurable metamaterials have great potential to alleviate complications involved in using passive metamaterials to realize emerging electromagnetic functions, such as dynamical filtering, sensing, and cloaking. This paper presents a new type of tunable meta-atom in the X-band frequency range (8–12 GHz) toward reconfigurable metamaterials. The meta-atom is made of all-flexible materials compliant to the surface of an interaction object. It uses a liquid metal-based split-ring resonator as its core constituent, embedded in a highly flexible elastomer. We demonstrate that simple mechanical stretching of the meta-atom provides great flexibility in reconfiguring its resonance frequency continuously over more than 70% of the X-band frequency range. The presented meta-atom technique provides a simple approach to dynamically tune the response characteristics of metamaterials over a broad frequency range.

  17. Tunable meta-atom using liquid metal embedded in stretchable polymer

    International Nuclear Information System (INIS)

    Liu, Peng; Yang, Siming; Wang, Qiugu; Jiang, Huawei; Song, Jiming; Dong, Liang; Jain, Aditya; Koschny, Thomas; Soukoulis, Costas M.

    2015-01-01

    Reconfigurable metamaterials have great potential to alleviate complications involved in using passive metamaterials to realize emerging electromagnetic functions, such as dynamical filtering, sensing, and cloaking. This paper presents a new type of tunable meta-atom in the X-band frequency range (8–12 GHz) toward reconfigurable metamaterials. The meta-atom is made of all-flexible materials compliant to the surface of an interaction object. It uses a liquid metal-based split-ring resonator as its core constituent, embedded in a highly flexible elastomer. We demonstrate that simple mechanical stretching of the meta-atom provides great flexibility in reconfiguring its resonance frequency continuously over more than 70% of the X-band frequency range. The presented meta-atom technique provides a simple approach to dynamically tune the response characteristics of metamaterials over a broad frequency range.

  18. Long working hours and alcohol use: systematic review and meta-analysis of published studies and unpublished individual participant data.

    Science.gov (United States)

    Virtanen, Marianna; Jokela, Markus; Nyberg, Solja T; Madsen, Ida E H; Lallukka, Tea; Ahola, Kirsi; Alfredsson, Lars; Batty, G David; Bjorner, Jakob B; Borritz, Marianne; Burr, Hermann; Casini, Annalisa; Clays, Els; De Bacquer, Dirk; Dragano, Nico; Erbel, Raimund; Ferrie, Jane E; Fransson, Eleonor I; Hamer, Mark; Heikkilä, Katriina; Jöckel, Karl-Heinz; Kittel, France; Knutsson, Anders; Koskenvuo, Markku; Ladwig, Karl-Heinz; Lunau, Thorsten; Nielsen, Martin L; Nordin, Maria; Oksanen, Tuula; Pejtersen, Jan H; Pentti, Jaana; Rugulies, Reiner; Salo, Paula; Schupp, Jürgen; Siegrist, Johannes; Singh-Manoux, Archana; Steptoe, Andrew; Suominen, Sakari B; Theorell, Töres; Vahtera, Jussi; Wagner, Gert G; Westerholm, Peter J M; Westerlund, Hugo; Kivimäki, Mika

    2015-01-13

    To quantify the association between long working hours and alcohol use. Systematic review and meta-analysis of published studies and unpublished individual participant data. A systematic search of PubMed and Embase databases in April 2014 for published studies, supplemented with manual searches. Unpublished individual participant data were obtained from 27 additional studies. The search strategy was designed to retrieve cross sectional and prospective studies of the association between long working hours and alcohol use. Summary estimates were obtained with random effects meta-analysis. Sources of heterogeneity were examined with meta-regression. Cross sectional analysis was based on 61 studies representing 333,693 participants from 14 countries. Prospective analysis was based on 20 studies representing 100,602 participants from nine countries. The pooled maximum adjusted odds ratio for the association between long working hours and alcohol use was 1.11 (95% confidence interval 1.05 to 1.18) in the cross sectional analysis of published and unpublished data. Odds ratio of new onset risky alcohol use was 1.12 (1.04 to 1.20) in the analysis of prospective published and unpublished data. In the 18 studies with individual participant data it was possible to assess the European Union Working Time Directive, which recommends an upper limit of 48 hours a week. Odds ratios of new onset risky alcohol use for those working 49-54 hours and ≥ 55 hours a week were 1.13 (1.02 to 1.26; adjusted difference in incidence 0.8 percentage points) and 1.12 (1.01 to 1.25; adjusted difference in incidence 0.7 percentage points), respectively, compared with working standard 35-40 hours (incidence of new onset risky alcohol use 6.2%). There was no difference in these associations between men and women or by age or socioeconomic groups, geographical regions, sample type (population based v occupational cohort), prevalence of risky alcohol use in the cohort, or sample attrition rate

  19. Solubility properties of synthetic and natural meta-torbernite

    Science.gov (United States)

    Cretaz, Fanny; Szenknect, Stéphanie; Clavier, Nicolas; Vitorge, Pierre; Mesbah, Adel; Descostes, Michael; Poinssot, Christophe; Dacheux, Nicolas

    2013-11-01

    Meta-torbernite, Cu(UO2)2(PO4)2·8H2O, is one of the most common secondary minerals resulting from the alteration of pitchblende. The determination of the thermodynamic data associated with this phase appears to be a crucial step toward understanding the origin of uranium deposits or forecasting the fate and transport of uranium in natural media. A parallel approach based on the study of both synthetic and natural samples of meta-torbernite (H3O)0.4Cu0.8(UO2)2(PO4)2·7.6H2O was set up to evaluate its solubility constant. The two solids were first thoroughly characterized and compared by means of XRD, SEM, X-EDS analyses, Raman spectroscopy and BET measurements. The solubility constant was then determined in both under- and supersaturated conditions: the obtained value appeared close to logKs,0°(298 K) = -52.9 ± 0.1 whatever the type of experiment and the sample considered. The joint determination of the Gibbs free energy (ΔRG°(298 K) = 300 ± 2 kJ mol-1) then allowed the calculation of ΔRH°(298 K) = 40 ± 3 kJ mol-1 and ΔRS°(298 K) = -879 ± 7 J mol-1 K-1. From these values, the thermodynamic data associated with the formation of meta-torbernite (H3O)0.4Cu0.8(UO2)2(PO4)2·7.6H2O were also evaluated and found to be consistent with those previously obtained by calorimetry, showing the reliability of the method developed in this work. Finally, the obtained data were implemented in a calculation code to determine the conditions of meta-torbernite formation in environmental conditions typical of a former mining site. The saturation index is defined as SI = log(Q/Ks), with Q = ∏i ai^νi, where νi is the stoichiometric coefficient (algebraic value) of species i and ai the nonequilibrium activity of i.
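
    The quantities reported above are linked by standard relations: ΔRG°(298 K) = −RT ln Ks, and the saturation index SI = log(Q/Ks) compares the ion activity product Q with the solubility constant. The short sketch below reproduces that arithmetic using the logKs value quoted in the record; the Q value is an assumed placeholder.

```python
import math

R, T = 8.314, 298.15           # gas constant (J mol^-1 K^-1) and temperature (K)
log_Ks = -52.9                 # solubility constant reported in the record

# Gibbs free energy of the dissolution reaction: delta_RG = -RT ln(Ks)
delta_G = -R * T * math.log(10 ** log_Ks) / 1000   # in kJ/mol
print(f"Gibbs free energy of reaction ~ {delta_G:.0f} kJ/mol")   # ~302, close to the reported 300 +/- 2

# Saturation index SI = log(Q / Ks) for a hypothetical ion activity product Q
log_Q = -54.2                  # assumed value, for illustration only
SI = log_Q - log_Ks
print(f"SI = {SI:.1f}  (negative: undersaturated, positive: oversaturated)")
```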

  20. Cognitive-Based Interventions to Improve Mobility: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Marusic, Uros; Verghese, Joe; Mahoney, Jeannette R

    2018-06-01

    A strong relation between cognition and mobility has been identified in aging, supporting a role for enhancing mobility through cognitive-based interventions. However, a critical evaluation of the consistency of treatment effects of cognitive-based interventions is currently lacking. The objective of this study was 2-fold: (1) to review the existing literature on cognitive-based interventions aimed at improving mobility in older adults and (2) to assess the clinical effectiveness of cognitive interventions on gait performance. A systematic review of randomized controlled trials (RCT) of cognitive training interventions for improving simple (normal walking) and complex (dual task walking) gait was conducted in February 2018. Older adults without major cognitive, psychiatric, neurologic, and/or sensory impairments were included. Random effect meta-analyses and a subsequent meta-regression were performed to generate overall cognitive intervention effects on single- and dual-task walking conditions. Ten RCTs met inclusion criteria, with a total of 351 participants included in this meta-analysis. Cognitive training interventions revealed a small effect of intervention on complex gait [effect size (ES) = 0.47, 95% confidence interval (CI) 0.13 to 0.81, P = .007, I² = 15.85%], but not simple gait (ES = 0.35, 95% CI -0.01 to 0.71, P = .057, I² = 57.32%). Moreover, a meta-regression analysis revealed that intervention duration, training frequency, total number of sessions, and total minutes spent in intervention were not significant predictors of improvement in dual-task walking speed, though there was a suggestive trend toward a negative association between dual-task walking speed improvements and individual training session duration (P = .067). This meta-analysis provides support for the fact that cognitive training interventions can improve mobility-related outcomes, especially during challenging walking conditions requiring higher-order executive

  1. Beta-binomial model for meta-analysis of odds ratios.

    Science.gov (United States)

    Bakbergenuly, Ilyas; Kulinskaya, Elena

    2017-05-20

    In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n⩾100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs≠1, but this bias is not very large in practical settings. Developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  2. Reducing a Knowledge-Base Search Space When Data Are Missing

    Science.gov (United States)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that without heuristics generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
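
    The combinatorial growth described above (1 + 2^n scenarios for n rules) is easy to reproduce; the snippet below is a trivial illustration of that count and matches the 65,537 figure quoted for 16 rules.

```python
def unpruned_scenarios(n_rules: int) -> int:
    """Number of scenarios generated without heuristics: 1 + 2^n."""
    return 1 + 2 ** n_rules

for n in (4, 8, 16, 32):
    print(f"{n:>2} rules -> {unpruned_scenarios(n):,} scenarios")
# 16 rules -> 65,537 scenarios, as stated in the record
```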

  3. Meta-analysis data for 104 Energy-Economy Nexus papers.

    Science.gov (United States)

    Hajko, Vladimír; Kociánová, Agáta; Buličková, Martina

    2017-06-01

    The data presented here are manually encoded characteristics of research papers in the area of Energy-Economy Nexus (empirical investigation of Granger causality between energy consumption and economic growth) that describe the methods, samples, and other details related to the individual estimations done in the examined empirical papers. Data cover papers indexed by Scopus, published in economic journals, written in English, after year 2000. In addition, papers were manually filtered to only those that deal with Energy-Economy Nexus investigation and had at least 10 citations (at the time of query – November 2015). These data are to be used to conduct meta-analysis; the associated dataset was used in Hajko [1]. An early version of the dataset was used for multinomial logit estimation in the Master's thesis by Kociánová [2].

  4. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study

    Science.gov (United States)

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias

    2018-01-01

    Abstract Objective To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) (“living” network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Design Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Data sources Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Eligibility criteria for study selection Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on side, or node, splitting test (Pmeta-analysis. The frequency and time to strong evidence was compared against the null hypothesis between pairwise and network meta-analyses. Results 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing

  5. The effectiveness and safety of antifibrinolytics in patients with acute intracranial haemorrhage: statistical analysis plan for an individual patient data meta-analysis.

    Science.gov (United States)

    Ker, Katharine; Prieto-Merino, David; Sprigg, Nikola; Mahmood, Abda; Bath, Philip; Kang Law, Zhe; Flaherty, Katie; Roberts, Ian

    2017-01-01

    Introduction: The Antifibrinolytic Trialists Collaboration aims to increase knowledge about the effectiveness and safety of antifibrinolytic treatment by conducting individual patient data (IPD) meta-analyses of randomised trials. This article presents the statistical analysis plan for an IPD meta-analysis of the effects of antifibrinolytics for acute intracranial haemorrhage. Methods: The protocol for the IPD meta-analysis has been registered with PROSPERO (CRD42016052155). We will conduct an individual patient data meta-analysis of randomised controlled trials with 1000 patients or more assessing the effects of antifibrinolytics in acute intracranial haemorrhage. We will assess the effect on two co-primary outcomes: 1) death in hospital at end of trial follow-up, and 2) death in hospital or dependency at end of trial follow-up. The co-primary outcomes will be limited to patients treated within three hours of injury or stroke onset. We will report treatment effects using odds ratios and 95% confidence intervals. We will use logistic regression models to examine how the effect of antifibrinolytics varies by time to treatment, severity of intracranial bleeding, and age. We will also examine the effect of antifibrinolytics on secondary outcomes including death, dependency, vascular occlusive events, seizures, and neurological outcomes. Secondary outcomes will be assessed in all patients irrespective of time of treatment. All analyses will be conducted on an intention-to-treat basis. Conclusions: This IPD meta-analysis will examine important clinical questions about the effects of antifibrinolytic treatment in patients with intracranial haemorrhage that cannot be answered using aggregate data. With IPD we can examine how effects vary by time to treatment, bleeding severity, and age, to gain better understanding of the balance of benefit and harms on which to base recommendations for practice.

  6. GAMLSS for high-dimensional data – a flexible approach based on boosting

    OpenAIRE

    Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias

    2010-01-01

    Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) to a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algo...

  7. Meta-analysis for quantitative microbiological risk assessments and benchmarking data

    NARCIS (Netherlands)

    Besten, den H.M.W.; Zwietering, M.H.

    2012-01-01

    Meta-analysis studies are increasingly being conducted in the food microbiology area to quantitatively integrate the findings of many individual studies on specific questions or kinetic parameters of interest. Meta-analyses provide global estimates of parameters and quantify their variabilities, and

  8. PedGenie: meta genetic association testing in mixed family and case-control designs

    Directory of Open Access Journals (Sweden)

    Allen-Brady Kristina

    2007-11-01

    Full Text Available Abstract Background- PedGenie software, introduced in 2006, includes genetic association testing of cases and controls that may be independent or related (nuclear families or extended pedigrees or mixtures thereof) using Monte Carlo significance testing. Our aim is to demonstrate that PedGenie, a unique and flexible analysis tool freely available in Genie 2.4 software, is significantly enhanced by incorporating meta statistics for detecting genetic association with disease using data across multiple study groups. Methods- Meta statistics (chi-squared tests, odds ratios, and confidence intervals) were calculated using formal Cochran-Mantel-Haenszel techniques. Simulated data from unrelated individuals and individuals in families were used to illustrate that the meta tests and their empirically-derived p-values and confidence intervals are accurate and precise, and for independent designs match those provided by standard statistical software. Results- PedGenie yields accurate Monte Carlo p-values for meta-analysis of data across multiple studies, based on validation testing using pedigree, nuclear family, and case-control data simulated under both the null and alternative hypotheses of a genotype-phenotype association. Conclusion- PedGenie allows valid combined analysis of data from mixtures of pedigree-based and case-control resources. Added meta capabilities provide new avenues for association analysis, including pedigree resources from large consortia and multi-center studies.
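
    As a rough illustration of the Cochran-Mantel-Haenszel pooling mentioned above (not PedGenie's actual implementation), the Mantel-Haenszel common odds ratio across strata can be computed as follows; the stratum counts are invented.

```python
# Each stratum (study group): a 2x2 table [[a, b], [c, d]] of case/control x allele counts.
# Counts below are invented purely for illustration.
strata = [
    [[30, 70], [20, 80]],
    [[45, 55], [35, 65]],
    [[12, 38], [9, 41]],
]

num = den = 0.0
for (a, b), (c, d) in strata:
    n = a + b + c + d
    num += a * d / n          # Mantel-Haenszel numerator term for this stratum
    den += b * c / n          # Mantel-Haenszel denominator term for this stratum

or_mh = num / den
print(f"Mantel-Haenszel common OR = {or_mh:.2f}")
```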

  9. Energy consumption and economic growth—New evidence from meta analysis

    International Nuclear Information System (INIS)

    Chen, Ping-Yu; Chen, Sheng-Tung; Chen, Chi-Chung

    2012-01-01

    The causal relationships between energy consumption and economic growth have given rise to much discussion but remain controversial. Alternative data sets based on different time spans, countries, energy policies and econometric approaches result in diverse outcomes. A meta analysis using a multinomial logit model with 174 samples governing the relationships between GDP and energy consumption is applied here to investigate the major factors that affect these controversial outcomes. The empirical results have demonstrated how the time spans, subject selections (including GDP and energy consumption), econometric models, and greenhouse gas emission reduction tools significantly affect these controversial outcomes. - Highlights: ► The controversial causal relationships between energy consumption and GDP are investigated. ► A meta analysis using a multinomial logit model is adopted. ► 74 studies governing the relationships between GDP and energy consumption were collected. ► The empirical results show how the probability of major factors affects such relationships.

  10. How do the features of mindfulness-based cognitive therapy contribute to positive therapeutic change? A meta-synthesis of qualitative studies.

    Science.gov (United States)

    Cairns, Victoria; Murray, Craig

    2015-05-01

    The exploration of Mindfulness-based Cognitive Therapy through qualitative investigation is a growing area of interest within current literature, providing valuable understanding of the process of change experienced by those engaging in this therapeutic approach. This meta-synthesis aims to gain a deeper understanding of how the features of Mindfulness-based Cognitive Therapy contribute to positive therapeutic change. Noblit and Hare's (1988) 7-step meta-ethnography method was conducted in order to synthesize the findings of seven qualitative studies. The process of reciprocal translation identified the following five major themes: i) Taking control through understanding, awareness and acceptance; ii) The impact of the group; (iii) Taking skills into everyday life; (iv) Feelings towards the self; (v) The role of expectations. The synthesis of translation identified the higher order concept of "The Mindfulness-based Cognitive Therapy Journey to Change", which depicts the complex interaction between the five themes in relation to how they contribute to positive therapeutic change. The findings are discussed in relation to previous research, theory and their implications for clinical practice.

  11. Preoperative radiotherapy in esophageal carcinoma: a meta-analysis using individual patient data (oesophageal cancer collaborative group)

    International Nuclear Information System (INIS)

    Arnott, Sydney J.; Duncan, William; Gignoux, Marc; Girling, David J.; Hansen, Hanne S.; Launois, B.; Nygaard, Knut; Parmar, Mahesh K.B.; Roussel, Alain; Spiliopoulos, G.; Stewart, Lesley A.; Tierney, Jayne F.; Wang Mei; Zhang Rugang

    1998-01-01

    Purpose: The existing randomized evidence has failed to conclusively demonstrate the benefit or otherwise of preoperative radiotherapy in treating patients with potentially resectable esophageal carcinoma. This meta-analysis aimed to assess whether there is benefit from adding radiotherapy prior to surgery. Methods and Materials: This quantitative meta-analysis included updated individual patient data from all properly randomized trials (published or unpublished) comprising 1147 patients (971 deaths) from five randomized trials. Results: With a median follow-up of 9 years, the hazard ratio (HR) of 0.89 (95% CI 0.78-1.01) suggests an overall reduction in the risk of death of 11% and an absolute survival benefit of 3% at 2 years and 4% at 5 years. This result is not conventionally statistically significant (p = 0.062). No clear differences in the size of the effect by sex, age, or tumor location were apparent. Conclusion: Based on existing trials, there was no clear evidence that preoperative radiotherapy improves the survival of patients with potentially resectable esophageal cancer. These results indicate that if such preoperative radiotherapy regimens do improve survival, then the effect is likely to be modest with an absolute improvement in survival of around 3 to 4%. Trials or a meta-analysis of around 2000 patients would be needed to reliably detect such an improvement (15→20%)

  12. Novel citation-based search method for scientific literature: application to meta-analyses

    NARCIS (Netherlands)

    Janssens, A.C.J.W.; Gwinn, M.

    2015-01-01

    Background: Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of

  13. Assumption-versus data-based approaches to summarizing species' ranges.

    Science.gov (United States)

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2018-06-01

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  14. Data communication between data terminal equipment and the JPL administrative data base management system

    Science.gov (United States)

    Iverson, R. W.

    1984-01-01

    Approaches to enabling an installed base of mixed data terminal equipment to access a data base management system designed to work with a specific terminal are discussed. The approach taken by the Jet Propulsion Laboratory is described. Background information on the Jet Propulsion Laboratory (JPL), its organization and a description of the Administrative Data Base Management System is included.

  15. De-MetaST-BLAST: a tool for the validation of degenerate primer sets and data mining of publicly available metagenomes.

    Directory of Open Access Journals (Sweden)

    Christopher A Gulvik

    Full Text Available Development and use of primer sets to amplify nucleic acid sequences of interest is fundamental to studies spanning many life science disciplines. As such, the validation of primer sets is essential. Several computer programs have been created to aid in the initial selection of primer sequences that may or may not require multiple nucleotide combinations (i.e., degeneracies. Conversely, validation of primer specificity has remained largely unchanged for several decades, and there are currently few available programs that allows for an evaluation of primers containing degenerate nucleotide bases. To alleviate this gap, we developed the program De-MetaST that performs an in silico amplification using user defined nucleotide sequence dataset(s and primer sequences that may contain degenerate bases. The program returns an output file that contains the in silico amplicons. When De-MetaST is paired with NCBI's BLAST (De-MetaST-BLAST, the program also returns the top 10 nr NCBI database hits for each recovered in silico amplicon. While the original motivation for development of this search tool was degenerate primer validation using the wealth of nucleotide sequences available in environmental metagenome and metatranscriptome databases, this search tool has potential utility in many data mining applications.

  16. NeuroLOG: sharing neuroimaging data using an ontology-based federated approach.

    Science.gov (United States)

    Gibaud, Bernard; Kassel, Gilles; Dojat, Michel; Batrancourt, Bénédicte; Michel, Franck; Gaignard, Alban; Montagnat, Johan

    2011-01-01

    This paper describes the design of the NeuroLOG middleware data management layer, which provides a platform to share heterogeneous and distributed neuroimaging data using a federated approach. The semantics of shared information is captured through a multi-layer application ontology and a derived Federated Schema used to align the heterogeneous database schemata from different legacy repositories. The system also provides a facility to translate the relational data into a semantic representation that can be queried using a semantic search engine, thus enabling the exploitation of knowledge embedded in the ontology. This work shows the relevance of the distributed approach for neurosciences data management. Although more complex than a centralized approach, it is also more realistic when considering the federation of large data sets, and it opens strong perspectives for implementing multi-centric neuroscience studies.

  17. Relational-Based Sensor Data Cleansing

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Nordbjerg, Finn Ebertsen

    2015-01-01

    Today sensors are widely used in many monitoring applications. Due to some random environmental effects and/or sensing failures, the collected sensor data is typically noisy. Thus, it is critical to cleanse the sensor data before using it to answer queries or conduct data analysis. Popular data cleansing approaches, such as classification, prediction and moving average, are not suited for embedded sensor devices, due to the limited storage and processing capabilities. In this paper, we propose a sensor data cleansing approach using relational-based technologies, including constraints, triggers and granularity-based data aggregation. The proposed approach is simple but effective in cleansing different types of dirty data, including delayed data, incomplete data, incorrect data, duplicate data and missing data. We evaluate the proposed strategy to verify its efficiency, effectiveness and adaptability.

  18. Relational-Based Sensor Data Cleansing

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Nordbjerg, Finn Ebertsen

    2015-01-01

    Today sensors are widely used in many monitoring applications. Due to some random environmental effects and/or sensing failures, the collected sensor data is typically noisy. Thus, it is critical to cleanse the data before using it for answering queries or for data analysis. Popular data cleansing approaches, such as classification, prediction and moving average, are not suited for embedded sensor devices, due to their limited storage and processing capabilities. In this paper, we propose a sensor data cleansing approach using relational-based technologies, including constraints, triggers and granularity-based data aggregation. The proposed approach is simple but effective in cleansing different types of dirty data, including delayed data, incomplete data, incorrect data, duplicate data and missing data. We evaluate the proposed strategy to verify its efficiency and effectiveness.

  19. Online open neuroimaging mass meta-analysis

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup; Kempton, Matthew J.; Williams, Steven C. R.

    We describe a system for meta-analysis where a wiki stores numerical data in a simple format and a web service performs the numerical computation. We initially apply the system on multiple meta-analyses of structural neuroimaging data results. The described system allows for mass meta-analysis, e...

  20. A Meta-Analysis of Educational Data Mining on Improvements in Learning Outcomes

    Science.gov (United States)

    AlShammari, Iqbal A.; Aldhafiri, Mohammed D.; Al-Shammari, Zaid

    2013-01-01

    A meta-synthesis study was conducted of 60 research studies on educational data mining (EDM) and their impacts on and outcomes for improving learning outcomes. After an overview, an examination of these outcomes is provided (Romero, Ventura, Espejo, & Hervas, 2008; Romero, "et al.", 2011). Then, a review of other EDM-related research…

  1. Objective evaluation of analyzer performance based on a retrospective meta-analysis of instrument validation studies: point-of-care hematology analyzers.

    Science.gov (United States)

    Cook, Andrea M; Moritz, Andreas; Freeman, Kathleen P; Bauer, Natali

    2017-06-01

    Information on quality requirements and objective evaluation of performance of veterinary point-of-care analyzers (POCAs) is scarce. The study was aimed at assessing observed total errors (TEobs) for veterinary hematology POCAs via meta-analysis and comparing TEobs to allowable total error (TEa) specifications based on experts' opinions. The TEobs for POCAs (impedance and laser-based) was calculated based on data from instrument validation studies published between 2006 and 2013 as follows: TEobs = 2 × CV [%] + bias [%]. The CV was taken from published studies; the bias was estimated from the regression equation at 2 different concentration levels of measurands. To fulfill quality requirements, TEobs should be < TEa. More than 60% of analyzers showed TEobs < TEa for hematology variables. For the CBC, TEobs was < TEa (data from 3 analyzers). This meta-analysis is considered a pilot study. Experts' requirements (TEobs < TEa) were fulfilled for most measurands except HGB (due to instrument-related bias for the ADVIA 2120) and platelet counts. Available data on the WBC differential count suggest an analytic bias, so nonstatistical quality control is recommended. © 2017 American Society for Veterinary Clinical Pathology.
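
    The total error formula quoted in the record, TEobs = 2 × CV [%] + bias [%], and its comparison against an allowable total error TEa can be reproduced directly; the CV, bias, and TEa figures below are invented for illustration.

```python
def total_error_observed(cv_pct: float, bias_pct: float) -> float:
    """Observed total error as defined in the record: TEobs = 2 x CV [%] + bias [%]."""
    return 2 * cv_pct + bias_pct   # bias assumed to be expressed as a positive percentage

# Hypothetical analyzer performance per measurand: (CV %, bias %, allowable TEa %)
measurands = {"WBC": (3.0, 4.0, 15.0), "HGB": (1.5, 8.0, 7.0), "PLT": (6.0, 10.0, 20.0)}

for name, (cv, bias, tea) in measurands.items():
    te_obs = total_error_observed(cv, bias)
    verdict = "meets" if te_obs < tea else "exceeds"
    print(f"{name}: TEobs = {te_obs:.1f}% {verdict} TEa = {tea:.1f}%")
```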

  2. Comparative linkage meta-analysis reveals regionally-distinct, disparate genetic architectures: application to bipolar disorder and schizophrenia.

    Directory of Open Access Journals (Sweden)

    Brady Tang

    2011-04-01

    Full Text Available New high-throughput, population-based methods and next-generation sequencing capabilities hold great promise in the quest for common and rare variant discovery and in the search for "missing heritability." However, the optimal analytic strategies for approaching such data are still actively debated, representing the latest rate-limiting step in genetic progress. Since it is likely a majority of common variants of modest effect have been identified through the application of tagSNP-based microarray platforms (i.e., GWAS), alternative approaches robust to detection of low-frequency (1-5% MAF) and rare (<1%) variants are of great importance. Of direct relevance, we have available an accumulated wealth of linkage data collected through traditional genetic methods over several decades, the full value of which has not been exhausted. To that end, we compare results from two different linkage meta-analysis methods--GSMA and MSP--applied to the same set of 13 bipolar disorder and 16 schizophrenia GWLS datasets. Interestingly, we find that the two methods implicate distinct, largely non-overlapping, genomic regions. Furthermore, based on the statistical methods themselves and our contextualization of these results within the larger genetic literatures, our findings suggest, for each disorder, distinct genetic architectures may reside within disparate genomic regions. Thus, comparative linkage meta-analysis (CLMA) may be used to optimize low-frequency and rare variant discovery in the modern genomic era.

  3. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Full Text Available Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, with possibly the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.

  4. Why, when and how to update a meta-ethnography qualitative synthesis.

    Science.gov (United States)

    France, Emma F; Wells, Mary; Lang, Heidi; Williams, Brian

    2016-03-15

    Meta-ethnography is a unique, systematic, qualitative synthesis approach widely used to provide robust evidence on patient and clinician beliefs and experiences and understandings of complex social phenomena. It can make important theoretical and conceptual contributions to health care policy and practice. Since beliefs, experiences, health care contexts and social phenomena change over time, the continued relevance of the findings from meta-ethnographies cannot be assumed. However, there is little guidance on whether, when and how meta-ethnographies should be updated; Cochrane guidance on updating reviews of intervention effectiveness is unlikely to be fully appropriate. This is the first in-depth discussion on updating a meta-ethnography; it explores why, when and how to update a meta-ethnography. Three main methods of updating the analysis and synthesis are examined. Advantages and disadvantages of each method are outlined, relating to the context, purpose, process and output of the update and the nature of the new data available. Recommendations are made for the appropriate use of each method, and a worked example of updating a meta-ethnography is provided. This article makes a unique contribution to this evolving area of meta-ethnography methodology.

  5. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi; Kadri, Farid; Khadraoui, Sofiane; Sun, Ying

    2016-01-01

    The complexity of ozone (O3) formation mechanisms in the troposphere makes the fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modeling in order to enhance anomaly detection performance. Such a choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurement data collected from the Upper Normandy region, France, via the network of air quality monitoring stations. The detection results of the proposed method are compared to those declared by the Air Normand air monitoring association.

  6. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi

    2016-02-01

    The complexity of ozone (O3) formation mechanisms in the troposphere makes the fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modeling in order to enhance anomaly detection performance. Such a choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurement data collected from the Upper Normandy region, France, via the network of air quality monitoring stations. The detection results of the proposed method are compared to those declared by the Air Normand air monitoring association.
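
    The two records above describe combining a PCA model with a multivariate EWMA (MEWMA) statistic to flag small shifts in correlated data. The following compact sketch shows that general scheme under stated assumptions (placeholder data, three retained components, and an arbitrary control limit); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))          # placeholder in-control training data
X_new = rng.normal(size=(100, 6)) + 0.3      # placeholder new data with a small mean shift

# --- PCA model fitted on in-control data ---
mean, std = X_train.mean(0), X_train.std(0)
cov = np.cov((X_train - mean) / std, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
P = eigvec[:, np.argsort(eigval)[::-1][:3]]  # retain 3 principal components (assumption)

def scores(X):
    return ((X - mean) / std) @ P

# --- MEWMA on the PCA scores ---
lam = 0.2                                    # EWMA smoothing parameter (typical small value)
S = np.cov(scores(X_train), rowvar=False) * lam / (2 - lam)   # asymptotic covariance of the EWMA vector
S_inv = np.linalg.inv(S)

z = np.zeros(P.shape[1])
for i, t in enumerate(scores(X_new)):
    z = lam * t + (1 - lam) * z              # multivariate EWMA recursion
    T2 = z @ S_inv @ z                       # MEWMA monitoring statistic
    if T2 > 12.0:                            # placeholder control limit
        print(f"sample {i}: T2 = {T2:.1f} exceeds the control limit")
```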

  7. Consensus building for interlaboratory studies, key comparisons, and meta-analysis

    Science.gov (United States)

    Koepke, Amanda; Lafarge, Thomas; Possolo, Antonio; Toman, Blaza

    2017-06-01

    Interlaboratory studies in measurement science, including key comparisons, and meta-analyses in several fields, including medicine, serve to intercompare measurement results obtained independently, and typically produce a consensus value for the common measurand that blends the values measured by the participants. Since interlaboratory studies and meta-analyses reveal and quantify differences between measured values, regardless of the underlying causes for such differences, they also provide so-called ‘top-down’ evaluations of measurement uncertainty. Measured values are often substantially over-dispersed by comparison with their individual, stated uncertainties, thus suggesting the existence of yet unrecognized sources of uncertainty (dark uncertainty). We contrast two different approaches to take dark uncertainty into account both in the computation of consensus values and in the evaluation of the associated uncertainty, which have traditionally been preferred by different scientific communities. One inflates the stated uncertainties by a multiplicative factor. The other adds laboratory-specific ‘effects’ to the value of the measurand. After distinguishing what we call recipe-based and model-based approaches to data reductions in interlaboratory studies, we state six guiding principles that should inform such reductions. These principles favor model-based approaches that expose and facilitate the critical assessment of validating assumptions, and give preeminence to substantive criteria to determine which measurement results to include, and which to exclude, as opposed to purely statistical considerations, and also how to weigh them. Following an overview of maximum likelihood methods, three general purpose procedures for data reduction are described in detail, including explanations of how the consensus value and degrees of equivalence are computed, and the associated uncertainty evaluated: the DerSimonian-Laird procedure; a hierarchical Bayesian
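
    Of the procedures named above, the DerSimonian-Laird approach is the most widely used; the following minimal sketch shows how it estimates the between-laboratory (dark) uncertainty and a consensus value from laboratory means and stated standard uncertainties. The input values are invented.

```python
import math

# Hypothetical laboratory results: (measured value, stated standard uncertainty)
results = [(10.3, 0.2), (10.9, 0.3), (10.5, 0.25), (11.4, 0.4), (10.6, 0.2)]
x = [v for v, _ in results]
u = [s for _, s in results]

w = [1 / s**2 for s in u]                              # inverse-variance weights
xw = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)     # weighted mean
Q = sum(wi * (xi - xw) ** 2 for wi, xi in zip(w, x))   # Cochran's Q

# DerSimonian-Laird moment estimate of the between-laboratory variance (dark uncertainty)
k = len(x)
tau2 = max(0.0, (Q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

w_star = [1 / (s**2 + tau2) for s in u]                # weights inflated by tau^2
consensus = sum(wi * xi for wi, xi in zip(w_star, x)) / sum(w_star)
u_consensus = math.sqrt(1 / sum(w_star))
print(f"tau = {math.sqrt(tau2):.3f}, consensus = {consensus:.3f} +/- {u_consensus:.3f}")
```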

  8. In search of a corrected prescription drug elasticity estimate: a meta-regression approach.

    Science.gov (United States)

    Gemmill, Marin C; Costa-Font, Joan; McGuire, Alistair

    2007-06-01

    An understanding of the relationship between cost sharing and drug consumption depends on consistent and unbiased price elasticity estimates. However, there is wide heterogeneity among studies, which constrains the applicability of elasticity estimates for empirical purposes and policy simulation. This paper attempts to provide a corrected measure of the drug price elasticity by employing meta-regression analysis (MRA). The results indicate that the elasticity estimates are significantly different from zero, and the corrected elasticity is -0.209 when the results are made robust to heteroskedasticity and clustering of observations. Elasticity values are higher when the study was published in an economic journal, when the study employed a greater number of observations, and when the study used aggregate data. Elasticity estimates are lower when the institutional setting was a tax-based health insurance system.
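
    Meta-regression of this kind regresses reported elasticity estimates on study-level characteristics, typically with inverse-variance weights. The sketch below shows the general form with invented data and moderator names; it is not the authors' exact specification.

```python
import numpy as np

# Invented study-level data: elasticity estimate, its standard error, and two moderators
elasticity = np.array([-0.15, -0.30, -0.10, -0.45, -0.22, -0.35])
se = np.array([0.05, 0.08, 0.04, 0.10, 0.06, 0.09])
econ_journal = np.array([1, 0, 1, 0, 1, 0])        # published in an economics journal
aggregate_data = np.array([0, 1, 0, 1, 1, 0])      # study used aggregate data

X = np.column_stack([np.ones_like(elasticity), econ_journal, aggregate_data])
W = np.diag(1 / se**2)                             # inverse-variance weights

# Weighted least squares: beta = (X'WX)^-1 X'Wy
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ elasticity)
print(dict(zip(["intercept", "econ_journal", "aggregate_data"], beta.round(3))))
```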

  9. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model

    Science.gov (United States)

    Musekiwa, Alfred; Manda, Samuel O. M.; Mwambi, Henry G.; Chen, Ding-Geng

    2016-01-01

    Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and utilize a practical example involving meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results. PMID:27798661

  10. A Visual Analysis Approach for Inferring Personal Job and Housing Locations Based on Public Bicycle Data

    Directory of Open Access Journals (Sweden)

    Xiaoying Shi

    2017-07-01

    Full Text Available Information concerning the home and workplace of residents is the basis for analyzing the urban job-housing spatial relationship. Traditional methods conduct time-consuming user surveys to obtain personal job and housing location information. Some new methods define rules to detect personal places based on human mobility data. However, because the travel patterns of residents are variable, simple rule-based methods are unable to generalize to highly changing and complex travel modes. In this paper, we propose a visual analysis approach to assist the analyzer in inferring personal job and housing locations interactively based on public bicycle data. All users are first clustered to find potential commuting users. Then, several visual views are designed to find the key candidate stations for a specific user, and the temporal visiting patterns of stations and the user's hire behavior are analyzed, which helps with the inference of station semantic meanings. Finally, a number of users' job and housing locations are detected by the analyzer and visualized. Our approach can manage the complex and diverse cycling habits of users. The effectiveness of the approach is shown through case studies based on a real-world public bicycle dataset.

  11. Economic evaluation of algae biodiesel based on meta-analyses

    Science.gov (United States)

    Zhang, Yongli; Liu, Xiaowei; White, Mark A.; Colosi, Lisa M.

    2017-08-01

    The objective of this study is to elucidate the economic viability of algae-to-energy systems at a large scale, by developing a meta-analysis of five previously published economic evaluations of systems producing algae biodiesel. Data from original studies were harmonised into a standardised framework using financial and technical assumptions. Results suggest that the selling price of algae biodiesel under the base case would be $5.00-10.31/gal, higher than the selected benchmarks: $3.77/gal for petroleum diesel, and $4.21/gal for commercial biodiesel (B100) from conventional vegetable oil or animal fat. However, the projected selling price of algal biodiesel ($2.76-4.92/gal), following anticipated improvements, would be competitive. A scenario-based sensitivity analysis reveals that the price of algae biodiesel is most sensitive to algae biomass productivity, algae oil content, and algae cultivation cost. This indicates that improvements in the yield, quality, and cost of algae feedstock could be the key factors to make algae-derived biodiesel economically viable.

  12. Detecting small-study effects and funnel plot asymmetry in meta-analysis of survival data: A comparison of new and existing tests.

    Science.gov (United States)

    Debray, Thomas P A; Moons, Karel G M; Riley, Richard D

    2018-03-01

    Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect size and their standard error, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences statistical significance of the hazard ratio. We subsequently propose 2 new tests that are based on the total number of observed events and adopt a multiplicative variance component. We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study where we varied the true hazard ratio (0.5 to 1), the number of published trials (N=10 to 100), the degree of censoring within trials (0% to 90%), and the mechanism leading to participant dropout (noninformative versus informative). Results demonstrate that previous well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant dropout. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use to evaluate funnel plot asymmetry in meta-analysis of survival data. The use of funnel plot asymmetry tests should, however, be avoided when there are few trials available for any meta-analysis. © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons, Ltd.
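
    Tests of funnel plot asymmetry of the kind discussed regress the effect estimate on a precision-related covariate; the tests proposed here replace the standard error with a function of the observed number of events. The following is a rough, generic sketch of such a regression-based test (not the authors' exact test), with invented trial data.

```python
import numpy as np
from scipy import stats

# Invented trials: log hazard ratio, its standard error, and number of observed events
log_hr = np.array([-0.60, -0.45, -0.20, -0.35, -0.05, -0.15, -0.50, -0.10])
events = np.array([40, 60, 210, 95, 420, 350, 55, 380])

# Regress the effect size on 1/sqrt(events), weighting by the number of events;
# a slope clearly different from zero suggests funnel plot asymmetry (small-study effects).
predictor = 1 / np.sqrt(events)
X = np.column_stack([np.ones_like(predictor), predictor])
W = np.diag(events.astype(float))

beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_hr)
resid = log_hr - X @ beta
dof = len(log_hr) - 2
phi = (resid @ W @ resid) / dof                    # multiplicative dispersion factor
cov = phi * np.linalg.inv(X.T @ W @ X)
t_stat = beta[1] / np.sqrt(cov[1, 1])
p = 2 * stats.t.sf(abs(t_stat), dof)
print(f"asymmetry slope = {beta[1]:.2f}, p = {p:.3f}")
```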

  13. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior

    OpenAIRE

    Hagger, Martin; Chan, Dervin K. C.; Protogerou, Cleo; Chatzisarantis, Nikos L. D.

    2016-01-01

    Objective Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs fr...

  14. Publishing datasets with eSciDoc and panMetaDocs

    Science.gov (United States)

    Ulbricht, D.; Klump, J.; Bertelmann, R.

    2012-04-01

    Currently, several research institutions worldwide undertake considerable efforts to have their scientific datasets published and to syndicate them to data portals as extensively described objects identified by a persistent identifier. This is done to foster the reuse of data, to make scientific work more transparent, and to create a citable entity that can be referenced unambiguously in written publications. GFZ Potsdam established a publishing workflow for file-based research datasets. Key software components are an eSciDoc infrastructure [1] and multiple instances of the data curation tool panMetaDocs [2]. The eSciDoc repository holds data objects and their associated metadata in container objects, called eSciDoc items. A key metadata element in this context is the publication status of the referenced data set. PanMetaDocs, which is based on PanMetaWorks [3], is a PHP-based web application that allows users to describe data with any XML-based metadata schema. The metadata fields can be filled with static or dynamic content to reduce to a minimum the number of fields that require manual entry and to make use of contextual information in a project setting. Access rights can be applied to control the visibility of datasets to other project members, to allow collaboration on and notification about datasets (RSS), and to enable interaction with the internal messaging system inherited from panMetaWorks. When a dataset is to be published, panMetaDocs allows the publication status of the eSciDoc item to be changed from "private" to "submitted" and the dataset to be prepared for verification by an external reviewer. After quality checks, the item publication status can be changed to "published". This makes the data and metadata available through the internet worldwide. PanMetaDocs is developed as an eSciDoc application. It is an easy-to-use graphical user interface to eSciDoc items, their data and metadata. It is also an application supporting a DOI publication agent during the process of
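
    A toy sketch of the publication-status workflow described here (private, then submitted, then published). The class and method names are hypothetical and do not reflect the actual eSciDoc or panMetaDocs API; they only illustrate the state transitions.

```python
# Toy model of the dataset publication workflow; names are hypothetical,
# not the eSciDoc/panMetaDocs API.
ALLOWED = {"private": {"submitted"}, "submitted": {"published", "private"}}

class DatasetItem:
    def __init__(self, identifier, metadata):
        self.identifier = identifier     # e.g. a persistent identifier such as a DOI
        self.metadata = metadata         # schema-driven metadata held as a dict
        self.status = "private"          # initially visible to project members only

    def transition(self, new_status):
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.status = new_status

item = DatasetItem("10.0000/example-doi", {"title": "Example dataset"})
item.transition("submitted")   # hand over to an external reviewer
item.transition("published")   # after quality checks: publicly visible
```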

  15. Old and new approaches to the interpretation of acid-base metabolism, starting from historical data applied to diabetic acidosis.

    Science.gov (United States)

    Mioni, Roberto; Marega, Alessandra; Lo Cicero, Marco; Montanaro, Domenico

    2016-11-01

    The approach to acid-base chemistry in medicine includes several methods. Currently, the two most popular procedures are derived from Stewart's studies and from the bicarbonate/BE-based classical formulation. Another method, unfortunately little known, follows the Kildeberg theory applied to acid-base titration. By using the data produced by Dana Atchley in 1933, regarding electrolytes and blood gas analysis applied to diabetes, we compared the three aforementioned methods, in order to highlight their strengths and their weaknesses. The results obtained, by reprocessing the data of Atchley, have shown that Kildeberg's approach, unlike the other two methods, is consistent, rational and complete for describing the organ-physiological behavior of the hydrogen ion turnover in human organism. In contrast, the data obtained using the Stewart approach and the bicarbonate-based classical formulation are misleading and fail to specify which organs or systems are involved in causing or maintaining the diabetic acidosis. Stewart's approach, despite being considered 'quantitative', does not propose in any way the concept of 'an amount of acid' and becomes even more confusing, because it is not clear how to distinguish between 'strong' and 'weak' ions. As for Stewart's approach, the classical method makes no distinction between hydrogen ions managed by the intermediate metabolism and hydroxyl ions handled by the kidney, but, at least, it is based on the concept of titration (base-excess) and indirectly defines the concept of 'an amount of acid'. In conclusion, only Kildeberg's approach offers a complete understanding of the causes and remedies against any type of acid-base disturbance.

  16. [Considerations on the use of meta-analyses in the orientation of knowledge and decisions in Occupational Medicine].

    Science.gov (United States)

    Catalani, Simona; Berra, Alessandro; Tomasi, Cesare; Romano, Canzio; Pira, Enrico; Garzaro, Giacomo; Apostoli, Pietro

    2015-01-01

    In recent years, driven by the need to synthesize the growing amount of information available in the scientific literature, meta-analyses and systematic reviews have become very numerous. Meta-analyses are carried out to evaluate the association between two events when individual studies have not provided comprehensive data. On the other hand, a good meta-analysis must satisfy certain criteria, from the selection of the studies to the evaluation of the outcomes; to this end, the application of quality assessment methods is crucial for obtaining data of adequate reliability. The aim of this review is to provide some introductory tools for a critical approach to meta-analyses and systematic reviews, which have become useful instruments in occupational medicine as well.

  17. A repository based on a dynamically extensible data model supporting multidisciplinary research in neuroscience.

    Science.gov (United States)

    Corradi, Luca; Porro, Ivan; Schenone, Andrea; Momeni, Parastoo; Ferrari, Raffaele; Nobili, Flavio; Ferrara, Michela; Arnulfo, Gabriele; Fato, Marco M

    2012-10-08

    Robust, extensible and distributed databases integrating clinical, imaging and molecular data represent a substantial challenge for modern neuroscience. It is even more difficult to provide extensible software environments able to effectively target the rapidly changing data requirements and structures of research experiments. There is increasing demand from the neuroscience community for software tools addressing technical challenges about: (i) supporting researchers in the medical field to carry out data analysis using integrated bioinformatics services and tools; (ii) handling multimodal/multiscale data and metadata, enabling the injection of several different data types according to structured schemas; (iii) providing high extensibility, in order to address different requirements deriving from a large variety of applications simply through a user runtime configuration. A dynamically extensible data structure supporting collaborative multidisciplinary research projects in neuroscience has been defined and implemented. We have considered extensibility issues from two different points of view. First, the improvement of data flexibility has been taken into account. This has been done through the development of a methodology for the dynamic creation and use of data types and related metadata, based on the definition of a "meta" data model. In this way, users are not constrained to a set of predefined data types, and the model is easily extensible and applicable to different contexts. Second, users have been enabled to easily customize and extend the experimental procedures in order to track each step of acquisition or analysis. This has been achieved through a process-event data structure, a multipurpose taxonomic schema composed of two generic main objects: events and processes. Then, a repository has been built based on this data model and structure, and deployed on distributed resources thanks to a Grid-based approach. Finally, data integration aspects have been
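
    A minimal sketch of what a runtime-extensible "meta" data model with generic process and event objects could look like. All class and field names below are hypothetical and do not reflect the repository's actual implementation; they only illustrate the idea of data types defined at runtime and tracked within a process-event structure.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class DataType:                     # defined by users at runtime, not hard-coded
    name: str
    fields: Dict[str, type]         # metadata schema: field name -> expected type

@dataclass
class Record:
    dtype: DataType
    values: Dict[str, Any]

    def __post_init__(self):        # validate the record against its runtime schema
        for key, typ in self.dtype.fields.items():
            if key not in self.values or not isinstance(self.values[key], typ):
                raise TypeError(f"field '{key}' missing or not of type {typ.__name__}")

@dataclass
class Event:                        # a single acquisition or analysis step
    name: str
    records: List[Record] = field(default_factory=list)

@dataclass
class Process:                      # an ordered chain of events (the experiment)
    name: str
    events: List[Event] = field(default_factory=list)

# Example: an EEG acquisition type created at runtime and tracked in a process.
eeg = DataType("EEG", {"subject": str, "sampling_rate_hz": int})
rec = Record(eeg, {"subject": "S01", "sampling_rate_hz": 512})
proc = Process("pilot-study", [Event("acquisition", [rec])])
```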

  18. Individual behavioral phenotypes: an integrative meta-theoretical framework. Why "behavioral syndromes" are not analogs of "personality".

    Science.gov (United States)

    Uher, Jana

    2011-09-01

    Animal researchers are increasingly interested in individual differences in behavior. Their interpretation as meaningful differences in behavioral strategies stable over time and across contexts, adaptive, heritable, and acted upon by natural selection has triggered new theoretical developments. However, the analytical approaches used to explore behavioral data still address population-level phenomena, and statistical methods suitable to analyze individual behavior are rarely applied. I discuss fundamental investigative principles and analytical approaches to explore whether, in what ways, and under which conditions individual behavioral differences are actually meaningful. I elaborate the meta-theoretical ideas underlying common theoretical concepts and integrate them into an overarching meta-theoretical and methodological framework. This unravels commonalities and differences, and shows that assumptions of analogy to concepts of human personality are not always warranted and that some theoretical developments may be based on methodological artifacts. Yet, my results also highlight possible directions for new theoretical developments in animal behavior research. Copyright © 2011 Wiley Periodicals, Inc.

  19. Recognition of Emotions in Autism: A Formal Meta-Analysis

    Science.gov (United States)

    Uljarevic, Mirko; Hamilton, Antonia

    2013-01-01

    Determining the integrity of emotion recognition in autistic spectrum disorder is important to our theoretical understanding of autism and to teaching social skills. Previous studies have reported both positive and negative results. Here, we take a formal meta-analytic approach, bringing together data from 48 papers testing over 980 participants…

  20. Meta-research matters: Meta-spin cycles, the blindness of bias, and rebuilding trust.

    Science.gov (United States)

    Bero, Lisa

    2018-04-01

    Meta-research is research about research. Meta-research may not be as click-worthy as a meta-pug (a pug dog dressed up in a pug costume), but it is crucial to understanding research. A particularly valuable contribution of meta-research is to identify biases in a body of evidence. Bias can occur in the design, conduct, or publication of research and is a systematic deviation from the truth in results or inferences. The findings of meta-research can tell us which evidence to trust and what must be done to improve future research. We should be using meta-research to provide the evidence base for implementing systemic changes to improve research, not for discrediting it.

  1. Proposed prediction algorithms based on hybrid approach to deal with anomalies of RFID data in healthcare

    Directory of Open Access Journals (Sweden)

    A. Anny Leema

    2013-07-01

    Full Text Available The RFID technology has penetrated the healthcare sector due to its increased functionality, low cost, high reliability, and easy-to-use capabilities. It is being deployed for various applications, and the data captured by RFID readers accumulate over time, resulting in an enormous volume of duplicate, false-positive, and false-negative readings. The dirty data stream generated by the RFID readers is one of the main factors limiting the widespread adoption of RFID technology. In order to provide reliable data to RFID applications, the collected data must be cleaned effectively before they are warehoused. The existing approaches to deal with anomalies are the physical, middleware, and deferred approaches. The shortcomings of the existing approaches are analyzed, and it is found that a robust RFID system can be built by integrating the middleware and deferred approaches. Our proposed algorithms, based on a hybrid approach, are tested in the healthcare environment and predict false-positive, false-negative, and redundant data. In this paper, a healthcare environment is simulated using RFID, and the data observed by the RFID reader contain false-positive, false-negative, and duplicate readings. Experimental evaluation shows that our cleansing methods remove errors in RFID data more accurately and efficiently. Thus, with the aid of the proposed data cleaning technique, we can bring down the healthcare costs, optimize business processes, streamline patient identification processes, and improve patient safety.
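
    A minimal sketch of the kind of window-based cleaning pass implied by the three anomaly types (duplicates, false negatives, false positives). This is an illustration only, not the authors' hybrid algorithms; the window length, support threshold, and sample reads are arbitrary.

```python
from collections import Counter

# Toy reads: (timestamp, tag_id) pairs from one reader; values illustrative only.
reads = [(1, "tagA"), (1, "tagA"),    # duplicate read
         (2, "tagA"), (4, "tagA"),    # gap at t=3 (possible false negative)
         (3, "tagZ")]                 # single stray read (possible false positive)

def clean(reads, window=3, min_support=2):
    """Window-based cleaning: drop exact duplicates, drop tags seen fewer
    than `min_support` times in total (likely false positives), and fill
    short read gaps up to `window` time steps (likely false negatives)."""
    dedup = sorted(set(reads))                               # remove exact duplicates
    support = Counter(tag for _, tag in dedup)
    kept = [(t, tag) for t, tag in dedup if support[tag] >= min_support]
    filled = list(kept)
    by_tag = {}
    for t, tag in kept:
        by_tag.setdefault(tag, []).append(t)
    for tag, times in by_tag.items():
        for a, b in zip(times, times[1:]):
            if 1 < b - a <= window:                          # short gap -> interpolate
                filled.extend((t, tag) for t in range(a + 1, b))
    return sorted(filled)

print(clean(reads))   # [(1, 'tagA'), (2, 'tagA'), (3, 'tagA'), (4, 'tagA')]
```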

  2. MetaStorm: A Public Resource for Customizable Metagenomics Annotation.

    Directory of Open Access Journals (Sweden)

    Gustavo Arango-Argoty

    Full Text Available Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/, which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution.

  3. MetaStorm: A Public Resource for Customizable Metagenomics Annotation.

    Science.gov (United States)

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution.

  4. MetaStorm: A Public Resource for Customizable Metagenomics Annotation

    Science.gov (United States)

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S.; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution. PMID:27632579

  5. Parents’ experiences of neonatal transfer. A meta-study of qualitative research 2000-2017

    DEFF Research Database (Denmark)

    Aagaard, Hanne; Hall, Elisabeth; Ludvigsen, Mette Spliid

    2018-01-01

    Transfers of critically ill neonates are frequent phenomena. Even though parents' participation is regarded as crucial in neonatal care, a transfer often means that parents and neonates are separated. A systematic review of the parents' experiences of neonatal transfer is lacking. This paper describes a meta-study addressing qualitative research about parents' experiences of neonatal transfer. Through deconstruction and reflections of theories, methods, and empirical data, the aim was to achieve a deeper understanding of theoretical, empirical, contextual, historical, and methodological issues of qualitative studies concerning parents' experiences of neonatal transfer over the course of this meta-study (2000–2017). Meta-theory and meta-method analyses showed that caring, transition, and family-centered care were the main theoretical frames applied and that interviewing with a small … identified 'wavering and wandering' as a metaphoric representation of the parents' experiences. The findings add knowledge about meta-study as an approach for comprehensive qualitative research and point at the value of meta-theory and meta-method analyses.

  6. OntoCR: A CEN/ISO-13606 clinical repository based on ontologies.

    Science.gov (United States)

    Lozano-Rubí, Raimundo; Muñoz Carrero, Adolfo; Serrano Balazote, Pablo; Pastor, Xavier

    2016-04-01

    To design a new semantically interoperable clinical repository, based on ontologies, conforming to CEN/ISO 13606 standard. The approach followed is to extend OntoCRF, a framework for the development of clinical repositories based on ontologies. The meta-model of OntoCRF has been extended by incorporating an OWL model integrating CEN/ISO 13606, ISO 21090 and SNOMED CT structure. This approach has demonstrated a complete evaluation cycle involving the creation of the meta-model in OWL format, the creation of a simple test application, and the communication of standardized extracts to another organization. Using a CEN/ISO 13606 based system, an indefinite number of archetypes can be merged (and reused) to build new applications. Our approach, based on the use of ontologies, maintains data storage independent of content specification. With this approach, relational technology can be used for storage, maintaining extensibility capabilities. The present work demonstrates that it is possible to build a native CEN/ISO 13606 repository for the storage of clinical data. We have demonstrated semantic interoperability of clinical information using CEN/ISO 13606 extracts. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. A comparative analysis of meta-heuristic methods for power management of a dual energy storage system for electric vehicles

    International Nuclear Information System (INIS)

    Trovão, João P.; Antunes, Carlos Henggeler

    2015-01-01

    Highlights: • Two meta-heuristic approaches are evaluated for multi-ESS management in electric vehicles. • An online global energy management strategy with two different layers is studied. • Meta-heuristic techniques are used to define optimized energy sharing mechanisms. • A comparative analysis for ARTEMIS driving cycle is addressed. • The effectiveness of the double-layer management with meta-heuristic is presented. - Abstract: This work is focused on the performance evaluation of two meta-heuristic approaches, simulated annealing and particle swarm optimization, to deal with power management of a dual energy storage system for electric vehicles. The proposed strategy is based on a global energy management system with two layers: long-term (energy) and short-term (power) management. A rule-based system deals with the long-term (strategic) layer and for the short-term (action) layer meta-heuristic techniques are developed to define optimized online energy sharing mechanisms. Simulations have been made for several driving cycles to validate the proposed strategy. A comparative analysis for ARTEMIS driving cycle is presented evaluating three performance indicators (computation time, final value of battery state of charge, and minimum value of supercapacitors state of charge) as a function of input parameters. The results show the effectiveness of an implementation based on a double-layer management system using meta-heuristic methods for online power management supported by a rule set that restricts the search space
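
    A minimal sketch of how a simulated annealing step could choose the battery/supercapacitor power split at one time instant, in the spirit of the short-term (action) layer described above. The cost function, parameters, and limits are invented for illustration and are not the authors' model.

```python
import math, random

def cost(share, p_request, soc_sc):
    """Toy cost: penalize high battery power (stress) and supercapacitor
    depletion; `share` is the fraction of the requested power sent to the battery."""
    p_batt = share * p_request
    p_sc = (1 - share) * p_request
    return (p_batt / 1000.0) ** 2 + max(0.0, 0.4 - (soc_sc - p_sc / 5000.0)) * 10

def simulated_annealing(p_request, soc_sc, t0=1.0, alpha=0.95, steps=200):
    random.seed(0)
    share, best, t = 0.5, 0.5, t0
    for _ in range(steps):
        cand = min(1.0, max(0.0, share + random.uniform(-0.1, 0.1)))
        d = cost(cand, p_request, soc_sc) - cost(share, p_request, soc_sc)
        if d < 0 or random.random() < math.exp(-d / t):      # accept better or, sometimes, worse
            share = cand
            if cost(share, p_request, soc_sc) < cost(best, p_request, soc_sc):
                best = share
        t *= alpha                                            # cool down
    return best   # optimized battery share for this time step

print(simulated_annealing(p_request=8000.0, soc_sc=0.8))
```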

  8. VOXEL-BASED APPROACH FOR ESTIMATING URBAN TREE VOLUME FROM TERRESTRIAL LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    C. Vonderach

    2012-07-01

    Full Text Available The importance of single trees and the determination of related parameters has been recognized in recent years, e.g. for forest inventories or management. For urban areas an increasing interest in the data acquisition of trees can be observed concerning aspects like urban climate, CO2 balance, and environmental protection. Urban trees differ significantly from natural systems with regard to the site conditions (e.g. technogenic soils, contaminants, lower groundwater level, regular disturbance), climate (increased temperature, reduced humidity) and species composition and arrangement (habitus and health status), and therefore allometric relations cannot be transferred from natural sites to urban areas. To overcome this problem an extended approach was developed for a fast and non-destructive extraction of branch volume, DBH (diameter at breast height) and height of single trees from point clouds of terrestrial laser scanning (TLS). For data acquisition, the trees were scanned with the highest scan resolution from several (up to five) positions located around the tree. The resulting point clouds (20 to 60 million points) are analysed with an algorithm based on a voxel (volume element) structure, leading to an appropriate data reduction. In a first step, two kinds of noise reduction are carried out: the elimination of isolated voxels as well as voxels with marginal point density. To obtain correct volume estimates, the voxels inside the stem and branches (interior voxels, where voxels contain no laser points) must be regarded. For this filling process, an easy and robust approach was developed based on a layer-wise (horizontal layers of the voxel structure) intersection of four orthogonal viewing directions. However, this procedure also generates several erroneous "phantom" voxels, which have to be eliminated. For this purpose the previous approach was extended by a special region growing algorithm. In a final step the volume is determined layer-wise based on the
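
    A simplified sketch of the voxel pipeline described above (voxelization, layer-wise interior filling, layer-wise volume summation). For brevity it intersects only two instead of four viewing directions, omits the noise reduction and region growing steps, and uses random points as a stand-in for a TLS point cloud; the voxel size is arbitrary.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Map a point cloud (N x 3 array, metres) onto a boolean occupancy grid."""
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def fill_interior(grid):
    """Layer-wise filling: mark a voxel as interior if it lies between occupied
    voxels along both the x and the y viewing directions (a simplified stand-in
    for the four-direction intersection described above)."""
    filled = grid.copy()
    for z in range(grid.shape[2]):
        layer = grid[:, :, z]
        between_x = np.zeros_like(layer)
        between_y = np.zeros_like(layer)
        for i in range(layer.shape[0]):
            cols = np.where(layer[i])[0]
            if cols.size >= 2:
                between_x[i, cols.min():cols.max() + 1] = True
        for j in range(layer.shape[1]):
            rows = np.where(layer[:, j])[0]
            if rows.size >= 2:
                between_y[rows.min():rows.max() + 1, j] = True
        filled[:, :, z] |= between_x & between_y
    return filled

def volume(grid, voxel_size=0.05):
    return grid.sum() * voxel_size ** 3   # m^3, summed layer by layer

pts = np.random.rand(20000, 3) * [0.4, 0.4, 10.0]   # stand-in for a scanned stem
print(volume(fill_interior(voxelize(pts))))
```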

  9. A meta-analysis based method for prioritizing candidate genes involved in a pre-specific function

    Directory of Open Access Journals (Sweden)

    Jingjing Zhai

    2016-12-01

    Full Text Available The identification of genes associated with a given biological function in plants remains a challenge, although network-based gene prioritization algorithms have been developed for Arabidopsis thaliana and many non-model plant species. Nevertheless, these network-based gene prioritization algorithms have encountered several problems; one in particular is that of unsatisfactory prediction accuracy due to limited network coverage, varying link quality, and/or uncertain network connectivity. Thus a model that integrates complementary biological data may be expected to increase the prediction accuracy of gene prioritization. Towards this goal, we developed a novel gene prioritization method named RafSee, to rank candidate genes using a random forest algorithm that integrates sequence, evolutionary, and epigenetic features of plants. Subsequently, we proposed an integrative approach named RAP (Rank Aggregation-based data fusion for gene Prioritization), in which an order statistics-based meta-analysis was used to aggregate the rankings of the network-based gene prioritization method and RafSee, for accurately prioritizing candidate genes involved in a pre-specific biological function. Finally, we showcased the utility of RAP by prioritizing 380 flowering-time genes in Arabidopsis. The ‘leave-one-out’ cross-validation experiment showed that RafSee could work as a complement to a current state-of-the-art network-based gene prioritization system (AraNet v2). Moreover, RAP ranked 53.68% (204/380) of flowering-time genes higher than AraNet v2, resulting in a 39.46% improvement in terms of the first-quartile rank. Further evaluations also showed that RAP was effective in prioritizing genes related to different abiotic stresses. To enhance the usability of RAP for Arabidopsis and non-model plant species, an R package implementing the method is freely available at http://bioinfo.nwafu.edu.cn/software.
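
    A minimal sketch of order-statistics-based rank aggregation of two prioritizers, in the spirit of RAP's fusion step though not necessarily its exact formulation. It uses the beta-distribution order-statistic score familiar from robust rank aggregation; the example ranks are hypothetical.

```python
import numpy as np
from scipy.stats import beta

def rho_score(normalized_ranks):
    """Order-statistic score for one gene: probability that the j-th smallest
    normalized rank would be this small if all rankers were random (uniform),
    minimized over j (the rho score used in robust rank aggregation)."""
    r = np.sort(np.asarray(normalized_ranks, dtype=float))
    k = len(r)
    p = [beta.cdf(r[j], j + 1, k - j) for j in range(k)]   # j-th order stat ~ Beta(j, k-j+1)
    return min(p)

# Hypothetical normalized ranks (rank / number of genes) of one candidate gene
# from two rankers, e.g. a network-based prioritizer and a random-forest score.
print(rho_score([0.01, 0.05]))   # small value -> gene ranks consistently high
```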

  10. The Role of Sexual Orientation in School-Based Victimization: A Meta-Analysis

    Science.gov (United States)

    Toomey, Russell B.; Russell, Stephen T.

    2016-01-01

    School-based victimization is associated with poorer developmental, academic, and health outcomes. This meta-analytic review compared the mean levels of school-based victimization experienced by sexual minority youth to those of heterosexual youth, and examined moderators of this difference. Results from 18 independent studies (N = 56,752…

  11. Meta-analysis of single-arm survival studies: a distribution-free approach for estimating summary survival curves with random effects.

    Science.gov (United States)

    Combescure, Christophe; Foucher, Yohann; Jackson, Daniel

    2014-07-10

    In epidemiologic studies and clinical trials with time-dependent outcome (for instance death or disease progression), survival curves are used to describe the risk of the event over time. In meta-analyses of studies reporting a survival curve, the most informative finding is a summary survival curve. In this paper, we propose a method to obtain a distribution-free summary survival curve by expanding the product-limit estimator of survival for aggregated survival data. The extension of DerSimonian and Laird's methodology for multiple outcomes is applied to account for the between-study heterogeneity. Statistics I(2)  and H(2) are used to quantify the impact of the heterogeneity in the published survival curves. A statistical test for between-strata comparison is proposed, with the aim to explore study-level factors potentially associated with survival. The performance of the proposed approach is evaluated in a simulation study. Our approach is also applied to synthesize the survival of untreated patients with hepatocellular carcinoma from aggregate data of 27 studies and synthesize the graft survival of kidney transplant recipients from individual data from six hospitals. Copyright © 2014 John Wiley & Sons, Ltd.
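
    The pooling and heterogeneity machinery referred to here (DerSimonian and Laird estimator, I² and H² statistics) can be sketched in a few lines. The example below pools hypothetical study-level survival probabilities at a single time point on the complementary log-log scale; this is a simplification for illustration, not the authors' product-limit expansion.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects pooling of study estimates y with
    within-study variances v; returns pooled estimate, tau^2, I^2, H^2."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - fixed) ** 2)
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    H2 = Q / df if df > 0 else np.nan
    I2 = max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    return pooled, tau2, I2, H2

# Hypothetical 1-year survival probabilities from four studies,
# pooled on the complementary log-log scale.
S, var = np.array([0.62, 0.55, 0.70, 0.58]), np.array([0.004, 0.006, 0.003, 0.005])
y = np.log(-np.log(S))                        # cloglog transform
v = var / (S * np.log(S)) ** 2                # delta-method variance on that scale
pooled, tau2, I2, H2 = dersimonian_laird(y, v)
print(np.exp(-np.exp(pooled)), tau2, I2, H2)  # back-transformed pooled survival
```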

  12. Meta Analysis of Gene Expression Data within and Across Species.

    Science.gov (United States)

    Fierro, Ana C; Vandenbussche, Filip; Engelen, Kristof; Van de Peer, Yves; Marchal, Kathleen

    2008-12-01

    Since the second half of the 1990s, a large number of genome-wide analyses have been described that study gene expression at the transcript level. To this end, two major strategies have been adopted, a first one relying on hybridization techniques such as microarrays, and a second one based on sequencing techniques such as serial analysis of gene expression (SAGE), cDNA-AFLP, and analysis based on expressed sequence tags (ESTs). Despite both types of profiling experiments becoming routine techniques in many research groups, their application remains costly and laborious. As a result, the number of conditions profiled in individual studies is still relatively small and usually varies from only two to few hundreds of samples for the largest experiments. More and more, scientific journals require the deposit of these high throughput experiments in public databases upon publication. Mining the information present in these databases offers molecular biologists the possibility to view their own small-scale analysis in the light of what is already available. However, so far, the richness of the public information remains largely unexploited. Several obstacles such as the correct association between ESTs and microarray probes with the corresponding gene transcript, the incompleteness and inconsistency in the annotation of experimental conditions, and the lack of standardized experimental protocols to generate gene expression data, all impede the successful mining of these data. Here, we review the potential and difficulties of combining publicly available expression data from respectively EST analyses and microarray experiments. With examples from literature, we show how meta-analysis of expression profiling experiments can be used to study expression behavior in a single organism or between organisms, across a wide range of experimental conditions. We also provide an overview of the methods and tools that can aid molecular biologists in exploiting these public data.

  13. A Novel Approach to Asynchronous MVP Data Interpretation Based on Elliptical-Vectors

    Science.gov (United States)

    Kruglyakov, M.; Trofimov, I.; Korotaev, S.; Shneyer, V.; Popova, I.; Orekhova, D.; Scshors, Y.; Zhdanov, M. S.

    2014-12-01

    We suggest a novel approach to asynchronous magnetic-variation profiling (MVP) data interpretation. The standard method in MVP is based on the interpretation of the coefficients of the linear relation between the vertical and horizontal components of the measured magnetic field. From a mathematical point of view, this pair of linear coefficients is not a vector, which leads to significant difficulties in asynchronous data interpretation. Our approach allows us to treat such a pair of complex numbers as a special vector called an ellipse-vector (EV). By choosing particular definitions of complex length and direction, the basic relation of MVP can be considered as a dot product. This considerably simplifies the interpretation of asynchronous data. The EV is described by four real numbers: the values of the major and minor semiaxes, the angular direction of the major semiaxis, and the phase. The notation choice is motivated by historical reasons. It is important that the different EV components have different sensitivity with respect to the field sources and the local heterogeneities. Namely, the value of the major semiaxis and the angular direction are mostly determined by the field source and the normal cross-section. On the other hand, the value of the minor semiaxis and the phase are responsive to local heterogeneities. Since the EV is the general form of a complex vector, the traditional Schmucker vectors can be explicitly expressed through its components. The proposed approach was successfully applied to the interpretation of asynchronous measurements obtained in the Arctic Ocean at the drift stations "North Pole" in 1962-1976.
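
    The ellipse-vector definitions above are the authors' own construction. Purely as an illustrative aid, the sketch below shows one standard way to extract the four real parameters (major and minor semiaxes, angular direction, phase) of the ellipse traced by a pair of complex transfer-function coefficients, via a singular value decomposition; the numeric coefficients are made up.

```python
import numpy as np

def ellipse_parameters(a, b):
    """Parameters of the ellipse traced by v(t) = Re[(a, b) * exp(i*w*t)]:
    one conventional way to turn a pair of complex coefficients into an
    object described by four real numbers."""
    x = np.array([a.real, b.real])
    y = np.array([a.imag, b.imag])
    M = np.column_stack([x, -y])             # v(t) = M @ [cos wt, sin wt]
    U, s, Vh = np.linalg.svd(M)
    major, minor = s[0], s[1]                # semiaxes are the singular values
    direction = np.degrees(np.arctan2(U[1, 0], U[0, 0]))   # of the major semiaxis
    phase = np.degrees(np.arctan2(Vh[0, 1], Vh[0, 0]))     # wt at which v reaches it
    return major, minor, direction, phase

# Illustrative pair of complex coefficients (e.g. tipper-like components).
print(ellipse_parameters(0.30 + 0.10j, -0.20 + 0.05j))
```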

  14. Vitamin D levels do not predict the stage of hepatic fibrosis in patients with non-alcoholic fatty liver disease: A PRISMA compliant systematic review and meta-analysis of pooled data.

    Science.gov (United States)

    Saberi, Behnam; Dadabhai, Alia S; Nanavati, Julie; Wang, Lin; Shinohara, Russell T; Mullin, Gerard E

    2018-01-27

    To investigate the relationship between 25-hydroxyvitamin D [25(OH)D] levels and fibrosis stage in patients with non-alcoholic fatty liver disease (NAFLD). Two individual reviewers identified relevant studies using the PubMed, EMBASE, Cochrane, and Scopus databases. Inclusion criteria were as follows: (1) studies that evaluated adults with NAFLD and serum or plasma 25(OH)D levels; and (2) studies that assessed fibrosis stage using liver biopsy. A rigorous analysis yielded six articles as having sufficient data to employ in evaluating the association of serum vitamin D levels in patients with NAFLD based on their liver fibrosis stage by histopathological analysis. The lead investigators of each of the six studies were contacted and the data were collected. To meta-analyze vitamin D levels in F0-F2 vs F3-F4 fibrosis, a random-effects meta-analysis fit using restricted maximum likelihood was applied. To examine trends across each stage of fibrosis with respect to vitamin D levels, a meta-regression was performed. First, the investigators performed a meta-analysis to compare serum vitamin D levels in patients with NAFLD with stage F0-F2 to those with stage F3-F4, which did not show significance [meta-estimate of the pooled mean difference = -0.86, P = 0.08 (-4.17, 2.46)]. A meta-regression evaluation of serum 25(OH)D levels across the individual stages (F0-F4) of fibrosis did not show an association for the six included studies. Low vitamin D status is not associated with higher stages of liver fibrosis in patients with NAFLD.

  15. Reconciling evidence-based medicine and precision medicine in the era of big data: challenges and opportunities.

    Science.gov (United States)

    Beckmann, Jacques S; Lew, Daniel

    2016-12-19

    This era of groundbreaking scientific developments in high-resolution, high-throughput technologies is allowing the cost-effective collection and analysis of huge, disparate datasets on individual health. Proper data mining and translation of the vast datasets into clinically actionable knowledge will require the application of clinical bioinformatics. These developments have triggered multiple national initiatives in precision medicine-a data-driven approach centering on the individual. However, clinical implementation of precision medicine poses numerous challenges. Foremost, precision medicine needs to be contrasted with the powerful and widely used practice of evidence-based medicine, which is informed by meta-analyses or group-centered studies from which mean recommendations are derived. This "one size fits all" approach can provide inadequate solutions for outliers. Such outliers, which are far from an oddity as all of us fall into this category for some traits, can be better managed using precision medicine. Here, we argue that it is necessary and possible to bridge between precision medicine and evidence-based medicine. This will require worldwide and responsible data sharing, as well as regularly updated training programs. We also discuss the challenges and opportunities for achieving clinical utility in precision medicine. We project that, through collection, analyses and sharing of standardized medically relevant data globally, evidence-based precision medicine will shift progressively from therapy to prevention, thus leading eventually to improved, clinician-to-patient communication, citizen-centered healthcare and sustained well-being.

  16. A Feedback-Based Secure Path Approach for Wireless Sensor Network Data Collection

    Science.gov (United States)

    Mao, Yuxin; Wei, Guiyi

    2010-01-01

    The unattended nature of wireless sensor networks makes them very vulnerable to malicious attacks. Therefore, how to preserve secure data collection is an important issue to wireless sensor networks. In this paper, we propose a novel approach of secure data collection for wireless sensor networks. We explore secret sharing and multipath routing to achieve secure data collection in wireless sensor network with compromised nodes. We present a novel tracing-feedback mechanism, which makes full use of the routing functionality of wireless sensor networks, to improve the quality of data collection. The major advantage of the approach is that the secure paths are constructed as a by-product of data collection. The process of secure routing causes little overhead to the sensor nodes in the network. Compared with existing works, the algorithms of the proposed approach are easy to implement and execute in resource-constrained wireless sensor networks. According to the result of a simulation experiment, the performance of the approach is better than the recent approaches with a similar purpose. PMID:22163424

  17. A feedback-based secure path approach for wireless sensor network data collection.

    Science.gov (United States)

    Mao, Yuxin; Wei, Guiyi

    2010-01-01

    The unattended nature of wireless sensor networks makes them very vulnerable to malicious attacks. Therefore, how to preserve secure data collection is an important issue to wireless sensor networks. In this paper, we propose a novel approach of secure data collection for wireless sensor networks. We explore secret sharing and multipath routing to achieve secure data collection in wireless sensor network with compromised nodes. We present a novel tracing-feedback mechanism, which makes full use of the routing functionality of wireless sensor networks, to improve the quality of data collection. The major advantage of the approach is that the secure paths are constructed as a by-product of data collection. The process of secure routing causes little overhead to the sensor nodes in the network. Compared with existing works, the algorithms of the proposed approach are easy to implement and execute in resource-constrained wireless sensor networks. According to the result of a simulation experiment, the performance of the approach is better than the recent approaches with a similar purpose.

  18. A Feedback-Based Secure Path Approach for Wireless Sensor Network Data Collection

    Directory of Open Access Journals (Sweden)

    Guiyi Wei

    2010-10-01

    Full Text Available The unattended nature of wireless sensor networks makes them very vulnerable to malicious attacks. Therefore, how to preserve secure data collection is an important issue to wireless sensor networks. In this paper, we propose a novel approach of secure data collection for wireless sensor networks. We explore secret sharing and multipath routing to achieve secure data collection in wireless sensor network with compromised nodes. We present a novel tracing-feedback mechanism, which makes full use of the routing functionality of wireless sensor networks, to improve the quality of data collection. The major advantage of the approach is that the secure paths are constructed as a by-product of data collection. The process of secure routing causes little overhead to the sensor nodes in the network. Compared with existing works, the algorithms of the proposed approach are easy to implement and execute in resource-constrained wireless sensor networks. According to the result of a simulation experiment, the performance of the approach is better than the recent approaches with a similar purpose.
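
    As a simple illustration of the secret sharing plus multipath routing idea described above, the sketch below splits a packet into XOR-based n-of-n shares, one per disjoint path, so that a single compromised node or path cannot recover the data. This is an illustration only, not the authors' protocol, and it omits the tracing-feedback mechanism entirely.

```python
import os

def xor_bytes(chunks, length):
    """XOR an arbitrary number of equal-length byte strings together."""
    out = bytearray(length)
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def split_shares(data: bytes, n_paths: int):
    """XOR-based n-of-n secret sharing: the packet is only recoverable
    when ALL shares (one per node-disjoint routing path) reach the sink."""
    random_shares = [os.urandom(len(data)) for _ in range(n_paths - 1)]
    last_share = xor_bytes(random_shares + [data], len(data))
    return random_shares + [last_share]

def reconstruct(shares):
    return xor_bytes(shares, len(shares[0]))

packet = b"sensor reading: 21.7 C"
shares = split_shares(packet, n_paths=3)   # send each share along its own path
assert reconstruct(shares) == packet       # a single intercepted share reveals nothing
```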

  19. Formalizing the definition of meta-analysis in Molecular Ecology.

    Science.gov (United States)

    ArchMiller, Althea A; Bauer, Eric F; Koch, Rebecca E; Wijayawardena, Bhagya K; Anil, Ammu; Kottwitz, Jack J; Munsterman, Amelia S; Wilson, Alan E

    2015-08-01

    Meta-analysis, the statistical synthesis of pertinent literature to develop evidence-based conclusions, is relatively new to the field of molecular ecology, with the first meta-analysis published in the journal Molecular Ecology in 2003 (Slate & Phua 2003). The goal of this article is to formalize the definition of meta-analysis for the authors, editors, reviewers and readers of Molecular Ecology by completing a review of the meta-analyses previously published in this journal. We also provide a brief overview of the many components required for meta-analysis with a more specific discussion of the issues related to the field of molecular ecology, including the use and statistical considerations of Wright's FST and its related analogues as effect sizes in meta-analysis. We performed a literature review to identify articles published as 'meta-analyses' in Molecular Ecology, which were then evaluated by at least two reviewers. We specifically targeted Molecular Ecology publications because as a flagship journal in this field, meta-analyses published in Molecular Ecology have the potential to set the standard for meta-analyses in other journals. We found that while many of these reviewed articles were strong meta-analyses, others failed to follow standard meta-analytical techniques. One of these unsatisfactory meta-analyses was in fact a secondary analysis. Other studies attempted meta-analyses but lacked the fundamental statistics that are considered necessary for an effective and powerful meta-analysis. By drawing attention to the inconsistency of studies labelled as meta-analyses, we emphasize the importance of understanding the components of traditional meta-analyses to fully embrace the strengths of quantitative data synthesis in the field of molecular ecology. © 2015 John Wiley & Sons Ltd.

  20. Does Surgical Approach Affect Outcomes in Total Hip Arthroplasty Through 90 Days of Follow-Up? A Systematic Review With Meta-Analysis.

    Science.gov (United States)

    Miller, Larry E; Gondusky, Joseph S; Bhattacharyya, Samir; Kamath, Atul F; Boettner, Friedrich; Wright, John

    2018-04-01

    The choice between anterior approach (AA) and posterior approach (PA) in primary total hip arthroplasty (THA) is controversial. Previous reviews have predominantly relied on data from retrospective studies. This systematic review included prospective studies comparing postoperative outcomes through 90 days of AA vs PA in primary THA. Outcomes were pain severity, narcotic usage, hip function using the Harris Hip Score, and complications. Random effects meta-analysis was performed for all outcomes. Efficacy data were reported as standardized mean difference (SMD) where values of 0.2, 0.5, 0.8, and 1.0 were defined as small, medium, large, and very large effect sizes, respectively. Complications were reported as the absolute risk difference (RD) where a positive value implied higher risk with AA and a negative value implied lower risk with AA. A total of 13 prospective comparative studies (7 randomized) with patients treated with AA (n = 524) or PA (n = 520) were included. The AA was associated with lower pain severity (SMD = -0.37), lower narcotic usage (SMD = -0.36, P = .002), and improved hip function (SMD = 0.31, P = .002) compared to PA. No differences between surgical approaches were observed for dislocation (RD = 0.2%, P = .87), fracture (RD = 0.2%, P = .87), hematoma (RD = 0%, P = .99), infection (RD = 0.2%, P = .85), thromboembolic event (RD = -0.9%, P = .42), or reoperation (RD = 1.3%, P = .26). Conclusions of this study were unchanged when subjected to sensitivity analyses. In this systematic review and meta-analysis of prospective studies comparing postoperative outcomes through 90 days of AA vs PA in primary THA, patients treated with AA reported less pain, consumed fewer narcotics, and reported better hip function. No statistical differences in complication rates were detected between AA and PA. Ultimately, the choice of surgical approach in primary THA should consider preference and experience of the surgeon as well as preference and anatomy of the patient

  1. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    Science.gov (United States)

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps:1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop.2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared. The recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes.3. Case vignettes along with the respective guideline recommendations were validated and specifically commented on by an external panel of practicing general practitioners (GPs).4. Guideline recommendations and experts' opinions were summarised as case specific management recommendations (N-of-one guidelines).5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews.6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements. These elements were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter of a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and hereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta

  2. Three-dimensional fusion of spaceborne and ground radar reflectivity data using a neural network-based approach

    Science.gov (United States)

    Kou, Leilei; Wang, Zhuihui; Xu, Fen

    2018-03-01

    The spaceborne precipitation radar onboard the Tropical Rainfall Measuring Mission satellite (TRMM PR) can provide good measurement of the vertical structure of reflectivity, while ground radar (GR) has a relatively high horizontal resolution and greater sensitivity. Fusion of TRMM PR and GR reflectivity data may maximize the advantages from both instruments. In this paper, TRMM PR and GR reflectivity data are fused using a neural network (NN)-based approach. The main steps included are: quality control of TRMM PR and GR reflectivity data; spatiotemporal matchup; GR calibration bias correction; conversion of TRMM PR data from Ku to S band; fusion of TRMM PR and GR reflectivity data with an NN method; interpolation of reflectivity data that are below PR's sensitivity; blind areas compensation with a distance weighting-based merging approach; combination of three types of data: data with the NN method, data below PR's sensitivity and data within compensated blind areas. During the NN fusion step, the TRMM PR data are taken as targets of the training NNs, and gridded GR data after horizontal downsampling at different heights are used as the input. The trained NNs are then used to obtain 3D high-resolution reflectivity from the original GR gridded data. After 3D fusion of the TRMM PR and GR reflectivity data, a more complete and finer-scale 3D radar reflectivity dataset incorporating characteristics from both the TRMM PR and GR observations can be obtained. The fused reflectivity data are evaluated based on a convective precipitation event through comparison with the high resolution TRMM PR and GR data with an interpolation algorithm.
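
    The fusion step described above trains neural networks with downsampled GR columns as input and TRMM PR reflectivity as the target. The sketch below mirrors that setup with scikit-learn's MLPRegressor on synthetic placeholder arrays; the array shapes, hidden-layer sizes, and the synthetic target are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder arrays standing in for the matched-up data: each row is one
# horizontal grid cell, columns are GR reflectivity (dBZ) at several heights
# after downsampling, and the target is the collocated PR reflectivity.
rng = np.random.default_rng(0)
gr_columns = rng.uniform(10, 45, size=(5000, 8))                       # GR profile features
pr_reflectivity = gr_columns.mean(axis=1) + rng.normal(0, 1.5, 5000)   # synthetic target

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(gr_columns, pr_reflectivity)             # train on matched PR/GR pairs

# Apply the trained network to full-resolution GR grids to obtain the fused field.
fused = model.predict(rng.uniform(10, 45, size=(100, 8)))
print(fused[:5])
```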

  3. PSA data base, comparison of the German and French approach

    International Nuclear Information System (INIS)

    Kreuser, A.; Tirira, J.

    2001-01-01

    The results of probabilistic safety assessments (PSA) of nuclear power plants strongly depend on the reliability data used. This report broadly describes the general process of generating reliability data for components and summarizes the differences between the German and French approaches. As has been shown in former studies comparing international PSA data, PSA data are closely related to the model definitions of the PSA. Therefore, individual PSA data cannot be compared directly without reference to, e.g., the corresponding fault trees. These findings are confirmed by this study. The comparison of German and French methods reveals many differences concerning various details of the data generation process. Some differences between individual reliability data should disappear when the complete fault tree analysis is taken into account. But there are some other differences which have a direct impact on the obtained results of a PSA. In view of all the differences between both approaches concerning the definition of data and the data collection process, it is not possible to compare German and French PSA data directly. However, the database differences alone give no indication of their influence on the PSA results. Therefore, there is a need to perform a common IPSN/GRS assessment of how the different databases impact the PSA results. (orig.)

  4. Postoperative glaucoma following infantile cataract surgery: an individual patient data meta-analysis.

    Science.gov (United States)

    Mataftsi, Asimina; Haidich, Anna-Bettina; Kokkali, Stamatia; Rabiah, Peter K; Birch, Eileen; Stager, David R; Cheong-Leen, Richard; Singh, Vineet; Egbert, James E; Astle, William F; Lambert, Scott R; Amitabh, Purohit; Khan, Arif O; Grigg, John; Arvanitidou, Malamatenia; Dimitrakos, Stavros A; Nischal, Ken K

    2014-09-01

    Infantile cataract surgery bears a significant risk for postoperative glaucoma, and no consensus exists on factors that may reduce this risk. To assess the effect of primary intraocular lens implantation and timing of surgery on the incidence of postoperative glaucoma. We searched multiple databases to July 14, 2013, to identify studies with eligible patients, including PubMed, MEDLINE, EMBASE, ISI Web of Science, Scopus, Central, Google Scholar, Intute, and Tripdata. We also searched abstracts of ophthalmology society meetings. We included studies reporting on postoperative glaucoma in infants undergoing cataract surgery with regular follow-up for at least 1 year. Infants with concurrent ocular anomalies were excluded. Authors of eligible studies were invited to contribute individual patient data on infants who met the inclusion criteria. We also performed an aggregate data meta-analysis of published studies that did not contribute to the individual patient data. Data were pooled using a random-effects model. Time to glaucoma with the effect of primary implantation, additional postoperative intraocular procedures, and age at surgery. Seven centers contributed individual patient data on 470 infants with a median age at surgery of 3.0 months and median follow-up of 6.0 years. Eighty patients (17.0%) developed glaucoma at a median follow-up of 4.3 years. Only 2 of these patients had a pseudophakic eye. The risk for postoperative glaucoma appeared to be lower after primary implantation (hazard ratio [HR], 0.10 [95% CI, 0.01-0.70]; P = .02; I(2) = 34%), higher after surgery at 4 weeks or younger (HR, 2.10 [95% CI, 1.14-3.84]; P = .02; I(2) = 0%), and higher after additional procedures (HR, 2.52 [95% CI, 1.11-5.72]; P = .03; I(2) = 32%). In multivariable analysis, additional procedures independently increased the risk for glaucoma (HR, 2.25 [95% CI, 1.20-4.21]; P = .01), and primary implantation independently reduced it (HR, 0.10 [95% CI, 0.01-0.76]; P =

  5. Sexual minority youth and depressive symptoms or depressive disorder: A systematic review and meta-analysis of population-based studies.

    Science.gov (United States)

    Lucassen, Mathijs Fg; Stasiak, Karolina; Samra, Rajvinder; Frampton, Christopher Ma; Merry, Sally N

    2017-08-01

    Research has suggested that sexual minority young people are more likely to have depressive symptoms or depressive disorder, but to date most studies in the field have relied on convenience-based samples. This study overcomes this limitation by systematically reviewing the literature from population-based studies and conducting a meta-analysis to identify whether depressive disorder and depressive symptoms are elevated in sexual minority youth. A systematic review and meta-analysis were conducted and informed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement to determine if rates of depressive symptoms or depressive disorder differ for sexual minority youth, relative to heterosexual adolescents. MEDLINE, PsycINFO, EMBASE and ERIC databases were searched. Studies reporting depressive symptom data or the prevalence of depressive disorder in population-based samples of adolescents, which included sexual minority youth and heterosexual young people, were included in the review. A meta-analysis was conducted to examine differences between groups. Twenty-three articles met the inclusion criteria. The proportion of sexual minority youth in the studies ranged from 2.3% to 12%. Sexual minority youth reported higher rates of depressive symptoms and depressive disorder (odds ratio = 2.94). Female sexual minority youth reported higher rates of depressive symptoms when compared to male sexual minority youth (standardized mean difference, d = 0.34), regardless of how depressive symptoms or depressive disorder was measured. There is robust evidence that rates of depressive disorder and depressive symptoms are elevated in sexual minority youth in comparison to heterosexual young people. Despite the elevated risk of depressive symptoms or depressive disorder for sexual minority youth, the treatment for this group of young people has received little attention.

  6. Health-Related Lifestyle Factors and Sexual Dysfunction: A Meta-Analysis of Population-Based Research.

    Science.gov (United States)

    Allen, Mark S; Walter, Emma E

    2018-04-01

    Sexual dysfunction is a common problem among men and women and is associated with negative individual functioning, relationship difficulties, and lower quality of life. To determine the magnitude of associations between 6 health-related lifestyle factors (cigarette smoking, alcohol intake, physical activity, diet, caffeine, and cannabis use) and 3 common sexual dysfunctions (erectile dysfunction, premature ejaculation, and female sexual dysfunction). A comprehensive literature search of 10 electronic databases identified 89 studies that met the inclusion criteria (452 effect sizes; N = 348,865). Pooled mean effects (for univariate, age-adjusted, and multivariable-adjusted estimates) were computed using inverse-variance weighted random-effects meta-analysis and moderation by study and population characteristics were tested using random-effects meta-regression. Mean effect sizes from 92 separate meta-analyses provided evidence that health-related lifestyle factors are important for sexual dysfunction. Cigarette smoking (past and current), alcohol intake, and physical activity had dose-dependent associations with erectile dysfunction. Risk of erectile dysfunction increased with greater cigarette smoking and decreased with greater physical activity. Alcohol had a curvilinear association such that moderate intake was associated with a lower risk of erectile dysfunction. Participation in physical activity was associated with a lower risk of female sexual dysfunction. There was some evidence that a healthy diet was related to a lower risk of erectile dysfunction and female sexual dysfunction, and caffeine intake was unrelated to erectile dysfunction. Publication bias appeared minimal and findings were similar for clinical and non-clinical samples. Modification of lifestyle factors would appear to be a useful low-risk approach to decreasing the risk of erectile dysfunction and female sexual dysfunction. Strengths include the testing of age-adjusted and multivariable

  7. Risk of fracture with thiazolidinediones: an individual patient data meta-analysis

    Directory of Open Access Journals (Sweden)

    Marloes T Bazelier

    2013-02-01

    Full Text Available Background: The use of thiazolidinediones (TZDs) has been associated with increased fracture risks. Our aim was to estimate the risk of fracture with TZDs in three different healthcare registries, using exactly the same study design, and to perform an individual patient data meta-analysis of these three studies. Methods: Population-based cohort studies were performed utilizing the British General Practice Research Database (GPRD), the Dutch PHARMO Record Linkage System, and the Danish National Health Registers. In all three databases, the exposed cohort consisted of all patients (aged 18+) with at least one prescription of antidiabetic (AD) medication. Cox proportional hazards models were used to estimate hazard ratios (HRs) of fracture. The total period of follow-up for each patient was divided into periods of current exposure and past exposure, with patients moving between current and past use. Results: In all three registries, the risk of fracture was increased for women who were exposed to TZDs: HR 1.48 [1.37-1.60] in GPRD, HR 1.35 [1.15-1.58] in PHARMO and HR 1.22 [1.03-1.44] in Denmark. Combining the data in an individual patient data meta-analysis resulted, for women, in a 1.4-fold increased risk of any fracture for current TZD users versus other AD drug users (adj. HR 1.44 [1.35-1.53]). For men, there was no increased fracture risk (adj. HR 1.05 [0.96-1.14]). Risks were increased for fractures of the radius/ulna, humerus, tibia/fibula, ankle and foot, but not for hip/femur or vertebral fractures. Current TZD users with more than 25 TZD prescriptions ever before had a 1.6-fold increased risk of fracture compared with other AD drug users (HR 1.59 [1.46-1.74]). Conclusion: In this study, we consistently found a 1.2- to 1.5-fold increased risk of fractures for women using TZDs, but not for men, across three different healthcare registries. TZD users had an increased risk for fractures of the extremities, and risks further increased for prolonged users.

  8. [Effectiveness of acupuncture in postoperative ileus: a systematic review and Meta-analysis].

    Science.gov (United States)

    Cheong, Kah Bik; Zhang, Jiping; Huang, Yong

    2016-06-01

    To conduct a systematic review and Meta-analysis of the effectiveness of acupuncture and common acupoint selection for postoperative ileus (POI). Randomized controlled trials (RCTs) comparing acupuncture and non-acupuncture treatment were identified from the databases PubMed, Cochrane, EBSCO (Academic Source Premier and MEDLINE), Ovid (including Evidence-Based Medicine Reviews), China National Knowledge Infrastructure, and Wanfang Data. The data from eligible studies were extracted and a Meta-analysis was performed using a fixed-effects model. Results were expressed as relative risk (RR) for dichotomous data, and 95% CI (confidence intervals) were calculated. Each trial was evaluated using the CONSORT (Consolidated Standards of Reporting Trials) and STRICTA (STandards for Reporting Interventions in Controlled Trials of Acupuncture) guidelines. The quality of the studies was assessed using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach. Of the 69 studies screened, eight RCTs were included for review. Among these, four RCTs (with a total of 123 patients in the intervention groups and 124 patients in the control groups) met the criteria for Meta-analysis. The Meta-analysis results indicated that acupuncture combined with usual care showed a significantly higher total effective rate than the control condition (usual care) (RR 1.09, 95% CI 1.01, 1.18; P = 0.02). Zusanli (ST 36) and Shangjuxu (ST 37) were the most common acupoints selected. However, the quality of the studies was generally low, as they did not emphasize the use of blinding. The results suggested that acupuncture might be effective in improving POI; however, a definite conclusion could not be drawn because of the low quality of trials. Further large-scale, high-quality randomized clinical trials are needed to validate these findings and to develop a standardized method of treatment. We hope that the present results will lead to improved research, resulting in better
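
    As an illustration of the pooling model named in this record, here is a minimal Python sketch of a fixed-effects (inverse-variance) meta-analysis of relative risks for dichotomous outcome data. The trial counts are hypothetical and the function name is ours; the actual review may have used other software defaults (for example Mantel-Haenszel weighting).

        import numpy as np
        from scipy import stats

        def fixed_effect_rr(events_t, n_t, events_c, n_c):
            """Pool relative risks across trials with inverse-variance fixed-effect weights."""
            e_t, e_c = np.asarray(events_t, float), np.asarray(events_c, float)
            n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
            log_rr = np.log((e_t / n_t) / (e_c / n_c))
            var = 1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c   # variance of each log RR
            w = 1.0 / var
            pooled = np.sum(w * log_rr) / np.sum(w)
            se = np.sqrt(1.0 / np.sum(w))
            lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
            p = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
            return np.exp(pooled), np.exp(lo), np.exp(hi), p

        # hypothetical responder counts for four acupuncture-plus-usual-care trials
        rr, lo, hi, p = fixed_effect_rr([28, 30, 25, 27], [31, 33, 29, 30],
                                        [25, 27, 23, 24], [31, 32, 30, 31])
        print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), p = {p:.3f}")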

  9. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data

    International Nuclear Information System (INIS)

    Wang, Tingting; Cao, Lei; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2015-01-01

    Image enhancement of lung four-dimensional computed tomography (4D-CT) data is highly important because image resolution remains a crucial point in lung cancer radiotherapy. In this paper, we propose a method for lung 4D-CT super resolution (SR) by using an adaptive-patch-based projection onto convex sets (POCS) approach, which is in contrast with the global POCS SR algorithm, to recover fine details with fewer artifacts in images. The main contribution of this patch-based approach is that the interfering local structure from other phases can be rejected by employing a similar-patch adaptive selection strategy. The effectiveness of our approach is demonstrated through experiments on simulated images and real lung 4D-CT datasets. A comparison with previously published SR reconstruction methods highlights the favorable characteristics of the proposed method. (paper)

  10. Land cover classification of Landsat 8 satellite data based on Fuzzy Logic approach

    Science.gov (United States)

    Taufik, Afirah; Sakinah Syed Ahmad, Sharifah

    2016-06-01

    The aim of this paper is to propose a method to classify the land covers of a satellite image based on a fuzzy rule-based system approach. The study uses bands in Landsat 8 and other indices, such as the Normalized Difference Water Index (NDWI), Normalized Difference Built-up Index (NDBI) and Normalized Difference Vegetation Index (NDVI), as input for the fuzzy inference system. The three selected indices represent our main three classes called water, built-up land, and vegetation. The combination of the original multispectral bands and selected indices provides more information about the image. The parameter selection of fuzzy membership is performed by using a supervised method known as ANFIS (Adaptive neuro fuzzy inference system) training. The fuzzy system is tested for the classification on the land cover image that covers the Klang Valley area. The results showed that the fuzzy system approach is effective and can be explored and implemented for other areas using Landsat data.
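
    A minimal sketch of the index computation that feeds such a fuzzy inference system, assuming the usual Landsat 8 band assignments (band 3 = green, band 4 = red, band 5 = NIR, band 6 = SWIR1) and the McFeeters form of NDWI; the reflectance arrays are invented, and the fuzzy rule base itself is not shown.

        import numpy as np

        def spectral_indices(green, red, nir, swir1):
            """Compute NDWI, NDVI and NDBI from Landsat 8 surface reflectance bands."""
            eps = 1e-9                                   # avoid division by zero
            ndwi = (green - nir) / (green + nir + eps)   # water
            ndvi = (nir - red) / (nir + red + eps)       # vegetation
            ndbi = (swir1 - nir) / (swir1 + nir + eps)   # built-up land
            return ndwi, ndvi, ndbi

        # tiny hypothetical 2x2 reflectance patches (bands 3, 4, 5, 6)
        green = np.array([[0.10, 0.12], [0.08, 0.11]])
        red   = np.array([[0.09, 0.15], [0.07, 0.14]])
        nir   = np.array([[0.30, 0.20], [0.05, 0.35]])
        swir1 = np.array([[0.15, 0.25], [0.04, 0.18]])
        ndwi, ndvi, ndbi = spectral_indices(green, red, nir, swir1)
        print(ndwi, ndvi, ndbi, sep="\n")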

  11. Longitudinal Meta-analysis

    NARCIS (Netherlands)

    Hox, J.J.; Maas, C.J.M.; Lensvelt-Mulders, G.J.L.M.

    2004-01-01

    The goal of meta-analysis is to integrate the research results of a number of studies on a specific topic. Characteristic for meta-analysis is that in general only the summary statistics of the studies are used and not the original data. When the published research results to be integrated

  12. MacCormack's technique-based pressure reconstruction approach for PIV data in compressible flows with shocks

    Science.gov (United States)

    Liu, Shun; Xu, Jinglei; Yu, Kaikai

    2017-06-01

    This paper proposes an improved approach for extraction of pressure fields from velocity data, such as obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The principle of this approach is derived from Navier-Stokes equations, assuming adiabatic condition and neglecting viscosity of flow field boundaries measured by PIV. The computing method is based on MacCormack's technique in computational fluid dynamics. Thus, this approach is called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in previous literature, including the isentropic method, the spatial integration and the Poisson method. The effects of velocity error level and PIV spatial resolution on these approaches are also quantified by using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than other approaches, and its advantages become more remarkable with shock strengthening. Furthermore, the performance of the MacCormack method is also validated by using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is highly significant for the studies on aerospace engineering, especially the outer flow fields of supersonic aircraft and the internal flow fields of ramjets.

  13. Introduction, comparison, and validation of Meta-Essentials: A free and simple tool for meta-analysis.

    Science.gov (United States)

    Suurmond, Robert; van Rhee, Henk; Hak, Tony

    2017-12-01

    We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students. © 2017 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
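
    As an illustration of the interval construction mentioned above, the sketch below computes a random-effects estimate with the Knapp-Hartung adjustment on top of a DerSimonian-Laird between-study variance. This is a generic textbook implementation with invented effect sizes, not code extracted from Meta-Essentials.

        import numpy as np
        from scipy import stats

        def knapp_hartung_ci(y, se, level=0.95):
            """Random-effects CI using the Knapp-Hartung adjustment of DerSimonian-Laird."""
            y, se = np.asarray(y, float), np.asarray(se, float)
            w = 1.0 / se**2
            y_fe = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - y_fe)**2)
            k = len(y)
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            tau2 = max(0.0, (q - (k - 1)) / c)
            w_re = 1.0 / (se**2 + tau2)
            mu = np.sum(w_re * y) / np.sum(w_re)
            # Knapp-Hartung variance: weighted residual variance around the RE mean
            q_re = np.sum(w_re * (y - mu)**2) / (k - 1)
            var_kh = q_re / np.sum(w_re)
            half = stats.t.ppf(0.5 + level / 2, df=k - 1) * np.sqrt(var_kh)
            return mu, (mu - half, mu + half)

        # hypothetical standardized effects and standard errors from five studies
        mu, ci = knapp_hartung_ci([0.21, 0.35, 0.10, 0.42, 0.28],
                                  [0.10, 0.14, 0.08, 0.20, 0.12])
        print(f"pooled effect = {mu:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")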

  14. MADAM - An open source meta-analysis toolbox for R and Bioconductor

    Directory of Open Access Journals (Sweden)

    Graber Armin

    2010-03-01

    Full Text Available Abstract Background Meta-analysis is a major theme in biomedical research. In the present paper we introduce a package for R and Bioconductor that provides useful tools for performing this type of work. One idea behind the development of MADAM was that many meta-analysis methods, which are available in R, are not able to use the capacities of parallel computing yet. In this first version, we implemented one meta-analysis method in such a parallel manner. Additionally, we provide tools for combining the results from a set of methods in an ensemble approach. Functionality for visualization of results is also provided. Results The presented package enables the carrying out of meta-analysis either by providing functions directly or by wrapping them to existing implementations. Overall, five different meta-analysis methods are now usable through MADAM, along with another three methods for combining the corresponding results. Visualizing the results is eased by three included functions. For developing and testing meta-analysis methods, a mock up data generator is integrated. Conclusions The use of MADAM enables a user to focus on one package, in turn enabling them to work with the same data types across a set of methods. By making use of the snow package, MADAM can be made compatible with an existing parallel computing infrastructure. MADAM is open source and freely available within CRAN http://cran.r-project.org.
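
    The abstract does not name the specific meta-analysis methods implemented in MADAM, so as a generic example of the kind of per-gene evidence combination such toolboxes provide, here is Fisher's method for combining per-study p-values (shown in Python for consistency with the other sketches in this document; MADAM itself is an R/Bioconductor package).

        import numpy as np
        from scipy import stats

        def fisher_combine(p_values):
            """Fisher's method: combine per-study p-values for one gene into a single p-value."""
            p = np.asarray(p_values, float)
            chi2_stat = -2 * np.sum(np.log(p))
            return stats.chi2.sf(chi2_stat, df=2 * len(p))

        # hypothetical per-study p-values for one gene across four expression studies
        print(fisher_combine([0.04, 0.20, 0.01, 0.15]))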

  15. Community-Based Mental Health and Behavioral Programs for Low-Income Urban Youth: A Meta-Analytic Review

    Science.gov (United States)

    Farahmand, Farahnaz K.; Duffy, Sophia N.; Tailor, Megha A.; Dubois, David L.; Lyon, Aaron L.; Grant, Kathryn E.; Zarlinski, Jennifer C.; Masini, Olivia; Zander, Keith J.; Nathanson, Alison M.

    2012-01-01

    A meta-analytic review of 33 studies and 41 independent samples was conducted of the effectiveness of community-based mental health and behavioral programs for low-income urban youth. Findings indicated positive effects, with an overall mean effect of 0.25 at post-test. While this is comparable to previous meta-analytic intervention research with…

  16. Meta-Analytical Studies in Transport Economics. Methodology and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Brons, M.R.E.

    2006-05-18

    Vast increases in the external costs of transport in the late twentieth century have caused national and international governmental bodies to worry about the sustainability of their transport systems. In this thesis we use meta-analysis as a research method to study various topics in transport economics that are relevant for sustainable transport policymaking. Meta-analysis is a research methodology that is based on the quantitative summarisation of a body of previously documented empirical evidence. In several fields of economics, meta-analysis has become a well-accepted research tool. Despite the appeal of the meta-analytical approach, there are methodological difficulties that need to be acknowledged. We study a specific methodological problem which is common in meta-analysis in economics, viz., within-study dependence caused by multiple sampling techniques. By means of Monte Carlo analysis we investigate the effect of such dependence on the performance of various multivariate estimators. In the applied part of the thesis we use and develop meta-analytical techniques to study the empirical variation in indicators of the price sensitivity of demand for aviation transport, the price sensitivity of demand for gasoline, the efficiency of urban public transport and the valuation of the external costs of noise from rail transport. We focus on the estimation of mean values for these indicators and on the identification of the impact of conditioning factors.

  17. Tackling Biocomplexity with Meta-models for Species Risk Assessment

    Directory of Open Access Journals (Sweden)

    Philip J. Nyhus

    2007-06-01

    Full Text Available We describe results of a multi-year effort to strengthen consideration of the human dimension in endangered species risk assessments and to strengthen research capacity to understand biodiversity risk assessment in the context of coupled human-natural systems. A core group of social and biological scientists have worked with a network of more than 50 individuals from four countries to develop a conceptual framework illustrating how human-mediated processes influence biological systems and to develop tools to gather, translate, and incorporate these data into existing simulation models. A central theme of our research focused on (1) the difficulties often encountered in identifying and securing diverse bodies of expertise and information that are necessary to adequately address complex species conservation issues; and (2) the development of quantitative simulation modeling tools that could explicitly link these datasets as a way to gain deeper insight into these issues. To address these important challenges, we promote a "meta-modeling" approach where computational links are constructed between discipline-specific models already in existence. In this approach, each model can function as a powerful stand-alone program, but interaction between applications is achieved by passing data structures describing the state of the system between programs. As one example of this concept, an integrated meta-model of wildlife disease and population biology is described. A goal of this effort is to improve science-based capabilities for decision making by scientists, natural resource managers, and policy makers addressing environmental problems in general, and focusing on biodiversity risk assessment in particular.

  18. MetaSensing's FastGBSAR: ground based radar for deformation monitoring

    Science.gov (United States)

    Rödelsperger, Sabine; Meta, Adriano

    2014-10-01

    The continuous monitoring of ground deformation and structural movement has become an important task in engineering. MetaSensing introduces a novel sensor system, the Fast Ground Based Synthetic Aperture Radar (FastGBSAR), based on innovative technologies that have already been successfully applied to airborne SAR applications. The FastGBSAR allows the remote sensing of deformations of a slope or infrastructure from up to a distance of 4 km. The FastGBSAR can be setup in two different configurations: in Real Aperture Radar (RAR) mode it is capable of accurately measuring displacements along a linear range profile, ideal for monitoring vibrations of structures like bridges and towers (displacement accuracy up to 0.01 mm). Modal parameters can be determined within half an hour. Alternatively, in Synthetic Aperture Radar (SAR) configuration it produces two-dimensional displacement images with an acquisition time of less than 5 seconds, ideal for monitoring areal structures like dams, landslides and open pit mines (displacement accuracy up to 0.1 mm). The MetaSensing FastGBSAR is the first ground based SAR instrument on the market able to produce two-dimensional deformation maps with this high acquisition rate. By that, deformation time series with a high temporal and spatial resolution can be generated, giving detailed information useful to determine the deformation mechanisms involved and eventually to predict an incoming failure. The system is fully portable and can be quickly installed on bedrock or a basement. The data acquisition and processing can be fully automated leading to a low effort in instrument operation and maintenance. Due to the short acquisition time of FastGBSAR, the coherence between two acquisitions is very high and the phase unwrapping is simplified enormously. This yields a high density of resolution cells with good quality and high reliability of the acquired deformations. The deformation maps can directly be used as input into an Early

  19. Comparison of SMOS and SMAP Soil Moisture Retrieval Approaches Using Tower-based Radiometer Data over a Vineyard Field

    Science.gov (United States)

    Miernecki, Maciej; Wigneron, Jean-Pierre; Lopez-Baeza, Ernesto; Kerr, Yann; DeJeu, Richard; DeLannoy, Gabielle J. M.; Jackson, Tom J.; O'Neill, Peggy E.; Shwank, Mike; Moran, Roberto Fernandez

    2014-01-01

    The objective of this study was to compare several approaches to soil moisture (SM) retrieval using L-band microwave radiometry. The comparison was based on a brightness temperature (TB) data set acquired since 2010 by the L-band radiometer ELBARA-II over a vineyard field at the Valencia Anchor Station (VAS) site. ELBARA-II, provided by the European Space Agency (ESA) within the scientific program of the SMOS (Soil Moisture and Ocean Salinity) mission, measures multiangular TB data at horizontal and vertical polarization for a range of incidence angles (30°-60°). Based on a three-year data set (2010-2012), several SM retrieval approaches developed for spaceborne missions including AMSR-E (Advanced Microwave Scanning Radiometer for EOS), SMAP (Soil Moisture Active Passive) and SMOS were compared. The approaches include: the Single Channel Algorithm (SCA) for horizontal (SCA-H) and vertical (SCA-V) polarizations, the Dual Channel Algorithm (DCA), the Land Parameter Retrieval Model (LPRM) and two simplified approaches based on statistical regressions (referred to as 'Mattar' and 'Saleh'). Time series of vegetation indices required for three of the algorithms (SCA-H, SCA-V and Mattar) were obtained from MODIS observations. The SM retrievals were evaluated against reference SM values estimated from a multiangular 2-Parameter inversion approach. The results obtained with the current baseline algorithms developed for SMAP (SCA-H and -V) are in very good agreement with the reference SM data set derived from the multi-angular observations (R2 around 0.90, RMSE varying between 0.035 and 0.056 m3/m3 for several retrieval configurations). This result showed that, provided the relationship between vegetation optical depth and a remotely-sensed vegetation index can be calibrated, the SCA algorithms can provide results very close to those obtained from multi-angular observations in this study area. The approaches based on statistical regressions provided similar results and the

  20. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    Science.gov (United States)

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction for publication bias in meta-analysis focuses mainly on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem, and propose new parametric solutions. We develop methodologies of estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations, in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators will be obtained using maximum likelihood estimation and method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
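
    A heavily simplified sketch of the truncated-distribution idea: assume studies are published only when the estimated effect exceeds a known cutoff, so observed estimates follow a left-truncated normal distribution, and recover the underlying mean by maximum likelihood. The cutoff, effect estimates, and standard errors below are invented, and the authors' actual estimators may differ in detail.

        import numpy as np
        from scipy import stats, optimize

        def truncated_normal_mle(y, se, cutoff):
            """MLE of the overall effect when estimates below `cutoff` go unpublished."""
            y, se = np.asarray(y, float), np.asarray(se, float)

            def neg_log_lik(mu):
                # truncated-normal log density of each published estimate
                z = (y - mu) / se
                a = (cutoff - mu) / se
                log_f = stats.norm.logpdf(z) - np.log(se) - stats.norm.logsf(a)
                return -np.sum(log_f)

            res = optimize.minimize_scalar(neg_log_lik, bounds=(-2.0, 2.0), method="bounded")
            mu_hat = res.x
            # estimated share of studies lost to truncation, averaged over study SEs
            p_missing = np.mean(stats.norm.cdf((cutoff - mu_hat) / se))
            return mu_hat, p_missing

        # made-up published effect estimates and standard errors
        mu_hat, p_missing = truncated_normal_mle(
            y=[0.45, 0.38, 0.52, 0.60, 0.41], se=[0.15, 0.12, 0.20, 0.25, 0.10], cutoff=0.30)
        print(f"corrected effect = {mu_hat:.3f}, estimated missing fraction = {p_missing:.2f}")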

  1. Innovative Technology-Based Interventions for Autism Spectrum Disorders: A Meta-Analysis

    Science.gov (United States)

    Grynszpan, Ouriel; Weiss, Patrice L.; Perez-Diaz, Fernando; Gal, Eynat

    2014-01-01

    This article reports the results of a meta-analysis of technology-based intervention studies for children with autism spectrum disorders. We conducted a systematic review of research that used a pre-post design to assess innovative technology interventions, including computer programs, virtual reality, and robotics. The selected studies provided…

  2. Meta-Teaching: Meaning and Strategy

    Science.gov (United States)

    Chen, Xiaoduan

    2013-01-01

    Meta-teaching is the knowledge and reflection on teaching based on meta-ideas. It is the teaching about teaching, a teaching process with practice consciously guided by thinking, inspiring teachers to teach more effectively. Meta-teaching is related to the knowledge, inspection and amendment of teaching activities in terms of their design,…

  3. Geographically selective assortment of cycles in pandemics: meta-analysis of data collected by Chizhevsky.

    Science.gov (United States)

    Gumarova, L; Cornélissen, G; Hillman, D; Halberg, F

    2013-10-01

    In the incidence patterns of cholera, diphtheria and croup during the past when they were of epidemic proportions, we document a set of cycles (periods), one of which was reported and discussed by A. L. Chizhevsky in the same data with emphasis on the mirroring in human disease of the ~11-year sunspot cycle. The data in this study are based on Chizhevsky’s book The Terrestrial Echo of Solar Storms and on records from the World Health Organization. For meta-analysis, we used the extended linear and nonlinear cosinor. We found a geographically selective assortment of various cycles characterizing the epidemiology of infections, which is the documented novel topic of this paper, complementing the earlier finding in the 21st century or shortly before, of a geographically selective assortment of cycles characterizing human sudden cardiac death. Solar effects, if any, interact with geophysical processes in contributing to this assortment.
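
    For readers unfamiliar with the cosinor, the sketch below fits the single-component model y(t) = M + A*cos(2*pi*t/period + phi) by ordinary least squares for one trial period; the incidence series is synthetic and the ~11-year period is used only as an example. The extended and nonlinear cosinor used in the paper generalizes this to multiple components and unknown periods.

        import numpy as np

        def single_cosinor(t, y, period):
            """Least-squares fit of y(t) = M + A*cos(2*pi*t/period + phi) for a fixed period."""
            t, y = np.asarray(t, float), np.asarray(y, float)
            omega = 2 * np.pi / period
            X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
            (mesor, beta, gamma), *_ = np.linalg.lstsq(X, y, rcond=None)
            amplitude = np.hypot(beta, gamma)
            acrophase = np.arctan2(-gamma, beta)     # phase of the fitted cosine
            return mesor, amplitude, acrophase

        # hypothetical yearly incidence series tested against an ~11-year cycle
        rng = np.random.default_rng(0)
        years = np.arange(1900, 1950)
        incidence = 50 + 12 * np.cos(2 * np.pi * (years - 1903) / 11) + rng.normal(0, 3, years.size)
        mesor, amp, phase = single_cosinor(years, incidence, period=11)
        print(f"MESOR = {mesor:.1f}, amplitude = {amp:.1f}, acrophase = {phase:.2f} rad")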

  4. Feature-Space Clustering for fMRI Meta-Analysis

    DEFF Research Database (Denmark)

    Goutte, Cyril; Hansen, Lars Kai; Liptrot, Mathew G.

    2001-01-01

    With fMRI sequences containing several hundreds of images, it is sometimes necessary to invoke feature extraction to reduce the dimensionality of the data space. A second interesting application is in the meta-analysis of fMRI experiments, where features are obtained from a possibly large number of single-voxel analyses. In particular this allows the checking of the differences and agreements between different methods of analysis. Both approaches are illustrated on an fMRI data set involving visual stimulation, and we show that the feature space clustering approach yields nontrivial results and, in particular, shows interesting differences between individual voxel analyses performed with traditional methods. © 2001 Wiley-Liss, Inc.

  5. The effectiveness of problem-based learning on development of nursing students' critical thinking: a systematic review and meta-analysis.

    Science.gov (United States)

    Kong, Ling-Na; Qin, Bo; Zhou, Ying-qing; Mou, Shao-yu; Gao, Hui-Ming

    2014-03-01

    The objective of this systematic review and meta-analysis was to estimate the effectiveness of problem-based learning in developing nursing students' critical thinking. Searches of PubMed, EMBASE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Proquest, Cochrane Central Register of Controlled Trials (CENTRAL) and China National Knowledge Infrastructure (CNKI) were undertaken to identify randomized controlled trials from 1965 to December 2012, comparing problem-based learning with traditional lectures on the effectiveness of development of nursing students' critical thinking, with no language limitation. The MeSH terms or key words used in the search were problem-based learning, thinking, critical thinking, nursing, nursing education, nurse education, nurse students, nursing students and pupil nurse. Two reviewers independently assessed eligibility and extracted data. Quality assessment was conducted independently by two reviewers using the Cochrane Collaboration's Risk of Bias Tool. We analyzed critical thinking scores (continuous outcomes) using a standardized mean difference (SMD) or weighted mean difference (WMD) with 95% confidence intervals (CIs). Heterogeneity was assessed using Cochran's Q statistic and the I(2) statistic. Publication bias was assessed by means of a funnel plot and Egger's test of asymmetry. Nine articles representing eight randomized controlled trials were included in the meta-analysis. Most studies were at low risk of bias. The pooled effect size showed problem-based learning was able to improve nursing students' critical thinking (overall critical thinking scores SMD=0.33, 95%CI=0.13-0.52, P=0.0009), compared with traditional lectures. There was low heterogeneity (overall critical thinking scores I(2)=45%, P=0.07) in the meta-analysis. No significant publication bias was observed regarding overall critical thinking scores (P=0.536). Sensitivity analysis showed that the result of our meta-analysis was reliable. Most
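
    As a small illustration of the effect-size machinery referred to above, the following Python sketch computes Hedges' g (a bias-corrected standardized mean difference) from group summary statistics and then Cochran's Q and I2 across studies. The group means, standard deviations, and sample sizes are invented, not taken from the included trials.

        import numpy as np

        def hedges_g(m1, sd1, n1, m2, sd2, n2):
            """Standardized mean difference (Hedges' g) and its sampling variance."""
            sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
            d = (m1 - m2) / sd_pooled
            j = 1 - 3 / (4 * (n1 + n2) - 9)            # small-sample correction factor
            g = j * d
            var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
            return g, var_g

        def heterogeneity(effects, variances):
            """Cochran's Q and I^2 for a set of study effects."""
            w = 1 / np.asarray(variances, float)
            e = np.asarray(effects, float)
            mean = np.sum(w * e) / np.sum(w)
            q = np.sum(w * (e - mean)**2)
            df = len(e) - 1
            i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
            return q, i2

        # hypothetical critical-thinking scores: PBL arm vs lecture arm of three trials
        g1, v1 = hedges_g(280, 30, 45, 268, 32, 43)
        g2, v2 = hedges_g(0.62, 0.20, 60, 0.55, 0.22, 58)
        g3, v3 = hedges_g(75.0, 8.0, 52, 72.5, 7.5, 50)
        q, i2 = heterogeneity([g1, g2, g3], [v1, v2, v3])
        print(f"Q = {q:.2f}, I2 = {i2:.0f}%")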

  6. Multivariate Meta-Analysis Using Individual Participant Data

    Science.gov (United States)

    Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.

    2015-01-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is…

  7. Creation of Nuclear Data Base up to 150 MeV and corresponding scaling approach for ADS

    International Nuclear Information System (INIS)

    Shubin, Y. N.; Gai, E. V.; Ignatyuk, A. V.; Lunev, V. P.

    1997-01-01

    The status of nuclear data in the energy region up to 150 MeV is outlined. The specific physical reasons for the detailed investigations of nuclear structure effects are pointed out. The necessity of the development of a Nuclear Data System for ADS is stressed. The program for the creation of a nuclear data base up to 150 MeV and a corresponding scaling approach for ADS is proposed. (Author) 14 refs

  8. Meta-optimization of the extended kalman filter's parameters for improved feature extraction on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon, BP

    2011-07-01

    Full Text Available This paper proposes a meta-optimization approach for setting the parameters of the non-linear Extended Kalman Filter to rapidly and efficiently estimate the features for the pair of triply modulated cosine functions. The approach is based on an unsupervised...

  9. Causal Meta-Analysis : Methodology and Applications

    NARCIS (Netherlands)

    Bax, L.J.

    2009-01-01

    Meta-analysis is a statistical method to summarize research data from multiple studies in a quantitative manner. This dissertation addresses a number of methodological topics in causal meta-analysis and reports the development and validation of meta-analysis software. In the first (methodological)

  10. Efficient clustering aggregation based on data fragments.

    Science.gov (United States)

    Wu, Ou; Hu, Weiming; Maybank, Stephen J; Zhu, Mingliang; Li, Bing

    2012-06-01

    Clustering aggregation, known as clustering ensembles, has emerged as a powerful technique for combining different clustering results to obtain a single better clustering. Existing clustering aggregation algorithms are applied directly to data points, in what is referred to as the point-based approach. The algorithms are inefficient if the number of data points is large. We define an efficient approach for clustering aggregation based on data fragments. In this fragment-based approach, a data fragment is any subset of the data that is not split by any of the clustering results. To establish the theoretical bases of the proposed approach, we prove that clustering aggregation can be performed directly on data fragments under two widely used goodness measures for clustering aggregation taken from the literature. Three new clustering aggregation algorithms are described. The experimental results obtained using several public data sets show that the new algorithms have lower computational complexity than three well-known existing point-based clustering aggregation algorithms (Agglomerative, Furthest, and LocalSearch); nevertheless, the new algorithms do not sacrifice the accuracy.
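
    A minimal sketch of how data fragments can be derived: two points belong to the same fragment exactly when they receive the same label in every input clustering, so fragments are the groups of identical label signatures. The label vectors below are invented; this shows only the fragment construction, not the aggregation algorithms themselves.

        from collections import defaultdict

        def data_fragments(clusterings):
            """Group points into fragments: maximal sets not split by any input clustering.

            `clusterings` is a list of label sequences, one per clustering, all over the
            same points; two points share a fragment iff they share a label in every
            clustering.
            """
            n = len(clusterings[0])
            fragments = defaultdict(list)
            for i in range(n):
                signature = tuple(labels[i] for labels in clusterings)
                fragments[signature].append(i)
            return list(fragments.values())

        # two hypothetical clusterings of six points
        c1 = [0, 0, 0, 1, 1, 1]
        c2 = [0, 0, 1, 1, 1, 0]
        print(data_fragments([c1, c2]))   # -> [[0, 1], [2], [3, 4], [5]]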

  11. Using structural equation modeling for network meta-analysis.

    Science.gov (United States)

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within the Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structure and to make linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of unrestricted weighted least squares (UWLS) method can also be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of two UWLS models were identical to those in the fixed effect model but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Comparing to UWLS model with common variance adjusted factor, UWLS model with unique variance adjusted factor has greater confidence intervals when the heterogeneity was larger in the pairwise comparison. The UWLS model with unique variance adjusted factor reflects the difference in heterogeneity within each comparison

  12. Privacy-Preserving Data Mining of Medical Data Using Data Separation-Based Techniques

    Directory of Open Access Journals (Sweden)

    Gang Kou

    2007-08-01

    Full Text Available Data mining is concerned with the extraction of useful knowledge from various types of data. Medical data mining has been a popular data mining topic of late. Compared with other data mining areas, medical data mining has some unique characteristics. Because medical files are related to human subjects, privacy concerns are taken more seriously than other data mining tasks. This paper applied data separation-based techniques to preserve privacy in classification of medical data. We take two approaches to protect privacy: one approach is to vertically partition the medical data and mine these partitioned data at multiple sites; the other approach is to horizontally split data across multiple sites. In the vertical partition approach, each site uses a portion of the attributes to compute its results, and the distributed results are assembled at a central trusted party using a majority-vote ensemble method. In the horizontal partition approach, data are distributed among several sites. Each site computes its own data, and a central trusted party is responsible to integrate these results. We implement these two approaches using medical datasets from UCI KDD archive and report the experimental results.

  13. Can statistic adjustment of OR minimize the potential confounding bias for meta-analysis of case-control study? A secondary data analysis.

    Science.gov (United States)

    Liu, Tianyi; Nie, Xiaolu; Wu, Zehao; Zhang, Ying; Feng, Guoshuang; Cai, Siyu; Lv, Yaqi; Peng, Xiaoxia

    2017-12-29

    Different confounder adjustment strategies are used to estimate odds ratios (ORs) in case-control studies, i.e. they differ in how many confounders the original studies adjusted for and which variables these were. This secondary data analysis aimed to detect whether differences in confounding-factor adjustment strategies in case-control studies introduce potential biases, and whether such bias would impact the summary effect size of a meta-analysis. We included all meta-analyses that focused on the association between breast cancer and passive smoking among non-smoking women, as well as each original case-control study included in these meta-analyses. The relative deviations (RDs) of each original study were calculated to detect how strongly the adjustment would impact the estimation of ORs, compared with crude ORs. At the same time, a scatter diagram was sketched to describe the distribution of adjusted ORs by the number of adjusted confounders. Substantial inconsistency existed in the meta-analyses of case-control studies, which would influence the precision of the summary effect size. First, mixed unadjusted and adjusted ORs were used to combine individual ORs in the majority of meta-analyses. Second, original studies with different confounder adjustment strategies were combined, i.e. they differed in the number of adjusted confounders and in the factors being adjusted for in each original study. Third, adjustment did not make the effect sizes of the original studies converge, which suggested that model fitting might have failed to correct the systematic error caused by confounding. The heterogeneity of confounder adjustment strategies in case-control studies may lead to further bias in the summary effect size of meta-analyses, especially for weak or medium associations, so that the direction of causal inference could even be reversed. Therefore, further methodological research is needed on the assessment of confounder adjustment strategies, as well as how to take this kind
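
    The abstract does not give the exact formula for the relative deviation, so the sketch below assumes the common definition (adjusted OR minus crude OR, divided by the crude OR); the OR pairs are invented and serve only to illustrate the computation.

        def relative_deviation(adjusted_or, crude_or):
            """Relative deviation of an adjusted OR from the crude OR, in percent (assumed definition)."""
            return 100.0 * (adjusted_or - crude_or) / crude_or

        # hypothetical (adjusted, crude) OR pairs from three case-control studies
        pairs = [(1.42, 1.30), (0.95, 1.10), (2.05, 1.85)]
        for adj, crude in pairs:
            print(f"crude {crude:.2f} -> adjusted {adj:.2f}: RD = {relative_deviation(adj, crude):+.1f}%")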

  14. Teaching meta-analysis using MetaLight

    Directory of Open Access Journals (Sweden)

    Thomas James

    2012-10-01

    Full Text Available Abstract Background Meta-analysis is a statistical method for combining the results of primary studies. It is often used in systematic reviews and is increasingly a method and topic that appears in student dissertations. MetaLight is a freely available software application that runs simple meta-analyses and contains specific functionality to facilitate the teaching and learning of meta-analysis. While there are many courses and resources for meta-analysis available and numerous software applications to run meta-analyses, there are few pieces of software which are aimed specifically at helping those teaching and learning meta-analysis. Valuable teaching time can be spent learning the mechanics of a new software application, rather than on the principles and practices of meta-analysis. Findings We discuss ways in which the MetaLight tool can be used to present some of the main issues involved in undertaking and interpreting a meta-analysis. Conclusions While there are many software tools available for conducting meta-analysis, in the context of a teaching programme such software can require expenditure both in terms of money and in terms of the time it takes to learn how to use it. MetaLight was developed specifically as a tool to facilitate the teaching and learning of meta-analysis and we have presented here some of the ways it might be used in a training situation.

  15. Drivers of wetland conversion: a global meta-analysis.

    Science.gov (United States)

    van Asselen, Sanneke; Verburg, Peter H; Vermaat, Jan E; Janse, Jan H

    2013-01-01

    Meta-analysis of case studies has become an important tool for synthesizing case study findings in land change. Meta-analyses of deforestation, urbanization, desertification and change in shifting cultivation systems have been published. This present study adds to this literature, with an analysis of the proximate causes and underlying forces of wetland conversion at a global scale using two complementary approaches of systematic review. Firstly, a meta-analysis of 105 case-study papers describing wetland conversion was performed, showing that different combinations of multiple-factor proximate causes, and underlying forces, drive wetland conversion. Agricultural development has been the main proximate cause of wetland conversion, and economic growth and population density are the most frequently identified underlying forces. Secondly, to add a more quantitative component to the study, a logistic meta-regression analysis was performed to estimate the likelihood of wetland conversion worldwide, using globally-consistent biophysical and socioeconomic location factor maps. Significant factors explaining wetland conversion, in order of importance, are market influence, total wetland area (lower conversion probability), mean annual temperature and cropland or built-up area. The regression analyses results support the outcomes of the meta-analysis of the processes of conversion mentioned in the individual case studies. In other meta-analyses of land change, similar factors (e.g., agricultural development, population growth, market/economic factors) are also identified as important causes of various types of land change (e.g., deforestation, desertification). Meta-analysis helps to identify commonalities across the various local case studies and identify which variables may lead to individual cases to behave differently. The meta-regression provides maps indicating the likelihood of wetland conversion worldwide based on the location factors that have determined historic

  16. A SOA-based approach to geographical data sharing

    Science.gov (United States)

    Li, Zonghua; Peng, Mingjun; Fan, Wei

    2009-10-01

    In the last few years, large volumes of spatial data have become available in different government departments in China, but these data are mainly used within these departments. With the e-government project initiated, spatial data sharing has become more and more necessary. Currently, the Web is used not only for document searching but also for the provision and use of services, known as Web services, which are published in a directory and may be automatically discovered by software agents. Particularly in the spatial domain, the possibility of accessing these large spatial datasets via Web services has motivated research into the new field of Spatial Data Infrastructure (SDI) implemented using service-oriented architecture. In this paper a Service-Oriented Architecture (SOA) based Geographical Information System (GIS) is proposed, and a prototype system is deployed based on Open Geospatial Consortium (OGC) standards in Wuhan, China, so that all authorized departments can access the spatial data within the government intranet and these spatial data can be easily integrated into various kinds of applications.

  17. Toward better public health reporting using existing off the shelf approaches: A comparison of alternative cancer detection approaches using plaintext medical data and non-dictionary based feature selection.

    Science.gov (United States)

    Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J

    2016-04-01

    Increased adoption of electronic health records has resulted in increased availability of free text clinical data for secondary use. A variety of approaches to obtain actionable information from unstructured free text data exist. These approaches are resource intensive, inherently complex and rely on structured clinical data and dictionary-based approaches. We sought to evaluate the potential to obtain actionable information from free text pathology reports using routinely available tools and approaches that do not depend on dictionary-based approaches. We obtained pathology reports from a large health information exchange and evaluated the capacity to detect cancer cases from these reports using 3 non-dictionary feature selection approaches, 4 feature subset sizes, and 5 clinical decision models: simple logistic regression, naïve bayes, k-nearest neighbor, random forest, and J48 decision tree. The performance of each decision model was evaluated using sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using automated, informed, and manual feature selection approaches yielded similar results. Furthermore, non-dictionary classification approaches identified cancer cases present in free text reports with evaluation measures approaching and exceeding 80-90% for most metrics. Our methods are feasible and practical approaches for extracting substantial information value from free text medical data, and the results suggest that these methods can perform on par, if not better, than existing dictionary-based approaches. Given that public health agencies are often under-resourced and lack the technical capacity for more complex methodologies, these results represent potentially significant value to the public health field. Copyright © 2016 Elsevier Inc. All rights reserved.
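
    A toy sketch of one of the decision models listed above, simple logistic regression over non-dictionary text features (here TF-IDF unigrams and bigrams via scikit-learn); the pathology snippets and labels are invented, and this is not the authors' pipeline or feature selection procedure.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # tiny hypothetical pathology snippets; 1 = cancer case, 0 = not a case
        reports = [
            "invasive ductal carcinoma identified in the left breast specimen",
            "benign fibroadenoma, no evidence of malignancy",
            "adenocarcinoma of the colon with lymphovascular invasion",
            "chronic inflammation, negative for dysplasia or carcinoma",
        ]
        labels = [1, 0, 1, 0]

        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=1),   # non-dictionary text features
            LogisticRegression(max_iter=1000),
        )
        model.fit(reports, labels)
        print(model.predict(["specimen shows infiltrating carcinoma"]))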

  18. Trial Sequential Analysis in systematic reviews with meta-analysis

    Directory of Open Access Journals (Sweden)

    Jørn Wetterslev

    2017-03-01

    Full Text Available Abstract Background Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in

  19. Graph-based sequence annotation using a data integration approach

    Directory of Open Access Journals (Sweden)

    Pesch Robert

    2008-06-01

    Full Text Available The automated annotation of data from high throughput sequencing and genomics experiments is a significant challenge for bioinformatics. Most current approaches rely on sequential pipelines of gene finding and gene function prediction methods that annotate a gene with information from different reference data sources. Each function prediction method contributes evidence supporting a functional assignment. Such approaches generally ignore the links between the information in the reference datasets. These links, however, are valuable for assessing the plausibility of a function assignment and can be used to evaluate the confidence in a prediction. We are working towards a novel annotation system that uses the network of information supporting the function assignment to enrich the annotation process for use by expert curators and for predicting the function of previously unannotated genes. In this paper we describe our success in the first stages of this development. We present the data integration steps that are needed to create the core database of integrated reference databases (UniProt, PFAM, PDB, GO and the pathway database AraCyc), which has been established in the ONDEX data integration system. We also present a comparison between different methods for integration of GO terms as part of the function assignment pipeline and discuss the consequences of this analysis for improving the accuracy of gene function annotation.

  20. Integrating landscape system and meta-ecosystem frameworks to advance the understanding of ecosystem function in heterogeneous landscapes: An analysis on the carbon fluxes in the Northern Highlands Lake District (NHLD) of Wisconsin and Michigan.

    Science.gov (United States)

    Yang, Haile; Chen, Jiakuan

    2018-01-01

    The successful integration of ecosystem ecology with landscape ecology would be conducive to understanding how landscapes function. There have been several attempts at this, with two main approaches: (1) an ecosystem-based approach, such as the meta-ecosystem framework and (2) a landscape-based approach, such as the landscape system framework. These two frameworks are currently disconnected. To integrate these two frameworks, we introduce a protocol, and then demonstrate application of the protocol using a case study. The protocol includes four steps: 1) delineating landscape systems; 2) classifying landscape systems; 3) adjusting landscape systems to meta-ecosystems and 4) integrating landscape system and meta-ecosystem frameworks through meta-ecosystems. The case study is the analyzing of the carbon fluxes in the Northern Highlands Lake District (NHLD) of Wisconsin and Michigan using this protocol. The application of this protocol revealed that one could follow this protocol to construct a meta-ecosystem and analyze it using the integrative framework of landscape system and meta-ecosystem frameworks. That is, one could (1) appropriately describe and analyze the spatial heterogeneity of the meta-ecosystem; (2) understand the emergent properties arising from spatial coupling of local ecosystems in the meta-ecosystem. In conclusion, this protocol is a useful approach for integrating the meta-ecosystem framework and the landscape system framework, which advances the describing and analyzing of the spatial heterogeneity and ecosystem function of interconnected ecosystems.

  1. Meta-Analysis With Complex Research Designs: Dealing With Dependence From Multiple Measures and Multiple Group Comparisons

    Science.gov (United States)

    Scammacca, Nancy; Roberts, Greg; Stuebing, Karla K.

    2013-01-01

    Previous research has shown that treating dependent effect sizes as independent inflates the variance of the mean effect size and introduces bias by giving studies with more effect sizes more weight in the meta-analysis. This article summarizes the different approaches to handling dependence that have been advocated by methodologists, some of which are more feasible to implement with education research studies than others. A case study using effect sizes from a recent meta-analysis of reading interventions is presented to compare the results obtained from different approaches to dealing with dependence. Overall, mean effect sizes and variance estimates were found to be similar, but estimates of indexes of heterogeneity varied. Meta-analysts are advised to explore the effect of the method of handling dependence on the heterogeneity estimates before conducting moderator analyses and to choose the approach to dependence that is best suited to their research question and their data set. PMID:25309002

  2. A FUZZY LOGIC-BASED APPROACH FOR THE DETECTION OF FLOODED VEGETATION BY MEANS OF SYNTHETIC APERTURE RADAR DATA

    Directory of Open Access Journals (Sweden)

    V. Tsyganskaya

    2016-06-01

    Full Text Available In this paper an algorithm designed to map flooded vegetation from synthetic aperture radar (SAR) imagery is introduced. The approach is based on fuzzy logic, which makes it possible to deal with the ambiguity of SAR data and to integrate multiple ancillary data containing topographical information, simple hydraulic considerations and land cover information. This allows the exclusion of image elements with a backscatter value similar to that of flooded vegetation, which significantly reduces misclassification errors. The flooded vegetation mapping procedure is tested on a flood event that occurred in Germany over parts of the Saale catchment in January 2011, using a time series of high resolution TerraSAR-X data covering the time interval from 2009 to 2015. The results show that the analysis of multi-temporal X-band data combined with ancillary data using a fuzzy logic-based approach permits the detection of flooded vegetation areas.

  3. The Yusuf-Peto method was not a robust method for meta-analyses of rare events data from antidepressant trials

    DEFF Research Database (Denmark)

    Sharma, Tarang; Gøtzsche, Peter C.; Kuss, Oliver

    2017-01-01

    Objectives The aim of the study was to identify the validity of effect estimates for serious rare adverse events in clinical study reports of antidepressant trials, across different meta-analysis methods. Study Design and Setting Four serious rare adverse events (all-cause mortality, suicidality, aggressive behavior, and akathisia) were meta-analyzed using different methods. The Yusuf-Peto odds ratio ignores studies with no events and was compared with the alternative approaches of generalized linear mixed models (GLMMs), conditional logistic regression, a Bayesian approach using Markov Chain Monte Carlo … from 1. For example, the odds ratio for suicidality for children and adolescents was 2.39 (95% confidence interval = 1.32–4.33) using the Yusuf-Peto method, but increased to 2.64 (1.33–5.26) using conditional logistic regression, to 2.69 (1.19–6.09) using beta-binomial, to 2.73 (1.37–5.42) using...
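
    To see why the Yusuf-Peto estimator silently discards double-zero trials, the sketch below computes the pooled Peto odds ratio from per-trial (O - E) terms and hypergeometric variances V: trials with no events in either arm have V = 0 and contribute nothing. The counts are invented and only illustrate the arithmetic, not the trials analyzed in the study.

        import numpy as np

        def peto_components(events_t, n_t, events_c, n_c):
            """Per-study (O - E) and hypergeometric variance V for the Peto odds ratio."""
            events_t, n_t = np.asarray(events_t, float), np.asarray(n_t, float)
            events_c, n_c = np.asarray(events_c, float), np.asarray(n_c, float)
            n = n_t + n_c
            events = events_t + events_c
            expected = n_t * events / n
            v = events * (n - events) * n_t * n_c / (n**2 * (n - 1))
            return events_t - expected, v

        # hypothetical suicidality counts in drug vs placebo arms of four trials
        o_minus_e, v = peto_components([2, 0, 1, 3], [150, 120, 200, 180],
                                       [0, 0, 1, 1], [148, 118, 195, 182])
        keep = v > 0                      # the double-zero trial drops out here
        log_or = o_minus_e[keep].sum() / v[keep].sum()
        print(f"Peto OR = {np.exp(log_or):.2f}")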

  4. A mixture model-based approach to the clustering of microarray expression data.

    Science.gov (United States)

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/

  5. A Meta-Analytic Review of School-Based Prevention for Cannabis Use

    Science.gov (United States)

    Porath-Waller, Amy J.; Beasley, Erin; Beirness, Douglas J.

    2010-01-01

    This investigation used meta-analytic techniques to evaluate the effectiveness of school-based prevention programming in reducing cannabis use among youth aged 12 to 19. It summarized the results from 15 studies published in peer-reviewed journals since 1999 and identified features that influenced program effectiveness. The results from the set of…

  6. A cloud-based data network approach for translational cancer research.

    Science.gov (United States)

    Xing, Wei; Tsoumakos, Dimitrios; Ghanem, Moustafa

    2015-01-01

    We develop a new model and associated technology for constructing and managing self-organizing data to support translational cancer research studies. We employ a semantic content network approach to address the challenges of managing cancer research data. Such data is heterogeneous, large, decentralized, growing and continually being updated. Moreover, the data originates from different information sources that may be partially overlapping, creating redundancies as well as contradictions and inconsistencies. Building on the advantages of elasticity of cloud computing, we deploy the cancer data networks on top of the CELAR Cloud platform to enable more effective processing and analysis of Big cancer data.

  7. Transgender Population Size in the United States: a Meta-Regression of Population-Based Probability Samples

    Science.gov (United States)

    Sevelius, Jae M.

    2017-01-01

    Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask
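
    A simplified sketch of the meta-regression idea described above: regress the survey estimates on survey year with inverse-variance weights. This uses a fixed-effect weighting for brevity, whereas the published model was a random-effects meta-regression, and the prevalence figures and standard errors below are invented, not the surveys analyzed in the paper.

        import numpy as np

        def weighted_meta_regression(x, y, se):
            """Inverse-variance weighted regression of effect estimates on one covariate."""
            x, y, se = np.asarray(x, float), np.asarray(y, float), np.asarray(se, float)
            w = 1.0 / se**2
            X = np.column_stack([np.ones_like(x), x])
            W = np.diag(w)
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            return beta                   # [intercept, slope per unit of x]

        # hypothetical transgender-prevalence estimates (% of adults) by survey year
        years = np.array([2007, 2009, 2011, 2013, 2014, 2015])
        prevalence = np.array([0.30, 0.31, 0.35, 0.36, 0.39, 0.40])
        se = np.array([0.05, 0.04, 0.05, 0.03, 0.04, 0.03])
        intercept, slope = weighted_meta_regression(years - 2007, prevalence, se)
        print(f"estimated increase per survey year: {slope:.3f} percentage points")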

  8. Quick, “Imputation-free” meta-analysis with proxy-SNPs

    Directory of Open Access Journals (Sweden)

    Meesters Christian

    2012-09-01

    Full Text Available Abstract Background Meta-analysis (MA) is widely used to pool genome-wide association studies (GWASes) in order to (a) increase the power to detect strong or weak genotype effects or (b) serve as a result-verification method. As a consequence of differing SNP panels among genotyping chips, imputation is the method of choice within GWAS consortia to avoid losing too many SNPs in a MA. YAMAS (Yet Another Meta Analysis Software), however, enables cross-GWAS conclusions prior to finished and polished imputation runs, which can be time-consuming. Results Here we present a fast method to avoid forfeiting SNPs present in only a subset of studies, without relying on imputation. This is accomplished by using reference linkage disequilibrium data from the 1,000 Genomes/HapMap projects to find proxy-SNPs together with in-phase alleles for SNPs missing in at least one study. MA is conducted by combining association effect estimates of a SNP and those of its proxy-SNPs. Our algorithm is implemented in the MA software YAMAS. Association results from GWAS analysis applications can be used as input files for MA, tremendously speeding up MA compared to the conventional imputation approach. We show that our proxy algorithm is well-powered and yields valuable ad hoc results, possibly providing an incentive for follow-up studies. We propose our method as a quick screening step prior to imputation-based MA, as well as an additional main approach for studies without available reference data matching the ethnicities of study participants. As a proof of principle, we analyzed six dbGaP Type II Diabetes GWAS and found that the proxy algorithm clearly outperforms naïve MA on the p-value level: for 17 out of 23 we observe an improvement on the p-value level by a factor of more than two, and a maximum improvement by a factor of 2127. Conclusions YAMAS is an efficient and fast meta-analysis program which offers various methods, including conventional MA as well as inserting proxy
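
    YAMAS itself is a compiled meta-analysis program and its internals are not reproduced here; the sketch below only illustrates, with invented numbers, the general idea the abstract describes: when a study lacks the index SNP, a sign-aligned effect estimate from a proxy-SNP in strong LD is substituted, and the studies are pooled by inverse-variance weighting.

```python
import numpy as np

def fixed_effect_pool(betas, ses):
    """Inverse-variance weighted (fixed-effect) pooled estimate and SE."""
    w = 1.0 / np.asarray(ses) ** 2
    beta = np.sum(w * np.asarray(betas)) / np.sum(w)
    return beta, np.sqrt(1.0 / np.sum(w))

# Hypothetical per-study estimates for the index SNP. Study C did not genotype
# the SNP, so its entry comes from a proxy-SNP in strong LD.
studies = [
    {"name": "A", "beta": 0.11, "se": 0.04, "proxy": False},
    {"name": "B", "beta": 0.09, "se": 0.05, "proxy": False},
    # In-phase alleles indicate the proxy's effect allele tags the *other*
    # allele of the index SNP, so the sign of its beta must be flipped.
    {"name": "C", "beta": -0.13, "se": 0.06, "proxy": True, "flip": True},
]

betas, ses = [], []
for s in studies:
    b = -s["beta"] if s.get("flip") else s["beta"]
    betas.append(b)
    ses.append(s["se"])

beta, se = fixed_effect_pool(betas, ses)
print(f"pooled beta = {beta:.3f}, SE = {se:.3f}, z = {beta / se:.2f}")
```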

  9. Brute-Force Approach for Mass Spectrometry-Based Variant Peptide Identification in Proteogenomics without Personalized Genomic Data

    Science.gov (United States)

    Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.

    2018-02-01

    In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for exclusion of false variant identifications from the search results involving analysis of modifications mimicking single amino acid substitutions. Also, we propose a de novo based scoring scheme for assessment of identified point mutations. In the scheme, the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.

  10. Planning future studies based on the conditional power of a meta-analysis

    Science.gov (United States)

    Roloff, Verena; Higgins, Julian PT; Sutton, Alex J

    2013-01-01

    Systematic reviews often provide recommendations for further research. When meta-analyses are inconclusive, such recommendations typically argue for further studies to be conducted. However, the nature and amount of future research should depend on the nature and amount of the existing research. We propose a method based on conditional power to make these recommendations more specific. Assuming a random-effects meta-analysis model, we evaluate the influence of the number of additional studies, of their information sizes and of the heterogeneity anticipated among them on the ability of an updated meta-analysis to detect a prespecified effect size. The conditional powers of possible design alternatives can be summarized in a simple graph which can also be the basis for decision making. We use three examples from the Cochrane Database of Systematic Reviews to demonstrate our strategy. We demonstrate that if heterogeneity is anticipated, it might not be possible for a single study to reach the desirable power no matter how large it is. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22786670
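
    As a rough illustration of the idea, the following sketch estimates, by simulation, the chance that an updated pooled estimate reaches significance for a given number and size of new studies under anticipated heterogeneity. It is a deliberately simplified, simulation-based stand-in rather than the authors' analytic formulation: the existing meta-analysis is collapsed to a single summary estimate with hypothetical values, and new studies are pooled with fixed inverse-variance weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Current meta-analysis summary (hypothetical numbers): pooled log odds ratio
# and its standard error, plus the anticipated between-study SD (tau).
cur_est, cur_se = -0.15, 0.12
tau = 0.10          # anticipated heterogeneity among future studies
target = -0.25      # prespecified effect size the updated analysis should detect
z_crit = 1.96

def conditional_power(k_new, n_per_arm, n_sims=20_000):
    """Probability that the updated pooled estimate is significant, assuming the
    true effect equals `target` and new study effects vary with SD tau."""
    # Rough SE of a log-OR from a balanced two-arm study with ~50% event rate.
    se_new = np.sqrt(4.0 / n_per_arm + 4.0 / n_per_arm)
    hits = 0
    for _ in range(n_sims):
        thetas = rng.normal(target, tau, size=k_new)   # study-level true effects
        ests = rng.normal(thetas, se_new)              # observed effects
        betas = np.append(ests, cur_est)
        ses = np.append(np.full(k_new, se_new), cur_se)
        w = 1.0 / ses**2
        pooled = np.sum(w * betas) / np.sum(w)
        pooled_se = np.sqrt(1.0 / np.sum(w))
        hits += abs(pooled / pooled_se) > z_crit
    return hits / n_sims

for k in (1, 2, 4):
    print(k, "new studies of 300 per arm:", round(conditional_power(k, 300), 2))
```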

  11. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data.

    Directory of Open Access Journals (Sweden)

    Sergio Miranda Freire

    Full Text Available This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when

  12. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data

    Science.gov (United States)

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use

  14. No study left behind: a network meta-analysis in non-small-cell lung cancer demonstrating the importance of considering all relevant data.

    Science.gov (United States)

    Hawkins, Neil; Scott, David A; Woods, Beth S; Thatcher, Nicholas

    2009-09-01

    To demonstrate the importance of considering all relevant indirect data in a network meta-analysis of treatments for non-small-cell lung cancer (NSCLC). A recent National Institute for Health and Clinical Excellence appraisal focussed on the indirect comparison of docetaxel with erlotinib in second-line treatment of NSCLC based on trials including a common comparator. We compared the results of this analysis to a network meta-analysis including other trials that formed a network of evidence. We also examined the importance of allowing for the correlations between the estimated treatment effects that can arise when analysing such networks. The analysis of the restricted network including only trials of docetaxel and erlotinib linked via the common placebo comparator produced an estimated mean hazard ratio (HR) for erlotinib compared with docetaxel of 1.55 (95% confidence interval [CI] 0.72-2.97). In contrast, the network meta-analysis produced an estimated HR for erlotinib compared with docetaxel of 0.83 (95% CI 0.65-1.06). Analyzing the wider network improved the precision of estimated treatment effects, altered their rankings and also allowed further treatments to be compared. Some of the estimated treatment effects from the wider network were highly correlated. This empirical example shows the importance of considering all potentially relevant data when comparing treatments. Care should therefore be taken to consider all relevant information, including correlations induced by the network of trial data, when comparing treatments.
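
    The contrast drawn here between the restricted indirect comparison and the wider network can be made concrete with a minimal sketch of the adjusted indirect (Bucher-type) comparison through a common comparator, which is what the restricted analysis amounts to. The hazard ratios and standard errors below are hypothetical, not the values from the appraisal; a full network meta-analysis would additionally use every linking trial and model the correlations between comparisons.

```python
import numpy as np

def indirect_hr(hr_a_vs_c, se_log_a, hr_b_vs_c, se_log_b):
    """Adjusted indirect comparison: A vs B via common comparator C.

    Works on the log hazard-ratio scale and assumes the two estimates are
    independent, which is the assumption a wider network analysis relaxes by
    modelling correlations between comparisons."""
    log_hr = np.log(hr_a_vs_c) - np.log(hr_b_vs_c)
    se = np.sqrt(se_log_a**2 + se_log_b**2)
    lo, hi = np.exp(log_hr - 1.96 * se), np.exp(log_hr + 1.96 * se)
    return np.exp(log_hr), (lo, hi)

# Hypothetical inputs: erlotinib vs placebo and docetaxel vs placebo hazard
# ratios with standard errors of the log HR (not the values from the appraisal).
hr, ci = indirect_hr(0.80, 0.15, 0.85, 0.20)
print(f"erlotinib vs docetaxel: HR {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```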

  15. Meta-analysis to predict the effects of metabolizable amino acids on dairy cattle performance.

    Science.gov (United States)

    Lean, I J; de Ondarza, M B; Sniffen, C J; Santos, J E P; Griswold, K E

    2018-01-01

    Meta-analytic methods were used to determine statistical relationships between metabolizable AA supplies and milk protein yield, milk protein percentage, and milk yield in lactating dairy cows. Sixty-three research publications (258 treatment means) were identified through a search of published literature using 3 search engines and met the criteria for inclusion in this meta-analysis. The Cornell Net Carbohydrate and Protein System (CNCPS) version 6.5 was used to determine dietary nutrient parameters including metabolizable AA. Two approaches were used to analyze the data. First, mixed models were fitted to determine whether explanatory variables predicted responses. Each mixed model contained a global intercept, a random intercept for each experiment, and data were weighted by the inverse of the SEM squared. The second analysis approach used classical effect size meta-analytical evaluation of responses to treatment weighted by the inverse of the treatment variance and with a random effect of treatment nested within experiment. Regardless of the analytical approach, CNCPS-predicted metabolizable Met (g/d) was associated with milk protein percentage and yield. Milk yield was positively associated with CNCPS-predicted metabolizable His, Leu, Trp, Thr, and nonessential AA (g/d). Milk true protein yield was also associated with CNCPS-predicted metabolizable Leu (g/d). Predicted metabolizable Lysine (g/d) did not increase responses in production outcomes. However, mean metabolizable Lys supply was less than typically recommended and the change with treatment was minimal (157 vs. 162 g; 6.36 vs. 6.38% metabolizable protein). Experiments based solely on Lys or Met interventions were excluded from the study database. It is possible that the inclusion of these experiments may have provided additional insight into the effect of these AA on responses. This meta-analysis supports other research indicating a positive effect of Met and His as co-limiting AA in dairy cows and

  16. The effectiveness and safety of antifibrinolytics in patients with acute intracranial haemorrhage: statistical analysis plan for an individual patient data meta-analysis

    OpenAIRE

    Ker, Katharine; Prieto-Merino, David; Sprigg, Nikola; Mahmood, Abda; Bath, Philip; Kang Law, Zhe; Flaherty, Katie; Roberts, Ian

    2017-01-01

    Introduction: The Antifibrinolytic Trialists Collaboration aims to increase knowledge about the effectiveness and safety of antifibrinolytic treatment by conducting individual patient data (IPD) meta-analyses of randomised trials. This article presents the statistical analysis plan for an IPD meta-analysis of the effects of antifibrinolytics for acute intracranial haemorrhage. Methods: The protocol for the IPD meta-analysis has been registered with PROSPERO (CRD42016052155). We will conduct a...

  17. Evaluation of the association between acne and smoking: systematic review and meta-analysis of cross-sectional studies

    Directory of Open Access Journals (Sweden)

    Alice Mannocci

    2010-09-01

    Full Text Available

    Background: Acne vulgaris is one of the most common skin diseases with a multifactorial pathogenesis. Examination of the literature regarding the contribution of smoking to acne shows contradictory results. The aim of this study was to undertake a systematic review of the literature and meta-analysis about the association between acne and smoking.

    Methods: A systematic review and, when possible, meta-analysis were performed. The literature review was based on PubMed, Scopus and Google Scholar searches using the keywords “(smoking OR tobacco OR nicotine OR cigarettes) AND acne”. Only cross-sectional studies were included. Meta-analyses were performed using the RevMan software version 5 for Windows. Four different meta-analyses were carried out: one evaluating the association between smoking habit and acne, one including data stratified by gender, one for studies with a quality score > 6, and one relating to acne classification.

    Results: Six studies were selected. The first meta-analysis, including all studies, showed a non-significant role of smoking in the development of acne: OR 1.05 (95% CI: 0.66–1.67) with a random-effects estimate. The second meta-analysis, including data stratified by gender, showed an OR of 0.99 (95% CI: 0.57–1.73) for males and an OR of 1.45 (95% CI: 0.08–24.64) for females, using random effects in both cases because of the heterogeneity. The third meta-analysis, which included studies with a quality score > 6, resulted in an estimated OR of 0.69 (95% CI: 0.55–0.85); in this case it was possible to use the fixed-effect estimate. The last meta-analysis, concerning the severity grading, showed a non-significant result: OR 1.09 (95% CI: 0.61–1.95) using the random-effects approach.

    Conclusions: The first two meta-analyses found no significant association between smoking and

  18. Interactive and Approachable Web-Based Tools for Exploring Global Geophysical Data Records

    Science.gov (United States)

    Croteau, M. J.; Nerem, R. S.; Merrifield, M. A.; Thompson, P. R.; Loomis, B. D.; Wiese, D. N.; Zlotnicki, V.; Larson, J.; Talpe, M.; Hardy, R. A.

    2017-12-01

    Making global and regional data accessible and understandable for non-experts can be both challenging and hazardous. While data products are often developed with end users in mind, the ease of use of these data can vary greatly. Scientists must take care to provide detailed guides for how to use data products to ensure users are not incorrectly applying data to their problem. For example, terrestrial water storage data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission is notoriously difficult for non-experts to access and correctly use. However, allowing these data to be easily accessible to scientists outside the GRACE community is desirable because it would allow the data to see much more widespread use. We have developed a web-based interactive mapping and plotting tool that provides easy access to geophysical data. This work presents an intuitive method for making such data widely accessible to experts and non-experts alike, making the data approachable and ensuring its proper use. This tool has proven helpful to experts by providing fast and detailed access to the data. Simultaneously, the tool allows non-experts to gain familiarity with the information contained in the data and access to that information for both scientific studies and public use. In this presentation, we discuss the development of this tool and its application to both GRACE and ocean altimetry satellite missions, and demonstrate the capabilities of the tool. Focusing on the data visualization aspects of the tool, we showcase our integrations of the Mapbox API and the D3.js data-driven web document framework. We then explore the potential of these tools in other web-based visualization projects, and how incorporation of such tools into science can improve the presentation of research results. We demonstrate how the development of an interactive and exploratory resource can enable further layers of exploratory and scientific discovery.

  19. Evaluation Considerations for Secondary Uses of Clinical Data: Principles for an Evidence-based Approach to Policy and Implementation of Secondary Analysis.

    Science.gov (United States)

    Scott, P J; Rigby, M; Ammenwerth, E; McNair, J Brender; Georgiou, A; Hyppönen, H; de Keizer, N; Magrabi, F; Nykänen, P; Gude, W T; Hackl, W

    2017-08-01

    Objectives: To set the scientific context and then suggest principles for an evidence-based approach to secondary uses of clinical data, covering both evaluation of the secondary uses of data and evaluation of health systems and services based upon secondary uses of data. Method: Working Group review of selected literature and policy approaches. Results: We present important considerations in the evaluation of secondary uses of clinical data from the angles of governance and trust, theory, semantics, and policy. We make the case for a multi-level and multi-factorial approach to the evaluation of secondary uses of clinical data and describe a methodological framework for best practice. We emphasise the importance of evaluating the governance of secondary uses of health data in maintaining trust, which is essential for such uses. We also offer examples of the re-use of routine health data to demonstrate how it can support evaluation of clinical performance and optimize health IT system design. Conclusions: Great expectations are resting upon "Big Data" and innovative analytics. However, to build and maintain public trust, improve data reliability, and assure the validity of analytic inferences, there must be independent and transparent evaluation. A mature and evidence-based approach needs not merely data science, but must be guided by the broader concerns of applied health informatics. Georg Thieme Verlag KG Stuttgart.

  20. A data base approach for prediction of deforestation-induced mass wasting events

    Science.gov (United States)

    Logan, T. L.

    1981-01-01

    A major topic of concern in timber management is determining the impact of clear-cutting on slope stability. Deforestation treatments on steep mountain slopes have often resulted in a high frequency of major mass wasting events. The Geographic Information System (GIS) is a potentially useful tool for predicting the location of mass wasting sites. With a raster-based GIS, digitally encoded maps of slide hazard parameters can be overlaid and modeled to produce new maps depicting high probability slide areas. The present investigation examines the raster-based information system as a tool for predicting which clear-cut mountain slopes are most likely to experience shallow soil debris avalanches. A literature overview is conducted, taking into account vegetation, roads, precipitation, soil type, slope-angle and aspect, and models predicting mass soil movements. Attention is given to a data base approach and aspects of slide prediction.
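
    A minimal sketch of the raster-overlay idea follows: co-registered hazard-parameter layers are combined cell by cell into a relative hazard score. The layers, weights, and thresholds are invented for illustration and are not calibrated values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical co-registered raster layers on a 100 x 100 cell grid.
slope_deg = rng.uniform(0, 45, size=(100, 100))         # slope angle
soil_depth_m = rng.uniform(0.2, 2.0, size=(100, 100))   # shallow soils fail first
clearcut = rng.random((100, 100)) < 0.3                 # recently deforested cells

# Simple weighted overlay model; weights and thresholds are illustrative only.
hazard = (
    0.5 * (slope_deg > 30)          # steep slopes
    + 0.3 * (soil_depth_m < 0.5)    # thin soils
    + 0.2 * clearcut                # loss of root reinforcement after clear-cutting
)

high_risk = hazard >= 0.7           # cells flagged as likely debris-avalanche sites
print("high-risk cells:", int(high_risk.sum()), "of", high_risk.size)
```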

  1. Development of composite outcomes for individual patient data (IPD) meta-analysis on the effects of diet and lifestyle in pregnancy

    DEFF Research Database (Denmark)

    Rogozinska, Ewelina; D'Amico, M. I.; Khan, Khalid S

    2016-01-01

    Objective To develop maternal, fetal, and neonatal composite outcomes relevant to the evaluation of diet and lifestyle interventions in pregnancy by individual patient data (IPD) meta-analysis. Design Delphi survey. Setting The International Weight Management in Pregnancy (i-WIP) collaborative...... by IPD meta-analysis. Tweetable abstract Composite outcomes in IPD meta-analysis on diet and lifestyle in pregnancy. This article includes Author Insights, a video abstract available at https...... and the following were included in the final composite: pre-eclampsia or pregnancy-induced hypertension, gestational diabetes mellitus (GDM), elective or emergency caesarean section, and preterm delivery. Of the 27 fetal and neonatal outcomes, nine were further evaluated, with the final composite consisting

  2. Trust in automation and meta-cognitive accuracy in NPP operating crews

    Energy Technology Data Exchange (ETDEWEB)

    Skraaning Jr, G.; Miberg Skjerve, A. B. [OECD Halden Reactor Project, PO Box 173, 1751 Halden (Norway)

    2006-07-01

    Nuclear power plant operators can over-trust or under-trust automation. Operator trust in automation is said to be mis-calibrated when the level of trust does not correspond to the actual level of automation reliability. A possible consequence of mis-calibrated trust is degraded meta-cognitive accuracy. Meta-cognitive accuracy is the ability to correctly monitor the effectiveness of one's own performance while engaged in complex tasks. When operators misjudge their own performance, human control actions will be poorly regulated and safety and/or efficiency may suffer. An analysis of simulator data showed that meta-cognitive accuracy and trust in automation were highly correlated for knowledge-based scenarios, but uncorrelated for rule-based scenarios. In the knowledge-based scenarios, the operators overestimated their performance effectiveness under high levels of trust and underestimated it under low levels of trust, but showed realistic self-assessment under intermediate levels of trust in automation. The result was interpreted to suggest that trust in automation impacts the meta-cognitive accuracy of the operators. (authors)

  3. Trust in automation and meta-cognitive accuracy in NPP operating crews

    International Nuclear Information System (INIS)

    Skraaning Jr, G.; Miberg Skjerve, A. B.

    2006-01-01

    Nuclear power plant operators can over-trust or under-trust automation. Operator trust in automation is said to be mis-calibrated when the level of trust does not correspond to the actual level of automation reliability. A possible consequence of mis-calibrated trust is degraded meta-cognitive accuracy. Meta-cognitive accuracy is the ability to correctly monitor the effectiveness of one's own performance while engaged in complex tasks. When operators misjudge their own performance, human control actions will be poorly regulated and safety and/or efficiency may suffer. An analysis of simulator data showed that meta-cognitive accuracy and trust in automation were highly correlated for knowledge-based scenarios, but uncorrelated for rule-based scenarios. In the knowledge-based scenarios, the operators overestimated their performance effectiveness under high levels of trust and underestimated it under low levels of trust, but showed realistic self-assessment under intermediate levels of trust in automation. The result was interpreted to suggest that trust in automation impacts the meta-cognitive accuracy of the operators. (authors)

  4. Graph-based sequence annotation using a data integration approach.

    Science.gov (United States)

    Pesch, Robert; Lysenko, Artem; Hindle, Matthew; Hassani-Pak, Keywan; Thiele, Ralf; Rawlings, Christopher; Köhler, Jacob; Taubert, Jan

    2008-08-25

    The automated annotation of data from high throughput sequencing and genomics experiments is a significant challenge for bioinformatics. Most current approaches rely on sequential pipelines of gene finding and gene function prediction methods that annotate a gene with information from different reference data sources. Each function prediction method contributes evidence supporting a functional assignment. Such approaches generally ignore the links between the information in the reference datasets. These links, however, are valuable for assessing the plausibility of a function assignment and can be used to evaluate the confidence in a prediction. We are working towards a novel annotation system that uses the network of information supporting the function assignment to enrich the annotation process for use by expert curators and predicting the function of previously unannotated genes. In this paper we describe our success in the first stages of this development. We present the data integration steps that are needed to create the core database of integrated reference databases (UniProt, PFAM, PDB, GO and the pathway database Ara-Cyc) which has been established in the ONDEX data integration system. We also present a comparison between different methods for integration of GO terms as part of the function assignment pipeline and discuss the consequences of this analysis for improving the accuracy of gene function annotation. The methods and algorithms presented in this publication are an integral part of the ONDEX system which is freely available from http://ondex.sf.net/.
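
    ONDEX is a Java-based integration system, so the sketch below is only a toy illustration of the underlying idea using networkx: reference resources are merged into one graph, and the number of independent paths linking a gene to a candidate GO term is used as a crude measure of supporting evidence. The node names, edge sources, and scoring rule are all hypothetical.

```python
import networkx as nx

# Toy integrated reference graph: nodes are genes, proteins, domains and GO
# terms; edges carry the dataset they came from. Content is entirely invented.
g = nx.Graph()
g.add_edge("geneX", "P12345", source="UniProt")       # gene -> protein
g.add_edge("P12345", "PF00069", source="PFAM")        # protein -> domain
g.add_edge("P12345", "GO:0016301", source="UniProt")  # direct GO annotation
g.add_edge("PF00069", "GO:0016301", source="PFAM")    # domain-level GO mapping
g.add_edge("P12345", "GO:0005634", source="UniProt")

def annotation_support(graph, gene, go_term, cutoff=3):
    """Count distinct short paths linking a gene to a GO term; more independent
    paths through the integrated references mean more confidence in the call."""
    paths = nx.all_simple_paths(graph, gene, go_term, cutoff=cutoff)
    return sum(1 for _ in paths)

for term in ("GO:0016301", "GO:0005634"):
    print(term, "supporting paths:", annotation_support(g, "geneX", term))
```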

  5. Evaluation of patient involvement in a systematic review and meta-analysis of individual patient data in cervical cancer treatment

    Directory of Open Access Journals (Sweden)

    Vale Claire L

    2012-05-01

    Full Text Available Abstract Background In April 2005, researchers based at the Medical Research Council Clinical Trials Unit set out to involve women affected by cervical cancer in a systematic review and meta-analysis of individual patient data to evaluate treatments for this disease. Each of the women had previously been treated for cervical cancer. Following completion of the meta-analysis, we aimed to evaluate the process of involvement from the researcher and research partner perspective. Methods An advisory group was first established to give advice on recruiting, supporting and involving women, and led to efforts to recruit women to take part in the systematic review using different approaches. Evaluation of the process and outcomes of the partnership between the systematic reviewers and the patients, with respect to what the partnership achieved, what worked well and what the difficulties were, what was learned, and the resource requirements, took place during the conduct of the meta-analysis and again after completion of the project. Results Six women, each of whom had received treatments for cervical cancer, were recruited as Patient Research Partners and five of these women subsequently took part in a variety of activities around the systematic review. They attended progress meetings and all but one attended a meeting at which the first results of the review were presented to all collaborators and gave feedback. Three of the women also became involved in a further related research project which led to an editorial publication from the patient perspective and also participated, along with two lead researchers, in the evaluation of the process and outcomes. While they were generally positive about the experience, one Patient Research Partner questioned the extent of the impact patients could make on the systematic review process. Conclusions In general, researchers and patient research partners felt that they had learned a lot from the process and considered

  6. Pay No Attention to That Data Behind the Curtain: On Angry Birds, Happy Children, Scholarly Squabbles, Publication Bias, and Why Betas Rule Metas.

    Science.gov (United States)

    Ferguson, Christopher J

    2015-09-01

    This article responds to five comments on my "Angry Birds" meta-analysis of video game influences on children (Ferguson, 2015, this issue). Given ongoing debates on video game influences, comments varied from the supportive to the self-proclaimed "angry," yet hopefully they and this response will contribute to constructive discussion as the field moves forward. In this reply, I address some misconceptions in the comments and present data that challenge the assumption that standardized regression coefficients are invariably unsuitable for meta-analysis or that bivariate correlations are invariably suitable for meta-analysis. The suitability of any data should be considered on a case-by-case basis, and data indicates that the coefficients included in the "Angry Birds" meta-analysis did not distort results. Study selection, effect size extraction, and interpretation improved upon problematic issues in other recent meta-analyses. Further evidence is also provided to support the contention that publication bias remains problematic in video game literature. Sources of acrimony among scholars are explored as are areas of agreement. Ultimately, debates will only be resolved through a commitment to newer, more rigorous methods and open science. © The Author(s) 2015.

  7. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    Science.gov (United States)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.
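
    The outer loop of such a workflow can be sketched as a float-encoded genetic algorithm searching over the zone parameters. In the sketch below the inner, depth-by-depth linearized inversion is replaced by a synthetic misfit function with a known optimum, and the parameter names, bounds, and GA settings are invented for illustration; it is not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in misfit: in the real workflow this would run the linearized inversion
# with the candidate zone parameters and return the measured-vs-calculated data
# misfit. Here it is a synthetic bowl with a known optimum, purely to exercise
# the outer genetic loop.
TRUE_ZONE = np.array([2.65, 0.08])   # e.g. matrix density, a clay parameter (hypothetical)

def misfit(zone_params):
    return float(np.sum((zone_params - TRUE_ZONE) ** 2))

def float_ga(bounds, pop_size=40, generations=60, mut_sigma=0.05):
    """Minimal float-encoded GA: tournament selection, blend crossover, Gaussian mutation."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fit = np.array([misfit(ind) for ind in pop])
        new_pop = [pop[np.argmin(fit)]]                   # elitism: keep the best
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]     # tournament pick 1
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]     # tournament pick 2
            w = rng.random()
            child = w * a + (1 - w) * b                   # blend crossover
            child += rng.normal(0, mut_sigma, size=child.shape)
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([misfit(ind) for ind in pop])
    return pop[np.argmin(fit)]

bounds = np.array([[2.0, 3.0], [0.01, 0.5]])
print("estimated zone parameters:", float_ga(bounds).round(3))
```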

  8. A systematic meta-review of evaluations of youth violence prevention programs: Common and divergent findings from 25 years of meta-analyses and systematic reviews

    Science.gov (United States)

    Matjasko, Jennifer L.; Vivolo-Kantor, Alana M.; Massetti, Greta M.; Holland, Kristin M.; Holt, Melissa K.; Cruz, Jason Dela

    2018-01-01

    Violence among youth is a pervasive public health problem. In order to make progress in reducing the burden of injury and mortality that result from youth violence, it is imperative to identify evidence-based programs and strategies that have a significant impact on violence. There have been many rigorous evaluations of youth violence prevention programs. However, the literature is large, and it is difficult to draw conclusions about what works across evaluations from different disciplines, contexts, and types of programs. The current study reviews the meta-analyses and systematic reviews published prior to 2009 that synthesize evaluations of youth violence prevention programs. This meta-review reports the findings from 37 meta-analyses and 15 systematic reviews; the included reviews were coded on measures of the social ecology, prevention approach, program type, and study design. A majority of the meta-analyses and systematic reviews were found to demonstrate moderate program effects. Meta-analyses yielded marginally smaller effect sizes compared to systematic reviews, and those that included programs targeting family factors showed marginally larger effects than those that did not. In addition, there are a wide range of individual/family, program, and study moderators of program effect sizes. Implications of these findings and suggestions for future research are discussed. PMID:29503594

  9. Community-based management versus traditional hospitalization in treatment of drug-resistant tuberculosis: a systematic review and meta-analysis.

    Science.gov (United States)

    Williams, Abimbola Onigbanjo; Makinde, Olusesan Ayodeji; Ojo, Mojisola

    2016-01-01

    Multidrug-resistant tuberculosis (MDR-TB) and extensively drug-resistant tuberculosis (XDR-TB) have emerged as significant public health threats worldwide. This systematic review and meta-analysis aimed to compare the effects of community-based treatment with traditional hospitalization in improving treatment success rates among MDR-TB and XDR-TB patients in the 27 MDR-TB high-burden countries (HBC). We searched PubMed, Cochrane, Lancet, Web of Science, the International Journal of Tuberculosis and Lung Disease, and the Centre for Reviews and Dissemination (CRD) for studies on community-based treatment, traditional hospitalization, and MDR-TB and XDR-TB from the 27 MDR-TB HBC. Data on treatment success and failure rates were extracted from retrospective and prospective cohort studies, and a case-control study. Sensitivity analysis, subgroup analyses, and meta-regression analysis were used to explore bias and potential sources of heterogeneity. The final sample included 16 studies involving 3344 patients from nine countries: Bangladesh, China, Ethiopia, Kenya, India, South Africa, the Philippines, Russia, and Uzbekistan. Based on a random-effects model, we observed a higher treatment success rate with community-based treatment (point estimate = 0.68, 95% CI: 0.59 to 0.76) than with traditional hospitalization. Studies with treatment duration > 18 months and regimens containing more than five drugs reported higher treatment success rates. In the meta-regression model, patient age, adverse events, treatment duration, and loss to follow-up explain some of the heterogeneity of treatment effects between studies. Community-based management improved treatment outcomes. A mix of interventions with DOTS-Plus throughout therapy and treatment duration > 18 months, as well as strategies in place for loss to follow-up and adverse events, should be considered in MDR-TB and XDR-TB interventions, as they positively influenced treatment success.

  10. One-stage individual participant data meta-analysis models: estimation of treatment-covariate interactions must avoid ecological bias by separating out within-trial and across-trial information.

    Science.gov (United States)

    Hua, Hairui; Burke, Danielle L; Crowther, Michael J; Ensor, Joie; Tudur Smith, Catrin; Riley, Richard D

    2017-02-28

    Stratified medicine utilizes individual-level covariates that are associated with a differential treatment effect, also known as treatment-covariate interactions. When multiple trials are available, meta-analysis is used to help detect true treatment-covariate interactions by combining their data. Meta-regression of trial-level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta-analyses are preferable to examine interactions utilizing individual-level information. However, one-stage IPD models are often wrongly specified, such that interactions are based on amalgamating within- and across-trial information. We compare, through simulations and an applied example, fixed-effect and random-effects models for a one-stage IPD meta-analysis of time-to-event data where the goal is to estimate a treatment-covariate interaction. We show that it is crucial to centre patient-level covariates by their mean value in each trial, in order to separate out within-trial and across-trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta-analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is -0.011 (95% CI: -0.019 to -0.003; p = 0.004), and thus highly significant, when amalgamating within-trial and across-trial information. However, when separating within-trial from across-trial information, the interaction is -0.007 (95% CI: -0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta-analysts should only use within-trial information to examine individual predictors of treatment effect and that one-stage IPD models should separate within-trial from across-trial information to avoid ecological bias. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd
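
    The centring step the authors recommend is straightforward to implement. The pandas sketch below uses invented data and only shows how the within-trial (mean-centred) and across-trial (trial-mean) covariates and their treatment interactions would be constructed before fitting, for example, a one-stage Cox model; the model fitting itself is omitted.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical IPD: 5 trials, patient age and treatment arm (outcome columns omitted).
ipd = pd.DataFrame({
    "trial": np.repeat(np.arange(5), 200),
    "age": np.concatenate([rng.normal(m, 10, 200) for m in (35, 45, 50, 60, 65)]),
    "treat": np.tile([0, 1], 500),
})

# Centre age on its trial-specific mean so the treatment-by-age interaction uses
# only within-trial information; the trial mean itself carries the across-trial
# information and enters the model as a separate term.
trial_mean = ipd.groupby("trial")["age"].transform("mean")
ipd["age_centred"] = ipd["age"] - trial_mean
ipd["age_trial_mean"] = trial_mean

# Two distinct interaction covariates for the one-stage model, e.g.
#   h(t) = h0(t) * exp(b1*treat + b2*treat*age_centred + b3*treat*age_trial_mean + ...)
ipd["treat_x_age_within"] = ipd["treat"] * ipd["age_centred"]
ipd["treat_x_age_across"] = ipd["treat"] * ipd["age_trial_mean"]

print(ipd[["trial", "age_centred", "treat_x_age_within", "treat_x_age_across"]].head())
```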

  11. Weighing Evidence "Steampunk" Style via the Meta-Analyser.

    Science.gov (United States)

    Bowden, Jack; Jackson, Chris

    2016-10-01

    The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.

  12. Work-related critical incidents in hospital-based health care providers and the risk of post-traumatic stress symptoms, anxiety, and depression: a meta-analysis

    NARCIS (Netherlands)

    de Boer, Jacoba; Lok, Anja; van 't Verlaat, Ellen; Duivenvoorden, Hugo J.; Bakker, Arnold B.; Smit, Bert J.

    2011-01-01

    This meta-analysis reviewed existing data on the impact of work-related critical incidents in hospital-based health care professionals. Work-related critical incidents may induce post-traumatic stress symptoms or even post-traumatic stress disorder (PTSD), anxiety, and depression and may negatively

  13. School-Based Sleep Education Programs for Short Sleep Duration in Adolescents: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Chung, Ka-Fai; Chan, Man-Sum; Lam, Ying-Yin; Lai, Cindy Sin-Yee; Yeung, Wing-Fai

    2017-06-01

    Insufficient sleep among students is a major school health problem. School-based sleep education programs tailored to reach large number of students may be one of the solutions. A systematic review and meta-analysis was conducted to summarize the programs' effectiveness and current status. Electronic databases were searched up until May 2015. Randomized controlled trials of school-based sleep intervention among 10- to 19-year-old students with outcome on total sleep duration were included. Methodological quality of the studies was assessed using the Cochrane's risk of bias assessment. Seven studies were included, involving 1876 students receiving sleep education programs and 2483 attending classes-as-usual. Four weekly 50-minute sleep education classes were most commonly provided. Methodological quality was only moderate, with a high or an uncertain risk of bias in several domains. Compared to classes-as-usual, sleep education programs produced significantly longer weekday and weekend total sleep time and better mood among students at immediate post-treatment, but the improvements were not maintained at follow-up. Limited by the small number of studies and methodological limitations, the preliminary data showed that school-based sleep education programs produced short-term benefits. Future studies should explore integrating sleep education with delayed school start time or other more effective approaches. © 2017, American School Health Association.

  14. MetaQUAST: evaluation of metagenome assemblies.

    Science.gov (United States)

    Mikheenko, Alla; Saveliev, Vladislav; Gurevich, Alexey

    2016-04-01

    During the past years we have witnessed the rapid development of new metagenome assembly methods. Although there are many benchmark utilities designed for single-genome assemblies, there is no well-recognized evaluation and comparison tool for their metagenomic analogues. In this article, we present MetaQUAST, a modification of QUAST, the state-of-the-art tool for genome assembly evaluation based on alignment of contigs to a reference. MetaQUAST addresses such metagenome dataset features as (i) unknown species content, by detecting and downloading reference sequences; (ii) huge diversity, by giving comprehensive reports for multiple genomes; and (iii) the presence of closely related species, by detecting chimeric contigs. We demonstrate MetaQUAST's performance by comparing several leading assemblers on one simulated and two real datasets. Availability: http://bioinf.spbau.ru/metaquast; contact: aleksey.gurevich@spbu.ru. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. A meta-model for computer executable dynamic clinical safety checklists.

    Science.gov (United States)

    Nan, Shan; Van Gorp, Pieter; Lu, Xudong; Kaymak, Uzay; Korsten, Hendrikus; Vdovjak, Richard; Duan, Huilong

    2017-12-12

    A safety checklist is a type of cognitive tool that supports the short-term memory of medical workers, with the purpose of reducing medical errors caused by items being overlooked or forgotten. To facilitate the daily use of safety checklists, computerized systems embedded in the clinical workflow and adapted to the patient context are increasingly developed. However, the current hard-coded approach to implementing checklists in these systems increases the cognitive effort of clinical experts and the coding effort of informaticists. This is due to the lack of a formal representation format that is both understandable by clinical experts and executable by computer programs. We developed a dynamic checklist meta-model with a three-step approach. Dynamic checklist modeling requirements were extracted by performing a domain analysis. Then, existing modeling approaches and tools were investigated with the purpose of reusing these languages. Finally, the meta-model was developed by eliciting domain concepts and their hierarchies. The feasibility of using the meta-model was validated by two case studies. The meta-model was mapped to specific modeling languages according to the requirements of the hospitals. Using the proposed meta-model, a comprehensive coronary artery bypass graft peri-operative checklist set and a percutaneous coronary intervention peri-operative checklist set have been developed in a Dutch hospital and a Chinese hospital, respectively. The results show that it is feasible to use the meta-model to facilitate the modeling and execution of dynamic checklists. We propose a novel meta-model for dynamic checklists with the purpose of facilitating their creation. The meta-model is a framework for reusing existing modeling languages and tools to model dynamic checklists. The feasibility of using the meta-model is validated by implementing a use case in the system.

  16. Open Versus Laparoscopic Approach for Morgagni's Hernia in Infants and Children: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Lauriti, Giuseppe; Zani-Ruttenstock, Elke; Catania, Vincenzo D; Antounians, Lina; Lelli Chiesa, Pierluigi; Pierro, Agostino; Zani, Augusto

    2018-05-18

    The laparoscopic repair of Morgagni's hernia (MH) has been reported to be safe and feasible. However, it is still unclear whether laparoscopy is superior to open surgery in repairing MH. Using a defined search strategy, three investigators independently identified all comparative studies reporting data on open and laparoscopic MH repair in children; 39 patients (42%) underwent laparoscopy and the remainder open approaches. Meta-analysis: the length of surgery was shorter with laparoscopy (50.5 ± 17.0 min) than with the open procedure (90.0 ± 15.0 min), and hospital stay was shorter than with open surgery (4.5 ± 2.1 days). There was no significant difference in postoperative complications (open: 9.4% ± 1.6%; P = .087) or recurrences (laparoscopy: 2.9% ± 5.0%, open: 5.7% ± 1.8%; P = .84). Comparative studies indicate that laparoscopic MH repair can be performed in infants and children. Laparoscopy is associated with a shorter length of surgery and hospital stay in comparison to the open procedure. Prospective randomized studies would be needed to confirm the present data.

  17. Risk factors for radiation-induced hypothyroidism: A Literature-Based Meta-Analysis

    DEFF Research Database (Denmark)

    Vogelius, Ivan R; Bentzen, Søren; Maraldo, Maja V

    2011-01-01

    BACKGROUND: A systematic overview and meta-analysis of studies reporting data on hypothyroidism (HT) after radiation therapy was conducted to identify risk factors for development of HT. METHODS: Published studies were identified from the PubMed and Embase databases and by hand-searching published...... reviews. Studies allowing the extraction of odds ratios (OR) for HT in 1 or more of several candidate clinical risk groups were included. A meta-analysis of the OR for development of HT with or without each of the candidate risk factors was performed. Furthermore, studies allowing the extraction......% risk of HT at a dose of 45 Gy but with considerable variation in the dose response between studies. Chemotherapy and age were not associated with risk of HT in this analysis. CONCLUSIONS: Several clinical risk factors for HT were identified. The risk of HT increases with increasing radiation dose...

  18. Mindfulness-based interventions for binge eating: a systematic review and meta-analysis.

    Science.gov (United States)

    Godfrey, Kathryn M; Gallo, Linda C; Afari, Niloofar

    2015-04-01

    Mindfulness-based interventions are increasingly used to treat binge eating. The effects of these interventions have not been reviewed comprehensively. This systematic review and meta-analysis sought to summarize the literature on mindfulness-based interventions and determine their impact on binge eating behavior. PubMed, Web of Science, and PsycINFO were searched using the keywords binge eating, overeating, objective bulimic episodes, acceptance and commitment therapy, dialectical behavior therapy, mindfulness, meditation, mindful eating. Of 151 records screened, 19 studies met inclusion criteria. Most studies showed effects of large magnitude. Results of random-effects meta-analyses supported large or medium-large effects of these interventions on binge eating (within-group random-effects mean Hedges' g = -1.12, 95% CI -1.67, -0.80, k = 18; between-group mean Hedges' g = -0.70, 95% CI -1.16, -0.24, k = 7). However, there was high statistical heterogeneity among the studies (within-group I² = 93%; between-group I² = 90%). Limitations and future research directions are discussed.

  19. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses?

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Devereaux, P J; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Results from apparently conclusive meta-analyses may be false. A limited number of events from a few small trials and the associated random error may be under-recognized sources of spurious findings. The information size (IS, i.e. number of participants) required for a reliable......-analyses after each included trial and evaluated their results using a conventional statistical criterion (alpha = 0.05) and two-sided Lan-DeMets monitoring boundaries. We examined the proportion of false positive results and important inaccuracies in estimates of treatment effects that resulted from the two...... approaches. RESULTS: Using the random-effects model and final data, 12 of the meta-analyses yielded P > alpha = 0.05, and 21 yielded P ≤ alpha = 0.05. The monitoring boundaries eliminated all false positives. Important inaccuracies in estimates were observed in 6 out of 21 meta-analyses using the conventional...

  20. Effectiveness of prenatal treatment for congenital toxoplasmosis: a meta-analysis of individual patients' data

    DEFF Research Database (Denmark)

    Thiébaut, Rodolphe; Leproust, Sandy; Chêne, Geneviève

    2007-01-01

    BACKGROUND: Despite three decades of prenatal screening for congenital toxoplasmosis in some European countries, uncertainty remains about the effectiveness of prenatal treatment. METHODS: We did a systematic review of cohort studies based on universal screening for congenital toxoplasmosis. We did...... a meta-analysis using individual patients' data to assess the effect of timing and type of prenatal treatment on mother-to-child transmission of infection and clinical manifestations before age 1 year. Analyses were adjusted for gestational age at maternal seroconversion and other covariates. FINDINGS......: We included 26 cohorts in the review. In 1438 treated mothers identified by prenatal screening, we found weak evidence that treatment started within 3 weeks of seroconversion reduced mother-to-child transmission compared with treatment started after 8 or more weeks (adjusted odds ratio [OR] 0.48, 95...

  1. Where to go from here? An exploratory meta-analysis of the most promising approaches to depression prevention programs for children and adolescents.

    Science.gov (United States)

    Hetrick, Sarah E; Cox, Georgina R; Merry, Sally N

    2015-04-30

    To examine the overall effect of individual depression prevention programs on future likelihood of depressive disorder and reduction in depressive symptoms. In addition, we have investigated whether Cognitive Behavioural Therapy (CBT), Interpersonal Therapy (IPT) and other therapeutic techniques may modify this effectiveness. This study is based on and includes the trial data from meta-analyses conducted in the Cochrane systematic review of depression prevention programs for children and adolescents by Merry et al. (2011). All trials were published or unpublished English language randomized controlled trials (RCTs) or cluster RCTs of any psychological or educational intervention compared to no intervention to prevent depression in children and adolescents aged 5-19 years. There is some evidence that the therapeutic approach used in prevention programs modifies the overall effect. CBT is the most studied type of intervention for depression prevention, and there is some evidence of its effectiveness in reducing the risk of developing a depressive disorder, particularly in targeted populations. Fewer studies employed IPT, however this approach appears promising. To our knowledge, this is the first study to have explored how differences in the approach taken in the prevention programs modify the overall treatment effects of prevention programs for children and adolescents. More research is needed to identify the specific components of CBT that are most effective or indeed if there are other approaches that are more effective in reducing the risk of future depressive episodes. It is imperative that prevention programs are suitable for large scale roll-out, and that emerging popular modes of delivery, such as online dissemination continue to be rigorously tested.

  2. Utilizing a structural meta-ontology for family-based quality assurance of the BioPortal ontologies.

    Science.gov (United States)

    Ochs, Christopher; He, Zhe; Zheng, Ling; Geller, James; Perl, Yehoshua; Hripcsak, George; Musen, Mark A

    2016-06-01

    An Abstraction Network is a compact summary of an ontology's structure and content. In previous research, we showed that Abstraction Networks support quality assurance (QA) of biomedical ontologies. The development of an Abstraction Network and its associated QA methodologies, however, is a labor-intensive process that previously was applicable only to one ontology at a time. To improve the efficiency of the Abstraction-Network-based QA methodology, we introduced a QA framework that uses uniform Abstraction Network derivation techniques and QA methodologies that are applicable to whole families of structurally similar ontologies. For the family-based framework to be successful, it is necessary to develop a method for classifying ontologies into structurally similar families. We now describe a structural meta-ontology that classifies ontologies according to certain structural features that are commonly used in the modeling of ontologies (e.g., object properties) and that are important for Abstraction Network derivation. Each class of the structural meta-ontology represents a family of ontologies with identical structural features, indicating which types of Abstraction Networks and QA methodologies are potentially applicable to all of the ontologies in the family. We derive a collection of 81 families, corresponding to classes of the structural meta-ontology, that enable a flexible, streamlined family-based QA methodology, offering multiple choices for classifying an ontology. The structure of 373 ontologies from the NCBO BioPortal is analyzed and each ontology is classified into multiple families modeled by the structural meta-ontology. Copyright © 2016 Elsevier Inc. All rights reserved.
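
    A minimal sketch of the family idea, assuming a toy feature extraction step: each ontology is reduced to the set of structural features it uses, and ontologies sharing an identical feature set fall into the same family. The ontology names and feature labels below are hypothetical; the actual framework derives such features (e.g., object properties) from the BioPortal ontologies themselves.

```python
# Hypothetical sketch: grouping ontologies into "families" by their structural
# features, in the spirit of the structural meta-ontology described above.
# The feature sets below are invented for illustration; the real framework
# derives them from BioPortal ontologies.
from collections import defaultdict

# ontology name -> set of structural features it uses (hypothetical data)
ontologies = {
    "OntologyA": frozenset({"class_hierarchy", "object_properties"}),
    "OntologyB": frozenset({"class_hierarchy", "object_properties"}),
    "OntologyC": frozenset({"class_hierarchy"}),
    "OntologyD": frozenset({"class_hierarchy", "object_properties", "attribute_relationships"}),
}

# Each distinct combination of features defines one family; every ontology in
# a family is amenable to the same Abstraction Network types and QA methods.
families = defaultdict(list)
for name, features in ontologies.items():
    families[features].append(name)

for features, members in families.items():
    print(sorted(features), "->", members)
```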

  3. A robust approach based on Weibull distribution for clustering gene expression data

    Directory of Open Access Journals (Sweden)

    Gong Binsheng

    2011-05-01

    Background: Clustering is a widely used technique for analysis of gene expression data. Most clustering methods group genes based on distances, while few methods group genes according to the similarities of the distributions of the gene expression levels. Furthermore, as biological annotation resources have accumulated, an increasing number of genes have been annotated into functional categories. As a result, evaluating the performance of clustering methods in terms of the functional consistency of the resulting clusters is of great interest. Results: In this paper, we propose WDCM (Weibull Distribution-based Clustering Method), a robust approach for clustering gene expression data, in which the expression levels of individual genes are considered random variables following unique Weibull distributions. WDCM is based on the concept that genes with similar expression profiles have similar distribution parameters, and thus genes are clustered via their Weibull distribution parameters. We used WDCM to cluster three cancer gene expression data sets (lung cancer, B-cell follicular lymphoma, and bladder carcinoma) and obtained well-clustered results. We compared the performance of WDCM with k-means and Self-Organizing Map (SOM) using functional annotation information given by the Gene Ontology (GO). The results showed that the functional annotation ratios of WDCM are higher than those of the other methods. We also utilized the external measure Adjusted Rand Index to validate the performance of WDCM. The comparative results demonstrate that WDCM provides better clustering performance than the k-means and SOM algorithms. The merit of the proposed WDCM is that it can be applied to cluster incomplete gene expression data without imputing the missing values. Moreover, the robustness of WDCM is also evaluated on the incomplete data sets. Conclusions: The results demonstrate that our WDCM produces clusters
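
    A minimal sketch of the distribution-parameter idea behind WDCM, under simplifying assumptions: each gene's observed expression values are fitted with a Weibull distribution (missing values are simply skipped, with no imputation), and genes are then clustered on the fitted shape and scale parameters. The simulated expression matrix and the use of k-means on the parameters are illustrative choices, not the paper's exact algorithm.

```python
# Sketch of clustering genes via fitted Weibull parameters (WDCM-style idea):
# expression values per gene are treated as Weibull draws, and genes are
# clustered on the fitted (shape, scale) parameters. Data are simulated.
import numpy as np
from scipy.stats import weibull_min
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_genes, n_samples = 100, 30
# simulate two groups of genes with different Weibull shapes and scales
expr = np.vstack([
    weibull_min.rvs(c=1.5, scale=1.0, size=(50, n_samples), random_state=1),
    weibull_min.rvs(c=4.0, scale=3.0, size=(50, n_samples), random_state=2),
])
# introduce missing values; the fit simply ignores them (no imputation needed)
expr[rng.random(expr.shape) < 0.1] = np.nan

params = []
for g in range(n_genes):
    values = expr[g][~np.isnan(expr[g])]            # use observed values only
    shape, loc, scale = weibull_min.fit(values, floc=0)
    params.append([shape, scale])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(params))
print(labels[:10], labels[-10:])
```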

  4. A meta model-based methodology for an energy savings uncertainty assessment of building retrofitting

    Directory of Open Access Journals (Sweden)

    Caucheteux Antoine

    2016-01-01

    To reduce greenhouse gas emissions, energy retrofitting of the building stock presents significant potential for energy savings. In the design stage, energy savings are usually assessed through Building Energy Simulation (BES). The main difficulty is to first assess the energy efficiency of the existing buildings, in other words, to calibrate the model. As calibration is an underdetermined problem, there are many solutions for representing the building in simulation tools. In this paper, a method is proposed to assess not only energy savings but also their uncertainty. Meta models, built from experimental designs, are used to identify many acceptable calibrations: the sets of parameters that provide the most accurate representation of the building are retained to calculate energy savings. The method was applied to an existing office building modeled with the TRNsys BES. The meta model, using 13 parameters, is built with no more than 105 simulations. The evaluation of the meta model on thousands of new simulations gives a normalized mean bias error between the meta model and BES of <4%. Energy savings are assessed based on six energy saving concepts, which indicate savings of 2–45% with a standard deviation ranging between 1.3% and 2.5%.
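
    An illustrative sketch of the workflow, with a placeholder "simulator" standing in for the TRNSYS building model and invented parameter ranges, measured consumption and retrofit effect: a small experimental design trains a polynomial meta model, thousands of cheap meta-model evaluations screen candidate parameter sets against the measured consumption, and the spread of predicted savings over the retained acceptable calibrations expresses the uncertainty.

```python
# Sketch: meta-model calibration with uncertainty on the predicted savings.
# The "simulator", parameter ranges, measured consumption and retrofit effect
# are placeholders, not the paper's TRNSYS model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_params = 13

def building_simulation(x):
    """Placeholder for the BES run: annual consumption (kWh/m2) vs. parameters."""
    return 120 + 40 * x[:, 0] + 25 * x[:, 1] ** 2 + 10 * x[:, 2] * x[:, 3]

# 1) small experimental design (~105 runs) to train the meta model
X_train = rng.random((105, n_params))
y_train = building_simulation(X_train)
metamodel = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                          LinearRegression())
metamodel.fit(X_train, y_train)

# 2) screen thousands of candidate parameter sets against measured consumption
measured, tolerance = 150.0, 5.0          # kWh/m2, acceptance band (placeholders)
X_cand = rng.random((20000, n_params))
pred = metamodel.predict(X_cand)
acceptable = X_cand[np.abs(pred - measured) < tolerance]   # acceptable calibrations

# 3) apply a (placeholder) retrofit to each acceptable calibration and look at
#    the spread of predicted savings across calibrations
X_retrofit = acceptable.copy()
X_retrofit[:, 0] *= 0.5                   # e.g., halve an insulation-related parameter
savings = metamodel.predict(acceptable) - metamodel.predict(X_retrofit)
print(f"{len(acceptable)} acceptable calibrations, "
      f"savings = {savings.mean():.1f} +/- {savings.std():.1f} kWh/m2")
```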

  5. Object-based semi-automatic approach for forest structure characterization using lidar data in heterogeneous Pinus sylvestris stands

    Science.gov (United States)

    C. Pascual; A. Garcia-Abril; L.G. Garcia-Montero; S. Martin-Fernandez; W.B. Cohen

    2008-01-01

    In this paper, we present a two-stage approach for characterizing the structure of Pinus sylvestris L. stands in forests of central Spain. The first stage was to delimit forest stands using eCognition and a digital canopy height model (DCHM) derived from lidar data. The polygons were then clustered into forest structure types based on the DCHM data...
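
    A hedged sketch of the second (clustering) stage only, with simulated canopy heights in place of lidar-derived DCHM values: each delineated stand polygon is summarised by a few height statistics and the polygons are grouped into structure types with k-means. The statistics and the number of clusters are illustrative assumptions, not those of the study.

```python
# Sketch of clustering delineated stand polygons into forest structure types
# from canopy height statistics. Per-polygon heights are simulated; the real
# workflow extracts them from the lidar DCHM within each eCognition polygon.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# simulated canopy heights (m) for each of 12 stand polygons
polygons = [rng.gamma(shape=s, scale=2.0, size=500) for s in rng.uniform(2, 8, 12)]

features = np.array([
    [h.mean(), h.std(), np.percentile(h, 95), (h < 2.0).mean()]  # mean, std, p95, gap fraction
    for h in polygons
])
structure_type = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(structure_type)
```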

  6. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on a literature review of information-seeking models. The propositions of General Systems Theory (GST) serve as its framework. Factors influencing information-seekers, such as the individual learning…

  7. Towards universal voluntary HIV testing and counselling: a systematic review and meta-analysis of community-based approaches.

    Directory of Open Access Journals (Sweden)

    Amitabh B Suthar

    2013-08-01

    was as follows: index testing, 88% of 12,052 participants; self-testing, 87% of 1,839 participants; mobile testing, 87% of 79,475 participants; door-to-door testing, 80% of 555,267 participants; workplace testing, 67% of 62,406 participants; and school-based testing, 62% of 2,593 participants. Mobile HTC uptake among key populations (men who have sex with men, people who inject drugs, female sex workers, and adolescents) ranged from 9% to 100% (among 41,110 participants across studies), with heterogeneity related to how testing was offered. Community-based approaches increased HTC uptake (relative risk [RR] 10.65, 95% confidence interval [CI] 6.27-18.08), the proportion of first-time testers (RR 1.23, 95% CI 1.06-1.42), and the proportion of participants with CD4 counts above 350 cells/µl (RR 1.42, 95% CI 1.16-1.74), and obtained a lower positivity rate (RR 0.59, 95% CI 0.37-0.96), relative to facility-based approaches. 80% (95% CI 75%-85%) of 5,832 community-based HTC participants obtained a CD4 measurement following HIV diagnosis, and 73% (95% CI 61%-85%) of 527 community-based HTC participants initiated antiretroviral therapy following a CD4 measurement indicating eligibility. The data on linking participants without HIV to prevention services were limited. In low- and middle-income countries, the cost per person tested ranged from US$2-US$126. At the population level, community-based HTC increased HTC coverage (RR 7.07, 95% CI 3.52-14.22) and reduced HIV incidence (RR 0.86, 95% CI 0.73-1.02), although the incidence reduction lacked statistical significance. No studies reported any harm arising as a result of having been tested. CONCLUSIONS: Community-based HTC achieved high rates of HTC uptake, reached people with high CD4 counts, and linked people to care. It also obtained a lower HIV positivity rate relative to facility-based approaches. Further research is needed to further improve acceptability of community-based HTC for key populations. HIV programmes should offer

  8. A combined data mining approach using rough set theory and case-based reasoning in medical datasets

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Rezvan

    2014-06-01

    Case-based reasoning (CBR) is the process of solving new cases by retrieving the most relevant ones from an existing knowledge base. Since irrelevant or redundant features not only remarkably increase memory requirements but also the time complexity of case retrieval, reducing the number of dimensions is an issue worth considering. This paper uses rough set theory (RST) to reduce the number of dimensions in a CBR classifier with the aim of increasing accuracy and efficiency. CBR exploits a co-occurrence-based distance over categorical data to measure the similarity of cases. This distance is based on the proportional distribution of the different categorical values of the features. The weight used for a feature is the average of the co-occurrence values of the features. The combination of RST and CBR has been applied to real categorical datasets of Wisconsin Breast Cancer, Lymphography, and Primary cancer. The 5-fold cross-validation method is used to evaluate the performance of the proposed approach. The results show that this combined approach lowers computational costs and improves performance metrics, including accuracy and interpretability, compared to other approaches developed in the literature.
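
    A generic sketch of a co-occurrence-based categorical distance plus nearest-neighbour retrieval, in the spirit of the CBR step described above: two values of a feature are considered close when they co-occur with the class labels in similar proportions (a Value-Difference-Metric-style distance). The toy case base and the unweighted sum over features are assumptions for illustration; the paper's exact distance and feature weighting differ in detail.

```python
# Sketch: VDM-style co-occurrence distance for categorical cases + 1-NN retrieval.
# The toy case base is invented for illustration.
from collections import Counter, defaultdict

cases = [                      # (feature values, class label) -- toy knowledge base
    (("smooth", "small"), "benign"),
    (("smooth", "large"), "benign"),
    (("irregular", "large"), "malignant"),
    (("irregular", "small"), "malignant"),
]
classes = sorted({label for _, label in cases})

# P(class | feature f has value v), estimated from the case base
counts = defaultdict(Counter)          # (f, v) -> Counter of class labels
for values, label in cases:
    for f, v in enumerate(values):
        counts[(f, v)][label] += 1

def value_distance(f, v1, v2):
    c1, c2 = counts[(f, v1)], counts[(f, v2)]
    n1, n2 = sum(c1.values()) or 1, sum(c2.values()) or 1
    return sum(abs(c1[c] / n1 - c2[c] / n2) for c in classes)

def case_distance(x, y):
    return sum(value_distance(f, a, b) for f, (a, b) in enumerate(zip(x, y)))

query = ("irregular", "small")
nearest = min(cases, key=lambda case: case_distance(query, case[0]))
print("retrieved case:", nearest)      # the query inherits this case's solution
```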

  9. Look-up-table approach for leaf area index retrieval from remotely sensed data based on scale information

    Science.gov (United States)

    Zhu, Xiaohua; Li, Chuanrong; Tang, Lingli

    2018-03-01

    Leaf area index (LAI) is a key structural characteristic of vegetation and plays a significant role in global change research. Several methods and remotely sensed data sources have been evaluated for LAI estimation. This study aimed to evaluate the suitability of the look-up-table (LUT) approach for crop LAI retrieval from Satellite Pour l'Observation de la Terre (SPOT)-5 data and to establish an LUT approach for LAI inversion based on scale information. The LAI inversion result was validated by in situ LAI measurements, indicating that the LUT generated with the PROSAIL (PROSPECT+SAIL: properties spectra + scattering by arbitrarily inclined leaves) model was suitable for crop LAI estimation, with a root mean square error (RMSE) of ~0.31 m2/m2 and a determination coefficient (R2) of 0.65. The scale effect of crop LAI was analyzed based on Taylor expansion theory, indicating that when the SPOT data were aggregated to 200 × 200 pixels, the relative error became significant, reaching 13.7%. Finally, an LUT method integrating scale information is proposed in this article, improving the inversion accuracy to an RMSE of 0.20 m2/m2 and an R2 of 0.83.
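
    A minimal sketch of LUT inversion, with a two-band toy forward model standing in for PROSAIL and all numbers invented: a table of simulated reflectances is built over a grid of LAI values, and a pixel's LAI is retrieved as the average of the grid entries whose spectra best match the observation by RMSE.

```python
# Sketch of look-up-table (LUT) inversion for LAI. The two-band "forward model"
# below is a stand-in for PROSAIL, and all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(lai):
    """Placeholder for PROSAIL: reflectance in a red and a NIR band vs. LAI."""
    red = 0.25 * np.exp(-0.6 * lai) + 0.02
    nir = 0.45 * (1.0 - np.exp(-0.5 * lai)) + 0.05
    return np.stack([red, nir], axis=-1)

# 1) build the LUT over candidate LAI values
lai_grid = np.linspace(0.0, 6.0, 601)
lut_reflectance = forward_model(lai_grid)          # shape (601, 2)

# 2) invert an "observed" pixel spectrum by minimising RMSE against the LUT
true_lai = 2.7
observed = forward_model(np.array([true_lai]))[0] + rng.normal(0, 0.01, 2)
rmse = np.sqrt(((lut_reflectance - observed) ** 2).mean(axis=1))
best = np.argsort(rmse)[:20]                       # average the 20 best solutions
print(f"retrieved LAI = {lai_grid[best].mean():.2f} (true {true_lai})")
```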

  10. For better or worse: An individual patient data meta-analysis of deterioration among participants receiving Internet-based cognitive behavior therapy.

    Science.gov (United States)

    Rozental, Alexander; Magnusson, Kristoffer; Boettcher, Johanna; Andersson, Gerhard; Carlbring, Per

    2017-02-01

    Psychological treatments can relieve mental distress and improve well-being, and the dissemination of evidence-based methods can help patients gain access to the right type of aid. Meanwhile, Internet-based cognitive-behavioral therapy (ICBT) has shown promising results for many psychiatric disorders. However, research on the potential for negative effects of psychological treatments has been lacking. An individual patient data meta-analysis of 29 clinical trials of ICBT (N = 2,866) was performed using the Reliable Change Index for each primary outcome measure to distinguish deterioration rates among patients in treatment and control conditions. Statistical analyses of predictors were conducted using generalized linear mixed models. Missing data were handled by multiple imputation. Deterioration rates were 122 (5.8%) in treatment and 130 (17.4%) in control conditions. Relative to receiving treatment, patients in a control condition had higher odds of deteriorating, odds ratio (OR) = 3.10, 95% confidence interval (CI) [2.21, 4.34]. Clinical severity at pretreatment was related to lower odds, OR = 0.62, 95% CI [0.50, 0.77], and OR = 0.51, 95% CI [0.51, 0.80], for treatment and control conditions. In terms of sociodemographic variables, being in a relationship, OR = 0.58, 95% CI [0.35, 0.95], having at least a university degree, OR = 0.54, 95% CI [0.33, 0.88], and being older, OR = 0.78, 95% CI [0.62, 0.98], were also associated with lower odds of deterioration, but only for patients assigned to a treatment condition. Deterioration among patients receiving ICBT or being in a control condition can occur and should be monitored by researchers to reverse and prevent a negative treatment trend. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
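
    A short sketch of the Reliable Change Index (Jacobson-Truax form) that underlies deterioration counts like those above: a pre-post change is flagged as reliable only when it exceeds what measurement error alone would plausibly produce. The pre-treatment standard deviation and reliability used below are hypothetical values, not those of the meta-analysis.

```python
# Sketch of the Reliable Change Index (RCI) used to flag deterioration.
# Example values (sd_pre, reliability, scores) are hypothetical.
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    se_measurement = sd_pre * math.sqrt(1.0 - reliability)
    se_difference = se_measurement * math.sqrt(2.0)
    return (post - pre) / se_difference

# Higher scores = more symptoms, so an RCI above 1.96 indicates reliable deterioration.
rci = reliable_change_index(pre=14.0, post=22.0, sd_pre=6.5, reliability=0.85)
print(f"RCI = {rci:.2f}, deteriorated: {rci > 1.96}")
```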

  11. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Different from existing approaches, it is not grid-based and is unbiased with respect to dimensionality. Thus, its performance is impervious to grid resolution as well as to the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired...... outliers, thus mitigating the issue of a high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces....
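
    A plain k-NN-distance ranking is sketched below as a generic stand-in for ranked, distance-based outlier detection (it is not the pruning-based algorithm of the paper): each point is scored by the distance to its k-th nearest neighbour, and the user takes the top-n ranked points rather than fixing a hard threshold.

```python
# Generic distance-based outlier ranking: score = distance to k-th nearest
# neighbour, then take the top-n ranked points. Data are simulated.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(500, 50))          # high-dimensional bulk
outliers = rng.normal(6.0, 1.0, size=(5, 50))            # a few shifted points
X = np.vstack([inliers, outliers])

k, n_outliers = 10, 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)           # +1: a point is its own neighbour
distances, _ = nn.kneighbors(X)
scores = distances[:, -1]                                 # distance to k-th neighbour
ranked = np.argsort(scores)[::-1]
print("top-ranked outlier indices:", ranked[:n_outliers])
```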

  12. Maintenance based Bevacizumab versus complete stop or continuous therapy after induction therapy in first line treatment of stage IV colorectal cancer: A meta-analysis of randomized clinical trials.

    Science.gov (United States)

    Tamburini, Emiliano; Rudnas, Britt; Santelmo, Carlotta; Drudi, Fabrizio; Gianni, Lorenzo; Nicoletti, Stefania V L; Ridolfi, Claudio; Tassinari, Davide

    2016-08-01

    In stage IV colorectal cancer, bevacizumab-based maintenance therapy, complete stop of therapy, and continuous therapy are all considered possible approaches after first-line induction chemotherapy. However, there are no clear data about which approach is preferable. All randomized phase III trials comparing bevacizumab-based maintenance therapy (MB) with complete stop therapy (ST) or with continuous therapy (CT) were considered eligible and included in the analysis. The primary endpoint was time to failure of strategy (TFS). Secondary endpoints were overall survival (OS) and progression-free survival (PFS). The meta-analysis was performed in line with the PRISMA statement. 1892 patients from five trials were included in the analysis. Significant improvements in TFS (HR 0.79; 95% CI 0.7-0.9, p=0.0005) and PFS (HR 0.56; 95% CI 0.44-0.71, p<0.00001) were observed in favour of MB versus ST. A trend in favour of MB versus ST, not statistically significant, was also observed for OS (HR 0.88; 95% CI 0.77-1.01, p=0.08). Comparing maintenance therapy with continuous therapy, no statistically significant differences were observed in the outcomes evaluated (OS at 12 months OR 1.1, p=0.62; OS at 24 months OR 1, p=1; OS at 36 months OR 0.54, p=0.3; TFS at 12 months OR 0.76, p=0.65). Our meta-analysis suggests that the MB approach increases TFS and PFS compared with ST. Although no statistically significant OS advantage was observed, MB versus ST showed a trend in favour of MB. We observed no difference between MB and CT. MB should be considered the standard regimen in patients with stage IV colorectal cancer after first-line induction therapy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
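
    For readers unfamiliar with how such pooled hazard ratios are produced, the sketch below shows a standard DerSimonian-Laird random-effects combination of per-trial log hazard ratios and standard errors. The five (log HR, SE) pairs are invented for illustration and are not the data of this meta-analysis.

```python
# Sketch of DerSimonian-Laird random-effects pooling of log hazard ratios.
# Trial data below are hypothetical.
import numpy as np
from scipy.stats import norm

log_hr = np.array([-0.30, -0.15, -0.25, -0.10, -0.35])   # hypothetical trials
se = np.array([0.12, 0.10, 0.15, 0.09, 0.20])

w_fixed = 1.0 / se**2
pooled_fixed = np.sum(w_fixed * log_hr) / np.sum(w_fixed)
q = np.sum(w_fixed * (log_hr - pooled_fixed) ** 2)        # Cochran's Q
df = len(log_hr) - 1
tau2 = max(0.0, (q - df) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

w_random = 1.0 / (se**2 + tau2)                           # random-effects weights
pooled = np.sum(w_random * log_hr) / np.sum(w_random)
se_pooled = np.sqrt(1.0 / np.sum(w_random))
z = norm.ppf(0.975)
hr, lo, hi = np.exp([pooled, pooled - z * se_pooled, pooled + z * se_pooled])
print(f"pooled HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.4f}")
```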

  13. Meta-modelling, visualization and emulation of multi-dimensional data for virtual production intelligence

    Science.gov (United States)

    Schulz, Wolfgang; Hermanns, Torsten; Al Khawli, Toufik

    2017-07-01

    Decision making for competitive production in high-wage countries is a daily challenge in which rational and irrational methods are used. The design of decision-making processes is an intriguing, discipline-spanning science. However, there are gaps in understanding the impact of the known mathematical and procedural methods on the usage of rational choice theory. Following Benjamin Franklin's rule for decision making, formulated in London in 1772 and called "Prudential Algebra" in the sense of weighing prudential reasons, one of the major ingredients of Meta-Modelling can be identified: a single algebraic value that labels the results (criteria settings) of alternative decisions (parameter settings). This work describes advances in Meta-Modelling techniques applied to multi-dimensional and multi-criterial optimization by identifying the persistence level of the corresponding Morse-Smale Complex. Implementations for laser cutting and laser drilling are presented, including the generation of fast and frugal Meta-Models with controlled error based on mathematical model reduction; Reduced Models are derived to avoid any unnecessary complexity. Both model reduction and the analysis of the multi-dimensional parameter space are used to enable interactive communication between Discovery Finders and Invention Makers. Emulators and visualizations of a metamodel are introduced as components of Virtual Production Intelligence, making the methods of Scientific Design Thinking applicable and helping both the developer and the operator become more skilled.

  14. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Background: We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results: We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717), to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions: Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of

  15. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    Science.gov (United States)

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and
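
    A minimal, self-contained sketch of the estimation setup described in the two records above, using SciPy's differential evolution: an ODE model is simulated for candidate parameters, the sum of squared deviations from pseudo-experimental data is the objective, and DE searches the bounded parameter space. The two-state toy model is a stand-in for the Rab5/Rab7 cut-out switch, not the model used in the study.

```python
# Sketch: ODE parameter estimation with differential evolution (a global
# meta-heuristic). The two-state model and its parameters are toy stand-ins.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def model(t, y, k1, k2):
    """Toy conversion of species A into species B."""
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]

t_obs = np.linspace(0.0, 10.0, 25)
true_params = (0.8, 0.3)
y0 = [1.0, 0.0]

# pseudo-experimental data: true trajectory plus measurement noise
rng = np.random.default_rng(0)
truth = solve_ivp(model, (0, 10), y0, t_eval=t_obs, args=true_params).y
data = truth + rng.normal(0.0, 0.02, truth.shape)

def objective(params):
    sol = solve_ivp(model, (0, 10), y0, t_eval=t_obs, args=tuple(params))
    return np.sum((sol.y - data) ** 2)             # sum of squared errors

result = differential_evolution(objective, bounds=[(0.01, 5.0), (0.01, 5.0)], seed=1)
print("estimated parameters:", result.x, "true:", true_params)
```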

  16. DataSync - sharing data via filesystem

    Science.gov (United States)

    Ulbricht, Damian; Klump, Jens

    2014-05-01

    DataSync's major task is to distribute directory trees; we complement its functionality with the PHP-based application panMetaDocs [3]. panMetaDocs is the successor to panMetaWorks [4] and inherits most of its functionality. Through an internet browser, panMetaDocs provides a web-based overview of the datasets inside the eSciDoc infrastructure. The software allows users to upload further data and to add and edit metadata using the metadata editor, and it disseminates metadata through various channels. In addition, previous versions of a file can be downloaded, and access rights can be defined on files and folders to control the visibility of files for users of both panMetaDocs and DataSync. panMetaDocs serves as a publication agent for datasets and as a registration agent for dataset DOIs. The application stack presented here allows sharing, versioning, and central storage of data from the very beginning of project activities by using the file synchronization service DataSync. The web application panMetaDocs complements the functionality of DataSync by providing a dataset publication agent and other tools to handle administrative tasks on the data. [1] http://github.com/ulbricht/datasync [2] http://github.com/escidoc [3] http://panmetadocs.sf.net [4] http://metaworks.pangaea.de

  17. Combining engineering and data-driven approaches

    DEFF Research Database (Denmark)

    Fischer, Katharina; De Sanctis, Gianluca; Kohler, Jochen

    2015-01-01

    Two general approaches may be followed for the development of a fire risk model: statistical models based on observed fire losses can support simple cost-benefit studies but are usually not detailed enough for engineering decision-making. Engineering models, on the other hand, require many assumptions that may result in a biased risk assessment. In two related papers we show how engineering and data-driven modelling can be combined by developing generic risk models that are calibrated to statistical data on observed fire events. The focus of the present paper is on the calibration procedure, illustrated by calibrating a generic fire risk model for single-family houses to Swiss insurance data. The example demonstrates that the bias in the risk estimation can be strongly reduced by model calibration.

  18. Protocol - realist and meta-narrative evidence synthesis: Evolving Standards (RAMESES)

    Directory of Open Access Journals (Sweden)

    Westhorp Gill

    2011-08-01

    Background: There is growing interest in theory-driven, qualitative and mixed-method approaches to systematic review as an alternative to (or to extend and supplement) conventional Cochrane-style reviews. These approaches offer the potential to expand the knowledge base in policy-relevant areas - for example by explaining the success, failure or mixed fortunes of complex interventions. However, the quality of such reviews can be difficult to assess. This study aims to produce methodological guidance, publication standards and training resources for those seeking to use the realist and/or meta-narrative approach to systematic review. Methods/design: We will: [a] collate and summarise existing literature on the principles of good practice in realist and meta-narrative systematic review; [b] consider the extent to which these principles have been followed by published and in-progress reviews, thereby identifying how rigour may be lost and how existing methods could be improved; [c] using an online Delphi method with an interdisciplinary panel of experts from academia and policy, produce a draft set of methodological steps and publication standards; [d] produce training materials with learning outcomes linked to these steps; [e] pilot these standards and training materials prospectively on real reviews-in-progress, capturing methodological and other challenges as they arise; [f] synthesise expert input, evidence review and real-time problem analysis into more definitive guidance and standards; [g] disseminate outputs to audiences in academia and policy. The outputs of the study will be threefold: 1. Quality standards and methodological guidance for realist and meta-narrative reviews for use by researchers, research sponsors, students and supervisors 2. A 'RAMESES' (Realist and Meta-narrative Evidence Synthesis: Evolving Standards) statement (comparable to CONSORT or PRISMA) of publication standards for such reviews, published in an open

  19. Impact of different dietary approaches on glycemic control and cardiovascular risk factors in patients with type 2 diabetes: a protocol for a systematic review and network meta-analysis.

    Science.gov (United States)

    Schwingshackl, Lukas; Chaimani, Anna; Hoffmann, Georg; Schwedhelm, Carolina; Boeing, Heiner

    2017-03-20

    Dietary advice is one of the cornerstones in the management of type 2 diabetes mellitus. The American Diabetes Association has recommended a hypocaloric diet for overweight or obese adults with type 2 diabetes in order to induce weight loss. However, there is limited evidence on the optimal approaches to control hyperglycemia in type 2 diabetes patients. The aim of the present study is to assess the comparative efficacy of different dietary approaches on glycemic control and blood lipids in patients with type 2 diabetes mellitus in a systematic review including a standard pairwise and network meta-analysis of randomized trials. We will conduct searches in the Cochrane Central Register of Controlled Trials (CENTRAL) on the Cochrane Library, PubMed (from 1966), and Google Scholar. Citations, abstracts, and relevant papers will be screened for eligibility by two reviewers independently. Randomized controlled trials (with a control group, or randomized trials with at least two intervention groups) will be included if they meet the following criteria: (1) include patients with type 2 diabetes mellitus, (2) include patients aged ≥18 years, (3) include a dietary intervention (different types of diets, e.g., Mediterranean dietary pattern, low-carbohydrate diet, low-fat diet, vegetarian diet, high-protein diet), whether hypocaloric, isocaloric, or ad libitum, and (4) have a minimum intervention period of 12 weeks. For each outcome measure of interest, random-effects pairwise and network meta-analyses will be performed in order to determine the pooled relative effect of each intervention relative to every other intervention in terms of the post-intervention values (or mean differences between the changes from baseline value scores). Subgroup analyses are planned for study length, sample size, age, and sex. This systematic review will synthesize the available evidence on the comparative efficacy of different dietary approaches in the management of glycosylated hemoglobin (primary outcome), fasting glucose

  20. The Effectiveness of Digital Game-Based Vocabulary Learning: A Framework-Based View of Meta-Analysis

    Science.gov (United States)

    Chen, Meng-Hua; Tseng, Wen-Ta; Hsiao, Tsung-Yuan

    2018-01-01

    This study presents the results of a meta-analysis of the effects of digital game-based learning (DGBL) on vocabulary. The results showed that the effects of DGBL on vocabulary learning may vary with game design features (Q = 5.857, df = 1, p = 0.016), but not with learners' age (Q = 0.906, df = 1, p = 0.341) or linguistic…