ESMARConf2023: Workshop 1 - Test, adjust for and report publication bias

Coordinators: Yefeng Yang, Malgorzata Lagisz and Alfredo Sánchez-Tójar
Title: Test, adjust for and report publication bias
Abstract: Meta-analyses are essential for summarising cumulative science, but their validity can be compromised by publication bias. It is therefore important to test whether publication bias occurs and to adjust for its impact when drawing meta-analytic inferences. Large-scale surveys across many fields have shown that, when no bias correction is made, meta-analyses often distort effect size estimates and the evidence drawn from them. We have two aims: (1) to raise awareness of the importance of performing publication bias tests when conducting a meta-analysis, and (2) to provide meta-analysts with a tutorial on advanced but easy-to-implement techniques to properly test, adjust for and report publication bias.

Learning objectives:
- Introduction to publication bias and how it affects the reliability of science
- Overview of the traditional methods to study publication bias in meta-analysis
- What are the state-of-the-art methods to test and adjust for publication bias when heterogeneity exists among effect sizes
- How to practically implement these state-of-the-art methods in R (see the illustrative sketch after this list)
- How to report and interpret the results of a meta-analysis when publication bias exists
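
To give a flavour of this kind of analysis, below is a minimal, illustrative R sketch (not the workshop's own materials) of an Egger-type multilevel meta-regression test for small-study effects using 'metafor'; the simulated data and the column names (yi, vi, study, obs) are assumptions made purely for illustration:

library(metafor)
set.seed(42)                                    # toy data, for illustration only
dat <- data.frame(study = rep(1:20, each = 2))  # 20 studies, 2 effect sizes each
dat$obs <- seq_len(nrow(dat))                   # unique ID per effect size
dat$vi  <- runif(nrow(dat), 0.01, 0.2)          # sampling variances
dat$yi  <- rnorm(nrow(dat), 0.3, sqrt(dat$vi))  # effect sizes
dat$sei <- sqrt(dat$vi)                         # standard errors used as moderator
mod <- rma.mv(yi, vi, mods = ~ sei,             # Egger-type test: regress effect sizes on SE
              random = ~ 1 | study/obs, data = dat)
summary(mod)                                    # non-zero 'sei' slope suggests small-study effects

In tests of this style, a non-zero slope for the standard error indicates funnel asymmetry, and the intercept is sometimes interpreted as a bias-adjusted overall effect at a hypothetical standard error of zero.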

Audience:
The target audience is any researcher conducting or planning to conduct a meta-analysis. There are no formal requirements for participation, but knowledge of R is desirable given that our practical session is based entirely on the R programming language.

Requirements:
A computer/laptop with the latest version of R installed, together with the latest versions of the R packages ‘metafor’ and ‘orchaRd’. ‘metafor’ is available from CRAN, whereas ‘orchaRd’ needs to be installed from GitHub; we recommend installing both using the following code:
install.packages(c('metafor', 'devtools'))
devtools::install_github('daniel1noble/orchaRd', force = TRUE)
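
After installation, a quick check like the following (an illustrative suggestion rather than part of the official instructions) confirms that both packages load:
library(metafor)  # meta-analytic models such as rma() and rma.mv()
library(orchaRd)  # orchard plots and related reporting tools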