This website implements the sensitivity analyses described in Mathur & VanderWeele (2020a), Mathur & VanderWeele (2020b), and VanderWeele & Ding (2017).

**Sensitivity analysis for the pooled point estimate**

This tab computes the E-value for the pooled point estimate of a meta-analysis (Section 7.2 of Mathur & VanderWeele, 2020a; see VanderWeele & Ding (2017) and this page for more on E-values in general). This meta-analysis E-value represents the average severity of confounding across the meta-analyzed studies (i.e., the minimum strength of association on the risk ratio scale that unmeasured confounder(s) would need to have with both the exposure and the outcome, conditional on the measured covariates) that would be required to fully explain away the observed meta-analytic point estimate in the sense of shifting it to the null. Note that for outcome types other than relative risks, the approximate conversions used involve additional assumptions; see VanderWeele & Ding (2017) for details.
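For a point estimate reported as a relative risk, the E-value has a closed-form expression (VanderWeele & Ding, 2017). The sketch below is an illustrative Python translation of that formula, not the website's implementation; the site and the R package EValue also handle conversions from other effect size measures for you.

```python
import math

def e_value(rr: float) -> float:
    """Approximate E-value for a relative risk (VanderWeele & Ding, 2017).

    Relative risks below 1 are inverted first, so the formula applies
    symmetrically to apparently preventive estimates.
    """
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Example: a pooled relative risk of 1.5
print(round(e_value(1.5), 2))  # 2.37
```

The same function is applied to the confidence interval limit closer to the null to obtain the E-value for the confidence interval.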

Alternatively, you can consider the average confounding strength required to shift the observed point estimate to any other value (e.g., attenuating the observed association to a true causal effect that is no longer scientifically important, or increasing a near-null observed association to a value that is of scientific importance). To do so, simply type a non-null effect size into the box "True causal effect to which to shift estimate" when computing the meta-analytic E-value.
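A common way to compute this non-null E-value (again following VanderWeele & Ding, 2017) is to apply the usual E-value formula to the ratio of the observed estimate to the specified true effect. A Python sketch of that idea; the helper names here are ours for illustration, not the website's:

```python
import math

def e_value(rr: float) -> float:
    # Standard E-value formula for a (possibly inverted) relative risk.
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

def e_value_nonnull(rr_obs: float, rr_true: float) -> float:
    """E-value for shifting an observed RR to a specified non-null true RR.

    Applies the standard formula to the ratio of the two; the ratio is
    inverted inside e_value() if it falls below 1.
    """
    return e_value(rr_obs / rr_true)

# Shifting an observed RR of 1.5 to a true causal effect of 1.2:
print(round(e_value_nonnull(1.5, 1.2), 2))  # 1.81
```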

**Interpreting the results**

For example, if your meta-analytic point estimate on the relative risk scale is 1.5 (95% confidence interval: [1.4, 1.5]), you will obtain an E-value for the point estimate of 2.37 and an E-value for the lower confidence interval limit of 2.15. This means that if, hypothetically, the meta-analyzed studies were subject to confounding such that, on average across the studies, there were unmeasured confounder(s) that were associated with the studies' exposures and outcomes by relative risks of at least 2.37 each, this amount of average confounding could potentially explain away the point estimate of 1.5 (i.e., to have the true causal effect be a relative risk of 1), but weaker average confounding could not. Similarly, if this strength of average confounding were at least 2.15 across studies, this amount of confounding could potentially shift the confidence interval to include the null, but weaker average confounding could not.

**A caveat about the pooled point estimate**

Note that this tab of the website conducts sensitivity analyses that describe evidence strength only in terms of the pooled point estimate, a measure that does not fully characterize effect heterogeneity in a meta-analysis. For example, consider two meta-analyses with the same pooled point estimate of relative risk = 1.1. The first, Meta-Analysis A, has very little heterogeneity, such that all true population effects are very close to 1.1. In contrast, despite having the same point estimate, Meta-Analysis B could have substantial heterogeneity, such that a large proportion of the true population effects are of scientifically meaningful size (e.g., >1.2). Thus, Meta-Analysis B provides stronger support for the presence of meaningfully strong effects than does Meta-Analysis A, and furthermore Meta-Analysis B might also suggest that a non-negligible proportion of the effects are actually preventive rather than causative (i.e., with relative risks less than 1). For this reason, meta-analyses that have some heterogeneity should generally report not only the point estimate, but also the estimated percentage of meaningfully strong population effects (Mathur & VanderWeele, 2019), and sensitivity analyses should consider this quantity as well (which you can do using the tab "Sensitivity analysis for the proportion of meaningfully strong effects").
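Under the usual normal random-effects model, the estimated percentage of meaningfully strong population effects (Mathur & VanderWeele, 2019) has a simple closed form. A rough Python sketch using only the standard library; the summary numbers below are made up purely for illustration:

```python
import math

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prop_above(q_log_rr: float, mu_hat: float, tau_hat: float) -> float:
    """Estimated proportion of true log-relative-risk effects above a
    threshold q, assuming effects are normal with mean mu_hat and
    heterogeneity standard deviation tau_hat (Mathur & VanderWeele, 2019)."""
    return 1.0 - norm_cdf((q_log_rr - mu_hat) / tau_hat)

# Illustrative (made-up) meta-analysis: pooled RR = 1.3, tau = 0.3,
# threshold RR = 1.2:
p = prop_above(math.log(1.2), math.log(1.3), 0.3)
```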


Note: You are calculating a "non-null" E-value, i.e., an E-value for the minimum
amount of unmeasured confounding needed to move the estimate and confidence interval
to your specified true value rather than to the null value.


Note: Using the standard deviation of the outcome yields a conservative approximation
of the standardized mean difference. For a non-conservative estimate, you could instead use the estimated residual standard deviation from your linear
regression model. Regardless, the reported E-value for the confidence interval treats the
standard deviation as known, not estimated.
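As a concrete illustration of the approximation this note refers to: for a standardized mean difference d with standard error se, one converts to the relative risk scale via RR ≈ exp(0.91 × d) before applying the E-value formula (VanderWeele & Ding, 2017). A Python sketch of that special case, treating the standard deviation (and hence se) as known, as the note above describes; it assumes d > 0 and a confidence interval above the null:

```python
import math

def e_value(rr: float) -> float:
    # Standard E-value formula for a (possibly inverted) relative risk.
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

def e_values_smd(d: float, se: float) -> tuple[float, float]:
    """Approximate E-values for a standardized mean difference and for the
    confidence-interval limit closer to the null, via RR = exp(0.91 * d).
    The standard error is treated as known, matching the note above."""
    rr_point = math.exp(0.91 * d)
    rr_ci = math.exp(0.91 * (d - 1.96 * se))  # limit closer to the null
    return e_value(rr_point), e_value(rr_ci)

# Illustrative (made-up) estimate: d = 0.5 with se = 0.1
ev_point, ev_ci = e_values_smd(d=0.5, se=0.1)
```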


**Sensitivity analysis for the proportion of meaningfully strong causal effects**

Here, you can choose a fixed set of sensitivity parameters (the mean of the bias factor distribution and the proportion of the estimated heterogeneity that is due to confounding) to estimate:

- The proportion of meaningfully strong causal effect sizes (i.e., those stronger than a chosen threshold q)
- The minimum bias factor that would be required to "explain away" the effect in the sense of reducing the proportion of meaningfully strong effects to less than r (e.g., 0.10 or 0.20)
- The minimum confounding strength (i.e., strength of association on the relative risk scale between the unmeasured confounder(s) and the exposure, as well as between the unmeasured confounder(s) and the outcome) that would be required to "explain away" the effect

**Choosing robust versus parametric estimation**

There are two statistical methods to estimate the metrics described above. The "Robust" tab below uses a nonparametric method (Mathur & VanderWeele, 2020b; Wang & Lee, 2019) that makes no assumptions about the distribution of population effects, can be used in meta-analyses with as few as 10 studies, and can be used even when the proportion being estimated is close to 0 or 1. However, the robust method only accommodates bias whose strength is the same in all studies (homogeneous bias). When using the robust method, you will need to upload a dataset of study-level point estimates and variance estimates because inference is conducted by bootstrapping.

The "Parametric" tab uses a method that assumes that the population causal effects are approximately normal across studies and that the number of studies is large. Parametric confidence intervals should be used only when the proportion estimate is between 0.15 and 0.85; you will see a warning message otherwise. Unlike the robust (calibrated) method, the parametric method can accommodate bias that is heterogeneous across studies, specifically bias factors that are log-normal across studies. When using the parametric method, you will specify summary estimates from your confounded meta-analysis rather than uploading a dataset.

**Effect size measures other than log-relative risks**

If your meta-analysis uses effect sizes other than log-relative risks, you should first approximately convert them to log-relative risks, for example using the function `convert_measures` in the R package EValue.

**When these methods should be used**

These methods perform well only in meta-analyses with at least 10 studies; we do not recommend reporting them in smaller meta-analyses. Additionally, it only makes sense to consider proportions of effects stronger than a threshold when the heterogeneity estimate is greater than 0. For meta-analyses with fewer than 10 studies or with a heterogeneity estimate of 0, you can simply report E-values for the point estimate using the tab "Sensitivity analysis for the point estimate".
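To make the parametric approach concrete: in the simplest case of homogeneous bias (a bias factor B identical across studies, i.e., none of the heterogeneity is due to confounding), correcting apparently causative estimates amounts to shifting the pooled mean down by log(B) before computing the proportion above the threshold (a special case of Mathur & VanderWeele, 2020a). The Python sketch below covers only that special case, with made-up inputs; the `confounded_meta` function in the R package EValue also handles log-normal heterogeneous bias and inference.

```python
import math

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prop_above_adjusted(q_log_rr: float, mu_hat: float,
                        tau_hat: float, bias_factor: float) -> float:
    """Proportion of true effects above a log-RR threshold q after
    correcting apparently causative estimates for a homogeneous bias
    factor B >= 1 (special case of Mathur & VanderWeele, 2020a)."""
    mu_corrected = mu_hat - math.log(bias_factor)
    return 1.0 - norm_cdf((q_log_rr - mu_corrected) / tau_hat)

# Illustrative (made-up) inputs: pooled RR = 1.3, tau = 0.3,
# threshold RR = 1.2, homogeneous bias factor B = 1.1:
p_adj = prop_above_adjusted(math.log(1.2), math.log(1.3), 0.3, 1.1)
```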

Note: Robust estimation may take up to a few minutes depending on the size of your dataset.


In addition to using this website, you can conduct these sensitivity analyses with the functions `confounded_meta` and `sens_plot` in the R package EValue (Mathur et al., 2018). For more information on the interpretation of these sensitivity analyses and guidance on choosing the sensitivity parameters, see Mathur & VanderWeele (2020a), and for a review of methods to choose a threshold representing a meaningfully strong effect size, see the Supplement of Mathur & VanderWeele (2019). For more on the robust estimation methods, see Mathur & VanderWeele (2020b).

Similar methods and tools are also available to conduct analogous sensitivity analyses for other types of biases:

- Publication bias in meta-analyses (Mathur & VanderWeele, 2020c; R package PublicationBias)

- Mathur MB, Ding P, Riddell CA, & VanderWeele TJ (2018). Website and R package for computing E-values. *Epidemiology*, 29(5), e45.
- Mathur MB & VanderWeele TJ (2019). New statistical metrics for meta-analyses of heterogeneous effects. *Statistics in Medicine*, 38(8), 1336-1342.
- Mathur MB & VanderWeele TJ (2020a). Sensitivity analysis for unmeasured confounding in meta-analyses. *Journal of the American Statistical Association*, 115(529), 163-170.
- Mathur MB & VanderWeele TJ (2020b). Robust metrics and sensitivity analyses for meta-analyses of heterogeneous effects. *Epidemiology*, 31(3), 356-358.