Browsing by Subject "Diagnostic tests"
Now showing 1 - 2 of 2
Item: Detecting Inconsistency and Using Non-randomized Studies in Research Synthesis (2016-05)
Zhao, Hong

In scientific research, multiple studies of the same intervention arise for many reasons, such as differing study populations or designs. Research synthesis integrates data on the same topic for the purpose of making generalizations, and provides a formal method for systematically combining all available evidence. This thesis focuses on the synthesis of evidence from multiple clinical trials using network meta-analysis, and on the more challenging problem of combining information from randomized clinical trials with less rigorous observational studies.

Network meta-analysis (NMA) extends standard pairwise meta-analysis to permit the combination of results on more than two treatments. This enables both direct and indirect comparisons of treatments, and addresses the comparative effectiveness or safety of the treatments based on all sources of data. Current NMA methods are usually based on a contrast-based (CB) model that estimates the relative treatment effects within each study. While popular and often effective, this model suffers from certain limitations. An alternative is the arm-based (AB) model, which estimates the mean response directly for each treatment. Compared to the CB framework, AB models are more straightforward to interpret, especially when implemented in a missing-data framework, by allowing use of a common baseline treatment across all trials.

In an NMA, when direct and indirect evidence differ, the analysis is said to suffer from inconsistency, and the treatment effect estimates may be biased. Inconsistency detection methods using CB models have already been developed, but no corresponding method based on the newer AB models has yet been proposed. Here, we develop a Bayesian AB approach to detecting inconsistency.
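The CB/AB distinction described in this abstract can be sketched in generic NMA notation (the symbols below are a common textbook formulation, not taken from the thesis itself). Writing $p_{ik}$ for the event probability of treatment $k$ in trial $i$, an arm-based model places a distribution on absolute arm effects, while a contrast-based model works with effects relative to each trial's baseline treatment $b(i)$:

```latex
% Arm-based (AB) sketch: absolute mean response per treatment arm
\Phi^{-1}(p_{ik}) = \mu_k + \nu_{ik}, \qquad
(\nu_{i1}, \dots, \nu_{iK})^{\top} \sim \mathrm{N}(\mathbf{0}, \Sigma_K)

% Contrast-based (CB) sketch: relative effect versus the trial baseline b(i)
g(p_{ik}) = \alpha_i + d_{b(i)k} + \delta_{ik}, \qquad d_{kk} = 0
```

In the AB form, every treatment mean $\mu_k$ is directly estimable, which is what makes the missing-data interpretation natural; in the CB form, only the contrasts $d_{b(i)k}$ are identified and the trial intercepts $\alpha_i$ are nuisance parameters.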
After detecting inconsistency, formal diagnostic tests should be performed to check whether this violation of the consistency assumption changes the estimated treatment effects. We therefore explore whether the trial-arm combinations that are sources of inconsistency are influential or outlying observations. To do this, we modify the "constraint case" method to produce diagnostics suitable for generalized linear models in NMA with binary outcomes, using either AB or CB models. Lastly, we develop methods to combine data from a randomized clinical trial and a propensity score-matched non-randomized study using commensurate priors. The approach determines the appropriate degree of borrowing from the non-randomized data by the similarity of the estimated treatment effects in the two studies. The performance of all our methods is evaluated via both example datasets and simulation studies. In summary, this dissertation enables improved research synthesis in biomedical applications and sheds light on future research directions in these areas.

Item: Statistical methods for multivariate meta-analysis (2018-07)
Lian, Qinshu

As health problems become more complicated, medical decisions and policies are rarely determined by evidence on a single effect. In recent years, the drawbacks of using separate univariate meta-analyses to address a clearly multivariate problem have been widely acknowledged. This has led to increased attention to multivariate meta-analysis, a generalization of standard univariate meta-analysis that synthesizes evidence on multiple outcomes or treatments. Recent developments in multivariate meta-analysis have been driven by a wide variety of application areas.
This thesis focuses on three areas in which multivariate meta-analysis is highly important but not yet well developed: network meta-analysis of diagnostic tests, meta-analysis of observational studies accounting for exposure misclassification, and meta-regression methods adjusting for post-randomization variables.

In studies evaluating the accuracy of diagnostic tests, three designs are commonly used: crossover, randomized, and non-comparative. Existing methods for meta-analysis of diagnostic tests mainly consider simple cases in which the reference test in all or none of the studies can be considered a gold standard, and in which all studies use either a randomized or a non-comparative design. To overcome these limitations, the Bayesian hierarchical summary receiver operating characteristic model is extended to network meta-analysis of diagnostic tests, allowing multiple tests to be compared simultaneously within a missing-data framework. The method accounts for correlations between multiple tests and for heterogeneity between studies. It also allows different studies to include different subsets of diagnostic tests and provides flexibility in the choice of summary statistics.

In observational studies, misclassification of exposure is ubiquitous and can substantially bias the estimated association between an outcome and an exposure. Although misclassification in a single observational study has been well studied, few papers have considered it in a meta-analysis. A novel Bayesian approach is proposed to fill this methodological gap: we simultaneously synthesize two (or more) meta-analyses, one on the association between a misclassified exposure and an outcome (main studies), and the other on the association between the misclassified exposure and the true exposure (validation studies). We extend the current scope for using external validation data by relaxing the "transportability" assumption by means of random-effects models.
The proposed model accounts for heterogeneity between studies and can be extended to allow different studies to use different exposure measurements.

Meta-regression is widely used in systematic reviews to investigate sources of heterogeneity and the association of study-level covariates with treatment effectiveness. Although existing meta-regression approaches have been successful in adjusting for baseline covariates, they have several limitations when adjusting for post-randomization variables. We propose a joint meta-regression method that adjusts for post-randomization variables by simultaneously estimating the treatment effect on the primary outcome and on the post-randomization variables. It takes both between- and within-study variability in the post-randomization variables into account. Missing data are allowed in both the primary outcome and the post-randomization variables, and the resulting uncertainty is taken into consideration. All proposed models are evaluated in simulation studies and illustrated using real meta-analytic datasets.
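The exposure-misclassification problem in the second abstract can be illustrated, in a much-simplified single-study form, by the classical Rogan-Gladen correction: given the sensitivity and specificity of the exposure measurement (quantities the thesis would instead estimate from validation studies within a Bayesian hierarchy), the observed exposure prevalence can be adjusted back toward the truth. The function name and numbers below are illustrative, not from the thesis.

```python
def rogan_gladen(p_obs: float, sens: float, spec: float) -> float:
    """Adjust an observed exposure prevalence for misclassification.

    Inverts p_obs = sens * p_true + (1 - spec) * (1 - p_true),
    which requires sens + spec > 1 (an informative measurement).
    """
    denom = sens + spec - 1.0
    if denom <= 0:
        raise ValueError("sensitivity + specificity must exceed 1")
    p_true = (p_obs - (1.0 - spec)) / denom
    # Clamp to [0, 1]: sampling error can push the estimate outside.
    return min(max(p_true, 0.0), 1.0)

# If the true prevalence is 0.30 and the measurement has sens = 0.9,
# spec = 0.8, the expected observed prevalence is
# 0.9 * 0.3 + 0.2 * 0.7 = 0.41; the correction recovers 0.30.
print(round(rogan_gladen(0.41, 0.9, 0.8), 2))  # -> 0.3
```

This deterministic correction ignores the uncertainty in sensitivity and specificity and any between-study heterogeneity; propagating those, which is where the meta-analytic random-effects machinery described above comes in, is precisely what the proposed Bayesian approach addresses.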