Browsing by Subject "partial verification bias"
Item
Statistical methods for multivariate meta-analysis of diagnostic tests (2015-05)
Ma, Xiaoye

Accurate diagnosis is often the first step towards the treatment and prevention of disease. Many quantitative comparisons of diagnostic tests have relied on meta-analyses, statistical methods that synthesize all available information across clinical studies. Moreover, to compare the growing number of diagnostic tests for a given disease effectively, efficient statistical methods for comparing multiple tests simultaneously are urgently needed to help physicians and patients make better decisions. In the literature on meta-analysis of diagnostic tests (MA-DT), discussion has focused on statistical models under two scenarios: (1) when the reference test can be considered a gold standard, and (2) when it cannot. We present an overview of statistical methods for MA-DT in both scenarios. This dissertation covers both conventional and advanced multivariate approaches for the first scenario, and a latent class random effects model for the case where the reference test itself is imperfect.

Because study designs and populations vary, the definition of disease status or severity may differ across studies. A trivariate generalized linear mixed model (TGLMM) has been proposed to account for this situation; however, its application is limited to cohort studies. In practice, meta-analytic data are often a mixture of cohort and case-control studies. In addition, some diagnostic accuracy studies verify only a subset of subjects with the reference test, a well-known source of partial verification bias in single studies. The impact of this bias on a meta-analysis has not been investigated. We propose a novel hybrid Bayesian hierarchical model that combines cohort and case-control studies and corrects partial verification bias at the same time. A recent paper proposed an intent-to-diagnose approach to handle non-evaluable index test results and discussed several alternatives; however, no simulation studies have been conducted to assess the performance of these methods. We propose an extended TGLMM to handle non-evaluable index test results, and we examine the performance of the intent-to-diagnose approach, the alternatives, and the proposed approach through extensive simulation studies.

To compare the accuracy of multiple tests in a single study, three designs are commonly used: (1) the multiple test comparison design; (2) the randomized design; and (3) the non-comparative design. Existing MA-DT methods have focused on evaluating the performance of a single test by comparing it with a reference test. The growing number of diagnostic instruments for a given condition, together with the variety of study designs in use, calls for an efficient and flexible meta-analysis framework that combines all designs for simultaneous inference. We develop a missing-data framework and a Bayesian hierarchical model for network meta-analysis of diagnostic tests (NMA-DT) that offer key advantages over traditional MA-DT methods.
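
For orientation, the TGLMM named in the abstract is commonly formulated in this literature as a joint model of study-specific sensitivity, specificity, and disease prevalence on the logit scale. The LaTeX sketch below assumes that standard form; it is not necessarily the dissertation's exact parameterization.

% Minimal sketch of a trivariate GLMM for MA-DT (assumed standard form,
% not necessarily the dissertation's exact specification).
% Se_i, Sp_i, pi_i: sensitivity, specificity, and disease prevalence in
% study i; (nu_1i, nu_2i, nu_3i): trivariate normal random effects.
\begin{align*}
  \operatorname{logit}(\mathrm{Se}_i) &= \mu_1 + \nu_{1i},\\
  \operatorname{logit}(\mathrm{Sp}_i) &= \mu_2 + \nu_{2i},\\
  \operatorname{logit}(\pi_i)         &= \mu_3 + \nu_{3i},\\
  (\nu_{1i},\nu_{2i},\nu_{3i})^{\top} &\sim N(\mathbf{0},\,\Sigma).
\end{align*}
% Sigma is an unstructured 3x3 covariance matrix capturing between-study
% heterogeneity and the correlations among the three quantities.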
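
Similarly, the partial verification bias the abstract refers to has a standard single-study correction due to Begg and Greenes: assuming verification depends only on the index test result (a missing-at-random mechanism), bias-corrected sensitivity and specificity take the form sketched below. This is a textbook formula for orientation, not the Bayesian correction developed in the dissertation.

% Begg-Greenes correction (single study). V: verification indicator,
% assumed to depend only on the index test result T; D: disease status
% per the reference test. Hats denote sample estimates.
\begin{align*}
  \widehat{\mathrm{Se}} &=
    \frac{\hat{P}(T^{+})\,\hat{P}(D^{+}\mid T^{+})}
         {\hat{P}(T^{+})\,\hat{P}(D^{+}\mid T^{+})
          + \hat{P}(T^{-})\,\hat{P}(D^{+}\mid T^{-})},\\[4pt]
  \widehat{\mathrm{Sp}} &=
    \frac{\hat{P}(T^{-})\,\hat{P}(D^{-}\mid T^{-})}
         {\hat{P}(T^{-})\,\hat{P}(D^{-}\mid T^{-})
          + \hat{P}(T^{+})\,\hat{P}(D^{-}\mid T^{+})}.
\end{align*}
% P(T+/-) is estimated from all subjects; P(D|T) from the verified subset
% only. Naive estimates that discard unverified subjects typically
% overstate sensitivity and understate specificity under this mechanism.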