Browsing by Subject "Replicability"
Item
Innovative Statistical Methods for Meta-analyses with Between-study Heterogeneity (2022-06)
Xiao, Mengli

To assess the benefits and harms of medical interventions, meta-analysis plays an important role in combining results from multiple studies. While the notion of combining independent results is motivated by similarities between studies, a pooled estimate may be insufficient in the presence of between-study heterogeneity. Such heterogeneity can arise because the studies are: 1) different and unrelated (possibly due to a mixture of non-replicable study findings); 2) different but similar (i.e., drawn from the same distribution); or 3) amenable to modeling with covariates. In the first case, the studies do not replicate each other and meta-analysis is not an appropriate option. In the second, a random-effects model may be used to reflect the similarity of the studies, and in the third, a meta-regression analysis is suggested. To distinguish the first case from the others, it is essential to have a statistical framework for establishing whether multiple studies give sufficiently similar results, i.e., replicate each other, before undertaking a meta-analysis. Traditional meta-analysis approaches, however, cannot effectively determine whether a between-study difference reflects non-replicability or unknown study-specific characteristics, and no rigorous statistical methods exist to characterize the non-replicability of multiple studies in a meta-analysis.

In Chapter 2, we introduce a new measure, the externally standardized residuals from a leave-m-studies-out procedure, to quantify replicability. We explore its asymptotic properties and use extensive simulations and three real-data studies to illustrate the measure's performance. We also provide the R package "repMeta" to implement the proposed approach.

The remainder of the dissertation concerns scenarios in which substantial heterogeneity remains among replicable studies. Such heterogeneity may or may not be reduced by incorporating available covariates, because the sources of effect heterogeneity are commonly unknown and unmeasured. A proxy for those unknown and unmeasured factors may still be available in a meta-analysis, namely the baseline risk. Chapter 3 proposes a bivariate generalized linear mixed-effects model (BGLMM) to 1) account for the potential correlation of the baseline risk with the treatment effect measure and 2) obtain effect estimates conditional on the baseline risk. We demonstrate a strong negative correlation between study effects and the baseline risk, and the conditional effects vary notably with the baseline risk.

Chapter 4 reinforces the recommendation that a meta-analysis should model heterogeneity in effect measures with respect to baseline risks and study conditions. It finds that two commonly used binary effect measures, the odds ratio (OR) and the risk ratio (RR), show a similar dependence on the baseline risk across 20,198 meta-analyses from the Cochrane Database of Systematic Reviews, a leading source of healthcare evidence. This empirical evidence contradicts the mistaken argument that the OR does not vary with study conditions. We illustrate that understanding effect heterogeneity is essential to patient-centered practice in an actual meta-analysis of interventions for chronic hepatitis B virus infection.
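A minimal sketch of the leave-out idea behind Chapter 2, simplified to leaving one study out under a common-effect working model; this is an illustration only, not the interface or the exact estimator of the repMeta package:

```r
# Externally standardized leave-one-out residuals for study effect estimates y
# with within-study variances v, under a common-effect working model.
loo_standardized_residuals <- function(y, v) {
  k <- length(y)
  z <- numeric(k)
  for (i in seq_len(k)) {
    w <- 1 / v[-i]                          # inverse-variance weights with study i left out
    mu_minus_i  <- sum(w * y[-i]) / sum(w)  # pooled estimate without study i
    var_minus_i <- 1 / sum(w)               # variance of that pooled estimate
    # compare study i with the estimate formed from the remaining studies
    z[i] <- (y[i] - mu_minus_i) / sqrt(v[i] + var_minus_i)
  }
  z
}

# Toy data (hypothetical values): five log odds ratios and their variances
y <- c(0.21, 0.35, 0.18, 1.40, 0.25)
v <- c(0.04, 0.05, 0.03, 0.06, 0.05)
round(loo_standardized_residuals(y, v), 2)
# A large |z| (here, study 4) flags a result the remaining studies do not replicate.
```

The arithmetic behind Chapter 4's point can likewise be made concrete with a toy calculation (illustrative only, not the chapter's empirical analysis of the Cochrane data): even if the OR were held constant across studies, the implied RR would still change with the baseline risk, so at least one of the two measures necessarily depends on study conditions.

```r
# With a constant odds ratio assumed, the implied risk ratio still shifts
# with the baseline (control-group) risk.
or <- 2                          # assumed constant odds ratio
p0 <- c(0.05, 0.20, 0.50, 0.80)  # a range of baseline risks
odds1 <- or * p0 / (1 - p0)      # treatment-group odds implied by the OR
p1 <- odds1 / (1 + odds1)        # treatment-group risk
rr <- p1 / p0                    # implied risk ratio
data.frame(baseline_risk = p0, treated_risk = round(p1, 3), risk_ratio = round(rr, 2))
```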
Item
Striking A Balance Between Psychometric Integrity and Efficiency for Assessing Reinforcement Learning and Working Memory in Psychosis-Spectrum Disorders (2021-06)
Pratt, Danielle

Cognitive deficits are well established in psychosis-spectrum disorders and are strongly related to functional outcomes for affected individuals. It is therefore imperative to measure cognition in reliable and replicable ways, particularly when assessing change over time. Notably, although computational models have revolutionized the measurement of specific cognitive abilities, their parameters are rarely assessed psychometrically. Cognitive tests often include large numbers of trials in order to improve their psychometric properties; however, long tests place undue stress on the participant, limit the amount of data that can be collected in a study, and may even yield a less accurate measurement of the domain of interest. Balancing psychometrics with efficiency can therefore lead to better assessments of cognition in psychosis. The goal of this dissertation is to establish the psychometric properties and replicability of reinforcement learning and working memory tasks and to determine the extent to which they could be made more efficient without sacrificing psychometric integrity. The results indicate that these reinforcement learning tests are appropriate for studies with a single time point but may not currently be appropriate for retest studies, owing to the learning that inherently occurs the first time the task is performed. The working memory tasks are ready for use in intervention studies, with the computational parameters of working memory appearing slightly less reliable than observed measures but potentially more sensitive to group differences. Lastly, these reinforcement learning and working memory tasks can be made 25%-50% more efficient without sacrificing reliability, and can be optimized by focusing on the items that yield the most information. Altogether, this dissertation provides guidance for using reinforcement learning and working memory tests in studies of cognition in psychosis in the most appropriate, efficient, and effective ways.

Item
Summary of Companion Guidelines on Replication and Reproducibility in Education Research (EBSS Newsletter, 2019)
Riegelman, Amy L.