Browsing by Subject "Misclassification"
Now showing 1 - 2 of 2
Item: Ascertaining the validity of suicide data to quantify the impacts and identify predictors of suicide misclassification (2023-11) Wright, Nate

Background: Misclassification plagues suicide data, but few evaluations of these data have been done, and even fewer studies have estimated the impacts of misclassification. Concerns center on true suicides being misclassified as other manners of death, since other deaths are rarely, if ever, erroneously certified as suicides. Misclassification can hamper public health surveillance by producing inaccurate counts of suicide, and it biases estimates of effect when identifying risk and protective factors for suicide. In parallel, as data sources have been enriched, data-driven machine learning methods have begun to be harnessed to predict misclassification and to identify factors or decedent characteristics associated with an increased likelihood of misclassification. Thus, there is a critical need to examine the validity of suicide data, and the ways misclassification is understood, in order to support public health interventions and policy that are directed by these data.

The overall objective of this research was to examine misclassification in suicides from death certificates and to estimate the impact and determinants of this misclassification. The central hypothesis was that suicide data were misclassified and that the validity of such data was poor. This hypothesis was tested by pursuing three specific aims.

Aim 1: Calculate estimates of sensitivity, specificity, and positive and negative predictive values for the misclassification of suicides identified from death certificates in Minnesota, overall and by industry group. Methods: A classic validation study was conducted that compared suicides reported on death certificates with suicides classified using a proxy gold standard, the Self-Directed Violence Classification System (SDVCS).
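The validity measures in Aim 1 come from cross-tabulating death-certificate classifications against the gold standard. A minimal sketch of that calculation is below; the 2x2 counts are invented for illustration, not taken from the study.

```python
# Validity of death-certificate suicide classification against a gold
# standard (e.g., SDVCS), summarized from a 2x2 table.
# tp/fp/fn/tn counts below are illustrative only.

def validity_measures(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV for a 2x2 table."""
    sensitivity = tp / (tp + fn)   # gold-standard suicides correctly certified
    specificity = tn / (tn + fp)   # gold-standard non-suicides correctly certified
    ppv = tp / (tp + fp)           # certified suicides that are true suicides
    npv = tn / (tn + fn)           # certified non-suicides that are truly not
    return sensitivity, specificity, ppv, npv

se, sp, ppv, npv = validity_measures(tp=180, fp=5, fn=20, tn=795)
print(f"sensitivity={se:.3f} specificity={sp:.3f} ppv={ppv:.3f} npv={npv:.3f}")
```

In a stratified analysis such as Aim 1's by-industry estimates, the same function would simply be applied to each industry group's 2x2 table.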
Results: Contrary to our hypothesis, minimal misclassification of suicides was identified. One exception was observed in the Armed Forces industry, where relatively poor sensitivity estimates suggested potential underreporting of suicide. The data abstraction process, however, revealed common circumstances and risk factors shared between suicides and non-suicides, including mental and physical health diagnoses, substance use, and treatment for mental health and substance use conditions.

Aim 2a: Demonstrate the impact of misclassification on suicide incidence by applying estimates of sensitivity and specificity to suicide counts from death certificates. Aim 2b: Determine the impact of misclassification on the association between opioid use and suicide through misclassification bias analysis. Methods: For Aim 2a, suicide counts along with sensitivity and specificity estimates were used to calculate and compare corrected suicide incidence rates. For Aim 2b, a probabilistic misclassification bias analysis was conducted in which estimates of sensitivity and specificity were used to produce a record-level bias-adjusted data set. The bias-adjusted data were then used to calculate the measure of association between opioid use and suicide, which was compared with the result from the original data.

Results: The estimated true incidence of suicide increased in every industry sector after misclassification was accounted for, and this finding was consistent across the validity scenarios. In the misclassification bias analysis, the odds ratio showed that opioid-involved deaths were 0.25 (95% CI: 0.20, 0.32) times as likely to be classified as suicide as non-opioid-involved deaths. After correcting for misclassification, the association estimate did not change meaningfully from the original (0.22; 95% simulation interval: 0.07, 0.32).

Aim 3: Identify factors indicative and predictive of suicide misclassification through machine learning.
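The correction in Aim 2a can be sketched with the standard Rogan-Gladen estimator, which back-calculates a corrected count from the observed count using sensitivity and specificity; the probabilistic bias analysis of Aim 2b extends this idea by drawing sensitivity and specificity from prior distributions. The counts and beta parameters below are illustrative assumptions, not the study's values, and this is a simplified stand-in for the record-level adjustment described above.

```python
import random

def rogan_gladen(observed, n, sensitivity, specificity):
    """Corrected count of true positives given the apparent count among n records."""
    p_apparent = observed / n
    p_true = (p_apparent + specificity - 1) / (sensitivity + specificity - 1)
    return max(0.0, min(1.0, p_true)) * n   # clamp to a valid proportion

# Single correction with fixed validity estimates (illustrative values).
corrected = rogan_gladen(observed=150, n=10_000, sensitivity=0.90, specificity=0.999)

# Probabilistic version: propagate uncertainty in sensitivity and specificity
# by drawing them from beta distributions, as in a simple bias analysis.
random.seed(1)
draws = []
for _ in range(5_000):
    se = random.betavariate(90, 10)    # assumed prior centered near 0.90
    sp = random.betavariate(999, 1)    # assumed prior centered near 0.999
    draws.append(rogan_gladen(150, 10_000, se, sp))
draws.sort()
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"corrected count: {corrected:.0f} (95% simulation interval {lo:.0f}-{hi:.0f})")
```

With imperfect sensitivity and near-perfect specificity, the corrected count exceeds the observed count, which mirrors the finding above that incidence increased after misclassification was accounted for.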
Methods: Aim 3 was attained by developing classification algorithms that predicted suicide misclassification and identified associated risk factors under various suicide comparison scenarios (i.e., medical examiner/coroner-certified suicides, probable suicides, and possible suicides).

Results: Accurate models were developed across the suicide comparison scenarios; they performed consistently well and offered valuable insights into suicide misclassification. The top variables influencing the classification of overdose suicides included previous suicidal behaviors, the presence of a suicide note, substance use history, and evidence of mental health treatment. Treatment for pain, recent release from an institution, and prior overdose were also important factors that had not previously been identified as predictors of suicide classification.

Conclusion: This research was innovative because it represented a substantive departure from merely acknowledging suicide misclassification toward understanding and correcting measures of suicide incidence and association, along with identifying factors indicative of misclassification. However, minimal evidence of misclassification was found, and the misclassification bias analysis showed no effect on the association estimate. Data limitations, such as circumstance factors that were missing or not collected, along with a control group that may not satisfy the exchangeability assumption, likely affected the results. Novel factors associated with suicide misclassification were identified, providing a foundation for future research to build upon. The need remains, though, for accurate and valid suicide data, both for public health surveillance and to produce unbiased association estimates that identify risk and predictive factors of suicide.
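The variable-importance rankings described in Aim 3 can be illustrated with the Gini impurity reduction of a single split, the criterion underlying importance in tree-based classifiers. This is a toy stand-in, not the study's models: the records, labels, and feature names below are invented for illustration.

```python
# Rank binary decedent-record features by Gini impurity reduction, the
# split criterion used for variable importance in tree-based classifiers.
# All data and feature names here are invented for illustration.

def gini(labels):
    """Gini impurity of a set of binary labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def importance(records, labels, feature):
    """Impurity reduction from splitting the records on one binary feature."""
    left = [y for r, y in zip(records, labels) if r[feature]]
    right = [y for r, y in zip(records, labels) if not r[feature]]
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
    return gini(labels) - weighted

features = ["suicide_note", "prior_suicidal_behavior", "pain_treatment"]
records = [
    {"suicide_note": 1, "prior_suicidal_behavior": 1, "pain_treatment": 0},
    {"suicide_note": 1, "prior_suicidal_behavior": 0, "pain_treatment": 0},
    {"suicide_note": 0, "prior_suicidal_behavior": 1, "pain_treatment": 1},
    {"suicide_note": 0, "prior_suicidal_behavior": 0, "pain_treatment": 1},
    {"suicide_note": 0, "prior_suicidal_behavior": 0, "pain_treatment": 0},
    {"suicide_note": 0, "prior_suicidal_behavior": 0, "pain_treatment": 1},
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = record classified as suicide

ranked = sorted(features, key=lambda f: importance(records, labels, f), reverse=True)
print(ranked)
```

The study's actual models were built over many more variables and records, but the ranking principle, how much a variable's split purifies the suicide/non-suicide classes, is the same.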
The extent to which suicide statistics are used by researchers and policy makers mandates that efforts be made to understand and improve the validity of suicide data.

Item: Statistical methods for multivariate meta-analysis (2018-07) Lian, Qinshu

As health problems become more complicated, medical decisions and policies are rarely determined by evidence on a single effect. In recent years, the drawbacks of using separate univariate meta-analyses to solve a clearly multivariate problem have become widely acknowledged. This has led to increased attention to multivariate meta-analysis, a generalization of standard univariate meta-analysis that synthesizes evidence on multiple outcomes or treatments. Recent developments in multivariate meta-analysis have been driven by a wide variety of application areas. This thesis focuses on three areas in which multivariate meta-analysis is highly important but not yet well developed: network meta-analysis of diagnostic tests, meta-analysis of observational studies accounting for exposure misclassification, and meta-regression methods adjusting for post-randomization variables.

In studies evaluating the accuracy of diagnostic tests, three designs are commonly used: crossover, randomized, and non-comparative. Existing methods for meta-analysis of diagnostic tests mainly consider simple cases in which the reference test in all or none of the studies can be considered a gold standard, and in which all studies use either a randomized or non-comparative design. To overcome these limitations, the Bayesian hierarchical summary receiver operating characteristic model is extended to network meta-analysis of diagnostic tests to simultaneously compare multiple tests within a missing-data framework. The method accounts for correlations between multiple tests and for heterogeneity between studies.
It also allows different studies to include different subsets of diagnostic tests and provides flexibility in the choice of summary statistics.

In observational studies, misclassification of exposure is ubiquitous and can substantially bias the estimated association between an outcome and an exposure. Although misclassification in a single observational study has been well studied, few papers have considered it in a meta-analysis. A novel Bayesian approach is proposed to fill this methodological gap. We simultaneously synthesize two (or more) meta-analyses: one on the association between a misclassified exposure and an outcome (main studies), and the other on the association between the misclassified exposure and the true exposure (validation studies). We extend the current scope for using external validation data by relaxing the "transportability" assumption by means of random-effects models. The proposed model accounts for heterogeneity between studies and can be extended to allow different studies to have different exposure measurements.

Meta-regression is widely used in systematic reviews to investigate sources of heterogeneity and the association of study-level covariates with treatment effectiveness. Although existing meta-regression approaches have been successful in adjusting for baseline covariates, they have several limitations in adjusting for post-randomization variables. We propose a joint meta-regression method that adjusts for post-randomization variables by simultaneously estimating the treatment effect on the primary outcome and on the post-randomization variables. It takes both between- and within-study variability in post-randomization variables into consideration. Missing data are allowed in both the primary outcome and the post-randomization variables, and the uncertainty due to missingness is taken into account.
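As a point of reference for the multivariate methods above, the univariate baseline they generalize is random-effects pooling of per-study effect estimates. A minimal sketch using the DerSimonian-Laird moment estimator of between-study variance is below; the effect sizes and variances are invented for illustration.

```python
import math

def dersimonian_laird(effects, variances):
    """Univariate random-effects pooling via the DerSimonian-Laird estimator."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2

# Illustrative per-study log odds ratios and within-study variances.
effects = [0.10, 0.30, 0.25, 0.05]
variances = [0.04, 0.02, 0.03, 0.05]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled={pooled:.3f} se={se:.3f} tau2={tau2:.4f}")
```

The thesis's contributions replace this scalar pooling with joint models over multiple outcomes, tests, or mismeasured exposures, but the weighting-by-precision logic and the between-study variance component carry over.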
All the proposed models are evaluated in simulation studies and are illustrated using real meta-analytic datasets.