Browsing by Subject "Bias"
Now showing 1 - 5 of 5
Item
Evaluating bias caused by screening in observational risk-factor studies of lung cancer nested in the PLCO randomized screening trial (2009-09)
Jansen, Ricky Jeffrey
It is well known that biases such as lead-time and length bias distort studies of screening efficacy, whether survival or incidence is of interest. A third bias, usually called overdiagnosis bias, occurs when an individual is diagnosed with disease before death from another cause only because he or she is screened. These forms of bias can also arise in observational studies where the proportion screened and the screening rates vary across risk-factor strata. This difference in screening behavior influences the corresponding case-ascertainment or case-enrollment probabilities, which can lead to erroneous conclusions about the size of the risk-factor effect on the disease. It has been suggested that classic confounding occurs in such risk-factor studies when screening is efficacious, and that it can therefore be addressed by conventional analyses such as stratification or confounder adjustment in regression models. However, even if the test is not efficacious, screening changes case-ascertainment probabilities, and this must be addressed with alternative methods. Recurrence-time models, long used for screening programs, can be adapted to model the effect screening use has on risk-factor studies. These models can be used to study the magnitude of potential bias, and they may also be adapted to provide an analytic approach for correcting estimates for such bias. The risk-factor studies nested in the PLCO trial are potentially affected by such bias, and this randomized study also provides a structure within which models of screening bias may be tested and validated. To validate our model, a variety of nested case-control studies will be developed that measure the effect smoking has on lung cancer, and the degree to which the bias affecting those estimates changes with the study design will be determined. This process will include a) expanding a previously developed lead-time bias model to incorporate length and overdiagnosis bias; b) incorporating a more flexible and realistic model of screening that can incorporate the patterns documented in the PLCO trial; c) exploring whether the mathematical model is valid by using varied nested study designs within the PLCO and comparing the resulting logistic regression estimates to simulated results; and d) using the validated models to produce correction factors for use in other nested risk-factor studies. Results indicate that the mathematical model is highly sensitive to overdiagnosis, with increasing rates increasing the expected bias, but relatively insensitive to the choice of screening test sensitivity. Increasing the screening behavior differential during the study, increasing the preclinical duration, and selecting from the intervention group are associated with increasing expected screening bias. Increasing screening behavior before the study and selecting from the usual-care group are associated with decreasing expected screening bias. Although the mathematical model could not be validated as a correction factor here, the results suggest that using a shorter preclinical duration distribution may produce more accurate screening bias values. The focus of this work was to identify whether chest x-ray screening could modify the estimated risk of smoking on lung cancer diagnosis. An additional goal was to develop a usable method for adjusting observational studies of lung cancer for the bias arising from differential chest x-ray screening between ever- and never-smoking groups. In a broader sense, this work has provided an explanation of the effect screening use may have on an observational risk-factor study and an example of how to implement the mathematical technique. Additionally, this project has provided a more general method for performing sensitivity analyses on the screening-related assumptions involved in these studies, whether nested in a randomized trial or sampled from the population at large.
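To make the screening-bias mechanism described in this abstract concrete, the following is a minimal Monte Carlo sketch, not the author's recurrence-time model: it draws a hypothetical cohort in which smokers are screened more often than non-smokers, applies a fixed lead time and a small overdiagnosis probability to screened subjects, and compares the crude exposure odds ratio observed under different screening patterns. All rates, probabilities, and durations below are made-up placeholders, not PLCO parameters.

```python
# Illustrative sketch only: how differential screening between exposure strata
# can shift a risk-factor odds ratio. Parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
FOLLOW_UP = 5.0  # years of case ascertainment in the hypothetical study

def crude_odds_ratio(p_screen_smokers, p_screen_nonsmokers,
                     true_rr=10.0, base_rate=0.001,
                     lead_time=1.5, p_overdx=0.002):
    """Crude exposure odds ratio observed during the follow-up window."""
    smoker = rng.random(N) < 0.5                           # exposure stratum
    onset_rate = np.where(smoker, base_rate * true_rr, base_rate)
    onset = rng.exponential(1.0 / onset_rate)              # clinical onset time
    screened = rng.random(N) < np.where(smoker, p_screen_smokers,
                                        p_screen_nonsmokers)
    # Screening advances diagnosis by a fixed lead time and adds a small
    # chance of overdiagnosis (a case that would never surface clinically).
    dx_time = np.where(screened, onset - lead_time, onset)
    overdx = screened & (rng.random(N) < p_overdx)
    case = (dx_time < FOLLOW_UP) | overdx
    a, b = np.sum(case & smoker), np.sum(~case & smoker)
    c, d = np.sum(case & ~smoker), np.sum(~case & ~smoker)
    return (a * d) / (b * c)

for label, ps, pn in [("no screening", 0.0, 0.0),
                      ("equal screening", 0.5, 0.5),
                      ("differential screening", 0.8, 0.2)]:
    print(f"{label:>24}: OR = {crude_odds_ratio(ps, pn):.1f}")
```

Under assumptions like these, the estimated odds ratio shifts with the screening pattern even though the underlying smoking effect never changes, which is the phenomenon the correction factors in the thesis are meant to address.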
Item
Forecasting inaccuracies: a result of unexpected events, optimism bias, technical problems, or strategic misrepresentation? (Journal of Transport and Land Use, 2015)
Naess, Petter; Andersen, Jeppe Astrup; Nicolaisen, Morten Skou; Strand, Arvid
Based on the results of a questionnaire survey and qualitative interviews among different actors involved in traffic forecasting, this paper discusses what evidence can be found in support of competing explanations of forecasting errors. There are indications that technical problems and manipulation, and to a lesser extent optimism bias, may be part of the explanation of the observed systematic biases in forecasting. In addition, unexpected events can render forecasts erroneous, and many respondents and interviewees consider it simply not possible to make precise predictions about the future. The results give rise to some critical reflections about the reliability of project evaluations based on traffic forecasts that are susceptible to several systematic as well as random sources of error.

Item
Habits of meaning: when legal education and other professional training attenuate bias in social judgments (2012-05)
Girvan, Erik James
Social-cognitive theory explains the persistence of social bias in terms of the automatic placement of individuals into social categories, the function of which is to conserve cognitive resources while providing a basis for some (even if inaccurate) inferences. Within that paradigm, bias attenuation involves transcending social categorization through effortful individuation. Research on learning and expertise supports an alternative perspective: that training to categorize entire situations using, for example, legal rules, their implications, and associated responses can attenuate bias in social judgments by displacing or reducing the need to rely on social categorization. The Competing Category Application Model (CCAM), a novel model of the effects of expertise on the use of social stereotypes in judgment and decision-making, is proposed and tested. The results of three experimental studies provide strong evidence for the CCAM. Across the studies, the liability decisions of untrained participants, participants trained on unrelated legal rules, and participants trained on indeterminate legal rules were consistent with the use of social stereotypes. By comparison, such stereotypes did not affect the decisions of trained participants who were applying determinate legal rules. Implications of the results and future directions are discussed.

Item
Political disagreement and decision-making in American politics (2013-06)
Sheagley, Geoffrey David
This dissertation explores how political disagreement and disagreeable information shape the nature and quality of citizens' political judgments. People encounter disagreeable information on a routine basis, yet little is known about how exposure to this kind of information shapes people's political decision-making. I examine if and when exposure to political disagreement and disagreeable information leads people to make open-minded, accurate political judgments rather than closed-minded, biased decisions. Using a series of experiments, I demonstrate that exposure to high levels of political disagreement can shape how people make judgments and that, at times, it leads people to be more open-minded and accurate in their approach to decision-making. This research has important implications for understanding how inherent features of the democratic process shape the quality of citizens' judgments.
Item
Potential bias associated with modeling the effectiveness of treatment using an overall hazard ratio (SAGE, 2014-10)
Alarid-Escudero, Fernando
Purpose: Clinical trials often report treatment efficacy in terms of the reduction in all-cause mortality [i.e., the overall hazard ratio (OHR)] rather than the reduction in disease-specific mortality [i.e., the disease-specific hazard ratio (DSHR)]. Using an OHR to reduce all-cause mortality beyond the time horizon of the clinical trial may introduce bias if the relative proportion of other-cause mortality increases with age. We aim to quantify this bias. Methods: We simulated a hypothetical cohort of patients with a generic disease that increases the age-, sex-, and race-specific mortality rate (μASR) by a constant additive disease-specific rate (μDis). We assumed a DSHR of 0.75 (unreported) and an OHR of 0.80 (reported, derived from the DSHR and assumptions about the clinical trial population). We quantified the bias as the difference in life expectancy (LE) gains with treatment between an OHR approach that reduces all-cause mortality over a lifetime [(μASR + μDis) × OHR] and a DSHR approach that reduces disease-specific mortality [μASR + (μDis × DSHR)]. We varied the starting age of the cohort from 40 to 70 years. Results: The OHR bias increases as the DSHR decreases and with younger starting ages of the cohort. For a cohort of 60-year-old sick patients, the mortality rate under the OHR approach crosses μASR at age 90 (see figure), and the LE gain is overestimated by 0.6 years (a 3.7% increase). We also used the OHR as an estimate of the DSHR [μASR + (μDis × OHR)], as the latter is often not reported. This resulted in a slight shift in the mortality rate compared with the DSHR approach (see figure), yielding an underestimation of the LE gain. Conclusions: Using an OHR approach to model treatment effectiveness beyond the time horizon of the trial overestimates the effectiveness of the treatment. Under an OHR approach, it is possible that sick individuals will at some point face a lower mortality rate than healthy individuals. We recommend either deriving a DSHR from trials and using the DSHR approach, or using the OHR as an estimate of the DSHR in the model, which is a conservative assumption.
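As a rough illustration of the comparison this abstract describes, the sketch below computes LE gains under the DSHR approach, the OHR approach, and the OHR-substituted-for-DSHR variant for a synthetic cohort starting at age 60. The Gompertz background mortality curve and the constant disease-specific rate are hypothetical placeholders, not the values used in the study, so the numbers indicate only the direction of the effect.

```python
# Illustrative sketch of the OHR-vs-DSHR comparison; inputs are made-up placeholders.
import numpy as np

ages = np.arange(60, 111)                    # cohort followed from age 60 to 110
mu_asr = 3e-5 * np.exp(0.09 * ages)          # hypothetical Gompertz background rate
mu_dis = 0.02                                # constant additive disease-specific rate
DSHR, OHR = 0.75, 0.80

def life_expectancy(total_rate):
    """Approximate LE as the sum of yearly survival probabilities."""
    return np.exp(-np.cumsum(total_rate)).sum()

le_untreated   = life_expectancy(mu_asr + mu_dis)
le_dshr        = life_expectancy(mu_asr + mu_dis * DSHR)    # DSHR approach
le_ohr         = life_expectancy((mu_asr + mu_dis) * OHR)   # OHR approach
le_ohr_as_dshr = life_expectancy(mu_asr + mu_dis * OHR)     # OHR used in place of DSHR

print(f"LE gain, DSHR approach       : {le_dshr - le_untreated:.2f} years")
print(f"LE gain, OHR approach        : {le_ohr - le_untreated:.2f} years")
print(f"LE gain, OHR in place of DSHR: {le_ohr_as_dshr - le_untreated:.2f} years")
```

With these toy inputs the OHR approach credits treatment with the largest apparent gain, because it also discounts background mortality at old ages, while substituting the OHR for the DSHR yields a slightly smaller gain than the DSHR approach, consistent with the conservative recommendation above.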