Choosing a “Source of Truth”: The Implications of using Self versus Interviewer Ratings of Interviewee Personality as Training Data for Language-Based Personality Assessments


Published Date

2022-12

Type

Thesis or Dissertation

Abstract

Advances in the application of machine learning (ML) and natural language processing (NLP) to psychological measurement have primarily focused on new NLP techniques, new data sources (e.g., social media), or cutting-edge ML models. However, research, particularly in psychology, has paid little attention to the importance of criterion choice when training ML and NLP models. Core to almost all models designed to predict psychological constructs or attributes is the choice of a “source of truth”: models are optimized to predict something, so the choice of scores the models are trained to predict (e.g., self-reported personality) is critical to understanding the constructs reflected by ML- and NLP-based measures. The goal of this study was to begin to understand the nuances of selecting a “source of truth” by identifying and exploring the methodological effects attributable to that choice when generating language-based personality scores. Four primary findings emerged. First, in the context of scoring interview transcripts, there was a clear performance difference between language-based models predicting self-reported scores and those predicting interviewer ratings: language-based models predicted interviewer ratings much better than self-reported ratings of conscientiousness. Second, this study provides some of the first explicit empirical evidence of the method effects that can occur in language-based scores. Third, there were clear differences between the psychometric properties of language-based self-report scores and language-based interviewer rating scores, and these patterns appeared to result from a proxy effect, in which the psychometric properties of the language-based scores mimicked those of the human ratings from which they were derived.
Fourth, although there was evidence of a proxy effect, language-based scores had slightly different psychometric properties than the scores they were trained on, suggesting that it would not be appropriate to assume the psychometric properties of a language-based assessment directly from the ratings the model was trained on. Ultimately, this study is one of the first attempts to isolate and understand the modular effects of language-based assessment methods, and future research should continue applying psychometric theory and research to advances in language-based psychological assessment tools.
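The first finding — that text-derived models recover interviewer ratings better than self-reports — can be illustrated with a minimal simulation. This is not the dissertation's code or data; it is a hypothetical sketch assuming interviewer ratings share more variance with observable language cues than self-reports do, so a model trained on text features tracks them more closely.

```python
# Illustrative sketch only: if interviewer ratings are driven largely by
# observable language cues while self-reports contain variance the text
# cannot capture, a model fit to text features predicts the former better.
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.normal(size=(n, p))      # stand-in for language features (e.g., embeddings)
w = rng.normal(size=p)           # hypothetical true cue weights
signal = X @ w                   # cue-driven component of personality ratings

# Interviewer ratings: mostly cue-driven (small noise).
# Self-reports: the same trait plus large text-inaccessible variance.
y_interviewer = signal + rng.normal(scale=0.5, size=n)
y_self = signal + rng.normal(scale=3.0, size=n)

def holdout_r(X, y, split=200):
    """Fit OLS on the first half, return prediction-criterion r on the rest."""
    beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    pred = X[split:] @ beta
    return np.corrcoef(pred, y[split:])[0, 1]

r_int = holdout_r(X, y_interviewer)
r_self = holdout_r(X, y_self)
print(f"interviewer r = {r_int:.2f}, self-report r = {r_self:.2f}")
```

Under these assumed noise levels the holdout correlation for interviewer ratings is far higher, mirroring the pattern the abstract reports; the gap shrinks as the self-report criterion shares more variance with the language features.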

Description

University of Minnesota Ph.D. dissertation. December 2022. Major: Psychology. Advisor: Richard Landers. 1 computer file (PDF); ix, 210 pages.

Suggested citation

Auer, Elena. (2022). Choosing a “Source of Truth”: The Implications of using Self versus Interviewer Ratings of Interviewee Personality as Training Data for Language-Based Personality Assessments. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/252559.
