University Digital Conservancy

Browsing by Subject "Source of Truth"

Now showing 1 - 1 of 1
    Item
    Choosing a “Source of Truth”: The Implications of using Self versus Interviewer Ratings of Interviewee Personality as Training Data for Language-Based Personality Assessments
    (2022-12) Auer, Elena
Advances in the application of machine learning (ML) and natural language processing (NLP) to psychological measurement have primarily focused on new NLP techniques, new data sources (e.g., social media), and cutting-edge ML models. However, research, particularly in psychology, has paid little attention to the importance of criterion choice when training ML and NLP models. Core to almost all models designed to predict psychological constructs or attributes is the choice of a “source of truth.” Models are optimized to predict a particular set of scores, so the choice of target scores (e.g., self-reported personality) is critical to understanding the constructs reflected by ML- or NLP-based measures. The goal of this study was to begin to understand the nuances of selecting a “source of truth” by identifying and exploring the methodological effects attributable to that choice when generating language-based personality scores. Four primary findings emerged. First, when scoring interview transcripts, language-based models predicted interviewer ratings of conscientiousness much more accurately than self-reported ratings. Second, this study provides some of the first explicit empirical evidence of the method effects that can occur in language-based scores. Third, the psychometric properties of language-based self-report scores and language-based interviewer-rating scores differed clearly, and these patterns appeared to reflect a proxy effect in which the psychometric properties of language-based scores mimicked those of the human ratings they were derived from. Fourth, despite this proxy effect, language-based scores had slightly different psychometric properties from the scores they were trained on, suggesting that the psychometric properties of language-based assessments cannot simply be assumed from those of the ratings the models were trained on. Ultimately, this study is one of the first attempts to isolate and understand the modular effects of language-based assessment methods, and future research should continue applying psychometric theory to advances in language-based psychological assessment tools.
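
To make the criterion-choice idea concrete, the following minimal sketch (not the dissertation's actual pipeline; the toy data and all names are illustrative assumptions) trains the same text-regression model against two hypothetical criteria, self-reported versus interviewer-rated conscientiousness, and compares cross-validated predictive accuracy:

import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Toy "interview transcripts": word frequencies stand in for real language cues.
words = ["plan", "detail", "deadline", "organize", "maybe", "later"]
transcripts = [
    " ".join(rng.choice(words, size=rng.integers(20, 40)))
    for _ in range(200)
]

# Hypothetical criteria: interviewer ratings are assumed to track observable
# language cues more closely than self-reports do.
cue = np.array([t.count("plan") + t.count("organize") for t in transcripts], dtype=float)
interviewer = cue + rng.normal(0.0, 1.0, len(cue))        # language-linked criterion
self_report = 0.3 * cue + rng.normal(0.0, 2.0, len(cue))  # weakly language-linked

X = TfidfVectorizer().fit_transform(transcripts)

for label, y in [("self-report", self_report), ("interviewer", interviewer)]:
    # Cross-validated predictions avoid scoring a model on its own training fold.
    preds = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
    print(f"{label:11s} criterion: r = {pearsonr(y, preds)[0]:.2f}")

Under these assumptions, the model trained against interviewer ratings reaches a noticeably higher cross-validated r, mirroring the performance gap between criteria that the abstract reports.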
