Browsing by Subject "Judgment"
Now showing 1 - 3 of 3
Item: The role of assessment center work simulation exercises in determining or influencing assessors' judgments of leadership competencies (2011-06)
Author: Jaeger, Kerri Shear

The purpose of the study was to examine how leadership skills and abilities are measured using the assessment center method. The specific question addressed was whether the work simulation ratings made unique, incremental contributions to the overall competency ratings over and above the contributions of the cognitive ability and personality and motives testing. Archival data from a consulting firm specializing in leadership assessment were used to address the research questions. The population consisted of 200 managers and executives assessed for selection or development over three years. For each of the eight competencies, a preliminary backward stepwise multiple regression analysis was used to eliminate personality and motives inventory scales that did not contribute significantly to the overall leadership competency rating. Once the variables to retain in the full analyses had been determined, eight multiple regression analyses were conducted in which variables were entered in two blocks: the first contained the retained cognitive ability test scores and personality and motives inventory scales, and the second added the work simulation ratings.

Results showed that work simulation exercises made significant contributions to assessors' ratings of seven of the eight overall leadership competencies. Assessors considered half of the eight competencies to be trait-based and thus expected to draw more heavily on the personality and motives inventory scales and the cognitive ability test score when making judgments of these overall competency ratings. Conversely, assessors considered the other four competencies to be skill-based, suggesting greater reliance on the performance-based simulations when determining overall competency ratings. These assumptions were upheld for six of the eight competencies. Assessors expected to rely more on simulation data for two of the competencies they considered to be skill-based but in fact ended up placing more weight on the personality and motives inventories and cognitive ability test results. Implications for future research include conducting similar analyses of individual assessors' overall competency determinations, conducting predictive validity studies (for example, on-the-job performance studies that identify the most predictive sources of data), and investigating how these findings could be applied to design simulations that yield the most useful information to assessors when making their judgments of overall leadership competencies.
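The incremental-validity analysis this abstract describes (a first block of test-based predictors, a second block adding the simulation ratings, and a test of the resulting R-squared change) can be sketched as follows. This is a minimal illustration with simulated placeholder data; the names cognitive, personality, simulation, and competency are hypothetical stand-ins, not the study's actual scales.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Placeholder data: the dissertation used archival assessment records;
# these simulated scores exist only to make the procedure runnable.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "cognitive":   rng.normal(size=n),   # cognitive ability test score
    "personality": rng.normal(size=n),   # personality/motives inventory scale
    "simulation":  rng.normal(size=n),   # work simulation rating
})
df["competency"] = (0.3 * df["cognitive"] + 0.3 * df["personality"]
                    + 0.4 * df["simulation"] + rng.normal(size=n))

# Block 1: test-based predictors only.
block1 = sm.OLS(df["competency"],
                sm.add_constant(df[["cognitive", "personality"]])).fit()

# Block 2: the same predictors plus the work simulation rating.
block2 = sm.OLS(df["competency"],
                sm.add_constant(df[["cognitive", "personality", "simulation"]])).fit()

# Incremental contribution of the simulation rating: R-squared change and
# the F test for that change (1 predictor added, block-2 residual df).
delta_r2 = block2.rsquared - block1.rsquared
f_change = delta_r2 / ((1 - block2.rsquared) / block2.df_resid)
p_change = stats.f.sf(f_change, 1, block2.df_resid)
print(f"dR2 = {delta_r2:.3f}, F(1, {int(block2.df_resid)}) = {f_change:.2f}, p = {p_change:.4f}")
```

A significant F for the change indicates that the simulation ratings add predictive information beyond the test-based block, which is the pattern the abstract reports for seven of the eight competencies.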
Item: The validity of judgment: can the assessor learn to outperform the equation? (2010-09)
Author: Klieger, David M.

When judgments (i.e., predictions of outcomes) are incorrect, the negative consequences for individuals, organizations, and society can be serious. For various kinds of outcomes, meta-analyses and literature reviews reveal, time and again, that the predictive validity of information combined in the mind of the assessor ("clinical data combination") is smaller than the predictive validity when the information is combined using an equation or actuarial table ("mechanical data combination"). Using mechanical approaches instead of clinical ones would therefore seem prudent. However, judgment validity encompasses consequential validity as well as predictive accuracy.

Furthermore, even some of the scholars who have emphasized the superior accuracy of mechanical methods concede that it may be possible for a judge to systematically out-predict a mechanical method. One such approach is configural reasoning: an assessor's use of a functional form (e.g., an interaction) that is absent from the mechanical method and yet predictive of the outcome. As the studies demonstrating the superior accuracy of mechanical combination indicate, judges generally do not employ such techniques productively. Nevertheless, it remains an empirical question whether assessors can be taught to use configural reasoning to outperform an equation. In addition, it is important to identify the traits of those individuals who predict, and learn to predict, most accurately, because identifying such people can minimize the costs of error and training.

This dissertation aims to be comprehensive in scope. It employs experimental designs and measures of individual differences to answer questions about the degree (if any) to which people can be taught to outperform a mechanical equation, the degree (if any) to which assessors can learn to improve the accuracy of their judgments, the degree (if any) to which judges can be made less overconfident in their judgment strategies, the relationship of any changes in accuracy to any changes in confidence, the individual differences that characterize those who predict and learn to predict most accurately, and the timing and extent (if any) of assessors' insight into the most accurate predictive approach.

Before addressing these issues, the dissertation lays certain groundwork. It clarifies the nomological networks for clinical and mechanical combination. It reviews much of the vast research on the human cognitive limitations and informational barriers thought to contribute to a vicious cycle of lower clinical accuracy and overconfidence in judgment strategies. It also discusses why one should care about clinical combination at all if mechanical procedures are generally more accurate. The most extensive background provided before the author's studies concerns the Lens Model as a toolkit for measuring accuracy and its determinants. Although this portion of the dissertation is detailed and intricate, it is necessary: understanding the Lens Model leads to understanding the determinants of judgment accuracy, how the judge can and cannot outperform the mechanical approach, and the limitations of prior research, and it is essential if the reader is to fully understand the results, discussion, and conclusions of the author's experiments. Also reviewed are the "skill score," an alternative to the Lens Model for measuring accuracy that provides information about elevation and scatter not available from the Lens Model, and the major considerations involved in teaching people to improve their accuracy and temper their confidence.
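The Lens Model framework referred to above decomposes a judge's achievement (the correlation between judgments and outcomes) into components such as knowledge, consistency, and environmental predictability. The sketch below computes those components for generic arrays of cues, criterion values, and judgments, assuming linear models on both sides of the lens; it is an illustrative reconstruction of the standard Lens Model Equation, not code from the dissertation.

```python
import numpy as np

def lens_model_components(cues, criterion, judgments):
    """Standard Lens Model Equation components, using linear models of the
    environment (criterion on cues) and of the judge (judgments on cues)."""
    X = np.column_stack([np.ones(len(criterion)), cues])
    pred_env = X @ np.linalg.lstsq(X, criterion, rcond=None)[0]
    pred_jud = X @ np.linalg.lstsq(X, judgments, rcond=None)[0]

    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    achievement = r(judgments, criterion)   # r_a: overall judgmental accuracy
    G = r(pred_env, pred_jud)               # knowledge: matching of the two models
    R_e = r(pred_env, criterion)            # environmental predictability
    R_s = r(pred_jud, judgments)            # judge's consistency (cognitive control)
    C = r(criterion - pred_env, judgments - pred_jud)  # unmodeled, e.g. configural, component
    # Lens Model Equation: r_a = G*R_e*R_s + C*sqrt(1 - R_e**2)*sqrt(1 - R_s**2)
    return achievement, G, R_e, R_s, C
```

In this decomposition, a judge can beat a linear mechanical model essentially only through the C term (validly used configural information), while low R_s (inconsistency) caps achievement regardless of knowledge, which is why both terms matter for the questions this dissertation asks.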
Final preliminaries focus on experimental design: how and why the use of a disordinal interaction is central to the author's experiments, as well as issues concerning the number of experimental cues (predictors) employed, cue redundancy (intercorrelation), the importance of representative design, the conduciveness of various types of experimental feedback to learning, and the impact of incentives on judgment accuracy.

The author conducted two studies, one in Fall 2009 and another in Spring 2010. Although some of their experimental design details differed in important ways, their general blueprints were quite similar. Both studies used mostly undergraduate subjects at the University of Minnesota and collected information about individual differences (cognitive ability, gender, personality, interests, and experience). In the experimental portions, subjects made predictions of job performance for hypothetical job candidates based on each candidate's cognitive ability test score and how interesting or boring the candidate was expected to find the job. The most accurate clinical prediction strategy required applying the knowledge that the correlation between cognitive ability and job performance was positive when the applicant was expected to find the job interesting but negative when the applicant was expected to find the job boring (i.e., a disordinal interaction). The competing mechanical model was a linear version of the model that incorporated the disordinal interaction. Subjects were asked about their confidence in how accurately they were making predictions and, to assess insight, were asked to self-report the nature of their judgment strategies in narrative form. Data were analyzed using longitudinal hierarchical linear modeling (for within-person change over time in accuracy, the determinants of accuracy, and confidence), correlation (for between-person differences), and frequencies (mainly for evaluating insight).

Results were fascinating, although many were inconclusive (often due to lack of statistical significance). Although subjects could outperform the mechanical model under certain experimental conditions, this superiority was not statistically significant. Some individuals, experimental groups, and/or subject-pool means increased or declined over time in accuracy, the determinants of accuracy, and confidence as expected, but these results often were not statistically significant. Nevertheless, there was some evidence that criterion-related feedback about the disordinal interaction led to improved accuracy and decreased confidence, while the lack of it had the opposite effects. Several individual differences were significantly associated with accuracy, with cognitive ability being the one most pervasively related to accuracy to a statistically significant degree. Findings for insight were complicated by the inconsistent nature of subjects' narratives. Nevertheless, agreement between raters of subjects' insight was relatively high, and ratings of insight often had statistically significant correlations with objective measures of accuracy. Moreover, insight, as variously measured, was often achieved, and when achieved it was usually achieved early.
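To make the experimental task's logic concrete, the sketch below simulates data of the shape described above: a crossover (disordinal) interaction between a cognitive-ability cue and an interest cue, with a main-effects-only linear model standing in for the mechanical combination and a model that includes the product term standing in for a judge who has learned the configural rule. The cue coding and effect sizes are illustrative assumptions, not the actual stimulus parameters used in the studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical stand-in for the task: two cues per "candidate".
ability = rng.normal(size=n)                 # cognitive ability score
interest = rng.choice([-1.0, 1.0], size=n)   # -1 = boring job, +1 = interesting job

# Disordinal (crossover) interaction: ability predicts performance positively
# when the job is interesting and negatively when it is boring.
performance = ability * interest + rng.normal(scale=0.5, size=n)

def fit_and_predict(X, y):
    """Ordinary least squares fit with an intercept; returns fitted values."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ beta

# "Mechanical" model: linear, main effects only (no interaction term).
pred_linear = fit_and_predict(np.column_stack([ability, interest]), performance)

# Model that knows the configural rule: includes the product term.
pred_config = fit_and_predict(
    np.column_stack([ability, interest, ability * interest]), performance)

r = lambda a, b: np.corrcoef(a, b)[0, 1]
print(f"linear-only model accuracy: r = {r(pred_linear, performance):.2f}")
print(f"with interaction term:      r = {r(pred_config, performance):.2f}")
```

In a setup like this the main-effects-only model captures essentially none of the predictable variance, which is what gives a clinical judge room, in principle, to outperform the equation by learning the crossover rule.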
Item: Viewing Expert Judgment in Individual Assessments through the Lens Model: Testing the Limits of Expert Information Processing (2018-05)
Author: Yu, Martin

The predictive validity of any assessment system is only as good as its implementation. Across a range of decision settings, algorithmic methods of data combination often match or outperform the judgmental accuracy of expert judges. Despite this, individual assessments still largely rely on expert judgment to combine candidate assessment information into an overall assessment rating used to predict desired criteria such as job performance, which typically results in lower levels of validity than could theoretically be achieved. Based on archival assessment data from an international management consulting firm, this dissertation presents three related studies with the overarching goal of better understanding the processes that make expert judgment less accurate in prediction than algorithmic judgmental methods.

First, the Lens Model is used to break expert judgment in individual assessments down into its component processes; the analysis finds that, when combining assessment information into an overall evaluation of candidates, expert assessors use suboptimal predictor weighting schemes and also apply them inconsistently across candidates. Second, the ability of expert assessors to tailor their judgments to maximise predictive power for specific organisations is tested by comparing models of expert judgment local and non-local to organisations; no evidence of valid organisation-specific expertise is found, as models of expert judgment local to a specific organisation performed only as well as models non-local to that organisation. Third, the importance of judgmental consistency in maximising predictive validity is evaluated by testing random weighting schemes; here, simply exercising mindless consistency by applying a randomly generated weighting scheme uniformly across candidates is enough to outperform expert judgment.

Taken together, these results suggest that the suboptimal and inconsistent ways in which expert assessors combine assessment information drastically hamper their ability to make accurate evaluations of assessment candidates and to predict candidates' future job performance. Even if assessors demonstrate valid expert insight from time to time, over the long run the opportunities for human error far outweigh any opportunity for expertise to be truly influential. Implications of these findings for how assessments are conducted in organisations, as well as recommendations for how expert judgment could still be retained and improved, are discussed.
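The third study's contrast (a randomly generated but consistently applied weighting scheme versus an inconsistently applied expert scheme) can be illustrated with a toy simulation like the one below. Every quantity in it is a made-up assumption: the simulated exercise scores, the "true" criterion weights, the permuted-and-noisy weights standing in for an inconsistent expert, and the uniform random weights. It sketches the logic of Dawes-style random linear models, not the dissertation's archival analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 300, 6

# Made-up assessment exercise scores and a job performance criterion.
scores = rng.normal(size=(n, k))
true_w = np.array([0.4, 0.3, 0.2, 0.3, 0.1, 0.2])
performance = scores @ true_w + rng.normal(size=n)

# Standardize the cues so different weighting schemes are comparable.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)

r = lambda a, b: np.corrcoef(a, b)[0, 1]

# Inconsistent "expert": suboptimal weights, plus case-by-case noise in how
# those weights are applied (the inconsistency the dissertation highlights).
expert_w = rng.permutation(true_w)
expert_ratings = (z * (expert_w + rng.normal(scale=0.5, size=(n, k)))).sum(axis=1)

# Random but *consistent* schemes: weights drawn once, applied identically
# to every candidate.
random_rs = [r(z @ rng.uniform(0, 1, size=k), performance) for _ in range(1000)]

print(f"inconsistent expert:            r = {r(expert_ratings, performance):.2f}")
print(f"random but consistent (median): r = {np.median(random_rs):.2f}")
```

Under these assumptions the consistently applied random weights typically predict the criterion better than the noisy expert, which mirrors the abstract's point that inconsistency, more than lack of insight, limits expert accuracy.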