Browsing by Author "Yu, Martin"
Now showing 1 - 2 of 2
Item: Recruitment, Staffing, and Retention Strategies for the Rosemount Fire Department (Resilient Communities Project (RCP), University of Minnesota, 2015)
Authors: Ellis, Brenda; Matsuda, Win; Moore, Mariah; Shewach, Ori; Yamada, Tetsuhiro; Yu, Martin

This project was completed as part of a year-long partnership between the City of Rosemount and the University of Minnesota’s Resilient Communities Project (http://www.rcp.umn.edu). With an all-volunteer Fire Department, daytime staffing of fire responders—when most residents are at jobs outside of the community—has regularly posed a challenge for Rosemount. The goal of this project was to identify areas for growth in the recruitment and retention of firefighters available during work-week hours. In collaboration with city project lead Rick Schroeder, Fire Chief for the City of Rosemount, a team of students in PSY 5707: Personnel Psychology examined national, statewide, and local volunteer fire departments to identify strategies for more successfully attracting, recruiting, and retaining firefighters. A final report and poster from the project are available.

Item: Viewing Expert Judgment in Individual Assessments through the Lens Model: Testing the Limits of Expert Information Processing (2018-05)
Author: Yu, Martin

The predictive validity of any assessment system is only as good as its implementation. Across a range of decision settings, algorithmic methods of data combination often match or outperform the judgmental accuracy of expert judges. Despite this, individual assessments still largely rely on expert judgment to combine candidate assessment information into an overall assessment rating that predicts desired criteria such as job performance. This typically results in lower levels of validity than could theoretically have been achieved.
Based on archival assessment data from an international management consulting firm, this dissertation presents three related studies with an overarching goal of better understanding why expert judgment tends to be less accurate in prediction than algorithmic judgmental methods.

First, the Lens Model is used to break down expert judgment in individual assessments into its component processes, finding that when combining assessment information into an overall evaluation of candidates, expert assessors use suboptimal predictor weighting schemes and also apply them inconsistently when evaluating multiple candidates. Second, the ability of expert assessors to tailor their judgments to maximise predictive power for specific organisations is tested by comparing models of expert judgment local and non-local to organisations. No evidence of valid expertise tailored to organisations is found, as models of expert judgment local to a specific organisation performed only as well as models non-local to that organisation. Third, the importance of judgmental consistency in maximising predictive validity is evaluated by testing random weighting schemes. Here, simply exercising mindless consistency by applying a randomly generated weighting scheme consistently is enough to outperform expert judgment.

Taken together, these results suggest that the suboptimal and inconsistent ways that expert assessors combine assessment information are drastically hampering their ability to make accurate evaluations of assessment candidates and to predict candidates’ future job performance. Even if they are able to demonstrate valid expert insight from time to time, over the long run the opportunities for human error far outweigh any opportunity for expertise to be truly influential. Implications of these findings for how assessments are conducted in organisations, as well as recommendations for how expert judgment could still be retained and improved, are discussed.
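The third study's random-weighting result can be illustrated with a toy simulation. The sketch below is hypothetical and does not use the dissertation's archival data: all parameters (sample size, number of predictors, noise levels) are invented for illustration. It compares a linear model with randomly drawn but consistently applied weights against a simulated "expert" whose base weights are sensible but who perturbs them afresh for every candidate, mimicking judgmental inconsistency.

```python
# Toy simulation (hypothetical parameters, not the dissertation's data):
# a random-but-consistent linear weighting scheme vs. an inconsistent expert.
import numpy as np


def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(a, b)[0, 1])


def simulate(n_candidates=200, n_predictors=5, seed=0):
    rng = np.random.default_rng(seed)

    # Simulated assessment scores and a criterion (e.g. job performance)
    # that is a noisy linear function of the predictors.
    X = rng.standard_normal((n_candidates, n_predictors))
    true_w = np.ones(n_predictors)
    y = X @ true_w + 2.0 * rng.standard_normal(n_candidates)

    # Consistent random model: weights drawn once at random (correct sign
    # only), then applied identically to every candidate.
    w_random = rng.uniform(0.1, 1.0, n_predictors)
    consistent_scores = X @ w_random

    # Inconsistent expert: reasonable base weights, but a fresh random
    # perturbation for each candidate (judgmental unreliability).
    perturbation = 3.0 * rng.standard_normal((n_candidates, n_predictors))
    expert_scores = np.einsum("ij,ij->i", X, true_w + perturbation)

    return corr(consistent_scores, y), corr(expert_scores, y)


consistent_r, expert_r = simulate()
print(f"consistent random weights: r = {consistent_r:.2f}")
print(f"inconsistent expert:       r = {expert_r:.2f}")
```

With enough per-judgment noise, the consistently applied random weights typically achieve a higher correlation with the criterion than the inconsistent expert, echoing the dissertation's point that consistency alone carries much of the predictive value.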