Browsing by Subject "Psychometrics"

Item: A Comparison of Item Selection Methods and Stopping Rules in Multi-category Computerized Classification Testing (2022-12)
Suen, King Yiu

Computerized classification testing (CCT) aims to classify people into one of two or more possible categories while maximizing accuracy and minimizing test length. Two key components of CCT are the item selection method and the stopping rule. The current study used simulation to compare the performance of various item selection methods and stopping rules for multi-category CCT in terms of average test length (ATL) and percentage of correct classifications (PCC) under a wide variety of conditions. The item selection methods examined were maximizing Fisher information at the current ability estimate, maximizing Fisher information at the cutoff nearest the ability estimate, and maximizing the sum of Fisher information across all cutoffs weighted by the likelihood function. The stopping rules considered were a multi-hypothesis sequential probability ratio test (mSPRT) and a multi-category generalized likelihood ratio test (mGLR), each combined with three variations of stochastic curtailment (SC-Standard, SC-MLE, and SC-CI). Manipulated conditions included the number of cutoffs, the distribution of examinees’ abilities, the width of the indifference region, the shape of the item bank information function, and whether the items were calibrated with estimation error. Results suggested that the combination of mGLR and SC-MLE consistently had the best balance of ATL and PCC. The three item selection methods performed similarly across all conditions.
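As a rough illustration of the second item selection method described above (maximum Fisher information at the cutoff nearest the current ability estimate), the Python sketch below assumes a 2PL IRT item bank. The item parameters, cutoffs, and function names are illustrative only and are not taken from the dissertation; the stopping rules are not shown.

```python
# Minimal sketch: select the next CCT item by maximizing Fisher information
# at the cutoff nearest the current ability estimate, under an assumed 2PL model.
# Item parameters and cutoffs below are illustrative, not from the dissertation.
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_item(theta_hat, cutoffs, a, b, administered):
    """Return the index of the unused item with maximum information
    at the cutoff closest to the current ability estimate."""
    nearest_cut = cutoffs[np.argmin(np.abs(cutoffs - theta_hat))]
    info = fisher_info(nearest_cut, a, b)
    info[list(administered)] = -np.inf          # exclude items already given
    return int(np.argmax(info))

# Illustrative item bank with two cutoffs (three classification categories).
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=300)             # discriminations
b = rng.normal(0.0, 1.0, size=300)              # difficulties
cutoffs = np.array([-0.5, 0.8])

next_item = select_item(theta_hat=0.3, cutoffs=cutoffs, a=a, b=b, administered={5, 17})
print("next item:", next_item)
```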
Item: Incorporation of Covariates in Bayesian Piecewise Growth Mixture Models (2022-12)
Lamm, Rik

The Bayesian Covariate Influenced Piecewise Growth Mixture Model (CI-PGMM) is an extension of the Piecewise Growth Mixture Model (PGMM; Lock et al., 2018) that incorporates covariates. The model uses a piecewise trajectory over time, meaning that the slope changes at a point called a knot. In addition, the outcome data belong to two or more latent classes, each with its own mean trajectory, making the model a mixture model. Covariates are incorporated in two ways: first, by influencing the outcome variable directly, explaining additional random error variance; second, by influencing class membership directly through multinomial logistic regression. Both uses of covariates can influence class membership and, in turn, the trajectories and the locations of the knot(s). This additional explanation of class membership and trajectories can show how individuals change, who is likely to belong to particular latent classes, and how class membership affects when the rapid change at a knot occurs.

The model is shown to be appropriate and effective in two steps. First, a real-data application using the National Longitudinal Survey of Youth motivates the model; the dataset measures individuals’ annual income in the years following high school. Sex and dropout status were used as covariates in the class-predictive logistic regression. The result was a two-class solution in which the logistic regression coefficients strongly affected class membership, demonstrating effective use of the covariates. The second step was a simulation study. Pilot studies, using the coefficients from the real-data example as the basis for data generation, assessed whether the model was suitable for a full simulation; four pilot studies were performed and yielded reasonable estimates. The full simulation used a two-class model, with one class containing a knot and the other following a linear slope, along with two class-predictive covariates and one outcome-predictive covariate. Two hundred datasets were generated, with error variance, sample size, model type, and class probability manipulated in a 3x3x3x2 design with 54 total conditions. Convergence, average relative bias, RMSE, and coverage rate served as outcome measures. The simulation showed that the CI-PGMM was stable and accurate across multiple conditions. Sample size and model type were the most impactful predictors of appropriate model use: all outcome measures were worse at small sample sizes and improved as sample size increased, and the simpler models showed less bias and better convergence, although these differences shrank when the sample size was sufficiently large. These findings were supported by multi-factor ANOVAs comparing the simulation conditions. Use of the CI-PGMM in the real-data example and the full simulation allowed covariates to be incorporated when appropriate. I show that model complexity can lead to lower convergence, so the model should be used only when appropriate and when the sample size is sufficiently large. When used, however, the model can shed light on associations between covariates, class memberships, and knot locations that were previously unavailable.
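As a rough sketch of the two building blocks described above (a piecewise trajectory with a knot, and covariate-driven class membership through multinomial logistic regression), the Python code below computes illustrative class mean trajectories and class probabilities. All parameter values and variable names are made up for illustration; the actual CI-PGMM is estimated in a Bayesian framework, which is not shown here.

```python
# Minimal sketch of the pieces the CI-PGMM combines: a piecewise-linear mean
# trajectory with a knot, and covariate-driven class membership via
# multinomial (here binary) logistic regression. Values are illustrative only.
import numpy as np

def piecewise_trajectory(t, intercept, slope1, slope2, knot):
    """Mean outcome at time t: the slope changes from slope1 to slope2 at the knot."""
    return intercept + slope1 * np.minimum(t, knot) + slope2 * np.maximum(t - knot, 0.0)

def class_probabilities(x, gamma):
    """P(class = k | covariates x) via multinomial logistic regression.
    gamma has one row of coefficients per class; the first row is the reference class."""
    logits = x @ gamma.T
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)

# Two illustrative classes: one with a knot, one effectively linear (equal slopes).
t = np.arange(0, 10)
class1 = piecewise_trajectory(t, intercept=10.0, slope1=0.5, slope2=2.0, knot=4.0)
class2 = piecewise_trajectory(t, intercept=12.0, slope1=1.0, slope2=1.0, knot=4.0)
print("class 1 mean trajectory:", class1)
print("class 2 mean trajectory:", class2)

# Covariates for two hypothetical people (intercept column, sex, dropout status).
x = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
gamma = np.array([[0.0, 0.0, 0.0],     # reference class
                  [-0.5, 1.2, -2.0]])  # illustrative coefficients for class 2
print(class_probabilities(x, gamma))
```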
Item: Planned Missingness: A Sheep in Wolf's Clothing (2021-06)
Zhang, Charlene

An extensive body of methodological literature supports the effectiveness of planned missingness (PM) designs for reducing survey length. In industrial/organizational (I/O) psychology, however, such designs are still rarely applied. Instead, when survey length must be reduced, the standard practice is either to reduce the number of constructs measured or to use short forms rather than full measures. The former is clearly not ideal. The latter requires prioritizing the measurement of some items over others and can quickly become time- and labor-intensive, as not all measures have established short forms. This dissertation presents three studies that compare the relatively unused PM methodology against the common practice of using short forms. First, the two approaches are compared in three archival datasets; PM consistently yields more accurate correlational estimates than short forms. Second, a Monte Carlo simulation explores how this comparison is affected by data characteristics, including the number of constructs, construct intercorrelations, sample size, amount of missingness, and the type of short form. Averaged across all simulated conditions, short forms produce slightly more accurate estimates than PM when empirically developed short forms are readily available. When part of the sample must first be used to develop the short forms, the two approaches perform equivalently. When the selection of items for short forms strays from being purely empirical, PM outperforms short forms.

Lastly, a qualitative survey exploring social science researchers’ knowledge of PM finds that most are either unfamiliar with PM or hold an inaccurate understanding of the concept, despite working with surveys frequently. A number of research contexts are identified for which PM may not be suitable. Overall, the findings of this dissertation demonstrate that PM designs are technically effective in producing accurate estimates, and this effectiveness, along with their convenience, makes them a valuable survey design tool. Popularizing the technique within the I/O field will require education in its understanding and application, and this dissertation serves as a first step in that direction.
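For readers unfamiliar with PM designs, the sketch below imposes a classic three-form planned missingness pattern on simulated item responses and recovers a correlation from the pairwise-complete cases. The data, block layout, and analysis choice are illustrative assumptions only; the dissertation's studies compare PM against short forms under far more varied conditions, and PM data are typically analyzed with FIML or multiple imputation rather than pairwise deletion.

```python
# Minimal sketch of a three-form planned missingness (PM) design: everyone
# answers a core block X, and each form omits one of the rotating blocks A, B, C.
# Data, block assignment, and analysis are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_people = 12, 1000
rho = 0.3
cov = np.full((n_items, n_items), rho) + (1 - rho) * np.eye(n_items)
data = rng.multivariate_normal(np.zeros(n_items), cov, size=n_people)

# Blocks of item indices: X is answered by everyone; A, B, C rotate across forms.
blocks = {"X": range(0, 3), "A": range(3, 6), "B": range(6, 9), "C": range(9, 12)}
form_omits = ["A", "B", "C"]                      # form i omits block form_omits[i]
forms = rng.integers(0, 3, size=n_people)         # random form assignment

pm_data = data.copy()
for i, omitted in enumerate(form_omits):
    pm_data[np.ix_(forms == i, list(blocks[omitted]))] = np.nan

# Pairwise-complete correlation between two items from different rotating blocks.
x, y = pm_data[:, 3], pm_data[:, 6]               # one item from A, one from B
mask = ~np.isnan(x) & ~np.isnan(y)
print("complete pairs:", mask.sum())              # roughly 1/3 of the sample overlaps
print("estimated r:", np.corrcoef(x[mask], y[mask])[0, 1])
print("full-data r:", np.corrcoef(data[:, 3], data[:, 6])[0, 1])
```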