In many testing programs it is assumed that the context or position in which an item is administered has no differential effect on examinee responses to the item, or at least that any such effect is negligible. Violations of this assumption can bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. Previous work has approached position effects in testing from a variety of methodological perspectives and has yielded mixed findings. This study presents a hierarchical generalized linear model, a type of multilevel model, for estimating item position effects. Previous approaches to estimating and modeling position effects are described within a multilevel framework, and an extension of these approaches is demonstrated that incorporates item position as a continuous variable. Position effects are estimated as position-by-item interactions, that is, as slopes capturing the change in item difficulty per shift in the item's position within the test form. The model is demonstrated using real and simulated data. Real data came from two sources: a K-12 reading achievement test administered to over 90,000 students, in which pilot items were included in random positions; and pilot sections of the GRE administered to roughly 1,800 examinees, in which the same items appeared in different positions across forms. Data were simulated to have item-position effects similar to those found in the real-data studies and in previous research. A base model and two position-effect models were then compared in terms of parameter recovery and fit to the simulated data. Practical applications of the model are discussed.
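The kind of data-generating process described above, in which an item's difficulty shifts linearly with the position at which it is administered, can be sketched as follows. This is an illustrative assumption, not the study's actual generating model: the Rasch-style parameterization, the slope value `delta`, and the sample sizes are all hypothetical choices made only to show how a position slope alters responses.

```python
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_items = 2000, 20
theta = rng.normal(0.0, 1.0, n_persons)   # person abilities (assumed N(0,1))
b = rng.normal(0.0, 1.0, n_items)         # baseline item difficulties
delta = np.full(n_items, 0.02)            # assumed slope: logits of added difficulty per position shift

# Each person sees the items in a random order (random position assignment,
# as in the pilot-item design described in the abstract).
positions = np.array([rng.permutation(n_items) for _ in range(n_persons)])

# Effective difficulty of item i for person p grows with its position:
#   b_i + delta_i * position_pi
eff_b = b[None, :] + delta[None, :] * positions
logits = theta[:, None] - eff_b
prob = 1.0 / (1.0 + np.exp(-logits))
responses = rng.random((n_persons, n_items)) < prob

# With a positive slope, items administered later should be answered
# correctly less often on average than the same items administered early.
early = responses[positions < 5].mean()
late = responses[positions >= 15].mean()
print(f"early p-correct {early:.3f}  late p-correct {late:.3f}")
```

Because positions are randomized across examinees, the early-versus-late gap in proportion correct isolates the position effect from the items' baseline difficulties, which is the logic a position-effect model exploits when estimating the slopes.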