Two experiments were done to study the biasing
effects of a pretest on subsequent posttest results.
The first experiment concerned the evaluation
of a programmed textbook used by psychology
freshmen. It employed a separate-sample
design and showed that a pretest containing mostly
negative statements about programmed instruction
confounded posttest results. The second experiment,
using a different treatment, compared the pretest
effects of positive and negative statements. The
positive version counteracted the development of
negative feelings toward the treatment. The
negative version did not show a similar sensitizing
effect. This was attributed to the
rather controversial character of the treatment and
the obligatory participation of subjects: the negative
statements perhaps merely confirmed existing attitudes.
Three suggestions for controlling pretest sensitization
effects were given: (1) use research designs
with control conditions; (2) separate the pretest
phase from the posttest phase; and (3) give
more emphasis to designs without pretests.