Browsing by Author "Maxwell, Scott E."
Now showing 1 - 5 of 5
Item
Dependent variable reliability and determination of sample size (1980)
Maxwell, Scott E.
Arguments have recently been put forth that standard textbook procedures for determining the sample size necessary to achieve a certain level of power in a completely randomized design are incorrect when the dependent variable is fallible. In fact, however, there are several correct procedures, one of which is the standard textbook approach, because there are several ways of defining the magnitude of group differences. The standard formula is appropriate when group differences are defined relative to the within-group standard deviation of observed scores. Advantages and disadvantages of the various approaches are discussed. (An illustrative sketch of this scaling point follows the listing.)

Item
Internal invalidity in pretest-posttest self-report evaluations and a re-evaluation of retrospective pretests (1979)
Howard, George S.; Ralph, Kenneth M.; Gulanick, Nancy A.; Maxwell, Scott E.; Nance, Don W.; Gerber, Sterling K.
True experimental designs (Designs 4, 5, and 6 of Campbell & Stanley, 1963) are thought to provide internally valid results. This paper describes five studies involving the evaluation of various treatment interventions and identifies a source of internal invalidity when self-report measures are used in a Pretest-Posttest manner. An alternative approach (the Retrospective Pretest-Posttest design) to measuring change is suggested, and data comparing its accuracy with the traditional Pretest-Posttest design in measuring treatment effects are presented. Finally, the implications of these findings for evaluation research using self-report instruments and the strengths and limitations of retrospective measures are discussed.

Item
Is a behavioral measure the best estimate of behavioral parameters? Perhaps not. (1980)
Howard, George S.; Maxwell, Scott E.; Wiener, Richard L.; Boynton, Kathy S.; Rooney, William M.
In many areas of psychological research, various measurement procedures are employed in order to obtain estimates of some set of parameter values. A common practice is to validate one measurement device by demonstrating its relationship to some criterion. However, in many cases the measurement of that criterion is less than a perfect estimate of the true parameters. Self-report measures are often validated by comparing them with behavioral measures of the dimension of interest. This procedure is justifiable only insofar as the behavioral measure represents an accurate estimate of population parameters. Three studies, dealing with the assessment of assertiveness, students’ in-class verbal and nonverbal behaviors, and a number of teacher-student in-class interactions, tested the adequacy of behavioral versus self-report measures as accurate estimates of behavioral parameters. In Studies 2 and 3, self-reports were found to be as good as behavioral measures as estimates of behavioral parameters, while Study 1 found self-reports to be significantly superior.

Item
Linked raters' judgments: Combating problems of statistical conclusion validity (1983)
Howard, George S.; Obledo, Fernando H.; Cole, David A.; Maxwell, Scott E.
The traditional procedure for obtaining judged ratings, to ascertain whether treatment-related change has occurred, involves randomizing the materials to be rated. An alternative approach (linked judgments) is investigated as a potential solution to certain instrumentation-related threats to the statistical conclusion validity of the incumbent rating procedure. Data from a weight reduction study are presented which suggest that linked raters' judgments provide both a more powerful and a more valid index of treatment effectiveness than the traditional procedure.

Item
A psychometric investigation of the Survey of Study Habits and Attitudes (1980)
Bray, James H.; Maxwell, Scott E.; Schmeck, Ronald R.
The present study investigated the reliability of the previously hypothesized four-factor model of the Survey of Study Habits and Attitudes (SSHA; Brown & Holtzman, 1953, 1967). The reliabilities of the scales were marginal as measured by coefficient alpha. The hierarchical model of the SSHA was not supported by confirmatory factor analysis. Numerous test items were found to load highest on a factor other than the one hypothesized by the Brown-Holtzman model. In addition, many items exhibited very low communalities and failed to load highly on any factor.
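The first item above turns on how the effect size is scaled: the same mean difference is smaller in observed-score standard-deviation units than in true-score units, because measurement error inflates the observed variance. The following is a minimal sketch of that attenuation, not the paper's own derivation, assuming a two-group completely randomized design, a normal-approximation sample-size formula, and illustrative values for the effect size and reliability.

# Hypothetical illustration (values and formula are assumptions, not from the paper):
# per-group sample size for a two-group completely randomized design, comparing an
# effect size defined on true scores with its observed-score counterpart.
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    # Normal-approximation formula; exact planning would use the noncentral t.
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / d) ** 2

d_true = 0.5        # assumed group difference in true-score SD units
reliability = 0.70  # assumed reliability of the dependent variable

# Measurement error inflates the observed SD, so the observed-score effect
# size is attenuated by the square root of the reliability.
d_observed = d_true * reliability ** 0.5

print(round(n_per_group(d_true)))      # about 63 per group
print(round(n_per_group(d_observed)))  # about 90 per group (inflated by 1/reliability)

Under these assumptions the required n inflates by exactly 1/reliability when the effect is specified on the true-score metric, which is the sense in which the standard textbook formula remains correct whenever group differences are defined relative to the within-group standard deviation of observed scores.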