Browsing by Subject "statistics education"
Item: Assessing the Development of Students’ Statistical Thinking: An Exploratory Study (2017-02). Le, Laura.

Developing students’ statistical thinking has been stressed as an important learning objective for statistics courses. In general, statistical thinking has been defined as “thinking like an expert applied statistician.” However, there is currently no consensus on the characteristics that make up statistical thinking, and there is no known assessment that measures the complete construct. The purpose of this study was to assess students’ statistical thinking in an introductory statistics course based on modeling and simulation. Specifically, the research question of interest was: What components of students’ statistical thinking are revealed and developed in an introductory course based on modeling and simulation? To address this question, an assessment called Modeling to Elicit Statistical Thinking (MODEST) was created; it was based on a model of statistical thinking and used a type of problem that has been suggested to assess expert-like thinking (i.e., a Model-Eliciting Activity, or MEA). To help ensure that MODEST measured statistical thinking, several phases of feedback and pilot testing were carried out during assessment development. In the field test phase, MODEST was administered online twice, at the beginning and at the end of the semester, to students enrolled in an introductory course based on modeling and simulation. Responses from 88 students were scored using a detailed rubric to answer the research question. The results indicated that students appeared to enter the course with a moderate amount of statistical thinking (average score = 52%) and to leave having developed some statistical thinking as a result of the course (average score difference = 6%; 95% CI: 2% to 10%). Although the increase in overall statistical thinking was statistically significant, it was moderate in size (Cohen’s d = 0.34). Based on this, it appears that more could be done in the course to increase students’ statistical thinking. MODEST can be a valuable addition to the statistics education community by filling the gap in assessing students’ statistical thinking, and both statistics education researchers and instructors would benefit from using it to understand statistical thinking.

Item: Growing Certain: Students’ Mechanistic Reasoning about the Empirical Law of Large Numbers (2019-05). Brown, Ethan.

Extensive research has documented students’ difficulty understanding and applying the Empirical Law of Large Numbers, the statistical principle that larger random samples result in more precise estimation. However, existing interventions appear to have had limited success, perhaps because they merely demonstrate the Empirical Law of Large Numbers rather than support students’ conceptual understanding of why the phenomenon occurs. This dissertation developed a sequence of activities, Growing Certain, which provided support for two mechanistic explanations of the Empirical Law of Large Numbers for students in a simulation-based introductory statistics course: swamping, the decreasing influence of extreme values on the mean as sample size increases, and heaping, the increasing concentration of possible sample means around the population mean. Five students participated in over six hours of one-on-one clinical interviews, with analysis focused on one focal participant, “S”.

S’s responses were analyzed using a detailed coding of S’s articulation of mechanism components. S already displayed a strong inclination toward swamping in the pre-interview questions, and their articulation of swamping became more sophisticated as they progressed through Growing Certain. However, S’s understanding of the connections between population and sample was weak throughout, and S had considerable difficulty reasoning about multiple sample means simultaneously in a sampling distribution. S’s lack of abstraction of the sample mean appeared to support them in attending to the dynamics of swamping but hindered their ability to reason about heaping. Future research could examine representations that bridge swamping and heaping and investigate individual differences in attention to the mechanistic components of the Empirical Law of Large Numbers.
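The swamping and heaping mechanisms described above lend themselves to a brief simulation. The R sketch below is illustrative only (the population, seed, and sample sizes are assumptions, not taken from the dissertation); it shows sample means from larger samples concentrating more tightly around the population mean.

```r
# Minimal illustrative sketch of the Empirical Law of Large Numbers.
# Swamping: each extreme value contributes only 1/n to the sample mean.
# Heaping: as n grows, possible sample means concentrate around the population mean.
set.seed(2019)
population <- rexp(100000, rate = 1)     # skewed population with mean 1 (illustrative)

sample_means <- function(n, reps = 5000) {
  replicate(reps, mean(sample(population, size = n)))
}

sapply(c(5, 25, 100), function(n) sd(sample_means(n)))
# The spread of the simulated sample means shrinks roughly in proportion to 1/sqrt(n).
```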
Item: Introductory statistics students’ conceptual understanding of study design and conclusions (2017-12). Fry, Elizabeth.

Recommended learning goals for students in introductory statistics courses include the ability to recognize and explain the key role of randomness in designing studies and in drawing conclusions from those studies involving generalizations to a population or causal claims (GAISE College Report ASA Revision Committee, 2016). The purpose of this study was to explore introductory statistics students’ understanding of the distinct roles that random sampling and random assignment play in study design and the conclusions that can be made from each. A study design unit lasting two and a half weeks was designed and implemented in four sections of an undergraduate introductory statistics course based on modeling and simulation. The research question this study attempted to answer is: How does introductory statistics students’ conceptual understanding of study design and conclusions (in particular, unbiased estimation and establishing causation) change after participating in a learning intervention designed to promote conceptual change in these areas? To answer this question, a forced-choice assessment called the Inferences from Design Assessment (IDEA) was developed and administered as a pretest and posttest, along with two open-ended assignments: a group quiz and a lab assignment. Quantitative analysis of the IDEA results and qualitative analysis of the group quiz and lab assignment revealed that, overall, students’ mastery of study design concepts significantly increased after the unit, and the great majority of students successfully made the appropriate connections between random sampling and generalization and between random assignment and causal claims. However, a small but noticeable portion of students continued to demonstrate misunderstandings, such as confusion between random sampling and random assignment.
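A short simulation can make the distinction concrete. The following R sketch uses made-up data (it is not the IDEA instrument or the course materials): random sampling from a population supports generalization, while random assignment of sampled units to conditions creates groups that are comparable in expectation, which is what licenses causal claims.

```r
# Illustrative sketch: the two distinct roles of randomness in study design.
set.seed(2017)
population <- data.frame(id = 1:10000,
                         outcome = rnorm(10000, mean = 50, sd = 10))

# Random sampling: gives an unbiased estimate of the population mean,
# so results can be generalized to the population.
srs <- population[sample(nrow(population), 100), ]
mean(srs$outcome)

# Random assignment: splits the sampled units into two groups that are
# balanced in expectation, so later differences can be attributed to treatment.
srs$group <- sample(rep(c("treatment", "control"), each = 50))
tapply(srs$outcome, srs$group, mean)
```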
Item: A Multi-Modal Multiple Descriptive Case Study of Graduate Students’ Statistical Thinking in Statistical Tests Seven Months After Completing a Simulation-Based Introductory Level Course (2023-05). Rao, V.N. Vimal.

Though statistical testing is commonly practiced, the logic of statistical tests is confusing, thinking about distributions is difficult, and the way statisticians formulate expectations as probability distributions is poorly understood. To support instruction, the statistics education community has increasingly adopted simulation-based pedagogies that place the logic of statistical inference at the core of instruction.

Might this approach support and sustain the development of graduate students’ statistical thinking, especially during statistical testing? How do graduate students who have completed a simulation-based course think while conducting statistical tests months after completing the course? To answer these questions, a multi-modal multiple descriptive case study of six graduate students in the educational sciences was conducted. Data sources included audio, video, and gaze recordings; analytic memos generated by the researcher; and written artifacts generated by the participants. Participants generated concept maps for the logic of statistical tests, conducted statistical tests using statistical software, interpreted results from statistical tests, and participated in a retrospective video-cued interview. Data were analyzed from an interpretivist epistemological stance using the constant comparative method to identify relevant moments across all data artifacts and credibly describe participants’ thinking. Results suggest that students’ planning (i.e., deciding what to do and when to do it) was generally quite good. However, students generally struggled to monitor and evaluate their plan (i.e., to ensure that the plan was being executed correctly and that no changes to it were needed). Furthermore, they generally did not seem to think about null models, which are core to the logic of statistical testing. Instead, they focused on point and interval estimates for statistics of interest and thought about sampling variability primarily in terms of a bootstrap dot plot, if at all. This study is one of the first to examine graduate students’ statistical thinking several months after the completion of a simulation-based introductory course. The way students were thinking (generally able to reproduce a plan for analyzing the data consistent with what they were taught, with a focus on variability through examination of a bootstrap dot plot) suggests that statistics instructors might anchor instruction about statistical tests to descriptive statistics and their interpretation and contextualization. It also suggests that the likelihood approach to statistical inference, evaluating hypotheses against given data, may be conceptually easier for students to think about.
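To clarify the contrast the study draws between a bootstrap dot plot (an interval estimate for a statistic) and a null model (the reference distribution behind a test), here is a brief R sketch with made-up data; the sample, the hypothesized mean of 50, and the resampling counts are illustrative assumptions only.

```r
# Illustrative sketch: bootstrap interval estimate versus a simulated null model.
set.seed(2023)
x <- rnorm(30, mean = 52, sd = 8)   # hypothetical observed sample

# Bootstrap distribution of the sample mean (the "bootstrap dot plot" view):
boot_means <- replicate(5000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))             # percentile interval for the mean

# Null model view: resample from data re-centered at the hypothesized mean of 50,
# then ask how unusual the observed mean is under that null model.
null_means <- replicate(5000, mean(sample(x - mean(x) + 50, replace = TRUE)))
mean(abs(null_means - 50) >= abs(mean(x) - 50))   # approximate two-sided p-value
```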
Item: Statistics Graduate Students’ Professional Development for Teaching: A Communities of Practice Model (2017-05). Justice, Nicola.

Graduate teaching assistants (GTAs) are responsible for instructing approximately 25% of introductory statistics courses in the United States (Blair, Kirkman, & Maxwell, 2013). Most research on GTA professional development focuses on structured activities (e.g., courses, workshops) developed to improve GTAs’ pedagogy and content knowledge. Few studies take into account the social contexts of GTAs’ professional development, yet GTAs perceive their social interactions with other GTAs to be a vital part of their preparation and support for teaching (e.g., Staton & Darling, 1989). Communities of practice (CoPs) are one way to bring together the study of the social contexts and structured activities of GTA professional development. CoPs are defined as groups of practitioners who deepen their knowledge and expertise by interacting with each other on an ongoing basis (e.g., Lave & Wenger, 1991).

Graduate students may participate in CoPs related to teaching in many ways, including attending courses or workshops, participating in weekly meetings, engaging in informal discussions about teaching, or taking part in e-mail conversations related to teaching tasks. This study explored the relationship between statistics graduate students’ experiences in CoPs and the extent to which they hold student-centered teaching beliefs. A framework for characterizing GTAs’ experiences in CoPs was described, and a theoretical model relating these characteristics to GTAs’ beliefs was developed. To gather data to test the model, the Graduate Students’ Experiences Teaching Statistics (GETS) Inventory was created, with items written to collect information about GTAs’ current teaching beliefs, their teaching beliefs before entering their degree programs, the characteristics of their experiences in CoPs, and demographic information. The GETS Inventory was administered online to N = 218 statistics graduate students representing 37 institutions in 24 U.S. states. The data gathered from the national survey suggest that statistics graduate students often experience CoPs through required meetings and voluntary discussions about teaching. Participants feel comfortable disagreeing with the people they perceive to be most influential on their teaching beliefs, and most participants perceive a faculty member to have the most influential role in shaping those beliefs. The survey data did not provide evidence to support the proposed theoretical model relating characteristics of experiences in CoPs to beliefs about teaching statistics. Based on cross-validation results, prior beliefs about teaching statistics were the best predictor of current beliefs. Additional models were retained that included student characteristics suggested by previous literature to be associated with student-centered or traditional teaching beliefs (e.g., prior teaching experience, international student status). The results of this study can be used to inform future efforts to promote student-centered teaching beliefs and practices among statistics GTAs. Modifications to the GETS Inventory are suggested for use in future research designed to gather information about GTAs, their teaching beliefs, and their experiences in CoPs. Suggestions are also made for aspects of CoPs that might be studied further in order to learn how CoPs can promote teaching beliefs and practices that support student learning.
Item: Understanding the Development of Students’ Multivariate Statistical Thinking in a Data Visualization Course (2022-08). Legacy, Chelsey.

Multivariate thinking is an increasingly recommended and important skill for developing statistical thinking, yet few studies have explored how students develop it. This study was conducted to learn more about developing this skill, particularly when using visualization. It explored the following research questions: (1) How does students’ multivariable thinking develop as they take part in a series of activities designed to introduce and promote reasoning with multiple variables? How do student responses to questions requiring multivariable thinking change throughout the semester? (2) What challenges surrounding multivariable thinking persist after taking part in the intervention? Do any new challenges emerge after the completion of these activities?

For this study, a unit on multivariable thinking consisting of ten activities and three assignments was created for a data visualization course and implemented in Fall 2021. Students’ responses on the assignments were qualitatively analyzed for evidence of multivariable thinking pertaining to seven learning outcomes. Two students from different sections of the course were observed to gain insight into students’ multivariable reasoning throughout the unit, and three students were interviewed at the end of the unit to provide rationales for their answers on the last assignment. Results indicated that over the course of the multivariable thinking unit, students improved in their ability to create multivariable graphs using R. Overall, students’ reasoning with multiple variables improved throughout the unit, until the assignments and activities asked them to reason with more than three variables. At the end of the unit, most students still did not know whether it was appropriate to make causal claims with their data. However, they remained consistently adept at creating and updating directed acyclic graphs, proposing relationships among their variables of interest, and providing logical potential causal variables. Analysis of responses across the three assignments helped identify trends in students’ performance on each learning outcome and surfaced challenges similar to those reported in the literature, such as confusion about observational data, making causal claims, and potential bias in responses due to the context of the data. Finally, the cognitive interviews provided insight into some of the challenges and misconceptions students held and gave a sense of their final multivariable reasoning skills at the end of the unit. Future work is needed to define the skills that make up multivariable thinking, to sequence those skills into a learning trajectory, and to determine additional ways to support students’ development of multivariable thinking.
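As a rough illustration of the kind of multivariable graph the unit asked students to build in R, the sketch below layers a third variable onto a scatterplot with color and a fourth with facets; it assumes the ggplot2 package and its built-in mpg data set rather than the course’s own data.

```r
# Illustrative sketch: reasoning with more than two variables in one display.
library(ggplot2)

ggplot(mpg, aes(x = displ, y = hwy, color = drv)) +  # third variable mapped to color
  geom_point() +
  facet_wrap(~ class) +                              # fourth variable shown via facets
  labs(x = "Engine displacement (L)", y = "Highway MPG", color = "Drive type")
```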