In a study on the effectiveness of personal response systems, a group of researchers from the University of Delaware found that modest use of "clickers" increased exam performance, but they did not see evidence that clickers actually increased engagement. Referring to the existing literature, they write: "According to some researchers, students like clickers, and students also believe clickers make them feel more engaged" (45). As for their own students, however, they note: "Although Dr. B reported that students 'got a kick out of them,' clickers had only marginal effects on self-reports of student engagement, attendance, and reading in this study--effects that may be attributable to Type I error" (48).
At the time of the study, freshmen at the University of Delaware marked which classes they wanted to take their first year, and a computer assigned their schedules. Morling et al. mention more than once that this process strengthened their study because it effectively randomized enrollment: students' personal preferences did not determine when they took the psychology class. The authors explain that the teachers used the clicker system only minimally:
Both professors taught using an interactive lecture style. Both professors taught the earlier section without clickers ('traditional' sections) and the later section with clickers. In clicker sections, at the beginning of class, the instructor posted five multiple-choice, fact-based questions, based on the day's required reading. Students earned extra credit for answering these questions correctly. Later in the class period, if relevant, the instructor would briefly elaborate on a clicker question that most students had misunderstood. Other than this change, instructors taught the two sections identically. (46)

The exam data did indicate that "exam scores were higher for clicker sections than for traditional sections" (47). This held regardless of the teacher; two instructors each taught two sections of a large introductory psychology class. Morling et al. summarize it in more formal language: "Our data suggest that using clickers to quiz students in class on material from their reading resulted in a small, positive effect on exam performance in large introductory psychology classes" (47).
The authors suggest that further studies look at teaching methods used in conjunction with the technology; for example, concept inventories, group discussions, and Just-in-Time Teaching (JiTT) could all be combined with clickers to see how they might enhance learning (48). By way of clarification, they write: "In our study, the instructors used clickers very minimally--to administer quizzes, publicly display the results, and quickly correct any widespread misunderstandings" (47-48).
Moreover, the authors addressed the possibility that some students cheated while taking the reading quizzes, though they concede that this may actually have promoted a cooperative learning environment, which would have improved engagement in the class (49). Overall, this was a good article, as it found a positive effect of using clickers through a controlled study rather than relying on anecdotes or on the fact that the technology was trendy at the time.
Works Cited
Morling, Beth, Meghan McAuliffe, Lawrence Cohen, and Thomas M. DiLorenzo. "Efficacy of Personal Response Systems ("Clickers") in Large, Introductory Psychology Classes." Teaching of Psychology 35.1 (2008): 45-50.

Stowell, Jeffrey R., and Jason M. Nelson. Teaching of Psychology 34.4 (2007): 253-58.