A group of researchers at Brigham Young University collaborated on an article in Active Learning in Higher Education. Published in 2007, it discusses a gap in the research on clickers, or Audience Response Systems (ARS). Many studies have looked into participation factors: how clickers prompt discussions, how they uncover misconceptions in learning, and how they offer immediate feedback. The authors write: "Studies that explored direct measures of student learning with audience response systems have shown mixed results [...] Although these studies showed mixed results, most studies that looked at indirect measures of student learning (including levels of participation and engagement) found positive results" (236-37).
What about the groups of students who typically hold back and do not participate? They cite studies showing that females tend to participate less "because they worry that they might appear to dominate discussion" (237). Students from other cultures also participate less frequently, not wanting to give incorrect answers, "fearful of 'losing face'" (237). These researchers call this group "reluctant participators," noting that other studies on ARS have not looked into this demographic (237).
They found that students perceive ARS to be less helpful when the systems are used for grading or for marking attendance. Technical difficulties also emerged as the number one negative aspect of the systems, followed by their cost to the student, grading, and mandatory attendance. Students had to pay $40 apiece for the devices, so "a significant number of students were critical of cost effectiveness" (240). However, when the ARS were used to provide formative feedback, students perceived the use of the technology as positive.
On the whole, however, "the overall reaction to the use of the ARS in the pilot was positive. [...] For all of the measures except one, a strong majority of the students 'agreed' or 'somewhat agreed' that the ARS was helpful to them in the learning experience" (238, 240). Nonetheless, reluctant participators tended to view it as less helpful than the rest of the group. Moreover, "students in classes where the ARS was not used for grading viewed the ARS to be a more helpful tool" (242). This suggests that the systems are most effective when they are not tied to grades.
During the survey, students were given opportunities to comment on what they liked as well as what they did not like about ARS. "One student wrote, 'It was nice to have an immediate response so I could know whether or not I was doing something right'" (248). Other students appreciated knowing what their peers thought about issues or content related to the class. The authors included this positive comment as well: "'The best part is that we could see the results of the surveys and what not instantly, and they were applicable to us in the class, not some foreign group of individuals. It brought the issues to life which we were discussing in class'" (248). This emphasizes the potential power of these systems: surfacing relevant issues that can energize discussions.
At this point it seems appropriate to return to the article's introduction and mention what happens when students have limited opportunities to participate and the method involves raising hands. The authors write: "When classes have limited opportunities for students to respond, participation can be unbalanced in favor of the most knowledgeable students who are most willing to respond in front of their peers" (234). Personally, I have experienced this countless times. Students often have good things to share, but one or two students frequently dominate any sort of discussion. The strength of the ARS is that everyone can participate anonymously.
While we are talking about active learning in general, it may be helpful to consider what the authors of this study wrote: "A diverse body of educational research has shown that academic achievement is positively influenced by the amount of active participation of students in the learning process" (233-34). In the past, response cards were used, and they helped to increase student participation and performance in higher education. Today many instructors have adopted ARS to invite the same kind of participation and performance. Active learning makes a difference, but it helps to consider the subgroups who participate reluctantly.
Again, it is better to avoid using these systems for grading and attendance purposes, considering how many variables can obstruct the success of the system. The authors included this student comment: "'I think it's an unfair grading system. I think they're great, as long as grades aren't based on them. There are too many variables like battery failure, problems with being on the correct channel and so forth that interfere. These variables make it unfair to base grades on clicker systems'" (240). Clickers can be a powerful tool; however, students seem to dislike the tool when it appears faulty and their grades depend on its performance.
Instructors should review the purpose of the tools or methodologies they use in class. The authors write: "The researchers in this study believe that the central concern of instructional technologists should be that of 'helping' students" (248). Though the grading and attendance features may be helpful for instructors, if they create negative feelings in students that inhibit potential learning, then perhaps the systems should not be used to grade student work. In their concluding remarks, the authors wrote: "Students perceived strategies that provided formative feedback and empowered them to evaluate their own performance as more helpful than strategies oriented towards grading and compelling participation" (251). This explains the title of their article rather succinctly.
For researchers, it may be helpful to know what the authors suggest for future studies with ARS: "Future research could investigate a wider range of pedagogical strategies and the environments in which they are successful. A measure of the goodness of the strategies could be student and instructor perceptions of their helpfulness in the learning process" (250).
Work Cited

Graham, Charles R., Tonya R. Tripp, Larry Seawright, and George L. Joeckel III. "Empowering or Compelling Reluctant Participators Using Audience Response Systems." Active Learning in Higher Education 8.3 (2007): 233-58. Print.