
Tuesday, June 12, 2012

Clickers, Participation, Assessment Outcomes, and Library Instruction

Clickers, or personal response systems, may encourage participation and help students enjoy library instruction more.  Emily Chan and Lorrie Knight, from the University of the Pacific, conducted a study that supports this claim.  They also learned that assessment outcomes may not necessarily improve as a result of using clickers.

Published in Communications in Information Literacy, their article "Clicking with Your Audience: Evaluating the Use of Personal Response Systems in Library Instruction" first identifies the makeup of the college students participating in their study.  They belong to the Millennial generation, whose members "tend to share these main character traits: feeling special, being sheltered, having confidence, preferring team or group activities, favoring the conventional, feeling pressured, and needing to achieve" (193).  Chan and Knight make the following claim: "Library instruction, often delivered through one-shot sessions, may seem out of touch to Millennials if it does not incorporate technology in a meaningful and entertaining manner" (193).  With this premise as their foundation, they propose the use of personal response systems (PRS, or clickers) to engage students.

In their review of the literature, they show that others have found that classroom use of PRS helps students get involved, promotes conversation, and enhances learning (193).  They note that PRS make lectures and class activities more lively and less "stagnant" (193).  As mentioned elsewhere, clickers allow instructors to adjust in the moment because they can see what the students know.  Therefore, an atmosphere of active engagement and learning may be easier to establish with clickers (193).

Little has been written about PRS and actual learning outcomes, so Chan and Knight designed their study to examine this gap.  They cite Anne C. Osterman's 2008 article that identifies library instructors' two greatest fears: (1) boring students and (2) teaching above their heads.  They then refer to another article when they write: "The use of clickers can prompt greater classroom interactivity through an assessment of students' understanding of IL concepts" (194).  They also cite a finding that clickers increase student involvement in the classroom as well as students' use of library resources (194).  In short, this study looks at student enjoyment, engagement, and achievement as they relate to the implementation of clickers in the classroom.

As with other studies, they prepare the reader by defining the constituents involved--in this case, freshmen at the University of the Pacific--and explain the course objectives of the freshman seminar courses and the library evaluations gathered before the study took place.  "At the end of each library session, students completed a brief evaluation measuring their achievement of learning outcomes.  The Assessment Office tabulated and analyzed the results, which proved to be inconclusive" (195).  Librarians convinced their library dean to fund a second instruction room equipped with more technology, such as a smart board, a computer for each participant, and clickers.  This allowed the librarians to conduct an experiment to see if the technology influenced student learning outcomes.

Surprisingly enough, they found that the classes without clickers scored better than the ones with them.  They write: "The students in Classroom NC (non-clicker) scored significantly higher in the assessment than the students who had their library session in Classroom C (clickers) (P value < 0.001)" (197).  That is not to say that there were no positive outcomes for students taking the instruction with the clickers.  Chan and Knight write: "The students in the technology-rich Classroom C found the library sessions to be more enjoyable, organized, well-presented, and participatory" (197).  Perhaps these positive results alone could justify the use of clickers in the classroom.

No doubt the authors must have been perplexed that the technology did not increase content retention; however, they offer some reasons why the students in the technology-rich classroom may not have achieved higher scores on the assessment measures.  They do so by noting potential benefits of a paper assessment:
  1. Students can use the paper assessment itself as a resource
  2. Students can self-regulate the order and pace of their work during the test time
  3. Students can see all the questions from the start (similar to reason #1 above)
  4. Students can review and correct their answers before turning the exam in to be graded
  5. Students can judge how they use their time; they are more in control of this than when a test is offered with technology, especially if the instructor advances the questions (198)
If librarians use the clicker technology to assess learning, these reasons may be worth remembering. 
Boulder Chain Lakes area in the White Clouds of Idaho.  The lakes in the photo may be Sliderock Lake (l) and Shelf Lake (r).  Photo by Spencer Jardine, 2010.
Here are a few other things worth mentioning from this article.  Classes with clickers seemed to enjoy the instruction more and felt it was more organized, well-presented, and participatory than classes without them (199).  Millennials may expect and want technology to be used.  Indeed, Chan and Knight also mention another study that suggests "the use of clickers can restart the attention span of students" (199).  Sometimes this is necessary to bring students back to the subject at hand.

The authors see clickers as useful tools to invite participation, adjust to student needs, and get things going at the beginning of library instruction sessions.  They write: "With the clickers' ability instantly to poll the audience, library faculty used warm-up questions as icebreakers in order to foster a more collaborative and engaging environment" (199).  They had wanted the clickers to increase content retention, but the students in the non-clicker classroom outperformed their peers in the classroom with clickers.  As the authors themselves suggest, other researchers should examine how learning outcomes are influenced by the use of technology in the library instruction classroom.

Work Cited
Chan, Emily K., and Lorrie A. Knight.  "Clicking with Your Audience: Evaluating the Use of Personal Response Systems in Library Instruction."  Communications in Information Literacy 4.2 (2010): 192-201.  Print.

Thursday, May 24, 2012

Audience Response Systems in the Classroom

Heidi Adams and Laura Howard write: "Audience Response Systems, commonly known as clickers, are gaining recognition as a useful classroom tool" (54).  In their short article titled "Clever Clickers: Using Audience Response Systems in the Classroom" they define Audience Response Systems (ARS), describe the two major types (radio frequency and infrared), explain how the systems can be used to gather feedback, check for understanding, and assess student learning, and provide specific ideas for using ARS in the classroom.

As a tool to promote student learning, an ARS requires each student to answer questions with a remote control.  Results are displayed right away.  Adams and Howard write: "Since the educators are able to see the results instantly, it permits them to evaluate student understanding at that very moment and provides an opportunity to adjust the lesson accordingly to improve student comprehension" (54).  In other words, it helps instructors know whether students got it.  ARS can be used and adapted to meet the needs of each student and each class.

Of particular value, this article offers twenty ideas for using ARS in the classroom.  Here are just a few:
  • Comprehension Testing
  • Drill and Practice
  • Review Games
  • Questionnaires/Surveys
  • Voting
  • Checking for Understanding During a Lecture
  • Fact Finding or Pre- and Post-Tests (55)
Adams and Howard cite some of the literature in making their point that ARS are good for students because they increase engagement in the classroom.  Students also seem to enjoy class more.  The authors write: "With clickers, every student answers every question.  Additionally, the questions will spark more questions from students that will lead to further discussion and understanding regarding the material" (55).  The on-the-spot assessment or feedback also lets students know whether they got an answer right (56).  They do not have to guess, and this immediate confirmation seems to cement the learning process.

Naturally, the ARS do not solve all problems and have a few drawbacks.  Adams and Howard claim: "As with any other type of learning, if ARS is used too often, students tire of it" (56).  In other words, students like the newness of the technology, but with time will become less interested in it.  Still, they assert "that the advantages such as instant feedback and increased student engagement far outweigh the downsides" (56).  The potential of these systems does seem fairly expansive.
Qwizdom Clicker.  See "Spotlight on Education: Sandwood's S.A.IN.T Academy Hosts First Annual Media Day." Duval County Public Schools.
A sidebar in the article also lists a half dozen ARS brand names.

Work Cited
Adams, Heidi, and Laura Howard.  "Clever Clickers: Using Audience Response Systems in the Classroom."  Library Media Connection 28.2 (October 2009): 54-56.

Wednesday, May 23, 2012

How does empowering versus compelling students influence participation attitudes?

A group of researchers at Brigham Young University collaborated on an article in Active Learning in Higher Education.  Published in 2007, it discusses a gap in the research on clickers, or Audience Response Systems (ARS).  Many studies have looked into participation factors: how clickers prompt discussion, how they uncover misconceptions, and how they offer immediate feedback.  The authors write: "Studies that explored direct measures of student learning with audience response systems have shown mixed results [...] Although these studies showed mixed results, most studies that looked at indirect measures of student learning (including levels of participation and engagement) found positive results" (236-37).

What about the groups of students who typically hold back and do not participate?  The authors cite studies showing that females tend to participate less because "they worry that they might appear to dominate discussion" (237).  Students from other cultures also participate less frequently, not wanting to give incorrect answers, "fearful of 'losing face'" (237).  These researchers call this group "reluctant participators" and note that other studies on ARS have not looked into this demographic (237).
Reluctant Participants.  Photo by Middle Age Biker.
They found that students perceive the ARS to be less helpful when the systems are used to grade or mark attendance.  Technical difficulties with the systems also came back as the number one negative aspect, followed by cost to the student, grading, and mandatory attendance.  Students had to pay $40 apiece, so "a significant number of students were critical of cost effectiveness" (240).  However, when the ARS was used to provide formative feedback, students perceived the use of the technology as positive.

On the whole, however, "the overall reaction to the use of the ARS in the pilot was positive.  [...] For all of the measures except one, a strong majority of the students 'agreed' or 'somewhat agreed' that the ARS was helpful to them in the learning experience" (238, 240).  Nonetheless, reluctant participators tended to view it as less helpful than the rest of the group.  Moreover, "students in classes where the ARS was not used for grading viewed the ARS to be a more helpful tool" (242).  Keeping grades out of the equation, then, seems to be a major advantage when using Audience Response Systems.

During the survey, students were given opportunities to comment on what they liked as well as what they did not like about ARS.  "One student wrote, 'It was nice to have an immediate response so I could know whether or not I was doing something right'" (248).  Other students appreciated knowing what their peers thought on issues or content related to the class. The authors included this positive comment as well: "'The best part is that we could see the results of the surveys and what not instantly, and they were applicable to us in the class, not some foreign group of individuals.  It brought the issues to life which we were discussing in class'" (248).  This emphasizes the potential power of these systems, bringing forth relevant issues that can energize discussions.

At this point it seems appropriate to go back to the article's introduction to mention what happens when students have limited opportunities to participate and the method involves raising hands.  The authors write: "When classes have limited opportunities for students to respond, participation can be unbalanced in favor of the most knowledgeable students who are most willing to respond in front of their peers" (234).  Personally, I have experienced this countless times.  Students often have good things to share, but one or two students frequently dominate any sort of discussion.  The strength of the ARS is that everyone can participate anonymously.

While we are talking about active learning in general, it may be helpful to consider what the authors of this study wrote: "A diverse body of educational research has shown that academic achievement is positively influenced by the amount of active participation of students in the learning process" (233-34).  In the past, response cards were used to increase student participation and performance in higher education.  Today many instructors have adopted ARS to invite the same participation and performance.  Active learning makes a difference, but it helps to consider the subgroups who participate reluctantly.

Again, it is better to avoid using these systems for grading and attendance purposes, considering how many variables obstruct the success of the system.  The authors included this student comment: "'I think it's an unfair grading system.  I think they're great, as long as grades aren't based on them.  There are too many variables like battery failure, problems with being on the correct channel and so forth that interfere.  These variables make it unfair to base grades on clicker systems'" (240).  Clickers can be a powerful tool; however, students seem to dislike the tool when it appears faulty and their grades depend on its performance.

Instructors should review the purpose of the tools or methodologies they use in class.  The authors write: "The researchers in this study believe that the central concern of instructional technologists should be that of 'helping' students" (248).  Though the grading and attendance features may be helpful for instructors, if they create negative feelings in students that inhibit potential learning, then perhaps they should not be used to grade student work.  In their concluding remarks, the authors wrote: "Students perceived strategies that provided formative feedback and empowered them to evaluate their own performance as more helpful than strategies oriented towards grading and compelling participation" (251).  This explains the title of their article rather succinctly.

For researchers it may be helpful to know what they suggest for later studies with ARS.  "Future research could investigate a wider range of pedagogical strategies and the environments in which they are successful.  A measure of the goodness of the strategies could be student and instructor perceptions of their helpfulness in the learning process" (250).

Work Cited
Graham, Charles R., Tonya R. Tripp, Larry Seawright, and George L. Joeckel III.  "Empowering or Compelling Reluctant Participators Using Audience Response Systems."  Active Learning in Higher Education 8.3 (2007): 233-58.  Print.

Friday, May 11, 2012

"Participatory Technologies" Article by Meredith Farkas

Meredith Farkas has written a good article for librarians involved with information literacy.  She celebrates the advantages of incorporating participatory technologies into the information-literacy classroom.  Using Web 2.0 tools such as blogs and wikis in the classroom can increase student learning and responsibility.  In fact, she claims that these tools "have the potential to create a more engaging learning environment.  Increased learner autonomy give[s] students a greater sense of responsibility for their learning and has been shown to improve student achievement" (84). 

One of the great advantages of blogs in relation to information-literacy learning is that blogs encourage reflective thinking, which can potentially guide students to think about their own research process.  Farkas notes that blogs can invite "reflection within an environment of peer interaction" (84).  Students will often listen to their peers before their instructors; they may be following the rule "Don't trust anyone over 30."

In theory, Farkas also extols the constructivist model, which naturally downgrades the traditional model of the teacher as the authority figure.  Students can learn and grow more when they interact with each other, challenging each other's ideas.  Of course, she explains this a bit more eloquently: "Constructivist pedagogy views students as active participants in learning who construct knowledge based on their existing understanding as well as interactions with peers and their instructor.  Unlike in behaviorism, the instructor is not seen as being wholly responsible for student learning" (86).  She ties this teaching theory to Web 2.0 and calls it Pedagogy 2.0, though I have not verified whether she coined the term.
Microsoft Office Clipart.
As an instruction librarian myself, I found her final section, "Information Literacy and Pedagogy 2.0," the most appealing.  She emphasizes the importance of evaluation:
In a world where the nature of authority has come into question (Chang et al., 2008), students will need to evaluate information in more nuanced ways than they are currently being taught at most colleges and universities.  Information literacy needs to be increasingly focused on teaching evaluative skills to students; skills that go well beyond determining whether or not something is peer-reviewed.  (90)
Four or five years ago it seemed that librarians, myself included, still showed bogus websites to their students to raise awareness of the importance of evaluation.  In certain cases this may still be appropriate and may get students' attention.  However, it seems that college students need to evaluate information at a more sophisticated level.  Farkas' claim that "students will need to evaluate information in more nuanced ways" (90) makes sense.  Rather than looking at sources to see if they are black or white, legitimate or bogus, genuine or fake, students need to determine if information is relevant to their research question and to understand if it is credible, objective, current, accurate, and authoritative.

The hardest thing for students to determine may be the accuracy of information, so looking at credibility, objectivity, and authority may give them the clues they need to judge accuracy.  Most of all, they should be concerned with the relevance of sources, yet some students seem too quick on the trigger when dismissing sources.  Part of college involves creativity in understanding how the broader subject relates to the more specific paper topic.

Meredith Farkas also addresses an issue that caught me off guard: "Those teaching information literacy will also need to focus on developing in students the dispositions needed to be a successful consumer and producer of knowledge" (90).  It seems easy enough for a librarian to teach content and research strategies, but developing new dispositions in students seems a tall order, which is not to say it is undesirable.  With one- or two-shot sessions, how much can library instructors really do?

Undoubtedly, this work of influencing student attitudes in the direction of knowledge creation may seem daunting for library instruction, yet it may also breathe some life into the instruction.  This goes beyond showing the steps of how to use a database and the features that can easily be explored independent of the instructor.  Therefore, I agree with Farkas, though it may require some stretching for most library instructors.  "It is important for librarians to consider how we can help students develop the attitudes that will make them critical and effective information seekers through learning activities" (Farkas 90).  Indeed, librarians should take the time to reflect on how to inspire students in this direction, but it may start with librarians becoming more passionate and confident about their own information-seeking abilities.

How does this translate to the library instruction classroom?  It means that librarians need to get students actively engaged in the process.  Farkas writes:
Librarians still offering lecture-based information literacy instruction need to explore ways to make their instruction more engaging and student-centered through collaborative, problem-based learning.  The Library literature is replete with case studies suggesting creative active techniques for enhancing student learning.  (90)
She goes on to encourage more questions, dialogue, and group work.  Rather than creating a set outline, librarians should conduct formative assessment to understand the constituents of their classes (91).  Each class coming to the library consists of a different group of individuals with different experiences.  Bending the instruction focus to meet students' needs seems to be more effective.  Going a step beyond this, it seems that success in the library instruction room may increase if the formative assessment is sent out and completed prior to attending the library workshop.  This allows the librarian time to think about where adjustments should be made.  Not all librarians like to adjust on the fly.

I agree with Farkas when she notes that students rarely reflect: "Students rarely reflect on their research process, which can result in the need to re-learn skills they used in their last assignment" (91).  Working with the course instructor, a librarian may be able to leverage a reflective requirement, and they could do so with a blog (91).  I have always liked this idea, and Farkas explains why blogging and wiki creation are such good ideas.  "Blogs could also be used to have students investigate the social origins of information and identify bias within writing.  Students can engage with the peer review process through reviewing the work of their classmates on blogs and wikis" (92).

Participatory technology, then, engages students in the peer-review process, invites them to critically assess the research process, provides "teachable moments" for the instructor, and increases student learning (92).  "These activities can generate an understanding of peer-review at a level far beyond simply checking a box in a database search interface" (92).  They can also increase the sense of community, enliven the classroom, allow the instructor to offer guidance and feedback, and lead to positive student learning outcomes.  Moreover, they may even strengthen writing and communication skills.

This article was very well written and offered a number of insights into the value of participatory technology as it relates to library instruction in higher education.

I do recognize her from American Libraries as the author of numerous technology columns.  She is a librarian at Portland State University and writes the blog titled Information Wants to be Free.

Work Cited
Farkas, Meredith.  "Participatory Technologies, Pedagogy 2.0 and Information Literacy."  Library Hi Tech 30.1 (2012): 82-94.

Thursday, April 5, 2012

Gathering Feedback with Free Online Tools

Polldaddy.com, LetsGoVote.com, and Polleverywhere.com all offer free online software that lets anyone in the world create online polls.  Personally, I have more experience with Polldaddy.com and the surveys, polls, and ratings I can create and share online.  LetsGoVote and PollEverywhere let users create polls that audiences answer by text message from their cell phones.

Because cell phones are nearly ubiquitous and text messaging is definitely mainstream (at least in the United States), providing quick polls that can be answered with text messaging makes sense.  Most college students have cell phones, so these spontaneous polls can be created "on the fly" in the classroom for immediate feedback to the instructor. 

Students do not always want to answer questions in front of their peers for fear of being embarrassed by a wrong answer or receiving too much attention from the instructor.  Text messaging lets students answer anonymously while still giving the instructor a sense of the class's understanding.

A couple of years back, I got excited about Google Documents and the surveys, quizzes, and polls that can be created with them.  I had set them aside, recalling how clunky and unintuitive they were to use, but I have taken another look at them recently.  They are free, though it takes a few more steps to get some things accomplished.  The results of a survey are listed in a spreadsheet format; however, the three tools listed above can automatically display results in visual graphs, which are much more appealing.

Still, a lot can be done with Google Documents, and I do not believe that users are limited to a certain number of responses to polls/quizzes/surveys or a certain number of surveys created.  On the other hand, the three tools mentioned above do limit users to 100 responses a month, or 20 responses per survey, or 40 audience members per poll.
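
Since Google survey responses land in a spreadsheet rather than a chart, one workaround is to graph them yourself.  Below is a minimal sketch in Python, assuming the responses have been downloaded as a CSV file and that the pandas and matplotlib libraries are installed; the file name and column name are hypothetical stand-ins for whatever your own survey produces.

  # Sketch: chart the answers to one survey question from a Google
  # spreadsheet export.  "survey_results.csv" and the question text below
  # are placeholders; substitute the names from your own download.
  import pandas as pd
  import matplotlib.pyplot as plt

  responses = pd.read_csv("survey_results.csv")
  question = "Which Boolean operator will typically return the largest set of results?"
  counts = responses[question].value_counts()  # tally each answer choice
  counts.plot(kind="bar", rot=0)               # draw a simple bar graph
  plt.ylabel("Number of students")
  plt.title("Survey responses")
  plt.tight_layout()
  plt.savefig("survey_responses.png")

This is only one possible workaround, of course; the dedicated poll tools still produce their graphs with no extra work.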

Below is a presentation I created for a workshop yesterday:
What do you think of online surveys? Do you create polls to gather feedback? Are they helpful? How?

Wednesday, February 1, 2012

Analyzing Library Skills Survey Results

Following are the questions included in a recent survey designed for a class that met in the Library for instruction.
  1. How do you keep the related terms grouped together in a search statement?
  2. What will a truncation or wildcard symbol do?
  3. Which Boolean operator reduces the number of results the more times it is used between search terms?
  4. Which Boolean operator will typically return the largest set of results?
  5. When you need to find the full text of an article for which you already have the full citation, which tool works the best?
  6. Have you had library instruction before?
  7. Do you understand the assignment for this class?
  8. Which of these databases have you used?
  9. Have you chosen a disability to research for the assignment in this class? If so, which one?
  10. What one thing would you like to learn today?
For several of the questions a good number of students chose the correct answer, but often the majority did not.  Ten out of 18 seem to know that parentheses keep related words grouped together in a search statement.  Only seven understand that a truncation symbol (the asterisk, *, in most databases) will help find variations on a word.

Question 3 had two correct answers, so I should have thought that one through a bit more.  Both the Boolean operators AND and NOT will continue to reduce results the more times they are used.  If I did this again, I would delete NOT from the list of possible choices.

Eight correctly chose OR as the operator that brings back more results, while nine chose AND.  Only five selected the A-Z Journal List as the place to go to find the full text of an article.  This is one of the least understood research tools on our campus, so it is no wonder.  We need to do better at instructing students on its use.  Ten students chose the library catalog as the place to go for the full text, two chose Google or Google Scholar, and one said their smart phone. 
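
For readers who want a concrete picture of what these questions test, here is an example search statement of my own (not one drawn from the survey) that uses all of these concepts at once:

  (disab* OR handicap*) AND athlete*

The parentheses keep the related terms grouped together so the database treats them as one concept, the asterisks truncate each word to pick up variations such as disability and disabilities, OR broadens the set of results, and AND narrows it.  Appending NOT professional would narrow the results further still, which is why question 3 ended up with two correct answers.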

Fifteen stated that they had received library instruction before, though 11 said it was a long time ago.  One claimed that he/she could teach the class, because he/she had attended so many times.  This is the person that I need to involve in teaching the class.  How can I do that?  I need to get the students to teach each other.

Admittedly, I goofed on the database question, not making it possible for them to select more than one database, so this was not as accurate as it should have been.  Still, it gives me a sense for which databases they know.

I wish I had looked at the answers for question #9 and searched the topic(s) they entered in the survey.  Indubitably, this is a learning experience for me.  They wanted to learn more about Down Syndrome, Asperger's Syndrome, and prominent persons such as athletes with disabilities.

Photo found on Aspergers and the Alien blog written by Amy Murphy.
Following are the comments provided when asked what they wanted to learn in the class:
  • I would like to learn more about notable figures who have down syndrome
  • how to find articles that are to the point
  • Find reliable and easy to read sources
  • find articles on Downs
  • I would like to find an athlete that i would like to report on
  • i would like to learn how these data bases can help me find valid information quickly and effectively.
It strikes me as interesting that bullet points two and three speak to the challenge of finding reliable, credible, and scholarly sources that are easy to read or understand.  Many of today's college students really struggle to read peer-reviewed articles.  This is something I encountered during the one-credit course I taught last semester.  In fact, one of my colleagues has begun to conduct some research on the reading levels of college students.  Because this is a hard thing to gauge directly, she has gathered the bibliographies attached to actual research papers and calculated the reading levels of the articles students cite.  I am uncertain whether she includes the grades students receive on the papers, which might offer clues about their comprehension of the cited sources, but it seems she has not, as that may conflict with policies governing research subjects.

Anyway, reading abilities, or the lack thereof, do inhibit many college students from succeeding in higher education.

Young Girl Reading by Jean Baptiste Camille Corot. Photo by Cliff1066 on Flickr.com.

Wednesday, January 25, 2012

Library Survey for Upper-division Students

A week ago I tried something new.  I used Poll Daddy to gather feedback from students who attended a library workshop.  In retrospect, it seems I ought to have done a bit more research and prepared myself better to act on the responses.

The free Poll Daddy account allows for the following possibilities:
  • 200 survey responses per month
  • 10 questions per survey
  • Content contains Polldaddy links
  • Basic reports for polls, surveys, & quizzes
  • 1 User account
Because I can have only 200 survey responses in a given month, I capped the number of responses for this survey at 50.  Only 23 students attended the class, and of that number only 18 completed the short, 10-question survey.  My idea was to ask questions about their library research knowledge so I could understand where to direct my instruction focus.  It did help a little, but it may be that I abandoned the effort a little early.  Requiring students to type in the web address to access the survey seemed to be one obstacle.  In the future it might be better to have the web address printed on a handout or waiting at their desks when they arrive to class.  Using Poll Everywhere might be another option, since students can take the quiz or answer individual questions instantly with a mobile device.

If you have time or interest, take the survey.  As of today, only 32 more people can take the survey.  A subsequent blog post will analyze the results of those who attended the class I taught last week.

At one point I had thought to direct them to this blog, where they could click the link and then take the quiz, but I was uncertain about sharing my blog with them, and with only 50 responses available I could not figure out how to publish the blog post just before class started.  Admittedly, I did not want responses from anyone who was not part of the class--at least not initially.

My original message:
Please take a few minutes to complete this survey:
Thank you.  This should help us in our class today.

Photo taken by John Haydon and posted on Flickr.com.
Asking for feedback and understanding what knowledge students bring with them to the library has been a concern of mine for some time, though I have not always acted on it.  In my opinion, this failure to act has inhibited me from becoming a better library instructor.

An article I re-read recently talks about this.  JaNae Kinikin and Shaun Jackson of Weber State University wrote a short article for LOEX Quarterly in Fall 2010 titled "Using a Back and Forth Presentation Format to Engage Students in Introductory English Composition Courses."  They revised their library instruction plan for English composition classes and adopted TurningPoint technology to ask questions.  Some of those questions asked about basic library knowledge:
  1. Have you ever used a library catalog?
  2. Have you ever used an article database?
  3. When you begin a research project, where do you start?
This article, and this portion of it in particular, has stayed with me off and on since I first read it.  I should look in my blog archive to see if I have already written about it.  Specifically, I recall that they endeavored to "incorporate humor, allowing instructors to engage students and put them at ease.  They also offer an avenue for discussion" (5).  To illustrate this point, they shared the answer choices for question one:
  • Yes
  • No
  • What the heck is a library catalog?
Naturally, they have had to adjust their teaching styles as they have adopted this technology in the classroom.  One of the things I enjoyed in the article the second time around is their description of making library instruction more interactive.  Because they teach three resources (at least in 2009 they did), they break up the instruction: they demonstrate one resource, and then students work on the section of the worksheet that pertains to it.  This straightforward method seems like a good model to follow.  At times I have done this to one extent or another.

Responses to the questions posed to students can guide the library instructor in deciding how much to teach.  Varying the pace keeps student interest as well.  Students usually appreciate efforts made by instructors to gauge their knowledge base.  Instructors who do this may well avoid the experience described by Leza Madsen in her "Book Review: Why Don't Students Like School? A Cognitive Scientist Answers Questions about How the Mind Works and What it Means for the Classroom by Daniel T. Willingham (Jossey-Bass, 2008)."  She recounts the oft-repeated allusion (at least in my experience) to Ferris Bueller's Day Off, where the dull high school teacher drones on and on, then asks a question followed by one of my favorite movie quotes (too easy to remember, I suppose): "Anyone?  Anyone?"

Asking good questions and doing it with technology may prevent that moment of dead silence in the classroom.  Let's hope so anyway.

Thursday, May 27, 2010

Assessing Library Instruction

As a librarian I belong to the American Library Association (ALA). The organization makes it easier to connect with other individuals in the profession. While many think of librarians in the generic sense, each librarian fills a different role within the library. For example, the Eli M. Oboler Library has only one electronic resources librarian, though she also has reference, instruction, and collection development duties. (Yes, variety remains one of the positive aspects of librarianship.)

So what do you do if you have a question or problem that none of your immediate colleagues can answer? Well, that's part of the beauty of ALA. Many others in similar positions around the country (even the world) willingly share their expertise with fellow, like-minded librarians. Last week I wanted to know how to assess my colleagues and their library instruction, so I sent out an email to other instruction librarians, including many coordinators of instruction.

The Association of College and Research Librarians (ACRL) manages a number of listservs. One of these, the information literacy and instruction listserv (ili-l), devotes itself to instruction and info-lit issues. We talk about teaching in libraries, developing information-literacy skills, and so forth. A fair number of librarians responded to my question about assessing library instruction, so I created a Google Site to summarize their responses.

With so many libraries scattered throughout the country, ALA is huge, and so is ACRL. Library school seems like a good time to consider the nook, within the larger library umbrella, in which you wish to make a name for yourself. More and more young librarians seem to be entering academic libraries as instruction and reference librarians. ACRL's Instruction Section can be quite supportive of instruction librarians, depending on your level of involvement.

If you are searching for academic library reference and instruction job positions the ili-l listserv frequently sends out job postings. To learn how to sign up for the listserv/discussion list, go to this link.