
 


Effective use of clickers in the College of Science and Engineering
  

Introduction

What evidence is there that clickers are effective?
What sorts of things can I achieve with clickers?
What makes for a good clicker question?
How can I get the best use out of clickers within a particular course?

Further Reading

 

The information below provides a brief introduction to why you might want to use clickers and how to use them effectively. The points made are based on the growing body of research into clickers and on our experiences here in CSE. The intention is to provide an easily accessible guide; if you want more detail on the underlying theory and evidence, see the 'Further reading' section.

 

Velda McCune, Centre for Teaching, Learning and Assessment, velda.mccune@ed.ac.uk

 

 

What evidence is there that clickers are effective?

 

The data we have collected from students in the College suggests that they value the use of clickers in lectures because it helps them to maintain attention and improve their understanding. These findings are echoed by the wider literature on clicker use[1]. One important theme in the students' comments was that the clicker questions helped them to realise when they had not understood correctly. New students in particular are sometimes not very good at judging how well their learning is going[2] and clickers seem to help with this.

 

Staff in the College who responded to a recent survey[3] were generally positive about the educational benefits of clickers, although technical problems with the old clicker system were problematic for some. Staff comments echoed the findings from students about the value of challenging misconceptions and maintaining student attention. Staff also felt that clickers helped them to tailor their teaching to students' understanding of particular areas, focusing more on those parts of the content which were difficult for students. Similar findings have been reported elsewhere[4].

 

Looking at more quantifiable effects on student learning, one interesting study assigned low, medium or high clicker question frequency randomly throughout several courses in the biological sciences[5]. The authors found that there was a statistically significant improvement in exam performance on those parts of the courses where clickers were used more often. Another body of research considers the impact of a particular form of clicker use, peer instruction, on student learning in introductory physics courses[6]. This research found significant gains in student learning on standardised tests (such as the Force Concept Inventory) when peer instruction was introduced. These gains did not depend on a particular member of staff teaching the class.

 

Overall, the data we have collected in the College and the wider research literature suggest that clickers are often, although not universally, well received by students and staff. There is also a growing body of evidence about the educational benefits of clicker use, particularly with respect to promoting conceptual understanding.[7] It is worth bearing in mind, however, that there is only a small number of well controlled studies in the literature and further research is needed to provide a fully convincing picture of the value of this educational strategy.

 

What sorts of things can I achieve with clickers?

 

There is a wide range of possible uses for clickers; here are some you might want to consider:

 

Peer instruction

The rationale for peer instruction is that true learning is always an active process in which learners build personal understanding by comparing and connecting new information with their current understanding. It is easier for this to happen if the learner has their current understanding activated (e.g. through answering a clicker question) and then has to compare their understanding with other possible conceptualisations. The peer instruction sequence is set up to have exactly this effect. Peer instruction is described as follows[8]:

A class taught with PI is divided into a series of short presentations, each focused on a central point and followed by a related conceptual question, called a ConcepTest [...] which probes students' understanding of the ideas just presented. Students are given one or two minutes to formulate individual answers and report their answers to the instructor. Students then discuss their answers with others sitting around them; the instructor urges students to try to convince each other of the correctness of their own answer by explaining the underlying reasoning. During the discussion, which typically lasts two to four minutes, the instructor moves around the room listening. Finally, the instructor calls an end to the discussion, polls students for their answers again (which may have changed based on the discussion), explains the answer, and moves on to the next topic. [...] Students are not graded on their answers to the ConcepTests, but do receive a small amount of credit for participating consistently over the semester. They also have a strong incentive to participate because the midterm and final exams include a significant number of ConcepTest-like questions.

Staged questions to enhance conceptual understanding

A series of linked questions can be used as follows[9]:

 

  • Students are presented with a three-question sequence in which all three questions focus on the same concept, but have different features. The questions appear similar to experts, but appear different to beginning students who often are attracted to surface features of the context. Voting summaries are shown to the class after each question, followed by a discussion with students. The first question is a simple warm-up that builds confidence. Because most students answer the question correctly, the discussion typically is brief.
  • The second question is more difficult, and significant fractions of students usually select different answers. The spread of selected answers in the ensuing voting summary exhibits an impasse, and students realize that they do not yet fully grasp the concept. The correct answer is not revealed at this point. Rather, students are asked to volunteer why they selected each of the answers, and real viewpoints are expressed. It is important to take another “straw” vote after the discussion before revealing the correct answer [...].
  • The final question is used to check whether students have assimilated the concept. It also must be difficult and have surface features different from the previous two. Usually, most students answer correctly and discussion is brief. In a few cases students will continue to select incorrect answers, indicating that additional work is needed.

 

Contingent teaching

Where the direction and content of the lecture are made more heavily dependent on the students' responses to clicker questions, rather than following a predetermined sequence[10].

 

Formative feedback

Any questions which help learners and/or staff judge how well students are learning and what they need to do to improve. As well as the learning benefits, students appreciate knowing how well they are doing relative to the rest of the class.[11]

 

Testing students' preparation for classes

Some lecturers have used clickers to test whether students have completed pre-class reading or other homework.[12]

 

Experiments using human responses

This is perhaps most useful in psychology[13], but there may be possibilities in other subject areas. In genetics, for example, it might be possible to poll students on whether they have a particular trait, such as the capacity to curl their tongue.

 

Ongoing course evaluation

Students can tire of filling in many similar end-of-course questionnaires, which will have no impact for them because they have already finished the course. Evaluation questions can instead be asked using clickers while a course is running and, if you pick options that can realistically be implemented quickly, students will really see the benefit of the feedback they have given. We have developed some sample evaluation questions that you might consider: Clicker Questions for Course Evaluation Purposes.

 

Giving students a stronger sense of the subject area

Many students start university with fairly vague ideas about how knowledge is generated in their subject area and what it would be like to work in that area. You could poll your colleagues for answers to questions like 'What is the most important attribute of a good chemist/biologist etc.?' and then ask students to vote for one of the options.

 

Encouraging students to reflect on how they are learning

Many new students will not have thought about how they might need to change their ways of learning to succeed at university. Students with poor study skills often do not find this out until they do badly on an assessment. While there is no one correct way of studying, clicker questions can be used to get students thinking about how to study effectively and to discuss this with their peers. This also helps to create a culture in which poor results are seen as a symptom of poor skills or insufficient effort, rather than of fixed innate ability. Students who believe that learning outcomes can be controlled by studying better or harder tend to be more successful, and this effect appears to be independent of other measures of ability[14]. We have prepared some example questions you could use: Clicker Questions to prompt reflection.

 

What makes for a good clicker question?

 

Perhaps the key point here is that the quality of clicker questions ‘depends heavily on the intent and thought behind their design’[15]. Questions which test conceptual understanding are particularly useful, especially if the question is pitched so that around 35-70% of the students get the correct answer. It helps to choose incorrect responses that mirror the kinds of mistakes students commonly make; these can sometimes be gleaned from examples of previous students' work[16]. More information and sample questions can be found at: http://galileo.harvard.edu

 

In her review of the literature, Caldwell[17] suggests that good clicker questions should evaluate factual knowledge separately from conceptual understanding to ensure that both are adequately addressed. She also advises listening to how students reason about wrong answers to get ideas about question writing. She notes that qualitative questions are often favoured to allow students to focus on concepts but clickers have successfully been used with quantitative questions as well. Some of the styles of questions Caldwell selects as valuable from the literature include those that help students to: distinguish between related ideas; see parallels and connections between ideas; and make use of ideas in new contexts.

 

How can I get the best use out of clickers within a particular course?

 

As with many other innovations, clickers are likely to be most useful when they are implemented to meet a perceived pedagogic need, rather than for the sake of using new technology. Several authors also advocate telling students explicitly about this rationale so that they can see the purpose of what you are doing[18]. The pedagogic rationale most commonly put forward for clicker use is that responding anonymously to questions encourages all students to think actively about a question and to commit to a response even if they are uncertain[19]. This kind of activation and challenging of learners' current understanding is central to learning, according to some educational theorists[20].

 

Giving some thought to how the use of clickers fits in with the wider aims of the course, and with the other teaching and assessment strategies used, is also important. Where the different facets of the course - such as workload, teacher enthusiasm, and students' perceptions of what is rewarded in assessment - all promote active learning and conceptual understanding, the course is likely to be more successful.[21] Crouch and Mazur give a particularly rich overview of how clickers can be embedded within a wider set of strategies to promote better student learning[22].

Several studies have also suggested that effective use of clickers requires practice. Learning gains seem to increase as the lecturer becomes more proficient in using the technology and asking good questions.[23]

 

Further reading

 

  • A useful and accessible overview of the research in this area accompanied by practical hints and tips: Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life Sciences Education, 6(1), 9-20. - PDF
  • An overview of ten years of research and development with clickers at Harvard. Offers some good ideas about how to fit clickers into wider pedagogic approaches: Crouch, C. & Mazur, E. (2001). Peer instruction: ten years of experience and results. American Journal of Physics, 69(9), 970-977. - PDF

  • An accessible introduction to the use of clickers: Draper, S. W. & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94. - PDF

    Steve Draper's PRS page - extensive information and links related to PRS, from the UK's PRS expert at Glasgow University - WEB
  • A more theoretical evaluation of two different methods for using clickers in large classes: Nicol, D. & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education 28(4), 457-473 - PDF
  • A review which focuses on how thinking about clicker use has developed over time: Simpson, V. and Oliver, M. (2007). Electronic voting systems for lectures then and now: A comparison of research and practice. Australasian Journal of Educational Technology, 23(2), 187-208 - WEB
  • Extensive EVS bibliography collected by the Vanderbilt Center for Teaching, including discipline-specific literature – WEB

 

 

References

 

1. See, for example:
Nicol, D. & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education 28(4), 457-473.
Draper, S. W. & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.

2. Vermunt, J. D. & Verloop, N. (1999). Congruence and friction between learning and teaching. Learning and Instruction, 9, 257-280.

3. Gachago, D. (2008). Feedback on personal response systems ('Clickers') - lecturers' perspectives. Retrieved on 18 July 2008 from: www.scieng.ed.ac.uk/ltstrategy/clickers.html

4. See, for example:
Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life Sciences Education, 6, 9-20.
Simpson, V. and Oliver, M. (2007). Electronic voting systems for lectures then and now: A comparison of research and practice. Australasian Journal of Educational Technology, 23(2), 187-208.

5. Preszler, R. W., Dawe, A., Shuster, C. B. & Shuster, M. (2007). Assessment of the effects of student response systems on student learning and attitudes over a broad range of biology courses. CBE-Life Sciences Education, 6, 29-41.

6. Crouch, C. & Mazur, E. (2001). Peer instruction: ten years of experience and results. American Journal of Physics, 69(9), 970-977.

7. Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life Sciences Education, 6, 9-20.

8. Crouch, C. & Mazur, E. (2001). Peer instruction: ten years of experience and results. American Journal of Physics, 69(9), p. 970.

9. Reay, N. W., Bao, L., Pengfei, L., Warnakulasooriya, R. and Baugh, G. (2005). Toward the effective use of voting machines in physics lectures. American Journal of Physics, 73(6), p. 554.

10. Draper, S. W. & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.

11. Sharma, M. D., Khachan, J., Chan, B. & O'Byrne, J. (2005). An investigation into the use of electronic classroom communication systems in large lecture classes. Australasian Journal of Educational Technology, 21(2), 137-154.

12. Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life Sciences Education, 6, 9-20.

13. Draper, S. W. & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.

14. Dweck, C. (2000). Self-theories: their role in motivation, personality and development. Essays in social psychology. Hove, Sussex: Psychology Press.

15. Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life Sciences Education, 6, p. 17.

16. Crouch, C. & Mazur, E. (2001). Peer instruction: ten years of experience and results. American Journal of Physics, 69(9), 970-977.

17. Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life Sciences Education, 6, 9-20.

18. For example, Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE-Life Sciences Education, 6, 9-20.

19. Draper, S. W. & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.

20. Tynjälä, P. (1999). Towards expert knowledge? A comparison between a constructivist and a traditional learning environment in the university. International Journal of Educational Research, 31, 357-442.

21. Hounsell, D. & Hounsell, J.  (2007)  Teaching-learning environments in contemporary mass higher education.  In: N. J. Entwistle and P. Tomlinson (Eds.) Student learning and university teaching (pp. 91-111).  BJEP Monograph Series No. 2: Psychological Aspects of Education – Current Trends. Leicester: British Psychological Society.

22. Crouch, C. & Mazur, E. (2001). Peer instruction: ten years of experience and results. American Journal of Physics, 69(9), 970-977.

23. See, for example:
Draper, S. W. & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.
Crouch, C. & Mazur, E. (2001). Peer instruction: ten years of experience and results. American Journal of Physics, 69(9), 970-977.