
PeerWise


Figure 2: Students can see how their classmates responded to the question.

Students are encouraged to apply critical analysis skills by judging the contributions of others. They are able to rate both the quality and the difficulty of a question, as a means of providing feedback to the author and to encourage or discourage other students from attempting it. PeerWise provides a discussion thread on each question, to allow uncertainties to be debated and possible improvements to be conveyed. Students are able to express their agreement or disagreement with any comment posted, displayed as a small star or cross, and the comments are displayed in the discussion thread in order of agreement. Prolific authors and authors who contribute popular or highly rated questions are shown on leader boards, which serve to stimulate high quality engagement with the system. Figure 3 shows a typical discussion thread associated with a question.
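The ordering of comments can be pictured with a small sketch. The Python below is illustrative only and assumes a simple net-agreement score (agreements minus disagreements); the Comment class and the scoring rule are assumptions, not PeerWise's actual implementation.

from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    agreements: int = 0       # shown as a small star per agreement
    disagreements: int = 0    # shown as a small cross per disagreement

def order_thread(comments: list) -> list:
    # Display the most-agreed-with comments first, as described above.
    return sorted(comments, key=lambda c: c.agreements - c.disagreements, reverse=True)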

Figure 3: A discussion thread is included with each question.

PeerWise was first used in 2006, and is currently used by courses in Computer Science, Engineering, Population Health and Pharmacology at The University of Auckland, and Computer Science and Chemistry at The University of British Columbia. The classes range in size from 16 to 869.

1.1 Typical usage

PeerWise works best with large classes, ideally with a hundred or more students. We have limited experience in its use with small classes. Our best practice recommendations are to require each student to contribute a small number of questions (perhaps two) and to answer, say, ten or twenty questions. Awarding a few marks for achieving these minimal participation requirements is usually sufficient to ensure a rich question bank.
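As a concrete illustration of such a scheme, the sketch below awards a small fixed mark once both minimum requirements are met. The thresholds follow the figures mentioned above (two questions authored, ten answered); the function name and the mark value are hypothetical.

def participation_marks(questions_authored: int, questions_answered: int,
                        min_authored: int = 2, min_answered: int = 10,
                        marks: float = 2.0) -> float:
    # Award the full participation mark only when both minima are reached.
    met = questions_authored >= min_authored and questions_answered >= min_answered
    return marks if met else 0.0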

We have not found it necessary to restrict the choice of topic, or to insist on students posting comments. Some guidance in selecting keyword tags may be appropriate, however.

Generally, students are given several weeks in which to contribute their questions, followed by a shorter period in which to answer the minimum requirement. There is usually no need to close access to the system until after the end of the course, as students are likely to voluntarily use the repository as a revision tool.

2. EVIDENCE FOR SUCCESS

We have looked at the following measures of success:

2.1 Student usage patterns

Usage log data provides detailed information about when students are using PeerWise and what they are doing. We have found distinct patterns of use for contributing and for answering questions [6]. Where contributions of new questions are required by a specific date, there is a distinct peak leading up to that date, with little or no activity after. Students tend not to continue writing new questions after the assessment of that component is complete. Most students contribute the minimum number of questions.

Answering questions follows a different pattern. There is a distinct peak before the minimum answer due date, but further peaks precede test and examination dates. However, most feedback on the quality of questions is given during the period in which questions are being contributed.

These patterns are consistent across different courses, and do not appear to be dependent on the lecturer, grading incentives, or use of MCQ questions in the final exam. They show that students value PeerWise as a revision tool, and are willing to spend time using it without any explicit incentive.
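The kind of aggregation behind these observations can be sketched as follows. The log schema (a CSV of events with 'timestamp' and 'event' columns, where event is 'author' or 'answer') is an assumption for illustration; PeerWise's actual logs may differ.

import pandas as pd

def daily_activity(log_csv: str) -> pd.DataFrame:
    log = pd.read_csv(log_csv, parse_dates=["timestamp"])
    log["date"] = log["timestamp"].dt.date
    # Count authoring and answering events per day; peaks appear before the
    # contribution deadline, the answering deadline, and the exams.
    return log.groupby(["date", "event"]).size().unstack(fill_value=0)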

2.2 Examination performance

We have also done a correlation study between PeerWise activity and overall course performance [5]. Our approach involved dividing the students into quartiles based on their performance in a mid-semester test that was administered before any use of PeerWise. Each quartile was then divided into equal-sized “most PeerWise active” and “least PeerWise active” groups, using various measures of activity (number of contributed questions, number of questions answered, number of comments, total size of comments, number of days active, and a combined measure).

We found a significant correlation between all activity measures and performance on the MCQ section of the exam.

Further, three of the activity measures (total length of comments, days active, and the combined measure) showed a significant correlation with the written examination questions. There is no reason to expect that extensive experience with MCQs would help performance on written questions, unless such experience led to a deeper understanding of the material. Our results suggest it is not merely the activities of creating and answering MCQs that result in improved non-MCQ performance, but a high engagement with PeerWise (as evidenced by comments and activity days). This engagement thus suggests the development of a deeper level of understanding.
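The quartile-and-median-split design described above can be reconstructed roughly as follows. The column names ('pretest', 'activity', 'exam_mcq') and the use of a simple median split are assumptions for illustration, and the statistical tests reported in [5] are not reproduced here.

import pandas as pd

def split_by_activity(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Quartiles of prior ability, from a test taken before any PeerWise use.
    df["quartile"] = pd.qcut(df["pretest"], 4, labels=False)
    # Within each quartile, split students into more and less active halves.
    df["active_half"] = (
        df.groupby("quartile")["activity"]
          .transform(lambda a: a > a.median())
          .map({True: "most active", False: "least active"})
    )
    # Compare mean MCQ exam scores across the two halves within each quartile.
    return df.groupby(["quartile", "active_half"])["exam_mcq"].mean().unstack()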

2.3 Course coverage

We have looked at the topics on which students choose to write questions and the “tags” they use to classify their questions, in a course that provided neither guidance nor incentive to influence student choice. Our interest in this study was in the coverage of PeerWise question banks, and in the ability of students to come up with accurate tags.

We found that the coverage was indeed comprehensive, and the tagging was largely effective.
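A coverage check of this kind might be sketched as follows; the topic list, the tag normalisation, and the function name are hypothetical and are not taken from the study.

def topic_coverage(course_topics: set, question_tags: list) -> float:
    # Fraction of course topics that appear as a tag on at least one question.
    tagged = {tag.lower() for tags in question_tags for tag in tags}
    covered = {topic for topic in course_topics if topic.lower() in tagged}
    return len(covered) / len(course_topics) if course_topics else 0.0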

2.4 Question quality

We studied the quality of questions created by students in a large, first-year programming course [7]. We found that students are capable of writing questions that faculty judge to be of high quality. The best questions have well-written question stems, good distracters and detailed explanations that discuss possible misconceptions.

We inspected a sample of questions closely to study specific aspects of question construction. In particular, we assessed the clarity of the language used to describe the question and recorded whether any minor grammatical errors were present. We also inspected the set of distracters for each question, recording how many of them were meaningful and feasible. Finally, we classified the usefulness of the explanations written by question authors.

Because students can write comments about the questions they answer, even poorly written questions can become useful learning resources if the comments left by others are insightful. In all of the cases we examined, questions with incorrect solutions were identified by other students in the class, and comments describing the mistake were provided.

We also looked at the accuracy of the student ratings (i.e. how accurately the students were able to judge the quality of the questions). The judgements that students make about the quality of questions they answer are reasonably accurate, and correlate strongly with staff judgements. This is particularly interesting since students are using questions as a learning resource and a self-assessment tool, whereas staff generally use questions for summative assessment and diagnosis of misconceptions.
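The comparison of student and staff judgements amounts to a rank correlation between the two sets of ratings. The sketch below uses Spearman's rho as one plausible choice; it is not necessarily the statistic used in the study.

from scipy.stats import spearmanr

def rating_agreement(student_ratings, staff_ratings):
    # Rank correlation between per-question student and staff quality ratings.
    rho, p_value = spearmanr(student_ratings, staff_ratings)
    return rho, p_value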

These results suggest that not only can students create high quality questions, but their ability to accurately determine the quality of the questions created by other students ensures that high quality questions are answered more frequently than low quality questions. Figure 4 illustrates how the high quality questions in a typical course are answered more frequently than the low-quality questions.

3. RELATED WORK

The idea of student-contributed questions is not new, and a number of initiatives that share our aims have been reported in the literature.

Figure 4: Students choose to answer high quality questions more frequently than low quality questions (number of responses per question plotted against student rating of question).

Horgen [9] used a lecture management system to share student-generated MCQs. Fellenz [8] reported on a course where students generated MCQs which were reviewed by their peers, although technology was not used to support this process. Fellenz reported that the activity increased student ownership of the material and motivated students to participate. Barak [2] reports on a system named QSIA, used in a postgraduate MBA course, in which students contribute questions to an on-line repository and rank the contributions of their peers. Arthur [1] reports on a large course activity in which students in one lecture stream prepare questions for a short quiz which is then presented to students in another stream. The questions are stored in a simple electronic repository. Yu [10] has students construct MCQ items and submit them to an on-line database where they are peer-assessed. Feedback about quality is used to improve the items before they are transferred to a test bank database to be used for drill-and-practice exercises.

All of these reports agree that student-contributed MCQs are a powerful idea. The major contribution of PeerWise is a tool that is simple for new institutions to adopt and which supports very large classes with little or no moderation required by instructors.

4. REFERENCES

[1] N. Arthur. Using student-generated assessment items to enhance teamwork, feedback and the learning process. Synergy, 24:21–23, Nov. 2006. www.itl.usyd.edu.au/synergy.

[2] M. Barak and S. Rafaeli. On-line question-posing and peer-assessment as means for web-based knowledge sharing in learning. International Journal of Human-Computer Studies, 61:84–103, 2004.

[3] M. Birenbaum. Assessment 2000: toward a pluralistic approach to assessment. In M. Birenbaum and F. Dochy, editors, Alternatives in Assessment of Achievement, Learning Processes and Prior Knowledge, pages 3–31, Boston, MA., 1996. Kluwer Academic.

[4] B. Collis. The contributing student: A blend of pedagogy and technology. In EDUCAUSE Australasia, Auckland, New Zealand, Apr. 2005.

[5] P. Denny, J. Hamer, A. Luxton-Reilly, and H. Purchase. PeerWise: students sharing their multiple choice questions. In ICER’08: Proceedings of the 2008 International Workshop on Computing Education Research, Sydney, Australia, Sept. 2008.

[6] P. Denny, A. Luxton-Reilly, and J. Hamer. The PeerWise system of student contributed assessment questions. In Simon and M. Hamilton, editors, Tenth Australasian Computing Education Conference (ACE 2008), volume 78 of CRPIT, pages 69–74, Wollongong, NSW, Australia, 2008. ACS.

[7] P. Denny, A. Luxton-Reilly, and B. Simon. Quality of student contributed questions using PeerWise. In M. Hamilton and T. Clear, editors, ACE’09: Proceedings of the Eleventh Australasian Computing Education Conference (ACE 2009), CRPIT, Wellington, New Zealand, Jan. 2009. ACS. (Submitted for publication.)

[8] M. Fellenz. Using assessment to support higher level learning: the multiple choice item development assignment. Assessment and Evaluation in Higher Education, 29(6):703–719, 2004.

[9] S. Horgen. Pedagogical use of multiple choice tests - students create their own tests. In P. Kefalas, A. Sotiriadou, G. Davies, and A. McGettrick, editors, Proceedings of the Informatics Education Europe II Conference. SEERC, 2007.

[10] F.-Y. Yu, Y.-H. Liu, and T.-W. Chan. A web-based learning system for question posing and peer assessment. Innovations in Education and Teaching International, 42(4):337–348, Nov. 2005.
