

3. EXAMPLE RUN THROUGH OF SQLify SYSTEM

The system suggests a final mark for a student's assignment. It does so by summing both the correctness marks for each query answer and the accuracy marks for the reviews conducted by that student. The weighting of correctness and review accuracy for each problem in each assignment could be varied according to the effort required for each. An example would be weighting the correctness marks to 70% of the entire assessment and the review accuracy marks to 30%. The instructor then chooses to accept or modify the suggested mark. Such marks may be released individually by the instructor or en masse. Details of how an accuracy mark is determined by the system and how an instructor determines their accuracy mark are given in [7].

3.4 Checking Accuracy of Reviews

Table 7 lists one row per peer review performed in the context of an assignment. The first row, for instance, shows that Student 1 was a reviewer for a query answer (SA2) submitted by Student 4 in answer to query problem QP1. Student 1 gave this query answer a correctness mark of L7. The accuracy mark set by the instructor for the submitted query answer was also L7.

Hence, the accuracy mark for this particular review is 100%. For the next review performed by this student there is a difference between the correctness mark given by the student and the accuracy mark set by the instructor. This difference causes the student's accuracy mark for that review to be reduced.

Table 7: Accuracy marks for reviews

Reviewer  Reviewee  Problem  Submission  Reviewer's mark   Accuracy mark       Difference  Accuracy mark
                                         for submission    set by instructor               for this review
1         4         QP1      SA2         L7                L7                  0%          100%
1         5         QP1      SA3         L6                L4                  20%         80%
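Reading the two rows of Table 7, the accuracy mark for a single review appears to be simply 100% minus the difference between the reviewer's mark and the instructor's mark, once that difference is expressed as a percentage (0% difference gives 100%, 20% gives 80%); how a level difference such as L6 versus L4 is converted into a percentage is detailed in [7]. A minimal sketch under that assumption:

    def review_accuracy(difference_pct):
        # Accuracy mark for one review, assuming it is 100% minus the
        # reviewer/instructor difference already expressed as a percentage.
        return 100 - difference_pct

    assert review_accuracy(0) == 100   # first row of Table 7
    assert review_accuracy(20) == 80   # second row of Table 7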

3.5 Calculating a Final Mark

The last table below summarizes the various marks that a particular student received for various query problems and for the reviews performed. A weighted final mark is given in the last row using the suggested weightings of 70% for correctness and 30% for accuracy of reviews.

Table 8: Final mark calculation for Student 1

                                  QP1    QP2    QP3
Correctness marks (weight 70%)    100%   50%    70%
Review accuracy (weight 30%)      100%   80%    50%
Final Mark                        74%
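As a check of the arithmetic behind Table 8, the final mark can be reproduced as the 70/30 weighted sum of the average correctness mark and the average review-accuracy mark; averaging evenly across the three problems is an assumption here, since the text notes that the weighting of each problem is the instructor's choice:

    from statistics import mean

    correctness     = [100, 50, 70]   # QP1-QP3 correctness marks for Student 1
    review_accuracy = [100, 80, 50]   # QP1-QP3 review accuracy marks for Student 1

    final_mark = 0.70 * mean(correctness) + 0.30 * mean(review_accuracy)
    print(round(final_mark))          # 74, matching the last row of Table 8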

4. CONCLUSIONS

In this paper a small set of existing tools used for teaching and assessing SQL writing skills was reviewed. The tools were evaluated from both Computing Education and Database Theory perspectives, noting possible areas of enhancement.

A new tool called SQLify was introduced, which is used for practice and submission of database query assignments. Central to SQLify is the use of an intricate automatic grading system and peer review. The main reason for including peer review is to offer students a richer learning experience. Additionally, the peer reviews will assist in the assessment of assignments.

SQLify uses a relatively complex method to suggest marks for assignments, designed to:

• yield a much wider range of accuracy marks than simply correct or incorrect;
• employ peer review of assignment work by students, encouraging evaluation and producing more sources of feedback to students;
• utilize database theory to enhance computer assisted grading;
• set high quality demands for student reviews, yielding higher learning outcomes; and
• reduce the number of necessary moderations by course instructors.

Each of these objectives must be made transparent to students.

Students are informed of the possible learning benefits for themselves and the time-saving benefits for instructors. Students must be made aware of how the marking approach will be used to assess their work and their reviews, and how they must use the system to succeed in assessments.

SQLify has been prototyped and implemented and is ready to be used in a live course by the end of 2006, with the exception of Relational Algebra support. Student use of the system will be monitored. The usefulness of the system as perceived by students and instructors will then be evaluated. Any change in student outcomes will be measured.

With this new tool it will also be possible to effectively distinguish specific problems within the areas of difficulty suggested in section 1.1, allowing feedback into the existing curriculum to improve teaching in these areas.

5. REFERENCES

[1] Bloom, B.S., Taxonomy of Educational Objectives. Edwards Bros., Ann Arbor, Michigan, 1956.

[2] Brook, C. and Oliver, R., Online learning communities: Investigating a design framework. Australian Journal of Educational Technology, 19, 2, 2003, 139 - 160.

[3] Chapman, O.L. The White Paper: A Description of CPR. 2006 [cited February 23, 2006]; Available from: http://cpr.molsci.ucla.edu/cpr/resources/documents/misc/CPR_White_Paper.pdf

[4] de Raadt, M., Toleman, M., and Watson, R. Electronic peer review: A large cohort teaching themselves? In Proceedings of the 22nd Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE'05). (Brisbane, December 4-7, 2005). QUT, Brisbane, 2005, 159 - 168.

[5] de Raadt, M., Toleman, M., and Watson, R., An Effective System for Electronic Peer Review. International Journal of Business and Management Education, 13, 9, 2006, 48 - 62.

[6] Dekeyser, S. and de Raadt, M. SQLify project website. 2006 [cited May 15, 2006]; Available from: http://www.sci.usq.edu.au/projects/sqlify/.

[7] Dekeyser, S., de Raadt, M., and Lee, T.Y. Computer Assisted Assessment of SQL Query Skills. 2006 [cited 1st September, 2006]; Available from: http://www.sci.usq.edu.au/research/workingpapers/sc-mc-0610.ps.

[8] Dietrich, S.W., Eckert, E., and Piscator, K. WinRDBI: a Windows-based relational database educational tool. In Proceedings of the twenty-eighth SIGCSE technical symposium on Computer science education (San Jose, California, United States, February 27 - March 1, 1997). ACM Press, 1997, 126 - 130.


[9] Kearns, R., Shead, S., and Fekete, A. A teaching system for SQL. In Proceedings of the 2nd Australasian conference on Computer science education. (Melbourne, Australia, 2 - 4 July 1997), 1997, 224 - 231.

[10] Kurhila, J., Miettinen, M., Nokelainen, P., Floreen, P., and Tirri, H. Peer-to-Peer Learning with Open-Ended Writable Web. In Proceedings of the 8th annual conference on Innovation and technology in computer science education ITiCSE '03. (Thessaloniki, Greece, June 30 - July 2, 2003). ACM Press, 2003, 173 - 178.

[11] Mitrovic, A. Learning SQL with a computerized tutor. In Proceedings of the twenty-ninth SIGCSE technical symposium on Computer science education SIGCSE '98. (Atlanta, United States, 25 - 28 Feb, 1998). ACM Press, 1998, 307 - 311.

[12] Prior, J. Online assessment of SQL query formulation skills. In Proceedings of the fifth Australasian conference on Computing education. (Adelaide, Australia, 4-7 February, 2003). Australian Computer Society, 2003, 247 - 256.

[13] Prior, J. and Lister, R. The Backwash Effect on SQL Skills Grading. In Proceedings of the 9th annual SIGCSE conference on Innovation and technology in computer science education. (Leeds, UK, 28 - 30 June, 2004). ACM Press, 2004, 32 - 36.

[14] Sadiq, S., Orlowska, M., Sadiq, W., and Lin, J. SQLator: An Online SQL Learning Workbench. In Proceedings of the 9th annual SIGCSE conference on Innovation and technology in computer science education ITiCSE '04. (Leeds, UK, 28 - 30 June, 2004). ACM Press, 2004, 223 - 227.

[15] Saunders, D., Peer tutoring in higher education. Studies in Higher Education, 17, 2, 1992, 211 - 218.

[16] Shneiderman, B., Creating creativity: user interfaces for supporting innovation. ACM Transactions on Computer-Human Interaction, 7, 1, 2000, 114-138.

Modelling Student Behavior in Algorithm Simulation Exercises with Code Mutation

Otto Seppälä

Helsinki University of Technology, PL 5400, 02015 TKK, Finland

oseppala@cs.hut.fi

ABSTRACT

Visual algorithm simulation exercises test student knowledge of different algorithms by making them trace the steps of how a given algorithm would have manipulated a set of input data. When assessing such exercises the main difference between a human assessor and an automated assessment procedure is the human ability to adapt to the possible errors made by the student. A human assessor can continue past the point where the model solution and the student solution deviate and make a hypothesis on the source of the error based on the student's answer. Our goal is to bring some of that ability to automated assessment. We anticipate that providing better feedback on student errors might help reduce persistent misconceptions.

The method described tries to automatically recreate erroneous student behavior by introducing a set of code mutations on the original algorithm code. The available mutations correspond to different careless errors and misconceptions held by the student.

The results show that such automatically generated "misconceived" algorithms can explain much of the student behavior found in erroneous solutions to the exercise. Non-systematic mutations can also be used to simulate slips, which greatly reduces the number of erroneous solutions without explanations.

1. INTRODUCTION

In the Data Structures and Algorithms courses at Helsinki University of Technology we use the TRAKLA2 system [7] to assess how well students know how the different algorithms taught in our course should operate. Rather than requiring the students to implement these algorithms, the system tests their knowledge using visual algorithm simulation exercises. These exercises are then automatically graded and form a part of the course grade.

While automatic assessment saves us hours of assessment time, it also gives students the possibility of getting immediate feedback on their solutions, day and night.

So far the feedback has typically consisted only of the number of correct steps and a model solution. As the student's solution is also still available, an interested student has been given the possibility to review the answer against the model solution and figure out what went wrong. For some time now we have been researching how to improve the quality of this feedback, and essentially it all boils down to being able to interpret the error made by the student.
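As a minimal sketch of the kind of feedback currently given, the number of correct steps could be obtained by comparing the student's trace with the model solution step by step; the element-wise comparison below is an assumption for illustration, not TRAKLA2's actual grading rule:

    def correct_steps(student_trace, model_trace):
        # Count the simulation steps whose resulting state matches the model
        # solution (hypothetical rule; the real grading procedure may differ).
        return sum(1 for s, m in zip(student_trace, model_trace) if s == m)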

Our previous paper [8] on the subject studied the possibility of simulating the errors by manually implementing algorithm variants that correspond to different misconceptions. The approach described in this paper extends this work with a way to automatically generate some of the algorithm variants, as well as a method to handle careless errors.
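To make the idea of a code mutation concrete, here is a hedged sketch (the heap example, the sift_up helper, and the flipped comparison operator are illustrative assumptions, not the paper's actual mutation set): a single mutation replaces a comparison operator in a max-heap insertion, producing a variant whose traces can then be matched against student answers.

    import operator

    def sift_up(heap, i, greater):
        # Generic sift-up: move heap[i] towards the root while the chosen
        # comparison says the child should be above its parent.
        while i > 0:
            parent = (i - 1) // 2
            if greater(heap[i], heap[parent]):
                heap[i], heap[parent] = heap[parent], heap[i]
                i = parent
            else:
                break

    def insert(heap, key, greater=operator.gt):
        heap.append(key)
        sift_up(heap, len(heap) - 1, greater)

    # Original algorithm vs. a mutated variant with the comparison flipped,
    # which behaves like a min-heap (a plausible student misconception).
    h1, h2 = [9, 5, 7, 1], [9, 5, 7, 1]
    insert(h1, 8)                          # correct: [9, 8, 7, 1, 5]
    insert(h2, 8, greater=operator.lt)     # mutated: [9, 5, 7, 1, 8]

    # A student answer matching the mutated trace but not the correct one
    # suggests the corresponding misconception or careless error.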

Figure 1: TRAKLA2 applet page and the model solution window. In this exercise the heap operations can be simulated by moving the keys in the data structures using a mouse.

2. VISUAL ALGORITHM SIMULATION