Evaluation of the usefulness of self-assessment, peer assessment and academic feedback mechanisms
This exploratory quantitative case study examined students’ perceptions of the usefulness of assessment feedback provided by academics using ReMarksPDF, student self-assessment and student peer assessment. ReMarksPDF is an enterprise-level e-marking system designed by the author and integrated with Blackboard 9.1 and Moodle 2.1 – see www.remarkspdf.com. The ReMarksPDF workflow includes e-submission, allocation to markers, marking, moderation, and return of annotated PDF assessment submissions to students, with marks transferred into Grade Center. A summary is presented of the positive and negative aspects of different types of feedback annotations and of how feedback may be improved. 78.7% of students found ReMarksPDF feedback better than feedback they had received in the past, and 70.2% of students agreed or strongly agreed that other courses should adopt the ReMarksPDF system. Students found written comments, text boxes, the assessment rubric, underlining, ticks, colour coding, the spider chart (with average), the spider chart, and smileys to be significantly valuable feedback, in that order of preference. Students indicated that ReMarksPDF feedback was easy to read and understand, that it was beneficial to have comments appear in a side-column note, and that a visual breakdown of results according to assessment criteria was helpful. Students were ambivalent about the inclusion of either audio or video comments and did not prefer audio to written comments. Students appear to prefer text-based rather than more abstract presentations of feedback. Students who completed a self-assessment rubric or peer assessment rubric reported that this assisted their understanding of the associated marking rubric and the results they received for their assessment.
In contrast, students who received a peer assessment and also completed a self-assessment rubric were negative in their assessment of whether these two measures in concert assisted their understanding of the marking rubric and the results they received for their assessment. There was only a weak level of accuracy in student self-assessment and peer-assessment ability compared with professional marking. E-marking software such as ReMarksPDF can have a positive effect on student engagement and perceptions of feedback mechanisms by enabling markers to efficiently provide detailed individual feedback, outlining strengths and weaknesses of the student assessment submission and avenues for self-improvement.
Keywords: Assessment feedback, self-assessment, peer assessment, smileys, audio and video feedback, ReMarksPDF.
Assessment drives student learning and effort (Kendle & Northcote, 2000) and in turn influences the direction and quality of student learning (Maclellan, 2004). Numerous literature reviews indicate that feedback is critical to improving the standard of student work and learning (Black & Wiliam, 1998a; Hattie, 1999; Heinrich, 2006; Huber & Mowbray, 2011) and that both formative and summative assessment directly affect student engagement. The structure of assessment designs often includes formative feedback for self-improvement rather than summative feedback concerned with marks and grades. Feedback, at its best, is individual in focus, outlining strengths and weaknesses and avenues for self-improvement (Linn & Miller, 2005; Heinrich, 2006). Electronic feedback management systems such as ReMarksPDF are promoted on the basis that they offer opportunities for improvement in assessment practice and outcomes for students, including:
E-submission, allocation, marking, moderation and assessment return via a learning management system.
Links to electronic portfolios classified by learning outcomes or graduate attributes.
Quality management, including consistency among markers, reporting of results, and self-reflection amongst markers.
ReMarksPDF, Blackboard 9.1 and Turnitin all contain enterprise-level feedback systems. ReMarksPDF is the most comprehensive of the three in terms of annotation types and sophistication, moderation, and online and offline capabilities.
ReMarksPDF has extensive PDF annotation capabilities including:
Ability to designate macros for Auto Text, Sounds, and Video links;
Import and export of .csv database files, linking marking to student documents, and uploading to a
Drag-and-drop graph gallery, indicating individual and relative student performance;
Style tool specifically designed to rapidly incorporate English style and grammar comments for essays, plus the ability to build specialist comment libraries in any discipline;
Advanced moderation capabilities enabling statistical and visual comparison of markers, and individual and global moderation of student assessment; and
Integration with Learning Management Systems – Blackboard 9.1 and Moodle 2.1.
Examples of annotation types provided by the ReMarksPDF system appear in Figure 1.
Figure 1: Spider chart, Spider chart (with average), Smiley scale
A lecturer simply downloads the software, installs it on a desktop or tablet PC, creates an assessment in the Learning Management System for e-submission, allocates submissions to markers, opens a student assessment submission and starts marking. Once marking and moderation are complete, the marked assessment is automatically returned to students and the marks are stored in Grade Center. The student logs into the Learning Management System and can retrieve and view all the marking annotations made on their PDF assessment submission. The e-marking workflow is complete. The objective is to provide high-quality, consistent feedback for a reasonable expenditure of effort by the academic and cost to the academic unit or department. Lecturers and markers can save considerable time by reusing annotations relating to common student errors and in moderation practices.
The primary purpose of this exploratory case study was to examine student perceptions of self-assessment, peer assessment and assessment by an academic marker as alternative and cumulative approaches to student feedback using the ReMarksPDF Feedback Management System. The study produced a range of statistics useful for planning a later mixed method research project exploring assessment feedback, particularly that employing enterprise level automated systems, such as ReMarksPDF.
Lew et al. (2010) found that overall correlations between the scores of self-, peer and tutor assessments suggested weak to moderate accuracy of student self-assessment ability, together with an ‘ability effect’; i.e., students judged as more academically competent were able to self-assess with higher accuracy than their less competent peers. They concluded that there is no significant association between student beliefs about the utility of self-assessment and the accuracy of their self-assessments.
A subsidiary purpose of this exploratory case study was to examine Lunt and Curran’s (2010) results, which suggest that electronic audio feedback has advantages compared with written feedback and that students were 10 times more likely to listen to audio commentary than to open written comments. Student opinions were sought on their preference for audio feedback.
Third-year Law students enrolled in LS377 Information Technology Law (n = 60) at the University of New England voluntarily completed a survey on feedback received in relation to an assignment submitted in PDF format in satisfaction of 30% of their grade. A single marker marked student submissions using ReMarksPDF without knowledge of each student’s self- or peer-assessment marks. All assessment was based on 8 criteria and marked on a 5-point Likert scale from excellent to very poor. Volunteer students were randomly assigned to one of four groups of 15 participants each. See Table 1.
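The random allocation described above can be sketched as follows. This is a minimal illustration using hypothetical student identifiers; the study's actual randomisation procedure is not specified, so the shuffle-and-slice approach here is an assumption:

```python
# Sketch: randomly assign 60 students to 4 groups of 15.
import random

students = [f"S{i:02d}" for i in range(1, 61)]  # hypothetical student IDs
random.seed(0)  # fixed seed so this illustration is reproducible
random.shuffle(students)

# Slice the shuffled list into four equal groups of 15.
groups = {g + 1: students[g * 15:(g + 1) * 15] for g in range(4)}
for g, members in groups.items():
    print(f"Group {g}: {len(members)} students")
```

Shuffling once and slicing guarantees equal group sizes, unlike assigning each student an independent random group.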
Table 1: Groups, Feedback type and Response rate
[Table 1 survives only partially; recoverable feedback-type entries: "Peer assessment of Group 3"; "Self-assessment + Peer assessment from Group 2".]
There is nothing to suggest that the implications of this exploratory study would not generalize to students of other disciplines. The assessment task was an essay on a topic of the student’s choice in the field of Information Technology Law, not unlike any essay requiring discipline-based student research. The focus of this research is on feedback, not discipline-specific content.
All students were provided with academic feedback consisting of an assessment rubric (Appendix A), colour coding of their text according to a colour key (Appendix B), pre-prepared comments based on a marking guide, and a final mark. Marking was done electronically using ReMarksPDF <www.remarkspdf.com>.
A survey instrument was prepared and administered online using Qualtrics <www.qualtrics.com>. All four groups were evaluated on their perceptions of the usefulness of the feedback they received.
Students were asked to rate the overall value to them of the types of feedback annotations received. The results appear in Figure 2. A one-sample t test indicated that all ratings were significant at the 5% level (p < 0.001). A one-way ANOVA and post-hoc multiple comparisons did not reveal any association with sex, age, mark received, type of smiley, mode (full-time, part-time), equivalent full-time year of study, or group. In total, 76.1% of students across all groups reported that the feedback received was above average or excellent. There was no significant difference between groups.
Students rated the different types of annotations they received on a 5-point Likert scale from 1 (useless) through 3 (neutral) to 5 (very useful). The average ratings are shown in Table 2, which indicates that all forms of annotation were rated significantly more useful than the neutral value of 3.
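The one-sample t test used here compares the mean rating for an annotation type against the scale's neutral midpoint of 3. A minimal sketch of that calculation, using hypothetical ratings rather than the study's data:

```python
# One-sample t test of Likert ratings against the neutral value 3,
# computed from first principles with the standard library.
from math import sqrt
from statistics import mean, stdev

ratings = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]  # hypothetical ratings, scale 1-5
neutral = 3

n = len(ratings)
# t = (sample mean - hypothesised mean) / standard error of the mean
t_stat = (mean(ratings) - neutral) / (stdev(ratings) / sqrt(n))
print(f"mean = {mean(ratings):.1f}, t({n - 1}) = {t_stat:.2f}")
```

A t statistic well above the critical value for n − 1 degrees of freedom indicates the annotation type was rated significantly above neutral, which is the pattern Table 2 reports for every annotation type.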
Table 2 ranks the annotations from most to least useful according to student perceptions. Written comments were rated as most important, with more abstract annotations, such as colour coding and statistical representations, rated as less important. The most abstract annotation, a smiley indicating the emotional rating given by the marker, was rated least valuable, though still significantly above neutral.
Table 2: Annotation data