The usefulness of self-assessment, peer assessment and academic feedback mechanisms








An independent-samples t test confirmed that there was no association between responses and age, mode, or attendance. One-way ANOVAs and post-hoc multiple comparisons did not reveal any association with equivalent full-time year, age, smiley type, mark, or group.


Students were also asked a series of questions about what they thought of aspects of ReMarksPDF and of self and peer assessment, using a 5-point Likert scale from 1 = Strongly disagree, through 3 = Neutral, to 5 = Strongly agree. The results are shown in Table 3.
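The t and p values in Table 3 below are one-sample t tests against the scale midpoint of 3 (see the note beneath the table). A minimal sketch of that computation in Python, using illustrative responses rather than the study's data:

```python
from scipy import stats

# Hypothetical Likert responses (1-5) to a single questionnaire item;
# the study's items had between 14 and 47 responses each.
responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4, 1, 4, 4]

# One-sample t test against the neutral midpoint of 3,
# matching the t and p columns reported in Table 3.
t_stat, p_value = stats.ttest_1samp(responses, popmean=3)
mean_response = sum(responses) / len(responses)
print(f"mean = {mean_response:.2f}, t = {t_stat:.3f}, p = {p_value:.3f}")
```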

Table 3: Question data

Question | n | 1 | 2 | 3 | 4 | 5 | Mean | t | p*

All students [Groups 1-4]
The ReMarks system provides better feedback than I have experienced in the past. | 47 | 1 (2.1%) | 3 (6.4%) | 6 (12.8%) | 26 (55.3%) | 11 (23.4%) | 3.91 | 6.932 | 0.000
Other units should adopt the ReMarks feedback system. | 47 | 2 (4.3%) | 0 (0.0%) | 12 (25.5%) | 21 (44.7%) | 12 (25.5%) | 3.87 | 6.317 | 0.000
ReMarks feedback is easy to read. | 47 | 2 (4.3%) | 4 (8.5%) | 6 (12.8%) | 22 (46.8%) | 13 (27.7%) | 3.85 | 5.490 | 0.000
It is beneficial to be able to view side column comments. | 47 | 0 (0.0%) | 0 (0.0%) | 3 (6.3%) | 19 (40.4%) | 25 (53.2%) | 4.47 | 16.224 | 0.000
ReMarks feedback is easy to understand. | 47 | 2 (4.3%) | 4 (8.5%) | 4 (8.5%) | 21 (44.7%) | 16 (34.0%) | 3.96 | 6.063 | 0.000
It is beneficial to have a visual breakdown of my results according to assessment criteria. | 47 | 1 (2.1%) | 0 (0.0%) | 9 (19.1%) | 15 (31.9%) | 22 (46.8%) | 4.21 | 9.163 | 0.000
Audio comments should be included as a form of feedback annotation. | 45 | 4 (8.9%) | 11 (24.4%) | 16 (35.6%) | 9 (20.0%) | 5 (11.1%) | 3.00 | 0.000 | 1.000
I would prefer audio comments to written comments. | 46 | 7 (15.2%) | 20 (43.5%) | 14 (30.4%) | 4 (8.7%) | 1 (2.2%) | 2.39 | -4.437 | 0.000
Video comments should be included as a form of feedback annotation. | 46 | 9 (19.6%) | 15 (32.6%) | 16 (34.8%) | 5 (10.9%) | 1 (2.2%) | 2.43 | -3.821 | 0.000

Students who completed a self-assessment rubric [Groups 1 and 3]
Completing a self-assessment rubric has assisted my understanding of the marking rubric. | 34 | 1 (2.9%) | 1 (2.9%) | 6 (17.6%) | 19 (55.9%) | 7 (20.6%) | 3.88 | 5.849 | 0.000
Completing a self-assessment rubric has assisted me to understand the results I received for this assessment. | 34 | 1 (2.9%) | 2 (5.9%) | 8 (23.5%) | 16 (47.1%) | 7 (20.6%) | 3.76 | 4.667 | 0.000

Students who completed a peer assessment rubric for another student [Group 2]
Completing a peer-assessment rubric has assisted my understanding of the marking rubric. | 19 | 0 (0.0%) | 1 (5.3%) | 7 (36.8%) | 3 (15.8%) | 8 (42.1%) | 3.95 | 4.025 | 0.001
Completing a peer-assessment rubric has assisted me to understand the results I received for this assessment. | 19 | 0 (0.0%) | 0 (0.0%) | 8 (42.1%) | 3 (15.8%) | 8 (42.1%) | 4.00 | 4.623 | 0.000

Students who received a peer assessment and also completed a self-assessment [Group 3]
Receiving a peer-assessment rubric has assisted my understanding of the marking rubric. | 14 | 2 (14.3%) | 5 (35.7%) | 2 (14.3%) | 2 (14.3%) | 3 (21.4%) | 2.93 | -0.186 | 0.856
Receiving a peer-assessment rubric has assisted me to understand the results I received for this assessment. | 14 | 3 (21.4%) | 3 (21.4%) | 3 (21.4%) | 2 (14.3%) | 3 (21.4%) | 2.93 | -0.179 | 0.861

* Sig. (2-tailed), one-sample t test based on a neutral response of 3.

An independent-samples t test did not reveal any gender or mode differences at the 5% level. One-way ANOVAs and post-hoc multiple comparisons did not reveal any associations except as follows. Whether participants preferred audio to written comments: Smiley (F = 3.356, p = 0.019) and Mark (F = 3.527, p = 0.015). That is, in the case of the smiley rating, if the null hypothesis that the population means are equal were true, an F ratio of 3.356 or greater would be expected in fewer than 2% of samples (p = 0.019), well below the 5% significance level. Whether video comments should be included as a form of feedback annotation: Age (F = 3.432, p = 0.016). In relation to students who completed a self-assessment, whether the self-assessment assisted in understanding the marking rubric: Type of Feedback (F = 3.617, p = 0.026). A one-way ANOVA detects the presence of significant differences among the means; post-hoc multiple comparisons are required to identify which means differ.
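The analyses above follow a standard one-way ANOVA plus post-hoc comparison workflow. A minimal sketch in Python, assuming the responses and grouping factor sit in a pandas DataFrame; the column names and values are illustrative placeholders, not the study's variables:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: Likert response to "I would prefer audio comments to
# written comments" and the smiley rating each respondent received.
df = pd.DataFrame({
    "prefer_audio": [2, 3, 1, 4, 2, 3, 2, 5, 1, 2, 3, 4],
    "smiley": ["sad", "neutral", "sad", "happy", "neutral", "happy",
               "sad", "happy", "neutral", "sad", "happy", "neutral"],
})

# One-way ANOVA: do mean responses differ across smiley categories?
groups = [g["prefer_audio"].values for _, g in df.groupby("smiley")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# The ANOVA only signals that some group means differ; a post-hoc test
# such as Tukey's HSD identifies which pairs of groups differ.
print(pairwise_tukeyhsd(df["prefer_audio"], df["smiley"]))
```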


The general impression of the ReMarksPDF feedback management system was quite favourable. 78.7% of students indicated agreement that the ReMarks system provides better feedback than they had experienced in the past. 70.2% agreed that the system should be adopted in other units they were studying. They reported that ReMarksPDF feedback was easy to read (74.5%) and understand (78.7%). 93.6% of students agreed that it is beneficial to be able to view side column comments. This was not a comment on the quality of the comments, but rather the method by which they were displayed to students. Other PDF readers display clickable bubbles, which hide feedback. 78.7% of students agreed that it is beneficial to have a visual breakdown of their results according to assessment criteria.
Student responses concerning audio comments did not accord with the results of Lunt and Curran (2010). Although no audio comments were included in the annotations, students preferred written to audio comments. Students were ambivalent as to whether audio comments should be included as a form of feedback annotation, and they were also not in favour of including video commentary. Further research that includes actual audio and video annotations is needed to explore these initial exploratory results.
Of those students who completed a self-assessment rubric, 76.5% agreed that the exercise assisted their understanding of the marking rubric, and 67.7% agreed that it assisted them to understand the results they received for the assessment. These results suggest that the inclusion of a rubric-based self-assessment process is a beneficial exercise. Of those students who completed a peer-assessment rubric for another student, 57.9% agreed that the exercise assisted their understanding of the marking rubric, and 57.9% agreed that it assisted them to understand the results they received for the assessment. While these figures are lower than those for self-assessment, they included a higher percentage of strong agreement. Although based on a small sample, these results suggest that a rubric-based peer-assessment process is seen by students as a beneficial exercise. Instructors know, and the literature suggests, that peer assessment is beneficial; what these results indicate is that students share this perception.
Students who received a peer assessment from another student and also completed a self-assessment were less impressed with the peer assessment: 50% disagreed (35.7% agreed) that the exercise assisted their understanding of the marking rubric, and 42.8% disagreed (35.7% agreed) that it assisted them to understand the results they received for the assessment. These results suggest that receiving a student peer assessment may not be perceived as enhancing the understanding of the students who receive it. It is important to note that we are assessing only perceptions, not student learning; students may nevertheless have learned from the approach.
The data enabled the marks awarded by self-assessment and by peer assessment to be compared with the marker's results. Pearson correlation coefficients were calculated among the self-assessments, peer assessments, marker assessments, and the total mark (%) awarded by the marker. See Table 4.
In contrast to Lew et al. (2010), the overall correlations between the marks given in self, peer and marker assessments indicated only weak (rather than moderate) levels of accuracy in student self-assessment and peer-assessment ability. There was a moderate correlation between self- and peer-assessment results. There was no evidence of the ability effect reported by Lew et al. (2010): students judged as more academically competent were unable to self-assess or peer-assess with higher accuracy than their less competent peers. Consistent with Lew et al. (2010), it is concluded that there is no significant association between student beliefs about the utility of self-assessment and the accuracy of their self-assessments. Students' low perceptions of the utility of peer assessment did, however, match the inaccuracy of the peer assessments.
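A minimal sketch of how such criterion-level correlations can be computed, assuming the self-, peer- and marker-assigned scores are available as parallel arrays; the numbers are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical criterion scores (1-5) for the same ten assignments,
# as rated by the student (self), a peer, and the marker.
self_scores = np.array([4, 3, 5, 4, 2, 4, 3, 5, 4, 3])
peer_scores = np.array([3, 3, 4, 5, 2, 3, 3, 4, 4, 2])
marker_scores = np.array([4, 2, 4, 4, 3, 3, 2, 5, 4, 3])

# Pearson correlations with two-tailed p values, the statistics reported
# in Table 4 for each pairing of assessor types.
pairs = {
    "self vs marker": (self_scores, marker_scores),
    "peer vs marker": (peer_scores, marker_scores),
    "self vs peer": (self_scores, peer_scores),
}
for label, (x, y) in pairs.items():
    r, p = stats.pearsonr(x, y)
    print(f"{label}: r = {r:.3f}, p = {p:.3f}")
```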
Open-ended questions sought to elicit positive and negative aspects of ReMarksPDF and the types of feedback annotations provided. Selected results appear in Table 5.
Table 4: Pearson correlations between marker (M1-M8), self-assessment (S1-S8) and peer-assessment (P1-P8) criterion scores and the total mark (%) awarded by the marker

Each cell shows the Pearson correlation coefficient with its two-tailed significance in parentheses. n = 21 for correlations among marker, self-assessment and total-mark variables; n = 10 for correlations involving peer-assessment variables. ** Correlation is significant at the 0.01 level (2-tailed); * significant at the 0.05 level (2-tailed).

(a) Correlations with the total mark and the self-assessment criteria

 | Total% | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8
M1 | .759** (.000) | .112 (.629) | .167 (.469) | -.268 (.469) | -.180 (.240) | -.140 (.435) | .022 (.923) | .169 (.463) | -.180 (.436)
M2 | .397 (.075) | .530* (.013) | .172 (.456) | .119 (.607) | .080 (.730) | -.110 (.635) | -.129 (.578) | -.038 (.870) | .235 (.305)
M3 | .614** (.003) | .357 (.113) | .391 (.079) | .376 (.093) | .253 (.269) | -.032 (.890) | .182 (.429) | .117 (.614) | .188 (.611)
M4 | .835** (.000) | .060 (.797) | .176 (.446) | .108 (.640) | .175 (.449) | -.018 (.939) | .331 (.142) | .083 (.722) | .045 (.847)
M5 | .767** (.000) | -.147 (.525) | -.070 (.764) | .000 (1.00) | .042 (.857) | -.052 (.824) | .103 (.657) | .216 (.347) | -.103 (.657)
M6 | .560** (.008) | .307 (.176) | .457* (.037) | .241 (.292) | .228 (.320) | -.061 (.791) | .223 (.332) | -.010 (.967) | .234 (.307)
M7 | .692** (.001) | -.174 (.449) | .073 (.754) | -.004 (.988) | -.002 (.992) | .029 (.900) | .277 (.225) | .037 (.875) | -.012 (.960)
M8 | .793** (.000) | .000 (1.00) | .123 (.597) | -.279 (.221) | -.308 (.175) | -.115 (.619) | .263 (.249) | .041 (.859) | -.033 (.887)
Total% | 1 | .160 (.490) | .193 (.401) | .039 (.868) | .043 (.853) | -.141 (.541) | .173 (.454) | .117 (.614) | .002 (.993)
P1 |  | -.408 (.242) | -.521 (.122) | -.423 (.224) | -.611 (.061) | -.535 (.111) | -.055 (.111) | .000 (1.00) | -.468 (.173)
P2 |  | -.024 (.947) | -.459 (.182) | .000 (1.00) | -.206 (.567) | .167 (.645) | -.408 (.242) | -.111 (.760) | -.167 (.645)
P3 |  | -.689* (.028) | -.043 (.907) | .000 (1.00) | -.027 (.942) | .093 (.799) | .152 (.676) | -.557 (.094) | -.093 (.799)
P4 |  | -.218 (.545) | -.115 (.752) | .395 (.258) | .107 (.768) | -.063 (.864) | .408 (.242) | -.250 (.486) | -.563 (.091)
P5 |  | -.395 (.259) | -.035 (.294) | .138 (.507) | .086 (.813) | .075 (.836) | .431 (.214) | .000 (1.00) | -.452 (.189)
P6 |  | -.395 (.259) | .311 (.381) | .238 (.507) | .086 (.813) | -.302 (.397) | .431 (.214) | -.302 (.397) | -.452 (.189)
P7 |  | -.408 (.242) | -.061 (.866) | .423 (.224) | .153 (.674) | .134 (.713) | .218 (.545) | -.267 (.455) | -.468 (.173)
P8 |  | -.600 (.067) | -.315 (.375) | .000 (1.00) | -.286 (.424) | -.250 (.486) | .357 (.311) | -.250 (.486) | -.688* (.028)

(b) Correlations with the peer-assessment criteria

 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8
M1 | .400 (.252) | .121 (.739) | -.480 (.160) | -.017 (.963) | -.287 (.421) | .021 (.955) | -.236 (.511) | .017 (.963)
M2 | -.134 (.713) | .389 (.267) | -.093 (.799) | .063 (.864) | -.075 (.836) | -.452 (.189) | -.134 (.713) | -.063 (.864)
M3 | .105 (.773) | .131 (.719) | -.473 (.167) | .196 (.587) | .059 (.871) | .207 (.566) | .105 (.773) | .049 (.893)
M4 | .250 (.468) | .163 (.652) | .050 (.892) | -.033 (.927) | -.564 (.089) | -.060 (.868) | -.107 (.768) | -.050 (.891)
M5 | .031 (.933) | -.122 (.738) | -.353 (.371) | -.375 (.286) | -.591 (.072) | -.417 (.230) | -.585 (.075) | -.418 (.229)
M6 | .501 (.140) | .191 (.597) | -.667* (.319) | .351 (.319) | .047 (.897) | .283 (.49) | .083 (.819) | .234 (.515)
M7 | .206 (.568) | -.171 (.636) | -.182 (.615) | -.228 (.527) | -.464 (.176) | -.253 (.480) | -.356 (.313) | -.210 (.560)
M8 | .440 (.203) | .050 (.891) | -.065 (.859) | -.112 (.757) | -.391 (.263) | -.166 (.647) | -.227 (.528) | .050 (.891)
Total% | .326 (.358) | .089 (.808) | -.351 (.320) | -.075 (.838) | -.473 (.168) | -.154 (.671) | -.289 (.417) | -.096 (.791)

Table 5: Annotation positive and negative aspects


ReMarksPDF

Positive

  • It makes the marking feel fair as it seems systematic, it makes you understand where you need to improve.

  • Comprehensive and clear in the areas assessed

  • Very detailed and specific feedback. Provides more guidance on areas for improvement.

  • An evaluation from peers is important.

  • Easier to interpret, clear and direct.

  • Easy to read at a glance, appears comprehensive.

  • Easy to understand.

  • The strength of the ReMarks feedback system is that it focuses the students on the marking criteria that have been used. Further it gives a clear indication of the areas that can be improved.

  • Indicating my marks against each criterion; mapping my performance against the mean for all enrolled students.

  • I like the spider chart - I can really gauge how well I performed in comparison with the rest of the class.

  • The text boxes were very useful. It much easier that trying to read illegible hand writing.

  • The text boxes on the side are easy to read and understand, rather than pencil or pen handwriting. Many times I have had assignments returned and have not been able to read the comments so have not been able to improve my work.

  • A good range of comments is made and allow the student to compare where they believe they have succeeded but have lost marks and understand why.

  • Students are able to get a clear and concise breakdown of their results, the colour coding proves to be very useful, previous marking systems (mostly handwritten) are not so effective given the handwriting may not be legible and/or difficult to read. So I think it's fantastic overall and should be adopted by other units.

  • Being able to compare our self-assessment against the actual feedback system - makes for an interesting and constructive comparison.

  • Receiving specific feedback, provision of detailed marking criteria.

  • The whole system is structured well.

Negative




  • It's very truncated - I had to manually add up my marks to see my total - that should be evident from the start. I personally do not like peer reviews - this marker obviously did not take the time to read the legislation in making adverse comments on my conclusions (which were limited to the legislation - no cases yet to be heard on this topic either); marker did not appreciate my argument. At least comments were not personal or vindictive. I feel somewhat cheated that my paper was marked by a fellow student with limited knowledge on the topic I chose rather than by my teacher. I would rather have had a set question rather than a choice.

  • I don’t think there was any aspect of this that was what I would consider negative.

  • Providing peer feedback was difficult I was afraid of being too hard. Other than that no negatives.

  • It's still only as good as the person who uses it and I don't believe it is possible to remove the subjectivity from marking.

  • To me the spider chart is a bit of a gimmick and seems a complex way to provide the average mark, it could just be provided next to the individual students mark in each criteria section.

  • I did not find the spider chart very useful.

  • Not sure how it works- how much is human generated and how much is computer generated.

  • Little hard to find a mark at first glance.

  • What is the smiley face about. I think I understood how I went before that!

  • It may not be as personal, it seems a little robotic but I think the clarity it provides outweighs this.

  • For research because it has been drilled in to me previously that it is unreliable and disreputable.

  • Sometimes the side column comments were a little too brief.

  • I'm still trying to get used to it, I'm not sure yet

  • Spider chart took some extra time to understand what it meant, you couldn't just glance at it and appreciate what it meant immediately.




Types of feedback annotation

Positive

  • Text boxes were very useful as they pinpoint comments to particular text. It overcomes the inclusion of general comments, which sometimes fail to provide constructive feedback.

  • The comments and colour coding were extremely helpful in distinguishing the strengths and weakness of my essay.

  • Good visual aid and comprehensive. I like the colour.

  • Where the comments were illegible due to poor handwriting.

  • The spider graph provides a very visual form of feedback and the markers comments are very easy to read provided constructive feedback that was easy to understand.

  • Clear and easy to read unlike some markers handwritten comments is also very helpful.

  • Feedback is more structured and detailed than what I have received in the past.

  • Easy to understand, objective, provides feedback on areas where improvement possible.

  • Great to see where you’re at compared with peers.

  • Clear, great structure to understand how assignment marked.

  • More detailed.

  • Easy to use, easy to understand. Succinct comments. Many assignments are scrawled with markers handwriting and it is hard to understand, this system lets me comprehend all.

  • Comments and compare them to an overall marking system.

  • Was not really sure how this works but the spider format was useful to visualize.

  • I can easily see what my strength and weakness are so that I can more focus on the weakness in the future studies.

  • It is a new system and as such should be given a chance to shine.

  • I found the written comments most useful. Also to be able to see the average mark of class was nice.

  • Beneficial to see where you rank in each part in relation to other students as provided in the spider. I appreciate written comments such as those in the text boxes.

  • You can really see where you have gone wrong or right. It is precise and highlights where you need improving.

  • I can see where I need to improve.

  • I felt that the ReMarks system gave a very interesting and easy to understand visual guide. Also, the comments were far easier to read. I have had many assignments in other units.

  • The colour coding is an excellent way to indicate the strengths/weaknesses of the assignment it is far better than the usual system of just receiving a tick next to the text, the feedback is also very clear due to it being typed as opposed to the usual handwritten feedback which is usually very difficult to read, the visual breakdown of marking criteria and results for that criteria.

Negative

  • The only thing I did not like was the spider chart. I found it very difficult to understand.

  • Some of the graphs didn't make much sense

  • It's hard to sort through all the data and compare it - please note for me this is a matter of what a person becomes accustomed to, for instance if I was in the habit of receiving assignment.

  • A bit confronting and challenging I think.

  • It could be matter of time that I need to be used to this new ReMarks system, but it took some time to understand the marking measures and how the system works.

  • I prefer the personal touch in explaining where I had gone wrong.

  • Takes longer for the assignments to get back to students when there marked. Different approaches are taken by the reader and the writer (e.g. feedback on my assignment had mentioned using Google to find material where as I never use a generic search engine.




An open-ended question sought to elicit how the ReMarksPDF feedback system could be improved. Representative responses appear in Table 6.
Table 6: Improvement suggestions


ReMarksPDF

  • Maybe a little bit of a high level introduction on how to read it?

  • Provide an overview of how the software works via a web link.

  • Seems to be fairly well explained as is.

  • Found the spider chart confusing at first glance, don’t see the need for it.

  • Remove colour coding. Text boxes linked to the relevant paragraphs are sufficient. Spider chart is good, but simply putting the average in brackets next to my mark on the rubric would be sufficient. I do appreciate seeing my mark compared to course average in each section.

  • There doesn’t seem to be any need to include statistical measurements such as standard deviation, as some may not know what it means and how it applied to their mark.

  • I really liked the colour coding and found it helpful. The ability to see the actual criteria and mark received is brilliant.

  • Audio would be a great improvement to ReMarks.

  • I like it. Maybe a little more instructions concerning the purpose of the graph.

  • I’m not sure, I haven’t had enough experience with the feedback system given this is the first unit marked in this manner. I do think it achieves its purpose and should be adopted in other units.

  • Perhaps a bell curve or similar statistical model of the spread of student marks for the assessment. Other than that, very little needs improving.

  • I think this is a better way forward than the marking feedback I have received in the past.

Future research should concentrate on determining which form of annotation is most useful for student learning. Further research could also compare the final marks of students in relation to the mechanism for receiving feedback. If the assessment had been a formative draft provided with feedback in preparation for a summative submission, it would be possible to assess the extent to which students actually employed the various feedback types in improving their final summative submission. A secondary consideration is to address students' responses to actual audio comments and to compare their perceptions of these with other forms of annotation commentary, enabling further examination of the results of Lunt and Curran (2010).


Comment
While there were both positive and negative aspects reported for ReMarksPDF and the types of annotations discussed, ReMarksPDF was nevertheless reported by students to be a valuable new tool for assessment feedback. 78.7% of students found ReMarksPDF feedback better than the feedback they had received in the past, and 70.2% of students agreed or strongly agreed that other courses should adopt the ReMarksPDF system. These results suggest that use of ReMarksPDF may have a positive impact on teaching and learning.
Students found written comments, text boxes, the assessment rubric, underlining, ticks, colour coding, the spider chart (with average), the spider chart, and smileys to be significantly valuable forms of feedback, in that order of preference.

Students indicated that ReMarksPDF feedback was easy to read and understand, and that it was beneficial to have comments appear in a side column and to have a visual breakdown of results according to assessment criteria. Students were ambivalent about the inclusion of either audio or video comments and, although they had not experienced audio comments, indicated that they did not prefer audio to written comments. Students appear to prefer text-based presentations of feedback over more abstract forms of commentary; these perceptions need to be tested in future research that includes audio-visual commentary.


Students who completed a self-assessment rubric or a peer-assessment rubric reported that doing so assisted their understanding of the associated marking rubric and of the results they received for their assessment. In contrast, students who received a peer assessment and also completed a self-assessment rubric were negative in their assessment of whether this combination assisted their understanding of the marking rubric and the results they received. One possible explanation relates to the master-apprentice model of education: students may place little or no value on the perceptions of their anonymous peers, while valuing their own self-improvement processes.
In contrast to Lew et al. (2010), the overall correlations between the scores of self, peer and marker assessments indicated only weak (rather than moderate) levels of accuracy in student self-assessment and peer-assessment ability. There was a moderate correlation between self- and peer-assessment results. There was no evidence of the ability effect reported by Lew et al. (2010): students judged as more academically competent were unable to self-assess or peer-assess with higher accuracy (a higher correlation with an academic's mark) than their less competent peers. Consistent with Lew et al. (2010), it is concluded that there is no significant association between student beliefs about the utility of self-assessment and the accuracy of their self-assessments. Students' low perceptions of the utility of peer assessment did, however, match the inaccuracy of the peer assessments.
The results have several implications for teaching and research. The correlation between self, peer and marker assessments may be affected by other factors, such as training, self-reflection, and consensus as to the meaning of assessment criteria and how to rate them. Openly discussing peer-review processes may improve student appreciation, understanding and engagement with the process. The ability differentiation within the student cohort may not have been sufficient to enable observation of the ability effect reported by Lew et al. (2010). Several measures of ability may need to be derived for subsequent research. It would also be interesting to evaluate the impact on perceptions of the utility of peer assessment if the students being peer assessed were informed of the ability of their peer assessors.
Software such as ReMarksPDF offers the opportunity to use types of feedback that would otherwise be impractical to implement manually, such as dashboard charts and auto comments. Dashboard charts are calculated from the internal database of student marks captured by ReMarksPDF and displayed graphically; an example appears in Figure 1. Auto comments are comments saved in a reusable library, designed to decrease the time spent annotating repetitive comments; an example appears in Figure 4.
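Neither feature depends on anything exotic. The sketch below illustrates the two ideas in Python; it is not ReMarksPDF's actual data model or API, just a hypothetical picture of a reusable comment library and the per-criterion class averages behind a dashboard chart:

```python
from statistics import mean

# A reusable "auto comment" library: frequently used remarks keyed by a
# short code, so a marker can insert them without retyping.
auto_comments = {
    "CITE": "Check citation format against the Australian Guide to Legal Citation.",
    "STRUCT": "Consider restructuring this section around a single clear theme.",
    "ANALYSIS": "Good identification of the issue; the analysis could go further.",
}

# Marks per criterion for each student, as captured during marking.
marks = {
    "student_a": {"sources": 4, "citation": 3, "issues": 5, "structure": 4},
    "student_b": {"sources": 5, "citation": 4, "issues": 4, "structure": 3},
}

# Class averages per criterion: the values a dashboard or spider chart
# plots alongside an individual student's own results.
criteria = next(iter(marks.values())).keys()
class_average = {c: mean(student[c] for student in marks.values()) for c in criteria}

print(class_average)
print(auto_comments["CITE"])
```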

Figure 4: Example Auto comment – student identifier obscured

It is anticipated that e-marking software will have a positive effect on student perceptions of feedback mechanisms by enabling markers to efficiently provide detailed individual feedback, outlining strengths and weaknesses of the student assessment submission and avenues for self-improvement.


References


Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74.

Hattie, J. (1999). Influences on student learning. Inaugural professorial lecture, University of Auckland, New Zealand.

Heinrich, E., Milne, J., Crooks, T., Granshaw, B., & Moore, M. (2006). Literature review on the use of e-learning tools for formative essay-type assessment. http://etools.massey.ac.nz/documents/LitReview101006.pdf

Huber, E., & Mowbray, L. (2011). Greening the paper trail: Improving the efficiency of assessment feedback through sustainable online assignment submission. http://www.mq.edu.au/ltc/projects/greening_paper_trail.htm

Kendle, A., & Northcote, M. (2000). The struggle for balance in the use of quantitative and qualitative online assessment tasks. Proceedings of the 17th Annual ASCILITE Conference, Coffs Harbour.

Lew, M., Alwis, W., & Schmidt, H. (2010). Accuracy of students' self-assessment and their beliefs about its utility. Assessment & Evaluation in Higher Education, 35(2), 135-156.

Linn, R. L., & Miller, M. D. (2005). Measurement and Assessment in Teaching. Columbus: Pearson Merrill Prentice Hall.

Lunt, T., & Curran, J. (2010). 'Are you listening please?' The advantages of electronic audio feedback compared to written feedback. Assessment & Evaluation in Higher Education, 35(7), 759-769.

Maclellan, E. (2004). How convincing is alternative assessment for use in higher education? Assessment & Evaluation in Higher Education, 29(3), 311-321.

Appendix A

Rubric – Self-Assessment LS377 2010 Assignment (Group 3)


Criteria | Excellent | Good | Average | Poor | Very Poor

Basic goals:
Identification of relevant source materials | 5 | 4 | 3 | 2 | 1
Accurate citation | 5 | 4 | 3 | 2 | 1
Identification of primary issues | 5 | 4 | 3 | 2 | 1
Structured themed arguments | 5 | 4 | 3 | 2 | 1

Higher order goals:
Intellectual initiative | 5 | 4 | 3 | 2 | 1
Analytical ability | 5 | 4 | 3 | 2 | 1
Interpretative ability | 5 | 4 | 3 | 2 | 1

Skills:
Argue and express an informed opinion based upon a critique of the relevant literature | 5 | 4 | 3 | 2 | 1

How to complete
For each criterion, shaded in yellow above, rate your performance on the 5-point Likert scale from Excellent through to Very Poor. Type an X after the number in the cell that you believe best reflects your assignment performance in the row for that criterion. The measures for each criterion are defined below.
Once you have completed your self-assessment, please attach a copy and email it back to Professor Stephen Colbran (Stephen.Colbran@gmail.com).
Definition of Measures



  1. Identification of relevant source materials – This measure is a value judgment assessing the extent to which source materials relevant to the topic have been identified. Five measures are presented: Excellent – All relevant major primary and secondary sources are detected, Good – Most relevant primary and secondary sources are detected, Average – Some relevant primary and secondary sources are detected, Poor – Few relevant, but some irrelevant, primary and secondary sources are detected, and Very Poor – Numerous irrelevant primary and secondary sources are detected.


  2. Accurate citation – Five measures of compliance with the Australian Guide to Legal Citation <http://mulr.law.unimelb.edu.au/go/aglc> are presented: Excellent – No errors detected, Good – Less than 2 errors detected, Average – Between 3 and 5 errors detected, Poor – Between 6 and 10 errors detected, and Very Poor – More than 10 errors detected.


  3. Identification of primary issues – This measure is a value judgment assessing the extent to which all primary knowledge issues have been identified, for example specific terminology and facts, conventions, trends, classifications, criteria, principles, theories or structures. Five measures are presented: Excellent – All relevant primary issues are detected, Good – Most relevant primary issues are detected, Average – Some relevant issues are detected, Poor – Few relevant, but some irrelevant, issues are detected, and Very Poor – Numerous irrelevant issues are detected.


  4. Structured themed arguments – This measure is a value judgment assessing the extent to which comprehension of structured themed arguments has been demonstrated. Has an understanding of facts and ideas, through organisation, translation, interpretation and extrapolation, been demonstrated? Five measures are presented: Excellent – All themes are well structured, Good – Most themes are well structured, Average – Some themes are detected with a reasonable level of structure evident, Poor – One theme is detected, but it is poorly structured, and Very Poor – No structured themes are detected.


  5. Intellectual initiative – This measure is a value judgment assessing the extent to which information has been synthesised by combining information in new ways, creating new insights or alternative solutions. Five measures are presented: Excellent – Information has been completely synthesised, Good – Most information has been synthesised, Average – Some information has been synthesised, Poor – Lacks evidence of information synthesis, and Very Poor – No synthesis is detected.


  6. Analytical ability – This measure is a value judgment assessing the extent to which information has been examined and broken down into parts, identifying motives, causes, or linkages. Have elements, relationships, and organisational principles been considered in making inferences and in the provision of evidence in support of the arguments presented? Five measures are presented: Excellent – Information has been completely analysed, Good – Most information has been analysed, Average – Some information has been analysed, Poor – Lacks evidence of analysis, and Very Poor – No analysis is detected.


  7. Interpretative ability – This measure is a value judgment assessing whether the author of the assignment has conducted an evaluation by presenting and defending opinions, making judgments about information, the validity of ideas, or the quality of others' work based on a set of criteria. Five measures are presented: Excellent – Information has been completely evaluated, Good – Most information has been evaluated, Average – Some information has been evaluated, Poor – Lacks evidence of evaluation, and Very Poor – No evaluation is detected.


  8. Argue and express an informed opinion based upon a critique of the relevant literature – This is a skills-based measure requiring four elements: identification of the relevant literature (see criterion 1), critique of that literature, arguing an opinion, and expressing that argument coherently. Five measures are presented: Excellent – All four elements are present to a high standard, Good – All four elements are present, Average – Three elements are present, Poor – Two elements are present, and Very Poor – No elements are present.

Appendix B


SelfPeer_Assessment_ATN_paper.docx



