Embedding pedagogic innovations in higher education: a cross-faculty investigation of staff views at an English university
Brendan Bartram, Jean Brant and Steve Prowse, University of Wolverhampton, UK
This paper examines university teachers’ views of the factors involved in successfully managing innovative assessment practices. Based on small-scale research carried out at one English university, it explores the opinions of staff who trialled the new practice in three separate university faculties. Their experiences and views, which in some cases differed markedly, are analysed in an attempt to identify useful lessons with regard to the successful implementation, management and transfer of educational changes. The article also includes a review of current literature in this field.
Introduction
This paper looks at university teachers’ perceptions of the factors involved in successfully implementing pedagogic innovations across different faculties in one English higher education institution (HEI). It focuses on one particular innovation relating to student assessment – recursive feedback – piloted initially in one faculty and then trialled by two others. The paper begins by describing the nature of recursive feedback and the reasons for its introduction at the university. This is followed by a review of literature examining factors believed important in successfully embedding curricular innovations, before moving on to a discussion of the research methodology adopted. Consideration is subsequently given to the findings which emerged from the research project, and a comparison is made with the literature, before proceeding to an analysis of the study’s conclusions with regard to what has been learned about managing educational changes.
Recursive feedback
The reason for introducing this assessment innovation related to staff concerns about the apparent lack of engagement of some undergraduate students with both written and verbal feedback on their formative and summative work. Despite the well-documented benefits of formative assessment (see Juwah et al, 2004, for example), some staff felt that many students were failing to participate in this important part of their learning because the assessments were not graded. Furthermore, large numbers of summative essays remained uncollected in tutors’ rooms, suggesting that some students did not value or see the importance of feedback on their work. This proved all the more alarming when initial investigations revealed that ‘lower achieving’ students appeared less likely to retrieve their work than students achieving a B or higher. Whilst anecdotal, this is in line with research carried out by Winter and Dye (2004), who found that 79% of teachers in their survey reported more than 20% of students failing to collect marked work.
In an attempt to overcome some of these issues, a specific first year core module was re-designed to incorporate recursive assessment. This form of assessment involves students submitting an electronic draft of their assignment in the middle of the course. The tutor then marks the piece of work, giving specific advice on how to improve the work by a limited number of marks. The essays were then returned electronically to the students. They were given at least one week to read, digest and respond to the feedback, before attending a compulsory feedback session in which they had to:
demonstrate clearly that they had read and understood the feedback and
explain to the tutor how they could improve the essay.
The engagement of students in a dialogic feedback process is considered by many academics to be good practice (Hyland, 2000), and essential to addressing the important problem of students not understanding what tutors mean by their feedback. It aims to enable the student and teacher to negotiate shared meanings in the work, thereby tightening the ‘understanding nexus’ between both parties. At the meeting, students were each asked if they wished to leave the essay as it stood with the grade given, or whether they wished to submit the amended copy. Either way they were obliged to hand in a printed copy at the end of the semester for official grading and entry to the examination boards.
If they chose to submit an amended copy, they had to leave all the tutor’s comments intact on the script, as well as their original text. They then had to highlight in yellow the parts they had added or changed. This meant that tutors did not need to read the whole assignment again and were able to identify and assess amended sections. Students awarded the top grades were not able to raise their mark beyond the grade ceiling, but still had to attend the feedback session to pass their assessment, given the need to demonstrate they had met the newly included key skill of ‘improving own learning and performance’, assessed on a pass/fail basis. Demonstration of engagement with and understanding of the feedback was thus necessary to validate the grade awarded.
Staff enthusiasm for the innovation, based on positive student responses, led to discussions with university colleagues, as a result of which the practice was subsequently trialled in two different faculties, where its use was perceived to be more problematic and less successful. These issues will be discussed later in the article. At this stage, however, it is useful to consider what the literature reveals about the factors involved in successfully embedding pedagogic innovations.
Review of the Literature
A review of the literature on this topic reveals several common factors (across international settings and levels of education) which are implicated in the success or failure of innovations. Some factors appear to have more impact than others, whilst the method of implementation is a common factor across studies.
Embedding pedagogic innovations is, according to Ferdig (2006, p.750), a complex, multifaceted process. The factors that promote success are the intertwining of “good people, good pedagogy and good performance”. Communication is particularly important: implementation requires negotiation with established practice, rapport and participation. Innovations need to be based on sound learning theory: Ferdig (p.750) emphasises a social-constructivist base, with “authentic, interesting and challenging content” relevant to students’ development. A successful innovation gives users a sense of “ownership”, with students feeling in control of their learning, and builds new types of relationship between teachers and students. Successful implementation therefore takes a broad view of education, taking account of social and emotional gains as well as cognitive ones: Zhao, Pugh, Sheldon and Byers (2002) found that social interaction (whether support or opposition) was a leading factor in predicting the success or failure of an innovation’s implementation.
Owston’s (2006) findings support Ferdig’s holistic analysis: pedagogical innovation, whether involving technology or not, is shaped by a complex interaction between the innovation and contextual factors such as school policy, leadership, cultural norms and values, teacher attitudes and skills, and student characteristics. Essential conditions for the sustainability of an innovation include teacher support, characterised by commitment, enthusiasm and belief in the value of the change. This is strongly linked with uptake: student support and enthusiasm were vital in motivating teachers to sustain an innovation. Teacher professional development was therefore necessary, both to learn new skills and to unlearn beliefs (about students and learning) that act as barriers to change. Owston (2006) goes on to elaborate on the importance of teacher attitude in the successful implementation of innovations: willingness and commitment are needed, their precursor being belief in the value of the innovation, sustained by the personal and professional satisfaction teachers derive from seeing the positive impact of their work on students. Teachers thus need to take ownership of the innovation, and at times unlearn beliefs about students or instruction that have dominated their professional lives; teacher development is therefore at the heart of sustaining an innovation.
Other contributing factors identified include supportive plans and policies, funding, innovation champions, and external recognition and support. For success, innovations need to be embedded in a model consisting of a concentric set of three contextual levels that affect and mediate change. The micro level consists of classroom organisation and the personal characteristics of teachers and students; but, whilst important, these are not sufficient to sustain reform (Miles, 1983): the meso level of school organisation and the personal characteristics of administrators and community leaders must be involved. The reforms most likely to succeed are those operating at the macro level of policies and trends, such as curriculum and assessment, professional development and telecommunications.
Owston (2006) cites further models that overlap to some degree. For Fullan (2001), success depends on four characteristics: the fit between the innovation and the needs of the institution, clarity of goals and the means of achieving them, the difficulty of the change for those implementing it, and the quality and practicality of the innovation. Rogers (1995) offers five factors relating the nature of the innovation to its rate of adoption: relative advantage (the degree to which the innovation is perceived as better than the idea it supersedes), compatibility (the extent to which it is consistent with existing values), complexity (how difficult it is to use and understand), “trialability” (the degree to which it can be experimented with on a limited basis) and observability (the degree to which results are visible to others).
Factors that contribute to the failure of innovations tend to be of a pedagogic nature: Datnow, Hubbard and Mehan (2002) identify three factors: agency (implementation is not as intended by the designers, or is not sensitive to local circumstances), culture (how the innovation is changed in the school setting) and structure (the further an innovation is from normal practice, the lower the likelihood that it will be sustained).
Nachmias et al (2004) concur that a complex mix of many factors is involved in the successful implementation of an innovation, and have extended this by developing, from their research, a hierarchy of these factors. They found that the most important factors are a history of innovation, backed by encouraging local policy, in conjunction with three main forces: the management, leading staff and the co-ordinator. Staff training is of only moderate importance, since leading staff tend to draw on personal sources rather than organised training.
Watson (2007) introduced new learning methodologies (Problem Based Learning and Colloquia) at Masters level. She suggests that new methods of teaching and learning require time and space to be made available within an existing curriculum, as well as staff commitment and careful advance planning: an already busy timetable militates against success. She views students as key to the success of any pedagogic innovation: they need to be aware of the implications of this investment for their overall performance, especially where innovations are voluntary and not linked to assessment.
Studies based in various other HEIs are particularly relevant. White’s (2007) qualitative study examined how individuals and institutions experienced the drivers of and barriers to change in relation to the uptake of innovations: lack of time was given as a major limiting factor, with correlations also found between financial support and uptake of innovation. Various mechanisms were observed to bring about change: supportive strategies, policies and processes. Charlier et al’s (2004) study covered four European institutions in Belgium, France and Switzerland, and so offers international comparison. Key success factors were identified as visionary people (at top or ground level), preliminary experience and resources, key persons (the expert practitioner), diversity in projects, starting small, support from the external environment, clear objectives, strategies, funds, policies, favourable conditions, infrastructure, and the culture of the institution. Inception is usually due to visionary people, at both top-down and bottom-up levels. Again, supportive policies and funding were important.
In summary, then, it would seem that the human factors are important in the success of any innovation. Teacher commitment and motivation are essential. Students need to see the value of the innovation, with their enthusiasm, combined with support from administrators and policy-makers, enhancing chances of sustainability of the innovation.
Methodology
After implementing the strategy across the three different faculties, staff views, experiences and reactions appeared to differ, markedly in some cases, thus creating an interest in the reasons for such divergent views. It was therefore decided to investigate the situation by designing a small-scale qualitative research study examining the perceptions of staff who had been directly involved in implementing the innovation in three separate faculties of the same university – a large, post-1992 HEI based in the English Midlands. The research was designed in two stages – six individual interviews were initially carried out with staff from across the three faculties, followed by a focus group made up of participants from each of the three faculties. Staff were asked questions on the ways in which they had operated the strategy, their views on its relative merits/demerits, and the reasons behind its perceived success or failure. Interviews were subsequently transcribed and analysed using a process of inductive category building – identifying key themes and allocating data to the emergent categories. The scale of the research and the small number of respondents clearly preclude wider generalisations being drawn from the data and limit the nature of claims that can be made concerning validity. Nonetheless, it is hoped that the inclusion of staff perspectives from three separate university faculties within the case study will go some way towards supporting a more robust and credible understanding of the issues involved in successfully embedding pedagogic innovations, and that the issues which emerge will resonate with HEIs elsewhere. The names of individual staff members and the faculties in which they are located have not been disclosed here in an attempt to protect individual and departmental anonymity.
Findings and discussion
An analysis of the individual and focus group interviews with staff from across the three faculties revealed four broad inter-related factors (cf. Ferdig, 2006 and Owston, 2006) that were believed to be important in determining the success of pedagogic innovations in HE. These factors were identified as follows:
The students participating
Staff understanding of the innovation
Staff engagement with the innovation
Practical management of the innovation
Each of these categories will now be explored to uncover and compare teaching staff’s views on the factors involved.
The students participating
As highlighted by Owston (2006) and Watson (2007), all the participants felt that students themselves had a key part to play in determining the relative success of innovations, though there were differences in the ways staff interpreted this. For some staff, involving students who were ‘at the right stage’ of studying was seen as important, with unanimous agreement that innovations could best be trialled with first year students who, it was felt, would be more open to pedagogic experimentation. There was some disagreement, however, on whether semester 1 or 2 students were most suitable.
More prominent in the discussions, however, were the ways in which the staff differently conceived of the students, and a dichotomous image of able/motivated versus less able/less engaged students emerged strongly from the conversations. Several staff expressed the view that trialling innovations with students fitting the second description would present a barrier to successful implementation given what was perceived to be a more negative orientation towards studying:
“It encouraged the interested but didn’t generally draw up those that were uninterested.”
“Given their lack of motivation or enthusiasm, I don’t think a lot of them were interested in learning.”
“The very students who would benefit from this practice often didn’t bother to engage.”
“Stronger students made effective use of recursive assessment, the weaker students didn’t.”
For some staff, this difference in motivation appeared to relate in part to students’ status, with part-time and on-line learners being seen as more motivated, and full-time students seen as less engaged:
“The students who did take it seriously – in the full-time group – I have no doubt, benefited. The evening only students benefited most definitely, and in a similar way, the on-line students.”
“Significant numbers of full-time students didn’t submit the second assignment despite the tutor having concerns and focussing on assignment support.”
“Fundamentally, the recursive feedback is a good idea and the more diligent students do benefit.”
Staff from the school which had originally piloted the innovation appeared less inclined to this divided view of success based on student ability, however, expressing the view that the initiative had been well received by students across the ability range. Potential reasons for this difference will be discussed further below.
Staff understanding of the innovation
This aspect was also identified by all staff as vital in determining how successful innovations would be. During the discussions, it emerged in some cases that participants had only a partial understanding of how the recursive feedback method operated, and had in fact carried it out wrongly, thus highlighting the need for clear communication referred to by several authors in the review. Examples included staff failing to upgrade marks even when students had re-submitted appropriately, if briefly, amended sections. Another member of staff commented on the different levels of understanding shared by staff in her team, and how this had been a key factor in inconsistent delivery:
“Despite detailed briefing of the team and their acquiescence, in reality there was variation in tutor practice about the nature of feedback that was given and whether it was consistent across the cohort. Some tutors gave detailed comments and clear guidance, with others it was patchy.”
Another lecturer did not even seem to have grasped how this assessment innovation could facilitate some students’ learning – “I can’t see it being of benefit to the less able students.” Not surprisingly, it was felt that staff training was key in avoiding such inconsistent practice and ensuring that staff possessed shared understandings.
Staff engagement with the innovation
Alongside understanding, and once again reflecting Owston’s (2006) findings, the extent to which staff themselves were prepared to engage with the innovation was seen as a key factor. It was felt in several cases that time pressure was often a reason why some members of staff had been less than enthusiastic about the innovation, perhaps in some cases because of an incomplete understanding of how to operate the assessment of the amended second drafts. Several members of staff commented on this aspect:
“It felt that I was spending a lot longer re-marking material and I did get a bit resentful about that.”
“Others saw it as a really onerous task and were not keen at all. They felt it doubled their workload and were asking for extra hours to be able to cope.”
Some commented that this pressure had been exacerbated by the large size of teaching groups. For others, their awareness of the students’ lack of interest in the innovation appeared to dampen their own enthusiasm, and some staff mentioned a general feeling of disappointment that their efforts were perceived to have been in vain, resonating with the importance of the social dimension identified by Zhao et al (2002) earlier. Some believed that the initiative had partly been unsuccessful as a result of it being at variance with lecturers’ own teaching and assessment preferences. One lecturer commented:
“My personal preference is for marking paper copies. I didn’t like marking on the screen.”
It was perceived that such views had in some cases led to staff who knew and understood exactly how the feedback system should operate being resistant and “doing their own thing,” “despite having discussed clearly what was to be achieved.” One member of staff revealed how such reluctance may even relate to deeper ideological views about the nature of teaching and learning in HE that were perceived to be at odds with the practice, a point reminiscent once again of Owston (2006):
“In some ways this smacks of what they have been used to in school and college, i.e. the teacher marking their work and telling them what they have to do to get a higher grade. Whilst I recognise that there needs to be a transition into higher education to understand the requirement, I’m not sure that this is the way as we are reinforcing old habits of failing to take responsibility for their own assessment and once again allowing the tutor to do it for them.”
Perhaps allied to the above theme is the slightly broader notion of transferability - how well innovations devised in one part of an institution can be successfully transferred to another, where the ‘teaching and learning ethos’ may be different (cf. Datnow et al, 2002, as discussed previously). Two members of staff commented on this issue, suggesting that it would be worth teams spending some time considering adaptations, rather than whole-sale borrowings, given differences in views and methods of practice. It was felt that simply allowing the scope for such discussions might be instrumental in winning staff support, given the potential reluctance that might develop from a sense of imposition. Another member of staff illustrates how such issues of ‘ownership of practice’ are important:
“With a large team the ideas need to come from them rather than taking a model that has worked elsewhere and trying to utilise it. Tutors agreed with the principles but there needs to be more discussion of the issues involved in order to get them on board.”
Practical management of the innovation
Finally, it was felt that the success of such initiatives could also be influenced by certain practical elements. In this case, several staff members referred to the way in which faulty administration of the process had been an obstacle, with some staff failing to communicate to students (cf. Ferdig, 2006) that amendments in the second draft needed to be highlighted appropriately before electronic submission:
“Students had not highlighted improvements. Therefore it is the administration as opposed to the theory that hinders the success of the exercise. It was this element that caused resentment as it was felt that time-consuming double-marking was taking place.”
The electronic submission procedure itself had also proved problematic in some instances, and a number of lecturers ascribed this to students’ unfamiliarity with the process, given that all other assignments required paper submissions. The issue of timing was once again considered important, not only in terms of course year and semester, but also with regard to the implementation of the strategy within a semester. Staff from the school which perceived a more positive response from students had used the strategy mid-way during the module and offered students guided online learning to free up staff time for marking, whereas other schools had implemented it towards the end of the module when students had finished attending taught sessions. According to staff from the school with more positive reactions, timing the strategy’s use in this way had also contributed to more positive staff attitudes as it was perceived to reduce marking workload:
“Cos we had the submissions in early we didn’t have that block of marking for example after Christmas/January when it all comes in.”
The strategy’s more positive reception had encouraged staff to embed it more widely across modules within the subject’s portfolio, and this wider application compared to more isolated usage was seen as important in engendering its success (cf. Miles, 1983):
“For me it’s about changing the whole design of the module…it’s not stand-alone...it’s got to be embedded throughout the three years. It’s not just a one-off.”
The fact that it was no longer ‘just a one-off’ in this particular school had thus allowed the team to apply it each year, and in the process students were felt to have become not only more familiar with it but also more persuaded of its benefits. The process of gaining academic approval for using the assessment in another school, however, was perceived to have been an obstacle in its own right – “one of the problems seemed to be getting through the minor modifications – putting up an argument for that” – which again raises questions about potential differences in educational views of innovative practice between faculties.
Conclusions
At this point, it is worth considering what conclusions arise from the study, particularly with regard to lessons concerning the management of educational changes. Firstly, it would appear that the study offers useful corroboration of many of the factors identified in the emerging body of literature on this topic. The importance of clear communication and positive interaction between all parties involved is highlighted, alongside the significance of teacher attitude and a sense of ownership and engagement on the part of staff and students. As previously discussed, the literature offers a number of models identifying the key factors implicated in successfully embedding pedagogic innovations. This study offers an additional model, singling out the four key factors below:
The students participating
Staff understanding of the innovation
Staff engagement with the innovation
Practical management of the innovation
As with other studies, the research highlights the inter-related nature of these factors in accounting for successful take-up. It is also worth pointing out that the study is one of the few to focus on non-IT related innovations, as many of the previous studies are concerned specifically with IT-based initiatives. Though recursive feedback includes an element of IT, it is not in itself a technological innovation, and as such the paper offers a tentative insight into factors which may be more implicated in the successful management of non-IT based innovations.
With regard to the idea of managing the transfer of new practices across institutions, the paper affords additional insights. It would seem that ‘wholesale’ adoptions may be beset with difficulties, and that allowing staff in different areas the opportunity to review and adapt the implementation of innovations is vital in offsetting the potential resistance and lack of ownership that might arise from a sense of imposition. This accommodation of differing views on practice – or adaptation pre-adoption – is thus important, but the paper highlights the fine balance that must be struck here, insomuch as adaptations have the potential to undermine the successful adoption of an innovation if they deviate significantly from original intentions. With this in mind, a frank discussion of staff views and opinions prior to adoption would seem in order, as any innovation would appear likely to fail without staff agreement on and belief in the worthwhileness of the endeavour.
Finally, in terms of signposting further directions for research, it would be worth carrying out a similar project across a number of HEIs to produce a more robust impression of the validity of the findings which have emerged here. It would also be interesting to include student perspectives on the issues involved, in an attempt to gain an understanding of the extent to which they corroborate or deviate from staff perceptions.
References
Charlier, B., Platteaux, H., Bouvy, T., Esnault, L., Lebrun, M., Moural, A., Pirotte, S., Denis, B. and Verday, N. (2004) Stories about innovative processes in higher education: some success factors, Networked Learning Conference 2004
Datnow, A., Hubbard, L. and Mehan, B. (2002) Extending Educational Reform from One School to Many, New York: Routledge Falmer
Ferdig, R.E. (2006) Assessing technologies for teaching and learning: understanding the importance of technological pedagogical content knowledge, British Journal of Educational Technology, 37 (5), pp.749-760
Fullan, M.G. (2001) The New Meaning of Educational Change, New York: Teachers College Press
Hyland, P. (2000) Learning from feedback on assessment, in A. Booth and P. Hyland (eds) The Practice of University History Teaching, Manchester: Manchester University Press
Juwah, C., Macfarlane-Dick, K., Matthew, B., Nicol, D., Ross, D. and Smith, B. (2004) Enhancing student learning through effective formative feedback, The Higher Education Academy, available at: http://www.heacademy.ac.uk/resources
Miles, M. (1983) Unravelling the mystery of institutionalization, Educational Leadership, 41 (3), pp.14-19
Nachmias, R., Mioduser, D., Cohen, A., Tubin, D. and Forkosh-Baruch, A. (2004) Factors involved in the implementation of pedagogical innovations using technology, Education and Information Technologies, 9 (3), pp.291-308
Owston, R. (2006) Contextual factors that sustain innovative pedagogic practice using technology: an international study, Journal of Educational Change, 8 (1), pp.61-77
Rogers, E.M. (1995) Diffusion of Innovations, New York: Free Press
Watson, H. (2007) Problem Based Learning and Colloquia in Postgraduate Teaching, University of Birmingham, available at: http://www.c-sap.bham.ac.uk/resources/project_reports/findings/ShowFinding.htm?id=NET/1
White, S. (2007) Critical success factors for e-learning and institutional changes – some organisational perspectives on campus-wide e-learning, British Journal of Educational Technology, 38 (5), pp.840-850
Winter, C. and Dye, V. (2004) An investigation into the reasons why students do not collect marked assignments and the accompanying feedback, CELT Learning and Teaching Projects 2003-4, Wolverhampton: University of Wolverhampton
Zhao, Y., Pugh, K., Sheldon, S. and Byers, J. (2002) Conditions for classroom technology innovations, Teachers College Record, 104 (3), pp.482-515
This document was added to the Education-line collection on 23 September 2010