Department of Information and Communications
Enhancing Feedback to Students: Technology Can Help
Abstract (of sorts)
Overall time spent marking an assignment reduced by up to a half. Feedback returned to students within a week. Students provided with more in-depth feedback. Students better able to understand where they have gone wrong. Sounds too good to be true? Read on.
‘Electronic Feedback’ is a marking assistant developed by Philip Denton at the School of Pharmacy and Chemistry, Liverpool John Moores University. The application uses MS Excel and Word to generate student reports in print and email messages, and has been described in an earlier issue of Learning and Teaching in Action (Denton, 2003).
The purpose of this article is to show how such a tool has been a catalyst for the review and evaluation of assessment design and practice. There is a brief explanation of how it is currently used by three staff in the Department of Information and Communications. The account is personal and based on a seminar presentation at the second MMU Cheshire Learning and Teaching Conference, Alsager, September 2004.
The application is available for free download at http://cwis.livjm.ac.uk/cis/download/xlfeedback/welcome.htm. The current version is 11. Registration is required using a simple online form that requests name, school/department, university and a contact number and email address. The application is well documented: an interactive user guide and a self-guided tutorial are included with the software.
Stirred to action
The marking assistant first came to my attention when a printed copy of Learning and Teaching in Action arrived in my pigeon-hole. For whatever reason that issue, once read, stayed at the top of the many piles on my desk and did not disappear under the remnants of earlier good intentions. The lure of an application that offered “more feedback, of higher quality, in a shorter time” (Denton, 2003) piqued my curiosity and overcame my initial scepticism. In recent years there have been sector-wide initiatives to fund and disseminate generic and subject-specific teaching and learning packages. Few of these appeared to offer the promise of alleviating an increasing assessment workload using standard Office tools. ‘Electronic Feedback’ does not require any specialised knowledge to set up and use for those already proficient with Excel and Word (as discussed below).
Assessment is a necessary and important part of our jobs as teachers in higher education. Ever larger piles of assignments are tangible evidence of the growth in student numbers. Finding a means to cope with the increased workload had become a personal priority to:
- overcome a sense of dread at the sight of so much student work
- relieve some of the repetitive aspects where you can find yourself writing the same comments on many scripts
- avoid feeling despondent when the delay in giving feedback stretches into weeks rather than days
- address a sense of futility occasioned by students who only seem interested in their mark and appear to take little or no account of the detailed feedback provided.
Taking no action was not an option – negative feelings were becoming increasingly associated with the task. A useful reminder of how hard and demanding assessment can be was found in my copy of The ILTA guide (Inspiring Learning about Teaching and Assessment) (Race and Brown, 2001). I had not experienced the desperation of others which these authors illustrate, but recognised my anxieties about a process that was becoming ever more public:
“It’s also increasingly under the spotlight. We are now required to make clear links between assessment criteria and intended learning outcomes…The whole business is now very public. Every assessed piece of work, and every feedback comment to students, is now a piece of evidence of the quality (or lack of quality) of our teaching.” (p.31)
Under the spotlight
By this point the reader may think that the author is overly sensitive about marking and assessment. Several developments had made me more aware of assessment issues. I was appointed as a QAA Subject Reviewer and took part in three institutional visits. These visits were a good opportunity to compare, and reflect on, practice. Consequently I also steered the Department’s preparation for subject review and wrote an appraisal of assessment practice for the Self-Assessment Document. As part of this I delivered a departmental workshop on marking student work. Examples of student work and staff feedback from every taught unit provided the largest body of evidence made available to the subject reviewers.
Alongside these activities I carried out duties as an external examiner in two other universities. The Department commenced a major review of its undergraduate degrees with the result that all routes are now delivered through a Common Undergraduate Programme. Unit syllabuses had to be re-written to conform to the university’s revised proforma that makes explicit the unit learning outcomes, teaching and learning strategies, assessment strategies and assessment criteria. Biggs (1999) has coined the term ‘constructive alignment’ to describe curriculum design that aligns learning activities and assessment tasks with the learning outcomes to achieve consistency. A useful overview of the concept is hosted on the Higher Education Academy Engineering Subject Centre web site (2004). Feedback is evidence of the students’ achievement of the learning outcomes and indicative of the quality of teaching.
As a teacher there were some prominent issues to address. Assessment could be made fairer and more reliable by providing controlled variables such as weighted criteria. Students could better understand how their overall mark is arrived at and how marks are allocated. These require transparency. I had to make plain what I was looking for and the students had to be told about expected standards. Feedback should allow each student to identify and appreciate their achievement and help them remedy any deficiencies. Effective feedback can assist students to adjust their approach to learning so that they can achieve to the best of their ability.
The validity of the assessment process is enhanced by clarifying more rigorously what each assignment sets out to measure. It is one thing to know this and another to put it into practice. Assignment briefs have been extensively re-written and typically run to three or four sides. Expected standards have to be articulated for each level of achievement for each assessment criterion. Adopting the marking assistant followed easily in those units where the assessment strategy had been comprehensively reviewed. The feedback that is generated explains the level of achievement for each assessment criterion and can be aligned to the stated learning outcomes.
Setting up Electronic Feedback
Preparations for first use of the marking assistant in one unit took two days during the summer vacation period. Familiarisation with the application and basic configuration such as unit, tutor and student details required half a day. Student data was downloaded from MIDAS and quickly cut and pasted into the relevant columns of the marking assistant spreadsheet, i.e. first name, last name and student ID. The most challenging and time-consuming task was the writing of grade and criterion-based (standard) comments. These were derived from published university and faculty descriptors, tailored to the specific assignment - the report in the module ICT and The Law 2. The fact that I was already in the process of re-writing the assignment briefs simplified the task of aligning the assessment and feedback.
Comments for individualised feedback were selected by looking at the most recent batch of assignments and identifying frequently recurring issues. The set up was then tested by re-marking five assignments from the batch and printing the feedback. There is a template that controls the reporting options and in particular determines the detail of the marks analysis given to individual students.
Example grade comments
- Fail: total irrelevance or complete avoidance of the question
- Fail: irrelevant or barely started work; no required learning outcomes are demonstrated
- Fail: comprehensive failure
- Fail: marginal failure: satisfactory work that fails to demonstrate at least one of the required learning outcomes
- Third Class: threshold level: the minimum pass standard has been achieved
- 2.2 Class: satisfactory work but with room for improvement
- 2.2 Class: good work that is generally competent
- 2.1 Class: very good work that shows understanding and coherent organisation with few errors or misunderstandings
- 2.1 Class: very good work that shows an accomplished level of understanding and coherent organisation
- First Class: excellent work
- First Class: exceptional work that is close to professional standards for publication
- First Class: brilliant and original work of a standard comparable to professional publications in the field
Example standard comments
Content (identification and definition of key terms; relevance)
- You have made no serious attempt to answer the question.
- There is little sign that you have made a serious attempt to produce work of an acceptable standard. Your material has little or nothing to do with the topic.
- Your treatment of the topic is very limited and there are significant errors and omissions.
- Your treatment of the topic is limited. You do not use information constructively to address the topic as it is set. Some material is irrelevant or inconsistent.
- You have made a genuine attempt to address the topic and show reasonable familiarity with relevant basic sources of information. There is a fair indication of your knowledge but you tend to rely on lecture notes or the uncritical use of introductory texts. There are some omissions.
- You have made a genuine attempt to address the topic and show reasonable familiarity with relevant basic sources of information. There is a fair indication of your knowledge but you tend to rely on lecture notes or the uncritical use of introductory texts. There are some omissions but overall your essay is reasonably competent.
- Your treatment of the topic is comprehensive and thoughtful. There is evidence that you have acquired substantial knowledge, for example, through your use of information from a range of relevant sources.
- You have fully addressed the topic. It is clear that you have read and/or researched widely and your essay shows a high level of accuracy and coherence.
- Your essay demonstrates a wide breadth of knowledge and its application. It also shows evidence of originality.
Once the marking assistant had been set up for one unit, it took just under two hours to configure it for a second assignment in a different unit – the essay in the module ICT and The Law 1. The main effort involved was re-drafting the criterion-based comments for the different type of assignment. The comments can be exported and the files have been shared with the teachers who used the application in the other units. In the current year (2004-05), configuration is taking even less time as the relevant files can be imported from amongst those created during the first year of use.
Current use in the Department
During the academic year 2003-04 the marking assistant was used by three teachers on the Information and Communications Common Undergraduate Programme in the units: ICT and The Law 1 and 2, Research Methods and Web Site Design. Criterion-based marking is used for three assessments – an essay, a report and a research proposal – where each criterion is given a weighted mark out of 100.
Example of a marking schema for a research proposal (criterion-based marking):
- Developing the research question/describing the practical task - 15%
- Aims and objectives - 10%
- Summary of literature - 30%
- Methods - 30%
- Timescale - 10%
- References - 5%
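As an illustration of how the weighting works, the sketch below combines criterion marks (each out of 100) into an overall percentage using the weights above. This is a sketch of the arithmetic only: the marking assistant performs the equivalent calculation inside Excel, and the example marks are invented.

```python
# Research-proposal weights from the schema above (they sum to 100%).
WEIGHTS = {
    "Research question/practical task": 0.15,
    "Aims and objectives": 0.10,
    "Summary of literature": 0.30,
    "Methods": 0.30,
    "Timescale": 0.10,
    "References": 0.05,
}

def weighted_mark(criterion_marks):
    """Combine criterion marks (each out of 100) into an overall percentage."""
    assert set(criterion_marks) == set(WEIGHTS), "every criterion needs a mark"
    return round(sum(mark * WEIGHTS[c] for c, mark in criterion_marks.items()))

# Invented example: strong on methods, weaker on the literature summary.
marks = {
    "Research question/practical task": 65,
    "Aims and objectives": 60,
    "Summary of literature": 48,
    "Methods": 70,
    "Timescale": 55,
    "References": 60,
}
overall = weighted_mark(marks)  # -> 60
```

Automating this sum is also what removes the need for the frequent manual checks of weighted totals mentioned later in the article.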
The other assessment is a 50-minute laboratory-based test of web design skills that uses normalised marking. A student who correctly completes every required element could achieve the maximum 100 marks. Initial configuration of the marking assistant was more time-consuming as standard comments had to be written for every available mark, but the actual marking of the test became simpler and much more efficient.
Example for web design laboratory test (normalised marking):
- Doctype - missing
- Doctype - incorrect syntax or Doctype
- Doctype - correct
- Title - missing/unsatisfactory
- Title - basic description
- Title - fully descriptive of page content etc
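The checklist structure behind normalised marking can be sketched as follows. The element names echo the examples above, but the mark values and the data layout are invented for illustration; in the marking assistant itself the outcomes and standard comments live in the Excel workbook.

```python
# Hypothetical rubric: each test element has a standard comment and a mark
# for every possible outcome, so marking is just picking the outcome that
# applies. Mark values here are invented.
RUBRIC = {
    "Doctype": [
        ("missing", 0),
        ("incorrect syntax or Doctype", 2),
        ("correct", 4),
    ],
    "Title": [
        ("missing/unsatisfactory", 0),
        ("basic description", 2),
        ("fully descriptive of page content", 4),
    ],
}

def mark_script(outcomes):
    """Sum the marks for the chosen outcome of each element."""
    total, comments = 0, []
    for element, outcome in outcomes.items():
        for label, mark in RUBRIC[element]:
            if label == outcome:
                total += mark
                comments.append(f"{element} - {label}")
                break
    return total, comments

total, comments = mark_script({"Doctype": "correct",
                               "Title": "basic description"})
```

Because every available mark is tied to a pre-written comment, the marker's judgement is spent once, when the rubric is written, rather than repeated on every script.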
The author presented a short staff development seminar to colleagues about the application at the end of the year. At the time of writing (November 2004) a further four teachers are considering using the marking assistant to some extent in their units. It will continue to be used in the two ICT and The Law units, and will be applied to both written assignments in Research Methods and to the staged group assignments in Web Site Design. Its use will also be extended to a further unit, Applied Web Design and Management, where assessment comprises staged individual and group tasks. The author has been invited to give presentations at staff development events in two other faculties of the university.
The marking assistant makes it possible to give more extensive feedback that is structured and of high quality. Students receive a word-processed report that is legible and personalised. For the ICT and The Law assignments (essay and report) this includes:
- a standard grade comment that explains the overall mark as a degree classification equivalent, e.g. 2.1 Class: very good work that shows an accomplished level of understanding and coherent organisation
- criterion specific comments
- individualised comments
- summary comments about the collective performance of the unit cohort.
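To make the shape of the report concrete, here is a minimal sketch of how the four sections listed above might be assembled for one student. It is not the tool's actual implementation ('Electronic Feedback' builds its reports with Excel and Word), and all names and comments below are invented.

```python
# Sketch only: assemble the four report sections into one personalised,
# word-processed-style report for a single student.
def build_report(name, grade_comment, criterion_comments,
                 individual_comments, cohort_summary):
    sections = [
        f"Feedback for {name}",
        f"Overall: {grade_comment}",
        "By criterion:",
        *(f"  {c}: {text}" for c, text in criterion_comments.items()),
        "Individual comments:",
        *(f"  {text}" for text in individual_comments),
        f"Cohort summary: {cohort_summary}",
    ]
    return "\n".join(sections)

report = build_report(
    "A. Student",                                   # invented name
    "2.1 Class: very good work",                    # grade comment
    {"Content": "comprehensive and thoughtful"},    # criterion comment
    ["Check your referencing style."],              # individualised comment
    "Most of the cohort handled the analysis well." # cohort summary
)
```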
The summary helps students place their individual achievement in a broader context. An optional feature of the marking assistant provides a more detailed breakdown of marks so that individual students can compare their performance with the cohort. This data includes:
- overall mark, e.g. 67%
- highest, average and lowest marks awarded, e.g. H 73%, A 56%, L 34%
- personal ranking, e.g. 7th out of 53
- weighted mark for each criterion, e.g. Content
- highest, average and lowest marks for each criterion, e.g. H 75%, A 59%, L 30%
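The cohort statistics and ranking in that breakdown amount to a simple calculation, sketched below with invented marks. (Tied marks are given distinct ranks in this sketch; the marking assistant may treat ties differently.)

```python
# Sketch of the optional marks analysis: highest, average and lowest marks
# for the cohort, plus each student's rank. All marks are invented.
def cohort_analysis(marks):
    """Return (highest, average, lowest) and a rank for each student."""
    ordered = sorted(marks.items(), key=lambda kv: kv[1], reverse=True)
    ranks = {name: pos for pos, (name, _) in enumerate(ordered, start=1)}
    highest = max(marks.values())
    lowest = min(marks.values())
    average = round(sum(marks.values()) / len(marks))
    return (highest, average, lowest), ranks

stats, ranks = cohort_analysis({"Ann": 73, "Ben": 34, "Cai": 67, "Dee": 56})
# stats -> (73, 58, 34); Cai is ranked 2nd of 4
```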
As well as being more comprehensive, feedback can be offered more promptly. In the case of the ICT and The Law report, students were given their marks and comments one week after the submission date and only 5 days after the work had been collected from the Faculty coursework receipting office. It is fair to say that the students were as surprised as the teacher in this case.
Evidence that the students gain a clearer understanding of expected standards is at present anecdotal. A group of third year students initiated a discussion about the feedback outside of class. In over twelve years of teaching this was a novel experience for me. They were impressed by the high standard of presentation. The breakdown of marks allowed them to identify more readily the strengths and weaknesses of their work – one student with a mark of 59% appreciated how her lack of attention to one key aspect of the assignment brought down her overall mark when the other criteria had been marked at 60 or above. Another student felt that for the first time she had a more complete understanding of the standards that were required and could relate this to degree classifications. The assessment felt fairer. The indication of highest, average and lowest marks demonstrated that the teacher used a full range of marks. There was a mixed response to the ranking. It was felt that students who were at the top would be pleased but those with the lowest ranking could be upset. However the student who came bottom joined the discussion and said that after her initial shock she found her ranking served as a wake-up call for her final year.
There were no adverse comments in the end of year unit evaluations. A systematic evaluation of student opinion will be conducted during the current year across the units and assignments where the marking assistant is applied. The change for the teacher is incremental – the marking assistant can be extended to different assignments as time and resources allow. The change for the student is arguably greater as there is an immediate and substantial change in the feedback that they receive. In particular, a careful evaluation of the benefits and possible risks of giving individual rankings would be sensible.
The assistant was first used to mark a report-based assignment. A log was kept of the time taken for all tasks - from initial collection of the work from the Faculty receipting office to when feedback reports were handed to students. The author was surprised to discover that the entire feedback process had taken only half the time compared to the previous year for a similar number of students. This saving was unexpected and most welcome. I had been prepared to use the marking assistant if there was only a minor saving in time because I wanted to improve the quality and extent of the feedback. The gains in efficiency and effectiveness have proved to be substantial.
The teacher is not tied to a computer. You can read the assignments, make notes on each and then enter the marks and comments in batches as it suits you. It was a relief not to use the Department’s three-part feedback form that requires some pressure to be applied with a pen. As any number of sets of reports can be printed in student name order, the time-consuming task of separating the multi-part forms was removed. No sorting is required. The automated calculation of weighted marks also saves significant time as I tend otherwise to make frequent checks. The marking assistant has given me greater confidence about the reliability and accuracy of the feedback. Clearly articulated criteria are applied consistently and marks are calculated correctly. This applies throughout a large batch of work that may take several days to read, or where two or more assessors share the workload.
There are further efficiency gains when two or more teachers adopt the application. The grade and criterion-based comments can be shared. Set up becomes easier with each successive use. For example, the marking assistant has alleviated some of the pressure on the teachers in the two web design units where staged and iterative assessments necessitate timely feedback.
Assessment takes up a significant proportion of our time as teachers and has a major impact on my sense of professionalism. The marking assistant has provided a vehicle to improve and not just maintain standards. The unwanted feeling of just standing still has been dispelled. The high quality of the feedback made possible by the application stands up to close scrutiny such as that provided by external examiners. On an off day it makes me feel that I am covering my back.
Student perceptions of my work are especially important and their annual unit evaluations remain positive. They can opt out of classes if they are dissatisfied with the teaching but assessment is compulsory. Boud (1995) states this plainly:
“Students can escape bad teaching; they can’t escape bad assessment.”
The review of assessment and feedback has been like a journey made backwards. I have worked back from student achievement through feedback to assessment design via learning outcomes and the unit syllabus. This was possible as the units were well-established so changes could be documented and the achievements of previous cohorts were known. The insights gained will be beneficial to the design of new units and have already given the author the confidence to introduce further innovations in assessment.
References

Biggs, J. (1999) Teaching for quality learning at university. Buckingham: SRHE and Open University Press.

Boud, D. (1995) Enhancing learning through self-assessment. London: Kogan Page. Cited by Race, P. and Brown, S. (2001) The ILTA guide: Inspiring Learning about Teaching and Assessment. York: ILTHE.

Denton, P. (2003) Returning feedback to students via email: using Electronic Feedback 9. Learning and Teaching in Action, 2 (1), Spring, pp.33-37. Also available online at: <http://www.celt.mmu.ac.uk/ltia/

Higher Education Academy, Engineering Subject Centre (2004) Constructive Alignment - and why it is important to the learning process. [online] [cited 15 November 2004] <http://

Race, P. and Brown, S. (2001) The ILTA guide: Inspiring Learning about Teaching and Assessment. York: ILTHE.