Learning and Teaching in Action: Assessment


 

Ongoing Assessment Projects

Various

A Study of Academic Staff Attitudes and Beliefs about Assessment and Feedback at MMU
Maurice Palin, Hollings Faculty, and Rosane Pagano, BITMS, MMUBS

A study to investigate student understanding and experiences of assessment and feedback at MMU: Protocol
Rachel Spearing, Physiology, HPSC

Measuring Minds
Debra Kidd, Institute of Education

The Gender Agenda Hub
Dr Helen Jones, Manchester Metropolitan University and Dr Liz Frondigoun, Glasgow Caledonian University

Video as alternative medium for written assessments in MMUBS
Phil Scown, MMUBS

Audience Response Systems: A handout for staff
Rosane Pagano, MMUBS

 

A Study of Academic Staff Attitudes and Beliefs about Assessment and Feedback at MMU

Maurice Palin, Hollings Faculty, and Rosane Pagano, BITMS, MMUBS

Apart from the acquisition of knowledge and intellectual skills and the exploration of concepts and ideas that have characterised universities since their earliest days, probably the most significant activity involving students is their testing via various methods of assessment. It is also arguably the most controversial, with strongly held and often conflicting views on various aspects of the assessment process.

In terms of the actual assessment there are two main perspectives: that of the academic staff, and that of the students. Other pressures result from the senior management of an institution and the issues concerned with student recruitment and retention.

From discussions amongst members of the Assessment CoP it became obvious that there is a great variety of practice across MMU, and that staff often feel constrained in various aspects of the assessment process. A number of Faculties and Departments have their own guidelines or even rules but these also vary and are not necessarily linked to the University regulations.

The aim of this project is to use information about staff understandings of assessment to improve the implementation of best assessment practice. A key objective is to determine the views and attitudes about assessment, marking and feedback held by academic staff. This will provide an opportunity for these staff views to be used to create a plan to embed a shared view of good practice in assessment across the University. If necessary, changes to regulatory and quality enhancement practice may also be recommended.

The main data gathering will be by means of an online survey which, it is hoped, will be produced and agreed by December 2008 and administered during the spring term. An interview schedule will be refined in light of the survey outcomes, and focus groups will be held in selected departments towards the end of the spring term 2009. The preliminary report will be completed by early June 2009.

The items for inclusion in the questionnaire are being derived from a variety of sources, including the literature, University documentation, and discussions and meetings of CeLT and the Assessment CoP. Six main areas have been identified and are being used to shape the questionnaire. These are:

  • Assessment effort
  • Assessment type
  • Student engagement
  • Marking criteria
  • Feedback
  • Administration

Items relating to these areas have been compiled into a document, and the CoP is being consulted on their suitability and on whether items should be included or excluded. The intention is to investigate staff views and attitudes on the various items by asking for a rating on a Likert-type scale. This will enable comparisons to be made between various subgroups of respondents; a minimal sketch of such a comparison is given after the list below. Appropriate subgroupings are currently under consideration by members of the CoP. Possibilities include:

  • length of time teaching at HE level;
  • previous experience of, for example, industry or other education sectors;
  • subject area and/or Faculty and/or Department;
  • level of teaching;
  • teaching qualification.
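
As an illustration only, the following sketch shows how mean Likert ratings might be compared between such subgroups once the survey data are exported. The file name, column names and experience bands are assumptions made for the example, not decisions taken by the project.

```python
# A sketch of comparing mean Likert ratings between respondent subgroups.
# The file and column names (staff_assessment_survey.csv, years_teaching_he, q_*)
# are illustrative only; the actual questionnaire is still being agreed by the CoP.
import pandas as pd

responses = pd.read_csv("staff_assessment_survey.csv")  # hypothetical survey export

# Band length of HE teaching experience into coarse subgroups.
responses["experience_band"] = pd.cut(
    responses["years_teaching_he"],
    bins=[0, 5, 15, 50],
    labels=["0-5 years", "6-15 years", "16+ years"],
)

# Example Likert items (rated 1-5); mean rating per item for each subgroup.
likert_items = ["q_assessment_effort", "q_feedback", "q_marking_criteria"]
subgroup_means = responses.groupby("experience_band")[likert_items].mean().round(2)
print(subgroup_means)
```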

Following the survey analysis, it is planned to organise focus groups in selected locations to consider the implications from the survey, and what responses/recommendations would be appropriate. This will include ideas of how the findings can be disseminated and best practice embedded across the institution.

A final report will be prepared towards the end of the summer term.


A study to investigate student understanding and experiences of assessment and feedback at MMU: Protocol

Rachel Spearing, Physiology, HPSC

Project summary

The National Student Survey has become a key tool in strategic review and planning in the UK HE sector. With regard to assessment and feedback all universities appear to under-perform in relation to scores in other areas, and MMU is no exception; the 2008 results show that student satisfaction with assessment and feedback, with an overall average of 3.5 points, is just below the sector average of 3.6 points. Suggestions have been made as to the reasons for this and the institution is taking steps to improve assessment practice with its ‘Challenging Assessment’ initiative and a variety of recent and forthcoming regulatory changes.
The literature on assessment in HE is broad and offers some clear ideas about what constitutes good practice. These ideas are being implemented across the institution via institutional review of regulations and PARM/AME procedures. This study will investigate the understanding and experiences of students with reference to assessment and feedback through a series of workshops facilitated by the researcher during the Spring term of 2008-9. Participants will be programme representatives from a range of teaching departments across MMU that have been found to perform above, below and on a par with the sector mean for assessment and feedback. The researcher will then explore whether the review of regulations and PARM/AME procedures is likely to address some of the issues identified, thereby having an impact on the NSS findings.
Aim: to use information about students’ understanding and experiences of assessment and feedback to improve implementation of best assessment practice across the University.

Objectives:

  • To explore student experiences and understanding of assessment, marking and feedback in a sample of teaching departments at MMU;
  • To use the information obtained in the study to create a plan to embed good practice in communicating assessment information across the university;
  • To recommend changes to regulatory and quality enhancement practice if needed.

Rationale for and background to the study

We know already that in the areas of assessment and feedback, as measured using the NSS, MMU scores just below the sector mean. The overall score, however, masks a varying pattern across the University: some departments score above the MMU mean, and in some cases above the sector mean, for every question, while other departments score at or below the MMU mean for some questions and, in the case of two departments, for all questions. The NSS statements on assessment and feedback are as follows:

5. The criteria used in marking have been clear in advance.
6. Assessment arrangements and marking have been fair.
7. Feedback on my work has been prompt.
8. I have received detailed comments on my work.
9. Feedback on my work has helped me clarify things I did not understand.
(National Student Survey, no date).

Further information around student experiences of assessment and feedback has been generated through MMUnion’s ‘traffic light’ campaign, in which students were invited to complete postcards indicating which areas of assessment they liked and disliked. Early analysis suggests that issues that warrant exploration are: format and quality of assessment guidance; group work as an assessed activity with a group mark; the timing and quality of feedback; justification/ explanation of the mark awarded; scheduling of different assignments for the same programme; unpredictable pattern of scheduling of retake assignments (Paul Norman and Gary Hughes, MMUnion officers, personal communication).

The area under investigation in this project is the student experience of assessment and feedback. Assessment has a number of purposes:

  1. It ascertains students’ performance with reference to the intended learning outcomes and is commonly used to generate a numerical score and/ or grade so that they can classify themselves within a group, or compare themselves to other similar groups;
  2. It can be used to inform students and educators which aspects of the teaching and learning approaches are being effective;
  3. It can elucidate for students what constitutes good performance and indicate how they can modify their own performance/ learning in order to improve future performance, thereby motivating future learning (Brown, Race and Smith, 1996; Nicol and Macfarlane-Dick, 2006; Lizzio and Wilson, 2008).

Summative assessment, in which assessment is used as a tool to quantify or grade learning, has been described as ‘assessment of learning’; where assessment is used to promote learning, it has been described as ‘assessment for learning’ (Assessment Reform Group, 1999). Hargreaves (2007) explains that assessment for learning is formative assessment; for this to shape future learning and teaching activities, the importance of the generation of feedback by either the teacher or the student has been stressed (Black and Wiliam, 2001). Another term that has recently been used is ‘assessment as learning’ (Torrance, 2007), which describes the notion that clarification of learning outcomes and criteria leads to coaching, so that student learning is strategic and may be narrowed by a focus only on how to pass an assessment.

Taras (2007) suggests that the value of assessment in promoting learning depends on whether the assessment (summative or formative) is regarded as a process or a one-off event. Where the assessment is regarded as a process, students receive feedback, which should enable them to shape their future learning; where it is regarded as an event that generates only a grade or mark, relevant lessons cannot be drawn in order to mould future performance.

Race and Pickford (2007) describe assessment as a principal driver of learning, so that students tend to learn strategically in order to pass exams, achieve a certain grade or gain a certain qualification. Different types of assessment have been found to stimulate different types of learning: assessments in which students can do well by memorising promote surface learning, whereas assessments that demand understanding and analysis of learned material promote a deeper level of learning (Gijbels and Dochy, 2006). This being the case, constructive alignment of assessment with learning outcomes that promote deeper learning, together with the inclusion of feedback for any assessment (summative or formative), should shape future learning.

In the definitions of assessment there is consensus that in order to promote learning, feedback is essential. The nature of the feedback given, however, is also important, and the term ‘feed forward’ has been used to describe feedback that is useful in assisting students in future learning and assessment (Pickford and Brown, 2006; Beaumont, O’Doherty and Shannon, 2008). Seven key characteristics of feedback to promote motivation and future learning have also been described (Nicol and Macfarlane-Dick, 2006). They include making clear to students what constitutes good performance through high-quality assessment feedback, which is also used to plan teaching and to promote better future performance. Feedback that does not feed forward may, for example, only focus on specific content at the end of a unit of study, and in cases where future modules/ assessments are unrelated to the current topic, the value of this sort of feedback is questionable. Feedback that does feed forward may be focused more on skills that will be applicable in the future and that should therefore promote development (Brown, 2006a). In reality, Brown’s (2006a) findings in an analysis of feedback given to students of the Open University and Sheffield Hallam University (SHU) suggest that feedback tended to be content-focused, and that where it was skills-focused, it lacked detail or coherence so that its usefulness was limited. Furthermore, the type of feedback that students tended to value most was feedback that justified their mark, even though this tended to focus on the content of the assignment. Students in Glover’s (2004) study also judged the value of feedback based on its mode of delivery; oral feedback was not considered by students to be feedback, but teaching, and Glover (2004) goes on to suggest that feedback given in this way was therefore not heeded. This conflicts with the findings of a report commissioned by the NUS that indicated that verbal feedback in an individual meeting with the tutor who set the assignment would be students’ preferred type of feedback, but that this was received by a minority of students (Halsey, 2008). Sometimes the term ‘feedback’ is also used solely to describe the provision of a mark/ grade following a summative assessment (NUS, no date), and this clearly would not fulfil the requirements outlined above to promote learning. Clarification of the term ‘feedback’ would aid interpretation by both academic staff and students.

So, assessment and feedback are inextricably linked. Feedback can promote learning if it is of good quality, and constructive alignment of learning outcomes and assessment criteria could/ should help ensure that deeper learning is achieved (Gijbels and Dochy, 2006).

Study design

A range of tools is already available for ascertaining tutor and student views on assessment and feedback. Many of the tools have been developed through the Formative Assessment in Science Teaching (FAST) project, which was a joint initiative by the Open University (OU) and SHU. One objective of the project was to explore how student learning may be influenced by their assessment and feedback experiences. The tools developed include the Assessment Experience Questionnaire (AEQ) and the Sheffield Hallam University Questionnaire (SHUq).

A framework of conditions identified by Gibbs and Simpson (2004-5), following a review of the literature, led to the development of the Assessment Experience Questionnaire (AEQ), which was the main outcome tool used in the project. Whilst the AEQ has been found to be useful in identifying trends in student experiences, it is not designed to generate rich or detailed data that can explain them (Brown, 2006b).

The SHUq was devised to explore specifically how students perceived feedback. The questions asked were based on the insight of tutors, who generated a list of the types of feedback that might be given; students were then asked to select from a range of levels indicating how helpful each specified type of feedback was. Whilst the data generated were more detailed, allowing researchers to explore some of the student experiences in greater depth, respondents found the questionnaire lengthy and complex, and the response anchors used (for example ‘not applicable’) were not always appropriate (Glover, 2004). Inappropriate anchors in this type of scale could affect the overall validity of the responses, as participants may perceive the scale itself to be poorly planned and may therefore give less considered responses in other areas. The complexity and length of the questionnaire might mean that only those with strong views would have the determination to complete it, thus skewing the sample and interfering with the reliability of the results.

Other tools developed included a self-evaluation tool for students to record the amount of effort they had made at different stages of a module; two tutor feedback evaluation tools, and a checklist for tutors to evaluate their own assessment practices (FAST project, n.d.). As these tools focus on areas of learning and teaching other than student experiences of assessment and feedback they were discounted as inappropriate for use in the current study.

Study data will be collected through workshops. This is appropriate because the survey data already available from the NSS and MMUnion suggest the areas that warrant further investigation, and exploration of areas of known student concern has been recommended by the Quality Assurance Agency for Higher Education (QAA) (2007). The researcher will ensure that the topic areas covered in the AEQ, SHUq and NSS, as well as the areas identified through the MMUnion survey, are discussed by participants by using a ‘prompt sheet’ (see below). This more qualitative approach will address some of the areas that have been identified as ‘often overlooked’ in an exploration of assessment monitoring practice (QAA, 2007). Furthermore, a qualitative, exploratory approach is supported by both Prosser (2005) and Brown (2006b), who suggest that whilst the information that can be gleaned from questionnaire surveys is useful in identifying trends, more in-depth exploration of students’ perceptions is required to explore fully how learning may be driven by assessment and feedback. The workshops will facilitate this by giving participants an opportunity to discuss the issues, first with just a topic area identified and then with reference to the prompt sheet, which will ensure that the areas of concern noted from other sources are covered.

Departmental performances for the section of the NSS identified (questions 5-9) will be analysed and departments will be categorised as ‘average’, ‘above average’ and ‘below average’ with reference to the HEI and MMU mean scores. A total of 120 student representatives will be approached to participate in the workshops, 40 each from departments categorised as having average, above average and below average performances. The Planning and Management Information Department have agreed to provide the list of programmes which come under each department identified in the NSS to facilitate this. The sample will therefore be purposive in nature; identification of departments that range in performance according to the NSS data is an attempt to ensure that a breadth of experience and understanding regarding assessment and feedback is captured. Caution will need to be exercised in comparison of the NSS findings with any future findings of the current study however, since the departments identified in the NSS do not always appear to correlate with the departments identified by the University. For example, the sample number for the Department of Information and Communications for 2007-8 on the Unistat website exceeded the number registered on programmes in this department, suggesting that additional students have been categorised under this department (Jonathan Wilson, MMU lecturer, personal communication).
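
By way of illustration, the banding of departments could be carried out as in the sketch below; the tolerance and the departmental means shown are invented for the example and are not the 2008 NSS figures.

```python
# A sketch of banding departments by their mean score on NSS questions 5-9,
# relative to a comparison mean. Thresholds and departmental means are invented
# for illustration; the study will use the 2008 NSS data supplied by the
# Planning and Management Information Department.

def categorise(dept_mean: float, comparison_mean: float, tolerance: float = 0.05) -> str:
    """Label a department relative to the comparison mean."""
    if dept_mean > comparison_mean + tolerance:
        return "above average"
    if dept_mean < comparison_mean - tolerance:
        return "below average"
    return "average"

example_dept_means = {"Department A": 3.8, "Department B": 3.6, "Department C": 3.3}
sector_mean = 3.6  # sector mean for assessment and feedback cited earlier

for dept, mean_score in example_dept_means.items():
    print(f"{dept}: {categorise(mean_score, sector_mean)}")
```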

The participants will be experienced students in their second, third or fourth year of study on full-time undergraduate programmes. Typically, students who complete the NSS are final-year undergraduates, so it is appropriate to explore the issues raised by the NSS with a comparable group; for this reason first-year students will be excluded. It may be appropriate at a later date to carry out a similar study with different groups, for example part-time students, postgraduate students and first-year students. Possible participants will be identified through MMUnion, who have offered support in advertising the project through their website and student representatives’ bulletin, and who will contact student representatives on behalf of the researcher via email with a Participant Information Sheet attached. Six workshops (Table 1) will be carried out across a range of University sites to allow students at those sites easy access. A choice of two dates on two different days will be offered; if at least ten participants volunteer, the session will go ahead, otherwise participants will be offered the opportunity to attend an alternative session. The workshops will last for no longer than two hours and will run as follows:

The first group of students at each station will be given ten minutes for open discussion and then referred to a prompt sheet to ensure that the items listed on it are discussed. Subsequent groups will start their discussions in the same way, but will then refer to the ideas generated by the preceding group and consider whether they agree or disagree with them, indicating why. It is hoped that during the undirected phase of the discussion students will discuss issues that are important to them and possibly issues that the researcher has not identified. Directing the last part of the discussion will ensure that important topics, which have been generated with reference to the NSS, AEQ, SHUq and MMUnion, are covered. Specifically, the topics to be covered at each station will be as shown below or similar (areas for inclusion have not yet been finalised since feedback from the steering group has not yet been received).

Table 1: Workshop activities

Time (mins)   Activity

00    Introduction: reiterate the purpose of the study; gain consent; sign consent forms.

10    Ice breaker.

20    Participants are divided into three smaller groups and start discussions at one of three stations (for the detail to be covered at each station, see below):

        • Assessment specifications and criteria;
        • Assessments and marking;
        • Feedback.

      At each station participants will be asked to have an open discussion about the topic area, and then refer to a ‘prompt sheet’ to guide their later discussions and ensure that all appropriate areas are covered. Participants will be asked to record their ideas on a piece of flip chart paper.

45    Participants move on to the next station. For the first ten minutes they will be asked to have an open discussion about the topic area. After this they will be directed to look at the ideas generated by the preceding group and indicate where they agree and disagree, giving reasons.

70    Participants move on to the final station. Again, for the first ten minutes they will be asked to have an open discussion about the topic area. After this they will be directed to look at the ideas generated by the preceding groups and indicate where they agree and disagree, giving reasons.

95    Participants review the ideas generated by all groups at their final station, summarise them on a separate flip chart and then present them to the rest of the group.

120   Conclusion: thank participants; provide information on student support services; remind them that a summary of the discussions will be produced and forwarded to them for comment in the following weeks.

 

Station 1: Assessment information and marking criteria

For this discussion:

  • ‘information’ refers to the information you receive about assessments before you do them, with regard to their timing, format, word limits (if appropriate), submission etc.;
  • ‘marking criteria’ refers to the statements which tell you what the marker will expect to see, for example ‘integration of material from a range of sources’.

For each of these areas, please reflect on your experiences and describe ones that were good and ones that were bad. For each of these descriptions say why they were good or bad. Finally, try to list some suggestions for future practice.

Prompts:

  • How soon before an assessment are you given the assessment information? What sort of information do you receive? In what format is the assessment information (e.g. written, verbal, online)?
  • Do you refer to the assessment information when preparing for it? If yes, how helpful is it? Why? If you don’t refer to it, then why not?
  • What do you think of the timing of your assessments in relation to the other commitments on your programme?
  • Do you think the deadlines for completion of your assessments are clear and fair?
  • How soon before an assessment are you given the marking criteria?
  • Are the marking criteria you receive helpful/ clear? Do you understand them?
  • Do you refer to the marking criteria when preparing for the assessment? If yes, how helpful are they and why? If you don’t refer to them, why not?

Station 2: Assessments and marking

Academics and teachers tend to divide assessment into two broad groups:

  • summative assessment, which is formal assessment for which you receive a mark or grade that contributes to your overall mark for the year or degree;
  • formative assessment, which is informal and which doesn’t contribute to your overall mark for the year or degree.

For each of these areas, please reflect on your experiences and describe ones that were good and ones that were bad. For each of these descriptions say why they were good or bad. Finally try to list some suggestions for future practice.

Prompts:

  • Why do you think students are assessed?
  • Have you had ‘informal’ or ‘formative’ assessments as part of your programme of study?
  • Which types of assessment have you done (e.g. unseen written exams, essays/ reports/ practical exams/ vivas or oral exams/ laboratory work/ presentations/ posters etc.)? Which do you like best? Why? Which do you like least? Why?
  • Do you think the assessments were appropriate for the topic area? If yes, why? If not, why not?
  • Do you think the effort required to pass different types of assignments is equal? Do you think the amount of effort required is appropriate for the academic level? Please give some examples.
  • How are assessment results released? How soon after an assessment does this happen? How do you know when to expect your results?
  • Are results accompanied by feedback? If so, how is this conveyed?

Station 3: Feedback

For this discussion:

  • First, please define what you understand by the term ‘feedback’ in the context of assessments.

Now, think about your own experiences and consider the characteristics of the best and worst feedback you have received. Try to list some suggestions for what you think should be included in feedback.

  • Do you think feedback on assessments is important? Why?
  • Have you always received feedback for all your assessments (formative and summative) at MMU? Who gives you feedback? How is it given? Is there an opportunity to discuss it? Have you ever been involved in providing feedback to other students? If so, how was this organised?
  • How do you use any feedback you receive?
  • Is it important to receive a mark as well as feedback? Why?
  • Do you know of any safeguards that are in place to ensure marking is applied consistently? If so what are they? Do you think marking is generally fair?

 

Data Analysis

Groups will be named A to E. In order to validate the workshop findings, a summary of their workshop discussions will be forwarded to participants and they will be invited to comment on whether the summary is an accurate representation. A deadline of two weeks will be set for comments. Following this, the data from the workshops will be analysed using quantitative and qualitative content analysis. Qualitative content analysis is a systematic method of analysis that can be used to identify, comment on and link themes and patterns in the data and in this sense it is inductive in that it may be used to formulate theory, and explain the topic under scrutiny (Burnard, 1991; Downe-Wamboldt, 1992). Content analysis allows the researcher to take the breadth of the material and narrow it down in stages to incorporate all the presented material under fewer and fewer headings. In order to quantify trends, quantitative content analysis will be used to identify the frequency with which themes occur in the data.

The following definition of terms will be used in the process:

Recording Unit: A word or phrase extracted from the transcript, in this case the workshop summary (Downe-Wamboldt, 1992).

Theme: A phrase or word that sums up similar ideas and comments expressed by the recording units (Downe-Wamboldt, 1992).

Category: An overall heading under which a group of themes fit (Downe-Wamboldt, 1992).

As words or phrases will be used as the recording units, the analysis will differ from thematic analysis, which tends to use larger chunks of text as the unit (Braun and Clarke, 2006).

Content analysis should be carried out in a systematic and objective manner (Brodie, Williams and Glynn Owens, 1994). In order to ensure this, Burnard (1991) suggests that two additional researchers be invited to carry out the process independently, before all three researchers compare notes and agree on a final list of categories and sub-headings or themes. With this in mind, three researchers will carry out the analysis independently, with a series of meetings to discuss, check and clarify findings. Content analysis relies on the researcher’s interpretation of the material, which may create bias. Having two other researchers carry out content analysis on the same set of transcripts reduces the likelihood of bias and ensures that inter-rater reliability is achieved (Haggarty, 1996). This will also help to ensure that analysts correctly interpret the message that the respondent was trying to convey (Downe-Wamboldt, 1992; Burnard, 1991), thus ensuring the validity of the interpretation. This is important to ensure that misinterpretation of material is avoided and that false assumptions, comparisons or conclusions are not made.

Findings will be presented descriptively in tables showing the categories and themes identified, and the frequency with which the theme occurred in the workshop summaries. Extracts from the workshop summaries will also be included as examples of the data interpretation. If different themes are generated from the groups, then the conclusions drawn will be attributed to the individual groups (A to E). Anonymity of the departments/ programmes that the group represents will be preserved, but the findings may be compared to the 2008 NSS results.
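
To indicate how the frequency counts might be produced from the coded summaries, a minimal sketch follows; the categories, themes and recording units shown are invented examples, not data from the study.

```python
# A sketch of the quantitative content analysis step: counting how often each
# theme occurs in the coded workshop summaries. The (category, theme, recording
# unit) entries below are invented for illustration only.
from collections import Counter

coded_units = [
    ("Feedback", "timeliness", "feedback came back too late to use"),
    ("Feedback", "timeliness", "waited six weeks for marks"),
    ("Feedback", "justification of mark", "comments did not explain the grade"),
    ("Marking criteria", "clarity", "criteria were vague"),
]

theme_counts = Counter((category, theme) for category, theme, _unit in coded_units)

for (category, theme), frequency in theme_counts.most_common():
    print(f"{category} / {theme}: {frequency}")
```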

References

Assessment Reform Group. (1999) Assessment for Learning: Beyond the Black Box. Cambridge: University of Cambridge School of Education. [Accessed 11/11/08]

Beaumont, C; O’Doherty, M; Shannon, L. (2008) Staff and Student Perceptions of Feedback Quality in the Context of Widening Participation. York: HEA, [Accessed 11/11/08]

Black, P.; McCormick, R.; James, M.; Pedder, D. (2006) Learning how to learn and assessment for learning: a theoretical inquiry. Research Papers in Education, 21 (2): 119-32

Black, P.; Wiliam, D. (1998) Assessment and classroom learning. Assessment in Education, 5 (1): 7-74

Black, P.; Wiliam, D. (2001) Inside the Black Box. Raising Standards Through Classroom Assessment. London: King’s College School of Education

Braun, V.; Clarke, V. (2006) Using thematic analysis in psychology. Qualitative Research in Psychology, 3: 77-101

Brodie, D.A.; Williams, J.G.; Glynn Owens, R. (1994) Research Methods for the Health Sciences. Switzerland: Harwood Academic Publishers

Brown, E. (2006a) Effective Feedback. [Accessed 12/11/08]

Brown, E. (2006b) The Assessment Experience Questionnaire. [Accessed 11/11/08]

Brown, S. (2004-5) Assessment for learning. Learning and Teaching in Higher Education, 1: 81-9

Brown, S.; Race, P.; Smith, B. (1996) 500 Tips on Assessment. London: Kogan Page.

Burnard, P. (1991) A method of analysing interview transcripts in qualitative research. Nurse Education Today, 11: 461-6

Downe-Wamboldt, B. (1992) Content analysis: method, applications and issues. Health Care for Women International, 13: 313-21

FAST (no date) The FAST Project. [Accessed 22/10/08]

Forsyth, R. (2008) Developing a programme strategy for feedback on assessed work. Learning and Teaching in Action, 7 (2): 58-64

Gibbs, G.; Simpson, C. (2004-5) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1: 3-31

Gijbels, D.; Dochy, F. (2006) Students’ assessment preferences and approaches to learning: can formative assessment make a difference? Educational Studies, 32 (4): 399-409

Glover, C. (2006) The Sheffield Hallam University Questionnaire (SHUq). [Accessed 11/11/08]

Haggarty, L. (1996) What is … content analysis? Medical Teacher, 18 (2): 99-101

Halsey, G. (2008) NUS/ HSBC Students Research, Study Number: 154021. London: GfK Financial

Hargreaves, E. (2007) The validity of collaborative assessment for learning. Assessment in Education: Principles, Policy and Practice, 14 (2): 185-99

Higher Education Academy (HEA) Resources: Audio and Video: Marking Criteria and Assessment Methods. [Accessed 11/11/08]

Hills, L.; Glover, C. (2006) How to Understand your own Practice. Quantitative and Qualitative Methods. [Accessed 11/11/08]

Lizzio, A.; Wilson, K. (2008) Feedback on assessment: students’ perceptions of quality and effectiveness. Assessment and Evaluation in Higher Education, 33 (3): 263-75

Mills, J.; Glover, C. (2006) Who Provides the Feedback – Self and Peer Assessment? [Accessed 11/11/08]

National Student Survey (NSS) (no date) NSS Questionnaire. [Accessed 12/11/08]

National Union of Students (NUS) (no date). The Great NUS Feedback Amnesty, Briefing Paper. London: NUS.

Nicol, D.J.; Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31 (2): 199-218

Pickford, R; Brown, S. (2006) Assessing Skills and Practice. London: Routledge.

Prosser, M. (2005) Why we shouldn’t use Student Surveys of Teaching as Satisfaction Ratings. York: HEA. [Accessed 11/11/08]

Quality Assurance Agency for Higher Education (QAA) (2007) Enhancing Practice. Integrative Assessment: Monitoring Students’ Experiences of Assessment. [Accessed 11/11/08]

Race, P.; Pickford, R. (2007) Making Teaching Work. London: Sage Publications

Savin-Baden, M. (2004) Understanding the impact of assessment on students in problem-based learning. Innovations in Education and Teaching International, 41 (2): 221-33

Taras, M. (2007) Assessment for learning: understanding theory to improve practice. Journal of Further and Higher Education, 31(4): 363-71

Torrance, H. (2007) Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assessment in Education: Principles, Policy and Practice, 14 (3): 281-94

Westrup, B. (2006) ‘Tutor Feedback: Am I Bothered?’ Why do some Undergraduate Students Learn from Tutor Feedback and others do not?

Williams, J.; Kane, D. (2008) Exploring the National Student Survey. Assessment and Feedback Issues. York: The Higher Education Academy

 

Measuring Minds

Debra Kidd, Institute of Education

In 2005, Kingstone School in Barnsley radically collapsed its KS3 curriculum in favour of developing a ‘humanising learning experience which would raise pupil’s self worth, their aspirations and help them to see the world beyond their world as a place they might access, understand and make a positive contribution to’ (headteacher, Matthew Milburn).

Their journey involved developing a programme for Year 7 called ‘Cultural Studies’ and another for Year 8 called ‘Curriculum for Confidence’, and the process has been documented and disseminated widely (Kidd 2007). The school’s creative approach has been recognised by Creative Partnerships, and it is one of only 25 schools nationally to be awarded School of Creativity status. With the curriculum model in place, attention is now shifting to assessment. The school is working with Dr Paul Black and with QCA to develop and trial an assessment programme for Year 7 which involves developing e-portfolios from which pupils self-select materials to put forward to an assessment panel at the end of the academic year. Each pupil is invited to present to a panel consisting of a teacher, a parent, a peer and an outside representative from the local community. They submit a summary of their work over the year in the form of a presentation and hand over a selection of work which they feel best represents their achievements and of which they are most proud. It is this work which is then ‘assessed’.

In the next edition, I will be exploring how this model developed and how it is working in practice. I will explore how the work fits in with current thinking on assessment and some of the hurdles staff have had to overcome in order to make it work and for it to be recognised as a credible alternative by parents and staff.

 

The Gender Agenda Hub

Dr Helen Jones, Manchester Metropolitan University
Dr Liz Frondigoun, Glasgow Caledonian University

In society there is a wide range of technologies that young people use on a daily basis for information gathering and communication: computer games, mobile phones, the Internet, and web resources such as Facebook and MySpace. Consequently, it has been argued, young people have sophisticated skills and knowledge of information technologies. The latest pedagogic development by Dr Helen Jones of Manchester Metropolitan University, in partnership with her colleague Dr Liz Frondigoun at Glasgow Caledonian University, has secured the support of C-SAP and taps into these skills by adapting them to the use of technology in the learning and teaching environment. The project, the Gender Agenda Hub, develops from Dr Jones’ experience in innovative uses of technology for learning in the academy, as evidenced in her successful International E-communication Exchange (IEE), which links with two other English universities, another three universities in the USA and one in Scotland.

In the Gender Agenda Hub they examine the utility of the internet in providing opportunities for students to engage in diverse, technologically based learning platforms. It provides a format whereby students taking a module entitled ‘Gender, Crime and Justice’ at their home universities can debate and collaborate in a virtual space on directed topics relevant to the module as part of their assessed coursework. Students’ learning and knowledge experiences in the academy are often uni-directional (lecturer to student) with an over-reliance on a text-based format. This project emphasises student knowledge creation, problem solving and group work, and moves away from a purely text-based format towards activity-based learning that encourages students to explore and be creative in their learning and knowledge accumulation. In particular, it brings together students from different jurisdictions and geographical locations to work together in a virtual space. This format allows students to recognise the value of their own knowledge and skills and the benefits of sharing these with, and learning from, other students. Thus it highlights the value of the academic community and the benefits of cross-university and cross-cultural engagement in learning and teaching.

Many current trends in learning and assessment are about engaging with the community, but sometimes at the expense of recognising the value of cross-community engagement (university to university) at the student level. Previous experience has shown that such a format improves the level of student involvement and achievement. By increasing students’ responsibility for their own learning, their ownership of their knowledge, and their stake in maintaining the reputation of their host university (they perceive themselves as student ambassadors in their cross-academy interactions), assessment becomes much more than the delivery of grades: it becomes a true part of the acquisition of knowledge.

 

Video as alternative medium for written assessments in MMUBS

Phil Scown, MMUBS

Introduction

Within a final-level MMUBS elective, students have some choice in the content of their assessment. The rise in prominence of podcasts (sound and video) has meant that this is a medium we should consider further.

In 2006-7, students were given the option to produce a written assignment or one using video with a small amount of supporting documentation. Those taking up the option spanned a broad spectrum of ability, including one student with a dyslexia statement of special educational needs. No student failed the assignment. As a result of the work done so far, and information from other faculties, there are strong indications that the use of video has benefits for students who struggle with written assignments. This includes, but is not limited to, students with dyslexia.

The aims of the project are:

  1. To identify qualitative and quantitative benefits of video assessment to dyslexic students in a range of MMUBS courses;
  2. To identify qualitative and quantitative benefits of video assessment to non-dyslexic students in a range of MMUBS courses;
  3. To produce guidelines on good practice and implementation steps, based on lessons learned from the successes and difficulties experienced by staff in implementing video assessment.

 

Audience Response Systems: A handout for staff

Rosane Pagano, MMUBS

What is an Audience Response System (ARS)?

An audience response system consists of handheld, text-based devices that students use to respond to tutors’ questions. A number of question-and-answer formats are available, such as true/false, multiple choice, Likert scale, and free text. Typically the tutor puts a question to the class and then activates the handheld devices to display the appropriate options, from which the students choose. Each handheld device transmits the student’s response by radio signal back to the transmitter/ receiver connected to the tutor’s computer. The ARS records all students’ responses and immediately displays on the computer screen (or lecture theatre screen) the relative frequency distribution of all possible responses. The tutor can also track individual responses to questions.
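
The following sketch illustrates the tallying an ARS performs, collecting one response per handheld device and reporting the relative frequency of each option; the device identifiers and responses are invented, and this is an illustration of the behaviour described above, not the Promethean software itself.

```python
# A sketch of ARS-style tallying: one response per device, then the relative
# frequency of each option. Device names and responses are invented examples.
from collections import Counter

options = ["A", "B", "C", "D"]
responses = {  # hypothetical responses from one polling session
    "device_01": "B", "device_02": "C", "device_03": "B",
    "device_04": "A", "device_05": "B", "device_06": "D",
}

counts = Counter(responses.values())
total = len(responses)

for option in options:
    chosen = counts.get(option, 0)
    print(f"{option}: {100 * chosen / total:5.1f}% {'#' * chosen}")
```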

The screen shots used in this document to illustrate the operation of an ARS are of the ActivExpression software by Promethean. Figure 1 shows the interface between the system user (tutor) and the ARS when setting up the question type.


Figure 1: Interface between the system user (tutor) and the ARS when setting up the question type

Once the question type has been set up, the communication and interaction between student devices and the computer is initiated (the ‘voting’ session starts). Two windows will open on the lecture theatre screen. One identifies the student devices, and the other is a control panel (stop button, time out button). Figure 2 shows an annotated screen shot to explain this. Here the question type is multiple-choice with four possible responses and only one choice allowed.

Figure 2: Question type is multiple choice with four possible responses and only one choice allowed

The typical sequence of events is: the question is introduced to students on a PowerPoint slide, the tutor initiates the polling session, students ‘vote’, and the polling session is terminated. Once the session has ended, that is, once communication between the handheld devices and the computer is closed, the system displays a histogram showing the relative frequency of each possible response. Figure 3 is an annotated screen shot of the scores (percentage of students who chose each possible response) for the question type illustrated by Figure 2 (multiple choice question).

Figure 3: Percentage of students who chose each possible response in a multiple-choice question type

Is the technology reliable? The communication between the handheld devices and the computer system has, in my experience so far, been quite robust. The devices have been used in lecture theatres of various sizes in different parts of the building and there has never been a breakdown in communication. All lecture theatres at MMUBS have ActivExpression installed.

Is the ARS user friendly? The user interface provided to control the polling process is fairly simple (see the screen shots). It is worth noting, though, that the ActivExpression menu and popup windows ‘float’ on top of whatever application you have open, for example a PowerPoint presentation. If you are introducing questions in PowerPoint slides and these slides are cluttered, they may be partially covered by the ActivExpression windows.

Can the ARS be used for in-class group activities? You may choose to distribute hard copies of the questions to the students, let them work on the answers for a short period of time in small groups (2 or 3), then initiate the polling session. Each group would share a handheld device and would transmit a response agreed/ discussed between members of the group.

Potential Benefits

It has been argued that an ARS can provide the following benefits:

  • Provide ‘instant’ formative feedback on students’ own performance relative to the class;
  • Gauge the class’s understanding of the concepts presented, clarifying misconceptions;
  • Encourage students to participate in class;
  • Get students started on a discussion topic;
  • Increase students’ awareness of their peers;
  • Gauge the pace of delivering the material to meet the students’ needs.

Student perspectives and feedback are currently being sought. Students will be asked to rate the statements in Table 1 on a five-point Likert scale of agreement.

Table 1: Statements to be rated by students

The use of audience response devices ...

  1. helped increase the class's overall value
  2. helped make the learning experience more enjoyable
  3. helped provide feedback on my instructor's teaching
  4. helped increase my awareness of my peers' opinions
  5. helped me get individualized feedback from the tutor
  6. helped the class to move at the right pace for me
  7. helped motivate me to be more prepared for class
  8. helped me to gauge my understanding of course material
  9. helped me to understand my performance in relation to my peers
  10. helped me to stay interested during class time
  11. helped me focus on key knowledge in the class
  12. helped make my input an important part of class
  13. helped me to participate in class
  14. helped me improve small group interactions

 

 

Autumn 2008
ISSN 1477-1241