I was really downhearted when I read this article published by the Guardian on its Higher Education pages.
The author argues that the National Student Survey is a waste of time and is detrimental to learning and teaching.
Personally, I think it is a cause for celebration that students report such high levels of satisfaction and positive experiences — particularly this year, when the sky was supposed to fall in because the first generation of £9k fee payers was filling in the survey.
The NSS is a survey that draws in hundreds of thousands of respondents each year; for the results to be published at all, a response rate of over 50% must be achieved (and this threshold is massively exceeded every year). This provides reassurance that the data are robust and genuinely representative of students' views about their experience.
To complain that the scores are clustering misses the point entirely and ignores the facts. Would we prefer it if more students were having an utterly miserable experience for the sake of having a wider range of scores? I doubt it.
Then there is the argument that a single survey is not appropriate because universities and the courses they provide are different. Of course they are! Universities are different, and the courses they deliver will vary in content, teaching and assessment methods. But the survey does not ask students to compare their experience with that of students at other universities, and it would be absurd to do so, because students (by and large) only have one experience to reflect on. The NSS, rightly, asks students a series of pointed questions about their experience of teaching, assessment, support, resources etc. at their own institution. The questions are identical for all students in their final year, so the focus is on comparability of outcome, not input. It is perfectly possible for two courses in the same subject to be taught and assessed in completely different ways and for students to come out at the other end of both highly satisfied with their experience. It would be disastrous to try to bring about uniformity of input.
Another point to consider here is the ‘masking effect’ of the mean scores for institutions. These scores, which aggregate the level of satisfaction of all students, cover up the fact that there will be significant variation between departments, and even within departments between courses. This in itself is not a problem, but it does counter the suggestion that the exercise is pointless because there isn’t a wider gulf between the highest and lowest scoring universities. It also gives departments an opportunity to benchmark themselves against colleagues within their own university and against peer departments in other universities.
The article's suggestions that changes to the curriculum have been made because of the NSS seem very odd to me. If methods of assessment are being changed to move away from an over-reliance on essays and exams, then I can only cheer that along as a good thing in itself. These traditional forms of assessment are tried and tested and have a very important part to play in universities. But why not add other methods of assessment into the mix? Ones that challenge students to develop their thinking and negotiate a final piece of work with a group of other students; ones that encourage and support them to present their research and ideas in new and innovative ways. I’ve just spent the summer at UCL reading through hundreds of external examiner reports. External examiners are all experienced, senior academics from other universities, and their feedback is routinely positive about new and innovative methods of assessment as a way of maintaining high standards and challenging students in new and interesting ways. Far from discouraging new methods of assessment, they actively encourage, support and promote them — and these are the opinions of other academics, not students.
Another criticism that is often made (though not in this Guardian article) is that the questions only ask about satisfaction and do not ask students to reflect on their experiences of teaching, learning and assessment. I disagree. To take four questions as examples:
- Staff have made the subject interesting (Q2 in the NSS)
- The course is intellectually stimulating (Q4)
- The criteria used in marking have been clear in advance (Q5)
- Feedback on my work has helped me clarify things I did not understand (Q9)
Who would not want to know the answers to these questions? If, for example, only 30% of students agree that the course is intellectually stimulating, I think it’s very important to know that and to do something about it! Equally, if 70% of students agree with the statement “Feedback on my work has helped me clarify things I did not understand”, that’s a good score, but it still means that nearly a third of students are saying that feedback is not helping them, and that’s a worry.
No mass survey containing only 23 questions is going to be perfect. But I for one welcome this annual opportunity for students, en masse, to take a moment to reflect on their entire experience and tell us what it’s been like. It is an important feature of UK higher education and a critical part of the relationship we have with our students.
UPDATE: Andrew McRae has ‘fisked’ the Guardian piece with magisterial style on his blog.