Yes, uni students say some awful things in teaching surveys, so how can we use them to improve?
- Written by Joseph Crawford, Lecturer in Learning and Teaching, Academic Division, University of Tasmania
Imagine some of the key evidence for promotions at work being anonymous responses[1] from coworkers who just received a bad performance evaluation from you. Something similar happens in higher education, with teachers rated by students grateful for good grades or disgruntled by low grades. That’s a bitter pill to swallow for some academics[2].
Evidence[3] tells us students take their feedback personally. Much as jurors’ decision-making[4] is swayed by their emotional state, people make worse decisions when they are uncertain[5] or stressed[6], two states common among students.
So how unreliable are student evaluations? And what can we do about it? Our work[7] indicates there is still much to be done in this space, but we can set some rules to make it easier.
Read more: 'Lose some weight', 'stupid old hag': universities should no longer ask students for anonymous feedback on their teachers[8]
Not all surveys are equal
Australia’s national Student Experience Survey[9] is treated as “the pulse” of student satisfaction rather than a device to enable teacher growth, and its data are easily skewed by circumstances at the time. Unsurprisingly, during 2020, universities that already had an online presence saw the smallest declines in student experience scores.
So the question becomes: did the quality of learning crash at Group of Eight universities, which had the greatest declines in student experience? Unlikely. Instead, students’ ratings reflected their difficulties engaging with new forms of teaching and learning, plus the disruption of COVID-19 lockdowns.
Maybe they should have given students chocolate[10]?
The reality is these surveys do not tell us how students learn, but how students perceive their learning. Yet students aren’t experts in what learning is[11]. And when students don’t receive effective training in evaluation, it’s hardly a surprise that teacher gender, race and attractiveness[12] change scores.
“Everyone is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.” - a saying widely attributed to Albert Einstein.
Instead, let’s ask students to share the most enjoyable content, the most rewarding educational technologies, and where improvement was needed. Include ethics and feedback training for bonus credit.
Read more: Our uni teachers were already among the world's most stressed. COVID and student feedback have just made things worse[13]
Making survey tools that work
Psychometrics is the science of psychological measurement. Many academics have specialist knowledge in developing surveys that are designed to be valid and reliable. But it’s unclear whether universities draw on them as a resource to develop their surveys, and some academics wonder if they should. The 2021 Employer Satisfaction Survey Methodological Report[14], for example, does not mention the words “validity” or “reliability” once across its 140 pages.
A survey is valid when its questions measure what we think they are measuring. Measuring time with a stopwatch is easy. Deciding how we feel about intangible concepts is harder.
The national Student Experience Survey[15] asks students whether they have developed a sense of belonging to their institution. Yet the evidence on belonging[16] indicates it typically develops through interpersonal relationships, not through institutions, and not through universities[17].
A survey is reliable when its questions generate consistent results over time and across different participants. It’s analogous to baking a cake: we assume the scales will always accurately measure 40 grams of butter.
Speaking of sweets, scores in student surveys are easy to game[18]. Inflating student grades[19] does the trick.
As another example, the Australian Student Experience Survey[20] asks whether students have developed their critical thinking skills during their course. How accurately can a person with low critical thinking skills answer that question?
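Reliability, at least, is something we can check numerically. Here is a minimal sketch, in Python, of one common check, Cronbach’s alpha; the ratings, the 1-5 scale and the rule-of-thumb threshold in the comments are all invented for illustration, not drawn from any of the surveys above.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = item_scores.shape[1]                         # number of items
    item_var = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Five students rating four items on a 1-5 scale (hypothetical data).
ratings = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

# Values above roughly 0.7 are conventionally read as acceptable.
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")  # about 0.94 here
```

A low alpha does not tell you which question is broken, but it is an early warning that the items are not hanging together as one measure.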
5 rules for surveys to help teachers improve
There are ways surveys can be used for good: to actually help teachers become better educators and improve student learning. But it requires a reset.
Here are five rules institutions could consider when developing their surveys.
1. Find psychometric specialists to create quality tools
We go to dentists to have our teeth fixed. The same rule applies here. Find individuals who can take the theory[21] of scale development[22] (producing reliable and valid measures to assess an attribute of interest) into the practices of learning and teaching.
2. Change when the survey is done
In many fields, evaluations[23] are done before, during and after a program. In higher education, they are completed only after the class has ended.
Evaluating at multiple points would help identify whether the learner makes progress during the class. It would also help control for cohort effects (one year’s students, for example, might simply be stronger).
For student experience, contrasting how the same student rates different classes each semester could offer a more stable measure of which classes need review.
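As a rough illustration of that idea, here is a minimal sketch; the students, classes and ratings are invented. Subtracting each student’s own average puts harsh and lenient raters on the same footing before classes are compared.

```python
import pandas as pd

# One row per (student, class, rating); all values are hypothetical.
ratings = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2", "s2", "s3", "s3", "s3"],
    "class":   ["A",  "B",  "C",  "A",  "B",  "C",  "A",  "B",  "C"],
    "rating":  [4,    2,    5,    5,    3,    5,    3,    1,    4],
})

# Centre each rating on the student's own average across their classes.
ratings["centred"] = ratings["rating"] - ratings.groupby("student")["rating"].transform("mean")

# A class sitting well below zero is rated poorly even by students
# who rate their other classes well - a stronger signal than a raw average.
print(ratings.groupby("class")["centred"].mean().sort_values())
```

In this toy data, class B is rated lowest by every student relative to their own baseline, which says more than its raw average alone.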
3. Use more than just numbers
The numbers tell us how we are tracking, and this is not inherently bad. The qualitative comments (mostly[24]) help us explore what those numbers mean. Mixed methods[25] approaches can help.
4. Control for bias
It’s not always possible to eliminate bias and emotion. We can seek to understand them and use the measures as the starting point for case-by-case conversations about improving teaching. Developing reliable and valid tools will help, but if the aim is for these to help teachers improve, then we need to focus on that, not on cross-institutional comparisons.
Better yet, let’s actively recognise teachers’ professional growth, question any decline, and report on averages.
We can also train students to be better evaluators.
5. Create a growth community
Teaching quality surveys do not necessarily increase teaching quality[26], but they can.
The surveys offer an opportunity to notice differences. If students rate seven items at around 90% but one at 84%, that gap should prompt inquiry into the reasons. It could be a great opportunity to create more meaningful content; it could also just be an outlier[27].
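As a toy check on that kind of gap, here is a minimal sketch using the hypothetical numbers above; the two-standard-deviation threshold is an arbitrary assumption, and real data would need the sample size taken into account before reading much into a flag.

```python
import statistics

# Hypothetical per-item satisfaction scores: seven near 90%, one at 84%.
item_scores = {
    "item_1": 0.90, "item_2": 0.91, "item_3": 0.89, "item_4": 0.90,
    "item_5": 0.92, "item_6": 0.90, "item_7": 0.91, "item_8": 0.84,
}

mean = statistics.mean(item_scores.values())
spread = statistics.stdev(item_scores.values())

# Flag any item sitting more than two standard deviations below the mean.
for item, score in item_scores.items():
    if score < mean - 2 * spread:
        print(f"{item}: {score:.0%} vs a {mean:.0%} average - worth a closer look")
```

A flag like this is a prompt for conversation, not a verdict: the item may point to a real weakness, or the dip may wash out in next semester’s data.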
Use these findings as publishing opportunities[28] to share what was learned.
References
- ^ anonymous responses (theconversation.com)
- ^ some academics (theconversation.com)
- ^ Evidence (doi.org)
- ^ Jurors’ decision-making (doi.org)
- ^ uncertain (psycnet.apa.org)
- ^ stressed (doi.org)
- ^ Our work (doi.org)
- ^ 'Lose some weight', 'stupid old hag': universities should no longer ask students for anonymous feedback on their teachers (theconversation.com)
- ^ Student Experience Survey (www.qilt.edu.au)
- ^ given students chocolate (doi.org)
- ^ aren’t experts at what learning is (doi.org)
- ^ gender, race and attractiveness (doi.org)
- ^ Our uni teachers were already among the world's most stressed. COVID and student feedback have just made things worse (theconversation.com)
- ^ 2021 Employer Satisfaction Survey Methodological Report (www.qilt.edu.au)
- ^ Student Experience Survey (www.qilt.edu.au)
- ^ evidence on belonging (doi.org)
- ^ through universities (doi.org)
- ^ easy to game (doi.org)
- ^ Inflating student grades (doi.org)
- ^ Student Experience Survey (www.qilt.edu.au)
- ^ theory (search.informit.org)
- ^ scale development (doi.org)
- ^ evaluations (doi.org)
- ^ mostly (theconversation.com)
- ^ Mixed methods (doi.org)
- ^ teaching quality (doi.org)
- ^ outlier (en.wikipedia.org)
- ^ publishing opportunities (doi.org)