
How AI can revolutionise the way we analyse student surveys

Student surveys provide a potential goldmine of data. Kirsty Bryant suggests deploying AI to maximise your insight

28 Mar 2024

Created in partnership with the University of Westminster

At the University of Westminster, we have been thematically analysing open comments from student surveys for several years. This academic year marks a change for us as we employ AI to help us in analysing student and staff qualitative feedback.

In previous years, we manually conducted a thematic analysis of open-comment responses from about 3,000 students per year, developing themes and uncovering patterns. This was a long and laborious task – especially in comparison with quantitative data, which often reaches decision-makers very rapidly.

This year, we have implemented cutting-edge machine-learning software that enables us to process open-comment responses and deliver insights much faster. We are one of the first UK universities to analyse the results of the National Student Survey (NSS) using AI software (MLY, provided by Explorance), giving us better insight into how to enhance the student experience.

We’re still early on in our journey with AI, but here is what we’ve learned so far.

Good output relies on good input

AI models available on the market have been trained on relevant data and feedback. Whether you are building your own model or purchasing one, be mindful that AI systems rely on large amounts of data to learn and perform well – but not all existing data is reliable, valid or representative. For example, student feedback surveys may have low response rates, missing values or leading question wording, all of which affect the accuracy and usefulness of an AI model.

To make the most of your input, consider the phrasing of your questions and how they might influence the answers. Take, for instance, “Could you tell us one thing you liked and one thing you didn’t like?” Possible responses might be:

• “Classrooms, quality of teaching”

• “I enjoyed the teaching but didn’t like the classroom.”

Based on the above responses, an AI model may identify themes in the first response (but not likes or dislikes) and sentiment in the second. You will likely get a more accurate sentiment and theme analysis by asking one question at a time. Changing the wording to “Tell us about your experience” encourages the respondent to provide the sentiment and meaning behind their response (ie, “I enjoyed X, but didn’t like Y”), which the AI can code.
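
To make this concrete, here is a minimal sketch of how a generic sentiment model treats the two example responses. It uses the open-source Hugging Face transformers library, not the software we use in production, and the behaviour described in the comments is what we would expect rather than guaranteed output.

```python
# Minimal sketch using the open-source Hugging Face transformers library
# (illustrative only; not the MLY/Explorance software discussed above).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # loads a default English model

responses = [
    "Classrooms, quality of teaching",                        # fragment
    "I enjoyed the teaching but didn't like the classroom.",  # full sentence
]

for text in responses:
    result = sentiment(text)[0]
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")

# The fragment names themes but carries no sentiment-bearing language,
# so any label the model returns for it is unreliable. The full sentence
# expresses likes and dislikes that a model can actually code.
```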

Additionally, your survey data may need cleaning to remove jargon or non-responses (such as “N/A”) from the text. This helps ensure accuracy and reliability in your analysis. The output can only be as good as the input.
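
As a rough illustration, a cleaning pass might look like the sketch below, written in Python with pandas. The column name and the list of non-responses are assumptions for the example, not a description of our production pipeline.

```python
# Illustrative cleaning pass; column name and non-response list are assumptions.
import pandas as pd

NON_RESPONSES = {"n/a", "na", "none", "nothing", "-", "."}

def clean_comments(df: pd.DataFrame, col: str = "comment") -> pd.DataFrame:
    """Drop blank entries and common non-responses before analysis."""
    df = df.copy()
    df[col] = df[col].astype(str).str.strip()
    mask = (df[col] == "") | df[col].str.lower().isin(NON_RESPONSES)
    return df[~mask]

survey = pd.DataFrame({"comment": ["N/A", "Great lecturers", "   ", "none"]})
print(clean_comments(survey))  # keeps only "Great lecturers"
```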

Combine your AI analysis with human input and validation

While AI can provide you with many insights from your students’ comments, it cannot replace the human judgement and expertise needed to interpret and act on them. 

We decided to purchase AI software because we wanted to build on our qualitative analysis capabilities and knowledge. Our institutional research team’s skill set in thematic analysis, prior knowledge of the datasets and experience in inductive and deductive coding provides our source of human input, interrogation and validation of the AI analysis.

We can spot discrepancies between human and AI interpretation and feed them back into the AI model. For example, a human can pick up on sarcasm in text, whereas AI cannot. The AI model currently only understands what it has already seen (deductive coding) and relies on being taught themes; humans are capable of inductive coding and can spot new themes.
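
One way to make this validation systematic is to code a sample of comments by hand and quantify agreement with the AI’s codes. The sketch below, using scikit-learn’s Cohen’s kappa with invented labels, is hypothetical rather than a description of our workflow.

```python
# Hypothetical check: compare human codes with AI codes on a sample of
# comments; the labels below are invented examples.
from sklearn.metrics import cohen_kappa_score

human_codes = ["positive", "negative", "negative", "positive", "neutral"]
ai_codes    = ["positive", "positive", "negative", "positive", "neutral"]

kappa = cohen_kappa_score(human_codes, ai_codes)
print(f"Human-AI agreement (Cohen's kappa): {kappa:.2f}")

# Disagreements flag comments worth re-reading (eg, sarcasm the model
# coded as positive) and can be fed back to the model as corrections.
```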

You should always review your AI results with a critical eye. Consult other stakeholders to make sure that your new insight can be actioned in line with your institution’s key priorities. Be careful to identify which department can action which insight, and whether it is an academic or non-academic concern – responses about heating, cafeteria options or accommodation would fall into the latter, for example.

Also, think about whether these action points are quick wins for a team or important to the institution as a whole. If talks with stakeholders confirm they are neither, then move on to the actions that are.

Make the most of AI’s capabilities

AI gives us greater capacity. We can run all our major surveys (student surveys and colleague surveys) through the system, enabling us to be more responsive: we can identify in-year issues and react quickly.

Bringing in the qualitative data from all our major surveys means we will have the capacity to map different levels of study or different time periods, allowing us to identify patterns and trends. We can segment the data to better understand different student user journeys and develop measures to create a successful experience for all our students. For example, separating out the experiences of those who have been on placements versus those who haven’t, or pre- and post-placement groupings, can show us how they each self-report on their student experience, education and learning gains, and any differences between their responses.
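
As a sketch of what that segmentation might look like in code, the pandas example below compares theme frequency and average sentiment across cohorts. The dataframe and its columns are invented for illustration.

```python
# Hypothetical segmentation of coded comments with pandas; the columns
# ("placement", "theme", "sentiment") are invented for illustration.
import pandas as pd

coded = pd.DataFrame({
    "placement": ["yes", "yes", "no", "no", "no"],
    "theme":     ["assessment", "careers", "assessment", "timetabling", "careers"],
    "sentiment": [1, 1, -1, -1, 1],  # +1 positive, -1 negative
})

# Theme frequency and average sentiment for placement vs non-placement students
summary = coded.groupby(["placement", "theme"])["sentiment"].agg(["count", "mean"])
print(summary)
```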

We can now share high-level analysis of the open comments almost as quickly as we currently deliver the quantitative counterparts of our surveys. 

However, without action and accountability, this analysis will not realise its potential. It requires a culture shift towards listening to qualitative insight and acting on it. Stakeholders must buy in if we are to make the most of AI and its outcomes. If this is achieved, we might expect to see a change in future survey results as the impact of these changes is realised.

Kirsty Bryant is senior institutional research analyst at the University of Westminster.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
