The Future of English Proficiency Testing (Talking ELT Special #5)

Talking ELT | Episode 1
May 05, 2026 | 00:57:29

Show Notes

English proficiency testing is changing, and fast. In this special episode of Talking ELT, Sara Pierson OBE and Anthony Green unpack what the future holds for English proficiency testing, drawing on insights from a new white paper created in partnership with Times Higher Education.
The conversation explores integrated skills, the ‘authenticity gap’, hybrid assessment models and the growing impact of AI, alongside what universities really need from language tests. Grounded in decades of assessment experience, this episode offers a human-centred view of how English testing can better prepare learners for higher education.
Download the full white paper here: https://oxelt.gl/4eWHS7b 


Episode Transcript

[00:00:06] Speaker A: Welcome back to a new Talking ELT special. Today we're going to be diving back into the world of English language assessment. English language testing is at a crossroads. [00:00:17] Speaker B: True or false? Yes. I don't think about it so much as a crossroads, as a pendulum. Every phase in assessment over the years has been to answer some kind of social question. [00:00:27] Speaker A: I'm joined by two key members of the assessment team here at Oxford University Press: Sara Pierson OBE, Managing Director, and Anthony Green, Research Director. [00:00:38] Speaker C: There's a whole set of skills and experiences that students need to develop to be able to be successful. We need to think about the ways in which we can replicate that real world experience. [00:00:51] Speaker A: Together, we're going to be chatting about some of the trends that we're seeing in English language proficiency testing. [00:00:57] Speaker B: The most successful experiences we have are when we know what we're getting ourselves into. [00:01:02] Speaker A: How universities worldwide are rethinking what good assessment means. [00:01:08] Speaker B: Why are we judged by a couple of hours' performance at the end of three or four years of a degree? [00:01:17] Speaker A: And what the future might look like in the sector? Let's jump in. It's great to have you both here. First of all, introductions, but I'm going to let you do it yourselves. So I have Anthony Green and I have Sara Pierson here. Sara, do you want to kick off, tell us a bit about who you are, what you do and your assessment journey, sort of how you got here? [00:01:47] Speaker C: Thanks. So I'm Sara. I've worked at Oxford University Press for three years leading our English language assessment business. And before that I was at the British Council for 10 years, leading exams there. So that included leading IELTS, which is a very well known English language assessment.
And I've worked in education for, gosh, longer than I care to remember. 30-odd years. [00:02:12] Speaker A: Fantastic. Great. Anthony, what about you? [00:02:15] Speaker B: So, yeah, I've been involved in assessment for a very long time. I started out as an English language teacher, as many of the people in this field did. I've spent the last 20 years as Professor of Language Assessment at the University of Bedfordshire and just recently joined Oxford University Press. So I guess I became really interested in assessment probably in my university years. So I remember my finals at university. [00:02:45] Speaker A: Your final exams. [00:02:46] Speaker B: Final exams, yeah. The kind of star performer in our group in the final exam got very sick and had to be taken out of the exam and taken off to hospital purely because of the level of anxiety. [00:03:01] Speaker A: This is yourself you're talking about, it's not me. [00:03:04] Speaker B: No, I was very relaxed about it. [00:03:05] Speaker A: Okay. [00:03:06] Speaker B: But a much, much better student couldn't take the exam because they were so cramped up with anxiety. So I got really interested: why are we judged by a couple of hours' performance at the end of the three or four years of a degree? [00:03:24] Speaker A: Fascinating. Yeah. I mean, that kind of scenario of feeling really sort of ill and tense and anxious about assessment is, I think, fairly familiar. And hopefully it's one that over the years we can kind of, you know, reduce, or probably that's even happening now. Just thinking back a bit about, you know, you mentioned university. Let's start with something lighter, which is, can you remember a time when you were maybe assessed on something at university or college or even school? And how did that recognition change you? How did it impact on you? [00:03:55] Speaker B: Sara, what about you?
[00:03:57] Speaker C: So I was thinking about a very early experience of a kind of assessment, and that was at primary school, when I was picked to participate in a competition which was testing your knowledge of the Highway Code. So I duly went off and I learned the Highway Code. [00:04:15] Speaker A: So the Highway Code is the sort of manual for how to navigate on roads? [00:04:21] Speaker C: Correct. In a car, all the rules of how to behave on a road. You had to learn all the road signs, you had to be able to identify different patterns of behaviour on roads and comment and, you know, be able to answer those questions correctly. We actually did very well in my school and we got through to the finals in North Wales. And as part of that, we did the final test in a police station. And that was quite a nerve-wracking experience. I remember feeling very anxious at that point in time and fluffing one of the questions when I knew the answer. And I think that gave me an ongoing appreciation for the challenges of performing in an assessment situation and what that means for specific individuals as they're going through their assessment journey in life. And one of the highlights of that experience was then being taken down by the police round the police cells and getting the experience of what it was like to be an inmate locked in a cell for a couple of hours. But we did win. [00:05:27] Speaker A: I love that. I love the fact that the prize was incarceration. Okay, brilliant, fantastic. And also very interesting that your connection with the Highway Code was entirely, I mean, as a child, it was entirely not as a sort of driver or a user of the road or anything like that. That kind of sort of disconnect between the Highway Code as a kid and the Highway Code as a sort of driver. We'll come back to disconnect later. Okay. Anthony, what about you? Have you got any fond memories of... [00:05:55] Speaker B: My experiences of being locked in police cells.
[00:05:57] Speaker A: Okay, save that for later. [00:05:58] Speaker B: It's interesting that with the earliest standardized assessments in Imperial China, they used to lock the test takers away in cells for three days to do the test, as a security measure to make sure they weren't able to cheat. And they still were able to cheat. There are all sorts of ingenious ways that they used, like writing the answers to questions on their underclothes. But that fear of test takers being able to beat the test by doing some kind of nefarious activity that would get them through without really knowing the content, that's always been with us, and it's still with us these days. [00:06:38] Speaker A: Of course, if you beat the test, maybe that's a sign of your ingenuity, which should be praised. I'm not saying that. Okay, tell me about yourself. You know, something at uni or college or school which maybe impacted on you personally. [00:06:50] Speaker B: So I think my experience of assessment has changed over the years. So by the time I sort of did my higher degree, so my experience of my PhD viva, for example, I enjoyed it. It was good. Somebody actually read my PhD, which was sort of the point of writing it, to get someone to read and engage with it. And it was an interesting challenge talking through how I'd done the study and explaining all of my decisions. And I wasn't worried about whether I was going to pass or fail. I was more engaged just in explaining why I'd done it the way I'd done it. So I guess my relationship with assessment changed over the years, from that experience as a child of sort of cramming and thinking that the test was the ultimate challenge, to recognizing that tests are actually just a way of establishing where you have strengths, where you have weaknesses, what you've done well, what you may not have done so well, and seeing that as an opportunity to learn. [00:08:03] Speaker A: Great, great.
Fascinating that you talk about a viva, which is a spoken test, and you talked about it as feeling like less of a test and more actually as an engaging conversation. Something that was a bit of a revelation, maybe, to you at the time, which is lovely. The idea that you can learn something from a test. [00:08:21] Speaker B: Yeah, I wouldn't have seen it that way when I was 12. It would have been a more frightening experience than just sitting down and writing it on a bit of paper. I think that's the way many people experience job interviews; it's not always a pleasant experience. But I think it's more just gaining the maturity and confidence in what you know and just being able to share that. And it's not then the ultimate goal in itself. [00:08:51] Speaker A: Yeah, good. Brilliant. Great. So let's set the scene for our conversation. Today we're going to be unpacking some of the big talking points from a brand new white paper created in partnership with Times Higher Education. It's about the future of English proficiency testing and why universities are starting to rethink language assessment. So for everybody listening and watching, head to the description where you can find a link to download the full white paper and get stuck into the details, including the findings from our in-depth survey of university admissions teams around the world. We're going to pull out some highlights from the paper in this conversation with Sara and Anthony's expertise and years of experience, and dig into what it really means for English proficiency testing and assessment as a whole. Okay, so I have read the white paper. Okay. I've done my homework, I've read the white paper, and it opens with a really interesting sort of killer first line, which is: "English language testing is at a crossroads." Okay, lovely first line, which really sort of sets us up. Would you agree? And if so, why does it keep happening? Does it keep happening?
And either of you can talk to this one. [00:10:09] Speaker C: Well, I think that technology is changing, and that's having a significant impact on the classroom, on the university experience, and it's also having an impact on how English language tests are delivered and experienced. So I think that's raising a lot of questions for English language testing providers as to where and how to change the English assessment itself so that it can accommodate the changes in technology and how that's used within a higher education setting. And of course that's not just a one-off experience. The technology landscape has been changing for a number of years, with tests going from being on paper in a very traditional test taking environment through to taking it on a computer. And now you can take the test sometimes from the comfort of your own home, using remote proctoring technology, in a totally different way from how tests might have been taken in past years. [00:11:17] Speaker A: Great. Really quickly. Remote proctoring technology? [00:11:20] Speaker C: Yes. [00:11:21] Speaker A: What is that? [00:11:22] Speaker C: Well, it's an interesting way of describing something that's really quite simple, and it means that instead of being invigilated in person, supervised in terms of taking your exam in person, you're supervised through a computer. So you might be sitting at your screen with a person, or indeed an AI, monitoring how you are taking the exam, instead of that being taken in a test centre supervised by a human being. [00:11:53] Speaker A: Got it. So instead of a real person standing over your shoulder, you have a sort of digital Big Brother. [00:11:58] Speaker C: Correct. [00:11:59] Speaker A: Brilliant. Okay, Tony, this crossroads technology has brought us here. [00:12:04] Speaker B: True or false?
I don't think about it so much as a crossroads as a pendulum, in that there are these two sort of competing ways of looking at assessment, or language assessment. And on the one side there's what we call performance assessment, which is where the person creating the assessment is trying to replicate or simulate real life in some way. And on the other extreme there's a sort of abstracted, scientific approach to assessment, which is to try to identify what the key elements are in language and to abstract those and to turn them into question types. And that goes back, it's about 100 years now since the multiple choice question was invented. And that had a dramatic impact when it first came in. And there was great enthusiasm for these very abstract multiple choice based tests. There was then a big battle with people who preferred essay type tests and performance tests, where you get people to actually do what it is that you're interested in finding out about. And that way of thinking was certainly dominant in the 1930s, in the UK at least. But by the 1960s and 1970s the psychometric approach, this very scientific way of looking at abstract language abilities, had really become dominant. Teachers hated it, because it meant that people were just studying how to do a multiple choice question instead of studying how to use a language productively. [00:13:50] Speaker A: So this is the sort of washback issue, is that right? [00:13:53] Speaker B: That's the washback issue, yeah. So that became a big focus in the 1980s, and in language assessment people were really interested in this idea that if you had these very abstract tests, it was just encouraging people to do completely the wrong things in the classroom. So that was part of the impetus for the communicative language testing that developed in the 1980s, and that had a big impact on the development of the big tests that we'll be looking at in relation to university admissions.
Your TOEFL test, it got revised from a very abstract kind of test into something that looked much more like what people do in university. [00:14:34] Speaker A: Great. I love that you've given us there a sort of potted history of assessment. I mean, you've already mentioned China and, hundreds of years ago, sort of multiple choice being created, which is amazing. You don't think about multiple choice as being something you... [00:14:49] Speaker B: Multiple choice was invented 100 years ago. [00:14:52] Speaker A: Okay, okay, great. [00:14:53] Speaker B: The Chinese system used essays. [00:14:55] Speaker A: Yeah, fantastic. Good. Okay. So the single most valued skill that came out of the white paper, in terms of what people are thinking about now, is combining information from multiple sources. Why is that so prized? Why is it so valued, and how does this reflect the reality of modern study? I mean, it sounds like a very contemporary thing, taking information from multiple sources. Why is it so important and how does it reflect modern study? [00:15:27] Speaker B: Sara? [00:15:27] Speaker A: Anthony? [00:15:29] Speaker C: Well, I think it's reflective of the kinds of skills that students are expected to display in a higher education setting, where, you know, they've got to take information from a multiplicity of sources and be able to combine that, digest it, synthesize it, summarise it and play that back, either verbally or in written form, as part of their university experience. Now, I think that's always been there, but perhaps nowadays, with the range of different kinds of resources that are available to individuals having multiplied, then perhaps that skill has become more important within the classroom. And the expectations are that English language tests can support individuals to be able to demonstrate that they have those skills prior to engaging with a higher education institution.
So that creates a real challenge for English language test producers in thinking about, well, what does that mean in terms of changing the types of questions that we can ask test takers, so that they're able to demonstrate that they have that ability and that they can reproduce answers utilizing those various inputs? [00:16:52] Speaker A: Great. So that's about narrowing that gap between sort of what is tested and what they actually do in the real world, making it sort of feel much, much closer. [00:17:01] Speaker C: Yes, and I think traditionally tests, English language tests, have tested skills as separate skills. They've tested writing, they've tested reading, you know, listening, speaking and so forth as separate, individual skills, rather than looking at how you combine, for example, listening and speaking. So it has an impact, I think, on how we need to think about the ways in which we can replicate that real world experience. [00:17:35] Speaker A: Great. Yeah. Very interesting to hear about that potential sort of disconnect you talked about between sort of what is tested and what is actually done, or what is required. Anthony, I mean, the white paper talks about an authenticity gap. It talks about a disconnect between what tests measure and what students actually need to do. I mean, do you recognise that authenticity gap, and what are some of the biggest mismatches you see currently in English language assessment for higher education? [00:18:08] Speaker B: Yes, I mean, there is certainly a gap, and there is inevitably a gap, between what people can do in one or two hours of being tested and what they do in their real life as students or as professionals outside of an assessment situation. The gap has got wider because of the changes in the way technology is used in higher education. So traditionally, study looked like this: you had to read some books, you had to listen to a lecture, and then you produced an essay based on it.
So it's integrating information, but from a couple of sources. That's the old model. Increasingly now, that's changing. There's not this body of information that you have to absorb and then reproduce. There is less and less a correct answer to an issue you're given. But now you're overwhelmed with information. And it comes not just from the lecture and from a source textbook. It comes from perhaps a piece of research you're undertaking yourself, perhaps something you're doing in discussion with other members of your study group. It comes from the Internet, obviously, and from all those other sources that you now have much more access to than you ever would have done before. And it comes in the form of multimedia. So it's not a book and a lecture. It's a book, a lecture, a video clip, a mixed media product of some form where there's bits of text, bits of video, images, diagrams all thrown at you. And it's very easy to become confused by that. You need to be able to engage with that very critically. And tests haven't kept pace. [00:20:03] Speaker A: Okay, so I've just had a thought, which is maybe the major challenge is dealing with this overwhelm, this sort of massive range of multimodal information, and students trying to kind of cope with that and accommodate it and synthesize, you know, sort of an idea and combine their own sort of personal input, all... [00:20:23] Speaker B: Coming at you in another language. [00:20:25] Speaker A: Yeah. Can assessment help prepare students for that kind of thing? Is that too grand an ambition? [00:20:31] Speaker C: Well, I think it is to some extent, because I think an English language test in isolation can only do so much. And so universities perhaps need to think about what other forms of assessment they might need to support the evaluation of a student's readiness for the university environment.
[00:20:55] Speaker B: I think it's always been the case that the providers of assessment have said that in the higher education context, for admissions, they should consider not just somebody's performance on an English language test plus their academic qualifications, but a whole range of information about that individual: where it is they're coming from, what they know, what kind of exposure they might have had to the sort of academic culture that the university embodies. So I think it's clear in the survey that a lot of universities are not only drawing on the English language assessment but on other sources of information as well. And that's good practice, but it's very difficult to do if you're an admissions officer with a thousand people and you've got very little time actually to consider which of them you're going to accept and which you're not. So I think it's finding a way for universities to collect the information they need and think about how they might best work with the students who are coming, to make sure that they are able to engage with that overwhelming quantity of information. [00:22:09] Speaker A: Great. [00:22:10] Speaker B: In the most productive way. [00:22:11] Speaker A: Yeah. [00:22:12] Speaker C: And I think it's not only the skills they need for academic purposes once they arrive at university; it's also the wider cultural skills they need to develop to integrate into a university experience. So, you know, how to engage in the social life on campus and how to interact with people from multiple different cultures. So there's a whole set of skills and experiences that students need to develop to be able to be successful once they arrive at university, far beyond the English language skills that they're going to need. [00:22:45] Speaker A: Sure. So it's not just about what happens in the lecture hall or the seminar. It's also about what happens at a party, at a club, at this, that or the other.
It's the whole sort of holistic experience. [00:22:55] Speaker C: Correct. And increasingly students are also working alongside studying, so they need to be able to be effective in the workplace as well as in the university environment. [00:23:06] Speaker A: Fantastic. Great. Now, I went to university a very long time ago. Okay. And Anthony, you described my experience perfectly. You said you read a book, you go to a lecture, you have a seminar, you write an essay. I mean, that was it, that was kind of all I did. But obviously that traditional model has changed, it's evolved, it's developed, and the extended essay, the long writing assignment, is being broken down. So how are colleges and universities assessing what's required? What needs to replace it? Maybe more sub-skills to do with summarizing, paraphrasing? [00:23:51] Speaker B: So that's one of the things that comes across quite clearly in the survey: that actually the receiving institutions don't value long-form essays quite as much as they used to. And they do very much value the ability to summarize and paraphrase information from various sources. A lot of universities are picking up other forms of assessment, so the essay is now not the only game in town. People are using presentations, projects, those sorts of alternative ways of looking at what students can do a lot more often than they used to. And it varies very much by course. So where one course may still rely quite heavily on traditional forms of assessment, another may not. So we're seeing much more variety than we used to, alongside the change of emphasis. [00:24:41] Speaker A: Great. And Sara, that kind of movement towards maybe assessing things which ultimately in the working world will be more useful. So I don't write any long essays anymore, but I do loads of presentations and summaries and sort of micro bits and pieces. I mean, it's got to be good for that, hasn't it? [00:24:59] Speaker C: Ultimately, absolutely.
And I think another area that perhaps came out in the report was giving people those skills to engage in group work in a collaborative way and to be able to discuss ideas, work as part of a group, to be able to produce assessment within a university setting. But of course that's also preparing you for a life in work, working as part of a team and making sure that you can communicate and collaborate to produce pieces of work in a coherent way. [00:25:31] Speaker A: Great. So are we saying, or maybe we're already there, that there'll be an assessment which is done on a group basis, where it won't be sort of me on my own, you know, sword hanging over my head, but it'll be sort of, you know, a kind of group of us, a panel? [00:25:45] Speaker B: Well, this is one of the great mismatches that were identified in the survey: that actually, increasingly, language tests are relying on just an individual speaking to a computer, rather than speaking with other human beings, and certainly not with groups of human beings to arrive at some sort of outcome. So that's certainly an area where we could develop language assessments to better simulate that kind of group experience. But it's very challenging, because we tend to work with individual test takers, and it's difficult to recreate. We're not yet able to use AI to recreate that kind of group activity. [00:26:31] Speaker A: Yeah, that dynamic, I guess. Great. So maybe we had a peek into the future there, sort of the idea of dynamic group assessments. Maybe that's where English language assessment is heading. It's certainly heading in a more hybrid direction. That lovely word hybrid, that sort of mix of things. And based on the evidence in the paper, higher education institutions are considering hybrid models more readily. What does a hybrid model look like? What might it look like and how on earth is it going to work? Any idea? I can see I've stumped you there.
[00:27:10] Speaker B: So hybrid, as it's used in the survey, is not talking about hybrid technologies or hybrid formats. It's talking about a hybrid model where an institution will use an external test, like the Oxford Test of English Advanced, and an in-house form of assessment of some kind to make judgments about what students are able to do. And it was interesting to see in the survey that many of the institutions, across the world, not in any particular region, are making use of some form of this hybrid approach, where they're not relying solely on the external, professionally produced assessment; they're also using ones that they've produced themselves. And that has a lot of advantages, because you can then assess the things that really matter to your program. But it's also true that most individual institutions don't have great levels of expertise in assessment development. So there'd be a concern that maybe what they're producing may not have the same level of quality as something produced by the professionals. [00:28:25] Speaker A: And is that an unfair expectation? I mean, you know, why not combine this mixture of sort of, you know, proper, rigorous, say, Oxford Test of English with something which is more localised and bespoke and maybe less reliable? [00:28:40] Speaker B: Proper and rigorous sounds good. And proper and rigorous should be true on all sides. So there's no reason why an individual institution can't produce a proper and rigorous assessment. It just maybe needs more attention than they tend to give it. [00:28:56] Speaker A: Okay. [00:28:57] Speaker C: And I think that the survey also showed that in spite of the rise of these more hybrid models, this combined way of assessing students' readiness:
The role of standardised, global English language tests remains very important, and that gives institutions a sense of comparability, to be able to look at how one student compares to another, or how one institution is comparing to another. So it gives them that baseline from which they can get more confidence in a student's ability. [00:29:32] Speaker A: Yeah, the paper was certainly saying that tests are still important. They're still standardised benchmark tests, recognisable and valuable. It's not chucking assessment out the window. [00:29:47] Speaker B: No, absolutely. And it's definitely good practice that individual institutions should be including their own procedures as part of that process. But they need to do it well and they need to invest in it for it to work effectively. And there has to be that recognition that it needs support from the institution in order to make those things really operate as well as they could. [00:30:13] Speaker A: Great. So talking about testing, talking about assessment. You know, potentially it's still an area of nervousness and anxiety for students. You know, tests matter. You know, they can sort of change your life's direction in a modest kind of way. But there is a positive impact with testing. You know, it prepares students, it readies them, for certainly a university or college environment. Sara, you talked a bit about that. How can it really help them, though, you know, with college life and university life? It's just a test at the end of the day, isn't it? How can it sort of, how can it open up their skill set? [00:30:58] Speaker C: Well, it is just a test at the end of the day, but I think that, you know, before the test, students have to think about how they're going to prepare for the test. They should be thinking about the kinds of English that they're going to need when they get into an academic institution.
And they should be practicing those skills on a regular basis so that it's not just a test in isolation. The test is a culmination of some learning and some practice that gives them the confidence that they'll be able to use those skills once they've completed the test, hopefully got the result they wanted and ended up in the higher education institution, hopefully of their choice. [00:31:39] Speaker A: Great, great. [00:31:41] Speaker B: Hi. I wanted to take a moment to talk about something that sits at the heart of this conversation. How universities measure English proficiency. We've partnered with Times Higher Education on a new white paper that looks at why many institutions are rethinking traditional testing models. As higher education becomes more global and more digitally connected, the language skills that students use every day are changing. They're analyzing information from multiple sources, communicating across cultures, and working in collaborative, fast moving settings. But the tests used to measure readiness haven't always kept pace with these realities. This white paper, including insights from higher education leaders around the world, explores what modern English assessment needs to look like, from the importance of more authentic task types to the shifting role of technology and the expectations students now face. If you're interested in where language assessment is heading and how institutions are preparing for the future, download the paper from the link in the description and take a deeper look at the findings. [00:32:54] Speaker A: Assessment. We know why it's important. We know what it can do in terms of the holistic value and ultimately get people where they want to go. It is still tricky. Anthony. How can assessment systems reduce the stress for learners without reducing the reliability of the results? [00:33:14] Speaker B: Well, part of it is that expansion of the evidence that we're collecting. 
So if it all relies on one test, then that puts all the pressure on that one test. If there are other sources of information about students, then we're not relying so much on that one individual thing, and we're spreading out the pressure a little bit, and it makes the test a bit less anxiety-inducing. But I think it's also true that the most successful experiences we have are when we know what we're getting ourselves into. So think about a job that you're taking on, or getting into university: the more you know about it before you begin, the better the experience is likely to be. If you've chosen the institution simply because it's the highest-prestige university you can get into, and it all depends on the score that you get on the test, then when you actually arrive at that university, you may not be at all well prepared for the experience that you're going to have. And you probably won't have the greatest experience with your degree. If you know what it is that you're moving towards, and you have an idea of how you might succeed in that situation, then you're likely to do better when you get there. So if we can give them not just tests that look a little bit like what it is that they're going to be doing when they arrive at university, but also show them how the test connects to what they're going to be doing at university, and give them some more information about what that experience is actually going to be like, so that we help them to see other people who've been through that same experience and show how the test is part of that whole ecosystem that they're moving through, then I think they're likely to have a more positive experience. And I think we all have a responsibility to help students to understand that; the students themselves should be motivated to learn more about what they're getting into.
But also the universities who are receiving them and the test providers should all be helping them to see what it is that is involved and what it takes to succeed. [00:35:36] Speaker A: Great. [00:35:37] Speaker C: I think it's certainly true that that level of preparedness will help any test taker to know what they're going to experience when they enter a test situation, and to feel more confident and comfortable about the experience. But it's also really important that test providers think about the entire experience. So, you know, the test centre itself: how do you make that friendly and comfortable and supportive for an individual who might be coming in feeling very anxious? Because this is a very significant moment for them. And I think it's really important that you consider the human aspect of sitting down and taking a test and how to make that test taker feel as comfortable as possible, recognising the performance anxiety that they might be experiencing. [00:36:22] Speaker A: What are some of the ways of doing that then? Are we talking soft furnishings, cushions? [00:36:26] Speaker C: I think it can be all of that. You know, if you go into a test centre and it's a very strict environment, you're treated quite brusquely by the test administrator, it's a very austere environment, maybe it's very hot and uncomfortable in some countries, then that's all going to have a potential impact on your ability to perform at your best. So I think it is important that you think about the physical environment and how you make the test taker feel that they're in an environment that is comfortable and supportive, so that they can achieve their best. [00:36:59] Speaker A: Brilliant. That really reminds me of a story, actually. I used to be an IELTS speaking examiner, and I remember one time I was in Italy, I was testing a bunch of kids, and they were all very nervous. I mean, patently kind of quaking.
And, you know, instead of kind of being shut up in a room and then they would sort of be ushered in, I actually came out and spoke to them initially, talked to them a little bit in their own language, put them at ease, and then they kind of thought, oh, you know, okay, this guy's not too bad. He's not a monster. And when they came in to see me in the test room, it was like another meeting. It was like seeing me again. Are we talking that kind of thing? Just, you know, a little bit more of a sort of human touch? [00:37:33] Speaker C: I think so. And I think it's very easy to forget, if you're on the examining side of the table or the test centre side of the table. You know, all our consideration is about making the test day go smoothly, that the administration works well, and sometimes it's easy to forget the human aspect of that and what it means to the candidate. I think, you know, I've been around many test centres around the world, and sometimes the most valuable experience is seeing a test taker get their results and just what it means to them when they succeed and get the result that they want. You know, this is massively important to them in the moment, but also for their future life. [00:38:18] Speaker A: Fantastic. I love this idea that, you know, a test is actually a very human experience. It prepares you for something which is very human, and it helps you towards it. I had a really good idea while you were talking, which is we should substitute the word test for taste. It gives you a taste of sort of what's to come. It anticipates, it opens a door, it opens a window on something for your future. Okay, you heard it here first. Brilliant. So, Anthony, is there anything else that teachers should be thinking about in the evolving landscape to support student readiness? We have lots of teachers, obviously, who are very close to Oxford and Oxford University Press.
And they love to hear about tips or ways to help their students and make things less stressful, easier for them. What kind of things should they be considering? [00:39:04] Speaker B: Well, part of it is just helping the future test takers to understand the connection between the test and life beyond the test. So it's seeing, okay, we're doing this kind of exercise on the test. Why is that there? Why is it on the test? What is it supposed to represent, and how does it connect to what might then happen to me at university or in the situation beyond the test? And if you can help to draw out those connections, it moves people away from just wanting to focus on doing lots and lots of practice of exactly what's going to be on the test, because a lot of that is not actually going to be relevant to them, because it'll be either too difficult or too easy. It's not pitched at exactly the level they need. But to focus instead on maybe reproducing the kinds of activity beyond the test that they're going to be doing at university, and seeing how we can reproduce that in the classroom. And that then also helps you to have more confidence when you get into that test situation, of understanding what it is you're being asked and why you're being asked it. [00:40:13] Speaker A: Sure. So it's explaining the why, making that real world connection with, well, we're doing this because ultimately it will, you know, equip you better in these contexts. And it's also sort of maybe making it less mathematical and more open, and less drill-the-test and more, you know... [00:40:32] Speaker B: Part of your life. [00:40:33] Speaker A: Yeah, love that. Assessment as part of your life. Who'd have thought it? Brilliant. Okay, good. Well, earlier I think one of you did slightly mention two very interesting letters which are unavoidable, I think, in English language teaching at the moment.
And those two letters are AI. Okay. You know, technology, AI. It's everywhere. In terms of how that kind of affects testing, we know that AI is on the up. AI tools have become very common in higher education contexts. Our students are savvy, they're using them. Should assessment institutions be assessing students on their ability to use AI tools ethically and effectively? Sara? [00:41:21] Speaker C: I think the first thing to say is a lot of higher education institutions are still grappling with this themselves. The rise of AI is inevitable, I think, in higher education. And it's being used in lots of different ways, sometimes very creatively by students as well as by their lecturers, actually, in terms of developing materials for the classroom, etc. So I think it's an evolving conversation about how AI is used within institutions and how to get the most out of what is inevitably now becoming part of the classroom experience in higher education. And so the test itself needs to think about where and how it can support individuals with the development of suitable AI skills and also how AI can be used as part of the test itself. So we touched earlier on the shift towards remote proctoring. So being monitored by a human or an AI in a test taking experience, is that an appropriate way to monitor a test taker? Should we be reverting back to more traditional forms of invigilating test takers? Or has AI got an effective part to play in the monitoring of how a test taker is performing? So there's a sort of use of AI as a tool in managing the test experience itself and then potentially in thinking about how it can develop the skills of the test taker to utilize AI in an effective way or prepare them for the use of AI in a higher education setting. But I think it's an evolving conversation and I don't think that as test providers we've really got answers to that question yet. [00:43:15] Speaker A: Yeah, I mean, the idea of an invigilator bot is very appealing. 
I suppose it kind of makes it easy, but it doesn't sound very human or humane, in the way we talked about earlier. [00:43:25] Speaker C: It's been quite controversial, I think, in a lot of testing environments where perhaps there was a shift towards that kind of invigilation, particularly during the COVID years, where individuals couldn't physically go to test centres and sit a test in a normal way. But of course there has been evidence of some malpractice, and so some providers have shifted back towards in-person invigilation. And I think the question is around how you can use hybrid models that still involve a human being and recognise that it's a human experience to take an assessment, but use AI ethically as a tool as part of that experience, to perhaps enable it to be a more convenient or a quicker experience. [00:44:15] Speaker A: Great. So, Anthony, will future testing frameworks involve assessment of, say, prompt engineering, critical evaluation and ethical judgment with regards to AI? Or is that a bit of a pipe dream for the future? Are those the things we'll be assessing in the future? [00:44:33] Speaker B: Well, likely, yes. AI is changing the nature of communication, so we can talk about what AI can do for the testing situation. You might interact with an AI agent in a test to simulate interaction with other humans. But that's only a very small part of the story. Exactly how AI is going to change our lives is still very early days, and it's difficult to predict, but it's clear that it will, and other technologies as well. So if we're trying to simulate the real world in our assessments, and the real world is made up of us interacting every day with AIs, then that has to be part of the test experience as well. And the nature of test tasks will shift to involve interaction with AI. But also what tests are setting out to do will change, because of new technologies, because the world beyond the test is changing.
So what we're trying to predict about that world, and the kind of information we're trying to get from tests, is going to change as well. It's unavoidable. [00:45:52] Speaker A: Great. So very interesting to hear you talk again about the world beyond the test and the idea that maybe one goal of the test is to simulate that world. I mean, is that ever possible? Sara, will we ever get there, where the test is a perfect simulation of, ultimately, what needs to be assessed and what students need to do? [00:46:14] Speaker C: I doubt that it can ever be a perfect simulation, and it's probably a very difficult task for an English language test to do that. But I think there could be ways in which you could break down those real world experiences and think about, well, are there elements of that that you can simulate in a test environment? For example, in the Oxford Test of English, we have task types that do simulate aspects of real world experience, like looking at different source texts, reading those, summarising those, and that is an example of a task type that is aimed at simulating a real life experience. So I doubt it can be a perfect replication, but we can certainly move some way towards that as we think about the new ways in which we can develop test questions that replicate real world experience. [00:47:15] Speaker A: It's great to hear that already, say, in the Oxford Test of English, we have things which are managing that simulation, moving us more towards that possible future. Anthony, have you got anything else to say, maybe specifically with regards to the Oxford Test of English, about how it tries to simulate, how it tries to create that context? [00:47:35] Speaker B: Yeah, I mean, every test tries to simulate the world in some way. Even the most abstract tests, they're still trying to put people into a language use situation where they're having to make decisions, very often grammatical decisions or decisions about a choice of vocabulary.
The difference with something like the Oxford Test of English is how it tries to integrate a wider view of communication. So it's not just looking at grammar and vocabulary and thinking that's language; it's looking at language use more broadly. And can you activate your language knowledge and your language abilities in order to be able to integrate ideas from these different sources? So can you read different texts and pull together the information from those texts, using all of your knowledge of the language and, of course, aspects of communication, in useful ways? And yeah, we can develop further along that road. So we've talked about some of these mismatches between the authentic university experience and testing experiences. So it's looking for task types that can bridge those gaps and introduce more of that variety. Obvious areas are around the use of multimedia, so integrating video clips and that sort of thing into test tasks. That's worth exploring, but also how we maybe integrate students' knowledge of language more generally, including their home language and other languages they know, in areas like collaborative communication. So they're maybe not only relying on English, but seeing how they can use all of their communicative resources, gesture, other languages, in order to communicate effectively, to achieve something where the outcome may be an English language product, but where many other languages may go into producing it. [00:49:41] Speaker A: Okay, got it. Wow. So that's a very enticing proposition. We talked in the beginning about how we seem to be at a crossroads. Maybe we're always at a crossroads, but we definitely seem to be at one. What does the future hold? How can we move on from this crossroads? I mean, Anthony, you've given us a glimpse there. I mean, where will we be in five years, in 10 years? Will it be a completely different kind of assessment landscape? A more optimistic one, a more diverse one, a more encouraging one?
[00:50:12] Speaker B: That's why we're at a crossroads, I guess. It could go either way. [00:50:17] Speaker A: Hopefully not backwards, though. [00:50:18] Speaker B: Well, it's always going backwards. [00:50:19] Speaker A: That's your pendulum idea. And I should explain that, in case people, you know, didn't know: a pendulum is obviously that big sort of swinging movement underneath an old-fashioned clock, or a sort of Edgar Allan Poe story, 'The Pit and the Pendulum', where something moves from one side to the other, and it's more of a kind of continuum that goes from left to right. So are we at a continuum? Is it a continuum or is it a crossroads? [00:50:43] Speaker B: Well, one of the things that new technology has done for us is it's made it easier to present shorter and more efficient tests, and that has encouraged people to move back to that more psychometrically driven form of testing, with lots of multiple choice questions and fill-in-the-gap and read-back-a-sentence, those sorts of activities that resemble what we were doing back in the 1970s. And I suspect there will be a strong reaction again from teachers against that approach to testing. But that may be yet to come. So that's one possibility: that we simply swing back to the 1970s and go back to that form of testing, because it's very efficient and it gives people quick answers. But I don't think that answers the needs of the changing nature of communication. So every phase in assessment over the years has been to answer some kind of social question. So for the ancient Chinese, it was about selecting the most able people, or the best, the most learned people perhaps you should say, to become members of the civil service. And the test was introduced as a competition, and the people who won the competition got the top jobs in the civil service. That was then the thinking behind the examinations in the 19th century in Europe.
That was replaced 100 years ago by this approach to testing, where the question then shifted from picking out the best people to just discovering, really, what everybody in society could do. Because education was opened up to everyone. It wasn't just for the elites anymore, it was for all members of society. So testing was introduced as a way to find out what everyone could do and to allocate them to the most appropriate form of employment. We're now at the situation where higher education is being opened up to everyone. And I think, in the same way as the new technologies of printing and audio recording did for the psychometric phase of assessment, the new technologies are allowing us now to much better simulate the experience of work and of study, in ways that we were never able to do before. And that gives new opportunities for assessment, maybe not just to tell us about what the individual test taker can do, but also about the relationship between individual test takers and the experience of work or the experience of university. And that gives us the opportunity not just to pick out people to do a particular job, but to adapt the job to the people. So we can use assessment to support more assistive and supportive technologies that make a match between individuals and experiences of work or of study, that take account of who they are and of how that relationship between the workplace and the individual can be as productive as it can be. [00:54:24] Speaker A: Wonderful. So much you brought to the table there in terms of making me think about assessment. I mean, suddenly assessment is democratic. It's about equity, it's about adaptability, it's about being assistive. [00:54:38] Speaker B: Who knew? [00:54:39] Speaker A: Amazing. To finish, Sara, a bigger question maybe for you. I mean, where do you see the biggest opportunities for advancing English language assessment worldwide?
[00:54:51] Speaker C: That's a very big question. [00:54:52] Speaker A: Yeah, give me a big answer. [00:54:54] Speaker C: Well, I think that Tony actually picked up on quite a lot of themes that could have relevance for those big opportunities in English language assessment worldwide. But I think it is all about how we match, how we assess, the individual's ability to be successful in their chosen career or in their chosen study pathway, et cetera. And technology has a massive role to play in facilitating that process and bringing the test taker closer to understanding the next stage in their career or in their life. So I think it's really about how we enable the test taker to be able to prove that they've got those skills, but also, importantly, the test user, the organization that's going to use those test results, to differentiate in a fair way between the abilities of one candidate and another. So it's not only about the test taker experience; it's about the purpose of the test and also the test result user, and how we bring all of those aspects together to give a much fairer and more accessible experience for the test taker. [00:56:14] Speaker A: Wonderful. Fantastic. Bringing it all together. You mentioned in the beginning, you told us that lovely story about you and the Highway Code. Okay. Over the weekend I was driving in the Cotswolds and I saw a sign. Okay. And I'm going to test you on it. It was triangular with a red border and it had a hedgehog in the middle. What does that mean? [00:56:38] Speaker C: Okay. Well, I do know what that means. I'm not sure it was one that I was tested on in primary school, but that is: be careful of the hedgehogs. You've got to be careful because you can't drive over those. They're a very precious resource in this country nowadays. And drive carefully. [00:56:55] Speaker B: 100%. [00:56:56] Speaker A: Fantastic. Yeah. Much beloved in the Cotswolds, as are the hedgehogs, I think.
Well, listen, it's been fantastic today. There's an enormous amount of knowledge and breadth of understanding in you two, and it's great to be able to share that with our listeners and our viewers today. So thank you so much for everything you've said today, and we will move forward wiser and much more aware of the landscape of assessment. Thank you so much. [00:57:18] Speaker C: Thank you.
