The Impact of Assessment on Teaching & Learning: Assessment for Learning & The Role of AI

Episode 4 January 15, 2026 00:32:49
Talking ELT
Show Notes

How can assessment be used to support learning, not just measure it?
This final episode of our series on the impact of assessment on teaching and learning explores alternative assessment methods like portfolios, peer feedback, and gamification — and how AI is reshaping the future of feedback and test design.
Jo Szoke and Nate Owen also tackle ethical concerns and the importance of keeping humans in the loop.


Episode Transcript

[00:00:06] Speaker A: If that division is removed and actually the assessment is sort of weaved into weekly activities and then you're getting a formative score on the basis of the activity throughout the week, then you're being assessed without being assessed in such a way that removes the anxiety associated with the session. [00:00:23] Speaker B: Sometimes there might be dips in performance and that can still contribute to overall improvement and progress. [00:00:31] Speaker A: It comes back to what the assessment is for because if you're not using it for any particular purpose, then I mean, maybe you're gearing towards personalized feedback, in which case, okay, maybe a grade. [00:00:43] Speaker B: Is not necessary, you ask questions instead. So is this the right way to say this and let them think about what the correct answer might have been? And with that kind of delay you are engaging them further. Create more learning opportunities and then you can give your assessment. [00:01:04] Speaker A: We don't really place restrictions on teachers using AI, so therefore to what extent should we really place restrictions on students using AI? [00:01:22] Speaker C: So welcome back. Thank you so much. This has been really fascinating, all these different aspects of assessment and learning. So I think we're thinking now much more about the teacher in the classroom, how they can design assessment so that it helps learning. Which I think comes under the heading of assessment for learning as opposed to assessment of learning. I don't think we need to define that. People know that. I think, well, let's hope. [00:01:50] Speaker B: But I heard that there's like there's an alternative approach as well, assessment as learning. So now instead of two prepositions, we now have a third one. And I, and they, I forgot who had this idea. 
But assessment as learning is peer feedback and all kind of bringing feedback and student led feedback into the picture and then that counts as learning as they are giving feedback on and presumably part. [00:02:20] Speaker A: Of the assessment journey as well. Count towards their formative scores. [00:02:24] Speaker B: Yes, that's how this trichotomy. [00:02:28] Speaker C: Interesting. So just trying to think assessment for learning. To me the image that always comes to mind is tasting soup and tasting and testing sounds similar, but actually completely different. Well, sort of related but a chef in a kitchen is tasting the soup in order to decide whether they need to do anything more before they produce that. So it has an impact on what happens next. Whereas the customer sitting at the table and tasting the soup, in that sense there's nothing else that's going to come after that. So that's the, that's your summative assessment. [00:03:13] Speaker B: Where it's maybe a complaint. [00:03:17] Speaker C: That's good. Yeah. So, so, so now we're thinking about what can teachers do to improve the way they use assessment to help learning. So are there ways that we can design our assessment approaches in the classroom so that actually helps learning? I mean, where do we start with this? It feels like it's a massive topic. [00:03:43] Speaker B: Because again, we come back to assessment literacy. Without that, they cannot really design assessment approaches or activities. But then we also bring in the picture, like bring in alternative assessment into the picture and gamification. [00:03:59] Speaker C: Okay, tell me about alternative assessments. [00:04:02] Speaker B: Well, alternative assessment are. Is basically a collection of methods that. These are usually informal testing methods, continuous testing methods or assessment methods that let students focus on the progress of learning. And they. 
And some examples might include portfolios, learning journals, lots of peer feedback, self-reflection sheets. And gamification can also be an alternative method because it keeps students coming back to the same topic and also the fact that they are motivated by something else. So it's kind of assessment is wrapped into this game which can then also improve collaboration because if they play a kind of Minecraft sort of game, they learn different skills while at the same time they are being assessed. [00:05:05] Speaker A: So yeah, what all of this does is just break down that distinction between assessments and learning very specifically. I mean, in a traditional approach it might be the case that you're learning throughout the week and then the teacher announces on a Friday afternoon. Right. It's time for the test. [00:05:21] Speaker C: Right. [00:05:21] Speaker A: And of course, if that division is removed and actually the assessment is sort of weaved into weekly activities and then you're getting a formative score on the basis of the activities throughout the week, then you're being assessed without being assessed in such a way that removes the anxiety associated with assessment. [00:05:39] Speaker B: Yeah, so yeah, you get continuous feedback. You can always improve on that. [00:05:45] Speaker A: So assessing, because we're all accountable and so we haven't really mentioned that at all. The notion of accountability and the accountability agenda, which of course is something that people are quite adamantly against, quite legitimately. But if we can still do the assessment, such that we can report progress to outside stakeholders, but do that in a way that avoids some of the more negative connotations of assessments, that I think is what this agenda is geared towards. [00:06:12] Speaker C: Yeah, just to pick up on accountability and the accountability agenda, making people responsible for. [00:06:20] Speaker A: What they do and answerable for what they do. 
[00:06:23] Speaker C: So the teacher in that sense, whether. [00:06:24] Speaker A: It be the teacher, whether it be the head teacher, whether it be the policymaker, I mean, this is something that I would argue as a social change that has happened, I mean, since society has become more data driven in general. [00:06:38] Speaker C: Right. [00:06:38] Speaker A: Again, since probably the 60s and 70s. Yeah, yeah, yeah. Now everything is data driven and decided on the basis of data. And people are judged on their performance metrics. [00:06:48] Speaker B: And that's why alternative assessment is such a difficult concept to introduce, because how can you measure a learning journal? How can you measure a portfolio? How can you measure something that. And a teacher can see progress, maybe, and progress is not linear, so you. Sometimes there might be dips in performance and that can still contribute to overall improvement and progress, which the teacher can see, but then the metrics will show something different. So alternative approaches are great for lowering student anxiety, making them involved and invested, but it might not fit into the system that focuses on grades and numbers. [00:07:34] Speaker C: Yeah, I mean, that reminds me of the thing that had triggered for me my interest in portfolios was recordings of speaking, students recording their speaking because it's really difficult to capture on a metrical system. It's partly what I was saying at the beginning about giving someone an A or a B plus. It's kind of a little bit difficult to say what that means. But if you give. If someone records their speaking on a task at the beginning and at the end of the semester, they can see that difference. They can't explain it necessarily, but it's immediately clear to them that they have made progress. [00:08:11] Speaker B: But then again, it's kind of intuitive and it's based on intuition. They, as you said, they can clearly see the difference. 
But how do you put that into words and put that into metrics? And it's so difficult because as a teacher, you want learning to take place and you want some sort of change. And sometimes it's not even visible. It's visible, but it's not quantifiable. And that's a difficult question. [00:08:42] Speaker C: Which takes us perhaps onto the question of grading. So when we talk about assessment, often people assume that means giving a grade, but we can distinguish between. You can assess someone, evaluate how well they've done it, but you don't necessarily have to give them a grade. And I think there's a movement, I hear it in the US but it may be elsewhere, for gradeless assessments. [00:09:05] Speaker B: Yeah, ungrading. [00:09:07] Speaker C: Ungrading and things like that. Right. I mean, is this something that you have some sympathy with? What do you feel about it? Assessment? [00:09:17] Speaker A: Well, as an assessment specialist, I have very mixed feelings about this. I think if you're ungrading something, then are you even assessing it at the end of the day? Potentially, because it comes back to what the assessment is for. Because if you're not using it for any particular purpose, then, I mean, maybe you're gearing towards personalized feedback, in which case, okay, maybe a grade is not necessary. Okay. But then again, that only is very geared towards that individual. So that would be a form of assessment that might be beneficial within the classroom, low stakes assessments, but that sort of assessment might not necessarily be of use to other stakeholders. So people who need to make decisions, or university admissions offices who look at this and think, what can I do with this? Really? How, how does this help me do my job? [00:10:04] Speaker B: Yeah, because I was thinking like, yeah, I kind of like the idea. But you are absolutely right. If there are. 
Because I'm thinking about my course, my lesson, the lesson or the, the individual student I'm teaching. And in that context, ungrading could work, personalized feedback could work because I can explain what I have in mind, how they are doing now, what we want in the future. But other stakeholders need to see clearly what to do with that person. And they don't really have a picture of that person. [00:10:37] Speaker A: I mean, there's a slight misconception again, which I think goes back to this sort of transition from norm referencing to criterion referencing and the corruption of criterion referencing that occurred in the 1970s. A grade or a number doesn't really mean anything by itself. It only really takes on meaning when it's applied to its descriptors, its framework, which is what we get. [00:10:57] Speaker B: And in comparison. [00:10:58] Speaker A: Exactly. Yes. And so if you remove the grade, it's quite probable that you're not removing the framework or the criteria which you're using to provide feedback to those students. So there is a risk that we're trying to create more equitable results, but actually just removing the grade doesn't really necessarily achieve that. Yeah. So we just need to be careful that we're not throwing the baby out with the bathwater and also amending or adjusting assessment in such a way that actually makes sense. [00:11:26] Speaker C: Yeah, but I mean, I would argue that in the course of the learning journey, most of the assessments are not going to external stakeholders. So I agree when you talk about going to external stakeholders, it's a different situation in the classroom. Most of your assessments are between teacher and learner. And sorry, I can't give a citation for it, but I was reading about research recently that was showing that as soon as students see the score, the grade, they switch off to the feedback. 
But that if you give the feedback first and you delay the grade until later, they actually take in the feedback, and they do better when the task is given to them two weeks later. [00:12:19] Speaker B: Yeah. I wanted to say. [00:12:20] Speaker C: Were you going to say the same thing? No, no, no. It's perfect. [00:12:23] Speaker B: No, I just like the moment you mentioned it, I. This popped to my mind, like, delayed feedback. And that's. Yeah. And then when Stu. We also. When I do teacher training, we try to not, like, tell teachers to do this, but just make them see the benefit of delaying feedback. Because it's the same thing that when students see the test and they see all the. The color and the red, which is also a thing, whether we want to use red or another color. I'm not. I mean, I. I think it doesn't really matter because if it's pink, then pink is going to become the terrible color. But the moment they see it. Yes, they switch off. And then if you can delay feedback, also if you can do indirect. [00:13:11] Speaker C: Delay grading or delay. [00:13:12] Speaker B: Sorry. Well, both in a way, because delay feedback by giving indirect feedback or implicit feedback, which basically means that you are not giving feedback like, this is wrong or this is good, or you underline something. You circle something. You ask questions instead. So is this the right way to say this? Or. Or you just use color coding and you just highlight things and let them think about what the correct answer might have been. [00:13:46] Speaker C: Okay. [00:13:46] Speaker B: And with that kind of delay, you are engaging them further, create more learning opportunities, and then you can give your. [00:13:57] Speaker C: Okay. [00:13:57] Speaker A: There is a sort of movement, I suppose. 
I mean, Oxford is very much attempting to do this with the notion of adaptive learning, really, which is very much around that if you're performing something or doing something on a computer, and then AI can probably assist with this to a certain extent already, and will continue to do so in the future, just providing this, as you said, implicit feedback. Is this the right word here? Or just underlining something for further consideration? And because it's on a computer interface, it can be amended or on a device, it can be amended quite easily and quickly by the student. And that kind of corrective feedback. And then, you know, with your. Your personalized dashboard that keeps a record of the kind of writing or performance that you're doing. Again, this is all, you know, a potentially positive use of AI in assessments. [00:14:43] Speaker C: Yes. [00:14:44] Speaker B: Yeah. I actually, I used AI already when we were talking about giving feedback and, like, how difficult or easy it is. I find it that it's kind of difficult to put your feedback into words and to make it constructive and not harsh. So sometimes I ask my students to use AI to write feedback for them, analyze that kind of language, which is typically very supportive, and very kind and constructive. So let's use some of these chunks and methods and try and write our own feedback after that, which could be, again, a good use of AI. However, sometimes AI is too helpful or not helpful, but just too supportive and too kind. [00:15:31] Speaker C: Right. [00:15:32] Speaker A: There is a yes bias with AI that it will consistently give you positive feedback whenever you ask it. Even if something such a great question, it can be, yeah, yeah, or even just, you know, you say something which is blatantly wrong and it won't explicitly disagree with you, it'll find a way to agree with you. But say, have you thought about this? 
[00:15:50] Speaker B: But you know, it's super difficult because in teaching, in classroom teaching as well, we tend to come across this problem, like, how do you give feedback, like not written feedback, but spoken feedback to students? And teachers find it super hard to be still encouraging. So what if a student says something blatantly wrong and like, ah, that's an interesting idea. And then how do you do. And it's a very similar thing. You want to be encouraging, but how do you say that it's actually a terrible idea. So there's. It could be something that real humans work on as well. Hi, I'm Jo Szoke. If you want to discover more about how to use tests and assessment as tools for growth in the English language classroom and the concept of positive washback in more detail, download our position paper that I co-authored alongside other amazing contributors called The Impact of Assessment on Teaching and Learning: Creating Positive Washback. In this paper, you can explore research on how testing and assessment shape classroom practices and find practical guidance and advice on how you can prepare your students for exams while also addressing their broader language needs. Download the paper via the link in the description and enjoy the read. Thanks. [00:17:07] Speaker C: Last week I was listening to someone who had been doing some research into comparing teacher feedback on pronunciation to AI. I think it was ChatGPT, but I'm not 100% certain to give feedback on their pronunciation. And as you might expect, the teacher was giving better quality feedback. They understood better the context of the student and the student's first language and how that might be interfering and things like that. But of course, the advantage of the AI tool was that it was giving instant feedback to everybody at the same time. And it was personalized exactly to what. What they had tried to say. So there was kind of a balance between those two. 
[00:17:53] Speaker A: There are language learning apps in China that do exactly this, whereby they're very clever. I mean, they, they have little almost they can be excerpts from films or TV shows like Western ones, which are done in English. And then what the learner does is try to repeat what they hear in the clip and then the AI will judge what they say against the original audio clip and it'll provide feedback on where they need to work on their pronunciation. And then it's also open to other learners as well, who then judge and create, like this kind of leaderboards of learners. So you got this really, really interesting. [00:18:28] Speaker B: I actually tried one of. Not the Chinese apps, but I tried one app that promised to help with pronunciation and pronunciation coaching, and I specifically mispronounced everything. And they just kept saying, oh, well done and good job. Maybe. Yeah, it's not there yet. [00:18:49] Speaker A: Not quite there. [00:18:50] Speaker C: It's not very sincere. That's the problem. [00:18:52] Speaker A: Yes. [00:18:53] Speaker C: It doesn't really believe what it's telling you. [00:18:55] Speaker A: Yeah. So that's what we call adversarial approaches. I'm very much involved in AI automated scoring right now, and one of the ways that we test the suitability of scoring models is to sort of feed them what we call adversarial examples of language. So it could be a real example of language that we've changed the order of all the words so it's the same words, but the order is completely jumbled, it's nonsense. And then you try and see, okay, to what extent is the AI capable of assessing that at the same level or you misspell all the words? You say it's the right words in the right order, but the letters have all changed positions. So there are lots of different sort of automated adversarial techniques that you can use. So obviously that has not been through that particular testing approach. Yeah. 
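The two adversarial checks described above — the same words in jumbled order, and the right words with their letters shuffled — can be sketched in a few lines. This is a hypothetical illustration of the idea only, not the actual tooling any scoring team uses; the function names are invented for the example.

```python
import random

def shuffle_words(text: str, seed: int = 0) -> str:
    """Adversarial sample 1: same words, completely jumbled order (nonsense)."""
    words = text.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

def scramble_letters(text: str, seed: int = 0) -> str:
    """Adversarial sample 2: right words in the right order,
    but the interior letters of each word are shuffled."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if len(word) > 3:
            middle = list(word[1:-1])
            rng.shuffle(middle)
            word = word[0] + "".join(middle) + word[-1]
        out.append(word)
    return " ".join(out)

original = "The chef tastes the soup before serving it to the customer"
# A scoring model that rates these adversarial versions as highly as the
# original is not actually measuring language quality.
print(shuffle_words(original))
print(scramble_letters(original))
```

If the model's score for the adversarial versions does not drop sharply relative to the original, that is evidence the model is keying on surface features rather than meaning.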
[00:19:41] Speaker C: So we're talking, funnily enough, we come on to AI, which. Which is a surprise. So we're thinking now maybe about what the future is and perhaps how AI and other technological changes might impact on assessment and assessment for learning. I don't know if you've come across the. I think it's called the vicious cycle, the vicious learning cycle, where a teacher thinks, I've got to set an assignment, go to ChatGPT. Oh, yes, give me an assignment. They give it to the student. Student then goes to ChatGPT to produce the assignment. They then give it back to the teacher and the teacher feeds into ChatGPT. And so no human has actually been involved in that cycle. Is that the future that we. That we face? [00:20:34] Speaker A: Well, is it? If it is, we're in trouble. [00:20:36] Speaker B: Yeah, but if it is, I mean, I think it could be the future. If we don't change our approach to assessment and the way we assess and way we design assessment. And I honestly, I'm so happy that AI has caused this disruption in the system that. Because it's been there and it's an age old problem. We were just talking about it in the break and like it's, it's been there forever. [00:21:03] Speaker C: But finally when you say it's been there, what are you thinking? [00:21:06] Speaker B: What I'm thinking is like cheating and students trying to find a quick solution, but also at the same time teachers trying to find a quick solution because they are doing the same thing. We are trying to just cut the time we sort of waste on things that we don't see the value of. And the most important part in this is that both teachers and students should see the value of what they are doing, why teachers assess and why students are being assessed and how this is done. [00:21:42] Speaker C: Right. Right. [00:21:44] Speaker A: I think increasingly AI is becoming pervasive in all aspects of life and we don't really place restrictions on teachers using AI. 
So therefore, to what extent should we really place restrictions on students using AI? What I think will happen, if it's not happening already, maybe it is, will be teachers potentially encouraging students to use AI, but in very particular ways to report how they're using it and to think about, well, okay, how did that help you? How did it not help you? I think this is all sort of happening. Yeah. [00:22:18] Speaker C: It's already happening? Interesting. Give us some examples. [00:22:22] Speaker B: I definitely wanted to. So I already introduced this into my assessment and feedback class, for example, students. So teacher trainees need to design a test with the help of AI. They need to reflect on the prompting process. So how they, they, what was their first prompt, how they specified the target audience, how they specified the, the activities they want. And then the most important part is that they should correct the AI generated results. So if they see any inconsistencies, if they see that an activity is above the level they requested, they should reflect on that. That shows that they are ready for this new age. Because I'm pretty sure they are going to design tests with AI, but they need this assessment literacy that helps them evaluate the AI generated result and redesign or rework any of the elements that don't fit. So I think it could be. So that's for like, that's teacher trainees trying to use AI in their work, but also teachers helping students use AI in their learning because they are going to use AI in their workplace and in their future profession. So we need to help them use AI well and that could be a way to do it, like asking, like be using it as a collaborative partner and coming up with more authentic assessment tasks. [00:23:55] Speaker A: Yeah, hopefully, yes. There is a bit of a flip side to this, which is I think we're seeing a little bit of it already, which is that assessment moves away from authenticity. 
I think there is a risk that that can happen towards what AI can do and then so it works within the limitations of AI. So tasks become a little bit more formulaic because AI can handle formulaic but can't handle the creativity. So what I'm thinking of is kind of AI marking. Yes, AI marking. There's also the question around AI question generation as well. So churning out lots and lots of questions and the quality of those questions is not particularly high. I mean we've been investigating this and then you can always use AI to start generating questions. But we have not reached a point yet where AI can generate questions of the same level of quality and of sufficient quality to be included in high stakes assessments. It's just not there. It requires human mediation and content moderation to bring them up to a required standard. And there is a potential risk around the kinds of tasks AI can handle as well. We risk falling into a trap whereby we're only using almost like listen and repeat style tasks. Because AI can handle the automated scoring of listen and repeat tasks extremely well, but it cannot handle these kind of free form conversational responses. [00:25:19] Speaker B: Well, yeah, that's the danger of using automated scoring and AI powered scoring. You're just going to either you expect your students to communicate in a robotic manner because they need to wait until they finish a sentence and then start a new one, or just improve scoring or cut out AI from the picture there. But I don't know. But probably the third option is not feasible because that's the way forward you want to increasing. [00:25:53] Speaker A: Well, the general state of the art at the moment is a kind of hybrid model whereby you use AI where you can. And automated scoring models usually provide not only a score, but a percentage of certainty around that score as well. 
And when that falls below a particular threshold, that sample is then sent to a human moderator who will assign the score to that sample, which if a model is particularly sophisticated, can then be fed back recursively into the model. So you get this kind of ongoing model training technique so that it's constantly improving and constantly learning itself. But yeah, this is kind of the ideal at the moment we're working towards. [00:26:34] Speaker C: So there's partly the design of the assessment itself, but there's also an element of being able to build up profiles of learners through collecting ongoing information, which could lead to something more useful to the learner because it's, there's a learning analytics element to that which is identifying strengths and weaknesses that can be personalized. I mean, do you see that as being part of the future of. [00:27:03] Speaker B: Sure. It's already here with us because if we use all these like Kahoot and, and all the other like for example, Quizizz, which is my favorite, but now it's not Quizizz anymore because it's changed its name. But they, they have been doing this for years now. We've got learner analytics. We can see strengths and weaknesses. You can also see, after every quiz, which question was more difficult, which question was easier. I think it's highly valuable. But as teachers we shouldn't get caught up in all the data and then we should also see the humans behind all this, however, because we also need to consider different contexts. So I know that in certain parts of the world you have huge classes like sixty, seventy, eighty or even a hundred students. Their learning analytics or sorry, learner analytics can be super useful because it's impossible to see your individual students and how they are progressing. But with this you can at least see the cases where you need to step in and then invite or find those particular individuals. [00:28:13] Speaker A: Yeah, yeah, yes. 
And we have to be mindful of those parts of the world which don't necessarily even have stable Internet connections. We can talk about AI and all the wonderful things we can do, but there's still going to be a role at the end of the day. I mean, course books still sell extremely well and a lot of teachers around the world still rely on these old paper course books. [00:28:35] Speaker B: Oh, and I have to mention one more thing because AI literacy is also important because when teachers have this idea that oh, I can feed in my students essays and other work into AI chatbots and get feedback. Technically they shouldn't be doing that without asking for student consent. Even then it's not the best thing to do because they are also training the model on student work. They might be sharing data that's in these essays. So it's. [00:29:10] Speaker C: Yeah, they're definitely. [00:29:11] Speaker B: We need to deal with this. [00:29:13] Speaker A: We should be mindful. You have to remember a lot of these AI models were trained on native speaker, to use that term, samples of English language that it scraped from the Internet. This is not learner language. So if you're asking it to judge student writing or a speech extract against say the Common European Framework of Reference, they're not particularly good at that because they don't really know what the CEFR is or they might be aware of it because there are references to the CEFR in the training data, but it is not capable of producing output at certain CEFR levels. [00:29:47] Speaker C: Yes. [00:29:47] Speaker B: Also, we have actually tested this that we wanted to. At my university. We wanted to use AI because there's an exam, a standardized essay exam. And some teachers have thought of this idea, like, let's make evaluation quicker, easier by using AI, or. 
But since it's based on probabilistic production of language and results, every single time you feed in the essay, it's going to give some, like a different result. It's going to highlight different mistakes, different aspects. So it's not reliable. Different every time. Exactly. So unfortunately, at some point you will have to start questioning its response, whether you can actually rely on it. If every single time it gives you a different response, then which one is going to be the one? [00:30:46] Speaker C: Yeah, yeah. I think. Was it Ethan Mollick, I think, who said we should think of AI tools as being a drunk intern, someone who's helpful but not reliable. [00:30:59] Speaker B: Absolutely. [00:31:01] Speaker C: So what I'm taking from a lot of the discussion is the importance of the human input, that outsourcing things to AI without just thinking it does it faster is going to lead to problems. But if we are collaborating with AI, there are possibilities that are difficult at the moment without AI. I think there's a positive, a cautiously. [00:31:24] Speaker A: Positive view of the future, I would suggest. Yeah. To summarize the discussion, that's about what we could say. [00:31:31] Speaker B: Yeah. So the human in the loop is super important, but we need to train the human as well as the AI. [00:31:38] Speaker C: Everyone needs training, which is good news for trainers out there. Yes. So thank you very much. I think it's been a really fantastic conversation. I've really enjoyed it. Thank you. [00:31:47] Speaker A: Thank you. [00:31:48] Speaker C: Thank you. And that's a wrap on series 12 of Talking ELT. We hope this series has sparked new ideas, challenged assumptions and offered practical insights into how assessment can better support teaching and learning. Huge thanks to our guests Jo Szoke and Nate Owen for sharing their expertise, experiences and thoughtful perspectives. 
If you'd like to explore these ideas further, click on the link in the description and download the Oxford University Press position paper that inspired this conversation, The Impact of Assessment on Teaching and Learning: Creating Positive Washback. To our Talking ELT community: thank you for listening and for being part of another meaningful and relevant topic impacting the ELT landscape. Don't forget to like and subscribe if you want to learn more about this issue and others like it. We'll see you next time on Talking ELT.
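The hybrid human-in-the-loop scoring model described in the episode — the AI assigns a score plus a certainty, and low-certainty samples are routed to a human moderator whose scores can later feed back into model training — can be sketched as follows. This is a minimal illustration of the routing idea only; the threshold value, the field names, and the sample data are all invented for the example, not taken from any real scoring system.

```python
from dataclasses import dataclass

# Assumed cut-off for the sketch; real systems tune this empirically.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class ScoredSample:
    text: str
    score: float      # the model's automated score
    certainty: float  # the model's confidence in that score, 0.0-1.0

def route(sample: ScoredSample) -> str:
    """Accept the automated score when certainty is high enough;
    otherwise send the sample to a human moderator. Human-assigned
    scores could then be fed back to retrain the model."""
    if sample.certainty >= CONFIDENCE_THRESHOLD:
        return "auto"
    return "human_review"

samples = [
    ScoredSample("a fluent, on-topic essay", score=4.5, certainty=0.93),
    ScoredSample("an unusual, creative response", score=2.0, certainty=0.41),
]
for s in samples:
    print(s.text, "->", route(s))
```

The design point the speakers make is visible in the second sample: it is exactly the creative, free-form responses that tend to fall below the confidence threshold, which is why the human moderator stays in the loop.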
