Artificial Intelligence: Dangers & Opportunities

Episode 4 | October 24, 2023 | 00:20:12
Talking ELT

Show Notes

Is artificial intelligence a ticking time bomb or a road to new opportunities? Explore the risks around data and privacy, the potential to reduce global inequality, and the dangers of super-intelligence.


Episode Transcript

[00:00:10] Speaker A: Hello and welcome to Talking ELT, the easiest place to learn about the big trends in language teaching. We're continuing our conversation about AI with Hayo Reinders and Ben Knight. And today we're going to be looking at the dangers that AI might pose, the opportunities it provides globally, and the psychological impact it might have on language learners. I want to start with the last point. Previously we talked about the impact of AI on assessment, which seemed to be quite large. [00:00:41] Speaker B: This is raising questions for me around how the student is going to feel in the moment, knowing that everything they do is feeding into the future in a much more holistic way. And the impact on the learner psychologically, because I think that's a completely different way of being assessed that people aren't... [00:01:01] Speaker C: It's a different way of being, well, yes, panta rhei, everything flows. I think this is back to the same point. The underlying presumption, which we're not operating on at the moment, and which is why we need teachers and other frontline people, not only engineers and computer scientists, to be in the loop, is that as an individual, I am empowered to curate, to manage, to interact with the data that pertains to me. And at the moment it's all unclear where your learning data, for example, resides, who has access to it, what happens to it, whether it's sold, and what have you. And I think this is a really fascinating point. We are now at a moment in time where there is an opportunity to move away from the current model, which is large corporations essentially trying to obtain as much data about individual people as they possibly can. That takes the power away from individuals, and I think, well, AI may be able to reverse that. [00:02:24] Speaker D: Yeah, and I think there is a kind of data literacy that's needed.
People need to understand how data is used and what different types of data there are. Because again, it's easy from an engineering perspective to say, for example, look, there's tons of data here, it can all be pulled together and used for different purposes, in a way which has unintended consequences. So, for example, we're talking about learners and teachers, but there are also teachers and managers. It is very easy to be providing data which can be used by a manager to make decisions about the effectiveness of a teacher which are actually unjustified. It can easily slide that way unless people are really thinking about what is good teaching. How does the data indicate that? In what ways do we interpret that kind of data? And not to automate the sending out of data about teaching to managers without understanding that we should be the algorithm. [00:03:36] Speaker C: Yeah, that's a slightly poetic way of putting it, but it can't be generated by data alone. And that means that even for the corporations, perhaps, that helped generate these algorithms, there is no control over that anymore, not even by the engineers who initially set up the system, because the algorithm regenerates itself largely without oversight. [00:04:09] Speaker D: Well, you saying that just reminds me of how we were talking earlier about how one of the great benefits of AI is being able to personalize things to the interests of the learner. So when you have that example which I gave, of a university student who is working in a particular area of mechanical engineering, you can target that; that's fine. But imagine you're talking about schoolchildren and their particular areas of interest, and all that information has been collected and analyzed. Where's that going? That could easily be misused. [00:04:49] Speaker C: Yeah. And I think a lot of teachers, myself included, we cannot oversee the consequences of all this data collection.
And it was one thing if there were people who could, and then you could decide whether you would trust them, or you might put mechanisms in place to oversee and regulate them, et cetera. But those people don't exist anymore. And that is one of the scary, unanticipated, unintended consequences of AI. [00:05:18] Speaker D: So while we're talking about doom, didn't you say that there was an academic... Yes, yes. [00:05:26] Speaker C: Let me read this out, then. You know, Chris, you have to promise us that you'll bring us back to happier lands. I will. Okay. Because it was a coincidence. Just last night, I was having dinner here in Oxford, and I was reading an interesting book, and it showed me a quote from Oxford philosopher Nick Bostrom, who wrote a well-received bestseller in 2014 called Superintelligence: Paths, Dangers, Strategies. So it's not specifically or exclusively about AI, but obviously superintelligence is something that people worry about as a result of developments in AI. So I'll read the quote to you: "Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear, we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens." And on that happy note... Excellent. Excellent.
[00:07:06] Speaker B: A wonderful way to end the series. [00:07:08] Speaker C: And probably life on Earth as we know it. Yes. [00:07:13] Speaker D: Well, there is, I think, a lot of discussion about this at the moment. So we know that we're using the tools to make decisions, or to help us make decisions. We can automate those decisions, and the more that we automate them, the more we risk losing control of what happens next. And so I think your point about the human in the loop is the key thing there, to avoid the bomb going off. [00:07:45] Speaker C: Yes, exactly. And it's interesting, just yesterday I was reading about another unanticipated phenomenon in AI, which is called model collapse. And this is quite interesting, because what it essentially entails, if I summarize it accurately, is that of course an AI system is based on, or works off, a large language model. And that model might contain information. I'll just make up an example. Let's say that the database contains information about 100 T-shirts, and let's say that 90 of them are blue and ten of them are yellow. If you leave the system be, initially it will just be able to tell you that the majority of T-shirts in the world are blue and that a minority are yellow. But at some point the model itself starts to become contaminated, and it will actually start to tell you that all T-shirts are slightly green. Okay. Right. Okay. And this phenomenon is referred to as model collapse. There are ways around it. One is that you get the AI to go back to the original database so that it resets itself. Of course, we're back to our earlier example of the teacher creating, or curating, a database or a model. Having a human in the loop might not be the most efficient. Right. But it might be more effective.
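The T-shirt drift described above can be sketched as a toy statistical simulation. This is a simplified illustration of the idea, not how an actual language model is trained, and all the names in it are made up for this sketch: each "generation" of the model is refitted to samples drawn from the previous generation's output, and compounding sampling noise tends to erase minority categories like the yellow T-shirts.

```python
import random

def generation_step(p_blue, n_samples=100, rng=random):
    """Draw a synthetic dataset from the current model's belief about
    p(blue), then refit the model to that dataset. Sampling noise means
    the refitted estimate wanders away from the original 0.9."""
    blues = sum(rng.random() < p_blue for _ in range(n_samples))
    return blues / n_samples

def simulate_collapse(p_blue=0.9, generations=2000, seed=42):
    """Repeatedly train each generation on the previous one's output."""
    rng = random.Random(seed)
    for _ in range(generations):
        p_blue = generation_step(p_blue, rng=rng)
    return p_blue

# Start from the true data: 90 blue, 10 yellow. After many generations
# of self-training, the estimate typically drifts to an absorbing state
# (all blue or all yellow) and the minority class is gone for good.
print(simulate_collapse())
```

Note that once the estimate hits 0.0 or 1.0 it can never recover, which is why the "reset against the original database" fix mentioned above works: it re-anchors the model to real data instead of its own output.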
And I think that there is a glimmer of hope for us as individuals, as human beings, and certainly as teachers: that we'll still be needed, not just to refresh, but also to check the models that the systems, behind the curtain so to speak, are working off. [00:09:31] Speaker D: Yeah. I'm going to go slightly off at a tangent here, because it just triggered a thought of an area that we haven't really been talking about when we talk about language learning. One of the huge challenges of language learning is the automation of the language competence in our heads, the ability to use the language at speed in real time. That's one of the most difficult bits. And obviously what we're seeing now is the ability to create safe spaces for practicing language ad nauseam, to get constant feedback in a way which is really difficult to achieve at the moment. So that's one of the areas that I'm really excited about: being able to practice speaking in an immersive situation and getting responses, getting feedback on your speaking, which is reliable and encouraging. I think that is going to be a big change. [00:10:43] Speaker C: Yeah, absolutely. And not just encouraging, but also appropriate to your level, relevant to your specific needs and interests. Perhaps "checkable", there must be a better word for that, by a teacher in the background, even though that interaction might not be happening in the classroom at a time when you're supposed to be working with the teacher. So, tremendous opportunity. [00:11:09] Speaker D: When I go visiting schools around the world, what you often find is that the problem is the teacher doesn't feel confident to really give feedback or to model speaking. Everyone says speaking is the most important skill, and yet that's the bit which they neglect because of lack of confidence. So that's a problem which I think can be overcome.
And I think we'll see a lot more of learners developing their speaking skills in an AI-controlled environment. [00:11:42] Speaker C: If you'll allow me, Chris, that might be a very nice segue onto a different set of topics which I think really deserve some attention, because you've mentioned "around the world" and the global situation. I think there are at least two really important issues to discuss here. One is the question around the availability of technology in the rich world compared with the Global South, for example. So there might be people listening and saying, well, you talk about AI and complex systems, et cetera; we don't even have computers in our classroom, or in some cases perhaps even reliable electricity. So how is all this relevant to me? Maybe we can briefly explore that. Do you have any thoughts on that? [00:12:35] Speaker D: I agree that there is, at a very fundamental level, as you said, the issue of electricity and Internet connection and access to devices. But even beyond that, even when you have that, because increasingly people are able to access mobile devices, I think that is something that will change. But the idea that everything is going to be available for free? We live in a world of Google and Facebook where we expect that, but I'm not sure that's going to be the case. So I think a lot of things we're talking about may widen the divide between the rich and the poor. Possibly. [00:13:17] Speaker C: Yeah, this is one of those situations, just like with the arrival of the Internet, and even more so perhaps with the arrival of mobile phones, where you have people on different sides of the argument. Some people say, well, obviously it's challenging in resource-poor contexts. On the other hand, you've got people saying, well, this allows perhaps those countries and those people to leapfrog, to bypass, some of the existing systems which are cumbersome and perhaps no longer even needed.
And you do see that in certain areas, for example with the use of mobile technologies. I think now I'm of the opinion that the glass is half full, and half full with a quite delicious drink. Because I think the greatest impact, perhaps far more important than everything we've discussed so far, is that this potentially unlocks access to, if not perfect, certainly a lot of education, a lot of materials, a lot of feedback, a lot of support of different types, for learners who currently either have no access to education or have access to poor education. And the impact of, for example, providing a learner with even a short amount of time on an AI-supported system will greatly enhance the amount and quality of educational support that they receive. Just to give you an illustration: Bjorn Lomborg, who is someone who works in the area of development, he and his team, I forget the name of the NGO, ran an experiment in, I think it was, Uganda, somewhere in East Africa, where they provided children in rural schools, who were receiving frankly very poor and often intermittent instruction, with teachers not being available, et cetera, with tablets for one hour per day. So, in a nutshell, what was the main problem there? The kids in school were often in very large classrooms with 30, 40 or even more children, all of whom had very different levels of development in the different subjects being taught. And they were grouped according to their age level. Right. So the ten-year-olds with the ten-year-olds and the eleven-year-olds with the eleven-year-olds. But within the group of, say, ten-year-olds, you might have some learners who were operating at the level of a five-year-old in terms of literacy skills, for example, as well as those who were operating at the level of a ten-year-old, and perhaps some who were operating at an even higher level. So that meant that the best the teacher could hope to do was to sort of aim for the middle and hope for the best.
Which meant that basically the majority of the learners were either bored or had no idea what was going on. Right. And in their project, they found that having a tablet available, again just for one hour, they would take these children outside of the class and let them work individually, and the system, of course, was very quickly able to adapt to their level, maybe using some adaptive testing or what have you. And the cost of that project, including everything, the tablets and the loss of some of them, the electricity and the power and the solar panels that they had to install, et cetera, was approximately 31 US dollars per year per child. Wow. Yeah. And the cost of the education for the children in that particular context was around $350 per child per year. Right. So we're looking at about 9% of the cost. But after a couple of years of investigating the impact, what they found was that the learners who used the tablets for just one hour a day had two years' worth of academic gains in one year. Right. So that is, of course, a phenomenal increase from just one hour. And now imagine that if those tablets didn't just have some static set of basic adaptive programming and resources, but had access to an interactive AI that continuously checks your level, sees what you've been doing and adapts to that, it changes everything. It potentially fundamentally transforms the nature of education there. And that, maybe I'm being very hopeful, might well be one of the greatest benefits of AI for education. [00:18:16] Speaker D: Yeah, I think that's a great example. It reminded me of an example, which is not exactly the same point you're making, but of the relationship between AI and the teacher, which was an app for developing writing skills in English. And we tried it with groups of teachers, I think it was in Turkey.
And what the teachers found was that although they personally would have given better feedback than the app gave, the fact that the app was able to give feedback to everybody individually, in real time and repeatedly, was fantastic. So then their role as a teacher was much more around fine-tuning things, motivating, all those kinds of other things. So I can see that also being the case in the example that you're talking about: it's not that the teacher becomes redundant, but they have an additional role. [00:19:21] Speaker C: And potentially a far more important role. And probably, I can certainly speak for myself, a far more interesting role, because I would much rather sit down with a learner who's really struggling and coach them and motivate them, et cetera, rather than having to mark 200 essays. I know which one of those I not only enjoy more personally, but also in which one I think I can add more value. [00:19:48] Speaker A: Thanks for listening to this episode of Talking ELT, the easiest place to learn about the big issues in language teaching. Don't forget to like and subscribe if you want to learn more about this issue and others like it. We look forward to seeing you next episode.

Other Episodes

Episode 5

April 16, 2024 00:22:17

Multimodality: The Role of Assessment

How can we assess learners' ability to both 'read' and express themselves through multimedia? Explore new ways to assess learners' communication skills in today's...


Episode 2

January 16, 2024 00:18:00

Self-Regulated Learning: How Can We Support Learners?

How can we support self-regulated learning more effectively as teachers and institutions? Explore the ways we can provide more structure and support to help...


Episode 1

October 06, 2023 00:16:05

Artificial Intelligence: The Impact on Language Teaching

How will the arrival of artificial intelligence transform the future of language teaching? Discover what skills your learners will need to succeed in a...
