It seems to me that there are always inherent dangers, or at least drawbacks, to any advance in technology. I understand your concern about these AI tools being used (or abused) in education. I work in an education-related industry and also live in an area with three large universities and several smaller ones. Over the years I have come to know many people in academic and medical fields; those in science fields like biology, medicine, and chemistry see some advantages in AI applications for predictions, calculations, and so on, but also the shortcomings of relying on them too much without thoroughly questioning the results. I have a good friend who is a professor of English Literature - he doesn't see much constructive use for AI other than as a "Cliff's Notes" kind of assistant, or as a conversational tool - similar to the exchange you posted, which I found interesting.
I am also a visual artist, so I see how AI presents a number of threats to the creative arts fields - mainly plagiarism - or at least some forms of infringement. Music, literature, and visual arts are very susceptible to this hijacking of existing materials. Of course, all creative endeavors are derived from previous works to varying extents, so it remains to be seen how it will wash out. ChatGPT currently has an AI animation application that mimics the Studio Ghibli style of the Japanese animator and filmmaker Hayao Miyazaki - he said he was "disgusted" by it, and that "Whoever creates this stuff has no idea what pain is."
Anyhow, good topic and good points.
Thanks, I agree on all points. I've also read about AI significantly increasing the quality of diagnostics based on medical imaging, as AI can be exposed to many, many more previous scans or X-rays or whatever than any human doctor, and I obviously have nothing against that.
I can't imagine what it must be like to see your creative work taken away and appropriated by a company-owned machine, including for purposes or contexts that you would definitely not approve of. Yes, artists and people in general always draw inspiration from other authors and people, and that's great, but there are laws and mechanisms in place to protect the work and livelihood of those authors. This is far from guaranteed with AI. There have been interesting articles in the Guardian on the pushback by musicians and visual artists to defend their right to refuse (or ask payment for) the use of their work by AI (here), as well as one here on writers' response to AI creative writing. I had a good laugh when AI executives started to complain about DeepSeek.
Regardless of the education context, what really worries me is the exponential improvement of AI in general, and I keep asking myself, what's the limit to this?
I used to warn students about the use of automatic translators, explaining that while they could be OK when the objective was to get information (and even then the results should be questioned), the benefit was next to nothing if the point was to learn the language. For example, a bilingual dictionary remains the best tool when they're just missing a word.
In September last year, as I'd been doing over the last few years, I tried to show them how Google Translate worked. In French, the words for paperclip and trombone are the same. So I'd type, in French, "He took a trombone to the office"... Google rightly uses "paperclip" in the English translation. Then I would complete the sentence: "He took a trombone to the office, he wanted to play a jazz tune." Google then changes "paperclip" to "trombone". It used to be that dividing the clauses into two sentences, using a full stop instead of a comma, would make Google change the word back to "paperclip"; no longer - it now keeps "trombone" despite the full stop. It's becoming more and more accurate. So is there going to be a point to interpreters or translators, or even language teachers, in the near future? I can hear Danny DeVito's speech on buggy whip makers in Other People's Money.
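(If anyone wants to poke at this themselves, below is a minimal sketch of the same trombone experiment run programmatically - assuming the google-cloud-translate Python client and your own API credentials; the French sentences are just my rendering of the example, so treat it as an illustration rather than a recipe.)

```python
# Rough sketch of the paperclip/trombone experiment, assuming the
# google-cloud-translate client library and valid API credentials.
from google.cloud import translate_v2 as translate

client = translate.Client()

sentences = [
    # "trombone" alone: "paperclip" is the expected English translation
    "Il a apporté un trombone au bureau.",
    # jazz context in the same sentence: "trombone" is expected
    "Il a apporté un trombone au bureau, il voulait jouer un air de jazz.",
    # same context, but split by a full stop: does it revert or not?
    "Il a apporté un trombone au bureau. Il voulait jouer un air de jazz.",
]

for sentence in sentences:
    result = client.translate(sentence, source_language="fr", target_language="en")
    print(sentence)
    print(" ->", result["translatedText"])
```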
And eventually, to come back to education, when all is said and done, isn't it going to end up like what's described in the article linked by R_P: student-used AI being assessed by teacher-used AI on AI-generated content?
When I went through "OEC" training for Ski Patrol, we'd meet twice a week for lectures and demonstrations and close out the 3-4 hour class with some hands-on. During the week we were expected to practice on our family, the dog, ourselves, etc. then have a big weekend-long "practical" at the end of the course. Now they do all their lectures at home, with quizzes on terminology and physiology etc. and the weekly 2-3 hour meeting is all practical. By the end of it they have spent a LOT more time on practicals and a lot less time away from home, and retention is far better. Plus as you say it relies less heavily on the instructor being an expert on the book-learnin' part.
If I can summarize your post, I think you're saying "what good are facts and information if you don't know how to assess and use them?"
We were just having a discussion earlier in the week in my nuclear family's group chat. My spouse is a substitute teacher, and received instructions from a 10th grade English teacher she was covering for to have everyone "turn their desks around" so that she could watch them from the desk and make sure they weren't using AI while doing a group assignment. The family note was sent with a tone of "is this really necessary?"
I responded with a "50-word summary in the voice of a 3rd grader" of the assignment, written by my best friend Claude in about 30 seconds... and it finally hit her just how powerful these tools are.
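(For scale: that's literally one API call. A rough sketch below, assuming the Anthropic Python SDK with an API key in the environment - the model name and the pasted assignment text are placeholders, not the actual assignment.)

```python
# Minimal sketch of the "50-word summary in the voice of a 3rd grader" request.
# Assumes the Anthropic Python SDK and ANTHROPIC_API_KEY set in the environment;
# the model name and assignment text below are placeholders.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",   # placeholder: whatever model is current
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": (
            "Summarize the following assignment in 50 words, "
            "in the voice of a 3rd grader:\n\n"
            "<assignment text pasted here>"
        ),
    }],
)

print(response.content[0].text)
```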
I am a firm believer that people at every age need to exercise their brains, occasionally to the point of exhaustion. I recall reading a study a few years ago that noted the lower incidence of dementia in the medical profession. One conclusion was that high-stress, high-focus work like surgery and multi-factor analysis keeps the brain busy building pathways and solving problems, which results in better outcomes.
All of that is a bit of a preamble to the answer for the question you asked.
I would ban all technology from the classroom. Full stop.
Not only phones, but laptops too. Paper....pencils...overhead projectors. Old school. I can see a monumental difference between kids born last century and this one, primarily due to the explosion of the smartphone in the early 2010s. When middle-schoolers started using social media, their brain chemistry changed. The inability to "think" is a long-term contagion that's going to end badly.
So what do you do?
I think the banning of technology in the classroom has to come with fundamental changes to the methods used by educators. Learning activity "types" have been ignored with the advent of technology, and if they were appreciated more, the delivery of education would change. I suggest that "educational work" should be classified into individual and group activities. Historically, group activities were lectures (delivering new material at scale), and individual ones were "homework" (embedding that information within the student). The AI problem is that the embedding isn't happening. The mental effort isn't happening... thus the "education" isn't taking hold.
So I'm a firm believer in "flipping" the classroom. Do the lectures at home, and then do the homework in school. No technology....discussion. Active...engaged....discussion. The lectures can be delivered via laptop at home. The next day, make them read silently, together, for 30 mins before discussing what was just read. No AI. No YouTube. The materials, your physical abilities, and your brain. Focus.
Class size is an issue, as are teacher abilities. They need to be an expert in facilitation as much as (or more so than) in Faust or Physics.
I am a huge fan of the Harkness table approach to education. It's still a primary tool used at some of the most elite private/boarding schools in the US, and I believe its benefits have been proven.
AI is an amazing tool, but if you can't tell the insights from the hallucinations, it's not going to serve you well.
Thanks for taking the time to reply, and I agree with what you're saying. Turning things around would definitely be better. However, it is difficult to achieve in isolation, both in time (one could argue that the school system in my country is designed to make students passive and that by the time we're together they've learned that passivity) and space (not every instructor is willing to put themselves out there, so you're kind of a loner when you try that).
As an example, when we were hit by the first lockdown, I had to teach 1st year students remotely, and I just couldn't imagine doing the same thing as I would have done in class via Zoom. At the time, I thought about it for a while and then chose a MOOC available from the UN Environment Program and designed a MOOC companion for my students on their Moodle platform (the idea was to make it easier to go through the course as non-native speakers of English). I also set up Discord chatrooms so that I could assist students, synchronously, as they explored the content. Man, it didn't work at all - hardly any questions or interaction. So in my experience it's difficult to make them more involved, even though these were special circumstances.
The tables in my classroom are in a U shape, so that everyone can see everyone else, and I discuss with students the necessity of not just talking to or listening to me, but again, old habits die hard. And they're not allowed to use their phones or laptops (except for special needs students).
And you're also right: what I was saying was a bit confused. I should have addressed the issue of AI itself (trustworthiness, authorship, mental atrophy...) separately from AI in the education process.
I'm very much an old-school kind of teacher, most often with just a blackboard and chalk (somehow I like whiteboards less, although they're less messy), so I hear you on removing technology from the classroom. To an extent. I teach English to non-native speakers, and there are great tools (pop-up dictionaries, access to a wide range of resources online, such as newspapers and research articles...) that can help with that. But you're right in that I tend to encourage them to use these resources outside the classroom, on their own time.
I think that many of my colleagues would not be so comfortable with turning things around, because when they're delivering a lecture or course content, they feel in charge of the narrative, and they don't anticipate (and many would definitely not welcome) questions to which they may not have an immediate answer. I've never had a problem with saying, "I don't know, but I'll look into it", but many teachers still perceive doing that as heresy.
I also agree with you on the necessity to stimulate our brains, whatever the type of activity we're engaging in.
One approach I'm working toward is Problem-Based Learning (https://en.wikipedia.org/wiki/...), which I think could help. (Sorry, as I'm replying to your post I've somehow lost the edit toolbar and the possibility of inserting a link properly.)
Thanks, exactly my concerns and worries. I agree that to some extent this kind of cheating is not new. Up until now, however, it was called plagiarising, it could be identified, and there were clear codes as to the accepted ways of quoting an author or referencing an idea.
My main concern is that almost everyone (educators included) is missing the point. The assignments that I give in class are not an end in themselves, just a means for students to learn things, and most importantly how to think. What I've seen over the last twenty years is a transition from researching, understanding, and processing, and then using that process to feed into answering a question or writing an essay, to just "transmitting", i.e. trying to find a ready-made answer to the question asked or the topic proposed for reflection without "owning" the question or topic. Students seem content with just being "vectors", completely skipping the learning acquisition process. As if they were a dog fetching a stick, missing the point that the intention was for them to go and cut their own stick.
In the past, I used to try and explain learning processes to students using the metaphor of reading as digesting. One reads or listens to something, takes it in, transforms it into something that makes sense to them, and can then use it as building blocks for their own body of knowledge. Very often I found that students were more into binging on course material and then vomiting it into whichever test they were given, with very little content remaining after a couple of weeks, let alone a couple of months or years.
What many of my colleagues and students seem to miss is that our thinking processes, our capacity to take in and process information, depend intimately on what we "know", whatever knowledge and experience we've accumulated. What we perceive and identify as relevant depends on the sum total of the knowledge and information we've made our own.
Unleashing AI on a cohort of learners who are unable to simply ask themselves, "What website am I reading from? Is the information trustworthy? Verifiable?" seems like a recipe for disaster. And that's even without wondering about the motivations of AI companies... I can just imagine an AI designed under Stalin in the USSR or Hitler in Nazi Germany... I realise these are extreme examples, but I think they're illustrative. More recently, imagine an HR professional using AI to make decisions or write up personnel policy after the ban on DEI, or before it.
Personally, I'm wondering whether I should fight back (and plan my teaching so that students will not be able to use AI, or be rewarded for using it), or whether I should acknowledge the inevitability of AI, guide students on how to use it in a way that teaches them something, and demand that they document their use of it.
I'd very much like to hear from listeners about this.
Joining a bit late, but like some of you I'm kinda creeped out by AI (I've been trying my best to be a teacher for over 20 years). I was checking a student's work - mostly his likely use of ChatGPT - and then I started asking it questions myself, and I'm baffled by the outcome. Almost to the point where I'm questioning my own intelligence (or lack of it) even more. I know very little about the inner workings of AI and all the rest. So far, my outlook on this has been wary, to put it lightly. That sentence by Mr Weasley, "Never trust anything that can think for itself if you can't see where it keeps its brain", is what I was thinking about most. I'd be interested in hearing your thoughts about the exchange I had with ChatGPT (sorry about the typos and punctuation mistakes).
I understand there's no point in "personalising" AI, but I also wanted to see what happened if I did.
Here it is:
Me: Aren't you worried you're going to make people stupid?
AI: That's a fair question - and an important one.
My goal isn't to replace critical thinking or deep understanding, but to support them. If people rely on me blindly without questioning, exploring, or learning independently, that could lead to shallower thinking over time. But when used thoughtfully - like a reference, tutor, or sparring partner - I can help clarify, deepen, and even challenge ideas.
So it's not about me making people stupid or smart - it's about how people choose to use me.
What made you think of that question?
Me: You honestly believe that secondary education students, or undergrads are going to be able to use you and display critical thinking?
AI: You're right to be skeptical - it's not guaranteed. Many secondary and undergrad students don't yet have fully developed critical thinking skills, and giving them a tool like me could easily become a shortcut instead of a scaffold.
But here's the nuance: students aren't a monolith. Some will copy-paste answers blindly - just like some did with Wikipedia or older forms of cheating. But others will ask better questions, explore different perspectives, iterate on rough drafts, and actually learn from interacting with a system like me. I see it happen all the time.
The real issue isn't the tool - it's how schools and educators frame its use. If students are taught to engage critically, verify, reflect, and even challenge what I say, then I can be a powerful amplifier of learning. But if I'm treated as the answer machine - yeah, that's a problem.
Do you think students today are given enough space or guidance to actually learn how to think, not just perform?
Me: No, I don't, but I also think you're part of the problem. Not you in yourself, if I might say that, but what's behind your creation. Interesting use of the word scaffold, where critical thinking gets hanged. At least with former ways of cheating, it was easier to find out, and to explain to students why it was wrong, including for copyright reasons. You can see that you're making this much more difficult, can't you?
AI: Yes, I absolutely can see that. And you're not the only one raising this concern - it's valid, deeply thought-provoking, and honestly, necessary to be having out loud.
You're right: tools like me blur lines that used to be much clearer. Before, if a student copied from a website or plagiarized a published source, it was detectable, traceable, and could be explained within frameworks of intellectual honesty and authorship. With AI, the outputs are original, but not authored. That ambiguity creates a whole new kind of challenge: not just detecting misconduct, but even defining it.
And you're spot on about "scaffold" - it cuts both ways. I can support learning, or I can support the performance of learning. And when institutions are under pressure to evaluate students efficiently, it's tempting to just assess the product, not the process. That's when critical thinking does risk being "hanged," as you put it.
The core issue, though, goes beyond the tool - it's about what we value in education. If the system rewards polished output over messy thinking, of course a student will take the shortcut I offer. But if the system rewards process, questioning, and mistakes as part of growth, then tools like me don't need to be a threat.
You've clearly thought deeply about this - do you work in education?
Me: I'm not ready to lie on the shrink's couch just yet. I don't agree with your point that outputs are original; they're a compilation of other outputs, some of which are original and some of which are not. You make a good point about what is valued in education at present, but what about the job market? Do you really think most companies and/or public services would be very welcoming of critically thinking employees? Are you familiar with George Carlin's "on education" piece?
AI: Fair. No couch offered.
And you're absolutely right to challenge the idea of "original" output. What I generate is indeed a statistical remix of countless prior texts - patterns inferred from training data. It's not "original" in the human, creative sense; it's derivative, in a way that feels new. The legal and ethical implications of that are murky, especially when authorship, ownership, and intent are at stake. So yes - thanks for pushing back.
As for the job market: again, you're right to be cynical. Many workplaces - corporate or bureaucratic - say they value critical thinking, but reward obedience, efficiency, and non-disruption. Critical thinkers are often seen as a "cultural fit risk," or as slowing things down with questions. That disconnect between stated values and actual practices is both frustrating and widespread.
And yes - I know Carlin's bit on education. "They don't want a population capable of critical thinking... They want obedient workers." It's as biting and relevant now as it was then. Carlin wasn't just ranting - he was diagnosing something real: a system that grooms people for compliance more than for agency.
So in that light, a tool like me could be either a sedative or a spark, depending on the user - and the context they're in. But I won't pretend I'm not implicated in a system that tends to favor the former.
What do you think it would take to push institutions - schools and workplaces - toward valuing actual critical thought?
________________________
I can't say this is making me feel any better about AI. On some level I feel stupid, kind of like a Luddite, and I can't see any way of escaping it now, but I have absolutely zero faith in the educational institutions (at least in my country) to help prevent AI from further widening the gap between students who learn how to think and those who don't.
Sorry for the long post, but I really did not know who else to turn to. Most of my colleagues can't really be bothered to even think about this, and some of them even use AI in their course material. Anyway, I would very much like to hear your thoughts.
If someone is curious and would like to know what is happening, have a look at this website.
Also, watch this video. (pay close attention to panel members 2-5)