Happy Thanksgiving to those who celebrate! I am unfortunately under the weather, so today's blog post will be relatively short and sweet.
How do educators navigate the waters of LLMs (Large Language Models), more popularly known as Generative AI (Artificial Intelligence), in the absence of official guidelines and policies?
This is a difficult question to answer. What's allowed? What's ethical? What should be embraced and what should be avoided? (Heck, even the image that I included with this post wasn't made by me; Bitmoji put it together.)
This coming Wednesday, my friend Kim Davidson and I will be hosting a TDSB Library Network meeting, and one of the topics we plan to discuss is AI. I have two recent examples that illustrate where we must proceed with caution and/or curiosity.
Sound-Alike
At the OLA 2024 Super Conference, I attended a keynote session by Avery Swartz. One of the tools she mentioned gave an individual the ability to sound as if they were fluent in another language. I made a note to myself to investigate this particular tool.
I finally got around to exploring it. My first piece of advice comes from the AML: read the EULA (End User License Agreement). Even though these are boring and full of legalese, it is critical to understand what you are agreeing to when you use the software. In this case, the creators would be able to use my voice to train their program. I became a bit nervous about using it but was still fascinated, so I tried it out with one of my "junk" emails that isn't linked to much of anything. After making the recording, my next conundrum was determining how to share it "safely". I didn't want it spread without context, which is why you won't see it on my blog. One of the AML key concepts is that audiences negotiate meaning, and I didn't want to give the "wrong impression" to others. I consulted with my administrator, and afterwards with two other teachers with expertise in technology education. They gave me some great advice, which I will follow.
An interesting tangent to this story involves my son. He was interested in my process and offered to match my "lip flaps" to the words so that it would look more realistic. I thanked him for the potential free labour, but I was actually glad to see that my mouth didn't match the sentences, so it was clear it wasn't me. When people upload videos to YouTube now, there is a section where the creator must identify whether any parts of the video used "GenAI". I wonder how many people are honest about their use.
Collaborative Inquiry
At the 5th International Media Literacy Research Symposium, I attended a talk by Yonty Friesem, Estrella Luna-Monoz, and Irene Andriopoulou. In my "so what / now what" reflection, I commented that I wanted to replicate their cross-country study within my school board. I asked around at my TDSB TL Facilitator meeting, and a few people, like Tracey Donaldson, David Hoang, and Dawn Legrow, agreed. Yonty very kindly forwarded us the prompts they used in their initial study and offered to be involved with our version too!
This venture is more about curiosity than caution. There are some tools that we'd be permitted to use with guidance in our schools, especially if they are part of a supervised, guided inquiry with our students. The purpose would be to show how LLMs can both help and hinder the writing process and to involve critical thinking when employing them in school tasks. When does it constitute cheating, and when does it count as legitimate assistance?
The same applies for teacher use. Can teachers use AI for planning, teaching, and/or assessing? If so, how much? How might student privacy be compromised if their work was inputted into a GenAI program for evaluation? We need to have frank conversations, not just about what's possible, but what's prudent.
This applies to teacher researchers too. This week and next week, I'll be part of two webinars, one for BCTLA and one for OSLA, on preparing action research papers for Treasure Mountain Canada 8, the Canadian school library think tank and research symposium. It's very important to "put in the work": just last year, a researcher with TDSB was fired for using ChatGPT to generate citations that turned out to be non-existent. I wouldn't mind having something reliable to check whether I formatted my references correctly, as I'm notoriously bad at proper citation (ironic, considering I teach this as part of my TL AQs), but I think I'd rely on people (like my far-away friend Joanie Proske, who will be co-writing a paper with me this year). We shouldn't be ostriches when it comes to LLMs (even though I know it's a myth that ostriches hide their heads in the sand), but we need to go forth with an equal measure of curiosity and caution.
Yes! Yes! Yes! To all of this. I agree that we have to be aware of the tools. I agree that we have to understand what permissions we are giving away (yikes, your voice). I agree that we can't be ostriches, and that the collective conversations are really important. Also, databases (like the Canadian Encyclopedia) do a stunning job of creating citations for you, in your choice of style! I love it.