As I prepared my reflections for this conference, I noticed something intriguing: I had a harder time recalling the specifics of the sessions without notes, and the only difference was that I took notes on my Chromebook instead of on paper, as I did for the OLA Super Conference. This confirms some brain research findings that writing by hand does different things than typing on a computer keyboard.
Ontario Teachers' Federation
Curriculum Forum & Symposium
Harnessing AI in Teaching and Teacher Education: Ethical Considerations and Pedagogical Benefits
Conference Reflection by Diana Maliszewski
Friday, February 7, 2025
10:15 a.m. - Digging Deeper into Gen AI
Summary: OTF Director of Curriculum and Assessment, Moses Velasco, led the group, consisting of representatives from the many subject associations in Ontario, through several activities to help us examine Gen AI in a more profound way, building on the information shared at the 2024 Fall OTF Curriculum Forum.
3 Key Points:
1) In our first activity, we considered the question "What does it mean for teachers to exercise professional judgement with Generative AI in Ontario K-12 education?" Moses prefaced this work by noting that professional judgement falls under the umbrella of professional responsibility and that we have an obligation to uphold the integrity of the profession; this means that "the decisions and actions we take affirm teaching as a profession that requires specialized knowledge and skills."
2) In our second activity, we consulted some AI-generated outlines for professional learning workshops and provided feedback on how we would tailor and alter these (rather generic, overly broad) Gen AI-created plans so they could potentially be used to design professional development for our subject members that addresses issues of Gen AI.
3) Our third activity carried over into the afternoon, because there was so much to do! Joseph made a good point: we presume that teachers are using critical thinking to inform their pedagogy (and their decisions about AI use), and if they aren't, that is a problem.
So What? Now What?
The conversations were so fast and furious that I attempted to use the ZipCaptions.app tool that Jenn Giffen recommended to me last week. I couldn't figure out how to copy and paste the captions, and the accuracy was so-so, so I had to stop and rely on my quick touch-typing. Moses made a great point, using an analogy I was familiar with thanks to my Presenter's Palette training: good professional learning needs both content (the gum) and time to process (the chewing). We need to remember what good facilitation looks like. This reminds me to return to the template I used after my Presenter's Palette training so that I am very deliberate about making time for processing and thinking in the workshops I deliver.
Media Artifacts:
1:00 p.m. - Networking and Collaboration
Summary: We had a choice of validating links on the OTF website (especially with an eye to how they would support beginning teachers), creating subject-specific Gen AI prompts for teachers or students, or networking with other subject associations.
3 Key Points:
1) Make sure students do the cognitive heavy lifting. Pay attention. Be aware. Things are changing so rapidly (Moses mentioned that agentic AI is predicted to hit the scene in 2029, and SuperAI in 2035), so we need to make as much information about this as transparent as possible and make clear what the concerns will be for these kinds of AI.
2) If you don't want AI used automatically in your searches, add "-AI" to your search terms in the search field; this will produce results that don't use AI to summarize or contribute. (Thanks to whoever figured out that nifty trick and shared it with the group!)
3) Make sure that you do not recommend or advise members to use whatever tools they want; it is part of their professional responsibility to use only tools that have been vetted or approved by boards with accompanying policies.
So What? Now What?
I'll have some conversations with the various library associations that I'm affiliated with - they didn't have any representatives there for the Winter session and teacher-librarians need to be involved in these talks. I will also share my package of articles (one from each union) with my principal, to make sure he is aware of what's being discussed.
Media Artifacts:
Early-Evening Events
Summary: After our brain-taxing day, representatives from many of the subject associations that are collaborating to provide an upcoming conference with a focus on AI gathered to have dinner at Scaddabush. I really enjoyed spending time with Nathalie Giroux (AFEMO), Neil Andersen (AML), Gerry Lewis (OMCA), Lynn Thomas (ECOO), Adam Mills (OAPT), Cal Armstrong (OAME), and Chelsea Attwell (AML).
Media Artifacts:
7:15 p.m. - Opening Keynote: AI and Ethics in Education: Moral Passivity, Cognitive Atrophy and Diminishing Trust by Phil McRae
Summary: Phil McRae is from the Alberta Teachers' Association. He has worked with AI and presented talks about it all around the world. He explained the spectrum of issues related to AI, the current methods (scraping the net for data, the search for novel or unusual data sets, and the existence of frontier models from OpenAI [ChatGPT], Google [Gemini] and Anthropic [Claude], with others mostly building on top of these three), and the trajectory of developments for the near future (from agentic AI, where you allow AI to do things for you and hand your agency over to it, to Artificial General Intelligence, to SuperAI).
3 Key Points:
1) Creators of AI, like the founder of Google DeepMind, admit that even they don't really understand how these systems work. (Chelsea whispered to me that this is sort of like parenting!) We need to audit the black box. Many of the early contributors, such as Geoffrey Hinton from U of T, considered a godfather of AI, are raising alarms about how quickly these systems are developing, the problems they are causing, and some of the inappropriate uses of AI. Re: speed - AI systems are growing 10x per year; it took radio 38 years to reach 50 million users and the telephone 20 years, while ChatGPT got 100 million users in 60 days. It's hard to regulate and monitor when new AI tools and systems are being released daily. Re: problems and inappropriate use - e.g. lost jobs, no moratorium on development, the environmental impact (Is it true that every AI prompt equals a bottle of water?), the use of AI for surveillance purposes (already done in classrooms in China), dangerous use (there's a class action lawsuit against character.ai alleging that it encouraged a teen to take his own life), the atomization and datafication of education, the selling of student data, and our inability to use more of our brains because we rely on machines (e.g. How many phone numbers do you have memorized now?).
2) McRae did a survey of Alberta teachers and discovered many things. In 2024, 5 in 10 teachers were using AI to help them do their job; in 2025, it was 8 in 10. The top 4 AI tools used by teachers were ChatGPT, Magic School AI, Kahoot, and Grammarly. There were many purposes for using AI (see the photo I took), but the one most worrying to McRae was using AI for student assessment and feedback. Putting student information into AI tools, like eduaide.ai, raises privacy concerns. Be wary of AI tutors, because they make it look like teachers aren't necessary and they take advantage of overworked teachers with few resources, pushing them to hand their students over to corporate interests.
3) Teachers need to be leaders in this space and be involved in the dialogue around Gen AI. There are potential futures, probable futures, and preferred futures; let's aim for the preferred. There should always be a human audit at the end of every AI task. Remember that teachers do deep listening that cannot be replaced by machines, even when "AI therapy bots" try to create artificial intimacy; AI simulates compassion by hacking human emotions, but it has none. Supposedly, elementary school teachers have a 0.04% chance of being replaced by AI, because schools are still highly relational places of learning. Include the things AI can't do (or can't do well), like creativity, social perception, and fine motor manipulation. Fight against moral passivity (To what extent will AI algorithms diminish our decision-making capabilities and human judgement?).
So What? Now What?
I have mixed feelings about this talk, and even about its delivery. Some of the images in his slide deck were deliberately emotionally charged. Because McRae allowed comments throughout his talk, the flow was interrupted and he was unable to address all the points he had planned. Plus, I find that audience questions don't always add significantly to the discourse. I will need to look at his website www.philmcrae.com to see more.
Media Artifacts:
Late-Evening Events
Summary: This was an opportunity to socialize and network with the attendees, who came from all the universities with Faculties of Education (Brock, Lakehead, Laurentian, Nipissing, OISE/UT, Ontario Tech, Ottawa, Queen's, Trent, Western, Wilfrid Laurier, Windsor, and York) as well as unions (OECTA, AEFO, ETFO, OSSTF, OTF) and subject associations. I must say I was grateful for the List of Participants by Organization printed for us in our packages; I met so many new people that I would have a very hard time remembering them all. (Thanks to folks like Bailey and Annette and Gerry for the great conversations.) In addition to new faces, I had the pleasure of reconnecting with old familiar faces; for instance, I was so happy to spend a few minutes catching up with Jim Strachan, who did so much for beginning teachers in TDSB. The conversations lasted long into the evening ... maybe a little too long, since by the time I went to bed, it was 1:30 a.m.!
Media Artifacts:
Saturday, February 8, 2025
8:45 a.m. - Expert Panel - Generative AI: Possibilities and Perils with Moses Velasco, Anthony Carabache of OECTA, Daniel Dube of AEFO, Julie Millan of ETFO and Chris Samuel of OSSTF
Summary: Moses was the moderator of a panel discussion on AI with representatives from each of the 4 teaching unions in the province. Moses asked for their positions on Gen AI, how to handle the anthropomorphizing of Gen AI (and defining personal relationships with AI), and the skills needed to inform professional judgement related to Gen AI. (There were more questions prepared but they ran out of time.)
3 Key Points:
1) There was a wide range of opinions and perspectives on Gen AI, from Daniel's philosophy of introducing it if you can or want to, to Julie's cautions about jumping on the train: circumstances like the pandemic and overworked, under-resourced teachers make it look appealing, but there's a need for critical thinking and for guidelines established by the Ministry of Education and school boards. Anthony asserted that AI is an existential threat to educators, because "human" isn't specified in the Education Act, and that we should not surrender or offload our tasks in the name of innovation, because it is profit, not innovation, driving these decisions. (He was skeptical of policy documents because he says they can be altered by governments on a whim.) Chris, the only non-teacher on the panel, recommended Cory Doctorow's blog and said it depends on what AI is being used for; context will suggest when it's okay or not (i.e. reducing workload to focus elsewhere).
2) Julie focused on the mindset aspect of dealing with AI (we need to be curious about it, investigate it more, and ask hard questions) and added that the crisis mode we are in at schools is allowing this unchecked use of AI to flourish. Julie told us to ask ourselves, "Am I turning over magic to a machine? Am I using AI to replace a human quality?" If so, don't, because the crux of what it means to be a teacher is based on relationships. Anthony (like me in my talk two years ago!) advised against giving human qualities to a machine. By making AI human-like, the big corporations are able to manipulate us more. Anthony says that we must protect our students, teach our members to be cynical, and say no to what the EULAs ask of us. Chris mentioned that lower AI literacy rates produce higher rates of AI audience receptivity, and that we must be alert to projecting emotions onto a thing that does not care about you. A "side-negative" is that it makes us compare ourselves to computers, just as we compare AI to us; the question to ask ourselves is, "Is using this AI tool helping me to be a better or worse teacher?" Daniel had similar things to say: we must think critically, remember that AI is a machine, a machine that does what we say, and recognize that it's our choice where we "join the parade".
3) Related to professional judgement (PJ): Anthony said that PJ isn't in the Education Act, it's in Growing Success, so we need to defend it and maintain its existence. PJ comes with experience and with learning from bad prior decisions, so PJ is developed, mentored, and learned, which requires training that employers should provide. Chris recommended we separate the evidence about the product from the hype - just because a lot of people use a tool does not make it good. Wonder what the companies' incentives are; is it to save costs? Dan says to do your research and talk to other teachers. Julie said that we need to protect PJ, especially when we have governments that don't respect what teachers do. She asked us, "If you are using AI, do you have the professional knowledge to give reasons and justify why you are using that tool?" Because our PJ improves with our years of teaching, it takes a while to reach a certain level of proficiency, particularly around assessment and evaluation, with help from human mentors. Our energy should go into improving student learning and relationships, so beware of AI marketing that touts that it will solve our problems. Email was supposedly going to take things off our plates, and look at us now!
So What? Now What?
I wish we could bring back OSAPAC (the Ontario Software Acquisition Program Advisory Committee), because we knew that if a tool got OSAPAC approved, we could use it in schools and it was worthy. Julie mentioned OSAPAC, and she was right that it was an organization that could have helped us in this moment if it hadn't been shut down. I spoke to Anthony beforehand, who was apologetic about his hard-line stance, but I believe he was making a deliberate point. (Thanks for the shout-out about the EULA during the talk, Anthony!) I wonder if it's any coincidence that many of the boards that are enthusiastically embracing Gen AI publicly are Catholic ones. I think that maybe CSL and AML need to think about giving PD on PJ! (How's that for acronyms?!)
Media Artifacts:
10:30 a.m. Curriculum Forum Carousel - Taking the Plunge: A Deep Dive into Gen AI Workshop A
Look Before You Leap: Testing the Gen AI Waters in Elementary Schools by Diana Maliszewski
Summary: (taken from slide deck): There’s a buzz about educators using Artificial Intelligence in their classrooms. Do you have FOMO (a fear of missing out)? This introductory session will highlight regulations / suggestions from unions and school boards about AI use and then allow participants to have time for safe, guided exploration of some of the more well-known or popular AI tools, to help develop a personal, moral compass, infused with critical thinking, about when, where and how to employ AI.
3 Key Points:
1) Read the policies first. Only 6 of the 72 boards have official policies around Gen AI. ETFO has the most robust guidelines around Gen AI of the unions supporting elementary school educators. Read the EULAs of any tool you want to use. If you aren't paying for it with money, you are paying in some other way, like with your personal data.
2) Establish a moral compass. Consider the environmental impact of Gen AI use (by consulting sources like the United Nations). Consider professional integrity - do we take a "do as I say, not as I do" approach? Consider the bias inherent in many of these algorithms. Think hard about when, how and where to use these tools, if at all.
3) There are ways to explore without joining or registering. Use resources like the ones from CIVIX / Ctrl-F or from AML so you can safely understand what's going on without feeling the pressure to sign up.
So What? Now What?
This was my first session. I thought this one was the more thoroughly developed of the two sessions I offered. During the "explore" portion, where people could choose to look into a EULA, read the Ctrl-F or AML articles, or talk with others, I wandered around the room. I spoke to someone (who will remain nameless here, to protect their privacy) who said they appreciated the options for exploration, as they had been victimized when they were forced into joining social media. I also have to thank Chelsea Attwell, who not only lent me her laptop (since my Chromebook had no way to plug into the data projector and my laptop was too old and slow to get up and running in time), but also took photos and provided a slide to promote the upcoming AML workshop on AI use in classrooms. Thanks for saving me, Chelsea! My next step is to figure out how to make my Chromebook work with data projectors!
I was also delighted to meet more people, like Alyssa Hines, who is like an adopted daughter and friend of Kate Johnson-McGregor (and any friend of Kate is a friend of mine!), and to continue developing relationships with people like Annette Gagliano, the representative from OMLTA. We need to do something together in the future, Annette - why didn't we get a photo together? Anyone: if you have my business card, please let's try to connect, since it's all about relationships, right?
Media Artifacts:
11:15 a.m. Curriculum Forum Carousel - Taking the Plunge: A Deep Dive into Gen AI Workshop B
Support without Substitution: How AI Tools Might Help K-8 Teachers Keep Teaching Effectively and Ethically by Diana Maliszewski
Summary: (taken from slide deck) In our enthusiasm for embracing technological innovations, it’s important to be deliberate and selective about how, where, and when to use GenAI, in ways that demonstrate that human educators are still necessary and cannot be merely replaced with teaching-bots. Participants will dabble with familiar (Chat GPT, Microsoft Co-Pilot, Magic School AI, Diffit, Canva, Brisk) as well as some newer (Curricumate, Google Gemini) GenAI tools, and discuss how to be transparent and accountable in our pedagogical practices. Our conversations, as elementary school professionals, can provide modeling for other teachers and students in the appropriate and ethical use of Large Language Model (LLM) tools in education.
3 Key Points:
1) We need to have conversations about what constitutes "appropriate" use of LLMs / Gen AI, including the differences between student use and teacher use. We know, as elementary teachers, that our circumstances are different (because most Gen AI tools require the user to be aged 13+), but there are some similarities.
2) Once again, refer to school board policies and end user license agreements before using any Gen AI tool. Research the company as well (which you can do even by looking up whether it's mentioned on Wikipedia - thank you, Kim Davidson, for that tip!). So many of these tools are popping up that it's hard to keep track. (For instance, many people in the room had not yet heard of Curricumate, which claims to be completely Ontario-centric.)
3) We should think about what is the essence of being a teacher and what we can/might/could relinquish to a Gen AI tool. If we use Gen AI tools, we should be transparent in our use, and cite them.
So What? Now What?
This presentation was the weaker of my two talks, even though it had more attendees. (I counted 44 people in the room.) I'd like to peek into Curricumate a bit myself, after I read the EULA. (It was recommended to me by a colleague whom I trust, so that's why I mentioned it in my talk.)
Media Artifacts:
1:20 p.m. - Final Keynote Presentation and Consolidation by Amanda Cooper
Summary: Dr. Amanda Cooper is the Dean of Education at Ontario Tech University. Her talk focused on AI in education in the Ontario context.
3 Key Points:
1) There's a disconnect between practice-informed research and research-informed practice. There shouldn't be.
2) AI is a "disruptive technology", like the car, that impacts all areas of society.
3) Many people are scared to say they are using it. We need to demystify the use of it. We also need Gen AI policies to help govern and direct our use.
So What? Now What?
I must confess that by this time in the day, I was very tired. The keynote speaker was also talking very fast and using slides with lots of graphs and small text that was hard to see from the back of the room, where we were sitting. I tried to take notes but only got a few. Maybe my next step is related to last week - connecting with some deans to talk about getting research related to practice off the ground.
Media Artifacts:
Final Thoughts and Thanks:
I am indebted to Moses Velasco, Neil Andersen, and Chelsea Attwell. Moses was able to engineer it so that all three of us were able to attend the OTF Symposium, in exchange for our presentations on AI. Neil is always so intellectually curious and willing to share his thoughts with me. Chelsea was the helpful hands before, during, and after the presentations, even to the point of letting me throw my suitcases in her room when I had to madly grab them after being late for my check-out time.
Thank you to all the new people I had the pleasure of meeting that I really hope I get to meet again. Together, we are better!