Why I’m Still Scared Of An A.I. Future
It’s here, and I’m concerned. I could even say I’m a bit scared.
Artificial Intelligence is no longer a science-fiction prediction. It is our reality at this very moment.
Sitting on the hinge of history is a weird place to be, especially when you realize this is a technological advancement that will impact us and generations to come.
I’ve written articles about many of the ways Artificial Intelligence can be used for good.
I’ve created videos detailing the tools and strategies to utilize this A.I. tech to engage learners in meaningful and relevant ways.
I’ve built out an entire community of teachers and school leaders who are sharing all of the amazing breakthroughs happening right now in schools around the world because of what A.I. can help us do.
And yet…
I’m extremely cautious about being “Pro-A.I.” There is too much we don’t know, too many turns this could take, and too often we overlook what is truly at stake: our humanity.
It’s OK To Be Cautious
When I work with schools, districts, and organizations I try to make it very clear that we are dealing with many unknowns right now.
We cannot live in a make-believe world where we act like A.I. does not exist, but we also need to be careful to bring this technology into our schools the right way.
Leon Furze developed a great starting point for A.I. Policy, and I use some of his questions over and over again when working with groups:
What are the limitations and potential biases of AI technologies, and how can they be addressed?
How can transparency and accountability be promoted in AI use?
What personal data will be collected, stored, and analysed by the AI technologies?
What policies and procedures will be in place to address issues related to data privacy and security?
What steps will be taken to ensure that the technology is accessible to all students, including those with disabilities?
What are the acceptable and responsible uses of AI technologies? (This is discussed in the Traffic Light Strategy article)
What is the school's position on academic integrity related to AI technologies?
How can AI technologies be used to support academic integrity?
How will AI technologies be used in assessment processes? For what purpose?
What tools and resources will be made available to support appropriate citation and referencing of AI-generated material?
How can the accuracy and credibility of AI-generated information be verified before it is used in research and writing?
What ethical considerations should be discussed in professional development opportunities for staff?
What resources and support will be provided to staff for ongoing learning and development related to AI technologies?
What strategies will be used to engage with parents, students, and other members of the school community in a meaningful and informative way?
How will policies and procedures be updated as necessary to ensure that they remain relevant and effective in the use of AI technologies in the school environment?
These are heavy questions, but they all lead to discussions we need to be having right now.
More often than not, the concerns I'm hearing boil down to three specific issues, over and over again, and I mostly agree that all three deserve worry and discussion:
Concern #1: Cheating Is Going to Be Rampant
Yes, I agree, this is going to be true. If we continue to assign long-form essays, homework with discussion questions, and any type of work that can be done at home -- then we can assume AI is being used for much of the process.
Some kids may not use it, but most (like most adults) will probably use AI in some way/shape/form to help answer or guide their responses.
We can combat this in two very specific ways without trying to be a detective.
First, we can assess the process of learning instead of only using final products/tests/essays. For example, instead of kids answering 20 math questions in the back of the book, they could screen record a short video of them answering 3 math questions, showing their process and talking through the learning so a teacher can hear how they are thinking and see the work with their own eyes. Or, we could assess the process of writing (mostly in class) instead of just the final piece.
Second, we can assign work that is meaningful and relevant. At a recent talk I gave to teachers at LaSalle College High School, I made the argument that most of us would be using AI if we were teenagers. And, that most adults will also use this technology for work-related tasks. The only reason we wouldn't is if we CARE. If school is just about following the rules and doing the work, then AI will be used all of the time for cheating purposes -- but if we get to do meaningful and relevant work, then we'll use it as a tool -- not a shortcut.
Finally, an even bigger worry here is the issue of using AI detection software. If you are not caught up on this, please check out this post:
Concern #2: Kids Won't "Think" Anymore
I'd argue that this is my biggest worry. We may have worried the same way about calculators, Google, and Wikipedia, but this feels like (and actually is) a different scenario.
With Google, you could search for answers to low-level questions. There wasn't much online that would help you analyze, synthesize, evaluate, and truly explain a position on a topic. Now, with ChatGPT and other tools, many are worried there is no need to "think".
Enter a prompt and "voila" you'll get an analysis, or position, or evaluation written for you in a minute. This is going to be true in many areas of life (not only school) -- and we'll have to change a few of our processes to keep kids thinking.
For starters, let's do as much "thinking work" as we can in person. This is the old "flip your class" model, and we'll have to default to it in a post-AI world. Watch a video, read a passage, or listen to a podcast or tutorial before coming to class; class is where we share our thinking. We can do this by writing, speaking, and creating (in many different ways), but the key is to do it together.
Secondly, there are going to be times when students will still have to think outside of the classroom. Ultimately you'll want to shift toward "performance tasks" that require a lot of thinking. My son is doing a wax museum project. AI could help him write the script, but what if, during the project, people could ask him questions about his historical figure instead of just hearing him recite the script? Now we have a performance task that goes beyond what AI could initially help with.
As teachers and school leaders, we will have to do a lot of thinking to prepare for these types of performance tasks. Sure, it might be easier to teach from a PPT, ask a few questions, give HW, grade HW, give tests, grade tests, and move on to the next unit. That won't cut it in school anymore (and I'm not sure it ever really did).
Concern #3: Go Back To Just Paper and Pencil
I completely understand this concern. Many folks are in this camp. "Well, if we can't trust anything online, then let's just give all paper assignments, essays, quizzes, worksheets, etc..."
I actually believe this is a good starting point for some student work and learning. When John Spencer and I wrote our book, Launch, we shared that "duct tape and cardboard" is an amazing way to create and prototype.
The issue with always using paper and pencil (just like cardboard and duct tape), is that everyone in the world is using AI in some way/shape/form. In college, in most careers, and in personal life -- Artificial Intelligence is here to stay.
So, if our job as educators is to help students prepare for the world they live in now (and will live in down the road), we'd be doing them (and us) a disservice by always using paper and pencil.
Not only that, but AI is both a tool and a skill. I was awful when I first started using ChatGPT, Curipod, Copilot, Bard, and MidJourney. But as I used the tools, my prompts improved. My skills improved. I was able to do so much more with AI than I ever thought possible.
So yes, there are going to be times when paper and pencil is a great starting point. But real, authentic, and relevant performance tasks may involve using AI with a purpose--and that is how we are using it now in jobs, industries, and spaces around the world.
Final Thoughts
AI is not going away. It is only improving, and it is being added as a feature to many of the tech products we already use all day long.
Just as when computers first entered our classrooms, AI will transform much of what we do in traditional learning environments. Sometimes it will be old things in new ways; other times it will be new things in new ways.
In both cases, our students benefit when WE can also use this technology with a purpose, and have clear conversations about what is beneficial (and what is not) for their learning journey.
Let's be the guide ON the ride when it comes to AI for teaching and learning!