The Rise of AI Agents: Will They Actually Change School and Work?

Imagine this.

You are taking an online graduate school course. After a few weeks, you see that the majority of the course is asynchronous. You have to do some readings, complete discussion board posts and replies, write a few essays in response to prompts, and of course take a few quiz-like assessments.

You open up your local AI agent on your computer and give it instructions: Every week, log onto my school’s Learning Management System (Canvas, Schoology, Google Classroom, etc.) and complete all of the assignments listed in the course’s calendar. For longer assessments and essays, please email me the rough draft before submitting so I can edit it if need be.

Behind the scenes this is what happens:

An LLM opens your web browser, logs into Canvas, navigates through the course, checks the course calendar, reads, replies to, and makes posts in discussion forums, completes and submits written assignments, takes quizzes, and does literally everything fully autonomously, without any intervention from you (the learner) whatsoever.

This is the exact scenario David Wiley describes: not something to imagine, but something available and real to learners right now.
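To make that loop concrete, here is a minimal sketch of what an agent like that might look like under the hood. Everything in it is an assumption for illustration only, not David Wiley’s demo or any vendor’s actual product: it uses Playwright for the browser automation, a placeholder draft_with_llm() function standing in for whatever model the agent calls, and made-up selectors and URLs for a Canvas-like site.

```python
# Minimal sketch of an "LMS homework agent" loop (illustrative assumptions only).
from playwright.sync_api import sync_playwright

LMS_URL = "https://canvas.example.edu"  # hypothetical institution URL


def draft_with_llm(prompt: str) -> str:
    """Placeholder for the agent's LLM call; the model and API are assumptions."""
    raise NotImplementedError


def run_agent(username: str, password: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()

        # 1. Log into the LMS (selectors are illustrative, not real Canvas markup).
        page.goto(LMS_URL + "/login")
        page.fill("#username", username)
        page.fill("#password", password)
        page.click("button[type=submit]")

        # 2. Read this week's assignment from the course's assignments page.
        page.goto(LMS_URL + "/courses/123/assignments/456")
        assignment_text = page.inner_text(".assignment-description")

        # 3. Have the LLM draft a response, then paste and submit it.
        draft = draft_with_llm("Complete this assignment:\n" + assignment_text)
        page.fill("textarea[name=submission]", draft)
        page.click("button.submit-assignment")

        browser.close()
```

The point is not the specific code; it is that every step a student would do by hand (log in, read, write, submit) maps onto a call an agent can make on its own.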

AI agents are autonomous LLMs that can use your computer, the internet, and any websites or apps you’ve given them login credentials for, without help or assistance from a human (besides providing initial direction).

We are entering a phase where the kids don’t even have to “go to [insert AI app]” anymore to get help with or do assignments. With Lindy AI, anyone can make an AI agent that does work behind the scenes. Anthropic’s Claude now has a computer-use capability that lets it take over your computer to do work.

We are thinking about AI’s role in education wrong. So many folks are talking about how it’s going to personalize learning, save teachers time, and provide tutors for every kid. Maybe it does those things, but for now the impact is in a different area.

Any work we assign that students do not want to do, do not have time to do, or do not care about doing can be outsourced to their own AI agent. This agent can write, talk, and even make videos that look like them with HeyGen.

The same goes for a lot of the work we all do on a daily basis, whether it is responding to emails, creating presentations, attending meetings, assessing work, giving feedback, or creating plans, activities, and resources. AI can now do that, and its abilities are only improving.

Imagine an AI agent that grades, gives feedback, and logs into your grade book to enter those grades and add comments. How many teachers would use a tool like this?
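A sketch of that workflow is almost trivial. Both helpers below are hypothetical stand-ins (llm_feedback() for the model call, post_grade() for whatever gradebook API or browser automation would actually record the result); the structure is the point.

```python
# Illustrative sketch of a grading-agent loop; both helpers are hypothetical.
def llm_feedback(submission: str, rubric: str) -> tuple[int, str]:
    """Return a (score, comment) pair from the LLM (placeholder)."""
    raise NotImplementedError


def post_grade(student_id: str, score: int, comment: str) -> None:
    """Log the score and comment into the gradebook (placeholder)."""
    raise NotImplementedError


def grade_all(submissions: dict[str, str], rubric: str) -> None:
    for student_id, submission in submissions.items():
        score, comment = llm_feedback(submission, rubric)
        post_grade(student_id, score, comment)
```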

Education is much more likely to be impacted by AI for these reasons. Can we keep assigning homework the way we have in a world of AI agents? Can we continue to assign long-term papers and projects in our current environment? Will our compliance-based model of school exacerbate these issues?

We are seeing many schools and organizations lean in two directions:

AI-Resistant: paper-and-pencil work, in-class and in-person tests, no outside homework, etc.

AI-Compatible: use AI as a resource, cite your use, defend your learning, and complete performance tasks with AI as a partner in the process.

Will there be a rise of AI-Compatible schools like @AlphaAustinHS?

Will there be more and more parents opting for homeschool options, as we’ve seen post-pandemic? Will folks go the “no-device” route like The Waldorf School? Or will we have a “Blockbuster” moment, where we believe we are headed in the right direction, only to be left behind?

AI will impact education (and already has), but it won’t look like the vision Sal Khan and Sam Altman are sharing of a world where anyone can learn anything with tutors for everything. That is a wonderful thought, but it is not in touch with most of our schooling. You still have to WANT to be tutored in order to benefit from that AI use case.

It reminds me of so many tech CEOs making this argument: “Everything needs to change in education! Overhaul the entire system!”

Maybe you’ve heard that sentiment from many people over the years. I know I have.

It irks me for many reasons, but maybe the biggest one is this: as a teacher or school leader, I never had the ability to “overhaul” the system even if I wanted to.

I could make changes and innovate inside the box. But changing everything was never an option.

An overwhelming majority of the teachers I’ve worked with want to do what is best for kids.

Schools, as institutions, are set up with the mission to help students learn, grow, and be prepared for life as adults.

Parents, too, want what is best for their kids.

If there was a big Venn Diagram of parents, teachers, and school leaders, the middle would be: We want what is best for kids.

So here we are, once again, with the current hype being that AI is going to change everything in education. Or that, because of artificial intelligence, we should go ahead and change the entire system.

While I agree some changes should be made across the board, we’ve been in this trap before (many times).

I see a gross overhyping of the “Generative AI” moment and a few serious misconceptions about what it means for the longer-term future of education.

Let’s dig in.

Overhyping The Moment

Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School and author of Co-Intelligence: Living and Working with AI, has broken down four distinct future scenarios for life with artificial intelligence.

  1. As Good as It Gets: In this scenario, AI technology ceases to make significant advances beyond the current state. Improvements are minimal, focusing only on incremental changes and slight enhancements to existing technologies.

  2. Slow Growth: Here, AI continues to advance but at a much slower, more predictable rate. Instead of exponential growth, AI capabilities increase linearly. This slower pace allows society to adapt gradually to changes, integrating AI into various fields like therapy, scientific research, and entertainment without overwhelming societal structures.

  3. Exponential Growth: AI technology progresses at an accelerating rate, far outpacing human ability to adapt. This scenario sees AI becoming significantly more capable within a short period, leading to profound changes in all aspects of life, including security, entertainment, and personal relationships. The rapid development may also increase risks associated with AI, such as enhanced hacking and misinformation campaigns.

  4. AGI (Artificial General Intelligence): In this scenario, AI reaches and surpasses human-level intelligence, leading to the emergence of machines that are not only as capable as humans but potentially far superior. This could result in the creation of superintelligence, radically transforming every aspect of society and human existence.

    OpenAI. (2024). ChatGPT [Large language model]. /g/g-TcSx5BYQr-the-four-futures-planner

We are currently in the “Slow Growth” scenario. Since the release of ChatGPT to the world on November 30, 2022, we have seen growth, but it has been incremental at best.

Many of the current education-focused products are built on the backbone of ChatGPT (or other similarly trained LLMs) that use Generative AI to do all kinds of tasks for teachers, school systems, and students.

Many teachers and school leaders have been less than impressed. I think Dan Meyer summed it up perfectly on his Mathworlds blog:

In December 2023, Education Week found low uptake of AI tools among teachers. Just 2% of teachers said “I use them a lot” and 37% of teachers said “I have never used them and don’t plan to start.” If you wanted reasons for optimism, you might note that 33% of teachers in that survey said they were using AI tools a little, some, or a lot.

Well, a year later, we have seen the release of new AI models from Meta, Anthropic, OpenAI, and Google, models which improve significantly on their predecessors. And according to Education Week’s newest poll, “43% of teachers said they have received at least one training session on AI,” an increase of nearly 50% from their previous survey.

Let’s put it plainly: the usage numbers haven’t budged in a year.

In spite of the release of more sophisticated AI models, in spite of increased training in AI, in spite of a summer to slow down, regroup, and take an undistracted look at this new technology, Education Week found that teachers are using AI at lower rates in October 2024 than in December 2023—from 33% to 32%.

What we are seeing right now is a lot of doing “old things in new ways” instead of doing “new things in better ways”.

You were making multiple-choice assessments before? Now you can make more of them, and faster.

You were creating rubrics before? Now you can make more rubrics tied to the standards, and much faster than doing it by yourself.

You were creating Slide presentations to use when lecturing? Now you can save a ton of time, and make them look nicer than before.

Is saving time a bad thing? Of course not.

Is making teaching more efficient something to get excited about in a world where more and more gets put on our plates? Yes, we should get excited, even if the technology is not fully there yet.

But, we may be overhyping the moment, in the same way we did with the advent of social media, mobile devices, and Web 2.0.

Things I’ve Said…

I wish there wasn’t a running record of things I’ve said about technology in education over the last two decades, but between this blog, YouTube, and social media — there are many things I’ve said/predicted and been way wrong about!

It is easy to get caught up in the moment and get excited about the possibilities of a better education for our kids when a new technology comes along.

Sadly, Smartboards didn’t revolutionize learning. Some of my favorite Web 2.0 tools like Glogster don’t even exist anymore. And let’s not even get started on netbooks, or ez-grader, or a host of other tools that came along to transform education.

When a new technology like artificial intelligence starts being used in education and the workplace, we often see three different camps:

  1. Sit and wait it out: These folks know there is something changing, but until lots of people start using it with success, they’ll sit on the sidelines and watch.

  2. Anti-Tech: Why would we change things? This is going to have damaging ramifications. I’m already doing fine with what we currently are doing. Stop the change and madness!

  3. Pro-Tech: They are running full-steam ahead using it and singing its praises. Everyone should get on board and, wow, things are going to change quickly.

Rogers’ diffusion of innovations model maps this curve of adoption.

Pro-tech are the innovators and early adopters. Sit-and-wait are the early and late majority. Anti-tech are the laggards.

We early adopters (myself included) tend to overhype a new technology early on, without truly understanding how it will impact society, and specifically education in our work.

This is not new.

It’s happened over the last two centuries with radio, television, computers, the internet, and now artificial intelligence.

However, regardless of the current “overhyping” about AI’s role in teaching and learning, it is going to impact ONE critical area.

The Game of School. It’s going to change.

The game of school has been an issue for a long time (for many different reasons). That “game” is being changed due to AI, and maybe that is a good thing. Time will tell.

If we focus on compliance instead of engagement and relevance, we will lose the attention battle. Humans are the ones who can make learning meaningful, and we sure need people more than ever in a world of tech and AI.

Our biggest task ahead is building learning experiences that are meaningful and relevant. Experiences that draw learners in and get them excited about the process of learning, not just about handing in a final product for a grade.

Let’s ask this question when we create curriculum, assignments, and learning experiences: Will this help foster a love of learning, or will it be another hoop to jump through in the game of school?

I don’t think the latter has much of a chance of sticking around in a world of AI that can master jumping through those hoops better than ever before.
