Last night I was invited to speak to a class of preservice teachers about the role of IT in education. It’s a hard topic to address since it’s so vast and all-encompassing. Do I talk about servers and switches or how to placate grumpy IT Techs (haha) or share the nuances of configuring an MDM? I wasn’t sure so I went in empty-handed and ended up tackling all of those topics and more.
In fact, one of the questions was about the future of technology in education and where I saw it heading. I brought up VR, AR, AI, and the like, but I shared one caveat – none of those technologies will make an impact without a teacher. I think (and hope) that, for many, COVID and learning from home have shown that teaching is much more than following a pacing guide or putting students on intervention software for 30 minutes a day, every day. It’s both an art and a science.
And as I reflected on that, I dug out a book I had read on Artificial Intelligence last year and laughed at all the connections between AI and teaching.
You Look Like a Thing…
In You Look Like a Thing and I Love You, Janelle Shane explains how Artificial Intelligence (AI) can sometimes be a terrible way to solve a problem. Honestly, AI systems just aren’t as smart as we’ve been duped into believing.
In fact, most of the issues engineers and researchers have been having with AI are probably issues you’ve confronted at some point in your teaching career.
AI is Dumb
I don’t mean the concept. I mean the actual computers running it. It’s not their fault. They just lack the capacity to perform a multitude of complex tasks at one time. Some workarounds involve stringing numerous computers together, each performing one part of a multi-part scenario (kind of like student project groups). But still, at their core, there are some serious limitations.
Consider how long it took you to learn to ride a bicycle. I’m sure you learned in far fewer than the hundred or so crashes one bicycle-riding robot needed before it could go even a few meters without falling – and it took thousands more crashes before it could ride for a few tens of meters!
Most of this is because computers can’t remember much – their brainpower is exerted on the immediate task, and so there’s not much ability to plan ahead and make generalizations.
There are many instances in the book in which AI was terrible at solving a problem, and the reasons fell into a few categories.
Too broad a problem
In 2019, researchers from Nvidia trained an AI to generate images of human faces. It did pretty well, except for things like earrings not matching or bizarre backgrounds. But when asked to learn about cats, it got it all wrong, producing images with extra limbs, extra eyes, and distorted faces.
When the AI trained on human faces, the photos were all forward-facing. But the cat photos showed cats in all sorts of positions (as cats are prone to be), so the AI couldn’t figure out what exactly made up a cat face. Check out ThisCatDoesNotExist for creepy examples.
We’ve seen it happen in our classrooms. We introduce an algorithm in math and all of a sudden, students are using it for everything, even when it makes no sense. Or we tell students that an essay hook can be to start with a question and then every single paper starts with a question until the next hook is introduced.
Not enough data for it to figure out what’s going on
Most AI learn by example. If you give the machine enough examples of something, it learns the patterns and begins to imitate them. In one AI experiment, a machine was given different ice cream flavor names and told to create its own.
Unfortunately, the machine doesn’t know what ice cream is, or even English, or how flavors work. It only knows how to translate each letter, space, and punctuation mark into a number and then keep analyzing those numbers for patterns. The result? Flavors like Bourbon Oil and Roasted Beet Pecans and Milky Ginger Chocolate Peppercorn.
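If you’re curious what “turning letters into numbers and looking for patterns” can mean in practice, here is a tiny sketch in Python. It is not Shane’s actual model (she used a neural network trained on thousands of real flavor names); the flavor list is made up, and the “pattern” here is just counting which character tends to follow which. But it shows why the output can be fluent-looking nonsense: the program never knows what ice cream is.

```python
# Toy sketch: encode characters as numbers, learn which numbers follow which,
# and generate a new "flavor" from those patterns. Illustration only --
# the flavor list is invented and the model is far simpler than a real one.
import random

flavors = ["Vanilla Bean", "Chocolate Fudge", "Roasted Pecan",
           "Ginger Peach", "Bourbon Caramel"]

# "Translate each letter into a number" -- the model only ever sees these codes.
encoded = [[ord(ch) for ch in name] for name in flavors]

# Count a simple pattern: given one character code, which codes follow it?
follows = {}
for name in encoded:
    for a, b in zip(name, name[1:]):
        follows.setdefault(a, []).append(b)

# Generate a new "flavor" by walking those patterns one character at a time.
code = encoded[random.randrange(len(encoded))][0]  # start like one of the examples
result = [code]
for _ in range(15):
    if code not in follows:
        break
    code = random.choice(follows[code])
    result.append(code)

print("".join(chr(c) for c in result))  # e.g. plausible-sounding gibberish
```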
Textbooks are notorious for not giving enough data. How can the American Revolution be condensed into one chapter? Ask any textbook publisher and they’ll show you!
Accidentally gave it confusing or unneeded data
When I learned about the Essential Elements of Instruction, which is based on Madeline Hunter’s research, one of the elements was Teach to the Objective. I thought, “well that’s easy. Just teach the lesson” but it turned out to be much more complex than I realized.
For example, if the objective is for students to list two major reasons for the Civil War, then teaching about how the economics of slavery and political control of that system were central to the conflict makes sense. However, if I tell the story about my trip to a plantation in Atlanta and how depressing it was to see the slave quarters, I’ve now begun a non-congruent conversation that may lead to confusion about what the objective is and what students need to be able to do.
Machines aren’t any better. Go back to the bizarre ice cream flavors. Although the machine was able to figure out the pattern of ice cream names, nobody bothered to tell the AI that certain flavors just aren’t very yummy as ice cream. It was taught ingredients, but not ice-cream-specific ingredients.
Trained task was much simpler than the real-world application
In theory, it should be very easy to teach an AI how to drive a car. Program it with the rules of the road; teach it to identify lights and signals and road lines; and add some calculations for stopping distances and you’re good to go. However, we know that the reality of driving is much more complex and nuanced. In 2016, a self-driving car failed to recognize a flatbed truck as an obstacle and caused a fatal collision.
Why?
The car had been trained to drive on the highway, and as such, only recognized trucks from their front and rear view. The driver, however, kept the self-drive mode engaged on city streets. A semi-truck pulled out and crossed in front of the car. Thinking the truck was an overhead sign, the car did not stop.
I can’t tell you how many times I’d get frustrated after looking at the results of my students’ assessments. Why were they not understanding the concepts I had taught for weeks? Honestly, the problem was not their lack of understanding. They understood exactly what I had taught them to understand. But what I had failed to do was put that understanding in a context of real-world use. We can teach math algorithms, or five-paragraph essays, all day, but until students are shown how to adapt those concepts and apply them, they’re at a loss.
So What?
According to Shane, the best uses of AI are going to be with human supervision to make people more effective. AI will be used as a first draft tool but then humans will edit the results.
AI is dumb, but teachers are not. We are adaptive. We may make some of the same initial mistakes as AI, but the difference is, we learn from them. We reflect, and we get better. The distance learning that happens this Fall will be hugely better than the distance learning provided in March.
So take a deep breath, and remind yourself that you’re smarter than AI and you’ve totally got this!