Pulling through the drive-through at Panda Express the other day, I was greeted by an “Ai Ordering Assistant” where the speaker for talking to a real person used to be. Of course, this system is not Ai. Phone-based customer service lines have had this technology for decades. The one at Panda even came complete with the admonition to “speak loudly and clearly,” just like the early phone menus did (I also remember, a long time ago, when some chain restaurant tried phone menus for taking orders over the phone).

We are well into the stage of Ai that every trend goes through: things that aren’t part of the trend – but are close enough – get labeled with the trend name to cash in on said trend. Of course, with other places like McDonald’s backing away from Ai as a failure, I’m wondering how long the Panda Express option will last. Among those that actually use Ai, frustration and abandonment are common. At least, that is what I am seeing all over social media as well as from friends, co-workers, and family that are trying it. A few stick with it, but most give it up, saying “it’s quicker to do it myself in the first place than fix what Ai spits out.”

Of course, there are those that occasionally claim Ai saves them time – for example, Miguel Guhlin in My Ai Breakthrough. But what did Guhlin actually do to reduce something from “weeks” of work to “10 minutes”? These people rarely say, mainly because the few that do run into people in their field replying “THAT took you weeks? Really?” Guhlin, however, goes on to explain a very time-consuming process – one that will take far more than weeks – to integrate this new approach into all of his tasks at work.

From what I have seen, Guhlin is an outlier. His breakthrough was asking a set of questions that most people I know started with before jumping into Ai… and since they are not having the same results, I doubt Guhlin has stumbled upon a very generalizable approach.

But this “breakthrough” is important to discuss, because we have reached the point in the Ai fad hype cycle where leaders are pulling out the “only the cool kids will do this” statements:

This is what we call “Tool Worship” in instructional design circles: trying to make the tool the center of education. Of course, Generative Ai is not a whole classification of education the way “open education” is – it is more a suite of tools from one main company, plus some competitors, much like Microsoft Office. Saying “Generative Ai Education” makes about as much sense as saying “Office Software Education.” This is the first of three important problems with Wiley’s assertion above.

The second problem is that I personally can’t really align Generative Ai with most principles of Open Education. Heather M. Ross does a better job of expressing this in her post The Soul of Open Is in Danger: “GenAi may be fun to play with and make some tasks easier, but the cost to the values of open, the planet, marginalized groups, and humanity as a whole are far too great.” Many people have agreed with this sentiment. A few have not, and their points against her basically come down to the following claims (each stated at the start of the item, followed by my response):

  1. Ai generally makes education more effective for cheaper, so it’s a net good. This is only true if you cherry-pick the successes uncritically while ignoring the massive evidence of its failures and ineffectiveness. The “My Ai Breakthrough” article above does have some statistics about how little Ai is being used and found effective, which generally match what I see in real life and online. A few like it; most don’t. And the truly effective services or tiers are rarely cheap – not to mention that those that are cheap are unlikely to stay free or low-cost as time goes on. Of course, you only see the high-cost options as cheaper if you also cherry-pick the results that say they save time and ignore the masses that point out they do not.
  2. Does Ai really cause harm? I would say those killed or injured by self-driving cars, job applicants skipped over by application-sorting Ai, those bullied into suicide by Ai bots, and many, many, many other people harmed by Ai would like a word. Questioning whether Ai harm actually exists in this day and age is… something.
  3. Ai environmental damage isn’t that serious when compared to overall global climate issues. This might sound like a line straight from a Big Business shill. It actually is (it was said to Chris Gilliard on Bluesky this year). But surprisingly, it is also coming from multiple people that are not fans of Big Business. It is an argument used to gloss over local environmental damage in favor of the “Whataboutism” of global issues. I’m so sure the cities that saw their lakes suddenly sucked dry when the new data center went online really care about Big Business arguments about personal habits being more dangerous. And the idea that renewable sources will meet the energy challenges? Highly unlikely to happen in anyone’s lifetime (including Gen Z’s). We need to deal with environmental problems TODAY, not in some magical future that may or may not happen.
  4. Damage to marginalized communities would be lessened if they would speak up. Except they already are speaking up – a lot of them, and for a long time. It boggles the mind that people act like there is a need for the marginalized to speak up when they already are. Most of them do not want their work used as Ai training data. A few don’t mind, but they are generally outliers in their communities. The fear of abuse runs centuries deep, and just saying we need to listen and amplify won’t calm those fears. Listening and amplifying have led to abuse as well – just look at U.S. politics for examples of that.
  5. Ai doesn’t “copy” work, it “learns” from it, so Ai copyright violation is not a thing. This is a popular argument, but it misses the point that Ai is a computer program that does not learn – it processes information and stores the resulting values in a system that doesn’t operate like an organic brain (because it doesn’t forget things). In most cases, you can’t use copyrighted content as input for computer processing (which is what Ai is) without compensating the copyright owner fairly. Even if you go with the problematic concept of Ai as “learning,” all learning is still subject to copyright. You have to buy books and videos. Libraries pay a fee to offer their materials to multiple people. So do video services like Netflix. When you borrow a book from a friend, that friend still paid for that one copy. Popular teachers, musicians, artists, etc. that get too close to copying what they were trained on run the risk of getting sued for copyright infringement. And many content creators have found their copyrighted ideas coming out of Ai, because yes, Ai does sometimes copy and reproduce content.
  6. Humans are all influenced by ideas, so Ai should be allowed to be as well. The problem is, Ai is a computer and does not suffer from memory loss or forgetfulness like humans do. Computers can process and store information perfectly – humans cannot. In reality, human brains don’t store or process information the way computers do, so any comparison between human brains and the computers that run Ai is just outdated science.

Some Ai enthusiasts would be better served by avoiding so much “whataboutism” in response to legitimate concerns.

Ross goes on to compare the attempt to change OpenEd into GenAiEd to colonialism. Her critics accused her of doing the real colonialism, for… reasons that are unclear. She said “get off our field,” and that was inexplicably changed to “your field” in the response. There are huge, important differences between “our” and “your” (assuming “your” was meant as singular in the criticism).

Anyway, the third problem with Wiley’s statement is the claim that “people who care deeply about affordability, access, and improving outcomes will shift their focus away from OER and toward generative Ai.” You can’t really blame Wiley for this, as this myopic, siloed view of education is often found throughout academia. Any of my students that have ever used Ai on an assignment have failed that assignment massively. These assignments existed before ChatGPT came on the scene – there is just no way to use Ai on them and score very high. Ai just can’t handle the assignments, and probably never will. I encourage my students to avoid using Ai on those assignments, but some still do and find out why. I also use OER for my classes. Does this mean I don’t “care deeply about affordability, access, and improving outcomes” because I have to tell my students to avoid Ai? Give me a break. I have a different type of class than Wiley does, and I know many, many educators like me. Generative Ai is NOT a solution for every class that currently uses OER.

This is what “worshipping the tool” means: you place it as the central starting point of your view of education and build everything around it. This happened with blogs, social media, virtual worlds, MOOCs, learning analytics, etc. near the downfall of each of those fads. All of those concepts are still around – just none of them are the future of education, open or otherwise. Like all other tools, they are just one of many, many different ways to accomplish learning.
