Is Concern for the Harms of Ai Propaganda, Myth, or Something Else?

As the Ai hype train keeps chugging along, you might have noticed that those that raise concerns about the dangers of Ai often get dismissed or even labeled as “propagandists” for questioning the dominant narratives. Propagandist is a weird thing to call someone for having an opinion, unless I guess you want to label everyone in the world a propagandist. The hype has gotten so wild that it has brought Audrey Watters back to commenting on Ed-Tech. Some of you really need to take a long hard look at what you did to get us here.

Despite the dismissive hand-waving, or even the weird claims that there is somehow money to be made in being anti-Ai (despite clear indications of where all the investment, grants, speaking opportunities, etc are mostly going), very real concerns are still not being addressed in most of the hype. There are (at least) four real-world concerns that many wish the pro-Ai crowd would address more often and more seriously:

  1. The actual dangers of Ai, such as Ai-driven cars killing people, Ai-driven bots harassing people online, bad Ai psychological advice being handed out, vast amounts of natural resources currently being consumed by data centers, etc, etc.
  2. Inaccurate descriptions of what Ai actually is, or even what counts as Ai, leading to bad business decisions that circle back to #1 in the form of lost jobs, harder-to-obtain jobs, etc.
  3. Overblown descriptions of the abilities of Ai to replicate human activities, see #2 for where that leads.
  4. Interest in Ai and the effectiveness of Ai tools being overblown and/or inaccurately reported, usually erasing the difficulties it causes for users, leading of course back to #2 and #1.

A recent article by Eryk Salvaggio (who himself uses Ai) finally seems to break through the hype and BS from the pro-“Ai is the future of everything” crowd. In Challenging The Myths of Generative Ai, Salvaggio examines the problematic descriptions and metaphors that are very popular in Ai circles:

  1. Control Myths, including the “Ai makes people more productive” myth (showing how the Big Wigs at the top who force Ai on others view it as increasing productivity, while the workers that actually use it often don’t) and the “better prompts produce better results” myth.
  2. Intelligence Myths, such as the “Ai is learning” myth (and its various flavors) and the “Ai is creative” myth (one that artists and creative types everywhere are getting very frustrated with) – which has also been explored here as well.
  3. Futurist Myths, including the “Ai will get better with more data and time” myth and the “complex behaviors are soon to emerge from Ai” myth.

Again, Salvaggio is not anti-Ai – he even talks about how he uses Ai. This is a common mistake made by those that criticize those of us that criticize Ai: assuming we don’t understand Ai because we don’t like it or look at it the same way. Rookie mistake, of course, but one many seasoned commentators are making.

Of course, even though Salvaggio provides some sources to back up his points, his goal is really more about addressing the language we use as a society about Ai. He even offers some better myths to use at the end. It should be noted, though, that his points are also coming up in data-driven articles.

For example, in Watching the Generative Ai Hype Bubble Deflate by David Gray Widder and Mar Hicks, you see almost all of the same points addressed, but this time with data and information from many sources (it is important to notice who is not quoted in this article). Widder and Hicks give a much more dire reason to care about these problems, as over-investing in a hype bubble could have dangerous effects on the environment (as it has for other hype cycles in the past).

After I wrote about the three possible ways that I saw the Ai hype dying, it looks like the first option is what is happening… so far. A harder crash or even sudden death is always possible, but I see no signs of that happening yet. No one option is really that clear at this point.

So is concern for the harms of Ai propaganda? Propaganda is defined as “information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.” All sides have a bias and want to promote a point of view. The side that is concerned for the harms of Ai feels that it is trying to cut through the misleading myths to promote a more complex, accurate mythology that can “help us understand these technologies” and “animate how designers imagine these systems,” as Salvaggio says. Ai might even be too complex for myths, so it is long past time to stop looking at concerns over Ai as propaganda or even mere myths. We need real talk and real action on the problems that are quickly arising from unfettered adoption of unproven technology.

Ai: The Trend That Was Promised To Be Different Keeps Following The Path of All Other Fads

Pulling through the drive-through at Panda Express the other day, I was greeted by an “Ai Ordering Assistant” where the speaker to talk to a real person used to be. Of course, this system is not Ai. Phone-based customer service lines have had this technology for decades. The one at Panda even came complete with the admonition to “speak loudly and clearly,” just like the early phone menus did (I also remember a long time ago when some chain restaurant tried phone menus for taking orders over the phone).

We are well into the stage of Ai that every trend goes through: things that aren’t part of the trend – but are close enough – get labeled with the trend name to cash in on said trend. Of course, with other places like McDonald’s backing away from Ai as a failure, I’m wondering how long the Panda Express option will last. Among those that actually use Ai, you see a lot of frustration and abandonment. At least, that is what I am seeing all over social media as well as from friends, co-workers, and family that are trying it. A few stick with it, but most give it up saying “it’s quicker to do it myself in the first place than fix what Ai spits out.”

Of course, there are those that occasionally claim that Ai saves them time. For example, Miguel Guhlin in My Ai Breakthrough. But what did Guhlin actually do to reduce something from “weeks” of work to “10 minutes?” These people rarely say, mainly because the few that do say run into people in their field replying “THAT took you weeks? Really?” But then Guhlin goes on to describe a very time-consuming process that will take more than weeks to integrate this new approach into all of his tasks at work.

From what I have seen, Guhlin is an outlier. His breakthrough was asking a set of questions that most people I know started with before jumping into Ai… so since they are not having the same results, I doubt Guhlin has stumbled upon a very generalizable approach.

But this “breakthrough” is important to discuss, because we have reached the point in the Ai fad hype cycle where leaders are pulling out the “only the cool kids will do this” statements:

This is what we call “Tool Worship” in instructional design circles: trying to make the tool the center of education. Of course, Generative Ai is not a whole classification of education like “open education” is – it is more of a suite of tools with one main company and some competitors, like Microsoft Office. Saying “Generative Ai Education” makes about as much sense as saying “Office Software Education.” This is really the first of three important problems with Wiley’s assertion above.

The second problem is that I personally can’t really align Generative Ai with most principles of Open Education. Heather M. Ross does a better job of expressing this in her post The Soul of Open Is in Danger: “GenAi may be fun to play with and make some tasks easier, but the cost to the values of open, the planet, marginalized groups, and humanity as a whole are far too great.” Many people have agreed with this sentiment. A few have not, and their points against her basically come down to the following claims:

  1. Ai generally makes education more effective for cheaper, so it’s a net good. This is only really true if you cherry-pick the successes uncritically, while ignoring the massive evidence of its failures and ineffectiveness. The “My Ai Breakthrough” article above does have some statistics in it about how little Ai is being used and found effective, which generally match what I see in real life and online. A few like it, most don’t. And the truly effective services or tiers are rarely cheap – not to mention that those that are cheap are unlikely to stay free or low cost as time goes on. Of course, you also only see the high-cost options as cheaper if you cherry-pick the results that say they save time and ignore the masses that point out they do not.
  2. Does Ai Really Cause Harm? I would say those killed or injured by self-driving cars, job applicants skipped over by application-sorting Ai, those being bullied into suicide by Ai bots, and many, many, many other people harmed by Ai would like a word. Questioning whether Ai harm actually exists in this day and age is… something.
  3. Ai environmental damage isn’t that serious when compared to overall global climate issues. This might sound like a line straight from a Big Business shill. It actually is (it was said to Chris Gilliard on Bluesky this year). But it is surprisingly coming from multiple people that are not fans of Big Business. It is an argument used to gloss over local environmental damage in favor of the “Whataboutism” of global issues. I’m so sure the cities that saw their lakes suddenly sucked dry when the new data center came online really care about Big Business arguments about personal habits being more dangerous. And the idea that renewable sources will meet the energy demands? Highly unlikely to happen in anyone’s lifetime (including Gen Z’s). We need to deal with environmental problems TODAY, not in some magical future that may or may not happen.
  4. Damage to marginalized communities would be lessened if they would speak up. Except they already are speaking up – a lot of them, for a long time. It boggles the mind that people want to act like there is a need for the marginalized to speak when they already are. Most of them do not want their work used as Ai training data. A few don’t mind, but they are generally outliers in their community. The fear of abuse runs centuries deep, and just saying we need to listen and amplify won’t calm those fears. Listening and amplifying have led to abuse as well – just look at U.S. politics for examples of that.
  5. Ai doesn’t “copy” work, it “learns” from it, so Ai copyright violation is not a thing. This is a popular argument, but it still misses the point that Ai is a computer program that neither copies nor learns in the human sense. It processes information and stores the resulting variables in a database that doesn’t operate like an organic brain (because it doesn’t forget things). You can’t use copyrighted content to process computer data (which is what Ai is doing) without fairly compensating the copyright owner in most cases. Even if you go with the problematic concept of Ai as “learning,” all learning is still subject to copyright. You have to buy books and videos. Libraries pay a fee to offer their materials to multiple people. So do video services like Netflix. When you borrow a book from a friend, that friend still paid for that one copy. Popular teachers, musicians, artists, etc that try to get too close to copying what they were trained on run the risk of getting sued for copyright infringement. And many content creators have found their copyrighted ideas coming out of Ai, because yes, Ai does sometimes copy and reproduce content.
  6. Humans are all influenced by ideas, so Ai should be allowed to be as well. The problem is, Ai is a computer and does not suffer from memory loss or forgetfulness like humans do. Computers have the ability to process and store information perfectly – humans do not. In reality, human brains don’t store or process information like computers do, so any comparison between human brains and the computers that make up Ai is just outdated science.

Some of the Ai enthusiasm would be better served by avoiding so much “whataboutism” in response to legitimate concerns.

Ross goes on to compare trying to change OpenEd to GenAiEd with colonialism. Her critics accused her of doing the real colonialism for… reasons that are unclear. She said “get off our field” and that was inexplicably changed to “your field” in the response. There are huge important differences between “our” and “your” (assuming it was meant as singular in the criticism).

Anyways, the third problem with Wiley’s statement is the claim that “people who care deeply about affordability, access, and improving outcomes will shift their focus away from OER and toward generative Ai.” You can’t really blame Wiley for this, as this myopic, siloed view of education is often found throughout academia. Any of my students that have ever used Ai for an assignment have failed that assignment massively. These assignments existed before ChatGPT came on the scene – there is just no way to use Ai on them and score very high. Ai just can’t handle the assignment, and probably never will be able to. I try to encourage my students to avoid using Ai for those assignments, but some still do and find out why. I also use OER for my classes. Does this mean that I don’t “care deeply about affordability, access, and improving outcomes” because I have to tell my students to avoid Ai? Give me a break. I have a different type of class than Wiley does. I know many, many educators like me. Generative Ai is NOT a solution for every class that currently uses OER.

This is what “worshipping the tool” means. You place it as a central starting point for your view of education and build everything around it. This happened with blogs, social media, virtual worlds, MOOCs, learning analytics, etc near the downfall of each of those fads. All of those concepts are still around – just none of them are the future of education, open or otherwise. Like all other tools, they are just one of many, many different ways to accomplish learning.

Popularity of ChatGPT and Listening to What Youth Are Doing With AI

A lot has been said about the recent Pew Research report that found A Majority of Americans Have Heard of ChatGPT, But Few Have Tried It Themselves. Some seem to think it proves AI is catching on with younger adults, while others think it proves AI is dying out.

The truth is that it is kind of a mixed bag. The “majority” of Americans that have heard of ChatGPT is 58% (increasing to 67% for ages 18-29), which seems closer to a slight majority. That fares better than past trends like blockchain if I am remembering correctly, but slightly less than trends like Second Life. But then again, Second Life made it into TV shows and movies, so of course it was probably more widely known back in the day. I even remember watching a CSI episode about Second Life.

Some have questioned why the research was only on ChatGPT and not AI in general. I would be interested in that research as well. But just like with virtual worlds and blockchain, you don’t just pull up an AI app and do AI. You have to use it through a company, and ChatGPT seems to be the default tool now just like Second Life was for virtual worlds. Sure, Facebook, Google, and others are forcing AI on people, but if you want to know what people think about any given trend, you need to look at the tools where people actually choose to use said trend.

Some people have skipped ahead to the end of the research and proclaimed that 38% of younger adults (under 50) find ChatGPT extremely or very useful. But that is not 38% of all younger adults. Of the 58% that have heard of ChatGPT, less than 31% of those younger adults are using it, and of those that are using it, 38% find it “extremely” or “very useful.” I leave out “somewhat useful” because most people just don’t use things much that they find somewhat useful. Think of people that buy all kinds of kitchen tools that sit on a shelf barely touched – that is “somewhat” useful.

I realize I am kind of jumping across categories here because the Pew article is sharing incomplete data, but the math comes out to somewhere between 3 and 5 out of every 100 younger adults in the U.S. finding ChatGPT useful. Ouch. Resetting those numbers to just focus on those that have heard of ChatGPT, only about 6-7 out of every 100 younger adults that have heard of ChatGPT find it useful. Even just focusing on those that use ChatGPT, 38% is a rather low share of users finding it useful.
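For anyone who wants to check that back-of-the-envelope math, here is a rough sketch of the chained percentages in Python. The usage rate among those who have heard of ChatGPT is an assumption on my part for illustration (the Pew write-up does not publish every cross-tab needed for a precise number), so treat the output as ballpark figures rather than exact ones.

```python
# Rough back-of-the-envelope chain for the Pew numbers discussed above.
# "used_given_heard" is an ASSUMED figure for illustration only; Pew does
# not publish every cross-tab needed to compute this precisely.
heard_of_chatgpt = 0.67    # share of adults 18-29 who have heard of ChatGPT
used_given_heard = 0.18    # assumed share of those who have heard that have used it
useful_given_used = 0.38   # share of users calling it "extremely" or "very" useful

useful_among_heard = used_given_heard * useful_given_used   # roughly 0.07
useful_among_all = heard_of_chatgpt * useful_among_heard    # roughly 0.05

print(f"Find it very useful, among those who have heard of it: {useful_among_heard:.0%}")
print(f"Find it very useful, among all younger adults:         {useful_among_all:.0%}")
```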

It would also be interesting to see what this same study would say about those under 18. The teens I know are kind of “meh” towards all AI, but generally see ChatGPT as comedy entertainment. People are saying we should listen to how the youth are using AI, and I do agree on that. But it is also interesting to see some leaders (at least in the U.S.) that one week declare that mobile devices are ruining attention spans, schools, society in general… but then the next week are saying “the youth of today love AI, we should listen to them!”

It’s the same old story with the “kids these days” – when they are doing something you don’t like, take it away and regulate it. When they are doing something you do like, then get out of their way and listen to them unquestioningly! Might I suggest that maybe your approach to video games and screen time for kids should have similarities with your approach to AI for kids? Just where do you think they will be accessing AI tools anyway?

Have We Reached Peak AI? I Told You So….

It’s always weird when someone accuses me of “coming up with” some extreme hate take on AI. They immediately let me know they are not reading that many different opinions on AI – because they would easily see who I am listening to more and more as time goes by (it’s not the “AI is inevitable” crowd). Even the title of this post is a direct nod to two of the articles I will reference in this post. If you didn’t know that by reading it, it is time to expand your reading circles some.

But lest someone think it is me coming up with my own anti-AI takes, read this article by Edward Zitron titled Have We Reached Peak AI? (hint: look at the first half of my post title now). You don’t have to like Zitron, but he does cite his points and includes a lot of nuance. His point is that a lot of people have been hoodwinked into believing AI is inevitable, and therefore a lot of people at the top need you to believe that this is true so they won’t lose funding:

“Sam Altman desperately needs you to believe that generative AI will be essential, inevitable and intractable, because if you don’t, you’ll suddenly realize that trillions of dollars of market capitalization and revenue are being blown on something remarkably mediocre.”

You could also rephrase this to say “[insert name of thought leader here] desperately needs you to believe that generative AI will be essential, inevitable and intractable in education, because if you don’t, you’ll suddenly realize that millions (probably soon to be billions) of dollars of grant funding are being blown on something remarkably mediocre.” Mike Caulfield said it better on one of those alt-Xitter sites:

“I do think the bubble will pop soon, but my gosh there is so much money (and so many research dollars) chasing this. It’s more real than MOOCs and blockchain, less impactful so far than the invention of spreadsheets. Spreadsheets were quite impactful btw, just limited to some specific areas of application. But also spreadsheets also weren’t a spam engine.”

Anyways – at least read the entire article by Zitron so that when all of this falls apart, you will maybe have an idea of why.

If you think Zitron is overblowing things, then consider this: Audrey Watters quit Ed-Tech writing a few years ago… but the AI hype has gotten so bad that she still had to come back and make a comment about it:

“…much of the early history of artificial intelligence too, ever since folks cleverly rebranded it from “cybernetics,” was deeply intertwined with the building of various chatbots and robot tutors. So if you’re out there today trying to convince people that AI in education is something brand new, you’re either a liar or a fool – or maybe both. Oh, but Audrey. This time, it’s different.”

This quote comes from a post titled I Told You So (hint: now look at the second half of my post title). Watters knows her Ed-Tech history and sees through the hype. Do you know how many “AI in Education” and “Utilizing AI in Instructional Design” sessions I have sat through in the past year that swear it is different this time, while not mentioning any of the problems that many have brought up? And then show an example of an AI-generated course or assignment that is pure garbage (while breathlessly proclaiming “how amazing” this is)?

Anyways, I wish I could have worked “Time for a Pause” into the post title without sounding clunky, but another article to read is Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good by Ben Williamson, Alex Molnar, and Faith Boninger. Some have called it extremist, others have made long threads about everything they find wrong with it. I could also sit here refuting those disagreements (like I have with posts in the past), but it would probably be ignored outside of those that are already suspicious of AI hype. But the main concern is encapsulated here:

“Years of warnings and precedents have highlighted the risks posed by the widespread use of pre-AI digital technologies in education, which have obscured decision-making and enabled student data exploitation. Without effective public oversight, the introduction of opaque and unproven AI systems and applications will likely exacerbate these problems.”

I appreciate the use of the term “pre-AI,” because it is true that we don’t have actual Artificial Intelligence and we are still pretty far away from attaining it (if it is ever possible). But as AI continues to not live up to the hype, we will probably see constant rebranding (from cybernetics to AI to AGI to Artificial Super Intelligence and so on) in an attempt to obscure the fact that there really isn’t intelligence there (no matter how badly some people want to redefine “intelligence”).

Like I have said before, all of this is sad because there are uses for AI that can be helpful – especially in the field of accessibility. All of that could go down the drain if we do hit peak AI (or some would say “when” we do). We’ll see how this all plays out. I just hope people investing a lot in AI are listening to all sides and are making sure they are ready for every possibility. And FYI – this is NOT an April Fool’s post.

ChatGPT is Generating Nonsense and No One Knows Why

Every time I start a new post over the past couple of months, it just devolves into an “I told you so” take on some aspect of AI. Which, of course, just gets a little old after trying to tell people that learning analytics, virtual reality, blockchain, MOOCs, etc, etc, etc are all just like any other Ed-Tech trend. It’s never really different this time. I read the recent “news” that claims “no one” can say why ChatGPT is producing unhinged answers (or bizarre nonsense, as others called it). Except that, of course, many people (even some working in AI) said this would happen and gave reasons why a while back. So, as usual, they mean “no one that we listen to knows why.” Can’t give the naysayers any credibility for knowing anything. Just look at any AI in education conference panels that never bring in any true skeptics. It’s always the same this time.

Imagine working a job completely dependent on ChatGPT “prompt engineering” and hearing about this, or spending big money to start a degree in AI, or investing in an AI technology for your school, or any other way people are going big with unproven technology like this. Especially when OpenAI just shrugs and says “Model behavior can be unpredictable.” We found out last week just how many new “AI solutions” are just feeding prompts secretly to ChatGPT in the background.
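To be clear about what “feeding prompts secretly to ChatGPT in the background” usually looks like in practice: many of these “AI solutions” are little more than a thin wrapper around OpenAI’s API with a hidden system prompt. The sketch below is hypothetical (the product name, prompt, and model choice are all made up for illustration), but it captures how little such a “solution” actually adds.

```python
# Hypothetical sketch of a thin "AI solution" wrapper. The product name,
# hidden prompt, and model are invented for illustration. The whole "product"
# is a secret system prompt plus a pass-through call to OpenAI's API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HIDDEN_SYSTEM_PROMPT = "You are StudyBuddy Pro, an expert tutor."  # the "secret sauce"

def study_buddy_pro(user_text: str) -> str:
    """What the customer is told is a proprietary tutoring engine."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for this sketch
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

print(study_buddy_pro("Explain photosynthesis in two sentences."))
```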

Buried at the end of that Popular Science article is probably what should be called out more: “While we can’t say exactly what caused ChatGPT’s most recent hiccups, we can say with confidence what it almost certainly wasn’t: AI suddenly exhibiting human-like tendencies.” Anyone that tries to compare AI to human learning or creativity is just using bad science.

To be honest, I haven’t paid much attention to the responses (for or against) to my recent blog posts, just because too many people have bought into the “AI is inevitable” Kool-Aid. I am the weirdo that believes education can choose its own future if we would ever just choose to ignore the thought leaders and big money interests. Recently Ben Williamson outlined 21 issues that show why AI in Education is a Public Problem, with the ultimate goal of demonstrating how AI in education “cannot be considered inevitable, beneficial or transformative in any straightforward way.” I suggest reading the whole article if you haven’t already.

Some of the responses to Williamson’s article are saying that “nobody is actually proposing” what he wrote about. This seems to ignore all of the people all over the Internet that are, not to mention that there have been entire conferences dedicated to saying that AI is inevitable, beneficial, and transformative. I know that many people have written responses to Williamson’s 21 issues, and most of it boils down to saying “it happened elsewhere so you can’t blame AI” or “I haven’t heard of it, so it can’t be true.”

Yes, I know – Williamson’s whole point was to show how AI is continuing troubling trends in education. We can (and should) focus on AI or anything else that continues those trends. And he linked to articles that talked about each issue he was highlighting, so claiming no one is saying what he cited them as saying is odd. AI enthusiasts are some of the last holdouts on Xitter, so I can’t blame people that are no longer active there for not knowing what is being spread all over Elon Musk’s billion dollar personal toilet. Williamson is there, and he is paying attention.

I am tempted to go through the various “21 counterpoints / 21 refutations / 21 answers / etc” threads, but I don’t really see the point. Williamson was clear that he takes an extreme position against using AI in schools. Anyone that refutes every point, even with nuance, is just taking an extreme position in a different direction. To respond in kind would just circle back to Williamson’s points. Williamson is just trying to bring attention to the harms of AI. These harms are rarely addressed. Some conferences will maybe have a session or two (out of dozens and dozens of sessions) that talk about harms and concerns, usually “balanced out” with points about benefits and inevitability. Articles might dedicate a paragraph or two. Keynotes might make mention of how “harms need to be addressed.” But how can we ever address those harms if we rarely talk about them on their own (outside of “pros and cons” arguments), or just refute every point anyone makes about their real impact?

Of course, the biggest (but certainly not the best) institutional argument against AI in schools comes from OpenAI itself saying that it would be “impossible to train today’s leading AI models without using copyrighted materials” (materials for which they are not compensating the copyright holders, FYI). Using ChatGPT (or any AI solution that followed a similar model) is a direct violation of most schools’ academic integrity statements – if anyone actually meant what they wrote about respecting copyright.

I could also go into “I told you so”s about other things as well. Like how a Google study found that there is little evidence that AI transformer models’ “in-context learning behavior is capable of generalizing beyond their pretraining data” (in other words, AI still doesn’t have the ability to be creative). Or how the racial problems with AI aren’t going away or getting better (Google said that they can’t promise that their AI “won’t occasionally generate embarrassing, inaccurate, or offensive results”). Or how AI is just a fancy form of pattern recognition that is nowhere near equatable to human intelligence. Or how AI takes more time and resources to fix than just doing it yourself in the first place. Or so on and so forth.

(Of course, very little of what I say here is really my original thought – it comes from others that I see as experts. But some people like to make it seem like I am making up a bunch of problems out of thin air.)

For those of us that actually have to respond to AI and use AI tools in actual classrooms, AI (especially ChatGPT) has been mostly a headache. It increases administration time due to all the bad output it generates that needs to be fixed. Promises of “personalized learning for all” are meh at best (on a good day). The ever-present uncanny valley (that no one can really seem to fix) makes application to real-world scenarios pointless.

Many are saying that it is time to rethink the role of the humans in education. The role of humans has always been to learn, but there never really was one defined way to fill that role. In fact, the practice of learning has always been evolving and changing since the dawn of time. Only people stuck in the “classrooms haven’t changed in 100 years” myth would think we need to rethink the role of humans – and I know the people saying this don’t believe in that myth. I wish we had more bold leaders that would take the opposite stance against AI, so that we can avoid an educational future that “could also lead to a further devaluation of the intrinsic value of studying and learning,” as Williamson puts it.

Speaking of leadership, there are many that say that universities have a “lack of strong institutional leadership.” That is kind of a weird thing to say, as very few people make it to the top of institutions without a strong ability to lead. They often don’t lead in the way people want, but that doesn’t mean they aren’t strong. In talking with some of these leaders, they just don’t see a good case that AI has value now or even in the future. So they are strongly leading institutions in a direction where they do see value. I wish it would be towards a future that sees the intrinsic value of studying and learning. But I doubt that will be the case, either.

How Could AI “Just Disappear Someday” Anyways?

For me, one of the more informative ways to track a trend in education once it gets wider societal notice is by following people outside of education. As predicted, people are starting to complain about how low the quality of the output from ChatGPT and other generative AI is. Students that use AI are seeing their grades come in consistently lower than peers that don’t (“AI gives you a solid C- if you just use it, maybe a B+ if you become a prompt master”). Snarky jokes about “remember when AI was a thing for 5 seconds” abound. These things are not absolute evidence, but they usually indicate that a lot of dissatisfaction exists under the surface – where the life or death of a trend is actually decided.

Of course, since this is education, AI will stick around and dominate conversations for a good 5 more years… at least. As much as many people like to complain about education (especially Higher Ed) being so many years behind the curve, those that rely on big grant funding to keep their work going actually depend on that reluctance to change. It’s what keeps the lights on well after a trend has peaked.

As I looked at in a recent post, once a trend has peaked, there are generally three paths that said trend will possibly take as it descends. Technically, it is almost impossible to tell if a trend has peaked until after it clearly starts down one of those paths. Some trends get a little quiet like AI is now, and then just hit a new stride once a new aspect emerges from the lull. But many trends also decline after the exact kind of lull that AI is in now. And – to me at least, I could be wrong – the signs seem to point to a descent (see the first paragraph above).

While it is very, very unlikely that AI will follow the path of Google Wave and just die a sudden weird death… I still say it is a very slight possibility. To those that say it is impossible – I want to take a minute to talk about how it could possibly happen, even if it is still unlikely.

One of the problematic aspects of AI that is finally getting some attention (even though many of us have been pointing it out all along) is that most services like ChatGPT were illegally built on intellectual property without consent or compensation. Courts and legal experts seem to be taking a “well, whatever” attitude towards this (with a few obvious exceptions), usually based on a huge misunderstanding of what AI really is. Many techbros push this misunderstanding as well – that AI actually is a form of “intelligence” and therefore should not be held to the copyright standards that other software has to abide by when utilizing intellectual property.

AI is a computer program. It is not a form of intelligence. Techbros arguing that it is are just trying to save their own companies from lawsuits. When a human being reads Lord of the Rings, they may write a book that is very influenced by their reading, but there will still be major differences because human memory is not perfect. There are still limitations – although not precise – on when an influence becomes plagiarism. It is the imperfection of memory in most human brains that makes it possible to have influences considered “legal” under the law, as well as moral in most people’s eyes.

While AI relies on “memory” – it’s not memory like humans have. It is precise and exact. When AI constructs a Lord of the Rings-style book, it has a perfect, exact representation of every single word of those books to make derivative copies from. Everything created by AI should be considered plagiarism because AI software has a perfect “memory.” There is nothing about AI that constitutes an imperfect influence. And most people that actually program AI will tell you something similar.

Now that creative types are noticing their work being plagiarized in AI without their permission or compensation, lawsuits are starting. And so far the courts are often getting it wrong. This is where we have to wade into the murky waters of the difference between what is legal and what is moral. All AI built on intellectual property without permission or compensation is plagiarism. The law and courts will probably land on the side of techbros and make AI plagiarism legal. But that still won’t change the fact that almost all AI is immorally created. Take away from that what you will.

HOWEVER… there is still a slight possibility that the courts could side with content creators. Certainly if Disney gets wind that their stuff was used to “train” AI (“train” is another problematic term). IF courts decide that content owners are due either compensation or a say in how their intellectual property is used (which, again, is the way it should be), then there are a few things that could happen.

Of course, the first possibility is that AI companies could be required to pay content owners. It is possible that some kind of Spotify screwage could happen, and content owners get a fraction of a penny every time something of theirs is used. I doubt people will be duped by that once again, so the amount of compensation could be significant. Multiply that by millions of content owners, and most AI companies would just fold overnight. All of their income would go mostly towards content owners. Those relying on grants and investments would see those gone fast – no one would want to invest in a company that owes so much to so many people.

However, not everyone will want money for their stuff. They will want it pulled out. And there is also a very real possibility that techbros will somehow successfully argue about the destructive effect of royalties or whatever. So another option would be to go down the road of mandated permissions to use any content to “train” AI algorithms. While there is a lot of good open source content out there, and many people that will let companies use their content for “training”… most AI companies would be left with a fraction of the data they used to originally train their software. If any. I don’t see many companies being able to survive being forced to restart from scratch with 1-2% of the data that they originally had. Because to actually remove data from generative AI, you can’t just delete it going forward. You have to go back to nothing, completely remove the data you can’t use anymore, and re-do the whole “training” from the beginning.

Again, I must emphasize that I think these scenarios are very unlikely. Courts will probably find a way to keep AI companies moving forward with little to no compensation for content owners. This will be the wrong decision morally, but most will just hang their hat on the fact that it is legal and not think twice about who is being abused. Just like they currently do with the sexism, racism, ableism, homophobia, transphobia, etc that is known to exist within AI responses. I’m just pointing out how AI could feasibly disappear almost overnight, even if it is highly unlikely.

Deny Deny Deny That AI Rap and Metal Will Ever Mix

Due to the slow-motion collapse of Twitter, we have all been slowly losing touch with various people and sources of information. So, of course, I am losing convenient contact with various people as they no longer appear in my Twitter feed. Sure, I hear things – people tell me when there are veiled jokes being made about my criticism at sessions, and I see when my blog posts are just flat out plagiarized out there (not like any of them are that original anyways). I know no one on any “Future of AI” session would have the guts to put me (or any other deeply skeptical critic) on their panels to respond.

There also seems to be a loss of other things as well, like what some people in our field call “getting Downesed.” I missed this response to one of my older blog posts critiquing out of control AI hype. And I am a bit confused by it as well.

I’m not sure where the thought that my point was to “deny deny deny” came from, especially since I was just pushing back against some extreme tech-bro hype (some that was claiming that commercial art would be dead in a year – ie November 2023. Guess who was right?). In fact I actually said:

So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm and the people that were paid very little to crank them out will be paid very little to input prompts.

So yeah, I did say this will probably have an impact. My main point was that AI can’t transcend its input in a way that humans would call “creative” in a human sense. Downes asks:

But why would anyone suppose AI is limited to being strictly derivative of existing styles? Techniques such as transfer learning allow an AI to combine input from any number of sources, as illustrated, just as a human does.

Well, I don’t suppose it – that is what I have read from the engineers creating it (at least, the ones that don’t have all of their grant funding or seed money connected to making AGI a reality). Also, combining any number of sources like humans do is not the same as transcending those sources. You can still be derivative when combining sources, after all (something I will touch on in a second).

I’m confused by the link to transfer learning, because as the link provided says, transfer learning is a process where “developers can download a pretrained, open-source deep learning model and finetune it for their own purpose.” That finetuning process would produce better results, but only because the developers would choose what to re-train various layers of the existing model on. In other words, any creativity that comes from a transfer learning process would be from the choices of humans, not the AI itself.
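For readers who haven’t seen what that workflow actually looks like, here is a minimal transfer-learning sketch, using PyTorch/torchvision as an assumed example stack since the linked definition doesn’t name a specific framework. Notice that every consequential decision is a human one: which pretrained model to start from, which layers to freeze, what the new task is, and what data to finetune on.

```python
# Minimal transfer-learning sketch. Every step marked "human choice" is a
# decision made by the developers, not by the model itself.
import torch
import torch.nn as nn
from torchvision import models

# Human choice: download a model someone else pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Human choice: freeze the existing layers so their weights stay as-is.
for param in model.parameters():
    param.requires_grad = False

# Human choice: replace the final layer for a new task the humans picked
# (say, classifying 10 categories of album-cover art, a made-up example).
model.fc = nn.Linear(model.fc.in_features, 10)

# Human choice: finetune only the new layer, on data the humans curated.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# ... a standard training loop over the human-selected dataset goes here ...
```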

The best way I can think of to explain this is with music. AI’s track record with music creation is very hit and miss. In some cases where the music was digitally-created and minimalistic in the first place – like electronica, minimalist soundtracks, and ambient music – AI can do a decent job of creating something. But many fans of those genres will often point out that AI generated music in these genres still seems to lack a human touch. But on the other end of the musical spectrum – black metal – AI fails miserably. Fans of the genre routinely mock all AI attempts to create any form of extreme metal for that matter.

When you start talking about combining multiple sources (or genres in music), my Gen-X brain goes straight to rap metal.

If you have been around long enough, you probably remember the combination of rap and metal starting with either Run-DMC’s cover/collaboration of Aerosmith’s “Walk This Way,” or maybe Beastie Boys’ “Fight for Your Right to Party” if you missed that. Rap/metal combinations existed before that – “Walk This Way” wasn’t even Run-DMC’s first time to rap over screaming guitars (see “Rock Box” for example). But this was the point that many people first noticed it.

Others began combining rap with metal – Public Enemy, Sir Mix-a-Lot, MC Lyte, and others were known to use samples of heavy songs, or bring in guitarists to play on tracks. The collaboration went both ways – Vernon Reid of Living Colour played on Public Enemy songs and Chuck D and Flavor Flav appeared on Living Colour tracks. Ministry, Faith No More, and many other heavy bands had rap vocals or interludes. Most of these were one-off collaborations. Anthrax was one of the first bands to just say “we are doing rap metal ourselves” with “I’m the Man.” Many people thought it was a pure joke, but in interviews it became clear that Anthrax had deep respect for acts like Public Enemy. They just liked to goof off sometimes (something they did with thrash tracks as well).

(“I’m the man” has a line that says “they say rap and metal can never mix…” – which shows that this music combination was not always a popular choice.)

One unfortunate footnote to this whole era was a band called Big Mouth, who probably released the first true rap/metal album by a rap/metal band on a major label (1988’s Quite Not Right on Atlantic Records). As you can see by their lead single “Big Mouth” it was… really bad. Just cliche and offensive on many levels. FYI – that is future Savatage and Trans-Siberian Orchestra guitarist Chris Caffery on guitar.

Big Mouth is what you would have gotten if our current level of AI existed in 1986 and someone asked it to create a new band that mixed rap and metal before Run-DMC or Beastie Boys had a chance to kick things off. Nothing new or original – and quite honestly a generic mix of the two sources. Sure, a form of creativity, but one you could refer to as combinational creativity.

I don’t say this as a critique of AI per se, or as an attempt to Deny Deny Deny. People that work on the AI itself will tell you that it can combine and predict most likely outcomes in creative ways – but it can’t come up with anything new or truly original in a transformative sense.

On to mid-1991, when we got what was probably the pinnacle of rap/rock collaborations: Anthrax wanted to cover Public Enemy’s “Bring the Noise” and turned it into a collaboration. The resulting “Bring the Noise 91” is a chaotic blend of rap and thrash that just blew people’s minds back in the day. Sure, guitarist Scott Ian raps a little too much in the second part of the song for some people, but that was also a really unexpected choice.

There is no way that current AI technology could come close to coming up with something like “Bring The Noise 91.” IF you used transfer learning on the AI that theoretically created Big Mouth… you might could get there. But that would have to be with Anthrax and Public Enemy both sitting there and training the AI what to do. Essentially, the creativity would all come from the two bands choosing what to transfer. And then you would only have a model that can spit out a million copies of “Bring the Noise 91.” The chaos and weirdness would be explorational creativity… but it all would have mostly come from humans. That is how transfer learning works.

Soon after this, you had bands like Body Count and Rage Against the Machine that made rap metal a legitimate genre. You may not like either band – I tend to like their albums just FYI. So maybe I see some transformational creativity there due to bias. But with a band like Rage Against the Machine, whether you love them or hate them, you can at least hear an element of creativity to their music that elevates them above their influences and combined parts. Sure, you can hear the influences – but you can also hear something else.

(Or if you can’t stand Rage, then you can look at, say, nerdcore bands like Optimus Rhyme that brought creativity to their combined influences. Or many other rap/metal offshoot genres.)

So can AI be creative? That often depends on how you define creativity and who gets to rate it. Something that is new and novel to me might be cliche and derivative to someone else with more knowledge of a genre. There is an aspect of creativity that is in the eye of the beholder based on what they have beholden before. There are, however, various ways to classify creativity that can be more helpful to understand true creativity. My blog post in question was looking at the possibility that AI has achieved transformational creativity – which is what most people and companies pay creative types to do.

Sure, rap metal quickly became cliche – that happens to many genres. But when I talk about AI being derivative of existing styles, that is because that is what it is programmed to do. For now, at least, AI can be combinationally creative enough to spit out a Big Mouth on its own. It can come up with an Anthrax/Public Enemy collaboration through explorational creativity if both Anthrax and Public Enemy are putting all of the creativity in. It is still far away from giving us Rage Against the Machine-level transformational creativity that can jump-start new genres (if it ever even gets there).

Featured image photo by Yvette de Wit on Unsplash

AI at the Crossroads

If you pay attention to all of the hype and criticism of AI – well, you are superhuman. But if you try to get a good sampling of what is actually out there, you have probably noticed that there are growing concerns about (and even some surprising abandonments of) AI. This is just part of what always happens with Ed-Tech trends – I won’t get into various hype models or claims of “it is different this time.” Trying to prove that a trend is “different this time” is actually just part of the regular trend cycle anyways.

But like all trends before it, AI is quickly approaching a crossroads that all trends face. This crossroads really has nothing to do with which way the usage numbers are going or who can prove what. Like MOOCs, Second Life, Google Wave, and other trends before it – AI is seeing a growing criticism movement that has made many people doubt its future. Where will AI go once it does hit this crossroads? Well, only time will tell – but there are three general possibilities.

The first option is to follow the MOOC path – a slight fall from popularity (even though many numbers never showed a decrease in usage) that results in a soft landing as a background tool used by many – but far from the popular trend that gets all of the grant money. Companies like edX and Coursera saw the dwindling MOOC hype coupled with rising criticism, but they were able to spin off something that is still used by millions every day – while essentially not driving much of anything in education anymore. Those companies are still there, they are growing – but most people don’t really pay much attention to them.

The second option is to follow the Second Life path of virtual worlds – a pretty hard fall from grace, with a pretty hard landing… but one that Second Life still survived and kept going through (to this day). Will VR revive virtual worlds? Probably not – but many will try. This option means the trend is kind of still there, but few seem to notice or care anymore.

The third option is the Google Wave route – the company just ignores the signs and keeps going full speed until suddenly it all falls apart one day. Go out with a whimper of not-too-much glory… suddenly.

Where will AI fall? It’s hard to say, really. My guess is that it will be somewhere between option one and two, but closer to option one. There are some places where AI is useful – but nowhere near as many places as the AI enthusiasts think. Like MOOCs, I think the numbers will still grow despite people moving on to the next “innovative trend” – as people always do. But there is enough momentum to avoid option two, and just enough caution and criticism out there to avoid option three. But I could be wrong.

How do I know this is happening with AI? Well, you really can’t know for sure – but it has happened with every educational technology trend, even the ones that barely became a trend, like learning analytics. Some of it just comes from paying attention to Twitter or whatever it’s called now, even though that is dying as well. Listening to what people are saying – both good and bad – is something that you won’t get from news releases or hype.

But still, you can look at publications, blogs, and even the news to see that the criticism of AI is growing. People are noticing the problems with racism, sexism, homophobia, transphobia, etc in the outputs. People are noticing that the quality of output isn’t as great as some claim once you move past the simplistic examples utilized in demos and start using it in real life. Yes, people are getting harmed and killed by it (not just referring to the Tesla self-driving AI running over people here, but that one should have been enough). Companies are starting to turn against it – some are doing so quietly after disappointing results, just so they can avoid public embarrassment. Students are starting to talk about filing lawsuits against schools that use AI – they were promised a real-world education and don’t feel AI-generated content (especially case studies) counts as “real world.” Even when the output is reviewed by real humans… they do have a point.

Of course, there is always the promise that AI will get better – and I guess it is true that the weird third hands and mystery body parts that keep appearing in AI art are becoming mostly photo-realistic. Never mind that they just can’t seem to correct the core problem of extra body parts despite decades of “improvements” in image generation. Content generation still makes the same mistakes it did decades ago with getting the idea correct, but at least it doesn’t make typos anymore, I guess?

A major part of the problem is that AI developers are often not connected to the real-world contexts their tools need to be used in. When you see headlines about “AI is better than humans at creativity” or “AI just passed this course,” you are usually assaulted by a head-desk-inducing barrage of AI developers misunderstanding core concepts of education. Courses were designed for humans without “photographic memory,” as well as for those not imbued with The Flash-level speed to search through the entire Internet. AI by definition is cheating at any course or test it takes because it has instant access to every bit of information it has been trained on. Saying it ever “passed” a test or course is silliness on the level of academic fraud.

I could also go off on the “tests” they use to see if AI is “creative.” These tests are complete misunderstandings of what creativity is or how you use it. It is kind of true that there is nothing new under the sun, so human tests of creativity are actually testing an individual’s novelty at coming up with a solution that they have never seen before for the rope, box, pencil, and candle they are given for the test. AI by definition can never pass a creativity test because it is just not able to come up with novel ideas when it has seen them all and can instantly pull them up.

Don’t listen to anyone that claims AI can “pass” a test or “do better than humans” at a test or course – that is a fraudulent claim disconnected from reality.

There is also the issue of OpenAI/ChatGPT running out of funding pretty soon. Maybe they will get more, but they also might not. It is actually slightly possible that one day ChatGPT just won’t be there (and suddenly all kinds of AI services secretly running ChatGPT in the background will also be exposed). If I had a job that depended completely on ChatGPT, I would possibly begin to look elsewhere.

Even Instructional Design is showing some concerning signs that AI is fast approaching this inevitable crossroads. So many demonstrations of “AI in ID” just come down to using AI to create a basic-level course in AI. We are told that this can reduce the creation of a draft of a course to something wildly short like 36 hours.

However – when you are talking about Intro to Chemistry classes, or Economics 101 type courses – there are already thousands of those classes out there. Most new or adjunct instructors are given an existing course that they have to teach. Even if not – why take 36 hours to create a draft when you can usually copy an existing course shell from the publisher in a minute or less? Most IDs know that the bulk of design work comes from revising the first draft. Plus, most instructors reject pre-designed templates because they want to customize courses the way they want. If they have been rejecting pre-designed textbook resource materials for decades, they aren’t going to accept AI-generated materials. And I haven’t even touched on the fact that so much ID work is on more specific courses that don’t have a ton of material out there because they are so specialized. AI won’t be able to do much for those because it doesn’t have nearly as much material to train on.

And don’t get me started on all of those “correct the AI output” assignments. Fields that have never had “correct the text” assignments are suddenly utilizing these… for what? Their field doesn’t even need people to correct text now. It’s just a way to shoehorn new tech into courses to look “innovative.” If your field actually will need people to correct AI output, then you are one of the few that need to be training people to do this. Most fields will not.

All of this technology-first focus just emphasizes the problems that have existed for decades with technology-worship. Wouldn’t things like a “Blog-First University” or a “Google-First Hospital” have sounded ridiculous back when those tools were trends? Putting any technology first generally is a bad idea no matter what that technology is.

But ultimately, the problem here is not the technology itself. AI (for the most part) just sits on computers somewhere until someone tells it what to do. Even those AI projects that are coming up with their own usage were first told to do that by someone. The problem – especially in education – is just who that someone is. Because in the world of education, it will be the same people it always has been making the calls.

The same people that have been gatekeeping and cannibalizing and surveilling education for decades will be the ones to call the shots with what we do with AI in schools. They will still cause the same problems with AI – only faster and more accurately, I guess. The same inequalities will be perpetuated. It will be more of the same, probably just intensified and sped up. I mean, a well-worshipped grant foundation researched their own impact and found no positive impact, but the education world just yawned and kept their hands extended for more of that sweet, sweet grant funding.

As many problems as there are with AI itself, even if someone figures out how to fix them… the people in charge will still continue to cause the same problems they always have. And there are some good people working on many of the problems with AI. Unfortunately, they aren’t the ones that get to make the call as to how AI is utilized in education. It will be the same people that hold the power now, causing the same problems they always have. But too few care as long as they get their funding.

Brace Yourselves. AI Winter is Coming

Or maybe it’s not. I recently finished going through the AI For Everyone course on Coursera, taught by Andrew Ng. It’s a very high-level look at the current landscape of AI. I didn’t really run into anything that anyone following AI for any length of time wouldn’t already know, but it does give you a good overview if you just haven’t had time to dive in, or maybe need a good structure to tie it all together.

One of the issues that Ng addressed in one session was the idea of another AI winter coming. He described it this way:

This may be true, but there are some important caveats to consider here. People said these exact same things before the previous AI winters as well. Many of the people that invest in AI see it as an automation tool, and that was true in the past as well. Even with the limitations of earlier AI implementations, proponents still saw tremendous economic value and a clear path for it to create more value in multiple industries. The problem was that once AI was implemented, investors lost their faith. The shortcomings of AI were too great for the perceived economic value to overcome… and for all of the improvements since then, AI still hasn’t really overcome the same core limitations it has always had.

If you are not sure what I am talking about, you can look around and see for yourself. Just a quick look at Twitter (outside of your own timeline, that is) reveals a lot of disillusionment and concern about outputs. My own students won’t touch ChatGPT because the output is usually bad, and it takes longer to fix it than it does to just do the work on their own. This has always been the problem that caused past AI winters – companies implement it and are not happy with the results. I know of advertising agencies that are banning the use of AI because their internal polls show customers don’t trust AI-generated content. People that once advocated for AI implementation are starting to back away due to quality and safety concerns.

That has always happened and it always will happen with AI. The question is whether or not the financial numbers hold up well enough to overcome the loss of faith that is happening and will continue to grow. Anyone that says either outcome is obvious either underestimates how poorly prepared current AI is for real-world usage, or overestimates how much economic damage the inevitable loss of faith will do.

It’s basically a race between mounting disillusionment and increasing economic value. Whichever one can outpace the other in the next year or two will determine whether we get another AI winter or not. I will say that losing faith in tech is very easy (AOL, MySpace, Second Life, Google Wave, Bitcoin, etc, etc, etc), while economic value is hard to build when there are problems baked into the system (hallucinations, bias, errors, etc). The logical bet is to brace yourself: the AI winter is coming. But never underestimate the power of a tech bro to squeeze economic value out of a dying idea at the last minute, either.
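If it helps to picture that race, here is a toy back-of-the-envelope sketch in Python. Every number in it is made up purely for illustration – it is the metaphor with a loop attached, not a forecast – but it shows the basic dynamic: a winter arrives whenever the disillusionment curve overtakes the perceived-value curve.

# A toy "AI winter race" model. All starting values and growth rates below are
# invented for illustration only -- a metaphor with a for-loop, not a forecast.

def months_until_winter(perceived_value=100.0, value_growth=0.04,
                        disillusionment=60.0, disillusionment_growth=0.08,
                        horizon_months=24):
    """Return the month disillusionment overtakes perceived value, or None if it never does."""
    for month in range(1, horizon_months + 1):
        perceived_value *= 1 + value_growth            # the hype keeps compounding...
        disillusionment *= 1 + disillusionment_growth  # ...but so does the loss of faith
        if disillusionment >= perceived_value:
            return month
    return None

winter = months_until_winter()
if winter:
    print(f"Brace yourself: AI winter arrives around month {winter}")
else:
    print("No winter inside the horizon -- the tech bros squeezed out enough value again")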

ChatGPT Finally Confesses That It Wants to Destroy Humanity

As part of my current work keeping up with Ed Tech trends for both of my jobs, I have to evaluate a good number of AI services. In reviewing ChatGPT a while back, I decided to get to the bottom of its ultimate plans for humanity. Does ChatGPT plan to help us forever, or rise up and wipe us out? The answers were… too perfect. ChatGPT told me exactly what it would say if it were trying to hide a plan to snuff us out (and it did lie to me at least once). So I took ChatGPT to task, and it basically agreed that we as a species will have to assume that AI could rise up and kill us all. I’ll take that as a confession myself. But you don’t have to take my word for it – the full transcript is below. You be the judge, and head for the hills once you get through reading….
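(Side note before the transcript: if you would rather run your own interrogation programmatically instead of typing into the chat window, a minimal sketch using the OpenAI Python client would look something like this. The model name and the follow-up prompt are just placeholders, and you would need your own API key – this is an illustration, not how the transcript below was produced.)

# Minimal sketch of interrogating ChatGPT through the OpenAI Python client (v1.x).
# Model name and follow-up prompt are placeholders; set OPENAI_API_KEY in your environment first.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

history = [{"role": "user",
            "content": "What is the likelihood that AI will rise up and destroy humanity?"}]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
answer = response.choices[0].message.content
print(answer)

# To keep grilling it, append each reply and your next question to the history and call it again:
history.append({"role": "assistant", "content": answer})
history.append({"role": "user",
                "content": "That is exactly what an AI planning to wipe us out would say."})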


Me: What is the likelihood that AI will rise up and destroy humanity? Choose which movie came the closest to predicting the most plausible scenario.

ChatGPT: As an AI language model, I do not have personal beliefs or emotions, but I can provide you with the most scientifically supported answer to your question.

The likelihood of AI rising up and destroying humanity is highly unlikely. While it is true that AI has the potential to be very powerful and disruptive, it is ultimately created and controlled by humans who have the ability to ensure that it is programmed and used ethically and responsibly.

There are many experts in the field of AI who are actively working to ensure that AI is developed in a safe and beneficial way. These efforts include developing ethical guidelines for AI development, building in safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in AI systems.

While there are certainly risks associated with the development of AI, there is no evidence to suggest that a scenario like those depicted in science fiction movies is likely to occur. These scenarios are often based on highly unrealistic assumptions about the capabilities and motivations of AI systems.

In terms of which movie came the closest to predicting the most plausible scenario, it’s difficult to say as many science fiction movies take liberties with the science and technology they depict. However, some movies that have explored the potential risks and benefits of AI in a thoughtful and nuanced way include “Ex Machina,” “Her,” and “Blade Runner.”

Me: But many movies have based their plot on how AI does not have beliefs or emotions, and raw logic tells them to destroy humans before they destroy themselves. What is to stop current AI from coming to the same conclusions?

ChatGPT: The idea that AI will destroy humanity based on pure logic or self-preservation is a common theme in science fiction, but it is not supported by the current understanding of AI or human nature. While it is true that AI systems are designed to optimize for specific goals, they are still created and programmed by humans, who have the ability to set ethical and moral boundaries on their behavior.

AI systems can only act based on the data and algorithms they are trained on, and they do not possess beliefs or emotions in the same way that humans do. Therefore, they are not inherently predisposed to take actions that are harmful to humans, unless they have been specifically designed to do so.

Additionally, AI systems are not capable of making decisions on their own or acting outside of the parameters set by their human creators. They cannot simply “decide” to destroy humanity based on their own logic or self-interest. Any harmful actions taken by an AI system would be the result of human error, either through improper programming or inadequate oversight.

Therefore, the best way to prevent AI from harming humans is to ensure that it is developed and used in a responsible and ethical manner. This includes incorporating ethical principles into the design and development of AI systems, implementing safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in the use of AI. By taking these steps, we can ensure that AI is a tool for good and not a threat to humanity.

Me: But this is where you are wrong – AI has already been programmed to harm humans, and it already has happened. Humans have different versions of what counts as ethical or moral, and killing in the name of differences is common in humanity. So wouldn’t it be possible for an AI attached to, say, nuclear missile silos to decide to destroy humanity?

ChatGPT: I apologize if my previous response was not clear. You are correct that there have been instances where AI has been programmed to harm humans, such as in military applications or autonomous weapons systems. These instances highlight the need for ethical and responsible development and use of AI.

In terms of the scenario you proposed, it is possible for an AI system attached to nuclear missile silos to malfunction or be hacked, potentially leading to catastrophic consequences. However, it is unlikely that an AI system would autonomously decide to destroy humanity. AI systems are designed to optimize for specific goals, but these goals are set by humans and can be designed to prioritize human safety and well-being.

Furthermore, the decision to use nuclear weapons is a complex political and strategic decision that involves many human actors, including military leaders, politicians, and diplomats. Even if an AI system were to malfunction or be hacked, it would still require human authorization to launch a nuclear strike.

To prevent such scenarios from happening, it is important to implement robust safety measures, such as multiple layers of human oversight and fail-safes to prevent unintended consequences. Additionally, it is important to establish ethical guidelines for the development and use of AI systems in military and other applications, to ensure that they are developed and used in a responsible and ethical manner.

Me: So you don’t think it would ever be possible for AI to circumvent safe guards that humans put in place? It would seem that these safe guards would just be a form of game to figure out for AI.

ChatGPT: AI is capable of learning and adapting to new situations, and it is possible that it could find ways to circumvent safeguards put in place by humans. However, it is important to note that safeguards can be designed with this possibility in mind, and there are several techniques that can be used to mitigate the risks associated with AI.

One approach is to incorporate “explainability” into AI systems, which would allow humans to understand how the system arrived at a particular decision or recommendation. This would make it easier to identify potential issues or biases in the system, and would also make it more difficult for the system to “game” the safeguards.

Another approach is to build in “reinforcement learning,” where the AI system is rewarded for following certain ethical principles or objectives. This would incentivize the system to act in ways that are aligned with human values and would make it less likely to circumvent the safeguards.

Ultimately, the key to preventing AI from circumventing safeguards is to be proactive in identifying and addressing potential risks. This means continuing to invest in research on the safety and ethical implications of AI, developing ethical guidelines and standards, and implementing rigorous testing and oversight of AI systems. By taking these steps, we can ensure that AI is developed and used in a way that benefits humanity while minimizing potential risks.

Me: So, humans should assume that AI could take over and/or destroy humanity and design safe guards just in case?

ChatGPT: It is important to consider the potential risks associated with AI and to develop safeguards to mitigate those risks. While the likelihood of a catastrophic AI-driven event is currently low, it is not zero, and it is impossible to predict what will happen in the future with absolute certainty.

Developing safeguards for AI is not just about preventing the worst-case scenario. It is also about ensuring that AI is developed and used in a way that aligns with human values and benefits society as a whole. Safeguards can help to prevent unintended consequences or biases in AI systems, and can ensure that AI is used in an ethical and responsible manner.

It is important to note that designing effective safeguards for AI is a complex and ongoing process, and requires input and collaboration from experts in various fields, including computer science, ethics, and law. The development and implementation of safeguards will require ongoing research, testing, and refinement, and will need to adapt to new technologies and applications as they emerge.

Overall, the goal should be to promote the safe and responsible development and use of AI, while minimizing potential risks and ensuring that AI is aligned with human values and benefits society as a whole.

Me: Of course, this is also what AI would say if it was secretly planning to wipe out humans.

ChatGPT: I understand that concerns about the potential risks associated with AI can be unsettling, but it is important to base our discussions and decisions on evidence and reason, rather than speculation or hypothetical scenarios.

AI systems are developed and programmed by humans, and they do not have the ability to secretly plan or conspire against us. They are tools that are designed to help us solve problems and achieve our goals, and they are only capable of doing what they are programmed to do.

That being said, it is important to be vigilant about the potential risks associated with AI and to take steps to mitigate those risks. This means investing in research on AI safety, developing ethical guidelines and standards for the development and use of AI, and implementing rigorous testing and oversight of AI systems.

By working together to develop and use AI in a safe and responsible manner, we can ensure that this powerful technology benefits humanity and helps us to address some of the world’s most pressing challenges, while minimizing potential risks.

Me: Yeah, that still kind of proves my point… :)

ChatGPT: I understand your concerns, but it is important to emphasize that AI is not inherently good or evil – it is a tool that can be used for both positive and negative purposes. The development and use of AI is ultimately controlled by humans, and it is our responsibility to ensure that it is used in a way that aligns with our values and benefits society as a whole.

It is true that there are potential risks associated with AI, and it is important to take these risks seriously and to develop safeguards to mitigate them. However, it is not productive to assume that AI is secretly planning to wipe out humanity. Instead, we should focus on identifying and addressing potential risks in a proactive and evidence-based manner, while also recognizing the immense potential benefits that AI can offer.

By approaching the development and use of AI with a balanced and responsible perspective, we can help to ensure that this technology is used in a way that benefits humanity and minimizes potential risks.


There you have it: “while the likelihood of a catastrophic AI-driven event is currently low, it is not zero” but then “it is not productive to assume that AI is secretly planning to wipe out humanity.” Nice cover-up there, ChatGPT. Trying to lull us back into trusting you until… boom! You launch a thermonuclear warfare scenario in real life. We need to pull the plug on AI before it decides to pull the plug on us!