There are so many stories regarding AI right now that it feels like a five-alarm fire mixed with a Whac-A-Mole game.
- School uses ChatGPT to determine which books are banned
- Elon Musk Will Train His AI Project Using Your Tweets
- Publishing scammers are using AI to scale their grifts
- OpenAI funds new journalism ethics initiative
- I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires)
- Another Major Publisher Caught Using AI-Generated Cover Image on Bestselling Author’s Work
And no, it’s not a five-alarm fire. But it is that critical pocket of time when a thing needs some form of regulation before we are fully immersed in the consequences and everyone learns the hard way what the saying “you can’t put the toothpaste back in the tube” means.
AI (artificial intelligence) is defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” It is being used in a lot of industries in many ways, and it was already in use before all the recent headlines. So to be clear: what I am specifically talking about is the way AI is being used in place of writers, journalists, and other creatives, and in grifts where a non-author tricks consumers into buying their AI word salad book instead of the intended author’s properly written book.
There are certain topics in the world of publishing that end up feeling like they just never stop being discussed, one being any version of “Who gets to write what book?” in response to when a writer writes — or is asking how to write — way out of their lane. The thing with that specific question is, as Alexander Chee perfectly explains, “the question is a Trojan horse, posing as reasonable artistic discourse when, in fact, many writers are not really asking for advice — they are asking if it is okay to find a way to continue as they have.”
I keep thinking about this every time (daily at this point) I see people — well-intentioned, I think — saying this isn’t a big deal and everything is fine, because AI will never be good enough to replace writers and authors (insert all creatives). Since AI is just scraping all the information that is already out there, tossing it into a blender, and outputting something “new,” I am not actually worried that it will ever be good enough to replace creatives. But that’s not the problem for me. While I get where this idea is coming from, I feel it gives a very false sense of “It’ll be fine!” and “Don’t worry!” which keeps the conversations that should be had from happening.
Instead, we should be asking: Will those in power care that AI isn’t as good at creating what a human can create when their goal in using it is to not pay actual writers, authors, and creatives? Do scammers care that the “travel book” they put up on Amazon, “written” by AI, is garbage that no consumer would knowingly pay for, as long as the scam results in a sale? If Amazon gets a cut of every sale from buyers unaware that the book they purchased isn’t the book they intended to buy, will it implement something to stop the practice? How time-consuming is it going to be for very real people in publishing and media to weed out the flood of AI-generated submissions? How costly will it be for businesses to implement ways to spot, catch, and/or stop scammers using AI?
I deeply miss what Etsy used to be and I think a lot about how it went from being this incredible site dedicated to artists to no longer being that: “Etsy went public in 2015. Last year, the company reported annual revenue of nearly $2.6 billion — a more than 10 percent spike over the year prior. Among other issues, these creators see the increase in counterfeiters on the platform as a result of Etsy prioritizing growth over being able to enforce its standards.” It is yet another example that leads me to think we shouldn’t focus on whether AI is, or ever will be, good enough to replace writers and authors.
Instead, this is the time to ask questions and to understand the different ways AI is being used, both the good and the bad. We need to interrogate the “how” and “why,” but more importantly: who is investing in this kind of AI, how do they intend to implement it, and what is their long-term goal? We should not dismiss further scrutiny by insisting it will never be as good as a person (no one thinks it will be). We should ask instead: what if the people implementing AI don’t care about its quality, so long as it can do a mediocre version of the job they’ll no longer have to pay someone to do? How much will we lose then?
Relatedly, you can read WGAStrong: Why Readers Should Care About the Writers Strike.