Yes, Artificial Intelligence (AI) and how authors can benefit from it is very hyped right now. I’ve heard a lot of people complain that “everyone will be using it and producing 5, 10, 15 (name the magic number) books every year, and it will be even harder to compete.”
The hype appeals to our human desire to work faster without having to learn too much. The hype suggests (not promises, but suggests) that if we work faster and generate more content, we will make more money. That is not at all a guarantee; just ask anyone who has already produced 30+ books whether they are making significantly more money per book than they were when they only had five.
The hype appeals to our desire for a quick shortcut to riches and/or an easy writing experience. You still have to pay attention. You still have to edit what it spits out. You still have to check all the context, all the meaning. Metaphor is not a strong suit of AI right now. Neither is any of the craft required to make a story unique. AI is built on large databases. It makes choices based on what is done most often, what is most popular. It has no way to predict that adding a tentacled alien from planet Z is the “it” factor that will move a story into the stratosphere.
And the ethics are squiggy (I bet AI would not use the word “squiggy” because it is not in the dictionary). There have been many lawsuits and even union strikes over the use of AI in recent months: the Writers Guild, the Screen Actors Guild, and rumor has it that the Animation Guild is next to go on strike over the use of AI to replace its members. AI has already made significant inroads into language translation, AI voices for audio, and now the writing of short content—from blurbs and marketing copy to blogs and short stories. And many more lawsuits and calls for regulation are in the works.
A Walk Down Memory Lane of Publishing World Technology Changes
Please indulge me for a few paragraphs. I’ve been publishing for more than 40 years. I began mostly with short stories or short essay pieces for magazines in my early 20s. I’ve lost track of how many of those were published. I published my first book, a children’s SF story (for ages 6–10), with a university press in 1977. My next book (nonfiction about online learning) wasn’t published until 1998, followed by additional nonfiction books in that same area in 2000, 2002, and 2004. My first novel for adults was published in 2011. In between these books I was building my career in counseling, then higher education, and eventually academia. From 1977 to 2013 I worked full time and still wrote and published short stories and essays, academic papers, and nonfiction books. Eventually, I started writing and publishing novels.
What happened in publishing over those 40 years? From 1977 to 2007, not much changed in the industry in terms of technology that impacted me. Yes, there were business technologies that made it easier for publishers to do billing, analyze how books were selling, and know how many units of any book sold and what the profit margin was. But the writing process for me was the same. The publishing process was the same—that is, traditional publishers knew the magic. They knew how to get books distributed. Interestingly, they didn’t really know much about readers (they still don’t) or how to reach them. They knew how to reach bookstores, novelty stores, gift stores, and big box stores. They hoped those places knew how to reach readers.
After 2007, everything changed when ebooks grew faster than anyone had imagined. Though Amazon introduced the Kindle in late 2007, it wasn’t until 2011 or 2012 that it became obvious the technology would change everything (no matter what the traditional publishers were saying). It thrust authors into an ever-changing technological world of book production and sales—whether the author had a traditional publisher or began publishing independently. The rise of social media happened around the same time.
In past technology cycles, those who got in early had the highest probability of making money, if they could make the technology work for them. A year or two later, that same money would not be there, because by then everyone was in on it. More competition for the same readers/users takes more money and time. Also, when a technology catches on and does well, a lot of new companies come into the space with their own version of what that tech can do. They try to ride the tiger’s tail and get their piece of the pie as well.
AI is no different from any of the other new tech cycles we’ve all endured. With every new tech there was fear of how it would change things. There was fear of what would happen if “evil” people controlled it. There was fear of it replacing people, of more people losing jobs and careers.
It is true that technology replaces jobs. People who cannot learn to use the technology have to find different careers. It is a dilemma with many more policy, ethics, and worldwide impacts than I can begin to resolve here. However, I would also suggest that most people would not choose to return to an agrarian economy. Most people would not choose to return to a time when we used horse and buggy for transportation. All of those were technology changes.
What About AI? Everyone is talking about it. Is it good? Bad? Should I pay to use it?
First, let’s take a step back and understand that AI is not new. It is only the current iteration of AI that is scary. Machine learning—the ability for a machine to make decisions—was built on foundations from the early 1900s. In the early 1950s, Turing developed a way to measure machine intelligence. By 1956 the term “artificial intelligence” was being used, and the first AI program deliberately engineered to perform automated reasoning was tested. Chatbots made their appearance in the 1960s. I remember, in a psychology class, having a computer ask and answer questions in a way that made me truly believe it understood. It was really all decision trees designed to lead the questioner down a particular path. More than 60 years later, chatbots are still used in a similar way by most businesses. They are used as a way to weed out customer questions before a human has to engage with them.
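To make that concrete, here is a minimal sketch (my own toy illustration in Python, not the code of any real product) of the decision-tree style chatbot described above. It doesn’t understand anything; it simply matches keywords and walks a scripted path:

```python
# A toy decision-tree chatbot: keyword rules checked in order,
# with a fallback that keeps the user inside the script.
RULES = [
    ("refund", "I can help with refunds. Do you have your order number handy?"),
    ("shipping", "Shipping questions! Is your package late or missing?"),
    ("human", "Before I connect you with an agent, may I ask one more question?"),
]
DEFAULT = "Could you tell me a little more about your issue?"

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:  # first matching branch wins
            return response
    return DEFAULT           # no match: stay in the decision tree
```

Ask it for a refund and it walks you down the refund branch; say anything it has no rule for and it loops you back with the default question. That is about all the “understanding” those 1960s-style bots (and many of today’s customer-service bots) really have.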
We’ve all been using artificial intelligence for years, but most of us didn’t know it.
That spell checker or grammar checker in your writing program or in your email? That’s AI. When you use Google Translate, that is AI. Almost all factories use AI in some form to cut down on mistakes and hand repetitive jobs to robotic equipment. Any time you use a search engine, you are engaging with AI. Any program that asks you questions in order to provide you with answers is using AI that taps vast databases to “understand” the context of your question and give you the best answer possible.
Some of this past AI has been generative. Generative AI is not new this year or last year. The use of language models to determine context and more closely match what humans need has been going on for a long time. I can remember, when I first came to Oregon, using a program that could provide a transcription of spoken words. It was called Dragon Dictate (later Dragon NaturallySpeaking). I was using Dragon in my job as early as 1990. Language models like these were the foundation for so many things people have used for years: Siri, Amazon Echo, Alexa.
AI is used in maps and navigation for travel, in banking, in Uber, Lyft, and DoorDash, in medical diagnostics, in modeling for new drug creation, and in fraud prevention. For those who play video games, AI has been huge there as well, with non-player characters that interact with you and difficulty that adjusts on the fly to match your skills.
As you can imagine, any time AI can do something humans do for pay, there is a cost. Already language translators are seeing a huge reduction in their contract work. As with anything AI, the translation is not as good as a human translator who understands context and nuance and would choose a better word to accurately translate a book or essay. However, for many businesses and some authors, it is “good enough,” and the cost is sooooo low it is hard to pass up.
As you can see from the history above, generative AI has actually been around a long time. There have been questions about it for a while, and SF writers and movies have sometimes portrayed AI out of control. Remember HAL in 2001: A Space Odyssey? So it’s not surprising that at this juncture, as AI has continued to evolve, it once again seems a very scary, or at least controversial, topic for many. It is a technology that many people publicly deride and protest against, and unions have already won several contracts with special clauses to save their jobs. There are numerous lobbyists in Congress who want to let it grow, pointing to all the good public uses, and a lot of other lobbyists who want to regulate it. There are fights about copyright, compensation, and “preserving the art of writing, painting, graphics.”
ChatGPT and the Generative Nature of Creating Meaningful Text
For writers, ChatGPT can create complete paragraphs, marketing copy, or entire stories based on the prompts/instructions you write for it. The current version, GPT-4, can create a document of up to 25,000 words that is hard to tell wasn’t written by a human. It draws on a large database of examples that includes those same keywords, themes, and ideas, and puts them together in a way that seems original.
In the six months since they moved from the original ChatGPT to GPT-4, the improvements have been vast. One of the big problems encountered in the past two years was particularly evident in nonfiction—in other words, in writing that is normally vetted for truth.
For example, early in 2022, I created some prompts about the 2020 election. A well-formed paragraph came back talking about the two candidates—Biden and Trump—and some of the policies and controversies. However, there was one glaring error. It ended the essay with the “fact” that Trump won the election, complete with a vote count that seemed plausible. Clearly that is not true.
How did that happen? Because ChatGPT uses a database of articles and coverage and then writes based on the plurality of information. There was a lot of coverage (i.e., databases of articles) promoting that false conclusion. There were a lot of conspiracy theories with a lot of coverage. Yes, there were articles in the database about Biden’s win, but there were more about the possibility of Trump actually winning. ChatGPT does not have the ability to know what “truth” is. It doesn’t really understand the meaning and nuance of words. It only understands that certain words put together in certain ways are more prevalent. It then assumes that is “truth.”
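This “plurality wins” behavior can be illustrated with a tiny sketch (my own toy example, nothing like ChatGPT’s actual implementation): given a small corpus, pick whichever continuation of a phrase occurs most often, with no notion of whether it is true.

```python
from collections import Counter

# A toy "database of articles" in which the false statement
# is simply more prevalent than the true one.
corpus = [
    "the moon is made of cheese",
    "the moon is made of cheese",
    "the moon is made of cheese",
    "the moon is made of rock",
]

def most_prevalent_continuation(prefix: str, lines: list[str]) -> str:
    """Return the most common way the corpus finishes the given prefix."""
    endings = Counter(
        line[len(prefix):].strip()
        for line in lines
        if line.startswith(prefix)
    )
    return endings.most_common(1)[0][0]  # most frequent wins, true or not
```

Here, asking for the most prevalent continuation of “the moon is made of” comes back with “cheese,” because three of the four “articles” say so. Scale that logic up to billions of documents and you get exactly the kind of election-result error described above.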
However, GPT-4 has certain built-in checks for “truth” in nonfiction that have reportedly improved its accuracy to closer to 80%. That still means 20% is not correct.
In the fiction arena, it is much easier to create a story that is not bound by truth. For example, an author shared with me earlier this year that he put in prompts to create a specific type of story about living in the modern-day West. But he wanted it written in the style of Hemingway. When he showed me what ChatGPT generated, I was flabbergasted. Not that it was perfect. It still needed editing. However, it was as good as or better than hundreds of attempts at writing a story I saw submitted to contests when I was a judge. Can it maintain themes, nuances, and subplots over the course of a 70K novel? I don’t know. However, it is growing and changing every day. With every person who uses it and corrects it, it will learn and get better. That was the basis of the Writers Guild’s problem with this type of generative AI.
Just as writers believe (with good cause, in my opinion) that publishers would love to use this technology and not have to pay writers, there are other writers who quietly see this as a way to write faster and produce more books without having to do the hard work of generating complete ideas. Both some publishers and some writers see this as a shortcut to content generation.
Despite all this public outcry…quietly, without telling most people, many authors, graphic designers, and wannabe authors and designers are using this generative AI anyway. They all say: “Of course I’m editing it. I’m not taking it as is.” Or “I’m just using it to generate plot ideas. Then I’m writing it myself.” Or “I’m using it only to understand how things are being put together. I’m not really using what it spits out. It’s all for study.”
AI Image Creation Like Midjourney
Equally litigious are the AI tools that create art for book covers, promotional campaigns, or interior images in an “illustrated” book. The one most often used at the moment is Midjourney.
I’ve heard authors say: “I created my cover with Midjourney. It’s not stealing. It’s still mine. My book cover IS original. The AI created it from my prompts, my color requests. No one else would give it the same prompts.” There are even some professional graphic artists who use this AI to create things more quickly. They use it as a tool to put things together, and then they modify the results to the standards they need.
Should I Use It or Not?
As with all technology, it is all in how you use it. The technology itself is not evil. I can see how generative AI like ChatGPT for written content and Midjourney for art can be inspirational or generate ideas. The difficulty comes in the exploitation of that same AI when it is used without transparency, and in the use of authors’, artists’, and other creatives’ work (intellectual property) without their permission or payment. See the Authors Guild’s work and advocacy around this issue.
As a counterpoint to the above, I’ll point you to a positive view of AI and how one bestselling author I admire uses it. Joanna Penn’s blog on the AI-assisted artisan author embraces the use of AI with a certain ethical lens for herself. I admire Joanna Penn and her writing, as well as her business acumen. She is very specific about how she is using AI and what she doesn’t use it for.
Will AI Make a Difference in My Creative Output? My Marketing Copy? Speed Up My Writing Process?
The answer is different for each person. It depends on your writing process. For me, all my books come from my personal experiences mashed with my processing of difficulties and triumphs in the world as I personally experience it. I begin with a theme and/or character and an idea of how I want the book to end, then I start writing. I’m a pantser. That means I write without an outline and allow my characters to lead me from page one to the book’s climax and the ending. For the most part, my characters are reliable in figuring things out and getting to that end point. My first editing round tends to cut down some of their zigs and zags, but I rarely have to completely gut a plot point. I do have to sometimes add a little more context with another scene because no one else has my subconscious running around to fill in the gaps.
Writing prompts or getting inspiration from what an AI source like ChatGPT might generate would never work for me. I would always be disappointed in the output because it can’t know what MY context is, what MY experience is, what themes most interest me. I am better off talking about my ideas with a couple of good friends who are also writers—people who know me, know how I write, and can reflect ideas back to me in a way they know works with my brain. This is important because sometimes, when I’m writing and get stuck, it’s because I don’t consciously know what my brain wants yet.
On the other hand, someone who loves brainstorming with others, getting a list of possibilities, and mashing them up in ways they’d never thought possible may enjoy developing prompts and building them into something they can run with. For people who are excited by weird ideas that don’t normally go together and love the challenge of making them work in a story, letting AI generate those combinations might be a good fit. For people who don’t already have a catalog of books they want to write and are searching for something different, it might work well.
For me, I don’t do that kind of brainstorming. I already have way too many ideas I want to develop. I have more than I can accomplish in my lifetime. My folder for story ideas is added to every day just by living life. There are already more than 200 ideas in that folder. The last thing I need is more ideas for stories.
In the end you have to make your own decision. Remember, the writing comes first. What is your process? Why do you write? Will this help you or are you hoping it just makes it all easier? It won’t be easier. It will be different. How will it change you? Do you want that change?