Ask a writer what they think about generative AI – the kind that creates content, rather than the type that merely sorts information – and you’ll likely be treated to an extended monologue about creativity, plagiarism, laziness, cheating, and outright theft. The word is not good.
Jane Friedman experienced the worst side of GenAI when dishonest actors used the tech to generate books about the business of writing and plastered her name all over them. Obviously, she was not compensated. Worse, the content was shoddy and detrimental to her reputation and sales of any of her future works. Worse-worse, sites selling the books – mainly Amazon, because of course it was Amazon – refused her request to take down the books, because Friedman could not show that she had copyrighted or trademarked her own name. However, because Jane has spent more than a decade creating a platform – which includes thousands of followers across multiple social media sites – she was able to raise a big enough stink that even the Evil Empire had to take notice. The good guys won a round.
I don’t think the perpetrators were hunted down and flogged, but you can’t have everything.
Are we doomed?
Friedman recently gave a talk on AI and surprisingly, her outlook was a bit more sanguine than you might expect, given her experience. I expected a free pitchfork with attendance, but no.
Yes, generative AI in the hands of bad actors is dangerous to the moral ownership and financial benefit a creative person has the right to expect from their work. However, considered dispassionately, AI is a tool, no more and no less.
Unfortunately, the tools have been trained on content created by human beings, without permission and without payment. On the plus side, various lawsuits have been filed to address this situation. In some cases, writers are being offered payment (a pittance) in exchange for allowing their books to be used to train AI. AI firms have also implemented some firewalls – users can pay for upgraded service that does not feed prompts or outputs to the hive mind, and it is possible to set up a personal AI on your computer that does not connect to the wider conclave at all.
So…horse gone, barn door, etc. etc. But many organizations are trying to put the brakes on the takeover.
Friedman pointed out that a lot of things said about AI have been said about other emerging technologies. In the 1980s, the word processor was expected to put secretaries and proofreaders out of work. We’ve all read that smart phones and tablets would be the death of conversation and in-person meetups, but pundits said the same thing about newspapers and popular magazines.
Of course, word processing inputs are only as good as the person inputting them, and newspapers and smart devices don’t scrape your brain for content. We are new to the experience of machines that can be trained to mimic our speech, writing, and art.
How do we use this tool?
That was the million-dollar question, and there isn’t a single answer. For many of us, the answer is easy – we don’t. For another large group, the answer is similarly easy – write a short prompt for a story idea and let AI do the rest. What a timesaver! For some assholes – see above – the answer is to use AI to plagiarize and steal identities.
Is there a middle ground? Are there ways to use AI that are ethical and don’t quash our curiosity and creativity? Can we use AI without becoming lazy turds masquerading as writers?
Maybe.
Friedman raised a lot of questions, but didn’t offer simple answers, because there aren’t any. The writers attending her talk were universally appalled at the idea of using AI to do the actual writing part of writing. But Friedman also shared an anecdote about a writer who has trained a personal (firewalled) AI on her own writing and uses it to create outlines for non-fiction articles based on a specific topic and the general contours of a piece. In other words, the AI doesn’t research or write the article, but helps organize the writer’s content, based on her prompts.
Is that a bridge too far?
Your mileage may vary, but that’s way too close to the line for me. Viewed one way, you might consider the AI nothing more than a personal assistant. You wouldn’t fault a researcher for using a grad student to organize notes into a logical outline for an article, so why jump on someone’s bones for using AI for the same task?
I wouldn’t necessarily bash someone for making that choice, but it’s not for me. There is – or should be – joy in effort. There is value in organizing your own thoughts. Importantly, there is learning. There is evolution. You might find it more convenient to have AI organize your article, but you’ll miss the opportunity to make connections the AI won’t. You might omit a subtopic or anecdote because it wasn’t included in your inputs, when it would have otherwise come to mind. Your article may lack the amusing story or non sequitur that might have popped into your head as you organized. The physical act of writing jogs memories. We connect to ourselves. You might save some time with an AI tool, but you lose a bit of yourself.
Would it be nice to spend a little less time blogging and a little more time writing my novel? It would. But taking the writing part away from my writing doesn’t work for me. Please send AI that will clean my bathroom, dust the furniture, and order and deliver groceries. I don’t even object to the self-driving car, if they can work out the whole accidentally-murdering-people thing.
Does anyone care?
Another topic was whether readers give a fuck. Again, opinions were mixed.
Friedman shared an email from someone who attended one of her previous talks. In it, he asked why he should care who – or what – wrote his next favorite story. If he could request a made-to-order novel or movie, based on his requested genre, tone, plotlines, and overall “experience”, why wouldn’t he prefer an AI-generated story capable of giving him exactly what he asked for, without variation or failure? If he liked the story, it wouldn't matter where it came from.
That sounds like a piss-poor way to get through life, but to each his own, I guess?
Friedman believes that guy does not represent the average reader, and I agree…mostly. There are lots of readers who want to connect with their favorite writer. They want to hear the story behind the story. As writers, we want to hear what inspired a story or how our favorite writer mastered a writing trick or overcame a challenge. We crave human connection.
Fandoms might be a bit whack-a-doodle, but it’s also exciting to watch a crowd of hundreds show up for a well-known writer’s book release party. We should all be so fortunate. John Waters recently held a book signing in Baltimore, and people queued up outside the bookstore hours before it started. The line wrapped around the block three or four times, on a very cold and rainy night. It took four hours for everyone to file through once the event began.
(PS - I wanted to meet John Waters, but….no. I have no problem waiting, but I need to not be cold or rained on.)
No one is standing in line for four to eight hours in the rain to hear about the killer AI prompt that generated your last book. Friedman believes that most readers care about the human behind the writing, or simply want to know that there is a human there somewhere.
I agree that many readers want that human connection, probably enough to sustain the careers of a lot of writers over the long haul. But, pessimist that I am, I also know that people are selfish and awful. We’ve all read about people who download pirated work or return e-books to Amazon after they read them, even though the writer is charged fees that are not refunded after the return. Any social media book club is rife with people who “Stuff their Kindle” with hundreds of free books, proudly asserting, “I don’t care what it is, I just like to read!” As if lack of discernment is a selling point.
People who return e-books and read nothing but free downloads will not give a shit if all their future books are written by AI. In fact, I suspect they will prefer it, if not demand it. And AI firms and probably some book publishers are banking on them. Literally.
I do think there are enough of the other kind of reader – people like you and me – to keep us afloat. We create, we gather, we lift each other up. It’s hard work but it’s always been hard.
Do I use AI?
I know you’re wondering.
Yes, I occasionally use ChatGPT, but never for creative writing. Never. Not even for a blog post, not for brainstorming topics, not for outlining.
I have used it as a research tool, like Google but with slightly more organized results. Recently, Google has added its own ChatGPT-like conversational AI at the top of many of its search results, and ChatGPT has started adding hyperlinks to some prompt results, so the difference between all these tools is blurring.
I don’t use AI as a sole source. I get some general ideas on a topic and then head to Google, Wikipedia, or the library. I look for a human delivering the same information. I leave myself plenty of room for accidental discovery. ChatGPT is a handy tool for some things, but it will never beat a Wiki-hole for pure entertainment value.
I’ve also used AI for work. Part of my job occasionally involves understanding a client’s legal or regulatory requirements, and ChatGPT is great at For Dummies tutorials on topics I need to grasp for two weeks and promptly forget. My mantra here is identical: Trust, but verify. At my previous job, I let ChatGPT draft my marketing emails, because marketing emails are exactly the kind of soulless malignant crap AI excels at. Also, I do not give a shit about marketing emails.
I have also used ChatGPT to self-diagnose medical issues, but that’s probably not a good idea. Forget I mentioned it.
Know anyone who’d like my blog? Please forward today’s post! I’d love to hear from them.
Need more content? Join my mailing list!