When AI is the Ghostwriter, Are We Okay With That?
A pattern in publishing gossip and the ethical lines I fear we're crossing
In publishing, there is a whisper network. And trust me, the publishing world is a small town. Word travels.
This is, I think, often a good thing. It alerts writers to the shady dealings of a predatory agent or editor. Junior publishing employees can name damaging behavior by their supervisors. Strikes can be organized, censorship whistles can be blown, and harms can be brought into the light to be dealt with as they need to be.
In a systemically broken industry, this is necessary.
A prime example is the #PublishingPaidMe data. This was a grassroots effort, built by a whisper network of authors and publishing folks, that created transparency where there had not been any before. It has paved a path for Black authors and their agents to advocate for equitable advances, and it has provided a resource for writers in other marginalized groups to do the same.
A whisper network protects marginalized voices, starts rumblings that wouldn’t find a way otherwise, and raises red flags of wrongdoing or ethical line-crossing.
Over the last two months, I have heard of three separate instances of writers using a generative AI tool for the actual drafting of their contracted manuscripts with traditional publishers. In at least two of the instances, the writers did not feel any need to disclose this use of AI as a drafting source to their agents or publishing teams. Across the whisper networks I have access to, there have been varied feelings and contradictory justifications for such action, but what has been clear to me is that many do not have an ethical framework for this. And not only that: I think those who see no problem here are missing out on the power their original words have to offer us all.
Now that I have seen three similar scenarios telling me there is a pattern, like the summoning of Beetlejuice, I have been called from the void (by me) to say something into the same void.
Some Ground Rules:
I think it’s worth making some things clear from the top before I get into where I want to take us in this essay:
I come to this conversation as a tech-ethicist, and an amateur one at that. My publishing professional hat is in the closet and I am not an expert in AI or how all publishing companies feel about its usage in any form.
On that note: my opinions are my own. I do not speak for my employer or their approach to AI. I do not sit on that advisory council nor has that changed how my feelings have formed around the subject.
I do not have my hand in book contracts and cannot tell you what is legally permissible or not within any particular contract pertaining to AI. I have feelings about what should be outlined in such a document, but—if you reference point two—these are my opinions. And in the words of one of my father’s favorite clichés, “opinions are like assholes—everyone has one and they all stink.”
In summation: I know enough to be dangerous, but I am informed more by my academic research in algorithm ethics and digital hospitality than by my professional work. I have some thoughts that may be helpful for writers faced with the temptation to use AI to draft a manuscript, or with the conversation around doing so.
Created or Regurgitated
When a traditional publisher contracts a work with a writer, the baseline agreement is that the writer will produce an original long-form work and the publisher will package and distribute that work. Implied in that agreement is that the writer is doing their own work and citing or acknowledging the work of others within their writing as is appropriate to the genre¹. Research assistants or ghostwriters are compensated and acknowledged in some fashion, sometimes to the point of receiving a “with” byline or co-author title if the contribution is significant enough. Even in the case of a truly ghostly ghostwriter, there is usually compensation and a clear agreement arranged for original work that will go without public acknowledgement².
And the work is new and wholly original if it is ethically produced.
Meanwhile, in the cases causing a stir in publishing circles recently, the writers in question are using a technology that, though it is labeled “generative,” cannot produce wholly original work. The program is predicated on machine learning through pattern recognition of previously existing data. It is predicting out of what was. It is unable, and not intended, to make what will be.
Perhaps you are among the many whose published work was used to train an AI model without your consent. I am sorry that has happened. That work was the previously existing data. The metaphorical trees broken down to bits and pressed into particle board for others to cut into the general shape of a tree and claim they grew it themselves from a sapling.
The original tree may no longer be identifiable, but the false tree would not exist without all the trees now masked together to take its form.
AI may, at points, be more deft in its regurgitation than the particle-board tree, to the point where an editor or reader may notice, or perhaps they may not. Recognition by another is not the point. It is the ethic (or lack thereof) in the writer’s action that troubles me³.
AI is not a KitchenAid Mixer
I am not against the use or existence of AI. My feelings are quite complicated, but I do see ways of it being a tool in the writing and publishing process. It can be a massive help in organizing research data. I know of writers who have used it as a starting point for the dreaded synopsis or chapter summaries of a book proposal. It is a brainstorming resource for subtitles or discussion questions, or even a way to confirm a complex subject is being made accessible to a different reading level.
Most of this is using your own writing as the existing data set under your own consent. (While still comparing that consented-to data to the existing data within the AI large language model, which still makes me gripe-y, but as I said, it’s complex.)
But to draft or even use it to ideate an outline gives me great pause and big feelings. To not disclose its use should add even greater pause.
From those in favor of using AI in such a way, the claim has been made that it is like using a stand mixer instead of a hand-held whisk to help in baking. That it helps one do the same task more efficiently. They are still coming up with the idea, giving it their own prompt, and then editing the output and making changes to make it their own before submission.
I cry bullshit.
The appropriate equivalency to their baking simile is that using a computer over a pen and paper is like using a mixer over a whisk.
The more appropriate simile to what they are doing is this: it is like the time I volunteered to bring dessert to a friend’s shower and then promptly forgot until the morning of. So I grabbed a take-and-bake pie from the grocery store, deftly popped it out of its tin tray into my pie plate, re-crimped the top myself, baked it in my own oven, and passed it off as my own.
But I did not make that pie.
An anonymous food-production plant’s worth of workers did. I didn’t plagiarize any singular one of them, but I also didn’t rightfully make that pie. The labor is theirs.
You can edit to your own voice or feed ChatGPT a specific prompt written entirely out of your own mind, but the uncredited labor of others is forming that draft you are expected—and are being paid—to write. Not you⁴.
A Reframing
In my master’s dissertation⁵ work, Jaron Lanier became a frequent voice in conversation with my topic. If you have not yet read You Are Not a Gadget, please do! It is 15 years old and yet could have been written yesterday.
In the first chapter, “What Is a Person?”, he makes the stunning case that technology “depersonalizes” us as human beings. His conclusion (one of many) is that by humanizing our technology, we reduce and diminish our own humanity.
AI is a computer program, not a person. But culturally, we often use human terms to describe it. It “wrote” or it even “hallucinated.” It did no such thing—the program ran as it is programmed to do, and either a result was produced from that running or the result had an error due to something not accounted for in the programming. Yet even some of the companies creating these AI models have given their programs human names and tried to imbue the text output with a characteristic writing voice. It feels like an uphill battle.
So let’s run counter to Lanier’s point for sake of argument and sub in a person for the machine in our story:
Over the last two months, I have heard of three separate instances of writers using a child named Avery to do the drafting of their manuscripts contracted with traditional publishers.
Sure, the original idea was the contracted writer’s, and they instructed Avery in what to do and how they wanted it done. When Avery finished, the writer shifted some of the word choices and added more before editing it all and handing it in to their editor.
And these writers do not think they need to give Avery credit or disclose to their agent or publisher that they enlisted Avery’s help in any way⁶.
Does that hit different? If we’re not okay with this happening to a person, is the action actually improved when a machine is being used in the exploitation? I would argue not.
To act as if one has written a manuscript oneself, while incorporating text produced by a computer program and not thinking that needs to be disclosed in any way, is not okay. It is turning a blind eye to the exploitation of the labor of others without their consent.
But beyond that, it devalues the incredible ability of the human imagination. These writers reduce their own humanity, and I don’t think they even know to think in such terms.
Do we value our creative impulse so little that we are willing to let a machine mimic it for us? Do we think so little of the beautiful words of the writers who have gone before?
To create something from nothing, to summarize Dorothy Sayers’ The Mind of the Maker, is a wholly unique way the writer gets to model the Divine. AI is only a bastardization of this, and to forfeit this human privilege of making is a shame.
And I am left with a lament over how entitled this really is: for one to think that they cannot be bothered to write their own thoughts, but still expect the capital of their contract and the labor of publishing professionals like myself. And worse: to demand the precious attention of readers who would feel betrayed to know that what they are reading was not actually written with them in mind, nor was it actually written by the writer they gave their hard-earned money to.
And to know that there are countless writers out there willing to do the hard but deeply valuable work of creating, of finding their own original voice and thoughts to share. And they are the ones who deserve to be published.
But instead, work across the industry is headed to the printer, allegedly written by AI.
And I am not comfortable with that.
What I’m Taking in Lately
Watching
My partner and I caught up on Severance on Apple TV this month and everything about this show is just exquisite—the writing, the cinematography, the character portrayals. If you are remotely interested in sci-fi, mysteries, or work-life balance/productivity conversations, I highly recommend subscribing to the platform for a month to watch this show. It gets off to a slow start as it lays groundwork, but WOW does it take off!
Reading
In my reading life, I’ve been going between three books as of late:
Extremely Online by Taylor Lorenz—a history of the internet from the lens of influencing. A colleague lent me his copy and it’s been interesting. This may make some essay appearances.
Emma by Jane Austen—I don’t know what it is about winter, but it definitely gets me in the mood for literature classics. I was given this box set for Christmas and the pretty edition is an added perk to the experience.
Saving Face by Aimee Byrd—Disclosure: this is a book I am working with for my job. BUT I do not recommend books that I don’t genuinely think are worth your consideration. This one has hit me so personally. Byrd weaves together personal memoir, psychology, poetry, theology, and journal entries into an intimately honest look at finding your footing after spiritual abuse. How I wish I had this book when I first began seeking recovery! This is an invitation to re-learn your story, given not from the posture of an expert but from someone on their own journey, and I am so grateful this book is coming.
Around the Internet
A few Substacks I’ve jumped in on this month include
’s “What’s the So What,” for everyone who is sick of trying too hard on the things that don’t matter and is trying to find new ways; ’s “Sigh No More,” for the literature nerds who can’t help but connect what they’re reading to the world they’re living in; and ’s Lent series that just kicked off.

I enjoyed this video essay on Kendrick Lamar’s DAMN. after “Humble” popped up on my Spotify shuffle after not listening to it for years.
I’ve been playing Little Kitty, Big City as a de-stress outlet after some overwhelming days. I’m not necessarily a gamer, but I love a cozy game from time to time, and this platformer from the POV of a lost cat has been low-stakes and quite fun.
1. All genres have a way of crediting a source drawn from, not just nonfiction. I think many a novelist or poet can forget this. It is more subtle than the citations many are used to from school research work, but I feel it’s worth noting. If you are unsure of best practices, doing some research on this front is worthwhile. I even think you could create an AI prompt to assist with this, though I’d recommend including an ask for sources in that prompt.

2. A commercial break here to recommend you read the acknowledgements in the books you read, if you don’t already! Maybe I’m a more specific type of book-nerd than I realize, but it’s delightful to hear who writers drew from, were inspired or encouraged by, and even to get a little look into how the thing you just read and enjoyed came together. This is also often where you get the tea on ghostwriting if it’s not listed in the author treatment on the cover.

2.5. The author treatment is the author’s name and any other byline on the cover—“FirstName LastName, bestselling author of That Book You Read That One Time.”

3. Though, to be clear, it should trouble us that it is more and more difficult to identify an AI-produced piece of artwork. We need to ask why the arts—what arguably makes us feel most human and alive to make or take in—have been the target of its development, and not the mundane tasks that we would rather eliminate.

4. Not to mention, the writers drafting with AI don’t even get the privilege of having to parse the thorny ground of what in one’s own writing is formed and inspired by contemporaries and the writers who have gone before. I know that this has caused a lot of drama in the lives of writers—what is the writer’s and what was too closely inspired by something else, or who beat another writer to the punch, or is it all just unlucky coincidence in synchronicity, spurred by capitalism’s liturgies of competition and scarcity? I have wrestled over this with dear writer friends as we have mutually dealt both with accusations and with having our own work used without credit being given. It’s messy. But to take the time to do the research and name what has made our work what it is really is a gift, and I want to call that out.

5. I went to school outside the US. This is what it was called there, and the PhD students were writing PhD theses. I didn’t make the rules, but I have accidentally led people to believe I have a PhD, and I don’t!

6. Also, while I’m thinking of it, it is worth noting that AI-generated work cannot be copyrighted according to current US law, and from legal sources I am aware of in the industry, writers using the tool for the creation of their work are playing with fire.