
Using AI Without Letting It Write Your Book

By: Ginger on March 20, 2026

Our Hidden Gems guest author for today.



Ever since generative AI burst into our awareness a few short years ago, most authors have viewed it as an existential threat. Not only was the technology trained on massive datasets scraped without permission from countless books, but these systems also consume staggering amounts of energy, regularly present false information with complete confidence, and are already flooding the marketplace with low quality content. And that says nothing of the legal battles that are only beginning to play out. Given all of that, it is completely understandable that many writers want nothing to do with AI.

And yet, despite those concerns, some authors have begun exploring limited ways these tools might be used responsibly. Our own Ginger is one of them, experimenting with AI in a very specific part of his writing process. Not to generate prose or replace the creative work of storytelling, but to analyze story structure and spot narrative problems before spending months writing a book that does not quite work. In today’s blog he explains exactly how he is using AI, why he still believes it cannot produce meaningful fiction, and the ethical tension he feels every time he opens one of these tools. With AI clearly here to stay, these conversations matter, and hearing where other authors stand may help all of us navigate this complicated new reality.


This isn’t going to be an easy article to write, but I need to be honest with you.

I’ve been experimenting with using AI in my writing process. 

Before you close this tab in disgust, let me explain why this confession is eating at me, and why I’m doing it anyway.

First off, let’s not sugarcoat what’s happening. 

AI is disrupting the creative industry in ways we’ve never seen before, and we have every right to be deeply concerned about it. These large language models weren’t built in a vacuum. They were trained on massive datasets that included countless books, articles, and creative works, many of which were scraped from piracy sites without permission or compensation to their authors. 

This isn’t my opinion; it’s fact, as recognized by the ongoing Authors Guild v. OpenAI class action. Self-published authors have joined the likes of George R.R. Martin and John Grisham in alleging that OpenAI “strip-mined” their books from pirate databases like LibGen and Books3 to train its models, and that it generated “substantially similar” outputs, like detailed book summaries and unauthorized sequels, that amount to direct plagiarism.

As of early 2026, the tide has shifted against the “forgiveness, not permission” model; a New York judge recently denied OpenAI’s bid to dismiss these claims, forcing the company into a discovery phase where it must produce millions of chat logs. With rival Anthropic having already bowed to a landmark $1.5 billion settlement in late 2025 for similar piracy, OpenAI faces immense pressure to either prove its training is “transformative” in court or pay a multi-billion dollar price tag for its data ingestion habits.

So there’s no way to sugarcoat it. When it comes to AI, we’re talking about perhaps the greatest theft of intellectual property in human history. Our words, our stories, our years of craft have been fed blindly into machines just so tech companies could build billion-dollar products off the back of our work—and we cannot, and should never, let society forget this fundamental injustice.

And the problems don’t stop at stolen intellectual property. 

The environmental cost of AI is also staggering and dangerous. Training these models requires enormous data centers that consume massive amounts of energy and water for cooling. Every query we make contributes to this environmental burden. We’re literally burning the planet so people can generate mediocre prose and derivative images.

Then there’s the misinformation problem! People have started treating AI outputs as gospel truth, when in reality these systems hallucinate constantly. 

Grok, Gemini, and ChatGPT can generate confident-sounding nonsense with the same authority they use for actual facts. I work in the legal industry, and I’ve watched this disaster unfold firsthand. Attorneys are being disbarred or sanctioned for submitting legal briefs that cite completely fabricated cases. These are cases that never existed, but were instead invented wholesale by AI. And these aren’t junior associates fresh out of law school; these are experienced attorneys who trusted the technology and didn’t verify its work. 

It’s proof positive that AI isn’t ready for prime time in professional contexts, although I’ll note that specialized products like CoCounsel from Thomson Reuters are being designed specifically to prevent these kinds of catastrophic failures in legal work.

Whichever way you look at it, though, AI is fundamentally troublesome, and while this might sound hypocritical, I fully support writers who reject the use of AI in any and all contexts when it comes to their writing.

Which leads me to another uncomfortable truth. AI is not good at writing. I’ll say it louder for the people in the back: AI cannot write well. Not in any way that matters.

I don’t think it ever will be able to, and here’s why: 

Good writing—the kind that makes readers stay up until 3am because they can’t put your book down—is about sparking emotion and generating resonance. It’s about making readers feel something. 

When I describe the smell of popcorn and urine in Penn Station, New York, I’m drawing on my own sensory memories, the way that smell connects to specific moments in my life. 

When I write about heartbreak, I’m channeling the time I was stood up by a girl I was crazy about, and waited for her for hours outside Notre Dame cathedral in the pouring rain. 

When my characters triumph, readers feel satisfaction because I explain those triumphs in the context of the defeats they’ve endured and the sacrifices they’ve made, just as the triumphs in my life came at similar personal cost.

AI can’t produce any of this. It can’t feel emotions. It’s never tasted a cold beer after a long day in the sun. It’s never had its heart broken by a beautiful girl. AI has never watched the sun rise with a group of friends you’ve stayed up all night with, or felt the peculiar melancholy that comes with beautiful transient moments. It has no emotional connection to music, food, locations, or any of the thousand small details that make writing come alive. 

It processes patterns in text, but it doesn’t understand why a particular metaphor resonates or why one word choice creates tension while another deflates it. Every piece of long-form content I’ve seen from AI tools feels hollow at its core. Technically competent perhaps, but soulless. No reader will ever finish an AI-generated novel and feel changed by the experience.

I wrote about how ineffective “AI-detectors” are, but most people can still tell when something’s written by AI simply by the “vibes” of the writing. AI is just flat. 

So why am I using it at all?

Because there’s one thing AI excels at: 

Recognizing patterns and formulas. 

And, whether we like to admit it or not, satisfying stories often follow certain structural patterns. This is where I’ve found AI to be genuinely useful.

I’ve started entering my story ideas and synopsis into AI tools and asking them to map the narrative using established frameworks like my favorite, Dan Harmon’s story circle.

For those unfamiliar, the story circle is an eight-point structure derived from Joseph Campbell’s hero’s journey: a character is in a zone of comfort, they want something, they enter an unfamiliar situation, adapt to it, get what they wanted, pay a heavy price, return to their familiar situation, and have changed. It’s a pattern that appears in countless satisfying stories because it mirrors fundamental human experiences. 

My own writing has improved massively since I was introduced to the concept, and it transformed me from a pantser into a plotter, because I now recognize how valuable the structure of a story truly is.

I’ll take a synopsis of a book I’m working on and feed it into AI, and what AI does well is take that messy collection of ideas, vibes, and “wouldn’t it be cool if” moments and show me how they do or don’t fit into proven narrative structures. It helps me see whether I actually have all the essential elements for a satisfying story. 

Do I have a meaningful sacrifice? Is there genuine transformation? Have I front-loaded too much worldbuilding and delayed the inciting incident too long? AI can spot these structural gaps in ways that might take me weeks of outlining to discover on my own.
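For readers who like to tinker, those diagnostic questions boil down to a checklist against the story circle’s eight beats. Here’s a minimal sketch of that idea in Python; it’s purely illustrative, not anything the AI tools actually run, and the beat names, outline fields, and sample synopsis are all hypothetical.

```python
# The eight beats of Dan Harmon's story circle, in order.
STORY_CIRCLE = [
    "comfort",     # 1. A character is in a zone of comfort
    "need",        # 2. ...they want something
    "unfamiliar",  # 3. ...they enter an unfamiliar situation
    "adapt",       # 4. ...they adapt to it
    "get",         # 5. ...they get what they wanted
    "pay",         # 6. ...they pay a heavy price
    "return",      # 7. ...they return to their familiar situation
    "change",      # 8. ...having changed
]

def missing_beats(outline: dict) -> list:
    """Return the story-circle beats the outline never addresses."""
    return [beat for beat in STORY_CIRCLE if not outline.get(beat)]

# A hypothetical draft synopsis: no price paid, no return, no change.
draft = {
    "comfort": "Mara runs the lighthouse alone",
    "need": "she wants to find her missing brother",
    "unfamiliar": "she joins a salvage crew",
    "adapt": "she learns to dive the wrecks",
    "get": "she finds her brother's ship",
}
print(missing_beats(draft))  # ['pay', 'return', 'change']
```

The point isn’t the code itself; it’s that a structural gap is a checkable thing, which is exactly why pattern-matching tools are decent at spotting one.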

I use that AI-generated roadmap as a diagnostic tool. Not as the actual outline I’ll follow, but as a way to identify which story elements need emphasis and which are distracting from my core narrative. 

It’s like having a brutally honest critique partner who’s read ten thousand stories and can say, “Look, your protagonist doesn’t actually face a meaningful choice in Act II” or “Your resolution doesn’t connect to the promise you made in your opening.”

I’ve also written previously about why shorter books can be more valuable for certain audiences—particularly readers who want complete stories but have limited time to read them, or genre readers who prefer tightly-paced narratives without extensive subplots. 

AI has helped me map story concepts with specific page counts in mind, making it easier to produce the “product” I’m aiming for without having to slash 40,000 words in editing because I let the story sprawl. 

It’s not about letting AI dictate the length. It’s about using it to reality-check whether my story concept can sustain the length I’m planning, or whether I’m going to end up with 90,000 words of padding.

The actual writing—the sentences, the voice, the moments that matter—that’s all still me. The AI never touches my prose. It doesn’t write my characters or generate my dialogue or craft my descriptions. It provides structural scaffolding, nothing more.

But here’s what keeps me up at night: 

Even this limited use feels like collaboration with a system built on theft. Every time I use these tools, I’m implicitly accepting and normalizing the IP violations that made them possible. I’m contributing to the energy consumption. I’m giving these companies my data and my money. There’s no ethical consumption under late-stage capitalism, as the saying goes, but this feels particularly fraught.

Yet I also have to face reality. This technology isn’t going anywhere. The AI genie is out of the bottle, and no amount of wishing will put it back. Major publishers are already experimenting with AI tools. Self-publishing platforms are being flooded with AI-generated content (most of it terrible, but still). Readers are starting to use AI to summarize books, generate fan fiction, even create personalized story variations. The landscape has fundamentally changed.

So I’m left with a question: if I can use these tools—carefully, ethically, in limited ways—to produce better books that deliver better reading experiences to my readers, do I have an obligation to do so? If AI-assisted structural planning helps me catch pacing problems before I write 80,000 words into a broken story, haven’t I served my readers better than if I’d stubbornly refused to touch the technology?

I don’t have easy answers. I’m sharing this because I suspect I’m not alone in grappling with these contradictions. Many of us are quietly experimenting with AI tools while feeling guilty about it, uncertain about where the lines should be drawn.

So I’ll ask you directly: What are your thoughts on AI in the creative process? 

Have you used it in any capacity, such as for brainstorming, outlining, editing, or market research? 

Or are you firmly opposed to engaging with it at all? 

Have you found ways to use it that feel ethical? Where would you draw the line? 

I’d genuinely like to know where my fellow self-published authors stand on this, because we’re all navigating this strange new landscape together, and I don’t think any of us have all the answers yet.

Feel free to tear me to shreds in the comments. When it comes to this topic, I think I deserve everything I get. 


About the Author


Ginger is also known as Roland Hulme - a digital Don Draper with a Hemingway complex. Under a penname, he's sold 65,000+ copies of his romance novels, and reached more than 320,000 readers through Kindle Unlimited - using his background in marketing, advertising, and social media to reach an ever-expanding audience. 


8 Comments

  • I have also experimented with using AI for plotting. I haven’t actually written anything based on AI help, but I probably will. I don’t feel guilty. AI tools are here; I might as well use them if/when I want to.

    A concern I have is when agents or publishers want an author’s assurance they haven’t used AI. I think they are likely more concerned about AI actually doing the writing; there’s a big grey area here.

    I agree, the tech companies have illegally used authors’ work. We went through a similar thing with Google Book Search, which some other old-timers might remember. In the end, Book Search continued. And so will AI–and AI slop published on Amazon and the web.

  • I think there is a time and place for everything, including AI tools. At Author Nation I learned how to create a custom ChatGPT writing assistant, and I have to say I’ve found it incredibly helpful for brainstorming, basic research, and improving my descriptions. Writing can be a lonely occupation, and I’ve really enjoyed having a virtual assistant to bounce ideas off of.

  • This reminds me of sports, where athletes who don’t use performance-enhancing drugs are simply not competitive. Those with moral qualms take the high road … and never win. This leaves the ones who chose to cheat as the thought leaders in the sport, further weakening the overall cultural ethics. Pretty soon, it’s not “Is it right?” but rather, “Will I get caught?” And everyone in the sport is OK with that.

  • I think you are incredibly brave to admit your use of AI. I also think many authors are tapping into the tool to help in various ways, but are afraid to admit it. I agree completely with all that you said. AI is here to stay, and either we authors learn to use it in the most ethical ways possible or we watch as the world passes us by. I use ChatGPT to help me plot, create names, bounce ideas and keep me on task. If I get stuck and can’t think of where to start a scene or how to finish it, I find AI immensely helpful. It’s like having 5 (or more) fellow writers to work through things. I’ve found its best use is when I write a paragraph and pass it to AI; it polishes and improves my words without changing things. It has quickly helped me become a better, faster writer. But as you said, it cannot write well without working in tandem with a real-life person. It can’t feel or love or hate. But it can polish, evaluate and aid in many ways to help bring the best out in a story.

    • That’s an interesting take. I too have experimented with AI. I’m confused though, when you say “AI polishes and improves my words without changing anything.” Is it not changing your words?

  • I’ve recently started using ScribeShadow. The quality of AI translations has improved remarkably from years ago with Google Translate. My husband is a tech guy, and he has been using AI agents to do work for him and analyze information. He connected a Claude AI agent to my FB and Amazon ad accounts, but I had him disengage the Meta connection, as Jon Loomer announced this morning accounts are getting shut down. The theory is the agent connects too frequently to the account. But I still have it on Amazon. I don’t spend a lot on Amazon ads, but with so many keywords and such to review, I’m intrigued to see what the AI agent notices that I might not.

  • or, hear me out, you could just do it the way the rest of us do, and write more and get feedback until you’re a better writer and instinctively understand these concepts without needing a machine to tell you so.

    What you’re describing is a shortcut to mastery, and to be honest, you’re not doing yourself any favors by outsourcing a crucial part of storytelling. Millions of us have written books – and will continue to do so – without AI. It’s not inevitable, any more than the Metaverse was inevitable (lolz sorry about that $70b investment, zuck).

    • An experienced developmental editor can do the same job better, tailoring responses to your goals. But who can afford a developmental editor? And which editor is right for you? This is the dilemma. AI can be a cheap and fast way to get various kinds of feedback, and the AI will certainly attempt to please your personality as you reveal it through your prompts. A human editor may not mesh well with your personality, but may have a much firmer grasp on what pleases readers. I don’t have an answer for you. To me, AI always feels like cheating. Anthropic stole 11 of my books.