Paul Donnett
Three writers walk into a bar.
The first says, “I don’t use AI. Ever. And neither does any serious writer.”
The second says, “What do you mean? You used Google to find this place.”
The third says, “I just want to make sure I’m not cheating. Be true to the craft, you know?”
The bartender says, “It’s simple, guys. Same rule as booze: it can help you write, so long as you don’t let it do the writing for you.”
The third answers: “In that case, I’ll have absinthe.”
The Argument Everyone’s Having (But Not Really Having)
As a writing coach, I think about this constantly. As a writer, my own journey has resembled a pendulum, starting out firmly in the “never” camp, dabbling with the “slop” machine, then swinging back again.
My anxieties weren’t just ethical. They were creative, professional, educational, downright existential. In the privacy of thoughts I dared not utter aloud, I asked myself:
Is there ever a time to use AI?
What kind of writer would it turn me into?
How might it slowly erode my creativity?
What habits would I build (or destroy)?
How might it help (or sabotage) my career?
How could it affect other writers and the craft as a whole?
How would fellow writers treat me?
And how long before the Matrix takes all of us?
What Concerns Writers About AI
If there's any doubt that writers have strong and often mixed opinions about AI, just visit the comments section of some Facebook writers groups. The conversation is robust, to say the least.
For many, resisting AI isn’t an act of arbitrary Luddite stubbornness; it’s about protecting something they’ve spent years trying to build, and by extension, the craft as a whole. It's about staying true to what writers have been committed to since storytellers first put stone to cave wall: authentic storytelling. Their concern is tied not only to identity and voice, but to the battle-worn confidence that comes from struggling through drafts until something truly human speaks from the page.
The idea that machines are increasingly adept at brainstorming, outlining, and actually writing an entire work, "vanilla" results or not, strikes legitimate terror in writers. I mean, what kind of a world are we building here, anyway?
The concern extends especially to newer writers, whom veterans quite understandably fear might miss critical opportunities for growth and development by leaning on AI too heavily, ethics aside. All while the theme from The Terminator plays in the back of their brains. Radio host Matt Galloway recently devoted an entire episode of CBC's "The Current" to the subject.
Then there’s the professional reality.
Publishers, agents, contests, and platforms generally are still working out their policies. Some allow limited AI use for spelling, grammar, translation, research support, or simply bouncing around ideas. An increasing number of them expect disclosure if AI generated significant portions of the work. The issue is not simply whether AI was used, but whether the writer is honest about it.
Even if you're not a writer, you can see the problem here. Totally apart from whether a person can claim to be a "real" writer if they used AI at all, if they present something as their own only for it to be discovered that AI did the lion's share of the work, the fallout can be a killer: damaged trust, rejected submissions, canceled opportunities, and reputational harm that could haunt a writer for years. One need only follow the harrowing real-world tale of "Shy Girl" to get the picture.
So when writers take a strong no-AI position, it's coming from a real place: respect for the craft, concern for the future, commitment to a level playing field, and in the end, a desire to know that when they succeed, they genuinely earned it.
We've Been Here Before
The more I chewed on this whole enchilada, the more I remembered that this isn’t actually a new problem. Tectonic shifts in technology have always forced society to walk that delicate line between tool and "the craft".
Just imagine how Gutenberg’s press must have sucker punched a whole industry of scribes. Or how terrifying the arrival of sound in film must have been to the international community of contract piano players. Many of us are old enough to recall how Photoshop initially outraged graphic designers, or how the advent of e-books rang alarm bells for bookstores across the globe.
Each of these changes sent out shock waves, disrupting career paths mid-stride and creating anxiety about the future. (To be fair, they also led to improvements and new opportunities no one saw coming. But I’ll leave that for another article.)
The question isn’t whether AI disrupts writing. It's how much disruption we’re talking about, how best (if at all) to use it, what it turns us into, and where we should draw the line.
Truth is, AI can be an incredibly useful tool for writers. That might trigger an immediate “yuck” response in some, but let's be honest about it: we already happily lean on other writers, the internet, Google searches, expert input, and “sounding board” friends for help.
Add to that list editors, readers, writing groups, outlines, index cards, whiteboards, mind maps, research databases, spellcheck, grammar tools, and even voice-to-text.
All of these help us move more quickly and clarify what we’re trying to say, yet few writers would argue they invalidate the work. So it’s hard to argue that adding one more tool to the list is inherently a bad thing.
Properly used, AI can help you do research, generate ideas, pressure-test your arguments, identify gaps, even straighten out your structure. It can also help you identify or narrow down your story options, challenge your assumptions, provide feedback; the list goes on.
The key thing at every step is that leaning on AI this way should be expanding your thinking, not replacing it.
That’s why, when talking about AI with the writers I coach or work alongside, I don’t have much use for unilateral "shoulds" and "shouldn’ts" that end any further discussion. Treating the matter as a purity test forces people to take unnecessary (and potentially counterproductive) sides, often just to silence critical colleagues, warping writing into a kind of robotic, rule-based religion that silently kills creativity. Ironic, to say the least.
What matters in the end is what our choices and habits are doing to our writing ability, our success, and the fundamental role of written material.
So let’s look at the most common objections to using AI in the writing process and how a writer might respond.
1. “It’s not real writing.”
If AI generates it, it’s not truly mine. It’s not even human.
What’s seen at risk: Authorship, identity, and authentic human expression.
Good point: If AI does all the heavy lifting, you’re not building your own voice or representing real human thought or experience. You’re a curator, not a creator. And you’re contributing to a growing body of mechanical, inhuman work.
Balanced response: Keep the writing yours. Use AI for idea exploration and structure like you do with other tools, but do the actual writing yourself.
2. “It’s basically plagiarism.”
AI simply recycles or mimics someone else’s content. And doesn't even give them credit.
What’s seen at risk: Originality and ethics.
Good point: Copying and pasting AI output is no different than copying any other source.
Balanced response: Treat AI like notes. Consult, but always rewrite everything in your own words and apply your own thinking. (Example: You ask for opposing viewpoints, then rewrite and build your own argument from them.)
3. “It kills your voice.”
My writing will become generic and bland.
What’s seen at risk: Distinctiveness, dynamism and self-expression.
Good point: AI defaults to “average”. Overuse leads to homogeneous “sameness”.
Balanced response: Use AI to identify problems, not rewrite your work. Develop your own way of expressing ideas, injecting your unique personality into your writing. (Example: You ask where your draft is unclear, then rewrite those sections in your own voice.)
4. “It’s just a shortcut.”
I’ll skip the necessary struggle involved in developing quality writing.
What’s seen at risk: Skill development.
Good point: The messy parts of writing are where insight and mastery come from. Try to jump the line and you simply never become a good writer. Meanwhile, you build false confidence.
Balanced response: Use AI to explore options, then choose and develop one on your own. Embrace the mess as you come up with your own solutions, don’t just let AI give them to you.
5. “You’ll become dependent on it.”
I’ll gradually lose my ability to write without needing help.
What’s seen at risk: Independence, strength, and confidence.
Good point: Too much reliance can become weakness.
Balanced response: Set clear, hard boundaries. Do core thinking and first drafts without AI. Share your work with humans, get their feedback, make changes, and bring it back to them.
6. “AI will steal my work.”
My ideas will be ripped off or reused.
What’s seen at risk: Ownership and control.
Good point: Even though most AI platform policies promise “no public sharing”, there’s always a risk. Or at least the ugly feeling of one.
Balanced response: Be selective. Don’t paste everything you’ve written into tools you don’t trust. Use summaries or excerpts and treat AI like a semi-public space, not a private notebook. (Example: You describe a scene in general terms to explore ideas but keep the draft itself on your computer.)
7. “It’s going to take our jobs.”
AI reduces the demand for what I have to offer.
What’s seen at risk: Livelihood and opportunities.
Good point: AI is replacing or eliminating positions in every industry at lightning speed.
Balanced response: Focus on getting good at what AI struggles with: voice, style, real human conversations, authentic relationships, your own lived human experience. And join calls to limit where and how it's used and accepted. Like any useful tool with a hazardous downside (nuclear power, the internet, a chainsaw), its utility doesn’t give it a pass to run wild.
8. “It floods the world with garbage writing.”
Before long, nobody will even know what good work looks like.
What’s seen at risk: Standards, expectations, appetite, and basic intelligence.
Good point: AI makes mediocre content easy to produce and proliferate.
Balanced response: Use AI to improve quality, not increase volume. Use AI to identify weak spots, then deepen the work yourself. Write really good stuff and get it out there, one way or another.
9. “You can’t trust it, whether you’re a writer or reader.”
AI can give me wrong or shallow answers and make me look like a fool.
What’s seen at risk: Accuracy and trust.
Good point: AI can sound convincing while being totally incorrect.
Balanced response: Treat AI output as a starting point, not the final answer. Triple-check everything and apply your own judgment. Hopefully, you’re already doing this, wherever you're sourcing your information.
10. “It will turn your brain to mush.”
My critical thinking skills in general will crumble over time.
What’s seen at risk: Cognitive sharpness: our ability to think, question, and solve problems.
Good point: If AI regularly does your thinking for you (generating ideas, making decisions, evaluating quality), those mental functions will inevitably weaken. Passive acceptance equals lazy thinking over time.
Balanced response: Use AI to challenge your thinking, not hand it to you. Ask it to raise questions and expose the gaps in your ideas and conclusions, then work out the solutions and make decisions on your own.
Surfing the Wave (Rather Than Trying to Outrun It)
All of the objections above obviously come from a real place and speak to a common core concern: not whether AI has a downside (it does), but how it affects us, how we should relate to it, and where we go from here.
Like it or not, AI is here to stay. Incurable fan of dystopian science fiction and keen observer of human nature that I am, I believed for years that we could somehow stem the tide, but we didn’t, and arguably couldn’t.
But in retrospect, I’m not so sure that was the correct response, anyway. As any coast dweller knows, the trick to handling an unstoppable wave isn't to get mad at it or try to outrun it. It's to learn how to ride it.
While my (hopefully justified) fears about AI do and likely always will linger, here’s my bottom line for us as writers: As with any tool, if AI strengthens our ability to think, create, write, and be original, it’s doing precisely what it should, and there is wisdom in (carefully) embracing it. If we allow it to oust these wonderfully and uniquely human attributes, it’s time to recalibrate.
In the end, surprise, surprise, whether and how we use AI is entirely up to us.