The Algorithm Has Daddy Issues
When the chatbot mansplains your book to you (+ tips for using AI)
Like many authors, I LOVE using AI to help me with admin tasks. I wish I'd known more about how to use it when I started my Queenpin Chronicles series; it does so many things I don't want to do, and does them well.
Writing is not one of those tasks, IMO. But I do love using AI for things like calculating series read-through, discovering book promo opportunities, and optimizing metadata.
After hitting about 40,000 words in my new Steel City Mysteries series, I figured I'd ask it to help me organize a series bible (a compendium of people, places, and things in the novel) rather than hunt and peck my way through the text to compile the info. Side benefit: it helped me clear up inconsistencies I'd unwittingly typed in.
For extra credit, I asked for a summary of key emotional attributes of the main characters. One of them, whom I consider an obnoxious asshole and not even really a main character, came back like this:
Decisive and direct in professional matters
Ambitious and driven
Intelligent with business acumen
Charismatic enough to attract others
Adaptable to new situations and environments
Efficient at managing a news organization
Meanwhile, another character—whom I do consider a main character—was included further down the list and got reduced to:
Mom, fiercely protective of her daughter
Resourceful and practical
Highly intelligent and technologically savvy
Determined and strong-willed
Excellent advocate for Jessica (stands up to Detective Guiterrez)
Adaptable to difficult circumstances (after husband's death)
Continued self-improvement through education
Business-minded (handled insurance agency affairs)
Can you guess who is the male and who is the female character in this scenario?
Okay, “mom” gives it away. But there’s so much more to the breakdown “logic.”
First, the dude gets described first. As I said, he is not a main character. He is, however, a "boss." So his habit of treating employees as disposable and manipulating women like it's sport makes him, according to AI, a visionary leader.
My formidable female character who happens to have children is, on the other hand, described as "Mom." Her behavior is read for how it relates to others: nurturing, resilient, selfless. She's not intimidating or commanding; she's practical and supportive. Even her ambition is framed as "handled the family business." Gold star, sweetie.
This is a trend I first noticed while working with images. (It's something I talk about in a video I made for this post, a deeper dive on working with AI tools.) But I'm sure you've seen it too.
Ask AI to generate “a powerful woman” and you’ll often get something straight out of a teenage boy’s notebook doodles: buxom, flawless, and contorted into poses that no human spine could survive. Worse than that are the faces, a study in surgically altered perfection.
These aren’t one-offs. They’re baked in. Because these models are trained on massive volumes of data scraped from the internet—and guess who dominates the internet? The same patriarchy that loves a “resting bitch face” meme. That still thinks calling a woman “ambitious” is a warning, not a compliment. That treats women in stories as moms or muses, while men make the big calls.
What AI is doing is reflecting our world back to us. It’s not neutral. More to the point, the bias in = bias out dynamic isn’t just frustrating—it’s a slippery slope.
When the models swallow "tradwife realism" and the feeds reward it, algorithmic systems don't just reflect a sexist fantasy. They amplify it, monetize it, and spawn more of it.
As the tradwife movement surged on TikTok, a Media Matters audit found that interacting with #tradwife content funneled users into far-right conspiracy rabbit holes in nearly 33% of cases.
Then there's the whole "womanosphere": public personalities like Candace Owens, Allie Beth Stuckey, and Brett Cooper creating immersive lifestyle content that markets nostalgia as enlightenment and soft-sells authoritarian ideas under the taglines of "self-care" and "traditional femininity."
And — if you haven’t seen it yet, YOU’RE WELCOME — now we have tradwife’s pretty baby, the “Princess Treatment” trend. This is, essentially, TikTok’s latest offering of iced and gendered “public submission” dating advice. As far as I’m concerned, this one is a far-right conspiracy all on its own.
We need tech accountability built on transparency, auditability, diversity, and public governance, such as that proposed in the Algorithmic (Civil Rights) Accountability Acts by Senators Cory Booker and Ron Wyden and Rep. Yvette Clarke.
Sure, LL! Policy is great and all, but it’s a long game. What can I do now?
SO GLAD you asked! If you use AI for problem-solving, ask for better. In one study, models like GPT-3.5 leaned into gendered stereotypes (defaulting to words like "chairman" or assuming characters are white) unless prompted to think through context. Indicate in your query that you want diverse genders, body types, and ethnicities.
Another tactic is asking models to “explain assumptions before answering.”***
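If you use a model through its API instead of a chat window, you can bake both of these tactics into a reusable system prompt so you don't have to retype them every time. Here's a minimal sketch using the OpenAI Python library; the model name and the exact wording of the instructions are my own placeholders, not a vetted recipe:

```python
# Sketch: a reusable system prompt that asks the model to surface its
# assumptions and default to diverse representation. Wording and model
# name are placeholders; adapt them to your own tool.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

BIAS_CHECK_INSTRUCTIONS = (
    "Before answering, briefly list any assumptions you are making about "
    "gender, ethnicity, age, or body type. Unless I say otherwise, use "
    "gender-neutral wording (e.g., 'chair' rather than 'chairman') and "
    "include diverse genders, body types, and ethnicities. Keep the final "
    "answer concise."
)

def ask(question: str) -> str:
    """Send a question with the bias-check instructions attached."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model you use
        messages=[
            {"role": "system", "content": BIAS_CHECK_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("List key emotional attributes of the main characters in my series bible."))
```

In a chat window, the same trick works as a line at the top of the conversation or as a saved custom instruction.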
And finally, when you spot bias, you can send it to the Algorithmic Justice League's "Report AI Harms" portal; the advocacy group is gathering case studies with the goal of holding companies accountable.
If you’ve noticed this pattern too, I’d love to hear your examples. Because the first step to stopping bias is recognizing it, even when it shows up as a cheerful chatbot.
→ Seen something off in AI, media, or tech that made you bristle? Hit reply or leave a comment below to start a conversation. I might include it in a future newsletter (with your permission, of course).
***Use extended-answer prompts judiciously. Be brief in your requests and ask for concise answers, since longer generations consume more compute and energy. While AI is far from the planet's worst environmental offender, thoughtful prompting is a small, simple step you can take.
Whoa. So much I did not realize and had not considered. I'm not a big fan of AI but did use it to create quizzes on vocabulary and literary terms for ELA class last year. I had to tweak the results every time, but it was faster than creating the quizzes on my own. HOWEVER, we were TOLD to use AI to create lesson plans and sample test questions, and they were garbage. We used very long and specific prompts, but still the subject matter was not on grade level, the rigor was not there at all, and many times the whole thing made NO SENSE. I do not trust AI! One more gripe before I go: in a writers' group last week, one guy shared his "AI-generated" piece. He admitted he'd used it before he started reading. The rest of us were looking at each other like, this is not okay, and WTF. Some of us complained afterward to the host. I mean, it's a WRITING group, not a GENERATING group. The thing he read wasn't even that great.
Excellent observations, L. Thank you for shining a light on biases and assumptions that slip by many people without a second thought.