When the first trailer for legendary Mexican director Guillermo del Toro’s Frankenstein dropped, the tagline—four words slammed across the screen in huge white letters—sent me scrambling for a Google doc: ONLY MONSTERS PLAY GOD.
As a writer who works in technology and also happens to have an extremely marketable degree in English literature, my immediate impulse was to reach for the AI take: Frankenstein, creation, hubris, founders, models—the Substack essay basically wrote itself.
Then I watched the movie, twice. Frankenstein absolutely has something to say about AI. Just not the thing I thought.
Here’s a quick recap for those of you who haven’t read the book, or who have but not since high-school English: Mary Shelley’s Frankenstein tells the story of an obsessive young academic, Victor Frankenstein (Oscar Isaac in the film), who, on a mission to conquer death and push human knowledge to its outermost limits, brings the creature (that’s a spectacular Jacob Elordi in the movie) to life. Fallout ensues.
Del Toro’s Frankenstein is a largely faithful adaptation of the 1818 novel, with a number of what I’d say are acceptable-to-excellent tweaks. But the more I think about it, the more I have an issue with its slogan.
“Only monsters play God” is a catchy tagline aimed at driving views. But there’s so much more to this book and this film than Disney-grade platitudes about “real monsterhood” can convey. Because the most interesting part is not the “it’s aliiiiive” moment in the lab (as much as that scene in the movie rips); it’s what happens after the eyes open, when the question becomes what you owe to what you’ve made, and what kind of monster you become when you walk away.
The part that tracks: Natural stupidity
Guillermo del Toro really, really does not like AI. He’s said he’d “rather die” than use generative AI in his films. His comment to NPR lives rent-free in my head: “My concern is not artificial intelligence, but natural stupidity.”
So when he finally made Frankenstein, he wanted “the arrogance of Victor to be similar in some ways to the tech bros.” Del Toro understands Victor as a guy who charges ahead, builds something he doesn’t fully understand, and panics afterward—hardly the tragic genius who flew too close to the sun.
Del Toro fears the human who shrugs and ships anyway, not the machine itself.
This is where Frankenstein and AI really do align. Victor treats building a new form of life like a private science project rather than a public act with consequences. In Shelley’s novel, he panics and runs the instant the creature opens its eyes. When people start dying at the creature’s hand (which they very much do in the book), Victor’s first instinct is: “I didn’t kill anyone, my creature did.”
It’s a clean dramatization of something some people in tech are tempted to do with AI—pretend harms such as deepfakes and psychosis are an unfortunate side effect of something that was, at its core, neutral. As if design choices, training data, incentive structures, and deployment decisions aren’t part of the creation.
If you’re launching systems that mediate hiring, healthcare, education, or politics, you are Victor in the lab. You don’t get to throw up your hands later when the thing behaves in ways you did not anticipate.
This is where Frankenstein earns its staying power—and where it starts to complicate the easy AI metaphor. Shelley doesn’t give us a ranting, raving monstrosity or a straightforward morality tale. She gives us a creature who talks back.
Where the simple metaphor breaks: The creature’s inner world
For a big chunk of Shelley’s novel and del Toro’s adaptation, we’re inside the creature’s head, not Victor’s. He teaches himself language by eavesdropping on a humble family when he takes shelter in a hidden corner of their cottage. He learns to read, discovers John Milton’s 17th-century epic poem-slash-Bible fan fiction Paradise Lost, weeps over human cruelty, and tries incredibly hard to be good.
He starts to argue back and engage his maker in a philosophical debate about his own existence. In the book he tells Victor, “I was benevolent and good; misery made me a fiend. Make me happy, and I shall again be virtuous.” He’s reframing monstrosity as a result of nurture, not nature.
This is the creature del Toro brings back on screen. His monster makes friends with woodland creatures, gives leaves as gifts, and reads Milton and Genesis. He’s sensitive and yearns for connection—pay attention to del Toro’s repeated focus on hands, touch, and physical proximity throughout the film. When he rejects society and society’s rules, it’s because society rejected him first.
And this is where the simple AI metaphor—Victor as founder, the creature as technological breakthrough—falls apart.
Shelley’s creature has an inner life. There is someone in there: a sense of self, memories of being treated as a monster, an interpretation of what that treatment means. His “dangerous knowledge” is the realization that he exists in a world that has no place for him, and that his creator will not acknowledge him as a subject.
Our current AI systems are not that.
They’re sophisticated pattern engines. They reflect our language back in ways that feel uncannily like conversation. But there’s no consciousness inside your model, sitting there reading its training data and wondering why you abandoned it. When we personify our AIs (which, if you’re like me, you are wont to do from time to time), we’re flattering both ourselves and the machine.
That kind of anthropomorphizing is tempting—it matches the vibes of Frankenstein so nicely—but it buckles under pressure. It lets us say, “The AI went rogue” instead of, “We gave it a bad objective and terrible data and deployed it anyway.” It shifts focus from “What did we build?” and “Who does this harm?” to “What if it becomes self-aware and hates us?”—which is great for sci-fi trailers and terrible for governing technology.
So no, the AI is not the creature. We haven’t earned that metaphor. If we want Frankenstein to help us understand AI, we have to stop pretending the model is the monster and look harder at Victor.
Del Toro’s tweak: Expectations versus reality
In Shelley’s novel, Victor fails instantly and completely. The creature opens his eyes, breathes, moves—and Victor bolts. One glimpse and he panics, with zero attempts at building a relationship. Del Toro doesn’t let him off that easily (warning: spoilers ahead).
In the film, Victor doesn’t flee the second the creature animates. He keeps him alive. He hides him. He locks the creature in a kind of sewer cell—chained in the dark with a culvert of dirty water and a single shaft of light for enrichment. The only “toys” this newborn consciousness gets are a leaf and his own curiosity.
Victor clearly fantasizes that he’s created a new kind of being, something that will reveal its brilliance any second now. He wants the cinematic miracle: the creature standing up, orating in iambic pentameter, and proving his maker’s genius.
What he gets instead is a baby.
When the creature finally hauls himself upright and totters toward the light (poignantly mimicking Victor gesture for gesture), he can’t speak. For what the film implies to be weeks or months, he’s vulnerable and scared and clumsy in exactly the way you’d expect from a being whose entire sensory diet has been mold and runoff. And Victor is disappointed.
Because he doesn’t live up to Victor’s expectations. He’s not violent or malicious—just unimpressive. Instead of realizing, “Oh, this is on me,” Victor downgrades the creature from miracle to mistake. He stops seeing potential and starts seeing a failed prototype.
So in del Toro’s version, it isn’t just horror that makes Victor abandon his creation—it’s wounded pride. The thing he’s made doesn’t showcase his brilliance the way he wanted, so he retreats. It’s an abdication of responsibility, yes, but it’s also aesthetic disappointment: If the miracle isn’t photogenic and articulate on day one, he’d rather pretend it never happened.
That tweak sharpens the moral of the tale. Don’t confuse your expectations with reality, and don’t punish your creations for reflecting the conditions you created.
That, uncomfortably, is where I see a lot of our AI behavior rhyming with Victor’s.
Creation as a job you don’t get to quit
I spend a frankly embarrassing amount of time with large language models. I use them as a Bible study buddy, as a writing collaborator, and as a thinking partner. My systems work well because I invest in them—giving context, explaining how I think, and making corrections. I treat them like a very fast, very literal coworker who needs a long onboarding—not a vending machine.
When people say “AI doesn’t work,” what they often mean is, “I barked one vague prompt at it and expected it to read my mind.” That’s a Victor Frankenstein move: Get disappointed that your creation doesn’t immediately match the image in your head, declare it a failure, and abdicate.
There are at least two layers of responsibility when you create something powerful: First, there’s responsibility to the world: You’re accountable for the downstream consequences of unleashing it. Second, there’s responsibility to the thing you made: You don’t flip it on and walk away. You maintain, monitor, tune, repair, and respond.
Victor fails spectacularly at both. A lot of AI discourse only really dwells on the first one. We debate whether we should build X or pause Y; we argue about “existential risk” versus “overregulation.” That’s important. But Frankenstein keeps dragging me back to the second responsibility—the one about staying.
If you decide to build systems that touch real people’s lives, you inherit a maintenance job. Someone has to debug them. Someone has to keep an eye on what they’re doing in the wild. Someone has to say, “We broke this; we’re going to fix it,” instead of, “It broke itself; not our problem.” It’s not necessarily an indictment of modern-day tech, but it is a warning.
I keep Frankenstein on my personal AI syllabus as a mirror rather than prophecy. It asks a harder question than whether we should build smarter systems: What kind of creator are you going to be once you do?
If you’re going to be Victor—if you’re going to build systems that touch real people’s lives—you don’t get to leave the lab when the eyes open. That’s where the story starts, not where it ends.
Katie Parrott is a staff writer and AI editorial lead at Every. You can read more of her work in her newsletter.
Comments
This is one of the best articles I've read in a while.
Wow, Katie. Kudos again for this article, a movie review, and your brilliantly reasoned analysis and observations... haven't seen it yet, but del Toro's "Frankenstein" is on our watch-list for this weekend.
Immediately and urgently apropos: Get a copy of Brian Merchant's latest book, "Blood in the Machine: The Origins of the Rebellion Against Big Tech" (2023, Little, Brown and Company, New York). Ostensibly "about" the Luddite movement of the early 1800s, Merchant finds direct parallels to the dangers and temptations that ooze from contemporary purveyors of Big Tech, especially our nascent AI/LLM developments. And he lands congruently to your argument, which is why I strongly recommend the book...
...which also -- this should not be surprising to an aware student of British history and literature -- notices and untangles the lives and influences of Lord Byron (George Gordon), Percy Shelley, Mary (née Wollstonecraft) and William Godwin, and of course, their daughter Mary Shelley (née Godwin) herself, along with her great gothic novel "Frankenstein" of 1816-18. There are direct influences, and core parallels, between Shelley's characters, Dr. Victor Frankenstein and his nameless monster (I checked this with Grok; the monster was never named, although at one point it referred to itself as "Adam", the first man) and the Luddites.
Scholar Russell Smith notes that, "Frankenstein's monster shares many characteristics of the Luddite movement: his demands are articulate, well-reasoned, and founded in natural justice... So too, he only turns to violence when his legitimate pleas are ignored, and his violence is not indiscriminate, but very specifically targeted." Hey, the Luddites were "articulate, well-reasoned, and founded in justice"? This flies in the face of modern misconceptions, promulgated by many in Big Tech, that the Luddites were an undisciplined, pitch-fork and club-wielding rabble, prone to rioting at any hint of "progress", and murderous to boot. But far from it!...
Merchant's book does us the huge service of completely deconstructing the myths of the Luddites and their movement as "ignorant peasants rampaging the countryside." Ultimately, in the book's closing chapters, he draws pointed conclusions about contemporary Big Tech leaders, makers and shakers, based upon the critique and lessons drawn from the "first big tech" textile factory owners. These conclusions echo and support your own, Katie, best summarized by your potent question: "What kind of creator (leader) are you going to be...?"
I could go on-and-on... but won't. I'll stop here with the recommendation to read Merchant's book. It's relevant.
Thanks again, Katie, for your perceptive and constructive writing!