If you want to learn more about Opus 4.5, Kieran Klaassen and I are hosting a Claude Code Camp on Opus 4.5 exclusively for paid subscribers on Friday (that’s tomorrow) at 12 p.m. ET. Sign up and reserve your spot.—Dan Shipper
Humans have always had two main intuitions about what we’ll find when we travel to the end of the earth:
- An edge where the known world falls off into nothingness, chaos, or monsters
- A new vista where unexplored, lush, and perhaps perilous territory extends toward a new horizon
The first is terrifying, a place to be avoided. The second represents possibility and an entirely new world.
These days most new AI model releases are incremental. Sometimes, though, a new model brings us right up to the edge of the known and allows us to take a peek at what lies beyond. Is it nothingness, dragons, or a new horizon?
Anthropic’s Opus 4.5 is one of those models, and I’ve been peering over the edge for about a week now. Here’s what’s over the horizon:
- We are in a new era of autonomous coding. You can build astonishingly complex apps without looking at a single line of code.
- Prompt-native apps are now possible. You can use Opus 4.5 as a general-purpose agent to power your app's features. This turns new features into an exercise in prompt-writing rather than coding (see the sketch after this list).
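To make the prompt-native idea concrete, here is a minimal sketch of a feature implemented as a prompt, using Anthropic's Python SDK (Python for brevity; the app I describe below is an iOS app). Everything here is an illustrative assumption rather than something from the article: the function name, the prompt wording, and the model identifier, which you should check against Anthropic's current documentation.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def spoiler_free_character_summary(book: str, character: str, position: str) -> str:
    """Hypothetical feature: the entire implementation is a prompt."""
    response = client.messages.create(
        model="claude-opus-4-5",  # assumed identifier; verify the current name
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                f"Summarize everything {character} has done in {book} up to "
                f"{position}. Reveal nothing that happens later in the book."
            ),
        }],
    )
    # The reply's first content block holds the generated text.
    return response.content[0].text
```

Shipping a feature like this means writing and refining a prompt; if the feature proves out, it can be hardened into conventional code later.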
The infinite vibe coding machine
The first step change with Opus 4.5 is the amount of autonomous coding that is now possible.
Over the last week, I built an iOS reading companion app with a comprehensive suite of features. It’s the kind of thing that previous models could one-shot as a demo, but would start to trip over themselves after a while if I didn’t sit down and look at the code myself. I started writing the app with Anthropic’s Sonnet 4.5 and OpenAI’s Codex, but gave up pretty quickly because they were getting lost in loops of errors that I didn’t have time to debug.
Opus 4.5 is different. I didn’t write a single line of code. I didn’t even look at the code. I just talked into my computer with Monologue. And out came a complete reading companion.
I can photograph any book page, and it identifies the work, analyzes the passage, and connects it to larger themes—with zero clicks. I can tap a character name for a spoiler-free summary of everything the character has done so far in the book. It even automatically researches and downloads the original text (if the book is in the public domain) plus academic secondary sources to help with its analysis, and writes a custom introduction for each book to tell me why I might be interested in it.
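The photograph feature reads like a single multimodal call under this framing. The article never shows its implementation, so the sketch below is an assumption: it reuses the client and model identifier from the earlier sketch and sends a base64-encoded photo alongside the instructions, which is how Anthropic's Messages API accepts images.

```python
import base64

def analyze_page_photo(photo_path: str) -> str:
    """Hypothetical: identify and analyze a photographed book page in one call."""
    with open(photo_path, "rb") as f:
        image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

    response = client.messages.create(
        model="claude-opus-4-5",  # assumed identifier, as above
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64,
                    },
                },
                {
                    "type": "text",
                    "text": "Identify the book this page comes from, analyze "
                            "the passage, and connect it to the book's larger "
                            "themes.",
                },
            ],
        }],
    )
    return response.content[0].text
```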
Comments
At first this just seemed like another "Opus 4.5 is amazing" article. But this insight was new to me and is really, really interesting and cool: "I think prompt-native will be where many features start, and they will be hardened into code over time as they stabilize"
Any chance we'll see your app, or some details about your prompts and how Claude implemented some of the features? It looks like it has to use a lot of external services to accomplish some of the tasks, and I'm curious to know which ones.
Do you see this prompt-native approach in what they experimented with in "Imagine with Claude"?
I'm curious how you start a product of this magnitude. Did you write some kind of PRD or other artifact and give it to CC to start, or did you just start prompting?
Would love to see the "how." What do good prompts for Opus look like?