
What's Actually New
The improvements in Opus 4.7 are real and worth understanding:
Stronger agentic performance. The model handles long-running, multi-step tasks with greater reliability. It verifies its own outputs before reporting back, recovers more gracefully from tool failures, and follows instructions more literally than its predecessor. For autonomous engineering workflows, these are meaningful gains.
Significantly better vision. Image input resolution has jumped to approximately 3.75 megapixels — more than three times what prior Claude models accepted. This opens up use cases that previously required workarounds: reading dense UI screenshots, extracting data from complex diagrams, and powering computer-use agents that depend on fine visual detail.
New effort controls. A new xhigh effort level gives developers finer control over the reasoning-versus-latency tradeoff. Combined with task budgets (now in public beta), teams have more levers to manage how aggressively the model reasons on hard problems across long runs.
Pricing remains unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens.
Our Concern: More Tokens
There is one caveat we think deserves more attention than it is getting in the wider coverage.
Opus 4.7 uses more tokens than Opus 4.6. The updated tokenizer and increased reasoning at higher effort levels mean the same input can map to roughly 1.0–1.35× the token count of Opus 4.6, depending on content type. Anthropic's internal benchmarks suggest the net efficiency is still favourable — but that is on their evaluation data, not yours.
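The arithmetic is worth doing explicitly. Using the unchanged prices above ($5/M input, $25/M output) and the 1.35× worst case, here is a back-of-the-envelope cost comparison for a single illustrative run (the token counts are made up, and applying one multiplier to both input and output is a deliberate simplification):

```python
INPUT_PRICE = 5.00 / 1_000_000    # $ per input token (from the article)
OUTPUT_PRICE = 25.00 / 1_000_000  # $ per output token

def run_cost(input_tokens: int, output_tokens: int, multiplier: float = 1.0) -> float:
    """Cost of one run, scaling token counts by a tokenizer/reasoning multiplier.

    Simplification: the same multiplier is applied to input and output tokens.
    """
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) * multiplier

baseline = run_cost(50_000, 10_000)           # Opus 4.6-style footprint: $0.50
worst_case = run_cost(50_000, 10_000, 1.35)   # top of the 4.7 range: $0.675
```

A 35% swing per run is invisible in a demo and very visible across thousands of agentic runs a month — which is why measuring on your own workloads matters.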
At The MVP Studio, we have always treated Opus as a precision instrument rather than a default. Our approach is deliberate: we use Opus for complex reasoning tasks — architectural planning, brainstorming, stress-testing assumptions — and route execution work, including code generation, to faster and more cost-efficient models like Sonnet. This is not simply a cost-cutting measure. It reflects our view that every model in your stack should be deployed where its specific strengths create the most leverage.
Opus 4.7 does not change that philosophy. If anything, it reinforces it. With higher token consumption, the case for thoughtful model routing becomes stronger, not weaker. Teams that use Opus indiscriminately will feel the cost. Teams that use it surgically will benefit from its genuine improvements.
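The routing philosophy above can be sketched in a few lines. The model ids and task categories here are placeholders for illustration, not a prescription:

```python
# Illustrative routing sketch. Model ids and task labels are placeholders;
# a production router would classify tasks rather than trust a caller-supplied tag.

REASONING_TASKS = {"architecture", "planning", "review", "assumption-testing"}

def pick_model(task_type: str) -> str:
    """Route heavy reasoning to Opus and execution work to a cheaper model."""
    if task_type in REASONING_TASKS:
        return "claude-opus-4-7"     # precision instrument: complex reasoning
    return "claude-sonnet-latest"    # faster, cheaper: codegen and execution
```

The interesting engineering is in the classifier and the fallback logic, not the two-way branch — but even this crude split keeps Opus's token footprint confined to the tasks where its reasoning earns the cost.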
The Bigger Question We Are Watching
Individual model releases matter. But the more important shift happening in AI-driven engineering is at the systems level.
The teams producing the best outcomes right now are not necessarily using the most powerful models — they are using the right models in the right sequence, with the right orchestration logic holding everything together. The question is no longer "which model is best?" It is "how do we build a system where the right model handles the right task, automatically and reliably?"
Opus 4.7's new controls — effort levels, task budgets, improved instruction following — are tools that support this kind of systems thinking. They give engineering teams more precision over how AI resources are spent across complex, multi-step workflows. That is the direction we think the industry is heading: not bigger models in isolation, but smarter orchestration across a range of models working in concert.
This is precisely the kind of thinking we bring to every product we build at The MVP Studio. AI at the core does not mean AI everywhere without a strategy. It means building software where AI is deployed intelligently — and where that intelligence compounds over time as the underlying models improve.
Our Recommendation
If your team is evaluating the upgrade from Opus 4.6 to 4.7, do not rely on benchmarks alone. Run it against your real workflows, at your real effort levels, and measure cost per meaningful outcome. The improvements are genuine — but so is the increased token footprint.
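"Cost per meaningful outcome" is easy to say and easy to skip. A minimal harness for it, assuming you log token counts and a success flag per run (the data shape and prices are illustrative):

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    input_tokens: int
    output_tokens: int
    succeeded: bool   # your own definition of a meaningful outcome

def cost_per_success(runs: list[RunResult],
                     in_price: float = 5e-6,
                     out_price: float = 25e-6) -> float:
    """Total spend across all runs divided by the number of successful ones."""
    spend = sum(r.input_tokens * in_price + r.output_tokens * out_price
                for r in runs)
    wins = sum(r.succeeded for r in runs)
    if wins == 0:
        raise ValueError("no successful runs to amortize cost over")
    return spend / wins
```

Run the same workload through 4.6 and 4.7 at your real effort levels and compare this number, not per-token prices: a model that costs more per run but fails less can still win.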
And if you are still figuring out how to structure AI into your engineering workflows in the first place, that is exactly the conversation we have with our clients.
Thinking about building software with AI at the core? We work with founders and product teams to design and ship AI-driven products — from architecture to production. Book a discovery call with our team and let's talk about what's possible.