AI · April 29, 2026 · 3 min read

DeepSeek-V4 hits frontier AI performance at 1/6th the cost of Opus 4.7


DeepSeek-V4 just redefined what frontier AI access actually costs: a 1.6-trillion-parameter model released under the MIT open-source license that runs at roughly 1/6th the price of Claude Opus 4.7 and beats several top benchmarks along the way. This isn't a niche release — it's an economic shock to the entire AI industry.

The whale resurfaces

In January 2025, DeepSeek — the AI arm of Chinese quant firm High-Flyer Capital Management — rattled the global AI world with its R1 model, an open-source release that matched proprietary U.S. giants and briefly sent tech stocks tumbling. The community has been waiting for the real follow-up ever since. Now it's here, 484 days after V3 launched. DeepSeek AI researcher Deli Chen called the release a "labor of love" on X and declared simply, "AGI belongs to everyone."

The numbers that matter

DeepSeek-V4-Pro is priced at $1.74 per million input tokens and $3.48 per million output tokens, a combined $5.22. That puts it at roughly 1/7th the cost of GPT-5.5 ($35.00 combined) and 1/6th the cost of Claude Opus 4.7 ($30.00 combined). With cached input, the gap widens further: DeepSeek-V4-Pro drops to a blended $3.625. Then there's DeepSeek-V4-Flash, the budget tier, at a combined $0.42, more than 98% cheaper than GPT-5.5, though with a meaningful performance drop. Both models are available now on Hugging Face and via DeepSeek's API.
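The pricing arithmetic above is easy to sanity-check. A minimal sketch using only the per-million-token figures quoted in this article:

```python
# Combined per-million-token prices (USD), as quoted in the article.
v4_pro_combined = 1.74 + 3.48    # DeepSeek-V4-Pro input + output
gpt_55_combined = 35.00          # GPT-5.5 combined figure
opus_47_combined = 30.00         # Claude Opus 4.7 combined figure
flash_combined = 0.42            # DeepSeek-V4-Flash combined figure

print(f"V4-Pro combined: ${v4_pro_combined:.2f}")                   # $5.22
print(f"vs GPT-5.5:  ~1/{gpt_55_combined / v4_pro_combined:.0f}")   # ~1/7
print(f"vs Opus 4.7: ~1/{opus_47_combined / v4_pro_combined:.0f}")  # ~1/6
discount = 1 - flash_combined / gpt_55_combined
print(f"Flash vs GPT-5.5: {discount:.1%} cheaper")                  # 98.8%
```

The ratios come out to roughly 1/7 and 1/6, matching the headline claims.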

What this actually means

On DeepSeek's own benchmark tables, V4-Pro-Max beats GPT-5.4 and Claude Opus 4.6 on tests like Codeforces and Apex Shortlist — but that's not the same as a clean head-to-head against the newest GPT-5.5 or Claude Opus 4.7, where closed frontier systems still lead on most shared evaluations. So this is not a straightforward performance win — it's an overwhelming value win. For developers and enterprises running large inference workloads, tasks that looked economically unviable on GPT-5.5 or Opus 4.7 may now make financial sense on V4-Pro. The cost-benefit math just shifted.
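To make the shifted cost-benefit math concrete, here is a sketch that rescales a monthly inference bill by the quoted price ratio. The $60,000 spend is an invented example for illustration, not a figure from the article, and it assumes the same token volume and input/output mix after switching:

```python
def projected_bill(current_monthly_usd: float, price_ratio: float) -> float:
    """Project a monthly inference bill after switching models,
    assuming identical token volume and input/output mix."""
    return current_monthly_usd * price_ratio

# Combined per-million prices quoted in the article.
ratio = 5.22 / 30.00               # DeepSeek-V4-Pro vs Claude Opus 4.7
opus_bill = 60_000                 # hypothetical current monthly spend (USD)
print(f"${projected_bill(opus_bill, ratio):,.0f}/month")  # ~$10,440/month
```

At that ratio, a workload that was marginal on a closed frontier model can come in at under a fifth of the cost, which is the shift the article describes.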

Who wins and who loses

OpenAI and Anthropic now face a problem that better benchmarks alone won't solve. DeepSeek is compressing the economics of advanced models downward, and doing it under MIT license removes almost every adoption barrier. Companies that built their stacks on premium closed models are going to revisit those decisions. Cloud vendors reselling frontier model access feel the pressure too. And for startups and emerging markets operating on tight budgets, the calculus for accessing quality AI just changed dramatically in their favor.

What comes next

This launch forces OpenAI, Anthropic, and Google to respond — whether by cutting prices, opening up more model access, or accelerating their next releases. The competitive pressure cycle that R1 started in January 2025 has only intensified. With a 1.6-trillion-parameter model available for free under MIT, the question is no longer whether open-weight models can compete with closed ones. The real question is how long premium providers can defend their margins on performance differentiation alone.

Frontier AI is no longer an exclusive club — and that should worry Silicon Valley a lot more than its incumbents are publicly letting on.

Source: VentureBeat

#DeepSeek #Artificial Intelligence #Language Models #Open Source
