Anthropic's AI downgrade stings power users

Axios

Anthropic users across online forums are raising the same complaint: Claude suddenly feels… bad.

Why it matters: The backlash lands just as Anthropic is testing a more powerful model, Mythos (https://www.axios.com/2026/04/07/anthropic-mythos-preview-cybersecurity-risks) — raising questions about whether cutting-edge AI is becoming less accessible even as it gets more capable.

Driving the news: Over the past few weeks, users on X, GitHub and Reddit have been swapping anecdotes, benchmarks and prompts in an effort to pinpoint what changed and why.

The other side: Anthropic says it adjusted the default level of reasoning in Claude Code, but denies the changes were tied to compute constraints or Mythos.

Between the lines: Analyst Patrick Moorhead decided to ask Claude to weigh in (https://x.com/PatrickMoorhead/status/2044074719888982319?s=20).

  • "Anthropic made real configuration changes that objectively reduced default thinking depth across all surfaces including claude.ai, but the most extreme 'secret nerfing' narrative overstates what happened," Claude said as part of its lengthy response.

Another theory: users aren't seeing a decline so much as acclimating to what once felt magical.

Yes, but: Even if the change is explainable, the perception problem is real — especially for power users relying on consistent performance for coding and research workflows.

The big picture: The fight over Claude's "intelligence" points to a broader shift: access to top-tier AI is fragmenting.

That stratification could split users into those who can afford to pay top dollar for the best models and those who can't.

What we're watching: Whether "default" AI experiences continue to get worse even as frontier systems get dramatically stronger.
