DeepSeek's new models are so efficient they'll run on a toaster ... by which we mean Huawei's NPUs

The Register

Now available in preview, DeepSeek V4 cuts inference costs to a fraction of R1

Chinese AI darling DeepSeek is back with a new open-weights large language model that promises performance to rival the best proprietary American LLMs. Perhaps more importantly, it claims to dramatically reduce inference costs while extending support for Huawei's Ascend family of AI accelerators.…