On Monday, Anthropic launched the latest version of its smallest AI model, Claude 3.5 Haiku, with pricing that marks a departure from typical AI model trends: the new model costs four times more to use than its predecessor. The stated reason for the price increase, more smarts, is drawing some pushback in the AI community.
"During final testing, Haiku surpassed Claude 3 Opus, our previous flagship model, on many benchmarks—at a fraction of the cost," Anthropic wrote in a post on X. "As a result, we've increased pricing for Claude 3.5 Haiku to reflect its increase in intelligence."
"It's your budget model that's competing against other budget models, why would you make it less competitive," wrote one X user. "People wanting a 'too cheap to meter' solution will now look elsewhere."
On X, TakeOffAI developer Mckay Wrigley wrote, "As someone who loves your models and happily uses them daily, that last sentence [about raising the price of Haiku] is *not* going to go over well with people." In a follow-up post, Wrigley said the price increase itself did not surprise him, but that framing it as a consequence of intelligence might attract ire. "Just say it’s more expensive to run," he wrote.
The new Haiku model will cost users $1 per million input tokens and $5 per million output tokens, compared to 25 cents per million input tokens and $1.25 per million output tokens for the previous Claude 3 Haiku version. Claude 3 Opus, presumably more computationally expensive to run, still costs $15 per million input tokens and a whopping $75 per million output tokens.
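The pricing gap is easy to quantify from the per-token rates above. As a rough sketch (the sample workload of 10 million input and 2 million output tokens is illustrative, not from Anthropic), here is what a given workload would cost under each model's published pricing:

```python
# Published API prices in dollars per million tokens: (input, output).
PRICES = {
    "claude-3-haiku": (0.25, 1.25),
    "claude-3.5-haiku": (1.00, 5.00),
    "claude-3-opus": (15.00, 75.00),
}

def workload_cost(model: str, input_millions: float, output_millions: float) -> float:
    """Return the dollar cost of a workload measured in millions of tokens."""
    in_price, out_price = PRICES[model]
    return input_millions * in_price + output_millions * out_price

# A hypothetical workload: 10M input tokens, 2M output tokens.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 10, 2):,.2f}")
```

Because both the input and output rates went up by exactly 4x, any workload costs four times as much on Claude 3.5 Haiku as it did on Claude 3 Haiku, regardless of the input/output mix.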