Groq’s breakthrough AI chip achieves blistering 800 tokens per second on Meta’s LLaMA 3
April 20, 2024 at 8:16 PM EDT
In a surprising benchmark result that could shake up the competitive landscape for AI inference, chip startup Groq appears to have confirmed, through a series of retweets, that its system is serving Meta’s newly released LLaMA 3 large language model at over 800 tokens per second. “We’ve been …