Llama 4 Live Day-Zero on Groq at Lowest Cost
MOUNTAIN VIEW, Calif., April 5, 2025 /PRNewswire/ -- Groq, the pioneer in AI inference, has launched Meta's Llama 4 Scout and Maverick models, now live on GroqCloud. Developers and enterprises get day-zero access to the most advanced open-source AI models available.
That speed is possible because Groq controls the full stack, from our custom-built LPU to our vertically integrated cloud. The result: models go live with no delay, no tuning, and no bottlenecks, and they run at the lowest cost per token in the industry with full performance.
"We built Groq to drive the cost of compute to zero," said Jonathan Ross, CEO and Founder of Groq. "Our chips are designed for inference, which means developers can run models like Llama 4 faster, cheaper, and without compromise."
Lowest Cost Per Token - Without Compromise
With Llama 4 models live, developers can run cutting-edge multimodal workloads while keeping costs low and latency predictable.
- Llama 4 Scout: $0.11 per 1M input tokens and $0.34 per 1M output tokens, at a blended rate of $0.13
- Llama 4 Maverick: $0.50 per 1M input tokens and $0.77 per 1M output tokens, at a blended rate of $0.53
See the Groq pricing page for full details.
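The per-token rates above translate directly into a cost estimate for a given workload. A minimal sketch, using only the published rates (the `estimate_cost` helper and the 1M-token example workload are illustrative, not part of Groq's tooling):

```python
# Published per-1M-token rates from the launch announcement (USD).
RATES = {
    "llama-4-scout": {"input": 0.11, "output": 0.34},
    "llama-4-maverick": {"input": 0.50, "output": 0.77},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost for a workload: tokens are billed per million."""
    r = RATES[model]
    return (input_tokens / 1_000_000) * r["input"] + \
           (output_tokens / 1_000_000) * r["output"]

# Example: 1M input tokens plus 1M output tokens on Scout.
print(f"${estimate_cost('llama-4-scout', 1_000_000, 1_000_000):.2f}")
```

For instance, a workload of one million input tokens and one million output tokens on Scout comes to $0.11 + $0.34 = $0.45 at these rates.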
About the Models
Llama 4 is Meta's latest open-source model family, featuring Mixture of Experts (MoE) architecture and native multimodality.
- Llama 4 Scout (17Bx16E): A strong general-purpose model, ideal for summarization, reasoning, and code. Runs at over 460 tokens per second on Groq.
- Llama 4 Maverick (17Bx128E): A larger, more capable model optimized for multilingual and multimodal tasks, well suited to assistants, chat, and creative applications.
Build Fast with Llama 4 on GroqCloud
Llama 4 Scout and Maverick are accessible through:
- GroqChat
- GroqCloud Developer Console
- Groq API (model IDs available in-console)
Start building today at console.groq.com.
Free access is available, or upgrade for worry-free rate limits and higher throughput.
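Access through the Groq API can be sketched as a standard chat-completion request, since GroqCloud exposes an OpenAI-compatible endpoint. In the sketch below, the model ID is a placeholder (the release says actual Llama 4 model IDs are listed in the console), and the `GROQ_API_KEY` environment variable is an assumption about how the key is supplied:

```python
import json
import os
import urllib.request

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"
MODEL_ID = "llama-4-scout"  # placeholder; the real IDs are listed in-console


def build_request(prompt: str) -> dict:
    """Assemble the JSON payload for a single-turn chat completion."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }


def send(prompt: str) -> str:
    """POST the payload to GroqCloud; expects GROQ_API_KEY in the environment."""
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(send("Summarize the Llama 4 launch in one sentence."))
```

Because the endpoint follows the OpenAI chat-completions shape, existing OpenAI client code can typically be pointed at GroqCloud by swapping the base URL and API key.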
About Groq
Groq is the AI inference platform delivering low cost, high performance without compromise. Its custom LPU and cloud infrastructure run today's most powerful open-source models instantly and reliably.
Over 1 million developers use Groq to build fast and scale with confidence.
Groq Media Contact: [emailprotected]
SOURCE Groq
