Tuesday, April 22, 2025

Amazon Shares Drop 1% After Weak Revenue Forecast Despite Strong Earnings

Amazon's quarterly earnings surpassed expectations, but a weaker forecast led to a slight dip in stock price after hours.

North Korean Immigration to Russia Hits 13,000 in 2024, Driven by Education and Business

North Korean visits to Russia surged to 13,221 in 2024, raising concerns about disguised labor under educational visas.

ENHYPEN’s New Album Sells 2 Million Copies in Just Five Days

K-pop boy band ENHYPEN's second full-length album, ROMANCE: UNTOLD, had sold 2,113,143 copies as of the 16th.

Llama 4 Goes MoE: Meta Slashes Compute Costs in New AI Models

Meta’s office in Brussels, Belgium / News1

In response to the shock caused by Chinese startup DeepSeek, Meta has unveiled the Llama 4 series—a lineup of artificial intelligence (AI) models that significantly reduce computation costs by adopting a Mixture-of-Experts (MoE) structure just one year after its previous release.

Previously, major tech companies such as Google and Microsoft released a series of lightweight AI models featuring low cost and high performance. OpenAI is also expected to release a reasoning-based open model soon.

Last Saturday, Meta open-sourced its Llama 4 models, Scout and Maverick, marking one year since the introduction of the Llama 3 series in April 2024. Scout and Maverick are now available via the official website and Hugging Face. Meta also introduced another model currently in training, named Behemoth.

The Llama 4 models apply a method that activates only the necessary expert models based on the type of question asked.

Maverick has a total of 400 billion parameters, but for any given user query, only 17 billion are activated by routing to a subset of its 128 experts, thereby reducing inference costs.

Meta stated that the models are optimized for general assistant and chat applications and claimed that they outperform OpenAI’s GPT-4o and Google’s Gemini 2.0 in areas such as content creation, coding, and multilingual processing.

Scout is a lightweight model capable of running on a single GPU. Using the MoE structure, it activates 17 billion parameters out of a total of 109 billion to generate responses.
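The expert-routing idea described above can be sketched in a few lines. The following is a toy illustration of top-k gating, not Meta's actual implementation: the expert count, dimensions, and routing details here are illustrative assumptions, chosen only to show why compute scales with the number of *activated* experts rather than the total parameter count.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 16   # illustrative; Maverick reportedly uses 128 experts
TOP_K = 1          # only a small subset of experts runs per token
DIM = 8            # toy hidden dimension

# Each "expert" is a small feed-forward weight matrix (a toy stand-in
# for a full expert sub-network).
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))  # gating network

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route a token to its top-k experts and mix their outputs."""
    logits = token @ router                # score every expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the best k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only the selected experts actually compute, so per-token cost
    # grows with TOP_K, not with NUM_EXPERTS.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(DIM))
print(out.shape)
```

Because the router selects experts per token, total parameters (all experts combined) can be very large while the per-query compute stays close to that of a much smaller dense model.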

In March, Google released its lightweight language model Gemma 3 as an open-source model, which can operate on a single GPU or Tensor Processing Unit (TPU).

Meta CEO Mark Zuckerberg speaking / Meta

Behemoth, which Meta is training with the aim of creating the world’s most advanced large language model (LLM), reportedly has around 2 trillion parameters. Meta claimed it outperformed GPT-4.5, Claude 3.7, and Gemini 2.0 Pro in its own math and science benchmark tests.

Meta also explained that the Llama 4 series relaxes its refusal criteria, meaning the models will not avoid politically or socially controversial questions the way previous models did.

The company plans to significantly enhance voice capabilities in the Llama 4 series. According to foreign media outlets such as the Financial Times, Meta is focusing its resources on enabling natural, interactive conversations between users and AI models.

Meta CEO Mark Zuckerberg stated via Instagram, “Our goal is to build the world’s leading AI, open source it, and make it universally accessible… I’ve said for a while that open-source AI will lead the way, and with Llama 4, we’re starting to see that happen.”
