OpenAI Unveils New AI Model: Are Higher Subscription Fees on the Horizon?

OpenAI is set to launch a new computationally intensive artificial intelligence (AI) model.

Given the substantial computational resources invested in enhancing this model’s performance, the company has signaled a premium service strategy that will include additional fees on top of its already high-priced plans.

Sam Altman, OpenAI CEO. Capture from his X (formerly Twitter) account.

Sam Altman, OpenAI's Chief Executive Officer (CEO), announced on his X (formerly Twitter) account on September 21 that the company plans to unveil a new, computationally intensive service in the coming weeks. He noted that, due to cost considerations, some features will be exclusive to Pro subscribers, while others may incur extra charges.

The current ChatGPT Pro subscription is priced at 200 USD per month.

Altman emphasized that OpenAI's ultimate goal remains to drive costs down and make ChatGPT widely accessible, and that he is confident the company will achieve this over time. For now, however, he said the company is excited to explore the potential of ideas that leverage significant computational resources.

Recently, the AI tech industry has been focusing on developing low-cost, high-efficiency solutions.

Analysts view Altman’s announcement as a strategic response to user feedback suggesting that the recently released GPT-5 didn’t deliver the dramatic performance improvements some had anticipated.

Some experts interpret this move as an attempt to overcome the limitations of the scaling laws in AI development.

The scaling laws hold that performance improves predictably as model size, training data volume, and computing power increase. Critics, however, argue that the gains from this scaling approach have begun to plateau.
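For illustration only, and as an assumption rather than a formula cited in the article, such scaling laws are often written as a power law in which the training loss L falls smoothly as the parameter count N and the number of training tokens D grow:

```latex
% Illustrative scaling-law form (an assumption for exposition, not a formula
% quoted in this article): loss L as a power law in parameters N and training
% tokens D, with E, A, B, alpha, beta as empirically fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

In a form like this, each additional order of magnitude of parameters or data buys a smaller absolute drop in loss, which is the diminishing-returns pattern the plateau argument points to.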

While techniques like test-time compute, which spend additional computation at inference time to boost performance, are gaining traction, they still show limitations in domains where the model lacks training data.
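As a rough, hypothetical sketch of what spending extra compute at inference time can look like, the snippet below implements one common variant, best-of-n sampling; the generate() and score() functions are stand-ins defined here for illustration, not any real OpenAI API.

```python
# A minimal sketch of one test-time compute technique: best-of-n sampling.
# generate() and score() are hypothetical stand-ins, not a real model API.
import random

def generate(prompt: str) -> str:
    """Stand-in for a model call that returns one candidate answer."""
    return f"candidate-{random.randint(0, 9)} for: {prompt}"

def score(answer: str) -> float:
    """Stand-in for a verifier or reward model that rates an answer."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Spending more inference-time compute (a larger n) tends to yield a
    # better final answer, but only if the scorer can tell good from bad.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))
```

The limitation the article mentions shows up in the scorer: in domains where the model has little training data and no reliable way to verify answers, drawing more samples at inference time adds cost without adding much quality.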
