
SK Telecom is highlighting artificial intelligence (AI) data centers (AI DC) as crucial infrastructure for the AI era. The company observes that the data center industry is undergoing rapid transformation due to concurrent changes in large-scale power, cooling, and server architectures.
During a press briefing at the Mobile World Congress (MWC) 2026 on Wednesday, SK Telecom’s Chief Technology Officer (CTO) Jeong Seok-geun emphasized that as AI performance improves, the computational requirements and energy consumption during inference processes are escalating significantly.
Jeong noted that while they’ve enhanced efficiency through software optimization over the past couple of years, they’ve now reached a point where software alone can’t address the challenges. This is driving changes in chip and data center structures.
He stressed that data center architectures are evolving at an unprecedented pace. The extent of change in internal servers over the last three years surpasses what they’ve seen in the past two decades, Jeong remarked.
Jeong explained that the nature of AI servers is prompting substantial changes in power and cooling systems. AI servers consume enormous amounts of electricity and are physically heavy, making it difficult to house them in existing buildings, he said. The shift from air cooling to liquid cooling represents another significant development.
Data centers are also rapidly increasing in scale. Jeong pointed out that typically, a large data center in Seoul operates at 40 to 50 MW, but SK Telecom’s new facility in Ulsan is already at 100 MW. Globally, they’re seeing data centers emerge with capacities of hundreds of MW and even several GW.
He forecasts that the future data center market will split between large hyperscale data centers and regional "edge" data centers. Telecom companies have experience constructing large data centers, and edge data centers share similarities with existing base station operations, allowing carriers to engage in both sectors, he explained.
Jeong also touched on the evolving business model for AI data centers. He noted that while traditional data center operations focused on providing buildings and power infrastructure, a new model called graphics processing unit (GPU) as a Service (GPUaaS) has emerged, in which companies build GPU servers themselves and lease them out.
He revealed the substantial investment required: constructing a 10 MW AI data center costs about 150 to 200 billion KRW (about 102 to 136 million USD) for the building and power facilities, while the GPUs for the same scale cost around 800 billion KRW (about 545 million USD). In total, this represents an investment of roughly 1 trillion KRW (about 682 million USD).
Jeong added that scaling up to 100 MW or 1 GW could require investments of tens or even hundreds of trillions of KRW (roughly tens of billions to over 100 billion USD), making it crucial to carefully weigh the risks before entering the data center business.
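For readers who want to check the scaling, the figures above imply a simple linear extrapolation from the roughly 1 trillion KRW quoted per 10 MW. The linear-scaling assumption and the exchange rate below are ours, inferred from the article's own conversions, not figures SK Telecom provided:

```python
# Rough linear extrapolation of AI data center build-out cost from the
# ~1 trillion KRW per 10 MW figure quoted in the briefing. Real costs
# would not scale perfectly linearly; this is only a sanity check.

KRW_PER_10MW = 1_000_000_000_000   # ~1 trillion KRW per 10 MW (building + GPUs)
KRW_PER_USD = 1_467                # exchange rate implied by the article's conversions

def estimate_cost_krw(capacity_mw: float) -> float:
    """Estimate total build-out cost in KRW, scaling linearly from 10 MW."""
    return capacity_mw / 10 * KRW_PER_10MW

for mw in (10, 100, 1000):
    krw = estimate_cost_krw(mw)
    usd = krw / KRW_PER_USD
    print(f"{mw:>5} MW: ~{krw / 1e12:.0f} trillion KRW (~{usd / 1e9:.1f} billion USD)")
```

At 100 MW this gives about 10 trillion KRW (~6.8 billion USD), and at 1 GW about 100 trillion KRW (~68 billion USD), consistent with the "tens or hundreds of trillions of KRW" Jeong cited.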
SK Telecom aims to leverage its group-wide capabilities to gain a competitive edge in the AI data center sector. Jeong stated that SK Telecom has developed AI models and applications, SK Hynix focuses on chips, and SK Broadband operates data centers. With various companies in the group covering construction and power, they're able to optimize across the entire spectrum.
He also predicted that competition in the AI data center market will be global. Competing in large-scale data centers on domestic demand alone will be challenging, Jeong said; Korean operators will be up against players from Japan, Taiwan, Southeast Asia, and beyond.