
On December 3rd, Kim Ho-sik, Vice President of SK Hynix’s Memory System Research Center, emphasized that the computing paradigm is shifting toward memory-centric approaches, stressing that memory innovation is essential to advancing artificial intelligence (AI) systems.
During a panel discussion at the ‘SK AI Summit 2025’ in Seoul’s COEX convention center, Kim urged all ecosystem players to view memory companies as technology partners and solution providers rather than mere suppliers. He advocated for collaborative innovation in the field.
The ‘New Semiconductor Solutions’ session featured a keynote by David A. Patterson, a Google engineer and UC Berkeley professor emeritus, on “The Reality and Future of Memory-Centric Computing: Addressing Memory Bottlenecks.” A panel discussion followed, including Vice President Kim, TSMC engineer and Stanford professor Philip Wong, and Meta engineer Kim Chang-kyu.
With the recent surge in AI demand, the industry faces a critical challenge: processors such as GPUs and CPUs are advancing faster than memory, creating significant bottlenecks.
This shift is driving the importance of next-generation memory solutions, particularly high-bandwidth memory (HBM), as the industry transitions from processor-centric to memory-centric architectures.
Addressing the crucial role of memory in data centers, Kim stated, “HBM has been and will continue to be a game-changer.” He humorously referenced NVIDIA CEO Jensen Huang’s jest about reaching HBM97, adding, “While that’s likely an exaggeration, HBM will remain essential.”
During a recent press conference in South Korea, coinciding with the APEC summit, Huang praised Samsung Electronics for its diversity and SK Hynix for its focus. He expressed complete confidence in their joint development of future HBM generations, including HBM4, HBM5, and even the speculative HBM97.
Kim highlighted energy efficiency as a primary concern, proposing ‘compute-near-memory’ solutions. This approach involves positioning computational devices adjacent to or directly above memory components.
Traditional processor-centric designs require data to travel long distances between memory and processors. By integrating processors within or near memory chips, latency can be significantly reduced, alleviating bandwidth bottlenecks and dramatically cutting energy consumption in data transfer.
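The energy argument can be made concrete with a back-of-envelope model. The constants below are illustrative assumptions for the sake of the sketch, not SK Hynix figures; they reflect the commonly cited observation in computer-architecture literature that fetching data from off-chip DRAM costs far more energy than operating on it.

```python
# Back-of-envelope energy model: processor-centric vs. near-memory computing.
# All constants are illustrative assumptions, not vendor data.

PJ_PER_BYTE_OFFCHIP = 100.0  # assumed cost to move one byte from DRAM to a CPU/GPU
PJ_PER_BYTE_LOCAL = 5.0      # assumed cost to move one byte within the memory stack
PJ_PER_OP = 1.0              # assumed cost of one arithmetic operation

def energy_pj(n_bytes: int, ops_per_byte: float, near_memory: bool) -> float:
    """Total energy in picojoules to move and process n_bytes of data."""
    move_cost = PJ_PER_BYTE_LOCAL if near_memory else PJ_PER_BYTE_OFFCHIP
    return n_bytes * move_cost + n_bytes * ops_per_byte * PJ_PER_OP

n = 1_000_000  # 1 MB workload, one operation per byte
far = energy_pj(n, ops_per_byte=1.0, near_memory=False)
near = energy_pj(n, ops_per_byte=1.0, near_memory=True)
print(f"processor-centric: {far / 1e6:.1f} uJ")   # 101.0 uJ
print(f"near-memory:       {near / 1e6:.1f} uJ")  # 6.0 uJ
print(f"reduction:         {far / near:.1f}x")    # 16.8x
```

Under these assumed numbers, data movement, not computation, dominates the energy budget of the processor-centric design, which is precisely the bottleneck that placing compute next to or on top of the memory is meant to attack.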
“Near-memory processing is on the horizon,” Kim concluded, “with numerous products set to debut in the near future.”