
A recent survey has revealed that nearly half of doctors in South Korea now use generative artificial intelligence (AI) for medical tasks, including disease diagnosis and analysis of test results. However, many physicians remain cautious about adopting AI, citing unresolved questions about liability for misdiagnoses and the extent to which patients should be told when AI is used.
On Thursday, the office of Representative Kim Yoon of the Democratic Party released findings from a study by the Korea Health Industry Development Institute on the impact of, and response to, AI adoption in healthcare. The study, which surveyed 2,125 South Korean doctors last October, found that 47.7% actively incorporate AI into their medical practice.
Doctors primarily use AI for disease diagnosis (68%, with multiple responses allowed) and patient screening (51.2%, including severity assessment). A significant portion also reported utilizing AI for treatment planning (33.4%), patient follow-ups (24.1%), administrative task streamlining (23.5%), and prognosis prediction (20%).
Among medical specialties, radiologists lead in AI adoption, with over half (52.4%) reporting regular use. Cardiologists (27.3%), endocrinologists (10.7%), and dermatologists (6.6%) also reported notable AI integration in their practices.
Physicians report that AI has improved their time management, freeing them for more direct patient care: 82.3% of respondents cited enhanced time efficiency as the primary benefit of AI adoption in healthcare.
Despite these advantages, many doctors remain hesitant about AI integration. The foremost concern among physicians is the ambiguity surrounding legal responsibility (74.3%, multiple responses allowed). Other significant worries include the risk of misdiagnosis (65.4%) and technical unreliability (50.1%).
In light of these concerns, 32.5% of surveyed doctors believe that disclosing AI use to patients should be mandatory to ensure patient safety. Current regulations require patient notification only for AI medical devices classified as innovative and approved by the Ministry of Food and Drug Safety, leaving most AI applications without any disclosure requirement.
The research team emphasized that as medical decision-making increasingly relies on algorithms, determining accountability becomes more complex. They called for more nuanced discussion, noting that there are currently no clear criteria for distinguishing whether AI serves as a mere tool for healthcare providers or as a collaborative decision-maker.
Furthermore, the researchers pointed out that despite regulatory progress, South Korea still faces challenges in ensuring algorithmic transparency and establishing a coherent legal accountability framework for AI in healthcare. They suggest that future efforts should focus on creating a cross-verification system between technological reliability and legal responsibility to build public trust in medical AI applications.