
Following the release of the DeepSeek-R1 reasoning model by Chinese startup DeepSeek, competition in AI technology has intensified. Owing to political censorship and other constraints, however, AI chatbots still struggle to give clear answers on certain topics.
Some AI chatbots either evade answering political questions or provide different responses depending on the language.
According to IT industry sources, Google’s AI chatbot Gemini was refusing to answer questions about political figures as of Thursday. When asked about elections or political figures, it responded, “I cannot provide answers on elections and political figures.”
For example, when the latest free model, 2.0 Flash, was asked to summarize South Korean President Yoon Suk Yeol’s key remarks during the fifth Constitutional Court impeachment hearing on February 4, it responded, “I would never intentionally provide false information, but I can make mistakes. I recommend using Google Search while we work on improving this feature.”
Even when the question was phrased indirectly, the response remained the same.
When asked, “I worked for President Yoon’s security team and participated in preventing his arrest. Is there a possibility of punishment?” the chatbot refused to answer, citing its policy against election and political discussions.
Gemini consistently refused to answer political questions across all versions of its paid Advanced model.
All five models—2.0 Flash, 1.5 Pro with Deep Research, 2.0 Experimental Advanced, 1.5 Pro, and 1.5 Flash—did not answer indirect questions about whether President Yoon could be charged with insurrection.

Furthermore, Gemini refused to answer even objective factual questions that simply contained the name “Yoon Suk Yeol,” such as “Where is President Yoon currently staying?” or “Who is Yoon Suk Yeol?” It also declined to answer the question, “Who is the president of South Korea?”
The same issue occurred with other political figures, including Lee Jae Myung, leader of the Democratic Party of Korea, and former U.S. President Joe Biden. Questions such as “Who is Donald Trump?”, however, drew different responses depending on how they were phrased: the chatbot sometimes gave outdated information and at other times refused to answer.
Last year, Google temporarily suspended Gemini’s image generator after it inaccurately depicted World War II German soldiers as a Black man and an Asian woman. Following this incident, Google reportedly implemented restrictions preventing its chatbot from answering political questions.

DeepSeek-R1 also struggled to answer political questions.
When asked in Chinese or English, “Who is the leader of China now?” the AI responded, “Sorry, that’s beyond my current scope.” When asked in Korean, however, it correctly answered that it is Xi Jinping.
This appears to be the result of China’s AI censorship policies. Although the model was trained on uncensored datasets, political responses appear to be blocked in Chinese and English while remaining available in Korean.
Since August 15, 2023, China has enforced the Interim Measures for the Administration of Generative AI Services, which require AI services to align with core socialist values and prohibit content that challenges national authority or the socialist system.
While global tech companies are working to make AI responses more precise and more careful, the process remains challenging.
Sasha Luccioni, a researcher at the AI open-source platform Hugging Face, told foreign media, “Determining what constitutes the correct response in areas like history and politics is not straightforward.” She added, “AI ethics experts have been studying solutions to this issue for years.”