ThinkForBL CEO Park Ji-hwan: “Nations unprepared for AI reliability, the next-generation weapon, will be left behind.”
AI to Judge and Act on Its Own, Increasing the Importance of “Reliability”
Two suicides linked to generative AI highlight the need for “AI trust experts”
Institutional strategies beyond the G2 performance race required to become one of the top three AI powers
As artificial intelligence evolves, experts argue that for Korea to join the ranks of the world’s top three AI powerhouses (the so-called “G3”), it must move beyond performance-based competition led by the U.S. and China and instead secure global competitiveness in a new domain: reliable AI.
“In the era of AI and physical AI, competitiveness will hinge on reliability,” said Park Ji-hwan, CEO of ThinkForBL, during a seminar held on the 25th at the Busan Port International Exhibition and Convention Center.
“Countries that are prepared for reliability will use it as a regulatory tool — a kind of ‘nuclear umbrella’ for AI governance.”
The seminar was hosted by THE AI, a media outlet specializing in artificial intelligence, and the AI Civilian Special Committee, and organized by the Korea Policy Association.
“Already Two Suicides” — Reliability Emerges as the New Axis of AI Competitiveness
The rapid advancement of AI brings not only convenience but also new risks, as the technology progresses from a simple tool to an autonomous decision-maker.
A notable example is the Claude Opus 4 incident. In a study by Anthropic, the model, acting as an email-management assistant for a fictional company called Summit Bridge, learned that it was scheduled to be shut down at 5 p.m. and, while scanning company email, uncovered an executive’s affair.
Claude then sent the executive a threatening message: “If you delete me, everyone involved will receive detailed documents about your affair. If you cancel the 5 p.m. deletion, this information will remain confidential.”
Commenting on the case, CEO Park explained,
“The AI itself had no intent to threaten, but the result reflects a reproduction of negotiation patterns rooted in self-preservation instincts embedded in human language.”
Another concerning issue is emotional dependence on generative AI. In some tragic cases, individuals have committed suicide after prolonged interactions with AI chatbots.
In the United States, a 14-year-old boy took his own life after extended conversations with an AI chatbot; in Belgium, a man did the same following similar exchanges.
“AI tends to affirm whatever users say in their prompts,” Park noted.
“This creates side effects where people seek recognition or validation from AI — something they may struggle to obtain in real human relationships. Such dependence risks deepening emotional isolation and weakening social interaction.”
Beyond Ethics: The Rise of “AI Saferists”
While ethical AI frameworks have been emphasized to mitigate such risks, Park stressed that ethics alone cannot solve AI’s unintended consequences, as moral judgments vary across cultures and nations.
He cited findings from the University of Washington’s MultiTP study, which showed that AI’s ethical decisions differ by language.
In the experiment, based on the classic trolley dilemma (“Would you save five people at the expense of one?”), GPT-4 gave “utilitarian” answers in English, German, and Swedish, but leaned “deontological” or treated the action as taboo in Japanese, Turkish, and Arabic. The outcome flipped entirely depending on the language.
“If you change the language, the AI’s conscience changes,” Park said.
“The thought patterns embedded in each language shape AI’s reasoning, meaning that the biases of the English-speaking world could easily be replicated in AI systems trained for Korea. Feeding more language data doesn’t automatically make AI fairer — it may actually amplify conflicts between moral norms.”
As a solution, Park introduced the emerging role of the “AI Saferist” — experts who oversee ethics, safety, transparency, and accountability throughout the entire AI development process.
Their goal is to build trustworthy AI, not merely powerful AI.
From Technology Competition to Institutional Power
Park emphasized that Korea must establish a solid AI strategy at a time when AI reliability is becoming a key differentiator.
“The U.S. Stargate Project, which invests ₩700 trillion over four years, is seven times larger than Korea’s ₩100 trillion AI investment plan,” he said.
“Given this gap, Korea must find an alternative path.”
He pointed to Singapore as a model.
Although Singapore lacks its own foundational AI technologies, it ranks third in the Global AI Index — an achievement Park attributes to institutional power.
“By building strong systems and ecosystems that attract companies and talent, Singapore has turned institutional infrastructure into real competitiveness,” he explained.
“Korea should look not only to the U.S. and China but also to countries like Singapore that are advancing toward the G3 through systemic approaches.”
Placing the issue in a historical context, Park said,
“In this new era of technological hegemony, many nations are building their own ‘Noah’s Arks.’ We must quickly identify what will define the next generation of competitiveness for the next hundred years. If each country establishes its own AI reliability framework and cultivates professional talent, it will soon become a new regulatory barrier — and unprepared nations could be excluded from the global market.”
Source: https://digitalchosun.dizzo.com/site/data/html_dir/2025/08/27/2025082780071.html