[AI and Digital Transformation] Domestic AI Services Can Survive Only by Securing Reliability

Nvidia CEO Jensen Huang picked "agent artificial intelligence" and "physical AI" as this year's keywords. Simply put, AI will now become a company's employee and partner, handling tasks such as schedule management and coding (agent AI), or driving in place of humans and saving lives at dangerous sites (physical AI). For the first time in history, an era of symbiosis has arrived in which humans work alongside intelligent beings of their own creation.

In a society where AI acts as a secretary or developer faster than any human, it is paradoxically trust, not speed, that matters in the AI industry. What kind of AI do we want to work with? Should AI, which already works dozens of times faster than humans, simply become dozens of times faster still? No. What we need is reliable, safe AI that can work dozens of times faster without causing concern, AI that can take on massive tasks without risk. What we want is not a superior partner that takes away our leadership, but a competent one we can trust, free from fear of accidents or betrayal.

In fact, 85 percent of companies worldwide are considering introducing AI agents, according to a survey by PagerDuty, an IT operations management platform company. Despite such high corporate interest, the question of reliability remains unresolved. Research from Stanford Medicine reported that AI alone diagnoses with higher accuracy than doctors, yet when doctors diagnose based on AI advice, accuracy actually decreases. Without collaborative design, accountability mechanisms, and safety procedures, AI's superior performance evaporates in practice. In other words, "symbiosis in working together" also demands professional, engineering-driven approaches.

For this reason, competitor nations are already preparing talent development systems and support measures. The European Union (EU) has integrated regulation, governance, and law with technical education in master's and doctoral programs, and EIT Digital operates multiple degree programs under a "Trustworthy AI" track. In Australia, Responsible AI Network education has emerged through CSIRO, and in the U.S., field personnel are being trained through a "Safety Engineering Fellowship" led by Stanford and the Wharton School, with NIST at the center. These preparations will soon be combined with regulatory barriers designed to protect domestic industries amid a protectionist trade stance. They could function as a "nuclear umbrella" regulation strategy: after adequately preparing the technology, talent, and institutions needed for regulation, barriers are erected all at once. At this pace, strong AI reliability regulations are highly likely to take effect in major countries within two to three years.

The domestic industry's preparations are currently insufficient. In the past, discussions of reliability were often dismissed as ethical debates that hindered technological progress. Meanwhile, other countries have persistently cultivated reliability expertise, treating it as a core technology area. Their goal has been not to restrain industry in the name of reliability, but to secure competitiveness by crafting strategic reliability regulations through thorough preparation. We need a corresponding shift in perception.

Also important are efforts to overcome the chronic limitations of the R&D field, which this technological transformation and specialized training require. We tend to invest in AI when AlphaGo makes headlines, then shift to quarantine technology when trade friction with Japan arises, to disaster technology during pandemics, to the metaverse when non-face-to-face interaction increases, and back to AI when GPT rises. Whenever a new issue emerges, the axis of policy and budget shifts entirely. Such volatility makes it impossible to build a stable foundation for technological innovation.

Even now, investments should be directed toward fostering reliability experts. In a world where AI is a partner, not just a tool, AI products that lack reliability will have no place, no matter how strong their performance. Moreover, once major countries complete their regulatory barriers on AI reliability, Korean companies may lose the chance to compete in the global market.


https://www.etnews.com/20250826000108
