[On the Spot] Reading Context in Video and Visualizing Data with Vibe Coding — AI Accelerates Industrial Innovation
Generative AI Expands Contextual Understanding — Vision and Industrial AI Take Center Stage at AI Expo Korea 2025
Two defining characteristics of the current generation of generative AI are “contextual understanding” based on reasoning and “communication” enabled by natural language processing. General users have already grown accustomed to the productivity gains these tools bring to tasks such as planning, analysis, and report writing.
The same trend applies to manufacturing and industrial sites that are rarely visible from the outside. The days when AI could only analyze specific data patterns are long gone: with the emergence of domain-specialized language models and improved multimodal data processing, AI’s contribution to on-site productivity and corporate competitiveness is greater than ever. This was on display at AI Expo Korea 2025, held at COEX in Seoul from July 14 to 16, where SUPERBAI and MAKINAROCKS presented their latest technologies.
SUPERBAI – VLM Changes the Vision AI Business
SUPERBAI, a Vision AI company, presented insights into the present and future of “AI + Video Control” at the exhibition. Although Vision AI has been used in the manufacturing industry for more than a decade, its applications have long been limited to areas such as product inspection, security surveillance, and facial recognition.
However, after the emergence of ChatGPT drew attention to Large Language Models (LLMs), which let AI grasp the context of text data and hold natural conversations, Vision-Language Models (VLMs) specialized in visual data have recently brought a new wave of change to the Vision AI industry.
The “SuperV Video Control” demo displayed at the SUPERBAI booth (J01) was a good example. Combining existing camera devices with video control AI, the system analyzes the real-time flow and context of video footage using SUPERBAI’s proprietary VLM and provides this information to users.
A particularly eye-catching scene showed the AI identifying “purchasing behavior” from the brief hand gesture of a customer hesitating over an item in store footage. This is a significant change from earlier AI systems that simply detected sales or theft in unmanned stores; the difference lies in the AI’s ability to understand context rather than a single frame.
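The idea of reading behavior from context rather than a single frame can be sketched in miniature. The snippet below is an illustrative assumption, not SUPERBAI's actual pipeline: it stands in for a VLM with a stubbed list of per-frame event labels and flags "hesitation" when reach-and-withdraw gestures repeat within a sliding window of frames.

```python
from collections import Counter

# Hypothetical per-frame labels; in a real system a VLM would
# produce something like these from sampled video frames.
frame_events = [
    "walking", "walking", "reach_toward_item", "hand_withdrawn",
    "reach_toward_item", "hand_withdrawn", "walking",
]

def infer_behavior(events, window=5):
    """Scan a sliding window of frame-level events and flag 'hesitation'
    when reach/withdraw gestures each repeat within the window."""
    for start in range(len(events) - window + 1):
        counts = Counter(events[start:start + window])
        if counts["reach_toward_item"] >= 2 and counts["hand_withdrawn"] >= 2:
            return "hesitating_purchase"
    return "browsing"

print(infer_behavior(frame_events))  # prints "hesitating_purchase"
```

No single frame in this sequence is suspicious on its own; only the repeated pattern across time carries the "purchasing behavior" signal, which is the point the demo makes.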
In addition, SUPERBAI is expanding its Vision AI business across various manufacturing and service industries, including product recognition at production sites, monitoring of hazardous areas, and stadium crowd analysis.
Among these, gains in speed and productivity have stood out this year. According to a SUPERBAI official at the booth, the company developed its own domain-specialized VLMs, pre-trained on diverse industrial datasets, and modularized them into multiple forms. This significantly improves the productivity of building AI products tailored to each customer’s needs: only the necessary data has to be added, cutting the cost and time of per-customer data training. The official also expressed hope that the exhibition would raise awareness of VLM capabilities and encourage adoption across manufacturing sites.
MAKINAROCKS – “One Word Is Enough” for Visible Facility Data
MAKINAROCKS, an industrial AI company, focused on providing a “visible and controllable experience” for utilizing manufacturing data at this exhibition. The company develops Runway, an all-in-one industrial AI platform that supports model development, data utilization, anomaly detection, and optimization for manufacturing operations.
One of the highlights at the booth (F26) was a Runway-based Vibe Coding experience device. Vibe Coding, which allows program development using only natural language commands without requiring coding knowledge, is gaining popularity not only among non-experts but also professional developers. The concept of AI-assisted coding has existed since the emergence of ChatGPT, but productivity-focused systems such as Vibe Coding have become especially popular this year.
MAKINAROCKS applied this concept to Runway early on, enabling more efficient use of the complex, high-volume data generated in industrial environments. During the demonstration, when a user entered the command “Please analyze the dataset and create a new dashboard that shows indicators such as the average,” the system generated hundreds of lines of code in real time. The resulting dashboard displayed sensor data such as temperature, power, and noise levels in text tables and graphs.
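To make the demo concrete, here is a minimal sketch of the kind of code such a natural-language command might generate. The sensor names and readings are invented for illustration and do not come from the Runway demo; the point is simply turning raw channel data into summary indicators and a text-table dashboard.

```python
# Made-up sensor channels standing in for industrial facility data.
readings = {
    "temperature_C": [61.2, 63.8, 60.5, 64.1],
    "power_kW":      [12.4, 12.9, 13.1, 12.6],
    "noise_dB":      [71.0, 69.5, 72.3, 70.8],
}

def summarize(data):
    """Return min/avg/max indicators for each sensor channel."""
    return {
        name: {
            "min": min(vals),
            "avg": round(sum(vals) / len(vals), 2),
            "max": max(vals),
        }
        for name, vals in data.items()
    }

# Render the indicators as a simple aligned text table.
stats = summarize(readings)
print(f"{'sensor':<14}{'min':>8}{'avg':>8}{'max':>8}")
for name, s in stats.items():
    print(f"{name:<14}{s['min']:>8}{s['avg']:>8}{s['max']:>8}")
```

A production system would of course emit far more code (charting, live data feeds, layout), but the compression is the same: one sentence of intent expands into working analysis-and-display logic.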
A MAKINAROCKS official emphasized,
“The dashboard is just an example—the key is to convert customer data into valuable insights as quickly as possible through easy Vibe Coding and visualization features.”
He explained that in the field, developing and testing quickly is more advantageous than lengthy planning or document-based verification of ideas. Through the exhibition, the company hopes potential customers visiting the booth will experience firsthand why AI platforms like Runway are needed.