
LG’s K-Exaone breaks into the world’s top 10 AI models

The homegrown artificial intelligence (AI) foundational model, K-Exaone, developed by LG AI Research, has entered the global top 10, ranking seventh. The feat makes it the only Korean presence in a ranking dominated by models developed by companies in the United States and China.

In a statement, LG said its latest AI model delivered the strongest performance among five teams in a government-led AI foundational model competition, topping 10 of 13 benchmark tests with an average score of 72. Internationally, the model ranked seventh on the Intelligence Index compiled by Artificial Analysis, making it the only Korean model in the top 10. China led with six models and the US had three, with Z.AI’s GLM-4.7 taking first place.

LG’s foundational model ranks seventh in global rankings

LG released its foundational model as an open-weight model on Hugging Face, where it climbed to second place on the platform’s global model trend chart, suggesting strong international interest. LG said it will offer free API access to K-Exaone through January 28, allowing developers and firms to use the model at no cost during the initial rollout period.
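
For readers who want to try the open-weight release, the sketch below shows the usual pattern for loading a Hugging Face checkpoint with the transformers library. The repository ID is a placeholder rather than a confirmed K-Exaone repo name, and the real model’s loading options may differ.

```python
# Minimal sketch of pulling an open-weight checkpoint from Hugging Face.
# The repository ID below is hypothetical, not a confirmed K-Exaone repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LGAI-EXAONE/K-Exaone"  # placeholder repo ID for illustration

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",    # spread large weights across available GPUs (requires accelerate)
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    trust_remote_code=True,
)

prompt = "Summarize the benefits of mixture-of-experts language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```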

Epoch AI, a US-based nonprofit, also recognized the model, adding it to its list of notable AI models. LG AI Research now has five models on the list, the most of any Korean company. “We established the development plan according to the time and infrastructure we were given, and we developed the first-phase K-Exaone using about half the data we have,” said Lee Jin-sik, head of Exaone Lab at LG AI Research.

According to LG, the model is the product of five years of in-house research and signals Korea’s entry into the global race for frontier-class AI systems. The LG division said that, instead of relying on scale alone, it redesigned the architecture to boost performance while reducing training and operating costs. K-Exaone uses a mixture-of-experts (MoE) architecture with 236 billion parameters, of which about 23 billion are activated per inference.
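
As a rough illustration of what the MoE figures mean, the toy layer below routes each token to only a few small expert networks, so most of the layer’s parameters sit idle on any given step. The sizes, routing rule, and expert design are invented for illustration and are not K-Exaone’s actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy mixture-of-experts layer: many experts exist, only top-k run per token.

    Mirrors the idea of a large total parameter count with a much smaller
    active count per inference; all sizes here are tiny and illustrative.
    """
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, d_model)
        scores = self.router(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # normalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):                # only the selected experts run
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(5, 64)
print(layer(tokens).shape)  # torch.Size([5, 64])
```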

The K-Exaone model beats other models on several benchmarks

The model uses its core technology, hybrid attention, to sharpen its focus on important information during data processing while cutting resource requirements and computational load by 70% compared with previous models. The tokenizer was also upgraded, with its vocabulary expanded to 150,000 entries, and it optimizes frequently used word combinations, improving document processing by a factor of 1.3.
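
The tokenizer claim is easier to see with a toy example: if frequently used word combinations become single vocabulary entries, the same text splits into fewer tokens, which is roughly where a 1.3x document-processing gain could come from. The greedy word-level matcher below is a simplification for illustration, not LG’s tokenizer.

```python
def tokenize(text, merged_vocab):
    """Greedy longest-match word tokenizer over a set of known word combinations."""
    words = text.split()
    tokens, i = [], 0
    while i < len(words):
        # prefer the longest known multi-word combination, fall back to single words
        for span in range(min(3, len(words) - i), 0, -1):
            candidate = " ".join(words[i:i + span])
            if candidate in merged_vocab or span == 1:
                tokens.append(candidate)
                i += span
                break
    return tokens

text = "the artificial intelligence foundation model ranked seventh"
print(len(tokenize(text, set())))                                            # 7 tokens
print(len(tokenize(text, {"artificial intelligence", "foundation model"})))  # 5 tokens
```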

In addition, the adoption of multi-token prediction boosted inference speed by 150%, improving overall efficiency. “K-Exaone is designed to maximize efficiency while reducing costs, allowing it to run on A100-class GPUs rather than requiring the most expensive infrastructure,” an LG AI Research official said. “This makes frontier-level AI more accessible to companies with limited computing resources and helps broaden Korea’s AI ecosystem.”
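
A minimal sketch of the multi-token prediction idea follows, under the assumption that extra output heads propose several future tokens from a single forward pass; the article does not describe K-Exaone’s actual implementation or how the drafted tokens are verified.

```python
import torch
import torch.nn as nn

class MultiTokenHead(nn.Module):
    """Shared trunk with one output head per future position (toy sizes)."""
    def __init__(self, d_model=32, vocab_size=100, n_future=3):
        super().__init__()
        self.trunk = nn.Linear(d_model, d_model)
        self.heads = nn.ModuleList([nn.Linear(d_model, vocab_size) for _ in range(n_future)])

    def forward(self, hidden):                  # hidden: (batch, d_model)
        h = torch.tanh(self.trunk(hidden))
        # one logit tensor per future position t+1, t+2, t+3
        return [head(h) for head in self.heads]

model = MultiTokenHead()
hidden = torch.randn(1, 32)                     # last hidden state of the prefix
drafts = [logits.argmax(dim=-1) for logits in model(hidden)]
print(drafts)  # three draft token ids proposed from one forward pass
```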

Rather than memorization, K-Exaone’s training focuses on improving its reasoning and problem-solving capabilities. LG explained that during the pre-training stage, the model was exposed to thinking-trajectory data that shows how problems are solved, not just the final answer. Safety and compliance were also priorities: LG said it carried out data compliance reviews across its training datasets, removing materials with potential copyright issues.
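
To make the thinking-trajectory idea concrete, the hypothetical training example below pairs a problem with its intermediate reasoning steps rather than only the final answer; the field names and format are assumptions, not LG’s data schema.

```python
# Hypothetical "thinking trajectory" example (field names are illustrative only).
trajectory_example = {
    "problem": "A train travels 120 km in 1.5 hours. What is its average speed?",
    "trajectory": [
        "Average speed is distance divided by time.",
        "120 km / 1.5 h = 80 km/h.",
    ],
    "answer": "80 km/h",
}

# A plain question-answer example, by contrast, drops the intermediate steps.
plain_example = {"problem": trajectory_example["problem"], "answer": "80 km/h"}
```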

The company runs an internal AI ethics committee that carries out risk assessments across four categories: social safety, Korea-specific considerations, future risks, and universal human values. On KGC-Safety, a benchmark developed by LG AI Research for safety in the Korean context, K-Exaone scored an average of 97.38 across the four categories, outperforming OpenAI’s GPT-OSS-120B and Alibaba’s Qwen-3-235B.
