The K-Pop AI New Year
Something unprecedented just happened in the AI world. In the final days of 2025, South Korea unleashed four massive foundation models, all within days of each other. It's not a coincidence—it's a coordinated national AI strategy coming to fruition.
While the rest of the world was watching the US-China AI arms race, Korea looked at the playbook and said "네 (yes), we can do that." And they did it with a twist: open-source, efficiency-first, and built for real-world deployment.
The Four Horsemen of Korean AI
A.X K1: The 519B Behemoth
Developer: SK Telecom (8-member consortium)
Released: December 27, 2025
The numbers alone are staggering: 519 billion total parameters, with 33 billion active at any given time. This is Korea's first hyperscale AI model, developed by an eight-member consortium led by SK Telecom.
What makes it special:
- National Infrastructure Play: A.X K1 isn't just another model; it's designed to be Korea's AI backbone. SK Telecom is positioning it as a "Teacher Model" that transfers knowledge to smaller models (under 70B parameters); a sketch of how that teacher-student pattern typically works follows this list.
- Open by Design: The consortium plans full open-source release with APIs and partial training data disclosure. Over 20 institutions have already submitted letters of intent to participate.
- Real Distribution: With SK Telecom's A-DoT platform serving 10M+ subscribers and a broader 20M-user foundation, this model has immediate production infrastructure.
- Full-Stack Ecosystem: From semiconductors to services, A.X K1 represents Korea's end-to-end AI capability.
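Curious what a "Teacher Model" actually does mechanically? The standard pattern is teacher-student distillation: the big model's output distribution becomes a training signal for the small one. Here is a minimal sketch of that generic technique; it is not SK Telecom's actual pipeline, and the temperature, blend weight, and toy sizes are all illustrative assumptions.

```python
# Minimal teacher-student distillation loss (generic technique sketch;
# NOT SK Telecom's actual A.X K1 pipeline -- T, alpha, and sizes are
# illustrative assumptions).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: the student matches the teacher's softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # conventional T^2 scaling keeps gradient magnitudes stable
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: batch of 4 examples over a 10-token vocabulary.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))
```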
The model was developed in a remarkably short timeframe and validates Korea's domestic semiconductor and AI data center capabilities. This isn't vaporware; it's running on Korean infrastructure.
VAETKI: The Efficiency Specialist
Developer: NC AI Consortium (14 institutions)
Released: December 30, 2025
VAETKI proves that bigger isn't always better. With 112 billion total parameters but only 10-11 billion active during inference, it's the definition of doing more with less.
Key innovations:
- Industry-Specialized: Built with a grand consortium including Lotte Innovate, POSCO DX, MBC, KAIST, ETRI, and Korea University, VAETKI is optimized for diverse sectors from manufacturing and defense to retail and entertainment.
- 90% of parameters are "just vibing": A Mixture of Experts (MoE) architecture, combined with Multi-head Latent Attention (MLA) and 512-token Sliding Window Attention (SWA), keeps inference lean (a sketch of the MLA idea appears below).
- Cost-Efficient and Secure: Designed for on-premise and domain-specific environments where cost and security matter more than raw size.
- 10 Trillion Token Training: Trained on approximately 10 trillion tokens of pure, unadulterated open data—no corporate secrets, no proprietary datasets.
VAETKI's core philosophy: optimal efficiency that companies can actually adopt, not just impressive benchmarks.
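That MLA ingredient deserves a second look, because it is where much of the memory saving hides: instead of caching full per-head keys and values, you cache one small latent per token and expand it on the fly. A minimal sketch of the idea, with made-up dimensions (VAETKI's real hyperparameters aren't public in the sources cited here):

```python
# Low-rank KV compression, the core idea behind Multi-head Latent
# Attention (MLA). All dimensions are illustrative assumptions, not
# VAETKI's real hyperparameters.
import torch
import torch.nn as nn

d_model, d_latent, n_heads, d_head = 512, 64, 8, 64

down = nn.Linear(d_model, d_latent)            # compress once per token
up_k = nn.Linear(d_latent, n_heads * d_head)   # expand to per-head keys
up_v = nn.Linear(d_latent, n_heads * d_head)   # expand to per-head values

x = torch.randn(1, 16, d_model)                # (batch, seq, d_model)
latent = down(x)                               # cache THIS: (1, 16, 64)
k = up_k(latent).view(1, 16, n_heads, d_head)
v = up_v(latent).view(1, 16, n_heads, d_head)

# The KV cache per token shrinks from 2 * n_heads * d_head = 1024 floats
# to d_latent = 64 floats, a 16x reduction in this toy configuration.
print(latent.shape, k.shape, v.shape)
```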
Solar-Open: The Data Glutton
Developer: Upstage
Release Date: December 31, 2025
If VAETKI is about efficiency, Solar-Open is about thoroughness. This is the model that looked at training data and said "hold my soju."
The numbers that matter:
- 102B total parameters / 12B active: Another MoE architecture prioritizing efficiency.
- 19.7 TRILLION tokens: Read that again. That's not a typo. Someone's electricity bill is having an existential crisis.
- 129 Experts: 128 routed experts with top-8 routing per token, plus 1 shared expert that fires for every token, trained on NVIDIA B200 GPUs.
- 128K context length: Process documents that would make other models weep.
- Agentic and Reasoning-Focused: Built for real-world enterprise applications in finance, law, healthcare, and e-commerce.
- Multilingual Support: Native support for both English and Korean, with strong performance across both languages.
- Apache License 2.0: Fully open-source with enterprise-grade performance in reasoning, instruction-following, and agentic capabilities. The model is already available on Hugging Face, with the official API service launched December 31, 2025.
Upstage didn't just build a model—they built a reasoning engine trained on more text than most models will see in their lifetime.
K-EXAONE: The Corporate Flex
Developer: LG AI Research
Released: December 30, 2025
LG looked at the competition and decided to flex both computational muscle and architectural innovation.
Technical prowess:
- 236B total parameters / 23B active: The second-largest of the four Korean models, behind only A.X K1's 519B.
- 70% memory reduction: Hybrid-attention technology that combines Mixture of Experts with a 3:1 hybrid attention scheme built around a 128-token sliding window; a sketch of why window masking saves memory follows this list.
- 256K context window: Natively supported—paste your entire codebase AND that 200-line error log.
- Five-month development cycle: LG AI Research took this monster from start to release in under half a year.
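The sliding-window piece of that memory claim is easy to show in code. Here is a minimal sketch of a causal sliding-window attention mask; the 128-token figure comes from the announcement, but the tiny sizes below are just for readable output:

```python
# Causal sliding-window attention mask (illustrative sketch; K-EXAONE
# reportedly uses a 128-token window, shrunk to 4 here for readability).
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """True where a query may attend: keys at or before it, within `window`."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions (column vector)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions (row vector)
    return (j <= i) & (j > i - window)

print(sliding_window_mask(seq_len=8, window=4).int())
# Each row has at most `window` ones, so attention memory grows as
# seq_len * window rather than seq_len**2 -- the source of the big
# savings on 256K-token contexts.
```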
Performance benchmarks:
- Average benchmark score: 72.03
- Beats Alibaba's Qwen3 (69.37) and OpenAI's GPT-OSS 120B (69.79)
- More than doubles competitor scores on certain tasks
Deployment-ready:
- Runs on older GPU environments like NVIDIA's A100
- Lower deployment and operational costs
- Supports 6 languages: Korean, English, Spanish, German, Japanese, and Vietnamese
K-EXAONE isn't just about being big—it's about being practical. The ability to run on older hardware while maintaining top-tier performance is a game-changer for deployment economics.
The MoE Revolution
Notice a pattern? All four models pair a huge total parameter count with a small active count, the signature of sparse Mixture of Experts (MoE) architectures. This isn't a coincidence; it's a strategic choice that reveals Korea's AI philosophy:
Efficiency over brute force:
- A.X K1: 519B total, 33B active (6.4% utilization)
- VAETKI: 112B total, 10-11B active (~9% utilization)
- Solar-Open: 102B total, 12B active (11.8% utilization)
- K-EXAONE: 236B total, 23B active (9.7% utilization)
This is the anti-GPT-4 strategy. Instead of activating all parameters all the time (energy-intensive, computationally expensive), MoE activates only the experts needed for each task. It's like having a team of specialists instead of one generalist working on everything.
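A toy implementation makes the mechanism concrete. The sketch below routes each token to its top-2 of 8 experts plus one always-on shared expert, echoing Solar-Open's routed-plus-shared layout at a fraction of the scale; every dimension and routing detail here is an illustrative assumption, not code from any of the four models.

```python
# Toy top-k MoE layer with a shared expert (illustrative sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def ffn(d_model, d_ff):
    return nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores every expert per token
        self.experts = nn.ModuleList(ffn(d_model, d_ff) for _ in range(n_experts))
        self.shared = ffn(d_model, d_ff)             # always-on shared expert

    def forward(self, x):                            # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)         # normalize over chosen experts
        routed = torch.zeros_like(x)
        # Only the top-k routed experts run per token; the rest stay idle,
        # which is why "active" parameters are a small slice of the total.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    routed[mask] += weights[mask, k:k+1] * expert(x[mask])
        return self.shared(x) + routed

print(TopKMoE()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Scale the expert count up and the top-k fraction down, and you land in the 6-12% utilization range listed above.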
Why this matters:
- Lower inference costs: Only activate what you need
- Faster responses: Less computation per query
- Easier deployment: Smaller memory footprint during inference
- Better specialization: Different experts for different tasks
Korea looked at the AI arms race and asked: "What if we made this actually usable?"
The Open-Source Gambit
Here's where Korea diverges from the US-China playbook: every single one of these models is open-source or heading that way.
- A.X K1: Planned open-source release with APIs and partial training data disclosure
- VAETKI: Released on Hugging Face under open license
- Solar-Open: Apache License 2.0 (fully open)
- K-EXAONE: Available on Hugging Face (LGAI-EXAONE/K-EXAONE-236B-A23B)
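Since K-EXAONE's weights are already listed, kicking the tires follows the standard Hugging Face transformers pattern. Here is a sketch: the repo id comes from the listing above, while the dtype, device mapping, and trust_remote_code flag are assumptions about how a large custom checkpoint typically loads (check the model card, and budget multiple GPUs: all 236B parameters must fit in memory even if only 23B are active per token).

```python
# Standard transformers loading pattern (sketch; repo id from the listing
# above, loading flags are assumptions -- check the model card before use).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LGAI-EXAONE/K-EXAONE-236B-A23B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard the 236B weights across available GPUs
    trust_remote_code=True,  # custom architectures often ship modeling code
)

prompt = "한국의 AI 전략을 한 문장으로 요약해 줘."  # "Summarize Korea's AI strategy in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```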
This isn't altruism—it's strategy. By open-sourcing these models, Korea is:
- Building ecosystem lock-in: Make it easy for developers to adopt, and they'll build on your infrastructure.
- Accelerating innovation: Open models get improved faster through community contributions.
- Demonstrating sovereignty: "We don't need to keep this closed to compete."
- Creating network effects: The more people use Korean AI, the more valuable it becomes.
Compare this to OpenAI's closed approach or China's state-controlled models. Korea is betting that openness is the path to relevance.
The National AI Strategy
This coordinated release isn't random—it's the culmination of South Korea's National AI Foundation Model Project. The government essentially said: "We're going to be a top-three AI power alongside the US and China, and here's how we're doing it."
The strategy:
- Consortium approach: Pool resources from telecom, electronics, gaming, retail, and research institutions
- Full-stack capability: From semiconductor manufacturing to model deployment
- Domestic infrastructure: Validate Korean AI data centers and compute
- Open ecosystem: Make adoption frictionless
- Industry specialization: Build for real-world use cases, not just benchmarks
The fact that these models were all announced within days of each other at a government-hosted presentation at COEX in Seoul tells you everything you need to know about the level of coordination involved.
What This Means for the AI Landscape
For developers:
- Four production-ready, open-source models with different strengths
- Lower deployment costs (MoE efficiency + older GPU support)
- Multilingual support with strong Korean and English performance
- Real API access and infrastructure (not just model weights)
For enterprises:
- Industry-specific optimizations (VAETKI)
- Agentic reasoning capabilities (Solar-Open)
- Massive context windows (K-EXAONE, Solar-Open)
- Cost-efficient on-premise deployment options
For the AI race:
- A third pole in the US-China AI competition
- Proof that open-source can compete with closed models
- Validation that efficiency matters as much as scale
- Demonstration that smaller countries can build frontier AI
For AI researchers:
- New architectures to study (hybrid attention, MoE variants)
- Massive training datasets (19.7T tokens on Solar-Open)
- Open weights for research and fine-tuning
The Elephant in the Room: Can They Compete?
Let's be direct: these models are impressive, but are they competitive with GPT-4, Claude 3.5, or Gemini?
The honest answer: It depends on the task.
For multilingual tasks with Korean? Absolutely. For enterprise deployment with cost constraints? Definitely. For agentic reasoning with massive context? Very likely. For bleeding-edge research or creative writing in English? Probably not yet.
But here's the thing: Korea isn't trying to beat OpenAI at creative writing or Google at search. They're building practical, deployable, efficient AI for real-world use cases. And in that game, they're already competitive.
The benchmarks speak for themselves:
- K-EXAONE: 72.03 vs Qwen3's 69.37
- Models running on older hardware (A100s)
- Actual production deployments (SK Telecom's 20M users)
- Open-source accessibility (no API costs)
What's Coming Next?
This is phase one. According to the government presentations, this was only the initial unveiling of results. Expect:
- Continuous improvements: Open-source means rapid iteration
- Specialized variants: Industry-specific fine-tunes
- International expansion: These models going global
- Ecosystem growth: Tools, frameworks, and applications built on top
- Hardware optimization: Custom silicon for these architectures
Korea spent decades building semiconductor dominance. They're applying the same long-term thinking to AI.
The Bottom Line
Korea just open-sourced the kitchen sink. While everyone else is arguing about whether to gatekeep AI, Seoul demonstrated that you can be both open AND competitive.
These aren't just paper models—they're production-ready systems running on real infrastructure with actual users. They're efficient enough to run on older hardware, specialized enough for industry use cases, and open enough for anyone to build on.
The AI race just got a lot more interesting. Korea brought efficiency, openness, and real-world pragmatism to a game that's been dominated by "bigger is better" thinking.
And they did it all while making their electricity bills have existential crises (19.7 trillion tokens, we're looking at you, Solar-Open).
Welcome to the K-Pop AI era. The faucet in Seoul is running, and it's not turning off anytime soon.
Key Takeaways
- Four major AI models released in rapid succession: A.X K1 (519B), VAETKI (112B), Solar-Open (102B), K-EXAONE (236B)
- MoE architecture dominance: All four models lean on sparse Mixture of Experts-style designs for efficiency
- Open-source commitment: All models available or planned for open release
- Production-ready: Real infrastructure, actual users, immediate deployment
- Efficiency focus: Optimized for cost-effective deployment on accessible hardware
- National strategy: Coordinated government-backed AI sovereignty initiative
- Competitive performance: Beating established models in benchmark tests
- Practical design: Built for real-world use cases, not just impressive demos
The message is clear: Korea is serious about AI, and they're doing it differently.
Sources
- SK Telecom Unveils A.X K1, Korea's First 500B-Scale Hyperscale AI Model
- SK Telecom Launches 519B Parameter AI Model
- Consortia unveil models for national AI project - The Korea Times
- National AI foundation model project unveils initial results
- Upstage Solar-Open-100B on Hugging Face
- K-AI Unveils Results: Here's What Each Team Brought to the Table
- LG unveils K-Exaone, touts edge over global AI rivals - The Korea Herald
- LGAI-EXAONE/K-EXAONE-236B-A23B on Hugging Face
- Korea charts one-of-a-kind course in AI race with U.S. and China

Posted by
Fahad Siddiqui
Founder, Datum Brain
