NTT Corporation and SoftBank Group announced in February 2026 that they will jointly develop Japan's first domestically produced frontier-scale language model. The partnership — also involving RIKEN, AIST, and four Japanese universities — represents the most significant coordinated AI research effort in Japanese history.
The motivation
Japan has been almost entirely dependent on US and Chinese models for advanced AI capability. NTT's Tsuzumi model, released in 2024, was the first serious Japanese-language foundation model at commercial scale, but at 7 billion parameters it is not competitive with GPT-4-class models. The NTT-SoftBank joint venture is targeting a model in the 100-billion-parameter range, trained on a corpus of approximately 100 trillion tokens, primarily Japanese.
The compute infrastructure
The project has secured access to a Japanese government AI compute cluster at RIKEN's facility in Kobe, scheduled for completion in late 2026. The cluster will consist of approximately 40,000 NVIDIA H100 GPUs, contributed by the government as part of its ¥2 trillion AI infrastructure commitment.
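To put the reported figures in rough perspective, here is a minimal back-of-envelope sketch. It applies the common ~6·N·D FLOPs rule of thumb for dense transformer training to the parameter and token targets reported above; the H100 peak throughput and the 40% utilisation figure are illustrative assumptions, not numbers from the announcement.

```python
# Back-of-envelope training-compute estimate (illustrative assumptions, not
# figures from the NTT-SoftBank announcement).

params = 100e9            # ~100 billion parameters (reported target)
tokens = 100e12           # ~100 trillion training tokens (reported target)
train_flops = 6 * params * tokens   # standard ~6*N*D rule of thumb for dense transformers

gpus = 40_000             # reported cluster size
peak_flops_per_gpu = 0.99e15        # assumed H100 BF16 dense peak, ~989 TFLOP/s
utilisation = 0.40        # assumed model FLOPs utilisation; real runs vary widely

cluster_flops_per_s = gpus * peak_flops_per_gpu * utilisation
training_days = train_flops / cluster_flops_per_s / 86_400

print(f"Estimated training compute: {train_flops:.1e} FLOPs")
print(f"Estimated wall-clock time:  {training_days:.0f} days")
```

Under these assumptions a single training run of that size would occupy the full cluster for on the order of weeks, which is plausible on paper; the harder question, as the assessment below notes, is whether that budget still looks frontier-scale by 2028.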
The timeline and the risk
NTT and SoftBank are targeting a first release in early 2028 — an ambitious timeline for a project of this scale undertaken by organisations without prior experience training models at this parameter count. The partnership has hired approximately 300 AI researchers, including Japanese nationals returning from US and UK universities.
The honest assessment: two years is tight, the compute may not be sufficient to produce a genuinely frontier-class model given how fast the frontier is advancing, and governance of a joint venture between two large Japanese corporations brings coordination challenges. But the ambition itself marks a turning point. Japan is no longer content to be an AI consumer.