
The English edition of SoftBank Corp.’s (TOKYO: 9434) Integrated Report 2025, published on October 31, 2025, provides a comprehensive overview: a look back at the fiscal year ended March 31, 2025 (FY2024), along with management’s perspectives on medium- to long-term growth strategies centered on AI, financial strategy, and shareholder returns. It also introduces SoftBank’s strategies for realizing “Next-generation Social Infrastructure,” as well as its initiatives in ESG and risk management.
SoftBank is leveraging the full capabilities of its group companies to develop homegrown large language models (LLMs). For the report, Hironobu Tamba, SoftBank’s Head of Homegrown Generative AI Development and President & CEO of SB Intuitions Corp., the SoftBank subsidiary at the forefront of homegrown LLM development, was interviewed about the current status of development, the models’ competitive advantages, and how the AI strategy will enhance SoftBank’s corporate value. Below is an excerpt.
SoftBank Integrated Report 2025

Every year, SoftBank publishes an Integrated Report, a comprehensive document covering its vision, medium- to long-term growth strategies, value creation processes, materiality, and financial and non-financial information.
Interview with Head of Homegrown Generative AI Development: How the development of homegrown LLMs will drive SoftBank's next leap forward
I see the development of our foundation model as a crucial initiative that lays the foundation for the future of AI development in Japan. Today, AI development starts by building a very high-performance, large-scale AI that acts as a “teacher.” This “teacher” AI possesses a vast amount of knowledge, but it requires significant computing resources and electricity to operate, which poses challenges in cost and response speed for everyday business use. From the knowledge held by this “teacher” AI, we therefore develop a lighter, faster, and more power-efficient “student” AI. It is this optimized “student” AI that many of our customers will actually use.

The strategic intention behind developing this model is to accelerate the practical deployment of AI across society and to enable further advancement of future LLMs. First, a model that is well-balanced in terms of response speed, answer accuracy, and cost is essential for many client companies to implement AI. “Sarashina mini” has been optimized using techniques such as “model distillation*” to meet diverse needs while maintaining, as much as possible, the high-level performance of the large-scale foundation model developed earlier.
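To make the “teacher”-to-“student” workflow described above more concrete, here is a minimal sketch of one common formulation of model distillation, in which a small student network is trained to match the output distribution of a larger, frozen teacher. The toy models, sizes, and training loop are illustrative stand-ins only, not SoftBank’s Sarashina models or its actual pipeline.

```python
# Minimal sketch of teacher-to-student model distillation in PyTorch.
# ToyLM is a stand-in for a real LLM; all sizes here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ_LEN = 1000, 8  # hypothetical vocabulary and sequence length

class ToyLM(nn.Module):
    """A toy 'language model': embeds tokens, pools, predicts the next token."""
    def __init__(self, dim: int):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:  # ids: (batch, SEQ_LEN)
        return self.head(self.embed(ids).mean(dim=1))      # (batch, VOCAB) logits

teacher = ToyLM(dim=512).eval()  # large, frozen "teacher"
student = ToyLM(dim=64)          # small, trainable "student"
opt = torch.optim.AdamW(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution into richer targets

for step in range(100):
    ids = torch.randint(0, VOCAB, (32, SEQ_LEN))  # unlabeled input batch
    with torch.no_grad():                         # the teacher only produces targets
        soft_targets = F.softmax(teacher(ids) / T, dim=-1)
    # KL divergence pulls the student's output distribution toward the teacher's.
    loss = F.kl_div(F.log_softmax(student(ids) / T, dim=-1),
                    soft_targets, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The footnote below describes distillation as generating training data with the large model; matching the teacher’s soft output distribution, as shown here, is a closely related variant of the same idea.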
However, the role of this model does not end there. “Sarashina mini,” while being an outstanding “student,” also becomes an important component for creating the next generation of AI. Specifically, by combining multiple 70 billion-parameter models with different areas of expertise, we will evolve them into an even more powerful AI, like a “team of specialists.” By continuously training this “team of specialists,” we will efficiently build the next-generation “teacher” AI with even higher performance in a short period, aiming to accelerate the development of a one trillion-parameter scale LLM.
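The report does not disclose how the specialist models are combined, but routing each query to the best-suited expert is one common pattern for such a “team of specialists.” The sketch below is purely hypothetical: the domain names are invented, and a simple keyword router stands in for what would, in practice, be a learned routing model in front of multiple 70 billion-parameter LLMs.

```python
# Hypothetical sketch of a "team of specialists": a router picks which
# specialist model should answer a query. Domain names and the keyword-based
# router are invented for illustration; a real system would use a learned
# router (or mixture-of-experts layers) in front of full-scale LLMs.
from typing import Callable, Dict

# Stand-in "specialists"; in practice each would be a 70B-parameter model.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "legal":   lambda q: f"[legal-model answer to: {q}]",
    "medical": lambda q: f"[medical-model answer to: {q}]",
    "general": lambda q: f"[general-model answer to: {q}]",
}

KEYWORDS = {
    "legal": {"contract", "law", "liability"},
    "medical": {"symptom", "dosage", "diagnosis"},
}

def route(query: str) -> str:
    """Pick a domain by keyword overlap; falls back to the generalist."""
    words = set(query.lower().split())
    for domain, kws in KEYWORDS.items():
        if words & kws:
            return domain
    return "general"

def answer(query: str) -> str:
    return SPECIALISTS[route(query)](query)

print(answer("What dosage is recommended?"))  # routed to the medical specialist
```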
* Model distillation: a technique for improving and optimizing smaller, lighter models by using a large-scale, high-performance AI model to generate their training data.

The entire interview can be found here.
(Posted on November 13, 2025)
by SoftBank News Editors


