
Advancements in RAN through AI: AI for RAN

#AI-RAN

Although 5G launched in 2020 and the areas where 5G signals are accessible have steadily expanded, the performance initially expected has not been fully realized. Efforts to evolve 5G with AI and machine learning (ML) have therefore gained momentum. In 5G-Advanced, from 3GPP Release 18 onwards, features that use AI/ML to realize optimal network-wide orchestration and to improve the performance of the Radio Access Network (RAN) are under study. In existing networks as well, AI/ML is increasingly being introduced to make RAN operations more efficient and to automate parameter settings.

In this article, we introduce what SoftBank calls "AI for RAN": the set of features that apply AI/ML to maximize RAN performance. We cover the direction AI for RAN aims for and specific research and development examples.

1. Current Advancements in RAN through AI/ML

The RAN Intelligent Controller (RIC), a mechanism for incorporating AI/ML to optimize and automate RAN operations, is under discussion chiefly in the O-RAN Alliance. In the O-RAN Alliance RAN architecture, the RIC is defined as a logical node that designs and sets base station parameters and automates and optimizes operations. The interfaces between the RIC and the nodes that make up the RAN, such as the RU, DU, and CU, are open and standardized. This has drawn in emerging vendors beyond the established major base station vendors, intensifying competition.

The RIC collects data from nodes such as the RU, DU, and CU, and applies AI/ML to that data to derive optimal RAN control and operational optimizations. Two types of RIC are defined: the Near-RT RIC (near-real-time) and the Non-RT RIC (non-real-time).

The Non-RT RIC is expected to be deployed at central locations such as data centers. In the envisioned use cases, AI/ML determines optimal RAN control policies from data collected over extended periods from many base stations and issues the corresponding control instructions to the RAN.

The Near-RT RIC is expected to be co-located with the RAN's CU and DU, gathering and analyzing information from the operating CU and DU in near real time. By controlling the RAN on short timescales (approximately 10 ms to 1 second), it aims to improve radio performance, among other goals.
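To make the Near-RT RIC control loop concrete, here is a minimal sketch in Python. The KPI report format, the decision rule, and the control actions are illustrative stand-ins we invented for this sketch; they are not the actual O-RAN E2 interface or a real xApp.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class CellMetrics:
    """KPIs a DU/CU might report over a near-real-time interface (illustrative)."""
    cell_id: str
    prb_utilization: float  # physical resource block usage, 0.0 to 1.0
    avg_sinr_db: float      # average uplink SINR in dB

def collect_metrics(cell_ids):
    """Stand-in for an E2-style KPI subscription; here we simply simulate reports."""
    return [CellMetrics(c, random.random(), random.uniform(-5, 25)) for c in cell_ids]

def infer_action(m: CellMetrics) -> str:
    """Stand-in for an AI/ML model mapping KPIs to a control decision."""
    if m.prb_utilization > 0.8 and m.avg_sinr_db < 5:
        return "offload_users"  # e.g., steer traffic toward a neighboring cell
    return "no_change"

def control_loop(cell_ids, period_s=0.1, iterations=5):
    """Near-RT loop: collect, infer, act on a 10 ms to 1 s cycle (100 ms here)."""
    for _ in range(iterations):
        for m in collect_metrics(cell_ids):
            action = infer_action(m)
            print(f"{m.cell_id}: util={m.prb_utilization:.2f} "
                  f"sinr={m.avg_sinr_db:+.1f} dB -> {action}")
        time.sleep(period_s)

control_loop(["cell-A", "cell-B"])
```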

Realizing RAN control through these standardized RICs leaves many issues to resolve in balancing performance gains against cost. Under realistic hardware constraints, the cycle of data collection from the RAN and the volume of data transferred can introduce processing delays. Analyzing the collected data with AI/ML also requires sufficient CPU/GPU capacity, which implies investing in dedicated hardware separate from the RAN equipment and raises further cost considerations.

2. What is AI for RAN in AI-RAN?

The AI-RAN proposed by SoftBank virtualizes ample hardware computing resources in data centers and runs RAN and AI applications on top of them. By allocating resources flexibly in response to traffic and other conditions, it aims to optimize capital investment and operational efficiency.
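As a toy illustration of this flexible allocation (not SoftBank's actual orchestrator), the sketch below splits shared accelerator capacity between RAN and AI workloads according to traffic load. The unit counts and the safety floor are illustrative assumptions.

```python
def allocate_compute(total_units: int, ran_load: float, min_ran_units: int = 2):
    """Split shared accelerator capacity between RAN and AI workloads.

    ran_load: current RAN traffic load, 0.0 (idle) to 1.0 (peak).
    The RAN gets capacity proportional to load (never below a safety floor);
    whatever is left over serves AI/RIC workloads on the same platform.
    """
    ran_units = max(min_ran_units, round(total_units * ran_load))
    ran_units = min(ran_units, total_units)
    return {"ran": ran_units, "ai": total_units - ran_units}

# At night, traffic is low, so most capacity can serve AI workloads:
print(allocate_compute(total_units=16, ran_load=0.2))  # {'ran': 3, 'ai': 13}
# At peak hours, the RAN reclaims most of the shared hardware:
print(allocate_compute(total_units=16, ran_load=0.9))  # {'ran': 14, 'ai': 2}
```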

Furthermore, by aggregating a large number of base stations and running the RAN and RIC/AI on the same platform, issues such as data-transfer overhead and insufficient computational capacity can be addressed.

Running the RAN and RIC/AI on the same platform enables real-time RAN control on the order of milliseconds to microseconds, which is difficult even for a Near-RT RIC. Conventionally, the allocation of the MCS (modulation and coding scheme) and resource blocks has been performed by thresholds or algorithms applied to uplink radio signal information from the terminal. By integrating AI-based prediction with real-time data analysis, the radio environment can be assessed in real time and MCS and resource block assignments can be made optimally. This optimizes radio scheduling, and improvements in user throughput and user experience are expected as a result.
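As a toy illustration of prediction-driven scheduling (not the actual scheduler), the sketch below extrapolates recent uplink SINR samples and selects an MCS index from the predicted value. The threshold table and the linear predictor are simplified assumptions standing in for an AI model and the 3GPP MCS tables.

```python
import numpy as np

# Simplified SINR (dB) thresholds -> MCS index (illustrative values; a real
# scheduler maps CQI/SINR to MCS per the 3GPP TS 38.214 tables).
MCS_THRESHOLDS_DB = [-5, 0, 5, 10, 15, 20]

def predict_sinr(history_db: np.ndarray) -> float:
    """Predict the next-slot SINR from recent samples.

    Stand-in for an AI model: a linear extrapolation over the last few
    measurements, which already reacts faster than the last raw sample alone.
    """
    t = np.arange(len(history_db))
    slope, intercept = np.polyfit(t, history_db, 1)
    return slope * len(history_db) + intercept

def select_mcs(history_db: np.ndarray) -> int:
    """Choose the highest MCS whose threshold the predicted SINR clears."""
    predicted = predict_sinr(history_db)
    mcs = 0
    for i, threshold in enumerate(MCS_THRESHOLDS_DB):
        if predicted >= threshold:
            mcs = i
    return mcs

# A terminal whose channel is improving: prediction allows a more
# aggressive MCS than the last measurement alone would suggest.
recent = np.array([2.0, 4.0, 6.5, 9.0])  # uplink SINR samples in dB
print(select_mcs(recent))                # -> 3 (predicted ~11.3 dB)
```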

By collecting data from numerous base stations and applying AI/ML for learning and control, multiple base stations can be coordinated. This enables area-wide control, such as adjusting transmit power, beamforming, and suppressing interference. It also allows packet congestion to be curbed, MIMO rates to be improved, and flexible carrier aggregation (CA) combinations to be realized, leading to better user throughput and experience.
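A minimal sketch of such area-wide coordination follows, under strong simplifying assumptions: each cell's interference toward its neighbors is weighted by the victims' load, and dominant interferers back off their transmit power. The interference matrix, the threshold, and the power step are all illustrative.

```python
import numpy as np

def coordinate_tx_power(interference: np.ndarray, load: np.ndarray,
                        p_max_dbm: float = 46.0, step_db: float = 3.0):
    """Toy area-wide transmit power coordination across cells.

    interference[i, j]: estimated interference cell i causes at cell j
                        (linear scale, diagonal assumed zero).
    load[j]: traffic load of victim cell j, from 0.0 to 1.0.
    Cells that dominate the load-weighted interference budget back off
    by step_db; every other cell stays at maximum power.
    """
    caused = interference @ load                 # load-weighted harm per cell
    power_dbm = np.full(len(load), p_max_dbm)
    power_dbm[caused > 0.5 * caused.max()] -= step_db
    return power_dbm

# Three cells; cell 0's neighbors are heavily loaded, so it backs off.
interf = np.array([[0.0, 0.8, 0.6],
                   [0.1, 0.0, 0.2],
                   [0.3, 0.2, 0.0]])
load = np.array([0.2, 0.9, 0.8])
print(coordinate_tx_power(interf, load))  # -> [43. 46. 46.]
```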

Beyond using computational resource usage and information from the RAN, coordinating with external data such as time, weather, and event information, and applying statistics or learning to it, makes it possible to build a so-called digital twin. This enables AI/ML to predict user behavior and to set optimal RAN parameters in advance.
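As a rough sketch of this digital-twin idea, the example below joins time-of-day, weather, and event features, fits a minimal predictor of cell load, and derives parameter settings ahead of time. The feature set, the synthetic data, and the linear model are illustrative assumptions; a real system would learn from large-scale historical KPIs joined with external feeds.

```python
import numpy as np

# Illustrative training data: (hour_of_day, is_raining, nearby_event) -> load.
# In a real digital twin these would come from historical KPIs joined with
# weather APIs and event calendars; here they are synthetic.
X = np.array([[8, 0, 0], [12, 0, 0], [18, 1, 0], [20, 0, 1], [23, 0, 0]], float)
y = np.array([0.4, 0.6, 0.5, 0.95, 0.2])  # observed PRB utilization

# Minimal learned predictor: least-squares linear regression with a bias term.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_load(hour: int, raining: bool, event: bool) -> float:
    features = np.array([hour, float(raining), float(event), 1.0])
    return float(np.clip(features @ w, 0.0, 1.0))

def precompute_parameters(hour: int, raining: bool, event: bool) -> dict:
    """Pick RAN settings ahead of time from the predicted load (illustrative)."""
    load = predict_load(hour, raining, event)
    return {
        "predicted_load": round(load, 2),
        "extra_carriers_on": load > 0.7,  # pre-activate capacity before a spike
        "sleep_mode": load < 0.3,         # power down ahead of quiet hours
    }

# A concert this evening: the twin predicts congestion and adds capacity early.
print(precompute_parameters(hour=20, raining=False, event=True))
# -> {'predicted_load': 0.95, 'extra_carriers_on': True, 'sleep_mode': False}
```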

3. An AI Application Case Study for the Lower Radio Layers: Channel Interpolation with AI

Here we introduce an example of applying AI in a lower layer: channel interpolation. In dense deployments with many terminals, radio signal quality is degraded by multipath fading in the complex propagation environment. Signal processing for estimation and interpolation can sometimes compensate, but restoring severely degraded signals in such complex environments is very difficult, and throughput drops dramatically as a result.

We therefore applied super-resolution technology from image-analysis AI to RAN radio signal processing, and ran simulations to see how much throughput could be improved by reconstructing (interpolating) the radio signals. We generated radio signal data modeled on a real environment, trained on it, and fed uplink (UL) signals equivalent to those of existing base station systems into the resulting AI model. Checking the changes in throughput and SINR, we confirmed roughly a 30% throughput improvement over conventional signal processing. Going forward, we plan to verify operation together with AI on actual RAN software.
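The following sketch conveys the flavor of this approach, assuming an SRCNN-style convolutional network (as used in image super-resolution) that interpolates a full subcarrier-by-symbol channel grid from sparse pilot observations. The architecture, grid size, and pilot spacing are illustrative; this is not SoftBank's actual model.

```python
import torch
import torch.nn as nn

class ChannelSR(nn.Module):
    """SRCNN-style network: treat the channel grid like a low-quality image.

    Input:  channel estimates on a subcarrier x OFDM-symbol grid, known only
            at pilot positions (zeros elsewhere), real/imag as 2 channels.
    Output: the interpolated channel over the full grid.
    Layer sizes mirror image super-resolution practice and are illustrative.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=5, padding=2),
        )

    def forward(self, pilot_grid):
        return self.net(pilot_grid)

# Toy training step on synthetic data: channel grids observed only at every
# 4th subcarrier / 7th symbol (a typical pilot spacing).
torch.manual_seed(0)
true_h = torch.randn(8, 2, 72, 14)  # batch of synthetic channel grids
mask = torch.zeros(1, 1, 72, 14)
mask[..., ::4, ::7] = 1.0           # pilot positions
pilots = true_h * mask

model = ChannelSR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = nn.functional.mse_loss(model(pilots), true_h)
loss.backward()
opt.step()
print(f"initial reconstruction MSE: {loss.item():.3f}")
```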

At SoftBank, we define "AI for RAN" as the application of AI/ML across all layers to maximize RAN performance, from the higher layers of the base station handled by a Non-RT RIC down to the lower layers that are difficult to reach even with a Near-RT RIC.

Going forward, we will push ahead with research and development to bring as many AI for RAN features as possible to realization.
