Overview

Mobility load balancing is a 3GPP Release 17 AI/ML for NG-RAN use case. It involves transferring load from overloaded cells to under-loaded neighboring cells in order to optimize network performance and user experience.


Concept

  • Default Association: In NetSim, the default user equipment (UE) association is based on maximum signal strength (SS-RSRP)
  • Load Balancing Goal: Modify the association/handover criteria to distribute the network load efficiently across the available cells.

NetSim provides a flexible framework for users to develop and test load balancing algorithms:

  • NetSim passes 'measurements' to the user algorithm
  • The algorithm processes data and returns 'controls or actions'
  • NetSim adjusts the simulation based on algorithm output
  • NetSim then provides performance metrics (KPIs) back to the algorithm
  • These steps repeat in a continuous loop, allowing for run-time adjustments (a minimal sketch of this loop is shown below).
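
The sketch below illustrates this closed loop in Python. The function names (get_measurements, apply_controls, get_kpis, user_algorithm) are hypothetical placeholders for whatever interface NetSim exposes to the user code, not actual NetSim API calls.

def get_measurements():
    """Placeholder: fetch per-gNB / per-UE measurements from NetSim."""
    return {"rrc_connected_ues": {}, "cqi": {}, "prb_utilization": {}}

def apply_controls(actions):
    """Placeholder: push the controls/actions (e.g., CIOs, associations) back to NetSim."""
    pass

def get_kpis():
    """Placeholder: fetch performance metrics (KPIs) from NetSim."""
    return {}

def user_algorithm(measurements, previous_kpis):
    """User logic: process the measurements (and previous KPIs) and return controls/actions."""
    return {"cell_individual_offsets": {}, "ue_gnb_associations": {}}

def run_control_loop(num_epochs=10):
    kpis = None
    for _ in range(num_epochs):
        measurements = get_measurements()             # NetSim passes measurements
        actions = user_algorithm(measurements, kpis)  # algorithm returns controls/actions
        apply_controls(actions)                       # NetSim adjusts the simulation
        kpis = get_kpis()                             # KPIs are fed back to the algorithm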


Example Load Balancing Algorithm

Inputs

  • Number of RRC connected UEs at each gNB
  • DL and UL CQIs of each UE
  • Time-averaged PRB utilization (DL and UL) at each gNB

Possible Outputs

  • UE to gNB associations
  • Cell Individual Offsets (CIO): when a positive CIO value is applied to a cell, (i) it artificially increases the perceived signal strength of that cell from the UE's perspective, and (ii) this makes the cell appear "nearer" or "stronger" to the UE than it actually is (illustrated below).
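
The snippet below is a simplified illustration of how a positive CIO enters the cell ranking at a UE. The values and the plain argmax rule are assumptions for illustration, not the exact handover/reselection criterion used by NetSim.

# Measured SS-RSRP (dBm) at one UE, and per-cell offsets set by the algorithm (illustrative values).
rsrp_dbm = {"gnb_1": -95.0, "gnb_2": -98.0}
cio_db   = {"gnb_1": 0.0,   "gnb_2": 5.0}     # positive CIO boosts gnb_2's perceived strength

# Ranking uses the offset-adjusted RSRP, so gnb_2 now appears "stronger" (-93 vs -95).
effective = {cell: rsrp_dbm[cell] + cio_db[cell] for cell in rsrp_dbm}
serving_cell = max(effective, key=effective.get)   # -> "gnb_2"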

User Algorithm 

  • Algorithms can be written in high-level languages such as Python (an example sketch follows this list)
  • No deep knowledge of NetSim internals is required
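
As an example, the sketch below is a minimal user algorithm that consumes one of the inputs listed above (time-averaged DL PRB utilization) and returns CIO updates. The dictionary layout and the 70%/30% thresholds are assumptions for this sketch, not NetSim-defined formats.

def load_balancing_algorithm(measurements):
    """Return per-gNB CIO updates that steer UEs away from overloaded cells."""
    prb_dl = measurements["prb_utilization_dl"]   # gNB id -> time-averaged DL PRB utilization (%)
    cio = {}
    for gnb, util in prb_dl.items():
        if util > 70.0:
            cio[gnb] = -3.0    # overloaded: make the cell look weaker
        elif util < 30.0:
            cio[gnb] = +3.0    # lightly loaded: make the cell look stronger
        else:
            cio[gnb] = 0.0
    return {"cell_individual_offsets": cio}

# Example:
# load_balancing_algorithm({"prb_utilization_dl": {"gnb_1": 85.0, "gnb_2": 20.0}})
# -> {"cell_individual_offsets": {"gnb_1": -3.0, "gnb_2": 3.0}}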

Example Scenario

  • 7-cell hexagonal layout
  • 3 sectors per cell and 2 carriers per sector, one low-frequency and one high-frequency carrier (a total of 7*3*2 = 42 gNBs in NetSim; see the snippet after this list)
  • 50 active UEs per sector
  • Dynamic variables: Cell load, traffic characteristics, distribution of devices 
  • Compare the following before and after load balancing:
    • Number of UEs associated with each gNB
    • Sum throughput of the UEs, and 
    • gNB-wise PRB utilization
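
The snippet below simply enumerates the gNB instances implied by this layout; the naming convention is purely illustrative.

CELLS, SECTORS, CARRIERS = 7, 3, ("low_band", "high_band")
gnbs = [f"cell{c}_sector{s}_{carrier}"
        for c in range(1, CELLS + 1)
        for s in range(1, SECTORS + 1)
        for carrier in CARRIERS]
assert len(gnbs) == 7 * 3 * 2 == 42
active_ues = 50 * 7 * 3   # 50 UEs per sector across 21 sectors = 1050 UEs in total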

Additional Considerations

To create more sophisticated load balancing solutions, users could consider the following:

  • PRB utilization split between GBR (Guaranteed Bit Rate) and non-GBR users (see the sketch after this list)
  • Time-varying network traffic patterns
  • UE mobility 
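
For instance, a load metric could treat GBR and non-GBR PRB consumption separately, as sketched below. The per-flow fields (is_gbr, prbs_used) and the extra weight on GBR load are assumptions for illustration.

def cell_load(flows, total_prbs):
    """Return (gbr_utilization, non_gbr_utilization) for one gNB."""
    gbr = sum(f["prbs_used"] for f in flows if f["is_gbr"]) / total_prbs
    non_gbr = sum(f["prbs_used"] for f in flows if not f["is_gbr"]) / total_prbs
    return gbr, non_gbr

def weighted_load(flows, total_prbs, gbr_weight=2.0):
    """Weight GBR load more heavily, since GBR traffic is harder to offload."""
    gbr, non_gbr = cell_load(flows, total_prbs)
    return gbr_weight * gbr + non_gbr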

Advanced: An outline for applying Reinforcement Learning (RL) to load balancing

  • Let c_ij denote the instantaneous rate of UE_i when served by BS_j; theoretically it is a log function of the SINR, c_ij = log2(1 + γ_ij)

  • Let R_ij be the long-term rate of UE_i at BS_j, and let y_ij be the fraction of resources that BS_j allocates to UE_i, so that R_ij = y_ij · c_ij


  • Note that max-RSS association does not balance the load between BSs. For a fixed topology, the load balancing problem can be solved using optimization theory
  • Now let's say the SINR changes with time due to user mobility
  • Then RL can be used to decide a "load-aware" UE-BS association i.e., the association is not based on max RSS
  • We explore Markov decision process (MDP) / Q-learning based (model-free) RL (a minimal tabular sketch follows this outline)
    • At state s_t, the RL agent selects action a_t by following policy π and receives reward r(s_t, a_t).
    • The MDP has value function V^π(s) = E_π[Σ_t α^t r(s_t, a_t) | s_0 = s] and action-value function Q^π(s, a), where α (0 ≤ α ≤ 1) is the discount factor
    • We assume that the update interval (epoch) ≫ the LTE/5G frame length of 10 ms
  • State: UE SINRs (γ_1, …, γ_N), based on the current association at time t
  • Action: 
    • Association x_ij (an indicator variable, equal to 1 when UE_i is associated with BS_j and 0 otherwise)
    • Resource allocation y_ij (equals 1/Σ_i x_ij, i.e., the reciprocal of the number of UEs associated with BS_j. This is exact for round robin and holds on average for proportional fair scheduling, PFS)
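
The sketch below is a minimal tabular Q-learning loop built on the outline above: the state is the vector of discretized UE SINRs, an action is a full UE-BS association, and the reward is the sum of long-term rates R_ij = y_ij · c_ij with c_ij = log2(1 + SINR). The toy environment (random SINRs, 3 UEs, 2 BSs) is an assumption for illustration; in a NetSim study the environment step would be one simulation epoch.

import itertools
import math
import random
from collections import defaultdict

N_UE, N_BS = 3, 2
ACTIONS = list(itertools.product(range(N_BS), repeat=N_UE))   # one BS index per UE
ALPHA_DISCOUNT = 0.9      # discount factor (alpha in the outline above)
LEARNING_RATE = 0.1
EPSILON = 0.1             # epsilon-greedy exploration

Q = defaultdict(float)    # Q[(state, action)] -> action value

def discretize(sinr_db):
    """State: UE SINRs bucketed into coarse bins (low / mid / high)."""
    return tuple(0 if s < 0 else 1 if s < 15 else 2 for s in sinr_db)

def environment_step(association):
    """Placeholder for one NetSim epoch: per-UE SINR (dB) under the given association."""
    return [random.uniform(-5, 25) for _ in range(N_UE)]   # stand-in for real measurements

def reward(association, sinr_db):
    """Sum over UEs of R = y * c, with y = 1 / (UEs on the same BS) and c = log2(1 + SINR)."""
    ues_per_bs = [association.count(b) for b in range(N_BS)]
    return sum(math.log2(1 + 10 ** (s / 10)) / ues_per_bs[bs]
               for s, bs in zip(sinr_db, association))

state = discretize(environment_step(ACTIONS[0]))
for epoch in range(5000):
    if random.random() < EPSILON:                              # explore
        action = random.choice(ACTIONS)
    else:                                                      # exploit
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    sinr = environment_step(action)                            # apply association for one epoch
    next_state = discretize(sinr)
    r = reward(action, sinr)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += LEARNING_RATE * (r + ALPHA_DISCOUNT * best_next - Q[(state, action)])
    state = next_state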



Power control example involving Reinforcement Learning