Applicable Versions: NetSim Standard, NetSim Pro

Applicable Releases

NetSim calculates the PHY rate per the 3GPP formula, which is explained in the infographic below.

A simple, approximate way to think of the above formula is

Data Rate = BW * Q * R * N * (1 - OH)                                ... (1)

where Data Rate is the per-carrier PHY rate, BW is the bandwidth allocated to the particular UE, Q is the modulation order, R is the code rate, N is the number of MIMO layers, and OH is the overhead. In NetSim, OH is usually taken as 2/14, since 2 of the 14 symbols in a slot are control symbols.
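As a quick numeric sketch of formula (1), the function below evaluates the per-carrier rate; the bandwidth, modulation order, code rate, and layer count used in the example are illustrative assumptions, not NetSim defaults.

```python
# Sketch of the approximate PHY-rate formula (1).
def phy_rate_mbps(bw_mhz, q, r, n_layers, oh=2/14):
    """Per-carrier PHY rate: BW * Q * R * N * (1 - OH)."""
    return bw_mhz * q * r * n_layers * (1 - oh)

# Example (assumed values): 100 MHz, 64QAM (Q=6), code rate 0.75,
# 2 MIMO layers, 2-in-14-symbol control overhead.
rate = phy_rate_mbps(100, 6, 0.75, 2)
print(round(rate, 2))  # 771.43
```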

How is the formula in (1) equivalent to the 3GPP formula? Let's examine the variables. 

  • Q, R, N, and (1-OH) are common to both
  • The scaling factor in NetSim is assumed as 1
  • The data rate in (1) is per carrier and needs to be summed over multiple carriers
What remains to be shown is that BW is the same as N_prb * 12 / Ts. Here, 12 is the number of subcarriers per resource block, and Ts is the average OFDM symbol duration, which varies with numerology: Ts = 10^-3 / (14 * 2^mu), where 10^-3 represents 1 ms, 14 is the number of OFDM symbols per slot, and 2^mu adjusts the slot duration based on the numerology. When we calculate N_prb * 12 / Ts, we are essentially counting the total number of subcarrier-symbols transmitted per second, which is nothing but the bandwidth allocated to that device. The simplified formula yields a slightly higher estimate than the 3GPP formula because of the way N_prb is calculated in the standards. 
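The equivalence can be checked numerically. In the sketch below, the numerology (mu = 1) and PRB count (273, typical for a 100 MHz channel at 30 kHz subcarrier spacing) are assumed example values.

```python
# Check that N_prb * 12 / Ts approximates the allocated bandwidth.
mu = 1                           # numerology (assumed example)
n_prb = 273                      # allocated PRBs (assumed example)
ts = 1e-3 / (14 * 2**mu)         # average OFDM symbol duration [s]

rate_term_hz = n_prb * 12 / ts   # subcarrier-symbols per second (3GPP term)
scs_hz = 15e3 * 2**mu            # subcarrier spacing
bw_hz = n_prb * 12 * scs_hz      # occupied bandwidth

print(rate_term_hz / 1e6)        # 91.728 (MHz)
print(bw_hz / 1e6)               # 98.28  (MHz)
```

The occupied bandwidth is a factor 15/14 (about 7%) larger than the 3GPP term, which is why the simplified BW-based formula slightly overestimates the rate.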

When operating in TDD mode, the above computation would give the two-way (downlink + uplink) data rate. Therefore the downlink data rate would be

            DL-rate = Data Rate * DL-Fraction                       ... (2)

While BW, OH, and N are based on user inputs in NetSim, Q and R depend on the modulation and coding scheme (MCS). The MCS, i.e., Q and R, is chosen by looking up the 3GPP spectral-efficiency-to-MCS table, assuming the ideal Shannon rate, whereby

    Spectral-Efficiency = log2(1+SINR[linear])                   ... (3)
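The lookup in equation (3) can be sketched as follows. The (Q, R, SE) entries below are a partial excerpt of 3GPP TS 38.214 Table 5.1.3.1-1; verify them against the specification before relying on them, and note NetSim's actual selection logic may differ.

```python
import math

# Partial MCS table: (MCS index, Q [bits/symbol], code rate R, spectral eff.)
MCS_TABLE = [
    (7,  2, 526/1024, 1.0273),
    (13, 4, 490/1024, 1.9141),
    (19, 6, 517/1024, 3.0293),
    (28, 6, 948/1024, 5.5547),
]

def select_mcs(sinr_db):
    """Pick the highest MCS whose SE does not exceed the Shannon SE."""
    se = math.log2(1 + 10**(sinr_db / 10))   # equation (3)
    chosen = None
    for mcs, q, r, mcs_se in MCS_TABLE:
        if mcs_se <= se:
            chosen = (mcs, q, r)
    return chosen

print(select_mcs(5.0))   # at 5 dB SINR this excerpt picks MCS 13
```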

The expression thus becomes

DL/UL Data Rate [Mbps] = BW [MHz] * Q [bits/symbol]* R * N  * (1 - OH)*(DL/UL fraction)      ...(4)

Bandwidth, in cycles per second, translates to the number of signal changes (or symbol transmissions) per second. Hence multiplying BW by Q gives Mbits/s when BW is in MHz. The other terms are dimensionless. 

Now, in 5G, the transmitter adapts its PHY layer MCS depending on the receiver's SINR. The SINR in turn depends on the received power, which is transmit-power less path loss.  In NetSim users can record the radio measurements to obtain the SINR and MCS (for each UE) over time if the channel is time-varying.


Consider the following scenario where we have downlink traffic from the 1 gNB to 2 UEs. The pathloss models are set such that UE1 sees MCS13 and UE2 sees MCS7. Additionally, UE1 has tx-rx antennas 1x1 while UE2 has 2x2; the gNB is 2x2. Thus UE1 sees 1 MIMO stream (layer) while UE2 sees 2 MIMO streams (layers). The scheduling algorithm is round-robin. 
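The per-UE computation of equation (4) for this scenario can be sketched as below. The channel bandwidth, DL fraction, and round-robin share are placeholder assumptions (they will not reproduce the results table exactly); the (Q, R) pairs for MCS 13 and MCS 7 are taken from 3GPP TS 38.214 Table 5.1.3.1-1 and should be verified against the spec.

```python
OH = 2 / 14   # control overhead: 2 of 14 symbols per slot

def ue_rate_mbps(bw_mhz, q, r, layers, dl_fraction, share):
    """Equation (4) with a scheduler share factor for round-robin."""
    return bw_mhz * q * r * layers * (1 - OH) * dl_fraction * share

# UE1: MCS 13 -> Q=4, R=490/1024; 1 layer (1x1 antennas)
ue1 = ue_rate_mbps(100, 4, 490/1024, 1, dl_fraction=1.0, share=0.5)
# UE2: MCS 7  -> Q=2, R=526/1024; 2 layers (2x2 antennas)
ue2 = ue_rate_mbps(100, 2, 526/1024, 2, dl_fraction=1.0, share=0.5)
print(round(ue1, 2), round(ue2, 2))
```

Note that UE2's lower MCS is partly compensated by its second MIMO layer, which is why it ends up with the higher rate in the results below.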

The PHY Data Rate Calculations for UE_1 

The PHY Data Rate Calculations for UE_2

Results and discussion

We run a simulation in NetSim per the above scenario and obtain the throughput values tabulated below.

        Application throughput (simulation) [Mbps]    PHY Data Rate (analytical) [Mbps]
UE 1    95.50                                         101.69
UE 2    109.85                                        116.80

The application layer throughput would be

DL-App-Throughput = DL-Data-Rate * (App-Layer-Packet-Size/PHY-Layer-Packet-Size)          ... (5)

The computation of the PHY layer packet size is complex, since several layers add overhead: the transport layer (UDP) contributes 8 B and the network layer (IP) adds 20 B, while the SDAP header contributes 1 B and the PDCP header 16 B. At this point, the packet size is the application layer packet plus 45 B. The MAC layer in 5G further processes these packets, fitting them into transport blocks (TBs). These TBs are then divided into code blocks (CBs), which are grouped into code block groups (CBGs) for transmission over the air. The sizes of the TB and CB depend on various parameters, and additional overheads are incurred in this process. As a result, it is difficult to give a simple analytical formula for the PHY layer packet size; a reasonable estimate is a 5 - 10% reduction from the PHY rate to the application throughput. This is what we observe when comparing the simulation results with the theoretical predictions in the table above. 
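A lower bound on this reduction can be sketched from the header overhead alone. The 45 B of headers follow the text; the 1460 B application payload is an assumed example, and the additional TB/CB/CBG overhead (which pushes the total reduction toward the 5 - 10% range) is deliberately omitted.

```python
# Rough header-only estimate of app throughput from the PHY rate.
def app_throughput_mbps(phy_rate_mbps, app_pkt_bytes=1460):
    headers = 8 + 20 + 1 + 16            # UDP + IP + SDAP + PDCP = 45 B
    phy_pkt_bytes = app_pkt_bytes + headers
    return phy_rate_mbps * app_pkt_bytes / phy_pkt_bytes

# UE1's analytical PHY rate from the table above: 101.69 Mbps
print(round(app_throughput_mbps(101.69), 2))  # ~98.65, headers alone cost ~3%
```

The header-only estimate (~3% reduction) is above the simulated 95.50 Mbps, with the remaining gap attributable to the TB/CB/CBG overheads described above.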

The above discussion assumes a conservative MCS is selected, ensuring a Block Error Rate (BLER) of zero. However, if a more aggressive MCS is chosen, which typically gives a higher throughput but also a higher target BLER (t-BLER, e.g., 5% or 10%), the computation must account for this increased BLER.

Useful links

1. Overview of NetSim 5G library:

2. NetSim 5G documentation (v14.0):

3. How does the 5G MAC Scheduler work in NetSim 5G?