Tuning guide for Telcos using VMware by Broadcom Solutions - Telco Cloud Platform 5G Core
- Kashif Hindusthan
- Mar 21, 2025
- 7 min read
Overview
In my current role, I am involved in many discussions around the operation of Telco Clouds, Virtualised Network Functions (VNFs), and Cloud-Native (or containerised) Network Functions (CNFs) on the VMware by Broadcom platform: Telco Cloud Platform (TCP 5G Core) and VMware Cloud Foundation (VCF).
Key Consideration:
Telco workloads are not comparable with typical IT application workloads. Telco workloads (VNFs or CNFs) can be very demanding, real-time, and latency sensitive, with a great appetite for CPU cycles and network performance. Storage performance matters less, but it is still an important part of the picture. Looking at Telco workloads, you can divide them into the following categories:
Data plane workloads: High packet rates, requiring network performance/capacity
Control plane workloads: CPU and memory intensive
Signal processing workloads: Latency and jitter sensitive
Let's focus on the latest VCF and Telco Cloud Platform (TCP) offering for Telcos and the tuning parameters that help achieve the desired performance and KPIs for VNFs/CNFs.
Building Blocks of the Telco Cloud

Performance Considerations: from Native/Traditional Telco to the Software-Defined Data Center/VMware Cloud Foundation (VCF) Stack
In the traditional, old-school native approach, telco-specific hardware running line cards, payload servers, and so on is clearly not sustainable given the way we do ICT today. On the other hand, telco application vendors are still finding their way towards properly adopting virtualization as a technology.
The answer is the VMware by Broadcom Telco Cloud: a purpose-built, carrier-grade cloud services platform with NFV features designed to support Communication Service Provider (CSP) requirements for any telco workload, including OSS/BSS, VAS, 4G (LTE), 5G core, and RAN network functions.
VNF/CNF Tuning considerations for Optimum performance
Physical Host BIOS Tuning:
Power Management: ensure that High Performance mode is enabled; most likely you will also have to disable C-states and P-states.
Turbo Boost: it has to be enabled if you use DPDK with ENS, since poll mode keeps the CPU cores busy polling all the time regardless of the effective utilisation.
Hyper-Threading: generally enable it for VNFs, unless the VNF vendor advises disabling it.
NUMA Node Interleaving: if you rely on NUMA, this has to be disabled, since the two settings are mutually exclusive.
Virtualization settings:
- Intel(R) Virtualization Technology (Intel VT) -> Enabled
- Intel(R) VT-d -> Enabled
- SR-IOV -> Enabled
Advanced Performance Tuning Options:
- Processor Jitter Control -> Disabled
- Processor Jitter Control Frequency -> 0
- Processor Jitter Control Optimization -> Zero Latency
- Enhanced Processor Performance -> Enabled
- Performance Config TDP Level -> Normal
- PCI Peer to Peer Serialization -> Disabled
- IODC Configuration -> Auto
Memory:
- Channel Interleaving -> Enabled
- Memory Controller Interleaving -> Auto
- Maximum Memory Bus Frequency -> Auto
- Memory Patrol Scrubbing -> Disabled
- Node Interleaving -> Disabled
- Memory Mirroring Mode -> Full Mirror
- Memory Remap -> No Action
- Refresh Watermarks -> Auto
Key note: ensure server firmware and drivers are updated as per the server vendor's guidelines.
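Once the BIOS settings are applied, it is worth sanity-checking from the ESXi shell that the host has actually picked them up. The commands below are a minimal sketch using standard esxcli/esxcfg-info calls; the exact output fields vary by ESXi version and NIC driver.
esxcli hardware cpu global get        # confirms Hyper-Threading is supported, enabled and active
esxcli network sriovnic list          # lists physical NICs exposing SR-IOV virtual functions (needs SR-IOV and VT-d enabled)
esxcfg-info | grep -i "HV Support"    # shows whether hardware virtualization (Intel VT) is available to the hypervisor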
ESXi/Hypervisor - Compute Advanced settings and Tuning
Increase the maximum Tx queue length:
esxcli system settings advanced set -i 10000 -o /Net/MaxNetifTxQueueLen
Set the Tx copy size to the maximum:
esxcli system settings advanced set -o /Net/VmxnetTxCopySize -i 4294967295
Tx packets smaller than this size (in bytes) are transmitted immediately:
esxcli system settings advanced set -o /Net/NetTxDontClusterSize -i 8192
Enable multiple vNIC queues for hardware Tx queue selection:
esxcli system settings advanced set -o /Net/NetSchedHClkVnicMQ -i 1
Separate the Rx queue thread onto another core by disabling queue pairing:
esxcli system settings advanced set -o /Net/NetNetqRxQueueFeatPairEnable -i 0
NOTE: Disabling queue pairing for all pNICs on an ESXi host creates a separate thread for processing pNIC transmit completions. As a result, completions are processed in a timely manner, freeing space in the vNIC’s transmit ring to transmit additional packets.
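Each of these advanced options can be read back before and after it is changed, which makes it easy to record the defaults and roll back if a VNF vendor recommends different values. A minimal check using the same option paths as above:
esxcli system settings advanced list -o /Net/MaxNetifTxQueueLen
esxcli system settings advanced list -o /Net/NetNetqRxQueueFeatPairEnable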
Physical Uplink/NIC Ring Size Settings:
Note: A ring buffer is a section of memory that acts as a temporary holding area for packets when the packet rate is so high that the code processing them has trouble keeping up.
Each type of network adapter has a "preset maximum" determined by the device driver. This preset maximum is usually higher than the default setting allocated when ESXi is installed from scratch. The "preset" and "current" values can be checked with the following commands from the ESXi host CLI:
$ esxcli network nic ring preset get -n vmnicX
$ esxcli network nic ring current get -n vmnicX
Update the ring size:
Command reference:
esxcli network nic ring current set -n vmnic6/7/8/9 -r 8192 -t 8192
Additional Information:
Increasing the Rx and Tx values on the hardware side and within the guest OS can significantly enhance VM performance, especially in high I/O environments. However, the effectiveness of this adjustment depends on factors such as the application, operating system, and hardware in use. Therefore, the recommendations should come from the respective application, OS, and hardware vendors to ensure compatibility and optimal performance.
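On the guest OS side, the vNIC ring sizes can usually be inspected and adjusted with ethtool on Linux guests. This is a minimal sketch assuming a Linux guest with a VMXNET3 interface named eth0 (an example name); follow the guest OS and VNF vendor's recommended values.
ethtool -g eth0                     # show current and maximum ring sizes
ethtool -G eth0 rx 4096 tx 4096     # increase Rx/Tx ring sizes (example values)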
Enhanced Data Path (EDP):
Enhanced Network Stack (ENS), which also appears as Enhanced Data Path, is a networking stack mode that, when configured, provides superior network performance. It is primarily targeted at NFV workloads, which require the performance benefits provided by this mode. ENS uses the DPDK poll mode driver model and significantly improves packet rate and latency for small message sizes.
Ensure you do not over-provision the lcores.
Ensure the ENS-backed uplink ring buffers are updated, as shown below.
Command reference:
nsxdp-cli ens uplink ring set -n vmnic2 -t 4096 -r 4096
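If several uplinks back the ENS switch, the documented command above is simply repeated per uplink; a small loop keeps the values consistent. The vmnic names below are placeholders for your ENS-backed uplinks.
for nic in vmnic2 vmnic3; do
  nsxdp-cli ens uplink ring set -n $nic -t 4096 -r 4096
done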
DRSS:
Improve packet throughput by enabling Default Queue Receive Side Scaling (DRSS) on the NIC card.
Ensure the NIC card supports Default Queue Receive Side Scaling.
After you enable the Default Queue Receive Side Scaling (DRSS) configuration on a NIC port, Enhanced Network Stack (ENS) manages the receive-side data arriving at the physical NIC card. A single port on the physical NIC card makes multiple hardware queues available for receive-side data. Each queue is assigned a logical core from the local non-uniform memory access (NUMA) node. When inbound packets - multicast, unknown, or broadcast - arrive at a physical NIC port, they are distributed across several hardware queues, depending on the availability of logical cores. DRSS removes the bottleneck of all such traffic being processed by a single queue. DRSS is intended to serve broadcast, unknown, or multicast (BUM) traffic.
For example, on a physical NIC card that has two ports, you can configure one port to make multiple hardware queues available to efficiently manage receive-side (Rx) traffic. This is done by passing the value DRSS=4,0 in the ESXi module parameters command, which enables DRSS on the first physical NIC port.
Command reference:
Install the i40en ENS NIC driver.
If the NIC has two ports, enable DRSS on the first port of the physical NIC by running:
esxcli system module parameters set -m i40en_ens -p DRSS=4,0
NOTE: Here, DRSS is enabled with 4 Rx queues on the first port and disabled on the second port.
The number of DRSS queues assigned depends on the number of physical CPUs available on the host.
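A host reboot is typically required for module parameters to take effect. Afterwards, the configured value can be read back; and if both ports carry receive-heavy traffic, the same comma-separated per-port format can be extended (for example DRSS=4,4, an illustrative value; check with your NIC vendor). A minimal sketch:
esxcli system module parameters list -m i40en_ens     # verify the current DRSS setting
esxcli system module parameters set -m i40en_ens -p DRSS=4,4     # example: enable 4 Rx queues on both ports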
ESXi Host - Enable Hyper-Threading
ESXi hosts manage processor time intelligently to guarantee that load is spread smoothly across processor cores in the system. Logical processors on the same core have consecutive CPU numbers, so that CPUs 0 and 1 are on the first core together, CPUs 2 and 3 are on the second core, and so on. Virtual machines are preferentially scheduled on two different cores rather than on two logical processors on the same core.
Ensure hyperthreading is enabled on the ESXi hosts:
Browse to the host in the vSphere Client > Configure > System > Advanced System Settings and select VMkernel.Boot.hyperthreading.
You must restart the host for the setting to take effect. Hyperthreading is enabled if the value is "true".
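The same state can be confirmed from the ESXi shell; the command below is a standard check and should report hyperthreading as supported, enabled, and active once the host is back up.
esxcli hardware cpu global get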
Virtual Machine - Tuning parameters
Prefer Hyper-Threading within a NUMA node for a specific VM with numa.vcpu.preferHT=TRUE.
Apply the same preference to all VMs on a vSphere host with numa.PreferHT=1.
Enforce processor affinity for vCPUs to be scheduled on specific NUMA nodes, as well as memory affinity for all VM memory to be allocated from those NUMA nodes, using numa.nodeAffinity=0,1,… (processor socket numbers).
Reduce the vNUMA default if the application is NUMA-aware/optimised. The default value is numa.vcpu.min=9, so if you configure smaller VMs, consider using numa.autosize.once=FALSE and numa.autosize=TRUE.
Use sched.cpu.latencySensitivity set to High to provide a VNF/CNF with exclusive CPU resources.
Assign additional Tx threads to virtual network interfaces using ethernetX.ctxPerDev.
To ensure that system threads receive exclusive CPU resources, use sched.cpu.latencySensitivity.sysContexts.
A consolidated example follows this list.
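Putting these together, the VM's advanced configuration (the .vmx file, or Edit Settings > VM Options > Advanced > Configuration Parameters) might look like the sketch below for a data plane VNF pinned to NUMA node 0. The key names come from the list above; the values are illustrative assumptions, and the exact settings must come from the VNF/CNF vendor's deployment guide.
numa.nodeAffinity = "0"
numa.vcpu.preferHT = "TRUE"
sched.cpu.latencySensitivity = "high"
sched.cpu.latencySensitivity.sysContexts = "1"
ethernet0.ctxPerDev = "1"
Here node 0, a single exclusive system context, and a dedicated Tx thread for ethernet0 are assumptions made for the example. Also note that latency sensitivity High only takes effect when the VM has full CPU and memory reservations.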
For NSX-T 3.x or NSX 4.x, use Enhanced Datapath - Performance for VNF/CNF data plane traffic:
Enhanced Datapath - Performance: the poll-mode-based datapath mode that leverages the DPDK Fast Path cache for improved performance. The Networking and Security stack uses CPU cores that are exclusively reserved for this mode during NSX configuration. This mode is also known as ENS-Polling and is preferred for the best data plane performance.
Additional options available in NSX 4.x:
Standard: the default datapath mode. However, it does not support hypervisors that use Data Processing Unit (DPU)-based acceleration or SmartNICs. It can be changed to Enhanced Datapath - Standard mode later.
Enhanced Datapath - Standard: the interrupt-driven datapath mode. Use this mode for hypervisors that use DPU-based acceleration or SmartNICs. It leverages the DPDK Fast Path cache for improved performance, and the Networking and Security stack consumes CPU resources based on the demands of network traffic. Enhanced Datapath - Standard is also known as ENS-Interrupt. It can be changed to Standard mode later; for ESXi 8.0.3 and above with fewer than 128 segment ports, it can also be changed to Enhanced Datapath - Performance later*.
Conclusion
Bringing it all together, it is challenging to virtualise telco applications: you will need to deep dive into the vSphere SDDC/VCF stack to get the performance required for these real-time workloads. There are a lot of trade-offs and choices involved. At the same time, these new approaches make things interesting, right?
Now add the containerisation of telco applications into the mix. Combine that with automatic scaling based on performance/load KPIs, and you have a fully automated VMware Telco Cloud Platform (TCP 5G Core) that caters to this need.
Thank you for reading my post; more to come in the coming weeks :)
References used for this Post:
https://knowledge.broadcom.com/external/article/341594/troubleshooting-nic-errors-and-other-net.html
https://www.vmware.com/docs/perf-latency-tuning-vsphere8