VMware ESXi and vSphere: vmNIC speeds are limited by your CPU and RAM speeds, only in part by the vNIC drivers
Posted by jpluimers on 2022/01/11
Many people still think that virtual network speed depends on the vNIC driver and the configured vNIC link speed.
This is not true: it depends mainly on CPU and RAM speeds, as that is where the bottleneck of virtual network processing lies.
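You can see this for yourself with a quick measurement. Below is a minimal Python sketch, assuming a Linux guest, iperf3 installed in both VMs, and a hypothetical peer VM at 10.0.0.2 on the same host running an iperf3 server; it compares the link speed the guest driver advertises with what iperf3 actually measures.

```python
# Minimal sketch, assuming a Linux guest, iperf3 on both ends, and an iperf3
# server already running on a second VM on the same host (hypothetical address).
import json
import subprocess
from pathlib import Path

IFACE = "eth0"      # hypothetical guest interface name
PEER = "10.0.0.2"   # hypothetical iperf3 server (another VM on the same host)

# Link speed as advertised by the guest driver, in Mb/s: an E1000 vNIC typically
# reports 1000 here, a VMXNET3 vNIC 10000.
advertised_mbps = int(Path(f"/sys/class/net/{IFACE}/speed").read_text().strip())

# Throughput actually measured over 10 seconds, taken from iperf3's JSON output.
result = subprocess.run(
    ["iperf3", "-c", PEER, "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
measured_mbps = json.loads(result.stdout)["end"]["sum_received"]["bits_per_second"] / 1e6

print(f"advertised link speed: {advertised_mbps} Mb/s")
print(f"measured throughput:   {measured_mbps:.0f} Mb/s")
# For VM-to-VM traffic on the same host the measured value routinely exceeds the
# advertised one; the real ceiling is CPU and RAM, not the emulated link speed.
```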
What does matter is that VM/host overhead is far lower when drivers use paravirtualisation (i.e. shortcutting calls from the guest OS to the hypervisor), such as PVSCSI for disk or VMXNET3 for networking. This is why VMXNET3 outperforms E1000.
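To find VMs that are not yet on paravirtualised devices, you can ask vCenter which adapter types each VM has. A minimal pyVmomi sketch (the vCenter address and credentials below are placeholders; assumes `pip install pyvmomi`):

```python
# Minimal sketch with pyVmomi: report each VM's vNIC and SCSI controller types,
# so VMs still on E1000 or a non-paravirtual SCSI controller stand out.
# vCenter address and credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE   # lab convenience; verify certificates in production

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                note = "" if isinstance(dev, vim.vm.device.VirtualVmxnet3) else "  <- consider VMXNET3"
                print(f"{vm.name}: NIC  {type(dev).__name__}{note}")
            elif isinstance(dev, vim.vm.device.VirtualSCSIController):
                note = "" if isinstance(dev, vim.vm.device.ParaVirtualSCSIController) else "  <- consider PVSCSI"
                print(f"{vm.name}: SCSI {type(dev).__name__}{note}")
finally:
    Disconnect(si)
```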
See:
- [Wayback/Archive] VMXNET3 = 10GbE ? – Hypervisor.fr
- [Wayback/Archive] Debunking the VM Link Speed Myth! – vswitchzero
10Gbps from a 10Mbps NIC? Why not? Debunking the VM link speed myth once and for all!
…
And again, we see the same pattern. Link speed – even from an adapter type that never supported 10Gbps – has no bearing at all on actual throughput and performance.
…
Thankfully, even standard vSwitches support traffic shaping, which lets you limit throughput. If you need to cap a VM's bandwidth, traffic shaping is the supported way to achieve it.
- [Wayback/Archive] virtual_networking_concepts.pdf
Note: The speed and duplex settings found in physical networking are not relevant in the virtual network, because all the data transfer takes place in the host system’s RAM, nearly instantaneously and without the possibility of collisions or other signaling-related errors.
- Blog post Some links and notes on ESXi and virtualised NAS systems, mentioning:
When supported in your VM, always use the paravirtualised adapters: VMXNET3 for network and PVSCSI for disk.
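If a VM still has an E1000 adapter, a VMXNET3 NIC can be added through the vSphere API. A minimal pyVmomi sketch follows; "myvm" and "VM Network" are placeholder names, the connection setup from the previous sketch is assumed, and the old adapter would still need to be removed and the guest re-addressed afterwards.

```python
# Minimal sketch with pyVmomi: add a VMXNET3 adapter to an existing VM.
# "myvm" and "VM Network" are placeholder names; a connected ServiceInstance
# "si" (as in the previous sketch) is assumed.
from pyVmomi import vim

def find_vm(content, name):
    """Return the first VM with the given name, or None."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next((vm for vm in view.view if vm.name == name), None)

content = si.RetrieveContent()
vm = find_vm(content, "myvm")

nic = vim.vm.device.VirtualVmxnet3()
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic.backing.deviceName = "VM Network"          # placeholder portgroup name
nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.connectable.startConnected = True

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.device = nic

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
# The reconfigure task completes asynchronously; wait on it or check task.info.state.
```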
–jeroen