
Poor network performance between VMs


Hi,

I'm experiencing poor network performance and I'm wondering if anyone has any ideas on how to solve this problem.

 

When running iperf (no customization other than changing the running time to 60 seconds) internally on a Windows 2008 R2 VM, I reach about 428 MB/s.

 

When running iperf (same settings) between two Windows 2008 R2 VMs on the same ESXi server, I reach about 200 MB/s.

 

When running iperf (same settings) between two Windows 2008 R2 VMs on different ESXi servers, connected via a 10 Gbps network, I reach about 110 MB/s. That looks very much like a 1 Gbps network, but it is not.
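For reference, this is roughly the iperf invocation used (a sketch assuming iperf 2.x; the receiver IP is a placeholder, and the only deviation from defaults is the 60-second run time):

On the receiving VM:   iperf -s
On the sending VM:     iperf -c <receiver IP> -t 60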

 

A few TCP/IP settings have been tuned in the Windows VMs:

 

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

- TcpTimedWaitDelay 30
- MaxUserPort 32768
- TcpMaxDataRetransmissions 5

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters

- EnableDynamicBacklog 00000001
- MinimumDynamicBacklog 00000020
- MaximumDynamicBacklog 00001000
- DynamicBacklogGrowthDelta 00000010
- KeepAliveInterval 1

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{Interface GUID}

- TcpNoDelay 1
- TcpAckFrequency 1

 

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DisableTaskOffload has been tried with the values 0, 1 and 16.
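For reference, a sketch of how values like these can be applied from an elevated command prompt with reg add (a reboot is generally needed before the TCP/IP parameters take effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort /t REG_DWORD /d 32768 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpMaxDataRetransmissions /t REG_DWORD /d 5 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableTaskOffload /t REG_DWORD /d 0 /f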

 

The TCP global parameters look like this:

----------------------------------------------

Receive-Side Scaling State: disabled

Chimney Offload State: automatic

NetDMA State: disabled

Direct Cache Access (DCA): enabled

Receive Window Auto-Tuning Level: normal

Add-On Congestion Control Provider: ctcp

ECN Capability: disabled

RFC 1323 Timestamps: disabled
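For completeness, that output comes from netsh; a quick sketch of how to view and change these settings from an elevated prompt (the set lines are only syntax examples, not values I have applied):

netsh int tcp show global
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=disabled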

 

I'm using ESXi 5.0.0 build-702118

 

I have seen the message below in the VMkernel log file, indicating that the network speed is not limited to 1 Gbps:

VMotionSend: 3508: 1352573741598557 S: Sent all modified pages to destination (network bandwidth ~843.801 MB/s)

 

vMotion used the same NICs as the VMs during this test.
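For what it's worth, the physical uplink speed on the hosts can also be double-checked with esxcli on ESXi 5.0 (vmnic names differ per host):

esxcli network nic list

The Speed column should report 10000 (Mbps) for the 10 Gbps uplinks.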

 

Anyone?

