We just implemented new storage, and I want to adjust the queue depth on our hosts so that we don't overwhelm our storage ports, aka the fan-in config (or is it fan-out?).
While digging into this, everything made perfect sense to me until I ran ESXTOP to look at the DQLEN values and noticed that on my hosts with QLogic HBAs they were not static: they were changing between 64 and 32 as I watched ESXTOP refresh.
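To see where the configured depth comes from versus what ESXTOP reports, you can check the driver's module parameter. This is a sketch assuming the native qlnativefc driver on ESXi 5.5; on the older qla2xxx driver the module name differs:

```shell
# List the qlnativefc driver's module parameters; ql2xmaxqdepth is the
# configured per-device queue depth (assumes the native driver is loaded).
esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth

# In esxtop, press 'u' for the device view; the DQLEN column shows the
# current (possibly dynamically adjusted) device queue depth.
```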
< Let me state up front that I'm seeing this against the old storage, which is full of NL-SAS and has zero IOPS to spare. It probably isn't configured properly either, in that we only have 4 ports dedicated to our VM infrastructure, so the original config was never optimized in any way. With our new storage arriving this is being fixed now, hence the purpose of this post. >
In short, it appears that the QLogic HBA itself has an execution throttle, though I'm not sure how it works with regard to the values; it certainly looks like a QFULL condition that it's reacting to. In searching for answers I read that this throttle no longer applies to vSphere, but if that's the case, why would I see the value changing automatically in ESXTOP?
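For what it's worth, the 64-to-32 halving looks like ESXi's own adaptive queue-depth throttling reacting to QFULL/BUSY from the array, rather than anything the HBA firmware is doing. A sketch of how to inspect and enable it per device on ESXi 5.1+, where `naa.xxxx` is a placeholder device ID, not a real one:

```shell
# Show a device's settings, including the QFULL sample size / threshold.
# naa.xxxx is a placeholder - substitute your own device identifier.
esxcli storage core device list -d naa.xxxx

# Enable adaptive throttling on the device (both values 0 = disabled):
# when enough QFULL/BUSY responses occur in the sample window, ESXi
# halves the device queue depth, then slowly ramps it back up.
esxcli storage core device set -d naa.xxxx \
    --queue-full-sample-size 32 --queue-full-threshold 4
```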
"The QLogic adapter parameter, Execution Throttle, is no longer used in VMware ESXi and is ignored by the QLogic FC/FCoE qla2xxx/qlnativefc driver and ESXi Operating System." This came from here: QLogic Support
Can someone shed light on this QLogic HBA behavior?
Also, given that the DSNRO parameter (global in 5.0, per-LUN in 5.5) trumps queue depth when more than one VM is active against the LUN, should I bother at all?
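For reference, in 5.5 DSNRO is set per device rather than via the global Disk.SchedNumReqOutstanding advanced option. A sketch, again using a placeholder `naa.xxxx` device ID:

```shell
# Set DSNRO for one LUN on ESXi 5.5 (per-device; global in 5.0 and earlier).
# naa.xxxx is a placeholder device ID - substitute your own.
esxcli storage core device set -d naa.xxxx --sched-num-req-outstanding 32

# Verify: check "No of outstanding IOs with competing worlds" in the output of:
esxcli storage core device list -d naa.xxxx
```

Note that DSNRO only caps outstanding I/O per device when multiple VMs compete; with a single active VM on the LUN, the HBA/device queue depth is the effective limit.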
thanks