Hi
I'm setting up an ESXi 5.1 machine as a proof of concept to test the performance of a "storage appliance" running in a VM on the host.
Right now I'm using a Solaris-based distribution (Nexenta Community Edition) for testing.
I have successfully set up the machine with the disk controllers passed through, and added an initial vSwitch with a vmkernel port and a port group for virtual machines. On that vSwitch I gave the guest Solaris VM two e1000 adapters: one for management, and one with an address on the same private network as the vmkernel port on the management network. Everything seems to work fine.
I wanted to test whether there is any performance improvement from using a vmxnet3 adapter, with a completely "virtual" network dedicated solely to the NFS traffic.
So I have created an additional vSwitch with no physical adapters attached to it, with another vmkernel port and a second virtual machine port group, and added a vmxnet3 adapter to my "storage VM" to go on this vSwitch.
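For reference, the uplink-less vSwitch setup can be done from the ESXi shell roughly like this (the vSwitch, port group, vmk, and IP names below are placeholders, not my actual ones):

```shell
# Create a vSwitch with no physical uplinks (internal-only traffic).
esxcli network vswitch standard add --vswitch-name=vSwitchNFS

# Port group for the vmkernel interface, and one for the storage VM's vmxnet3 NIC.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchNFS --portgroup-name=NFS-VMK
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchNFS --portgroup-name=NFS-VM

# vmkernel port on the new vSwitch, with a static address on the private NFS network.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS-VMK
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.100.1 --netmask=255.255.255.0 --type=static
```

The storage VM's vmxnet3 adapter then gets attached to the NFS-VM port group and given an address in the same 192.168.100.0/24 range.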
Before I go further, I was wondering whether VMCI or anything else is already going to do what I want (i.e., allow ESXi to talk NFS to the storage VM at speeds in excess of what the e1000 adapter's limit would provide by default).
I say this because my initial testing with iozone gave me numbers that seemed to exceed 1 Gb/s.