Hello,
Recently we experienced an odd issue with two of our ESXi 5 clusters. Each cluster has its own storage, which wasn't shared between the two; however, with the older cluster nearing end of life, we were migrating to the new ESXi 5 cluster. So we presented all the LUNs from the old cluster to the new one in order to Storage vMotion / vMotion the VMs over to the new cluster.

This all worked, but after sharing the LUNs from the old cluster to the new cluster we started seeing very high disk usage / latency. After some troubleshooting we turned off datastore heartbeating, and all the disk usage / latency problems went away. We completed the migration and removed the LUNs, and all was well. However, this begs the question of why datastore heartbeating caused the problem in the first place.
Is it not possible to have multiple sets of LUNs from two different clusters cross-shared with datastore heartbeating enabled? Or was it just because there were too many heartbeats on the LUNs? We had left the datastore heartbeating setting on the default of "Select any of the cluster datastores".
Has anyone else had this problem, or can anyone explain why this happened? I looked all over the forums and the best-practices documents, but all I could find was best practices for datastore heartbeating within a single cluster, not for a multi-cluster environment, and nothing carried any warning or stated otherwise.