hi guys,
We have a case open, but maybe you have some feedback.
The story is as follows:
Last week DatastoreXX was recreated from scratch.
RecoverPoint is used to replicate the VMs on that datastore.
A few days after DatastoreXX was recreated, the RecoverPoint team reported that DatastoreXX had changed and they were no longer able to replicate it.
We told them we had recreated the datastore, and that this was probably the reason.
So yesterday night the RecoverPoint team added DatastoreXX back to replication. That means last night was the first time the recreated DatastoreXX was replicated.
This morning the customer reported that 2 VMs were not accessible. As usual we checked networking: OK. The VMs were powered on: OK.
The VMs were then rebooted, and after that they were marked as invalid.
After some troubleshooting we found the VMs were locked, so we rebooted the hosts hoping to release the locks and get the VMs up and running again, but that was not the issue.
The issue turned out to be ATS-related. Something interesting: during this time our vCenter VM (located on DatastoreXX) was rebooted, was also marked as invalid, and we were not able to power it back on.
Another VM on that same datastore ran into problems too.
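For anyone hitting something similar: these are the kinds of checks we ran on the ESXi host to confirm the ATS mode of the volume and see which host held the lock. Paths and names (DatastoreXX, MyVM) are placeholders for our environment, not something you can copy verbatim:

```shell
# Show VMFS volume attributes; the "Mode:" line reveals whether the
# volume is "public" or "public ATS-only"
vmkfstools -Ph -v1 /vmfs/volumes/DatastoreXX

# Dump lock/ownership metadata for a VM file; the owner field contains
# the MAC address of the ESXi host holding the lock (all zeros = no lock)
vmkfstools -D /vmfs/volumes/DatastoreXX/MyVM/MyVM.vmdk
```

The MAC address in the `vmkfstools -D` output can be matched against the management NICs of the hosts in the cluster to find which host is holding the lock.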
So the timing lines up: RecoverPoint replicated the datastore last night, and this morning multiple VMs located on that same datastore had issues.
We want to find out whether this issue is related to RecoverPoint. Specifically, we want to know
what was locking the VMs, and why every VM on that particular datastore that was rebooted came back inconsistent.
I forgot to add the ATS article; in fact it mentions SRM, which is replication software too:
VMware KB: ESXi 5.x hosts fail to mount VMFS5 volumes that are formatted with ATS-only capabilities
thanks a lot