My question relates to disk sizing for VMs and datastore management. Our datastores are 2TB in size and all part of a datastore cluster. On occasion I need to provision a Windows VM with a large disk, say 2TB for example. The main considerations are snapshot space and Storage DRS mobility. Provisioning a 1.9TB disk to the VM off a single 2TB datastore doesn't work well, of course, since it leaves almost no room for snapshot growth. But I'm leery of provisioning even a 500GB disk because of how it limits datastore mobility with SDRS. So my questions:
1. How have you managed this in practice? Provision multiple 250GB or 500GB disks? Or some other denomination?
2. What have you found to be an effective general free-space threshold to help ensure snapshots don't fill a datastore, and how do you manage it? To make my thinking concrete, I've sketched the rough math below.
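Here's a minimal back-of-the-envelope sketch (Python, just because it's easy to read) of how I'm weighing disk size against free space. The 10% snapshot growth and 20% free-space threshold are numbers I'm assuming purely for illustration, not anything I've validated:

```python
# Rough check: given a datastore's capacity, the VMDKs placed on it, and an
# assumed snapshot delta growth rate, does free space stay above a chosen
# threshold? The percentages are assumptions for illustration only.

def free_space_after_snapshots(capacity_gb: float,
                               vmdk_sizes_gb: list[float],
                               snapshot_growth_pct: float) -> float:
    """Free space left once each VMDK's snapshot delta grows to the given fraction of its size."""
    provisioned = sum(vmdk_sizes_gb)
    snapshot_overhead = provisioned * snapshot_growth_pct
    return capacity_gb - provisioned - snapshot_overhead

def meets_threshold(capacity_gb: float,
                    vmdk_sizes_gb: list[float],
                    snapshot_growth_pct: float,
                    free_threshold_pct: float) -> bool:
    """True if free space stays above the threshold fraction of total capacity."""
    remaining = free_space_after_snapshots(capacity_gb, vmdk_sizes_gb, snapshot_growth_pct)
    return remaining >= capacity_gb * free_threshold_pct

if __name__ == "__main__":
    capacity = 2048  # 2TB datastore, in GB

    # One 1900GB disk on the datastore: snapshot growth alone overruns it.
    print(meets_threshold(capacity, [1900], snapshot_growth_pct=0.10,
                          free_threshold_pct=0.20))  # False

    # One 500GB disk per datastore (the rest spread elsewhere by SDRS): plenty of headroom.
    print(meets_threshold(capacity, [500], snapshot_growth_pct=0.10,
                          free_threshold_pct=0.20))   # True
```

The part I can't answer myself is what realistic values for those two percentages look like in practice, which is really what question 2 is asking.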
I appreciate your feedback,