After a failed backup job, vSphere shows that my VDP appliance still has guest VMDKs attached from at least 5 guests.
The backup job usually completes in an hour or two, but it was still running this morning, so I cancelled it via the vSphere UI and rebooted the appliance. That didn't release the disks.
I then consolidated the VDP appliance's disks and shut the appliance down.
The guest VMs whose disks appear to still be attached to the appliance also still had snapshots taken by VDP, so I deleted those using the vSphere UI. After that, vSphere indicated that the same guests needed disk consolidation, so I consolidated them, and those tasks also completed successfully. The guests in question don't appear to be affected and are running fine.
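In case the exact steps matter, I believe the UI actions above correspond to these vSphere API calls; here's an untested pyVmomi sketch of the equivalent (the vCenter address, credentials and guest names are placeholders, not my real ones):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder vCenter details; unverified SSL is lab-only
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_vm(name):
    # Walk the whole inventory for a VM by display name
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next(v for v in view.view if v.name == name)

for guest_name in ["guest-01", "guest-02"]:  # placeholders for the affected guests
    vm = find_vm(guest_name)
    if vm.snapshot is not None:
        # Same effect as "Delete All Snapshots" in the UI
        WaitForTask(vm.RemoveAllSnapshots_Task())
    # Same effect as the "Consolidate" disk action in the UI
    WaitForTask(vm.ConsolidateVMDisks_Task())

Disconnect(si)
```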
The VDP appliance is on its own datastore, but the attached guest VMDKs are shown as residing on the datastore used by the guest VMs. The same VMDKs are also listed as the current disks of the guests, even after removing the snapshots and consolidating, so it's as if the guest VMDKs are shared between the guests and the appliance.
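To show exactly what I mean, this is roughly how the appliance's attached disks and their backing paths can be listed (untested pyVmomi sketch; "VDP-Appliance" is a placeholder for my appliance's name):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vdp = next(v for v in view.view if v.name == "VDP-Appliance")  # placeholder name

# Print each virtual disk with its backing VMDK path; the stuck guest
# disks show up with paths like "[Guest-Datastore] guest-01/guest-01.vmdk"
for dev in vdp.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        print(dev.deviceInfo.label, "->", dev.backing.fileName)

Disconnect(si)
```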
How do I clear the guest VMDKs from the appliance without causing data loss or massive downtime for the guest VMs?
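The approach I'm considering (untested, and I'd appreciate confirmation it's safe) is to remove those disks from the appliance with a reconfigure that deliberately leaves fileOperation unset, since as I understand it, setting fileOperation to destroy is what deletes the backing file. A pyVmomi sketch of what I mean, with placeholder names again:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vdp = next(v for v in view.view if v.name == "VDP-Appliance")  # placeholder name

# Remove every disk whose backing file is NOT on the appliance's own
# datastore ("VDP-Datastore" is a placeholder for mine)
changes = []
for dev in vdp.config.hardware.device:
    if (isinstance(dev, vim.vm.device.VirtualDisk)
            and not dev.backing.fileName.startswith("[VDP-Datastore]")):
        change = vim.vm.device.VirtualDeviceSpec()
        change.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
        # fileOperation is deliberately left unset: setting it to "destroy"
        # would delete the guests' VMDK files from the datastore
        change.device = dev
        changes.append(change)

spec = vim.vm.ConfigSpec(deviceChange=changes)
WaitForTask(vdp.ReconfigVM_Task(spec=spec))
Disconnect(si)
```

My understanding is that a device remove with no fileOperation just detaches the disk from the appliance and leaves the VMDK on the datastore for the guest to keep using, but I'd like a second opinion before running anything like this against a production appliance.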
[Image: the VDP appliance's disk list in vSphere. The green rectangle marks the appliance's own VMDKs; the red rectangle marks the attached guest VMDKs.]