* Mark VMs in error state when expunge fails during destroy operation
* fetch volume by external id (used by external plugins)
* review comments
* Update reorder hosts log to DEBUG; the log line is too verbose to keep at INFO
* PowerFlex/ScaleIO client initialization, authentication and command execution improvements
* Migrate VM with volume not supported yet for PowerFlex/ScaleIO
* review changes
* use findByIdIncludingRemoved for volume retrieval in snapshot policy validation
* add unit tests
* add cleanup for orphan snapshot policies
* delete snapshot policies when expunging volumes
* update orphan cleanup to remove policies for volumes that are in the Expunged state or no longer exist (a sketch follows below)
---------
Co-authored-by: Daman Arora <daman.arora@shapeblue.com>
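The snapshot-policy cleanup described in the commits above (deleting policies when a volume is expunged, plus sweeping orphaned policies) can be pictured roughly as follows. This is a minimal illustrative sketch, not the actual CloudStack code: the types and DAO interfaces are hypothetical stand-ins, and only the findByIdIncludingRemoved name mirrors the lookup mentioned above.

```java
// Illustrative sketch only: simplified stand-ins for the volume and
// snapshot-policy DAOs; names other than findByIdIncludingRemoved are hypothetical.
import java.util.List;
import java.util.Optional;

public class OrphanSnapshotPolicyCleaner {

    enum VolumeState { READY, DESTROYED, EXPUNGED }

    record Volume(long id, VolumeState state) {}
    record SnapshotPolicy(long id, long volumeId) {}

    interface VolumeDao {
        // Lookup that also returns soft-removed rows, mirroring the
        // findByIdIncludingRemoved change referenced above.
        Optional<Volume> findByIdIncludingRemoved(long volumeId);
    }

    interface SnapshotPolicyDao {
        List<SnapshotPolicy> listAll();
        void remove(long policyId);
    }

    private final VolumeDao volumeDao;
    private final SnapshotPolicyDao policyDao;

    OrphanSnapshotPolicyCleaner(VolumeDao volumeDao, SnapshotPolicyDao policyDao) {
        this.volumeDao = volumeDao;
        this.policyDao = policyDao;
    }

    /** Removes policies whose volume is expunged or no longer exists. */
    void cleanupOrphanPolicies() {
        for (SnapshotPolicy policy : policyDao.listAll()) {
            Optional<Volume> volume = volumeDao.findByIdIncludingRemoved(policy.volumeId());
            boolean orphaned = volume.isEmpty()
                    || volume.get().state() == VolumeState.EXPUNGED;
            if (orphaned) {
                policyDao.remove(policy.id());
            }
        }
    }
}
```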
* Allow copy of templates from secondary storages of other zones when adding a new secondary storage
* Add API param and UI changes on add secondary storage page
* Make copy template across zones non-blocking
* Code fixes
* unused imports
* Add copy template flag in zone wizard and remove NFS checks
* Fix UI
* Label fixes
* code optimizations
* code refactoring
* missing changes
* Combine template copy and download into a single asynchronous operation
* unused import and fixed conflicts
* unused code
* update config message
* Fix configuration setting value on add secondary storage page
* Removed unused code
* Update unit tests
The secondary storage selectors allow operators to specify, for instance, that volumes should go to a specific secondary storage A. Thus, when a volume is uploaded, it is always placed on secondary storage A.
Cold volume migration moves volumes to a secondary storage before moving them to the destination primary storage. This process does not consider the secondary storage selectors, even though some operators want to dedicate specific secondary storages to cold migration.
To address this, this PR makes the cold volume migration process consider the secondary storage selectors.
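The idea can be illustrated with a short sketch. This is not the actual CloudStack data-motion code: the class, record, and selector names are hypothetical, and it only shows the behavioral change of routing the intermediate copy through the store picked by the configured selector instead of an arbitrary one.

```java
// Illustrative sketch only: hypothetical names, not CloudStack's DataMotionStrategy.
import java.util.List;

public class ColdMigrationStoreChooser {

    record ImageStore(long id, String name) {}
    record Volume(long id, long zoneId) {}

    /** Stand-in for the operator-configured secondary storage selector. */
    interface SecondaryStorageSelector {
        ImageStore select(Volume volume, List<ImageStore> candidates);
    }

    private final SecondaryStorageSelector selector;

    ColdMigrationStoreChooser(SecondaryStorageSelector selector) {
        this.selector = selector;
    }

    /**
     * Picks the image store used as the intermediate hop of a cold migration.
     * With a selector configured, the choice follows the operator's rules;
     * otherwise it falls back to the first available store, as before.
     */
    ImageStore chooseIntermediateStore(Volume volume, List<ImageStore> storesInZone) {
        if (storesInZone.isEmpty()) {
            throw new IllegalStateException("No secondary storage available in zone " + volume.zoneId());
        }
        if (selector != null) {
            ImageStore selected = selector.select(volume, storesInZone);
            if (selected != null) {
                return selected;
            }
        }
        return storesInZone.get(0);
    }
}
```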
This fixes the case where the storage pool is removed, along with the KVM
host and the volumes on that host. When that happened, listing
snapshots (for recovery purposes) caused an NPE because pool_id was null,
while last_pool_id for the related destroyed volume wasn't. This adds
fallback logic.
Signed-off-by: Rohit Yadav <rohit@yadav.cloud>
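A minimal sketch of the fallback described above follows. The class and DAO names are hypothetical and stand in for the real snapshot-listing code; it only illustrates falling back from pool_id to last_pool_id so a null pool reference no longer triggers the NPE.

```java
// Illustrative sketch only: hypothetical helper, not the actual CloudStack snapshot code.
import java.util.Optional;

public class SnapshotPoolResolver {

    record Volume(Long poolId, Long lastPoolId) {}
    record StoragePool(long id, String uuid) {}

    interface StoragePoolDao {
        Optional<StoragePool> findByIdIncludingRemoved(long poolId);
    }

    private final StoragePoolDao poolDao;

    SnapshotPoolResolver(StoragePoolDao poolDao) {
        this.poolDao = poolDao;
    }

    /**
     * Resolves the pool a snapshot's volume belonged to. Falls back to
     * last_pool_id when pool_id is null (e.g. pool and host already removed),
     * avoiding the NPE seen when listing snapshots for recovery.
     */
    Optional<StoragePool> resolvePool(Volume volume) {
        Long poolId = volume.poolId() != null ? volume.poolId() : volume.lastPoolId();
        if (poolId == null) {
            return Optional.empty();
        }
        return poolDao.findByIdIncludingRemoved(poolId);
    }
}
```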
* NPE fix while deleting storage pool when pool has detached volumes
* review
* unit tests
* Added log for volumes not attached to any VMs
* update filter, log and test
* updated volume DAO method names that return non-destroyed volumes
* build fix
---------
Co-authored-by: dahn <daan@onecht.net>