A call to client.setBasePath() would overwrite the Linstor controller IP/host
for all current users of the shared client. This was effectively a race
condition that triggered as soon as two primary storages were configured
with different Linstor controllers.
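In sketch form, the problem and the per-pool fix (class and method names here are assumptions about a generated java-linstor client, not the plugin's actual code):

```java
import com.linbit.linstor.api.ApiClient;
import com.linbit.linstor.api.DevelopersApi;

public final class LinstorClientFactory {
    // Buggy pattern: one client instance shared by all Linstor primary storage pools.
    private static final ApiClient SHARED = new ApiClient();

    private LinstorClientFactory() { }

    public static DevelopersApi sharedClient(String controllerUrl) {
        // setBasePath() mutates the shared client, repointing the controller for
        // every in-flight caller: two pools with different controllers race here.
        SHARED.setBasePath(controllerUrl);
        return new DevelopersApi(SHARED);
    }

    // Fix: give each primary storage pool its own client instance.
    public static DevelopersApi perPoolClient(String controllerUrl) {
        ApiClient client = new ApiClient();
        client.setBasePath(controllerUrl); // no shared mutable state
        return new DevelopersApi(client);
    }
}
```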
* New feature: Change storage pool scope
* Added checks for Ceph/RBD
* Update op_host_capacity table on primary storage scope change
* Storage pool scope change integration test
* Pull 8875: Addressed review comments
* Pull 8875: Remove storage checks, AbstractPrimaryStorageLifeCycleImpl class
* Pull 8875: Fixed integration test failure
* Pull 8875: Review comments
* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions
* Added UT for changeStoragePoolScope
* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Dao review comments
* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue
* Pull 8875: Created a new smoke test file + a single warning msg in UI
* Pull 8875: Added cleanup in test_primary_storage_scope.py
* Pull 8875: Typo in en.json
* Pull 8875: cleanup array in test_primary_storage_scope.py
* Pull 8875: Remove extra whitespace at EOF of StorageManagerImplTest
* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Added license header
* Pull 8875: Fixed SQL query for VM states
* Pull 8875: Changed icon plus info on disabled mode in apidoc
* Pull 8875: Change scope should not work for local storage
* Pull 8875: Change scope completion event
* Pull 8875: Added api findAffectedVmsForStorageScopeChange
* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster
* Pull 8875: Review comments + Vm name in response
* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because multiple volumes in a VM satisfied the criteria
* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT
* listAffectedVmsForStorageScopeChange should work if the pool is not disabled
* Fix listAffectedVmsForStorageScopeChangeTest UT
* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query
* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
* Fix EOF
* changeStoragePoolScopeToZone should connect pool to all Up hosts
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* linstor: update to java-linstor 0.5.1
* linstor: Support VM-Instance Disk snapshots
This adds VM-instance disk snapshot support for
Linstor primary storage. Instance snapshots are stored on
the backing Linstor storage pool and can be converted
into regular volume snapshots or reverted.
VM-instance snapshots are not fully atomic, but with the
create-multi-snapshot feature they are as close as it gets:
the snapshots of all volumes are taken in the same devicemanager run.
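A hedged sketch of the multi-snapshot idea; CreateMultiSnapshotRequest, Snapshot and createMultiSnapshot are illustrative stand-ins, not necessarily the actual java-linstor 0.5.1 names:

```java
import java.util.ArrayList;
import java.util.List;

public class InstanceSnapshotSketch {
    // All type and method names below are illustrative stand-ins.
    void snapshotInstanceDisks(DevelopersApi api, List<String> rscNames,
                               String snapName) throws ApiException {
        List<Snapshot> snaps = new ArrayList<>();
        for (String rsc : rscNames) {
            Snapshot s = new Snapshot();
            s.setResourceName(rsc); // one snapshot entry per instance volume
            s.setName(snapName);
            snaps.add(s);
        }
        CreateMultiSnapshotRequest req = new CreateMultiSnapshotRequest();
        req.setSnapshots(snaps);
        // A single request lets Linstor create every snapshot within the same
        // devicemanager run: not fully atomic, but as close as it gets.
        api.createMultiSnapshot(req);
    }
}
```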
* 4.19:
linstor: disconnect-disk also search for resource name in Linstor (#9035)
ui: add support to change Account role for admins (#9012)
Use parameter dcId as wrapper to prevent NPE (#8986)
disconnectPhysicalDisk(String, KVMStoragePool) seems to call the plugin
with the resource name instead of the device path, so we also have
to search for resource names while cleaning up.
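The fallback could look roughly like this; lookupDevicePathByResourceName and disconnectDevice are hypothetical helpers, not the plugin's actual methods:

```java
import com.cloud.hypervisor.kvm.storage.KVMStoragePool;

public class DisconnectSketch {
    // Sketch: the argument may be a device path or a Linstor resource name.
    public boolean disconnectPhysicalDisk(String pathOrName, KVMStoragePool pool) {
        String device = pathOrName;
        if (!pathOrName.startsWith("/dev/")) {
            // Not a device path: try resolving it as a Linstor resource name.
            device = lookupDevicePathByResourceName(pathOrName); // hypothetical
            if (device == null) {
                return false; // unknown resource, nothing to clean up
            }
        }
        return disconnectDevice(device); // hypothetical
    }

    // Hypothetical stubs standing in for the real Linstor queries/cleanup.
    private String lookupDevicePathByResourceName(String rscName) {
        return null; // would ask the Linstor controller for the volume's device
    }

    private boolean disconnectDevice(String device) {
        return true; // would tear down the local resource for this device
    }
}
```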
For live migration we need the allow-two-primaries option,
but we don't know for certain whether we are being called for a
migration operation. Now we additionally check whether at least one of the
resources is in use somewhere, and only then set the option.
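In sketch form; isInUseElsewhere and setRscDfnProperty are illustrative helpers, though "DrbdOptions/Net/allow-two-primaries" is the real DRBD option key:

```java
public class AllowTwoPrimariesSketch {
    // Only enable a second primary when the resource is already in use on
    // another node, which is the signal that a live migration is starting.
    void maybeAllowTwoPrimaries(String rscName) {
        if (isInUseElsewhere(rscName)) {
            // DRBD net option permitting two primaries during live migration.
            setRscDfnProperty(rscName, "DrbdOptions/Net/allow-two-primaries", "yes");
        }
    }

    private boolean isInUseElsewhere(String rscName) {
        return false; // would check Linstor resource states for in-use flags
    }

    private void setRscDfnProperty(String rscName, String key, String value) {
        // would set the property on the resource definition via the API
    }
}
```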
* linstor: factor out getting storage pools from a resource group into a function
* linstor: move getHostname() to kvm/Pool and reimplement
* linstor: implement CloudStack HA support