- Enables using worker threads for parallel connection of hosts to a
storage pool
- HostDaoImpl refactorings
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
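A minimal sketch of the worker-thread approach described above, assuming a hypothetical connectHostToStoragePool(hostId, poolId) helper in place of the real CloudStack connect path; the pool size and timeout are illustrative only.
```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelPoolConnector {

    // Hypothetical stand-in for the real per-host connect logic.
    private void connectHostToStoragePool(long hostId, long poolId) {
        System.out.printf("Connecting host %d to storage pool %d%n", hostId, poolId);
    }

    // Connect all hosts to the pool through a bounded worker pool instead of a serial loop.
    public void connectHostsToPool(List<Long> hostIds, long poolId, int workerThreads) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(workerThreads);
        for (Long hostId : hostIds) {
            workers.submit(() -> connectHostToStoragePool(hostId, poolId));
        }
        workers.shutdown();                              // no new tasks; let submitted ones finish
        workers.awaitTermination(10, TimeUnit.MINUTES);  // illustrative upper bound on the wait
    }
}
```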
* Mitigation for non-scalable PowerFlex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections: check the client limit, and prepare/unprepare the SDC on hosts.
- Added commands to prepare and unprepare storage clients, which start and stop the SDC service on the hosts respectively.
- Introduced the storage-pool-level config 'storage.pool.connected.clients.limit' for client limits, currently supported for PowerFlex only (see the ConfigKey sketch after this commit message).
* tests issue fixed
* refactor / improvements
* Lock on the PowerFlex system ID while checking the connection limit
* Updated the PowerFlex system ID lock to be held until SDC preparation completes
* Added custom stats support for storage pool, through listStoragePools API
* code improvements, and unit tests
* Update config 'storage.pool.connected.clients.limit' to be dynamic, and some improvements
* Stop SDC on host after migration if no volumes mapped to host
* Wait for SDC to connect after scini service start, and some log improvements
* Do not throw exception (log it) when SDC is not connected while revoking access for the powerflex volume
* some log improvements
* ScaleIO volume live migration - use usable bytes from source disk to format the destination disk
* Don't abort the block copy job when cur and end are 0
* code improvements
* Fix resource count discrepancies
* Fixup while removing vm
* Fix discrepancies when starting VMs
* Fixup tests
* Fixups
* Don't take lock when amount is negative
* Check if a volume on the datastore requires access for migration, and grant/revoke volume access if required
* Updated default implementation for requiresAccessForMigration method in PrimaryDataStoreDriver
* StoragePoolType as a class
* Fix agent side StoragePoolType enum to class
* Handle StoragePoolType for StoragePoolJoinVO
* Since StoragePoolType is now a class, it cannot be mapped with the @Enumerated annotation.
Implemented a converter class and the logic to use the @Convert annotation instead (see the converter sketch after this commit message).
* Fix UserVMJoinVO for StoragePoolType
* fixed missing imports
* Since StoragePoolType is now a class, it cannot be mapped with the @Enumerated annotation.
Implemented a converter class and the logic to use the @Convert annotation instead.
* Fixed equals() for the StoragePoolType class (formerly an enum).
* Removed an unneeded try/catch for prepareAttribute
* Added license to the file.
* Implemented "supportsPhysicalDiskCopy" for storage adaptor. (#352)
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
* Add javadoc to StoragePoolType class
* Add unit test for StoragePoolType comparisons
* StoragePoolType "==" and ".equals()" fix.
* Fix for abstract storage adaptor set up issue
* review comments
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
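Below is a hedged sketch of how a storage-pool-scoped, dynamic setting such as 'storage.pool.connected.clients.limit' (introduced above) can be declared with CloudStack's ConfigKey; the category, default value, description and the exact constructor variant used here are assumptions, not the committed code.
```java
import org.apache.cloudstack.framework.config.ConfigKey;

public interface ScaleIOSDCManager {
    // Assumed declaration; a value of 0 is treated here as "no limit" (assumption).
    ConfigKey<Integer> ConnectedClientsLimit = new ConfigKey<>("Storage", Integer.class,
            "storage.pool.connected.clients.limit",
            "0",
            "Maximum number of connected SDC clients allowed per PowerFlex storage pool (0 means unlimited)",
            true,                       // dynamic: can be changed without a management server restart
            ConfigKey.Scope.StoragePool);
}
```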
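The following is a simplified, hedged sketch (not the actual CloudStack classes) of the converter and comparison points above: a StoragePoolType class that keeps canonical instances and compares by name, so both "==" against the constants and ".equals()" behave, plus a javax.persistence AttributeConverter used via @Convert in place of @Enumerated.
```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Simplified stand-in: canonical instances live in a registry, so "==" against the
// well-known constants keeps working, and equals() compares by name so two instances
// carrying the same name are interchangeable.
class StoragePoolType {
    private static final Map<String, StoragePoolType> TYPES = new ConcurrentHashMap<>();

    public static final StoragePoolType NetworkFilesystem = valueOf("NetworkFilesystem");
    public static final StoragePoolType PowerFlex = valueOf("PowerFlex");

    private final String name;

    private StoragePoolType(String name) {
        this.name = name;
    }

    // Returns the canonical instance for a name, registering it if needed (this is what
    // lets storage plugins add their own pool types).
    public static StoragePoolType valueOf(String name) {
        return TYPES.computeIfAbsent(name, StoragePoolType::new);
    }

    public String name() {
        return name;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        if (!(other instanceof StoragePoolType)) {
            return false;
        }
        return name.equals(((StoragePoolType) other).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name);
    }
}

// Since @Enumerated only works for real enums, entity fields of this type are mapped with
// @Convert(converter = StoragePoolTypeConverter.class) backed by a converter like this one.
@Converter
class StoragePoolTypeConverter implements AttributeConverter<StoragePoolType, String> {
    @Override
    public String convertToDatabaseColumn(StoragePoolType type) {
        return type == null ? null : type.name();
    }

    @Override
    public StoragePoolType convertToEntityAttribute(String dbValue) {
        return dbValue == null ? null : StoragePoolType.valueOf(dbValue);
    }
}
```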
Making a diskful resource was meant as an optimization,
but it cannot work on non-hyperconverged setups,
as the (diskful) storage nodes are not part of the CloudStack cluster.
(cherry picked from commit 67cb9b9e40)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
A TODO was overlooked and never implemented,
which could trigger the following bug:
if Linstor didn't create a resource (diskless or diskful) on
the node chosen by CloudStack, it would not be able to copy the
template data there; it seems no error was
triggered and the new template file silently became
empty/corrupt.
(cherry picked from commit 4a86a0d233)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
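A rough sketch of the check the TODO called for, with hypothetical helpers (resourceExistsOnNode, createDisklessResource) standing in for the actual Linstor API calls; the point is simply to guarantee a resource exists on the chosen node and to fail loudly rather than write an empty template.
```java
public class LinstorResourcePlacementCheck {

    // Hypothetical placeholders for the Linstor client calls used by the driver.
    private boolean resourceExistsOnNode(String rscName, String nodeName) {
        return false; // placeholder only
    }

    private void createDisklessResource(String rscName, String nodeName) {
        // placeholder only
    }

    // Before copying template data, make sure the node CloudStack picked can reach the
    // resource; create a diskless resource if needed and fail loudly otherwise.
    public void ensureResourceOnNode(String rscName, String nodeName) {
        if (!resourceExistsOnNode(rscName, nodeName)) {
            createDisklessResource(rscName, nodeName);
        }
        if (!resourceExistsOnNode(rscName, nodeName)) {
            throw new IllegalStateException("Linstor resource " + rscName
                    + " is not available on node " + nodeName + "; refusing to copy template data");
        }
    }
}
```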
This fixes the following cases in which Solidfire storage integration
caused issues when using Solidfire data disks with VMware:
1. Take Volume Snapshot of Solidfire data disk
2. Delete an active Instance with Solidfire data disk attached
3. Attach used existing Solidfire data disk to a running/stopped VM
4. Stop and Start an instance with Solidfire data disks attached
5. Expand disk by resizing Solidfire data disk by providing size
6. Expand disk by changing disk offering for the Solidfire data disk
Additional changes:
- Use VMFS6 as the managed datastore type if the host supports it
- Refactor detection and splitting of managed storage ds name in storage
processor
- Restrict storage rescanning for managed datastore when resizing
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
steps to reproduce the issue:
- git clone https://github.com/apache/cloudstack.git
- cd cloudstack
- rm -rf .git/
- run `mvn -P developer,systemvm clean install`
Without this PR, it fails with the following error:
```
> [ 8/10] RUN mvn -Pdeveloper -Dsimulator -DskipTests clean install:
668.1 [ERROR] Failed to execute goal pl.project13.maven:git-commit-id-plugin:4.9.10:revision (get-the-git-infos) on project cloud-plugin-storage-volume-storpool: .git directory is not found! Please specify a valid [dotGitDirectory] in your pom.xml -> [Help 1]
```
* Removed the hardcoded StorPool endpoint from tests
- removed the hardcoded endpoint of the StorPool primary storage from tests
- added the git commit information into the maven build
* Convert indents to spaces
* update git-commit-id-plugin version
This PR fixes an intermittent issue where the SDC ID (local_path) gets deleted and is not repopulated when the host connects back again.
The fix is to remove the code that deletes the records from the storage_pool_host_ref table. The entry is updated anyway when the SDC ID changes during an agent restart, which is required to pick up new connections. I've verified the host delete scenario to check the behaviour of the storage_pool_host_ref entries; they are deleted as expected.
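A hedged sketch of the behaviour described above, using simplified stand-ins rather than the real storage_pool_host_ref VO/DAO types: on host (re)connect the row is kept, and local_path (the SDC ID) is only updated when the reported value changes.
```java
// Simplified stand-ins for the real storage_pool_host_ref VO/DAO types.
class PoolHostEntry {
    String localPath; // holds the SDC ID for PowerFlex pools
}

interface PoolHostDao {
    PoolHostEntry findByPoolHost(long poolId, long hostId);
    void persist(long poolId, long hostId, String localPath);
    void update(long poolId, long hostId, PoolHostEntry entry);
}

class SdcReconnectHandler {
    private final PoolHostDao poolHostDao;

    SdcReconnectHandler(PoolHostDao poolHostDao) {
        this.poolHostDao = poolHostDao;
    }

    // Keep the existing row and only refresh local_path when the SDC ID reported by the
    // reconnecting host differs, instead of deleting the row and losing the SDC ID.
    void onHostConnect(long poolId, long hostId, String reportedSdcId) {
        PoolHostEntry entry = poolHostDao.findByPoolHost(poolId, hostId);
        if (entry == null) {
            poolHostDao.persist(poolId, hostId, reportedSdcId);
        } else if (!reportedSdcId.equals(entry.localPath)) {
            entry.localPath = reportedSdcId;
            poolHostDao.update(poolId, hostId, entry);
        }
    }
}
```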
Supported Virtual machine operations:
- live migration of VM to another host
- virtual machine snapshots (group snapshot without memory)
- revert VM snapshot
- delete VM snapshot
Supported Volume operations:
- attach/detach volume
- live migrate volume between two StorPool primary storages
- volume snapshot
- delete snapshot
- revert snapshot
* Live storage migration of a volume within the same ScaleIO storage cluster
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* working blockcopy api in libvirt
* Checking block copy status (see the status-check sketch after this list)
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check whether a volume belongs to the same or a different ScaleIO storage cluster
* Unit tests for volume live migration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unit tests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets during volume migration if they do not exist; secrets get cleared on package upgrades.
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion
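A hedged sketch of the block copy status check referenced in the list above, using a hypothetical BlockJobInfo holder instead of the real libvirt Java binding types: a job reporting cur == 0 and end == 0 has not started reporting progress yet and must not be treated as failed or aborted, while cur == end with end > 0 means the mirror is complete and the copy can be pivoted.
```java
// Hypothetical holder mirroring the cur/end progress fields of a libvirt block copy job.
class BlockJobInfo {
    long cur; // bytes processed so far
    long end; // total bytes to process
}

enum BlockCopyState { STARTING, IN_PROGRESS, READY_TO_PIVOT }

class BlockCopyMonitor {
    // Interpret job progress; crucially, cur == 0 && end == 0 means the job is still
    // initializing, so the copy job must not be aborted at that point.
    static BlockCopyState stateOf(BlockJobInfo info) {
        if (info.cur == 0 && info.end == 0) {
            return BlockCopyState.STARTING;
        }
        if (info.cur == info.end) {
            return BlockCopyState.READY_TO_PIVOT; // fully mirrored; safe to pivot to the destination
        }
        return BlockCopyState.IN_PROGRESS;
    }
}
```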
This PR has 3 improvements for the Linstor primary storage driver:
- Create a separate jar for it and move all Linstor-related classes into the correct project (similar to the StorPool plugin)
- Add aux properties for CloudStack volumes in Linstor to make them easier to identify in Linstor
- Add support for IOPS settings with the Linstor storage plugin