* linstor: enable discard for Linstor storage pools
All Linstor storage backends support discard, so it can be safely enabled.
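For context, on KVM discard support ultimately surfaces as the libvirt `discard='unmap'` driver attribute on the disk definition. A minimal illustrative sketch of that idea (not CloudStack's actual `LibvirtVMDef.DiskDef` API):
```
// Illustration only: how a disk definition can expose discard support.
// CloudStack's real LibvirtVMDef.DiskDef API may differ in names/signatures.
public class DiskDefSketch {
    public enum DiscardType { IGNORE, UNMAP }

    private DiscardType discard = DiscardType.IGNORE;

    public void setDiscard(DiscardType discard) {
        this.discard = discard;
    }

    // Renders the libvirt driver element; with UNMAP, guest TRIM/discard
    // requests are passed through to the Linstor-backed device.
    public String toDriverXml() {
        return "<driver name='qemu' type='raw' discard='"
                + discard.name().toLowerCase() + "'/>";
    }

    public static void main(String[] args) {
        DiskDefSketch disk = new DiskDefSketch();
        disk.setDiscard(DiscardType.UNMAP); // safe for all Linstor backends
        System.out.println(disk.toDriverXml());
    }
}
```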
This PR fixes an issue with the Sonar check:
```
Error: Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar (default-cli) on project cloudstack:
Error:
Error: The version of Java (11.0.22) used to run this analysis is deprecated, and SonarCloud no longer supports it. Please upgrade to Java 17 or later.
Error: You can find more information here: https://docs.sonarsource.com/sonarcloud/appendices/scanner-environment/
```
Main changes:
- Support build/packaging using JDK17
- Still supports JDK11 for building
- Support JRE17 for use in production installation
- Drop EL7 support
The community packages will still be packaged using JDK11.
Users can build with JDK17 as well if they want.
Signed-off-by: Wei Zhou <wei.zhou@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The client.setBasePath() call would overwrite the Linstor controller IP/host
for all current users of the client. This was basically a race condition
that was triggered as soon as you had configured 2 different primary storages
with different Linstor controllers.
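The shape of the problem, as a hypothetical sketch (not the actual Linstor client API): one shared client whose base path is mutated per call races with concurrent users, which is why per-pool clients are needed.
```
// Hypothetical sketch of the race: not the actual Linstor client API.
import java.util.concurrent.*;

class SharedApiClient {
    private volatile String basePath;
    void setBasePath(String p) { basePath = p; }   // mutates shared state
    String get() { return basePath; }              // may see another pool's host
}

public class BasePathRaceSketch {
    public static void main(String[] args) throws Exception {
        SharedApiClient shared = new SharedApiClient();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Two primary storages, two controllers: each call flips the shared
        // base path, so a request can be sent to the wrong controller.
        for (String host : new String[]{"controller-a", "controller-b"}) {
            pool.submit(() -> {
                shared.setBasePath(host);
                // ... window: another thread may call setBasePath here ...
                System.out.println("sending request to " + shared.get());
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```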
* New feature: Change storage pool scope
* Added checks for Ceph/RBD
* Update op_host_capacity table on primary storage scope change
* Storage pool scope change integration test
* Pull 8875: Addressed review comments
* Pull 8875: remove storage checks, AbstractPrimaryStorageLifeCycleImpl class
* Pull 8875: Fixed integration test failure
* Pull 8875: Review comments
* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions
* Added UT for changeStoragePoolScope
* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Dao review comments
* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue
* Pull 8875: Created a new smokes test file + A single warning msg in ui
* Pull 8875: Added cleanup in test_primary_storage_scope.py
* Pull 8875: Typo in en.json
* Pull 8875: cleanup array in test_primary_storage_scope.py
* Pull 8875: Removing extra whitespace at eof of StorageManagerImplTest
* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Added license header
* Pull 8875: Fixed sql query for vmstates
* Pull 8875: Changed icon plus info on disabled mode in apidoc
* Pull 8875: Change scope should not work for local storage
* Pull 8875: Change scope completion event
* Pull 8875: Added api findAffectedVmsForStorageScopeChange
* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster
* Pull 8875: Review comments + Vm name in response
* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because of multiple volumes in the VM satisfying the criteria
* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT
* listAffectedVmsForStorageScopeChange should work if the pool is not disabled
* Fix listAffectedVmsForStorageScopeChangeTest UT
* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query
* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
* fix eof
* changeStoragePoolScopeToZone should connect pool to all Up hosts
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Mitigation for non-scalable Powerflex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections: it checks the client limit, and prepares and unprepares SDC on the hosts.
- Added commands for preparing and unpreparing storage clients, which prepare/start and stop the SDC service respectively on the hosts.
- Introduced config 'storage.pool.connected.clients.limit' at the storage level for client limits, currently supported for PowerFlex only.
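Such a limit is typically declared as a storage-pool-scoped, dynamic ConfigKey; a sketch in which the category, default, and description are assumptions:
```
// Sketch: declaring a storage-pool-scoped, dynamic config key in CloudStack.
// The real declaration in ScaleIOSDCManager may differ in category/default/description.
import org.apache.cloudstack.framework.config.ConfigKey;

public interface SDCManagerConfigSketch {
    ConfigKey<Integer> ConnectedClientsLimit = new ConfigKey<>(
            "Storage",                                 // category (assumed)
            Integer.class,
            "storage.pool.connected.clients.limit",    // name from the changelog above
            "0",                                       // default: 0 = unlimited (assumed)
            "Maximum connected storage clients (SDCs) allowed per storage pool;"
                + " currently honored by PowerFlex only.",
            true,                                      // dynamic: updatable without restart
            ConfigKey.Scope.StoragePool);
}
```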
* tests issue fixed
* refactor / improvements
* lock with powerflex systemid while checking connections limit
* updated powerflex systemid lock to hold till sdc preparation
* Added custom stats support for storage pool, through listStoragePools API
* code improvements, and unit tests
* unit tests fixes
* Update config 'storage.pool.connected.clients.limit' to dynamic, and some improvements
* Stop SDC on host after migration if no volumes mapped to host
* Wait for SDC to connect after scini service start, and some log improvements
* Do not throw exception (log it) when SDC is not connected while revoking access for the powerflex volume
* some log improvements
* Restart agent when host comes out of maintenance
* Don't send CreateStoragePoolCommand to hosts in maintenance mode
* CreateStoragePoolCommand can run when host in maintenance. Reverted the change to restart agent when host was already up and in maintenance
* Reverted changes done to ResourceManagerImplTest
* Temporarily backup StorPool volume before expunge
Sometimes users delete volumes by mistake. This enhancement
provides a solution to back up the volume before it is deleted. The user
will be able to see the snapshot in the CloudStack UI/CLI and create a
volume from it (only volume creation is supported).
A task will check (by default, every 5 minutes) whether the snapshots have been
deleted from StorPool.
Global settings to enable the delayed-delete option:
`storpool.delete.after.interval` - The interval (in seconds) after which the StorPool snapshot will be deleted
`storpool.list.snapshots.delete.after.interval` - The interval (in seconds) at which to fetch the StorPool snapshots with the deleteAfter flag
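For illustration, the sweep described above could be shaped like this periodic task; this is a minimal sketch with illustrative names, not the plugin's actual classes:
```
// Sketch of the background cleanup described above: a periodic task that
// lists StorPool snapshots flagged with deleteAfter and removes the ones
// whose deadline has passed. Names are illustrative, not the plugin's API.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeleteAfterSweepSketch {
    interface SnapshotStore {                 // stand-in for the StorPool client
        java.util.List<String> listSnapshotsWithDeleteAfterDue();
        void delete(String snapshotName);
    }

    static void start(SnapshotStore store, long intervalSeconds) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        // storpool.list.snapshots.delete.after.interval controls this period
        ses.scheduleAtFixedRate(() -> {
            for (String snap : store.listSnapshotsWithDeleteAfterDue()) {
                store.delete(snap);
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}
```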
Minor fix when deleting snapshots
* added Apache License header
* addressed comments
* Ability to specify NFS mount options while adding a primary storage and modify it later
* Pull 8947: Rename all occurrences of nfsopt to nfsMountOpt and added nfsMountOpts to ApiConstants
* Pull 8947: Refactor code - move into separate methods
* Pull 8947: CollectionsUtils.isNotEmpty and switch statement in LibvirtStoragePoolDef.java
* Pull 8947: UI - cancel maintenance will remount the storage pool and apply the options
* Pull 8947: UI - moved edit NFS mount options to edit Primary Storage form
* Pull 8947: UI - moved 'NFS Mount Options' to below 'Type' in dataview
* Pull 8947: Fixed message in AddPrimaryStorage.vue
* Pull 8947: Convert _nfsmountOpts to Set in LibvirtStoragePoolDef
* Pull 8947: Throw exception and log error if mount fails due to incorrect mount option
* Pull 8947: Added UT and moved integration test to component/maint
* Pull 8947: Review comments
* Pull 8947: Removed password from integration test
* Pull 8947: move details allocation to inside the if block in getStoragePoolNFSMountOpts
* Pull 8947: Fixed a bug in AddPrimaryStorage.vue
* Pull 8947: Pool should remain in maintenance mode if mount fails
* Pull 8947: Removed password from integration test
* Pull 8947: Added UT
* Pull 8875: Fixed a bug in CloudStackPrimaryDataStoreLifeCycleImplTest
* Pull 8875: Fixed a bug in LibvirtStoragePoolDefTest
* Pull 8947: minor code restructuring
* Pull 8947 : added some ut for coverage
* Fix LibvirtStorageAdapterTest UT
* Updates to change Pure and Primera to host-centric vlun assignments; various small bug fixes
* update to add timestamp when deleting pure volumes to avoid future conflicts
* update to migrate to properly check disk offering is valid for the target storage pool
* improve error handling when copying volumes to add precision to which step failed
* rename pure volume before delete to avoid conflicts if the same name is used before its expunged on the array
* remove dead code in AdaptiveDataStoreLifeCycleImpl.java
* Fix issues found in PR checks
* fix session refresh TTL logic
* updates from PR comments
* logic to delete by path ONLY on supported OUI
* fix to StorageSystemDataMotionStrategy compile error
* change noisy debug message to trace message
* fix double callback call in handleVolumeMigrationFromNonManagedStorageToManagedStorage
* fix for flash array delete error
* fix typo in StorageSystemDataMotionStrategy
* change copyVolume to use writeback to speed up copy ops
* remove returning PrimaryStorageDownloadAnswer when connectPhysicalDisk returns false during KVMStorageProcessor template copy
* remove change to only set UUID on snapshot if it is a vmSnapshot
* reverting change to UserVmManagerImpl.configureCustomRootDiskSize
* add error checking/simplification per comments from @slavkap
* Update engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* address PR comments from @sureshanaparti
---------
Co-authored-by: GLOVER RENE <rg9975@cs419-mgmtserver.rg9975nprd.app.ecp.att.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* linstor: update to java-linstor 0.5.1
* linstor: Support VM-Instance Disk snapshots
This adds VM-Instance disk snapshot support for
Linstor primary storage. Instance snapshots are stored on
the Linstor storage pool backend in use and can be converted
into regular volume snapshots and also reverted.
Instance VM snapshots are not fully atomic, but with the
create-multi-snapshot feature they are as close to atomic as it gets:
snapshots are taken over multiple volumes in the same devicemanager run.
* 4.19:
linstor: disconnect-disk also search for resource name in Linstor (#9035)
ui: add support to change Account role for admins (#9012)
Use parameter dcId as wrapper to prevent NPE (#8986)
disconnectPhysicalDisk(String, KVMStoragePool) seems to call the plugin
with the resource name instead of the device path, so we also have
to search for resource names while cleaning up.
For live migration we need the allow-two-primaries option,
but we don't know exactly whether we are called for a migration operation.
Now we also check whether at least one of the resources is in use somewhere, and
only then set the option.
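Conceptually, the check looks like this (hypothetical helper types, not the java-linstor API):
```
// Hypothetical sketch: only set allow-two-primaries when some resource of the
// volume is already in use (i.e., a live migration is plausibly in progress).
import java.util.List;

public class AllowTwoPrimariesSketch {
    interface ResourceView {            // stand-in for a Linstor resource view
        boolean isInUse();              // primary/open somewhere
    }

    static boolean shouldAllowTwoPrimaries(List<ResourceView> resources) {
        // An attach on a second host with no active user is a normal attach,
        // not a migration; only an in-use resource indicates live migration.
        return resources.stream().anyMatch(ResourceView::isInUse);
    }
}
```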
* storage,plugins: delegate allow zone-wide volume migration check and access grant to storage drivers
The following checks have been delegated to storage drivers:
- For volumes on zone-wide storage, whether they need storage migration when the VM is migrated
- Whether the volume requires grant access
Apply fixes in resolving PrimaryDataStore
* add tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unused import
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update engine/orchestration/src/test/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestratorTest.java
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* 4.18:
Storage plugin support to check if volume on datastore requires access for migration (#8655)
CKS: fix /opt/bin/deploy-cloudstack-secret in CKS control nodes (#8697)
* Check if a volume on a datastore requires access for migration, and grant/revoke volume access if required
* Updated default implementation for requiresAccessForMigration method in PrimaryDataStoreDriver
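A sketch of the delegation point; the method name comes from this changelog, while the parameter types here are placeholders rather than the real signature:
```
// Sketch of the driver hook: the default says "no access needed", and a
// driver such as PowerFlex/ScaleIO can override it to demand grant/revoke
// around migration. Parameter types are placeholders, not the real signature.
interface PrimaryDataStoreDriverSketch {
    default boolean requiresAccessForMigration(Object volume, Object dataStore) {
        return false; // most drivers: the hypervisor can already reach the volume
    }
}

class ScaleIODriverSketch implements PrimaryDataStoreDriverSketch {
    @Override
    public boolean requiresAccessForMigration(Object volume, Object dataStore) {
        return true; // SDC mapping must be granted before the migration starts
    }
}
```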
* linstor: Extract getting storage pools from a resource group into a function
* linstor: move getHostname() to kvm/Pool and reimplement
* linstor: implement CloudStack HA support
* linstor: Add util method getBestErrorMessage from main
* linstor: failed removal of allow-two-primaries is not a fatal error
* linstor: Fix failure if a Linstor node is down while migrating
If a Linstor node is down while migrating a resource, setting the allow-two-primaries
property will fail because we can't reach the node that is down. But it will
still set the property on the other nodes, and migration should work.
We now just report an error instead of failing completely.
* Normalize logs
All classes that could inherit their loggers from their parent classes had their own loggers deleted;
Most loggers didn't need to be static, so most of them were normalized so that they aren't;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
New class DbUpgradeAbstractImpl created so that all Upgraders extend it and inherit its logger. The resulting pattern is sketched below.
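A minimal sketch of the normalized pattern (log4j2; class names illustrative):
```
// The normalized pattern: a protected, non-static logger initialized from the
// runtime class, so subclasses automatically log under their own class name.
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

class BaseService {
    protected Logger logger = LogManager.getLogger(getClass());
}

class ChildService extends BaseService {
    void doWork() {
        // No logger declared here: entries are attributed to ChildService.
        logger.debug("working");
    }
}
```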
* Upgrade log4j
* fix errors caused by the merge
* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files
* fix sonarcloud bug
* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit
* Readd snmpTrapAppender, remove TestAppender
* Regenerate changes
* regenerate changes
* refactor last custom appender
* fix systemvm configuration xml
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Fix utils pom
* fix some tests
* regenerate changes
* Fix jar being printed on exception
* fix logging in system VMs, fix commands not having log4j2 classpath.
* regenerate changes
* Fix some unwanted renames
* fix end of file
* regenerate changes
* regenerate changes
* fix merge error
* regenerate changes
* fix tests
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* readd reload4j to tungsten as juniper depends on it
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* re-add reload4j dependency to network-contrail, as juniper depends on it
* regenerate changes
* regenerate changes
* regenerate changes
* fix typo
* regenerate changes
* regenerate changes
* Fix end of files
* regenerate changes
* add log4j2 to cloud-utils-SHADED.jar
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* fix some tests
* Regenerate changes
* Regenerate changes
* fix test
* Regenerate changes
* Regenerate changes
* StoragePoolType as a class
* Fix agent side StoragePoolType enum to class
* Handle StoragePoolType for StoragePoolJoinVO
* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented a converter class and logic to utilize the @Convert annotation.
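The standard JPA shape for such a converter looks roughly like this; a sketch assuming StoragePoolType kept enum-like name()/valueOf() accessors:
```
// Sketch: since StoragePoolType is now a class, @Enumerated no longer applies;
// a JPA AttributeConverter maps it to its name for the database column.
// Assumes StoragePoolType retains enum-like name()/valueOf() accessors.
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

import com.cloud.storage.Storage.StoragePoolType;

@Converter
public class StoragePoolTypeConverter implements AttributeConverter<StoragePoolType, String> {

    @Override
    public String convertToDatabaseColumn(StoragePoolType type) {
        return type == null ? null : type.name();
    }

    @Override
    public StoragePoolType convertToEntityAttribute(String name) {
        return name == null ? null : StoragePoolType.valueOf(name);
    }
}
```
An entity field would then use `@Convert(converter = StoragePoolTypeConverter.class)` instead of `@Enumerated`.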
* Fix UserVMJoinVO for StoragePoolType
* fixed missing imports
* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented a converter class and logic to utilize the @Convert annotation.
* Fixed equals for the enum.
* removed not needed try/catch for prepareAttribute
* Added license to the file.
* Implemented "supportsPhysicalDiskCopy" for storage adaptor.
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
* Add javadoc to StoragePoolType class
* Add unit test for StoragePoolType comparisons
* StoragePoolType "==" and ".equals()" fix.
* Fix StoragePoolType for FiberChannelAdapter
* Fix for abstract storage adaptor set up issue
* review comments
* Pass StoragePoolType object for poolType dao attribute
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@gmail.com>
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over fiber channel connections; it requires multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated from the connector used to communicate with the primary storage provider, using a ProviderAdapter interface; this allows the code interacting with the primary storage provider APIs to be simpler and have no direct dependencies on CloudStack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HP Enterprise Primera line of storage solutions and the Pure FlashArray line of storage solutions.
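Roughly, the decoupling looks like this (a hypothetical, trimmed-down method set; the real ProviderAdapter interface is larger):
```
// Hypothetical sketch of the ProviderAdapter idea: the adaptive provider
// talks to this small interface, and array-specific code (Primera, Pure
// FlashArray) implements it without depending on CloudStack internals.
interface ProviderAdapterSketch {
    String createVolume(String name, long sizeBytes);
    void deleteVolume(String externalId);
    void attachToHost(String externalId, String hostWwn); // host-centric vLUN
    void detachFromHost(String externalId, String hostWwn);
}
```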
If there is no access to the storage nodes, we now create a temporary resource from the snapshot and copy that data into the secondary storage. Revert works the same, except that we now additionally look for any Linstor agent node.
This also enables backup snapshot by default.
This whole BackupSnapshot functionality was introduced in 4.19,
so I would be happy if this could still be merged.
This PR adds a config option for the Linstor primary storage driver that allows you to automatically back up
volume snapshots to the secondary storage.
Additionally, it no longer bundles the needed java-linstor dependency into the client.jar, but instead just copies
the java-linstor.jar into lib.
The config option is called `lin.backup.snapshots` and defaults to false.
The scope of this change should be limited, as it only touches the Linstor driver, and the relevant part of copyAsync
was implemented with 2 new Linstor-specific commands.
This extends the current KVM Host HA functionality to the StorPool storage plugin, with an option for easy integration by the rest of the storage plugins to support Host HA.
This extension works like the current NFS storage implementation. It can be used simultaneously with NFS and StorPool storage, or with StorPool primary storage only.
If it is used with different primary storages, like NFS and StorPool, and one of the storage health checks fails, there is an option to report the failure to the management server via the global config kvm.ha.fence.on.storage.heartbeat.failure. By default this option is disabled; when it is enabled, the Host HA service will continue with the checks on the host and eventually fence it.
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.
Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The response for copySnapshot will return zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.
In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot will be taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.
As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.
`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.
Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.
`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
----
New API added
`copySnapshot`
Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form
Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
Making a diskful resource was meant as an optimization,
but it cannot work on non-hyperconverged setups,
as the (diskful) storage nodes are not part of the CloudStack cluster.
A TODO was overlooked and never implemented,
which could trigger the following bug:
if Linstor didn't create a resource (diskless or diskful) on
the node chosen by CloudStack, it would not be able to copy the
template data there; it even seems no error was
triggered, and the new template file silently just became
empty/corrupt.
This fixes the following cases in which Solidfire storage integration
caused issues when using Solidfire datadisks with VMware:
1. Take Volume Snapshot of Solidfire data disk
2. Delete an active Instance with Solidfire data disk attached
3. Attach used existing Solidfire data disk to a running/stopped VM
4. Stop and Start an instance with Solidfire data disks attached
5. Expand disk by resizing Solidfire data disk by providing size
6. Expand disk by changing disk offering for the Solidfire data disk
Additional changes:
- Use VMFS6 as the managed datastore type if the host supports it
- Refactor detection and splitting of managed storage ds name in storage
processor
- Restrict storage rescanning for managed datastore when resizing
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
steps to reproduce the issue:
- git clone https://github.com/apache/cloudstack.git
- cd cloudstack
- rm -rf .git/
- run `mvn -P developer,systemvm clean install`
Without this PR, it fails with the error
```
> [ 8/10] RUN mvn -Pdeveloper -Dsimulator -DskipTests clean install:
668.1 [ERROR] Failed to execute goal pl.project13.maven:git-commit-id-plugin:4.9.10:revision (get-the-git-infos) on project cloud-plugin-storage-volume-storpool: .git directory is not found! Please specify a valid [dotGitDirectory] in your pom.xml -> [Help 1]
```
* 4.18:
UI: allow new keys for VM details (#7793)
Refactoring StorPool's smoke tests (#7392)
UI: decode userdata in EditVM dialog (#7796)
packaging: unalias cp before package upgrade (#7722)
make NoopDbUpgrade do a systemvm template check (#7564)
UI unit test: fix expected values (#7792)
* Removed the hardcoded StorPool endpoint from tests
- removed the hardcoded endpoint of the StorPool primary storage from tests
- added the git commit information into the maven build
* Convert indents to spaces
* update git-commit-id-plugin version
* 4.18:
SSVM: 'allow from' private IP in other SSVMs if the public IP is in allowed internal sites cidrs (#7288)
eof added to StorPoolStatsCollector (#7754)
* 4.18:
Storage and volumes statistics tasks for StorPool primary storage (#7404)
proper storage construction (#6797)
guarantee MAC uniqueness (#7634)
server: allow migration of all VMs with local storage on KVM (#7656)
Add L2 networks to Zones with SG (#7719)
This PR fixes an intermittent issue where the SDC id (local_path) gets deleted and is not populated again when the host connects back.
The fix is to remove the code that deletes the records from the storage_pool_host_ref table. We update the entry anyway if the SDC ID changes during an agent restart, which is required in order to pick up the new connections. I've quickly verified the host delete scenario to check the behavior of the storage_pool_host_ref entries; they are getting deleted.
Supported Virtual machine operations:
- live migration of VM to another host
- virtual machine snapshots (group snapshot without memory)
- revert VM snapshot
- delete VM snapshot
Supported Volume operations:
- attach/detach volume
- live migrate volume between two StorPool primary storages
- volume snapshot
- delete snapshot
- revert snapshot
* Live storage migration of a volume in ScaleIO within the same ScaleIO storage cluster (a high-level sketch follows this commit list)
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* working blockcopy api in libvirt
* Checking block copy status
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check if volume belongs to same or different storage scaleio cluster
* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unittests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets if they do not exist during volume migrations; secrets are getting cleared on package upgrades.
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion
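At a high level, the migration path sketched by the commits above splits on whether source and destination belong to the same ScaleIO cluster (hypothetical helpers, not the plugin's actual classes):
```
// Hypothetical sketch of the routing decision described above: same ScaleIO
// cluster -> live migration via libvirt block copy; different clusters ->
// cross-cluster copy (carrying encryption details to the destination).
public class ScaleIOMigrationSketch {
    static boolean sameScaleIOCluster(String srcSystemId, String dstSystemId) {
        return srcSystemId != null && srcSystemId.equals(dstSystemId);
    }

    static String chooseStrategy(String srcSystemId, String dstSystemId, boolean vmRunning) {
        if (sameScaleIOCluster(srcSystemId, dstSystemId)) {
            return vmRunning ? "libvirt blockcopy (live)" : "offline volume copy";
        }
        return "cross-cluster copy (preserve encryption details)";
    }
}
```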