* Migrate volume improvements: bypass secondary storage when copying a volume directly between pools is allowed
* Bypass secondary storage when copying volumes between zone-wide pools and
- local storage on a host in the same zone
- cluster-wide pools in the same zone
* Bypass secondary storage for volumes on a ceph/rbd pool when the scope permits
* Fix destination disk format while migrating a volume from ceph/rbd to nfs, and some code improvements
* unit tests
* Update to suitable disk offering(s) for volume(s) after migrating a VM with volumes when the pool type (shared or local) changes
Currently, migrating a VM with volume(s) bypasses the service and disk offerings of the volumes, because the target pools for migration are specified explicitly, which ignores the offerings. An offering change is required when the pool type (shared or local) changes, mainly (see the sketch below)
- when a volume on a shared pool is migrated to a local pool
- when a volume on a local pool is migrated to a shared pool
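A minimal sketch of that shared/local match rule; the types here are illustrative stand-ins, not the actual CloudStack classes:
```
class OfferingPoolMatch {

    static class DiskOffering {
        final boolean useLocalStorage;
        DiskOffering(boolean useLocalStorage) { this.useLocalStorage = useLocalStorage; }
    }

    static class StoragePool {
        final boolean local;
        StoragePool(boolean local) { this.local = local; }
    }

    // A local-storage offering must map to a local pool and a shared
    // offering to a shared pool; on mismatch the offering has to be
    // swapped for a suitable one after migration.
    static boolean matches(DiskOffering offering, StoragePool pool) {
        return offering.useLocalStorage == pool.local;
    }
}
```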
* Return a proper message when migrating a volume if the target pool type and the offering type mismatch (one is shared, the other local)
* Consider host scope first during endpoint selection while copying between primary storages
* Update the disk offering count (for the listDiskOfferings API) when removing offerings whose tags mismatch the storage tags
* Fix deploying a VM with a snapshot that is copied to another zone
* Fix creating a StorPool volume from a snapshot if the size in the offering is bigger than the snapshot size
* Option to deploy a VM with existing volume/snapshot
* smoke test changes
  - check if the hypervisor is KVM
  - check if the primary storage's scope is zone-wide
* skip all tests if the storage isn't zone-wide or the hypervisor isn't KVM
* support StorPool tags
add StorPool tags to a volume created from a snapshot, or to a volume which
will be attached as ROOT to a new VM
* Add StorPool tags on the new ROOT volume
* Add StorPool tags when a volume is created from a snapshot or a
volume is attached as ROOT to a VM
* Addressed review
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, including those with or without a storage access group.
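A minimal sketch of that connectivity rule; the class and method names are hypothetical, not the actual CloudStack implementation:
```
import java.util.Collections;
import java.util.Set;

class StorageAccessGroupRule {
    // A pool without access groups connects to every host; a pool with
    // access groups connects only to hosts sharing at least one group.
    static boolean canConnect(Set<String> poolGroups, Set<String> hostGroups) {
        if (poolGroups == null || poolGroups.isEmpty()) {
            return true; // ungrouped pool: connect to all hosts
        }
        if (hostGroups == null || hostGroups.isEmpty()) {
            return false; // grouped pool never connects to ungrouped hosts
        }
        return !Collections.disjoint(poolGroups, hostGroups);
    }
}
```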
Dependency name changed from mockito-inline to mockito-core. Inline mocking is now the default, and the last released version of mockito-inline is 5.2.0.
assertj-core in user-authenticators/saml2 pulls in an incompatible version of byte-buddy and required an exclusion. Updating the version of assertj is left for a future PR.
The upgrade requires Java 11+, dropping support for Java 8. CloudStack documentation already says to use Java 11 and does not indicate that Java 8 is supported.
Test classes using @RunWith(MockitoJUnitRunner.class) now run in strict mode. Changes were made to tests where the stubbing intention was clear. In ManagementServerMaintenanceManagerImplTest there are 5 tests where the intention of the test is unclear; each of those statements now uses Mockito.lenient() to avoid the exception. Other cases in the tests follow a similar pattern.
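For illustration, the pattern applied looks like the following contrived test (not one of the actual CloudStack tests):
```
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class LenientStubExampleTest {
    @Mock
    private List<String> list;

    @Test
    public void testWithPossiblyUnusedStub() {
        // Under the strict runner an unused stub fails the test with
        // UnnecessaryStubbingException; lenient() keeps the stub without
        // enforcing that it is exercised.
        Mockito.lenient().when(list.size()).thenReturn(5);
    }
}
```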
Minor clean up.
Both @Spy and Mockito.spy(...) should not be used together. Favored the annotation.
Both @RunWith(MockitoJUnitRunner.class) and MockitoAnnotations.openMocks(this) should not be used together. Favored the annotation.
Unnecessary extends TestCase removed.
Using @InjectMocks together with `new` in the field declaration is unnecessary. Removed the `new` where the issue presented itself.
Some of the Cmd classes, like UpdateNetworkCmd, have a type tree that includes fields of type Object. This appears to cause issues with injection, requiring that @Mock fields be available. This is why the following fields were added in multiple places:
Object job;
ResponseGenerator _responseGenerator;
Wrong number of parameters for Mockito.when in LibvirtRevertSnapshotCommandWrapperTest.java
* Re-add filename string on qemu-img create
* Remove empty object on the data pool details of storage pools with no data pool
* Only use the method createPhysicalDiskByLibVirt with RBD when the pool is of erasure code type. Also added javadoc for createPhysicalDisk method
* Change literal '/' string to File.separator
* Add support for erasure code pools
* Fix null on putAll
* api,agent,server,engine-schema: scalability improvements
The following changes and improvements have been added:
- Improvements in handling of PingRoutingCommand
1. Added global config - `vm.sync.power.state.transitioning`, default value: true, to control syncing of power states for transitioning VMs. This can be set to false to skip computing which VMs are in a transitioning state.
2. Improved VirtualMachinePowerStateSync to allow power state sync for host VMs in a batch
3. Optimized scanning of stalled VMs
- Added option to set worker threads for capacity calculation using config - `capacity.calculate.workers`
- Added caching framework based on Caffeine in-memory caching library, https://github.com/ben-manes/caffeine
- Added caching for account/user role API access, with expiration after write configurable using config - `dynamic.apichecker.cache.period` (in seconds). If set to zero then there will be no caching; default is 0. See the Caffeine sketch after this list.
- Added caching for some recurring DB retrievals
1. CapacityManager - listing service offerings - beneficial in host capacity calculation
2. LibvirtServerDiscoverer - existing hosts for the cluster - beneficial for host joins
3. DownloadListener - hypervisors for zone - beneficial for host joins
4. VirtualMachineManagerImpl - VMs in progress - beneficial for processing stalled VMs during PingRoutingCommands
- Optimized MS list retrieval for agent connect
- Optimized finding a ready systemvm template for a zone
- Database retrieval optimisations - fix and refactor for cases where only IDs or counts are used, mainly for hosts and other infra entities. Also covers similar cases for VMs and other host-related entities in background tasks
- Changes in agent-agentmanager connection with NIO client-server classes
1. Optimized the use of the executor service
2. Refactored Agent class to better handle connections.
3. Do SSL handshakes within worker threads
4. Added global configs to control the behaviour depending on the infra. The SSL handshake could be a bottleneck during agent connections. Configs `agent.ssl.handshake.min.workers` and `agent.ssl.handshake.max.workers` can be used to control the number of new connections the management server handles at a time. `agent.ssl.handshake.timeout` can be used to set the number of seconds after which the SSL handshake times out at the MS end.
5. On the agent side, backoff and SSL handshake timeout can be controlled by agent properties; the `backoff.seconds` and `ssl.handshake.timeout` properties can be used.
- Improvements in StatsCollection - minimize DB retrievals.
- Improvements in DeploymentPlanner to retrieve only the desired host fields, with fewer retrievals overall.
- Improvements in host connections for a storage pool. Added config - `storage.pool.host.connect.workers` to control the number of worker threads used to connect hosts to a storage pool. The worker-thread approach is currently followed only for NFS and ScaleIO pools.
- Minor improvements in resource limit calculations with respect to DB retrievals
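To illustrate the caching approach, a minimal sketch using the Caffeine API mentioned above; the class and method names here are hypothetical, not the actual CloudStack framework:
```
import java.time.Duration;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class ApiAccessCacheSketch {
    // Entries expire a fixed period after write, mirroring the behaviour
    // of dynamic.apichecker.cache.period; a period of 0 would mean the
    // cache is bypassed entirely.
    private final Cache<String, Boolean> allowedApis = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofSeconds(60))
            .maximumSize(10_000)
            .build();

    public boolean isApiAllowed(String roleAndApi) {
        // Compute on a miss, reuse the cached result until it expires.
        return allowedApis.get(roleAndApi, key -> lookupFromDatabase(key));
    }

    private boolean lookupFromDatabase(String key) {
        // placeholder for the real role-permission DB lookup
        return true;
    }
}
```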
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* test1, domaindetails, capacitymanager fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* test2 - agent tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* capacitymanagertest fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert marvin/setup.py
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix indent
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* use space in sql
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address duplicate
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* update host logs
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert e36c6a5d07
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix npe in capacity calculation
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* move schema changes to 4.20.1 upgrade
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix build
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add some more tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* checkstyle fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove unnecessary mocks
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* replace statics
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* engine/orchestration,utils: limit number of concurrent new agent
connections
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor - remove unused
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unregister closed connections, monitor & cleanup
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add check for outdated vm filter in power sync
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* agent: synchronize sendRequest wait
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.20:
merge errors fixed
Restrict the migration of volumes attached to VMs in Starting state (#9725)
server, plugin: enhance storage stats for IOPS (#10034)
Introducing granular command timeouts global setting (#9659)
Improve logging to include more identifiable information (#9873)
* Improve logging to include more identifiable information for kvm plugin
* Update logging for scaleio plugin
* Improve logging to include more identifiable information for default volume storage plugin
* Improve logging to include more identifiable information for agent managers
* Improve logging to include more identifiable information for Listeners
* Replace ids with objects or uuids
* Improve logging to include more identifiable information for engine
* Improve logging to include more identifiable information for server
* Fixups in engine
* Improve logging to include more identifiable information for plugins
* Improve logging to include more identifiable information for Cmd classes
* Fix toString method for StorageFilterTO.java
* 4.20:
VR: apply iptables rules when add/remove static routes (#10064)
Certificate and VM hostname validation improvements (#10051)
set ulimit for server according to redhat spec (#10040)
kvm-storage: provide isVMMigrate information to storage plugins (#10093)
Allow config drive deletion of migrated VM, on host maintenance (#10045)
linstor: improve heartbeat check with also asking linstor (#10105)
server: simplify role change validation (#9173)
UI: create VPC network offering with conserve mode (#10082)
server: fix typo removeaccessvpn in VirtualRouterElement (#10086)
UI: remove duplicated Instance Name in Public IP details page (#10087)
UI: Fixes in the Usage UI (#10000)
SAML2: add cookie with HttpOnly too #10013 (#10047)
ui: Allow font-awesome icon usage and optimise icon size inconsistency (#9744)
Linstor in particular can use this information to allow
dual volume access only for live migration and not enable it in general,
as general dual access can and will lead to data corruption if for some reason
2 VMs get started on 2 different hosts.
If a secondary storage pool is used by e.g.
2 concurrent snapshot->template actions,
the first action to finish would remove the NetFS mount
point out from under the other action.
Now the storage pools are usage ref-counted and will only be
deleted if there are no more users (sketched below).
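A rough sketch of such ref-counting; the names are illustrative, not the plugin's actual code:
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class MountPointRefCounter {
    private final Map<String, Integer> users = new ConcurrentHashMap<>();

    // First user mounts; later users only bump the counter.
    void acquire(String mountPoint) {
        users.merge(mountPoint, 1, Integer::sum);
    }

    // Unmount happens only when the last user releases the pool.
    void release(String mountPoint) {
        users.compute(mountPoint, (mp, count) -> {
            if (count == null || count <= 1) {
                unmount(mp);
                return null; // drop the entry entirely
            }
            return count - 1;
        });
    }

    private void unmount(String mountPoint) {
        // placeholder for the real NetFS umount call
    }
}
```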
* StorPool: fix of delete snapshot
Mark the DB record as destroyed when a snapshot is deleted
* Addressed reviews
* addressed review
* addressed review
This PR fixes the issue with the sonar check:
```
Error: Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar (default-cli) on project cloudstack:
Error:
Error: The version of Java (11.0.22) used to run this analysis is deprecated, and SonarCloud no longer supports it. Please upgrade to Java 17 or later.
Error: You can find more information here: https://docs.sonarsource.com/sonarcloud/appendices/scanner-environment/
```
main changes
- Support build/packaging using JDK17
- Still supports JDK11 for building
- Support JRE17 for use in production installations
- Drop EL7 support
The community packages will still be packaged using JDK11.
If users want, they can build with JDK17 as well.
Signed-off-by: Wei Zhou <wei.zhou@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* New feature: Change storage pool scope
* Added checks for Ceph/RBD
* Update op_host_capacity table on primary storage scope change
* Storage pool scope change integration test
* Pull 8875: Addressed review comments
* Pull 8875: remove storage checks, AbstractPrimaryStorageLifeCycleImpl class
* Pull 8875: Fixed integration test failure
* Pull 8875: Review comments
* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions
* Added UT for changeStoragePoolScope
* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Dao review comments
* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue
* Pull 8875: Created a new smokes test file + A single warning msg in ui
* Pull 8875: Added cleanup in test_primary_storage_scope.py
* Pull 8875: Typo in en.json
* Pull 8875: cleanup array in test_primary_storage_scope.py
* Pull 8875: Removing extra whitespace at EOF of StorageManagerImplTest
* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Added license header
* Pull 8875: Fixed sql query for vmstates
* Pull 8875: Changed icon plus info on disabled mode in apidoc
* Pull 8875: Change scope should not work for local storage
* Pull 8875: Change scope completion event
* Pull 8875: Added api findAffectedVmsForStorageScopeChange
* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster
* Pull 8875: Review comments + Vm name in response
* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because of multiple volumes in the VM satisfying the criteria
* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT
* listAffectedVmsForStorageScopeChange should work if the pool is not disabled
* Fix listAffectedVmsForStorageScopeChangeTest UT
* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query
* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
* fix eof
* changeStoragePoolScopeToZone should connect pool to all Up hosts
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Temporarily backup StorPool volume before expunge
Sometimes users delete volumes by mistake. This enhancement
provides a way to back up a volume before it is deleted. The user
will be able to see the snapshot in the CloudStack UI/CLI and create only a
volume from it.
A task will check (by default every 5 minutes) whether the snapshots are
deleted from StorPool
Global settings to enable the delayed delete option:
`storpool.delete.after.interval` - The interval (in seconds) after which the StorPool snapshot will be deleted
`storpool.list.snapshots.delete.after.interval` - The interval (in seconds) at which to fetch the StorPool snapshots with the deleteAfter flag
Minor fix when deleting snapshots
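A rough sketch of that periodic check, with hypothetical names standing in for the StorPool plugin's actual classes:
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class DelayedSnapshotCleanupSketch {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Runs at the interval from storpool.list.snapshots.delete.after.interval
    // (by default every 5 minutes).
    void start(long intervalSeconds) {
        scheduler.scheduleAtFixedRate(this::checkDeletedSnapshots,
                intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }

    private void checkDeletedSnapshots() {
        // placeholder: fetch StorPool snapshots carrying the deleteAfter
        // flag and clean up the CloudStack records for those StorPool
        // has already deleted
    }
}
```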
* added Apache licence
* addressed comments