This PR fixes the failing Sonar check:
```
Error: Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar (default-cli) on project cloudstack:
Error:
Error: The version of Java (11.0.22) used to run this analysis is deprecated, and SonarCloud no longer supports it. Please upgrade to Java 17 or later.
Error: You can find more information here: https://docs.sonarsource.com/sonarcloud/appendices/scanner-environment/
```
Main changes:
- Support building/packaging using JDK17
- Continue to support JDK11 for building
- Support JRE17 for use in production installations
- Drop EL7 support
The community packages will still be packaged using JDK11.
Users can build with JDK17 if they want.
Signed-off-by: Wei Zhou <wei.zhou@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes domain-admin access check to prevent unauthorized access.
Introduces a new non-dynamic global setting, `api.allow.internal.db.ids`,
to control whether internal DB IDs are allowed as API parameters.
The default value for the global setting is false.
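As an illustration (not part of this PR's text), the setting could be toggled through the existing updateConfiguration API; the sketch below assumes the community `cs` Python client, with a placeholder endpoint and keys:
```python
# Hedged sketch: enable api.allow.internal.db.ids via updateConfiguration.
# The "cs" client, endpoint and credentials are illustrative assumptions.
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="ADMIN_API_KEY", secret="ADMIN_SECRET_KEY")

# Allow internal DB IDs as API parameters (default is false).
api.updateConfiguration(name="api.allow.internal.db.ids", value="true")

# The setting is non-dynamic, so the management server needs a restart
# for the new value to take effect.
```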
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/398
Currently, an administrator can break host tag compatibility for a VM through certain operations:
* deploy/start VM on a specific host
* migrate VM
* restore VM
* scale VM
This PR allows the user to specify tags which must be checked during these operations.
Global Settings
1. `vm.strict.host.tags` - A comma-separated list of tags for strict host check (Default - empty)
2. `vm.strict.resource.limit.host.tag.check` - Determines whether the resource limits tags are considered strict or not (Default - true)
During the above operations, we now check and throw an error if host tag compatibility would be broken for tags specified in `vm.strict.host.tags`. If `vm.strict.resource.limit.host.tag.check` is set to `true`, tags set in `resource.limit.host.tags` are also checked during these operations.
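A minimal sketch of how the new settings could be configured, assuming the community `cs` Python client (the tag values below are hypothetical examples):
```python
# Hedged sketch: configure strict host tag checking via updateConfiguration.
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="ADMIN_API_KEY", secret="ADMIN_SECRET_KEY")

# Tags that must remain compatible when a VM is deployed, started,
# migrated, restored or scaled (hypothetical tag values).
api.updateConfiguration(name="vm.strict.host.tags", value="gpu,ssd")

# Also treat the tags listed in resource.limit.host.tags as strict.
api.updateConfiguration(name="vm.strict.resource.limit.host.tag.check",
                        value="true")
```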
This PR introduces the functionality of purging removed DB entries for CloudStack entities (currently only for VirtualMachine). There are three mechanisms for purging removed resources:
1. Background task - CloudStack runs a background task at a defined interval. Other parameters for this task can be controlled with new global settings.
2. API - A new admin-only API, purgeExpungedResources, allows passing the following parameters: resourcetype, batchsize, startdate, enddate. Currently, the API is not supported in the UI.
3. Config for service offering - Service offerings can be created with a purgeresources parameter, which allows purging resources immediately on expunge.
The following new global settings have been added:
expunged.resources.purge.enabled: Default: false. Whether to run a background task to purge the expunged resources
expunged.resources.purge.resources: Default: (empty). A comma-separated list of resource types that will be considered by the background task to purge the expunged resources. Currently only VirtualMachine is supported. An empty value will result in considering all resource types for purging.
expunged.resources.purge.interval: Default: 86400. Interval (in seconds) for the background task to purge the expunged resources
expunged.resources.purge.delay: Default: 300. Initial delay (in seconds) to start the background task to purge the expunged resources task.
expunged.resources.purge.batch.size: Default: 50. Batch size to be used during expunged resources purging.
expunged.resources.purge.start.time: Default: (empty). Start time to be used by the background task to purge the expunged resources. Use format yyyy-MM-dd or yyyy-MM-dd HH:mm:ss.
expunged.resources.purge.keep.past.days: Default: 30. The number of days before the background task's execution time within which expunged resources will not be purged. To purge expunged resources up to the execution time of the background task, set the value to zero.
expunged.resource.purge.job.delay: Default: 180. Delay (in seconds) before executing the purging of an expunged resource initiated by the configuration in the offering. The minimum value is 180 seconds; if a lower value is set, the minimum value is used.
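A usage sketch for the new API (not part of this PR's text), assuming the community `cs` Python client; the dates and batch size below are placeholders:
```python
# Hedged sketch: purge expunged VirtualMachine entries for a date range
# using the new admin-only purgeExpungedResources API.
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="ADMIN_API_KEY", secret="ADMIN_SECRET_KEY")

result = api.purgeExpungedResources(resourcetype="VirtualMachine",
                                    batchsize=50,
                                    startdate="2024-01-01",
                                    enddate="2024-03-31")
print(result)
```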
Documentation PR: apache/cloudstack-documentation#397
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management
Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities that use a tagged resource. Currently, tagged resource limits can be used for the following resource types:
Host limits
- user_vm
- cpu
- memory
Storage limits
- volume
- primary_storage
The following global settings can be used to specify the tags for which limits need to be enforced:
Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`
Options for specifying tagged resource limits and viewing tagged resource usage are available in the UI.
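For illustration, the tag lists could be set as below, assuming the community `cs` Python client (the tag values are hypothetical):
```python
# Hedged sketch: declare which host/storage tags get tagged resource limits.
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="ADMIN_API_KEY", secret="ADMIN_SECRET_KEY")

# Host tags for which user_vm/cpu/memory limits can be enforced.
api.updateConfiguration(name="resource.limit.host.tags", value="gpu")

# Storage tags for which volume/primary_storage limits can be enforced.
api.updateConfiguration(name="resource.limit.storage.tags", value="ssd")
```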
Enhances the use of templatetag for VM deployment and template creation
Adds an option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.
Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
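A sketch of how the new list parameters could be exercised, assuming the community `cs` Python client (the IDs are placeholders):
```python
# Hedged sketch: use the new templateid and virtualmachineid parameters.
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

# Service/compute offerings that can be used with a given template.
offerings = api.listServiceOfferings(templateid="TEMPLATE_UUID")

# Disk offerings with a suitability flag for a given virtual machine.
disk_offerings = api.listDiskOfferings(virtualmachineid="VM_UUID")
for offering in disk_offerings.get("diskoffering", []):
    print(offering["name"], offering.get("suitableforvirtualmachine"))
```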
* Use dualzones for ci github actions
* Update advdualzone.cfg to be similar to advanced.cfg & fixup test_metrics_api.py
* Fixup e2e tests for running with multiple zones
* Add e2e tests for listing of accounts, disk_offerings, domains, hosts, service_offerings, storage_pools, volumes
* Fixup
* Another fixup
* Add test for listing volumes with tags filter
* Add check for existing volumes in test_list_volumes
* Wait for volumes to be deleted on cleanup
* Filter out volumes in Destroy state before checking the count of volumes