* Add support for dedicating backup offerings to domains
* Add tests and UI support and update response params
* add license header
* exclude backupofferingdetailsvo from sonar
* fix pre-commit checks - missing / extra EOF line
* add test
* EOF
* filter backup offerings by domain id
* add unit tests
* add more unit tests and remove response file from code coverage check
* update checks
* address review comments: extract common code, fix tests
* added bean definition
* address comments
* add unit tests to increase coverage
* pre-commit check failure fix
* address merge issue
* allow updating backup offering when only domain id is modified
* API: Add support to list all snapshot policies & backup schedules
* Add support for backup policy listing without tying it to the vmid
* add tests for snapshot policy listing
* update tests for listbackupschedules
* remove trailing spaces and fix lint failure
* Add upgrade test
* remove unused import
* add create policy - snap/backup in the list view with resource (volume/vm) selection
* add translations
* refresh parent list
* remove unnecessary alert info
* fix checks for UI backup schedule list view
* fix checks for UI backup schedule list view
* add back access checks
* add since param
* fix failing test
* update snapshot policy and backup schedule ownership when VM is moved
* fix issue with showing vm selection
* fix unit test failure
* Update list snapshot policy & backup schedule logic to list only those that belong to a project, or for the root admin those that belong to them, unless listall & projectid are passed
* fix test
* support snap / backup policy search using keyword
* fix tests
CloudStack creates a transient KVM domain XML. When an instance is unmanaged from CloudStack, an explicit dump of the domain has to be taken in order to manage it outside of CloudStack.
With this PR:
* the domain XML gets backed up and becomes persistent, allowing further management of the instance outside CloudStack
* a stopped instance can also be unmanaged; the last host of the instance is used for defining the domain
* the hostid parameter is supported in the unmanageVirtualMachine API for the KVM hypervisor and for stopped instances
* a hostid field is added to the unmanageVirtualMachine response, representing the host used for the unmanage operation
* unmanaging an instance with a config drive is disabled; for KVM it can still be done from the API using the forced=true parameter
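The persistence step corresponds to a libvirt dump-and-define. A minimal sketch using the libvirt Java bindings is shown below; it is illustrative only, not the actual agent-side code:

```java
import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

public class PersistDomainExample {
    // Dump the XML of a transient domain and define it, so the domain
    // survives after the instance is unmanaged from CloudStack.
    public static void persist(String vmName) throws LibvirtException {
        Connect conn = new Connect("qemu:///system");
        Domain domain = conn.domainLookupByName(vmName);
        String xml = domain.getXMLDesc(0); // back up the live domain XML
        conn.domainDefineXML(xml);         // define it, making the domain persistent
    }
}
```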
* draas initial changes
* Added option to enable disaster recovery on a backup repository. Added UpdateBackupRepositoryCmd API.
* Added timeout for mount operation in backup restore configurable via global setting
* Addressed review comments
* fix for simulator test failures
* Added UT for coverage
* Fix create instance from backup ui for other providers
* Added events to add/update backup repository
* Fix race in fetchZones
* One more fix in fetchZones in DeployVMFromBackup.vue
* Fix zone selection in createNetwork via Create Instance from backup form.
* Allow template/iso selection in create instance from backup ui
* rename draasenabled to crosszoneinstancecreation
* Added Cross-zone instance creation in test_backup_recovery_nas.py
* Added UT in BackupManagerTest and UserVmManagerImplTest
* Integration test added for Cross-zone instance creation in test_backup_recovery_nas.py
This feature adds the ability to create a new instance from a VM backup for the dummy, NAS and Veeam backup providers. It works even if the original instance used to create the backup was expunged or unmanaged. There are two parts to this functionality:
* Saving all configuration details that the VM had at the time of taking the backup, and using them to create an instance from the backup.
* Enabling a user to expunge/unmanage an instance that has backups.
This PR allows attaching GPU devices via PCI, mdev or VF to an Instance on KVM.
It allows the operator to discover the GPU devices on a KVM host and create a Compute Offering with GPU support based on the GPU devices available on the host. Once the operator has created the Compute Offering, users can use it to launch Instances with GPU devices.
The Extensions Framework in Apache CloudStack is designed to provide a flexible and standardised mechanism for integrating external systems and custom workflows into CloudStack’s orchestration process. By defining structured hook points during key operations—such as virtual machine deployment, resource preparation, and lifecycle events—the framework allows administrators and developers to extend CloudStack’s behaviour without modifying its core codebase.
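As a loose illustration of the hook-point idea, an external integration could be modeled as below. The interface and method names are hypothetical, not the actual Extensions Framework API:

```java
// Hypothetical extension hook: implementations are invoked at key orchestration
// points without modifying CloudStack's core codebase.
public interface ExtensionHook {
    void onPrepare(String vmUuid);                      // resource preparation
    void onDeploy(String vmUuid);                       // virtual machine deployment
    void onLifecycleEvent(String vmUuid, String event); // start/stop/etc.
}
```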
* CPU-to-memory weight based algorithm to order clusters
The host.capacityType.to.order.clusters config now supports a new algorithm, COMBINED, which works with host.capacityType.to.order.clusters.cputomemoryweight: capacity is computed from both CPU and memory using the weight factor (see the sketch after this commit list).
* minor changes
* add unit tests
* update desc and add validation
* handle copilot review comments
* add log indicating chosen capacityType for ordering
---------
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
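A rough sketch of how such a combined ordering score could be computed; the exact formula in the capacity manager may differ, this only illustrates the weighting idea:

```java
public class CombinedCapacityExample {
    // Illustrative combined score for ordering clusters; `weight` stands in for
    // host.capacityType.to.order.clusters.cputomemoryweight.
    static double combinedUsedFraction(double usedCpu, double totalCpu,
                                       double usedMem, double totalMem, double weight) {
        double cpuFraction = usedCpu / totalCpu;
        double memFraction = usedMem / totalMem;
        // A lower score means more free combined capacity, weighting CPU against memory.
        return weight * cpuFraction + (1 - weight) * memFraction;
    }
}
```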
* Option to deploy a VM with existing volume/snapshot
* smoke test changes
check if the hypervisor is KVM
check if the primary storage's scope is ZONE wide
* skip all tests if the storage isn't Zone-Wide and the hypervisor isn't KVM
* support StorPool tags
add StorPool tags to a volume created from snapshot or to a volume which
will be attached as a ROOT to a new VM
* Add StorPool tags on the new ROOT volume
* Add the StorPool's tags when volume is created from a snapshot or a
volume is attached as a ROOT to a VM
* Addressed review
* FR-248: Instance lease, WIP commit
* insert lease expiry into db and use that to filter expiring vms, add asyncjobmanager
* Add leaseDuration and leaseExpiryAction in Service offering create flow
* Update listVM cmd to allow listing only leased instances
* Add methods to fetch instances for which lease is expiring in next days
* Changes included:
config key setup and configured for alert email
lease options in create and update vm screen
handle delete protection, edit vm, create vm
validated stop and destroy, delete protection
* Update UI screens for leased properties coming from config and service offering
* use global lock before running scheduler
* Unit tests
* Flow changes done in UI based on discussion
* Include view changes in schema upgrade files and use feature in various UI elements
* Added integration test for vm deployment, UI enhancements for user persona, bug fixes
* validate integration tests, minor ui changes and log messages
* fix build: moving configkey from setup to test itself
* Disable testAlert to unblock build and trim whitespaces in integration tests
* Address review comments
* Minor changes in EditVM screen
* Use ExecutorService instead of Timer and TimerTask
* Additional review comments
* Incorporate following changes:
1. Execute lease action once on the instance
2. Cancel lease on instance when feature is disabled
3. Relevant events when lease gets disabled, cancelled, executed
4. Disable associating lease after deployment
5. UI elements and flow changes
6. Changes based on feedback from demo
* Handle pr review comments
* address review comments
* move instance.lease.enabled config to VMLeaseManager interface
* bug fix in edit instance flow and reject api request for invalid values
* max allowed lease is 100 years
* log instance ids for expired instances
* Fix config validation for value range and code coverage improvement
* fix lease expiry request failures in async
* don't use forced: true for StopVmCmd
* Update server/src/main/java/org/apache/cloudstack/vm/lease/VMLeaseManager.java
Co-authored-by: Vishesh <vishesh92@gmail.com>
* handle review comments
---------
Co-authored-by: Rohit Yadav <rohityadav89@gmail.com>
Co-authored-by: Vishesh <vishesh92@gmail.com>
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, including those with or without a storage access group.
Co-authored-by: Stephan Krug <stephan.krug@scclouds.com.br>
Co-authored-by: Gabriel <gabriel.fernandes@scclouds.com.br>
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
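The connectivity rule described above boils down to a simple predicate, sketched here for illustration; this is not the exact CloudStack implementation:

```java
import java.util.Collections;
import java.util.Set;

public class StorageAccessGroupExample {
    // A pool without storage access groups connects to every host; a pool with
    // groups connects only to hosts that share at least one of them.
    static boolean canConnect(Set<String> hostGroups, Set<String> poolGroups) {
        if (poolGroups.isEmpty()) {
            return true;
        }
        return !Collections.disjoint(hostGroups, poolGroups);
    }
}
```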
* Improve logging to include more identifiable information for kvm plugin
* Update logging for scaleio plugin
* Improve logging to include more identifiable information for default volume storage plugin
* Improve logging to include more identifiable information for agent managers
* Improve logging to include more identifiable information for Listeners
* Replace ids with objects or uuids
* Improve logging to include more identifiable information for engine
* Improve logging to include more identifiable information for server
* Fixups in engine
* Improve logging to include more identifiable information for plugins
* Improve logging to include more identifiable information for Cmd classes
* Fix toString method for StorageFilterTO.java
Added caching for ConfigKey value retrievals based on the Caffeine
in-memory caching library.
https://github.com/ben-manes/caffeine
Currently, the expiry time for a cached entry is 30 seconds, and any update or reset of a configuration automatically invalidates its cache entry.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/398
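The caching pattern described above looks roughly like this minimal sketch; it is not the actual ConfigKey code, and loadFromDatabase is a hypothetical stand-in for the real configuration lookup:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

public class ConfigCacheExample {
    // Entries expire 30 seconds after being written.
    private static final Cache<String, String> CACHE = Caffeine.newBuilder()
            .expireAfterWrite(30, TimeUnit.SECONDS)
            .build();

    static String loadFromDatabase(String key) {
        return "value-of-" + key; // hypothetical stand-in for a DB read
    }

    public static String getValue(String key) {
        return CACHE.get(key, ConfigCacheExample::loadFromDatabase); // load on miss
    }

    public static void onConfigUpdated(String key) {
        CACHE.invalidate(key); // update/reset invalidates the cached value
    }
}
```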
Currently, an administrator can break host tag compatibility for a VM through certain operations:
* deploy/start VM on a specific host
* migrate VM
* restore VM
* scale VM
This PR allows the user to specify tags which must be checked during these operations.
Global Settings
1. `vm.strict.host.tags` - A comma-separated list of tags for strict host check (Default - empty)
2. `vm.strict.resource.limit.host.tag.check` - Determines whether the resource limits tags are considered strict or not (Default - true)
During the above operations, we now check and throw an error if host tags compatibility is being broken for tags specified in `vm.strict.host.tags`. If `vm.strict.resource.limit.host.tag.check` is set to `true`, tags set in `resource.limit.host.tags` are also checked during these operations.
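Conceptually the strict check works like the sketch below. The names are hypothetical and the real check lives in the deployment logic; this only illustrates the rule:

```java
import java.util.HashSet;
import java.util.Set;

public class StrictHostTagExample {
    // Fail if any strict tag required by the VM's offering is missing on the host.
    static void checkStrictHostTags(Set<String> strictTags, Set<String> offeringTags,
                                    Set<String> hostTags) {
        Set<String> required = new HashSet<>(offeringTags);
        required.retainAll(strictTags); // only tags listed in vm.strict.host.tags are strict
        if (!hostTags.containsAll(required)) {
            throw new IllegalStateException("Host tag compatibility would be broken: " + required);
        }
    }
}
```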
* Allow overriding root diskoffering id & size while restoring VM
* UI changes
* Allow expunging of old disk while restoring a VM
* Resolve comments
* Address comments
* Duplicate volume's details while duplicating volume
* Allow setting IOPS for the new volume
* minor cleanup
* fixup
* Add checks for template size
* Replace strings for IOPS with constants
* Fix saveVolumeDetails method
* Fixup
* Fixup UI styling
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management
Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities for a tagged resource. Currently, tagged resource limits can be used for the following resource types:
Host limits
- user_vm
- cpu
- memory
Storage limits
- volume
- primary_storage
The following global settings can be used to specify the tags for which limits need to be enforced:
Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`
Option for specifying tagged resource limits and viewing tagged resource usage are made available in the UI.
Enhances the use of templatetag for VM deployment and template creation
Adds an option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.
Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API, which when passed returns the suitableforvirtualmachine param in the response.
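For example, the suitability listing can be queried as sketched below. The endpoint and signedUrl helper are hypothetical stand-ins; real requests also need the apiKey and signature parameters:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListDiskOfferingsExample {
    // Hypothetical helper: a real one would append apiKey and signature.
    static String signedUrl(String query) {
        return "https://cloud.example.com/client/api?" + query + "&response=json";
    }

    public static void main(String[] args) throws Exception {
        String vmUuid = "1234-abcd"; // placeholder VM id
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(signedUrl("command=listDiskOfferings&virtualmachineid=" + vmUuid)))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Each offering in the response carries a suitableforvirtualmachine flag.
        System.out.println(response.body());
    }
}
```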
* allow changing only one parameter during live scale
* Update server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
Co-authored-by: sato03 <henriquesato2003@gmail.com>
* apply method name change
* Update server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
---------
Co-authored-by: Gabriel <gabriel.fernandes@scclouds.com.br>
Co-authored-by: sato03 <henriquesato2003@gmail.com>
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
* Allocate the new volume on the restore virtual machine operation only when the resource count increment succeeds
- keep both in a transaction, and fail the operation if the resource count increment fails
* Added some (negative) unit tests for restore vm
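The transactional pattern described above, as a rough sketch using CloudStack's Transaction utility; the helper methods are hypothetical stand-ins for the real resource-limit and volume logic:

```java
import com.cloud.utils.db.Transaction;
import com.cloud.utils.db.TransactionCallbackNoReturn;
import com.cloud.utils.db.TransactionStatus;

public class RestoreVolumeTxnExample {
    // Hypothetical helpers standing in for the real implementations.
    static void incrementVolumeResourceCount(long accountId) { /* throws on failure */ }
    static void allocateNewRootVolume(long vmId, long diskOfferingId) { /* ... */ }

    // Increment the resource count and allocate the volume in one transaction,
    // so a failed increment rolls back and fails the restore operation.
    public static void restoreVolume(long accountId, long vmId, long diskOfferingId) {
        Transaction.execute(new TransactionCallbackNoReturn() {
            @Override
            public void doInTransactionWithoutResult(TransactionStatus status) {
                incrementVolumeResourceCount(accountId);
                allocateNewRootVolume(vmId, diskOfferingId);
            }
        });
    }
}
```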
This PR adds the capability in CloudStack to convert VMware instance disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. The vSphere/VMware setup might be managed by CloudStack or be a standalone setup.
CloudStack lets the administrator select a VM from an existing VMware vCenter in the CloudStack environment, or from an external vCenter by requesting the vCenter IP, datacenter name and credentials.
The migrated VM is imported as a KVM instance.
The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
The migration process timeout can be set via the setting convert.instance.process.timeout.
Before attempting the virt-v2v migration, CloudStack creates a clone of the source VM on VMware. The cloned VM is removed after the registration process finishes.
CloudStack delegates the migration action to a KVM host, and the host attempts to migrate the VM by invoking virt-v2v. If the guest OS is not supported, CloudStack handles the operation as a failure.
The migration process using virt-v2v may not be fast.
CloudStack does not perform any check of guest OS compatibility with the virt-v2v library, as indicated at: https://access.redhat.com/articles/1351473.
Co-authored-by: Stephan Krug <stephan.krug@scclouds.com.br>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: GaOrtiga <49285692+GaOrtiga@users.noreply.github.com>
Co-authored-by: Gabriel Pordeus Santos <gabrielpordeus@gmail.com>
Co-authored-by: Gabriel <gabriel.fernandes@scclouds.com.br>
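For reference, the conversion delegated to the KVM host looks roughly like the sketch below. The vCenter URI, VM name and output path are illustrative values, and the command the agent actually builds differs:

```java
import java.util.List;

public class VirtV2vExample {
    public static void main(String[] args) throws Exception {
        // Convert a (cloned) VMware VM from vCenter into a local KVM image.
        List<String> cmd = List.of(
                "virt-v2v",
                "-ic", "vpx://administrator@vcenter.example/DC1/cluster1/esxi1.example?no_verify=1",
                "cloned-vm-name",
                "-o", "local",
                "-os", "/var/lib/libvirt/images");
        int exit = new ProcessBuilder(cmd).inheritIO().start().waitFor();
        System.out.println("virt-v2v exited with " + exit);
    }
}
```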
* Don't allow inadvertent deletion of hidden details via API
* Update VM details unit test ensuring system/hidden details not removed
* Update test/integration/component/test_update_vm.py
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
### Description
Design document: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+Minimal+changes+to+allow+new+dynamic+hypervisor+type%3A+Custom+Hypervisor
This PR introduces the minimal changes to add a new hypervisor type (internally named Custom in the codebase, with a configurable display name), allowing an external hypervisor plugin to be written for CloudStack as a Custom hypervisor.
The custom hypervisor name is set by the setting 'hypervisor.custom.display.name'. The new hypervisor type does not affect the behaviour of any CloudStack operation; it simply introduces a new hypervisor type into the system.
CloudStack does not have any means to dynamically add new hypervisor types. The hypervisor types are internally preset by an enum defined within the CloudStack codebase, and unless a new version supports a new hypervisor it is not possible to add a host of a hypervisor that is not part of the enum. It is, however, possible to implement minimal changes in CloudStack to support a new hypervisor plugin that may be developed privately.
This PR is an initial work on allowing new dynamic hypervisor types (adds a new element to the HypervisorType enum, but allows variable display name for the hypervisor)
##### Proposed Future work:
Replace the HypervisorType from a fixed enum to an extensible registry mechanism, registered from the hypervisor plugin
#### Feature Specifications
- The new hypervisor type is internally named 'Custom' to the CloudStack services (management server and agent services, database records).
- A new global setting ‘hypervisor.custom.display.name’ allows administrators to set the display name of the hypervisor type. The display name will be shown in the CloudStack UI and API.
- In case the ‘hypervisor.list’ setting contains the display name of the new hypervisor type, the setting value is automatically updated after the ‘hypervisor.custom.display.name’ setting is updated.
- The new Custom hypervisor type supports:
- Direct downloads (the ability to download templates into primary storage from the hypervisor hosts without using secondary storage)
- Local storage (use hypervisor hosts local storage as primary storage)
- Template format: RAW format (the templates to be registered on the new hypervisor type must be in RAW format)
- The UI is also extended to display the new hypervisor type and the supported features listed above.
- The above are the minimal changes for CloudStack to support the new hypervisor type, which can be tested by integrating the plugin codebase with this feature.
#### Use cases
This PR allows cloud administrators to test custom hypervisor plugin implementations in CloudStack and easily integrate them into CloudStack as a new hypervisor type ("Custom"), reducing the implementation to only the hypervisor-specific storage/networking support and the hypervisor resource that communicates with the management server.
- CloudStack admins should be able to create a zone for the new custom hypervisor and add clusters and hosts into the zone with normal operations
- CloudStack users should be able to execute normal VMs/volumes/network/storage operations on VMs/volumes running on the custom hypervisor hosts
Fixes the case of appending userdata when both template and VM data are either shellscript or cloudconfig
Fixes an error when appending gzip userdata
Fixes the case where manual userdata text from the VM is not decoded/encoded correctly
Fixes the case of appending multipart data when both template and VM data contain the same format types
Refactor: moved the validateUserData method to the UserDataManager class
Refactor the userdata test to check the resultant multipart userdata thoroughly
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
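For illustration, the shellscript append case described above amounts to roughly the sketch below. This is a hypothetical helper, not the actual UserDataManager code:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AppendUserDataExample {
    // Append two base64-encoded shellscript userdata blobs, keeping a single shebang.
    static String appendShellScripts(String templateUserData, String vmUserData) {
        String template = new String(Base64.getDecoder().decode(templateUserData), StandardCharsets.UTF_8);
        String vm = new String(Base64.getDecoder().decode(vmUserData), StandardCharsets.UTF_8);
        String merged = template + "\n" + vm.replaceFirst("^#!.*\\R", ""); // drop the second shebang line
        return Base64.getEncoder().encodeToString(merged.getBytes(StandardCharsets.UTF_8));
    }
}
```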