* KVM incremental snapshot feature
* fix log
* fix merge issues
* fix creation of folder
* fix snapshot update
* Check for hypervisor type during parent search
* fix some small bugs
* fix tests
* Address reviews
* do not remove StorPool snapshots
* add support for downloading diff snaps
* Add multiple zones support
* make copied snapshots have normal names
* address reviews
* Fix in progress
* continue fix
* Fix bulk delete
* change log to trace
* Start fix on multiple secondary storages for a single zone
* Fix multiple secondary storages for a single zone
* Fix tests
* fix log
* remove bitmaps when deleting snapshots
* minor fixes
* update sql to new file
* Fix merge issues
* Create new snap chain when changing configuration
* add verification
* Fix snapshot operation selector
* fix bitmap removal
* fix chain on different storages
* address reviews
* fix small issue
* fix test
---------
Co-authored-by: João Jandre <joao@scclouds.com.br>
* KVM: add Virtual TPM model and version
* KVM: add admin-only VM setting GUEST.CPU.MODE and GUEST.CPU.MODEL
* VMware: add vTPM
* vTPM: do not set Key due to 'Cannot add multiple devices using the same device key..'
* vTPM: add unit test testTpmModel
* engine/schema: remove user vm details for guest CPU mode/model
* vTPM: extra methods per Daan's requests
* vTPM: add unit tests in VmwareResourceTest
* vTPM: update unit tests in VmwareResourceTest
* vTPM: add unit test in LibvirtComputingResourceTest
* vTPM: use the default TPM version if an invalid version is passed (see the sketch below)
* vTPM: require UEFI on VMware and do nothing if it is not enabled/disabled
* vTPM: let users add UEFI on VMware
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* vTPM: remove template details for guest CPU mode/model
* UI: boot vm from ISO into UEFI/SECURE mode
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
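A minimal sketch of the libvirt TPM device XML this feature maps to on KVM (per https://libvirt.org/formatdomain.html#tpm-device); the helper class and the chosen fallback defaults are assumptions, not CloudStack's actual code:
```java
// Renders libvirt's <tpm> element; falls back to an assumed default version
// and model when an invalid one is passed, as described above.
public class VtpmDefSketch {
    public static String tpmXml(String model, String version) {
        String v = ("1.2".equals(version) || "2.0".equals(version)) ? version : "2.0";
        String m = ("tpm-tis".equals(model) || "tpm-crb".equals(model)) ? model : "tpm-tis";
        return "<tpm model='" + m + "'>"
                + "<backend type='emulator' version='" + v + "'/>"
                + "</tpm>";
    }
}
```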
Dependency name changed from mockito-inline to mockito-core. Inline is now the default, and the last released version of mockito-inline is 5.2.0.
assertj-core in user-authenticators/saml2 pulls in an incompatible version of byte-buddy and required an exclusion. Updating the version of assertj is left for a future PR.
The upgrade requires Java 11+, dropping support for Java 8. The CloudStack documentation already says to use Java 11 and does not indicate that Java 8 is supported.
Test classes using @RunWith(MockitoJUnitRunner.class) now run in strict mode. Changes were made to tests where the stubbing intention was clear. In ManagementServerMaintenanceManagerImplTest there are 5 tests whose intention is unclear; each of those statements now uses Mockito.lenient() to avoid the exception (see the sketch below). Other cases in the tests follow a similar pattern.
Minor clean up.
@Spy and Mockito.spy() should not both be used; favored the annotation.
@RunWith(MockitoJUnitRunner.class) and MockitoAnnotations.openMocks(this) should not both be used; favored the annotation.
Unnecessary extends TestCase removed.
Using @InjectMocks together with an explicit new in the field declaration is unnecessary; removed the new where the issue presented itself.
Some of the Cmd classes, like UpdateNetworkCmd, have a type tree that includes fields of type Object. This appears to cause issues with injection, requiring that @Mock fields be available. This is why the following fields were added in multiple places:
Object job;
ResponseGenerator _responseGenerator;
Fixed a wrong number of parameters for Mockito.when in LibvirtRevertSnapshotCommandWrapperTest.java.
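A minimal sketch of the strict-stubbing pattern and the Mockito.lenient() escape hatch referenced above; the mocked collaborator here is purely illustrative:
```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;

// Under MockitoJUnitRunner's strict mode, a stub that is never invoked fails
// the test with UnnecessaryStubbingException unless it is marked lenient.
@RunWith(MockitoJUnitRunner.class)
public class StrictStubbingSketchTest {

    @Mock
    private java.util.List<String> collaborator; // illustrative collaborator

    @Test
    public void testWhereStubbingIntentionIsUnclear() {
        // Without lenient(), this unused stub would fail the test in strict mode.
        Mockito.lenient().when(collaborator.get(0)).thenReturn("value");
    }
}
```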
* Re-add filename string on qemu-img create
* Remove empty object on the data pool details of storage pools with no data pool
* Only use the method createPhysicalDiskByLibVirt with RBD when the pool is of erasure code type. Also added javadoc for createPhysicalDisk method
* Change literal '/' string to File.separator
* Add support for erasure code pools
* Fix null on putAll
* NAS B&R Plugin enhancements
* Prevent printing mount opts which may include password by removing from response
* revert marvin change
* add sanity checks to validate minimum qemu and libvirt versions
* check whether the user running the script is part of the libvirt group
* revert changes of restore expunged VM
* add code coverage ignore file
* remove check
* fix issue with listing schedules and add defensive checks
* redirect logs to agent log file
* add some more debugging
* remove test file
* prevent deletion of CKS cluster when VMs are associated with a backup offering
* delete all snapshot policies when a backup offering is disassociated from a VM
* Fix `updateTemplatePermission` when the UI is set to a language other than English (#9766)
* Fix updateTemplatePermission UI in non-English language
* Improve fix
---------
* Add nobrl to the mount opts for the CIFS file system
* Fix restoration of VM / volumes with CIFS
* add cifs-utils for EL8
* add cifs-utils for Ubuntu cloudstack-agent
* fix syntax error
* remove required constraint on both vmid and id params for the delete backup schedule command
* add use of virsh domifaddr to get VM external DHCP IP
* updates to modularize LibvirtGetVmIpAddressCommandWrapper per comments; added test cases to cover 90%+ scenarios
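A minimal sketch of the virsh-based lookup named above, assuming the wrapper shells out to `virsh domifaddr`; parsing and error handling are simplified relative to the real LibvirtGetVmIpAddressCommandWrapper:
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Asks libvirt for the guest's DHCP lease and returns the first IPv4 address.
public class DomIfAddrSketch {
    public static String firstIp(String vmName) throws Exception {
        Process p = new ProcessBuilder("virsh", "domifaddr", vmName, "--source", "lease").start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.contains("ipv4")) {
                    String[] cols = line.trim().split("\\s+");
                    return cols[cols.length - 1].split("/")[0]; // strip CIDR suffix
                }
            }
        }
        return null; // no lease found
    }
}
```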
* Delete local storage properties in agent.properties during delete pool
* Fix stale entry when add local storage failed
* Smaller methods
* Comment added
* 4.20:
merge errors fixed
Restrict the migration of volumes attached to VMs in Starting state (#9725)
server, plugin: enhance storage stats for IOPS (#10034)
Introducing granular command timeouts global setting (#9659)
Improve logging to include more identifiable information (#9873)
Adds a framework-layer change to allow retrieving and storing IOPS stats for storage pools. A custom `PrimaryStoreDriver` can implement the method `getStorageIopsStats` to return IOPS stats (a sketch follows below). The existing method `getUsedIops` can also be overridden by such plugins when only used IOPS is returned.
For testing purposes, an implementation has been added for the simulator hypervisor plugin to return capacity and used IOPS for a pool.
For local storage pools, an implementation has been added using iostat to return the currently used IOPS.
The StoragePoolResponse class has been updated to return IOPS values, which allows showing IOPS values in the UI for different storage pool related views and APIs.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
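A hedged sketch of what such a driver hook could look like; the exact signature and return type of `getStorageIopsStats` are assumptions here, not copied from the CloudStack source:
```java
// Illustrative driver snippet: a plugin reports (capacity IOPS, used IOPS)
// for a pool, or null when its backend cannot provide the stats.
public class ExampleStoreDriverIopsSketch {

    public Long[] getStorageIopsStats(long poolId) {
        Long capacityIops = queryBackendCapacityIops(poolId); // hypothetical backend call
        Long usedIops = queryBackendUsedIops(poolId);         // hypothetical backend call
        if (capacityIops == null || usedIops == null) {
            return null; // fall back to the existing getUsedIops path
        }
        return new Long[] { capacityIops, usedIops };
    }

    private Long queryBackendCapacityIops(long poolId) { return 10000L; }
    private Long queryBackendUsedIops(long poolId) { return 2500L; }
}
```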
* Improve logging to include more identifiable information for kvm plugin
* Update logging for scaleio plugin
* Improve logging to include more identifiable information for default volume storage plugin
* Improve logging to include more identifiable information for agent managers
* Improve logging to include more identifiable information for Listeners
* Replace ids with objects or uuids
* Improve logging to include more identifiable information for engine
* Improve logging to include more identifiable information for server
* Fixups in engine
* Improve logging to include more identifiable information for plugins
* Improve logging to include more identifiable information for Cmd classes
* Fix toString method for StorageFilterTO.java
* 4.20:
VR: apply iptables rules when add/remove static routes (#10064)
Certificate and VM hostname validation improvements (#10051)
set ulimit for server according to redhat spec (#10040)
kvm-storage: provide isVMMigrate information to storage plugins (#10093)
Allow config drive deletion of migrated VM, on host maintenance (#10045)
linstor: improve heartbeat check with also asking linstor (#10105)
server: simplify role change validation (#9173)
UI: create VPC network offering with conserve mode (#10082)
server: fix typo removeaccessvpn in VirtualRouterElement (#10086)
UI: remove duplicated Instance Name in Public IP details page (#10087)
UI: Fixes in the Usage UI (#10000)
SAML2: add cookie with HttpOnly too #10013 (#10047)
ui: Allow font-awesome icon usage and optimise icon size inconsistency (#9744)
Linstor in particular can use this information to allow dual volume access only for live migration, rather than enabling it in general, which can and will lead to data corruption if for some reason two VMs get started on two different hosts.
* 4.20:
UI: Tooltip on the host information card to display the CPU speed in MHz and the memory value in MB (to 3 decimal places) (#9971)
UI: Allow accounts of the `User` type to add other accounts or users to projects through UI (#9927)
enable creating VPC port forwarding rules with source CIDR (#7081)
Add new column `last_id` to the table volumes (#9759)
Allow VMWare import via another host (#9787)
Linstor: add support for ISO block devices and direct download (#9792)
get expunged VM data for job result (#9949)
fix section divider display on auth page (#9966)
If a secondary storage pool was used by e.g. two concurrent snapshot->template actions, the first action to finish removed the netfs mount point out from under the other action.
Now storage pool usage is ref-counted, and the mount is only deleted once there are no more users.
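A minimal sketch of the ref-counting idea, with illustrative names (not CloudStack's actual classes):
```java
import java.util.HashMap;
import java.util.Map;

// Each user of a pool acquires it; the netfs mount point is only torn down
// when the last user releases the pool.
public class RefCountedPoolRegistry {
    private final Map<String, Integer> users = new HashMap<>();

    public synchronized void acquire(String poolUuid) {
        users.merge(poolUuid, 1, Integer::sum);
    }

    public synchronized void release(String poolUuid) {
        Integer remaining = users.computeIfPresent(poolUuid, (k, v) -> v - 1);
        if (remaining != null && remaining <= 0) {
            users.remove(poolUuid);
            unmountPool(poolUuid); // only the last user removes the mount
        }
    }

    private void unmountPool(String poolUuid) {
        // unmount of the netfs mount point would happen here
    }
}
```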
This fixes the following issue when creating an OVS network:
```
2024-10-29 16:02:45,089 WARN [resource.wrapper.LibvirtOvsFetchInterfaceCommandWrapper] (agentRequest-Handler-2:null) (logid:e716722e) Network interface: ''cloudbr1'' not found
```
This is a regression from a previous security release; see "framework/cluster: improve cluster service, integration API server".
Since we now use NetworkInterface.getByName to get the network interface, we should NOT add single quotes before/after the label.
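In code terms, the fix boils down to the following (sketch only):
```java
import java.net.NetworkInterface;
import java.net.SocketException;

// NetworkInterface.getByName expects the raw interface name, so quoting the
// label (e.g. "'cloudbr1'") makes the lookup fail and produces the
// "Network interface ... not found" warning shown above.
public class InterfaceLookupSketch {
    public static NetworkInterface lookup(String label) throws SocketException {
        // Correct: pass the label as-is, e.g. "cloudbr1", never "'cloudbr1'".
        return NetworkInterface.getByName(label);
    }
}
```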
qemu versions prior to 7.0 have a bug with discard enabled on the IDE bus: it would crash the qemu process and kill the virtual machine.
This is most noticeable when installing a Windows guest from the Windows ISO installer.
* linstor: enable discard for Linstor storage pools
All Linstor storage backends support discard, so it can be safely enabled.
* linstor: enable discard for Linstor storage pools (CHANGELOG.md)
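A minimal sketch of the resulting guard, assuming a helper that renders the disk's libvirt `<driver>` element:
```java
// Enable discard for pools that support it (all Linstor backends do, per the
// note above), but never on the IDE bus where qemu < 7.0 can crash.
public class DiskDriverDefSketch {
    public static String driverXml(String bus, boolean poolSupportsDiscard) {
        boolean discard = poolSupportsDiscard && !"ide".equalsIgnoreCase(bus);
        return "<driver name='qemu' type='qcow2'"
                + (discard ? " discard='unmap'" : "")
                + "/>";
    }
}
```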
* Add logs to LibvirtComputingResource's metrics collecting process
* Apply Joao's suggestions
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
* Adjust some logs
* Print memory statistics log in one line
---------
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
This introduces multi-arch zones, allowing users to select the VM arch upon deployment.
Multi-arch zone support in CloudStack allows admins to mix x86_64 & arm64 hosts within the same zone, with the following changes proposed:
- All hosts in a cluster need to be homogeneous w.r.t. host CPU type (amd64 vs arm64) and hypervisor
- Arch-aware templates & ISOs:
- Add support for a new arch field (default set of: amd64 and arm64); when unspecified it defaults to amd64, also for existing templates & ISOs
- Allow admins to edit the arch type of a registered template & ISO
- Arch-aware clusters and hosts:
- Add a new arch attribute for clusters and hosts (KVM host agents can report this automatically; the arch of the first host of the cluster becomes the cluster's architecture), defaulting to amd64 when not specified
- Allow admins to edit the arch of an existing cluster
- VM deployment form (UI):
- In a multi-arch zone/env, the VM deployment form allows filtering templates/ISOs by arch in the UI
- Users should be able to select the arch: amd64 & arm64; this is shown only in a multi-arch zone (env)
- VM orchestration and lifecycle operations:
- Use the VM/template's arch to correctly decide where to provision the VM (on strictly arch-matching hosts/clusters) & for other lifecycle operations, such as migration from/to arch-matching hosts (see the sketch below)
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
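A minimal sketch of the strict arch-matching rule referenced above; the Host type and field names are illustrative, not CloudStack's:
```java
import java.util.List;
import java.util.stream.Collectors;

// Only hosts whose reported arch equals the template's arch are deployment
// candidates; a missing template arch defaults to amd64, as described above.
public class ArchFilterSketch {
    static class Host {
        final long id;
        final String arch;
        Host(long id, String arch) { this.id = id; this.arch = arch; }
    }

    public static List<Host> eligibleHosts(List<Host> hosts, String templateArch) {
        String arch = (templateArch == null || templateArch.isEmpty()) ? "amd64" : templateArch;
        return hosts.stream()
                .filter(h -> arch.equalsIgnoreCase(h.arch))
                .collect(Collectors.toList());
    }
}
```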
This is a simple NAS backup plugin for KVM which may later be expanded to other hypervisors. The plugin aims to use shared NAS storage on KVM hosts, such as NFS (or CephFS and others in the future), to back up fully cloned VMs for backup & restore operations. This may NOT be as efficient and performant as some of the other B&R providers, but it may be useful for KVM environments that are okay with only full-instance backups and limited functionality.
Design & implementation follow the `networker` B&R plugin, which is simply:
- Implement B&R plugin interfaces
- Use the cmd-answer pattern to execute backup and restore operations on the KVM host when the VM is running (or needs to be restored) - instead of a B&R API client, it relies on answers from the KVM agent, which executes the operations
- Backups are full VM domain snapshots, copied to VM-specific folders on a NAS target (NFS) along with the domain XML
- Backup uses the libvirt feature https://libvirt.org/kbase/live_full_disk_backup.html orchestrated via a virsh/bash script (nasbackup.sh), as libvirt-java lacks the bindings (see the sketch below)
- Supported instance volume storage for restore operations: NFS & local storage
Refer to the doc PR for feature limitations and usage details:
https://github.com/apache/cloudstack-documentation/pull/429
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
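A minimal sketch of the virsh orchestration, based on the libvirt kbase article linked above; the actual plugin drives this through nasbackup.sh, so the exact command layout here is an assumption:
```java
// Since libvirt-java lacks bindings for the backup API, the agent shells out
// to virsh. backup-begin starts a full disk backup job; the domain XML is
// saved alongside the disk copies for restore.
public class NasBackupSketch {
    public static void backupDomain(String domain, String targetDir) throws Exception {
        new ProcessBuilder("virsh", "backup-begin", domain)
                .inheritIO().start().waitFor();
        new ProcessBuilder("bash", "-c",
                "virsh dumpxml " + domain + " > " + targetDir + "/" + domain + ".xml")
                .inheritIO().start().waitFor();
    }
}
```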
This PR makes sure a KVM VM gets the UUID of the VM as a static serial number through SMBIOS.
Some applications, primarily on Windows servers, require a stable serial number for licensing purposes. By providing this serial number, we can make sure these applications can have a license configured.
More information: https://libvirt.org/formatdomain.html#smbios-system-information
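A sketch of the domain XML fragment this maps to, per the libvirt link above; the helper class is illustrative:
```java
// Exposes the VM UUID as the SMBIOS system serial so guests see a stable
// serial number. The <os> element must also select it via
// <smbios mode='sysinfo'/>.
public class SmbiosSerialSketch {
    public static String sysinfoXml(String vmUuid) {
        return "<sysinfo type='smbios'>"
                + "<system><entry name='serial'>" + vmUuid + "</entry></system>"
                + "</sysinfo>";
    }
}
```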
If the libvirt mount point was still busy and couldn't be unmounted right away, the code waited 5 seconds and tried a plain unmount, without cleaning up the libvirt storage pool.
This left libvirt thinking the storage pool was active and mounted (which it wasn't).
Now, after the plain unmount call, the libvirt storage pool is also destroyed.
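A minimal sketch of the added cleanup using libvirt-java; names and error handling are simplified:
```java
import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import org.libvirt.StoragePool;

// After the plain unmount fallback, also destroy the libvirt pool so libvirt
// no longer believes it is active and mounted.
public class PoolCleanupSketch {
    public static void destroyPool(Connect conn, String poolUuid) {
        try {
            StoragePool pool = conn.storagePoolLookupByUUIDString(poolUuid);
            pool.destroy(); // marks the pool inactive in libvirt
        } catch (LibvirtException e) {
            // pool may already be gone; log and continue
        }
    }
}
```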
- mTLS implementation for cluster service communication
- Listen only on the specified cluster node IP address instead of all interfaces
- Validate that incoming cluster service requests are from peer management servers, based on the server's certificate DNS name, which can be set through the global config - ca.framework.cert.management.custom.san
- Hardening of KVM command wrapper script execution
- Improve API server integration port check
- cloudstack-management.default: don't have JMX configuration if not needed. JMX is used for instrumentation; users who need to use it should enable it explicitly
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Mitigation for non-scalable PowerFlex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections; it checks the client limit, and prepares and unprepares SDC on the hosts.
- Added commands to prepare and unprepare storage clients, which prepare/start and stop the SDC service respectively on the hosts.
- Introduced the config 'storage.pool.connected.clients.limit' at the storage level for client limits, currently supported for PowerFlex only.
* test issues fixed
* refactor / improvements
* lock with powerflex systemid while checking connections limit
* updated powerflex systemid lock to hold till sdc preparation
* Added custom stats support for storage pool, through listStoragePools API
* code improvements, and unit tests
* unit tests fixes
* Update config 'storage.pool.connected.clients.limit' to dynamic, and some improvements
* Stop SDC on host after migration if no volumes mapped to host
* Wait for SDC to connect after scini service start, and some log improvements
* Do not throw exception (log it) when SDC is not connected while revoking access for the powerflex volume
* some log improvements
* Create/Export OVA file of the VM on external vCenter host, to temporary conversion location (NFS)
* Fixed ova issue on untar/extract ovf from ova file
"tar -xf" cmd on ova fails with "ovf: Not found in archive" while extracting ovf file
* Updated VMware to KVM instance migration using OVA
* Refactoring and cleanup
* test fixes
* Consider zone wide pools in the destination cluster for instance conversion
* Remove local storage pool support as temporary conversion location
- OVA export is not possible as the pool is not accessible outside the host; NFS pools are supported.
* cleanup unused code
* some improvements, and refactoring
* import nic unit tests
* vmware guru unit tests
* Separate clone VM and create template file for VMware migration
- Export OVA (of the cloned VM) to the conversion location takes time.
- Do any validations with cloned VM before creating the template (and fail early).
- Updated unit tests.
* Check conversion support on host before clone vm / create template on vmware (and fail early)
* minor code improvements
* Auto select the host with instance conversion capability
* Skip instance conversion supported response param for non-KVM hosts
* Show supported conversion hosts in the UI
* Skip persistence map update if network doesn't exist
* Added support to export OVA from KVM host, through ovftool (when installed in KVM host)
* Updated importvm api param 'usemsforovaexport' to 'forcemstodownloadvmfiles', to be generic
* Updated hardcoded UI messages with message labels
* Updated UI to support importvm api param - forcemstodownloadvmfiles
* Improved instance conversion support checks on Ubuntu hosts, and for Windows guest VMs
* Use OVF template (VM disks and spec files) for instance conversion from VMware, instead of OVA file
- this would further increase the migration performance (as it reduces the time for OVA preparation / archiving of the VM files into a single file)
* OVF export tool parallel threads code improvements
* Updated 'convert.vmware.instance.to.kvm.timeout' config default value to 3 hrs
* Config values check & code improvements
* Updated import log, with time taken and vm details
* Support for parallel downloads of VMware VM disk files while exporting OVF from MS, and other changes below.
- Skip clone for powered off VMs
- Fixes to support standalone host (with its default datacenter)
- Some code improvements
* rebase fixes
* rebase fixes
* minor improvement
* code improvements - threads configuration, and api parameter changes to import vm files
* typo fix in error msg
* Ability to specify NFS mount options while adding a primary storage and modify it later
* Pull 8947: Rename all occurrences of nfsopt to nfsMountOpt and added nfsMountOpts to ApiConstants
* Pull 8947: Refactor code - move into separate methods
* Pull 8947: CollectionsUtils.isNotEmpty and switch statement in LibvirtStoragePoolDef.java
* Pull 8947: UI - cancel maintenance will remount the storage pool and apply the options
* Pull 8947: UI - moved edit NFS mount options to edit Primary Storage form
* Pull 8947: UI - moved 'NFS Mount Options' to below 'Type' in dataview
* Pull 8947: Fixed message in AddPrimaryStorage.vue
* Pull 8947: Convert _nfsmountOpts to Set in libvirtStoragePoolDef
* Pull 8947: Throw exception and log error if mount fails due to incorrect mount option
* Pull 8947: Added UT and moved integration test to component/maint
* Pull 8947: Review comments
* Pull 8947: Removed password from integration test
* Pull 8947: move details allocation to inside the if block in getStoragePoolNFSMountOpts
* Pull 8947: Fixed a bug in AddPrimaryStorage.vue
* Pull 8947: Pool should remain in maintenance mode if mount fails
* Pull 8947: Removed password from integration test
* Pull 8947: Added UT
* Pull 8875: Fixed a bug in CloudStackPrimaryDataStoreLifeCycleImplTest
* Pull 8875: Fixed a bug in LibvirtStoragePoolDefTest
* Pull 8947: minor code restructuring
* Pull 8947 : added some ut for coverage
* Fix LibvirtStorageAdapterTest UT
* Updates to change Pure and Primera to host-centric vlun assignments; various small bug fixes
* update to add timestamp when deleting pure volumes to avoid future conflicts
* update to migrate to properly check disk offering is valid for the target storage pool
* improve error handling when copying volumes to add precision to which step failed
* rename Pure volume before delete to avoid conflicts if the same name is used before it's expunged on the array
* remove dead code in AdaptiveDataStoreLifeCycleImpl.java
* Fix issues found in PR checks
* fix session refresh TTL logic
* updates from PR comments
* logic to delete by path ONLY on supported OUI
* fix to StorageSystemDataMotionStrategy compile error
* change noisy debug message to trace message
* fix double callback call in handleVolumeMigrationFromNonManagedStorageToManagedStorage
* fix for flash array delete error
* fix typo in StorageSystemDataMotionStrategy
* change copyVolume to use writeback to speed up copy ops
* remove returning PrimaryStorageDownloadAnswer when connectPhysicalDisk returns false during KVMStorageProcessor template copy
* remove change to only set UUID on snapshot if it is a vmSnapshot
* reverting change to UserVmManagerImpl.configureCustomRootDiskSize
* add error checking/simplification per comments from @slavkap
* Update engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* address PR comments from @sureshanaparti
---------
Co-authored-by: GLOVER RENE <rg9975@cs419-mgmtserver.rg9975nprd.app.ecp.att.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Added timeout config to copy the disks of a remote KVM instance while importing the instance from an external host
* Updated copy config units to mins
* Cleanup remote converted file and local file when copy failed
* kvm: replace ISO path in vm XML configuration during vm migration
* Update 9212: address comments
* kvm: fix vm migration if there are multiple image stores
This PR addresses issue #8789.
The original issue is that the disconnectPhysicalDiskByPath() implementation in FibreChannelAdaptor always returned true, irrespective of the success of the operation. This was already fixed in PR #8889.
Ideally this method should be called after choosing the right adaptor based on the storage pool type of the volume path, but currently it is just called in a loop:
05b9b6e2e7/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java (L200-L212)
Fixing the loop over all adaptors by somehow passing the storage pool type to the calling cleanup() method would touch code all over the place (which I fear would create other regressions), so instead I feel we can keep the current approach, since the FibreChannel adaptor is already fixed.
In this PR I've added Javadoc explaining the method and the situation.
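A simplified sketch of the loop in question: because every adaptor is tried in turn, each adaptor must return true only when it actually owned and disconnected the path (the interface here is trimmed down from the real one):
```java
import java.util.List;

public class AdaptorLoopSketch {
    interface StorageAdaptor {
        boolean disconnectPhysicalDiskByPath(String path);
    }

    // Tries each adaptor until one reports it handled the path; an adaptor
    // that unconditionally returns true (the FibreChannel bug above) would
    // short-circuit this loop incorrectly.
    public static boolean disconnect(List<StorageAdaptor> adaptors, String path) {
        for (StorageAdaptor adaptor : adaptors) {
            if (adaptor.disconnectPhysicalDiskByPath(path)) {
                return true;
            }
        }
        return false;
    }
}
```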
* Add ability to set cpu.threadspercore similar to the existing cpu.corespersocket (see the sketch below)
* add cpu.threadspercore to VM and template detail options
* Update plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* add vm detail for KVM
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
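A minimal sketch of how the two details combine into libvirt's CPU topology element; deriving the socket count from the remaining vCPUs is an assumption here:
```java
// Renders libvirt's <topology> element from the vCPU count and the
// cpu.corespersocket / cpu.threadspercore details described above.
public class CpuTopologySketch {
    public static String topologyXml(int vcpus, int coresPerSocket, int threadsPerCore) {
        int sockets = vcpus / (coresPerSocket * threadsPerCore);
        return String.format("<topology sockets='%d' cores='%d' threads='%d'/>",
                sockets, coresPerSocket, threadsPerCore);
    }
}
```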
This fixes a limitation on arm64/aarch64 KVM hosts so that the product name is correctly exported via the sysconfig attribute. Without this, `cloud-init` doesn't function correctly on arm64 platforms.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Add a global setting to control whether redirection is allowed while
downloading templates and volumes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Add a global setting to control whether redirection is allowed while
downloading templates and volumes
core: some changes in SimpleHttpMultiFileDownloader
similar to HttpTemplateDownloader
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit b1642bc3bf)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Support KVM storage implementations controlling logical/physical block io size
* Support custom block size during disk attach
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
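A minimal sketch of the libvirt `<blockio>` element this maps to; the helper is illustrative, not the actual plugin code:
```java
// A storage implementation reports its logical/physical block sizes and the
// disk definition carries them into the guest via libvirt's <blockio> element.
public class BlockIoSketch {
    public static String blockIoXml(long logicalBytes, long physicalBytes) {
        return String.format(
                "<blockio logical_block_size='%d' physical_block_size='%d'/>",
                logicalBytes, physicalBytes);
    }
}
```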