* vmware: delete snapshot disk after backup to secondary storage
WIP - This ensures that the worker VM is destroyed along with any of its own
disks that are backed up to secondary storage.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix for volume backup and confusing vm var name
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* tag as worker vm
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix copy systemvm.iso for same version
For VMware, systemvm.iso is copied from the management server to the secondary store. Currently, the server only checks whether the desired file is present on the secondary store; if it is not present, the ISO is copied.
This change adds a checksum comparison between the source and destination ISOs, which allows copying a new ISO when there is a mismatch (see the sketch below).
This is useful when the file is corrupted on the secondary store, or when a new systemvm.iso is generated for a dev environment.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
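A minimal sketch of the checksum comparison this change describes; the helper names and the MD5 choice are illustrative assumptions, not the actual CloudStack code.
```
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class SystemVmIsoCheck {

    // Hypothetical helper: compute a hex MD5 digest of a file.
    static String checksum(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    // Copy only when the destination ISO is absent or its checksum differs
    // from the source, instead of checking for presence alone.
    static boolean needsCopy(Path sourceIso, Path destIso) throws Exception {
        return !Files.exists(destIso) || !checksum(sourceIso).equals(checksum(destIso));
    }
}
```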
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* server: do not remove volume from DB if it fails to expunge from primary or secondary storage
* server/VolumeApiServiceImpl.java: move to method
* update #5373
* Fix creating volumes from snapshots without backup
When several snapshots are created only on primary storage and one tries to create
a volume or a template from a snapshot, only the first operation
succeeds. This is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it did not cover creating a volume from a snapshot
that is kept only on Ceph.
* Address review
Worker VM tags are missing for a few cloned VMs in VMware, so these VMs are skipped when tracking and cleaning up worker VMs. Adding proper worker VM tags to these VMs makes them trackable from CloudStack.
* server: skip zone check for PERHOST iso during attachIso
Hypervisor tools ISOs (vmware-tools.iso, xs-tools.iso) are marked as PERHOST in the DB. They are active but not downloaded to the secondary storages, and hence have no template-zone entry.
Skips the template-zone check for such templates, as sketched below.
Fixes #5265
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
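A minimal sketch of the skipped check with self-contained stand-ins; the enum, the map-based lookup, and the method names are assumptions for illustration, not the actual TemplateManager code.
```
import java.util.Map;

public class AttachIsoZoneCheck {

    enum TemplateType { USER, PERHOST, SYSTEM }

    // Hypothetical stand-in for the template-zone reference lookup.
    static boolean hasTemplateZoneEntry(Map<Long, Boolean> templateZoneRef, long templateId) {
        return templateZoneRef.getOrDefault(templateId, false);
    }

    // PERHOST ISOs (vmware-tools.iso, xs-tools.iso) live on hosts, not on
    // secondary storage, so they have no template-zone entry and the zone
    // check must be skipped for them.
    static void checkTemplateAvailability(TemplateType type, long templateId,
            Map<Long, Boolean> templateZoneRef) {
        if (type == TemplateType.PERHOST) {
            return; // skip template-zone check for hypervisor tools ISOs
        }
        if (!hasTemplateZoneEntry(templateZoneRef, templateId)) {
            throw new IllegalStateException("ISO is not available in the requested zone");
        }
    }
}
```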
* inverted check
* use constants in TemplateManager
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
On newer libvirt/qemu it seems PCI hot-plugging could be an issue as
seen in:
https://www.suse.com/support/kb/doc/?id=000019383
https://bugs.launchpad.net/nova/+bug/1836065
This was found to be true on the ARM64/aarch64 platform (tested on a
RaspberryPi4). The default machine doc advises pre-allocating PCI
controllers on the machine, and a pcie-to-pci-bridge based controller for
legacy PCI models:
https://libvirt.org/pci-hotplug.html#x86_64-q35
This patch introduces the concept as a workaround until a proper fix is
done (ideally in the upstream libvirt/qemu projects). Until then, client
code can add 32 PCI controllers and a pcie-to-pci-bridge controller for
aarch64 platforms, as sketched below.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
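A minimal sketch of the workaround's output; the controller count of 32 and the XML shape follow the linked libvirt doc, while the class and method names are illustrative, not the actual CloudStack code.
```
public class AArch64PciControllers {

    // Pre-allocate pcie-root-port controllers plus one pcie-to-pci-bridge so
    // devices can be attached later without hot-plugging new PCI controllers.
    static String buildPciControllerXml(int rootPorts) {
        StringBuilder xml = new StringBuilder();
        for (int i = 0; i < rootPorts; i++) {
            xml.append("<controller type='pci' model='pcie-root-port'/>\n");
        }
        // One bridge for legacy PCI devices on the machine type.
        xml.append("<controller type='pci' model='pcie-to-pci-bridge'/>\n");
        return xml.toString();
    }

    public static void main(String[] args) {
        System.out.print(buildPciControllerXml(32));
    }
}
```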
* server: fix failed to apply userdata when enable static nat
* server: fix cannot expunge vm as applyUserdata fails
* configdrive: fix ISO not being recognized when plugging a new nic
* configdrive: detach and attach the configdrive ISO, as it is changed when plugging a new nic or migrating a vm
* configdrive test: (1) the password file does not exist in the recreated ISO; (2) the vm hostname should be changed after migration
* configdrive: use centos55 template with sshkey and configdrive support
* configdrive: disklabel is 'config-2' for configdrive ISO
* configdrive: use copy for configdrive ISO and move for other template/volume/iso
* configdrive: use public-keys.txt
* configdrive test: fix (1) update_template ; (2) ssh into vm by keypair
* vxlan: arp does not work between hosts as the multicast group is communicated over the physical nic instead of the linux bridge
When a linux bridge is set up (refer to http://docs.cloudstack.apache.org/projects/archived-cloudstack-getting-started/en/latest/networking/vxlan.html#configure-product-to-use-vxlan-plugin) and used as the kvm traffic label of physical networks, the vms on different hosts cannot reach each other.
(1) This does not work:
```
/usr/share/cloudstack-common/scripts/vm/network/vnet/modifyvxlan.sh -v 1001 -p eth1 -b brvx-1001 -o add
```
"bridge fdb" shows
```
00:00:00:00:00:00 dev vxlan1001 dst 239.0.3.233 via eth1 self permanent
```
(2) This works:
```
/usr/share/cloudstack-common/scripts/vm/network/vnet/modifyvxlan.sh -v 1001 -p cloudbr1 -b brvx-1001 -o add
```
"bridge fdb" shows
```
00:00:00:00:00:00 dev vxlan1001 dst 239.0.3.233 via cloudbr1 self permanent
```
* vxlan: fix issue if kvm network label is not set
This PR fixes the problem of the chain info not being updated, or being set to null, after volume migrations.
Problem: While fetching the volume chain info, the management server assumes the datastore name to be a UUID (this is true only for NFS storages added by CloudStack), but a datastore can have any name.
Solution: To fetch the volume chain info, use the datastore name instead of the UUID (see the sketch after the list below).
The fix is made in the flow of the following API operations:
- migrateVirtualMachine
- migrateVirtualMachineWithVolume
- migrateVolume
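A minimal sketch of the solution, assuming the standard vSphere "[datastore] path" form for the chain info; the method and parameter names are illustrative, not the actual CloudStack code.
```
public class VolumeChainInfoPath {

    // Build the datastore path for the chain info from the datastore's actual
    // name. Before the fix, the storage pool UUID was used here, which only
    // matches NFS datastores that CloudStack itself created.
    static String chainInfoPath(String datastoreName, String vmdkFileName) {
        return String.format("[%s] %s", datastoreName, vmdkFileName);
    }

    public static void main(String[] args) {
        System.out.println(chainInfoPath("my-ds-01", "i-2-10-VM/ROOT-10.vmdk"));
    }
}
```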
* Fix some UEFI-related issues
1 - fix attach/detach ISO for a VM with the UEFI boot type
2 - if the OS type of an ISO is categorized as "Other", the bus type of the disk
will be set to "sata"
* Simplify the validation of OS types
This PR fixes the issue of a missing fcd folder in local storage in the case of VMware vSphere.
With this fix, a folder named fcd is created whenever local storage is initiated.
* server: destroy ssvm, cpvm on last host maintenance
When a single or last UP host enters maintenance, just stopping the SSVM and CPVM will leave those VMs behind on the hypervisor side. As these system VMs will be recreated anyway, they can be destroyed.
Fixes #3719
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix methods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* immediately destroy systemvms
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix destroy
Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance (see the sketch below).
The flag is set to true only for the delete commands for the SSVM and CPVM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
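A minimal sketch of the flag's shape; the surrounding agent logic is omitted and the class body is illustrative, not the actual Command.java.
```
public abstract class Command {

    private boolean bypassHostMaintenance = false;

    // The agent consults this before rejecting commands on a host in
    // maintenance: reject only when the flag is false.
    public boolean isBypassHostMaintenance() {
        return bypassHostMaintenance;
    }

    // Set to true only for SSVM/CPVM delete commands so that they can still
    // be processed while the host is in maintenance.
    public void setBypassHostMaintenance(boolean bypass) {
        this.bypassHostMaintenance = bypass;
    }
}
```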
* unit test fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing return statement
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
The VM should be stopped with cleanup before calling expunge, else the server may throw an error when the host is in the PrepareForMaintenance state.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* rename
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Support for a datastore cluster as primary storage already exists, but changes to the datastore cluster at vCenter, like the addition or removal of a datastore, are not synchronised with CloudStack directly; that used to require removing the primary storage from CloudStack and adding it again.
Here, synchronisation of the datastore cluster is fixed without the need to remove or re-add the datastore cluster.
1. A new API, syncStoragePool, is introduced, which takes the datastore cluster storage pool UUID as the parameter. This API checks whether there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server creates a new child storage pool in the database under the datastore cluster. If the new child storage pool was already added as an individual storage pool, the existing storage pool entry is converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastores; a sketch of the reconciliation follows below.
Fixes: #4808, #4941
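A minimal sketch of the reconciliation in points 2 and 3, with plain collections standing in for the storage pool tables and the vCenter inventory; all names here are illustrative assumptions.
```
import java.util.Map;
import java.util.Set;

public class DatastoreClusterSync {

    // dbChildPools maps a child pool name to its parent cluster pool id
    // (null means it is an individual, standalone pool).
    static void syncStoragePool(Set<String> vcenterChildDatastores,
            Map<String, String> dbChildPools, String clusterPoolId) {
        // Point 2: a new child datastore on vCenter becomes (or an existing
        // individual pool is converted to) a child pool under the cluster.
        for (String ds : vcenterChildDatastores) {
            dbChildPools.put(ds, clusterPoolId);
        }
        // Point 3: a child pool whose datastore is gone from vCenter is
        // detached into an individual storage pool.
        for (Map.Entry<String, String> e : dbChildPools.entrySet()) {
            if (clusterPoolId.equals(e.getValue())
                    && !vcenterChildDatastores.contains(e.getKey())) {
                e.setValue(null);
            }
        }
    }
}
```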
This PR adds a force flag to the attachIso/detachIso commands, especially for VMware, where it is noticed that detaching an ISO, or attaching an ISO when another one is already present, fails to perform the necessary operation. This is because, from the ACS end, we either answer the question returned by ESXi for the CDRom disconnect operation as No (for the detach operation) or do not answer the question at all (for the attach operation).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
A VM can have multiple ROOT volumes, and some can be on a zone-wide store; therefore, iterate over all of them until a cluster ID is found (see the sketch below).
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
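A minimal sketch of the iteration, with records standing in for the volume and pool lookups; the types and the null-clusterId convention for zone-wide pools are assumptions for illustration.
```
import java.util.List;

public class VmClusterLookup {

    record Volume(String type, Long poolId) {}
    record Pool(Long id, Long clusterId) {} // clusterId null for zone-wide pools

    // Iterate over every ROOT volume until one sits on a cluster-scoped pool;
    // volumes on zone-wide stores yield no cluster ID and are skipped.
    static Long findVmClusterId(List<Volume> volumes, List<Pool> pools) {
        for (Volume v : volumes) {
            if (!"ROOT".equals(v.type())) {
                continue;
            }
            for (Pool p : pools) {
                if (p.id().equals(v.poolId()) && p.clusterId() != null) {
                    return p.clusterId();
                }
            }
        }
        return null;
    }
}
```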
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake; VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster during a stopped VM migration. Also, find the target datastore using the host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Adds new/missing guest OS mappings for XCP-ng/XenServer 8.1
Copies guest OS mappings from XCP-ng/XenServer 8.1 to XCP-ng/XenServer 8.2
Adds the Ubuntu 20.04 guest OS mapping for XCP-ng/XenServer 8.2
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #4517
Adds capacity checks for the RandomAllocator (host allocator).
Factors out the host CPU capability and capacity checks with respect to the service offering into CapacityManager (a rough sketch follows below).
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
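A minimal sketch of the shape of the factored-out check; the records and the fields compared are illustrative assumptions, not the actual CapacityManager code.
```
public class HostCapacityCheck {

    record Host(int cpuCores, int cpuSpeedMhz, long freeRamBytes) {}
    record ServiceOffering(int cpuCores, int cpuSpeedMhz, long ramBytes) {}

    // A host qualifies only if it is capable of running the offering (enough
    // cores and CPU speed) and currently has the free capacity for it.
    static boolean hostHasCpuCapabilityAndCapacity(Host host, ServiceOffering offering) {
        boolean capable = host.cpuCores() >= offering.cpuCores()
                && host.cpuSpeedMhz() >= offering.cpuSpeedMhz();
        boolean hasCapacity = host.freeRamBytes() >= offering.ramBytes();
        return capable && hasCapacity;
    }
}
```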
This PR fixes the k8s test failures noticed on vmware.
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes a small bug when explicitly setting VM hardware versions lower than version 10.
VMware expects the hardware version in the format vmx-DD, where DD is a two-digit representation of the virtual hardware version. For hardware versions lower than 10, CloudStack was not using two digits for the hardware version number (vmx-8, for example, instead of vmx-08), which resulted in an error while creating worker VMs.
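A minimal sketch of the two-digit formatting fix; the class and method names are illustrative.
```
public class HardwareVersionFormat {

    // Zero-pad the virtual hardware version so 8 becomes vmx-08, not vmx-8.
    static String toVmxVersion(int version) {
        return String.format("vmx-%02d", version);
    }

    public static void main(String[] args) {
        System.out.println(toVmxVersion(8));  // vmx-08
        System.out.println(toVmxVersion(13)); // vmx-13
    }
}
```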
This PR fixes #4244:
- deploying VMs from ISOs and from templates with the UEFI boot type
- deploying VMs from ISOs and from templates with the UEFI boot type, with volumes in RAW format
Fixes #4729
As reported in the issue, ACS 4.7 used a normal UUID in the db for a pre-setup primary store on XenServer.
Later, the value was changed to the store's path with '/' removed.
The current changes try to retrieve the SR's name-label from the store's path if the UUID doesn't match the path field for a pre-setup store.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes the issue pertaining to volume resize on VMware for deploy-as-is templates. VMware deploy-as-is templates are those that are deployed as per the specification in the imported OVF; hence, an overridden root disk size will not be adhered to for such templates. Moreover, when we deploy VMs in stopped state and resize the volume, the root disk doesn't get resized; the volume size is merely updated in the DB.
This PR also includes the following (for deploy-as-is templates):
- Disables overriding the root disk size during VM deployment on the UI
- Disables selection of compute offerings with a root disk size specified, at the time of deployment
- Provides users with the option to deploy a VM in stopped state via the UI (so as to give users an option to resize the volumes before starting the VM)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>