* Use dual zones for CI GitHub Actions
* Update advdualzone.cfg to be similar to advanced.cfg & fixup test_metrics_api.py
* Fixup e2e tests for running with multiple zones
* Add e2e tests for listing of accounts, disk_offerings, domains, hosts, service_offerings, storage_pools, volumes
* Fixup
* another fixup
* Add test for listing volumes with tags filter
* Add check for existing volumes in test_list_volumes
* Wait for volumes to be deleted on cleanup
* Filter out volumes in Destroy state before checking the count of volumes
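A minimal sketch of the counting logic the last two commits describe, assuming marvin's Volume helper and an apiclient fixture from the test class (the helper name is illustrative):
```python
from marvin.lib.base import Volume

def count_active_volumes(apiclient, zone_id):
    # Volumes already queued for expunging still show up in listVolumes,
    # so drop entries in the Destroy state before asserting on the count.
    volumes = Volume.list(apiclient, zoneid=zone_id, listall=True) or []
    return len([v for v in volumes if v.state != "Destroy"])
```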
* CKS: retry if unable to drain node or unable to upgrade k8s node
I tried the CKS upgrade 16 times; 11 of the 16 upgrades succeeded.
2 of the 16 upgrades failed due to:
```
error: unable to drain node "testcluster-of7974-node-18c8c33c2c3" due to error:[error when evicting pods/"cloud-controller-manager-5b8fc87665-5nwlh" -n "kube-system": Post "https://10.0.66.18:6443/api/v1/namespaces/kube-system/pods/cloud-controller-manager-5b8fc87665-5nwlh/eviction": unexpected EOF, error when evicting pods/"coredns-5d78c9869d-h5nkz" -n "kube-system": Post "https://10.0.66.18:6443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-h5nkz/eviction": unexpected EOF], continuing command...
```
3 of the 16 upgrades failed due to:
```
Error from server: error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "kubernetes-dashboard", Namespace: "kubernetes-dashboard"
from server for: "/mnt/k8sdisk//dashboard.yaml": etcdserver: leader changed
```
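The fix itself lives in CloudStack's CKS Java code; as a rough Python illustration of the retry idea (the kubectl flags are standard, the retry counts are made up):
```python
import subprocess
import time

def drain_node_with_retries(node_name, attempts=3, delay_seconds=30):
    # Transient errors such as "unexpected EOF" during pod eviction or
    # "etcdserver: leader changed" usually clear after a short wait, so
    # retry the drain instead of failing the whole cluster upgrade.
    cmd = ["kubectl", "drain", node_name,
           "--ignore-daemonsets", "--delete-emptydir-data", "--timeout=60s"]
    for _ in range(attempts):
        if subprocess.run(cmd).returncode == 0:
            return True
        time.sleep(delay_seconds)
    return False
```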
* CKS: remove tests of creating/deleting HA clusters as they are covered by the upgrade test
* Update PR 8402 as suggested
* test: remove CKS cluster if fail to create or verify
* veeam: detach only the restored volume during backup restore
Steps to reproduce the issue:
1. Create a VM (A) with a ROOT and a DATA disk
2. Assign it to a backup offering
3. Create a backup
4. Create another VM (B)
5. Restore the DATA disk of VM A and attach it to VM B
6. When the operation is done, check the datastore
Without this change, the ROOT image is not removed and is left over on the datastore.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-dfb6f21c-a941-49db-9963-4f0286a17dac
total 1784840
-rw------- 1 root root 5242880000 Jan 24 09:23 ROOT-722_2-flat.vmdk
-rw------- 1 root root 499 Jan 24 09:23 ROOT-722_2.vmdk
```
With this change, the whole temporary VM is destroyed:
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2
ls: /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2: No such file or directory
```
* veeam: fix wrong disk size in debug message
* veeam: sync backup repository after operations are done
Some operations that actually succeeded still raised an exception, due to the following error:
```
2024-01-19 10:59:52,846 DEBUG [o.a.c.b.v.VeeamClient] (API-Job-Executor-42:ctx-716501bb job-4373 ctx-2359b76d) (logid:b5e19a17) Veeam response for PowerShell commands [PowerShell Import-Module Veeam.Backup.PowerShell -WarningAction SilentlyContinue;$restorePoint = Get-VBRRestorePoint ^| Where-Object { $_.Id -eq '1d99106a-b5c8-4a1e-958d-066a987caa5f' };if ($restorePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false;$repo = Get-VBRBackupRepository;Sync-VBRBackupRepository -Repository $repo;} else { ; Write-Output 'Failed to delete'; Exit 1;}] is: [^M
Restore Type Job Name State Start Time End Time Description ^M
------------ -------- ----- ---------- -------- ----------- ^M
ConfResynchronize Configuration Dat... Starting 19/01/2024 10:59:52 01/01/1900 00:00:00 ^M
^M
^M
Remove-VBRRestorePoint : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. ^M
Contact Microsoft Customer Support Services.^M
At line:1 char:196^M
+ ... orePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false ...^M
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^M
+ CategoryInfo : ReadError: (:) [Remove-VBRRestorePoint], HostException^M
+ FullyQualifiedErrorId : ReadConsoleOutput,Veeam.Backup.PowerShell.Cmdlets.RemoveVBRRestorePoint^M
^M
].
```
* veeam: fix failure to detach a volume after restoring a backup, attaching it to a VM, and then detaching the volume
It also happened when destroying the original or the backup VM:
```
2024-01-24 10:10:03,401 ERROR [c.c.s.r.VmwareStorageProcessor] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) Failed to detach volume!
java.lang.RuntimeException: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
at com.cloud.hypervisor.vmware.mo.DatastoreMO.moveDatastoreFile(DatastoreMO.java:290)
at com.cloud.storage.resource.VmwareStorageLayoutHelper.syncVolumeToRootFolder(VmwareStorageLayoutHelper.java:241)
at com.cloud.storage.resource.VmwareStorageProcessor.attachVolume(VmwareStorageProcessor.java:2150)
at com.cloud.storage.resource.VmwareStorageProcessor.dettachVolume(VmwareStorageProcessor.java:2408)
at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:174)
at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:71)
at com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:589)
at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2024-01-24 10:10:03,402 INFO [c.c.h.v.u.VmwareHelper] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) [ignored]failed to get message for exception: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
```
* vmware: create restored volume with new UUID and attach to VM
This PR fixes several issues found while testing Veeam 11 and Veeam 12:
- Import Veeam.Backup.PowerShell and silently ignore the warning messages
- Fix issue when assigning a VM to backup offerings, caused by the separator (\r\n)
- Fix authorization failure in Veeam 12a, because v1_4 is no longer supported in Veeam 12a
- Fix exception if the backup name contains a space
- Fix backup metrics in Veeam 12, because the PowerShell command does not return the needed values
- Fix "Incorrect datetime value" errors, because the PowerShell command returns a datetime that is not supported in Java
- Fix issue during backup restoration if the VM has both ROOT and DATA disks
This PR also includes the following updates:
- Add integration test test/integration/smoke/test_backup_recovery_veeam.py
- Make some UI changes
- Add zone setting backup.plugin.veeam.version. If it is not set, CloudStack will get the Veeam version via PowerShell commands.
- Add zone settings backup.plugin.veeam.task.poll.interval and backup.plugin.veeam.task.poll.max.retry
With this change, a fix is added for failures seen in test_08_migrate_vm and other migration-related tests when the target host is in `Connecting` state; see
#8356 (comment)
#8374 (comment)
and more.
Failures seen in #7344 were debugged and found to occur because one of the hosts was in Alert state, which makes VM deployment with an affinity group fail.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes the test failures with the CKS HA-cluster upgrade.
In production, a CKS HA cluster should have at least 3 control VMs as well.
An etcd cluster requires 3 members to achieve reliable HA. The etcd daemons in the control VMs use the Raft protocol to determine the roles of nodes, and Raft needs a majority (floor(n/2) + 1 members) to make progress, so a 2-member cluster cannot tolerate the loss of even one member. During an upgrade of a CKS HA cluster with only 2 control VMs, etcd therefore becomes unreliable.
This PR adds the argument 'ipaddress' to the disassociateIpAddress API. An IP address can now be disassociated by passing the address directly instead of its ID.
Fixes: #8125
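A hypothetical usage sketch with marvin's generated API client, assuming an apiclient from a marvin test; the ipaddress parameter is the one this PR adds, and the address value is an example:
```python
from marvin.cloudstackAPI import disassociateIpAddress

cmd = disassociateIpAddress.disassociateIpAddressCmd()
cmd.ipaddress = "203.0.113.10"  # pass the address directly instead of cmd.id
apiclient.disassociateIpAddress(cmd)
```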
This pull request (PR) implements a Distributed Resource Scheduler (DRS) for a CloudStack cluster. The primary objective of this feature is to enable automatic resource optimization and workload balancing within the cluster by live migrating the VMs as per configuration.
Administrators can also execute DRS manually for a cluster, using the UI or the API.
Adds support for two algorithms - condensed & balanced. Algorithms are pluggable allowing ACS Administrators to have customized control over scheduling.
Implementation
There are three top-level components:
1. Scheduler - a timer task which:
   - generates DRS plans for clusters
   - processes DRS plans
   - removes old DRS plan records
2. DRS Execution - we go through each VM in the cluster and use the specified algorithm to check whether DRS is required and to calculate the cost, benefit & improvement of migrating that VM to another host in the cluster. On the basis of cost, benefit & improvement, the best migration is selected for the current iteration and the VM is migrated. The maximum number of iterations (live migrations) possible on the cluster is defined by drs.iterations, which is defined as a percentage (as a value between 0 and 1) of the total number of workloads.
3. Algorithm - every algorithm implements two methods:
   - needsDrs - to check whether DRS is required for the cluster
   - getMetrics - to calculate the cost, benefit & improvement of migrating a VM to another host
Two algorithms are provided:
- Condensed - packs all the VMs onto the minimum number of hosts in the cluster.
- Balanced - distributes the VMs evenly across the hosts in the cluster.
Algorithms use drs.level to decide the amount of imbalance to allow in the cluster.
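A rough Python sketch of the selection loop described above; the real implementation is Java inside CloudStack, and all names here are illustrative:
```python
def run_drs(cluster, algorithm, max_migrations):
    migrations = []
    for _ in range(max_migrations):
        if not algorithm.needs_drs(cluster):
            break
        best = None  # (improvement, vm, destination_host)
        for vm in cluster.vms:
            for host in cluster.hosts:
                if host == vm.host:
                    continue
                cost, benefit, improvement = algorithm.get_metrics(cluster, vm, host)
                # Only consider moves whose benefit outweighs their cost,
                # then keep the one with the best improvement.
                if benefit > cost and (best is None or improvement > best[0]):
                    best = (improvement, vm, host)
        if best is None:
            break
        _, vm, host = best
        vm.host = host  # apply the move so the next iteration sees the new layout
        migrations.append((vm, host))
    return migrations
```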
APIs Added
- listClusterDrsPlan
  - id - ID of the DRS plan to list
  - clusterid - to list plans for a cluster ID
- generateClusterDrsPlan
  - id - cluster ID
  - iterations - the maximum number of iterations in a DRS job, defined as a percentage (as a value between 0 and 1) of the total number of workloads. Defaults to the value of the cluster's drs.iterations setting.
- executeClusterDrsPlan
  - id - ID of the cluster for which the DRS plan is to be executed
  - migrateto - specifies the mapping between a VM and the host to migrate that VM to. Format: migrateto[vm-index].vm=<uuid>&migrateto[vm-index].host=<uuid>
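A hypothetical end-to-end call flow with marvin's generated command classes for the new APIs (apiclient and cluster_id are assumed fixtures):
```python
from marvin.cloudstackAPI import generateClusterDrsPlan, executeClusterDrsPlan

# Generate a plan first to preview the suggested migrations.
gen = generateClusterDrsPlan.generateClusterDrsPlanCmd()
gen.id = cluster_id          # cluster to plan for
gen.iterations = 0.5         # fraction of workloads considered for migration
plan = apiclient.generateClusterDrsPlan(gen)

# Then execute a plan against the cluster.
run = executeClusterDrsPlan.executeClusterDrsPlanCmd()
run.id = cluster_id
apiclient.executeClusterDrsPlan(run)
```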
Config Keys Added

| Config Key | Key | Scope | Default Value | Description |
| --- | --- | --- | --- | --- |
| ClusterDrsPlanExpireInterval | drs.plan.expire.interval | Global | 30 days | The interval in days after which old DRS records will be cleaned up. |
| ClusterDrsEnabled | drs.automatic.enable | Cluster | false | Enable/disable automatic DRS on a cluster. |
| ClusterDrsInterval | drs.automatic.interval | Cluster | 60 minutes | The interval in minutes after which a periodic background thread will schedule DRS for a cluster. |
| ClusterDrsIterations | drs.max.migrations | Cluster | 50 | Maximum number of live migrations in a DRS execution. |
| ClusterDrsAlgorithm | drs.algorithm | Cluster | condensed | DRS algorithm to execute on the cluster. This PR implements two algorithms: balanced & condensed. |
| ClusterDrsLevel | drs.imbalance | Cluster | 0.5 | Percentage (as a value between 0.0 and 1.0) of imbalance allowed in the cluster. 1.0 means no imbalance is allowed and 0.0 means imbalance is allowed. |
| ClusterDrsMetric | drs.imbalance.metric | Cluster | memory | The cluster imbalance metric to use when checking the drs.imbalance threshold. Possible values are memory and cpu. |
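For example, automatic DRS could be enabled per cluster with marvin's Configurations helper; this sketch assumes Configurations.update accepts a clusterid for cluster-scoped settings:
```python
from marvin.lib.base import Configurations

# Turn on automatic DRS and cap its migrations for one cluster
# (assumed clusterid keyword for cluster-scoped settings).
Configurations.update(apiclient, name="drs.automatic.enable", value="true", clusterid=cluster_id)
Configurations.update(apiclient, name="drs.max.migrations", value="10", clusterid=cluster_id)
```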
* marvin,test: fix directdownload template checksum test
When a VM deployment fails because of a template with a wrong checksum, the VM may be left in the Error state. This PR adds code to delete such a VM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove unnecessary logs
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #8034
Adds the following tests for a backed-up snapshot (original template and VM deleted beforehand):
- Create volume from snapshot
- Create a template from the snapshot and deploy a VM using it
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
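A condensed sketch of those checks using marvin helpers; fixtures such as apiclient, snapshot, account, service_offering, and the services dict are assumed from the test class:
```python
from marvin.lib.base import Volume, Template, VirtualMachine

# Create a volume from the backed-up snapshot.
volume = Volume.create_from_snapshot(apiclient, snapshot.id, services["volume"],
                                     account=account.name, domainid=account.domainid)
# Create a template from the snapshot, then deploy a VM with it.
template = Template.create_from_snapshot(apiclient, snapshot, services["template"])
vm = VirtualMachine.create(apiclient, services["virtual_machine"],
                           templateid=template.id,
                           serviceofferingid=service_offering.id)
```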
* VMware: add support for 8.0b (8.0.0.2)
* VMware 8: add new guest os mappings in VirtualMachineGuestOsIdentifier
The full list can be found at https://developer.vmware.com/apis/1355/vsphere
* VMware: get guest os mappings of parent version
* VMware8: remove guest os mappings for 8.0.0.2
* VMware8: fix code smells
* vmware: remove annotations in VmwareVmImplementerTest which caused 0.0% code coverage
* VMware8: add a unit test case
* VMware: add support for 8.0c (8.0.0.3)
* VMware8: move to CloudStackVersion.getVMwareParentVersion
* VMware: add support for 8.0u1 (8.0.1.0)
* Copy engine/schema/src/main/java/com/cloud/upgrade/GuestOsMapper.java from PR 6979
* Copy engine/schema/src/main/java/com/cloud/storage/dao/GuestOSHypervisorDao.java from PR 6979
* VMware: ignore the last number in VMware versions
* VMware: copy guest os mapping from 8.0 to 8.0.1
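One plausible reading of the parent-version fallback, sketched in Python (the real logic is CloudStackVersion.getVMwareParentVersion in Java; this helper is illustrative only):
```python
def vmware_parent_version(version: str):
    # Drop the last number so that e.g. "8.0.0.2" and "8.0.0.3" can fall
    # back to the guest OS mappings registered for their parent version.
    parts = version.split(".")
    return ".".join(parts[:-1]) if len(parts) > 2 else None

assert vmware_parent_version("8.0.0.2") == "8.0.0"
assert vmware_parent_version("8.0") is None
```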
* VMware: add unit tests in VmwareVmImplementerTest.java
* Copy engine/schema/src/test/java/com/cloud/upgrade/GuestOsMapperTest.java from PR 6979
* VMware8: retry vm poweron if fails due to exception "File system specific implementation of Ioctl[file] failed"
This fixes a weird issue on VMware 8. When powering on a VM, it sometimes fails due to the error:
```
2023-04-27 07:04:43,207 ERROR [c.c.h.v.r.VmwareResource] (DirectAgent-442:ctx-cdd42b03 10.0.32.133, job-105/job-106, cmd: StartCommand) (logid:8a24a607) StartCommand failed due to [Exception: java.lang.RuntimeException
Message: File system specific implementation of Ioctl[file] failed
].
java.lang.RuntimeException: File system specific implementation of Ioctl[file] failed
        at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
        at com.cloud.hypervisor.vmware.mo.VirtualMachineMO.powerOn(VirtualMachineMO.java:288)
```
in vmware.log on the ESXi host, it shows:
```
2023-04-27T09:20:41.713Z In(05)+ vmx - Power on failure messages: File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to lock the file
2023-04-27T09:20:41.713Z In(05)+ vmx - Cannot open the disk '/vmfs/volumes/7b29c876-ac102328/i-2-167-VM/ROOT-167.vmdk' or one of the snapshot disks it depends on.
2023-04-27T09:20:41.713Z In(05)+ vmx - Module 'Disk' power on failed.
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to start the virtual machine.
```
There is a KB article for it, but I still do not know the root cause or how to fix it properly:
https://kb.vmware.com/s/article/1004232
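An illustrative retry shape in Python (the actual change is in CloudStack's Java VMware resource code; the names and attempt count here are made up):
```python
RETRYABLE_MESSAGE = "File system specific implementation of Ioctl[file] failed"

def power_on_with_retry(vm, attempts=2):
    # Retry only on this specific, apparently transient, VMFS locking error.
    for attempt in range(attempts):
        try:
            vm.power_on()
            return
        except RuntimeError as error:
            if RETRYABLE_MESSAGE not in str(error) or attempt == attempts - 1:
                raise
```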
* VMware: extract to method powerOnVM
* vmware: fix mistake in logs
* vmware8: use curl instead of wget to fix test failures
```
Traceback (most recent call last):
  File "/root/test_internal_lb.py", line 555, in test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80
    self.execute_internallb_roundrobin_tests(vpc_offering)
  File "/root/test_internal_lb.py", line 641, in execute_internallb_roundrobin_tests
    client_vm, applb.sourceipaddress, max_http_requests)
  File "/root/test_internal_lb.py", line 497, in run_ssh_test_accross_hosts
    (e, clienthost.public_ip))
AssertionError: list index out of range: SSH failed for VM with IP Address: 10.0.52.187
```
and
```
sshClient: DEBUG: {Cmd: /usr/bin/wget -T3 -qO- --user=admin --password=password http://10.1.2.253:8081/admin?stats via Host: 10.0.52.188} {returns: ["/usr/bin/wget: '/usr/lib/libpcre.so.1' is not an ELF file", "/usr/bin/wget: can't load library 'libpcre.so.1'"]}
```
* VMware: correct guest OS names in hypervisor mappings for VMware 8.0
el9 and its variants were introduced by https://github.com/apache/cloudstack/pull/7059
and are supported with guest OS identifiers since VMware 8.0; see
see https://vdc-repo.vmware.com/vmwb-repository/dcr-public/c476b64b-c93c-4b21-9d76-be14da0148f9/04ca12ad-59b9-4e1c-8232-fd3d4276e52c/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html
* VMware: add Ubuntu 20.04 and 22.04 support for vmware 7.0+
* PR7380: only add guest os mappings for Ubuntu 20.04
* PR7380: Correct RHEL9 guest os names and others for VMware 8.0
* PR7380: correct guest os names on 8.0.0.1 as well
* PR7380: remove Windows 12 and Windows Server 2025 which are not released yet