Commit Graph

901 Commits

Author SHA1 Message Date
Wei Zhou 8d02e5f808
test: fix test/integration/smoke/test_register_userdata.py which caused networks not to be deleted (#9244)
by moving the test under a new domain/account instead of the ROOT account
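For context, a minimal marvin-style sketch of that approach (class name, service dictionary keys and setup details are assumptions, not the actual test):
```
# Hedged sketch only: give the test its own domain/account so that deleting the
# account also cleans up the networks it created, instead of leaving them under ROOT.
from marvin.cloudstackTestCase import cloudstackTestCase
from marvin.lib.base import Account, Domain
from marvin.lib.utils import cleanup_resources


class TestRegisterUserdataSketch(cloudstackTestCase):
    @classmethod
    def setUpClass(cls):
        cls.testClient = super(TestRegisterUserdataSketch, cls).getClsTestClient()
        cls.apiclient = cls.testClient.getApiClient()
        cls.services = cls.testClient.getParsedTestDataConfig()
        cls.domain = Domain.create(cls.apiclient, cls.services["domain"])
        cls.account = Account.create(cls.apiclient, cls.services["account"],
                                     admin=True, domainid=cls.domain.id)
        # Cleanup runs in reverse order; removing the account removes its networks.
        cls._cleanup = [cls.account, cls.domain]

    @classmethod
    def tearDownClass(cls):
        cleanup_resources(cls.apiclient, cls._cleanup)
```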
2024-06-14 09:05:57 +02:00
Abhishek Kumar 33659fdf06
server,test: fix resourceid for VOLUME.DESTROY in restore VM (#9032)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-05-22 11:02:14 +02:00
Vishesh 87e7c57d08
Fixup e2e test_restore_vm (#9025)
* Fixup e2e test_restore_vm

* Fix template's size attribute

* Resolve comments
2024-05-07 00:02:36 +02:00
Vishesh 08132acaa2
Fix restore VM with allocated root disk (#8977)
* Fix restore VM with allocated root disk

* Add e2e test for restore vm

* Add more checks for e2e test
2024-04-29 12:19:05 +05:30
Wei Zhou 6def370f4a
test: fix unknown parameter hostid in test_vm_life_cycle.py (#8948) 2024-04-24 08:59:33 +02:00
Wei Zhou 0b857def68
New feature: Import/Unmanage DATA volume from storage pool (#8808) 2024-04-23 16:05:59 +02:00
Wei Zhou e74a72b4ef
test: fix test_guest_os.py failure on xcpng82 (#8659)
On xenserver-71, the supported Debian OSes are:
```
          name-label ( RW): Debian Squeeze 6.0 (32-bit)
          name-label ( RW): Debian Jessie 8.0
          name-label ( RW): Debian Wheezy 7.0 (32-bit)
          name-label ( RW): Debian Squeeze 6.0 (64-bit)
          name-label ( RW): Debian Wheezy 7.0 (64-bit)
```

On xcpng82, the supported Debian OSes are:
```
          name-label ( RW): Debian Jessie 8.0
          name-label ( RW): Debian Buster 10
          name-label ( RW): Debian Stretch 9.0
```
2024-02-17 09:32:37 +01:00
Wei Zhou 17516fd989
test: skip check for volume stats history on xenserver (#8661) 2024-02-15 16:32:49 +05:30
Rohit Yadav 0d36098c76 Merge remote-tracking branch 'origin/4.18' into 4.19 2024-02-07 14:20:39 +05:30
Vishesh df412a99d2
test: Add e2e tests for listing resources (#8410)
* Use dualzones for ci github actions

* Update advdualzone.cfg to be similar to advanced.cfg & fixup test_metrics_api.py

* Fixup e2e tests for running with multiple zones

* Add e2e tests for listing of accounts, disk_offerings, domains, hosts, service_offerings, storage_pools, volumes

* Fixup

* another fixup

* Add test for listing volumes with tags filter

* Add check for existing volumes in test_list_volumes

* Wait for volumes to be deleted on cleanup

* Filter out volumes in Destroy state before checking the count of volumes
2024-02-07 08:25:55 +01:00
Wei Zhou 69e8ebc03f
CKS: retry if unable to drain node or unable to upgrade k8s node (#8402)
* CKS: retry if unable to drain node or unable to upgrade k8s node

I tried the CKS upgrade 16 times; 11 of the 16 upgrades succeeded.

2 of 16 upgrades failed due to
```
error: unable to drain node "testcluster-of7974-node-18c8c33c2c3" due to error:[error when evicting pods/"cloud-controller-manager-5b8fc87665-5nwlh" -n "kube-system": Post "https://10.0.66.18:6443/api/v1/namespaces/kube-system/pods/cloud-controller-manager-5b8fc87665-5nwlh/eviction": unexpected EOF, error when evicting pods/"coredns-5d78c9869d-h5nkz" -n "kube-system": Post "https://10.0.66.18:6443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-h5nkz/eviction": unexpected EOF], continuing command...
```

3 of 16 upgrades failed due to
```
Error from server: error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "kubernetes-dashboard", Namespace: "kubernetes-dashboard"
from server for: "/mnt/k8sdisk//dashboard.yaml": etcdserver: leader changed
```
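Both failure modes above are transient API-server/etcd errors, which is why a bounded retry helps; a generic sketch of that retry idea (not the actual CKS code; the command, attempt count and delay are illustrative):
```
import subprocess
import time

def run_with_retry(cmd, attempts=3, delay=30):
    """Retry a command that can fail transiently, e.g. 'unexpected EOF' while
    draining a node or 'etcdserver: leader changed' while applying manifests."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        if attempt == attempts:
            raise RuntimeError("failed after %d attempts: %s" % (attempts, result.stderr.strip()))
        time.sleep(delay)  # give the API server / etcd time to settle before retrying

# Example (illustrative node name and flags):
# run_with_retry(["kubectl", "drain", "testcluster-node-1", "--ignore-daemonsets"])
```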

* CKS: remove tests of creating/deleting HA clusters as they are covered by the upgrade test

* Update PR 8402 as suggested

* test: remove CKS cluster if fail to create or verify
2024-02-06 11:14:10 +01:00
Wei Zhou b8904f75dd Merge remote-tracking branch 'apache/4.18' into 4.19 2024-02-05 10:08:31 +01:00
Wei Zhou b34f093137
veeam: fix some issues with restoring volume from backup and attaching it to VM (#8570)
* veeam: detach only the restored volume during backup restore

Steps to reproduce the issue:
1. create a VM (A) with ROOT and DATA disks
2. assign it to a backup offering
3. create a backup
4. create another VM (B)
5. restore the DATA disk of VM A and attach it to VM B
6. when the operation is done, check the datastore

Without this change, the ROOT image is not removed and is left over on the datastore.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-dfb6f21c-a941-49db-9963-4f0286a17dac
total 1784840
-rw-------    1 root     root     5242880000 Jan 24 09:23 ROOT-722_2-flat.vmdk
-rw-------    1 root     root           499 Jan 24 09:23 ROOT-722_2.vmdk
```

With this change, the whole temporary VM is destroyed.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2
ls: /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2: No such file or directory
```

* veeam: fix wrong disk size in debug message

* veeam: sync backup repository after operations are done

An exception was thrown for some operations that actually succeeded, due to the following error:
```
2024-01-19 10:59:52,846 DEBUG [o.a.c.b.v.VeeamClient] (API-Job-Executor-42:ctx-716501bb job-4373 ctx-2359b76d) (logid:b5e19a17) Veeam response for PowerShell commands [PowerShell Import-Module Veeam.Backup.PowerShell -WarningAction SilentlyContinue;$restorePoint = Get-VBRRestorePoint ^| Where-Object { $_.Id -eq '1d99106a-b5c8-4a1e-958d-066a987caa5f' };if ($restorePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false;$repo = Get-VBRBackupRepository;Sync-VBRBackupRepository -Repository $repo;} else { ; Write-Output 'Failed to delete'; Exit 1;}] is: [^M
Restore Type       Job Name             State      Start Time             End Time               Description           ^M
------------       --------             -----      ----------             --------               -----------           ^M
ConfResynchronize  Configuration Dat... Starting   19/01/2024 10:59:52    01/01/1900 00:00:00                          ^M
^M
^M
Remove-VBRRestorePoint : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. ^M
Contact Microsoft Customer Support Services.^M
At line:1 char:196^M
+ ... orePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false ...^M
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^M
    + CategoryInfo          : ReadError: (:) [Remove-VBRRestorePoint], HostException^M
    + FullyQualifiedErrorId : ReadConsoleOutput,Veeam.Backup.PowerShell.Cmdlets.RemoveVBRRestorePoint^M
 ^M
].
```

* veeam: fix failure to detach a volume that was restored from a backup and attached to a VM

The same failure also happened when destroying the original or backup VM.

```
2024-01-24 10:10:03,401 ERROR [c.c.s.r.VmwareStorageProcessor] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) Failed to detach volume!
java.lang.RuntimeException: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
        at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
        at com.cloud.hypervisor.vmware.mo.DatastoreMO.moveDatastoreFile(DatastoreMO.java:290)
        at com.cloud.storage.resource.VmwareStorageLayoutHelper.syncVolumeToRootFolder(VmwareStorageLayoutHelper.java:241)
        at com.cloud.storage.resource.VmwareStorageProcessor.attachVolume(VmwareStorageProcessor.java:2150)
        at com.cloud.storage.resource.VmwareStorageProcessor.dettachVolume(VmwareStorageProcessor.java:2408)
        at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:174)
        at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:71)
        at com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:589)
        at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
2024-01-24 10:10:03,402 INFO  [c.c.h.v.u.VmwareHelper] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) [ignored]failed to get message for exception: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
```

* vmware: create restored volume with new UUID and attach to VM
2024-01-29 11:40:43 +01:00
Wei Zhou 33bb92acce
Veeam: Support Veeam 11 and 12 (#8241)
This PR fixes several issues found while testing Veeam 11 and Veeam 12:
- Import Veeam.Backup.PowerShell and silently ignore the warning messages
- Fix issue when assigning a VM to backup offerings, caused by the separator (\r\n)
- Fix authorization failure in Veeam 12a, because v1_4 is no longer supported in Veeam 12a
- Fix exception if the backup name has a space
- Fix backup metrics in Veeam 12, because the PowerShell command does not return the needed values
- Fix "Incorrect datetime value" errors, because the PowerShell command returns a datetime format that is not supported in Java
- Fix issue during backup restoration if the VM has both ROOT and DATA disks

This PR also includes the following updates:
- Add integration test test/integration/smoke/test_backup_recovery_veeam.py
- Make some UI changes
- Add zone setting backup.plugin.veeam.version. If it is not set, CloudStack will determine the Veeam version via PowerShell commands.
- Add zone settings backup.plugin.veeam.task.poll.interval and backup.plugin.veeam.task.poll.max.retry
2024-01-19 18:42:01 +01:00
Abhishek Kumar ab808995ff
Revert "Add e2e tests for listing resources (#8281)" (#8396)
This reverts commit 1411da1a22.
2023-12-21 20:56:40 +05:30
Abhishek Kumar 26214ea139 Merge remote-tracking branch 'apache/4.18' 2023-12-21 20:55:38 +05:30
Abhishek Kumar d83d994929
test: additional check to ensure hosts are left in up state (#8383)
With this change, a fix is added for failures seen with test_08_migrate_vm and other migration-related tests when the target host is in `Connecting` state; see
#8356 (comment)
#8374 (comment)
and more
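A minimal sketch of such a host-state check using marvin's list_hosts (the helper name, timeout and poll interval are assumptions):
```
import time
from marvin.lib.common import list_hosts

def wait_for_hosts_up(apiclient, zoneid, timeout=300, interval=10):
    """Poll until every routing host in the zone reports state 'Up', so a
    following migration test does not pick a host that is still 'Connecting'."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        hosts = list_hosts(apiclient, zoneid=zoneid, type='Routing') or []
        if hosts and all(h.state == 'Up' for h in hosts):
            return hosts
        time.sleep(interval)
    raise AssertionError("hosts did not reach Up state within %s seconds" % timeout)
```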
2023-12-21 20:54:43 +05:30
Vishesh 1411da1a22
Add e2e tests for listing resources (#8281)
This PR adds e2e tests for listing resources
2023-12-20 11:54:39 +05:30
Abhishek Kumar f5c7018e5e Merge remote-tracking branch 'apache/4.18' 2023-12-20 11:53:18 +05:30
Abhishek Kumar 64ecd00eb7
test: fix test_host_ping.py to restore original host state (#8380)
Failures seen in #7344 were debugged; they occur because one of the hosts is in Alert state,
which causes VM deployment with an affinity group to fail.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2023-12-20 11:52:08 +05:30
Vishesh 7c06d289d2
Fixup test_image_store_object_migration.py (#8378)
This PR fixes the failure seen in #7344 (comment)
2023-12-20 11:17:41 +05:30
Abhishek Kumar 4bdf35b7b0 Merge remote-tracking branch 'apache/4.18' 2023-12-09 12:04:21 +05:30
Wei Zhou fc44df7c95
CKS: create HA cluster with 3 control VMs instead of 2 (#8297)
This PR fixes the test failures with CKS HA-cluster upgrade.
In production, the CKS HA cluster should have at least 3 control VMs as well.
The etcd cluster requires 3 members to achieve reliable HA. The etcd daemon in the control VMs uses the RAFT protocol to determine node roles. During an upgrade of a CKS cluster with HA, etcd becomes unreliable if there are only 2 control VMs.
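The quorum arithmetic behind that choice, as a quick illustration:
```
def etcd_fault_tolerance(members):
    """RAFT quorum is floor(n/2) + 1; fault tolerance is n - quorum."""
    quorum = members // 2 + 1
    return members - quorum

print(etcd_fault_tolerance(2))  # 0: taking either control VM down loses quorum
print(etcd_fault_tolerance(3))  # 1: one control VM at a time can be upgraded safely
```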
2023-12-09 11:33:05 +05:30
kishankavala 5651eab49c
ObjectStore Framework with MinIO and Simulator plugins (#7752)
This PR adds Object Storage feature to CloudStack.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+CloudStack+Object+Store
2023-12-01 17:51:00 +05:30
Bryan Lima cb62ce6767
Global ACL for VPCs (#7150) 2023-11-30 14:51:43 +01:00
Abhisar Sinha 5c7e4b7edc
api: add ipaddress argument to disassociateIPAddress (#8222)
This PR adds the argument 'ipaddress' to the disassociateIpAddress API. An IP address can now be disassociated by directly giving the address instead of the ID.

Fixes: #8125
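A hedged sketch of calling the API with the new argument through CloudStack's documented request-signing scheme (the endpoint, keys, address and helper name are placeholders):
```
import base64
import hashlib
import hmac
import urllib.parse

import requests

def signed_request(endpoint, api_key, secret_key, **params):
    """Sign a CloudStack API call: sort the parameters, lower-case the encoded
    query string, HMAC-SHA1 it with the secret key and append the signature."""
    params["apikey"] = api_key
    params["response"] = "json"
    query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
                     for k, v in sorted(params.items()))
    digest = hmac.new(secret_key.encode(), query.lower().encode(), hashlib.sha1).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return requests.get(endpoint, params=params).json()

# Example: disassociate by address instead of by ID (placeholder endpoint and keys):
# signed_request("http://mgmt-server:8080/client/api", "API_KEY", "SECRET_KEY",
#                command="disassociateIpAddress", ipaddress="203.0.113.10")
```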
2023-11-19 11:50:57 +05:30
Bryan Lima 1f29f6f040
Public IP quarantine feature (#7378) 2023-11-15 10:29:22 +01:00
Daan Hoogland a15cb81c85 Merge remote-tracking branch 'apache/4.18' into main 2023-11-03 11:55:26 +01:00
Vishesh 5362bad442
Storage Management (#7949) 2023-11-01 10:46:22 +01:00
Wei Zhou bd52fa8a12
New feature: VNF templates and appliances integration (#8022) 2023-10-27 10:23:00 +02:00
Vishesh 3b11663d87
Fix failure on agent reconnection (#8089) 2023-10-26 16:54:36 +02:00
Vishesh ea90848429
Feature: Add support for DRS in a Cluster (#7723)
This pull request (PR) implements a Distributed Resource Scheduler (DRS) for a CloudStack cluster. The primary objective of this feature is to enable automatic resource optimization and workload balancing within the cluster by live migrating the VMs as per configuration.
Administrators can also execute DRS manually for a cluster, using the UI or the API.
Adds support for two algorithms - condensed & balanced. Algorithms are pluggable allowing ACS Administrators to have customized control over scheduling.

Implementation
There are three top level components:

    Scheduler
    A timer task which:

    Generates DRS plans for clusters
    Processes DRS plans
    Removes old DRS plan records

    DRS Execution
    We go through each VM in the cluster and use the specified algorithm to check whether DRS is required and to calculate the cost, benefit & improvement of migrating that VM to another host in the cluster. Based on cost, benefit & improvement, the best migration is selected for the current iteration and that VM is migrated (a sketch of this loop follows after this outline). The maximum number of iterations (live migrations) possible on the cluster is defined by drs.iterations, which is expressed as a percentage (a value between 0 and 1) of the total number of workloads.

    Algorithm
    Every algorithm implements two methods:
        needsDrs - to check if DRS is required for the cluster
        getMetrics - to calculate the cost, benefit & improvement of migrating a VM to another host.
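A greedy sketch of the iteration logic described above (not the actual implementation; `algorithm.needs_drs`, `algorithm.get_metrics`, `cluster.live_migrate` and the object shapes are assumptions):
```
def run_drs_iterations(cluster, algorithm, iterations_ratio):
    """Per iteration, score every (VM, host) move with the algorithm's
    cost/benefit/improvement metric and apply the best one, up to
    iterations_ratio * number of workloads migrations."""
    max_iterations = int(iterations_ratio * len(cluster.vms))
    for _ in range(max_iterations):
        if not algorithm.needs_drs(cluster):
            break
        best = None
        for vm in cluster.vms:
            for host in cluster.hosts:
                if host == vm.host:
                    continue
                cost, benefit, improvement = algorithm.get_metrics(cluster, vm, host)
                if benefit > cost and (best is None or improvement > best[0]):
                    best = (improvement, vm, host)
        if best is None:
            break
        _, vm, host = best
        cluster.live_migrate(vm, host)  # assumed helper that performs the live migration
```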

Algorithms

    Condensed - Packs all the VMs on minimum number of hosts in the cluster.
    Balanced - Distributes the VMs evenly across hosts in the cluster.
    Algorithms use drs.level to decide the amount of imbalance to allow in the cluster.

APIs Added

listClusterDrsPlan

    id - ID of the DRS plan to list
    clusterid - to list plans for a cluster id

generateClusterDrsPlan

    id - cluster id
    iterations - The maximum number of iterations in a DRS job, expressed as a percentage (a value between 0 and 1) of the total number of workloads. Defaults to the value of the cluster's drs.iterations setting.

executeClusterDrsPlan

    id - ID of the cluster for which DRS plan is to be executed.
    migrateto - This parameter specifies the mapping between a vm and a host to migrate that VM. Format of this parameter: migrateto[vm-index].vm=<uuid>&migrateto[vm-index].host=<uuid>.
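A small sketch of building the migrateto map for executeClusterDrsPlan (UUIDs are placeholders; the resulting parameters would be sent through a signed API request such as the disassociateIpAddress sketch earlier in this log):
```
def build_execute_drs_plan_params(cluster_id, migrations):
    """Build executeClusterDrsPlan parameters from (vm_uuid, host_uuid) pairs,
    using the migrateto[vm-index].vm / migrateto[vm-index].host map format."""
    params = {"command": "executeClusterDrsPlan", "id": cluster_id}
    for i, (vm_uuid, host_uuid) in enumerate(migrations):
        params["migrateto[%d].vm" % i] = vm_uuid
        params["migrateto[%d].host" % i] = host_uuid
    return params

# Placeholder UUIDs, for illustration only:
print(build_execute_drs_plan_params("cluster-uuid",
                                    [("vm-uuid-1", "host-uuid-2")]))
```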

Config Keys Added

    ClusterDrsPlanExpireInterval
    Key drs.plan.expire.interval
    Scope Global
    Default Value 30 days
    Description The interval in days after which old DRS records will be cleaned up.

    ClusterDrsEnabled
    Key drs.automatic.enable
    Scope Cluster
    Default Value false
    Description Enable/disable automatic DRS on a cluster.

    ClusterDrsInterval
    Key drs.automatic.interval
    Scope Cluster
    Default Value 60 minutes
    Description The interval in minutes after which a periodic background thread will schedule DRS for a cluster.

    ClusterDrsIterations
    Key drs.max.migrations
    Scope Cluster
    Default Value 50
    Description Maximum number of live migrations in a DRS execution.

    ClusterDrsAlgorithm
    Key drs.algorithm
    Scope Cluster
    Default Value condensed
    Description DRS algorithm to execute on the cluster. This PR implements two algorithms - balanced & condensed.

    ClusterDrsLevel
    Key drs.imbalance
    Scope Cluster
    Default Value 0.5
    Description Percentage (as a value between 0.0 and 1.0) of imbalance allowed in the cluster. 1.0 means no imbalance is allowed and 0.0 means imbalance is allowed.

    ClusterDrsMetric
    Key drs.imbalance.metric
    Scope Cluster
    Default Value memory
    Description The cluster imbalance metric to use when checking the drs.imbalance.threshold. Possible values are memory and cpu.
2023-10-26 11:48:18 +05:30
Abhishek Kumar 1bb6130795 Merge remote-tracking branch 'apache/4.18' into main 2023-10-20 11:00:22 +05:30
Abhishek Kumar a2ec1f3777
marvin,test: fix directdownload template checksum test (#8096)
* marvin,test: fix directdownload template checksum test

When deployment of a VM with a wrong-checksum template fails, the VM may be left in Error state. This PR adds code to delete such a VM.
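A minimal sketch of that cleanup via the destroyVirtualMachine command (the helper name is illustrative):
```
from marvin.cloudstackAPI import destroyVirtualMachine

def expunge_error_vm(apiclient, vm_id):
    """Expunge a VM left in Error state after a failed deployment attempt,
    so the leftover does not affect later tests."""
    cmd = destroyVirtualMachine.destroyVirtualMachineCmd()
    cmd.id = vm_id
    cmd.expunge = True
    apiclient.destroyVirtualMachine(cmd)
```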

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* remove unnecessary logs

---------

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2023-10-19 19:40:03 +02:00
Abhishek Kumar 20046ffe61 Merge remote-tracking branch 'apache/4.18' into main 2023-10-19 10:39:40 +05:30
Abhishek Kumar 540c7b802f
test: add test for standalone snapshot (#8104)
Fixes #8034

Adds the following tests for a backed-up snapshot (original template and VM deleted beforehand):
- Create volume from snapshot
- Create a template from the snapshot and deploy a VM using it
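A marvin-style sketch of those two checks (service dictionary keys, parameters and the helper name are assumptions):
```
from marvin.lib.base import Template, VirtualMachine, Volume

def verify_standalone_snapshot(apiclient, services, snapshot, account,
                               zoneid, serviceofferingid):
    """With the original template and VM already deleted, the backed-up
    snapshot should still support both operations."""
    # 1. Create a volume from the snapshot.
    volume = Volume.create_from_snapshot(apiclient, snapshot.id, services["volume"],
                                         account=account.name,
                                         domainid=account.domainid)
    # 2. Create a template from the snapshot and deploy a VM using it.
    template = Template.create_from_snapshot(apiclient, snapshot, services["template"])
    vm = VirtualMachine.create(apiclient, services["virtual_machine"],
                               templateid=template.id, zoneid=zoneid,
                               serviceofferingid=serviceofferingid,
                               accountid=account.name, domainid=account.domainid)
    return volume, template, vm
```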

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2023-10-19 09:48:16 +05:30
Abhishek Kumar b4c7705d4b Merge remote-tracking branch 'apache/4.18' into main 2023-10-10 17:30:37 +05:30
Harikrishna a9f3af85cb
Default value of force should be false for template delete operation (#7731)
* default value of force should be false

* Added force flag in tests
2023-10-10 15:49:57 +05:30
Vishesh b9e423b7a9
Add extra checks for test_vm_schedule to avoid intermittent failures (#8036)
This PR adds additional checks in test_vm_schedule to avoid intermittent failures
2023-10-05 11:48:37 +05:30
Gabriel Pordeus Santos 7541cb97bd
Add Service Offering to listSystemVMs and fix link from VR to its offering (#7938)
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
2023-09-28 09:10:03 +02:00
Vishesh d25521e96f
Fix issues in VM Scheduler (#7782) 2023-09-18 14:11:06 +02:00
John Bampton 4eb110af73
Remove unneeded duplicate words (#7850) 2023-09-18 13:16:33 +02:00
Wei Zhou 78411fd405
test: fix test_vm_autoscaling.py which does not work due to userdata improvement (#7921) 2023-08-30 23:51:52 +02:00
Daan Hoogland 24ae5aa5fa Merge branch '4.18' 2023-08-25 14:27:34 +02:00
Wei Zhou 8dc5fdd067
server: fix cannot get systemvm ips in dedicated ranges (#7144)
This fixes #6698
2023-08-25 11:36:39 +02:00
Daan Hoogland ea832bce13 Merge branch '4.18' 2023-08-22 11:44:45 +02:00
Wei Zhou 78bdde9e98
AutoScaling: support Managed User Data (#7769) 2023-08-22 11:07:16 +02:00
Wei Zhou c8d6e50539
VMware: add support for 8.0b (8.0.0.2), 8.0c (8.0.0.3) (#7380)
* VMware: add support for 8.0b (8.0.0.2)

* VMware 8: add new guest os mappings in VirtualMachineGuestOsIdentifier

The full list can be found at https://developer.vmware.com/apis/1355/vsphere

* VMware: get guest os mappings of parent version

* VMware8: remove guest os mappings for 8.0.0.2

* VMware8: fix code smells

* vmware: remove annotations in VmwareVmImplementerTest which caused 0.0% code coverage

* VMware8: add a unit test case

* VMware: add support for 8.0c (8.0.0.3)

* VMware8: move to CloudStackVersion.getVMwareParentVersion

* VMware: add support for 8.0u1 (8.0.1.0)

* Copy engine/schema/src/main/java/com/cloud/upgrade/GuestOsMapper.java from PR 6979

* Copy engine/schema/src/main/java/com/cloud/storage/dao/GuestOSHypervisorDao.java from PR 6979

* VMware: ignore the last number in VMware versions

* VMware: copy guest os mapping from 8.0 to 8.0.1

* VMware: add unit tests in VmwareVmImplementerTest.java

* Copy engine/schema/src/test/java/com/cloud/upgrade/GuestOsMapperTest.java from PR 6979

* VMware8: retry vm poweron if fails due to exception "File system specific implementation of Ioctl[file] failed"

This fixes a weird issue on VMware 8. When powering on a VM, it sometimes fails due to the error:

2023-04-27 07:04:43,207 ERROR [c.c.h.v.r.VmwareResource] (DirectAgent-442:ctx-cdd42b03 10.0.32.133, job-105/job-106, cmd: StartCommand) (logid:8a24a607) StartCommand failed due to [Exception: java.lang.RuntimeException
Message: File system specific implementation of Ioctl[file] failed
].
java.lang.RuntimeException: File system specific implementation of Ioctl[file] failed
        at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
        at com.cloud.hypervisor.vmware.mo.VirtualMachineMO.powerOn(VirtualMachineMO.java:288)

in vmware.log on the ESXi host, it shows:

2023-04-27T09:20:41.713Z In(05)+ vmx - Power on failure messages: File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to lock the file
2023-04-27T09:20:41.713Z In(05)+ vmx - Cannot open the disk '/vmfs/volumes/7b29c876-ac102328/i-2-167-VM/ROOT-167.vmdk' or one of the snapshot disks it depends on.
2023-04-27T09:20:41.713Z In(05)+ vmx - Module 'Disk' power on failed.
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to start the virtual machine.

There is a KB article for it, but I still do not know why it happens or how to fix it:
https://kb.vmware.com/s/article/1004232

* VMware: extract to method powerOnVM

* vmware: fix mistake in logs

* vmware8: use curl instead of wget to fix test failures

Traceback (most recent call last):
  File "/root/test_internal_lb.py", line 555, in test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80
    self.execute_internallb_roundrobin_tests(vpc_offering)
  File "/root/test_internal_lb.py", line 641, in execute_internallb_roundrobin_tests
    client_vm, applb.sourceipaddress, max_http_requests)
  File "/root/test_internal_lb.py", line 497, in run_ssh_test_accross_hosts
    (e, clienthost.public_ip))
AssertionError: list index out of range: SSH failed for VM with IP Address: 10.0.52.187

and

sshClient: DEBUG: {Cmd: /usr/bin/wget -T3 -qO- --user=admin --password=password http://10.1.2.253:8081/admin?stats via Host: 10.0.52.188} {returns: ["/usr/bin/wget: '/usr/lib/libpcre.so.1' is not an ELF file", "/usr/bin/wget: can't load library 'libpcre.so.1'"]}

* VMware: correct guest OS names in hypervisor mappings for VMware 8.0

el9 and variants were introduced by https://github.com/apache/cloudstack/pull/7059
they are supported with guest os identifiers since VMware 8.0

see https://vdc-repo.vmware.com/vmwb-repository/dcr-public/c476b64b-c93c-4b21-9d76-be14da0148f9/04ca12ad-59b9-4e1c-8232-fd3d4276e52c/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html

* VMware: add Ubuntu 20.04 and 22.04 support for vmware 7.0+

* PR7380: only add guest os mappings for Ubuntu 20.04

* PR7380: Correct RHEL9 guest os names and others for VMware 8.0

* PR7380: correct guest os names on 8.0.0.1 as well

* PR7380: remove Windows 12 and Windows Server 2025 which are not released yet
2023-08-17 10:42:42 +02:00
Daan Hoogland 5559668f12 Merge branch '4.18' 2023-08-15 09:15:17 +02:00
Wei Zhou aa02d9b3c1
test: skip live storage migration on CentOS 7 (#7862)
Since #7570, the 'Host.OS' detail of a CentOS 7 host has changed from 'CentOS' to 'CentOS Linux'.
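A minimal sketch of a skip condition that accepts both values (the details dict and the 'Host.OS' / 'Host.OS.Version' keys are assumptions about how the host details are exposed to the test):
```
def is_centos7_host(host_details):
    """host_details: assumed dict of host details. Since #7570 the OS detail may
    read 'CentOS Linux' instead of 'CentOS', so match both."""
    return (host_details.get('Host.OS', '') in ('CentOS', 'CentOS Linux')
            and host_details.get('Host.OS.Version', '').startswith('7'))

# In the test:
# if any(is_centos7_host(d) for d in all_host_details):
#     self.skipTest("live storage migration is not supported on CentOS 7 hosts")
```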
2023-08-15 08:39:18 +02:00