* Add new directory to c8 packaging for CKS config
* Remove k8s configuration from resources and make it configurable
* Revert "Remove k8s configuration from resources and make it configurable"
This reverts commit d5997033ebe4ba559e6478a64578b894f8e7d3db.
* copy conf files to mgmt server and consume them from there
* kvm: fix vm deployment from RAW template
* Update plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
---------
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, whether or not they have a storage access group.
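As a rough illustration, this rule reduces to a set-intersection check. The sketch below is illustrative only; the class and method names are not from the actual CloudStack implementation:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only; not the actual CloudStack implementation.
public class StorageAccessGroupRule {

    /**
     * A pool with no storage access groups connects to every host. A pool
     * with groups connects only to hosts that share at least one group,
     * where a host's effective groups include those inherited from its
     * cluster, pod, and zone.
     */
    public static boolean canConnect(Set<String> poolGroups, Set<String> hostGroups) {
        if (poolGroups == null || poolGroups.isEmpty()) {
            return true; // ungrouped pool: connects to all hosts
        }
        if (hostGroups == null || hostGroups.isEmpty()) {
            return false; // grouped pool never connects to an ungrouped host
        }
        Set<String> shared = new HashSet<>(poolGroups);
        shared.retainAll(hostGroups);
        return !shared.isEmpty();
    }
}
```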
* VMware - Ignore disk not found error on cleanup when the VM disk doesn't exist
* VMware - Retry powerOn on lock issues
* addressed comments
* Update CPVM reboot tests - wait for the agent to Disconnect and back Up
* Retry moveDatastoreFile on any file access issue while creating a volume from a snapshot
* Update the full clone flag when restoring a VM using a root disk offering larger than the template size
* refactored (mainly for diskInfo, which caused an NPE in some cases)
* Retry moveDatastoreFile when there is any file access issue
* Reset the pool id when create volume fails on the allocated pool
- The pool id is persisted while creating the volume, and it is not reverted when creation fails. On the next create-volume attempt, CloudStack cannot find any suitable primary storage even though pools with enough capacity are available, because the pool is already assigned to the volume in the Allocated state (and the storage pool compatibility check fails). Ensure the volume is not assigned to any pool if volume creation fails, so the next creation job can pick a suitable pool.
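A minimal sketch of the intended behavior, using hypothetical stand-ins for CloudStack's VolumeVO/VolumeDao types:

```java
// Hypothetical stand-ins for CloudStack's volume entity; illustrative only.
class Volume {
    private Long poolId;          // primary storage pool assigned during creation
    private String state = "Allocated";

    Long getPoolId() { return poolId; }
    void setPoolId(Long poolId) { this.poolId = poolId; }
    String getState() { return state; }
}

class CreateVolumeFlow {
    /**
     * On failure, revert the pool id persisted during creation so the next
     * create-volume job runs the allocators against all suitable pools
     * instead of failing the compatibility check on the stale assignment.
     */
    void onCreateFailure(Volume volume) {
        if ("Allocated".equals(volume.getState()) && volume.getPoolId() != null) {
            volume.setPoolId(null); // un-pin the volume from the failed pool
        }
    }
}
```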
* endpoint check for resize
* update the resize error through callback result instead of exception
* Add & remove PowerFlex/ScaleIO MDMs while preparing & unpreparing the storage SDC connections (instead of starting & stopping scini)
* Add/Remove MDM IP addresses during Host connection/disconnection to/from storage pool when powerflex.connect.on.demand is false
* unit test fixes
* Don't remove MDM IPs from the SDC when any volumes are mapped to the SDC
* Don't remove MDM IPs when other pools of the same ScaleIO/PowerFlex cluster are connected
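Taken together, the two items above amount to a guard like the following; every name here is illustrative, not the actual ScaleIO gateway client API:

```java
import java.util.List;

// Illustrative stand-in for the ScaleIO/PowerFlex gateway client.
interface ScaleIoGateway {
    List<String> listVolumesMappedToSdc(String sdcId);
}

class MdmCleanupGuard {
    private final ScaleIoGateway gateway;

    MdmCleanupGuard(ScaleIoGateway gateway) {
        this.gateway = gateway;
    }

    /**
     * MDM IPs may be removed from an SDC only when no volumes remain mapped
     * to it and no other pool of the same PowerFlex cluster is still
     * connected to the host.
     */
    boolean canRemoveMdmIps(String sdcId, boolean otherClusterPoolsConnected) {
        if (!gateway.listVolumesMappedToSdc(sdcId).isEmpty()) {
            return false; // volumes still mapped to the SDC
        }
        return !otherClusterPoolsConnected;
    }
}
```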
* rebase fixes
* update changes to not remove/disconnect MDMs during maintenance
* import fixes after rebase
* Consider the clusters with allocation state 'Enabled' for EndPoint selection (in addition to Host state)
* Reset the pool id when create volume fails on the allocated pool
- The pool id is persisted while creating the volume, and it is not reverted when creation fails. On the next create-volume attempt, CloudStack cannot find any suitable primary storage even though pools with enough capacity are available, because the pool is already assigned to the volume in the Allocated state (and the storage pool compatibility check fails). Ensure the volume is not assigned to any pool if volume creation fails, so the next creation job can pick a suitable pool.
* endpoint check for resize
* update the resize error through callback result instead of exception
* logger fix
* KVM incremental snapshot feature
* fix log
* fix merge issues
* fix creation of folder
* fix snapshot update
* Check for hypervisor type during parent search
* fix some small bugs
* fix tests
* Address reviews
* do not remove StorPool snapshots
* add support for downloading diff snaps
* Add multiple zones support
* make copied snapshots have normal names
* address reviews
* Fix in progress
* continue fix
* Fix bulk delete
* change log to trace
* Start fix on multiple secondary storages for a single zone
* Fix multiple secondary storages for a single zone
* Fix tests
* fix log
* remove bitmaps when deleting snapshots
* minor fixes
* update sql to new file
* Fix merge issues
* Create new snap chain when changing configuration
* add verification
* Fix snapshot operation selector
* fix bitmap removal
* fix chain on different storages
* address reviews
* fix small issue
* fix test
---------
Co-authored-by: João Jandre <joao@scclouds.com.br>
* Don't set signingRegion to 'auto' when creating the S3 client in the Ceph object store provider.
* replace getBucketAcl with doesBucketExistV2 in CephObjectStoreDriverImplTest
* KVM: add Virtual TPM model and version
* KVM: add admin-only VM setting GUEST.CPU.MODE and GUEST.CPU.MODEL
* VMware: add vTPM
* vTPM: do not set Key due to 'Cannot add multiple devices using the same device key..'
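For illustration, with the vSphere vim25 SDK the device spec can be built without assigning a key, letting vCenter choose one; this is a sketch of the idea, not the actual patch:

```java
import com.vmware.vim25.VirtualDeviceConfigSpec;
import com.vmware.vim25.VirtualDeviceConfigSpecOperation;
import com.vmware.vim25.VirtualMachineConfigSpec;
import com.vmware.vim25.VirtualTPM;

public class VtpmSpecSketch {
    // Sketch: add a vTPM device without calling setKey(...), letting vCenter
    // assign the device key, which avoids the quoted error when multiple
    // devices would otherwise share a key.
    static VirtualMachineConfigSpec buildAddVtpmSpec() {
        VirtualTPM tpm = new VirtualTPM();

        VirtualDeviceConfigSpec deviceSpec = new VirtualDeviceConfigSpec();
        deviceSpec.setOperation(VirtualDeviceConfigSpecOperation.ADD);
        deviceSpec.setDevice(tpm);

        VirtualMachineConfigSpec configSpec = new VirtualMachineConfigSpec();
        configSpec.getDeviceChange().add(deviceSpec);
        return configSpec; // passed to a VM reconfigure task
    }
}
```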
* vTPM: add unit test testTpmModel
* engine/schema: remove user vm details for guest CPU mode/model
* vTPM: extra methods as per Daan's requests
* vTPM: add unit tests in VmwareResourceTest
* vTPM: update unit tests in VmwareResourceTest
* vTPM: add unit test in LibvirtComputingResourceTest
* vTPM: use the default TPM version if an invalid version is passed
* vTPM: requires UEFI on VMware; do nothing if it is not enabled/disabled
* vTPM: let users add UEFI on VMware
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* vTPM: remove template details for guest CPU mode/model
* UI: boot vm from ISO into UEFI/SECURE mode
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Dependency name changed from mockito-inline to mockito-core. Inline mocking is now the default, and the last released version of mockito-inline is 5.2.0.
assertj-core in user-authenticators/saml2 pulled in an incompatible version of byte-buddy and required an exclusion. Updating the version of assertj is left for a future PR.
The upgrade requires Java 11+, dropping support for Java 8. The CloudStack documentation already says to use Java 11 and does not indicate that Java 8 is supported.
Test classes using @RunWith(MockitoJUnitRunner.class) now get run in strict mode. Changes were made to tests where the stubbing intention was clear. In ManagementServerMaintenanceManagerImplTest there are 5 tests where the intention of the test is unclear. Each of the statements now uses Mockito.lenient() to avoid the exception. Other cases in the tests follow a similar pattern.
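For reference, the pattern looks like this; the Supplier mock below is just a placeholder for the real collaborators in the affected tests:

```java
import static org.mockito.Mockito.lenient;

import java.util.function.Supplier;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class LenientStubbingExampleTest {

    @Mock
    private Supplier<String> supplier;

    @Test
    public void testWithPossiblyUnusedStub() {
        // Under the strict runner, an unused stub fails the run with
        // UnnecessaryStubbingException; lenient() suppresses that check
        // for this stubbing only.
        lenient().when(supplier.get()).thenReturn("value");
        // ... assertions that may not exercise supplier.get()
    }
}
```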
Minor cleanup.
Both @Spy and Mockito.spy() should not be used on the same field. Favored the annotation.
Both @RunWith(MockitoJUnitRunner.class) and MockitoAnnotations.openMocks(this) should not be used together. Favored the annotation.
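Illustrative before/after of those two cleanups:

```java
import java.util.ArrayList;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;

// Illustrative only: the runner initializes annotated mocks/spies, so an
// additional openMocks(this) call is redundant.
@RunWith(MockitoJUnitRunner.class)
public class AnnotationCleanupExampleTest {

    // Before: @Spy private ArrayList<String> list = Mockito.spy(new ArrayList<>());
    // After: the annotation alone wraps the instance in a spy
    @Spy
    private ArrayList<String> list = new ArrayList<>();

    // Before: a setup method also called MockitoAnnotations.openMocks(this);
    // After: removed, since @RunWith(MockitoJUnitRunner.class) already does it

    @Test
    public void spyIsInitialized() {
        list.add("x"); // the spy delegates to the real list
    }
}
```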
Unnecessary extends TestCase removed.
Using @InjectMocks together with an explicit new in the field initializer is unnecessary. Removed the new where the issue presented itself.
Some of the Cmd classes, like UpdateNetworkCmd, have a type tree that includes fields of type Object. This appears to cause issues with injection, requiring that @Mock fields be available. This is why the following fields were added in multiple places (see the sketch after the list):
Object job;
ResponseGenerator _responseGenerator;
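A hypothetical sketch of the resulting test-class shape; the CloudStack package names here are from memory, and the exact classes vary across the changed tests:

```java
import com.cloud.api.ResponseGenerator; // CloudStack type; package from memory
import org.apache.cloudstack.api.command.user.network.UpdateNetworkCmd;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class UpdateNetworkCmdTest {

    @Mock
    Object job; // satisfies an Object-typed field in the Cmd's type tree

    @Mock
    ResponseGenerator _responseGenerator;

    @InjectMocks
    UpdateNetworkCmd cmd; // no explicit 'new'; Mockito instantiates and injects

    @Test
    public void cmdIsInjected() {
        Assert.assertNotNull(cmd);
    }
}
```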
Fixed the wrong number of parameters for Mockito.when in LibvirtRevertSnapshotCommandWrapperTest.java.