If the libvirt mount point is still busy and can't be unmounted
right away, we previously waited 5 seconds and tried a plain unmount,
without cleaning up the libvirt storage pool.
This left libvirt thinking the storage pool
was active and mounted (which it wasn't).
Now, after the plain unmount call,
the libvirt storage pool is also destroyed.
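A minimal sketch of the new fallback, assuming the libvirt Java bindings (org.libvirt); the plain `umount` via ProcessBuilder and the method and variable names are illustrative, not the agent's actual code:
```java
import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import org.libvirt.StoragePool;

public class BusyMountpointCleanup {
    // Sketch: if the mount point is busy, wait, try a plain unmount,
    // and then also destroy the libvirt storage pool so libvirt no
    // longer reports it as active/mounted.
    public static void unmountAndDestroy(Connect conn, String poolName, String mountPoint)
            throws InterruptedException, LibvirtException, java.io.IOException {
        if (!umount(mountPoint)) {
            Thread.sleep(5000);          // mount point still busy: wait 5 seconds
            umount(mountPoint);          // plain unmount attempt
        }
        StoragePool pool = conn.storagePoolLookupByName(poolName);
        if (pool != null && pool.isActive() == 1) {
            pool.destroy();              // new behaviour: also destroy the libvirt pool
        }
    }

    private static boolean umount(String mountPoint) throws java.io.IOException, InterruptedException {
        Process p = new ProcessBuilder("umount", mountPoint).start();
        return p.waitFor() == 0;
    }
}
```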
Administrators should ensure that the IDP configuration has a signing certificate so the actual signature check can be performed. In addition, this change introduces a new global setting, saml2.check.signature (default: true), which deliberately fails a SAML login attempt when the SAML response is missing a signature.
Purges the SAML token upon handling the first SAML response.
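A minimal sketch of such a setting, assuming CloudStack's ConfigKey framework; the category, description text, and the guard method are illustrative rather than the plugin's actual code:
```java
import org.apache.cloudstack.framework.config.ConfigKey;

public class Saml2SignatureCheck {
    // Global setting: fail SAML logins whose response carries no signature.
    public static final ConfigKey<Boolean> SAML_CHECK_SIGNATURE = new ConfigKey<Boolean>(
            "Advanced", Boolean.class, "saml2.check.signature", "true",
            "When true, a SAML login attempt is rejected if the SAML response has no signature", true);

    // Illustrative guard; 'responseHasSignature' stands in for the real SAML response parsing.
    public static void validate(boolean responseHasSignature) {
        if (!responseHasSignature && SAML_CHECK_SIGNATURE.value()) {
            throw new SecurityException("SAML response is not signed, rejecting login attempt");
        }
    }
}
```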
Authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* saml: purge token after first response and improve setting description
This improves the description of a saml signature checking global
setting, and purges the SAML token upon handling the first SAML
response.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix failing unit test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
---------
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This improves upon #9219, making the signature checks mandatory by
default while allowing users to relax the setting if they really must.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- mTLS implementation for cluster service communication
- Listen only on the specified cluster node IP address instead of all interfaces
- Validate that incoming cluster service requests are from peer management servers, based on the server's certificate DNS name, which can be set through the global config ca.framework.cert.management.custom.san
- Hardening of KVM command wrapper script execution
- Improve API server integration port check
- cloudstack-management.default: don't have JMX configuration if not needed. JMX is used for instrumentation; users who need to use it should enable it explicitly
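A minimal sketch of the peer-validation idea behind the SAN check above, assuming the expected DNS name comes from ca.framework.cert.management.custom.san; the class and method are illustrative, not the actual cluster service code:
```java
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;
import javax.net.ssl.SSLSession;

public class ClusterPeerValidator {
    private static final int DNS_NAME = 2; // SAN type for dNSName entries

    // Returns true when the peer certificate on the mTLS session carries the expected DNS SAN.
    public static boolean isTrustedPeer(SSLSession session, String expectedSan) throws Exception {
        Certificate[] chain = session.getPeerCertificates();
        if (chain.length == 0 || !(chain[0] instanceof X509Certificate)) {
            return false;
        }
        Collection<List<?>> sans = ((X509Certificate) chain[0]).getSubjectAlternativeNames();
        if (sans == null) {
            return false;
        }
        for (List<?> san : sans) {
            if (((Integer) san.get(0)) == DNS_NAME && expectedSan.equalsIgnoreCase((String) san.get(1))) {
                return true;
            }
        }
        return false;
    }
}
```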
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
The client.setBasePath() call would overwrite the Linstor controller IP/host
for all current client users. This is essentially a race condition
that was triggered as soon as you had configured two different primary storages
with different Linstor controllers.
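One way to avoid mutating a shared client's base path is to keep one immutable client per Linstor controller; a minimal sketch, where RestClient is a hypothetical stand-in for the real Linstor API client, not its actual interface:
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LinstorClientRegistry {
    // Hypothetical minimal REST client; stands in for the real Linstor API client.
    static class RestClient {
        private final String basePath;
        RestClient(String basePath) { this.basePath = basePath; }
        String basePath() { return basePath; }
    }

    // Fix pattern: one immutable client per Linstor controller / primary storage,
    // instead of one shared client whose base path is rewritten per call.
    private static final Map<String, RestClient> CLIENTS = new ConcurrentHashMap<>();

    public static RestClient clientFor(String controllerUrl) {
        return CLIENTS.computeIfAbsent(controllerUrl, RestClient::new);
    }
}
```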
* New feature: Change storage pool scope
* Added checks for Ceph/RBD
* Update op_host_capacity table on primary storage scope change
* Storage pool scope change integration test
* pull 8875 : Addressed review comments
* Pull 8875: remove storage checks, AbstractPrimayStorageLifeCycleImpl class
* Pull 8875: Fixed integration test failure
* Pull 8875: Review comments
* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions
* Added UT for changeStoragePoolScope
* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Dao review comments
* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue
* Pull 8875: Created a new smokes test file + A single warning msg in ui
* Pull 8875: Added cleanup in test_primary_storage_scope.py
* Pull 8875: Typo in en.json
* Pull 8875: cleanup array in test_primary_storage_scope.py
* Pull:8875 Removing extra whitespace at eof of StorageManagerImplTest
* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Added license header
* Pull 8875: Fixed sql query for vmstates
* Pull 8875: Changed icon plus info on disabled mode in apidoc
* Pull 8875: Change scope should not work for local storage
* Pull 8875: Change scope completion event
* Pull 8875: Added api findAffectedVmsForStorageScopeChange
* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster
* Pull 8875: Review comments + Vm name in response
* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because of multiple volumes in the VM satisfying the criteria
* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT
* listAffectedVmsForStorageScopeChange should work if the pool is not disabled
* Fix listAffectedVmsForStorageScopeChangeTest UT
* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query
* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
* fix eof
* changeStoragePoolScopeToZone should connect pool to all Up hosts
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Mitigation for non-scalable Powerflex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections, check the clients limit, and prepare/unprepare the SDC on the hosts.
- Added commands to prepare and unprepare storage clients, which start and stop the SDC service respectively on the hosts.
- Introduced the config 'storage.pool.connected.clients.limit' at the storage level for client limits; currently supported for PowerFlex only.
* tests issue fixed
* refactor / improvements
* lock with the PowerFlex system id while checking the connections limit (a sketch of this locking pattern follows this commit list)
* updated the PowerFlex system id lock to hold until SDC preparation completes
* Added custom stats support for storage pool, through listStoragePools API
* code improvements, and unit tests
* unit tests fixes
* Update config 'storage.pool.connected.clients.limit' to dynamic, and some improvements
* Stop SDC on host after migration if no volumes mapped to host
* Wait for SDC to connect after scini service start, and some log improvements
* Do not throw exception (log it) when SDC is not connected while revoking access for the powerflex volume
* some log improvements
* Restart agent when host comes out of maintenance
* Don't send CreateStoragePoolCommand to hosts in maintenance mode
* CreateStoragePoolCommand can run when host in maintenance. Reverted the change to restart agent when host was already up and in maintenance
* Reverted changes done to ResourceManagerImplTest
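A minimal sketch of the per-system-id locking pattern mentioned in the commit list above; the class, method names and parameters are illustrative, not the actual ScaleIOSDCManager code:
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class SdcConnectionLimiter {
    // One lock per PowerFlex system id, held from the limit check until SDC preparation finishes.
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public boolean prepareSdcIfWithinLimit(String systemId, int connectedClients, int clientsLimit) {
        ReentrantLock lock = locks.computeIfAbsent(systemId, k -> new ReentrantLock());
        lock.lock();
        try {
            if (clientsLimit > 0 && connectedClients >= clientsLimit) {
                return false; // storage.pool.connected.clients.limit reached, refuse to prepare another SDC
            }
            // ... prepare/start the SDC service on the host here ...
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```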
* Create/Export OVA file of the VM on external vCenter host, to temporary conversion location (NFS)
* Fixed ova issue on untar/extract ovf from ova file
"tar -xf" cmd on ova fails with "ovf: Not found in archive" while extracting ovf file
* Updated VMware to KVM instance migration using OVA
* Refactoring and cleanup
* test fixes
* Consider zone wide pools in the destination cluster for instance conversion
* Remove local storage pool support as temporary conversion location
- OVA export is not possible as the pool is not accessible outside the host; NFS pools are supported.
* cleanup unused code
* some improvements, and refactoring
* import nic unit tests
* vmware guru unit tests
* Separate clone VM and create template file for VMware migration
- Export OVA (of the cloned VM) to the conversion location takes time.
- Do any validations with cloned VM before creating the template (and fail early).
- Updated unit tests.
* Check conversion support on host before clone vm / create template on vmware (and fail early)
* minor code improvements
* Auto select the host with instance conversion capability
* Skip instance conversion supported response param for non-KVM hosts
* Show supported conversion hosts in the UI
* Skip persistence map update if network doesn't exist
* Added support to export OVA from KVM host, through ovftool (when installed in KVM host)
* Updated importvm api param 'usemsforovaexport' to 'forcemstodownloadvmfiles', to be generic
* Updated hardcoded UI messages with message labels
* Updated UI to support importvm api param - forcemstodownloadvmfiles
* Improved instance conversion support checks on ubuntu hosts, and for windows guest vms
* Use OVF template (VM disks and spec files) for instance conversion from VMware, instead of OVA file
- this would further increase the migration performance (as it reduces the time for OVA preparation / archiving of the VM files into a single file)
* OVF export tool parallel threads code improvements
* Updated 'convert.vmware.instance.to.kvm.timeout' config default value to 3 hrs
* Config values check & code improvements
* Updated import log, with time taken and vm details
* Support for parallel downloads of VMware VM disk files while exporting OVF from the MS (a download sketch follows this commit list), and other changes below.
- Skip clone for powered off VMs
- Fixes to support standalone host (with its default datacenter)
- Some code improvements
* rebase fixes
* rebase fixes
* minor improvement
* code improvements - threads configuration, and api parameter changes to import vm files
* typo fix in error msg
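A minimal sketch of parallel disk-file downloads with a bounded thread pool, as referenced in the commit list above; the class, URLs and thread-count parameter are illustrative, not the actual downloader:
```java
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelDiskDownloader {
    // Download each disk/spec file on its own worker thread, bounded by a configurable thread count.
    public static void downloadAll(List<String> fileUrls, Path targetDir, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<?>> tasks = new ArrayList<>();
        for (String url : fileUrls) {
            tasks.add(pool.submit(() -> {
                Path target = targetDir.resolve(Paths.get(URI.create(url).getPath()).getFileName());
                try (InputStream in = URI.create(url).toURL().openStream()) {
                    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                }
                return null;
            }));
        }
        for (Future<?> t : tasks) {
            t.get(); // propagate the first failure, if any
        }
        pool.shutdown();
    }
}
```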
* xenserver: attach regular iso with configdrive
Fixes #7902
This PR allows attaching a regular ISO to a VM when it already has the
config drive ISO attached.
The config drive ISO is now attached with the SR name-label
<VM-NAME>-CONFIGDRIVE-ISO, while regular ISOs continue to attach with the SR
name-label <VM-NAME>-ISO. VMs which already had the config drive ISO
attached before this fix will return an appropriate error and will need
to be stop-started.
* Veeam: find storage pool by path for PreSetup and VMFS
* Veeam: support VMware distributed virtual switch
* Veeam: sync volumes on Solidfire after backup restoration
Users faced an issue where the backup was restored but the DATA disk was gone (the ROOT disk was OK):
```
2024-05-03 12:00:32,868 ERROR [o.a.c.b.BackupManagerImpl] (API-Job-Executor-13:ctx-aa8a1d85 job-149661 ctx-73328567) (logid:6510cf06) Failed to import VM [vmInternalName: i-169-9679-VM] from backup restoration [{"backupType":"Full","externalId":"821ca400-a5da-4282-bf3f-7c7e38a6cdb4","id":257,"uuid":"69399101-5cbd-461c-8a48-f0c70eac0b24","vmId":9679}] with hypervisor [type: VMware] due to: [Couldn't find storage pool -iqn.2010-01.com.solidfire:3p53.data-9679.221-0].
```
On managed storage, the datastore name of the DATA disk is determined by the iscsi_name of the volume.
* Veeam: set correct path for DATA disks on solidfire
* Temporarily backup StorPool volume before expunge
Sometimes users delete volumes by mistake. This enhancement
provides a way to back up the volume before it is deleted. The user
will be able to see the snapshot in the CloudStack UI/CLI and can only create a
volume from it.
A task will check (by default every 5 minutes) whether the snapshots have been
deleted from StorPool.
Global settings to enable the delayed-delete option:
`storpool.delete.after.interval` - The interval (in seconds) after which the StorPool snapshot will be deleted
`storpool.list.snapshots.delete.after.interval` - The interval (in seconds) at which to fetch the StorPool snapshots with the deleteAfter flag
Minor fix when deleting snapshots
* added Apache licence
* addressed comments
* Ability to specify NFS mount options while adding a primary storage and modify it later
* Pull 8947: Rename all occurrence of nfsopt to nfsMountOpt and added nfsMountOpts to ApiConstants
* Pull 8947: Refactor code - move into separate methods
* Pull 8947: CollectionsUtils.isNotEmpty and switch statement in LibvirtStoragePoolDef.java
* Pull 8947: UI - cancel maintenance will remount the storage pool and apply the options
* Pull 8947: UI - moved edit NFS mount options to edit Primary Storage form
* Pull 8947: UI - moved 'NFS Mount Options' to below 'Type' in dataview
* Pull 8947: Fixed message in AddPrimaryStorage.vue
* Pull 8947: Convert _nfsmountOpts to Set in libvirtStoragePoolDef
* Pull 8947: Throw exception and log error if mount fails due to incorrect mount option
* Pull 8947: Added UT and moved integration test to component/maint
* Pull 8947: Review comments
* Pull 8947: Removed password from integration test
* Pull 8947: move details allocation to inside the if loop in getStoragePoolNFSMountOpts
* Pull 8947: Fixed a bug in AddPrimaryStorage.vue
* Pull 8947: Pool should remain in maintenance mode if mount fails
* Pull 8947: Removed password from integration test
* Pull 8947: Added UT
* Pull 8875: Fixed a bug in CloudStackPrimaryDataStoreLifeCycleImplTest
* Pull 8875: Fixed a bug in LibvirtStoragePoolDefTest
* Pull 8947: minor code restructuring
* Pull 8947 : added some ut for coverage
* Fix LibvirtStorageAdapterTest UT
* Updates to change Pure and Primera to host-centric vlun assignments; various small bug fixes
* update to add timestamp when deleting pure volumes to avoid future conflicts
* update to migrate to properly check disk offering is valid for the target storage pool
* improve error handling when copying volumes to add precision to which step failed
* rename pure volume before delete to avoid conflicts if the same name is used before its expunged on the array
* remove dead code in AdaptiveDataStoreLifeCycleImpl.java
* Fix issues found in PR checks
* fix session refresh TTL logic
* updates from PR comments
* logic to delete by path ONLY on supported OUI
* fix to StorageSystemDataMotionStrategy compile error
* change noisy debug message to trace message
* fix double callback call in handleVolumeMigrationFromNonManagedStorageToManagedStorage
* fix for flash array delete error
* fix typo in StorageSystemDataMotionStrategy
* change copyVolume to use writeback to speed up copy ops
* remove returning PrimaryStorageDownloadAnswer when connectPhysicalDisk returns false during KVMStorageProcessor template copy
* remove change to only set UUID on snapshot if it is a vmSnapshot
* reverting change to UserVmManagerImpl.configureCustomRootDiskSize
* add error checking/simplification per comments from @slavkap
* Update engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* address PR comments from @sureshanaparti
---------
Co-authored-by: GLOVER RENE <rg9975@cs419-mgmtserver.rg9975nprd.app.ecp.att.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Added timeout config to copy the disks of remote KVM instance while importing the instance from an external host
* Updated copy config units to mins
* Cleanup remote converted file and local file when copy failed
Earlier, the triggerShutdown API would immediately shut down the MS, and if
it was the same MS on which the API was called, this would lead to an error in the
API call. This change adds a delay to the process so the MS is
able to send a response to the API call first.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
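A minimal sketch of the delayed-shutdown idea, not the actual management server code; the delay value is whatever the caller passes:
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedShutdown {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Schedule the actual shutdown a few seconds in the future so the API call
    // that triggered it can still receive its response from this management server.
    public void triggerShutdown(long delaySeconds) {
        scheduler.schedule(() -> System.exit(0), delaySeconds, TimeUnit.SECONDS);
    }
}
```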
This PR introduces the functionality of purging removed DB entries for CloudStack entities (currently only for VirtualMachine). There are three mechanisms for purging removed resources:
Background task - CloudStack will run a background task at a defined interval. Other parameters for this task can be controlled with new global settings.
API - New admin-only API purgeExpungedResources. It allows passing the following parameters: resourcetype, batchsize, startdate, enddate. Currently, the API is not supported in the UI.
Config for service offering - Service offerings can be created with the purgeresources parameter, which allows purging resources immediately on expunge.
The following new global settings have been added:
expunged.resources.purge.enabled: Default: false. Whether to run a background task to purge the expunged resources.
expunged.resources.purge.resources: Default: (empty). A comma-separated list of resource types that will be considered by the background task to purge the expunged resources. Currently only VirtualMachine is supported. An empty value will result in considering all resource types for purging.
expunged.resources.purge.interval: Default: 86400. Interval (in seconds) for the background task to purge the expunged resources.
expunged.resources.purge.delay: Default: 300. Initial delay (in seconds) before the background task to purge the expunged resources starts.
expunged.resources.purge.batch.size: Default: 50. Batch size to be used during expunged resources purging.
expunged.resources.purge.start.time: Default: (empty). Start time to be used by the background task to purge the expunged resources. Use the format yyyy-MM-dd or yyyy-MM-dd HH:mm:ss.
expunged.resources.purge.keep.past.days: Default: 30. Resources removed within this many days before the background task runs will not be purged. Set the value to zero to allow purging expunged resources right up to the task's execution time.
expunged.resource.purge.job.delay: Default: 180. Delay (in seconds) before executing the purge of an expunged resource initiated by the configuration in the offering. The minimum value is 180 seconds; if a lower value is set, the minimum value will be used.
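A minimal sketch of how a background purge task could honour the batch size and keep-past-days settings above; PurgeDao is a hypothetical hook, not CloudStack's actual DAO:
```java
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExpungedResourcePurgeTask {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Hypothetical DAO hook: deletes up to batchSize removed rows older than the cutoff,
    // returning how many rows were actually purged.
    interface PurgeDao { int purgeBatch(Date removedBefore, int batchSize); }

    public void start(PurgeDao dao, long delaySec, long intervalSec, int keepPastDays, int batchSize) {
        scheduler.scheduleAtFixedRate(() -> {
            // expunged.resources.purge.keep.past.days: never purge rows removed within the last N days
            Date cutoff = new Date(System.currentTimeMillis() - TimeUnit.DAYS.toMillis(keepPastDays));
            int purged;
            do {
                purged = dao.purgeBatch(cutoff, batchSize); // expunged.resources.purge.batch.size
            } while (purged == batchSize);                  // keep going while full batches are returned
        }, delaySec, intervalSec, TimeUnit.SECONDS);        // purge.delay / purge.interval
    }
}
```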
Documentation PR: apache/cloudstack-documentation#397
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Update extraconfig for platform param in xen/xcpng
* Fix map param key, not to replace '-' with '_' (replace only applicable to param / map-param)
* Added unit tests
* Add license for tests file
* Add API for listing Quota preset variables
* Add new line at EOF
* Address review
* Remove usage types
* Remove usage types from quotatypes
* Remove unused imports
* Add space for preset variable definition description
Co-authored-by: Bernardo De Marco Gonçalves <bernardomg2004@gmail.com>
---------
Co-authored-by: Bernardo De Marco Gonçalves <bernardomg2004@gmail.com>
* linstor: update to java-linstor 0.5.1
* linstor: Support VM-Instance Disk snapshots
This adds VM-Instance disk snapshot support for
Linstor primary storage. Instance snapshots are stored on
the Linstor storage pool backend in use and can be converted
into regular volume snapshots and also reverted.
Instance VM snapshots are not fully atomic, but with the
create-multi-snapshot feature they are as close to atomic as possible:
snapshots are taken over multiple volumes in the same devicemanager run.
Administrators should ensure that the IDP configuration has a signing
certificate so the actual signature check can be performed. In addition
to this, this change introduces a new global setting
`saml2.check.signature` which can deliberately fail a SAML login attempt
when the SAML response is missing a signature.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* kvm: replace ISO path in vm XML configuration during vm migration
* Update 9212: address comments
* kvm: fix vm migration if there are multiple image stores
This PR addresses the issue #8789
The original issue is that the disconnectPhysicalDiskByPath() implementation in FibreChannelAdaptor always returned true irrespective of the success of the operation. This was already fixed in PR #8889.
Ideally this method should be called after choosing the right adapter based on the storage pool type of the volume path, but currently it is just called in a loop:
05b9b6e2e7/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java (L200-L212)
Fixing the case of running through the loop of all adapters would require somehow passing the storage pool type down to the caller's cleanup() method, but that touches code all over the place (which I fear would create other regressions). Instead, we can keep the current behaviour, since the FibreChannel adapter has already been fixed.
In this PR I've added Javadoc explaining the method and the situation.
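For context, a simplified sketch of the loop-over-adaptors pattern described above; the interface and loop are illustrative stand-ins for KVMStoragePoolManager (whose code may differ in detail), and they show why an adaptor that unconditionally returns true masks failures:
```java
import java.util.List;

public class DisconnectByPathLoop {
    // Simplified stand-in for the KVM StorageAdaptor interface.
    interface StorageAdaptor {
        boolean disconnectPhysicalDiskByPath(String localPath);
    }

    // Without knowing the storage pool type of the path, every registered adaptor is asked to
    // disconnect it; in this sketch the loop stops at the first adaptor that reports success,
    // so an adaptor that always returns true hides failures and skips the remaining adaptors.
    public static boolean cleanupDisk(List<StorageAdaptor> adaptors, String localPath) {
        for (StorageAdaptor adaptor : adaptors) {
            if (adaptor.disconnectPhysicalDiskByPath(localPath)) {
                return true;
            }
        }
        return false;
    }
}
```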
* Normalize dates in Usage and Quota APIs
* Apply Daan's suggestions
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Restore removed sinces
* Add missing space
* Change param descriptions for quotaBalance and quotaStatement
* Apply Daniel's suggestions
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
---------
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* 4.19:
linstor: disconnect-disk also search for resource name in Linstor (#9035)
ui: add support to change Account role for admins (#9012)
Use parameter dcId as wrapper to prevent NPE (#8986)
disconnectPhysicalDisk(String, KVMStoragePool) seems to call the plugin
with the resource name instead of the device path, so we also have
to search for resource names while cleaning up.
* Add ability to set cpu.threadspercore similar to existing cpu.corespersocket
* add cpu.threadspercore to VM and template detail options
* Update plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* add vm detail for KVM
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
For live migration we need the allow-two-primaries option,
but we don't know exactly whether we are called for a migration operation.
Now we also check whether at least one of the resources is in use somewhere and
only then set the option.
This fixes a limitation on arm64/aarch64 KVM hosts so the product name
is correctly exported via the sysconfig attribute. Without this, `cloud-init`
doesn't function correctly on arm64 platforms.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.19:
protect against null-path (#8915)
UI: Fix missing locale strings for Status widget (#8792)
Add a shutdownhook to remove jobs owned by the process (#8896)
* 4.18:
protect against null-path (#8915)
UI: Fix missing locale strings for Status widget (#8792)
Add a shutdownhook to remove jobs owned by the process (#8896)
Add a global setting to control whether redirection is allowed while
downloading templates and volumes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
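A minimal illustration of such a redirect switch using java.net.HttpURLConnection; the actual downloaders (HttpTemplateDownloader, SimpleHttpMultiFileDownloader) are implemented differently, so this is only a sketch of the setting's effect:
```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URI;

public class RedirectAwareDownloader {
    // Open a download connection honouring a global "follow redirects" switch.
    public static HttpURLConnection open(String url, boolean followRedirects) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) URI.create(url).toURL().openConnection();
        conn.setInstanceFollowRedirects(followRedirects);
        if (!followRedirects && conn.getResponseCode() / 100 == 3) {
            throw new IOException("Redirects are not allowed for template/volume downloads: " + url);
        }
        return conn;
    }
}
```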
Add a global setting to control whether redirection is allowed while
downloading templates and volumes
core: some changes to SimpleHttpMultiFileDownloader,
similar to HttpTemplateDownloader
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit b1642bc3bf)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* storage,plugins: delegate allow zone-wide volume migration check and access grant to storage drivers
The following checks have been delegated to storage drivers:
- For volumes on zone-wide storage, whether they need storage migration when the VM is migrated
- Whether a volume requires grant access
Apply fixes in resolving PrimaryDataStore
* add tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unused import
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update engine/orchestration/src/test/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestratorTest.java
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Support KVM storage implementations controlling logical/physical block io size
* Support custom block size during disk attach
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
* NSX integration - skeletal code
* Fix module not loading on startup
* add upgrade path and daos
add nsx controller command
* add support for adding and listing nsx provider to a zone
* add license
* add default VPC offering and update upgrade path
* add global setting to enable nsx plugin
* add delete nsx controller operation
* add nsxresource
* add NSX resource , api client, create tier1 gw
* update db
* update response and add license
* Add support to create and delete nsx tier-1 gateway
* add license
* cleanup and add skeletal code for network creation
* add create/delete segment and UI integration
* add license
* address code smells - part 1
* fix test / build failure
* add ui changes + update nsx_provider table transport zones + use NSX broadcast domain for add nics to router
* ui: fix password field, and backend changes
* add route advertisement
* update offering
* update offering
* add sleep before deletion of vpc / tier g/w for ports to be removed
* move creation of segments to design phase
* change provider to VPC router for Dhcp & dns service in an nsx offering
* Add public nic for NSX
* reserve first IP (after g/w) of subnet for router nic - NSX
* revert reserving 1st IP in vpc segments
* [NSX] Create a DHCP relay and add it to a VPC tier segment (#107)
* Create DHCP relay command and execute request
* In progress integrate with networking
* Create DHCP relay config on the network VR allocation
* Revert domain router dao changes
* Create DHCP relay con VR nic plug to NSX network
* Link DHCP relay config to segment after creation
* [NSX] Cleanup DHCP Relay config on segment deletion (#108)
* Cleanup DHCP Relay config on segment deletion
* update segment & relay name generators and call delete dhcprelay after deletion of segment
* address comment
* [NSX] Fix DHCP relay config deletion was missing zone name (#8068)
* [NSX] Refactor API wrapper operations (#8059)
* [NSX] Refactor API wrapper operations
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Nsx unit tests (#8090)
* Add tests
* add test for NsxGuestNetworkGuru
* add unit tests for NsxResource
* add unit tests for NsxElement
* cleanup
* [NSX] Refactor API wrapper operations
* update tests
* update tests - add nsxProviderServiceImpl test
* add unit test - NsxServiceImpl
* add license
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
* fix tests
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* modify NSX resource naming convention (#8095)
* modify NSX resource naming convention
* remove unused imports
* add a setup phase between design and implementation of a network for intermediary steps
* add method to all classes
* NSX: Refactor Network & VPC offering (#8110)
* [NSX] Refactor API wrapper operations
* Network offering changes for NSX
* fix services and provider combination
* address comments: rename param
* update nsx_mode parameter
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test
* [NSX] Allow NSX isolated networks (#8132)
* Add network offerings for NSX on isolated networks
* Fix offerings creation
* In progress NSX isolated network
* Fixes
* Fix NIC allocation to router
* NSX: Add Step for Adding Public traffic network for NSX During zone creation (#8126)
* NSX: Add Step for Adding Public traffic network for NSX
* address comments and cleanup
* address comment
* remove indent
* NSX: Create and Delete static NAT & Port forward rules (#8131)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* fix lint check
* fix smoke tests
* fix smoke tests
* Nsx add lb rule (#8161)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* NSX: Add support to create and delete Load balancer rules
* fix deletion of lb rules
* add header file and update protocol detail
* build failure fix
* [NSX] Add SNAT support (#8100)
* In progress add source NAT
* Fix after merge
* Fix tests
* Fix NPE on isolated network deletion
* Reserve source NAT IP when its not passed for NSX VPC
* Create source NAT rule on VR NIC allocation
* Fix update VPC and remove VPC to update and remove SNAT rule
* Fix packaging
* Address review comment
* Fix build
* fix build - unused import
* Add defensive checks
* Add missing design to NSX public guru
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix VR public NIC allocation (#8166)
* NSX: fix LB member addition and deletion and add defensive checks (#8167)
* Fix public NIC NPE on broadcast URI
* NSX: Router Public nic to get IP from systemVM Ip range (#8172)
* NSX: Router Public nic to get IP from systemVM Ip range
* Fix VR IP address and setSourceNatIp command
* NSX: hide systemVM reserved IP range SourceNAT
* fix test
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test failure
* test failure fix
* [NSX] Fix update source NAT IP (#8176)
* [NSX] Fix update source NAT IP
* Fix startup
* Fix API result
* NSX - add LB route Advertizement (#8192)
* [NSX] Add ACL types support (#8224)
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Fix DROP action rules and transform tests
* Add new ACL rules
* Fixes
* associate security policies with groups and not to DFW and add deletion of rules
* Fix name convention
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix creation of VPCs (#8320)
* Fix ACL rules creation (#8323)
* [NSX] Fix database views (#8325)
* NSX: Add CKS Support & Firewall rules for Isolated Networks (#8189)
* NSX: Add ALL LB IP to the list of route advertisements in tier1
* NSX: Support Source NAT on NSX Isolated networks
* NSX: Cks Support
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add Firewall rules
* build failure - fix unit test
* fix npes
* Add support to delete firewall rules
* update nsx cks offering
* add license
* update order of ports in PF & FW rules
* fix filter for getting transport zones
* CKS support changed - MTU updated, etc
* add LB for CKS on VPC
* address comments
* adapt upstream cks logic for vpc
* revert mtu hack
* update UI changes as per upstream fix
* change display text for CKS n/w offerings for isolated and VPC tiers
* add extra line for linter
* address comment
* revert list change
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix ui build failure
* [NSX] Address SonarCloud Bugs (#8341)
* [NSX] Address SonarCloud Bugs
* Fix NSX API connection issues
* NSX: Add unit tests to increase coverage (#8355)
* NSX: Add unit tests
* cleanup unused imports
* add more unit tests
* add tests for publicnsxnetworkguru
* add license
* fix build failures
* address sonar comment
* fix security hotspots
* NSX: Add more unit tests (#8381)
* NSX : Unit tests
* remove unused imports
* remove unused import causing build failure
* fix build failures due to unused imports
* fix build failure
* fix test assertion
* remove unused imports
* remove unused import
* Nsx UI zone bug (#8398)
* NSX: Attempt to fix NSX Zone creation bug for public networks
* fix zone wizard public traffic issue
* add proper filtering of offerings based on VPC nsx mode
* clean up console logs
* NSX: Fix code smells and reported bugs (#8409)
* NSX: Fix code smells and reported bugs
* fix override issue
* remove unused imports
* fix test
* refactor code to reduce complexity
* add license
* cleanup
* fix build failure
* fix build failure
* address comments
* test - add config to ignore certain files from test coverage
* test exclusion of classes from test cov
* revert pom changes
* [NSX] Add more unit tests (#8431)
* [NSX] Add more unit tests
* More tests
* Fix build errors
* NSX: Prevent creation of L2 and Shared networks for NSX (#8463)
* NSX: Prevent creation of L2 and Shared networks for NSX
* add checks to backend to prevent creation of l2 and shared networks in nsx zones and filter only nsx offerings when creating isolated networks
* cleanup
* NSX: Fix code smells (#8436)
* NSX: Fix code smells
* Add changes to service creation logic
* CKS: Add action to during firewall rule creation (#8498)
* NSX,UI: Deduplicate network list when creating kubernetes clusters (#8513)
* NSX: Make LB service selectable in network offering (#8512)
* NSX: Make LB service selectable in network offering
* fix label
* address comments
* address comments
* NSX: Add appropriate error message when icmp type is set to -1 for NSX (#8504)
* NSX: Add appropriate error message when icmp type is set to -1 for NSX
* address comments
* update text
* fix test
* fix test - build failure
* fix test - build failure
* NSX: Cleanup NSX resources during k8s cluster cleanup (#8528)
* fix test failure
* NSX: Improve segment deletion process (#8538)
* NSX: Add passive monitor for NSX LB to test whether a server is available (#8533)
* NSX: Add passive monitor for NSX LB to test whether a server is available
* Add active monitors too
* fix build failure
* NSX: Add check for ICMP code / type for NSX zones (#8542)
* NSX: Fix Routed Mode for Isolated and VPC networks (#8534)
* NSX: Fix Routed Mode for Isolated and VPC networks
* NSX: Fix Routed mode - add checks for ports added for FW rules
* clean up code
* fix build failure
* NSX: Add retry logic with sleep to delete segments (#8554)
* NSX: Add retry logic with sleep to delete segments
* add logs
* NSX: Fix custom ACL check (#2)
* NSX: Fix custom ACL check
* NSX: Fix custom ACL check
* Nsx vpc routed mode (#5)
* NSX: Fix VPC routed mode
* NSX: VPC route mode
* remove unnecessary changes
* Nsx: Support internal LB (#4)
* NSX: Support internal LB service in NSX
* add lb removal logic
* Fix UI issue hiding internal LB tab
* Refactor method name
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* NSX: Improve NSX resource cleanup process (#3)
* Fix unit test
* NSX: Add SourceNAT service to the default Routed offering for VPC (#13)
* Fix VPC restart with cleanup (#12)
* NSX: Fix ACL rule removal on replacement and fix rule order (#11)
* NSX: fix smoke test failure for ACLs (#9)
* Fix unit tests
* Fix NSX plugin pom XML
* NSX: Add support to re-order ACL rules (NSX FW rules) (#14)
* [WIP] NSX: Add support to re-order ACL rules (NSX FW rules)
* fix reordering of acl rules on all networks that it is associated to
* clean up and attempt test fix
* Fix tests
* Remove unused import
* tweak reorder logic
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix zone creation issue for internal load balancer
* Fix
* Fix unit test
* fix logger
* fix logger
* fix logger
* NSX: Fix VPC form to ignore source NAT IP when creating VPCs and fix label
* Move SQL changes to the newest schema file
* NSX: Last Fixes
* Fix build
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Introduced a new API checkVolumeAndRepair that allows users or admins to check volumes and repair them if any leaks are observed.
Currently this is supported only for KVM
* some fixes
* Added unit tests
* addressed review comments
* add repair volume while granting access
* Changed the repair parameter to accept both leaks/all (a sketch of the underlying check follows this commit list)
* Introduced a new global setting volume.check.and.repair.before.use to check and repair volumes before VM start or volume attach operations
* Added volume check and repair changes only during VM start and volume attach operations
* Refactored the names to look similar across the code
* Some code fixes
* remove unused code
* Renamed repair values
* Fixed unit tests
* changed version
* Address review comments
* Code refactored
* used volume name in logs
* Changed the API to Async and the setting scope to storage pool
* Fixed exit value handling with check volume command
* Fixed storage scope to the setting
* Fix volume format issues
* Refactored the log messages
* Fix formatting
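A minimal sketch of the underlying check referenced in the commit list above, assuming qemu-img backs the KVM-side implementation; the wrapper class is illustrative, not CloudStack's actual code:
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class VolumeCheckAndRepair {
    // Run "qemu-img check" on a qcow2 volume, optionally repairing it.
    // repair may be null (check only), "leaks" or "all", matching the API's repair values.
    public static int checkAndRepair(String volumePath, String repair) throws Exception {
        ProcessBuilder pb = (repair == null)
                ? new ProcessBuilder("qemu-img", "check", "--output", "json", volumePath)
                : new ProcessBuilder("qemu-img", "check", "--output", "json", "-r", repair, volumePath);
        pb.redirectErrorStream(true);
        Process p = pb.start();
        try (BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            out.lines().forEach(System.out::println); // JSON report with leaks/corruptions counters
        }
        return p.waitFor(); // qemu-img exit code signals whether errors remain
    }
}
```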
* 4.18:
Storage plugin support to check if volume on datastore requires access for migration (#8655)
CKS: fix /opt/bin/deploy-cloudstack-secret in CKS control nodes (#8697)
* Check if a volume on the datastore requires access for migration, and grant/revoke volume access if required
* Updated default implementation for requiresAccessForMigration method in PrimaryDataStoreDriver
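A simplified sketch of the delegation pattern described above; the interface below is only a stand-in for PrimaryDataStoreDriver, whose real method signature differs:
```java
public class MigrationAccessSketch {
    // Simplified stand-in for the storage driver contract: drivers decide whether a
    // volume on their datastore must be granted access before it can be migrated.
    interface PrimaryDataStoreDriverSketch {
        default boolean requiresAccessForMigration(long volumeId) {
            return false; // default: no extra access grant needed
        }
    }

    // A driver for managed storage can override the default and ask for access.
    static class ManagedStorageDriver implements PrimaryDataStoreDriverSketch {
        @Override
        public boolean requiresAccessForMigration(long volumeId) {
            return true; // grant access before migration, revoke afterwards
        }
    }
}
```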
* Update to 4.20.0
* Update to python3
* Upgrade to JRE 17
* Upgrade to Debian 12.4.0
* VR: upgrade to python3
for f in `find systemvm/ -name '*.py'`; do
    if grep "print " "$f" >/dev/null; then
        2to3-2.7 -w "$f"
    else
        2to3-2.7 -p -w "$f"
    fi
done
* java: Use JRE17 in cloudstack packages and systemvmtemplate
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add --add-opens to JAVA_OPTS in systemd config
* Add --add-opens to JAVA_OPTS in systemd config for usage
* python3: fix "TypeError: a bytes-like object is required, not 'str'"
* python3: fix "ValueError: must have exactly one of create/read/write/append mode"
* Add --add-exports=java.base/sun.security.x509=ALL-UNNAMED for management server
* Use pip3 instead of pip for centos8
* python3: fix "TypeError: write() argument must be str, not bytes"
```
root@r-1037-VM:~# /opt/cloud/bin/passwd_server_ip.py 10.1.1.1
Traceback (most recent call last):
File "/opt/cloud/bin/passwd_server_ip.py", line 201, in <module>
serve()
File "/opt/cloud/bin/passwd_server_ip.py", line 187, in serve
initToken()
File "/opt/cloud/bin/passwd_server_ip.py", line 60, in initToken
f.write(secureToken)
TypeError: write() argument must be str, not bytes
root@r-1037-VM:~#
```
* Python3: fix "name 'file' is not defined"
```
root@r-1037-VM:~# /opt/cloud/bin/passwd_server_ip.py 10.1.1.1
Traceback (most recent call last):
File "/opt/cloud/bin/passwd_server_ip.py", line 201, in <module>
serve()
File "/opt/cloud/bin/passwd_server_ip.py", line 188, in serve
loadPasswordFile()
File "/opt/cloud/bin/passwd_server_ip.py", line 67, in loadPasswordFile
with file(getPasswordFile()) as f:
NameError: name 'file' is not defined
```
* python3: fix "TypeError: write() argument must be str, not bytes" (two more files)
* Upgrade jaxb version
* python3: fix more "TypeError: a bytes-like object is required, not str"
* python3: fix "Failed to update password server"
Failed to update password server due to: POST data should be bytes, an iterable of bytes, or a file object. It cannot be of type str.
* python3: fix "bad duration value: ikelifetime=24.0h"
Jan 15 13:57:20 systemvm ipsec[3080]: # bad duration value: ikelifetime=24.0h
* python3: fix password server "invalid save_password token"
* test: increase retries in test_vpc_vpn.py
* python3: fix passwd_server_ip.py
see error below
```
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: ----------------------------------------
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: Exception occurred during processing of request from ('10.1.1.129', 32782)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: Traceback (most recent call last):
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 650, in process_request_thread
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.finish_request(request, client_address)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 360, in finish_request
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.RequestHandlerClass(request, client_address, self)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 720, in __init__
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.handle()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/http/server.py", line 427, in handle
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.handle_one_request()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/http/server.py", line 415, in handle_one_request
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: method()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/opt/cloud/bin/passwd_server_ip.py", line 120, in do_GET
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.wfile.write(password)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 799, in write
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self._sock.sendall(b)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: TypeError: a bytes-like object is required, not 'str'
```
* python3: fix self.cl.get_router_password in Redundant VRs
```
File "/opt/cloud/bin/cs/CsDatabag.py", line 154, in get_router_password
md5.update(passwd)
TypeError: Unicode-objects must be encoded before hashing"]
```
* scripts: mark multipath scripts as executable
* systemvm template: remove hyperv packages and do not export
* VR: update default RAM size of System VMs/VRs to 512MiB
Before
```
mysql> select id,name,cpu,speed,ram_size,unique_name,system_use from service_offering where name like "System%";
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| id | name | cpu | speed | ram_size | unique_name | system_use |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| 3 | System Offering For Software Router | 1 | 500 | 256 | Cloud.Com-SoftwareRouter | 1 |
| 4 | System Offering For Software Router - Local Storage | 1 | 500 | 256 | Cloud.Com-SoftwareRouter-Local | 1 |
| 5 | System Offering For Internal LB VM | 1 | 256 | 256 | Cloud.Com-InternalLBVm | 1 |
| 6 | System Offering For Internal LB VM - Local Storage | 1 | 256 | 256 | Cloud.Com-InternalLBVm-Local | 1 |
| 7 | System Offering For Console Proxy | 1 | 500 | 1024 | Cloud.com-ConsoleProxy | 1 |
| 8 | System Offering For Console Proxy - Local Storage | 1 | 500 | 1024 | Cloud.com-ConsoleProxy-Local | 1 |
| 9 | System Offering For Secondary Storage VM | 1 | 500 | 512 | Cloud.com-SecondaryStorage | 1 |
| 10 | System Offering For Secondary Storage VM - Local Storage | 1 | 500 | 512 | Cloud.com-SecondaryStorage-Local | 1 |
| 11 | System Offering For Elastic LB VM | 1 | 128 | 128 | Cloud.Com-ElasticLBVm | 1 |
| 12 | System Offering For Elastic LB VM - Local Storage | 1 | 128 | 128 | Cloud.Com-ElasticLBVm-Local | 1 |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
10 rows in set (0.00 sec)
```
New value
```
mysql> select id,name,cpu,speed,ram_size,unique_name,system_use from service_offering where name like "System%";
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| id | name | cpu | speed | ram_size | unique_name | system_use |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| 3 | System Offering For Software Router | 1 | 500 | 512 | Cloud.Com-SoftwareRouter | 1 |
| 4 | System Offering For Software Router - Local Storage | 1 | 500 | 512 | Cloud.Com-SoftwareRouter-Local | 1 |
| 5 | System Offering For Internal LB VM | 1 | 256 | 512 | Cloud.Com-InternalLBVm | 1 |
| 6 | System Offering For Internal LB VM - Local Storage | 1 | 256 | 512 | Cloud.Com-InternalLBVm-Local | 1 |
| 7 | System Offering For Console Proxy | 1 | 500 | 1024 | Cloud.com-ConsoleProxy | 1 |
| 8 | System Offering For Console Proxy - Local Storage | 1 | 500 | 1024 | Cloud.com-ConsoleProxy-Local | 1 |
| 9 | System Offering For Secondary Storage VM | 1 | 500 | 512 | Cloud.com-SecondaryStorage | 1 |
| 10 | System Offering For Secondary Storage VM - Local Storage | 1 | 500 | 512 | Cloud.com-SecondaryStorage-Local | 1 |
| 11 | System Offering For Elastic LB VM | 1 | 128 | 512 | Cloud.Com-ElasticLBVm | 1 |
| 12 | System Offering For Elastic LB VM - Local Storage | 1 | 128 | 512 | Cloud.Com-ElasticLBVm-Local | 1 |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
10 rows in set (0.01 sec)
```
* debian12: fix test_network_ipv6 and test_vpc_ipv6
* python3: remove duplicated imports
* debian12: failed to start Apache2 server (SSLCipherSuite @SECLEVEL=0)
error message
```
[Sat Jan 20 22:51:14.595143 2024] [ssl:emerg] [pid 10200:tid 140417063888768] AH02562: Failed to configure certificate cloudinternal.com:443:0 (with chain), check /etc/ssl/certs/cert_apache.crt
[Sat Jan 20 22:51:14.595234 2024] [ssl:emerg] [pid 10200:tid 140417063888768] SSL Library Error: error:0A00018E:SSL routines::ca md too weak
AH00016: Configuration Failed
```
openssl version
```
root@s-167-VM:~# openssl version -a
OpenSSL 3.0.11 19 Sep 2023 (Library: OpenSSL 3.0.11 19 Sep 2023)
built on: Mon Oct 23 17:52:22 2023 UTC
platform: debian-amd64
options: bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -fzero-call-used-regs=used-gpr -DOPENSSL_TLS_SECURITY_LEVEL=2 -Wa,--noexecstack -g -O2 -ffile-prefix-map=/build/reproducible-path/openssl-3.0.11=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
OPENSSLDIR: "/usr/lib/ssl"
ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-3"
MODULESDIR: "/usr/lib/x86_64-linux-gnu/ossl-modules"
Seeding source: os-specific
CPUINFO: OPENSSL_ia32cap=0x80202001478bfffd:0x0
```
certificate
```
root@s-167-VM:~# keytool -printcert -rfc -file /usr/local/cloud/systemvm/certs/realhostip.crt
-----BEGIN CERTIFICATE-----
MIIFZTCCBE2gAwIBAgIHKBCduBUoKDANBgkqhkiG9w0BAQUFADCByjELMAkGA1UE
BhMCVVMxEDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAY
BgNVBAoTEUdvRGFkZHkuY29tLCBJbmMuMTMwMQYDVQQLEypodHRwOi8vY2VydGlm
aWNhdGVzLmdvZGFkZHkuY29tL3JlcG9zaXRvcnkxMDAuBgNVBAMTJ0dvIERhZGR5
IFNlY3VyZSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTERMA8GA1UEBRMIMDc5Njky
ODcwHhcNMTIwMjAzMDMzMDQwWhcNMTcwMjA3MDUxMTIzWjBZMRkwFwYDVQQKDBAq
LnJlYWxob3N0aXAuY29tMSEwHwYDVQQLDBhEb21haW4gQ29udHJvbCBWYWxpZGF0
ZWQxGTAXBgNVBAMMECoucmVhbGhvc3RpcC5jb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQCDT9AtEfs+s/I8QXp6rrCw0iNJ0+GgsybNHheU+JpL39LM
TZykCrZhZnyDvwdxCoOfE38Sa32baHKNds+y2SHnMNsOkw8OcNucHEBX1FIpOBGp
h9D6xC+umx9od6xMWETUv7j6h2u+WC3OhBM8fHCBqIiAol31/IkcqDxxsHlQ8S/o
CfTlXJUY6Yn628OA1XijKdRnadV0hZ829cv/PZKljjwQUTyrd0KHQeksBH+YAYSo
2JUl8ekNLsOi8/cPtfojnltzRI1GXi0ZONs8VnDzJ0a2gqZY+uxlz+CGbLnGnlN4
j9cBpE+MfUE+35Dq121sTpsSgF85Mz+pVhn2S633AgMBAAGjggG+MIIBujAPBgNV
HRMBAf8EBTADAQEAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAOBgNV
HQ8BAf8EBAMCBaAwMwYDVR0fBCwwKjAooCagJIYiaHR0cDovL2NybC5nb2RhZGR5
LmNvbS9nZHMxLTY0LmNybDBTBgNVHSAETDBKMEgGC2CGSAGG/W0BBxcBMDkwNwYI
KwYBBQUHAgEWK2h0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3Np
dG9yeS8wgYAGCCsGAQUFBwEBBHQwcjAkBggrBgEFBQcwAYYYaHR0cDovL29jc3Au
Z29kYWRkeS5jb20vMEoGCCsGAQUFBzAChj5odHRwOi8vY2VydGlmaWNhdGVzLmdv
ZGFkZHkuY29tL3JlcG9zaXRvcnkvZ2RfaW50ZXJtZWRpYXRlLmNydDAfBgNVHSME
GDAWgBT9rGEyk2xF1uLuhV+auud2mWjM5zArBgNVHREEJDAighAqLnJlYWxob3N0
aXAuY29tgg5yZWFsaG9zdGlwLmNvbTAdBgNVHQ4EFgQUZyJz9/QLy5TWIIscTXID
E8Xk47YwDQYJKoZIhvcNAQEFBQADggEBAKiUV3KK16mP0NpS92fmQkCLqm+qUWyN
BfBVgf9/M5pcT8EiTZlS5nAtzAE/eRpBeR3ubLlaAogj4rdH7YYVJcDDLLoB2qM3
qeCHu8LFoblkb93UuFDWqRaVPmMlJRnhsRkL1oa2gM2hwQTkBDkP7w5FG1BELCgl
gZI2ij2yxjge6pOEwSyZCzzbCcg9pN+dNrYyGEtB4k+BBnPA3N4r14CWbk+uxjrQ
6j2Ip+b7wOc5IuMEMl8xwTyjuX3lsLbAZyFI9RCyofwA9NqIZ1GeB6Zd196rubQp
93cmBqGGjZUs3wMrGlm7xdjlX6GQ9UvmvkMub9+lL99A5W50QgCmFeI=
-----END CERTIFICATE-----
Warning:
The certificate uses the SHA1withRSA signature algorithm which is considered a security risk. This algorithm will be disabled in a future update.
```
it comes from
```
$ openssl x509 -in ./systemvm/agent/certs/realhostip.crt -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 11277268652730408 (0x28109db8152828)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C = US, ST = Arizona, L = Scottsdale, O = "GoDaddy.com, Inc.", OU = http://certificates.godaddy.com/repository, CN = Go Daddy Secure Certification Authority, serialNumber = 07969287
Validity
Not Before: Feb 3 03:30:40 2012 GMT
Not After : Feb 7 05:11:23 2017 GMT
Subject: O = *.realhostip.com, OU = Domain Control Validated, CN = *.realhostip.com
```
* debian12: use ed25519 instead of rsa as ssh-rsa has been deprecated in OpenSSH
on xenserver
```
[root@pr8497-t8906-xenserver-71-xs2 ~]# ssh -i .ssh/id_rsa.cloud -p 3922 169.254.214.153
Warning: Permanently added '[169.254.214.153]:3922' (ECDSA) to the list of known hosts.
Permission denied (publickey).
```
in the CPVM
```
Jan 22 19:31:09 v-1-VM sshd[2869]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Jan 22 19:31:09 v-1-VM sshd[2869]: Connection closed by authenticating user root 169.254.0.1 port 54704 [preauth]
```
ssh-dss (DSA) is not supported either
* debian12: add PubkeyAcceptedAlgorithms=+ssh-rsa to sshd_config
* VR: install python3 packages in case of Debian 11
* pom.xml: exclude systemvm/agent/packages/* in license check
* systemvm: do not patch router/systemvm during startup
this will cause the 4.19 systemvm template to not work, which may be expected
- python3 VS python2 (default)
- openSSL 3.0.1 VS 1.1.1w
- openssh-server 9.1 VS 8.4
* VR: patch router/systemvm if template is debian11
This supports debian 11 template by
- revert change in systemvm/debian/etc/ssh/sshd_config
- patch VR/systemvms during startup
- install packages during patching system vm/routers
* python3 flake: fix E502 the backslash is redundant between brackets
```
../debian/root/health_checks/router_version_check.py:55:70: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:58:61: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:67:71: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:70:60: E502 the backslash is redundant between brackets
../debian/root/health_checks/haproxy_check.py:47:71: E502 the backslash is redundant between brackets
../debian/root/health_checks/haproxy_check.py:48:64: E502 the backslash is redundant between brackets
../debian/root/health_checks/cpu_usage_check.py:43:54: E502 the backslash is redundant between brackets
../debian/root/health_checks/cpu_usage_check.py:46:58: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:31:65: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:42:57: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:45:63: E502 the backslash is redundant between brackets
```
* python3 flake: fix E275 missing whitespace after keyword
```
../debian/opt/cloud/bin/cs_firewallrules.py:29:20: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_dhcp.py:27:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_dhcp.py:36:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_guestnetwork.py:33:20: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_guestnetwork.py:35:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_vpnusers.py:37:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/merge.py:230:11: E275 missing whitespace after keyword
../debian/opt/cloud/bin/merge.py:239:19: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_remoteaccessvpn.py:24:12: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_site2sitevpn.py:24:12: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs/CsHelper.py:90:15: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs/CsAddress.py:367:15: E275 missing whitespace after keyword
```
* python3 flake: fix configure.py
```
../debian/opt/cloud/bin/configure.py:24:22: E401 multiple imports on one line
../debian/opt/cloud/bin/configure.py:43:180: E501 line too long (294 > 179 characters)
../debian/opt/cloud/bin/configure.py:46:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:63:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:65:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:72:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:310:25: E711 comparison to None should be 'if cond is not None:'
../debian/opt/cloud/bin/configure.py:312:29: E711 comparison to None should be 'if cond is None:'
../debian/opt/cloud/bin/configure.py:378:25: E711 comparison to None should be 'if cond is not None:'
../debian/opt/cloud/bin/configure.py:380:29: E711 comparison to None should be 'if cond is None:'
../debian/opt/cloud/bin/configure.py:490:29: E712 comparison to False should be 'if cond is False:' or 'if not cond:'
../debian/opt/cloud/bin/configure.py:642:16: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:644:18: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:1416:1: E305 expected 2 blank lines after class or function definition, found 1
```
* python3 flake: fix other python files
```
../debian/opt/cloud/bin/vmdata.py:97:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/vmdata.py:99:14: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/cs/CsRedundant.py:438:53: E203 whitespace before ':'
../debian/opt/cloud/bin/cs/CsRedundant.py:461:53: E203 whitespace before ':'
../debian/opt/cloud/bin/cs/CsRedundant.py:499:5: E303 too many blank lines (2)
../debian/opt/cloud/bin/cs/CsDatabag.py:189:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/cs/CsDatabag.py:193:37: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/cs/CsHelper.py:118:30: E231 missing whitespace after ','
../debian/opt/cloud/bin/cs/CsHelper.py:119:15: E225 missing whitespace around operator
../debian/opt/cloud/bin/cs/CsHelper.py:127:19: E225 missing whitespace around operator
../debian/opt/cloud/bin/cs/CsAddress.py:324:43: E221 multiple spaces before operator
../debian/opt/cloud/bin/cs/CsVpcGuestNetwork.py:28:1: E302 expected 2 blank lines, found 1
```
* python3 flake: fix CsNetfilter.py
```
../debian/opt/cloud/bin/cs/CsNetfilter.py:226:13: E117 over-indented
../debian/opt/cloud/bin/cs/CsNetfilter.py:233:180: E501 line too long (197 > 179 characters)
../debian/opt/cloud/bin/cs/CsNetfilter.py:241:14: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:242:14: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:247:18: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:247:74: E202 whitespace before '}'
../debian/opt/cloud/bin/cs/CsNetfilter.py:248:18: E201 whitespace after '{'
```
* systemvm/test: fix sys.path
```
$ bash runtests.sh
/usr/bin/python
Python 3.10.12
Running pycodestyle to check systemvm/python code for errors
Running pylint to check systemvm/python code for errors
Python 3.10.12
pylint 2.12.2
astroid 2.9.3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
Running systemvm/python unit tests
....Device "eth0" does not exist.
.....................
----------------------------------------------------------------------
Ran 25 tests in 0.008s
OK
```
* Revert "systemvm template: remove hyperv packages and do not export"
This reverts commit 4383d59d03.
* debian12: move SQL change to schema-41900to42000.sql
* debian12: update systemvm template version to 4.20 in pom.xml
* pom.xml: fix NPE if templates do not exist on download.cloudstack.org
* debian12: increase default system offering for routers to 384MiB RAM
* CKS: fix addkubernetessupportedversion failed with JRE17
```
marvin.cloudstackException.CloudstackAPIException: Execute cmd: addkubernetessupportedversion failed, due to: errorCode: 530, errorText:Cannot invoke "org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine$State.toString()" because the return value of "com.cloud.api.query.vo.TemplateJoinVO.getState()" is null
```
* python3: revert changes by 2to3 with systemvm/debian/root/health_checks/*.py
* debian12: use ISO/packages on download.cloudstack.org
* VR: Update default ram size to 384
* debian12: fix router_version_check.py after VR live-patch and add health check in test_routers.py
* debian12: fix build error after log4j 2.x merge
* VR: Update default ram size to 512MB (again)
This reverts commits 578dd2b73f and efafa8c4d6.
* systemvmtemplate: Upgrade to Debian 12.5.0
* systemvm template: increase swap to 512MB
* VR: fix health check error due to deprecated SafeConfigParser
Fixes the warning below:
```
root@r-20-VM:~# /opt/cloud/bin/getRouterMonitorResults.sh true
/root/monitorServices.py:59: DeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in Python 3.12. Use ConfigParser directly instead.
parser = SafeConfigParser()
```
* test: fix wget does not work in macchinina vms on vmware80u1
Fixes the error below:
```
{Cmd: wget -t 1 -T 1 www.google.com via Host: 10.0.55.186} {returns: ["wget: '/usr/lib/libpcre.so.1' is not an ELF file", "wget: can't load library 'libpcre.so.1'"]}
```
* packaging: add message for VR memory upgrade after packages installation
---------
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Vishesh <vishesh92@gmail.com>
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management
Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities for a tagged resource. Currently, tagged resource limits can be used for the following resource types:
Host limits
- user_vm
- cpu
- memory
Storage limits
- volume
- primary_storage
The following global settings can be used to specify the tags for which limits need to be enforced:
Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`
Options for specifying tagged resource limits and viewing tagged resource usage are made available in the UI.
Enhances the use of templatetag for VM deployment and template creation
Adds an option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.
Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
* Use free/total instead of free metric to calculate imbalance
* Filter out hosts for condensed while checking imbalance
* Make DRS more configurable
* code refactor
* Add unit tests
* fixup
* Fix validation for drs.imbalance.condensed.skip.threshold
* Add logging and other minor changes for drs
* Add some logging for drs
* Change format for drs imbalance to string
* Show drs imbalance as percentage
* Fixup label for memorytotal in en.json
* linstor: Outline get storagepools from resourcegroup into function
* linstor: move getHostname() to kvm/Pool and reimplement
* linstor: implement CloudStack HA support
* linstor: Add util method getBestErrorMessage from main
* linstor: failed removal of allow-two-primaries is not a fatal error
* linstor: Fix failure if a Linstor node is down while migrating
If a Linstor node is down while migrating a resource, setting allow-two-primaries
will fail because the downed node can't be reached. But the property will still
be set on the other nodes, so migration should work.
We now just report an error instead of failing completely.
* Normalize logs
All classes that could inherit their loggers from their parent classes had their own loggers removed;
Most loggers did not need to be static, so they were normalized to be non-static;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
A new class, DbUpgradeAbstractImpl, was created so that all upgraders extend it and inherit its logger
* Upgrade log4j
* fix errors caused by the merge
* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files
* fix sonarcloud bug
* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit
* Readd snmpTrapAppender, remove TestAppender
* Regenerate changes
* regenerate changes
* refactor last custom appender
* fix systemvm configuration xml
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Fix utils pom
* fix some tests
* regenerate changes
* Fix jar being printed on exception
* fix logging in system VMs, fix commands not having log4j2 classpath.
* regenerate changes
* Fix some unwanted renamings
* fix end of file
* regenerate changes
* regenerate changes
* fix merge error
* regenerate changes
* fix tests
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* readd reload4j to tungsten as juniper depends on it
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* re-add reload4j dependency to network-contrail, as juniper depends on it
* regenerate changes
* regenerate changes
* regenerate changes
* fix typo
* regenerate changes
* regenerate changes
* Fix end of files
* regenerate changes
* add log4j2 to cloud-utils-SHADED.jar
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* fix some tests
* Regenerate changes
* Regenerate changes
* fix test
* Regenerate changes
* Regenerate changes
* CKS: retry if unable to drain node or unable to upgrade k8s node
I tried CKS upgrade 16 times, 11 of 16 upgrades succeeded.
2 of 16 upgrades failed due to
```
error: unable to drain node "testcluster-of7974-node-18c8c33c2c3" due to error:[error when evicting pods/"cloud-controller-manager-5b8fc87665-5nwlh" -n "kube-system": Post "https://10.0.66.18:6443/api/v1/namespaces/kube-system/pods/cloud-controller-manager-5b8fc87665-5nwlh/eviction": unexpected EOF, error when evicting pods/"coredns-5d78c9869d-h5nkz" -n "kube-system": Post "https://10.0.66.18:6443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-h5nkz/eviction": unexpected EOF], continuing command...
```
3 of 16 upgrades failed due to
```
Error from server: error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "kubernetes-dashboard", Namespace: "kubernetes-dashboard"
from server for: "/mnt/k8sdisk//dashboard.yaml": etcdserver: leader changed
```
* CKS: remove tests of creating/deleting HA clusters as they are covered by the upgrade test
* Update PR 8402 as suggested
* test: remove CKS cluster if fail to create or verify
* StoragePoolType as a class
* Fix agent side StoragePoolType enum to class
* Handle StoragePoolType for StoragePoolJoinVO
* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented a converter class and logic to utilize the @Convert annotation (see the converter sketch below).
* Fix UserVMJoinVO for StoragePoolType
* fixed missing imports
* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented a converter class and logic to utilize the @Convert annotation.
* Fixed equals for the enum.
* removed not needed try/catch for prepareAttribute
* Added license to the file.
* Implemented "supportsPhysicalDiskCopy" for storage adaptor.
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
* Add javadoc to StoragePoolType class
* Add unit test for StoragePoolType comparisons
* StoragePoolType "==" and ".equals()" fix.
* Fix StoragePoolType for FiberChannelAdapter
* Fix for abstract storage adaptor set up issue
* review comments
* Pass StoragePoolType object for poolType dao attribute
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@gmail.com>
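To illustrate the @Convert approach mentioned in the StoragePoolType changes above, here is a minimal sketch of a JPA AttributeConverter. The stand-in PoolType class, its name() accessor, and the converter name are assumptions for the example only, not the actual CloudStack implementation.
```java
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Minimal sketch of converting a class-based pool type to/from its database column.
@Converter
public class StoragePoolTypeConverterSketch
        implements AttributeConverter<StoragePoolTypeConverterSketch.PoolType, String> {

    // Simplified stand-in for the real StoragePoolType class.
    public static class PoolType {
        private final String name;
        public PoolType(String name) { this.name = name; }
        public String name() { return name; }
    }

    @Override
    public String convertToDatabaseColumn(PoolType type) {
        // Persist the type by its name, much like @Enumerated(EnumType.STRING) did for the old enum.
        return type == null ? null : type.name();
    }

    @Override
    public PoolType convertToEntityAttribute(String dbValue) {
        // Rebuild the type instance from the persisted name.
        return dbValue == null ? null : new PoolType(dbValue);
    }
}
```
An entity field would then be annotated with something like `@Convert(converter = StoragePoolTypeConverterSketch.class)` instead of `@Enumerated`.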
* veeam: detach only the restored volume during backup restore
Steps to reproduce the issue
1. create a VM (A) with ROOT and DATA disk
2. assign to a backup offering
3. create backup
4. create another VM (B)
5. restore the DATA disk of VM A, and attach to VM B
6. When the operation is done, check the datastore
Without this change, the ROOT image is not removed and is left over on the datastore.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-dfb6f21c-a941-49db-9963-4f0286a17dac
total 1784840
-rw------- 1 root root 5242880000 Jan 24 09:23 ROOT-722_2-flat.vmdk
-rw------- 1 root root 499 Jan 24 09:23 ROOT-722_2.vmdk
```
With this change, the whole temporary VM is destroyed.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2
ls: /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2: No such file or directory
```
* veeam: fix wrong disk size in debug message
* veeam: sync backup repository after operations are done
Some operations that actually succeeded still raised an exception, due to the following error:
```
2024-01-19 10:59:52,846 DEBUG [o.a.c.b.v.VeeamClient] (API-Job-Executor-42:ctx-716501bb job-4373 ctx-2359b76d) (logid:b5e19a17) Veeam response for PowerShell commands [PowerShell Import-Module Veeam.Backup.PowerShell -WarningAction SilentlyContinue;$restorePoint = Get-VBRRestorePoint ^| Where-Object { $_.Id -eq '1d99106a-b5c8-4a1e-958d-066a987caa5f' };if ($restorePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false;$repo = Get-VBRBackupRepository;Sync-VBRBackupRepository -Repository $repo;} else { ; Write-Output 'Failed to delete'; Exit 1;}] is: [^M
Restore Type Job Name State Start Time End Time Description ^M
------------ -------- ----- ---------- -------- ----------- ^M
ConfResynchronize Configuration Dat... Starting 19/01/2024 10:59:52 01/01/1900 00:00:00 ^M
^M
^M
Remove-VBRRestorePoint : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. ^M
Contact Microsoft Customer Support Services.^M
At line:1 char:196^M
+ ... orePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false ...^M
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^M
+ CategoryInfo : ReadError: (:) [Remove-VBRRestorePoint], HostException^M
+ FullyQualifiedErrorId : ReadConsoleOutput,Veeam.Backup.PowerShell.Cmdlets.RemoveVBRRestorePoint^M
^M
].
```
* veeam: fix unable to detach volume when restore backup and attach to vm then detach the volume
It also happened when destroying the original or backup VM.
```
2024-01-24 10:10:03,401 ERROR [c.c.s.r.VmwareStorageProcessor] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) Failed to detach volume!
java.lang.RuntimeException: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
at com.cloud.hypervisor.vmware.mo.DatastoreMO.moveDatastoreFile(DatastoreMO.java:290)
at com.cloud.storage.resource.VmwareStorageLayoutHelper.syncVolumeToRootFolder(VmwareStorageLayoutHelper.java:241)
at com.cloud.storage.resource.VmwareStorageProcessor.attachVolume(VmwareStorageProcessor.java:2150)
at com.cloud.storage.resource.VmwareStorageProcessor.dettachVolume(VmwareStorageProcessor.java:2408)
at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:174)
at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:71)
at com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:589)
at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2024-01-24 10:10:03,402 INFO [c.c.h.v.u.VmwareHelper] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) [ignored]failed to get message for exception: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
```
* vmware: create restored volume with new UUID and attach to VM
This PR fixes a bug introduced in #8502. The timeout for script execution was set to 60 ms instead of 60 s, which resulted in hosts not getting UEFI enabled. This is a blocker for the 4.19 release.
We do this by introducing a new agent parameter `agent.script.timeout` (default: 60 seconds) to use as the timeout for the script checking the host's UEFI status.
We also externalize the timeout for the ReadyCommand by introducing a new global setting `ready.command.wait` (default: 60 seconds).
For ModifyStoragePoolCommand, we don't externalize the timeout, to avoid confusion for the user, since the required timeout can vary depending on the provider in use and we are only setting the wait for the default host listener for now. Instead, we reuse the global `wait` setting by dividing it by 5, making the default value for ModifyStoragePoolCommand 6 minutes (1800/5 = 360 s).
Note: the actual time the MS waits is twice the wait set for a Command. Check the reference code below.
19250403e6/engine/orchestration/src/main/java/com/cloud/agent/manager/AgentAttache.java (L406-L442)
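The actual fix goes through CloudStack's Script utility and the `agent.script.timeout` agent property; the standalone sketch below (helper name and probe command are made up for illustration) only shows why the unit of the timeout matters: 60 ms aborts the check almost immediately, while 60 s gives it time to finish.
```java
import java.util.concurrent.TimeUnit;

public class BoundedScriptCheck {

    // Run a shell command but give up after the supplied timeout instead of blocking forever.
    static boolean runWithTimeout(String command, long timeout, TimeUnit unit) throws Exception {
        Process process = new ProcessBuilder("bash", "-c", command).start();
        if (!process.waitFor(timeout, unit)) {
            process.destroyForcibly();   // don't leave the agent thread stuck on a hung script
            return false;
        }
        return process.exitValue() == 0;
    }

    public static void main(String[] args) throws Exception {
        String uefiProbe = "sleep 1 && echo efi";  // stand-in for the UEFI status script
        System.out.println(runWithTimeout(uefiProbe, 60, TimeUnit.MILLISECONDS)); // false: the regression
        System.out.println(runWithTimeout(uefiProbe, 60, TimeUnit.SECONDS));      // true: the intended default
    }
}
```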
This PR fixes several issues found while testing Veeam 11 and Veeam 12:
- Import Veeam.Backup.PowerShell and silently ignore the warning messages
- Fix issue when assigning a VM to backup offerings, which was caused by the separator (\r\n)
- Fix authorization failure in Veeam 12a, because v1_4 is no longer supported in Veeam 12a
- Fix exception if the backup name has a space
- Fix backup metrics in Veeam 12, because the PowerShell command does not return the values needed
- Fix incorrect datetime value, because the PowerShell command returns a datetime format which is not supported in Java
- Fix issue during backup restoration if the VM has both ROOT and DATA disks.
This PR also has the following updates:
- Add integration test test/integration/smoke/test_backup_recovery_veeam.py
- Make some UI changes
- Add zone setting backup.plugin.veeam.version. If it is not set, CloudStack will get the Veeam version via PowerShell commands.
- Add zone setting backup.plugin.veeam.task.poll.interval and backup.plugin.veeam.task.poll.max.retry
There are a lot of test failures from test_vm_life_cycle.py in multiple PRs because no host is available for the migration of VMs.
#8438 (comment)
#8433 (comment)
#7344 (comment)
While debugging, I noticed that the hosts get stuck in the Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (the default timeout).
On the agent side, it gets stuck waiting for a response from the Script execution.
To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in the Connecting state, and restarting the agent fails due to the named lock. Locks on the DB can be checked using the query below.
SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID() \G;
This PR adds a wait for the ready command and a timeout to the Script execution to ensure that the thread doesn't get stuck and the named lock on the database is released.
This PR fixes a regression caused by #8465 on advanced zones, where import fails with:
2024-01-10 12:13:33,234 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-3:ctx-991bbe9f job-128 ctx-f49517d4) (logid:d7b8e716) Allocating nic for vm 142272e8-9e2e-407b-9d7e-e9a03b81653c in network Network {"id": 204, "name": "Isolated", "uuid": "9679fac5-e3ac-4694-a57b-beb635340f39", "networkofferingid": 10} during import
2024-01-10 12:13:33,239 ERROR [o.a.c.v.UnmanagedVMsManagerImpl] (API-Job-Executor-3:ctx-991bbe9f job-128 ctx-f49517d4) (logid:d7b8e716) Failed to import NICs while importing vm: i-2-31-VM
com.cloud.exception.InsufficientVirtualNetworkCapacityException: Unable to acquire Guest IP address for network Network {"id": 204, "name": "Isolated", "uuid": "9679fac5-e3ac-4694-a57b-beb635340f39", "networkofferingid": 10}Scope=interface com.cloud.dc.DataCenter; id=1
at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.importNic(NetworkOrchestrator.java:4582)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importNic(UnmanagedVMsManagerImpl.java:859)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importVirtualMachineInternal(UnmanagedVMsManagerImpl.java:1198)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importUnmanagedInstanceFromHypervisor(UnmanagedVMsManagerImpl.java:1511)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.baseImportInstance(UnmanagedVMsManagerImpl.java:1342)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importUnmanagedInstance(UnmanagedVMsManagerImpl.java:1282)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Also addresses the VNC password field being set instead of a fixed string.
This PR changes DRS to use the cluster's free metrics instead of the used metrics when computing the cluster's imbalance. This allows DRS to run for clusters where the hosts do not all have the same capacity.
To prevent errors during multi-user access, use the account UUID to create/access the user on the provider side. Also, update the existing secret key for a user that already exists.
1. Problem description
In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host, which has a `shares` property that defines the weight of the VM for accessing the host CPU. The value of this property has no unit and is a relative measure used to calculate how much CPU a given VM will get on the host. However, this value has a limit, which depends on the version of cgroups used by the host's kernel. The problem lies in the allowed range of shares, which differs between the two versions: [2, 262144] for cgroups version 1; and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency, both specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000 and an exception will be thrown by libvirt if the host uses cgroups v2. As the second version is becoming the default in current Linux distributions, it is necessary to address this limitation.
Equation 1
shares = CPU * speed
Fixes: #6744
2. Proposed changes
To address the described problem, we propose applying a scale conversion that considers the max shares of the host. Using the same formula currently utilized by ACS, it is possible to calculate the maximum shares a VM could request on a given host; in other words, the host's number of cores and nominal CPU speed are used as the upper limit of shares allowed to a VM. This value is then scaled to cgroup v2's allowed interval of [1, 10000] using a linear scale conversion.
The VM shares would be calculated as Equation 2, presented below, where VM requested shares is the requested shares value calculated using Equation 1, cgroup upper limit is fixed with a value of 10000 (cgroups v2 upper limit), and host max shares is the maximum shares value of the host, calculated using Equation 1. Using Equation 2, the only case where a VM passes the cgroup v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS.
Equation 2
shares = (VM requested shares * cgroup upper limit)/host max shares
To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added to find a suitable host. The max shares of each host will be calculated, and it will be verified that the VM's calculated shares do not surpass the host's value. Likewise, VM migration will have a similar new verification. Lastly, VM scaling will also have the same verification against the VM's host.
To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.
It is important to note that these changes are only for hosts with the KVM hypervisor using cgroup v2 for now.
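As a rough illustration of Equations 1 and 2, here is a minimal sketch of the scaling; the class, method names, and the example host figures are assumptions for illustration, not the actual CloudStack code.
```java
public final class CgroupShareScalerSketch {

    private static final int CGROUP_V2_UPPER_LIMIT = 10000;

    // Equation 1 for the VM: shares = CPU cores * speed (MHz), from the compute offering.
    static long requestedShares(int vmCpuCores, int vmCpuSpeedMhz) {
        return (long) vmCpuCores * vmCpuSpeedMhz;
    }

    // Equation 1 applied to the host: cores * nominal CPU speed gives the host's max shares.
    static long hostMaxShares(int hostCpuCores, int hostCpuSpeedMhz) {
        return (long) hostCpuCores * hostCpuSpeedMhz;
    }

    // Equation 2: linearly scale the requested shares into cgroup v2's [1, 10000] range.
    static long scaledShares(long vmRequestedShares, long hostMaxShares) {
        return Math.max(1, vmRequestedShares * CGROUP_V2_UPPER_LIMIT / hostMaxShares);
    }

    public static void main(String[] args) {
        long requested = requestedShares(6, 2000);   // 12000, above the cgroup v2 limit
        long hostMax = hostMaxShares(32, 2400);      // 76800 for a hypothetical 32-core 2.4 GHz host
        System.out.println(scaledShares(requested, hostMax)); // 1562, well within [1, 10000]
    }
}
```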
This PR fixes the test failures with CKS HA-cluster upgrade.
In production, the CKS HA cluster should have at least 3 control VMs as well.
The etcd cluster requires 3 members to achieve reliable HA. The etcd daemon in the control VMs uses the Raft protocol to determine the roles of nodes. During an upgrade of CKS with HA, etcd becomes unreliable if there are only 2 control VMs.
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over Fibre Channel connections; it requires multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated from the connector that communicates with the primary storage provider, using a ProviderAdapter interface; this allows the code interacting with the primary storage provider APIs to be simpler and have no direct dependencies on CloudStack code. Lastly, the PR provides implementations of the ProviderAdapter classes for the HP Enterprise Primera line of storage solutions and the Pure FlashArray line of storage solutions.
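The following is only an illustrative outline of the adapter idea described above, not the actual ProviderAdapter interface: CloudStack-facing orchestration code talks to a narrow contract like this, and each vendor implementation (e.g. Primera, Pure FlashArray) wraps its own management API behind it. All names below are hypothetical.
```java
// Hypothetical, simplified shape of a storage provider adapter.
interface ProviderVolumeAdapterSketch {
    // Create a volume on the array and return the provider-side identifier.
    String createVolume(String name, long sizeInBytes);

    // Remove the volume from the array.
    void deleteVolume(String providerVolumeId);

    // Export the volume to a host initiator (e.g. a Fibre Channel WWN).
    void attachVolume(String providerVolumeId, String hostInitiator);

    // Withdraw the export again.
    void detachVolume(String providerVolumeId, String hostInitiator);
}
```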
Sometimes the hostStats object of the agents becomes null in the management server. It is a rare situation, and we haven't found the root cause yet, but it occurs occasionally in our CloudStack deployments with many hosts.
The hostStat is null, even though the agent is UP and hosting multiple VMs. It is possible to access the VM consoles and execute tasks on them.
This pull request doesn't address the issue directly; rather it displays those hosts in Prometheus so we can restart the agent and get the necessary information.
This PR adds the capability in CloudStack to convert VMware Instances disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. vSphere/VMware setup might be managed by CloudStack or be a standalone setup.
CloudStack will let the administrator select a VM from an existing VMware vCenter in the CloudStack environment, or from an external vCenter by providing the vCenter IP, datacenter name and credentials.
The migrated VM will be imported as a KVM instance
The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
The migration process timeout can be set by the setting convert.instance.process.timeout
Before attempting the virt-v2v migration, CloudStack will create a clone of the source VM on VMware. The clone VM will be removed after the registration process finishes.
CloudStack will delegate the migration action to a KVM host, and the host will attempt to migrate the VM by invoking virt-v2v. If the guest OS is not supported, CloudStack will handle the operation as a failure.
The migration process using virt-v2v may not be a fast process.
CloudStack will not perform any check of guest OS compatibility with the virt-v2v library, as indicated at: https://access.redhat.com/articles/1351473.
When there is no access to the storage nodes, we now create a temporary resource from the snapshot and copy that data to secondary storage. Revert works the same way, except that we now additionally look for any Linstor agent node.
This also enables backup snapshot by default.
The whole BackupSnapshot functionality was introduced in 4.19,
so I would be happy if this could still be merged.
Co-authored-by: Stephan Krug <stephan.krug@scclouds.com.br>
Co-authored-by: GaOrtiga <49285692+GaOrtiga@users.noreply.github.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
This PR adds a config option for the Linstor primary storage driver that allows you to automatically back up
volume snapshots to secondary storage.
Additionally, it no longer bundles the needed java-linstor dependency into the client.jar, but instead just copies
the java-linstor.jar into lib.
The config option is called lin.backup.snapshots and defaults to false.
The scope of this change should be limited, as it only touches the Linstor driver, and part of copyAsync
was implemented with 2 new Linstor-specific commands.
Extends the current KVM Host HA functionality to the StorPool storage plugin and adds an option for easy integration so the rest of the storage plugins can support Host HA.
This extension works like the current NFS storage implementation. It allows Host HA to be used simultaneously with NFS and StorPool storage, or only with StorPool primary storage.
If it is used with different primary storages such as NFS and StorPool, and one of the storage health checks fails, there is an option to report the failure to the management server with the global config kvm.ha.fence.on.storage.heartbeat.failure. By default this option is disabled; when enabled, the Host HA service will continue with the checks on the host and eventually fence the host.
- Extracted shared ACL setup logic into a private helper method, setupAcl().
- Split original testCRUDAcl into two separate tests: testCRUDAclReadAll and testCRUDAclReadOne.
- Each test case now represents a unique scenario for better readability and maintainability.
- Replaced assertTrue(false) with fail() in catch blocks for better test failure indication.
These changes aim to enhance the clarity and maintainability of the test suite, and ensure each test case checks only one scenario.
* Refactoring reduces mock cloning of TungstenAnswer
* Apply suggestions from code review
Great suggestions, thanks a lot!
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Rename CreateMockTungstenAnswer to MockTungstenAnswerFactory
* Updated parameter to camel case.
* Revised in accordance with the latest update
* Replace all `\r` with `\n`.
* Replace all \r with \n.
* temp for re-uploading
* reupdate
* update line ending
* update line ending
* Add static methods to avoid duplicate creation of new
---------
Co-authored-by: dahn <daan.hoogland@gmail.com>
* refactor MockNetworkVO
* Apply suggestions from code review
Co-authored-by: dahn <daan.hoogland@gmail.com>
* adding static
adding a static method to the MockNetworkVO class that generates a MockNetworkVO rather than using new every time.
---------
Co-authored-by: dahn <daan.hoogland@gmail.com>
OAuth2, the industry-standard authorization/authentication framework, simplifies the process of
granting access to resources. CloudStack supports OAuth2 authentication, wherein users can log in to
CloudStack without using a username and password. Support for the Google and GitHub providers has been added.
Other OAuth2 providers can be easily integrated with CloudStack using its plugin framework.
The login page will show provider options when OAuth2 is enabled and the corresponding providers are configured.
An "OAuth configuration" sub-section is present under "Configuration", where admins can register the corresponding
OAuth providers.
This pull request (PR) implements a Distributed Resource Scheduler (DRS) for a CloudStack cluster. The primary objective of this feature is to enable automatic resource optimization and workload balancing within the cluster by live migrating the VMs as per configuration.
Administrators can also execute DRS manually for a cluster, using the UI or the API.
Adds support for two algorithms - condensed & balanced. Algorithms are pluggable allowing ACS Administrators to have customized control over scheduling.
Implementation
There are three top-level components:
- Scheduler: a timer task which:
  - generates DRS plans for clusters
  - processes DRS plans
  - removes old DRS plan records
- DRS Execution: we go through each VM in the cluster and use the specified algorithm to check whether DRS is required and to calculate the cost, benefit & improvement of migrating that VM to another host in the cluster. On the basis of cost, benefit & improvement, the best migration is selected for the current iteration and the VM is migrated. The maximum number of iterations (live migrations) possible on the cluster is defined by drs.iterations, which is defined as a percentage (as a value between 0 and 1) of the total number of workloads.
- Algorithm: every algorithm implements two methods (see the interface sketch below):
  - needsDrs - to check if DRS is required for the cluster
  - getMetrics - to calculate the cost, benefit & improvement of migrating a VM to another host.
Algorithms
- Condensed - packs all the VMs on the minimum number of hosts in the cluster.
- Balanced - distributes the VMs evenly across the hosts in the cluster.
Algorithms use drs.level to decide the amount of imbalance to allow in the cluster.
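A minimal sketch of that pluggable contract follows; the method shapes mirror needsDrs/getMetrics described above, but the types and signatures are simplified assumptions, not the exact CloudStack interface.
```java
import java.util.List;

// Simplified sketch of a pluggable DRS algorithm.
interface ClusterDrsAlgorithmSketch {

    // Decide whether the cluster's current imbalance warrants a DRS run,
    // given per-host metric values and the allowed imbalance (drs.imbalance).
    boolean needsDrs(long clusterId, List<Double> hostMetricValues, double allowedImbalance);

    // Return {cost, benefit, improvement} of migrating the given VM to the given host;
    // the scheduler picks the candidate with the best trade-off in each iteration.
    double[] getMetrics(long clusterId, long vmId, long destinationHostId);
}
```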
APIs Added
- listClusterDrsPlan
  - id - ID of the DRS plan to list
  - clusterid - to list plans for a cluster ID
- generateClusterDrsPlan
  - id - cluster ID
  - iterations - the maximum number of iterations in a DRS job, defined as a percentage (as a value between 0 and 1) of the total number of workloads. Defaults to the value of the cluster's drs.iterations setting.
- executeClusterDrsPlan
  - id - ID of the cluster for which the DRS plan is to be executed.
  - migrateto - this parameter specifies the mapping between a VM and a host to migrate that VM. Format of this parameter: migrateto[vm-index].vm=<uuid>&migrateto[vm-index].host=<uuid>.
Config Keys Added

| Name | Key | Scope | Default Value | Description |
|---|---|---|---|---|
| ClusterDrsPlanExpireInterval | drs.plan.expire.interval | Global | 30 days | The interval in days after which old DRS records will be cleaned up. |
| ClusterDrsEnabled | drs.automatic.enable | Cluster | false | Enable/disable automatic DRS on a cluster. |
| ClusterDrsInterval | drs.automatic.interval | Cluster | 60 minutes | The interval in minutes after which a periodic background thread will schedule DRS for a cluster. |
| ClusterDrsIterations | drs.max.migrations | Cluster | 50 | Maximum number of live migrations in a DRS execution. |
| ClusterDrsAlgorithm | drs.algorithm | Cluster | condensed | DRS algorithm to execute on the cluster. This PR implements two algorithms - balanced & condensed. |
| ClusterDrsLevel | drs.imbalance | Cluster | 0.5 | Percentage (as a value between 0.0 and 1.0) of imbalance allowed in the cluster. 1.0 means no imbalance is allowed and 0.0 means full imbalance is allowed. |
| ClusterDrsMetric | drs.imbalance.metric | Cluster | memory | The cluster imbalance metric to use when checking the drs.imbalance threshold. Possible values are memory and cpu. |
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.
Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The response for copySnapshot returns zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.
In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot will be taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.
As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.
`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.
Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.
`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
----
New API added
`copySnapshot`
Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form
Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
Making a diskful resource was meant as an optimization,
but it cannot work on non-hyperconverged setups,
as the (diskful) storage nodes are not part of the CloudStack cluster.
A TODO was overlooked and never implemented,
which could trigger the following bug:
if Linstor didn't create a resource (diskless or diskful) on
the node chosen by CloudStack, it would not be able to copy the
template data there; it even seems no error was
triggered, and the new template file silently just became
empty/corrupt.
This PR allows an admin to reserve some hypervisor host CPUs for system use. Another way to think of it is limiting the number of CPUs allocatable to VMs. This can be useful if the admin wants to do other things with the hypervisor's CPU, for example reserve some cores for running hyperconverged storage processes.
Co-authored-by: Marcus Sorensen <mls@apple.com>
* Trigger out of band VM state update via libvirt event when VM stops
* Add License headers, refactor nested try
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
* Fix style for LibvirtComputingResource variable names and its dependencies
* More variable name fixes
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>