If the libvirt mount point is still busy and can't be unmounted
right away, we wait 5 seconds and then try a plain unmount,
without cleaning up the libvirt storage pool.
This left libvirt thinking the storage pool
was active and mounted (which it wasn't).
Now, after the plain unmount call, the
libvirt storage pool is also destroyed.
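A minimal shell sketch of the described flow (the pool name and mount path are hypothetical; the actual logic lives in the KVM agent's storage adaptor):
```
# Hypothetical illustration of the new behaviour: if the mount point is busy,
# wait 5 seconds, retry a plain unmount, and then also destroy the libvirt pool
# so libvirt no longer reports it as active/mounted.
POOL="9d4c7e2a-example-pool"        # hypothetical pool name (usually the pool UUID)
MOUNT_POINT="/mnt/${POOL}"          # hypothetical mount path

if ! umount "${MOUNT_POINT}"; then
    sleep 5
    umount "${MOUNT_POINT}" || true # plain unmount, best effort
fi
virsh pool-destroy "${POOL}"        # new step: mark the storage pool inactive in libvirt
```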
- mTLS implementation for cluster service communication
- Listen only on the specified cluster node IP address instead of all interfaces
- Validate that incoming cluster service requests are from peer management servers, based on the server certificate's DNS name, which can be set through the global config - ca.framework.cert.management.custom.san
- Hardening of KVM command wrapper script execution
- Improve API server integration port check
- cloudstack-management.default: don't have JMX configuration if not needed. JMX is used for instrumentation; users who need to use it should enable it explicitly
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
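For illustration, a hedged CloudMonkey example of setting the global config named above (the value is a placeholder; the actual SAN must match the DNS name present in the peer management servers' certificates):
```
# Assumed usage via CloudMonkey (cmk); the setting name comes from the notes above,
# the value is a placeholder DNS name expected in the peer server certificates.
cmk update configuration name=ca.framework.cert.management.custom.san value=mgmt.example.internal
# Depending on the setting, a management server restart may be needed for cluster
# service validation to pick the new value up.
```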
* Mitigation for non-scalable Powerflex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections, check the client limit, and prepare/unprepare SDC on the hosts.
- Added commands to prepare and unprepare storage clients, which prepare/start and stop the SDC service respectively on the hosts.
- Introduced the config 'storage.pool.connected.clients.limit' at the storage level for client limits, currently supported for PowerFlex only.
* tests issue fixed
* refactor / improvements
* lock with powerflex systemid while checking connections limit
* updated powerflex systemid lock to hold till sdc preparation
* Added custom stats support for storage pool, through listStoragePools API
* code improvements, and unit tests
* unit tests fixes
* Update config 'storage.pool.connected.clients.limit' to dynamic, and some improvements
* Stop SDC on host after migration if no volumes mapped to host
* Wait for SDC to connect after scini service start, and some log improvements
* Do not throw exception (log it) when SDC is not connected while revoking access for the powerflex volume
* some log improvements
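A hedged example of how the storage-pool-scoped setting mentioned above might be adjusted via CloudMonkey (the pool UUID and value are placeholders):
```
# Assumed CloudMonkey usage; 'storage.pool.connected.clients.limit' comes from the notes above.
# The storageid parameter (pool-scoped override) and the value 64 are illustrative only.
cmk update configuration name=storage.pool.connected.clients.limit value=64 storageid=<pool-uuid>

# Custom pool stats (e.g. connected SDC clients) are surfaced via listStoragePools:
cmk list storagepools id=<pool-uuid>
```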
* Create/Export OVA file of the VM on external vCenter host, to temporary conversion location (NFS)
* Fixed ova issue on untar/extract ovf from ova file
"tar -xf" cmd on ova fails with "ovf: Not found in archive" while extracting ovf file
* Updated VMware to KVM instance migration using OVA
* Refactoring and cleanup
* test fixes
* Consider zone wide pools in the destination cluster for instance conversion
* Remove local storage pool support as temporary conversion location
- OVA export not possible as the pool is not accessible outside host, NFS pools are supported.
* cleanup unused code
* some improvements, and refactoring
* import nic unit tests
* vmware guru unit tests
* Separate clone VM and create template file for VMware migration
- Export OVA (of the cloned VM) to the conversion location takes time.
- Do any validations with cloned VM before creating the template (and fail early).
- Updated unit tests.
* Check conversion support on host before clone vm / create template on vmware (and fail early)
* minor code improvements
* Auto select the host with instance conversion capability
* Skip instance conversion supported response param for non-KVM hosts
* Show supported conversion hosts in the UI
* Skip persistence map update if network doesn't exist
* Added support to export OVA from KVM host, through ovftool (when installed in KVM host)
* Updated importvm api param 'usemsforovaexport' to 'forcemstodownloadvmfiles', to be generic
* Updated hardcoded UI messages with message labels
* Updated UI to support importvm api param - forcemstodownloadvmfiles
* Improved instance conversion support checks on ubuntu hosts, and for windows guest vms
* Use OVF template (VM disks and spec files) for instance conversion from VMware, instead of OVA file
- this would further increase the migration performance (as it reduces the time for OVA preparation / archiving of the VM files into a single file)
* OVF export tool parallel threads code improvements
* Updated 'convert.vmware.instance.to.kvm.timeout' config default value to 3 hrs
* Config values check & code improvements
* Updated import log, with time taken and vm details
* Support for parallel downloads of VMware VM disk files while exporting OVF from MS, and other changes below.
- Skip clone for powered off VMs
- Fixes to support standalone host (with its default datacenter)
- Some code improvements
* rebase fixes
* rebase fixes
* minor improvement
* code improvements - threads configuration, and api parameter changes to import vm files
* typo fix in error msg
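One of the items above adds OVA export from the KVM host via ovftool (when installed); a rough, hypothetical example of the kind of invocation involved (credentials, inventory path, and the NFS conversion location are placeholders; CloudStack drives this internally):
```
# Hypothetical ovftool export of a vCenter VM to the temporary NFS conversion location.
ovftool --noSSLVerify \
  "vi://administrator@vcenter.example.local/DC1/vm/source-vm" \
  /mnt/conversion-nfs/source-vm.ova
```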
* xenserver: attach regular iso with configdrive
Fixes #7902
This PR allows attaching a regular ISO to a VM when it already has the
config drive ISO attached.
The config-drive ISO is now attached with the SR name-label
<VM-NAME>-CONFIGDRIVE-ISO, while regular ISOs continue to attach with the SR
name-label <VM-NAME>-ISO. VMs which already had the config-drive ISO
attached before this fix will return an appropriate error and will need
to be stopped and started.
* Veeam: find storage pool by path for PreSetup and VMFS
* Veeam: support VMware distributed virtual switch
* Veeam: sync volumes on Solidfire after backup restoration
A user faced the issue that the backup is restored but the DATA disk is gone (the ROOT disk is OK):
```
2024-05-03 12:00:32,868 ERROR [o.a.c.b.BackupManagerImpl] (API-Job-Executor-13:ctx-aa8a1d85 job-149661 ctx-73328567) (logid:6510cf06) Failed to import VM [vmInternalName: i-169-9679-VM] from backup restoration [{"backupType":"Full","externalId":"821ca400-a5da-4282-bf3f-7c7e38a6cdb4","id":257,"uuid":"69399101-5cbd-461c-8a48-f0c70eac0b24","vmId":9679}] with hypervisor [type: VMware] due to: [Couldn't find storage pool -iqn.2010-01.com.solidfire:3p53.data-9679.221-0].
```
On managed storage, the datastore name of DATA disk is determined by the iscsi_name of the volume.
* Veeam: set correct path for DATA disks on solidfire
* Ability to specify NFS mount options while adding a primary storage and modify it later
* Pull 8947: Rename all occurrence of nfsopt to nfsMountOpt and added nfsMountOpts to ApiConstants
* Pull 8947: Refactor code - move into separate methods
* Pull 8947: CollectionsUtils.isNotEmpty and switch statement in LibvirtStoragePoolDef.java
* Pull 8947: UI - cancel maintenance will remount the storage pool and apply the options
* Pull 8947: UI - moved edit NFS mount options to edit Primary Storage form
* Pull 8947: UI - moved 'NFS Mount Options' to below 'Type' in dataview
* Pull 8947: Fixed message in AddPrimaryStorage.vue
* Pull 8947: Convert _nfsmountOpts to Set in libvirtStoragePoolDef
* Pull 8947: Throw exception and log error if mount fails due to incorrect mount option
* Pull 8947: Added UT and moved integration test to component/maint
* Pull 8947: Review comments
* Pull 8947: Removed password from integration test
* Pull 8947: move details allocation to inside the if loop in getStoragePoolNFSMountOpts
* Pull 8947: Fixed a bug in AddPrimaryStorage.vue
* Pull 8947: Pool should remain in maintenance mode if mount fails
* Pull 8947: Removed password from integration test
* Pull 8947: Added UT
* Pull 8875: Fixed a bug in CloudStackPrimaryDataStoreLifeCycleImplTest
* Pull 8875: Fixed a bug in LibvirtStoragePoolDefTest
* Pull 8947: minor code restructuring
* Pull 8947 : added some ut for coverage
* Fix LibvirtStorageAdapterTest UT
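As a hedged illustration of the NFS mount options feature above (the API parameter name is inferred from the nfsMountOpts ApiConstants entry mentioned in the list; IDs and option values are placeholders):
```
# Assumed CloudMonkey usage; the 'nfsmountopts' parameter name is inferred from the
# nfsMountOpts ApiConstants entry mentioned above, and all IDs/values are placeholders.
# Set or change NFS mount options on an existing NFS primary storage pool:
cmk update storagepool id=<pool-uuid> nfsmountopts='vers=4.1,nconnect=4'
# Per the UI notes above, cancelling maintenance remounts the pool and applies the options.
```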
* Updates to change Pure and Primera to host-centric vlun assignments; various small bug fixes
* update to add timestamp when deleting pure volumes to avoid future conflicts
* update to migrate to properly check disk offering is valid for the target storage pool
* improve error handling when copying volumes to add precision to which step failed
* rename pure volume before delete to avoid conflicts if the same name is used before its expunged on the array
* remove dead code in AdaptiveDataStoreLifeCycleImpl.java
* Fix issues found in PR checks
* fix session refresh TTL logic
* updates from PR comments
* logic to delete by path ONLY on supported OUI
* fix to StorageSystemDataMotionStrategy compile error
* change noisy debug message to trace message
* fix double callback call in handleVolumeMigrationFromNonManagedStorageToManagedStorage
* fix for flash array delete error
* fix typo in StorageSystemDataMotionStrategy
* change copyVolume to use writeback to speed up copy ops
* remove returning PrimaryStorageDownloadAnswer when connectPhysicalDisk returns false during KVMStorageProcessor template copy
* remove change to only set UUID on snapshot if it is a vmSnapshot
* reverting change to UserVmManagerImpl.configureCustomRootDiskSize
* add error checking/simplification per comments from @slavkap
* Update engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* address PR comments from @sureshanaparti
---------
Co-authored-by: GLOVER RENE <rg9975@cs419-mgmtserver.rg9975nprd.app.ecp.att.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Added timeout config to copy the disks of remote KVM instance while importing the instance from an external host
* Updated copy config units to mins
* Cleanup remote converted file and local file when copy failed
* Update extraconfig for platform param in xen/xcpng
* Fix map param key, not to replace '-' with '_' (replace only applicable to param / map-param)
* Added unit tests
* Add license for tests file
* kvm: replace ISO path in vm XML configuration during vm migration
* Update 9212: address comments
* kvm: fix vm migration if there are multiple image stores
This PR addresses issue #8789.
The original issue is that the disconnectPhysicalDiskByPath() implementation in FibreChannelAdaptor always returns true irrespective of the success of the operation. This was already fixed in PR #8889.
Ideally this method should be called after choosing the right adaptor based on the storage pool type of the volume path, but currently it is just called in a loop over all adaptors:
05b9b6e2e7/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java (L200-L212)
Fixing that loop by somehow passing the storage pool type to the caller's cleanup() method would touch code all over the place (which I fear would create other regressions), so I feel we can keep the current behaviour, since the FibreChannel adaptor is already fixed.
In this PR I've added Javadoc explaining the method and the situation.
* Add ability to set cpu.threadspercore similar to existing cpu.corespersocket
* add cpu.threadspercore to VM and template detail options
* Update plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* add vm detail for KVM
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
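A hedged sketch of using the new detail alongside the existing cpu.corespersocket detail (the map-parameter syntax, the VM ID, and the values are illustrative; only the detail keys come from the notes above):
```
# Assumed CloudMonkey usage with map parameters; keys come from the notes above,
# values and the VM ID are placeholders. With 8 vCPUs this would present
# 2 sockets x 2 cores x 2 threads to the guest (hypervisor permitting).
cmk update virtualmachine id=<vm-uuid> \
  details[0].cpu.corespersocket=2 \
  details[0].cpu.threadspercore=2
```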
This fixes a limitation on arm64/aarch64 KVM hosts so that the product
name is correctly exported via the sysconfig attribute. Without this, `cloud-init`
doesn't function correctly on arm64 platforms.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.19:
protect against null-path (#8915)
UI: Fix missing locale strings for Status widget (#8792)
Add a shutdownhook to remove jobs owned by the process (#8896)
* 4.18:
protect against null-path (#8915)
UI: Fix missing locale strings for Status widget (#8792)
Add a shutdownhook to remove jobs owned by the process (#8896)
Add a global setting to control whether redirection is allowed while
downloading templates and volumes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Add a global setting to control whether redirection is allowed while
downloading templates and volumes
core: some changes on SimpleHttpMultiFileDownloader
similar as HttpTemplateDownloader
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit b1642bc3bf)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Support KVM storage implementations controlling logical/physical block io size
* Support custom block size during disk attach
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
* NSX integration - skeletal code
* Fix module not loading on startup
* add upgrade path and daos
* add nsx controller command
* add support for adding and listing nsx provider to a zone
* add license
* add default VPC offering and update upgrade path
* add global setting to enable nsx plugin
* add delete nsx controller operation
* add nsxresource
* add NSX resource , api client, create tier1 gw
* update db
* update response and add license
* Add support to create and delete nsx tier-1 gateway
* add license
* cleanup and add skeletal code for network creation
* add create/delete segment and UI integration
* add license
* address code smells - part 1
* fix test / build failure
* add ui changes + update nsx_provider table transport zones + use NSX broadcast domain for add nics to router
* ui: fix password field, and backend changes
* add route advertisement
* update offering
* update offering
* add sleep before deletion of vpc / tier g/w for ports to be removed
* move creation of segments to design phase
* change provider to VPC router for Dhcp & dns service in an nsx offering
* Add public nic for NSX
* reserve first IP (after g/w) of subnet for router nic - NSX
* revert reserving 1st IP in vpc segments
* [NSX] Create a DHCP relay and add it to a VPC tier segment (#107)
* Create DHCP relay command and execute request
* In progress integrate with networking
* Create DHCP relay config on the network VR allocation
* Revert domain router dao changes
* Create DHCP relay config on VR nic plug to NSX network
* Link DHCP relay config to segment after creation
* [NSX] Cleanup DHCP Relay config on segment deletion (#108)
* Cleanup DHCP Relay config on segment deletion
* update segment & relay name generators and call delete dhcprelay after deletion of segment
* address comment
* [NSX] Fix DHCP relay config deletion was missing zone name (#8068)
* [NSX] Refactor API wrapper operations (#8059)
* [NSX] Refactor API wrapper operations
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Nsx unit tests (#8090)
* Add tests
* add test for NsxGuestNetworkGuru
* add unit tests for NsxResource
* add unit tests for NsxElement
* cleanup
* [NSX] Refactor API wrapper operations
* update tests
* update tests - add nsxProviderServiceImpl test
* add unit test - NsxServiceImpl
* add license
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
* fix tests
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* modify NSX resource naming convention (#8095)
* modify NSX resource naming convention
* remove unused imports
* add a setup phase between design and implementation of a network for intermediary steps
* add method to all classes
* NSX: Refactor Network & VPC offering (#8110)
* [NSX] Refactor API wrapper operations
* Network offering changes for NSX
* fix services and provider combination
* address comments: rename param
* update nsx_mode parameter
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test
* [NSX] Allow NSX isolated networks (#8132)
* Add network offerings for NSX on isolated networks
* Fix offerings creation
* In progress NSX isolated network
* Fixes
* Fix NIC allocation to router
* NSX: Add Step for Adding Public traffic network for NSX During zone creation (#8126)
* NSX: Add Step for Adding Public traffic network for NSX
* address comments and cleanup
* address comment
* remove indent
* NSX: Create and Delete static NAT & Port forward rules (#8131)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* fix lint check
* fix smoke tests
* fix smoke tests
* Nsx add lb rule (#8161)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* NSX: Add support to create and delete Load balancer rules
* fix deletion of lb rules
* add header file and update protocol detail
* build failure fix
* [NSX] Add SNAT support (#8100)
* In progress add source NAT
* Fix after merge
* Fix tests
* Fix NPE on isolated network deletion
* Reserve source NAT IP when its not passed for NSX VPC
* Create source NAT rule on VR NIC allocation
* Fix update VPC and remove VPC to update and remove SNAT rule
* Fix packaging
* Address review comment
* Fix build
* fix build - unused import
* Add defensive checks
* Add missing design to NSX public guru
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix VR public NIC allocation (#8166)
* NSX: fix LB member addition and deletion and add defensive checks (#8167)
* Fix public NIC NPE on broadcast URI
* NSX: Router Public nic to get IP from systemVM Ip range (#8172)
* NSX: Router Public nic to get IP from systemVM Ip range
* Fix VR IP address and setSourceNatIp command
* NSX: hide systemVM reserved IP range SourceNAT
* fix test
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test failure
* test failure fix
* [NSX] Fix update source NAT IP (#8176)
* [NSX] Fix update source NAT IP
* Fix startup
* Fix API result
* NSX - add LB route Advertizement (#8192)
* [NSX] Add ACL types support (#8224)
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Fix DROP action rules and transform tests
* Add new ACL rules
* Fixes
* associate security policies with groups and not to DFW and add deletion of rules
* Fix name convention
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix creation of VPCs (#8320)
* Fix ACL rules creation (#8323)
* [NSX] Fix database views (#8325)
* NSX: Add CKS Support & Firewall rules for Isolated Networks (#8189)
* NSX: Add ALL LB IP to the list of route advertisements in tier1
* NSX: Support Source NAT on NSX Isolated networks
* NSX: Cks Support
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add Firewall rules
* build failure - fix unit test
* fix npes
* Add support to delete firewall rules
* update nsx cks offering
* add license
* update order of ports in PF & FW rules
* fix filter for getting transport zones
* CKS support changed - MTU updated, etc
* add LB for CKS on VPC
* address comments
* adapt upstream cks logic for vpc
* revert mtu hack
* update UI changes as per upstream fix
* change display text for CKS n/w offerings for isolated and VPC tiers
* add extra line for linter
* address comment
* revert list change
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix ui build failure
* [NSX] Address SonarCloud Bugs (#8341)
* [NSX] Address SonarCloud Bugs
* Fix NSX API connection issues
* NSX: Add unit tests to increase coverage (#8355)
* NSX: Add unit tests
* cleanup unused imports
* add more unit tests
* add tests for publicnsxnetworkguru
* add license
* fix build failures
* address sonar comment
* fix security hotspots
* NSX: Add more unit tests (#8381)
* NSX : Unit tests
* remove unused imports
* remove unused import causing build failure
* fix build failures due to unused imports
* fix build failure
* fix test assertion
* remove unused imports
* remove unused import
* Nsx UI zone bug (#8398)
* NSX: Attempt to fix NSX Zone creation bug for public networks
* fix zone wizard public traffic issue
* add proper filtering of offerings based on VPC nsx mode
* clean up console logs
* NSX: Fix code smells and reported bugs (#8409)
* NSX: Fix code smells and reported bugs
* fix override issue
* remove unused imports
* fix test
* refactor code to reduce complexity
* add license
* cleanup
* fix build failure
* fix build failure
* address comments
* test - add config to ignore certain files from test coverage
* test exclusion of classes from test cov
* revert pom changes
* [NSX] Add more unit tests (#8431)
* [NSX] Add more unit tests
* More tests
* Fix build errors
* NSX: Prevent creation of L2 and Shared networks for NSX (#8463)
* NSX: Prevent creation of L2 and Shared networks for NSX
* add checks to backend to prevent creation of l2 and shared networks in nsx zones and filter only nsx offerings when creating isolated networks
* cleanup
* NSX: Fix code smells (#8436)
* NSX: Fix code smells
* Add changes to service creation logic
* CKS: Add action during firewall rule creation (#8498)
* NSX,UI: Deduplicate network list when creating kubernetes clusters (#8513)
* NSX: Make LB service selectable in network offering (#8512)
* NSX: Make LB service selectable in network offering
* fix label
* address comments
* address comments
* NSX: Add appropriate error message when icmp type is set to -1 for NSX (#8504)
* NSX: Add appropriate error message when icmp type is set to -1 for NSX
* address comments
* update text
* fix test
* fix test - build failure
* fix test - build failure
* NSX: Cleanup NSX resources during k8s cluster cleanup (#8528)
* fix test failure
* NSX: Improve segment deletion process (#8538)
* NSX: Add passive monitor for NSX LB to test whether a server is available (#8533)
* NSX: Add passive monitor for NSX LB to test whether a server is available
* Add active monitors too
* fix build failure
* NSX: Add check for ICMP code / type for NSX zones (#8542)
* NSX: Fix Routed Mode for Isolated and VPC networks (#8534)
* NSX: Fix Routed Mode for Isolated and VPC networks
* NSX: Fix Routed mode - add checks for ports added for FW rules
* clean up code
* fix build failure
* NSX: Add retry logic with sleep to delete segments (#8554)
* NSX: Add retry logic with sleep to delete segments
* add logs
* NSX: Fix custom ACL check (#2)
* NSX: Fix custom ACL check
* NSX: Fix custom ACL check
* Nsx vpc routed mode (#5)
* NSX: Fix VPC routed mode
* NSX: VPC route mode
* remove unnecessary changes
* Nsx: Support internal LB (#4)
* NSX: Support internal LB service in NSX
* add lb removal logic
* Fix UI issue hiding internal LB tab
* Refactor method name
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* NSX: Improve NSX resource cleanup process (#3)
* Fix unit test
* NSX: Add SourceNAT service to the default Routed offering for VPC (#13)
* Fix VPC restart with cleanup (#12)
* NSX: Fix ACL rule removal on replacement and fix rule order (#11)
* NSX: fix smoke test failure for ACLs (#9)
* Fix unit tests
* Fix NSX plugin pom XML
* NSX: Add support to re-order ACL rules (NSX FW rules) (#14)
* [WIP] NSX: Add support to re-order ACL rules (NSX FW rules)
* fix reordering of acl rules on all networks that it is associated to
* clean up and attempt test fix
* Fix tests
* Remove unused import
* tweak reorder logic
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix zone creation issue for internal load balancer
* Fix
* Fix unit test
* fix logger
* fix logger
* fix logger
* NSX: Fix VPC form to ignore source NAT IP when creating VPCs and fix label
* Move SQL changes to the newest schema file
* NSX: Last Fixes
* Fix build
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Introduced a new API checkVolumeAndRepair that allows users or admins to check volumes and repair them if any leaks are observed.
Currently this is supported only for KVM
* some fixes
* Added unit tests
* addressed review comments
* add repair volume while granting access
* Changed repair parameter to accept both leaks/all
* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations
* Added volume check and repair changes only during VM start and volume attach operations
* Refactored the names to look similar across the code
* Some code fixes
* remove unused code
* Renamed repair values
* Fixed unit tests
* changed version
* Address review comments
* Code refactored
* used volume name in logs
* Changed the API to Async and the setting scope to storage pool
* Fixed exit value handling with check volume command
* Fixed storage scope to the setting
* Fix volume format issues
* Refactored the log messages
* Fix formatting
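Since this is KVM-only, the leaks/all repair values map naturally onto qemu-img's repair modes; a rough illustration of what they mean at the qemu-img level (the volume path is hypothetical, and the API itself goes through the agent rather than being run by hand):
```
# Check a qcow2 volume without repairing anything:
qemu-img check /var/lib/libvirt/images/example-volume.qcow2

# Repair only leaked clusters (the 'leaks' value):
qemu-img check -r leaks /var/lib/libvirt/images/example-volume.qcow2

# Repair all inconsistencies (the 'all' value):
qemu-img check -r all /var/lib/libvirt/images/example-volume.qcow2
```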
* Update to 4.20.0
* Update to python3
* Upgrade to JRE 17
* Upgrade to Debian 12.4.0
* VR: upgrade to python3
```
for f in `find systemvm/ -name *.py`; do
  if grep "print " $f >/dev/null; then
    2to3-2.7 -w $f
  else
    2to3-2.7 -p -w $f
  fi
done
```
* java: Use JRE17 in cloudstack packages and systemvmtemplate
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add --add-opens to JAVA_OPTS in systemd config
* Add --add-opens to JAVA_OPTS in systemd config for usage
* python3: fix "TypeError: a bytes-like object is required, not 'str'"
* python3: fix "ValueError: must have exactly one of create/read/write/append mode"
* Add --add-exports=java.base/sun.security.x509=ALL-UNNAMED for management server
* Use pip3 instead of pip for centos8
* python3: fix "TypeError: write() argument must be str, not bytes"
```
root@r-1037-VM:~# /opt/cloud/bin/passwd_server_ip.py 10.1.1.1
Traceback (most recent call last):
File "/opt/cloud/bin/passwd_server_ip.py", line 201, in <module>
serve()
File "/opt/cloud/bin/passwd_server_ip.py", line 187, in serve
initToken()
File "/opt/cloud/bin/passwd_server_ip.py", line 60, in initToken
f.write(secureToken)
TypeError: write() argument must be str, not bytes
root@r-1037-VM:~#
```
* Python3: fix "name 'file' is not defined"
```
root@r-1037-VM:~# /opt/cloud/bin/passwd_server_ip.py 10.1.1.1
Traceback (most recent call last):
File "/opt/cloud/bin/passwd_server_ip.py", line 201, in <module>
serve()
File "/opt/cloud/bin/passwd_server_ip.py", line 188, in serve
loadPasswordFile()
File "/opt/cloud/bin/passwd_server_ip.py", line 67, in loadPasswordFile
with file(getPasswordFile()) as f:
NameError: name 'file' is not defined
```
* python3: fix "TypeError: write() argument must be str, not bytes" (two more files)
* Upgrade jaxb version
* python3: fix more "TypeError: a bytes-like object is required, not str"
* python3: fix "Failed to update password server"
Failed to update password server due to: POST data should be bytes, an iterable of bytes, or a file object. It cannot be of type str.
* python3: fix "bad duration value: ikelifetime=24.0h"
Jan 15 13:57:20 systemvm ipsec[3080]: # bad duration value: ikelifetime=24.0h
* python3: fix password server "invalid save_password token"
* test: increase retries in test_vpc_vpn.py
* python3: fix passwd_server_ip.py
see error below
```
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: ----------------------------------------
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: Exception occurred during processing of request from ('10.1.1.129', 32782)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: Traceback (most recent call last):
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 650, in process_request_thread
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.finish_request(request, client_address)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 360, in finish_request
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.RequestHandlerClass(request, client_address, self)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 720, in __init__
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.handle()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/http/server.py", line 427, in handle
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.handle_one_request()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/http/server.py", line 415, in handle_one_request
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: method()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/opt/cloud/bin/passwd_server_ip.py", line 120, in do_GET
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.wfile.write(password)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 799, in write
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self._sock.sendall(b)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: TypeError: a bytes-like object is required, not 'str'
```
* python3: fix self.cl.get_router_password in Redundant VRs
```
File "/opt/cloud/bin/cs/CsDatabag.py", line 154, in get_router_password
md5.update(passwd)
TypeError: Unicode-objects must be encoded before hashing"]
```
* scripts: mark multipath scripts as executable
* systemvm template: remove hyperv packages and do not export
* VR: update default RAM size of System VMs/VRs to 512MiB
Before
```
mysql> select id,name,cpu,speed,ram_size,unique_name,system_use from service_offering where name like "System%";
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| id | name | cpu | speed | ram_size | unique_name | system_use |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| 3 | System Offering For Software Router | 1 | 500 | 256 | Cloud.Com-SoftwareRouter | 1 |
| 4 | System Offering For Software Router - Local Storage | 1 | 500 | 256 | Cloud.Com-SoftwareRouter-Local | 1 |
| 5 | System Offering For Internal LB VM | 1 | 256 | 256 | Cloud.Com-InternalLBVm | 1 |
| 6 | System Offering For Internal LB VM - Local Storage | 1 | 256 | 256 | Cloud.Com-InternalLBVm-Local | 1 |
| 7 | System Offering For Console Proxy | 1 | 500 | 1024 | Cloud.com-ConsoleProxy | 1 |
| 8 | System Offering For Console Proxy - Local Storage | 1 | 500 | 1024 | Cloud.com-ConsoleProxy-Local | 1 |
| 9 | System Offering For Secondary Storage VM | 1 | 500 | 512 | Cloud.com-SecondaryStorage | 1 |
| 10 | System Offering For Secondary Storage VM - Local Storage | 1 | 500 | 512 | Cloud.com-SecondaryStorage-Local | 1 |
| 11 | System Offering For Elastic LB VM | 1 | 128 | 128 | Cloud.Com-ElasticLBVm | 1 |
| 12 | System Offering For Elastic LB VM - Local Storage | 1 | 128 | 128 | Cloud.Com-ElasticLBVm-Local | 1 |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
10 rows in set (0.00 sec)
```
New value
```
mysql> select id,name,cpu,speed,ram_size,unique_name,system_use from service_offering where name like "System%";
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| id | name | cpu | speed | ram_size | unique_name | system_use |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| 3 | System Offering For Software Router | 1 | 500 | 512 | Cloud.Com-SoftwareRouter | 1 |
| 4 | System Offering For Software Router - Local Storage | 1 | 500 | 512 | Cloud.Com-SoftwareRouter-Local | 1 |
| 5 | System Offering For Internal LB VM | 1 | 256 | 512 | Cloud.Com-InternalLBVm | 1 |
| 6 | System Offering For Internal LB VM - Local Storage | 1 | 256 | 512 | Cloud.Com-InternalLBVm-Local | 1 |
| 7 | System Offering For Console Proxy | 1 | 500 | 1024 | Cloud.com-ConsoleProxy | 1 |
| 8 | System Offering For Console Proxy - Local Storage | 1 | 500 | 1024 | Cloud.com-ConsoleProxy-Local | 1 |
| 9 | System Offering For Secondary Storage VM | 1 | 500 | 512 | Cloud.com-SecondaryStorage | 1 |
| 10 | System Offering For Secondary Storage VM - Local Storage | 1 | 500 | 512 | Cloud.com-SecondaryStorage-Local | 1 |
| 11 | System Offering For Elastic LB VM | 1 | 128 | 512 | Cloud.Com-ElasticLBVm | 1 |
| 12 | System Offering For Elastic LB VM - Local Storage | 1 | 128 | 512 | Cloud.Com-ElasticLBVm-Local | 1 |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
10 rows in set (0.01 sec)
```
* debian12: fix test_network_ipv6 and test_vpc_ipv6
* python3: remove duplicated imports
* debian12: failed to start Apache2 server (SSLCipherSuite @SECLEVEL=0)
error message
```
[Sat Jan 20 22:51:14.595143 2024] [ssl:emerg] [pid 10200:tid 140417063888768] AH02562: Failed to configure certificate cloudinternal.com:443:0 (with chain), check /etc/ssl/certs/cert_apache.crt
[Sat Jan 20 22:51:14.595234 2024] [ssl:emerg] [pid 10200:tid 140417063888768] SSL Library Error: error:0A00018E:SSL routines::ca md too weak
AH00016: Configuration Failed
```
openssl version
```
root@s-167-VM:~# openssl version -a
OpenSSL 3.0.11 19 Sep 2023 (Library: OpenSSL 3.0.11 19 Sep 2023)
built on: Mon Oct 23 17:52:22 2023 UTC
platform: debian-amd64
options: bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -fzero-call-used-regs=used-gpr -DOPENSSL_TLS_SECURITY_LEVEL=2 -Wa,--noexecstack -g -O2 -ffile-prefix-map=/build/reproducible-path/openssl-3.0.11=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
OPENSSLDIR: "/usr/lib/ssl"
ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-3"
MODULESDIR: "/usr/lib/x86_64-linux-gnu/ossl-modules"
Seeding source: os-specific
CPUINFO: OPENSSL_ia32cap=0x80202001478bfffd:0x0
```
certificate
```
root@s-167-VM:~# keytool -printcert -rfc -file /usr/local/cloud/systemvm/certs/realhostip.crt
-----BEGIN CERTIFICATE-----
MIIFZTCCBE2gAwIBAgIHKBCduBUoKDANBgkqhkiG9w0BAQUFADCByjELMAkGA1UE
BhMCVVMxEDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAY
BgNVBAoTEUdvRGFkZHkuY29tLCBJbmMuMTMwMQYDVQQLEypodHRwOi8vY2VydGlm
aWNhdGVzLmdvZGFkZHkuY29tL3JlcG9zaXRvcnkxMDAuBgNVBAMTJ0dvIERhZGR5
IFNlY3VyZSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTERMA8GA1UEBRMIMDc5Njky
ODcwHhcNMTIwMjAzMDMzMDQwWhcNMTcwMjA3MDUxMTIzWjBZMRkwFwYDVQQKDBAq
LnJlYWxob3N0aXAuY29tMSEwHwYDVQQLDBhEb21haW4gQ29udHJvbCBWYWxpZGF0
ZWQxGTAXBgNVBAMMECoucmVhbGhvc3RpcC5jb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQCDT9AtEfs+s/I8QXp6rrCw0iNJ0+GgsybNHheU+JpL39LM
TZykCrZhZnyDvwdxCoOfE38Sa32baHKNds+y2SHnMNsOkw8OcNucHEBX1FIpOBGp
h9D6xC+umx9od6xMWETUv7j6h2u+WC3OhBM8fHCBqIiAol31/IkcqDxxsHlQ8S/o
CfTlXJUY6Yn628OA1XijKdRnadV0hZ829cv/PZKljjwQUTyrd0KHQeksBH+YAYSo
2JUl8ekNLsOi8/cPtfojnltzRI1GXi0ZONs8VnDzJ0a2gqZY+uxlz+CGbLnGnlN4
j9cBpE+MfUE+35Dq121sTpsSgF85Mz+pVhn2S633AgMBAAGjggG+MIIBujAPBgNV
HRMBAf8EBTADAQEAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAOBgNV
HQ8BAf8EBAMCBaAwMwYDVR0fBCwwKjAooCagJIYiaHR0cDovL2NybC5nb2RhZGR5
LmNvbS9nZHMxLTY0LmNybDBTBgNVHSAETDBKMEgGC2CGSAGG/W0BBxcBMDkwNwYI
KwYBBQUHAgEWK2h0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3Np
dG9yeS8wgYAGCCsGAQUFBwEBBHQwcjAkBggrBgEFBQcwAYYYaHR0cDovL29jc3Au
Z29kYWRkeS5jb20vMEoGCCsGAQUFBzAChj5odHRwOi8vY2VydGlmaWNhdGVzLmdv
ZGFkZHkuY29tL3JlcG9zaXRvcnkvZ2RfaW50ZXJtZWRpYXRlLmNydDAfBgNVHSME
GDAWgBT9rGEyk2xF1uLuhV+auud2mWjM5zArBgNVHREEJDAighAqLnJlYWxob3N0
aXAuY29tgg5yZWFsaG9zdGlwLmNvbTAdBgNVHQ4EFgQUZyJz9/QLy5TWIIscTXID
E8Xk47YwDQYJKoZIhvcNAQEFBQADggEBAKiUV3KK16mP0NpS92fmQkCLqm+qUWyN
BfBVgf9/M5pcT8EiTZlS5nAtzAE/eRpBeR3ubLlaAogj4rdH7YYVJcDDLLoB2qM3
qeCHu8LFoblkb93UuFDWqRaVPmMlJRnhsRkL1oa2gM2hwQTkBDkP7w5FG1BELCgl
gZI2ij2yxjge6pOEwSyZCzzbCcg9pN+dNrYyGEtB4k+BBnPA3N4r14CWbk+uxjrQ
6j2Ip+b7wOc5IuMEMl8xwTyjuX3lsLbAZyFI9RCyofwA9NqIZ1GeB6Zd196rubQp
93cmBqGGjZUs3wMrGlm7xdjlX6GQ9UvmvkMub9+lL99A5W50QgCmFeI=
-----END CERTIFICATE-----
Warning:
The certificate uses the SHA1withRSA signature algorithm which is considered a security risk. This algorithm will be disabled in a future update.
```
it comes from
```
$ openssl x509 -in ./systemvm/agent/certs/realhostip.crt -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 11277268652730408 (0x28109db8152828)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C = US, ST = Arizona, L = Scottsdale, O = "GoDaddy.com, Inc.", OU = http://certificates.godaddy.com/repository, CN = Go Daddy Secure Certification Authority, serialNumber = 07969287
Validity
Not Before: Feb 3 03:30:40 2012 GMT
Not After : Feb 7 05:11:23 2017 GMT
Subject: O = *.realhostip.com, OU = Domain Control Validated, CN = *.realhostip.com
```
* debian12: use ed25519 instead of rsa as ssh-rsa has been deprecated in OpenSSH
on xenserver
```
[root@pr8497-t8906-xenserver-71-xs2 ~]# ssh -i .ssh/id_rsa.cloud -p 3922 169.254.214.153
Warning: Permanently added '[169.254.214.153]:3922' (ECDSA) to the list of known hosts.
Permission denied (publickey).
```
in the CPVM
```
Jan 22 19:31:09 v-1-VM sshd[2869]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Jan 22 19:31:09 v-1-VM sshd[2869]: Connection closed by authenticating user root 169.254.0.1 port 54704 [preauth]
```
ssh-dss (DSA) is not supported either
* debian12: add PubkeyAcceptedAlgorithms=+ssh-rsa to sshd_config
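A brief sketch of the two pieces described above, with hypothetical file paths (ed25519 key generation, plus the sshd_config relaxation for older clients such as XenServer that still offer ssh-rsa):
```
# Generate an ed25519 key pair instead of RSA (paths are illustrative):
ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519.cloud

# Allow legacy ssh-rsa signatures in the systemvm's sshd_config, as noted above:
echo 'PubkeyAcceptedAlgorithms +ssh-rsa' >> /etc/ssh/sshd_config
systemctl restart ssh
```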
* VR: install python3 packages in case of Debian 11
* pom.xml: exclude systemvm/agent/packages/* in license check
* systemvm: do not patch router/systemvm during startup
this will cause the 4.19 SYSTEM template to not work, but that may be expected
- python3 VS python2 (default)
- openSSL 3.0.1 VS 1.1.1w
- openssh-server 9.1 VS 8.4
* VR: patch router/systemvm if template is debian11
This supports debian 11 template by
- revert change in systemvm/debian/etc/ssh/sshd_config
- patch VR/systemvms during startup
- install packages during patching system vm/routers
* python3 flake: fix E502 the backslash is redundant between brackets
```
../debian/root/health_checks/router_version_check.py:55:70: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:58:61: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:67:71: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:70:60: E502 the backslash is redundant between brackets
../debian/root/health_checks/haproxy_check.py:47:71: E502 the backslash is redundant between brackets
../debian/root/health_checks/haproxy_check.py:48:64: E502 the backslash is redundant between brackets
../debian/root/health_checks/cpu_usage_check.py:43:54: E502 the backslash is redundant between brackets
../debian/root/health_checks/cpu_usage_check.py:46:58: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:31:65: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:42:57: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:45:63: E502 the backslash is redundant between brackets
```
* python3 flake: fix E275 missing whitespace after keyword
```
../debian/opt/cloud/bin/cs_firewallrules.py:29:20: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_dhcp.py:27:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_dhcp.py:36:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_guestnetwork.py:33:20: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_guestnetwork.py:35:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_vpnusers.py:37:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/merge.py:230:11: E275 missing whitespace after keyword
../debian/opt/cloud/bin/merge.py:239:19: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_remoteaccessvpn.py:24:12: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_site2sitevpn.py:24:12: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs/CsHelper.py:90:15: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs/CsAddress.py:367:15: E275 missing whitespace after keyword
```
* python3 flake: fix configure.py
```
../debian/opt/cloud/bin/configure.py:24:22: E401 multiple imports on one line
../debian/opt/cloud/bin/configure.py:43:180: E501 line too long (294 > 179 characters)
../debian/opt/cloud/bin/configure.py:46:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:63:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:65:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:72:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:310:25: E711 comparison to None should be 'if cond is not None:'
../debian/opt/cloud/bin/configure.py:312:29: E711 comparison to None should be 'if cond is None:'
../debian/opt/cloud/bin/configure.py:378:25: E711 comparison to None should be 'if cond is not None:'
../debian/opt/cloud/bin/configure.py:380:29: E711 comparison to None should be 'if cond is None:'
../debian/opt/cloud/bin/configure.py:490:29: E712 comparison to False should be 'if cond is False:' or 'if not cond:'
../debian/opt/cloud/bin/configure.py:642:16: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:644:18: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:1416:1: E305 expected 2 blank lines after class or function definition, found 1
```
* python3 flake: fix other python files
```
../debian/opt/cloud/bin/vmdata.py:97:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/vmdata.py:99:14: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/cs/CsRedundant.py:438:53: E203 whitespace before ':'
../debian/opt/cloud/bin/cs/CsRedundant.py:461:53: E203 whitespace before ':'
../debian/opt/cloud/bin/cs/CsRedundant.py:499:5: E303 too many blank lines (2)
../debian/opt/cloud/bin/cs/CsDatabag.py:189:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/cs/CsDatabag.py:193:37: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/cs/CsHelper.py:118:30: E231 missing whitespace after ','
../debian/opt/cloud/bin/cs/CsHelper.py:119:15: E225 missing whitespace around operator
../debian/opt/cloud/bin/cs/CsHelper.py:127:19: E225 missing whitespace around operator
../debian/opt/cloud/bin/cs/CsAddress.py:324:43: E221 multiple spaces before operator
../debian/opt/cloud/bin/cs/CsVpcGuestNetwork.py:28:1: E302 expected 2 blank lines, found 1
```
* python3 flake: fix CsNetfilter.py
```
../debian/opt/cloud/bin/cs/CsNetfilter.py:226:13: E117 over-indented
../debian/opt/cloud/bin/cs/CsNetfilter.py:233:180: E501 line too long (197 > 179 characters)
../debian/opt/cloud/bin/cs/CsNetfilter.py:241:14: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:242:14: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:247:18: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:247:74: E202 whitespace before '}'
../debian/opt/cloud/bin/cs/CsNetfilter.py:248:18: E201 whitespace after '{'
```
* systemvm/test: fix sys.path
```
$ bash runtests.sh
/usr/bin/python
Python 3.10.12
Running pycodestyle to check systemvm/python code for errors
Running pylint to check systemvm/python code for errors
Python 3.10.12
pylint 2.12.2
astroid 2.9.3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
Running systemvm/python unit tests
....Device "eth0" does not exist.
.....................
----------------------------------------------------------------------
Ran 25 tests in 0.008s
OK
```
* Revert "systemvm template: remove hyperv packages and do not export"
This reverts commit 4383d59d03.
* debian12: move SQL change to schema-41900to42000.sql
* debian12: update systemvm template version to 4.20 in pom.xml
* pom.xml: fix NPE if templates do not exist on download.cloudstack.org
* debian12: increase default system offering for routers to 384MiB RAM
* CKS: fix addkubernetessupportedversion failed with JRE17
```
marvin.cloudstackException.CloudstackAPIException: Execute cmd: addkubernetessupportedversion failed, due to: errorCode: 530, errorText:Cannot invoke "org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine$State.toString()" because the return value of "com.cloud.api.query.vo.TemplateJoinVO.getState()" is null
```
* python3: revert changes by 2to3 with systemvm/debian/root/health_checks/*.py
* debian12: use ISO/packages on download.cloudstack.org
* VR: Update default ram size to 384
* debian12: fix router_version_check.py after VR live-patch and add health check in test_routers.py
* debian12: fix build error after log4j 2.x merge
* VR: Update default ram size to 512MB (again)
This reverts commits 578dd2b73f and efafa8c4d6.
* systemvmtemplate: Upgrade to Debian 12.5.0
* systemvm template: increase swap to 512MB
* VR: fix health check error due to deprecated SafeConfigParser
warning below
```
root@r-20-VM:~# /opt/cloud/bin/getRouterMonitorResults.sh true
/root/monitorServices.py:59: DeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in Python 3.12. Use ConfigParser directly instead.
parser = SafeConfigParser()
```
* test: fix wget does not work in macchinina vms on vmware80u1
fixes error below
```
{Cmd: wget -t 1 -T 1 www.google.com via Host: 10.0.55.186} {returns: ["wget: '/usr/lib/libpcre.so.1' is not an ELF file", "wget: can't load library 'libpcre.so.1'"]}
```
* packaging: add message for VR memory upgrade after packages installation
---------
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Vishesh <vishesh92@gmail.com>
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management
Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities for a tagged resource. Currently, tagged resource limits can be used for the following resource types:
Host limits
- user_vm
- cpu
- memory
Storage limits
- volume
- primary_storage
The following global settings can be used to specify tags for which limits need to be enforced:
Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`
Options for specifying tagged resource limits and viewing tagged resource usage are made available in the UI.
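A hedged example of the tag-based flow (the host tag 'gpu', the account, and the limit value are placeholders; the global setting name comes from the text above, while the 'tag' parameter on updateResourceLimit is an assumption based on the feature description):
```
# Declare which host tags should have limits enforced (setting name from the text above):
cmk update configuration name=resource.limit.host.tags value=gpu

# Assumed: set a tagged limit for an account, e.g. at most 4 user VMs on 'gpu'-tagged hosts.
# The 'tag' parameter here is an assumption based on this feature description.
cmk update resourcelimit resourcetype=0 account=dev-team domainid=<domain-uuid> max=4 tag=gpu
```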
Enhances the use of templatetag for VM deployment and template creation
Adds option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.
Adds option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
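For example, using the new parameters described above (IDs are placeholders):
```
# List service/compute offerings usable with a given template:
cmk list serviceofferings templateid=<template-uuid>

# List disk offerings for a VM; each entry carries the suitableforvirtualmachine flag:
cmk list diskofferings virtualmachineid=<vm-uuid>
```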
* Normalize logs
All classes that could inherit their loggers from their parent classes had their own loggers deleted;
Most loggers didn't have to be static, so most of them were normalized so that they wouldn't be;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
New class DbUpgradeAbstractImpl was created so that all upgraders extend it and inherit its logger
* Upgrade log4j
* fix errors caused by the merge
* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files
* fix sonarcloud bug
* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit
* Readd snmpTrapAppender, remove TestAppender
* Regenerate changes
* regenerate changes
* refactor last custom appender
* fix systemvm configuration xml
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Fix utils pom
* fix some tests
* regenerate changes
* Fix jar being printed on exception
* fix logging in system VMs, fix commands not having log4j2 classpath.
* regenerate changes
* Fix some unwanted renomeations
* fix end of file
* regenerate changes
* regenerate changes
* fix merge error
* regenerate changes
* fix tests
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* readd reload4j to tungsten as juniper depends on it
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* re-add reload4j dependency to network-contrail, as juniper depends on it
* regenerate changes
* regenerate changes
* regenerate changes
* fix typo
* regenerate changes
* regenerate changes
* Fix end of files
* regenerate changes
* add logj42 to cloud-utils-SHADED.jar
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* fix some tests
* Regenerate changes
* Regenerate changes
* fix test
* Regenerate changes
* Regenerate changes
* StoragePoolType as a class
* Fix agent side StoragePoolType enum to class
* Handle StoragePoolType for StoragePoolJoinVO
* Since StoragePoolType is a class, it cannot be converted by @Enumerated annotation.
Implemented converter class and logic to utilize @Convert annotation.
* Fix UserVMJoinVO for StoragePoolType
* fixed missing imports
* Since StoragePoolType is a class, it cannot be converted by @Enumerated annotation.
Implemented converter class and logic to utilize @Convert annotation.
* Fixed equals for the enum.
* removed not needed try/catch for prepareAttribute
* Added license to the file.
* Implemented "supportsPhysicalDiskCopy" for storage adaptor.
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
* Add javadoc to StoragePoolType class
* Add unit test for StoragePoolType comparisons
* StoragePoolType "==" and ".equals()" fix.
* Fix StoragePoolType for FiberChannelAdapter
* Fix for abstract storage adaptor set up issue
* review comments
* Pass StoragePoolType object for poolType dao attribute
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@gmail.com>
* veeam: detach only the restored volume during backup restore
Steps to reproduce the issue
1. create a VM (A) with ROOT and DATA disk
2. assign to a backup offering
3. create backup
4. create another VM (B)
5. restore the DATA disk of VM A, and attach to VM B
6. When operation is done, check the datastore
Without this change, the ROOT image is not removed and left over on the datastore.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-dfb6f21c-a941-49db-9963-4f0286a17dac
total 1784840
-rw------- 1 root root 5242880000 Jan 24 09:23 ROOT-722_2-flat.vmdk
-rw------- 1 root root 499 Jan 24 09:23 ROOT-722_2.vmdk
```
With this change, the whole temporary vm has been destroyed.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2
ls: /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2: No such file or directory
```
* veeam: fix wrong disk size in debug message
* veeam: sync backup repository after operations are done
Got an exception for some operations which actually succeed, due to the following error:
```
2024-01-19 10:59:52,846 DEBUG [o.a.c.b.v.VeeamClient] (API-Job-Executor-42:ctx-716501bb job-4373 ctx-2359b76d) (logid:b5e19a17) Veeam response for PowerShell commands [PowerShell Import-Module Veeam.Backup.PowerShell -WarningAction SilentlyContinue;$restorePoint = Get-VBRRestorePoint ^| Where-Object { $_.Id -eq '1d99106a-b5c8-4a1e-958d-066a987caa5f' };if ($restorePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false;$repo = Get-VBRBackupRepository;Sync-VBRBackupRepository -Repository $repo;} else { ; Write-Output 'Failed to delete'; Exit 1;}] is: [^M
Restore Type Job Name State Start Time End Time Description ^M
------------ -------- ----- ---------- -------- ----------- ^M
ConfResynchronize Configuration Dat... Starting 19/01/2024 10:59:52 01/01/1900 00:00:00 ^M
^M
^M
Remove-VBRRestorePoint : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. ^M
Contact Microsoft Customer Support Services.^M
At line:1 char:196^M
+ ... orePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false ...^M
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^M
+ CategoryInfo : ReadError: (:) [Remove-VBRRestorePoint], HostException^M
+ FullyQualifiedErrorId : ReadConsoleOutput,Veeam.Backup.PowerShell.Cmdlets.RemoveVBRRestorePoint^M
^M
].
```
* veeam: fix being unable to detach a volume after restoring a backup and attaching it to a VM, then detaching the volume
It also happened when destroying the original or the backup VM
```
2024-01-24 10:10:03,401 ERROR [c.c.s.r.VmwareStorageProcessor] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) Failed to detach volume!
java.lang.RuntimeException: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
at com.cloud.hypervisor.vmware.mo.DatastoreMO.moveDatastoreFile(DatastoreMO.java:290)
at com.cloud.storage.resource.VmwareStorageLayoutHelper.syncVolumeToRootFolder(VmwareStorageLayoutHelper.java:241)
at com.cloud.storage.resource.VmwareStorageProcessor.attachVolume(VmwareStorageProcessor.java:2150)
at com.cloud.storage.resource.VmwareStorageProcessor.dettachVolume(VmwareStorageProcessor.java:2408)
at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:174)
at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:71)
at com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:589)
at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2024-01-24 10:10:03,402 INFO [c.c.h.v.u.VmwareHelper] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) [ignored]failed to get message for exception: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
```
* vmware: create restored volume with new UUID and attach to VM
This PR fixes a bug introduced in #8502. The timeout for script execution was set to 60 ms instead of 60 s, which resulted in the host not getting UEFI enabled. This is a blocker for the 4.19 release.
We do this by introducing a new agent parameter `agent.script.timeout` (default - 60 seconds) to use as a timeout for the script checking host's UEFI status.
We also externalize the timeout for the ReadyCommand by introducing a new global setting `ready.command.wait` (default - 60 seconds).
For ModifyStoragePoolCommand, we don't externalize the timeout, to avoid confusing the user, since the required timeout can vary depending on the provider in use and we are only setting the wait for the default host listener for now. Instead, we reuse the global `wait` setting by dividing it by `5`, making the default value for ModifyStoragePoolCommand 6 minutes (1800/5 = 360s).
Note: the actual time the MS waits is twice the wait set for a Command. Check the reference code below.
19250403e6/engine/orchestration/src/main/java/com/cloud/agent/manager/AgentAttache.java (L406-L442)
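A minimal sketch of the timeout arithmetic described above (variable names are illustrative, not the actual CloudStack code):
```
// Derivation of the ModifyStoragePoolCommand wait from the global "wait" setting,
// and the effective time the MS waits for the command answer.
int globalWaitSeconds = 1800;                          // global "wait" setting (default)
int modifyStoragePoolWait = globalWaitSeconds / 5;     // 1800 / 5 = 360s (6 minutes)
int effectiveWaitSeconds = 2 * modifyStoragePoolWait;  // MS waits twice the command wait
```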
This PR fixes several issues in the testing of Veeam 11 and Veeam12
- Import Veeam.Backup.PowerShell and silently ignore the warning messages
- Fix issue when assign vm to backup offerings, which caused by separator (\r\n)
- Fix authorization failure in veeam 12a, which is because v1_4 is not supported in veeam 12a any more
- Fix exception if backup name has space
- Fix backup metrics in veeam12, which is because powershell command does not return the values needed
- Fix Incorrect datetime value, which is because powershell command returns a datetime which is not supported in Java
- Fix issue during backup restoration if VM has both ROOT and DATA disks.
This PR also has the following update
- Add integration test test/integration/smoke/test_backup_recovery_veeam.py
- Make some UI changes
- Add zone setting backup.plugin.veeam.version. If it is not set, CloudStack will get veeam version via powershell commands.
- Add zone setting backup.plugin.veeam.task.poll.interval and backup.plugin.veeam.task.poll.max.retry
There are a lot of test failures from test_vm_life_cycle.py in multiple PRs because no host is available for migration of VMs.
#8438 (comment)
#8433 (comment)
#7344 (comment)
While debugging I noticed that the hosts get stuck in the Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (default timeout).
On the agent side, it gets stuck waiting for a response from the Script execution.
To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in the Connecting state, and restarting the agent fails due to the named lock. Locks in the DB can be checked using the query below.
```
SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID() \G;
```
This PR adds a wait for the ReadyCommand and a timeout to the Script execution to ensure that the thread doesn't get stuck and the named lock in the database is released.
This PR fixes a regression caused by #8465 on advanced zones, where import fails with:
```
2024-01-10 12:13:33,234 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-3:ctx-991bbe9f job-128 ctx-f49517d4) (logid:d7b8e716) Allocating nic for vm 142272e8-9e2e-407b-9d7e-e9a03b81653c in network Network {"id": 204, "name": "Isolated", "uuid": "9679fac5-e3ac-4694-a57b-beb635340f39", "networkofferingid": 10} during import
2024-01-10 12:13:33,239 ERROR [o.a.c.v.UnmanagedVMsManagerImpl] (API-Job-Executor-3:ctx-991bbe9f job-128 ctx-f49517d4) (logid:d7b8e716) Failed to import NICs while importing vm: i-2-31-VM
com.cloud.exception.InsufficientVirtualNetworkCapacityException: Unable to acquire Guest IP address for network Network {"id": 204, "name": "Isolated", "uuid": "9679fac5-e3ac-4694-a57b-beb635340f39", "networkofferingid": 10}Scope=interface com.cloud.dc.DataCenter; id=1
at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.importNic(NetworkOrchestrator.java:4582)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importNic(UnmanagedVMsManagerImpl.java:859)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importVirtualMachineInternal(UnmanagedVMsManagerImpl.java:1198)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importUnmanagedInstanceFromHypervisor(UnmanagedVMsManagerImpl.java:1511)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.baseImportInstance(UnmanagedVMsManagerImpl.java:1342)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importUnmanagedInstance(UnmanagedVMsManagerImpl.java:1282)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
```
Also addresses the VNC password field being set instead of a fixed string.
1. Problem description
In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host, which has a property shares that defines the weight of the VM for accessing the host CPU. The value of this property has no unit; it is a relative measure used to calculate how much CPU a given VM will get on the host. However, this value has a limit, which depends on the version of cgroup used by the host's kernel. The problem lies in the range of valid shares values, which differs between the two versions: [2, 262144] for cgroups version 1, and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency, both specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000 and an exception will be thrown by libvirt if the host uses cgroup v2. The second version is becoming the default in current Linux distributions; thus, it is necessary to address this limitation.
Equation 1
shares = CPU * speed
Fixes: #6744
2. Proposed changes
To address the problem described, we propose to apply a scale conversion considering the max shares of the host. Using the same formula currently utilized by ACS, it is possible to calculate the maximum shares of a VM for a given host. In other words, using the number of cores and the nominal speed of the host's CPU as the upper limit of shares allowed to a VM. Then, this value will be scaled to the allowed interval of [1, 10000] of cgroup v2 by using a linear scale conversion.
The VM shares would be calculated as Equation 2, presented below, where VM requested shares is the requested shares value calculated using Equation 1, cgroup upper limit is fixed with a value of 10000 (cgroups v2 upper limit), and host max shares is the maximum shares value of the host, calculated using Equation 1. Using Equation 2, the only case where a VM passes the cgroup v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS.
Equation 2
shares = (VM requested shares * cgroup upper limit)/host max shares
To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added to find a suitable host. The max shares of each host will be calculated, and the VM calculated shares will be verified if it does not surpass the host's value. Likewise, the migration of VMs will have a similar new verification. Lastly, the scale of VMs will also have the same verification for the VM's host.
To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.
It is important to note that these changes are only for hosts with the KVM hypervisor using cgroup v2 for now.
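A minimal sketch of Equation 2, assuming shares are derived from the core count and speed (in MHz) of the VM's offering and of the host; the method and variable names are illustrative, not the actual implementation:
```
// Scale the VM's requested shares (Equation 1) into the cgroup v2 range [1, 10000],
// relative to the host's maximum shares (Equation 1 applied to the host itself).
static int scaledCpuShares(int vmCores, int vmSpeedMhz, int hostCores, int hostSpeedMhz) {
    final int CGROUP_V2_UPPER_LIMIT = 10000;
    int vmRequestedShares = vmCores * vmSpeedMhz;   // Equation 1 for the VM
    int hostMaxShares = hostCores * hostSpeedMhz;   // Equation 1 for the host
    return (vmRequestedShares * CGROUP_V2_UPPER_LIMIT) / hostMaxShares; // Equation 2
}
```
For the 6-core, 2 GHz offering mentioned in Section 1 running on, say, a 32-core, 2.5 GHz host, this yields (12000 * 10000) / 80000 = 1500, which stays within the cgroup v2 limit.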
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over fiber channel connections. It requires Multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated from the connector to communicate with the primary storage provider, using a ProviderAdapter interface, allowing the code interacting with the primary storage provider API's to be simpler and have no direct dependencies on Cloudstack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HP Enterprise Primera line of storage solutions and the Pure Flash Array line of storage solutions.
This PR adds the capability in CloudStack to convert VMware Instances disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. vSphere/VMware setup might be managed by CloudStack or be a standalone setup.
CloudStack will let the administrator select a VM from an existing VMware vCenter in the CloudStack environment or external vCenter requesting vCenter IP, Datacenter name and credentials.
The migrated VM will be imported as a KVM instance
The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
The migration process timeout can be set by the setting convert.instance.process.timeout
Before attempting the virt-v2v migration, CloudStack will create a clone of the source VM on VMware. The clone VM will be removed after the registration process finishes.
CloudStack will delegate the migration action to a KVM host and the host will attempt to migrate the VM invoking virt-v2v. In case the guest OS is not supported then CloudStack will handle the error operation as a failure
The migration process using virt-v2v may not be a fast process
CloudStack will not perform any check about the guest OS compatibility for the virt-v2v library as indicated on: https://access.redhat.com/articles/1351473.
Co-authored-by: Stephan Krug <stephan.krug@scclouds.com.br>
Co-authored-by: GaOrtiga <49285692+GaOrtiga@users.noreply.github.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Extending the current functionality of KVM Host HA for the StorPool storage plugin and the option for easy integration for the rest of the storage plugins to support Host HA
This extension works like the current NFS storage implementation. It allows it to be used simultaneously with NFS and StorPool storage or only with StorPool primary storage.
If it is used with different primary storages like NFS and StorPool, and one of the storage health checks fails, there is an option to report the failure to the management server via the global config kvm.ha.fence.on.storage.heartbeat.failure. By default this option is disabled; when enabled, the Host HA service will continue with the checks on the host and will eventually fence the host.
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.
Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The response for copySnapshot will return zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.
In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot will be taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.
As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.
`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.
Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.
`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
----
New API added
`copySnapshot`
Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form
Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
This PR allows an admin to reserve some hypervisor host CPUs for system use. Another way to think of it is limiting the number of CPUs allocatable to VMs. This can be useful if the admin wants to do other things with the hypervisor's CPU, for example reserve some cores for running hyperconverged storage processes.
Co-authored-by: Marcus Sorensen <mls@apple.com>
* Trigger out of band VM state update via libvirt event when VM stops
* Add License headers, refactor nested try
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
* Fix style for LibvirtComputingResource variable names and its dependencies
* More variable name fixes
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
This fixes the following cases in which Solidfire storage integration
caused issues when using Solidfire datadisks with VMware:
1. Take Volume Snapshot of Solidfire data disk
2. Delete an active Instance with Solidfire data disk attached
3. Attach used existing Solidfire data disk to a running/stopped VM
4. Stop and Start an instance with Solidfire data disks attached
5. Expand disk by resizing Solidfire data disk by providing size
6. Expand disk by changing disk offering for the Solidfire data disk
Additional changes:
- Use VMFS6 as the managed datastore type if the host supports it
- Refactor detection and splitting of managed storage ds name in storage
processor
- Restrict storage rescanning for managed datastore when resizing
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* VMware: add support for 8.0b (8.0.0.2)
* VMware 8: add new guest os mappings in VirtualMachineGuestOsIdentifier
The full list can be found at https://developer.vmware.com/apis/1355/vsphere
* VMware: get guest os mappings of parent version
* VMware8: remove guest os mappings for 8.0.0.2
* VMware8: fix code smells
* vmware: remove annotations in VmwareVmImplementerTest which caused 0.0% code coverage
* VMware8: add a unit test case
* VMware: add support for 8.0c (8.0.0.3)
* VMware8: move to CloudStackVersion.getVMwareParentVersion
* VMware: add support for 8.0u1 (8.0.1.0)
* Copy engine/schema/src/main/java/com/cloud/upgrade/GuestOsMapper.java from PR 6979
* Copy engine/schema/src/main/java/com/cloud/storage/dao/GuestOSHypervisorDao.java from PR 6979
* VMware: ignore the last number in VMware versions
* VMware: copy guest os mapping from 8.0 to 8.0.1
* VMware: add unit tests in VmwareVmImplementerTest.java
* Copy engine/schema/src/test/java/com/cloud/upgrade/GuestOsMapperTest.java from PR 6979
* VMware8: retry VM power-on if it fails due to exception "File system specific implementation of Ioctl[file] failed"
This fixes a weird issue on VMware 8. When powering on a VM, it sometimes fails due to the error:
```
2023-04-27 07:04:43,207 ERROR [c.c.h.v.r.VmwareResource] (DirectAgent-442:ctx-cdd42b03 10.0.32.133, job-105/job-106, cmd: StartCommand) (logid:8a24a607) StartCommand failed due to [Exception: java.lang.RuntimeException
Message: File system specific implementation of Ioctl[file] failed
].
java.lang.RuntimeException: File system specific implementation of Ioctl[file] failed
at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
at com.cloud.hypervisor.vmware.mo.VirtualMachineMO.powerOn(VirtualMachineMO.java:288)
```
in vmware.log on ESXi host, it shows
```
2023-04-27T09:20:41.713Z In(05)+ vmx - Power on failure messages: File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to lock the file
2023-04-27T09:20:41.713Z In(05)+ vmx - Cannot open the disk '/vmfs/volumes/7b29c876-ac102328/i-2-167-VM/ROOT-167.vmdk' or one of the snapshot disks it depends on.
2023-04-27T09:20:41.713Z In(05)+ vmx - Module 'Disk' power on failed.
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to start the virtual machine.
```
There is a KB article for it, but I still do not know why it happens or how to fix it.
https://kb.vmware.com/s/article/1004232
* VMware: extract to method powerOnVM
* vmware: fix mistake in logs
* vmware8: use curl instead of wget to fix test failures
```
Traceback (most recent call last):
File "/root/test_internal_lb.py", line 555, in test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80
self.execute_internallb_roundrobin_tests(vpc_offering)
File "/root/test_internal_lb.py", line 641, in execute_internallb_roundrobin_tests
client_vm, applb.sourceipaddress, max_http_requests)
File "/root/test_internal_lb.py", line 497, in run_ssh_test_accross_hosts
(e, clienthost.public_ip))
AssertionError: list index out of range: SSH failed for VM with IP Address: 10.0.52.187
```
and
```
sshClient: DEBUG: {Cmd: /usr/bin/wget -T3 -qO- --user=admin --password=password http://10.1.2.253:8081/admin?stats via Host: 10.0.52.188} {returns: ["/usr/bin/wget: '/usr/lib/libpcre.so.1' is not an ELF file", "/usr/bin/wget: can't load library 'libpcre.so.1'"]}
```
* VMware: correct guest OS names in hypervisor mappings for VMware 8.0
el9 and variants were introduced by https://github.com/apache/cloudstack/pull/7059
they are supported with guest os identifiers since VMware 8.0
see https://vdc-repo.vmware.com/vmwb-repository/dcr-public/c476b64b-c93c-4b21-9d76-be14da0148f9/04ca12ad-59b9-4e1c-8232-fd3d4276e52c/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html
* VMware: add Ubuntu 20.04 and 22.04 support for vmware 7.0+
* PR7380: only add guest os mappings for Ubuntu 20.04
* PR7380: Correct RHEL9 guest os names and others for VMware 8.0
* PR7380: correct guest os names on 8.0.0.1 as well
* PR7380: remove Windows 12 and Windows Server 2025 which are not released yet
* 4.18:
server: remove registered userdata when cleanup an account (#7777)
server: Use max secondary storage defined on the account during upload (#7441)
test: upgrade kubernetes versions to 1.25.0/1.26.0 (#7685)
kvm: Added VNI Devices as normal bridge slave devs (#7836)
noVNC: fix JP keyboard on vmware7+ which uses websocket URL (#7694)
* 4.18:
UI: Filter templates by zone and hypervisor type when reinstall a VM (#7739)
KVM: fix SSVM starting when overprovisioning memory (#7663)
pom.xml: add property project.systemvm.template.location (#7706)
cloudutils: fix adding rocky9 host failure due to missing /etc/sysconfig/libvirtd (#7779)
server: get id from persisted object ReservationVO (#7785)
search in (too) large result sets (#7766)
ui: fix 404 error when list volumes of system vms (#7772)
packaging: install tzdata-java on centos7/centos8 (#7768)
* 4.18:
Storage and volumes statistics tasks for StorPool primary storage (#7404)
proper storage construction (#6797)
guarantee MAC uniqueness (#7634)
server: allow migration of all VMs with local storage on KVM (#7656)
Add L2 networks to Zones with SG (#7719)
Currently, ACS can continue to show an imported instance/VM as an unmanaged instance if the name and internalCSName (custom attribute, cloud.vm.internal.name) are different for the instance/VM on vCenter. This PR, while filtering managed instances from the instance list received from the ESXi host, also checks whether the instance's internal name is in the managed instance names list.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Guest OS mapping improvements
- Checks the OS mapping name in hypervisor (VMware, XenServer)
- Displays guest OS mappings in UI
* Added API getHypervisorGuestOsNames to list the guest OS names in the hypervisor, and code improvements
* Some static analysis fixes
* Removed commented code in listview
* Guest OS list
* UI changes for adding guest os and mappings
* Added guest os mappings in guest os form
* Added new filter to guest os mapping
* Name and description changes
* VMWare Host and cluster MO unit tests
* CheckGuestOsMapping command and answer unit tests
* GetHypervisorGuestOsNames command and answer unit tests
* VmwareResource unit tests
* GuestOsMapper unit tests
* icon changes
* Addressed review comments
* Renaming fixes
* Removed comments
* marvin tests for guest os operations
* Added marvin tests for OS mappings
* Document links and UI improvements
* Added deduplication for the list guest OS API
* Fixed linter failure
* Few bug fixes and UI changes
* Few improvements
* Addressed code smells
* Fixed UI issues after rebase
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
* Live storage migration of volume in scaleIO within same storage scaleio cluster
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* working blockcopy api in libvirt
* Checking block copy status
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check if volume belongs to same or different storage scaleio cluster
* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unittests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets if they do not exist during volume migrations; secrets are getting cleared on package upgrades.
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion
This PR adds two VM settings for user VMs on KVM:
- NIC multiqueue number
- packed virtqueues enabled; options are true and false (false by default). It requires qemu >= 4.2.0 and libvirt >= 6.3.0.
Tested OK on Ubuntu 22 and Rocky 8.4.
* Auto Enable Disable KVM hosts
* Improve health check result
* Fix corner cases
* Script path refactor
* Fix sonar cloud reports
* Fix last code smells
* Add marvin tests
* Fix new line on agent.properties to prevent host add failures
* Send alert on auto-enable-disable and add annotations when the setting is enabled
* Address reviews
* Add a reason for enabling or disabling a host when the automatic feature is enabled
* Fix comment on the marvin test description
* Fix for disabling the feature if the admin has manually updated the host resource state before any health check result
* Cleanup in the javadocs of QemuImg
* Update QemuImg.java
* Apply suggestions from code review
Co-authored-by: Stephan Krug <stekrug@icloud.com>
Co-authored-by: cloudstack-lab-gabriel <gabriel.fernandes@scclouds.com.br>
Co-authored-by: Stephan Krug <stekrug@icloud.com>
This PR has 3 improvements for the Linstor primary storage driver:
- Create a separate jar of it and move all Linstor related classes into the correct project (similar to the storpool plugin)
- Add aux properties for Cloudstack volumes in Linstor to make it easier to identify them in Linstor
- Add support for IOPs settings with the Linstor storage plugin
This PR fixes the issue that volume snapshot fails on RBD storage with the following error
```
qemu-img: Could not open 'driver=raw,file.filename=rbd:cloudstack/test_wei.img:mon_host=10.0.32.254:auth_supported=cephx:id=cloudstack:key=AQDwHTNjjHXRKRAAJb+AToFr6x4a1AvKUc4Ksg==:rbd_default_format=2:client_mount_timeout=30': Could not open 'rbd:cloudstack/test_wei.img:mon_host=10.0.32.254:auth_supported=cephx:id=cloudstack:key=AQDwHTNjjHXRKRAAJb+AToFr6x4a1AvKUc4Ksg==:rbd_default_format=2:client_mount_timeout=30': No such file or directory
```
However, it works without using image options
Therefore, do not pass the image options if the image format is neither QCOW2 nor LUKS.
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.
In some cases cloud customers may still prefer to maintain their own guest-level volume encryption, if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience in terms of running volume encryption for guests without the user having to manage keys, deal with key servers and guest booting being dependent on network connectivity to them (i.e. Tang), etc, especially in cases where users are attaching/detaching data disks and moving them between VMs occasionally.
The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.
This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.
NOTE: While not required, operations can be significantly sped up by ensuring that hosts have the `rng-tools` package and service installed and running on the management server and hypervisors. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise would only affect users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.
### Management Server
##### API
* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM. This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off, rather the offering should be referenced. This follows the same pattern as other disk offering based settings such as the IOPS of the volume.
##### Volume functions
A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.
Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.
Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume
Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).
##### Primary Storage Support
For storage developers, adding encryption support involves:
1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during storage allocation to match volumes that request encryption to storage that supports it.
2. Implementing encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that are requesting encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume. Or (as is the case with the KVM storage types), as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases, if encryption is requested.
##### Scheduling
For the KVM implementations specified above, we are dependent on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup to look for a functioning `cryptsetup` and support in `qemu-img`. This is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support such as UEFI.
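As an illustration of such a probe (a hedged sketch, not the agent's actual code), checking for a working `cryptsetup` binary could look like this:
```
// Probe for a functioning cryptsetup binary; a similar check can be made for LUKS
// support in qemu-img before the host advertises encryption support at startup.
static boolean cryptsetupAvailable() {
    try {
        Process p = new ProcessBuilder("cryptsetup", "--version")
                .redirectErrorStream(true)
                .start();
        return p.waitFor() == 0;   // exit code 0: the tool is present and runs
    } catch (Exception e) {
        return false;              // binary missing or not executable
    }
}
```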
The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption. This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities will require a host to support encryption (for example a snapshot backup is a simple file copy), and this is the reason why the interface has been modified to allow for the storage driver to decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.
VM scheduling has also been modified. When a VM start is requested, if any volume that requires encryption is attached, it will filter out hosts that don't support encryption.
##### DB Changes
A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database. The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.
#### KVM Agent
For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process, and decrypted before the block device is visible to the guest. This may not be necessary in order to implement encryption for /your/ storage pool type, maybe you have a kernel driver that decrypts before the block device on the system, or something like that. However, it seemed like the simplest, common place to terminate the encryption, and provides the lowest surface area for decrypted guest data.
For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based (currently just ScaleIO storage), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for template copy.
Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:
1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.
2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the cloudstack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable` and when it is closed, it is deleted. This allows us to try-with-resources any volume operations and get the KeyFile removed regardless.
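A minimal sketch of the `KeyFile` idea described above (illustrative only, not CloudStack's actual class): a Closeable temporary file on tmpfs that deletes itself when closed, so try-with-resources guarantees the passphrase file is removed whether or not the `qemu-img` operation succeeds.
```
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class TempKeyFile implements Closeable {
    private final Path path;

    TempKeyFile(Path tmpfsDir, byte[] passphrase) throws IOException {
        // Write the passphrase to a file under the agent's configured tmpfs directory.
        this.path = Files.createTempFile(tmpfsDir, "keyfile", null);
        Files.write(path, passphrase);
    }

    Path getPath() {
        return path;
    }

    @Override
    public void close() throws IOException {
        // Deleting on close means try-with-resources removes the key material
        // even if the volume operation throws.
        Files.deleteIfExists(path);
    }
}
```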
In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`. These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.
It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the supported encryption format strings for the `cryptsetup` utility has `LuksType.LUKS` while `QemuImg` has a `QemuImg.PhysicalDiskFormat.LUKS`.
Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice Libvirt and Qemu only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume; as they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and then available later as needed. In the future when LUKS2 is standard and fully supported, we could move to it as the default and old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.
Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption use the XTS-AES 256 cipher, which is the leading industry standard and widely used cipher today (e.g. BitLocker and FileVault).
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
Adds option to provide custom DNS servers for isolated network, shared network and VPC tier.
New API parameters added in createNetwork API along with the corresponding response parameters.
Doc PR: apache/cloudstack-documentation#276
Fixes #6680
While finding the CPU speed for a KVM host, the following methods will be used in this order:
1. lscpu
2. value in /sys/devices/system/cpu/cpu0/cpufreq/base_frequency
3. virsh capabilities
4. libvirt nodeinfo
This will allow a correct value for AMD-based hosts when the first two methods don't give a value.
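For example, the sysfs fallback (method 2) could be read roughly as below (a hedged sketch, not the agent's actual code); the file reports the base frequency in kHz:
```
import java.nio.file.Files;
import java.nio.file.Paths;

static long baseFrequencyMhz() {
    try {
        String khz = Files.readString(
                Paths.get("/sys/devices/system/cpu/cpu0/cpufreq/base_frequency")).trim();
        return Long.parseLong(khz) / 1000;   // kHz -> MHz
    } catch (Exception e) {
        return 0;   // not available; fall through to virsh capabilities / libvirt nodeinfo
    }
}
```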
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR provides constructors and the associated changes to use LibvirtVMDef for creating user mode network interfaces.
While this isn't used directly in the CloudStack KVM agent today, it could be used in the future for e.g. pod networking/management networks without needing to assign a pod IP. The VIF driver used by the CloudStack Agent is also pluggable, so this allows plugin code to create user mode network interfaces as well.
Note that the user mode network already exists in the GuestNetType enum, but wasn't usable prior to this change.
Also included unit test to ensure we continue to create the expected XML.
Additionally, this uncovered a null pointer on _networkRateKBps, and this PR fixes it. The code that decides whether to add bandwidth throttling assumed this field was not null and simply checked for > 0.
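A hedged sketch of the null-safe check (the helper name is illustrative): throttling is only applied when a rate has actually been configured.
```
// Decide whether to add bandwidth throttling for an interface definition.
// Checking "networkRateKBps > 0" alone would auto-unbox a null Integer and throw an NPE.
static boolean shouldThrottle(Integer networkRateKBps) {
    return networkRateKBps != null && networkRateKBps > 0;
}
```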
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
Fixes #4314
Failure in attaching the k8s ISO is seen when VMware DRS is enabled; the log reported that the VM was not found. This fix tries to find the VM on peer hosts when it is not found on the given host.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #6621
Each time getMemStat() is called, a static value is returned. This value
should instead be refreshed to return the actual memory used.
Co-authored-by: Ruben Bosch <ruben.bosch@cldin.eu>
Fixes#6455
The default storage adaptor - LibvirtStorageAdaptor - is used by different storage types and doesn't use the annotation @StorageAdaptorInfo. In this case, a storage plugin that wants to adopt one of the predefined storage pool types will override the default behaviour. Fixing the issue in general (for new storage plugins or current ones that want to reuse the existing storage pool types) would affect all volume/snapshot/VM cases, which would require extensive testing for each storage plugin that we don't have the resources to do. That's why this patch fixes the old behaviour for SharedMountPoint by adding a new storage pool type for the StorPool plugin.
This PR fixes issue #6209, where the snapshot revert operation fails after certain volume operations like migrate VM with volume / migrate volume / reinstall VM.
The root cause is that, after these volume operations, the primary storage entry for that volume gets deleted. We have fixed it here to get the primary datastore entry for the volume and continue the operation.
* add global setting to allow parallel execution on vmware
* cleanup setting distribution for vmware.create.full.clone
* query setting in vmware guru
* don't touch other hypervisors' commands
* guru hierarchy cleanup
* Fix VMware VM migration with volume in case of local storage
* Break the loop once target host is found
* Code optimisations in getting the target host guid for local storage
* Fixed code smells and added unit test
* Add more logs to migrate VM process in KVM
* Remove unused imports
* Verify if debug is enabled before writing the log string
* Fix conflicts
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
Co-authored-by: SadiJr <sadi@scclouds.com.br>
This PR enhances the existing PowerFlex/ScaleIO storage plugin to support separate (storage) network for Hosts(KVM)/Storage connection, mainly the SDC (ScaleIo Data Client) connection.
* Fix extract snapshot from VM snapshot on KVM
* Fix validation expression - does not need to escape the slash
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
* kvm: truncate vnc password to 8 chars (#6244)
This PR truncates the VNC password of KVM VMs to 8 chars to support the latest versions of libvirt.
* Use lang3 string utils
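A minimal sketch of the lang3 call referenced above, assuming that, per the change in this PR, recent libvirt versions reject VNC passwords longer than 8 characters:
```
import org.apache.commons.lang3.StringUtils;

static String truncateVncPassword(String password) {
    // Keep only the first 8 characters; StringUtils.truncate is null-safe.
    return StringUtils.truncate(password, 8);
}
```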
Co-authored-by: Wei Zhou <weizhou@apache.org>
* agent: enable ssl only for kvm agent (not in system vms)
* Revert "agent: enable ssl only for kvm agent (not in system vms)"
This reverts commit b2d76bad2e.
* Revert "KVM: Enable SSL if keystore exists (#6200)"
This reverts commit 4525f8c8e7.
* KVM: Enable SSL if keystore exists in LibvirtComputingResource.java
* Improve log when live patching fails
* change patching path from /tmp to /var/cache/clou
* add iptable rule for console proxy (novnc)
* temporary template paths
* revert pom xml to original paths
* Support for live patching systemVMs and deprecating systemVM.iso. Includes:
- fix systemVM template version
- Include agent.zip, cloud-scripts.tgz to the commons package
- Support for live-patching systemVMs - CPVM, SSVM, Routers
- Fix Unit test
- Remove systemvm.iso dependency
* The following commit:
- refactors logic added to support SystemVM deployment on KVM
- Adds support to copy specific files (required for patching) to the hosts on Xenserver
- Modifies vmops method - createFileInDomr to take cleanup param
- Adds configurable sleep param to CitrixResourceBase::connect() used to verify if telnet to a specific port is possible (if sleep is 0, then default to _sleep = 10000ms)
- Adds Command/Answer for patch systemVMs on XenServer/Xcp
* - Support to patch SystemVMs - VMWare
- Remove attaching systemvm.iso to systemVMs
- Modify / Refactor VMware start command to copy patch related files to the systemvms
- cleanup
* Commit comprises of:
- remove docker from systemvm template - use containerd as container runtime
- update create-k8s-binaries script to use ctr for all docker operations
- Update userdata sent to the k8s nodes
- update cksnode script, run during patching of the cks/k8s nodes
* Add ssh to k8s nodes details in the Access tab on the UI
* test
* Refactor ca/cert patching logic
* Commit comprises of the following changes:
- Use restart network/VPC API to patch routers
- use livePatch API to support patching of only cpvm/ssvm
- add timeout to the keystore setup/import script
* remove all references of systemvm.iso
* Fix keystore-cert-import invocation + refactor cert timeout in CP/SS VMs
* fix script timeout
* Refactor cert patching for systemVMs + update keystore-cert-import script + patch-sysvms script + remove patchSysvmCommand from networkelementcommand
* remove commented code + change core user to cloud for cks nodes
* Update ownership of ssh directory
* NEED TO DISCUSS - add on the fly template conversion as an ExecStartPre action (systemd)
* Add UI changes + move changes from patch file to runcmd
* test: validate performance for template modification during seeding
* create vms folder in cloudstack-commons directory - debian rules
* remove logic for on the fly template convert + update k8s test
* fix syntax issue - causing issue with shared network tests
* Code cleanup
* refactor patching logic - certs
* move logic of fixing rootdiskcontroller from upgrade to kubernetes service
* add livepatch option to restart network & vpc
* smooth upgrade of cks clusters
* Support for live patching systemVMs and deprecating systemVM.iso. Includes:
- fix systemVM template version
- Include agent.zip, cloud-scripts.tgz to the commons package
- Support for live-patching systemVMs - CPVM, SSVM, Routers
- Fix Unit test
- Remove systemvm.iso dependency
* The following commit:
- refactors logic added to support SystemVM deployment on KVM
- Adds support to copy specific files (required for patching) to the hosts on Xenserver
- Modifies vmops method - createFileInDomr to take cleanup param
- Adds configurable sleep param to CitrixResourceBase::connect() used to verify if telnet to a specific port is possible (if sleep is 0, then default to _sleep = 10000ms)
- Adds Command/Answer for patch systemVMs on XenServer/Xcp
* - Support to patch SystemVMs - VMWare
- Remove attaching systemvm.iso to systemVMs
- Modify / Refactor VMware start command to copy patch related files to the systemvms
- cleanup
* Commit comprises of:
- remove docker from systemvm template - use containerd as container runtime
- update create-k8s-binaries script to use ctr for all docker operations
- Update userdata sent to the k8s nodes
- update cksnode script, run during patching of the cks/k8s nodes
* Add ssh to k8s nodes details in the Access tab on the UI
* test
* Refactor ca/cert patching logic
* Commit comprises of the following changes:
- Use restart network/VPC API to patch routers
- use livePatch API to support patching of only cpvm/ssvm
- add timeout to the keystore setup/import script
* remove all references of systemvm.iso
* Fix keystore-cert-import invocation + refactor cert timeout in CP/SS VMs
* fix script timeout
* Refactor cert patching for systemVMs + update keystore-cert-import script + patch-sysvms script + remove patchSysvmCommand from networkelementcommand
* remove commented code + change core user to cloud for cks nodes
* Update ownership of ssh directory
* NEED TO DISCUSS - add on the fly template conversion as an ExecStartPre action (systemd)
* Add UI changes + move changes from patch file to runcmd
* test: validate performance for template modification during seeding
* create vms folder in cloudstack-commons directory - debian rules
* remove logic for on the fly template convert + update k8s test
* fix syntax issue - causing issue with shared network tests
* Code cleanup
* add cgroup config for containerd
* add systemd config for kubelet
* add additional info during image registry config
* address comments
* add temp links of download.cloudstack.org
* address part of the comments
* address comments
* update containerd config - as version has upgraded to 1.5 from 1.4.12 in 4.17.0
* address comments - simplify
* fix vue3 related icon changes
* allow network commands when router template version is lower but is patched
* add internal LB to the list of routers to be patched on network restart with live patch
* add unit tests for API param validations and new helper utilities - file scp & checksum validations
* perform patching only for non-user i.e., system VMs
* add test to validate params
* remove unused import
* add column to domain_router to display software version and support networkrestart with livePatch from router view
* Requires upgrade column to consider package (cloud-scripts) checksum to identify if true/false
* use router software version instead of checksum
* show N/A if no software version reported i.e., in upgraded envs
* fix deb failure
* update pom to official links of systemVM template
* get vdisk uuid from vcenter and store it into database
* add vdisk uuid as external_uuid to listVolume response
* add sql upgrade file
* Update vmware-base/src/main/java/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* update sql add column external_uuid
* Update server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java
Co-authored-by: Wei Zhou <weizhou@apache.org>
* adapt param description for externalUuid
* add 'idempotent column add' to create external_uuid col
* rename method to getExternalDiskUUID
* remove line disk_offering.system_use
Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>