* NSX integration - skeletal code
* Fix module not loading on startup
* add upgrade path and daos
* add nsx controller command
* add support for adding and listing nsx provider to a zone
* add license
* add default VPC offering and update upgrade path
* add global setting to enable nsx plugin
* add delete nsx controller operation
* add nsxresource
* add NSX resource, api client, create tier1 gw
* update db
* update response and add license
* Add support to create and delete nsx tier-1 gateway
* add license
* cleanup and add skeletal code for network creation
* add create/delete segment and UI integration
* add license
* address code smells - part 1
* fix test / build failure
* add ui changes + update nsx_provider table transport zones + use NSX broadcast domain for adding nics to router
* ui: fix password field, and backend changes
* add route advertisement
* update offering
* update offering
* add sleep before deletion of vpc / tier g/w for ports to be removed
* move creation of segments to design phase
* change provider to VPC router for Dhcp & dns service in an nsx offering
* Add public nic for NSX
* reserve first IP (after g/w) of subnet for router nic - NSX
* revert reserving 1st IP in vpc segments
* [NSX] Create a DHCP relay and add it to a VPC tier segment (#107)
* Create DHCP relay command and execute request
* In progress integrate with networking
* Create DHCP relay config on the network VR allocation
* Revert domain router dao changes
* Create DHCP relay config on VR nic plug to NSX network
* Link DHCP relay config to segment after creation
* [NSX] Cleanup DHCP Relay config on segment deletion (#108)
* Cleanup DHCP Relay config on segment deletion
* update segment & relay name generators and call delete dhcprelay after deletion of segment
* address comment
* [NSX] Fix DHCP relay config deletion was missing zone name (#8068)
* [NSX] Refactor API wrapper operations (#8059)
* [NSX] Refactor API wrapper operations
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Nsx unit tests (#8090)
* Add tests
* add test for NsxGuestNetworkGuru
* add unit tests for NsxResource
* add unit tests for NsxElement
* cleanup
* [NSX] Refactor API wrapper operations
* update tests
* update tests - add nsxProviderServiceImpl test
* add unit test - NsxServiceImpl
* add license
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
* fix tests
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* modify NSX resource naming convention (#8095)
* modify NSX resource naming convention
* remove unused imports
* add a setup phase between design and implementation of a network for intermediary steps
* add method to all classes
* NSX: Refactor Network & VPC offering (#8110)
* [NSX] Refactor API wrapper operations
* Network offering changes for NSX
* fix services and provider combination
* address comments: rename param
* update nsx_mode parameter
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test
* [NSX] Allow NSX isolated networks (#8132)
* Add network offerings for NSX on isolated networks
* Fix offerings creation
* In progress NSX isolated network
* Fixes
* Fix NIC allocation to router
* NSX: Add Step for Adding Public traffic network for NSX During zone creation (#8126)
* NSX: Add Step for Adding Public traffic network for NSX
* address comments and cleanup
* address comment
* remove indent
* NSX: Create and Delete static NAT & Port forward rules (#8131)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* fix lint check
* fix smoke tests
* fix smoke tests
* Nsx add lb rule (#8161)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* NSX: Add support to create and delete Load balancer rules
* fix deletion of lb rules
* add header file and update protocol detail
* build failure fix
* [NSX] Add SNAT support (#8100)
* In progress add source NAT
* Fix after merge
* Fix tests
* Fix NPE on isolated network deletion
* Reserve source NAT IP when its not passed for NSX VPC
* Create source NAT rule on VR NIC allocation
* Fix update VPC and remove VPC to update and remove SNAT rule
* Fix packaging
* Address review comment
* Fix build
* fix build - unused import
* Add defensive checks
* Add missing design to NSX public guru
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix VR public NIC allocation (#8166)
* NSX: fix LB member addition and deletion and add defensive checks (#8167)
* Fix public NIC NPE on broadcast URI
* NSX: Router Public nic to get IP from systemVM Ip range (#8172)
* NSX: Router Public nic to get IP from systemVM Ip range
* Fix VR IP address and setSourceNatIp command
* NSX: hide systemVM reserved IP range SourceNAT
* fix test
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test failure
* test failure fix
* [NSX] Fix update source NAT IP (#8176)
* [NSX] Fix update source NAT IP
* Fix startup
* Fix API result
* NSX - add LB route Advertizement (#8192)
* [NSX] Add ACL types support (#8224)
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Fix DROP action rules and transform tests
* Add new ACL rules
* Fixes
* associate security policies with groups and not to DFW and add deletion of rules
* Fix name convention
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix creation of VPCs (#8320)
* Fix ACL rules creation (#8323)
* [NSX] Fix database views (#8325)
* NSX: Add CKS Support & Firewall rules for Isolated Networks (#8189)
* NSX: Add ALL LB IP to the list of route advertisements in tier1
* NSX: Support Source NAT on NSX Isolated networks
* NSX: Cks Support
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add Firewall rules
* build failure - fix unit test
* fix npes
* Add support to delete firewall rules
* update nsx cks offering
* add license
* update order of ports in PF & FW rules
* fix filter for getting transport zones
* CKS support changed - MTU updated, etc
* add LB for CKS on VPC
* address comments
* adapt upstream cks logic for vpc
* revert mtu hack
* update UI changes as per upstream fix
* change display text for CKS n/w offerings for isolated and VPC tiers
* add extra line for linter
* address comment
* revert list change
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix ui build failure
* [NSX] Address SonarCloud Bugs (#8341)
* [NSX] Address SonarCloud Bugs
* Fix NSX API connection issues
* NSX: Add unit tests to increase coverage (#8355)
* NSX: Add unit tests
* cleanup unused imports
* add more unit tests
* add tests for publicnsxnetworkguru
* add license
* fix build failures
* address sonar comment
* fix security hotspots
* NSX: Add more unit tests (#8381)
* NSX : Unit tests
* remove unused imports
* remove unused import causing build failure
* fix build failures due to unused imports
* fix build failure
* fix test assertion
* remove unused imports
* remove unused import
* Nsx UI zone bug (#8398)
* NSX: Attempt to fix NSX Zone creation bug for public networks
* fix zone wizard public traffic issue
* add proper filtering of offerings based on VPC nsx mode
* clean up console logs
* NSX: Fix code smells and reported bugs (#8409)
* NSX: Fix code smells and reported bugs
* fix override issue
* remove unused imports
* fix test
* refactor code to reduce complexity
* add license
* cleanup
* fix build failure
* fix build failure
* address comments
* test - add config to ignore certain files from test coverage
* test exclusion of classes from test cov
* revert pom changes
* [NSX] Add more unit tests (#8431)
* [NSX] Add more unit tests
* More tests
* Fix build errors
* NSX: Prevent creation of L2 and Shared networks for NSX (#8463)
* NSX: Prevent creation of L2 and Shared networks for NSX
* add checks to backend to prevent creation of l2 and shared networks in nsx zones and filter only nsx offerings when creating isolated networks
* cleanup
* NSX: Fix code smells (#8436)
* NSX: Fix code smells
* Add changes to service creation logic
* CKS: Add action during firewall rule creation (#8498)
* NSX,UI: Deduplicate network list when creating kubernetes clusters (#8513)
* NSX: Make LB service selectable in network offering (#8512)
* NSX: Make LB service selectable in network offering
* fix label
* address comments
* address comments
* NSX: Add appropriate error message when icmp type is set to -1 for NSX (#8504)
* NSX: Add appropriate error message when icmp type is set to -1 for NSX
* address comments
* update text
* fix test
* fix test - build failure
* fix test - build failure
* NSX: Cleanup NSX resources during k8s cluster cleanup (#8528)
* fix test failure
* NSX: Improve segment deletion process (#8538)
* NSX: Add passive monitor for NSX LB to test whether a server is available (#8533)
* NSX: Add passive monitor for NSX LB to test whether a server is available
* Add active monitors too
* fix build failure
* NSX: Add check for ICMP code / type for NSX zones (#8542)
* NSX: Fix Routed Mode for Isolated and VPC networks (#8534)
* NSX: Fix Routed Mode for Isolated and VPC networks
* NSX: Fix Routed mode - add checks for ports added for FW rules
* clean up code
* fix build failure
* NSX: Add retry logic with sleep to delete segments (#8554)
* NSX: Add retry logic with sleep to delete segments
* add logs
* Update pom XML for the NSX project
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
This PR fixes resources stuck in a transitional state during async job cleanup by moving them out of that state.
Problem:
During maintenance of the management server, other servers in the cluster or the same server after a restart initiate async job cleanup. However, this process leaves resources in a transitional state. The only recovery option currently available is to make direct database changes.
Solution:
This PR resolves the issue by transitioning Volume, Virtual Machine, and Network resources out of their transitional states, which allows failed operations to be retried without manual database modifications.
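As a rough illustration of the idea (the class, enum values, and recovery table below are hypothetical, not the PR's actual code), the cleanup maps each transitional state to a stable state it is safe to fall back to:

import java.util.Map;

// Minimal, assumed illustration: during async job cleanup, resources left in a
// transitional state are moved back to a stable state so the failed operation
// can simply be retried instead of editing the database by hand.
public class StuckResourceRecovery {

    enum VolumeState { Creating, Attaching, Migrating, Snapshotting, Ready, Allocated }

    // Transitional state -> stable state it is safe to fall back to.
    private static final Map<VolumeState, VolumeState> VOLUME_RECOVERY = Map.of(
            VolumeState.Attaching, VolumeState.Ready,
            VolumeState.Migrating, VolumeState.Ready,
            VolumeState.Snapshotting, VolumeState.Ready,
            VolumeState.Creating, VolumeState.Allocated);

    static VolumeState recover(VolumeState current) {
        // Non-transitional states are left untouched.
        return VOLUME_RECOVERY.getOrDefault(current, current);
    }

    public static void main(String[] args) {
        System.out.println(recover(VolumeState.Migrating)); // Ready
        System.out.println(recover(VolumeState.Ready));     // Ready
    }
}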
This PR marks the multipath scripts as executable.
This fixes the issue that, in 4.19.0.0-RC2, VMs cannot be stopped on Ubuntu hosts.
2024-01-17 12:56:26,061 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-4:ctx-e3503563 job-38/job-39 ctx-42706275) (logid:81ede4e9) Invocation exception, caused by: com.cloud.utils.exception.CloudRuntimeException: Unable to stop the virtual machine due to java.lang.NullPointerException
at com.cloud.utils.script.Script.getExitValue(Script.java:74)
at com.cloud.hypervisor.kvm.storage.MultipathSCSIAdapterBase.runScript(MultipathSCSIAdapterBase.java:476)
at com.cloud.hypervisor.kvm.storage.MultipathSCSIAdapterBase.disconnectPhysicalDiskByPath(MultipathSCSIAdapterBase.java:226)
at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.disconnectPhysicalDiskByPath(KVMStoragePoolManager.java:205)
at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.cleanupDisk(LibvirtComputingResource.java:3335)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtStopCommandWrapper.execute(LibvirtStopCommandWrapper.java:101)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtStopCommandWrapper.execute(LibvirtStopCommandWrapper.java:49)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1903)
There are a lot of test failures from test_vm_life_cycle.py across multiple PRs because no host is available for VM migration.
#8438 (comment)
#8433 (comment)
#7344 (comment)
While debugging, I noticed that the hosts get stuck in Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (the default timeout).
On the agent side, it gets stuck waiting for the Script execution to return.
To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in Connecting state, and restarting the agent fails due to the named lock. Locks in the database can be checked using the query below.
SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID() \G;
This PR adds a wait for the ReadyCommand and a timeout to the Script execution to ensure that the thread doesn't get stuck and the named lock in the database is released.
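The sketch below illustrates the timeout idea with plain java.lang.Process; it is not CloudStack's com.cloud.utils.script.Script class, just a minimal, assumed example of bounding an external script so the calling thread (and any lock it holds) is not blocked forever:

import java.util.concurrent.TimeUnit;

// Sketch only: run an external command, but give up after a timeout so the
// agent thread is never blocked indefinitely.
public class BoundedScriptRunner {

    static int runWithTimeout(long timeoutSeconds, String... command) throws Exception {
        Process process = new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
        if (!process.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
            process.destroyForcibly();          // kill the hung script
            return -1;                          // signal a timeout to the caller
        }
        return process.exitValue();
    }

    public static void main(String[] args) throws Exception {
        // Completes normally well within the timeout.
        System.out.println(runWithTimeout(5, "sh", "-c", "echo ready"));
        // Sleeps longer than the timeout, so the runner returns -1 instead of hanging.
        System.out.println(runWithTimeout(2, "sh", "-c", "sleep 10"));
    }
}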
This PR fixes a regression caused by #8465 on advanced zones, import fails with:
2024-01-10 12:13:33,234 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-3:ctx-991bbe9f job-128 ctx-f49517d4) (logid:d7b8e716) Allocating nic for vm 142272e8-9e2e-407b-9d7e-e9a03b81653c in network Network {"id": 204, "name": "Isolated", "uuid": "9679fac5-e3ac-4694-a57b-beb635340f39", "networkofferingid": 10} during import
2024-01-10 12:13:33,239 ERROR [o.a.c.v.UnmanagedVMsManagerImpl] (API-Job-Executor-3:ctx-991bbe9f job-128 ctx-f49517d4) (logid:d7b8e716) Failed to import NICs while importing vm: i-2-31-VM
com.cloud.exception.InsufficientVirtualNetworkCapacityException: Unable to acquire Guest IP address for network Network {"id": 204, "name": "Isolated", "uuid": "9679fac5-e3ac-4694-a57b-beb635340f39", "networkofferingid": 10}Scope=interface com.cloud.dc.DataCenter; id=1
at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.importNic(NetworkOrchestrator.java:4582)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importNic(UnmanagedVMsManagerImpl.java:859)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importVirtualMachineInternal(UnmanagedVMsManagerImpl.java:1198)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importUnmanagedInstanceFromHypervisor(UnmanagedVMsManagerImpl.java:1511)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.baseImportInstance(UnmanagedVMsManagerImpl.java:1342)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importUnmanagedInstance(UnmanagedVMsManagerImpl.java:1282)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Also, this addresses the VNC password field being set instead of a fixed string.
This PR fixes reordering/listing of pools when cluster details are not set, while deploying a VM or attaching a volume.
Problem:
Attaching a volume to a VM fails on infra with zone-wide pools and vm.allocation.algorithm=userdispersing, as the cluster details are not set (passed as null) while reordering/listing pools by volumes.
Solution:
Ignore cluster details when they are not set, while reordering/listing pools by volumes.
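A minimal sketch of that behavior, with hypothetical types and method names rather than the actual allocator code:

import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: the cluster filter is applied only if a cluster id is
// actually known. For zone-wide pools the cluster id can be null, and filtering
// on a null cluster previously made the volume-attach deployment fail.
public class PoolListing {

    record StoragePool(long id, Long clusterId, boolean zoneWide) {}

    static List<StoragePool> filterPools(List<StoragePool> pools, Long clusterId) {
        if (clusterId == null) {
            return pools;                       // cluster details not set: ignore the filter
        }
        return pools.stream()
                .filter(p -> p.zoneWide() || clusterId.equals(p.clusterId()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<StoragePool> pools = List.of(
                new StoragePool(1, 10L, false),
                new StoragePool(2, null, true));
        System.out.println(filterPools(pools, null).size()); // 2: filter skipped
        System.out.println(filterPools(pools, 10L).size());  // 2: cluster pool + zone-wide pool
    }
}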
This PR changes DRS to use the cluster's free metrics instead of the used metrics when computing the cluster imbalance. This allows DRS to run for clusters whose hosts do not all have the same capacity.
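One common way to express the imbalance is the standard deviation of a per-host metric divided by its mean; the assumed sketch below (not the exact DRS code) shows why computing it over free capacity behaves better when host sizes differ:

import java.util.List;

// Assumed illustration: imbalance = standard deviation / mean of a per-host metric.
public class ClusterImbalance {

    static double imbalance(List<Double> perHostMetric) {
        double mean = perHostMetric.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        if (mean == 0) {
            return 0;
        }
        double variance = perHostMetric.stream()
                .mapToDouble(v -> (v - mean) * (v - mean))
                .average().orElse(0);
        return Math.sqrt(variance) / mean;
    }

    public static void main(String[] args) {
        // Two hosts with different totals (64 GiB and 128 GiB), both using 32 GiB.
        List<Double> used = List.of(32.0, 32.0);   // looks perfectly balanced on "used"
        List<Double> free = List.of(32.0, 96.0);   // "free" reveals the real skew
        System.out.printf("used-based imbalance: %.2f%n", imbalance(used)); // 0.00
        System.out.printf("free-based imbalance: %.2f%n", imbalance(free)); // 0.50
    }
}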
Currently, if a host is on alert, the option to reconnect it is not presented in the UI.
This PR addresses this issue by adding a reconnect button to the host if it is in an alert state.
To prevent errors during multi-user access, use the account UUID to create/access the user on the provider side. Also, update the existing secret key for a user that already exists.
This PR enables support for user data content of up to 1048576 bytes by updating the Jetty maxFormContentSize value to 1048576 bytes (the default is 200000 bytes).
CloudStack can support user data content of up to 1048576 bytes (the size can be configured through the vm.userdata.max.length setting, max 1048576), but it is limited by the default Jetty max form content size of 200000 bytes.
Configuration reference from the Jetty documentation (see maxFormContentSize):
https://eclipse.dev/jetty/documentation/jetty-9/index.html#configuring-specific-webapp-deployment
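For illustration only, a minimal embedded Jetty 9 setup (the handler, context path, and port are assumptions, not CloudStack's actual server wiring) showing where this limit is raised:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;

// Minimal embedded Jetty 9 sketch: the form-content limit that caps POSTed
// user data lives on the context handler.
public class JettyFormSizeExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);                 // port chosen arbitrarily for the example
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/client");
        context.setMaxFormContentSize(1048576);           // raise the 200000-byte default
        server.setHandler(context);
        server.start();
        server.join();
    }
}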
This PR fixes the KVM manage/unmanage functionality on 4.19.0 RC1. It was introduced in #7976, but the latest merge commits on that PR removed the execution for KVM.
Fixes #8412
Add support for 8.0.0.2 explicitly to prevent falling back to the parent version
Adds a log when hypervisor capabilities fall back to the parent version
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
A non-admin account may not have access to the readyForShutdown API; therefore, the UI should not schedule the job.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
GuestOS mappings are retrieved from the parent hypervisor version when a minor/patch hypervisor version doesn't exist (a simplified sketch of the fallback follows below).
Fixes #8412
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
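A simplified, assumed sketch of the fallback order (the mapping table, its example value, and the helper name below are illustrative only, not the actual GuestOS mapping code):

import java.util.Map;
import java.util.Optional;

// Sketch: look up mappings for the exact hypervisor version first, then keep
// dropping the last version component ("8.0.0.2" -> "8.0.0" -> "8.0") until one is found.
public class GuestOsMappingLookup {

    private static final Map<String, String> MAPPINGS = Map.of(
            "8.0", "otherLinux64Guest");        // only the parent version is known here (example value)

    static Optional<String> findMapping(String hypervisorVersion) {
        String version = hypervisorVersion;
        while (true) {
            String mapping = MAPPINGS.get(version);
            if (mapping != null) {
                return Optional.of(mapping);
            }
            int lastDot = version.lastIndexOf('.');
            if (lastDot < 0) {
                return Optional.empty();        // no parent version left to try
            }
            version = version.substring(0, lastDot);   // fall back to the parent version
        }
    }

    public static void main(String[] args) {
        System.out.println(findMapping("8.0.0.2")); // Optional[otherLinux64Guest], via "8.0"
        System.out.println(findMapping("9.1"));     // Optional.empty
    }
}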
This PR fixes the template selection regression for VMware Ingestion in the UI on 4.19.0 RC1 and adds back the default template selection for VMware
Fixes: #8428, #8432
During management server initialization, some EmptyStackException and BeansException exceptions are always thrown. These exceptions do not affect ACS; however, they can cause confusion when observing the management server startup logs with DEBUG level on. The causes of these exceptions have been addressed.
Co-authored-by: João Jandre <joao@scclouds.com.br>
This PR changes the password.policy.regex default value to empty. With an empty value, the configuration is skipped during the password policy check; only when it is set to something other than a blank string does the regex get checked.
This way, when creating a user via org.apache.cloudstack.ldap.LdapAuthenticator#authenticate(), we won't get an error by default, as an empty value is passed for the password.
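A hedged sketch of the resulting check (not the literal CloudStack code): an empty regex simply means the policy is not enforced.

import java.util.regex.Pattern;

// Sketch of the described behavior: with an empty password.policy.regex the
// check is skipped, so LDAP-created users with an empty password no longer
// fail policy validation.
public class PasswordPolicyRegexCheck {

    static boolean passesRegexPolicy(String configuredRegex, String password) {
        if (configuredRegex == null || configuredRegex.isBlank()) {
            return true;                                    // empty default: policy not enforced
        }
        return Pattern.matches(configuredRegex, password == null ? "" : password);
    }

    public static void main(String[] args) {
        System.out.println(passesRegexPolicy("", ""));                  // true: skipped
        System.out.println(passesRegexPolicy("^.{8,}$", "short"));      // false: regex enforced
        System.out.println(passesRegexPolicy("^.{8,}$", "longenough")); // true
    }
}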
With this change, a fix is added for failures seen with test_08_migrate_vm and other migration-related tests where the target host is in `Connecting` state:
#8356 (comment)
#8374 (comment)
and more