Add a global setting to control whether redirection is allowed while
downloading templates and volumes
core: some changes to SimpleHttpMultiFileDownloader,
similar to HttpTemplateDownloader
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit b1642bc3bf)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* NSX integration - skeletal code
* Fix module not loading on startup
* add upgrade path and daos; add nsx controller command
* add support for adding and listing nsx provider to a zone
* add license
* add default VPC offering and update upgrade path
* add global setting to enable nsx plugin
* add delete nsx controller operation
* add nsxresource
* add NSX resource, api client, create tier1 gw
* update db
* update response and add license
* Add support to create and delete nsx tier-1 gateway
* add license
* cleanup and add skeletal code for network creation
* add create/delete segment and UI integration
* add license
* address code smells - part 1
* fix test / build failure
* add ui changes + update nsx_provider table transport zones + use NSX broadcast domain for add nics to router
* ui: fix password field, and backend changes
* add route advertisement
* update offering
* update offering
* add sleep before deletion of vpc / tier g/w for ports to be removed
* move creation of segments to design phase
* change provider to VPC router for DHCP & DNS service in an nsx offering
* Add public nic for NSX
* reserve first IP (after g/w) of subnet for router nic - NSX
* revert reserving 1st IP in vpc segments
* [NSX] Create a DHCP relay and add it to a VPC tier segment (#107)
* Create DHCP relay command and execute request
* In progress integrate with networking
* Create DHCP relay config on the network VR allocation
* Revert domain router dao changes
* Create DHCP relay con VR nic plug to NSX network
* Link DHCP relay config to segment after creation
* [NSX] Cleanup DHCP Relay config on segment deletion (#108)
* Cleanup DHCP Relay config on segment deletion
* update segment & relay name generators and call delete dhcprelay after deletion of segment
* address comment
* [NSX] Fix DHCP relay config deletion was missing zone name (#8068)
* [NSX] Refactor API wrapper operations (#8059)
* [NSX] Refactor API wrapper operations
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Nsx unit tests (#8090)
* Add tests
* add test for NsxGuestNetworkGuru
* add unit tests for NsxResource
* add unit tests for NsxElement
* cleanup
* [NSX] Refactor API wrapper operations
* update tests
* update tests - add nsxProviderServiceImpl test
* add unit test - NsxServiceImpl
* add license
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
* fix tests
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* modify NSX resource naming convention (#8095)
* modify NSX resource naming convention
* remove unused imports
* add a setup phase between design and implementation of a network for intermediary steps
* add method to all classes
* NSX: Refactor Network & VPC offering (#8110)
* [NSX] Refactor API wrapper operations
* Network offering changes for NSX
* fix services and provider combination
* address comments: rename param
* update nsx_mode parameter
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test
* [NSX] Allow NSX isolated networks (#8132)
* Add network offerings for NSX on isolated networks
* Fix offerings creation
* In progress NSX isolated network
* Fixes
* Fix NIC allocation to router
* NSX: Add Step for Adding Public traffic network for NSX During zone creation (#8126)
* NSX: Add Step for Adding Public traffic network for NSX
* address comments and cleanup
* address comment
* remove indent
* NSX: Create and Delete static NAT & Port forward rules (#8131)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* fix lint check
* fix smoke tests
* fix smoke tests
* Nsx add lb rule (#8161)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* NSX: Add support to create and delete Load balancer rules
* fix deletion of lb rules
* add header file and update protocol detail
* build failure fix
* [NSX] Add SNAT support (#8100)
* In progress add source NAT
* Fix after merge
* Fix tests
* Fix NPE on isolated network deletion
* Reserve source NAT IP when its not passed for NSX VPC
* Create source NAT rule on VR NIC allocation
* Fix update VPC and remove VPC to update and remove SNAT rule
* Fix packaging
* Address review comment
* Fix build
* fix build - unused import
* Add defensive checks
* Add missing design to NSX public guru
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix VR public NIC allocation (#8166)
* NSX: fix LB member addition and deletion and add defensive checks (#8167)
* Fix public NIC NPE on broadcast URI
* NSX: Router Public nic to get IP from systemVM Ip range (#8172)
* NSX: Router Public nic to get IP from systemVM Ip range
* Fix VR IP address and setSourceNatIp command
* NSX: hide systemVM reserved IP range SourceNAT
* fix test
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test failure
* test failure fix
* [NSX] Fix update source NAT IP (#8176)
* [NSX] Fix update source NAT IP
* Fix startup
* Fix API result
* NSX - add LB route Advertizement (#8192)
* [NSX] Add ACL types support (#8224)
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Fix DROP action rules and transform tests
* Add new ACL rules
* Fixes
* associate security policies with groups and not to DFW and add deletion of rules
* Fix name convention
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix creation of VPCs (#8320)
* Fix ACL rules creation (#8323)
* [NSX] Fix database views (#8325)
* NSX: Add CKS Support & Firewall rules for Isolated Networks (#8189)
* NSX: Add ALL LB IP to the list of route advertisements in tier1
* NSX: Support Source NAT on NSX Isolated networks
* NSX: Cks Support
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add Firewall rules
* build failure - fix unit test
* fix npes
* Add support to delete firewall rules
* update nsx cks offering
* add license
* update order of ports in PF & FW rules
* fix filter for getting transport zones
* CKS support changed - MTU updated, etc
* add LB for CKS on VPC
* address comments
* adapt upstream cks logic for vpc
* revert mtu hack
* update UI changes as per upstream fix
* change display text for CKS n/w offerings for isolated and VPC tiers
* add extra line for linter
* address comment
* revert list change
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix ui build failure
* [NSX] Address SonarCloud Bugs (#8341)
* [NSX] Address SonarCloud Bugs
* Fix NSX API connection issues
* NSX: Add unit tests to increase coverage (#8355)
* NSX: Add unit tests
* cleanup unused imports
* add more unit tests
* add tests for publicnsxnetworkguru
* add license
* fix build failures
* address sonar comment
* fix security hotspots
* NSX: Add more unit tests (#8381)
* NSX : Unit tests
* remove unused imports
* remove unused import causing build failure
* fix build failures due to unused imports
* fix build failure
* fix test assertion
* remove unused imports
* remove unused import
* Nsx UI zone bug (#8398)
* NSX: Attempt to fix NSX Zone creation bug for public networks
* fix zone wizard public traffic issue
* add proper filtering of offerings based on VPC nsx mode
* clean up console logs
* NSX: Fix code smells and reported bugs (#8409)
* NSX: Fix code smells and reported bugs
* fix override issue
* remove unused imports
* fix test
* refactor code to reduce complexity
* add license
* cleanup
* fix build failure
* fix build failure
* address comments
* test - add config to ignore certain files from test coverage
* test exclusion of classes from test cov
* revert pom changes
* [NSX] Add more unit tests (#8431)
* [NSX] Add more unit tests
* More tests
* Fix build errors
* NSX: Prevent creation of L2 and Shared networks for NSX (#8463)
* NSX: Prevent creation of L2 and Shared networks for NSX
* add checks to backend to prevent creation of l2 and shared networks in nsx zones and filter only nsx offerings when creating isolated networks
* cleanup
* NSX: Fix code smells (#8436)
* NSX: Fix code smells
* Add changes to service creation logic
* CKS: Add action to during firewall rule creation (#8498)
* NSX,UI: Deduplicate network list when creating kubernetes clusters (#8513)
* NSX: Make LB service selectable in network offering (#8512)
* NSX: Make LB service selectable in network offering
* fix label
* address comments
* address comments
* NSX: Add appropriate error message when icmp type is set to -1 for NSX (#8504)
* NSX: Add appropriate error message when icmp type is set to -1 for NSX
* address comments
* update text
* fix test
* fix test - build failure
* fix test - build failure
* NSX: Cleanup NSX resources during k8s cluster cleanup (#8528)
* fix test failure
* NSX: Improve segment deletion process (#8538)
* NSX: Add passive monitor for NSX LB to test whether a server is available (#8533)
* NSX: Add passive monitor for NSX LB to test whether a server is available
* Add active monitors too
* fix build failure
* NSX: Add check for ICMP code / type for NSX zones (#8542)
* NSX: Fix Routed Mode for Isolated and VPC networks (#8534)
* NSX: Fix Routed Mode for Isolated and VPC networks
* NSX: Fix Routed mode - add checks for ports added for FW rules
* clean up code
* fix build failure
* NSX: Add retry logic with sleep to delete segments (#8554)
* NSX: Add retry logic with sleep to delete segments
* add logs
* NSX: Fix custom ACL check (#2)
* NSX: Fix custom ACL check
* NSX: Fix custom ACL check
* Nsx vpc routed mode (#5)
* NSX: Fix VPC routed mode
* NSX: VPC route mode
* remove unnecessary changes
* Nsx: Support internal LB (#4)
* NSX: Support internal LB service in NSX
* add lb removal logic
* Fix UI issue hiding internal LB tab
* Refactor method name
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* NSX: Improve NSX resource cleanup process (#3)
* Fix unit test
* NSX: Add SourceNAT service to the default Routed offering for VPC (#13)
* Fix VPC restart with cleanup (#12)
* NSX: Fix ACL rule removal on replacement and fix rule order (#11)
* NSX: fix smoke test failure for ACLs (#9)
* Fix unit tests
* Fix NSX plugin pom XML
* NSX: Add support to re-order ACL rules (NSX FW rules) (#14)
* [WIP] NSX: Add support to re-order ACL rules (NSX FW rules)
* fix reordering of acl rules on all networks that it is associated to
* clean up and attempt test fix
* Fix tests
* Remove unused import
* tweak reorder logic
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix zone creation issue for internal load balancer
* Fix
* Fix unit test
* fix logger
* fix logger
* fix logger
* NSX: Fix VPC form to ignore source NAT IP when creating VPCs and fix label
* Move SQL changes to the newest schema file
* NSX: Last Fixes
* Fix build
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Introduced a new API checkVolumeAndRepair that allows users or admins to check volumes and repair them if any leaks are observed.
Currently this is supported only for KVM
* some fixes
* Added unit tests
* addressed review comments
* add repair volume while granting access
* Changed repair parameter to accept both leaks/all
* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations
* Added volume check and repair changes only during VM start and volume attach operations
* Refactored the names to look similar across the code
* Some code fixes
* remove unused code
* Renamed repair values
* Fixed unit tests
* changed version
* Address review comments
* Code refactored
* used volume name in logs
* Changed the API to Async and the setting scope to storage pool
* Fixed exit value handling with check volume command
* Fixed storage scope to the setting
* Fix volume format issues
* Refactored the log messages
* Fix formatting
* Normalize logs
All classes that could have their loggers inherited from their parent classes had their own loggers deleted;
Most loggers didn't have to be static, so most of them were normalized so that they wouldn't be;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
New class DbUpgradeAbstractImpl created so that all Upgraders extend it and inherit its logger
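A minimal sketch of the resulting logger convention, assuming Log4j 2's LogManager (the subclass name here is illustrative):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Parent class declares the single, inheritable, non-static logger.
abstract class DbUpgradeAbstractImpl {
    // Non-static, protected, named 'logger'; resolves to the concrete subclass at runtime.
    protected Logger logger = LogManager.getLogger(getClass());
}

// Subclasses no longer declare their own logger; they simply inherit it.
class ExampleUpgrader extends DbUpgradeAbstractImpl {
    void performDataMigration() {
        logger.info("Running data migration in {}", getClass().getSimpleName());
    }
}
```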
* Upgrade log4j
* fix errors caused by the merge
* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files
* fix sonarcloud bug
* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit
* Readd snmpTrapAppender, remove TestAppender
* Regenerate changes
* regenerate changes
* refactor last custom appender
* fix systemvm configuration xml
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Fix utils pom
* fix some tests
* regenerate changes
* Fix jar being printed on exception
* fix logging in system VMs, fix commands not having log4j2 classpath.
* regenerate changes
* Fix some unwanted renamings
* fix end of file
* regenerate changes
* regenerate changes
* fix merge error
* regenerate changes
* fix tests
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* readd reload4j to tungsten as juniper depends on it
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* re-add reload4j dependency to network-contrail, as juniper depends on it
* regenerate changes
* regenerate changes
* regenerate changes
* fix typo
* regenerate changes
* regenerate changes
* Fix end of files
* regenerate changes
* add log4j2 to cloud-utils-SHADED.jar
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* fix some tests
* Regenerate changes
* Regenerate changes
* fix test
* Regenerate changes
* Regenerate changes
* StoragePoolType as a class
* Fix agent side StoragePoolType enum to class
* Handle StoragePoolType for StoragePoolJoinVO
* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented converter class and logic to utilize the @Convert annotation (see the converter sketch after this commit list).
* Fix UserVMJoinVO for StoragePoolType
* fixed missing imports
* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented converter class and logic to utilize the @Convert annotation.
* Fixed equals for the enum.
* removed not needed try/catch for prepareAttribute
* Added license to the file.
* Implemented "supportsPhysicalDiskCopy" for storage adaptor.
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
* Add javadoc to StoragePoolType class
* Add unit test for StoragePoolType comparisons
* StoragePoolType "==" and ".equals()" fix.
* Fix StoragePoolType for FiberChannelAdapter
* Fix for abstract storage adaptor set up issue
* review comments
* Pass StoragePoolType object for poolType dao attribute
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@gmail.com>
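A minimal sketch of what such a JPA attribute converter could look like; the StoragePoolType stand-in and converter below are illustrative, not the exact CloudStack classes:

```java
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Illustrative stand-in for the real StoragePoolType class (formerly an enum).
class StoragePoolType {
    private final String name;
    StoragePoolType(String name) { this.name = name; }
    String name() { return name; }
    static StoragePoolType valueOf(String name) { return new StoragePoolType(name); }
}

// With @Enumerated no longer applicable, an AttributeConverter maps the class to its DB column.
@Converter
public class StoragePoolTypeConverter implements AttributeConverter<StoragePoolType, String> {
    @Override
    public String convertToDatabaseColumn(StoragePoolType attribute) {
        return attribute == null ? null : attribute.name();
    }

    @Override
    public StoragePoolType convertToEntityAttribute(String dbData) {
        return dbData == null ? null : StoragePoolType.valueOf(dbData);
    }
}
```

An entity field would then carry `@Convert(converter = StoragePoolTypeConverter.class)` in place of the previous `@Enumerated` annotation.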
This PR fixes several issues found while testing Veeam 11 and Veeam 12:
- Import Veeam.Backup.PowerShell and silently ignore the warning messages
- Fix issue when assigning a VM to backup offerings, caused by the separator (\r\n)
- Fix authorization failure in Veeam 12a, because API version v1_4 is no longer supported in Veeam 12a
- Fix exception if the backup name has a space
- Fix backup metrics in Veeam 12, because the PowerShell command does not return the needed values
- Fix incorrect datetime value, because the PowerShell command returns a datetime that is not supported in Java
- Fix issue during backup restoration if the VM has both ROOT and DATA disks.
This PR also has the following updates:
- Add integration test test/integration/smoke/test_backup_recovery_veeam.py
- Make some UI changes
- Add zone setting backup.plugin.veeam.version. If it is not set, CloudStack will get the Veeam version via PowerShell commands.
- Add zone setting backup.plugin.veeam.task.poll.interval and backup.plugin.veeam.task.poll.max.retry
1. Problem description
In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host, which has a property, shares, that defines the weight of the VM to access the host CPU. The value of this property has no unit, and it is a relative measure used to calculate how much CPU a given VM will get in the host. However, this value has a limit, which depends on the version of cgroup used by the host's kernel. The problem lies in the range of values of shares, which varies between the two versions: [2, 262144] for cgroups version 1; and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency; both specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000 and an exception will be thrown by libvirt if the host uses cgroup v2. The second version is becoming the default in current Linux distributions; thus, it is necessary to address this limitation.
Equation 1
shares = CPU * speed
Fixes: #6744
2. Proposed changes
To address the problem described, we propose to apply a scale conversion considering the max shares of the host. Using the same formula currently utilized by ACS, it is possible to calculate the maximum shares a host can offer to a VM; in other words, the number of cores and the nominal speed of the host's CPU give the upper limit of shares allowed for a VM. This value is then scaled to the allowed interval of [1, 10000] of cgroup v2 by using a linear scale conversion.
The VM shares would be calculated as Equation 2, presented below, where VM requested shares is the requested shares value calculated using Equation 1, cgroup upper limit is fixed with a value of 10000 (cgroups v2 upper limit), and host max shares is the maximum shares value of the host, calculated using Equation 1. Using Equation 2, the only case where a VM passes the cgroup v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS.
Equation 2
shares = (VM requested shares * cgroup upper limit) / host max shares
To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added to find a suitable host: the max shares of each host will be calculated, and the VM's calculated shares will be checked to ensure they do not surpass the host's value. Likewise, VM migration will have a similar new verification. Lastly, VM scaling will also have the same verification for the VM's host.
To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.
It is important to note that these changes are only for hosts with the KVM hypervisor using cgroup v2 for now.
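A minimal sketch of the scaling described by Equations 1 and 2, assuming cgroup v2; the method names and the example host values are illustrative:

```java
public class CgroupSharesCalculator {

    private static final int CGROUP_V2_UPPER_LIMIT = 10000;

    // Equation 1: shares = CPU * speed (cores * MHz), used for both the VM request and the host maximum.
    static int requestedShares(int cpuCores, int cpuSpeedMhz) {
        return cpuCores * cpuSpeedMhz;
    }

    // Equation 2: linearly scale the requested shares into the [1, 10000] range allowed by cgroup v2.
    static int scaledShares(int vmCores, int vmSpeedMhz, int hostCores, int hostSpeedMhz) {
        int vmRequestedShares = requestedShares(vmCores, vmSpeedMhz);
        int hostMaxShares = requestedShares(hostCores, hostSpeedMhz);
        return Math.max(1, (vmRequestedShares * CGROUP_V2_UPPER_LIMIT) / hostMaxShares);
    }

    public static void main(String[] args) {
        // Example from the description: 6 cores at 2 GHz requested, on a hypothetical 16-core, 2.5 GHz host.
        System.out.println(scaledShares(6, 2000, 16, 2500)); // 12000 * 10000 / 40000 = 3000
    }
}
```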
This PR adds the capability in CloudStack to convert VMware Instances disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. vSphere/VMware setup might be managed by CloudStack or be a standalone setup.
CloudStack will let the administrator select a VM from an existing VMware vCenter in the CloudStack environment or external vCenter requesting vCenter IP, Datacenter name and credentials.
The migrated VM will be imported as a KVM instance
The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
The migration process timeout can be set by the setting convert.instance.process.timeout
Before attempting the virt-v2v migration, CloudStack will create a clone of the source VM on VMware. The clone VM will be removed after the registration process finishes.
CloudStack will delegate the migration action to a KVM host and the host will attempt to migrate the VM by invoking virt-v2v. If the guest OS is not supported, CloudStack will handle the operation as a failure
The migration process using virt-v2v may not be a fast process
CloudStack will not perform any check about the guest OS compatibility for the virt-v2v library as indicated on: https://access.redhat.com/articles/1351473.
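As a rough illustration only, the kind of virt-v2v invocation delegated to the KVM host could look like the following; the connection URI, credentials file, VM name and paths are hypothetical, and the actual command CloudStack assembles may differ:

```java
import java.io.IOException;
import java.util.List;

public class VirtV2vExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical conversion of a cloned VMware VM into a local qcow2 disk on the KVM host.
        List<String> cmd = List.of(
                "virt-v2v",
                "-ic", "vpx://administrator%40vsphere.local@vcenter.example.com/DC1/Cluster1/esxi-host?no_verify=1",
                "-ip", "/tmp/vcenter-password-file",   // file containing the vCenter password
                "source-vm-clone",                      // name of the clone created before conversion
                "-o", "local",
                "-os", "/var/lib/libvirt/images",       // output storage directory
                "-of", "qcow2");                        // output disk format
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        System.exit(p.waitFor());
    }
}
```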
Extends the current KVM Host HA functionality to the StorPool storage plugin, with an option for easy integration of the rest of the storage plugins to support Host HA.
This extension works like the current NFS storage implementation. It can be used simultaneously with NFS and StorPool storage, or with StorPool primary storage only.
If it is used with different primary storages such as NFS and StorPool, and one of the health checks fails for a storage, there is an option to report the failure to the management server with the global config kvm.ha.fence.on.storage.heartbeat.failure. By default this option is disabled; when enabled, the Host HA service will continue with the checks on the host and eventually fence it.
OAuth2, the industry-standard authorization/authentication framework, simplifies the process of
granting access to resources. CloudStack supports OAuth2 authentication wherein users can log in to
CloudStack without using a username and password. Support for Google and GitHub providers has been added.
Other OAuth2 providers can be easily integrated with CloudStack using its plugin framework.
The login page will show provider options when the OAuth2 is enabled and corresponding providers are configured.
"OAuth configuration" sub-section is present under "Configuration" where admins can register the corresponding
OAuth providers.
This pull request (PR) implements a Distributed Resource Scheduler (DRS) for a CloudStack cluster. The primary objective of this feature is to enable automatic resource optimization and workload balancing within the cluster by live migrating the VMs as per configuration.
Administrators can also execute DRS manually for a cluster, using the UI or the API.
Adds support for two algorithms - condensed & balanced. Algorithms are pluggable allowing ACS Administrators to have customized control over scheduling.
Implementation
There are three top level components:
- Scheduler - a timer task which:
  - generates DRS plans for clusters
  - processes DRS plans
  - removes old DRS plan records
- DRS Execution - we go through each VM in the cluster and use the specified algorithm to check if DRS is required and to calculate the cost, benefit & improvement of migrating that VM to another host in the cluster. On the basis of cost, benefit & improvement, the best migration is selected for the current iteration and the VM is migrated. The maximum number of iterations (live migrations) possible on the cluster is defined by drs.iterations, which is defined as a percentage (a value between 0 and 1) of the total number of workloads.
- Algorithm - every algorithm implements two methods:
  - needsDrs - to check if DRS is required for the cluster
  - getMetrics - to calculate the cost, benefit & improvement of migrating a VM to another host.
Algorithms
- Condensed - packs all the VMs on the minimum number of hosts in the cluster.
- Balanced - distributes the VMs evenly across hosts in the cluster.
Algorithms use drs.level to decide the amount of imbalance to allow in the cluster.
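A minimal sketch of what such a pluggable algorithm contract could look like, based on the two methods named above (the exact CloudStack interface and signatures may differ):

```java
import java.util.Map;

// Illustrative contract for a pluggable DRS algorithm.
public interface ClusterDrsAlgorithm {

    // Decide whether the cluster's imbalance (per drs.imbalance and drs.imbalance.metric) warrants a DRS run.
    boolean needsDrs(long clusterId, Map<Long, Double> hostCpuUsedMhz, Map<Long, Double> hostMemoryUsedMib);

    // For a candidate migration of one VM to a destination host, return {cost, benefit, improvement};
    // the scheduler picks the best candidate for each iteration.
    double[] getMetrics(long clusterId, long vmId, long destinationHostId,
                        Map<Long, Double> hostCpuUsedMhz, Map<Long, Double> hostMemoryUsedMib);
}
```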
APIs Added
listClusterDrsPlan
id - ID of the DRS plan to list
clusterid - to list plans for a cluster id
generateClusterDrsPlan
id - cluster id
iterations - The maximum number of iterations in a DRS job defined as a percentage (as a value between 0 and 1) of total number of workloads. Defaults to value of cluster's drs.iterations setting.
executeClusterDrsPlan
id - ID of the cluster for which DRS plan is to be executed.
migrateto - This parameter specifies the mapping between a vm and a host to migrate that VM. Format of this parameter: migrateto[vm-index].vm=<uuid>&migrateto[vm-index].host=<uuid>.
Config Keys Added

| Config | Key | Scope | Default Value | Description |
|--------|-----|-------|---------------|-------------|
| ClusterDrsPlanExpireInterval | drs.plan.expire.interval | Global | 30 days | The interval in days after which old DRS records will be cleaned up. |
| ClusterDrsEnabled | drs.automatic.enable | Cluster | false | Enable/disable automatic DRS on a cluster. |
| ClusterDrsInterval | drs.automatic.interval | Cluster | 60 minutes | The interval in minutes after which a periodic background thread will schedule DRS for a cluster. |
| ClusterDrsIterations | drs.max.migrations | Cluster | 50 | Maximum number of live migrations in a DRS execution. |
| ClusterDrsAlgorithm | drs.algorithm | Cluster | condensed | DRS algorithm to execute on the cluster. This PR implements two algorithms - balanced & condensed. |
| ClusterDrsLevel | drs.imbalance | Cluster | 0.5 | Percentage (as a value between 0.0 and 1.0) of imbalance allowed in the cluster. 1.0 means no imbalance is allowed and 0.0 means full imbalance is allowed. |
| ClusterDrsMetric | drs.imbalance.metric | Cluster | memory | The cluster imbalance metric to use when checking the drs.imbalance.threshold. Possible values are memory and cpu. |
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.
Copy functionality is similar to template copy. The source zone acts as the web server from where the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The copySnapshot response returns zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.
In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. In this case, the snapshot is taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.
As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.
`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.
Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.
`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
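A rough illustration of how the new zone-related parameters might be passed; the UUIDs are hypothetical and the tiny query-string helper stands in for the usual signed CloudStack API call or UI form:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SnapshotZoneParamsExample {

    // Tiny helper: turn a parameter map into a query string (request signing omitted for brevity).
    static String toQuery(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        params.forEach((k, v) -> sb.append(sb.length() == 0 ? "" : "&").append(k).append('=').append(v));
        return sb.toString();
    }

    public static void main(String[] args) {
        // createSnapshot in the volume's base zone, with copies kept in two additional zones (zoneids).
        Map<String, String> create = new LinkedHashMap<>();
        create.put("command", "createSnapshot");
        create.put("volumeid", "11111111-2222-3333-4444-555555555555"); // hypothetical volume UUID
        create.put("zoneids", "zone-uuid-2,zone-uuid-3");               // hypothetical additional zones
        System.out.println(toQuery(create));

        // deleteSnapshot of only the copy kept in one specific zone (zoneid).
        Map<String, String> delete = new LinkedHashMap<>();
        delete.put("command", "deleteSnapshot");
        delete.put("id", "66666666-7777-8888-9999-000000000000");       // hypothetical snapshot UUID
        delete.put("zoneid", "zone-uuid-3");
        System.out.println(toQuery(delete));

        // listSnapshots showing one entry per snapshot, regardless of how many zones hold a copy.
        Map<String, String> list = new LinkedHashMap<>();
        list.put("command", "listSnapshots");
        list.put("showunique", "true");
        System.out.println(toQuery(list));
    }
}
```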
----
New API added
`copySnapshot`
Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form
Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
* Trigger out of band VM state update via libvirt event when VM stops
* Add License headers, refactor nested try
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
* Fix style for LibvirtComputingResource variable names and its dependencies
* More variable name fixes
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
* 4.18:
SSVM: 'allow from' private IP in other SSVMs if the public IP is in allowed internal sites cidrs (#7288)
eof added to StorPoolStatsCollector (#7754)
Currently, ACS can continue to show an imported instance/VM as an unmanaged instance if the name and internalCSName (custom attribute, cloud.vm.internal.name) differ for the instance/VM on vCenter. While filtering managed instances from the instance list received from the ESXi host, this PR also checks that the internal name of the instance is not in the managed instance names list.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Guest OS mapping improvements
- Checks the OS mapping name in hypervisor (VMware, XenServer)
- Displays guest OS mappings in UI
* Added API getHypervisorGuestOsNames to list the guest OS names in the hypervisor, and code improvements
* Some static analysis fixes
* Removed commented code in listview
* Guest OS list
* UI changes for adding guest os and mappings
* Added guest os mappings in guest os form
* Added new filter to guest os mapping
* Name and description changes
* VMWare Host and cluster MO unit tests
* CheckGuestOsMapping command and answer unit tests
* GetHypervisorGuestOsNames command and answer unit tests
* VmwareResource unitests
* GuestOsMapper unittests
* icon changes
* Addressed review comments
* Renaming fixes
* Removed comments
* marvin tests for guest os operations
* Added marvin tests for OS mappings
* Document links and UI improvements
* Added deduplication for the list guest OS API
* Fixed linter failure
* Few bug fixes and UI changes
* Few improvements
* Addressed code smells
* Fixed UI issues after rebase
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
* Auto Enable Disable KVM hosts
* Improve health check result
* Fix corner cases
* Script path refactor
* Fix sonar cloud reports
* Fix last code smells
* Add marvin tests
* Fix new line on agent.properties to prevent host add failures
* Send alert on auto-enable-disable and add annotations when the setting is enabled
* Address reviews
* Add a reason for enabling or disabling a host when the automatic feature is enabled
* Fix comment on the marvin test description
* Fix for disabling the feature if the admin has manually updated the host resource state before any health check result
* 4.17:
marvin: newer python setuptools doesn't like -SNAPSHOT in marvin version (#7120)
VR: fix warning Expected X answers while executing SetXXXCommand but Y (#7050)
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.
In some cases cloud customers may still prefer to maintain their own guest-level volume encryption, if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience in terms of running volume encryption for guests without the user having to manage keys, deal with key servers and guest booting being dependent on network connectivity to them (i.e. Tang), etc, especially in cases where users are attaching/detaching data disks and moving them between VMs occasionally.
The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.
This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.
NOTE: While not required, operations can be significantly sped up by ensuring that hosts have the `rng-tools` package and service installed and running on the management server and hypervisors. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise would only affect users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.
### Management Server
##### API
* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM. This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off, rather the offering should be referenced. This follows the same pattern as other disk offering based settings such as the IOPS of the volume.
##### Volume functions
A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.
Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.
Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume
Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).
##### Primary Storage Support
For storage developers, adding encryption support involves:
1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during allocation of storage to match storage types that support encryption to storage that supports it.
2. Implementing encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that are requesting encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume. Or (as is the case with the KVM storage types), as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases, if encryption is requested.
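A rough sketch of what point (2) can look like inside a driver's volume-creation path, assuming the data object carries an optional passphrase when encryption is requested; the types and method names below are illustrative stand-ins, not the actual PrimaryDataStoreDriver API:

```java
import java.util.Arrays;

// Illustrative stand-in; the real driver works with CloudStack's DataObject/volume TO types.
class VolumeRequest {
    byte[] passphrase;     // non-null only when the disk/service offering requests encryption
    String encryptFormat;  // persisted into the volumes table by the management server
}

public class EncryptionAwareDriverSketch {

    void createVolume(VolumeRequest volume) {
        byte[] passphrase = volume.passphrase;
        try {
            if (passphrase != null && passphrase.length > 0) {
                // Storage-specific: e.g. pass an "encrypted" flag to the array API, or have the
                // KVM host format the qcow2/block device as LUKS using this passphrase.
                createEncryptedVolume(volume, passphrase);
                volume.encryptFormat = "LUKS";
            } else {
                createPlainVolume(volume);
            }
        } finally {
            if (passphrase != null) {
                Arrays.fill(passphrase, (byte) 0);  // clear key material after use
            }
        }
    }

    private void createEncryptedVolume(VolumeRequest v, byte[] passphrase) { /* storage-specific */ }
    private void createPlainVolume(VolumeRequest v) { /* storage-specific */ }
}
```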
##### Scheduling
For the KVM implementations specified above, we are dependent on the KVM hosts having support for volume encryption tools. As such, the hosts `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup to look for functioning `cryptsetup` and support in `qemu-img`. This is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support such as UEFI.
The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption. This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities will require a host to support encryption (for example a snapshot backup is a simple file copy), and this is the reason why the interface has been modified to allow for the storage driver to decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.
VM scheduling has also been modified. When a VM start is requested, if any volume that requires encryption is attached, it will filter out hosts that don't support encryption.
##### DB Changes
A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database. The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.
#### KVM Agent
For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process, and decrypted before the block device is visible to the guest. This may not be necessary in order to implement encryption for /your/ storage pool type, maybe you have a kernel driver that decrypts before the block device on the system, or something like that. However, it seemed like the simplest, common place to terminate the encryption, and provides the lowest surface area for decrypted guest data.
For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based (currently just ScaleIO storage), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for template copy.
Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:
1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.
2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the cloudstack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable` and when it is closed, it is deleted. This allows us to try-with-resources any volume operations and get the KeyFile removed regardless.
In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`. These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.
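A simplified sketch of the pattern described above: an illustrative KeyFile-style temporary file used in a try-with-resources block around a `qemu-img` call with `--object secret` and LUKS options (the helper class is a stand-in, not CloudStack's QemuObject/QemuImageOptions):

```java
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Illustrative key file: written to a temp dir (CloudStack uses the agent's configured tmpfs),
// deleted when closed, mirroring the KeyFile idea described above.
class TempKeyFile implements Closeable {
    final Path path;
    TempKeyFile(byte[] passphrase) throws IOException {
        path = Files.createTempFile("vol-key-", ".tmp");
        Files.write(path, passphrase);
    }
    @Override public void close() throws IOException { Files.deleteIfExists(path); }
}

public class QemuImgLuksSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        byte[] passphrase = "example-passphrase".getBytes(); // in CloudStack this comes from the passphrase table
        try (TempKeyFile key = new TempKeyFile(passphrase)) {
            // Create a LUKS-encrypted qcow2 volume; the secret object points qemu-img at the key file.
            List<String> cmd = List.of("qemu-img", "create", "-f", "qcow2",
                    "--object", "secret,id=sec0,file=" + key.path,
                    "-o", "encrypt.format=luks,encrypt.key-secret=sec0",
                    "/var/lib/libvirt/images/encrypted-volume.qcow2", "10G");
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }  // key file is deleted here, whether or not qemu-img succeeded
    }
}
```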
It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the supported encryption format strings for the `cryptsetup` utility has `LuksType.LUKS` while `QemuImg` has a `QemuImg.PhysicalDiskFormat.LUKS`.
Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice Libvirt and Qemu currently only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume, as they're stored in the LUKS header on the volume there is no need to store these elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and then available later as needed. In the future when LUKS2 is standard and fully supported, we could move to it as the default and old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.
Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption use the XTS-AES 256 cipher, which is the leading industry standard and widely used cipher today (e.g. BitLocker and FileVault).
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
This PR addresses parallel resource allocation as a generalization of the problem and solution described in #6644. Instead of a global lock on the resources, a reservation record is created, which is added to the resource count check in the ResourceLimitService/ResourceLimitManagerImpl. As a convenience, a CheckedReservation is created. This is an implementation of AutoCloseable and can be used as a guard in a try-with-resources fashion. The close method of the CheckedReservation will delete the reservation record.
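A minimal sketch of the try-with-resources pattern described above; the class mirrors the description but is simplified, not the exact CloudStack signature:

```java
// Simplified stand-in for the CheckedReservation described above.
class CheckedReservation implements AutoCloseable {
    private final long accountId;
    private final String resourceType;
    private final long amount;

    // Creates the reservation record; the resource limit check counts existing usage plus open reservations.
    CheckedReservation(long accountId, String resourceType, long amount) {
        this.accountId = accountId;
        this.resourceType = resourceType;
        this.amount = amount;
        System.out.printf("reserve %d %s for account %d%n", amount, resourceType, accountId);
        // The real ResourceLimitService would throw here if the limit would be exceeded, aborting the allocation.
    }

    @Override
    public void close() {
        // Deletes the reservation record once the real resource has been allocated (or allocation failed).
        System.out.printf("release reservation of %d %s for account %d%n", amount, resourceType, accountId);
    }
}

public class ReservationUsage {
    public static void main(String[] args) {
        try (CheckedReservation r = new CheckedReservation(2L, "volume", 1)) {
            // allocate the actual resource here; the open reservation guards against concurrent over-allocation
        } // reservation record removed automatically
    }
}
```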
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
This PR creates a new API, createConsoleAccess, to create a VM console URL, allowing connections from other UI implementations. To avoid replay attacks, console access is enhanced to use a one-time token per session
New configuration added:
consoleproxy.extra.security.validation.enabled: Enable/disable extra security validation for console proxy using a token
Documentation PR: apache/cloudstack-documentation#284
* Fix global setting reference for max secondary storage usage based on account or project
* Changed a variable naming
* Replaced config enum usage with configkey class for global settings
* Fixed grammar mistake
* Fixed code smells
This PR enhances the existing PowerFlex/ScaleIO storage plugin to support separate (storage) network for Hosts(KVM)/Storage connection, mainly the SDC (ScaleIo Data Client) connection.
* Support for live patching systemVMs and deprecating systemVM.iso. Includes:
- fix systemVM template version
- Include agent.zip, cloud-scripts.tgz to the commons package
- Support for live-patching systemVMs - CPVM, SSVM, Routers
- Fix Unit test
- Remove systemvm.iso dependency
* The following commit:
- refactors logic added to support SystemVM deployment on KVM
- Adds support to copy specific files (required for patching) to the hosts on Xenserver
- Modifies vmops method - createFileInDomr to take cleanup param
- Adds configurable sleep param to CitrixResourceBase::connect() used to verify if telnet to a specific port is possible (if sleep is 0, then default to _sleep = 10000ms)
- Adds Command/Answer for patch systemVMs on XenServer/Xcp
* - Support to patch SystemVMs - VMWare
- Remove attaching systemvm.iso to systemVMs
- Modify / Refactor VMware start command to copy patch related files to the systemvms
- cleanup
* Commit comprises of:
- remove docker from systemvm template - use containerd as container runtime
- update create-k8s-binaries script to use ctr for all docker operations
- Update userdata sent to the k8s nodes
- update cksnode script, run during patching of the cks/k8s nodes
* Add ssh to k8s nodes details in the Access tab on the UI
* test
* Refactor ca/cert patching logic
* Commit comprises of the following changes:
- Use restart network/VPC API to patch routers
- use livePatch API support patching of only cpvm/ssvm
- add timeout to the keystore setup/import script
* remove all references of systemvm.iso
* Fix keystore-cert-import invocation + refactor cert timeout in CP/SS VMs
* fix script timeout
* Refactor cert patching for systemVMs + update keystore-cert-import script + patch-sysvms script + remove patchSysvmCommand from networkelementcommand
* remove commented code + change core user to cloud for cks nodes
* Update ownership of ssh directory
* NEED TO DISCUSS - add on the fly template conversion as an ExecStartPre action (systemd)
* Add UI changes + move changes from patch file to runcmd
* test: validate performance for template modification during seeding
* create vms folder in cloudstack-commons directory - debian rules
* remove logic for on the fly template convert + update k8s test
* fix syntax issue - causing issue with shared network tests
* Code cleanup
* refactor patching logic - certs
* move logic of fixing rootdiskcontroller from upgrade to kubernetes service
* add livepatch option to restart network & vpc
* smooth upgrade of cks clusters
* add cgroup config for containerd
* add systemd config for kubelet
* add additional info during image registry config
* address comments
* add temp links of download.cloudstack.org
* address part of the comments
* address comments
* update containerd config - as version has upgraded to 1.5 from 1.4.12 in 4.17.0
* address comments - simplify
* fix vue3 related icon changes
* allow network commands when router template version is lower but is patched
* add internal LB to the list of routers to be patched on network restart with live patch
* add unit tests for API param validations and new helper utilities - file scp & checksum validations
* perform patching only for non-user i.e., system VMs
* add test to validate params
* remove unused import
* add column to domain_router to display software version and support networkrestart with livePatch from router view
* Requires upgrade column to consider package (cloud-scripts) checksum to identify if true/false
* use router software version instead of checksum
* show N/A if no software version reported i.e., in upgraded envs
* fix deb failure
* update pom to official links of systemVM template
* StorPool storage plugin
Adds volume storage plugin for StorPool SDS
* Added support for alternative endpoint
Added option to switch to alternative endpoint for SP primary storage
* renamed all classes from Storpool to StorPool
* Address review
* removed unnecessary else
* Removed check about the storage provider
We don't need this check; we can determine whether the snapshot is on StorPool by
its name from the path
* Check that current plugin supports all functionality before upgrade CS
* Smoke tests for StorPool plug-in
* Fixed conflicts
* Fixed conflicts and added missed Apache license header
* Removed whitespaces in smoke tests
* Added StorPool plugin jar for Debian
the StorPool jar will be included into cloudstack-agent package for
Debian/Ubuntu
* Refactor create volume snapshot with running VM
* Refactor create volume snapshot with stopped VM
* Refactor create volume from snapshot
* Refactor create template from snapshot
* Refactor volume migration (migrateVolume/ migrateVirtualMachineWithVolume)
* Refactor snapshot deletion
* Refactor snapshot reversion
* Adjusts and fix cherry-pick conflicts
* Remove diffuse tests
* Add validation to add the '--delete' flag to the 'virsh blockcommit' command only if the libvirt version is equal to or higher than 6.0.0
* Expunge temporary snapshot only if template creation is from snapshot
* Extract strings to constant
* Remove unused imports
* Fix error on revert backed up snapshot
* Turn method's return to void as it is not used
* Rename method in SnapshotHelper
* Fix folder creation when using SharedMountPoint pool
* Remove static import
* Remove unnused method
* Cover take snapshot in centos 7
* Handle right snapshot flag according to qemu version
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
* Add persistence of VM stats
* Fix API 'since' attribute
* Add license
* Address GutoVeronezi's reviews
* Fix the order of VM stats in the API response
* Fix msid in VM stats data
* Fix disk stats and add minor improvements
* Add log message
* Build string using ReflectionToStringBuilderUtils
* Rerun checks
Co-authored-by: joseflauzino <jose@scclouds.com.br>
* VM snapshots of running KVM instance using storage providers plugins for disk snapshots
Added a new virtual machine snapshot strategy which uses storage provider plugins to take/revert/delete snapshots.
You can take a VM snapshot without VM memory on a KVM instance, using storage provider implementations for disk snapshots.
Revert and delete are also added as functionality. Added Thaw/Freeze commands for KVM instances.
The snapshots will be consistent, because we freeze the VM during the snapshotting. Backup to secondary storage is executed after
the VM is thawed, and only if it is enabled in global settings.
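The ordering described above, sketched with hypothetical helper interfaces (in the real implementation the KVM agent issues the Freeze/Thaw commands via the qemu-guest-agent and the storage provider plugin takes the disk snapshots):

```java
public class VmStorageSnapshotFlowSketch {

    // Hypothetical helpers standing in for the agent commands and storage-plugin calls.
    interface GuestAgent { void freezeFilesystems(); void thawFilesystems(); }
    interface StoragePlugin { void takeDiskSnapshot(String volumeUuid); }

    // Take a consistent, memory-less VM snapshot by freezing the guest around the disk snapshots.
    static void takeVmSnapshot(GuestAgent agent, StoragePlugin storage,
                               java.util.List<String> volumeUuids, boolean backupToSecondary) {
        agent.freezeFilesystems();                  // requires qemu 1.6+ and qemu-guest-agent in the guest
        try {
            for (String uuid : volumeUuids) {
                storage.takeDiskSnapshot(uuid);     // delegated to the storage provider plugin
            }
        } finally {
            agent.thawFilesystems();                // always thaw, even if a snapshot fails
        }
        if (backupToSecondary) {
            // Backup runs only after the thaw, so the guest is not frozen during the copy.
        }
    }
}
```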
* Removed duplicated functionality
Set a few methods in DefaultVMSnapshotStrategy to protected to reuse them
without duplicating the code. Removed code that is actually not needed
* Added requirements in global setting kvm.vmstoragesnapshot.enabled
Added more information in kvm.vmstoragesnapshot.enabled global setting,
that it needs installation of:
- qemu version 1.6+
- qemu-guest-agent installed on guest virtual machine
when the option is enabled
* Added Apache license header
* Removed commented code
* If "kvm.vmstoragesnapshot.enabled" is null should be considered as false
* removed unused imports, replaced default template
Removed unused imports which were causing failures and replaced the template with
CentOS 8
* "kvm.vmstoragesnapshot.enabled" set to dynamic
* Getting status of freeze/thaw commands, not the return code
Will check the status of whether freeze/thaw of the Guest VM succeeded, rather than
looking for the return code. Code refactoring
* removed "CreatingKVM" VMsnapshot state and events related to it
* renamed AllocatedKVM to AllocatedVM
the states should not be associated to a hypervisor type
* logging the result of the "drive-backup" command
* Check which VM snapshot strategy could handle the vm snapshots
gets the best match of VM snapshot strategy which could handle the vm
snapshots on KVM.
Other storage plugins could integrate with this functionality to support group snapshots
* Added poolId in canHandle for KVM hypervisors
Added poolId into canHandle method used to check if all volumes are on
the same PowerFlex's storage pool
* skip smoke tests if the hypervisor's OS type is CentOS
This PR works with functionality included in qemu-kvm-ev which
does not come by default on CentOS. The smoke tests will be skipped if
the hypervisor OS is CentOS
* Added missed import in smoke test
* Suggested change to use `org.apache.commons.lang.StringUtils.isNotBlank`
* Fix getting device on Ubuntu
On Ubuntu the device isn't provided and we have to get it from the
node-name parameter. The drive-backup command (on Ubuntu) also needs a job-id, which
is the value of node-name (this extra param works on Ubuntu and CentOS as well).
* Removed new snapshot states and functionality for NFS
* throw CloudRuntimeException
provide a proper error message when deleting a VM snapshot fails
* exclude GROUP snapshots when listing snapshots
* Skip tests if there is pool with NFS/Local
* address comments
* Add NFS version to mount command
* Remove extra line
* Extend NFS version to mount secondary storage
* Unused import
* Refactor NFS version to be granular
* Make use of the ConfigKey on the NFS version setting value
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, if a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Also, creating a compute offering takes a few disk-related parameters which in any case only go to the corresponding disk offering. This design was likely initially made to address the compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless and should consider the new storage tags and new size, and place the volume in the appropriate state as defined in the disk offering.
more details are mentioned here https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring
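As an illustration of the direction taken in the commits below, here is a hedged sketch of calling the new changeOfferingForVolume API over the usual HTTP query interface. The parameter names (diskofferingid, automigrate, shrinkok) are taken from the commit messages and may differ slightly in the released API; the endpoint and IDs are placeholders and the request is left unsigned for brevity:
```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ChangeOfferingForVolumeSketch {
    public static void main(String[] args) throws Exception {
        String endpoint = "http://mgmt.example.com:8080/client/api"; // placeholder management server
        String query = String.join("&",
                "command=changeOfferingForVolume",
                "id=" + URLEncoder.encode("VOLUME-UUID", StandardCharsets.UTF_8),
                "diskofferingid=" + URLEncoder.encode("NEW-DISK-OFFERING-UUID", StandardCharsets.UTF_8),
                "automigrate=true",   // let CloudStack migrate the volume if the new offering needs other storage
                "shrinkok=false",     // do not allow shrinking the volume
                "response=json");
        // A real call must also be signed (apiKey/signature) or carry a session key; omitted here.
        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint + "?" + query)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```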
* Schema changes and disk offering column change from "type" to "compute_only"
* Few more changes
* Decoupled service offering and disk offering
* Remove diskofferingid from vminstance VO
* Decouple service offering and disk offering states
* diskoffering getsize() is only for strict disk offerings
* Fix deployVM flow
* Added new API params to compute offering creation
* Add diskofferingstrictness to serviceoffering vo under quota
* Added the overrideDiskOfferingId parameter to the deploy VM API, which overrides the disk offering for the root disk in both the template and ISO cases
Added the diskSizeStrictness parameter to the create disk offering API, which decides whether to restrict resizing or changing the disk offering of a volume
* Fix User vm response to show proper service offering and disk offerings
* Added disk size strictness in disk offering response
* Added disk offering strictness to the service offering response
* Remove comments
* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form
* Added diskoffering details to the service offering response
* Added UI changes in deployvm wizard to accept override disk offering id
* Fix delete compute offering
* Fix VM deployment from custom service offering
* Move uselocalstorage column access from service offering to disk offering
* UI: Separated compute and disk related parameters in the add compute offering wizard, and added association to a disk offering
* Fixed diskoffering automatic selection on add compute offering wizard
* UI: move compute only toggle button outside the box in add compute offering wizard
* Added the volumeId parameter to the listDiskOfferings API; the disksizestrictness flag of the current disk offering is honored while listing disk offerings
* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration
* Added disk offering change checks during resize volume operation
* Added new API changeofferingforVolume API and corresponding changes
* Add UI form for changeOfferingForVolume API
* Fix UI conflicts
* Fix service offering usage as disk offering
* Fix unit test failures
* fix user_vm_view
* Addressed review comments
* Fixed service_offering_view
* Fix service offering edit flow
* Fix service offering constructor to address custom offering
* Fix domain_router_view to get proper service offering id
* Removed unused import
* Addressed review comments and fixed update service offering flow with storage tags
* Added marvin test cases for checking disk offering strictness
* review comments addressed
* Remove system_use column from disk offering join
* update volume_view to update system_use column from service offering and not disk offering
* Fix changeOfferingForVolume API for custom disk offering
* Fix global setting implementation
* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view
* Changes for override root disk offering in deployvm wizard in case of custom offering
* Fix a unit test case
* Fixed recent unit test cases with new serviceofferingvo constructor
* Fix unit test in VolumeApiServiceImpl
* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow
* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering
* Fix smoke test failures
* Added tool tip for migrate volume UI form
* Address review comments and fix UI form of deploy VM in case of ISO.
* Fixed resize volume UI form for data disk
* UI changes to disable override root disk size when override root disk offering is enabled
* UI fix in deploy vm wizard
* Fix listdiskoffering after rebasing with main
* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list
* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form
* Fix false response on updateDiskOffering API
* Added search field for changeofferingforvolume UI form
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Removed DB changes from 4.16 upgrade file
* Resolving merge conflicts with main 4.17
* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.
* UI: Added automigrate checkbox in scale VM form
* Added since attributes to new API params
* Added shrinkOK parameter to changeofferingforvolume API
* Added shrinkOk param to UI in changeOfferingforVolume form
* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form
* Removed old foreign key constraint on IDs of service offering and disk offering
* Allow resize and automigrate of root volume if required in all cases of service offering change
* Allow only resize to higher disk size from UI
* Fixing vue syntax error
* Make UI changes to provide a root disk size box when the linked disk offering is custom
* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms
* Fix resize volume operation to update the VM settings
* Fix migratevolume form to pick selected storage pool id in list diskofferings API
* Add the list of supported namespaces per document and refactor the disks extraction by using the namespaces
* Refactor matching the default OVF schema
* Move parser methods to a new class and refactor
* Fix import, unit tests
* Reduce indentation
* Address review comments
* core: use the URL scheme same as iframe for non-SSL enabled consoles
For environments where SSL is not enabled for the console, this forces the
console URL scheme (http/https) to match the scheme of the iframe URL.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
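A hypothetical helper illustrating the scheme selection, not the actual console proxy code:
```java
public final class ConsoleUrlSchemeSketch {
    private ConsoleUrlSchemeSketch() {}

    // When SSL is not enabled for the console proxy, reuse the scheme of the page
    // embedding the iframe so browsers do not block the console as mixed content.
    public static String consoleScheme(boolean consoleSslEnabled, String iframePageScheme) {
        return consoleSslEnabled ? "https" : iframePageScheme;
    }

    public static void main(String[] args) {
        System.out.println(consoleScheme(false, "http"));  // -> http
        System.out.println(consoleScheme(true, "http"));   // -> https
    }
}
```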
* consoleproxy: enable SSL on CPVM when both console proxy url/domain and
ssl setting are configured
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix unit test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* address code review comments
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Change in constructors
* Naming changes
* Remove commented code
* Refactor code for more readability
* Addressed review comments on code refactor
* Extend addAnnotation and listAnnotations APIs
* Allow users to add, list and remove comments
* Add adminsonly UI and allow admins or owners to remove comments
* New annotations tab
* In progress: new comments section
* Address review comments
* Fix
* Fix annotationfilter and comments section
* Add keyword and delete action
* Fix and rename annotations tab
* Update annotation visibility API and update comments table accordingly
* Allow users seeing all the comments for their owned resources
* Extend comments for volumes and snapshots
* Extend comments to multiple entities
* Add uuid to ssh keypairs
* SSH keypair UI refactor
* Extend comments to the infrastructure entities
* Add missing entities
* Fix upgrade version for ssh keypairs
* Fix typo on DB upgrade schema
* Fix annotations table columns when there is no data
* Extend the list view of items to show if they have comments
* Remove extra test
* Add annotation permissions
* Address review comments
* Extend marvin tests for annotations
* updating ui stuff
* addition to toggle visibility
* Fix pagination on comments section
* Extend to kubernetes clusters
* Fixes after last review
* Change default value for adminsonly column
* Remove the required field for the annotationfilter parameter
* Small fixes on visibility and other fixes
* Cleanup to reduce files changed
* Rollback extra line
* Address review comments
* Fix cleanup error on smoke test
* Fix sending incorrect parameter to checkPermissions method
* Add check domain access for the calling account for domain networks
* Fix only display annotations icon if there are comments the user can see
* Simply change the Save button label to Submit
* Change order of the Tools menu to prevent users getting a 404 error on clicking the text instead of expanding
* Remove comments when removing entities
* Address review comments on marvin tests
* Allow users to list annotations for an entity ID
* Allow users to see all comments for allowed entities
* Fix search filters
* Remove username from search filter
* Add pagination to the annotations tab
* Display username for user comments
* Fix add permissions for domain and resource admins
* Fix for domain admins
* Trivial but important UI fix
* Replace pagination for annotations tab
* Add confirmation for delete comment
* Lint warnings
* Fix reduced list as domain admin
* Fix display remove comment button for non admins
* Improve display remove action button
* Remove unused parameter on groupShow
* Include a clock icon to the all comments filter except for root admin
* Move cleanup SQL to the correct file after rebasing main
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
* Create utility to centralize byte conversions
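A sketch of what such a centralized utility might look like; the method names are illustrative and not necessarily those of the ByteScaleUtils class added here:
```java
public final class ByteConversionSketch {
    private ByteConversionSketch() {}

    public static final long KIB = 1024L;
    public static final long MIB = KIB * 1024L;
    public static final long GIB = MIB * 1024L;

    // Long-based conversions so large memory sizes do not overflow int arithmetic.
    public static long bytesToMib(long bytes) {
        return bytes / MIB;
    }

    public static long mibToBytes(long mib) {
        return mib * MIB;
    }

    public static long gibToBytes(long gib) {
        return gib * GIB;
    }
}
```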
* Add/change toString definitions
* Create Libvirt handler to ScaleVmCommand
* Enable dynamic scaling of VMs with KVM
* Move config from interface to class and rename it
As every variable declared in an interface is already final,
this move is needed to be able to mock tests in the next commits
* Configure VM max memory and CPU cores
The values are set according to the service offering or global configs
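For illustration, the libvirt domain XML elements involved in setting a hotplug ceiling and current allocation might be rendered like this. This is a sketch only, not the actual LibvirtVMDef code; real memory hotplug also needs a NUMA topology and slot sizing, which are omitted:
```java
public class GuestResourceDefSketch {
    // Renders the memory/vCPU elements of a libvirt domain XML fragment for dynamic scaling.
    static String memoryAndCpuXml(long maxMemKiB, long currentMemKiB, int maxVcpus, int currentVcpus) {
        return "<maxMemory slots='16' unit='KiB'>" + maxMemKiB + "</maxMemory>\n"
             + "<memory unit='KiB'>" + maxMemKiB + "</memory>\n"
             + "<currentMemory unit='KiB'>" + currentMemKiB + "</currentMemory>\n"
             + "<vcpu current='" + currentVcpus + "'>" + maxVcpus + "</vcpu>";
    }

    public static void main(String[] args) {
        // Example: 8 GiB ceiling, 4 GiB currently allocated, 8 vCPUs max, 4 online.
        System.out.println(memoryAndCpuXml(8L * 1024 * 1024, 4L * 1024 * 1024, 8, 4));
    }
}
```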
* Extract dpdk configuration to a method and test it
* Extract OS desc config to a method and test it
* Extract guest resource def to a method and test it
Improve libvirt def
* Refactor LibvirtVMDef.GuestResourceDef
* Refactor ScaleVmCommand
* Improve VMInstanceVO toString()
* Refactor upgradeRunningVirtualMachine method
* Turn int variables into long on utility
* Verify if VM is scalable on KVMGuru
* Rename some KVMGuruTest's methods
* Change vm's xml to work with max memory
* Verify if service offering is dynamic before scale
* Create methods to retrieve data from domain
* Create def to hotplug memory
* Adjust the way command was scaling the VM
* Fix database persistence before executing command
* Send more info to host to improve log
* Fix var name
* Fix missing "}"
* Undo unnecessary changes
* Address review
* Fix scale validation
* Add VM prepared for dynamic scaling validation
* Refactor LibvirtScaleVmCommandWrapper and improve unit tests
* Remove duplicated method
* Add RuntimeException check
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Update ByteScaleUtilsTest.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Added disk provisioning type support for VMWare
* Review changes
* Fixed unit test
* Review changes
* Added missing licenses
* Review changes
* Update StoragePoolInfo.java
Removed white space
* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id
* Delete __init__.py
* Merge fix
* Fixed failing test
* Added comment about parameters
* Added error log when update fails
* Added exception when using API
* Ordering storage pool selection to prefer thick disk capable pools if available
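A sketch of the ordering idea, using a placeholder pool type with a capability flag:
```java
import java.util.Comparator;
import java.util.List;

public class PoolOrderingSketch {
    // Minimal stand-in for a storage pool with a thick-provisioning capability flag.
    record Pool(String name, boolean supportsThickProvisioning) {}

    // Order candidate pools so that thick-provisioning-capable ones are tried first,
    // without filtering out the others.
    static void preferThickCapable(List<Pool> pools) {
        pools.sort(Comparator.comparing((Pool p) -> !p.supportsThickProvisioning()));
    }
}
```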
* Removed unused parameter
* Reordering changes
* Returning storage pool details after update
* Removed multiple pool update, updated marvin test, removed duplicate enum
* Removed comment
* Removed unused import
* Removed for loop
* Added missing return statements for failed checks
* Class name change
* Null pointer
* Added more info when a deployment fails
* Null pointer
* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Small bug fix on API response and added missing bracket
* Removed datastore cluster code
* Removed unused imports, added missing signature
* Removed duplicate config key
* Revert "Added more info when a deployment fails"
This reverts commit 2486db78dc.
Co-authored-by: dahn <daan.hoogland@gmail.com>
Inclusivity changes for CloudStack
- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.
This PR updates the default git branch to 'main', as part of #4887.
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Support for a datastore cluster as primary storage already exists. However, changes made to the datastore cluster at vCenter, such as the addition or removal of a datastore, are not synchronised with CloudStack directly; the primary storage has to be removed from CloudStack and added again.
Here, synchronisation of the datastore cluster is fixed without the need to remove and re-add the datastore cluster.
1. A new API, syncStoragePool, is introduced which takes the datastore cluster storage pool UUID as a parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server creates a new child storage pool in the database under the datastore cluster. If the new child datastore was already added as an individual storage pool, the existing storage pool entry is converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastores.
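A sketch of the reconciliation described in points 1-3, with illustrative names and types rather than the actual syncStoragePool implementation:
```java
import java.util.HashSet;
import java.util.Set;

public class DatastoreClusterSyncSketch {
    // Given the child datastores reported by vCenter and the child pools known to
    // CloudStack, decide what to add, convert, or detach.
    static void reconcile(Set<String> vcenterChildren, Set<String> cloudstackChildren,
                          Set<String> standalonePools) {
        for (String ds : vcenterChildren) {
            if (cloudstackChildren.contains(ds)) {
                continue;                                                     // already in sync
            } else if (standalonePools.contains(ds)) {
                System.out.println("convert standalone pool to child pool: " + ds);
            } else {
                System.out.println("create new child pool: " + ds);
            }
        }
        for (String ds : new HashSet<>(cloudstackChildren)) {
            if (!vcenterChildren.contains(ds)) {
                System.out.println("detach from cluster, keep as standalone pool: " + ds);
            }
        }
    }
}
```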
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives the option of separating the first right subnet from the rest, emitting individual statements for each right subnet for better cross-compatibility.
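A sketch of the split-connections idea; the real logic lives in the virtual router configuration scripts, so the names here are illustrative only:
```java
import java.util.ArrayList;
import java.util.List;

public class SplitConnectionsSketch {
    // With split connections enabled, one statement is emitted per peer (right) subnet
    // instead of a single statement listing all of them.
    static List<String> rightSubnetStatements(List<String> peerSubnets, boolean splitConnections) {
        List<String> statements = new ArrayList<>();
        if (splitConnections) {
            for (String subnet : peerSubnets) {
                statements.add("rightsubnet=" + subnet);
            }
        } else {
            statements.add("rightsubnet=" + String.join(",", peerSubnets));
        }
        return statements;
    }
}
```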
Backported from PR: #4137
update per PR suggestion
Fixes #3138
Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Fixes: #4808, #4941
This PR adds a force flag to the attachIso / detachIso commands, especially for VMware, where it is noticed that trying to detach an ISO, or attach an ISO when another is already present, fails to perform the necessary operation. This is because, on the ACS side, the question returned by ESXi for the CDRom disconnect operation is either answered as No (for the detach operation) or not answered at all (for the attach operation).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
A VM can have multiple ROOT volumes and some can be on a zone-wide store, therefore iterate over all of them until a cluster ID is found.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
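A sketch of that lookup, using a placeholder type instead of the actual volume and pool DAOs:
```java
import java.util.List;
import java.util.Objects;
import java.util.Optional;

public class ClusterLookupSketch {
    // Minimal placeholder for a ROOT volume's placement; zone-wide pools have no cluster ID.
    record VolumePlacement(String volumeUuid, Long clusterId) {}

    // Iterate over all ROOT volumes and take the first one that is on a cluster-scoped pool.
    static Optional<Long> findClusterId(List<VolumePlacement> rootVolumes) {
        return rootVolumes.stream()
                .map(VolumePlacement::clusterId)
                .filter(Objects::nonNull)
                .findFirst();
    }
}
```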
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake, VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster during a stopped VM migration. Also, find the target datastore using the host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes #4244:
- deploying VMs from ISOs and from templates with the UEFI boot type
- deploying VMs from ISOs and from templates with the UEFI boot type with volumes in RAW format
This PR aims at introducing persistence mode in L2 networks and enhancing the behavior in Isolated networks
Doc PR apache/cloudstack-documentation#183
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* novnc: Add client IP check for the noVNC console in CloudStack 4.16
* novnc ip check: Fix restarting the CPVM or management server not updating the noVNC param
* novnc ip check: move the check to a method
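A hypothetical sketch of the client IP check, not the actual console proxy code:
```java
public final class NoVncClientIpCheckSketch {
    private NoVncClientIpCheckSketch() {}

    // Accept the websocket connection only if the source IP matches the client IP
    // recorded when the console URL was generated, and only when the check is enabled.
    public static boolean isConnectionAllowed(boolean checkEnabled, String recordedClientIp,
                                              String connectingIp) {
        if (!checkEnabled || recordedClientIp == null) {
            return true;
        }
        return recordedClientIp.equals(connectingIp);
    }
}
```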