Retrieving templates with an inner join on the host table was turning out to be
more expensive than finding the distinct hypervisor types for a zone from the
host table and then finding templates for those types.
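A minimal JDBC sketch of the reworked two-step lookup, assuming CloudStack's host and vm_template schemas (the real change goes through the DAO layer; the class and method names here are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ZoneTemplateLookup {
    public List<Long> listTemplateIdsForZone(Connection conn, long zoneId) throws SQLException {
        // Step 1: the distinct hypervisor types among the zone's hosts (a cheap scan).
        List<String> types = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT DISTINCT hypervisor_type FROM host "
                        + "WHERE data_center_id = ? AND removed IS NULL")) {
            ps.setLong(1, zoneId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    types.add(rs.getString(1));
                }
            }
        }
        List<Long> templateIds = new ArrayList<>();
        if (types.isEmpty()) {
            return templateIds;
        }
        // Step 2: templates for those types, avoiding the inner join on host.
        String placeholders = String.join(",", Collections.nCopies(types.size(), "?"));
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id FROM vm_template WHERE hypervisor_type IN (" + placeholders + ") "
                        + "AND removed IS NULL")) {
            for (int i = 0; i < types.size(); i++) {
                ps.setString(i + 1, types.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    templateIds.add(rs.getLong(1));
                }
            }
        }
        return templateIds;
    }
}
```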
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
During agent join and when the configs host and indirect.agent.lb.algorithm change, optimize the DB calls for the zone's host list
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Follow up for #9628
Creates a utility class LazyCache, which currently wraps the Caffeine library's Cache class.
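A minimal sketch of what such a Caffeine-backed wrapper could look like; the class name comes from the commit, while the constructor parameters and method names are assumptions:

```java
import java.time.Duration;
import java.util.function.Function;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class LazyCache<K, V> {
    private final Cache<K, V> cache;
    private final Function<K, V> loader;

    public LazyCache(long maximumSize, long expireSeconds, Function<K, V> loader) {
        this.cache = Caffeine.newBuilder()
                .maximumSize(maximumSize)
                .expireAfterWrite(Duration.ofSeconds(expireSeconds))
                .build();
        this.loader = loader;
    }

    /** Compute-on-miss: the value is loaded lazily on first access. */
    public V get(K key) {
        return cache.get(key, loader);
    }

    public void invalidate(K key) {
        cache.invalidate(key);
    }
}
```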
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Added caching for ConfigKey value retrievals based on the Caffeine
in-memory caching library.
https://github.com/ben-manes/caffeine
Currently, the expiry time for a cache entry is 1 minute, and each update of a
config key invalidates its cache entry.
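An illustrative sketch of the behavior described above, a one-minute expireAfterWrite cache whose entry is invalidated whenever the setting is updated (identifiers here are hypothetical, not CloudStack's actual ones):

```java
import java.time.Duration;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class ConfigValueCache {
    private final Cache<String, String> values = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofMinutes(1)) // entries expire after 1 minute
            .build();

    public String getValue(String key) {
        // Falls back to the database on a miss; the result is cached for at most a minute.
        return values.get(key, this::loadFromDb);
    }

    public void onConfigUpdated(String key) {
        // Each update of the config key invalidates its cache entry.
        values.invalidate(key);
    }

    private String loadFromDb(String key) {
        return null; // placeholder for the DAO lookup
    }
}
```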
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* test: improve purge expunged resources b/g task testcase
Failures were seen during the purging of expunged resources via the
background task in different test runs. This PR tries to make sure the
b/g task execution is not skipped after an MS restart in a multi-MS
environment. It also updates expunged.resources.purge.interval to allow
running the task again in 60s.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* new string formatting
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Don't send the SQL exception/query from the DAO to upper layers; log it and send only the error message
* Updated the charset to utf8mb4 for the display_name column of the user_vm table and the job_result column of the async_job table, to support Unicode characters and emojis (see the migration sketch after this list)
* Added an API arg validator for RFC-compliant domain names, to validate a VM's host name
* Updated user resources' name / display name columns' charset to utf8mb4
* Check and update the charset of the affinity group name column to utf8mb4 via the data migration in the upgrade path
* Updated the backup offering name column's charset to utf8mb4
* Added unit tests for VM host/domain name validation
* Added a smoke test to check resource names for VM, volume, service & disk offering, template, ISO, and account (first/last name)
* Updated the resource annotation charset to utf8mb4
* Updated some resources' description charset to utf8mb4
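A hedged sketch of what one such upgrade-path migration could look like, run from Java as CloudStack's upgrade steps are; the class name, column length, and collation here are assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Utf8mb4ColumnMigration {
    // utf8mb4 stores up to 4 bytes per character; MySQL's legacy "utf8"
    // (utf8mb3) caps at 3 bytes and rejects emojis and other supplementary
    // Unicode characters.
    public void migrateDisplayName(Connection conn) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(
                "ALTER TABLE `cloud`.`user_vm` MODIFY COLUMN `display_name` "
                        + "VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci")) {
            stmt.executeUpdate();
        }
    }
}
```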
- mTLS implementation for cluster service communication
- Listen only on the specified cluster node IP address instead of all interfaces
- Validate that incoming cluster service requests come from peer management servers, based on the DNS name in the server's certificate, which can be set through the global config ca.framework.cert.management.custom.san (see the SAN-check sketch after this list)
- Hardening of KVM command wrapper script execution
- Improve API server integration port check
- cloudstack-management.default: don't include JMX configuration if it is not needed. JMX is used for instrumentation; users who need it should enable it explicitly
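An illustrative check of the idea, not the actual cluster-service code: verify that a peer's certificate carries the expected DNS name in its Subject Alternative Name extension (the expected name would come from ca.framework.cert.management.custom.san):

```java
import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

public class PeerCertValidator {
    private static final int SAN_TYPE_DNS = 2; // RFC 5280 GeneralName dNSName

    static boolean hasExpectedDnsSan(X509Certificate cert, String expectedDnsName)
            throws CertificateParsingException {
        Collection<List<?>> sans = cert.getSubjectAlternativeNames();
        if (sans == null) {
            return false;
        }
        for (List<?> san : sans) {
            // Each entry is a [type, value] pair; type 2 carries a DNS name string.
            if ((Integer) san.get(0) == SAN_TYPE_DNS
                    && expectedDnsName.equalsIgnoreCase((String) san.get(1))) {
                return true;
            }
        }
        return false;
    }
}
```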
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit 4f5561937c)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Mitigation for non-scalable Powerflex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections: it checks the client limit and prepares/unprepares the SDC on the hosts.
- Added commands to prepare and unprepare storage clients, which start and stop the SDC service on the hosts respectively.
- Introduced the config 'storage.pool.connected.clients.limit' at the storage level for client limits; currently supported for PowerFlex only.
* Fixed tests issue
* Refactoring / improvements
* Lock on the PowerFlex system ID while checking the connections limit
* Updated the PowerFlex system ID lock to be held until SDC preparation completes (see the locking sketch after this list)
* Added custom stats support for storage pools, through the listStoragePools API
* Code improvements and unit tests
* Updated the config 'storage.pool.connected.clients.limit' to be dynamic, with some improvements
* Stop the SDC on a host after migration if no volumes are mapped to that host
* Wait for the SDC to connect after the scini service starts, with some log improvements
* Do not throw an exception (log it instead) when the SDC is not connected while revoking access to a PowerFlex volume
* Some log improvements
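A simplified sketch of the per-system locking described above (not the actual ScaleIOSDCManager code; the names and the limit semantics are assumptions): the connection-limit check and the subsequent SDC preparation are serialized per PowerFlex system ID, so concurrent host preparations cannot oversubscribe the client limit.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

public class SdcConnectionLimiter {
    private final Map<String, ReentrantLock> systemLocks = new ConcurrentHashMap<>();

    /**
     * Checks the client limit and, if there is room, runs the SDC preparation
     * while still holding the per-system lock, mirroring "hold till SDC
     * preparation" from the commit above.
     */
    public boolean prepareSdcIfWithinLimit(String systemId, int connectedClients,
            int limit, Supplier<Boolean> prepareSdc) {
        ReentrantLock lock = systemLocks.computeIfAbsent(systemId, id -> new ReentrantLock());
        lock.lock();
        try {
            // Assumption for this sketch: limit <= 0 means "no limit".
            if (limit > 0 && connectedClients >= limit) {
                return false;
            }
            return prepareSdc.get();
        } finally {
            lock.unlock();
        }
    }
}
```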
* kvm: Add support for cgroupv2 (#8252)
1. Problem description
In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host; it has a shares property that defines the weight of the VM for access to the host CPU. The value of this property has no unit and is a relative measure used to calculate how much CPU a given VM will get on the host. However, this value has a limit, which depends on the version of cgroup used by the host's kernel. The problem lies in the range of shares, which varies between the two versions: [2, 262144] for cgroups version 1, and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency, both specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000 and an exception will be thrown by libvirt if the host uses cgroup v2. The second version is becoming the default in current Linux distributions; thus, it is necessary to address this limitation.
Equation 1
shares = CPU * speed
Fixes: #6744
2. Proposed changes
To address the problem described, we propose applying a scale conversion that considers the max shares of the host. Using the same formula currently used by ACS, it is possible to calculate the maximum shares of a VM for a given host; in other words, the number of cores and the nominal speed of the host's CPU set the upper limit of shares allowed for a VM. This value is then scaled to cgroup v2's allowed interval of [1, 10000] using a linear scale conversion.
The VM shares would be calculated as in Equation 2, presented below, where VM requested shares is the requested shares value calculated using Equation 1, cgroup upper limit is fixed at 10000 (the cgroup v2 upper limit), and host max shares is the maximum shares value of the host, calculated using Equation 1. Using Equation 2, the only case where a VM exceeds the cgroup v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS.
Equation 2
shares = (VM requested shares * cgroup upper limit)/host max shares
To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added while finding a suitable host: the max shares of each candidate host will be calculated, and the VM's calculated shares will be checked to ensure they do not surpass the host's value. Likewise, VM migration will gain a similar verification. Lastly, VM scaling will perform the same verification against the VM's host.
To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.
It is important to note that, for now, these changes apply only to hosts with the KVM hypervisor using cgroup v2.
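A small, self-contained sketch of the conversion described by Equations 1 and 2 (the names are illustrative; the actual change lives in CloudStack's KVM resource layer):

```java
public class CgroupV2Shares {
    private static final int CGROUP_V2_UPPER_LIMIT = 10000;

    /** Equation 1: shares = CPU * speed, with cores and speed in MHz. */
    static int requestedShares(int cpuCores, int cpuSpeedMhz) {
        return cpuCores * cpuSpeedMhz;
    }

    /**
     * Equation 2: linearly scale the requested shares into cgroup v2's
     * [1, 10000] range, using the host's maximum shares as the upper bound.
     */
    static int scaledShares(int vmRequestedShares, int hostMaxShares) {
        return (int) ((long) vmRequestedShares * CGROUP_V2_UPPER_LIMIT / hostMaxShares);
    }

    public static void main(String[] args) {
        int hostMax = requestedShares(32, 2400); // host: 32 cores at 2.4 GHz base frequency
        int vmShares = requestedShares(6, 2000); // offering from Section 1: 6 cores at 2 GHz
        System.out.println(scaledShares(vmShares, hostMax)); // 1562, within [1, 10000]
    }
}
```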
* Update overcommit ratio during live VM migration
* minor refactoring
---------
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>