A separate service account will be created and added to the project, if it
does not already exist, when a Kubernetes cluster is deployed in a project.
This account will have a role with limited API access.
Clean up clusters on owner account cleanup; delete the service account
if needed
When the owner account of k8s clusters is deleted, its node VMs get
expunged but the cluster entries remain present in the DB. This fixes the
issue by cleaning up all clusters belonging to the deleted account.
The project k8s service account will be deleted on account cleanup, or when
no active k8s cluster remains
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* kvm: fix vm deployment from RAW template
* Update plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
---------
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, whether or not they have a storage access group.
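As a rough illustration, the connectivity rule can be expressed as the following predicate (a minimal sketch with hypothetical names, not the actual CloudStack implementation):
```
import java.util.Collections;
import java.util.Set;

// Sketch of the rule above: a pool without storage access groups connects to
// every host; a pool with groups connects only to hosts sharing at least one.
public class StorageAccessGroupRule {

    static boolean canConnect(Set<String> hostGroups, Set<String> poolGroups) {
        if (poolGroups == null || poolGroups.isEmpty()) {
            return true; // ungrouped pool connects to all hosts
        }
        if (hostGroups == null || hostGroups.isEmpty()) {
            return false; // grouped pool does not connect to ungrouped hosts
        }
        return !Collections.disjoint(hostGroups, poolGroups);
    }
}
```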
* VMware - Ignore disk not found error on cleanup when the VM disk doesn't exist
* VMware - Retry powerOn on lock issues
* addressed comments
* Update CPVM reboot tests - wait for the agent to Disconnect and back Up
* Retry moveDatastoreFile when any file access issue while creating volume from snapshot
* Update full clone flag when restoring vm using root disk offering with more size than the template size
* refactored (mainly for diskInfo - which caused an NPE in some cases)
* Retry moveDatastoreFile when there is any file access issue
* Reset the pool id when create volume fails on the allocated pool
- the pool id is persisted while creating the volume; when creation fails, the pool id is not reverted. On the next create volume attempt, CloudStack couldn't find any suitable primary storage, even though pools with enough capacity were available, because the pool was already assigned to the volume in Allocated state (and the storage pool compatibility check fails). Ensure the volume is not assigned to any pool if create volume fails, so the next creation job can pick a suitable pool.
* endpoint check for resize
* update the resize error through callback result instead of exception
* Add & Remove PowerFlex/ScaleIO MDMs while preparing & unpreparing the storage SDC connections (instead of start & stop scini)
* Add/Remove MDM IP addresses during Host connection/disconnection to/from storage pool when powerflex.connect.on.demand is false
* unit test fixes
* Don't remove MDM IPs from SDC when any volumes are mapped to the SDC
* Don't remove MDM IPs when other pools of same ScaleIO/PowerFlex cluster are connected
* rebase fixes
* update changes, to not remove/disconnect MDMs on maintenance
* import fixes after rebase
* Consider the clusters with allocation state 'Enabled' for EndPoint selection (in addition to Host state)
* logger fix
* KVM incremental snapshot feature
* fix log
* fix merge issues
* fix creation of folder
* fix snapshot update
* Check for hypervisor type during parent search
* fix some small bugs
* fix tests
* Address reviews
* do not remove storPool snapshots
* add support for downloading diff snaps
* Add multiple zones support
* make copied snapshots have normal names
* address reviews
* Fix in progress
* continue fix
* Fix bulk delete
* change log to trace
* Start fix on multiple secondary storages for a single zone
* Fix multiple secondary storages for a single zone
* Fix tests
* fix log
* remove bitmaps when deleting snapshots
* minor fixes
* update sql to new file
* Fix merge issues
* Create new snap chain when changing configuration
* add verification
* Fix snapshot operation selector
* fix bitmap removal
* fix chain on different storages
* address reviews
* fix small issue
* fix test
---------
Co-authored-by: João Jandre <joao@scclouds.com.br>
* Don't set signingRegion as auto for creating the s3 client in ceph object store provider.
* replace getBucketAcl with doesBucketExistV2 in CephObjectStoreDriverImplTest
* KVM: add Virtual TPM model and version
* KVM: add admin-only VM setting GUEST.CPU.MODE and GUEST.CPU.MODEL
* VMware: add vTPM
* vTPM: do not set Key due to 'Cannot add multiple devices using the same device key..'
* vTPM: add unit test testTpmModel
* engine/schema: remove user vm details for guest CPU mode/model
* vTPM: extra methods as per Daan's request
* vTPM: add unit tests in VmwareResourceTest
* vTPM: update unit tests in VmwareResourceTest
* vTPM: add unit test in LibvirtComputingResourceTest
* vTPM: use the default TPM version if an invalid version is passed
* vTPM: requires UEFI on vmware and do nothing if it is not enabled/disabled
* vTPM: let users add UEFI on vmware
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* vTPM: remove template details for guest CPU mode/model
* UI: boot vm from ISO into UEFI/SECURE mode
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Dependency name changed from mockito-inline to mockito-core. Inline mocking is now the default, and the last released version of mockito-inline is 5.2.0.
assertj-core in user-authenticators/saml2 pulls in an incompatible version of byte-buddy and required an exclusion. Updating the version of assertj is left for a future PR.
The upgrade requires Java 11+, dropping support for Java 8. CloudStack documentation already says to use Java 11 and does not indicate that Java 8 is supported.
Test classes using @RunWith(MockitoJUnitRunner.class) now get run in strict mode. Changes were made to tests where the stubbing intention was clear. In ManagementServerMaintenanceManagerImplTest there are 5 tests where the intention of the test is unclear. Each of the statements now use Mockito.lenient() to avoid the exception. Other cases in the tests follow a similar pattern.
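For reference, a minimal sketch of the lenient-stubbing pattern applied in those tests (the mocked type and stubbing are hypothetical, not from the actual test classes):
```
import static org.mockito.Mockito.lenient;

import java.util.function.Supplier;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

// Under MockitoJUnitRunner's strict mode, a stubbing the test never exercises
// fails with UnnecessaryStubbingException; Mockito.lenient() exempts it.
@RunWith(MockitoJUnitRunner.class)
public class LenientStubbingExampleTest {

    @Mock
    private Supplier<String> supplier;

    @Test
    public void testWithPossiblyUnusedStubbing() {
        lenient().when(supplier.get()).thenReturn("value");
        // test body that may or may not call supplier.get()
    }
}
```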
Minor clean up.
Both @Spy and Mockito.spy(...) should not be used together. Favored the annotation.
Both @RunWith(MockitoJUnitRunner.class) and MockitoAnnotations.openMocks(this) should not be used together. Favored the annotation.
Unnecessary `extends TestCase` removed.
@InjectMocks together with `new` in the field initializer is unnecessary. Removed `new` where the issue presented.
Some of the Cmd classes like UpdateNetworkCmd have a type tree that includes fields of type Object. This appears to cause issues with injection, requiring that @Mock fields be available. This is where the following fields were added in multiple places:
```
Object job;
ResponseGenerator _responseGenerator;
```
Wrong number of parameters for Mockito.when in LibvirtRevertSnapshotCommandWrapperTest.java
* 4.20:
xenserver: do not destroy halted hypervisor vm (#9175)
define the limit of projects through the UI (#10652)
fix projects metrics on dashboard (#10651)
systemvm: Bump systemvm template version to debian 12.10 (#10628)
Enhance VPC Network Tier form to auto-populate Gateway and Netmask (#10617)
* Readd filename string on qemuimg create
* Remove empty object on the data pool details of storage pools with no data pool
* Only use the method createPhysicalDiskByLibVirt with RBD when the pool is of erasure code type. Also added javadoc for createPhysicalDisk method
* Change literal '/' string to File.separator
* Add support for erasure code pools
* Fix null on putAll
* Update last agents during ms maintenance, and some code improvements
* Send 503 (Service Unavailable) response status when maintenance or shutdown is initiated
[Any load balancer in the clustered environment can avoid routing requests to this MS node; see the sketch after this commit list]
* Migrate systemvm agents before routing host agents, and some code improvements
* Added events for ms maintenance and shutdown operations
* Added the following ms maintenance and shutdown improvements
- block new agent connections during prepare for maintenance of ms
- maintain avoid ms list
- propagate updated management servers list and lb algorithm in host and indirect.agent.lb.algorithm settings respectively, to systemvm (non-routing) agents
- updated setup ms list and migrate agent connections to executor service
- migrate agent connection through executor, and send the answer to the ms host that initiated the migration
- re-initialize ssl handshake executor if it is shutdown
- don't allow prepare for maintenance or shutdown when other management server nodes are in preparing states
- don't allow trigger shutdown when management server is up and other management server nodes are in preparing states
- stop agent connections monitor on ms maintenance
- update avoid ms list in ready command
- updated connected host from the client connection
- update last agents in ms metrics from the database
- updated some agent config descriptions
- update last management server in the hosts during shutdown
- added agents and lastagents in management server response
- updated management server maintenance & shutdown unit tests
- some code improvements
* refactored code / addressed comments
* removed shutdown testcase (maybe, calling System.exit)
* Revert "removed shutdown testcase (maybe, calling System.exit)"
This reverts commit e14b071715.
* avoid system.exit during shutdown test
* code improvements
* testcase fix
* Fix cutoff time in agent connections monitor thread
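A minimal sketch of the 503 behaviour mentioned above (class and flag names are hypothetical, not the actual CloudStack wiring):
```
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Once maintenance or shutdown is initiated, every API request is answered
// with 503 so a load balancer takes this MS node out of rotation.
public class MaintenanceAwareFilter implements Filter {

    static final AtomicBoolean MAINTENANCE_INITIATED = new AtomicBoolean(false);

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (MAINTENANCE_INITIATED.get()) {
            ((HttpServletResponse) res).setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
    }
}
```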
Somehow deleteDatastore was never implemented, which meant templates were
not cleaned up on datastore delete and agents were never informed about
storage pool removal.
If a -rst resource wasn't deleted because of a failed copy,
a recurring snapshot attempt couldn't proceed, because the
"old" -rst resource was still present. To prevent this, always
try to remove the -rst resource first; if it doesn't exist, this is a no-op.
* NAS B&R Plugin enhancements
* Prevent printing mount opts which may include password by removing from response
* revert marvin change
* add sanity checks to validate minimum qemu and libvirt versions
* check if the user running the script is part of the libvirt group
* revert changes of restore expunged VM
* add code coverage ignore file
* remove check
* fix issue with listing schedules and add defensive checks
* redirect logs to agent log file
* add some more debugging
* remove test file
* prevent deletion of cks cluster when vms are associated with a backup offering
* delete all snapshot policies when bkp offering is disassociated from a VM
* Fix `updateTemplatePermission` when the UI is set to a language other than English (#9766)
* Fix updateTemplatePermission UI in non-english language
* Improve fix
---------
* Add nobrl in the mountopts for cifs file system
* Fix restoration of VM / volumes with cifs
* add cifs utils for el8
* add cifs-utils for ubuntu cloudstack-agent
* syntax error
* remove required constraint on both vmid and id params for the delete bkp schedule command
* add use of virsh domifaddr to get VM external DHCP IP
* updates to modularize LibvirtGetVmIpAddressCommandWrapper per comments; added test cases to cover 90%+ scenarios
This PR introduces the concept of multi-scope configuration settings. Currently, in addition to the Global level, each configuration can be set at only a single scope level.
It is useful for a configuration to be settable at multiple scopes. For example, a configuration set at the domain level
applies to all accounts in it, but it can also be set for an individual account, in which case the account-level setting overrides the domain-level setting.
This is done by changing the column `scope` of table `configuration` from string (single scope) to bitmask (multiple scopes).
```
public enum Scope {
    Global(null, 1),
    Zone(Global, 1 << 1),
    Cluster(Zone, 1 << 2),
    StoragePool(Cluster, 1 << 3),
    ManagementServer(Global, 1 << 4),
    ImageStore(Zone, 1 << 5),
    Domain(Global, 1 << 6),
    Account(Domain, 1 << 7);
    // ...
}
```
Each scope is also assigned a parent scope. When a configuration for a given scope is not defined but is available for multiple scope types, the value will be retrieved from the parent scope. If there is no parent scope or if the configuration is defined for a single scope only, the value will fall back to the global level.
Hierarchy for the different scopes is defined as below:
- Global
  - Zone
    - Cluster
      - Storage Pool
    - Image Store
  - Management Server
  - Domain
    - Account
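A hedged sketch of how the bitmask and the parent walk could fit together (illustrative only; field and method names are not the actual CloudStack code):
```
import java.util.Map;

public class ScopeLookupExample {

    enum Scope {
        Global(null, 1),
        Zone(Global, 1 << 1),
        Cluster(Zone, 1 << 2),
        StoragePool(Cluster, 1 << 3);

        final Scope parent;
        final int bit;

        Scope(Scope parent, int bit) {
            this.parent = parent;
            this.bit = bit;
        }
    }

    // Bitmask for a config settable at Zone, Cluster and StoragePool scope,
    // e.g. pool.storage.capacity.disablethreshold after this PR.
    static final int SCOPE_MASK = Scope.Zone.bit | Scope.Cluster.bit | Scope.StoragePool.bit;

    // Walk up parent scopes; fall back to the global default when no scope in
    // the chain has a value.
    static String valueFor(Map<Scope, String> values, Scope scope, String globalDefault) {
        for (Scope cur = scope; cur != null; cur = cur.parent) {
            if ((SCOPE_MASK & cur.bit) != 0 && values.containsKey(cur)) {
                return values.get(cur);
            }
        }
        return globalDefault;
    }
}
```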
This PR also updates the scope of the following configurations (Storage Pool scope is added in addition to the existing Zone scope):
- pool.storage.allocated.capacity.disablethreshold
- pool.storage.allocated.resize.capacity.disablethreshold
- pool.storage.capacity.disablethreshold
Doc PR : https://github.com/apache/cloudstack-documentation/pull/476
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This typo is found in some PRs:
plugins/storage/volume/linstor/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStorageAdaptor.java:510: poperties ==> properties
Doc PR : https://github.com/apache/cloudstack-documentation/pull/461
This PR fixes https://github.com/apache/cloudstack/issues/8638
== Description
Four new Resource Types have been added. Admin can configure corresponding resource limits for the tenants at different levels (domain, account, project)
User dashboard's Storage section will show the new resources, their limits and current usage.
1. backup - No. of backups used by the account
2. backup_storage - Backup storage allocated for the account
3. bucket - No. of buckets used by the account
4. object_storage - Object storage allocated for the account.
Some other related changes done to BnR framework:
1. Maximum number of Backups to retain can be specified while creating Backup schedules, similar to Scheduled snapshots.
2. Oldest scheduled backup of the same interval type will be deleted once the number reaches the configured max backups value (see the sketch after this list).
3. Code refactor: Moved syncBackups method from BackupProvider to the framework BackupManagerImpl, as it is a common functionality and all providers were using duplicated code.
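For item 2, a minimal sketch of the retention rule (Backup here is a stand-in type, not the CloudStack entity):
```
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class BackupRetentionExample {

    static class Backup {
        final String intervalType;
        final Instant created;

        Backup(String intervalType, Instant created) {
            this.intervalType = intervalType;
            this.created = created;
        }
    }

    // Once a schedule exceeds max_backups, the oldest backups of the same
    // interval type are the ones selected for deletion.
    static List<Backup> selectForDeletion(List<Backup> all, String intervalType, int maxBackups) {
        List<Backup> sameType = all.stream()
                .filter(b -> intervalType.equals(b.intervalType))
                .sorted(Comparator.comparing((Backup b) -> b.created))
                .collect(Collectors.toList());
        int excess = sameType.size() - maxBackups;
        return excess > 0 ? new ArrayList<>(sameType.subList(0, excess)) : new ArrayList<>();
    }
}
```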
Changes done to the Object Storage Framework
1. Quota parameter is made mandatory while creating a bucket. Bucket quota is considered to be the allocated space and will be used to enforce Resource limits.
== Schema Changes:
1. New column `max_backups` added to the `backup_schedule` table
2. New column `backup_interval_type` added to the `backups` table
== Api Changes:
1. createBackup: new parameter `scheduleid`. It should be specified whenever a scheduled backup is created. This will translate to the `backup_interval_type` in the `backups` table.
2. createBackupSchedule: new parameter `max_backups`, to specify the maximum number of backups to retain for the given schedule.
== Configurations:
|Setting |Scope |Default Value |Description|
|-------|--------|--------------|-----------|
|backup.max.hourly |Global |8 |Maximum recurring hourly backups to be retained for an instance|
|backup.max.daily |Global |8 |Maximum recurring daily backups to be retained for an instance|
|backup.max.weekly |Global |8 |Maximum recurring weekly backups to be retained for an instance|
|backup.max.monthly |Global |8 |Maximum recurring monthly backups to be retained for an instance|
|max.account.backups| Global| 20 | The default maximum number of backups that can be created for an account|
|max.account.backup.storage| Global| 400 | The default maximum backup storage space (in GiB) that can be used for an account|
|max.domain.backups| Global| 40 | The default maximum number of backups that can be created for a domain|
|max.domain.backup.storage| Global| 800 | The default maximum backup storage space (in GiB) that can be used for a domain|
|max.project.backups| Global| 20 | The default maximum number of backups that can be created for a project|
|max.project.backup.storage| Global| 400 | The default maximum backup storage space (in GiB) that can be used for a project|
|Setting |Scope |Default Value |Description|
|-------|--------|--------------|-----------|
|max.account.buckets| Global| 20 | The default maximum number of buckets that can be created for an account|
|max.account.object.storage| Global| 400 | The default maximum object storage space (in GiB) that can be used for an account|
|max.domain.buckets| Global| 40 | The default maximum number of buckets that can be created for a domain|
|max.domain.object.storage| Global| 800 | The default maximum object storage space (in GiB) that can be used for a domain|
|max.project.buckets| Global| 20 | The default maximum number of buckets that can be created for a project|
|max.project.object.storage| Global| 400 | The default maximum object storage space (in GiB) that can be used for a project|
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Lucas Martins <56271185+lucas-a-martins@users.noreply.github.com>
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* api,agent,server,engine-schema: scalability improvements
Following changes and improvements have been added:
- Improvements in handling of PingRoutingCommand
1. Added global config - `vm.sync.power.state.transitioning`, default value: true, to control syncing of power states for transitioning VMs. This can be set to false to prevent computation of transitioning state VMs.
2. Improved VirtualMachinePowerStateSync to allow power state sync for host VMs in a batch
3. Optimized scanning stalled VMs
- Added option to set worker threads for capacity calculation using config - `capacity.calculate.workers`
- Added caching framework based on Caffeine in-memory caching library, https://github.com/ben-manes/caffeine (see the sketch after this list)
- Added caching for account/user role API access; expiration after write can be configured using config - `dynamic.apichecker.cache.period`. If set to zero then there will be no caching. Default is 0.
- Added caching for account/user role API access with expiration after write set to 60 seconds.
- Added caching for some recurring DB retrievals
1. CapacityManager - listing service offerings - beneficial in host capacity calculation
2. LibvirtServerDiscoverer existing host for the cluster - beneficial for host joins
3. DownloadListener - hypervisors for zone - beneficial for host joins
4. VirtualMachineManagerImpl - VMs in progress - beneficial for processing stalled VMs during PingRoutingCommands
- Optimized MS list retrieval for agent connect
- Optimize finding ready systemvm template for zone
- Database retrieval optimisations - fix and refactor for cases where only IDs or counts are used mainly for hosts and other infra entities. Also similar cases for VMs and other entities related to host concerning background tasks
- Changes in agent-agentmanager connection with NIO client-server classes
1. Optimized the use of the executor service
2. Refactored the Agent class to better handle connections
3. Do SSL handshakes within worker threads
4. Added global configs to control the behaviour depending on the infra. SSL handshake could be a bottleneck during agent connections. Configs - `agent.ssl.handshake.min.workers` and `agent.ssl.handshake.max.workers` can be used to control the number of new connections the management server handles at a time. `agent.ssl.handshake.timeout` can be used to set the number of seconds after which the SSL handshake times out at the MS end.
5. On the agent side, backoff and SSL handshake timeout can be controlled by agent properties: `backoff.seconds` and `ssl.handshake.timeout`.
- Improvements in StatsCollection - minimize DB retrievals.
- Improvements in DeploymentPlanner allow for the retrieval of only desired host fields and fewer retrievals.
- Improvements in hosts connection for a storage pool. Added config - `storage.pool.host.connect.workers` to control the number of worker threads that can be used to connect hosts to a storage pool. Worker thread approach is followed currently only for NFS and ScaleIO pools.
- Minor improvements in resource limit calculations wrt DB retrievals
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* test1, domaindetails, capacitymanager fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* test2 - agent tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* capacitymanagertest fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert marvin/setup.py
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix indent
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* use space in sql
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address duplicate
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* update host logs
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert e36c6a5d07
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix npe in capacity calculation
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* move schema changes to 4.20.1 upgrade
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix build
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add some more tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* checkstyle fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove unnecessary mocks
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* replace statics
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* engine/orchestration,utils: limit number of concurrent new agent
connections
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor - remove unused
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unregister closed connections, monitor & cleanup
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add check for outdated vm filter in power sync
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* agent: synchronize sendRequest wait
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Support for Management Server Maintenance
- New APIs: prepareForMaintenance and cancelMaintenance, with required parameter - managementserverid.
- New management server states for maintenance: PreparingForMaintenance, Maintenance.
- listHosts API with optional parameter – managementserverid, to list the hosts connected to the management server.
- Support management server maintenance when more than one active management server is available.
- Triggers transfer agents to other available management servers for maintenance, new agent command MigrateAgentConnectionCommand to initiate transfer of indirect agents.
- New global config 'management.server.maintenance.timeout', to set the timeout (in mins) for the management server maintenance window, default: 60 mins.
- UI changes: Prepare and Cancel Maintenance in Management Server section, Connected Agents tab, New fields for hosts and management servers.
* Updated pending jobs check timer task with ScheduledExecutorService
* keep maintenance state on trigger shutdown call when ms is in maintenance
* add pending jobs count to ms response
* during ms heartbeat, update state to up only when it's down
* allow vm work jobs of async job created before prepare for maintenance
* Revert "keep maintenance state on trigger shutdown call when ms is in maintenance"
This reverts commit 607e13364679eac897f4d146bb3325ea7a61ba17.
* skip maintenance test when multiple management servers are not available, and not configured in host setting for kvm
* Delete local storage properties in agent.properties during delete pool
* Fix stale entry when adding local storage fails
* Smaller methods
* Comment added
* 4.20:
linstor: Fix ZFS snapshot backup (#10219)
fix listing of VMs by network (#10204)
Configure org.eclipse.jetty.server.Request.maxFormKeys from server.properties and increase the default value (#10214)
api: fix access for listSystemVmUsageHistory (#10032)
Fix NPE issues during host rolling maintenance, due to host tags and custom constrained/unconstrained service offering (#9844)
* 4.20:
merge errors fixed
Restrict the migration of volumes attached to VMs in Starting state (#9725)
server, plugin: enhance storage stats for IOPS (#10034)
Introducing granular command timeouts global setting (#9659)
Improve logging to include more identifiable information (#9873)
Adds framework layer change to allow retrieving and storing IOPS stats for storage pools. Custom `PrimaryStoreDriver` can implement method - `getStorageIopsStats` for returning IOPS stats. Existing method `getUsedIops` can also be overridden by such plugins when only used IOPS is returned.
For testing purpose, implementation has been added for simulator hypervisor plugin to return capacity and used IOPS for a pool.
For local storage pool, implementation has been added using iostat to return currently used IOPS.
StoragePoolResponse class has been updated to return IOPS values which allows showing IOPS values in UI for different storage pool related views and APIs.
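A hedged sketch of what a plugin-side IOPS stats hook could look like (the actual `getStorageIopsStats` signature in the `PrimaryStoreDriver` interface may differ; all types and helper calls here are stand-ins):
```
public class IopsStatsExample {

    static class IopsStats {
        final Long capacityIops; // total IOPS the pool can serve
        final Long usedIops;     // IOPS currently consumed

        IopsStats(Long capacityIops, Long usedIops) {
            this.capacityIops = capacityIops;
            this.usedIops = usedIops;
        }
    }

    // A custom driver would gather these numbers from its storage backend;
    // per the description, the simulator returns fixed values and local
    // storage derives used IOPS from iostat.
    IopsStats getStorageIopsStats(long poolId) {
        Long capacity = queryBackendCapacityIops(poolId); // hypothetical backend call
        Long used = queryBackendUsedIops(poolId);         // hypothetical backend call
        return new IopsStats(capacity, used);
    }

    private Long queryBackendCapacityIops(long poolId) {
        return 10_000L;
    }

    private Long queryBackendUsedIops(long poolId) {
        return 1_250L;
    }
}
```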
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Improve logging to include more identifiable information for kvm plugin
* Update logging for scaleio plugin
* Improve logging to include more identifiable information for default volume storage plugin
* Improve logging to include more identifiable information for agent managers
* Improve logging to include more identifiable information for Listeners
* Replace ids with objects or uuids
* Improve logging to include more identifiable information for engine
* Improve logging to include more identifiable information for server
* Fixups in engine
* Improve logging to include more identifiable information for plugins
* Improve logging to include more identifiable information for Cmd classes
* Fix toString method for StorageFilterTO.java
* 4.20:
VR: apply iptables rules when add/remove static routes (#10064)
Certificate and VM hostname validation improvements (#10051)
set ulimit for server according to redhat spec (#10040)
kvm-storage: provide isVMMigrate information to storage plugins (#10093)
Allow config drive deletion of migrated VM, on host maintenance (#10045)
linstor: improve heartbeat check with also asking linstor (#10105)
server: simplify role change validation (#9173)
UI: create VPC network offering with conserve mode (#10082)
server: fix typo removeaccessvpn in VirtualRouterElement (#10086)
UI: remove duplicated Instance Name in Public IP details page (#10087)
UI: Fixes in the Usage UI (#10000)
SAML2: add cookie with HttpOnly too #10013 (#10047)
ui: Allow font-awesome icon usage and optimise icon size inconsistency (#9744)
In particular, Linstor can use this information to only allow
dual volume access for live migration and not enable it in general,
which can and will lead to data corruption if for some reason
2 VMs get started on 2 different hosts.
If a node doesn't have a DRBD connection to another node,
additionally ask Linstor-Controller if the node is alive.
Otherwise we would have simply said no, even though the node might still be alive.
This is always the case in a non-hyperconverged setup.
* API to validate Quota activation rule
* Apply suggestions from code review
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>
* Use constants
---------
Co-authored-by: Henrique Sato <henrique.sato@scclouds.com.br>
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>
* 4.20:
UI: Tooltip on the host information card to display the CPU speed in MHz and the memory value in MB (to 3 decimal places) (#9971)
UI: Allow accounts of the `User` type to add other accounts or users to projects through UI (#9927)
enable creating VPC port forwarding rules with source cidr (#7081)
Add new column `last_id` to the table volumes (#9759)
Allow VMWare import via another host (#9787)
Linstor: add support for ISO block devices and direct download (#9792)
get expunged VM data for job result (#9949)
fix section divider display on auth page (#9966)
* cli changes to update user/account, list by apikeyaccess, domain level setting
* UI changes for updating user/account and searchfilter in listview
* make the api parameters and setting accessible only to root admin
* revert changes to ui/package-lock.json
* minor changes to description strings
* UT for ApiServer and AccountManagerImpl classes
* fix pre-commit failure
* Added a constant for the string System
* UT for searchForUsers and searchForAccounts
* Fix marvin test error
* Update schema to use idempotent add column
* Fix `updateTemplatePermission` when the UI is set to a language other than English (#9766)
* Fix updateTemplatePermission UI in non-english language
* Improve fix
---------
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
* Added user name uuid to logging
* Add events when api key access is changed via api or config setting
* fix the userid for api key access update event
* Fix ut failure after event logging
* Convert drop down to radio-button in edit user and account
* Add ApiKeyAccess status in User InfoCard for Users if Api key is generated
* Return apiKeyAccess in user and account response only for Root Admin
* fixed noredist build failure
* Show apikeyaccess on the left panel in the user view for root admins as well
* don't show divider if apiKeyAccess is not shown to user
* Fix events generated to set Username, Account and Domain of the caller correctly
* Added DB upgrade path from 42000 to 42010
---------
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Lucas Martins <56271185+lucas-a-martins@users.noreply.github.com>
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
If a secondary storage pool is used by e.g.
2 concurrent snapshot->template actions,
the first action to finish removed the netfs mount
point for the other action.
Now the storage pools are usage ref-counted and will only
be deleted if there are no more users.
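A minimal sketch of the ref-counting idea (assumed semantics, not the actual plugin code): the mount point is only removed when the last concurrent user releases it.
```
import java.util.HashMap;
import java.util.Map;

public class RefCountedMounts {

    private final Map<String, Integer> users = new HashMap<>();

    synchronized void acquire(String mountPoint) {
        users.merge(mountPoint, 1, Integer::sum);
    }

    // Returns true when the caller was the last user and may safely unmount.
    synchronized boolean release(String mountPoint) {
        Integer remaining = users.computeIfPresent(mountPoint, (k, v) -> v > 1 ? v - 1 : null);
        return remaining == null;
    }
}
```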
In non-hyperconverged setups, diskless nodes don't have a connection
to each other, so setting properties there had no effect.
Now it is checked whether a connection exists
between the live-migration nodes, and if not,
the allow-two-primaries property is set on resource-definition level.
This fixes an issue when creating an ovs network
```
2024-10-29 16:02:45,089 WARN [resource.wrapper.LibvirtOvsFetchInterfaceCommandWrapper] (agentRequest-Handler-2:null) (logid:e716722e) Network interface: ''cloudbr1'' not found
```
This is a regression from a previous security release
(see "framework/cluster: improve cluster service, integration API server"):
since we now use NetworkInterface.getByName to get the network interface, we should NOT add single quotes before/after the label.
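In code terms, the regression and the fix look roughly like this (illustrative sketch):
```
import java.net.NetworkInterface;
import java.net.SocketException;

public class InterfaceLookupExample {

    static NetworkInterface lookup(String label) throws SocketException {
        // Regression: a label wrapped in single quotes never matches a real
        // interface name, so getByName returns null and the lookup fails.
        NetworkInterface broken = NetworkInterface.getByName("'" + label + "'");
        assert broken == null;

        // Fix: pass the label as-is.
        return NetworkInterface.getByName(label);
    }
}
```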
* StorPool: fix of delete snapshot
Mark the DB record as destroyed when a snapshot is deleted
* Addressed reviews
* addressed review
* addressed review
qemu has a bug in versions prior to 7.0 with discard enabled while using the IDE bus.
It would crash the qemu process and kill the virtual machine;
this is most noticeable when installing a Windows guest from the
Windows ISO installer.
* linstor: enable discard for Linstor storage pools
All Linstor storage backends support discard, so it can be safely enabled.
* linstor: enable discard for Linstor storage pools CHANGELOG.md
* CKS: add ConfigDrive to cloud-init datasource_list in systemvm template
* systemvm template: update debian 11.7.0 iso url
* CKS: get K8S iso by LABEL=CDROM if config drive ISO is attached
* Revert "CKS: add ConfigDrive to cloud-init datasource_list in systemvm template"
This reverts commit b6863a5ce1.
* CKS: patch cloud-init in opt/cloud/bin/setup/cksnode.sh
* PR7650: move ConfigDrive before CloudStack in datasource list
* Revert "CKS: patch cloud-init in opt/cloud/bin/setup/cksnode.sh"
This reverts commit 75be03c6aa.
* CKS: fix ConfigDrive
* Make volume attachment disk controller selection consistent with VM creation and start
* Update vmware-base/src/main/java/com/cloud/hypervisor/vmware/util/VmwareHelper.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Choose disk controllers after converting osdefault
* Rename function
---------
Co-authored-by: dahn <daan.hoogland@gmail.com>
* add dedicated resource response
* populate dedicatedresources field
* change affinity group name and description when it contains dedicated resources
* display dedicatedresources on UI
* add end of line to DedicatedResourceResponse class
* remove unnecessary fully qualified names
* Prevent addition of duplicate PF rules on scale up and no rules left behind on scale down (#32)
* fix missing dependency injection
* NSX: Fix concurrency issues on port forwarding rules deletion (#37)
* Fix concurrency issues on port forwarding rules deletion
* Refactor objectExists
* Fix unit test
* Fix test
* Small fixes
* CKS: Externalize control and worker node setup wait time and installation attempts (#38)
* NSX: Add shared network support (#41)
* NSX: Fix number of physical networks for Guest traffic checks and leftover rules on CKS cluster deletion (#45)
* Fix pf rules removal on CKS cluster deletion
* Fix check for number of physical networks for guest traffic
* Fix unit test
* fix logger
* NSX: Handle CheckHealthCommand to avoid host disconnection and errors on APIs
* NSX: Handle CheckHealthCommand to avoid host disconnection and errors on APIs
* Remove unused string
* fix logger
* Update UDP active monitor to ICMP
* Fix NPE on restarting VPC with additional public IPs
* NSX / VPC: Reuse Source NAT IP from systemVM range on restarts
* CKS: Public IP not found for VPC networks
* Externalize retries and interval for NSX segment deletion (#67)
* remove unused import
* remove duplicate imports
* remove unused import
* revert externalizing cks settings
* fix test
* Refactor log messages
* Address comments
* Fix issue caused due to forward merge: 90fe1d
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add logs to LibvirtComputingResource's metrics collecting process
* Apply Joao's suggestions
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
* Adjust some logs
* Print memory statistics log in one line
---------
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
This introduces multi-arch zones, allowing users to select the VM arch upon deployment.
Multi-arch zone support in CloudStack can allow admins to mix x86_64 & arm64 hosts within the same zone with the following changes proposed:
- All hosts in a cluster need to be homogeneous wrt host CPU type (amd64 vs arm64) and hypervisor
- Arch-aware templates & ISOs:
- Add support for a new arch field (default set of: amd64 and arm64); when unspecified, it defaults to amd64, including for existing templates & ISOs
- Allow admins to edit the arch type of the registered template & iso
- Arch-aware clusters and host:
- Add a new arch attribute field for clusters and hosts (kvm host agents can automatically report this; the arch of the first host of the cluster becomes the cluster's architecture), defaults to amd64 when not specified
- Allow admins to edit the arch of an existing cluster
- VM deployment form (UI):
- In a multi-arch zone/env, the VM deployment form can allow some kind of template/iso filtration in the UI
- Users should be able to select arch: amd64 & arm64; but this is shown only in a multi-arch zone (env)
- VM orchestration and lifecycle operations:
- Use of VM/template's arch to correctly decide where to provision the VM (on the correct strictly arch-matching host/clusters) & other lifecycle operations (such as migration from/to arch-matching hosts)
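A rough sketch of the strict arch-matching idea in the last bullet (the amd64 default is per the proposal; types and names are illustrative):
```
import java.util.List;
import java.util.stream.Collectors;

public class ArchMatchingExample {

    static class Cluster {
        final String name;
        final String arch; // e.g. "amd64" or "arm64"

        Cluster(String name, String arch) {
            this.name = name;
            this.arch = arch;
        }
    }

    // Provision only on clusters whose arch strictly matches the VM/template
    // arch; an unspecified arch defaults to amd64.
    static List<Cluster> matchingClusters(List<Cluster> candidates, String vmArch) {
        String arch = (vmArch == null || vmArch.isEmpty()) ? "amd64" : vmArch;
        return candidates.stream()
                .filter(c -> arch.equals(c.arch))
                .collect(Collectors.toList());
    }
}
```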
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>