* Fix resource count discrepancies
* Fixup while removing vm
* Fix discrepancies when starting VMs
* Fixup tests
* Fixups
* Don't take lock when amount is negative
Adds:
- Fix for volume limit checks for disk offerings with multiple tags - When a VM is deployed with multiple disks whose offerings have multiple tags, the resource limit check may falter, as it currently checks based on each individual disk offering. With this change, if offerings d1 and d2 for volumes v1 and v2 both have tag1, the server will check volume limits for tag1 using the combined size of v1 and v2 (see the sketch after this list).
- Fix for template tag hosts in random host allocator - May affect the use of template tags, service offering tags and the random host allocator together. The current code for the random host allocator falters while trying to find the host allocation. This was found and fixed during the addition of the unit test here, https://github.com/shapeblue/cloudstack-apple/pull/378/files#diff-bbf9baea014e5cc1dfe9e7d13467c9857208cfe65e93883721d88a6f0452f912
- Unit tests for changes in api, server, ui: tagged resource limits #327
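A rough illustration of the combined-size check described in the first item above; this is a minimal sketch with hypothetical names (VolumeRequest, LimitChecker), not the actual server code:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class TaggedVolumeLimitCheckSketch {
    // A requested volume and the tags of its disk offering (hypothetical type).
    record VolumeRequest(long sizeInBytes, List<String> offeringTags) {}

    // Throws if the account/domain limit for the given tag would be exceeded (hypothetical type).
    interface LimitChecker {
        void checkVolumeLimitForTag(String tag, long count, long totalSizeInBytes);
    }

    static void checkLimits(List<VolumeRequest> volumes, LimitChecker checker) {
        Map<String, Long> sizePerTag = new HashMap<>();
        Map<String, Long> countPerTag = new HashMap<>();
        for (VolumeRequest v : volumes) {
            for (String tag : v.offeringTags()) {
                sizePerTag.merge(tag, v.sizeInBytes(), Long::sum);
                countPerTag.merge(tag, 1L, Long::sum);
            }
        }
        // One check per tag with the aggregated values, e.g. size(v1) + size(v2) for tag1,
        // instead of one check per individual disk offering.
        for (Map.Entry<String, Long> e : sizePerTag.entrySet()) {
            checker.checkVolumeLimitForTag(e.getKey(), countPerTag.get(e.getKey()), e.getValue());
        }
    }
}
```

For two volumes v1 and v2 whose offerings both carry tag1, the checker is invoked once for tag1 with size(v1) + size(v2).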
* Introduced a new API "checkVolumeAndRepair" that allows users or admins to check volumes and repair them if any leaks are observed.
Currently this is supported only for KVM.
* some fixes
* Added unit tests
* addressed review comments
* add repair volume while granting access
* Changed repair parameter to accept both leaks/all values
* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations
* Added volume check and repair changes only during VM start and volume attach operations
* Refactored the names to look similar across the code
* Some code fixes
* remove unused code
* Renamed repair values
* Addressed review comments
* code refactored
* used volume name in logs
* Changed the API to Async and the setting scope to storage pool
* Fixed exit value handling with check volume command
* Fixed the storage scope for the setting
* Fixed volume format issues
* Refactored the log messages
* Fix formatting
* Fix host stuck in connecting state (#8502)
There are a lot of test failures in test_vm_life_cycle.py across multiple PRs because no host is available for migration of VMs.
#8438 (comment)
#8433 (comment)
#7344 (comment)
While debugging I noticed that the hosts get stuck in the Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (the default timeout).
On the agent side, the thread gets stuck waiting for a response from the Script execution.
To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in the Connecting state, and restarting the agent fails due to the named lock. Locks in the DB can be checked using the query below.
SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID()\G
This PR adds a wait for the ReadyCommand and a timeout to the Script execution to ensure that the thread doesn't get stuck and the named lock in the database is released.
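The agent-side timeout can be pictured roughly as below; a minimal sketch using plain ProcessBuilder with hypothetical names, not CloudStack's actual Script utility:

```java
import java.util.concurrent.TimeUnit;

class BoundedScriptRun {
    // Run an external command with a bounded wait so the handler thread cannot block
    // indefinitely and any DB named lock held by the caller gets released.
    static int run(String[] command, long timeoutSeconds) throws Exception {
        Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
        if (!p.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
            p.destroyForcibly();   // give up instead of hanging forever
            return -1;             // treat a timeout as a failed execution
        }
        return p.exitValue();
    }
}
```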
* Externalise a few timeouts & fix timeout for hostSupportsUefi in libvirt ready command wrapper (#8547)
This PR fixes a bug introduced in #8502. The timeout for script execution was set to 60 ms instead of 60 s, which resulted in hosts not getting UEFI enabled. This is a blocker for the 4.19 release.
We do this by introducing a new agent parameter `agent.script.timeout` (default - 60 seconds) to use as a timeout for the script checking host's UEFI status.
We also externalize the timeout for the ReadyCommand by introducing a new global setting `ready.command.wait` (default - 60 seconds).
For ModifyStoragePoolCommand, we don't externalize the timeout, to avoid confusing the user, since the required timeout can vary depending on the provider in use and we are only setting the wait for the default host listener for now. Instead, we reuse the global `wait` setting divided by `5`, making the default value 6 minutes (1800/5 = 360s) for ModifyStoragePoolCommand.
Note: the actual time the MS waits is twice the wait set for a Command. Check the reference code below.
19250403e6/engine/orchestration/src/main/java/com/cloud/agent/manager/AgentAttache.java (L406-L442)
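Roughly, the waits described above fit together as in the sketch below (hypothetical names, not the actual listener/attache code):

```java
class CommandWaitSketch {
    static final int DEFAULT_GLOBAL_WAIT_SECONDS = 1800; // global `wait` setting default
    static final int DEFAULT_READY_COMMAND_WAIT = 60;    // `ready.command.wait` default
    static final int DEFAULT_AGENT_SCRIPT_TIMEOUT = 60;  // `agent.script.timeout` default

    // ModifyStoragePoolCommand reuses the global wait divided by 5: 1800 / 5 = 360s (6 minutes).
    static int modifyStoragePoolWait(int globalWaitSeconds) {
        return globalWaitSeconds / 5;
    }

    // The MS effectively waits about twice the wait configured on a Command (see AgentAttache above).
    static int effectiveWait(int commandWaitSeconds) {
        return 2 * commandWaitSeconds;
    }
}
```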
* fixup
* Check if volume on datastore requires access for migration, and grant/revoke volume access if requires
* Updated default implementation for requiresAccessForMigration method in PrimaryDataStoreDriver
* Update VM's state if powerstate & state are not in sync
* Add unit tests
* some code improvements for instance / power state check
* Update power state after vm stop confirmation (as power state 'PowerOn' is kept after vm stop and not updated later)
* Reset the power state update counter before migrate, to allow power state sync to proper state / host
* Do not consider transitional states (Starting, Stopping) to check power state sync
* set powerstate to off for all vm types
---------
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Introduces the concept of tagged resource limits. Limits can be enforced on accounts and domains for the deployment of entities for a tagged resource. Currently, tagged resource limits can be used for the following resource types:
Host limits: user_vm, cpu, memory
Storage limits: volume, primary_storage
The following global settings can be used to specify the tags for which limits need to be enforced:
Host: resource.limit.host.tags
Storage: resource.limit.storage.tags
Options for specifying tagged resource limits and viewing tagged resource usage are made available in the UI.
Enhances use of templatetag for VM deployment and template creation
Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
* Cleanup Volume AsyncJob after mgmt server stop
* Clean up VM async job resources during mgmt server stop
* Use State.isTransitional method to identify transition states
* Add cleanup for Network Async Job
* Add license
* Added RevertSnapshotting to the volume transitional states; fixed spacing/code style
* Added transitional flag in Volume state
* Updated network event for failed job, (re)added cleanup for volumes created from snapshots, and some code improvements
* Added java doc for volume state constructor
* Fixed cleanup SNAPSHOT_ID entry in volume details for failed volumes created from snapshots
---------
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
* Trigger out of band VM state update via libvirt event when VM stops
* Add License headers, refactor nested try
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
(cherry picked from commit 3694667f50)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Allow configkey to set 'cloud-name' cloud-init metadata
* Update engine/api/src/main/java/com/cloud/vm/VirtualMachineManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/network/NetworkModelImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/network/router/CommandSetupHelper.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Revert "Update server/src/main/java/com/cloud/network/router/CommandSetupHelper.java"
This reverts commit 8abc3e38c4.
* Revert "Update server/src/main/java/com/cloud/network/NetworkModelImpl.java"
This reverts commit 7f239be919.
* Rework/Fix review code suggestions
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
(cherry picked from commit 155a30748c)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Auto Enable Disable KVM hosts
* Improve health check result
* Fix corner cases
* Script path refactor
* Fix sonar cloud reports
* Fix last code smells
* Add marvin tests
* Fix new line on agent.properties to prevent host add failures
* Send alert on auto-enable-disable and add annotations when the setting is enabled
* Address reviews
* Add a reason for enabling or disabling a host when the automatic feature is enabled
* Fix comment on the marvin test description
* Fix for disabling the feature if the admin has manually updated the host resource state before any health check result
(cherry picked from commit be66eb2a35)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* When VM start fails at host for admin, report error
Signed-off-by: Marcus Sorensen <mls@apple.com>
* Report ResourceUnavailableExceptions that result in InsufficientCapacityException to admin
* Update error message to be more straightforward
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
(cherry picked from commit 8885d252f0)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes the following cases in which Solidfire storage integration
caused issues when using Solidfire datadisks with VMware:
1. Take Volume Snapshot of Solidfire data disk
2. Delete an active Instance with Solidfire data disk attached
3. Attach used existing Solidfire data disk to a running/stopped VM
4. Stop and Start an instance with Solidfire data disks attached
5. Expand disk by resizing Solidfire data disk by providing size
6. Expand disk by changing disk offering for the Solidfire data disk
Additional changes:
- Use VMFS6 as the managed datastore type if the host supports it
- Refactor detection and splitting of managed storage ds name in storage
processor
- Restrict storage rescanning for managed datastore when resizing
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When VM deployment fails during the allocation process, it may leave behind resources that were already allocated before the failure. They will get removed from the database, as the whole code block is wrapped inside a transaction (twice), but the server would not inform the network or storage plugins to clean up the allocated resources.
This PR removes transactions during VM allocation, which results in the allocated VM and its resource records being persisted in the DB even during failures. When a failure is encountered, the VM is moved to the Error state. This helps the VM and its resources to be properly deallocated when it is expunged, either by a server task such as ExpungeTask or during a manual expunge.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes the case of an account/domain having a negative storage count when there is a volume size difference while deploying volumes on certain storages. In the case of PowerFlex, the volume size results in a multiple of 8 GB: if a user deploys a volume of 12 GB, it will end up 16 GB in size, but currently CloudStack does not update the resource count after deployment.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
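A minimal sketch of the PowerFlex rounding described above (hypothetical helper, not the actual driver code):

```java
class PowerFlexSizeSketch {
    static final long GIB = 1024L * 1024L * 1024L;

    // PowerFlex provisions volumes in multiples of 8 GB, so round the requested size up.
    static long roundUpToMultipleOf8Gb(long requestedBytes) {
        long granularity = 8 * GIB;
        return ((requestedBytes + granularity - 1) / granularity) * granularity;
    }

    public static void main(String[] args) {
        long requested = 12 * GIB;
        long provisioned = roundUpToMultipleOf8Gb(requested);  // 16 GB
        long delta = provisioned - requested;                  // extra size the resource count must reflect
        System.out.println((provisioned / GIB) + " GB provisioned, delta " + (delta / GIB) + " GB");
    }
}
```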
* Live storage migration of a volume in ScaleIO within the same ScaleIO storage cluster
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* working blockcopy api in libvirt
* Checking block copy status
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check if volume belongs to same or different storage scaleio cluster
* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unittests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets if they do not exist during volume migrations. Secrets are getting cleared on package upgrades.
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion
* server: fix error on deleted template vm start
When a VM is deployed with the start flag set to false and the template is deleted before the VM starts, an NPE is observed when it is started for the first time.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>