Commit Graph

18 Commits

Author SHA1 Message Date
Rohit Yadav 1abd10199c Merge remote-tracking branch 'origin/4.15' 2021-05-04 19:37:45 +05:30
Gabriel Beims Bräscher ab790c11d5
server: Allow to upgrade service offerings from local <> shared storage pools (#4915)
This PR addresses the issue raised at #4545 (Fail to change Service offering from local <> shared storage).

When upgrading a VM service offering, CloudStack validates that the new offering has the same storage scope (local or shared) as the current offering. The validation makes sense as a way of preventing ROOT disks from running with an offering that does not match the current storage pool. However, it only compares the two offerings and does not consider that volumes can be migrated between local <> shared storage pools.

The idea behind this implementation is that CloudStack should check the scope of the storage pool on which the ROOT volume is allocated; this way, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings that are supported for such a pool.

This PR also fixes an issue in the API command that lists offerings for a VM: it should follow the same idea and list offerings based on the storage pool on which the volume is allocated, not on the previous offering.
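A minimal sketch of the changed check, with hypothetical names (PoolScope, isUpgradeAllowed) standing in for the actual CloudStack types: the new offering is validated against the scope of the pool that holds the ROOT volume, rather than against the old offering.

```java
// Sketch only; PoolScope and OfferingUpgradeValidator are stand-ins,
// not CloudStack's real types.
enum PoolScope { LOCAL, SHARED }

class OfferingUpgradeValidator {
    /**
     * Instead of comparing the old and new offerings with each other,
     * validate the new offering against the scope of the storage pool
     * that currently holds the ROOT volume.
     */
    boolean isUpgradeAllowed(PoolScope rootVolumePoolScope, boolean newOfferingUsesLocalStorage) {
        PoolScope requiredScope = newOfferingUsesLocalStorage ? PoolScope.LOCAL : PoolScope.SHARED;
        return requiredScope == rootVolumePoolScope;
    }
}
```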

Fixes: #4545
2021-04-30 11:59:50 +05:30
Abhishek Kumar 42c83b08f5 Merge remote-tracking branch 'apache/4.15' 2021-04-26 14:33:58 +05:30
Abhishek Kumar a30d518e8a
vmware: fix stopped VM volume migration (#4758)
* prevent other vm disks getting deleted

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* vmware: fix inter-cluster stopped vm migration

Fixes #4838

For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster (see the sketch below).

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
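
A sketch of what specifying the host looks like at the vSphere API level, assuming the vim25 Java bindings; the managed-object lookups and the actual RelocateVM_Task call are elided.

```java
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.VirtualMachineRelocateSpec;

class InterClusterMigration {
    /**
     * For inter-cluster migration without shared storage, VMware requires
     * a destination host; build a relocate spec that pins one explicitly.
     */
    VirtualMachineRelocateSpec buildRelocateSpec(ManagedObjectReference targetHost,
                                                 ManagedObjectReference targetDatastore) {
        VirtualMachineRelocateSpec spec = new VirtualMachineRelocateSpec();
        spec.setHost(targetHost);           // host in the target cluster
        spec.setDatastore(targetDatastore); // datastore reachable from that host
        return spec;
    }
}
```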

* fix detached volume inter-cluster migration

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* cleanup unused method

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* review changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* vmware: allow attached volume migration using VmwareStorageMotionStrategy

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* find vm clusterid with multiple ROOT volumes

A VM can have multiple ROOT volumes, and some can be on a zone-wide store, so iterate over all of them until a cluster ID is found (see the sketch below).

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
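
A self-contained sketch of that iteration, with stand-in types for CloudStack's volume and storage pool entities.

```java
import java.util.List;

// Minimal sketch; Volume, StoragePool and StoragePoolDao are stand-ins for
// CloudStack's VolumeVO/StoragePoolVO entities and their DAO.
class ClusterIdFinder {
    interface Volume { Long getPoolId(); }
    interface StoragePool { Long getClusterId(); }
    interface StoragePoolDao { StoragePool findById(Long id); }

    private final StoragePoolDao storagePoolDao;

    ClusterIdFinder(StoragePoolDao storagePoolDao) { this.storagePoolDao = storagePoolDao; }

    /** Iterate over all ROOT volumes until one yields a cluster ID;
     *  volumes on zone-wide primary storage carry no cluster ID. */
    Long findClusterId(List<Volume> rootVolumes) {
        for (Volume volume : rootVolumes) {
            StoragePool pool = storagePoolDao.findById(volume.getPoolId());
            if (pool != null && pool.getClusterId() != null) {
                return pool.getClusterId();
            }
        }
        return null; // every ROOT volume is on zone-wide storage
    }
}
```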

* fix successive storage migration

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix intercluster check

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor vm cluster, host method

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* remove inter-pod check

Added by mistake; VMware won't have pods.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* address review comment

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-04-24 18:55:25 +05:30
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created in the "/config" directory under the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
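
A hedged sketch of the placement decision these settings imply; the configuration names are from the list above, the surrounding types are stand-ins.

```java
// Sketch of the decision order implied by the settings above; the
// configuration names are real, the class and method are stand-ins.
class ConfigDrivePlacement {
    boolean forceHostCache;             // vm.configdrive.force.host.cache.use (default: false)
    boolean hostCacheOnUnsupportedPool; // vm.configdrive.use.host.cache.on.unsupported.pool (default: true)
    String hostCacheLocation = "/var/cache/cloud"; // host.cache.location in agent.properties

    /** Returns the directory where the config drive ISO should be created. */
    String resolveLocation(boolean poolSupportsConfigDrive, String poolPath) {
        if (forceHostCache) {
            return hostCacheLocation + "/config"; // host cache forced by configuration
        }
        if (!poolSupportsConfigDrive && hostCacheOnUnsupportedPool) {
            return hostCacheLocation + "/config"; // fall back to host cache
        }
        return poolPath; // primary storage pool supports config drives
    }
}
```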

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
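
The qcow2 header makes this detection cheap: the virtual size is stored as a big-endian 64-bit field at byte offset 24, so only the first 32 bytes of the template URL need to be fetched. A minimal Java sketch (not the actual CloudStack code):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

// Reads the QCOW2 header from a template URL and returns the virtual size.
class Qcow2SizeProbe {
    static final int QCOW2_MAGIC = 0x514649FB; // "QFI\xfb"

    static long virtualSize(String templateUrl) throws IOException {
        try (InputStream in = new URL(templateUrl).openStream();
             DataInputStream din = new DataInputStream(in)) {
            if (din.readInt() != QCOW2_MAGIC) {
                throw new IOException("Not a qcow2 image");
            }
            din.readInt();         // version
            din.readLong();        // backing_file_offset
            din.readInt();         // backing_file_size
            din.readInt();         // cluster_bits
            return din.readLong(); // virtual size in bytes (header offset 24)
        }
    }
}
```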

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when a VM is synced to the Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist on the datastore and cannot be deleted from it

- Check whether the router filesystem is writable before performing health checks
	=> Introduced a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test both locations.
	=> Added a new script, "filesystem_writable_check.py", at /opt/cloud/bin/ to check whether the filesystem is writable
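
The shipped check is a Python script; a minimal Java sketch of the same probe (try to create and delete a small file on each partition of interest):

```java
import java.io.File;
import java.io.IOException;

class WritableCheck {
    static boolean isWritable(String directory) {
        File probe = new File(directory, ".rw_probe");
        try {
            if (!probe.createNewFile()) {
                return false; // leftover probe file; treat as failed
            }
            return probe.delete();
        } catch (IOException | SecurityException e) {
            return false; // read-only filesystem or permission problem
        }
    }

    public static void main(String[] args) {
        // Health-check data lives on two different partitions, so test both.
        System.out.println("/var/cache/cloud writable: " + isWritable("/var/cache/cloud"));
        System.out.println("/root writable: " + isWritable("/root"));
    }
}
```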

- Fixed an NPE where the template is null for DATA disks: copy the template to the target storage for the ROOT disk (with template id) and skip DATA disk(s)

* Addressed some issues for few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM snapshot naming, for uniqueness of ScaleIO volume names when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients; it uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
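
A hedged sketch of the renewal logic, with GatewayClient as a hypothetical stand-in for the real client class; the 8-hour/10-minute limits are taken from the reference above.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One gateway client per PowerFlex storage pool, re-authenticated when the
// session token ages out. GatewayClient and its methods are stand-ins.
class ScaleIOClientPool {
    static class GatewayClient {
        Instant issuedAt;
        Instant lastUsed;
        void authenticate() { /* POST to the gateway login API, store token */ }
    }

    private static final Duration MAX_AGE = Duration.ofHours(8);        // token lifetime
    private static final Duration IDLE_TIMEOUT = Duration.ofMinutes(10); // inactivity limit

    private final Map<Long, GatewayClient> clients = new ConcurrentHashMap<>();

    GatewayClient get(Long storagePoolId) {
        GatewayClient client = clients.computeIfAbsent(storagePoolId, id -> {
            GatewayClient c = new GatewayClient();
            c.authenticate();
            c.issuedAt = Instant.now();
            c.lastUsed = Instant.now();
            return c;
        });
        Instant now = Instant.now();
        boolean expired = Duration.between(client.issuedAt, now).compareTo(MAX_AGE) > 0
                || Duration.between(client.lastUsed, now).compareTo(IDLE_TIMEOUT) > 0;
        if (expired) {
            client.authenticate(); // renew the session token
            client.issuedAt = now;
        }
        client.lastUsed = now;
        return client;
    }
}
```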

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

- Minor improvements in the PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
- Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Abhishek Kumar d6e8b53736
vmware: vm migration improvements (#4385)
- Fixes inter-cluster migration of VMs
- Allows migration of stopped VM with disks attached to different and suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (clusters of same Pod) for system VMs, VRs on VMware
- Allows storage migration for stopped system VMs and VRs on VMware within the same Pod if the StoragePool is of cluster scope type

Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-02-12 12:41:41 +05:30
Wei Zhou 2acd87c41e
server: Add global configuration vm.serviceoffering.cpu.cores.max and vm.serviceoffering.ram.size.max (#4379)
These global settings cap the maximum number of CPU cores and the maximum RAM size allowed for service offerings.
2020-10-14 15:48:35 +05:30
Wei Zhou 4746c8c726
server: move UpdateDefaultNic to vm work job queue (#4020)
When removing a secondary NIC from a Running VM, if the default NIC is updated to the secondary NIC before that NIC is removed, the VM ends up with no default NIC (and cannot be started) once both operations complete.

This is because the UpdateDefaultNic API is not handled as a VM work job (AddNicToVMCmd and RemoveNicFromVMCmd are), so it is processed before the NIC is removed. The result is that the secondary NIC becomes the default NIC and then gets removed.
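
A generic illustration of the queuing idea (not CloudStack's actual VmWork API): chaining every operation for a given VM onto a single tail future serializes them, so an update-default-NIC job queued after a remove-NIC job cannot run first.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class PerVmJobQueue {
    private final Map<Long, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    /** Enqueue a job for the VM; jobs for the same VM run strictly in order. */
    synchronized CompletableFuture<Void> submit(long vmId, Runnable job) {
        CompletableFuture<Void> tail =
                tails.getOrDefault(vmId, CompletableFuture.completedFuture(null));
        CompletableFuture<Void> next = tail.thenRun(job); // runs after all earlier jobs
        tails.put(vmId, next);
        return next;
    }
}
```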
2020-09-01 13:54:48 +05:30
Rohit Yadav 562a7db8df Merge remote-tracking branch 'origin/4.14'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-08-05 23:59:16 +05:30
Wei Zhou cd8e28b279
server: Move restoreVM to vm work job queue (#4019) 2020-08-05 09:46:55 +00:00
Pearl Dsilva a73712ec4e
server: Enable sending hypervisor host name via metadata - VR and Config Drive (#3976)
Enable sending hypervisor host details via metadata for VR and Config Drive providers

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-07-01 08:44:11 +05:30
Nicolas Vazquez 8c1d749360
[VMware] Enable unmanaging guest VMs (#4103)
* Enable unmanaging guest VMs

* Minor fixes

* Fix stop usage event only if VM is not stopped when unmanaging

* Rename unmanaged VMs manager

* Generate netofferingremove usage event if VM is not stopped

* Generate usage event VM snapshot primary off when unmanaging
2020-06-26 08:31:43 -03:00
harikrishna-patnala 0d4f67ad8c
server: Fix NPE that occurred when dynamic scaling was attempted on a VM and, as part of it, the VM tried to migrate because the current host lacked capacity (#3998)
Repro Steps:
1. Create a VM on host1
2. Make host1 capacity full by deploying multiple VMs
3. Try Dynamic scaling on VM on host1
4. NPE occurs when MS tries to find host to migrate the VM and then scale.

Root cause: The VM profile is not initialized properly with the service offering before planning the deployment.

Solution: Initialize the VM profile with the service offering, and also make sure custom compute parameters are handled.
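
A rough sketch of the shape of the fix, with stand-in types (the real code builds a CloudStack VirtualMachineProfile); the point is that the profile carries the service offering, with custom compute parameters applied, before the planner runs.

```java
import java.util.Map;

class ScaleVmPlanner {
    static class ServiceOffering { Integer cpu; Integer ram; }
    static class VmProfile {
        final ServiceOffering offering;
        VmProfile(ServiceOffering offering) {
            if (offering == null) { // guards against the NPE described above
                throw new IllegalArgumentException("service offering must be set before planning");
            }
            this.offering = offering;
        }
    }

    VmProfile prepareForScale(ServiceOffering newOffering, Map<String, String> customParams) {
        ServiceOffering effective = applyCustomParameters(newOffering, customParams);
        return new VmProfile(effective); // profile carries the offering before planning
    }

    // "cpuNumber"/"memory" stand in for custom compute offering parameters.
    private ServiceOffering applyCustomParameters(ServiceOffering o, Map<String, String> custom) {
        if (custom != null) {
            if (custom.containsKey("cpuNumber")) o.cpu = Integer.valueOf(custom.get("cpuNumber"));
            if (custom.containsKey("memory")) o.ram = Integer.valueOf(custom.get("memory"));
        }
        return o;
    }
}
```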
2020-06-18 09:19:19 +05:30
Rohit Yadav 77947f23fd Merge remote-tracking branch 'origin/4.13' into 4.14
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-06-16 12:21:48 +05:30
harikrishna-patnala 5054766d9f
server: Submitting multiple dynamic VM scaling API commands for the same instance can result in two usage events in the same second, causing a compound key violation in the usage service (#3991)
Root cause:
Even though the dynamic scaling job is handled in the VM work job queue, which serializes multiple jobs, the database updates and usage event generation happen outside the job queue.

Solution:
Moved all updates into the job queue.

First, I tested all the scenarios to check that nothing is broken:
- Scaling a running VM with a normal compute offering
- Scaling a stopped VM with a normal compute offering
- Scaling a running VM with a custom compute offering
- Scaling a stopped VM with a custom compute offering
- Scaling a stopped/running VM between custom and normal compute offerings, and combinations among these; checked that the custom parameters are populated or deleted accordingly based on the offering to which the VM is scaled

Since this is a corner scenario, I could not test the exact point where two usage events are recorded at the same time for two different API calls on the same VM.
2020-06-16 11:41:14 +05:30
Wei Zhou ac581d1546
New feature: Resource count (CPU/RAM) takes only running VMs into calculation (#3760)
* marvin: check resource count of more types

* New feature: add flag resource.count.running.vms.only to count resource consumption of only running vms

Stopped VMs do not actually use CPU/RAM.
A new global configuration, resource.count.running.vms.only, is added to determine whether only running VMs (including Starting/Stopping) are taken into account when calculating resource (CPU/memory) consumption.
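
A sketch of the counting rule with simplified stand-in types: when the flag is on, only Running/Starting/Stopping VMs contribute to the totals.

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

class ResourceCounter {
    enum State { Running, Starting, Stopping, Stopped, Destroyed }
    static class Vm { State state; int cpu; Vm(State s, int c) { state = s; cpu = c; } }

    // States that count as "running" for resource accounting purposes.
    private static final Set<State> COUNTED = EnumSet.of(State.Running, State.Starting, State.Stopping);

    /** Sums CPU; runningVmsOnly mirrors resource.count.running.vms.only. */
    static long countCpu(List<Vm> vms, boolean runningVmsOnly) {
        return vms.stream()
                .filter(vm -> !runningVmsOnly || COUNTED.contains(vm.state))
                .mapToLong(vm -> vm.cpu)
                .sum();
    }
}
```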

* Add integration test for resource count of only running vms
2020-01-30 10:36:50 +01:00
Rohit Yadav 7c6777b8d3 Merge branch '4.11': allow config drives on primary storage for KVM (#2651)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:50:55 +05:30
Marc-Aurèle Brothier 893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove the maven standard module (which only a few projects were using) and get rid of maven customization for the project structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30