Commit Graph

361 Commits

Author SHA1 Message Date
Rohit Yadav 77290df0d5 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-26 12:09:11 +05:30
Abhishek Kumar 88337bdea4
server: fix finding pools for volume migration (#4693)
While finding pools for volume migration, list the following compatible storages (see the selection sketch below):
- all zone-wide storages of the same hypervisor.
- when the volume is attached to a VM, all storages from the same cluster as that of the VM.
- for a detached volume, all storages that belong to clusters of the same hypervisor.

Fixes #4692 
Fixes #4400
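
A minimal sketch of those selection rules, using hypothetical stand-in types rather than the actual CloudStack server classes:

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the server-side types (sketch only).
record StoragePool(boolean zoneWide, String hypervisor, Long clusterId) {}
record Volume(String hypervisor) {}
record Vm(Long clusterId) {}

class MigrationPoolFinder {
    // Applies the three selection rules from the commit message above.
    static List<StoragePool> findPoolsForMigration(Volume vol, Vm attachedVm, List<StoragePool> allPools) {
        List<StoragePool> result = new ArrayList<>();
        for (StoragePool p : allPools) {
            if (p.zoneWide() && p.hypervisor().equals(vol.hypervisor())) {
                result.add(p);      // zone-wide pool of the same hypervisor
            } else if (!p.zoneWide() && attachedVm != null && p.clusterId().equals(attachedVm.clusterId())) {
                result.add(p);      // attached volume: pools in the VM's cluster
            } else if (!p.zoneWide() && attachedVm == null && p.hypervisor().equals(vol.hypervisor())) {
                result.add(p);      // detached volume: pools in any cluster of the same hypervisor
            }
        }
        return result;
    }
}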
2021-02-25 22:13:50 +05:30
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM (a declaration sketch for the new settings follows after this list)
	=> Changed the scope of the configuration "vm.configdrive.primarypool.enabled" from Global to Zone level
	=> Introduced new zone-level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone-level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when the storage pool doesn't support config drives
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created in the "/config" directory under the host cache path
	=> Maintain the config drive location and use it when required for any config drive operation (migrate, delete)

- Detect the virtual size from the template URL while registering direct-download qcow2 templates (KVM hypervisor)

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering a template, template size discovery is performed using any available host, picked randomly from one of the available zones

- Release the VM resources when a VM is synced to the Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist on the datastore and cannot be deleted from it

- Check whether the router filesystem is writable before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test both locations
	=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable

- Fixed an NPE where the template is null for DATA disks: copy the template to target storage only for the ROOT disk (which has a template id) and skip DATA disk(s)
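
Referring back to the config drive settings above: a sketch of how such zone-scoped settings could be declared with CloudStack's ConfigKey pattern. The constructor argument order is from memory and the descriptions are paraphrased, so this may not match the merged code exactly:

import org.apache.cloudstack.framework.config.ConfigKey;

public interface ConfigDriveSettingsSketch {
    // Zone-scoped toggle to force host cache for config drives (default false).
    ConfigKey<Boolean> ForceHostCacheUse = new ConfigKey<>("Advanced", Boolean.class,
            "vm.configdrive.force.host.cache.use", "false",
            "Force host cache for config drives", true, ConfigKey.Scope.Zone);

    // Zone-scoped fallback to host cache when the pool doesn't support config drives (default true).
    ConfigKey<Boolean> UseHostCacheOnUnsupportedPool = new ConfigKey<>("Advanced", Boolean.class,
            "vm.configdrive.use.host.cache.on.unsupported.pool", "true",
            "Use host cache for config drives when the storage pool doesn't support config drives",
            true, ConfigKey.Scope.Zone);
}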

* Addressed some issues for a few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
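
A minimal sketch of such a per-pool client cache with token renewal; the class names and renewal mechanics are illustrative stand-ins, not the plugin's actual code:

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Per-storage-pool gateway client cache that replaces the client when the
// session token expires (racing renewals are not handled in this sketch).
class GatewayClientPool {
    private final Map<Long, GatewayClient> clients = new ConcurrentHashMap<>();

    GatewayClient getClient(long storagePoolId) {
        GatewayClient client = clients.computeIfAbsent(storagePoolId, this::connect);
        if (client.sessionExpired()) {
            client = connect(storagePoolId);   // renew: re-authenticate and replace
            clients.put(storagePoolId, client);
        }
        return client;
    }

    private GatewayClient connect(long storagePoolId) {
        // Would authenticate against the pool's gateway URL and keep the token.
        return new GatewayClient(Instant.now());
    }
}

class GatewayClient {
    // Token valid for 8 hours from creation (the 10-minute idle timeout is omitted).
    private static final Duration TOKEN_LIFETIME = Duration.ofHours(8);
    private final Instant issuedAt;

    GatewayClient(Instant issuedAt) { this.issuedAt = issuedAt; }

    boolean sessionExpired() {
        return Instant.now().isAfter(issuedAt.plus(TOKEN_LIFETIME));
    }
}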

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on the router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on the router
	=> Clean up the health check results when the router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI changes to support the storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL is generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for the PowerFlex provider
- Allow VM Snapshot for a stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
- Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
- Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
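
A hypothetical validation sketch encoding those rules; the types are stand-ins, not the plugin's actual classes:

record PoolInfo(boolean powerFlex, String instanceId) {}
record VolumeInfo(boolean hasSnapshots, boolean vmRunning) {}

class PowerFlexMigrationChecks {
    // Encodes the migration rules from the commit message above.
    static void validate(VolumeInfo vol, PoolInfo src, PoolInfo dst) {
        if (src.powerFlex() != dst.powerFlex()) {
            throw new IllegalArgumentException(
                    "PowerFlex <-> non-PowerFlex volume migration is not supported");
        }
        if (!src.powerFlex()) {
            return; // the remaining rules apply only to PowerFlex pools
        }
        if (vol.vmRunning()) {
            throw new IllegalStateException(
                    "Volumes of a running VM cannot migrate to other PowerFlex pools");
        }
        if (vol.hasSnapshots() && !src.instanceId().equals(dst.instanceId())) {
            throw new IllegalStateException(
                    "Volumes with snapshots cannot migrate to a different PowerFlex instance");
        }
    }
}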

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Pearl Dsilva aa01580381
network: Specify IP for VR in shared networks (#4503)
This PR enables admins to specify IP for a VR in a shared network.
2021-02-18 13:54:09 +05:30
Abhishek Kumar d6e8b53736
vmware: vm migration improvements (#4385)
- Fixes inter-cluster migration of VMs
- Allows migration of stopped VM with disks attached to different and suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (clusters of the same Pod) for system VMs, VRs on VMware
- Allows storage migration for stopped system VMs, VRs on VMware within the same Pod if the StoragePool is cluster-scoped

Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-02-12 12:41:41 +05:30
Daan Hoogland b6b778f003 Merge release branch 4.14 to 4.15
* 4.14:
  server: select root disk based on user input during vm import (#4591)
  kvm: Use Q35 chipset for UEFI x86_64 (#4576)
  server: fix wrong error message when create isolated network without SourceNat (#4624)
  server: add possibility to scale vm to current customer offerings (#4622)
  server: keep networks order and ips while move a vm with multiple networks (#4602)
  server: throw exception when update vm nic on L2 network (#4625)
  doc: fix typo in install notes (#4633)
2021-02-01 09:57:35 +00:00
Wei Zhou 1913c6854e
server: keep networks order and ips while move a vm with multiple networks (#4602)
This PR fixes an issue when moving a VM from one account to another.

Steps to reproduce the issue:
(1) create a VM with multiple shared networks (in an advanced zone, or an advanced zone with security groups)
(2) create another account (in the same domain, which can also access the shared networks)
(3) move the VM to the new account, with a list of network ids

Expected result: the VM has NICs on the networks in the same order as specified in the API request, and the NICs keep the same IPs as before. Actual result: the network order is not the same as specified, and the IPs are changed.
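
A sketch of the fix's idea with stand-in types (not the server code): walk the network ids in request order and reuse the IP the VM already had on each network:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in NIC type for this sketch.
record Nic(long networkId, String ip) {}

class MoveVmNicsSketch {
    // Iterate the network ids in the order given in the API request, and reuse
    // the IP the VM already had on each network (a null IP here would mean a
    // new IP must be allocated for a network the VM was not on before).
    static List<Nic> buildNics(List<Long> requestedNetworkIds, List<Nic> existingNics) {
        Map<Long, String> oldIps = new HashMap<>();
        for (Nic nic : existingNics) {
            oldIps.put(nic.networkId(), nic.ip());
        }
        List<Nic> newNics = new ArrayList<>();
        for (Long networkId : requestedNetworkIds) {   // preserves request order
            newNics.add(new Nic(networkId, oldIps.get(networkId)));
        }
        return newNics;
    }
}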
2021-02-01 14:14:20 +05:30
Harikrishna Patnala 46b5322d9b Adding vSphere storage policy to disk on start command and attach volume command 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 9543fd6e6a Fix StartCommand on a datastore cluster when the volume's datastore in CloudStack mismatches the vCenter datastore; the volume could have been migrated within the datastore cluster, causing the mismatch
Fix detach volume when the volume is not on the CloudStack-intended datastore
2020-10-19 15:05:57 +05:30
nvazquez d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala 61dd85876b Fix migrate VM and volume APIs in case of a datastore cluster 2020-10-19 14:57:16 +05:30
Wei Zhou 2acd87c41e
server: Add global configuration vm.serviceoffering.cpu.cores.max and vm.serviceoffering.ram.size.max (#4379)
vm.serviceoffering.cpu.cores.max and vm.serviceoffering.ram.size.max
2020-10-14 15:48:35 +05:30
Pearl Dsilva b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- Balanced migration of data objects from a source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
2020-09-17 10:12:10 +05:30
Wei Zhou 4746c8c726
server: move UpdateDefaultNic to vm work job queue (#4020)
While removing a secondary NIC from a Running VM, if the default NIC is updated to that secondary NIC before the NIC is removed, the VM will have no default NIC (and cannot be started) when both operations are completed.

This is because the UpdateDefaultNic API is not handled as a VM work job (AddNicToVMCmd and RemoveNicFromVMCmd are), so it is processed before the NIC is removed. The result is that the secondary NIC becomes the default NIC and then gets removed.
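
An illustrative sketch (not CloudStack's VmWork framework) of why routing every operation for a VM through one queue fixes the race: chaining work onto a per-VM future serializes it, so an UpdateDefaultNic submitted after RemoveNicFromVM can no longer overtake it.

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Chaining every operation for a VM onto one future serializes them per VM
// (error handling omitted for brevity).
class PerVmWorkQueue {
    private final Map<Long, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    synchronized CompletableFuture<Void> submit(long vmId, Runnable work) {
        CompletableFuture<Void> tail =
                tails.getOrDefault(vmId, CompletableFuture.completedFuture(null));
        CompletableFuture<Void> next = tail.thenRun(work);
        tails.put(vmId, next);
        return next;
    }
}

With both NIC removal and the default-NIC update going through submit(vmId, ...), the update only runs after the removal completes instead of racing ahead of it.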
2020-09-01 13:54:48 +05:30
Rohit Yadav 562a7db8df Merge remote-tracking branch 'origin/4.14'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-08-05 23:59:16 +05:30
Wei Zhou cd8e28b279
server: Move restoreVM to vm work job queue (#4019) 2020-08-05 09:46:55 +00:00
Pearl Dsilva a73712ec4e
server: Enable sending hypervisor host name via metadata - VR and Config Drive (#3976)
Enable sending hypervisor host details via metadata for VR and Config Drive providers

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-07-01 08:44:11 +05:30
Nicolas Vazquez 8c1d749360
[VMware] Enable unmanaging guest VMs (#4103)
* Enable unmanaging guest VMs

* Minor fixes

* Fix stop usage event only if VM is not stopped when unmanaging

* Rename unmanaged VMs manager

* Generate netofferingremove usage event if VM is not stopped

* Generate usage event VM snapshot primary off when unmanaging
2020-06-26 08:31:43 -03:00
harikrishna-patnala 0d4f67ad8c
server: NPE occurred when dynamic scaling was tried on a VM and, as part of this, the VM tried to migrate because the current host did not have capacity. (#3998)
Repro Steps:
1. Create a VM on host1
2. Make host1 capacity full by deploying multiple VMs
3. Try Dynamic scaling on VM on host1
4. NPE occurs when MS tries to find host to migrate the VM and then scale.

Root cause: the VM profile is not initialized properly with the service offering before planning for deployment

Solution: Initialize the VM profile with the service offering and also make sure custom compute parameters are handled
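
A sketch of that idea with stand-in types (not CloudStack's VirtualMachineProfile): build the profile with the target offering before asking the planner for a host, and carry the custom compute parameters along:

import java.util.HashMap;
import java.util.Map;

// Stand-in types for this sketch.
record Vm(long id) {}
record ServiceOffering(Integer cpuNumber, Integer memoryMb) {}

class VmProfile {
    final Vm vm;
    final ServiceOffering offering;                        // set before planning
    final Map<String, String> customParams = new HashMap<>();

    VmProfile(Vm vm, ServiceOffering offering) {
        this.vm = vm;
        this.offering = offering;
    }
}

class DynamicScaleSketch {
    // Build the profile with the target offering up front, so the deployment
    // planner never sees a profile without a service offering, and carry over
    // custom compute parameters for custom offerings.
    static VmProfile prepareForScale(Vm vm, ServiceOffering newOffering,
                                     Map<String, String> customComputeParams) {
        VmProfile profile = new VmProfile(vm, newOffering);
        profile.customParams.putAll(customComputeParams);
        return profile;
    }
}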
2020-06-18 09:19:19 +05:30
Rohit Yadav 77947f23fd Merge remote-tracking branch 'origin/4.13' into 4.14
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-06-16 12:21:48 +05:30
harikrishna-patnala 5054766d9f
server: Submitting multiple dynamic VM scaling API commands for the same instance can result in two usage events in the same second, causing a compound key violation in the usage service (#3991)
Root cause:
Even though the dynamic scaling job is handled in the VM work job queue, which serializes multiple jobs, the database updates and usage-event generation happen outside the job queue.

Solution:
Moved all updates into the job queue

First, I tested all the scenarios to check that nothing is broken:
- Scaling a running VM with a normal compute offering
- Scaling a stopped VM with a normal compute offering
- Scaling a running VM with a custom compute offering
- Scaling a stopped VM with a custom compute offering
- Scaling a stopped/running VM between custom and normal compute offerings, and combinations among these; checked whether the custom parameters were populated or deleted accordingly based on the offering to which the VM is scaled
Since this is a corner scenario, I could not test the exact point where two usage events are recorded at the same time for two different API calls on the same VM.
2020-06-16 11:41:14 +05:30
pavanaravapalli d4b537efa7
UEFI implementation: enabled UEFI support for guest VMs on KVM and VMware hypervisors; enabled boot modes [Legacy, Secure] for UEFI boot, with known caveats. (#3638)
Co-authored-by: Pavan Kumar Aravapalli <pavan_aravapalli@accelerite.com>
Co-authored-by: dahn <daan.hoogland@shapeblue.com>
2020-03-13 20:56:26 +01:00
Nicolas Vazquez 73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add free disk space on the host prior template download

* Add unit tests for the free space check

* Fix free space check - retrieve available size in bytes

* Update default location for direct download

* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope

* Verify location for temporary download exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
2020-03-06 19:56:54 +01:00
Wei Zhou ce894238d9
vpc: add bypassvlanoverlapcheck parameter when create private g… (#3899) 2020-02-23 21:21:08 +00:00
Nicolas Vazquez ce896a477d
[Vmware] Enable PVLAN support on L2 networks (#3732)
* Enable PVLAN support on L2 networks

* Fix prevent null pointer on details

* Add marvin tests

* Fixes from comments

* Fix: missing pvlan type on PlugNicCommand

* Fix checks on network creation for vlans overlap

* Fix remove prefix from secondary vlan id

* Improve checks on physical network for pvlans

* Fix compatibility with previous pvlan creation

* Fix shared networks backwards pvlan compatibility

* Add ui fix for pvlan type not passed to api

* Add check for isolated vlan id overlap

* Include check for dynamic vlan reserved for secondary vlan

* Fix marvin tests errors

* Fix redundant imports

* Skip marvin test for pvlan if dvswitch is not present

* spelling

Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
2020-02-07 15:43:01 +01:00
Abhishek Kumar 0f5b0e67f8
VM ingestion (#3606)
The VM ingestion feature allows CloudStack to discover, on-board, and import existing VMs in an infrastructure. The feature currently works only for VMware, with a hypervisor-agnostic framework that may be extended to KVM and XenServer in the future.
2020-02-03 15:43:52 +01:00
Wei Zhou ac581d1546
New feature: Resource count (CPU/RAM) take only running vms into calculation (#3760)
* marvin: check resource count of more types

* New feature: add flag resource.count.running.vms.only to count resource consumption of only running vms

Stopped VMs do not actually use CPU/RAM.
A new global configuration resource.count.running.vms.only is added to determine whether only running VMs (including Starting/Stopping) are taken into account when calculating resource (CPU/memory) consumption (sketched below).

* Add integration test for resource count of only running vms
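
A minimal sketch of the flag's effect with stand-in types; the state names follow the commit message, everything else is illustrative:

import java.util.List;

// Stand-in types for this sketch.
enum State { Running, Starting, Stopping, Stopped, Destroyed }
record VmRecord(State state, int cpu, long ramMb) {}

class ResourceCounterSketch {
    // When resource.count.running.vms.only is true, only Running (plus
    // Starting/Stopping) VMs contribute to the CPU count; otherwise all do.
    static long countCpu(List<VmRecord> vms, boolean runningVmsOnly) {
        return vms.stream()
                .filter(vm -> !runningVmsOnly
                        || vm.state() == State.Running
                        || vm.state() == State.Starting
                        || vm.state() == State.Stopping)
                .mapToLong(VmRecord::cpu)
                .sum();
    }
}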
2020-01-30 10:36:50 +01:00
Abhishek Kumar b2db8979f2 server: fix for respecting secondary storage threshold limit (#3480)
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three different methods:
DataStore getRandomImageStore(List<DataStore> imageStores);
- To get an image store for reading purposes. The threshold capacity check will not be used here.
DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
- To get an image store for writing purposes. The threshold capacity check will be used here, and the store with the most free space will be returned. If no store with used storage below the threshold is found, NULL will be returned.
List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
- To get a list of image stores for writing purposes which fulfill the threshold capacity check.

Correspondingly, DataStoreManager methods have been refactored to return similar values for a given zone. A selection sketch for the write path follows below.
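
A sketch of that write-path selection with a stand-in DataStore type; the threshold value is illustrative, not the actual setting:

import java.util.Comparator;
import java.util.List;

// Stand-in DataStore; the 90% threshold is illustrative only.
record DataStore(long freeBytes, double usedFraction) {}

class ImageStoreSelectorSketch {
    static final double CAPACITY_THRESHOLD = 0.90;

    // Pick the store with the most free space among those still below the
    // capacity threshold; return null when every store is above it.
    static DataStore getImageStoreWithFreeCapacity(List<DataStore> stores) {
        return stores.stream()
                .filter(s -> s.usedFraction() < CAPACITY_THRESHOLD)
                .max(Comparator.comparingLong(DataStore::freeBytes))
                .orElse(null);
    }
}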

Fixes #3287 - NULL value will be returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning random secondary storage for writing, storage with max. free space will be returned.
Fixes #3478 - For migration on VMware, all writable secondary storage will be mounted while preparation.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2019-07-31 15:37:59 +05:30
Nicolas Vazquez 0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:
- Source and destination hosts are within the same cluster
- From NFS primary storage to NFS cluster-wide primary storage
- Source NFS and destination NFS storage are mounted on the hosts

In order to enable this functionality, the database should be updated to enable the live storage capability for KVM, if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS 7; however, this works with CentOS 6.
Fix for CentOS 7:

Create a repo file under /etc/yum.repos.d/:

[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0

Then install the packages and reboot the host:

yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Rohit Yadav bbc0ae873d
storage: post process locally uploaded multi-disk ova template (#3215)
Problem: When a multi-disk OVA template is uploaded, only the root disk is recognized and VMs deployed using such template only get the root disk provisioned.
Root Cause: The template processor for multi-disk OVA was not used in the template upload processor.
Solution: Added support for local multi-disk OVA template upload. After a multi-disk OVA template is uploaded, the mechanism that worked on multi-disk OVA templates registered using a URL is now also used to discover and create data-disk templates in the cloud.vm_template table and on secondary storage.

To enable SSL on SSVMs :
• Upload the certificates like you usually do via the API or UI->Infrastructure tab
• Set the global settings secstorage.encrypt.copy, secstorage.ssl.cert.domain to appropriate values
along with the CPVM ones
• Restart management server (no need to destroy/restart SSVM (or the ssvm agent))

Test cases:
- Upload template and check it creates multi-disk folders on secondary 
storage and entries in cloud.vm_template table
- Upload template and kill/shutdown management server. Then restart MS
to check if template sync works
- Copy template across zone of an uploaded template

Signed-off-by: Rohit Yadav rohit.yadav@shapeblue.com
2019-06-05 23:07:40 +05:30
Rohit Yadav b2b99ca63e Merge remote-tracking branch 'origin/4.11' into 4.12
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-03 17:15:41 +05:30
Nicolas Vazquez c9ce3e2344 router: Persistent DHCP leases file on VRs and cleanup /etc/hosts on VM deletion (#3351)
Since the CloudStack virtual router was redesigned in version 4.6, it has been observed that the DHCP leases file is not persistent across network operations. This causes conflicts with guest VMs' static IPs, causing these static IPs to not be renewed by the DHCP server running on isolated and VPC networks' virtual routers (dnsmasq). On stopping or destroying a VM, its dhcp/dns records are not removed from the virtual router, causing ghost effects.

Fixes #3272
Fixes #3354

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-03 17:04:16 +05:30
Nathan Johnson 637cc6ec4e feature: add libvirt / qemu io bursting (#3133)
* feature: add libvirt / qemu io bursting

Adds the ability to set bursting features from libvirt / qemu

This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.

https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/

* updates per rafael et al
2019-02-04 19:47:44 -02:00
GabrielBrascher 460d3127ec Fix conflict and merge forward PR #3122 from 4.11 to master (4.12) 2019-02-04 19:24:59 -02:00
Nathan Johnson bf805d1483 Add back ability to disable backup of snapshot to secondary (#3122)
* The snapshot.backup.rightafter configuration variable was removed by:

SHA: 6bb0ca2f85

This adds it back, though named snapshot.backup.to.secondary now instead.

This global parameter, once set, will allow you to prevent automatic backups of snapshots to secondary storage, unless they're actually needed.

Fixes #3096

* updates per review
2019-02-04 19:08:42 -02:00
dahn b363fd49f7 Vmware offline migration (#2848)
* - Offline VM and Volume migration on VMware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations

* Fix indentation of marvin test file and reformat against PEP8

* * Fix a few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.

* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one

* Fix unhandled NPE during VM migration

* Revert back to distinct event descriptions for VM to host or storage pool migration

* Reformat test_primary_storage file against PEP-8 and Remove unused imports

* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
2019-01-25 10:05:13 -02:00
Gabriel Beims Bräscher e45bed74a5 server: remove unused StrategyPriority.PLUGIN. (#3014)
Remove unused StrategyPriority.PLUGIN enum. The PLUGIN Strategy priority is not used, except by three JUnit test methods.
2018-11-14 15:07:37 +05:30
Rohit Yadav 233f46c94b Merge remote-tracking branch 'origin/4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-10-17 20:29:58 +05:30
Rohit Yadav 5ce14df31f
network: Allow ability to disable rolling restart feature (#2900)
This adds a global setting for admins who may not want the rolling
restart of routers or are seeing any issues around it. In future, this
setting may be removed.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-10-17 20:27:08 +05:30
Mike Tutkowski 3db33b7385 Support online migration of a virtual disk on XenServer from non-managed storage to managed storage 2018-08-12 00:23:36 -06:00
Rohit Yadav 7c6777b8d3 Merge branch '4.11': allow config drives on primary storage for KVM (#2651)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:50:55 +05:30
Rohit Yadav acc5fdcdbd
CLOUDSTACK-10290: allow config drives on primary storage for KVM (#2651)
This introduces a new global setting `vm.configdrive.primarypool.enabled` to toggle creation/hosting of config drive ISO files on primary storage; the default is false, causing them to be hosted on secondary storage. The current support is limited on the hypervisor resource side and, in the current implementation, to `KVM` only. The next big change is that the config drive is created at a temporary location by the management server and shipped to either the KVM or SSVM agent via the cmd-answer pattern, the data of which is not written to logs. This saves us from adding a genisoimage dependency on the cloudstack-agent pkg.

The APIs to reset the SSH public key, password, and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating an existing ISO. If there are objections, I'll reinstate the detach+attach of a new config ISO as the update strategy. In the refactored implementation, the folder name is changed to lower-cased configdrive. During VM start, migration, or shutdown/removal, if primary storage is enabled for use, the KVM agent will handle cleanup tasks; otherwise the SSVM agent will handle them.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:27:23 +05:30
Rafael Weingärtner 15eddf3dd6 Merge forward branch '4.11' PR #2629
Fix primary storage count when deleting volumes (#2629)
2018-05-16 16:59:17 -03:00
Rafael Weingärtner b9ed42bd29
Fix primary storage count when deleting volumes (#2629)
* Primary Storage count for an account does not decrease when a Data Disk is deleted

When a data disk is created and not attached to a running VM, "deleteVolume" will not decrement the used primary storage count in the VM's accounting information. The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method.

Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with listAccounts API - it is the same as before deleting the data disk (it should have returned to the value from step 2!)
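
A sketch of the fix's idea with a stand-in interface (not the server's actual resource accounting code): on data disk deletion, decrement the account's primary storage counter by the volume size:

// Stand-in interface for this sketch.
class DeleteVolumeSketch {
    interface ResourceLimitService {
        void decrementResourceCount(long accountId, String resourceType, long delta);
    }

    // On deletion of a detached data disk, decrement the account's primary
    // storage counter (backing "primarystoragetotal" in listAccounts) by the
    // volume size, mirroring what already happened for attached volumes.
    static void onDeleteVolume(ResourceLimitService limits, long accountId, long volumeSizeBytes) {
        limits.decrementResourceCount(accountId, "primary_storage", volumeSizeBytes);
    }
}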

* formatting and cleanups

* fix imports that were wrongly changed during rebase
2018-05-16 15:28:28 -03:00
Rohit Yadav 65511c4335 Merge branch '4.11': Reduce VR downtime during network restart (#2508)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-11 13:09:03 +05:30
Rohit Yadav a77ed56b86
CLOUDSTACK-9114: Reduce VR downtime during network restart (#2508)
This introduces a rolling restart of VRs when networks are restarted
with cleanup option for isolated and VPC networks. A make redundant option is
shown for isolated networks now in UI.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-11 12:48:07 +05:30
Marc-Aurèle Brothier 46bd94c6a2 [CLOUDSTACK-10254] checkstyle: add package name declaration validation (#2422)
* checkstyle: verify package name matches directory structure

* fix new checkstyle findings on directory with package name mismatch
2018-04-26 10:32:08 -03:00
Marc-Aurèle Brothier 893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove the maven standard module (which only a few projects were using) and get rid of maven customization for the project structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30
Mike Tutkowski a30a31c9b7 CLOUDSTACK-9620: Enhancements for managed storage (#2298)
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)

Added support for root disks on managed storage with KVM

Added support for volume snapshots with managed storage on KVM

Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage

Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.

Included support for Reinstall VM on KVM with managed storage

Enabled offline migration on KVM from non-managed storage to managed storage and vice versa

Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)

Added support to download (extract) a managed-storage volume to a QCOW2 file

When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.

Included support for the KVM auto-convergence feature

The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)

On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.

Added a new Global Setting: kvm.storage.live.migration.wait

For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)

Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.

Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot

Added an SIOC API plug-in to support VMware SIOC

When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.

Added the ability to revert a volume to a snapshot

Enabled cluster-scoped managed storage

Added support for VMware dynamic discovery
2018-01-15 00:05:52 +05:30
Frank Maximus b176648f90 CLOUDSTACK-9813: Extending Config Drive support (#2097)
Extending Config Drive support

* Added support for VMware
* Build configdrive.iso on ssvm
* Added support for VPC and Isolated Networks
* Moved implementation to new Service Provider
* UI fix: add support for urlencoded userdata
* Add support for building systemvm behind a proxy

Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
2018-01-12 15:14:40 +05:30