Commit Graph

983 Commits

Author SHA1 Message Date
Spaceman1984 d57aa83517
server: Added nfs minor version support (#4180)
This PR adds minor version support when mounting NFS on the SSVM, as requested in #2861

The global setting "secstorage.nfs.version" has been changed to use the String data type which allows any minor version to be specified.
2020-08-19 14:53:38 +05:30
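As a hedged illustration of the setting described above (not the actual CloudStack code), the sketch below shows how a String-typed value such as "secstorage.nfs.version" could be validated and turned into an NFS mount option; the class and method names are hypothetical.

import java.util.regex.Pattern;

/** Hypothetical sketch: turning a String "secstorage.nfs.version" value into a mount option. */
public class NfsVersionOption {
    // Accepts major-only or major.minor values such as "3", "4", "4.1", "4.2".
    private static final Pattern NFS_VERSION = Pattern.compile("\\d+(\\.\\d+)?");

    static String toMountOption(String nfsVersion) {
        if (nfsVersion == null || nfsVersion.isEmpty()) {
            return "";                                   // fall back to the client default
        }
        if (!NFS_VERSION.matcher(nfsVersion).matches()) {
            throw new IllegalArgumentException("Invalid NFS version: " + nfsVersion);
        }
        return "-o vers=" + nfsVersion;                  // e.g. "-o vers=4.1"
    }

    public static void main(String[] args) {
        System.out.println("mount -t nfs " + toMountOption("4.1") + " host:/export /mnt/sec");
    }
}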
nvazquez 7e3b61b723 Merge branch '4.14' 2020-07-18 14:17:43 -03:00
nvazquez 5c6e79b1eb Merge branch '4.13' into 4.14 2020-07-18 14:15:46 -03:00
Nicolas Vazquez f843c537f0
Fix snapshots garbage collection (#4188)
* Cleanup orphan entries from snapshot store ref for primary storage

* Add debug message
2020-07-18 14:12:53 -03:00
Nicolas Vazquez 8c1d749360
[VMware] Enable unmanaging guest VMs (#4103)
* Enable unmanaging guest VMs

* Minor fixes

* Fix stop usage event only if VM is not stopped when unmanaging

* Rename unmanaged VMs manager

* Generate netofferingremove usage event if VM is not stopped

* Generate usage event VM snapshot primary off when unmanaging
2020-06-26 08:31:43 -03:00
Rohit Yadav de3ccd2c29 Merge remote-tracking branch 'origin/4.14' 2020-06-15 09:56:55 +05:30
Rohit Yadav e94a54f3b4 Merge remote-tracking branch 'origin/4.13' into 4.14 2020-06-15 09:56:06 +05:30
Spaceman1984 6a683dcf77
storage: Fixed null pointer (#4130)
Fixes #4090

When trying to migrate a VM across 2 clusters, if a snapshot has been deleted and garbage collection has run to update the removed field, it is not possible to migrate the instance due to a null pointer.
2020-06-15 09:54:22 +05:30
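A minimal sketch of the kind of defensive handling this fix implies: skip snapshot store references whose removed field has been set by garbage collection instead of dereferencing them. The types and field names below are simplified stand-ins, not the CloudStack DAO classes.

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

/** Hypothetical sketch: ignore snapshot references already marked removed instead of hitting an NPE. */
public class SnapshotRefFilter {
    static class SnapshotStoreRef {
        final long snapshotId;
        final Date removed;                              // set by garbage collection when the snapshot is deleted
        SnapshotStoreRef(long snapshotId, Date removed) { this.snapshotId = snapshotId; this.removed = removed; }
    }

    static List<SnapshotStoreRef> liveRefs(List<SnapshotStoreRef> refs) {
        List<SnapshotStoreRef> result = new ArrayList<>();
        for (SnapshotStoreRef ref : refs) {
            if (ref == null || ref.removed != null) {
                continue;                                // deleted snapshot: nothing to migrate
            }
            result.add(ref);
        }
        return result;
    }

    public static void main(String[] args) {
        List<SnapshotStoreRef> refs = new ArrayList<>();
        refs.add(new SnapshotStoreRef(1L, null));
        refs.add(new SnapshotStoreRef(2L, new Date()));  // garbage-collected snapshot
        System.out.println(liveRefs(refs).size() + " snapshot(s) to migrate");
    }
}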
andrijapanicsb 5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb 05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb 6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
andrijapanicsb 398e685e01 Updating pom.xml version numbers for release 4.13.2.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-04-29 12:29:12 +01:00
Daan Hoogland 689e529d7b Merge release branch 4.13 to master
* 4.13:
  Fixed guest VLAN range going missing when using zone wizard (#4042)
  Volume migration (#4043)
2020-04-23 20:19:30 +02:00
andrijapanicsb b2ffa3efa5 Updating pom.xml version numbers for release 4.13.1.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-04-23 19:17:09 +01:00
dahn c1570b9c91
Volume migration (#4043)
* Update AncientDataMotionStrategy.java

Fix: when secondary storage usage is > 90%, volume migration across primary storage fails and the volume is lost

* Update AncientDataMotionStrategy.java

Volume migration across primary storage: if no secondary storage is available (or used capacity is > 90%), the migration is canceled.
Before this change, if secondary storage could not be found, copyVolumeBetweenPools returned null.

copyAsync treats answer = null as a sign of successful task execution, so it deletes the volume on the old primary storage. This is the root cause of the data loss, because the volume was never actually migrated.

* code in comment removed

Co-authored-by: div8cn <35140268+div8cn@users.noreply.github.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
2020-04-23 19:56:27 +02:00
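To make the root cause above concrete, here is a hedged sketch (with a simplified, hypothetical Answer type) of the safer pattern: return an explicit failure instead of null when the copy cannot run, so the caller never mistakes a missing answer for success and deletes the source volume.

/** Hypothetical sketch: never signal success with null when the copy could not run at all. */
public class CopyVolumeSketch {
    static class Answer {
        final boolean success;
        final String details;
        Answer(boolean success, String details) { this.success = success; this.details = details; }
    }

    static Answer copyVolumeBetweenPools(boolean secondaryStorageAvailable) {
        if (!secondaryStorageAvailable) {
            // Returning null here would let the caller treat the copy as successful
            // and delete the source volume, losing data. Fail explicitly instead.
            return new Answer(false, "No secondary storage with enough free capacity for the migration");
        }
        return new Answer(true, null);
    }

    public static void main(String[] args) {
        Answer answer = copyVolumeBetweenPools(false);
        if (answer == null || !answer.success) {
            System.out.println("Migration aborted, source volume kept: "
                    + (answer == null ? "no answer" : answer.details));
        }
    }
}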
Daan Hoogland b984184b7a Merge release branch 4.13 to master
* 4.13:
  Snapshot deletion issues (#3969)
  server: Cannot list affinity group if there are hosts dedicated… (#4025)
  server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002)
2020-04-11 16:45:00 +02:00
dahn f18fe5e1da
Snapshot deletion issues (#3969)
* Fixes snapshot deletion

* Remove legacy '@Component', it is not necessary in this bean/class.

* Fix log message missing %d and remove snapshot on DB

* Remove "dummy" boolean return statement

* Manage snapshot deletion for KVM + NFS (primary storage)

* checkstyle trailing spaces

* rename options strings to *_OPTION

* Fix typo on deleteSnapshotOnSecondaryStorage and enhance log message

* Move the snapshotDao.remove(snapshotId); (#4006)

* Fix deletesnapshot workflow to handle both snapshots created in primary storage and snapshots backed up to secondary storage

* Fix extra space

* refactor out separate handling methods for secondary and primary (reducing returns)

* return false on unexpected error or log when expected

* != instead of ==

* secondary instead of backup storage

* init to null

* Handle snapshot deletion on primary storage. When primary store ref not found for snapshot do not fail the operation.

* Fix debug levels on log messages

Co-authored-by: GabrielBrascher <gabriel@apache.org>
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
2020-04-11 16:40:27 +02:00
Wei Zhou 6bf92fb136
server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002) 2020-04-06 22:01:40 +02:00
Nicolas Vazquez 73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add free disk space on the host prior template download

* Add unit tests for the free space check

* Fix free space check - retrieve available size in bytes

* Update default location for direct download

* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope

* Verify location for temporary download exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
2020-03-06 19:56:54 +01:00
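A small illustrative sketch of the free-space check idea mentioned above, assuming a temporary download directory path; it is not the agent's actual implementation.

import java.io.File;

/** Hypothetical sketch: verify the temporary download location exists and has room for the template. */
public class FreeSpaceCheck {
    static boolean hasEnoughSpace(String tempDownloadDir, long templateSizeInBytes) {
        File dir = new File(tempDownloadDir);
        if (!dir.exists() || !dir.isDirectory()) {
            return false;                                // location must exist before checking free space
        }
        long usableBytes = dir.getUsableSpace();         // available bytes on the underlying filesystem
        return usableBytes > templateSizeInBytes;
    }

    public static void main(String[] args) {
        // Illustrative values only; the actual default location and sizes come from agent.properties.
        System.out.println(hasEnoughSpace("/var/lib/libvirt/images", 5L * 1024 * 1024 * 1024));
    }
}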
Rohit Yadav 58cf300fb6 Merge remote-tracking branch 'origin/4.13' 2020-03-06 14:22:46 +05:30
Nicolas Vazquez bd7d41bf6d
server: fix VM with ISO attached migration issue (#3935)
As previously described by PR #3929:
If a VM has an ISO attached, the migration fails with the error message "org.libvirt.LibvirtException: Cannot access storage file /mnt/b33e5a1d-e4ea-3465-b6ac-c98dc8ff8af0/207-2-cc5fd717-2d57-3bb3-bcf6-2c930268db6c.iso"
2020-03-06 13:32:19 +05:30
Abhishek Kumar 0ad2370baf
Enable Direct Download for System VMs (#3731)
* changes for configurable timeouts for direct download

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* server: refactor direct download config value retrieval

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactored direct download cmd, downloader classes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* server, services: allow direct download template for SSVM, CPVM

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* list bypassed system templates

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* ignore direct download template during system template download

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* add direct download entry while adding store

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix previous change, do not add multiple entries for direct download

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* connection request timeout as hidden configuration

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix template zone ref cleanup on zone deletion

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix previous commit test error, change implementation

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactored zone template cleanup

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2020-02-26 13:38:31 +01:00
Rohit Yadav d90341ebf1
cloudstack: add JDK11 support (#3601)
This adds support for JDK11 in CloudStack 4.14+:

- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-02-12 12:58:25 +05:30
Wei Zhou fd5bea838b
New feature: Add support to destroy/recover volumes (#3688)
* server: fix resource count of primary storage if some volumes are Expunged but not removed

Steps to reproduce the issue
(1) create a vm and stop it. check resource count of primary storage
(2) download volume. resource count of primary storage is not changed.
(3) expunge the VM; the volume will be in the Expunged state as there is a volume snapshot on secondary storage. The resource count of primary storage decreases.
(4) update resource count of the account (or domain), the resource count of primary storage is reset to the value in step (2).

* New feature: Add support to destroy/recover volumes

* Add integration test for volume destroy/recover

* marvin: check resource count of more types

* messages translate to JP

* Update messages for CN

* translate message for NL

* fix two issues per Daan's comments

Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
2020-02-07 11:25:10 +01:00
Rohit Yadav 424f10cc77 Merge remote-tracking branch 'origin/4.13' 2020-01-31 14:18:11 +05:30
Abhishek Kumar 9d105b6546
template: copy md5 mismatch (#3383)
Fixes #3191

When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file can be deleted after template installation if it is not the actual (.qcow2, .ova, etc.) file. When the user copies a template using the copyTemplate API, the actual template file is copied across the image stores. Matching the checksum of the copied template file against the stored value from the vm_template table then results in a mismatch.
The change sets an empty checksum value for the copied template when passing it to the download service, which skips the wrong-checksum check for the copy during install.
However, this results in a changed checksum value for the concerned template entry in the vm_template table after the template install.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-01-31 14:16:37 +05:30
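For context on why the checksums diverge, here is a hedged sketch of recomputing an MD5 checksum over a file, the kind of verification that fails when the stored value was computed over the originally downloaded archive rather than the installed template file. The path in main is illustrative only.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

/** Hypothetical sketch: recomputing an MD5 checksum of a template file for verification. */
public class TemplateChecksum {
    static String md5Of(Path file) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // If the stored checksum was taken from the originally downloaded archive and the copied
        // file is the installed .qcow2/.ova, the two values will not match.
        System.out.println(md5Of(Path.of("/tmp/some-template.qcow2")));
    }
}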
Gabriel Beims Bräscher 93aad24bbb storage: Handle RBD snapshot deletion (#3615)
When deleting volume snapshots, only the records in the database are deleted; the snapshots are not deleted on the primary storage.

Fixes: #3586
2019-12-08 14:48:51 +05:30
Rohit Yadav 7f5096a4d0
storage: don't select an SSVM that is removed (#3668)
In case an older SSVM is removed without changing its state from Up
to Destroyed/Removed etc., the SSVM may be randomly selected for image
store related operations. This fix ensures that endpoints for an image
store are found only from the set of SSVM hosts that are not removed.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-11-12 00:47:21 +05:30
Rohit Yadav b0e3fbec3a Merge remote-tracking branch 'origin/4.13' 2019-11-11 21:57:59 +05:30
dahn 95fbe7c55b datamotion: snapshot failure diagnostics unhidden (#3666)
Diagnostics are hard when a snapshot fails with a null pointer, because no stack trace or location of the error is logged, e.g.:
2019-10-21 12:55:00,056 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-131:ctx-80420156 job-10033827/job-10033828 ctx-4864e2f5) (logid:21454564) copy snasphot failed: java.lang.NullPointerException
2019-11-11 21:55:36 +05:30
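A minimal sketch of the logging difference this commit is about: passing the throwable to the logger preserves the stack trace and failure location, whereas concatenating it into the message does not. This uses java.util.logging purely for illustration, not the project's actual logging framework.

import java.util.logging.Level;
import java.util.logging.Logger;

/** Hypothetical sketch: log the throwable itself so the stack trace and failure location are kept. */
public class SnapshotCopyLogging {
    private static final Logger LOG = Logger.getLogger(SnapshotCopyLogging.class.getName());

    public static void main(String[] args) {
        try {
            String snapshotPath = null;
            snapshotPath.length();                       // provoke an NPE for illustration
        } catch (NullPointerException e) {
            // Message-only logging hides where the NPE happened:
            LOG.fine("copy snapshot failed: " + e);
            // Passing the throwable preserves the full stack trace for diagnostics:
            LOG.log(Level.WARNING, "copy snapshot failed", e);
        }
    }
}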
Paul Angus 50fc045f36 Updating pom.xml version numbers for release 4.14.0.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-07 09:57:46 +01:00
Paul Angus 61b8b77913 Updating pom.xml version numbers for release 4.13.1.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-01 13:36:50 +01:00
Paul Angus 8e08b47cc9 Updating pom.xml version numbers for release 4.13.0.0
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-08-20 15:35:49 +01:00
Nicolas Vazquez bfc08715cc Display VM snapshot tags on usage records (#3560)
* Refactor usage helper tables to include VM snapshot id

* Fix resource type and resource id while listing usage records

* Add defensive checks
2019-08-20 14:20:23 +01:00
Rohit Yadav b576972f71
test: stabilize 4.13/master (#3547)
Fix failing smoketests, fix NPEs. 

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-08-13 11:51:10 +05:30
Nicolas Vazquez 3c2af55d81 vmware: allow configuring appliances on the VM instance wizard when OVF properties are available (#3271)
Problem: In VMware, appliances with options that must be answered before deployment are configurable through the vSphere vCenter user interface, but this is not possible from the CloudStack user interface.

Root cause: CloudStack does not handle vApp configuration options during deployments if the appliance contains configurable options. These configurations are mandatory for VM deployment from the appliance on VMware vSphere vCenter: vCenter detects that there are mandatory configurations the administrator must set before deploying the VM from the appliance.

Solution:
On template registration, after it is downloaded to secondary storage, the OVF file is examined and OVF properties are extracted from the file when available.
OVF properties extracted from templates after being downloaded to secondary storage are stored on the new table 'template_ovf_properties'.
A new optional section is added to the VM deployment wizard in the UI:
If the selected template does not contain OVF properties, then the optional section is not displayed on the wizard.
If the selected template contains OVF properties, then the optional new section is displayed. Each OVF property is displayed and the user must complete every property before proceeding to the next section.
If any configuration property is left empty, a dialog is displayed indicating that there are empty properties which must be set before proceeding.
The specific OVF properties set on deployment are stored on the 'user_vm_details' table with the prefix: 'ovfproperties-'.
The VM is configured with the vApp configuration section containing the values that the user provided on the wizard.
2019-08-09 16:14:46 +05:30
Gabriel Beims Bräscher 5dc982d8ba KVM local migration issue #3521 (#3533)
Fix regression bug that affects KVM local storage migration. Some of the desired execution flows for KVM local storage migration had been altered to allow only managed storage to execute. Fixed to allow both managed and non-managed storage to execute.

Fixes #3521
2019-08-07 15:41:30 +05:30
Abhishek Kumar b2db8979f2 server: fix for respecting secondary storage threshold limit (#3480)
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three different methods:
DataStore getRandomImageStore(List<DataStore> imageStores);
To get an image store for reading purposes. The threshold capacity check is not used here.
DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
To get an image store for writing purposes. The threshold capacity check is used here and the store with the most free space is returned. If no store with used storage below the threshold is found, NULL is returned.
List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
To get a list of image stores for writing purposes which satisfy the threshold capacity check.

Correspondingly DataStoreManager methods have been refactored to return similar values for a given zone.

Fixes #3287 - NULL will be returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning a random secondary storage for writing, the storage with the most free space will be returned.
Fixes #3478 - For migration on VMware, all writable secondary storage will be mounted while preparation.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2019-07-31 15:37:59 +05:30
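A hedged sketch of the selection rule described above: filter out stores above a usage threshold, pick the one with the most free space, and return null when none qualifies. The ImageStore type here is a simplified stand-in, not the CloudStack DataStore interface.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Hypothetical sketch: pick a writable image store below the usage threshold with the most free space. */
public class ImageStoreSelection {
    static class ImageStore {
        final String name;
        final long capacityBytes;
        final long usedBytes;
        ImageStore(String name, long capacityBytes, long usedBytes) {
            this.name = name; this.capacityBytes = capacityBytes; this.usedBytes = usedBytes;
        }
        long freeBytes() { return capacityBytes - usedBytes; }
        double usedFraction() { return (double) usedBytes / capacityBytes; }
    }

    /** Returns null when every store is above the threshold, mirroring the behaviour described above. */
    static ImageStore storeWithFreeCapacity(List<ImageStore> stores, double usageThreshold) {
        return stores.stream()
                .filter(s -> s.usedFraction() < usageThreshold)
                .max(Comparator.comparingLong(ImageStore::freeBytes))
                .orElse(null);
    }

    public static void main(String[] args) {
        List<ImageStore> stores = new ArrayList<>();
        stores.add(new ImageStore("sec1", 1000, 950));   // 95% used, excluded at a 0.9 threshold
        stores.add(new ImageStore("sec2", 1000, 400));
        ImageStore chosen = storeWithFreeCapacity(stores, 0.90);
        System.out.println(chosen == null ? "no store with free capacity" : chosen.name);
    }
}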
Rohit Yadav d930982a1c engine/storage: remove unused import
Fixes checkstyle issue caused by previous commit 6a511fce40
from PR #3466 where a minor review fix did not address this. Merging this
one to unblock a few other PRs after running a local build test.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-07-24 11:59:08 +05:30
Gabriel Beims Bräscher 6a511fce40 kvm: Add ceph RBD snapshot rollback (#3502)
Add CephSnapshotStrategy to handle RBD revert (rollback) snapshot. In order to support RBD revert (rbd_rollback), this PR adds a CephSnapshotStrategy class to handle Ceph/RBD snapshot actions.
2019-07-23 19:40:56 +05:30
Rohit Yadav f30d716452
cloudstack: fix forward merge issues (#3394)
- Fixes tests path from old layout to standard maven in src/test/java/
- Removed duplicate SnapshotManagerImpl at old path `server/src/com...`

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-12 16:38:58 +05:30
Nicolas Vazquez 0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

From source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage migration capability for KVM if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default as the feature may not work in some environments, read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error seen is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS 7; however, this works with CentOS 6.
Fix for CentOS 7:

Create repo file on /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Rohit Yadav bbc0ae873d
storage: post process locally uploaded multi-disk ova template (#3215)
Problem: When a multi-disk OVA template is uploaded, only the root disk is recognized and VMs deployed using such template only get the root disk provisioned.
Root Cause: The template processor for multi-disk OVA was not used in the template upload processor.
Solution: Added support for local multi-disk OVA template upload. After a multi-disk OVA template is
uploaded, the mechanism that worked on multi-disk OVA templates registered using a URL is now also used to discover and create data-disk templates in the cloud.vm_template table and on the secondary storage.

To enable SSL on SSVMs :
• Upload the certificates like you usually do via the API or UI->Infrastructure tab
• Set the global settings secstorage.encrypt.copy, secstorage.ssl.cert.domain to appropriate values
along with the CPVM ones
• Restart management server (no need to destroy/restart SSVM (or the ssvm agent))

Test cases:
- Upload template and check it creates multi-disk folders on secondary 
storage and entries in cloud.vm_template table
- Upload template and kill/shutdown management server. Then restart MS
to check if template sync works
- Copy template across zone of an uploaded template

Signed-off-by: Rohit Yadav rohit.yadav@shapeblue.com
2019-06-05 23:07:40 +05:30
Gabriel Beims Bräscher 25c4f7fc08 kvm: Remove code that generated /var/lib/libvirt/images/null on target host (#3280)
This commit simplifies the generateDestPath method and fixes an issue where an extra file, named as 'null', was created on the target storage pool during VM local storage volume migration. Without this fix, the VM is migrated and there is no data loss; however, 193 KB is allocated for the unused file named as 'null' and the file stays on the target storage.
2019-05-27 18:15:29 +05:30
Rohit Yadav 0700d91a68 Merge branch '4.12'
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
  merging

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 15:15:17 +05:30
Rohit Yadav 00ff536f81 Merge remote-tracking branch 'origin/4.11' into 4.12
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 14:26:11 +05:30
skattoju4 4c60a5b1ff Fix slow vm creation when large sf snapshot count (#3282)
* skip getting used bytes for volumes that are not in Ready state
* updated log message
* filter snapshots by state backedup
* removed * import
* filter templates by state 'DOWNLOADED'
* refactored getUsedBytes to use O(1) queries
* querying for ready volumes instead of filtering in memory
* make listByStoreIdInReadyState more generic, e.g. listByStoreIdAndState
* updated snapshot search criteria for listByStoreIdAndState
* updated template search criteria for listByPoolIdAndState
* fixed typo in search criteria for listByTemplateAndState
* fixed typo in search criteria for templates in listByPoolIdAndState
2019-05-11 16:02:52 +02:00
Anurag Awasthi f9b61bc737 orchestration: Allow VM that has never started to have volumes attached (#3276)
With this patch b766bf7
we started tracking disks in the Attaching state so that other attach requests can fail gracefully. However, this missed the case where a disk was in the Allocated state when an attach was requested.

For the use case where users want to attach a disk that is in the Allocated state but not yet Ready, we need an Allocated-Attaching transition as well. We must take care of returning to the original state - Allocated or Ready - when the attach request has completed.

For the use case of a VM that has never been started, the disk proceeds as follows: Allocated -> Attaching -> Allocated. When the VM is started, the disk is "created" and a pool is assigned. For the use case of started VMs it is more trivial and the disk proceeds as follows: Ready -> Attaching -> Ready.

Test this by creating a VM with "startvm=false", creating a disk and attaching it while in the Allocated state. This throws an exception on the latest 4.11 but is fixed by this patch.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-10 23:40:38 +05:30
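A small sketch of the state transitions described above, using a simplified hypothetical enum rather than CloudStack's actual volume state machine.

/** Hypothetical sketch of the attach transitions described above. */
public class VolumeAttachStates {
    enum VolumeState { ALLOCATED, ATTACHING, READY }

    static VolumeState beginAttach(VolumeState current) {
        if (current == VolumeState.ALLOCATED || current == VolumeState.READY) {
            return VolumeState.ATTACHING;
        }
        throw new IllegalStateException("Cannot attach a volume in state " + current);
    }

    /** On completion the volume returns to whichever state it started from. */
    static VolumeState finishAttach(VolumeState stateBeforeAttach) {
        return stateBeforeAttach;
    }

    public static void main(String[] args) {
        VolumeState before = VolumeState.ALLOCATED;      // disk of a VM created with startvm=false
        VolumeState during = beginAttach(before);
        VolumeState after = finishAttach(before);
        System.out.println(before + " -> " + during + " -> " + after);
    }
}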
Rohit Yadav 491a10be0c
storage: publish delete usage event for snapshot deletion (#3212)
Problem: Users are billed for destroyed VMs with VM snapshots because the usage records do not reflect that the VM and its VM snapshots have been removed.
Root Cause: The destroyVirtualMachine and expungeVirtualMachine APIs were removing VM snapshots but not generating VMSNAPSHOT.DELETE usage event due to which the VM snapshots were not marked as removed in the usage_vmsnapshot table.
Solution: The issue was fixed by emitting the proper usage event for all the VM snapshots of a VM that is destroyed.
2019-04-10 17:12:55 +05:30
GabrielBrascher 8d3feb100a Updating pom.xml version numbers for release 4.13.0.0-SNAPSHOT
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-20 18:47:35 -03:00
GabrielBrascher a137398bf1 Updating pom.xml version numbers for release 4.12.0.0
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-14 10:11:46 -03:00
Gabriel Beims Bräscher 7c5eca9481
Copy template to target KVM host if needed when migrating local <> local storage (#3154)
* Migrate template to target host if needed.

Fix KVM VM local storage live migration by migrating its template to the
target host if needed.

* Address reviewer and add method that updates the DB template reference

* Remove deprecated Config.PrimaryStorageDownloadWait

* Code formatting of @Inject to follow checkstyle
2019-02-05 00:18:29 -02:00
Nathan Johnson 637cc6ec4e feature: add libvirt / qemu io bursting (#3133)
* feature: add libvirt / qemu io bursting

Adds the ability to set bursting features from libvirt / qemu

This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.

https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/

* updates per rafael et al
2019-02-04 19:47:44 -02:00
GabrielBrascher 460d3127ec Fix conflict and merge forward PR #3122 from 4.11 to master (4.12) 2019-02-04 19:24:59 -02:00
Nathan Johnson bf805d1483 Add back ability to disable backup of snapshot to secondary (#3122)
* The snapshot.backup.rightafter configuration variable was removed by:

SHA: 6bb0ca2f85

This adds it back, though named snapshot.backup.to.secondary now instead.

This global parameter, once set, will allow you to prevent automatic backups of
     snapshots to secondary storage, unless they're actually needed.

Fixes #3096

* updates per review
2019-02-04 19:08:42 -02:00
dahn b363fd49f7 Vmware offline migration (#2848)
* - Offline VM and Volume migration on VMware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations

* Fix indentation of marvin test file and reformat against PEP8

* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.

* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one

* Fix unhandled NPE during VM migration

* Revert back to distinct event descriptions for VM to host or storage pool migration

* Reformat test_primary_storage file against PEP-8 and Remove unused imports

* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
2019-01-25 10:05:13 -02:00
Gabriel Beims Bräscher bf209405e7 Allow KVM VM live migration with ROOT volume on file storage type (#2997)
* Allow KVM VM live migration with ROOT volume on file

* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests

* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
2018-12-14 09:01:28 -02:00
Paul Angus fb80e51307 Updating pom.xml version numbers for release 4.11.3.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2018-11-20 13:11:52 +00:00
Gabriel Beims Bräscher e45bed74a5 server: remove unused StrategyPriority.PLUGIN. (#3014)
Remove unused StrategyPriority.PLUGIN enum. The PLUGIN Strategy priority is not used, except by three JUnit test methods.
2018-11-14 15:07:37 +05:30
Kui LIU d53fc94485 CLOUDSTACK-10365: Change the "getXXX" boolean method names to "isXXX" (#2847)
These boolean-return methods are named as "getXXX".
Other boolean-return methods are named as "isXXX".
Considering these methods return boolean values, it is clearer and more consistent to rename them to "isXXX".
(rebase #2602 and #2816)
2018-09-22 17:20:48 +02:00
Mike Tutkowski d12c106a47
Restrict the number of managed clustered file systems per compute cluster (#2500)
* Restrict the number of managed clustered file systems per compute cluster
2018-09-11 08:23:19 -06:00
Mike Tutkowski 568119d437
Merge pull request #2585 from syed/upstream-snapshot-archive
Add ability to archive snapshots on primary storage
2018-08-23 19:36:04 -06:00
Rafael Weingärtner 85c88e08a7
Remove class snapshot data factory test (#2813)
* rename package from "src" to "org.apache.cloudstack.storage.vmsnapshot"

* Remove Empty test class "SnapshotDataFactoryTest"

* Remove redundant imports (importing a class from its own package)
2018-08-20 10:16:07 -03:00
Rafael Weingärtner 21c0225921
Fix the problem at #1740 when it loads all snapshots in the primary storage (#2808)
* Fix the problem at #1740 when it loads all snapshots in the primary storage

While checking a problem with @mike-tutkowski at PR #1740, we noticed that the method `org.apache.cloudstack.storage.snapshot.SnapshotDataFactoryImpl.getSnapshots(long, DataStoreRole)` was not loading the snapshots in `snapshot_store_ref` table according to their storage role. Instead, it would return a list of all snapshots (even if they are not in the storage role sent as a parameter) saying that they are in the storage that was sent as parameter.

* Add unit test
2018-08-17 08:33:25 -03:00
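A hedged sketch of the fix described above: filter snapshot store references by the requested store role instead of returning every reference. The types are simplified stand-ins for the real snapshot_store_ref entities.

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

/** Hypothetical sketch: only return snapshot references that match the requested store role. */
public class SnapshotsByRole {
    enum DataStoreRole { PRIMARY, IMAGE }

    static class SnapshotStoreRef {
        final long snapshotId;
        final DataStoreRole role;
        SnapshotStoreRef(long snapshotId, DataStoreRole role) { this.snapshotId = snapshotId; this.role = role; }
    }

    static List<SnapshotStoreRef> getSnapshots(List<SnapshotStoreRef> refs, DataStoreRole role) {
        // Before the fix, the role parameter was effectively ignored and every reference was
        // reported as being on the requested store; filtering keeps the result consistent.
        return refs.stream()
                .filter(ref -> ref.role == role)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<SnapshotStoreRef> refs = new ArrayList<>();
        refs.add(new SnapshotStoreRef(1L, DataStoreRole.PRIMARY));
        refs.add(new SnapshotStoreRef(2L, DataStoreRole.IMAGE));
        System.out.println(getSnapshots(refs, DataStoreRole.PRIMARY).size() + " primary snapshot ref(s)");
    }
}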
Mike Tutkowski 3db33b7385 Support online migration of a virtual disk on XenServer from non-managed storage to managed storage 2018-08-12 00:23:36 -06:00
Mike Tutkowski 46c56eaaf9 Merge release branch 4.11 to master
* 4.11:
  Changed the implementation of isVolumeOnManagedStorage(VolumeInfo) to check if the data store in question is for primary storage (and added a unit test from Daan Hoogland)
  vmware: reboot VR after mac updates (#2794)
2018-08-12 00:03:37 -06:00
Mike Tutkowski ab83c198a5 Changed the implementation of isVolumeOnManagedStorage(VolumeInfo) to check if the data store in question is for primary storage (and added a unit test from Daan Hoogland) 2018-08-10 11:24:18 -06:00
Khosrow Moossavi 7c6630bca7 Cleanup POMs (#2613)
* Cleanup and code-formatting of POM files

* Remove obsolete mycila license-maven-plugin

* Remove obsolete console-proxy/plugin project

* Move console-proxy-rdbconsole under console-proxy parent

* Use correct parent path for rdpconsole

* Order items alphabetically in setnextversion.sh

* Unify license header in POMs

* Alphabetic order of modules definition

* Extract all defined versions into parent pom

* Remove obsolete files: version-info.in, configure-info.in

* Remove redundant defaultGoal

* Remove useless checkstyle plugin from checkstyle project

* Order items alphabetically in pom.xml

* Add additional SPACEs to fix debian build

* Don't execute checkstyle on parent projects

* Use UTF-8 encoding in building checkstyle project

* Extract plugin versions into properties

* Execute PMD plugin on all the projects with -Penablefindbugs

* Upgrade maven plugins to latest version

* Make sure to always look for apache parent pom from repository

* Fix incorrect version grep in debian packaging

* Fix rebase conflicts

* Fix rebase conflicts

* Remove PMD for now to be fixed on another PR
2018-07-25 14:39:37 -03:00
Mike Tutkowski 73608dec28 Support multiple volume access groups per compute cluster 2018-07-16 15:13:16 -06:00
Khosrow Moossavi 67860d9f46 maven: Updating pom.xml version numbers for release 4.11.2.0-SNAPSHOT (#2728)
Fixes the version in pom etc. to be consistent with versioning pattern as X.Y.Z.0-SNAPSHOT after a minor release.

Signed-off-by: Khosrow Moossavi <khos2ow@gmail.com>
2018-07-06 17:27:12 +05:30
Paul Angus 8ba318da19 Updating pom.xml version numbers for release 4.11.2-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2018-06-26 17:53:54 +01:00
Paul Angus 2cb2dacbe7 Updating pom.xml version numbers for release 4.11.1.0
Signed-off-by: Paul Angus <paulangus@PA-Ansible-GUI.sblab.local>
2018-06-21 15:52:43 +01:00
Rohit Yadav 85750f918b Merge branch '4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-06-20 12:31:52 +05:30
Rohit Yadav 39471c8c00
configdrive: make fewer mountpoints on hosts (#2716)
This ensures that fewer mount points are created on hosts for either
primary storage pools or secondary storage pools.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-06-20 12:25:16 +05:30
Rohit Yadav 72e61bfa1d Merge branch '4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-06-07 11:26:34 +05:30
Nicolas Vazquez 99ca81a676 ui: do not send conserve mode on L2 network offering creation from the UI (#2694)
Do not send conserve mode param on L2 network offering creation from the UI. Fix config drive NPE issue on L2 network.
2018-06-07 11:20:37 +05:30
Rohit Yadav 9146d7b7a0 Merge branch '4.11' 2018-06-06 12:41:18 +05:30
Rafael Weingärtner 9b83337658 Create unit test cases for 'ConfigDriveBuilder' class (#2674)
* Create unit test cases for 'ConfigDriveBuilder' class

* add method 'getProgramToGenerateIso' as suggested by rohit and Daan

* fix encoding for base64 to StandardCharsets.US_ASCII

* fix MockServerTest.testIsMockServerCanUpgradeConnectionToSsl()

This is another method that is causing Jenkins to fail for almost a month
2018-06-04 13:20:09 +02:00
Rohit Yadav 7c6777b8d3 Merge branch '4.11': allow config drives on primary storage for KVM (#2651)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:50:55 +05:30
Rohit Yadav acc5fdcdbd
CLOUDSTACK-10290: allow config drives on primary storage for KVM (#2651)
This introduces a new global setting `vm.configdrive.primarypool.enabled` to toggle creation/hosting of config drive iso files on primary storage, the default will be false causing them to be hosted on secondary storage. The current support is limited from hypervisor resource side and in current implementation limited to `KVM` only. The next big change is that config drive is created at a temporary location by management server and shipped to either KVM or SSVM agent via cmd-answer pattern, the data of which is not logged in logs. This saves us from adding genisoimage dependency on cloudstack-agent pkg.

The APIs to reset the SSH public key, password and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating an existing ISO. If there are objections I'll re-introduce the strategy of detaching and attaching a new config ISO as the update mechanism. In the refactored implementation, the folder name is changed to lower-cased configdrive. And during VM start, migration or shutdown/removal, if primary storage is enabled for use, the KVM agent will handle cleanup tasks; otherwise the SSVM agent will handle them.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:27:23 +05:30
Syed Ahmed cd70ede3c2 Add ability to archive snapshots on primary storage 2018-05-17 04:39:34 -04:00
Rafael Weingärtner 15eddf3dd6 Merge forward branch '4.11' PR #2629
Fix primary storage count when deleting volumes (#2629)
2018-05-16 16:59:17 -03:00
Rafael Weingärtner b9ed42bd29
Fix primary storage count when deleting volumes (#2629)
* Primary Storage count for an account does not decrease when a Data Disk is deleted

When a data disk is created and not attached to a running VM, "deleteVolume" will not decrement the count for used primary storage in the account's accounting information. The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method.

Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with listAccounts API - it is the same as before deleting the data disk (it should have returned to the value from step 2!)

* formatting and cleanups

* fix imports that were wrongly changed during rebase
2018-05-16 15:28:28 -03:00
Marc-Aurèle Brothier 46bd94c6a2 [CLOUDSTACK-10254] checkstyle: add package name declaration validation (#2422)
* checkstyle: verify package name matches directory structure

* fix new checkstyle findings on directory with package name mismatch
2018-04-26 10:32:08 -03:00
Rohit Yadav 41895561a7 Merge branch '4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-03-30 16:21:06 +05:30
Mike Tutkowski e68f5cea67 Only use the host if its Resource State is Enabled. (#2512) 2018-03-29 17:43:22 +00:00
Rohit Yadav 8ef131745a Merge branch '4.11' 2018-03-15 16:46:50 +05:30
Rafael Weingärtner 972b8b71d7
CLOUDSTACK-8855 Improve Error Message for Host Alert State and reconnect host API. (#2387)
* CLOUDSTACK-8855 Improve Error Message for Host Alert State

* [CLOUDSTACK-9846] create column to save the content of alert messages

Remove declaration of throws CloudRuntimeException
I also removed some unused variables and comments left behind

This closes #837

* Isolate a problematic test "smoke/test_certauthority_root"
2018-03-14 15:27:43 -03:00
Raf Smeets 19d6578732 CLOUDSTACK-10303 : Refactor test data to nuage_test_data.py runnable against simulator (#2483)
* Refactored nuage tests

Added simulator support for ConfigDrive
Allow all nuage tests to run against simulator
Refactored nuage tests to remove code duplication

* Move test data from test_data.py to nuage_test_data.py

Nuage test data is now contained in nuage_test_data.py instead of
test_data.py.
Removed all Nuage test data from test_data.py.

* CLOUD-1252 fixed cleanup of vpc tier network

* Import libVSD into the codebase

* CLOUDSTACK-1253: Volumes are not expunged in simulator

* Fixed some merge issues in test_nuage_vsp_mngd_subnets test

* Implement GetVolumeStatsCommand in Simulator

* Add vspk as marvin nuagevsp dependency, after removing libVSD dependency

* correct libVSD files for license purposes

pep8 pyflakes compliant
2018-03-14 17:17:36 +05:30
Rafael Weingärtner f2efbcecec
[CLOUDSTACK-10240] ACS cannot migrate a local volume to shared storage (#2425)
* [CLOUDSTACK-10240] ACS cannot migrate a volume from local to shared storage.

CloudStack is logically restricting the migration of local storage to shared storage and vice versa. This restriction is a logical one and can be removed for XenServer deployments. Therefore, we will enable migration of volumes between local and shared storage on XenServer independently of their service offering. This works as an override mechanism to the disk offering used by volumes. If administrators want to migrate local volumes to shared storage, they should be able to do so (the hypervisor already allows that). The same applies the other way around.

* Cleanups implemented while working on [CLOUDSTACK-10240]

* Fix test case test_03_migrate_options_storage_tags

The changes applied were:
- When loading hypervisor capabilities we must use "default" instead of nulls
- "Enable" storage migration for simulator hypervisor
- Remove restriction on "ClusterScopeStoragePoolAllocator" to find shared pools
2018-03-07 18:23:15 -03:00
Rohit Yadav 0ece15f86e Updating pom.xml version numbers for release 4.11.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-26 16:57:48 +01:00
Rohit Yadav 6ffbce6159 Updating pom.xml version numbers for release 4.11.0.1-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-05 11:13:50 +01:00
Rohit Yadav 5dada1f7ed Updating pom.xml version numbers for release 4.11.0.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-26 13:13:37 +01:00
Rafael Weingärtner c591c5ad3e CLOUDSTACK-10248: Fix errors that appeared after #2283 (#2417)
This fixes a move-refactoring error introduced in #2283
For instance, the class DatadiskTO is supposed to be in com.cloud.agent.api.to package. However, the folder structure it was placed in is com.cloud.agent.api.api.to.

Skip tests for cloud-plugin-hypervisor-ovm3:
For some unknown reason, there are quite a lot of broken test cases for cloud-plugin-hypervisor-ovm3. They might have appeared after some dependency upgrade and were overlooked by the person updating them. I checked them to see if they could be fixed, but these tests are not developed in a clear and clean manner. On top of that, we do not see (at least I don't) people using the OVM3 hypervisor with ACS. Therefore, I decided to skip them.

Indentation corrected to use spaces instead of tabs in XML files
2018-01-23 12:19:36 +01:00
Marc-Aurèle Brothier 893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove the maven standard module (which only a few projects were using) and get rid of maven customization for the projects' structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30
Rohit Yadav 072dbc0720 Updating pom.xml version numbers for master to 4.12.0.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-15 17:43:45 +05:30
Mike Tutkowski a30a31c9b7 CLOUDSTACK-9620: Enhancements for managed storage (#2298)
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)

Added support for root disks on managed storage with KVM

Added support for volume snapshots with managed storage on KVM

Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage

Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.

Included support for Reinstall VM on KVM with managed storage

Enabled offline migration on KVM from non-managed storage to managed storage and vice versa

Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)

Added support to download (extract) a managed-storage volume to a QCOW2 file

When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.

Included support for the KVM auto-convergence feature

The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)

On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.

Added a new Global Setting: kvm.storage.live.migration.wait

For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)

Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.

Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot

Added an SIOC API plug-in to support VMware SIOC

When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.

Added the ability to revert a volume to a snapshot

Enabled cluster-scoped managed storage

Added support for VMware dynamic discovery
2018-01-15 00:05:52 +05:30
Abhinandan Prateek 64832fd70a CLOUDSTACK-4757: Support OVA files with multiple disks for templates (#2146)
CloudStack volumes and templates are a single virtual disk in the case of the XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes, attaching an uploaded volume that contains multiple disks to a VM will result in only one VMDK being attached to the VM.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks

This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk and volumes should be created based on the other VMDK disks in the OVA file and attached to the instance.

Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-10 22:10:41 +05:30
Nicolas Vazquez e86bb41e0e CLOUDSTACK-10146: Bypass Secondary Storage for KVM templates (#2379)
This feature allows using templates and ISOs on KVM while avoiding secondary storage as an intermediate cache. The virtual machine deployment process is enhanced to support bypassed registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.

Template and ISO registration:
- When hypervisor is KVM, a checkbox is displayed with 'Direct Download' label.
- API methods registerTemplate and registerISO are both extended with this new parameter directdownload.
- On template or ISO registration, no download job is sent to SSVM agent, CloudStack would only persist an entry on template_store_ref indicating that template or ISO has been marked as 'Direct Download' (bypassing Secondary Storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machine from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check if the template/ISO location can be reached. Metalinks are also supported by this feature. In case of a metalink, it is fetched and the URL check is performed on each of its URLs.
- Checksum should be provided as indicated on #2246: {ALGORITHM}CHKSUMHASH
- After template or ISO is registered, it would be displayed in the UI

Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack delegates template downloading to the destination storage pool via the destination host, using a new pluggable download manager.
The download manager handles template downloading depending on the URL protocol. In case of HTTP, request headers can be set by the user via vm_template_details. Those details should be persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE

In case of HTTPS, a new API method uploadTemplateDirectDownloadCertificate is added to allow users to import a client certificate into all KVM hosts' keystores before deployment.
After template or ISO is downloaded to primary storage, usual entry would be persisted on template_spool_ref indicating the mapping between template/ISO and storage pool.
2018-01-09 12:22:18 +05:30
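A small illustrative sketch of splitting the {ALGORITHM}CHKSUMHASH checksum format mentioned above into its algorithm and hash parts; the default-to-MD5 fallback is an assumption for the example, not documented behaviour.

/** Hypothetical sketch: split a "{ALGORITHM}HASH" checksum string into its two parts. */
public class DirectDownloadChecksum {
    static String[] parse(String checksum) {
        if (checksum == null || !checksum.startsWith("{")) {
            return new String[] {"MD5", checksum};       // assumption: a bare hash defaults to MD5
        }
        int end = checksum.indexOf('}');
        if (end < 0) {
            throw new IllegalArgumentException("Malformed checksum: " + checksum);
        }
        String algorithm = checksum.substring(1, end);
        String hash = checksum.substring(end + 1);
        return new String[] {algorithm, hash};
    }

    public static void main(String[] args) {
        String[] parts = parse("{SHA-256}d2c1f3");       // illustrative value only
        System.out.println(parts[0] + " / " + parts[1]);
    }
}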
subhash yedugundla 8eca04e1f6 CLOUDSTACK-9572: Snapshot on primary storage not cleaned up after Storage migration (#1740)
Snapshot on primary storage not cleaned up after Storage migration. This happens in the following scenario:

Steps to reproduce:
1. Create an instance on the local storage of any host.
2. Create a scheduled snapshot of the volume.
3. Wait until ACS has created the snapshot. ACS creates a snapshot on local storage and transfers it to secondary storage, but the latest snapshot on local storage stays there. This is as expected.
4. Migrate the instance to another XenServer host with the ACS UI and storage live migration.
5. The snapshot on the old host's local storage is not cleaned up and stays on local storage, so local storage fills up with unneeded snapshots.
2018-01-05 11:19:56 +05:30