Commit Graph

271 Commits

Author SHA1 Message Date
Nicolas Vazquez 73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add a free disk space check on the host prior to template download

* Add unit tests for the free space check

* Fix free space check - retrieve available size in bytes

* Update default location for direct download

* Improve the method that retrieves hosts to retry on, depending on the destination pool type and scope

* Verify the temporary download location exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
2020-03-06 19:56:54 +01:00
Wei Zhou fd5bea838b
New feature: Add support to destroy/recover volumes (#3688)
* server: fix resource count of primary storage if some volumes are Expunged but not removed

Steps to reproduce the issue
(1) create a VM and stop it. Check the resource count of primary storage.
(2) download the volume. The resource count of primary storage is not changed.
(3) expunge the VM; the volume will be in the Expunged state as there is a volume snapshot on secondary storage. The resource count of primary storage decreases.
(4) update the resource count of the account (or domain); the resource count of primary storage is reset to the value from step (2).
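A toy sketch of the recalculation rule this suggests (illustrative Java, not the actual CloudStack code; all names are made up): volumes already in the Expunged state should be skipped when recalculating usage, even if their database rows are not yet removed.

import java.util.List;

public class PrimaryStorageCountSketch {
    enum State { READY, EXPUNGED }
    record Volume(State state, boolean removed, long sizeBytes) {}

    // Recalculate an account's primary storage usage from its volume rows.
    static long recalc(List<Volume> volumes) {
        return volumes.stream()
                .filter(v -> !v.removed())                 // ignore rows already removed
                .filter(v -> v.state() != State.EXPUNGED)  // also skip Expunged-but-not-removed rows
                .mapToLong(Volume::sizeBytes)
                .sum();
    }

    public static void main(String[] args) {
        List<Volume> vols = List.of(
                new Volume(State.READY, false, 10L << 30),     // 10 GiB root disk
                new Volume(State.EXPUNGED, false, 5L << 30));  // expunged, row not yet removed
        System.out.println(recalc(vols)); // counts only the 10 GiB volume
    }
}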

* New feature: Add support to destroy/recover volumes

* Add integration test for volume destroy/recover

* marvin: check resource count of more types

* translate messages to JP

* Update messages for CN

* translate messages for NL

* fix two issues per Daan's comments

Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
2020-02-07 11:25:10 +01:00
Nicolas Vazquez 0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

- From source and destination hosts within the same cluster
- From NFS primary storage to NFS cluster-wide primary storage
- Source NFS and destination NFS storage mounted on the hosts
To enable this functionality, the database should be updated to enable the live storage migration capability for KVM, provided the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature, set storage_motion_supported=1 in the hypervisor_capabilities table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 (and possibly Ubuntu) KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error seen is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference: https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available in paid versions of RHEV but not on CentOS 7; it does work on CentOS 6.
Fix for CentOS 7:

Create a repo file under /etc/yum.repos.d/:

[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0

Then install the qemu-kvm-ev packages:

yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64

Reboot the host.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Rohit Yadav 00ff536f81 Merge remote-tracking branch 'origin/4.11' into 4.12
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 14:26:11 +05:30
Anurag Awasthi f9b61bc737 orchestration: Allow VM that has never started to have volumes attached (#3276)
With patch b766bf7
we started tracking disks in the Attaching state so that other attach requests can fail gracefully. However, this missed the case where a disk was in the Allocated state when an attach was requested.

For the use case where users want to attach a disk that is in the Allocated state (not yet Ready), we need an Allocated -> Attaching transition as well. We must take care of returning to the original state - Allocated or Ready - when the attach request has completed.

For the use case of unstarted VMs, the disk must proceed as follows: Allocated -> Attaching -> Allocated. When the VM is started, the disk is "created" and a pool is assigned. For started VMs it is simpler: the disk proceeds Ready -> Attaching -> Ready.
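A toy Java sketch of this transition discipline (illustrative only, not the actual CloudStack state machine code): Attaching must remember which state it came from so it can return there.

public class VolumeAttachFsmSketch {
    enum State { ALLOCATED, READY, ATTACHING }

    private State current;
    private State beforeAttach; // where to return once the attach request completes

    VolumeAttachFsmSketch(State initial) { current = initial; }

    void attachRequested() {
        if (current != State.ALLOCATED && current != State.READY) {
            throw new IllegalStateException("cannot attach from " + current);
        }
        beforeAttach = current;     // Allocated (unstarted VM) or Ready (started VM)
        current = State.ATTACHING;
    }

    void attachCompleted() {
        if (current != State.ATTACHING) {
            throw new IllegalStateException("no attach in progress");
        }
        current = beforeAttach;     // Allocated -> Attaching -> Allocated, or Ready -> Attaching -> Ready
    }

    public static void main(String[] args) {
        VolumeAttachFsmSketch disk = new VolumeAttachFsmSketch(State.ALLOCATED); // VM created with startvm=false
        disk.attachRequested();
        disk.attachCompleted();
        System.out.println(disk.current); // ALLOCATED again
    }
}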

Test this by creating a VM with "startvm=false", creating a disk, and trying to attach it in the Allocated state. This throws an exception on the latest 4.11 but is fixed by this patch.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-10 23:40:38 +05:30
Nathan Johnson 637cc6ec4e feature: add libvirt / qemu io bursting (#3133)
* feature: add libvirt / qemu io bursting

Adds the ability to set bursting features from libvirt / qemu

This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.

https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/
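A hedged sketch of the sort of libvirt <iotune> disk element involved (the element names come from libvirt's domain XML; the values and the builder itself are illustrative). The *_max settings define the burst ceiling and *_max_length how long a burst may be sustained:

public class IoTuneXmlSketch {
    // Build an illustrative libvirt <iotune> block for a disk device.
    static String ioTuneXml(long iopsSec, long iopsSecMax, long iopsSecMaxLengthSec) {
        return "<iotune>\n"
             + "  <total_iops_sec>" + iopsSec + "</total_iops_sec>\n"
             + "  <total_iops_sec_max>" + iopsSecMax + "</total_iops_sec_max>\n"
             + "  <total_iops_sec_max_length>" + iopsSecMaxLengthSec + "</total_iops_sec_max_length>\n"
             + "</iotune>";
    }

    public static void main(String[] args) {
        // Sustain 1000 IOPS, but allow bursts up to 2000 IOPS for at most 60 seconds.
        System.out.println(ioTuneXml(1000, 2000, 60));
    }
}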

* updates per rafael et al
2019-02-04 19:47:44 -02:00
dahn b363fd49f7 VMware offline migration (#2848)
* - Offline VM and volume migration on VMware hypervisor hosts
- Also add a VM disk consolidation call on successful VM migrations

* Fix indentation of marvin test file and reformat against PEP8

* Fix a few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.

* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one

* Fix unhandled NPE during VM migration

* Revert back to distinct event descriptions for VM to host or storage pool migration

* Reformat test_primary_storage file against PEP-8 and Remove unused imports

* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
2019-01-25 10:05:13 -02:00
Rafael Weingärtner 15eddf3dd6 Merge forward branch '4.11' PR #2629
Fix primary storage count when deleting volumes (#2629)
2018-05-16 16:59:17 -03:00
Rafael Weingärtner b9ed42bd29
Fix primary storage count when deleting volumes (#2629)
* Primary Storage count for an account does not decrease when a Data Disk is deleted

When a data disk is created and not attached to a running VM, "deleteVolume" will not decrement the count for used primary storage in the account's accounting information. The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method.

Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with the listAccounts API - it is the same as before deleting the data disk (it should have dropped back to the value from step 2!)

* formatting and cleanups

* fix imports that were wrongly changed during rebase
2018-05-16 15:28:28 -03:00
Marc-Aurèle Brothier 893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove the maven standard module (which only a few projects were using) and get rid of maven customization for the project structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for leftover src/com and src/org
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30
Mike Tutkowski a30a31c9b7 CLOUDSTACK-9620: Enhancements for managed storage (#2298)
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)

Added support for root disks on managed storage with KVM

Added support for volume snapshots with managed storage on KVM

Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage

Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.

Included support for Reinstall VM on KVM with managed storage

Enabled offline migration on KVM from non-managed storage to managed storage and vice versa

Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)

Added support to download (extract) a managed-storage volume to a QCOW2 file

When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.

Included support for the KVM auto-convergence feature

The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
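For context, libvirt encodes version numbers as major * 1,000,000 + minor * 1,000 + release, which is why 1.0.3 is 1000003 and 1.3.0 is 1003000; a quick sketch:

public class LibvirtVersionSketch {
    // libvirt encodes a version as major * 1_000_000 + minor * 1_000 + release.
    static long encode(int major, int minor, int release) {
        return major * 1_000_000L + minor * 1_000L + release;
    }

    public static void main(String[] args) {
        System.out.println(encode(1, 0, 3)); // 1000003  (1.0.3)
        System.out.println(encode(1, 3, 0)); // 1003000  (1.3.0)
    }
}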

On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.

Added a new Global Setting: kvm.storage.live.migration.wait

For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)

Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.

Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot

Added an SIOC API plug-in to support VMware SIOC

When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.

Added the ability to revert a volume to a snapshot

Enabled cluster-scoped managed storage

Added support for VMware dynamic discovery
2018-01-15 00:05:52 +05:30
Abhinandan Prateek 64832fd70a CLOUDSTACK-4757: Support OVA files with multiple disks for templates (#2146)
CloudStack volumes and templates are a single virtual disk in the case of XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM, including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes: attaching an uploaded volume that contains multiple disks to a VM will result in only one VMDK being attached to the VM.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks

This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created from the other VMDK disks in the OVA file and attached to the instance.
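A toy sketch of the intended disk mapping (illustrative Java; the names are made up and this is not the orchestration code): the first VMDK in the OVA becomes the ROOT disk, and each remaining VMDK becomes a DATA volume attached to the instance.

import java.util.ArrayList;
import java.util.List;

public class OvaDiskMappingSketch {
    record DiskPlan(String vmdkName, String role) {}

    // Plan the volumes for a multi-disk OVA: disk 0 -> ROOT, the rest -> DATA volumes.
    static List<DiskPlan> planDisks(List<String> vmdksInOva) {
        List<DiskPlan> plan = new ArrayList<>();
        for (int i = 0; i < vmdksInOva.size(); i++) {
            plan.add(new DiskPlan(vmdksInOva.get(i), i == 0 ? "ROOT" : "DATA-" + i));
        }
        return plan;
    }

    public static void main(String[] args) {
        System.out.println(planDisks(List.of("os.vmdk", "db.vmdk", "logs.vmdk")));
        // [DiskPlan[vmdkName=os.vmdk, role=ROOT], DiskPlan[vmdkName=db.vmdk, role=DATA-1], DiskPlan[vmdkName=logs.vmdk, role=DATA-2]]
    }
}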

Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-10 22:10:41 +05:30
subhash yedugundla 8eca04e1f6 CLOUDSTACK-9572: Snapshot on primary storage not cleaned up after Storage migration (#1740)
Snapshot on primary storage not cleaned up after Storage migration. This happens in the following scenario:

Steps To Reproduce
Create an instance on the local storage on any host
Create a scheduled snapshot of the volume:
Wait until ACS has created the snapshot. ACS creates the snapshot on local storage and transfers it to secondary storage, but the latest snapshot on local storage stays there. This is as expected.
Migrate the instance to another XenServer host with ACS UI and Storage Live Migration
The snapshot on the old host is not cleaned up and stays on local storage, so local storage fills up with unneeded snapshots.
2018-01-05 11:19:56 +05:30
Rafael Weingärtner 7f6ae15972 Remove annotation not needed at cloud-engine-storage-image (#2324)
remove bogus depends-on declaration at spring-engine-storage-snapshot-core-context
2017-11-16 23:11:15 +05:30
Mike Tutkowski ccb68e13a7
Fix for an issue introduced in this commit: 336df84f17
(cherry picked from commit c57f556966)

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-06-15 16:28:52 +05:30
Rajani Karuturi 1bd66cb03e Merge pull request #2072 from Accelerite/CLOUDSTACK-9895_ParallelVolumes
CLOUDSTACK-9895 : Added support for parallel volume(s) creation from a volume snapshot
2017-05-31 14:05:05 +05:30
Pavan Kumar Aravapalli 502f813370 CLOUDSTACK-9895 : Added support for parallel volume(s) creation from a volume snapshot 2017-05-31 11:27:30 +05:30
Rajani Karuturi 86eb5683dd Merge pull request #1840 from anshul1886/CLOUDSTACK-9685
CLOUDSTACK-9685: delete snapshot on primary associated with a volume …
2017-05-16 12:42:48 +05:30
Rajani Karuturi 503c803ba0 Merge pull request #803 from anshul1886/CLOUDSTACK-8833
CLOUDSTACK-8833: Fixed: generating a URL and migrating the volume to another storage resulted in two entries in the UI, and listVolumes did not work for that volume
2017-05-08 10:14:02 +05:30
Daan Hoogland 4fd72917cd CE-113 removed TODOs 2017-04-18 18:11:00 +02:00
Daan Hoogland c689d4a696 CE-113 trace logging and rethrow instead of nesting CloudRuntimeException 2017-04-18 18:11:00 +02:00
nvazquez c66df6e11f Fix for test failure 2017-03-02 16:14:45 -03:00
Anshul Gangwar 42b89278e9 CLOUDSTACK-8833: Fixed: generating a URL and migrating the volume to another storage resulted in two entries in the UI, and listVolumes did not work for that volume
Update the volume id in the volume_store_ref table to the newly created volume for migration
2017-02-28 17:55:42 +05:30
Rajani Karuturi 6a18cdd6ef Merge pull request #1825 from Accelerite/CLOUDSTACK-9660
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests

The NPE is seen when the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes will be handled as part of VM destroy.

* pr/1825:
  CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests. The NPE is seen when the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle only non-root volumes in the storage cleanup thread; root volumes will be handled as part of VM destroy.

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-02-28 06:00:02 +05:30
Anshul Gangwar 336df84f17 CLOUDSTACK-9685: delete the snapshot on primary storage associated with a volume when that volume is deleted,
as that snapshot will never be used again and would otherwise fill up primary storage
2016-12-20 15:21:04 +05:30
Koushik Das d6b41d9ac2 CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
The NPE is seen when the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes will be handled as part of VM destroy.
2016-12-09 15:49:39 +05:30
Mike Tutkowski 06806a6523 CLOUDSTACK-9619: Updates for SAN-assisted snapshots 2016-12-06 17:32:56 -07:00
Mike Tutkowski 9d215562eb Faster logic to see if a cluster supports resigning 2016-05-16 07:18:39 -06:00
Mike Tutkowski 2bd035d199 Support for backend snapshots with XenServer 2016-05-13 01:02:04 -06:00
Mike Tutkowski dad9e5d868 CLOUDSTACK-8813: Notify listeners when a host has been added to a cluster, is about to be removed from a cluster, or has been removed from a cluster 2016-05-11 08:02:46 -06:00
Wei Zhou bef92052ee CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume
This issue happens on KVM as well.
This is because the volume info is missing in the CopyCommand once the volume has been removed.
When the KVM agent tries to process the command, it throws an NPE.
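A minimal sketch of the defensive handling this implies on the agent side (illustrative only; the names are made up, and the real fix is to ensure the volume info is carried in the CopyCommand even for removed volumes):

public class CopyCommandSketch {
    record VolumeInfo(String path) {}
    record CopyCommand(VolumeInfo srcVolume) {}

    static String process(CopyCommand cmd) {
        VolumeInfo src = cmd.srcVolume();
        if (src == null) {
            // The source volume row was removed, so its info never made it into
            // the command; fail gracefully instead of dereferencing null.
            return "copy failed: source volume info missing";
        }
        return "copied from " + src.path();
    }

    public static void main(String[] args) {
        System.out.println(process(new CopyCommand(null)));
        System.out.println(process(new CopyCommand(new VolumeInfo("/mnt/primary/vol-1.qcow2"))));
    }
}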
2015-10-23 22:02:59 +02:00
Mike Tutkowski 8b0266d12e Merge branch 'pr/547'
* pr/547:
  CLOUDSTACK-8601. VMFS storage added as local storage can be re-added as shared storage. Fail addition of a VMFS shared storage pool in case it has already been added as local storage in CS.

Signed-off-by: Mike Tutkowski <mike.tutkowski@solidfire.com>
2015-08-10 19:00:53 -06:00
Koushik Das ab7c9e4098 CLOUDSTACK-8655: [Browser Based Upload Volume] Partially uploaded volumes are not getting destroyed as part of storage GC
As part of volume sync, which runs during SSVM start-up, the volume_store_ref entry was getting deleted. Volume GC relies on this entry to move the volume to the Destroyed state.
Since the entry was getting deleted, the GC thread never moved the volume from UploadError/UploadAbandoned to Destroyed. The fix is to not remove the volume_store_ref entry as part
of volume sync and let the GC thread handle the cleanup.
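A toy sketch of the dependency described here (illustrative Java; not the actual GC code): the GC pass can only drive an upload to Destroyed if the volume_store_ref row still exists.

import java.util.HashMap;
import java.util.Map;

public class UploadGcSketch {
    enum State { NOT_UPLOADED, UPLOAD_ERROR, UPLOAD_ABANDONED, DESTROYED }

    // volumeId -> upload state, standing in for volume_store_ref rows.
    static Map<Long, State> volumeStoreRef = new HashMap<>();

    // GC relies on the volume_store_ref entry being present; if volume sync
    // deleted it, the volume stays stuck in UploadError/UploadAbandoned.
    static void gcPass(long volumeId) {
        State s = volumeStoreRef.get(volumeId);
        if (s == State.UPLOAD_ERROR || s == State.UPLOAD_ABANDONED) {
            volumeStoreRef.put(volumeId, State.DESTROYED);
        }
    }

    public static void main(String[] args) {
        volumeStoreRef.put(42L, State.UPLOAD_ERROR);
        gcPass(42L);
        System.out.println(volumeStoreRef.get(42L)); // DESTROYED
    }
}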

This closes #611
2015-07-22 19:05:47 +05:30
Likitha Shetty 13a98dd196 CLOUDSTACK-8601. VMFS storage added as local storage can be re-added as shared storage.
Fail addition of a VMFS shared storage pool in case it has already been added as local storage in CS.
2015-07-01 10:47:36 +05:30
Rajani Karuturi b31b8425df CLOUDSTACK-8525: NPE while updating the state of the volume after deletion
The volume is already deleted (possibly by the cleanup thread), hence
the NPE. Added a not-null check for the VolumeVO, returning false
from the state transition.

This closes #321
2015-06-03 11:45:02 +05:30
Mike Tutkowski ac2bccd2a2 Removing unused imports 2015-05-07 13:52:25 -06:00
Daan Hoogland 1c408dec37 Merge branch '4.5' after 4.5.1 vote passes 2015-05-07 16:03:26 +02:00
Likitha Shetty 6c649ce3ae CLOUDSTACK-8411. Unable to delete an uploaded volume after CCP fails to attach the volume to a VM.
Correctly update the status of an uploaded volume upon failure to attach it to a VM.

(cherry picked from commit 10a106f5d8)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-04-29 16:50:40 +02:00
Rajani Karuturi 0b8355920e Merge branch 'volume-upload' into master
This closes #206
2015-04-29 11:12:53 +05:30
Likitha Shetty 10a106f5d8 CLOUDSTACK-8411. Unable to delete an uploaded volume after CCP fails to attach the volume to a VM.
Correctly update the status of an uploaded volume upon failure to attach it to a VM.
2015-04-28 09:52:06 +05:30
Koushik Das bc399b981f volume-upload: If the SSVM is destroyed and started, partially uploaded volumes/templates remain in an
inconsistent state (NotUploaded or UploadInProgress) and never transition to any terminal
state (such as Uploaded). As a result these volumes/templates cannot be removed. The fix is to handle such
volume/template entries correctly.
2015-04-22 19:28:51 +05:30
Likitha Shetty 326bb3e0a4 CLOUDSTACK-8320. Upon a failed migration, a dummy volume is created which remains in the 'Expunging' state.
Set the destination volume path to NULL while duplicating the volume during migration.
If migration fails, the destination volume will be marked as removed; if migration succeeds, the volume path will be updated correctly.

(cherry picked from commit d30d5644bb)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-04-17 15:42:02 +02:00
Likitha Shetty d30d5644bb CLOUDSTACK-8320. Upon a failed migration, a dummy volume is created which remains in the 'Expunging' state.
Set the destination volume path to NULL while duplicating the volume during migration.
If migration fails, the destination volume will be marked as removed; if migration succeeds, the volume path will be updated correctly.
2015-03-12 11:57:02 +05:30
Rohit Yadav 8cfd374f04 CLOUDSTACK-8224: Don't try to unlock if template is not locked
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-02-06 17:43:12 +05:30
Likitha Shetty 7de7885a54 CLOUDSTACK-8125. VM fails to start on the first attempt after a cold migration.
Update volume chain_info to NULL during cold migration.
Otherwise, during VM start, CCP will configure and try to power on the VM with wrong disk information.

(cherry picked from commit 7b32b8a268)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-02-05 15:30:49 +05:30
Damodar 01cc1b816d CLOUDSTACK-7792: Usage Events to be captured based on Volume State Machine
(cherry picked from commit 781648fb10)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>

Conflicts:
	engine/orchestration/src/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java
	engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
2015-02-05 15:22:21 +05:30
Mike Tutkowski 1b51bbbf74 Update hypervisor snapshot reserve for the root volume earlier than when it is currently being set 2015-01-29 13:01:51 -07:00
Rajani Karuturi a710743871 volume upload: persisting into volume store ref only when SSVM is found 2015-01-29 16:55:24 +05:30
Rajani Karuturi b343fb79e3 volume upload: fixed NPE if SSVM is not up 2015-01-29 16:55:16 +05:30
Mike Tutkowski 0f84e042b9 Adding support for creating a volume from a snapshot when the snapshot is on managed storage 2015-01-20 15:24:33 -07:00