Commit Graph

182 Commits

Marcus Sorensen d4596ddc9a
Pass storage scope during KVM volume migration to avoid remotely moun… (#190)
* Use cryptsetup w/o zeroing for encrypted scaleio - faster

Signed-off-by: Marcus Sorensen <mls@apple.com>

* Pass storage scope during KVM volume migration to avoid remotely mounting local storage

Signed-off-by: Marcus Sorensen <mls@apple.com>

* Add method to choose template pool based on scope

Signed-off-by: Marcus Sorensen <mls@apple.com>

* Clean up null check when creating migration options

Signed-off-by: Marcus Sorensen <mls@apple.com>

* ScaleIO enhancements - thin/thick encrypted, online resize

Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
2022-08-26 09:47:04 -06:00
Suresh Kumar Anaparti 41d6dd6a23
Fixes issue with migration of VM with volumes (#179) 2022-06-30 10:58:07 +05:30
Marcus Sorensen ba7adfa6f0
Volume encryption (#135)
This PR introduces a volume encryption option for service offerings and disk offerings. Fixes #136

There is a hypervisor component and a storage pool component. Hypervisors are responsible for running/using the encrypted volumes; storage pools are responsible for creating, copying, resizing, etc. Hypervisors report encryption support in their details, while storage pools are marked for encryption support by pool type.

The initial offering for experimental release of this feature will have support for encryption on Local, NFS, SharedMountPoint, and ScaleIO storage types.

When volumes with an encrypted offering are allocated to a pool, the pool type must support encryption, and this is enforced.

When a VM with an encrypted volume is started, the hypervisor must support encryption. Likewise, attaching a volume to a running VM only works if the hypervisor supports encryption.

This change includes a few other minor changes - for example, the ability to force the KVM hypervisor private IP. This was necessary in my testing of ScaleIO, where the KVM hypervisors had multiple IPs and the ScaleIO storage only functions if the hypervisor's ScaleIO client IP matches what CloudStack sees as the hypervisor IP.

For experimental release of this feature, some volume workflows like extract volume and migrate volume aren't supported for encrypted volumes. In the future we could support these, as well as migrating from unencrypted to encrypted offerings, and vice versa.

It may also be possible to configure encryption specifics in the future, perhaps at the pool level or the offering level. Currently, there is only one workable encryption format for KVM that is supported by Libvirt and Qemu for raw and qcow2 disk files: LUKS version 1. This PR ensures we at least store this encryption format associated with each volume, with the expectation that later we may have LUKS v2 volumes or something else. Thus we will have the information necessary to use each volume with Libvirt if/when other formats are introduced.

I think the most disruptive change here is probably a refactoring of the QemuImg utility to support newer flags like --object. I've tested the change against the basic Qemu 1.5.3 that comes with EL7 and I believe it is good, but it will be nice to see the results of some functional tests. Most of the other changes are limited to changing behavior only if volume encryption is requested.
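
For illustration, this is roughly the kind of invocation the refactored QemuImg has to assemble on newer Qemu versions that support --object secrets (the paths and secret id here are placeholders, not values from this PR):

  qemu-img create -f qcow2 \
    --object secret,id=sec0,file=/tmp/vol.passphrase \
    -o encrypt.format=luks,encrypt.key-secret=sec0 \
    /var/lib/libvirt/images/vol.qcow2 10G

Raw volumes use -f luks with -o key-secret=sec0 instead, and the stock EL7 Qemu predates --object, which is why the flag handling has to be version-dependent.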

Working on documentation for the CloudStack docs. One thing to note is that hypervisors that run the stock EL7 version of Qemu will not support encryption. This is detected and reported properly via the CloudStack API/UI. I'd like to have a support matrix in the CloudStack docs.

I may add a few more unit tests. I'd also like some guidance on having functional tests. I'm not sure if there's a separate framework, or if Marvin is still used, or what the current thing is.

* Add Qemu object flag to QemuImg create

* Add apache license header to new files

* Add Qemu object flag to QemuImg convert

* Set host details if hypervisor supports LUKS

* Add disk encrypt flag to APIs, diskoffering

* Schema upgrade 4.16.0.0 to 4.16.1.0 to support vol encryption

* Add Libvirt secret on disk attach, and refer to it in disk XML
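
Roughly, the resulting disk definition then references the registered secret like this (path and UUID are placeholder values; depending on the libvirt version the <encryption> element sits under <source> or directly under <disk>):

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/vol.qcow2'>
      <encryption format='luks'>
        <secret type='passphrase' uuid='11111111-2222-3333-4444-555555555555'/>
      </encryption>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>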

* Add implementation of luks volume encryption to QCOW2 and RAW disk prep

* Start VMs that have encrypted volumes

* Add encrypt option to service offering and root volume provisioning

* Refactor volume passphrase into its own table and object

* CryptSetup, use key files to pass keys instead of command line

* Update storage types and allocators to select encryption support

* Allow agent.properties to define the hypervisor's private IP

* Implement createPhysicalDisk for ScaleIOStorageAdaptor

* UI: Add encrypt options to offerings

* UI module security updates

* Revert "UI module security updates" - belongs in base

This reverts commit a7cb7cf7f57aad38f0b5e5d67389c187b88ffd94.

* Add --target-is-zero support for QemuImg

* Allow qemu image options to be passed, API support convert encrypted

* Switch hypervisor encryption support detection to use KeyFiles

* Fixes for ScaleIO root disk encryption

* Resize root disk if it won't fit encryption header

* Use cryptsetup to prep raw root disks, when supported

* Create qcow2 formatting if necessary during initial template copy to ScaleIO

* Allow setting no cache for qemu-img during disk convert

* Use 1M sparse on qemu-img convert for zero target disks

* UI: Add volume encryption support to hypervisor details

* QemuImg use --image-opts and --object depending on version

* Only send storage commands that require encryption to hosts that support encryption

* Move host encryption detail to a static constant

* Update host selection to account for volume encryption support

Only attach volumes if encryption requirements are met

* Ensure resizeVolume won't allow changing encryption

* Catch edge cases for clearing passphrase when volume is removed

* Disable volume migration and extraction for encrypted volumes

* Register volume secret on destination host during live migration

* Fix configdrive path editing during live migration

* Ensure configdrive path is edited properly during live migration

* Pass along and store volume encryption format during creation

* Fixes for rebase

* Fix tests after rebase

* Add unit tests for DeploymentPlanningManagerImpl to support encryption

* Deployment planner tests for encryption support on last host

* Add deployment tests for encryption when calling planner

* Added Libvirt DiskDef test for encryption details

* Add test for KeyFile utility

* Add CryptSetup tests

* Add QemuImageOptionsTest

* add smoke tests for API level changes on create/list offerings

* Fix schema upgrade, do disk_offering_view first

* Fix UI to show hypervisor encryption support

* Load details into hostVO before trying to query them for encryption

* Remove whitespace in CreateNetworkOfferingTest

* Move QemuImageOptions to use constants for flag keys

* Set physical disk encrypt format during createDiskFromTemplate in KVM Agent

* Whitespace in AbstractStoragePoolAllocator

* Fix whitespace in VolumeDaoImpl

* Support old Qemu in convert

* Log how long it takes to generate a passphrase during volume creation

* Move passphrase generation to async portion of createVolume

* Revert "Allow agent.properties to define the hypervisor's private IP"

This reverts commit 6ea9377505f0e5ff9839156771a241aaa1925e70.

* Updated ScaleIO/PowerFlex storage plugin to support separate (storage) network for Host(KVM) SDC connection. (#144)

* Added smoke tests for volume encryption (in KVM). (#149)

* Updated ScaleIO pool unit tests.

* Some improvements/fixes for code smells (in ScaleIO storage plugin).

* Updated review changes for ScaleIO improvements.

* Updated host response parameter 'encryptionsupported' in the UI.

* Move passphrase generation for the volume to async portion, while deploying VM (#158)

* Move passphrase generation for the volume to async portion, while deploying VM.
* Updated logs, to include volume details.

* Fix schema upgrade, create passphrase table first

* Fixed the DB upgrade issue (as noticed in the logs below.)
DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:) CALL `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`('cloud.volumes', 'passphrase', 'id')
ERROR [c.c.u.d.ScriptRunner] (main:null) (logid:) Error executing: CALL `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`('cloud.volumes', 'passphrase', 'id')
ERROR [c.c.u.d.ScriptRunner] (main:null) (logid:) java.sql.SQLException: Failed to open the referenced table 'passphrase'
ERROR [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) Unable to execute upgrade script

* Fixes for snapshots with encrypted qcow2
Fixes #159 #160 #163

* Support create/delete encrypted snapshots of encrypted qcow2 volumes
* Select endpoints that support encryption when snapshotting encrypted volumes
* Update revert snapshot to be compatible with encrypted snapshots
* Disallow volume and template create from encrypted vols/snapshots

* Disallow VM memory snapshots on encrypted vols. Fixes #157

* Fix for TemplateManagerImpl unit test failure

* Support offline resize of encrypted volumes. Fixes #168

* Fix for resize volume unit tests

* Updated libvirt resize volume unit tests

* Support volume encryption on KVM only, and passphrase generation refactor (#169)

* Fail deploy VM when ROOT/DATA volume's offering has encryption enabled, on non-KVM hypervisors
* Fail attach volume when volume's offering has encryption enabled, on non-KVM hypervisors
* Refactor passphrase generation for volume

* Apply encryption to dest volume for live local storage migration
fixes #161

* Apply encryption to data volumes during live storage migration

Fixes #161

* Use the same encryption passphrase id for migrating volumes

* Pass secret consumer during storage migration prepare

Fix for #161

* Fixes create / delete volume snapshot issue, for stopped VMs

* Block volume snapshot if encrypted and VM is running

Fixes #159

* Block snap schedules on encrypted volumes

Fix for #159

* Support cryptsetup where luks type defaults to 2

Fixes #170
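
For example, on hosts where cryptsetup 2.1+ defaults luksFormat to LUKS2 while Qemu/Libvirt only consume LUKS1, the type has to be pinned explicitly (device path and key file are placeholders):

  cryptsetup luksFormat --type luks1 --key-file /tmp/vol.key /dev/sdb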

* Modify domain XML secret UUID when storage migrating VM

Fix for #172

* Remove any libvirt secrets on VM stop and post migration

Fix for #172

* Update disk profile with encryption requirement from the disk offering (#176)

Update disk profile with encryption requirement from the disk offering
and some code improvements

* Updated review changes / javadoc in ScaleIOUtil

Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2022-06-29 15:51:00 +05:30
Suresh Kumar Anaparti cad9332082
Updating pom.xml version numbers for release 4.16.1.0
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2022-02-25 19:01:16 +05:30
Daniel Augusto Veronezi Salvador 79d924f3ee
Insert correct template size when live migrating VM with volumes (#5758)
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-12-16 20:21:38 +05:30
nicolas 93c3c3b9ac
Updating pom.xml version numbers for release 4.16.1.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:50:22 -03:00
nicolas 44c08b5acc
Updating pom.xml version numbers for release 4.16.0.0
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-04 14:14:57 -03:00
Gabriel Beims Bräscher 404e264caf
CloudStack fails to migrate VM with volume when there are data disks attached (#5410)
* Check if should map volume in createStoragePoolMappingsForVolumes

* Invert conditional at internalCanHandle
2021-10-08 11:50:37 +05:30
Harikrishna cd4e7e031a
Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster (#5539)
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Change in constructors

* Naming changes

* Remove commented code

* Refactor code for more readability

* Addressed review comments on code refactor
2021-10-04 20:58:25 -03:00
Daniel Augusto Veronezi Salvador 8ffba83214
Keep volume policies after migrating it to another primary storage (#5067)
* Add commons-lang3 to Utils

* Create a util to provide methods that ReflectionToStringBuilder does not have yet

* Create method to retrieve map of tags from resource

* Enable tests on volume components and remove useless tests

* Refactor VolumeObject and add unit tests

* Extract createPolicy in several methods

* Create method to copy policies between volumes and add unit tests

* Copy policies to new volume before removing old volume on volume migration

* Extract "destroySourceVolumeAfterMigration" to a method and test it

* Remove javadoc @param with no sensible information

* Rename method name to a generic name

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-09-08 09:13:41 -03:00
Daniel Augusto Veronezi Salvador 9c51009134
Remove storage scope validation on KVM live migration (#5321)
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-08-20 14:54:14 -03:00
Daniel Augusto Veronezi Salvador 65a48dcb74
Add SharedMountPoint to KVMs supported storage pool types (#4780)
* Add SharedMountPoint to KVMs supported storage pool types

* Fix live migration to iSCSI and improve logs

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-08-16 12:32:19 -03:00
Pearl Dsilva ea7d3b34d1
Cleanup volume information from db when deleted (#4551)
* Cleanup volume information from db when deleted

* reuse search builder

* revert change

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-08-09 14:21:07 +05:30
Spaceman1984 96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick disk capable pools if available

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dc.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
Abhishek Kumar 42c83b08f5 Merge remote-tracking branch 'apache/4.15' 2021-04-26 14:33:58 +05:30
slavkap b4ee4acaf3
server: Fix volume state on migrate with migrateVirtualMachineWithVolume API call (#4934)
When invoking the migrateVirtualMachineWithVolume API call and a strategy isn't found, the volumes are left in the Migrating state.

This PR puts the volumes back into the Ready state.
2021-04-22 14:30:18 +05:30
Rohit Yadav 7270ca7e25 Merge remote-tracking branch 'origin/4.14' into 4.15 2021-04-06 12:51:26 +05:30
Gabriel Beims Bräscher cb91a769d3
Fix npe when migrating vm with volume (#4698) (#4775)
Cherry-pick commit 59fba4916b and fix conflict.

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
2021-04-06 11:54:29 +05:30
Daniel Augusto Veronezi Salvador 59fba4916b
Fix npe when migrating vm with volume (#4698)
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-03-08 17:56:56 +01:00
Rohit Yadav fa067e02a7 Updating pom.xml version numbers for release 4.14.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-03-02 12:32:27 +05:30
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created in the "/config" directory under the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
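
For instance, the new agent-side setting (default shown) looks like:

  # /etc/cloudstack/agent/agent.properties on the KVM host
  host.cache.location=/var/cache/cloud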

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore

- Check whether the router filesystem is writable before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable

- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for a few operations on the PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
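
A minimal sketch of that pooling idea, with names assumed for illustration (the real logic lives in the plugin's gateway client classes, and also tracks the 10-minute inactivity window):

  import java.time.Duration;
  import java.time.Instant;
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.function.Function;

  // Illustrative only: cache one gateway session per storage pool and
  // log in again once the cached token has expired (8 hours for PowerFlex).
  public final class SessionCache<S> {
      private record Entry<S>(S session, Instant expiresAt) {}

      private final Map<String, Entry<S>> entries = new ConcurrentHashMap<>();
      private final Function<String, S> login;   // authenticates against the gateway
      private final Duration lifetime;           // e.g. Duration.ofHours(8)

      public SessionCache(Function<String, S> login, Duration lifetime) {
          this.login = login;
          this.lifetime = lifetime;
      }

      public S get(String poolId) {
          return entries.compute(poolId, (id, cur) ->
                  cur == null || Instant.now().isAfter(cur.expiresAt())
                          ? new Entry<>(login.apply(id), Instant.now().plus(lifetime))
                          : cur).session();
      }
  }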

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Rohit Yadav 66f0beda5f Updating pom.xml version numbers for release 4.14.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-08 16:24:09 +05:30
Rohit Yadav b482da8c91 Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-11 13:58:30 +05:30
Daan Hoogland 280c13a4bb Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-05 15:51:02 +00:00
Daan Hoogland 81e9e6809b Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:34:46 +00:00
Daan Hoogland e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland 01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Harikrishna Patnala 07abcf5705 During migrate volume command, when operation timed out exception or any exception is occured it is not handled properly to clean the volume_store_ref entry.
Fixed it to clean the volume_store_ref entry upon on any exception
2020-10-19 15:05:57 +05:30
nvazquez d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Pearl Dsilva b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- Balanced migration of data objects from source Image store to destination Image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
2020-09-17 10:12:10 +05:30
andrijapanicsb 5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb 05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb 6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
Daan Hoogland 689e529d7b Merge release branch 4.13 to master
* 4.13:
  Fixed guest vlan range going missing when using zone wizard (#4042)
  Volume migration (#4043)
2020-04-23 20:19:30 +02:00
dahn c1570b9c91
Volume migration (#4043)
* Update AncientDataMotionStrategy.java

Fix: when secondary storage usage is > 90%, volume migration across primary storage causes the migration to fail and loses the volume

* Update AncientDataMotionStrategy.java

A volume is migrated across primary storage. If no secondary storage is available (or used capacity is > 90%), the migration is canceled.
Before this modification, if secondary storage could not be found, copyVolumeBetweenPools returned null.

copyAsync considers answer = null to be a sign of successful task execution, so it deletes the volume on the old primary storage. This is the root cause of the data loss, because the volume was never migrated at all.
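
A minimal sketch of the guard this implies (method name assumed; Answer here is CloudStack's com.cloud.agent.api.Answer, and this is not the literal patch):

  // Treat a null answer from copyVolumeBetweenPools as a failure, so the
  // source volume is never deleted for a copy that never actually ran.
  private String checkCopyAnswer(Answer answer) {
      if (answer == null) {
          return "volume copy failed: no suitable secondary storage found";
      }
      return answer.getResult() ? null : answer.getDetails();
  }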

* code in comment removed

Co-authored-by: div8cn <35140268+div8cn@users.noreply.github.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
2020-04-23 19:56:27 +02:00
Nicolas Vazquez 73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add free disk space on the host prior template download

* Add unit tests for the free space check

* Fix free space check - retrieve available size in bytes

* Update default location for direct download

* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope

* Verify location for temporary download exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
2020-03-06 19:56:54 +01:00
Rohit Yadav d90341ebf1
cloudstack: add JDK11 support (#3601)
This adds support for JDK11 in CloudStack 4.14+:

- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-02-12 12:58:25 +05:30
Rohit Yadav b0e3fbec3a Merge remote-tracking branch 'origin/4.13' 2019-11-11 21:57:59 +05:30
dahn 95fbe7c55b datamotion: snapshot failure diagnostics unhidden (#3666)
Diagnostics are hard when a snapshot fails due to a null pointer, because no stack trace or location of the error is logged, e.g.:
2019-10-21 12:55:00,056 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-131:ctx-80420156 job-10033827/job-10033828 ctx-4864e2f5) (logid:21454564) copy snasphot failed: java.lang.NullPointerException
2019-11-11 21:55:36 +05:30
Paul Angus 50fc045f36 Updating pom.xml version numbers for release 4.14.0.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-07 09:57:46 +01:00
Paul Angus 61b8b77913 Updating pom.xml version numbers for release 4.13.1.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-01 13:36:50 +01:00
Paul Angus 8e08b47cc9 Updating pom.xml version numbers for release 4.13.0.0
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-08-20 15:35:49 +01:00
Gabriel Beims Bräscher 5dc982d8ba KVM local migration issue #3521 (#3533)
Fix a regression bug that affects KVM local storage migration. Some of the desired execution flows for KVM local storage migration had been altered to allow only managed storage to execute. Fixed by allowing both managed and non-managed storage to execute.

Fixes #3521
2019-08-07 15:41:30 +05:30
Abhishek Kumar b2db8979f2 server: fix for respecting secondary storage threshold limit (#3480)
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three different methods:
DataStore getRandomImageStore(List<DataStore> imageStores);
To get an image store for reading purposes. The threshold capacity check will not be used here.
DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
To get an image store for writing purposes. The threshold capacity check will be used here, and the store with the most free space will be returned. If no store filled below the threshold is found, NULL will be returned.
List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
To get a list of image stores for writing purposes that fulfill the threshold capacity check.
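
As a sketch, the three methods line up like this (DataStore is CloudStack's image-store handle; comments paraphrase the descriptions above, bodies omitted):

  import java.util.List;

  public interface ImageStoreProviderManager {
      // Reads: any store will do, so the capacity threshold is not checked.
      DataStore getRandomImageStore(List<DataStore> imageStores);

      // Writes: the store with the most free space below the threshold,
      // or null when every store is above it.
      DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);

      // Writes: every store below the capacity threshold.
      List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
  }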

Correspondingly DataStoreManager methods have been refactored to return similar values for a given zone.

Fixes #3287 - NULL value will be returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning random secondary storage for writing, storage with max. free space will be returned.
Fixes #3478 - For migration on VMware, all writable secondary storage will be mounted while preparation.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2019-07-31 15:37:59 +05:30
Nicolas Vazquez 0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

From source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
To enable this functionality, the database should be updated to enable the live storage capability for KVM if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM (see the SQL sketch below). This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS6 and possibly Ubuntu as KVM hosts, but not with CentOS7 due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available in paid versions of RHEL-EV but not CentOS7; however, this works with CentOS6.
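
For reference, a hedged example of that database change (assuming the cloud.hypervisor_capabilities table used by current CloudStack schemas; verify the table name against your version):

  UPDATE cloud.hypervisor_capabilities
     SET storage_motion_supported = 1
   WHERE hypervisor_type = 'KVM';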
Fix for CentOS 7:

Create repo file on /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Gabriel Beims Bräscher 25c4f7fc08 kvm: Remove code that generated /var/lib/libvirt/images/null on target host (#3280)
This commit simplifies the generateDestPath method and fixes an issue where an extra file, named 'null', was created on the target storage pool during VM local storage volume migration. Without this fix, the VM is migrated and there is no data loss; however, 193 KB is allocated for the unused file named 'null', and the file stays on the target storage.
2019-05-27 18:15:29 +05:30
GabrielBrascher 8d3feb100a Updating pom.xml version numbers for release 4.13.0.0-SNAPSHOT
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-20 18:47:35 -03:00
GabrielBrascher a137398bf1 Updating pom.xml version numbers for release 4.12.0.0
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-14 10:11:46 -03:00
Gabriel Beims Bräscher 7c5eca9481
Copy template to target KVM host if needed when migrating local <> local storage (#3154)
* Migrate template to target host if needed.

Fix KVM VM local storage live migration by migrating its template to the
target host if needed.

* Address reviewer and add method that updates the DB template reference

* Remove deprecated Config.PrimaryStorageDownloadWait

* Code formatting of @Inject to follow checkstyle
2019-02-05 00:18:29 -02:00
dahn b363fd49f7 Vmware offline migration (#2848)
* - Offline VM and Volume migration on Vmware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations

* Fix indentation of marvin test file and reformat against PEP8

* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.

* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one

* Fix unhandled NPE during VM migration

* Revert back to distinct event descriptions for VM to host or storage pool migration

* Reformat test_primary_storage file against PEP-8 and Remove unused imports

* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
2019-01-25 10:05:13 -02:00