Commit Graph

1125 Commits

Author SHA1 Message Date
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed the scope of the "vm.configdrive.primarypool.enabled" configuration from Global to Zone level
	=> Introduced new zone-level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone-level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when the storage pool doesn't support config drives
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created in the "/config" directory under the host cache path (see the sketch below)
	=> Track the config drive location and use it when required for any config drive operation (migrate, delete)
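A minimal sketch of how such zone-scoped settings are declared with CloudStack's ConfigKey (the setting names and defaults are from this PR; the field names, category and dynamic flag here are illustrative):

import org.apache.cloudstack.framework.config.ConfigKey;

public interface ConfigDriveSettings {
    // illustrative declarations; on the KVM agent side, agent.properties gains:
    //   host.cache.location=/var/cache/cloud
    ConfigKey<Boolean> ForceHostCacheUse = new ConfigKey<>("Advanced", Boolean.class,
            "vm.configdrive.force.host.cache.use", "false",
            "Force host cache for config drives", true, ConfigKey.Scope.Zone);
    ConfigKey<Boolean> UseHostCacheOnUnsupportedPool = new ConfigKey<>("Advanced", Boolean.class,
            "vm.configdrive.use.host.cache.on.unsupported.pool", "true",
            "Use host cache for config drives when the pool doesn't support config drives",
            true, ConfigKey.Scope.Zone);
}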

- Detect the virtual size from the template URL while registering direct download qcow2 templates (KVM hypervisor)

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When no zones are specified while registering a template, template size discovery is performed using any available host, picked randomly from one of the available zones

- Release the VM resources when a VM is synced to the Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger a DeleteCommand for never-used or downloaded templates, as these don't exist on the datastore and so cannot be deleted from it

- Check whether the router filesystem is writable before performing health checks (see the sketch below)
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are on different partitions, so test both locations
	=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable
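A minimal Java sketch of the writability probe, for illustration only (the shipped check is the Python script filesystem_writable_check.py on the router; the assumption here is that creating and removing a temp file is an adequate probe):

import java.io.File;
import java.io.IOException;

public class FilesystemWritableCheck {
    static boolean isWritable(String dir) {
        try {
            // create and remove a probe file; fails on a read-only or full filesystem
            File probe = File.createTempFile("rw-probe", ".tmp", new File(dir));
            return probe.delete();
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // /var/cache/cloud holds the health-check config; /root holds the monitor
        // results; they sit on different partitions, hence both are probed
        System.out.println("/var/cache/cloud writable: " + isWritable("/var/cache/cloud"));
        System.out.println("/root writable: " + isWritable("/root"));
    }
}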

- Fixed an NPE where the template is null for DATA disks: copy the template to target storage only for the ROOT disk (which has a template id) and skip DATA disk(s)

* Addressed some issues with a few operations on the PowerFlex storage pool.

- Updated the volume migration operation to sync the status and wait for the migration to complete.

- Updated VM snapshot naming so that ScaleIO volume names stay unique when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added a PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients; it uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
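A sketch of the connection-pool idea (illustrative names; the plugin's actual classes and session handling differ): one cached client per storage pool, replaced atomically once its token is no longer valid.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ScaleIOGatewayClientConnectionPool {

    interface ScaleIOGatewayClient {
        boolean isSessionValid(); // false once the 8h (or 10min-idle) token has expired
    }

    private final Map<Long, ScaleIOGatewayClient> clients = new ConcurrentHashMap<>();

    public ScaleIOGatewayClient getClient(long storagePoolId) {
        // compute() atomically renews a client whose session token has expired
        return clients.compute(storagePoolId, (id, client) ->
                (client == null || !client.isSessionValid()) ? newClient(id) : client);
    }

    private ScaleIOGatewayClient newClient(long storagePoolId) {
        // re-authenticate against this pool's gateway URL and credentials here
        return () -> true;
    }
}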

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Run the basic tests (connectivity and file system check) on the router
	=> Clean up the health check results when the router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- The PowerFlex pool URL is generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and minor improvements in the PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
- Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
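A boolean distillation of these checks (hypothetical helper, not the plugin's actual code):

class PowerFlexMigrationChecks {
    static boolean migrationAllowed(boolean srcIsPowerFlex, boolean dstIsPowerFlex,
                                    boolean sameScaleIOInstance, boolean hasSnapshots,
                                    boolean vmRunning) {
        if (srcIsPowerFlex != dstIsPowerFlex) return false;     // PowerFlex <-> non-PowerFlex unsupported
        if (!sameScaleIOInstance && hasSnapshots) return false; // snapshots pin a volume to its instance
        if (vmRunning) return false;                            // volumes of running VMs stay on their pool
        return true;
    }
}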

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Rohit Yadav 66f0beda5f Updating pom.xml version numbers for release 4.14.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-08 16:24:09 +05:30
Rohit Yadav b482da8c91 Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-11 13:58:30 +05:30
Daan Hoogland 280c13a4bb Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-05 15:51:02 +00:00
Daan Hoogland 81e9e6809b Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:34:46 +00:00
Daan Hoogland e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland 01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Harikrishna b1ddd7c2e6
vmware: Fix for mapping guest OS type read from OVF to existing guest OS in C… (#4553)
* Fix for mapping the guest OS type read from the OVF to an existing guest OS in the CloudStack database while registering a VMware template

* Added unit tests to String Utils methods and updated the code

* Updated the java doc section

* Updated the OS description logic to match the guest OS display name using a case-insensitive equals comparison
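A sketch of that matching rule (hypothetical helper; illustrative only, assuming a case-insensitive display-name comparison):

import java.util.List;
import java.util.Optional;

class GuestOsMatcher {
    // pick the existing guest OS whose display name equals the OVF description, ignoring case
    static Optional<String> match(List<String> guestOsDisplayNames, String ovfDescription) {
        return guestOsDisplayNames.stream()
                .filter(name -> name.equalsIgnoreCase(ovfDescription))
                .findFirst();
    }
}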
2020-12-23 19:37:21 +05:30
Nicolas Vazquez 4617be4583
vmware: Fix template upload from local (#4555)
Update the guest OS from the OVF file after upload is completed
This PR fixes the template upload from local on VMware

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-12-23 15:13:39 +05:30
Alexandru Bagu fdb2ee3165
storage: Fix hypervisor type cast to string (#4516)
This PR addresses an error that appears when you try to add a new host. I don't even understand why there was a cast to String in the first place. I will assume some classes send a HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed to use the same type everywhere? With this fix, adding a new XenServer host works fine.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-12-14 11:56:44 +05:30
Pearl Dsilva e4a504b084
Make global setting non-dynamic (#4505)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-12-01 14:00:35 +05:30
Spaceman1984 dfa09fc856
server: Setting snapshot removed on timeout (#4425)
* Setting snapshot state to error on timeout

* Setting removed field so snapshot record is ignored by garbage collection

* Removed explicitly setting error status, renamed method from markFailed to markRemoved

* Renamed method, moved code a few lines down

* Moved remove logic

* Removed unused service

* Moved removed logic - last time, promise
2020-11-21 02:20:16 +05:30
Rakesh 735b6de296
Cleanup download urls when SSVM destroyed (#4078)
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
2020-11-18 14:01:31 +01:00
Spaceman1984 acee15a530
Moved dedicated hosts to the end of the resultset when selecting an e… (#4428) 2020-11-18 12:07:14 +00:00
Pearl Dsilva 1dbb76f64b
Fix: Data migration (#4475)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-11-18 09:45:53 +01:00
nvazquez ee5b8763a6 Fix removing a VM and its volumes for deploy-as-is if they have previously failed; restore CPU flags in nested virtualization test 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 5fdabc1cb0 Added storage policy details to the disk while creating it and restricted migration of volumes to storage pools that are not storage policy compliant 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 46b5322d9b Adding vSphere storage policy to disk on start command and attach volume command 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 07abcf5705 During the migrate volume command, when an operation-timed-out (or any other) exception occurred, it was not handled properly and the volume_store_ref entry was not cleaned up.
Fixed it to clean the volume_store_ref entry on any exception
2020-10-19 15:05:57 +05:30
nvazquez 94bebe8792 Revert back deploy as is column on templates but keep it as default for new templates 2020-10-19 15:05:57 +05:30
nvazquez 9b51a706db Set deploy-as-is to default on VMware 2020-10-19 15:05:57 +05:30
nvazquez b0d3168e0b Fail template registration when guest OS not found 2020-10-19 15:05:57 +05:30
nvazquez 32d85b0fa2 Display storage on logging when not deploy-as-is and guest OS small refactor 2020-10-19 15:05:57 +05:30
nvazquez 41354227e2 Handle guest OS read from deploy-as-is OVF descriptor 2020-10-19 15:05:57 +05:30
nvazquez edfbed34ad Use network adapter from OVF on deploy-as-is 2020-10-19 15:05:57 +05:30
Harikrishna Patnala 33ae2afc89 Removed few duplicate imports during rebase with master 2020-10-19 15:05:57 +05:30
Harikrishna Patnala 44dc0c6072 Fixed rat failure on new class DeployAsIsHelper.java
Also removed some unused imports during rebase
2020-10-19 15:05:57 +05:30
nvazquez f73830acbb Refactor deploy as is constants 2020-10-19 15:05:57 +05:30
nvazquez bb4ce2118d Add new template and vm deploy as is details table and refactor 2020-10-19 15:05:57 +05:30
nvazquez d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala 19745ea049 Fix serialization of the enable primary datastore maintenance command 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 201ebe8868 Fix simulator failures 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 61dd85876b Fix migrate VM and volume APIs in case of a datastore cluster 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 873f9dd9ac Datastore cluster operations: putting into maintenance mode, updating the storage pool with tags, cancelling maintenance mode and deleting the storage pool 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 75fb1d91ee Fix adding Datastore clusters and listing 2020-10-19 14:57:15 +05:30
Harikrishna Patnala b4a23ea5f6 Allocation logic to skip the datastore cluster and consider only storage pools inside the datastore cluster 2020-10-19 14:57:15 +05:30
Harikrishna Patnala 41b3fc19d6 Add datastore cluster and its child entities (the datastores in the cluster) into CloudStack
Setting scope is still pending.
2020-10-19 14:57:15 +05:30
Harikrishna Patnala 48786b2d31 DataStore Clusters addition as a storage pool 2020-10-19 14:57:15 +05:30
Harikrishna Patnala 6df819028e UI changes and accept any type of datastore as presetup in vmware 2020-10-19 14:57:15 +05:30
Harikrishna Patnala fb0a96e7fb Check if the datastore is compliant with the storage policy provided in the disk offering.
Added corresponding manager objects from the PBM SDK to do the job.
Made DAO layer changes to read the storage policy in the disk offering
2020-10-19 14:57:15 +05:30
Pearl Dsilva 0d487fc8c9
support for data migration of incremental snaps on xen (#4395)
* support for handling incremental snaps (on DB entries) on xen

* Addressed comments

* Update NfsSecondaryStorageResource.java

adjusted space in comment/log

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-10-18 02:15:10 +05:30
Pearl Dsilva b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- Balanced migration of data objects from the source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
2020-09-17 10:12:10 +05:30
Spaceman1984 d57aa83517
server: Added nfs minor version support (#4180)
This PR adds minor version support when mounting NFS on the SSVM, as requested in #2861

The global setting "secstorage.nfs.version" has been changed to use the String data type which allows any minor version to be specified.
2020-08-19 14:53:38 +05:30
nvazquez 7e3b61b723 Merge branch '4.14' 2020-07-18 14:17:43 -03:00
nvazquez 5c6e79b1eb Merge branch '4.13' into 4.14 2020-07-18 14:15:46 -03:00
Nicolas Vazquez f843c537f0
Fix snapshots garbage collection (#4188)
* Cleanup orphan entries from snapshot store ref for primary storage

* Add debug message
2020-07-18 14:12:53 -03:00
Nicolas Vazquez 8c1d749360
[VMware] Enable unmanaging guest VMs (#4103)
* Enable unmanaging guest VMs

* Minor fixes

* Fix stop usage event only if VM is not stopped when unmanaging

* Rename unmanaged VMs manager

* Generate netofferingremove usage event if VM is not stopped

* Generate usage event VM snapshot primary off when unmanaging
2020-06-26 08:31:43 -03:00
Rohit Yadav de3ccd2c29 Merge remote-tracking branch 'origin/4.14' 2020-06-15 09:56:55 +05:30
Rohit Yadav e94a54f3b4 Merge remote-tracking branch 'origin/4.13' into 4.14 2020-06-15 09:56:06 +05:30
Spaceman1984 6a683dcf77
storage: Fixed null pointer (#4130)
Fixes #4090

When trying to migrate a VM across 2 clusters, if a snapshot has been deleted and garbage collection has run to update the removed field, it is not possible to migrate the instance due to a null pointer.
2020-06-15 09:54:22 +05:30
andrijapanicsb 5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb 05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb 6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
andrijapanicsb 398e685e01 Updating pom.xml version numbers for release 4.13.2.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-04-29 12:29:12 +01:00
Daan Hoogland 689e529d7b Merge release branch 4.13 to master
* 4.13:
  Fixed guest vlan range going missing when using zone wizard (#4042)
  Volume migration (#4043)
2020-04-23 20:19:30 +02:00
andrijapanicsb b2ffa3efa5 Updating pom.xml version numbers for release 4.13.1.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-04-23 19:17:09 +01:00
dahn c1570b9c91
Volume migration (#4043)
* Update AncientDataMotionStrategy.java

Fix: when secondary storage usage is > 90%, volume migration across primary storage causes the migration to fail and the volume to be lost

* Update AncientDataMotionStrategy.java

When a volume is migrated across primary storage and no secondary storage is available (or used capacity is > 90%), the migration is canceled.
Before this modification, if secondary storage could not be found, copyVolumeBetweenPools returned null.

copyAsync considers answer = null to be a sign of successful task execution, so it deletes the volume on the old primary storage. This is the root cause of the data loss, because the volume was never migrated at all.
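A minimal sketch of the fix's logic (illustrative; the real change lives in AncientDataMotionStrategy): a null answer must count as failure, otherwise the caller deletes the source volume of a migration that never ran.

class CopyAnswerCheck {
    static class Answer {
        final boolean result;
        Answer(boolean result) { this.result = result; }
    }

    static boolean migrationSucceeded(Answer answer) {
        // before the fix: null fell through as "success" -> source volume deleted
        return answer != null && answer.result;
    }
}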

* code in comment removed

Co-authored-by: div8cn <35140268+div8cn@users.noreply.github.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
2020-04-23 19:56:27 +02:00
Daan Hoogland b984184b7a Merge release branch 4.13 to master
* 4.13:
  Snapshot deletion issues (#3969)
  server: Cannot list affinity group if there are hosts dedicated… (#4025)
  server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002)
2020-04-11 16:45:00 +02:00
dahn f18fe5e1da
Snapshot deletion issues (#3969)
* Fixes snapshot deletion

* Remove legacy '@Component', it is not necessary in this bean/class.

* Fix log message missing %d and remove snapshot on DB

* Remove "dummy" boolean return statement

* Manage snapshot deletion for KVM + NFS (primary storage)

* checkstyle trailing spaces

* rename options strings to *_OPTION

* Fix typo on deleteSnapshotOnSecondaryStorage and enhance log message

* Move the snapshotDao.remove(snapshotId); (#4006)

* Fix the deletesnapshot workflow to handle both snapshots created in primary storage and snapshots backed up to secondary storage

* Fix extra space

* refactor out separate handling methods for secondary and primary (reducing returns)

* return false on unexpected error or log when expected

* != instead of ==

* secondary instead of backup storage

* init to null

* Handle snapshot deletion on primary storage. When primary store ref not found for snapshot do not fail the operation.

* Fix debug levels on log messages

Co-authored-by: GabrielBrascher <gabriel@apache.org>
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
2020-04-11 16:40:27 +02:00
Wei Zhou 6bf92fb136
server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002) 2020-04-06 22:01:40 +02:00
Nicolas Vazquez 73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add free disk space check on the host prior to template download

* Add unit tests for the free space check

* Fix free space check - retrieve available size in bytes

* Update default location for direct download

* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope

* Verify location for temporary download exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
2020-03-06 19:56:54 +01:00
Rohit Yadav 58cf300fb6 Merge remote-tracking branch 'origin/4.13' 2020-03-06 14:22:46 +05:30
Nicolas Vazquez bd7d41bf6d
server: fix VM with ISO attached migration issue (#3935)
As previously described by PR #3929:
If the VM has an ISO attached, the migration fails with the error message "org.libvirt.LibvirtException: Cannot access storage file /mnt/b33e5a1d-e4ea-3465-b6ac-c98dc8ff8af0/207-2-cc5fd717-2d57-3bb3-bcf6-2c930268db6c.iso"
2020-03-06 13:32:19 +05:30
Abhishek Kumar 0ad2370baf
Enable Direct Download for System VMs (#3731)
* changes for configurable timeouts for direct download

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* server: refactor direct download config value retrieval

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactored direct download cmd, downloader classes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* server, services: allow direct download template for SSVM, CPVM

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* list bypassed system templates

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* ignore direct download template during system template download

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* add direct download entry while adding store

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix previous change, do not add multiple entries for direct download

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* connection request timeout as hidden configuration

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix template zone ref cleanup on zone deletion

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix previous commit test error, change implementation

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactored zone template cleanup

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2020-02-26 13:38:31 +01:00
Rohit Yadav d90341ebf1
cloudstack: add JDK11 support (#3601)
This adds support for JDK11 in CloudStack 4.14+:

- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-02-12 12:58:25 +05:30
Wei Zhou fd5bea838b
New feature: Add support to destroy/recover volumes (#3688)
* server: fix resource count of primary storage if some volumes are Expunged but not removed

Steps to reproduce the issue
(1) create a VM and stop it. Check the resource count of primary storage
(2) download the volume. The resource count of primary storage is not changed.
(3) expunge the VM; the volume will be in the Expunged state as there is a volume snapshot on secondary storage. The resource count of primary storage decreases.
(4) update the resource count of the account (or domain); the resource count of primary storage is reset to the value in step (2).

* New feature: Add support to destroy/recover volumes

* Add integration test for volume destroy/recover

* marvin: check resource count of more types

* messages translate to JP

* Update messages for CN

* translate message for NL

* fix two issues per Daan's comments

Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
2020-02-07 11:25:10 +01:00
Rohit Yadav 424f10cc77 Merge remote-tracking branch 'origin/4.13' 2020-01-31 14:18:11 +05:30
Abhishek Kumar 9d105b6546
template: copy md5 mismatch (#3383)
Fixes #3191

When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file could be deleted after template installation if it is not an actual (.qcow2, .ova, etc.) file. When the user copies a template using the copyTemplate API, the actual template file will be copied across the image stores. Matching the checksum of the copied template file against the stored value from the vm_template table will therefore result in a mismatch.
The changes set an empty checksum value for the copied template when passing it to the download service, which allows skipping the (wrong) checksum check for the copy during install.
However, this results in a change in the checksum value for the concerned template entry in the vm_template table after template install.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-01-31 14:16:37 +05:30
Gabriel Beims Bräscher 93aad24bbb storage: Handle RBD snapshot deletion (#3615)
When deleting volume snapshots, only the records in the database were deleted; the snapshots were not deleted on the primary storage.

Fixes: #3586
2019-12-08 14:48:51 +05:30
Rohit Yadav 7f5096a4d0
storage: don't select an SSVM that is removed (#3668)
In case an older SSVM is removed without changing its state from Up
to Destroyed/Removed etc, the SSVM may be randomly selected for image
store related operations. This fix ensures that endpoints for an image
store are found only from a set of SSVM hosts that are not removed.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-11-12 00:47:21 +05:30
Rohit Yadav b0e3fbec3a Merge remote-tracking branch 'origin/4.13' 2019-11-11 21:57:59 +05:30
dahn 95fbe7c55b datamotion: snapshot failure diagnostics unhidden (#3666)
Diagnostics are hard when a snapshot fails due to a null pointer, because no stack trace or location of the error is logged. E.g.:
2019-10-21 12:55:00,056 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-131:ctx-80420156 job-10033827/job-10033828 ctx-4864e2f5) (logid:21454564) copy snasphot failed: java.lang.NullPointerException
2019-11-11 21:55:36 +05:30
Paul Angus 50fc045f36 Updating pom.xml version numbers for release 4.14.0.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-07 09:57:46 +01:00
Paul Angus 61b8b77913 Updating pom.xml version numbers for release 4.13.1.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-01 13:36:50 +01:00
Paul Angus 8e08b47cc9 Updating pom.xml version numbers for release 4.13.0.0
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-08-20 15:35:49 +01:00
Nicolas Vazquez bfc08715cc Display VM snapshot tags on usage records (#3560)
* Refactor usage helper tables to include VM snapshot id

* Fix resource type and resource id while listing usage records

* Add defensive checks
2019-08-20 14:20:23 +01:00
Rohit Yadav b576972f71
test: stabilize 4.13/master (#3547)
Fix failing smoketests, fix NPEs. 

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-08-13 11:51:10 +05:30
Nicolas Vazquez 3c2af55d81 vmware: allow configuring appliances on the VM instance wizard when OVF properties are available (#3271)
Problem: In VMware, appliances that have options that must be answered before deployment are configurable through the vSphere vCenter user interface, but this is not possible from the CloudStack user interface.

Root cause: CloudStack does not handle vApp configuration options during deployments if the appliance contains configurable options. These configurations are mandatory for VM deployment from the appliance on VMware vSphere vCenter: vCenter detects that there are mandatory configurations which the administrator must set before deploying the VM from the appliance.

Solution:
On template registration, after it is downloaded to secondary storage, the OVF file is examined and OVF properties are extracted from the file when available.
OVF properties extracted from templates after being downloaded to secondary storage are stored on the new table 'template_ovf_properties'.
A new optional section is added to the VM deployment wizard in the UI:
If the selected template does not contain OVF properties, then the optional section is not displayed on the wizard.
If the selected template contains OVF properties, then the optional new section is displayed. Each OVF property is displayed and the user must complete every property before proceeding to the next section.
If any configuration property is empty, then a dialog is displayed indicating that there are empty properties which must be set before proceeding
The specific OVF properties set on deployment are stored on the 'user_vm_details' table with the prefix: 'ovfproperties-'.
The VM is configured with the vApp configuration section containing the values that the user provided on the wizard.
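For illustration, a minimal reader for such OVF ProductSection properties using the standard Java XML APIs (a sketch only, not CloudStack's actual extraction code; it assumes the standard OVF 1.x envelope namespace):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

class OvfPropertyReader {
    static void printOvfProperties(String ovfPath) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true); // OVF uses namespaced elements and attributes
        Document doc = f.newDocumentBuilder().parse(new File(ovfPath));
        NodeList props = doc.getElementsByTagNameNS("*", "Property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String key = p.getAttributeNS("http://schemas.dmtf.org/ovf/envelope/1", "key");
            String type = p.getAttributeNS("http://schemas.dmtf.org/ovf/envelope/1", "type");
            System.out.println(key + " (" + type + ")");
        }
    }
}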
2019-08-09 16:14:46 +05:30
Gabriel Beims Bräscher 5dc982d8ba KVM local migration issue #3521 (#3533)
Fix a regression bug that affects KVM local storage migration. Some of the desired execution flows for KVM local storage migration had been altered to allow only managed storage to execute. Fixed by allowing both managed and non-managed storage to execute.

Fixes #3521
2019-08-07 15:41:30 +05:30
Abhishek Kumar b2db8979f2 server: fix for respecting secondary storage threshold limit (#3480)
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three different methods:
DataStore getRandomImageStore(List<DataStore> imageStores);
To get an image store for reading purposes. The threshold capacity check will not be used here.
DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
To get an image store for writing purposes. The threshold capacity check will be used here and the store with the most free space will be returned. If no store with used storage below the threshold is found, NULL will be returned.
List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
To get the list of image stores for writing purposes which fulfill the threshold capacity check.

Correspondingly DataStoreManager methods have been refactored to return similar values for a given zone.
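A compact sketch of that "store with max free space under the threshold" selection rule (hypothetical Store type; the real code works on DataStore through ImageStoreProviderManager):

import java.util.Comparator;
import java.util.List;

class ImageStoreSelector {
    static class Store {
        long freeBytes;
        double usedFraction; // filled fraction of the store's capacity
    }

    // returns the store with the most free space among those under the
    // capacity threshold, or null when every store is above it
    static Store getImageStoreWithFreeCapacity(List<Store> stores, double threshold) {
        return stores.stream()
                .filter(s -> s.usedFraction < threshold)
                .max(Comparator.comparingLong((Store s) -> s.freeBytes))
                .orElse(null);
    }
}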

Fixes #3287 - NULL is returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning random secondary storage for writing, storage with max. free space will be returned.
Fixes #3478 - For migration on VMware, all writable secondary storage will be mounted while preparation.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2019-07-31 15:37:59 +05:30
Rohit Yadav d930982a1c engine/storage: remove unused import
Fixes checkstyle issue caused by previous commit 6a511fce40
from PR #3466 where a minor review fix did not address this. Merging this
one to unblock a few other PRs after running a local build test.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-07-24 11:59:08 +05:30
Gabriel Beims Bräscher 6a511fce40 kvm: Add ceph RBD snapshot rollback (#3502)
Add CephSnapshotStrategy to handle RBD revert (rollback) snapshot. In order to support RBD revert (rbd_rollback), this PR adds a CephSnapshotStrategy class to handle Ceph/RBD snapshot actions.
2019-07-23 19:40:56 +05:30
Rohit Yadav f30d716452
cloudstack: fix forward merge issues (#3394)
- Fixes tests path from old layout to standard maven in src/test/java/
- Removed duplicate SnapshotManagerImpl at old path `server/src/com...`

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-12 16:38:58 +05:30
Nicolas Vazquez 0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

- From source and destination hosts within the same cluster
- From NFS primary storage to NFS cluster-wide primary storage
- Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage capability for KVM if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature set storage_motion_supported=1 in the hypervisor_capabilities table for KVM. This is not done by default as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7 due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not with CentOS 7; however, this works with CentOS 6.
Fix for CentOS 7:

Create a repo file in /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Rohit Yadav bbc0ae873d
storage: post process locally uploaded multi-disk ova template (#3215)
Problem: When a multi-disk OVA template is uploaded, only the root disk is recognized and VMs deployed using such template only get the root disk provisioned.
Root Cause: The template processor for multi-disk OVA was not used in the template upload processor.
Solution: Added support for local multi-disk OVA template upload. After a multi-disk OVA template is
uploaded, the mechanism that worked on multi-disk OVA templates registered using URL is now also used to discover and create data-disk templates in the cloud.vm_template table and on the secondary storage.

To enable SSL on SSVMs :
• Upload the certificates like you usually do via the API or UI->Infrastructure tab
• Set the global settings secstorage.encrypt.copy, secstorage.ssl.cert.domain to appropriate values
along with the CPVM ones
• Restart management server (no need to destroy/restart SSVM (or the ssvm agent))

Test cases:
- Upload template and check it creates multi-disk folders on secondary 
storage and entries in cloud.vm_template table
- Upload template and kill/shutdown management server. Then restart MS
to check if template sync works
- Copy template across zone of an uploaded template

Signed-off-by: Rohit Yadav rohit.yadav@shapeblue.com
2019-06-05 23:07:40 +05:30
Gabriel Beims Bräscher 25c4f7fc08 kvm: Remove code that generated /var/lib/libvirt/images/null on target host (#3280)
This commit simplifies the generateDestPath method and fixes an issue where an extra file, named as 'null', was created on the target storage pool during VM local storage volume migration. Without this fix, the VM is migrated and there is no data loss; however, 193 KB is allocated for the unused file named as 'null' and the file stays on the target storage.
2019-05-27 18:15:29 +05:30
Rohit Yadav 0700d91a68 Merge branch '4.12'
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
  merging

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 15:15:17 +05:30
Rohit Yadav 00ff536f81 Merge remote-tracking branch 'origin/4.11' into 4.12
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 14:26:11 +05:30
skattoju4 4c60a5b1ff Fix slow vm creation when large sf snapshot count (#3282)
* skip getting used bytes for volumes that are not in Ready state
* updated log message
* filter snapshots by state backedup
* removed * import
* filter templates by state 'DOWNLOADED'
* refactored getUsedBytes to use O(1) queries (see the sketch after this list)
* querying for ready volumes instead of filtering in memory
* make listByStoreIdInReadyState more generic ex listByStoreIdAndState
* updated snapshot search criteria for listByStoreIdAndState
* updated template search criteria for listByPoolIdAndState
* fixed typo in search criteria for listByTemplateAndState
* fixed typo in search criteria for templates in listByPoolIdAndState
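A sketch of the shape of that getUsedBytes refactor (hypothetical DAO interface; illustrative only): the state predicate is pushed into the query, so computing used bytes issues a constant number of queries regardless of how many snapshots a store holds.

import java.util.List;

interface SnapshotStoreDao {
    // before: load every row for the store, then filter by state in memory
    // after:  let the database apply the state filter
    List<Long> sizesByStoreIdAndState(long storeId, String state);
}

class UsedBytesSketch {
    static long usedBytes(SnapshotStoreDao dao, long storeId) {
        return dao.sizesByStoreIdAndState(storeId, "BackedUp")
                .stream().mapToLong(Long::longValue).sum();
    }
}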
2019-05-11 16:02:52 +02:00
Anurag Awasthi f9b61bc737 orchestration: Allow VM that has never started to have volumes attached (#3276)
With this patch b766bf7
we started tracking disks in the Attaching state so that other attach requests can fail gracefully. However, this missed the case where a disk was in the Allocated state and attach was requested.

For the use case where users want to attach a disk that is in the Allocated state but not Ready, we need to have the Allocated-Attaching transition as well. We must take care of returning to the original state - Allocated or Ready - when the attach request has completed.

For the use case of unstarted VMs the disk must proceed as follows: Allocated -> Attaching -> Allocated. When the VM is started, the disk is "created" and a pool is assigned. For the use case of started VMs it's more trivial and the disk proceeds as follows: Ready -> Attaching -> Ready.

Test this by creating a VM with "startvm=false", creating a disk, and trying to attach it in the Allocated state. It would give an exception on the latest 4.11 but is fixed by this patch.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-10 23:40:38 +05:30
Rohit Yadav 491a10be0c
storage: publish delete usage event for snapshot deletion (#3212)
Problem: Users are billed for destroyed VMs with VM snapshots because usage records don't reflect that the VM and its VM snapshots are removed.
Root Cause: The destroyVirtualMachine and expungeVirtualMachine APIs were removing VM snapshots but not generating VMSNAPSHOT.DELETE usage event due to which the VM snapshots were not marked as removed in the usage_vmsnapshot table.
Solution: The issue was fixed by emitting the proper usage event for all the VM snapshots of a VM that is destroyed.
2019-04-10 17:12:55 +05:30
GabrielBrascher 8d3feb100a Updating pom.xml version numbers for release 4.13.0.0-SNAPSHOT
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-20 18:47:35 -03:00
GabrielBrascher a137398bf1 Updating pom.xml version numbers for release 4.12.0.0
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-14 10:11:46 -03:00
Gabriel Beims Bräscher 7c5eca9481
Copy template to target KVM host if needed when migrating local <> local storage (#3154)
* Migrate template to target host if needed.

Fix KVM VM local storage live migration by migrating its template to the
target host if needed.

* Address reviewer and add method that updates the DB template reference

* Remove deprecated Config.PrimaryStorageDownloadWait

* Code formating of @Inject to follow checkstyle
2019-02-05 00:18:29 -02:00
Nathan Johnson 637cc6ec4e feature: add libvirt / qemu io bursting (#3133)
* feature: add libvirt / qemu io bursting

Adds the ability to set bursting features from libvirt / qemu

This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.

https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/

* updates per rafael et al
2019-02-04 19:47:44 -02:00
GabrielBrascher 460d3127ec Fix conflict and merge forward PR #3122 from 4.11 to master (4.12) 2019-02-04 19:24:59 -02:00
Nathan Johnson bf805d1483 Add back ability to disable backup of snapshot to secondary (#3122)
* The snapshot.backup.rightafter configuration variable was removed by:

SHA: 6bb0ca2f85

This adds it back, though named snapshot.backup.to.secondary now instead.

This global parameter, once set, will allow you to prevent automatic backups of
     snapshots to secondary storage, unless they're actually needed.

Fixes #3096

* updates per review
2019-02-04 19:08:42 -02:00
dahn b363fd49f7 Vmware offline migration (#2848)
* - Offline VM and Volume migration on Vmware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations

* Fix indentation of marvin test file and reformat against PEP8

* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.

* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one

* Fix unhandled NPE during VM migration

* Revert back to distinct event descriptions for VM to host or storage pool migration

* Reformat test_primary_storage file against PEP-8 and Remove unused imports

* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
2019-01-25 10:05:13 -02:00
Gabriel Beims Bräscher bf209405e7 Allow KVM VM live migration with ROOT volume on file storage type (#2997)
* Allow KVM VM live migration with ROOT volume on file

* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests

* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
2018-12-14 09:01:28 -02:00
Paul Angus fb80e51307 Updating pom.xml version numbers for release 4.11.3.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2018-11-20 13:11:52 +00:00
Gabriel Beims Bräscher e45bed74a5 server: remove unused StrategyPriority.PLUGIN. (#3014)
Remove unused StrategyPriority.PLUGIN enum. The PLUGIN Strategy priority is not used, except by three JUnit test methods.
2018-11-14 15:07:37 +05:30
Kui LIU d53fc94485 CLOUDSTACK-10365: Change the "getXXX" boolean method names to "isXXX" (#2847)
These boolean-return methods are named as "getXXX".
Other boolean-return methods are named as "isXXX".
Considering these methods return boolean values, it is clearer and more consistent to rename them as "isXXX".
(rebase #2602 and #2816)
2018-09-22 17:20:48 +02:00
Mike Tutkowski d12c106a47
Restrict the number of managed clustered file systems per compute cluster (#2500)
* Restrict the number of managed clustered file systems per compute cluster
2018-09-11 08:23:19 -06:00
Mike Tutkowski 568119d437
Merge pull request #2585 from syed/upstream-snapshot-archive
Add ability to archive snapshots on primary storage
2018-08-23 19:36:04 -06:00
Rafael Weingärtner 85c88e08a7
Remove class snapshot data factory test (#2813)
* rename package from "src" to "org.apache.cloudstack.storage.vmsnapshot"

* Remove Empty test class "SnapshotDataFactoryTest"

* Remove redundant imports (importing a class from its own package)
2018-08-20 10:16:07 -03:00
Rafael Weingärtner 21c0225921
Fix the problem at #1740 when it loads all snapshots in the primary storage (#2808)
* Fix the problem at #1740 when it loads all snapshots in the primary storage

While checking a problem with @mike-tutkowski at PR #1740, we noticed that the method `org.apache.cloudstack.storage.snapshot.SnapshotDataFactoryImpl.getSnapshots(long, DataStoreRole)` was not loading the snapshots in `snapshot_store_ref` table according to their storage role. Instead, it would return a list of all snapshots (even if they are not in the storage role sent as a parameter) saying that they are in the storage that was sent as parameter.

* Add unit test
2018-08-17 08:33:25 -03:00
Mike Tutkowski 3db33b7385 Support online migration of a virtual disk on XenServer from non-managed storage to managed storage 2018-08-12 00:23:36 -06:00
Mike Tutkowski 46c56eaaf9 Merge release branch 4.11 to master
* 4.11:
  Changed the implementation of isVolumeOnManagedStorage(VolumeInfo) to check if the data store in question is for primary storage (and added a unit test from Daan Hoogland)
  vmware: reboot VR after mac updates (#2794)
2018-08-12 00:03:37 -06:00
Mike Tutkowski ab83c198a5 Changed the implementation of isVolumeOnManagedStorage(VolumeInfo) to check if the data store in question is for primary storage (and added a unit test from Daan Hoogland) 2018-08-10 11:24:18 -06:00
Khosrow Moossavi 7c6630bca7 Cleanup POMs (#2613)
* Cleanup and code-formatting of POM files

* Remove obsolete mycila license-maven-plugin

* Remove obsolete console-proxy/plugin project

* Move console-proxy-rdbconsole under console-proxy parent

* Use correct parent path for rdpconsole

* Order items alphabetically in setnextversion.sh

* Unify license header in POMs

* Alphabetic order of modules definition

* Extract all defined versions into parent pom

* Remove obsolete files: version-info.in, configure-info.in

* Remove redundant defaultGoal

* Remove useless checkstyle plugin from checkstyle project

* Order items alphabetically in pom.xml

* Add additional SPACEs to fix debian build

* Don't execute checkstyle on parent projects

* Use UTF-8 encoding in building checkstyle project

* Extract plugin versions into properties

* Execute PMD plugin on all the projects with -Penablefindbugs

* Upgrade maven plugins to latest version

* Make sure to always look for apache parent pom from repository

* Fix incorrect version grep in debian packaging

* Fix rebase conflicts

* Fix rebase conflicts

* Remove PMD for now to be fixed on another PR
2018-07-25 14:39:37 -03:00
Mike Tutkowski 73608dec28 Support multiple volume access groups per compute cluster 2018-07-16 15:13:16 -06:00
Khosrow Moossavi 67860d9f46 maven: Updating pom.xml version numbers for release 4.11.2.0-SNAPSHOT (#2728)
Fixes the version in pom etc. to be consistent with versioning pattern as X.Y.Z.0-SNAPSHOT after a minor release.

Signed-off-by: Khosrow Moossavi <khos2ow@gmail.com>
2018-07-06 17:27:12 +05:30
Paul Angus 8ba318da19 Updating pom.xml version numbers for release 4.11.2-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2018-06-26 17:53:54 +01:00
Paul Angus 2cb2dacbe7 Updating pom.xml version numbers for release 4.11.1.0
Signed-off-by: Paul Angus <paulangus@PA-Ansible-GUI.sblab.local>
2018-06-21 15:52:43 +01:00
Rohit Yadav 85750f918b Merge branch '4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-06-20 12:31:52 +05:30
Rohit Yadav 39471c8c00
configdrive: make fewer mountpoints on hosts (#2716)
This ensures that fewer mount points are made on hosts for either
primary storagepools or secondary storagepools.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-06-20 12:25:16 +05:30
Rohit Yadav 72e61bfa1d Merge branch '4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-06-07 11:26:34 +05:30
Nicolas Vazquez 99ca81a676 ui: do not send conserve mode on L2 network offering creation from the UI (#2694)
Do not send conserve mode param on L2 network offering creation from the UI. Fix config drive NPE issue on L2 network.
2018-06-07 11:20:37 +05:30
Rohit Yadav 9146d7b7a0 Merge branch '4.11' 2018-06-06 12:41:18 +05:30
Rafael Weingärtner 9b83337658 Create unit test cases for 'ConfigDriveBuilder' class (#2674)
* Create unit test cases for 'ConfigDriveBuilder' class

* add method 'getProgramToGenerateIso' as suggested by rohit and Daan

* fix encoding for base64 to StandardCharsets.US_ASCII

* fix MockServerTest.testIsMockServerCanUpgradeConnectionToSsl()

This is another method that is causing Jenkins to fail for almost a month
2018-06-04 13:20:09 +02:00
Rohit Yadav 7c6777b8d3 Merge branch '4.11': allow config drives on primary storage for KVM (#2651)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:50:55 +05:30
Rohit Yadav acc5fdcdbd
CLOUDSTACK-10290: allow config drives on primary storage for KVM (#2651)
This introduces a new global setting `vm.configdrive.primarypool.enabled` to toggle creation/hosting of config drive ISO files on primary storage; the default is false, causing them to be hosted on secondary storage. The current support is limited on the hypervisor resource side and in the current implementation is limited to `KVM` only. The next big change is that the config drive is created at a temporary location by the management server and shipped to either the KVM or SSVM agent via the cmd-answer pattern, the data of which is not logged. This saves us from adding a genisoimage dependency on the cloudstack-agent pkg.

The APIs to reset ssh public key, password and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating an existing ISO. If there are objections I'll re-add the strategy of detach+attach of a new config ISO as a way of updating. In the refactored implementation, the folder name is changed to lower-cased configdrive. And during VM start, migration or shutdown/removal, if primary storage is enabled for use, the KVM agent will handle cleanup tasks; otherwise the SSVM agent will handle them.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:27:23 +05:30
Syed Ahmed cd70ede3c2 Add ability to archive snapshots on primary storage 2018-05-17 04:39:34 -04:00
Rafael Weingärtner 15eddf3dd6 Merge forward branch '4.11' PR #2629
Fix primary storage count when deleting volumes (#2629)
2018-05-16 16:59:17 -03:00
Rafael Weingärtner b9ed42bd29
Fix primary storage count when deleting volumes (#2629)
* Primary Storage count for an account does not decrease when a Data Disk is deleted

When a data disk is created and not attached to a running VM, "deleteVolume" will not decrement the count for used primary storage in the account's accounting information. The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method. A sketch of the missing decrement follows the reproduction steps below.

Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with the listAccounts API - it is the same as before deleting the data disk (it should have dropped back to the value from step 2!)
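A minimal sketch of the missing accounting step (hypothetical stubs; the actual fix goes through CloudStack's resource-limit accounting on volume deletion):

class PrimaryStorageAccounting {
    interface ResourceLimitService {
        void decrementResourceCount(long accountId, String resourceType, long delta);
    }

    static void onDeleteVolume(ResourceLimitService limits, long accountId, long volumeSizeBytes) {
        // the step that was missing for detached data disks:
        // release the volume's bytes from the account's "primarystoragetotal"
        limits.decrementResourceCount(accountId, "primary_storage", volumeSizeBytes);
    }
}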

* formatting and cleanups

* fix imports that were wrongly changed during rebase
2018-05-16 15:28:28 -03:00
Marc-Aurèle Brothier 46bd94c6a2 [CLOUDSTACK-10254] checkstyle: add package name declaration validation (#2422)
* checkstyle: verify package name matches directory structure

* fix new checkstyle findings on directory with package name mismatch
2018-04-26 10:32:08 -03:00
Rohit Yadav 41895561a7 Merge branch '4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-03-30 16:21:06 +05:30
Mike Tutkowski e68f5cea67 Only use the host if its Resource State is Enabled. (#2512) 2018-03-29 17:43:22 +00:00
Rohit Yadav 8ef131745a Merge branch '4.11' 2018-03-15 16:46:50 +05:30
Rafael Weingärtner 972b8b71d7
CLOUDSTACK-8855 Improve Error Message for Host Alert State and reconnect host API. (#2387)
* CLOUDSTACK-8855 Improve Error Message for Host Alert State

* [CLOUDSTACK-9846] create column to save the content of alert messages

Remove declaration of throws CloudRuntimeException
I also removed some unused variables and comments left behind

This closes #837

* Isolate a problematic test "smoke/test_certauthority_root"
2018-03-14 15:27:43 -03:00
Raf Smeets 19d6578732 CLOUDSTACK-10303 : Refactor test data to nuage_test_data.py runnable against simulator (#2483)
* Refactored nuage tests

Added simulator support for ConfigDrive
Allow all nuage tests to run against simulator
Refactored nuage tests to remove code duplication

* Move test data from test_data.py to nuage_test_data.py

Nuage test data is now contained in nuage_test_data.py instead of
test_data.py
Removed all nuage test data from test_data.py

* CLOUD-1252 fixed cleanup of vpc tier network

* Import libVSD into the codebase

* CLOUDSTACK-1253: Volumes are not expunged in simulator

* Fixed some merge issues in test_nuage_vsp_mngd_subnets test

* Implement GetVolumeStatsCommand in Simulator

* Add vspk as marvin nuagevsp dependency, after removing libVSD dependency

* correct libVSD files for license purposes

pep8 pyflakes compliant
2018-03-14 17:17:36 +05:30
Rafael Weingärtner f2efbcecec
[CLOUDSTACK-10240] ACS cannot migrate a local volume to shared storage (#2425)
* [CLOUDSTACK-10240] ACS cannot migrate a volume from local to shared storage.

CloudStack is logically restricting the migration of local storage to shared storage and vice versa. This restriction is a logical one and can be removed for XenServer deployments. Therefore, we will enable migration of volumes between local and shared storage in XenServer independently of their service offering. This will work as an override mechanism to the disk offering used by volumes. If administrators want to migrate local volumes to shared storage, they should be able to do so (the hypervisor already allows that). The same applies the other way around.

* Cleanups implemented while working on [CLOUDSTACK-10240]

* Fix test case test_03_migrate_options_storage_tags

The changes applied were:
- When loading hypervisors capabilities we must use "default" instead of nulls
- "Enable" storage migration for simulator hypervisor
- Remove restriction on "ClusterScopeStoragePoolAllocator" to find shared pools
2018-03-07 18:23:15 -03:00
Rohit Yadav 0ece15f86e Updating pom.xml version numbers for release 4.11.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-26 16:57:48 +01:00
Rohit Yadav 6ffbce6159 Updating pom.xml version numbers for release 4.11.0.1-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-05 11:13:50 +01:00
Rohit Yadav 5dada1f7ed Updating pom.xml version numbers for release 4.11.0.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-26 13:13:37 +01:00
Rafael Weingärtner c591c5ad3e CLOUDSTACK-10248: Fix errors that appeared after #2283 (#2417)
This fixes a move refactoring error introduced in #2283
For instance, the class DatadiskTO is supposed to be in com.cloud.agent.api.to package. However, the folder structure it was placed in is com.cloud.agent.api.api.to.

Skip tests for cloud-plugin-hypervisor-ovm3:
For some unknown reason, there are quite a lot of broken test cases for cloud-plugin-hypervisor-ovm3. They might have appeared after some dependency upgrade and were overlooked by the person updating them. I checked them to see if they could be fixed, but these tests are not developed in a clear and clean manner. On top of that, we do not see (at least I don't) people using the OVM3 hypervisor with ACS. Therefore, I decided to skip them.

Indentation corrected to use spaces instead of tabs in XML files
2018-01-23 12:19:36 +01:00
Marc-Aurèle Brothier 893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove maven standard module (which only a few were using) and get rid of maven customization for the projects structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30
Rohit Yadav 072dbc0720 Updating pom.xml version numbers for master to 4.12.0.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-15 17:43:45 +05:30
Mike Tutkowski a30a31c9b7 CLOUDSTACK-9620: Enhancements for managed storage (#2298)
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)

Added support for root disks on managed storage with KVM

Added support for volume snapshots with managed storage on KVM

Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage

Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.

Included support for Reinstall VM on KVM with managed storage

Enabled offline migration on KVM from non-managed storage to managed storage and vice versa

Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)

Added support to download (extract) a managed-storage volume to a QCOW2 file

When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.

Included support for the KVM auto-convergence feature

The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)

On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.

Added a new Global Setting: kvm.storage.live.migration.wait

For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)

Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.

Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot

Added an SIOC API plug-in to support VMware SIOC

When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.

Added the ability to revert a volume to a snapshot

Enabled cluster-scoped managed storage

Added support for VMware dynamic discovery
2018-01-15 00:05:52 +05:30
Abhinandan Prateek 64832fd70a CLOUDSTACK-4757: Support OVA files with multiple disks for templates (#2146)
CloudStack volumes and templates are single virtual disks in the case of the XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM, including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes: attaching an uploaded volume that contains multiple disks to a VM will result in only one VMDK being attached to the VM.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks

This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e., if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created from the other VMDK disks in the OVA file and attached to the instance.
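A hypothetical sketch of that intended behavior (names here are illustrative, not CloudStack's actual API):

```java
// Hypothetical sketch: the first disk in the OVA becomes the ROOT
// volume; every additional VMDK becomes a DATA volume attached to the
// new instance. Class and method names are illustrative.
import java.util.List;

class OvaDeploySketch {
    void deployFromMultiDiskOva(List<String> vmdksInOva, String vmId) {
        for (int i = 0; i < vmdksInOva.size(); i++) {
            String vmdk = vmdksInOva.get(i);
            if (i == 0) {
                createRootVolume(vmId, vmdk);          // first disk -> ROOT
            } else {
                createAndAttachDataVolume(vmId, vmdk); // remaining disks -> DATA
            }
        }
    }

    void createRootVolume(String vmId, String vmdk) { /* hypothetical */ }
    void createAndAttachDataVolume(String vmId, String vmdk) { /* hypothetical */ }
}
```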

Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-10 22:10:41 +05:30
Nicolas Vazquez e86bb41e0e CLOUDSTACK-10146: Bypass Secondary Storage for KVM templates (#2379)
This feature allows using templates and ISOs on KVM while avoiding secondary storage as an intermediate cache. The virtual machine deployment process is enhanced to support bypassed (direct download) registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.

Template and ISO registration:
- When the hypervisor is KVM, a checkbox labeled 'Direct Download' is displayed.
- API methods registerTemplate and registerISO are both extended with the new parameter directdownload.
- On template or ISO registration, no download job is sent to the SSVM agent; CloudStack only persists an entry on template_store_ref indicating that the template or ISO has been marked as 'Direct Download' (bypassing Secondary Storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machines from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check whether the template/ISO location can be reached. Metalinks are also supported by this feature: the metalink is fetched and a URL check is performed on each of its URLs.
- Checksum should be provided as indicated on #2246: {ALGORITHM}CHKSUMHASH
- After the template or ISO is registered, it is displayed in the UI

Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack delegates template downloading to the destination storage pool, via the destination host, through a new pluggable download manager.
The download manager handles template downloading depending on the URL protocol. In the case of HTTP, request headers can be set by the user via vm_template_details. Those details are persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE

In the case of HTTPS, a new API method, uploadTemplateDirectDownloadCertificate, is added to allow the user to import a client certificate into all KVM hosts' keystores before deployment.
After the template or ISO is downloaded to primary storage, the usual entry is persisted on template_spool_ref indicating the mapping between the template/ISO and the storage pool.
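A hypothetical sketch of applying HTTP_HEADER details (persisted as HEADERNAME:HEADERVALUE, per the above) to the outgoing request; the class and method names are illustrative, not the actual download manager API:

```java
// Illustrative only: split each "HEADERNAME:HEADERVALUE" detail and set
// it as a request header on the connection used to fetch the template.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;

class DirectDownloadHeaderSketch {
    HttpURLConnection openWithHeaders(String templateUrl, List<String> headerDetails) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(templateUrl).openConnection();
        for (String detail : headerDetails) {
            int sep = detail.indexOf(':');
            if (sep > 0) {
                conn.setRequestProperty(detail.substring(0, sep), detail.substring(sep + 1));
            }
        }
        return conn;
    }
}
```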
2018-01-09 12:22:18 +05:30
subhash yedugundla 8eca04e1f6 CLOUDSTACK-9572: Snapshot on primary storage not cleaned up after Storage migration (#1740)
Snapshot on primary storage not cleaned up after Storage migration. This happens in the following scenario:

Steps To Reproduce
Create an instance on the local storage of any host
Create a scheduled snapshot of the volume
Wait until ACS has created the snapshot. ACS creates the snapshot on local storage and transfers it to secondary storage, but the latest snapshot stays on local storage. This is as expected.
Migrate the instance to another XenServer host with the ACS UI and Storage Live Migration
The snapshot on the old host's local storage will not be cleaned up and stays there, so local storage fills up with unneeded snapshots.
2018-01-05 11:19:56 +05:30
pavanaravapalli 2adbaeb641 CLOUDSTACK-9932: Snapshot is getting deleted while volume creation from the snapshot is in progress (#2149)
Added validation to check whether any volume(s) are in the Creating state before performing snapshot deletion.
2018-01-04 10:54:23 +05:30
Abhinandan Prateek 391952da5b CLOUDSTACK-9867: VM snapshot on primary storage usage metrics (#2035)
VM snapshot on primary storage usage metrics.
2017-12-28 14:57:10 +05:30
Rafael Weingärtner 7f6ae15972 Remove annotation not needed at cloud-engine-storage-image (#2324)
remove bogus depends-on declaration at spring-engine-storage-snapshot-core-context
2017-11-16 23:11:15 +05:30
Rohit Yadav 3ee8d83621 CLOUDSTACK-10136: Fix RemoteHostEndPoint thread growth
This fixes the following:
- Unchecked thread growth in RemoteHostEndPoint
- Potential NPE while finding the EP for a storage/scope

Unbounded thread growth can be reproduced with the following findings:
- Every unreachable template would produce 6 new threads (in a single
ScheduledExecutorService instance) spaced by 10 seconds
- Every reachable template URL without the template would produce 1 new
thread (and one ScheduledExecutorService instance); it errors out quickly without
causing more thread growth.
- Every valid URL will produce up to 10 threads as the same ep (endpoint
instance) will be reused to query upload/download (async callback)
progress.

Every RemoteHostEndPoint instance creates its own
ScheduledExecutorService instance, which is why, in the jstack dump, we
see several threads that share the prefix RemoteHostEndPoint-{1..10}
(given the pool size is defined as 10, it uses suffixes 1-10).

This fixes the discovered thread leakage with the following notes:
- Instead of a per-instance ScheduledExecutorService, a cached pool is
now used, with `static` scope so it can be re-used by future
RemoteHostEndPoint instances.
- It was not clear why we would want to wait when we already have Answers
returned from the remote EP, so a scheduled/delayed Runnable was not
required at all for processing answers. ScheduledExecutorService was
therefore not really required; moved to ExecutorService instead.
- Another benefit of using a cached pool is that it will shut down
threads that have been idle for 60 seconds, and threads get re-used for
future runnable submissions.
- Caveat: the executor service is still unbounded; however, this method
is used for short jobs that check upload/download progress, which fits
the use case here.
- Refactored CmdRunner to not use/reference objects from the parent class.
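A simplified sketch of the executor change described above (not the exact CloudStack code):

```java
// Before: every RemoteHostEndPoint instance created its own
// ScheduledExecutorService. After: one static cached pool, shared by
// all instances; idle threads are reclaimed after 60 seconds.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class RemoteHostEndPointSketch {
    private static final ExecutorService executor = Executors.newCachedThreadPool();

    void processAnswer(Runnable answerHandler) {
        // Answers are processed as soon as they arrive; no scheduled
        // delay is needed, so a plain ExecutorService suffices.
        executor.submit(answerHandler);
    }
}
```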

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-11-10 22:04:09 +05:30
Rohit Yadav eda3b35bfa CLOUDSTACK-10012: Migrate to Embedded Jetty
- Migrate to embedded Jetty server.
- Improve ServerDaemon implementation.
- Introduce a new server.properties file for easier configuration.
- Have a single /etc/default/cloudstack-management to configure env.
- Reduce shaded jar file size, removing unnecessary dependencies.
- Upgrade to Spring 5.x, upgrade several jar dependencies.
- Do not shade and include mysql-connector; it is used from the classpath instead.
- Upgrade and use bouncycastle as a separate un-shaded jar dependency.
- Remove tomcat related configuration and files.
- Have both embedded UI assets in uber jar and separate webapp directory.
- Refactor systemd and init scripts, cleanup packaging.
- Made cloudstack-setup-databases faster, using `urandom`.
- Remove unmaintained distro packagings.
- Moves creation and usage of the server keystore into the CA manager; this
  removes the need to create/store cloud.jks in the conf folder and
  the db.cloud.keyStorePassphrase in the db.properties file. This also
  removes the need for the --keystore-passphrase in the
  cloudstack-setup-encryption script.
- GZip contents dynamically in embedded Jetty
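For orientation, a minimal embedded-Jetty sketch (Jetty 9 style API); this is not CloudStack's actual ServerDaemon, and the port and paths are placeholders:

```java
// Minimal embedded Jetty server hosting a webapp; real code would read
// the port and context from server.properties.
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class EmbeddedServerSketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);           // placeholder port
        WebAppContext webApp = new WebAppContext();
        webApp.setContextPath("/client");
        webApp.setWar("client.war");                // placeholder webapp path
        server.setHandler(webApp);
        server.start();
        server.join();
    }
}
```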

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-11-03 23:57:25 +05:30
Mike Tutkowski 4c89b5b97a Merge branch 'pr-2081' 2017-10-18 14:25:45 -06:00
Rohit Yadav 0fedbdd7a9 CLOUDSTACK-9998: Prometheus Exporter for CloudStack (#2287)
This implements a CloudStack Prometheus exporter as a plugin that serves
metrics on an HTTP port (a minimal serving sketch follows the metric lists below).

New global settings:

1. prometheus.exporter.enable - (default: false), Enable the prometheus
exporter plugin, management server restart needed.
2. prometheus.exporter.port - (default: 9595), The prometheus exporter
server port.
3. prometheus.exporter.allowed.ips - (default: 127.0.0.1), List of comma
separated prometheus server ips (with no spaces) that should be allowed to
access the URLs.

The following list of metrics is provided per pop (zone) by the exporter:
• Per host:
o CPU cores: used, total
o CPU usage: used, total (in MHz)
o Memory usage: used, total (in MiBs)
o Total VMs running on the host
• CPU cores: allocated (per zone)
• CPU usage: allocated (per zone, in MHz)
• Memory usage: allocated (per zone, in MiBs)
• Hosts: online, offline, total
• VMs: in all states -- starting, running, stopping, stopped, destroyed,
       expunging, migrating, error, unknown
• Volumes: ready, destroyed, total
• Primary Storage Pool: (disk size) used, allocated, unallocated, total (in GiBs)
• Secondary Storage Pool: (disk size) used, allocated, unallocated, total (in GiBs)
• Private IPs: allocated, total
• Public IPs: allocated, total
• Shared Network IPs: allocated, total
• VLANs: allocated, total

Additional metrics for the environment:
• Summed domain (level=1) limit for CPU cores
• Summed domain (level=1) limit for memory/RAM
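A minimal, illustrative-only sketch of serving Prometheus text-format metrics on the exporter port (default 9595 per the settings above), using the JDK's built-in HTTP server; the metric names are hypothetical:

```java
// Serves a static sample in Prometheus exposition format at /metrics.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ExporterSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9595), 0);
        server.createContext("/metrics", exchange -> {
            String body = "cloudstack_hosts_total{zone=\"zone1\"} 4\n"
                        + "cloudstack_cpu_usage_mhz{zone=\"zone1\",state=\"used\"} 1234\n";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}
```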

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-10-11 17:24:22 +05:30
Harika Punna 6bb0ca2f85 This feature separates snapshot creation on primary storage from its backup on secondary storage.
As part of this, a new optional parameter is added to CreateSnapshotCmd, which separates the creation and the backup.

More details in the FS-
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Separate+creation+and+backup+operations+for+a+volume+snapshot
2017-10-04 14:39:03 +05:30
niteshsarda 74fe9e386c CLOUDSTACK-10004 : On deletion, Vmware volume snapshots are left behind with message 'the snapshot has child, can't delete it on the storage' (#2188)
Snapshots are not deleted, resulting in unexpected storage consumption in the case of VMware.

Steps to reproduce this issue :

In a VMware setup, create a snapshot of a volume, say Snap1.
After successful creation of snapshot Snap1, create a new snapshot of the same volume, say Snap2.
While Snap2 is in the BackingUp state, delete Snap1.
Snap1 will disappear from the Web UI, but when we check secondary storage, files associated with Snap1 still persist even after the cleanup job is performed.
In the snapshot_store_ref table in the DB, Snap1 will be in the Ready state instead of Destroyed.
Also, in the snapshots table, the status of Snap1 will be Destroyed, but the removed column will be null and will never change to the date of snapshot removal.
Fix for this issue :

In VMware, a snapshot chain is not maintained; instead, a full snapshot is taken every time.
So it makes sense not to assign a parent snapshot id to the snapshot. This way, every snapshot is independent and can be deleted successfully whenever required.
2017-09-01 11:04:48 +02:00
Daan Hoogland 693d63e7c4 CE-110 remove duplicate-unused functionality 2017-08-25 08:57:51 +02:00
Rajani Karuturi 4bc7c270fa Updating pom.xml version numbers for release 4.11.0.0-SNAPSHOT
Signed-off-by: Rajani Karuturi <rajanikaruturi@gmail.com>
2017-07-12 12:09:38 +05:30
Rajani Karuturi 9d2893d44a Updating pom.xml version numbers for release 4.10.0.0
Signed-off-by: Rajani Karuturi <rajanikaruturi@gmail.com>
2017-07-03 10:06:43 +05:30
Mike Tutkowski ccb68e13a7
Fix for an issue introduced in this commit: 336df84f17
(cherry picked from commit c57f556966)

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-06-15 16:28:52 +05:30
Rajani Karuturi 3ddac36d20 Merge pull request #1867 from anshul1886/CLOUDSTACK-9706
CLOUDSTACK-9706: Added snapshots cleanup in start and storage GC thre…
2017-06-06 15:36:23 +05:30
Rajani Karuturi 1bd66cb03e Merge pull request #2072 from Accelerite/CLOUDSTACK-9895_ParallelVolumes
CLOUDSTACK-9895 : Added support for parallel volume(s) creation from a volume snapshot
2017-05-31 14:05:05 +05:30
Pavan Kumar Aravapalli 502f813370 CLOUDSTACK-9895 : Added support for parallel volume(s) creation from a volume snapshot 2017-05-31 11:27:30 +05:30
Rajani Karuturi 0fefc207f0 Merge pull request #1842 from anshul1886/CLOUDSTACK-9686
CLOUDSTACK-9686: Fixed multiple entries for builtin template in template
2017-05-17 10:39:45 +05:30
Rajani Karuturi 86eb5683dd Merge pull request #1840 from anshul1886/CLOUDSTACK-9685
CLOUDSTACK-9685: delete snapshot on primary associated with a volume …
2017-05-16 12:42:48 +05:30
Rajani Karuturi 503c803ba0 Merge pull request #803 from anshul1886/CLOUDSTACK-8833
CLOUDSTACK-8833: Fixed generating a URL and migrating a volume to another storage resulting in two entries in the UI and listVolume not working for that volume
2017-05-08 10:14:02 +05:30
Rajani Karuturi 57628b2dd0 Merge pull request #2030 from shapeblue/snapshot-housekeeping
CLOUDSTACK-9864 cleanup stale worker VMs after job expiry time
2017-05-02 11:08:50 +05:30
Daan Hoogland 70ef0788c9 CLOUDSTACK-9408: Fix download urls in sql and scripts
This fixes the agreed-upon URL on download.cloudstack.org in various
sql files and misc scripts.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-04-20 12:33:33 +05:30
Daan Hoogland 4fd72917cd CE-113 removed TODOs 2017-04-18 18:11:00 +02:00
Daan Hoogland c689d4a696 CE-113 trace logging and rethrow instead of nesting CloudRuntimeException 2017-04-18 18:11:00 +02:00
Rajani Karuturi 525c45c1e5 Merge pull request #1994 from nvazquez/CLOUDSTACK-9827
CLOUDSTACK-9827: Storage tags stored in multiple places. Issue description: https://issues.apache.org/jira/browse/CLOUDSTACK-9827

### Fixes
- Create Primary Storage: Persist tags into `storage_pool_tags` instead of `storage_pool_details`
- List Storage Tags: Queries `storage_pool_tags` table instead of `storage_tag_view`
- Find Storage Pools by Tags using proper DAO
- Remove storage tags after deleting Primary Storage
- Remove unused `StorageTagDao`, `StorageTagDaoImpl`, `StorageTagVO` and `storage_tag_view`

* pr/1994:
  CLOUDSTACK-9827: Storage tags stored in multiple places

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-03-28 11:09:56 +05:30
nvazquez edf0e2b26f CLOUDSTACK-9827: Storage tags stored in multiple places 2017-03-24 13:37:04 -03:00
Anshul Gangwar c68931fc64 CLOUDSTACK-9706: Added snapshots cleanup in the start and storage GC threads for snapshots that failed to be cleaned up during the DeleteSnapshot command 2017-03-17 17:40:55 +05:30
nvazquez c66df6e11f Fix for test failure 2017-03-02 16:14:45 -03:00
Anshul Gangwar 42b89278e9 CLOUDSTACK-8833: Fixed generating a URL and migrating a volume to another storage resulting in two entries in the UI and listVolume not working for that volume
Update the volume id in the volume_store_ref table to the newly created volume for migration
2017-02-28 17:55:42 +05:30
Rajani Karuturi 6a18cdd6ef Merge pull request #1825 from Accelerite/CLOUDSTACK-9660
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests

NPE is seen when the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes will be handled as part of VM destroy.
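A simplified sketch of that division of labor, with illustrative types:

```java
// The storage cleanup thread skips ROOT volumes, leaving them to the
// VM-destroy path so the two threads never race on the same volume.
import java.util.List;

class StorageCleanupSketch {
    enum VolumeType { ROOT, DATADISK }

    static class Vol {
        final long id;
        final VolumeType type;
        Vol(long id, VolumeType type) { this.id = id; this.type = type; }
    }

    void cleanUp(List<Vol> destroyedVolumes) {
        for (Vol v : destroyedVolumes) {
            if (v.type == VolumeType.ROOT) {
                continue; // expunged by the VM-destroy path instead
            }
            expunge(v);
        }
    }

    void expunge(Vol v) { /* remove the volume from primary storage */ }
}
```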

* pr/1825:
  CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-02-28 06:00:02 +05:30
Rajani Karuturi 25f1552e37 Merge release branch 4.9 to master
This closes #1644

* 4.9:
  CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration
2017-02-08 13:43:02 +05:30
Rajani Karuturi c101817d45 Merge pull request #1697 from myENA/feature/49_observe_snapshot_backup_rightafter
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable

Unhides snapshot.backup.rightafter from global configuration

If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage.

This is the same as PR #1644 applied to 4.9, as per @jburwell

* pr/1697:
  CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-02-08 12:34:17 +05:30
nvazquez 6ce6cf67f0 CLOUDSTACK-9738: [Vmware] Optimize vm expunge process for instances with vm snapshots 2017-02-06 23:39:01 -03:00
Rajani Karuturi 7233ac37cd Merge pull request #977 from ustcweizhou/vm-snapshot
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM

* pr/977:
  Fixes for testing VM Snapshots on KVM. Related to PR 977
  CLOUDSTACK-8746: vm snapshot implementation for KVM

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-01-31 05:58:56 +05:30
Rajani Karuturi 4721c53ea0 Merge pull request #1749 from mike-tutkowski/archived_snapshots
CLOUDSTACK-9619: Updates for SAN-assisted snapshots. This PR is to address a few issues in #1600 (which was recently merged to master for 4.10).

In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one such scenario, the source snapshot in question is coming from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).

This usually worked in the regression tests due to a bit of "luck": We retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage that matches the ID (which is the ID of a secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today because I had removed that primary storage), then a NullPointerException is thrown.

I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.

While fixing that, I noticed a couple more problems:

1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).

I have corrected those issues, as well.

I then came across one more problem:
         When using a SAN snapshot and copying it to secondary storage or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code, but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).

I went ahead and changed that, as well.

JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619

* pr/1749:
  CLOUDSTACK-9619: Updates for SAN-assisted snapshots

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-01-27 05:35:06 +05:30
Wei Zhou a2428508e2 CLOUDSTACK-8746: vm snapshot implementation for KVM
(1) add support to create/delete/revert VM snapshots on running VMs with the QCOW2 format
(2) add a new API to create a volume snapshot from a VM snapshot
(3) delete metadata of VM snapshots before stopping/migrating, and recover VM snapshots after starting/migrating
(4) enable deleting a VM snapshot on a stopped VM or when the VM snapshot is not listed in the QCOW2 image
(5) enable smoke tests for VM snapshots on KVM
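A hedged sketch of what creating a VM snapshot of a running KVM domain can look like through the org.libvirt Java binding; the domain name and snapshot XML are placeholders, and the actual implementation handles much more (metadata handling, revert, QCOW2 checks):

```java
// Creates a snapshot of a running domain via libvirt's Java binding.
import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.DomainSnapshot;

public class VmSnapshotSketch {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system");
        Domain domain = conn.domainLookupByName("i-2-10-VM"); // placeholder VM name
        String snapshotXml = "<domainsnapshot><name>vmsnap-1</name></domainsnapshot>";
        DomainSnapshot snap = domain.snapshotCreateXML(snapshotXml);
        // A revert would later use domain.revertToSnapshot(snap).
    }
}
```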
2017-01-24 21:47:30 +01:00
Rohit Yadav 8b6e96bca9 Updating pom.xml version numbers for release 4.9.3.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-01-06 10:40:15 +05:30
Rohit Yadav dfc39c1f08 Updating pom.xml version numbers for release 4.9.2.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-01-03 12:28:47 +05:30
Anshul Gangwar 929595c114 CLOUDSTACK-9686: Fixed multiple entries for builtin template in template
store ref table, because of which the builtin template was never downloaded completely.
In the handleSysTemplateDownload method, a template entry is created only if none exists;
handleTemplateSync takes care of the other scenario.
2016-12-21 10:08:33 +05:30
Anshul Gangwar 336df84f17 CLOUDSTACK-9685: delete snapshot on primary associated with a volume when that volume is deleted
as that snapshot will never be used again and would otherwise fill up primary storage
2016-12-20 15:21:04 +05:30
Rohit Yadav 0dce1c50c1 CLOUDSTACK-9456: Update Spring version in maven poms
- Bump spring-framework version to 4.x and Jetty to a version that runs with JDK8
- Bump servlet dependency version
- Migrate Spring XMLs to version 4, fixing schema locations that are 3.0
  dependent in various XMLs.
- Fix failing tests due to spring upgrade
  (Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
    * Fix test DeploymentPlanningManagerImplTest
    * Fix GloboDNS test

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-16 21:21:20 +05:30
Rohit Yadav 5e19e64f2f Updating pom.xml version numbers for release 4.9.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-16 20:48:16 +05:30
Rohit Yadav af2679959b Updating pom.xml version numbers for release 4.9.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-10 08:38:03 +05:30
Koushik Das d6b41d9ac2 CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
NPE is seen when the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes will be handled as part of VM destroy.
2016-12-09 15:49:39 +05:30
Rohit Yadav 40d12ad40e Merge pull request #1772 from syed/template-sync-fix
CLOUDSTACK-9627 Fix template sync for region store. When using a region store like Swift or S3 as secondary storage,
the `zoneId` can be null. This causes an exception when we try
to convert it to a `long`. This fix guards against that.

Before this fix, if you restart the management server, all the templates
would change to "NOT READY" because the code which syncs the NFS cache
and the object store crashes due to the above mentioned issue.
This PR fixes that.
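A minimal illustration of the underlying bug class, the auto-unboxing NPE, and the guard (simplified; the sentinel value is a stand-in, not necessarily what the fix uses):

```java
// Unboxing a null Long into a long throws a NullPointerException;
// region-wide stores like Swift/S3 have no zone, so zoneId can be null.
public class ZoneIdGuardSketch {
    static long toZoneId(Long zoneId) {
        return zoneId == null ? -1L : zoneId; // guarded instead of "return zoneId;"
    }

    public static void main(String[] args) {
        System.out.println(toZoneId(null)); // -1 instead of an NPE
    }
}
```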

* pr/1772:
  CLOUDSTACK-9627:Fix template sync for region store

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-08 00:29:46 +05:30
Mike Tutkowski 06806a6523 CLOUDSTACK-9619: Updates for SAN-assisted snapshots 2016-12-06 17:32:56 -07:00
Syed 5d274bba51 CLOUDSTACK-9627:Fix template sync for region store 2016-11-29 11:36:06 -05:00
Rohit Yadav dbe57c3e50
Merge branch '4.9' 2016-11-28 16:47:44 +05:30
Syed f46651e672 Support Backup of Snapshots for Managed Storage
This PR adds the ability to pass a new parameter, locationType,
    to the "createSnapshot" API command. Depending on the locationType,
    we decide where the snapshot should go in the case of managed storage.

    There are two possible values for the locationType param

    1) `Standard`: The standard operation for managed storage is to
    keep the snapshot on the device. For non-managed storage, this will
    be to upload it to secondary storage. This option will be the
    default.

    2) `Archive`: Applicable only to managed storage. This will
    keep the snapshot on the secondary storage. For non-managed
    storage, this will result in an error.

    The reason for implementing this feature is to avoid a single
    point of failure for primary storage. Right now in case of managed
    storage, if the primary storage goes down, there is no easy way
    to recover data as all snapshots are also stored on the primary.
    This feature allows us to mitigate that risk.
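    A hedged sketch of the two values and the validation rule described above; the enum and method names are illustrative, not necessarily the actual classes:

```java
// Illustrative enum for the locationType parameter of createSnapshot.
public enum SnapshotLocationType {
    STANDARD, // managed storage: keep on the device; non-managed: secondary storage (default)
    ARCHIVE;  // managed storage only: keep on secondary storage; error otherwise

    public static void validate(boolean managedStorage, SnapshotLocationType type) {
        if (type == ARCHIVE && !managedStorage) {
            throw new IllegalArgumentException("locationType=Archive applies only to managed storage");
        }
    }
}
```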
2016-10-30 23:19:58 -06:00
Wei Zhou 784c33585f CLOUDSTACK-9538: Fix failure in deleting a snapshot from RBD primary storage if the VM has been removed 2016-10-17 08:25:34 +02:00
Nathan Johnson 1a55a93f04 CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable
Unhides snapshot.backup.rightafter from global configuration

If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage
2016-09-30 09:39:27 -05:00
nvazquez 2e77496601 CLOUDSTACK-9438: Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI 2016-09-28 08:51:37 -07:00
nvazquez bb275a5ad1 CLOUDSTACK-9422: Granular VMware vms creation as full clones on HV 2016-09-13 09:59:04 -07:00
Rohit Yadav 9555492b4d Merge branch '4.9' 2016-08-23 14:16:53 +05:30
Rohit Yadav f13c224da1 Updating pom.xml version numbers for release 4.9.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-08-19 13:53:39 +05:30
Will Stevens 62aa3b2bfa Updating pom.xml version numbers for release 4.10.0-SNAPSHOT
Signed-off-by: Will Stevens <williamstevens@gmail.com>
2016-07-29 10:11:34 -04:00
Will Stevens 227ff3884d Updating pom.xml version numbers for release 4.9.0
Signed-off-by: Will Stevens <williamstevens@gmail.com>
2016-07-25 16:56:04 -04:00
Will Stevens 121b3d6403 Merge pull request #1567 from exoscale/CLOUDSTACK-9238
CLOUDSTACK-9238: Fix URL length to 2048 for all url fields in VO. I will update the PR to add max field length in the API commands too

* pr/1567:
  API: update url field max length
  not needed on host table
  Fix URL length to 2048 for all url fields in VO

Signed-off-by: Will Stevens <williamstevens@gmail.com>
2016-05-27 15:20:22 -04:00
Marc-Aurèle Brothier a59ee03fd7 Fix URL length to 2048 for all url fields in VO 2016-05-27 08:16:05 +02:00