Commit Graph

444 Commits

Author SHA1 Message Date
dahn 90a0ee0b6c
fix pseudo random behaviour in pool selection (#6307)
* refactor and log trace

* trace logs

* shuffle pools with a real randomiser (see the sketch after this commit entry)

* single retrieval of async job context

* some review comments addressed

* Apply suggestions from code review

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>

* log formatting

* integration test for distribution of volumes over storages

* move test to smoke tests

* imports

* sonarcloud issue # AYCOmVntKzsfKlhz0HDh

* spelling fixes

* review comments

* review comments

* sonarcloud issues

* unittest

* import

* Update AbstractStoragePoolAllocatorTest.java

Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
2022-06-10 08:06:23 -03:00
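
The fix above replaces the allocator's order-dependent, pseudo-random pool pick with a genuine shuffle of the candidate list. A minimal sketch of that idea, assuming a hypothetical pool-ID list rather than the actual allocator code:

```java
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch, not the actual allocator code: shuffle the candidate pools with a
// real randomiser instead of relying on pseudo-random iteration order, so volumes
// end up evenly distributed across primary storages.
public class PoolShuffleSketch {
    private static final SecureRandom RANDOM = new SecureRandom();

    // "poolIds" is a hypothetical stand-in for the allocator's candidate StoragePool list.
    static List<Long> shuffleCandidates(List<Long> poolIds) {
        List<Long> shuffled = new ArrayList<>(poolIds);
        Collections.shuffle(shuffled, RANDOM); // uniform random permutation of the candidates
        return shuffled;
    }

    public static void main(String[] args) {
        System.out.println(shuffleCandidates(List.of(1L, 2L, 3L, 4L)));
    }
}
```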
Daniel Augusto Veronezi Salvador 39fad2d9d7
KVM disk-only snapshots of volumes instead of taking the VM's full snapshot and extracting disks (#5297)
* Refactor create volume snapshot with running VM

* Refactor create volume snapshot with stopped VM

* Refactor create volume from snapshot

* Refactor create template from snapshot

* Refactor volume migration (migrateVolume / migrateVirtualMachineWithVolume)

* Refactor snapshot deletion

* Refactor snapshot reversion

* Adjust and fix cherry-pick conflicts

* Remove diffuse tests

* Add validation to pass the '--delete' flag to the 'virsh blockcommit' command only if the libvirt version is 6.0.0 or higher (see the version-gate sketch after this commit entry)

* Expunge temporary snapshot only if template creation is from snapshot

* Extract strings to constant

* Remove unused imports

* Fix error on revert backed up snapshot

* Change the method's return type to void as the return value is not used

* Rename method in SnapshotHelper

* Fix folder creation when using SharedMountPoint pool

* Remove static import

* Remove unused method

* Cover taking a snapshot on CentOS 7

* Handle the right snapshot flag according to the QEMU version

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-04-12 08:14:27 -03:00
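
One of the bullets above gates the '--delete' flag of 'virsh blockcommit' on the libvirt version. A hedged sketch of such a version gate, assuming libvirt's usual major*1,000,000 + minor*1,000 + release version encoding; the helper and option strings are illustrative, not the actual CloudStack code:

```java
// Illustrative version gate only (helper names are hypothetical, not CloudStack's code):
// append "--delete" to a 'virsh blockcommit' invocation only when the libvirt version
// is 6.0.0 or higher, as the commit above describes.
public class BlockCommitFlagSketch {
    // libvirt reports its version as major * 1_000_000 + minor * 1_000 + release
    static final long LIBVIRT_6_0_0 = 6_000_000L;

    static String buildBlockCommitOptions(long libvirtVersion, String baseOptions) {
        return libvirtVersion >= LIBVIRT_6_0_0 ? baseOptions + " --delete" : baseOptions;
    }

    public static void main(String[] args) {
        System.out.println(buildBlockCommitOptions(5_010_000L, "--pivot --active")); // flag omitted
        System.out.println(buildBlockCommitOptions(6_000_000L, "--pivot --active")); // flag appended
    }
}
```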
slavkap 2b075ed39e
Storage-based Snapshots for KVM VMs (#3724)
* VM snapshots of a running KVM instance using storage provider plugins for disk snapshots

Added a new virtual machine snapshot strategy that uses storage provider plugins to take/revert/delete snapshots.
You can take a VM snapshot without VM memory on a KVM instance, using the storage providers' implementations for disk snapshots.
Revert and delete are also added as functionality, along with Freeze/Thaw commands for KVM instances.
The snapshots are consistent because the VM is frozen during the snapshot (see the sketch after this commit entry). Backup to secondary storage is executed after
the VM is thawed, and only if it is enabled in the global settings.

* Removed duplicated functionality

Set a few methods in DefaultVMSnapshotStrategy to protected to reuse them
without duplicating the code. Removed code that is actually not needed.

* Added requirements in global setting kvm.vmstoragesnapshot.enabled

Added more information to the kvm.vmstoragesnapshot.enabled global setting:
when the option is enabled, it requires:
- QEMU version 1.6+
- qemu-guest-agent installed on the guest virtual machine

* Added Apache license header

* Removed commented code

* If "kvm.vmstoragesnapshot.enabled" is null should be considered as false

* removed unused imports, replaced default template

Removed unused imports which were causing failures and replaced the default template with
CentOS 8.

* "kvm.vmstoragesnapshot.enabled" set to dynamic

* Getting status of freeze/thaw commands not the return code

Checks the status to determine whether the freeze/thaw of the guest VM succeeded, rather than
looking at the return code. Code refactoring.

* removed "CreatingKVM" VMsnapshot state and events related to it

* renamed AllocatedKVM to AllocatedVM

the states should not be associated with a hypervisor type

* log the result of the "drive-backup" command

* Check which VM snapshot strategy could handle the vm snapshots

Gets the best-matching VM snapshot strategy that can handle the VM
snapshots on KVM.
Other storage plugins can integrate with this functionality to support group snapshots.

* Added poolId in canHandle for KVM hypervisors

Added poolId to the canHandle method, used to check if all volumes are on
the same PowerFlex storage pool.

* skip smoke tests if the hypervisor's OS type is CentOS

This PR relies on functionality included in qemu-kvm-ev, which
does not come by default on CentOS. The smoke tests will be skipped if
the hypervisor OS is CentOS.

* Added missing import in smoke test

* Suggested change to use `org.apache.commons.lang.StringUtils.isNotBlank`

* Fix getting device on Ubuntu

On Ubuntu the device isn't provided, so we have to get it from the
node-name parameter. The drive-backup command (on Ubuntu) also needs a job-id, which
is the value of node-name (this extra parameter works on Ubuntu and CentOS as well).

* Removed new snapshot states and functionality for NFS

* throw CloudRuntimeException

provide a proper error message when deleting a VM snapshot fails

* exclude GROUP snapshots when listing snapshots

* Skip tests if there is a pool with NFS/local storage

* address comments
2022-04-07 21:42:12 -03:00
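
The commit above keeps snapshots consistent by freezing the guest filesystem (via qemu-guest-agent) before the storage provider takes the disk snapshot, then thawing it and backing up afterwards. A rough illustration of that freeze/snapshot/thaw ordering using plain virsh guest-agent commands; this is only a sketch of the sequence, not how CloudStack drives it:

```java
import java.io.IOException;

// Rough illustration of the freeze -> disk snapshot -> thaw ordering described above,
// using the virsh guest-agent commands (domfsfreeze/domfsthaw). CloudStack's actual
// strategy issues Freeze/Thaw commands through the KVM agent rather than shelling out.
public class FreezeThawSketch {

    static void run(String... cmd) throws IOException, InterruptedException {
        int rc = new ProcessBuilder(cmd).inheritIO().start().waitFor();
        if (rc != 0) {
            throw new IOException("command failed (" + rc + "): " + String.join(" ", cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        String domain = args[0];                 // VM name, e.g. "i-2-10-VM"
        run("virsh", "domfsfreeze", domain);     // requires qemu-guest-agent inside the guest
        try {
            takeDiskSnapshot(domain);            // storage-provider snapshot happens while frozen
        } finally {
            run("virsh", "domfsthaw", domain);   // always thaw, even if the snapshot fails
        }
    }

    static void takeDiskSnapshot(String domain) {
        // placeholder for the storage plugin's disk snapshot call
        System.out.println("taking disk snapshot of " + domain);
    }
}
```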
Nicolas Vazquez 7f0a322b7d
[Vmware] Prevent NPE on template registration if guest OS is removed (#5980) 2022-02-11 07:36:59 -03:00
Suresh Kumar Anaparti bf70566c2c
Merge branch '4.16' into main 2022-02-02 17:30:21 +05:30
Nicolas Vazquez 3e92a63155
[XenServer/XCP-ng] Pass the image store NFS version on storage commands (#5886)
* Add NFS version to mount command

* Remove extra line

* Extend NFS version to mount secondary storage

* Unused import

* Refactor NFS version to be granular

* Make use of the ConfigKey on the NFS version setting value
2022-01-31 12:21:13 +05:30
Harikrishna f15cab16da
server: Decouple service (compute) offering and disk offering (#5008)
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, when a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Creating a compute offering also takes a few disk-related parameters, which in any case only end up in the corresponding disk offering. I think this design was initially made to address the compute offering for the root volume created from a template. Changing the offering of a volume is also tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless, should consider new storage tags and a new size, and should place the volume in the appropriate state as defined in the disk offering.

more details are mentioned here https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring

* Schema changes and disk offering column change from "type" to "compute_only"

* Few more changes

* Decoupled service offering and disk offering

* Remove diskofferingid from vminstance VO

* Decouple service offering and disk offering states

* diskoffering getsize() is only for strict disk offerings

* Fix deployVM flow

* Added new API params to compute offering creation

* Add diskofferingstrictness to serviceoffering vo under quota

* Added overrideDiskOfferingId parameter in deploy VM API which will override disk offering for the root disk both in template and ISO case

Added diskSizeStrictness parameter in create Disk offering API which will decide whether to restrict resize or disk offering change of a volume

* Fix User vm response to show proper service offering and disk offerings

* Added disk size strictness in disk offering response

* Added disk offering strictness to the service offering response

* Remove comments

* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form

* Added diskoffering details to the service offering response

* Added UI changes in deployvm wizard to accept override disk offering id

* Fix delete compute offering

* Fix VM deployment from custom service offering

* Move uselocalstorage column access from service offering to disk offering

* UI: Separated compute and disk related parameters in add compute offering wizard, also added association to disk offering

* Fixed diskoffering automatic selection on add compute offering wizard

* UI: move compute only toggle button outside the box in add compute offering wizard

* Added volumeId parameter to listDiskOfferings API and the disksizestrictness flag of the current disk offering is honored while list disk offerings

* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration

* Added disk offering change checks during resize volume operation

* Added new changeOfferingForVolume API and corresponding changes

* Add UI form for changeOfferingForVolume API

* Fix UI conflicts

* Fix service offering usage as disk offering

* Fix unit test failures

* fix user_vm_view

* Addressed review comments

* Fixed service_offering_view

* Fix service offering edit flow

* Fix service offering constructor to address custom offering

* Fix domain_router_view to get proper service offering id

* Removed unused import

* Addressed review comments and fixed update service offering flow with storage tags

* Added marvin test cases for checking disk offering strictness

* review comments addressed

* Remove system_use column from disk offering join

* update volume_view to update system_use column from service offering and not disk offering

* Fix changeOfferingForVolume API for custom disk offering

* Fix global setting implementation

* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view

* Changes for override root disk offering in deployvm wizard in case of custom offering

* Fix a unit test case

* Fixed recent unit test cases with new serviceofferingvo constructor

* Fix unit test in VolumeApiServiceImpl

* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow

* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering

* Fix smoke test failures

* Added tool tip for migrate volume UI form

* Address review comments and fix UI form of deploy VM in case of ISO.

* Fixed resize volume UI form for data disk

* UI changes to disable override root disk size when override root disk offering is enabled

* UI fix in deploy vm wizard

* Fix listdiskoffering after rebasing with main

* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list

* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form

* Fix false response on updateDiskOffering API

* Added search field for changeofferingforvolume UI form

* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Removed DB changes from 4.16 upgrade file

* Resolving merge conflicts with main 4.17

* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.

* UI: Added automigrate checkbox in scale VM form

* Added since attributes to new API params

* Added shrinkOK parameter to changeofferingforvolume API

* Added shrinkOk param to UI in changeOfferingforVolume form

* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form

* Removed old foreign key constraint on IDs of service offering and disk offering

* Allow resize and automigrate of root volume if required in all cases of service offering change

* Allow only resize to higher disk size from UI

* Fixing vue syntax error

* Make UI changes to provide a root disk size box when the linked disk offering is custom

* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms

* Fix resize volume operation to update the VM settings

* Fix migratevolume form to pick selected storage pool id in list diskofferings API
2022-01-27 15:08:42 +05:30
Daniel Augusto Veronezi Salvador d26ce157db
Fix camel case (#5898)
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-01-26 19:20:18 -03:00
Suresh Kumar Anaparti 982eef202f
Merge branch '4.16' into main 2022-01-26 12:21:24 +05:30
Nicolas Vazquez 84f5768e64
[VMware][Deploy-as-is] OVF properties not importing when template is uploaded from local (#5861)
* Fix ova upload missing details

* Refactor and cleanup

* Unused import
2022-01-26 11:28:52 +05:30
Daniel Augusto Veronezi Salvador b4aabadc4d
Replace string libraries with org.apache.commons.lang3.StringUtils (#5386)
* Replace google lib for lang3 and adjust methods calls

* Replace string libs by lang3

* Prohibit others string libs

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-11-18 13:41:48 +05:30
Pearl Dsilva 93150f465b
api: Fix list templates when no secondary stores present (#5468) 2021-09-20 14:07:47 -03:00
Nicolas Vazquez 3ca3843b02
[Vmware] Fix for ovf templates with prefix (#5448)
* [Vmware] Fix for ovf templates with prefix

* Support multiple hardware versions
2021-09-16 16:16:41 -03:00
Nicolas Vazquez 413d10dd81
server: Extend the Annotations framework (#5103)
* Extend addAnnotation and listAnnotations APIs

* Allow users to add, list and remove comments

* Add adminsonly UI and allow admins or owners to remove comments

* New annotations tab

* In progress: new comments section

* Address review comments

* Fix

* Fix annotationfilter and comments section

* Add keyword and delete action

* Fix and rename annotations tab

* Update annotation visibility API and update comments table accordingly

* Allow users to see all the comments for their owned resources

* Extend comments for volumes and snapshots

* Extend comments to multiple entities

* Add uuid to ssh keypairs

* SSH keypair UI refactor

* Extend comments to the infrastructure entities

* Add missing entities

* Fix upgrade version for ssh keypairs

* Fix typo on DB upgrade schema

* Fix annotations table columns when there is no data

* Extend the list view of items to show whether they have comments

* Remove extra test

* Add annotation permissions

* Address review comments

* Extend marvin tests for annotations

* updating ui stuff

* addition to toggle visibility

* Fix pagination on comments section

* Extend to kubernetes clusters

* Fixes after last review

* Change default value for adminsonly column

* Remove the required field for the annotationfilter parameter

* Small fixes on visibility and other fixes

* Cleanup to reduce files changed

* Rollback extra line

* Address review comments

* Fix cleanup error on smoke test

* Fix sending incorrect parameter to checkPermissions method

* Add check domain access for the calling account for domain networks

* Only display the annotations icon if there are comments the user can see

* Simply change the Save button label to Submit

* Change order of the Tools menu to prevent users getting a 404 error when clicking the text instead of expanding

* Remove comments when removing entities

* Address review comments on marvin tests

* Allow users to list annotations for an entity ID

* Allow users to see all comments for allowed entities

* Fix search filters

* Remove username from search filter

* Add pagination to the annotations tab

* Display username for user comments

* Fix add permissions for domain and resource admins

* Fix for domain admins

* Trivial but important UI fix

* Replace pagination for annotations tab

* Add confirmation for delete comment

* Lint warnings

* Fix reduced list as domain admin

* Fix display remove comment button for non admins

* Improve display remove action button

* Remove unused parameter on groupShow

* Include a clock icon to the all comments filter except for root admin

* Move cleanup SQL to the correct file after rebasing main

Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
2021-09-08 10:14:06 +05:30
Abhishek Kumar 56f4da6dce Merge remote-tracking branch 'apache/4.15' into main 2021-09-02 16:13:33 +05:30
Pearl Dsilva 557dc5e1a0
api: List details of template download state for stores corresponding to a zone (#5379)
* api: List details of template download state for stores corresponding to a zone

* fix test
2021-09-02 10:58:58 +05:30
Rohit Yadav a1a3aff2b5 Merge remote-tracking branch 'origin/4.15' into main 2021-08-31 14:29:30 +05:30
slavkap 961e85eb60
Fix of creating volumes from snapshots without backup to secondary storage (#5349)
* Fix of creating volumes from snapshots without backup

When a few snapshots are created only on primary storage and you try to create
a volume or a template from the snapshot, only the first operation is
successful. This is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it didn't cover the functionality to create a volume from a snapshot
which is kept only on Ceph.

* Address review
2021-08-31 12:46:57 +05:30
Pearl Dsilva ea7d3b34d1
Cleanup volume information from db when deleted (#4551)
* Cleanup volume information from db when deleted

* reuse search builder

* revert change

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-08-09 14:21:07 +05:30
Daniel Augusto Veronezi Salvador 1389862c22
engine/storage: Fix regression on create volume from snapshot (#5282)
* Fix regression on create volume from snapshot

* Log hidden exception

* Revert "Log hidden exception"

This reverts commit 70e655687f.

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-08-09 13:37:10 +05:30
Spaceman1984 96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick-provisioning-capable pools if available (see the sketch after this commit entry)

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dc.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
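
One change above orders candidate storage pools so that pools capable of thick provisioning are preferred when available. A small sketch of such an ordering, with a hypothetical Pool record standing in for the allocator's real pool objects:

```java
import java.util.Comparator;
import java.util.List;

// Small sketch of the "prefer thick-provisioning-capable pools" ordering; the Pool record
// and its capability flag are hypothetical stand-ins for the allocator's real pool objects.
public class PoolOrderingSketch {
    record Pool(String name, boolean supportsThickProvisioning) {}

    static List<Pool> orderPools(List<Pool> pools) {
        return pools.stream()
                // natural Boolean order puts false first, so reverse it to put thick-capable pools first
                .sorted(Comparator.comparing(Pool::supportsThickProvisioning).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Pool> pools = List.of(new Pool("nfs-1", false), new Pool("vsan-1", true));
        System.out.println(orderPools(pools)); // vsan-1 (thick-capable) is listed first
    }
}
```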
Abhishek Kumar 50a16979c5
refactor: migrate vm with storage (#5030)
* refactor: migrate with storage host capability check

Refactors Boolean HypervisorCapabilitiesDao::isStorageMotionSupported to boolean HypervisorCapabilitiesDao::isStorageMotionSupported for simplifying callers.
Refactors log messages.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* simplify

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* review comments addressed

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* var rename

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-07-15 12:57:13 +05:30
Rohit Yadav c1a02e1697 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-03-29 16:34:22 +05:30
Wei Zhou b8884efa7f
server: create DB entry for storage pool capacity when create storage pool (#4805)
* server: create DB entry for storage pool capacity when create storage pool

* Revert "server: create DB entry for storage pool capacity when create storage pool"

This reverts commit e790167bfe.

* server: create DB entry for storage pool capacity when create zone-wide storage pools
2021-03-29 16:21:24 +05:30
Rohit Yadav 77290df0d5 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-26 12:09:11 +05:30
Abhishek Kumar 88337bdea4
server: fix finding pools for volume migration (#4693)
While finding pools for volume migration, list the following compatible storages (see the sketch after this commit entry):
- all zone-wide storages of the same hypervisor;
- when the volume is attached to a VM, all storages from the same cluster as that VM;
- for a detached volume, all storages that belong to clusters of the same hypervisor.

Fixes #4692 
Fixes #4400
2021-02-25 22:13:50 +05:30
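
A sketch of the selection rules listed above, assuming hypothetical Pool/Scope types rather than CloudStack's real DAO and model classes:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the selection rules from #4693; the Pool/Scope types and accessors are
// hypothetical, not CloudStack's real DAO or model classes.
public class MigrationPoolFilterSketch {
    enum Scope { ZONE, CLUSTER }
    record Pool(String name, Scope scope, String hypervisor, Long clusterId) {}

    static List<Pool> findPoolsForMigration(List<Pool> pools, String volumeHypervisor, Long attachedVmClusterId) {
        return pools.stream()
                .filter(p -> p.hypervisor().equals(volumeHypervisor))      // same hypervisor only
                .filter(p -> p.scope() == Scope.ZONE                       // zone-wide pools always qualify
                        // attached volume: only cluster pools in the VM's own cluster
                        || (attachedVmClusterId != null && attachedVmClusterId.equals(p.clusterId()))
                        // detached volume: cluster pools from any cluster of the same hypervisor
                        || attachedVmClusterId == null)
                .collect(Collectors.toList());
    }
}
```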
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore

- Check whether the router filesystem is writable before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not

- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires (see the sketch after this commit entry).

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
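
The gateway-client pooling described above keeps one authenticated client per PowerFlex/ScaleIO storage pool and re-authenticates when the session token has expired (8-hour lifetime, 10-minute idle timeout). A simplified sketch of that caching policy; the GatewayClient type here is illustrative, not the plugin's real REST client:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of the "one gateway client per pool, renew on expiry" policy; the
// GatewayClient type here is illustrative, not the plugin's real REST client.
public class GatewayClientCacheSketch {
    static final Duration TOKEN_LIFETIME = Duration.ofHours(8);   // token valid for 8 hours
    static final Duration IDLE_TIMEOUT = Duration.ofMinutes(10);  // unless idle for 10 minutes

    static class GatewayClient {
        final Instant createdAt = Instant.now();
        Instant lastUsedAt = Instant.now();

        boolean isSessionExpired() {
            Instant now = Instant.now();
            return Duration.between(createdAt, now).compareTo(TOKEN_LIFETIME) > 0
                    || Duration.between(lastUsedAt, now).compareTo(IDLE_TIMEOUT) > 0;
        }
    }

    private final Map<Long, GatewayClient> clientsByPoolId = new ConcurrentHashMap<>();

    GatewayClient getClient(long storagePoolId) {
        return clientsByPoolId.compute(storagePoolId, (id, existing) -> {
            if (existing == null || existing.isSessionExpired()) {
                return new GatewayClient();      // re-authenticate: cache a fresh session for this pool
            }
            existing.lastUsedAt = Instant.now(); // record activity so the idle timeout doesn't trip
            return existing;
        });
    }
}
```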
Harikrishna b1ddd7c2e6
vmware: Fix for mapping guest OS type read from OVF to existing guest OS in C… (#4553)
* Fix for mapping guest OS type read from OVF to existing guest OS in CloudStack database  while registering VMware template

* Added unit tests to String Utils methods and updated the code

* Updated the java doc section

* Updated OS description logic to keep a case-insensitive equals match with the guest OS display name
2020-12-23 19:37:21 +05:30
Nicolas Vazquez 4617be4583
vmware: Fix template upload from local (#4555)
Update the guest OS from the OVF file after upload is completed
This PR fixes the template upload from local on VMware

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-12-23 15:13:39 +05:30
Pearl Dsilva e4a504b084
Make global setting non-dynamic (#4505)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-12-01 14:00:35 +05:30
Rakesh 735b6de296
Cleanup download urls when SSVM destroyed (#4078)
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
2020-11-18 14:01:31 +01:00
Spaceman1984 acee15a530
Moved dedicated hosts to the end of the resultset when selecting an e… (#4428) 2020-11-18 12:07:14 +00:00
Pearl Dsilva 1dbb76f64b
Fix: Data migration (#4475)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-11-18 09:45:53 +01:00
nvazquez 94bebe8792 Revert back deploy as is column on templates but keep it as default for new templates 2020-10-19 15:05:57 +05:30
nvazquez 9b51a706db Set deploy-as-is to default on VMware 2020-10-19 15:05:57 +05:30
nvazquez b0d3168e0b Fail template registration when guest OS not found 2020-10-19 15:05:57 +05:30
nvazquez 32d85b0fa2 Display storage on logging when not deploy-as-is and guest OS small refactor 2020-10-19 15:05:57 +05:30
nvazquez 41354227e2 Handle guest OS read from deploy-as-is OVF descriptor 2020-10-19 15:05:57 +05:30
nvazquez edfbed34ad Use network adapter from OVF on deploy-as-is 2020-10-19 15:05:57 +05:30
Harikrishna Patnala 44dc0c6072 Fixed rat failure on new class DeployAsIsHelper.java
Also removed some unused imports during rebase
2020-10-19 15:05:57 +05:30
nvazquez f73830acbb Refactor deploy as is constants 2020-10-19 15:05:57 +05:30
nvazquez bb4ce2118d Add new template and vm deploy as is details table and refactor 2020-10-19 15:05:57 +05:30
nvazquez d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala 19745ea049 Fix serialization of the enable primary datastore maintenance command 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 61dd85876b Fix migrate VM and volume APIs in case of datastore cluster 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 75fb1d91ee Fix adding Datastore clusters and listing 2020-10-19 14:57:15 +05:30
Harikrishna Patnala b4a23ea5f6 Allocation logic to skip the datastore cluster and consider only storage pools inside the datastore cluster 2020-10-19 14:57:15 +05:30
Harikrishna Patnala fb0a96e7fb Check if the datastore is compliant with the storage policy provided in the disk offering.
Added corresponding manager objects from the PBM SDK to do the job.
Made DAO-layer changes to read the storage policy in the disk offering
2020-10-19 14:57:15 +05:30
Pearl Dsilva 0d487fc8c9
support for data migration of incremental snaps on xen (#4395)
* support for handling incremental snaps (on DB entries) on xen

* Addressed comments

* Update NfsSecondaryStorageResource.java

adjusted spacing in comment/log

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-10-18 02:15:10 +05:30
Pearl Dsilva b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- Balanced migration of data objects from a source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
2020-09-17 10:12:10 +05:30