Commit Graph

241 Commits

Author SHA1 Message Date
Daan Hoogland 22da57f922 Merge branch '4.22' 2025-12-22 14:13:50 +01:00
Daan Hoogland 55ab7c5589 Merge branch '4.20' into 4.22 2025-12-22 13:23:37 +01:00
vladimirpetrov b394b5ba74
Fix terms, typos and grammar mistakes in the API, error messages, events, etc. (#7857)
This PR aligns the use of terminology, renaming VM / virtual machine references to 'Instance' and capitalising the terms Templates, Network, Snapshot, User, Account in CloudStack APIs, error and log messages, events, tooltips, etc. Many typos and spelling and grammar mistakes were fixed, and terms like IPv4, VPN, and VPC were properly capitalised. Some error messages were cleaned up for better readability. Test cases expecting specific exception strings were adjusted accordingly.

Here is the wiki page describing the changes in detail:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Object+Naming+and+Title+Case+Convention

---------

Co-authored-by: Manoj Kumar <manojkr.itbhu@gmail.com>
Co-authored-by: Harikrishna <harikrishna.patnala@gmail.com>
2025-12-22 15:18:58 +05:30
dahn c81295439f
removed commented-out code (#11145) 2025-12-08 16:31:48 +01:00
João Jandre 8171d9568c
Block use of internal and external snapshots on KVM (#11039) 2025-11-24 11:39:19 +01:00
Suresh Kumar Anaparti f67b738eb3
Migrate volume improvements, to bypass secondary storage when copying volumes directly between pools is allowed (#11625)
* Migrate volume improvements, to bypass secondary storage when copying volumes directly between pools is allowed

* Bypass secondary storage for volume copy between zone-wide pools and
- local storage on a host in the same zone
- cluster-wide pools in the same zone

* Bypass secondary storage for volumes on a Ceph/RBD pool when the scope permits

* Fix the destination disk format while migrating a volume from Ceph/RBD to NFS, and some code improvements

* unit tests

* Update to suitable disk offering(s) for volume(s) after migrating a VM with volumes when the pool type (shared or local) changes

Currently, migrating a VM with volume(s) bypasses the service and disk offerings of the volumes, as the target pools for migration are specified explicitly, which ignores the offerings. An offering change is required when the pool type (shared or local) changes, mainly
- when a volume on a shared pool is migrated to a local pool
- when a volume on a local pool is migrated to a shared pool

* Return a proper message on volume migration when the target pool and offering type mismatch (one is shared, the other local)

* Consider host scope first during endpoint selection while copying between primary storages

* Update the disk offering count (for the listDiskOfferings API) when removing offerings whose tags mismatch the storage tags
2025-10-09 16:00:46 +05:30
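The direct-copy decision described in the entry above can be pictured with a small, self-contained sketch; the types and the helper below are invented for illustration and are not the actual CloudStack classes:

```java
// Hypothetical sketch: when can a volume copy bypass secondary storage?
enum ScopeType { HOST, CLUSTER, ZONE }

record Pool(long zoneId, ScopeType scope) {}

class DirectCopyCheck {
    // Per the entry above: bypass is limited to a single zone, and needs at
    // least one zone-wide pool so a single endpoint host can see both sides.
    static boolean canBypassSecondaryStorage(Pool src, Pool dest) {
        if (src.zoneId() != dest.zoneId()) {
            return false; // cross-zone copies still stage through secondary storage
        }
        return src.scope() == ScopeType.ZONE || dest.scope() == ScopeType.ZONE;
    }

    public static void main(String[] args) {
        Pool zoneWide = new Pool(1, ScopeType.ZONE);
        Pool hostLocal = new Pool(1, ScopeType.HOST);
        System.out.println(canBypassSecondaryStorage(zoneWide, hostLocal)); // true
    }
}
```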
Wei Zhou e78b5cc3cc
Merge remote-tracking branch 'apache/4.20' 2025-09-24 09:27:08 +02:00
Suresh Kumar Anaparti 40dec99659
server: Cleanup allocated snapshots / vm snapshots, and update pending ones to Error on MS start (#8452)
* Remove allocated snapshots / vm snapshots on start

* Check and clean up snapshots / VM snapshots on MS start

* rebase fixes

* Update the volume state (from Snapshotting) on MS start when its snapshot job has not finished and the snapshot is in Creating state
2025-09-23 08:37:10 +02:00
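A minimal sketch of the start-up sweep described in the entry above, with invented names (the real logic lives in the snapshot managers and state machines):

```java
import java.util.List;

// Hypothetical snapshot states for illustration only.
enum SnapState { ALLOCATED, CREATING, BACKED_UP, ERROR }

class Snap {
    long id; SnapState state;
    Snap(long id, SnapState state) { this.id = id; this.state = state; }
}

class StartupSweep {
    // On management-server start: drop snapshots that never left Allocated,
    // and move ones stuck mid-creation to Error, since their async jobs
    // cannot resume after a restart.
    static void sweep(List<Snap> snapshots) {
        snapshots.removeIf(s -> s.state == SnapState.ALLOCATED);
        for (Snap s : snapshots) {
            if (s.state == SnapState.CREATING) {
                s.state = SnapState.ERROR;
            }
        }
    }
}
```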
Suresh Kumar Anaparti 2c34f5e495
Merge branch '4.20' 2025-08-15 19:54:41 +05:30
João Jandre c6daeb4f78
fix snapshot physical size for primary storage (#11448) 2025-08-15 19:52:50 +05:30
slavkap e5f61164b3
Support for snapshot copy to primary storage in different zones. (#9478)
* Support for snapshot copy to a different StorPool primary storage between zones
2025-08-04 16:35:16 +05:30
João Jandre 6ad9296412
Fix KVM incremental snapshot removal when using multiple secondary storages (#11180)
When removing an incremental snapshot (for both KVM and XenServer), CloudStack checks whether the snapshot has a child. If it does, the snapshot is not removed from storage.

For KVM incremental snapshots, snapshots in the same chain may be on different secondary storages (within the same zone).

However, the child search only considers snapshots on the same secondary storage as the snapshot being deleted. Therefore, when a snapshot's child lives on a different secondary storage, no child is found and the snapshot is removed outright, making the snapshot chain inconsistent.
2025-07-21 16:47:27 +05:30
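The fix implied by the entry above amounts to widening the child search; a hedged sketch with invented types, not the actual DAO code:

```java
import java.util.List;

// Hypothetical reference row: snapshot id, optional parent id, and the
// secondary storage (image store) the copy lives on.
record SnapRef(long snapshotId, Long parentId, long storeId) {}

class ChildSearch {
    // A snapshot may only be erased when no child references it on *any*
    // secondary storage in the zone -- not just the snapshot's own store.
    static boolean hasChildOnAnyStore(long parentId, List<SnapRef> refsInZone) {
        return refsInZone.stream()
                .anyMatch(r -> r.parentId() != null && r.parentId() == parentId);
    }
}
```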
João Jandre 53eb2c5b9b
File-based disk-only VM snapshot with KVM as hypervisor (#10632)
Co-authored-by: João Jandre <joao@scclouds.com.br>
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
2025-07-16 08:54:07 +02:00
Suresh Kumar Anaparti 572fc11a64
[PowerFlex] Add & Remove PowerFlex/ScaleIO MDMs for the storage SDC connections (#9903)
* Add & Remove PowerFlex/ScaleIO MDMs while preparing & unpreparing the storage SDC connections (instead of start & stop scini)

* Add/Remove MDM IP addresses during Host connection/disconnection to/from storage pool when powerflex.connect.on.demand is false

* unit test fixes

* Don't remove MDM IPs from the SDC when any volumes are mapped to the SDC

* Don't remove MDM IPs when other pools of the same ScaleIO/PowerFlex cluster are connected

* rebase fixes

* update changes to not remove/disconnect MDMs on maintenance

* import fixes after rebase
2025-05-15 12:42:13 +05:30
João Jandre 6fdaf51ddc
KVM incremental snapshot feature (#9270)
* KVM incremental snapshot feature

* fix log

* fix merge issues

* fix creation of folder

* fix snapshot update

* Check for hypervisor type during parent search

* fix some small bugs

* fix tests

* Address reviews

* do not remove StorPool snapshots

* add support for downloading diff snaps

* Add multiple zones support

* make copied snapshots have normal names

* address reviews

* Fix in progress

* continue fix

* Fix bulk delete

* change log to trace

* Start fix on multiple secondary storages for a single zone

* Fix multiple secondary storages for a single zone

* Fix tests

* fix log

* remove bitmaps when deleting snapshots

* minor fixes

* update sql to new file

* Fix merge issues

* Create new snap chain when changing configuration

* add verification

* Fix snapshot operation selector

* fix bitmap removal

* fix chain on different storages

* address reviews

* fix small issue

* fix test

---------

Co-authored-by: João Jandre <joao@scclouds.com.br>
2025-05-12 10:50:30 -03:00
Pearl Dsilva 91c1168861 Merge branch '4.19' of https://github.com/apache/cloudstack into 4.20 2025-05-02 14:59:05 +05:30
Suresh Kumar Anaparti 32cc45e840
[UI] Allow quiescevm and asyncbackup flags while taking volume snapshot from UI when these are supported for the volume (#10265) 2025-05-02 10:45:50 +02:00
Vishesh a4224e58cc
Improve logging to include more identifiable information (#9873)
* Improve logging to include more identifiable information for kvm plugin

* Update logging for scaleio plugin

* Improve logging to include more identifiable information for default volume storage plugin

* Improve logging to include more identifiable information for agent managers

* Improve logging to include more identifiable information for Listeners

* Replace ids with objects or uuids

* Improve logging to include more identifiable information for engine

* Improve logging to include more identifiable information for server

* Fixups in engine

* Improve logging to include more identifiable information for plugins

* Improve logging to include more identifiable information for Cmd classes

* Fix toString method for StorageFilterTO.java
2025-01-06 16:42:37 +05:30
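The pattern the entry above describes is standard log4j2 usage: pass the domain object (whose toString() can carry uuid and name) as a message parameter instead of concatenating a bare numeric id. A generic example, not code from this PR:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

class VolumeWorker {
    protected Logger logger = LogManager.getLogger(getClass());

    void attach(Object volume, Object vm) {
        // Before: logger.debug("Attaching volume " + volumeId + " to VM " + vmId);
        // After: the objects' toString() identifies them by uuid/name, and the
        // formatting cost is skipped entirely when debug logging is disabled.
        logger.debug("Attaching volume {} to VM {}", volume, vm);
    }
}
```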
Lucas Martins dc0ebe985c
Remove info about the environment from the exception when reverting a VM snapshot (#9944)
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
2024-11-28 14:42:34 -03:00
nvazquez b73f634ea6
Merge branch '4.19' 2024-08-06 12:39:13 -03:00
João Jandre 9033ab709e
Fix snapshot chain being deleted on XenServer (#9447)
Using XenServer as the hypervisor, when deleting a snapshot that has a parent, that parent will also get erased on storage, causing data loss. This behavior was introduced with #7873, where the list of snapshot states that can be deleted was changed to add BackedUp snapshots.

This PR changes the states list back to the original list, and swaps the while loop for a do-while loop to account for the changes in #7873.

Fixes #9446
2024-08-01 17:33:04 +05:30
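The while-to-do-while swap matters because the snapshot being deleted must itself be examined before walking up the chain. A hedged sketch with invented names, not the actual XenServer strategy code:

```java
// Hypothetical chain node: parent link plus a child-presence flag.
record Snap(Snap parent, boolean hasChild) {}

class ChainDelete {
    // The do-while visits the deleted snapshot first, then its ancestors;
    // any node that still has a child stops the walk, so parents with
    // surviving children are never erased from storage.
    static void deleteChainUpwards(Snap snap) {
        Snap current = snap;
        do {
            if (current.hasChild()) {
                return; // something still references this snapshot: keep it
            }
            destroy(current);
            current = current.parent();
        } while (current != null);
    }

    static void destroy(Snap s) { /* remove the files from the image store */ }
}
```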
John Bampton c923e673cf
pre-commit: add `XML` files to the `trailing-whitespace` check (#9131) 2024-07-12 09:42:54 +02:00
João Jandre 49cecaed06
Normalize loggers and upgrade log4j 1.2 to log4j 2.19 (#7131)
* Normalize logs

All classes that could inherit their loggers from their parents had their own loggers deleted;
Most loggers didn't have to be static, so most of them were normalized so that they wouldn't be;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
A new class, DbUpgradeAbstractImpl, was created so that all upgraders extend it and inherit its logger (see the logger-convention sketch after this entry)

* Upgrade log4j

* fix errors caused by the merge

* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files

* fix sonarcloud bug

* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit

* Readd snmpTrapAppender, remove TestAppender

* Regenerate changes

* regenerate changes

* refactor last custom appender

* fix systemvm configuration xml

* Regenerate changes

* Regenerate changes

* regenerate changes

* Regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* Fix utils pom

* fix some tests

* regenerate changes

* Fix jar being printed on exception

* fix logging in system VMs, fix commands not having log4j2 classpath.

* regenerate changes

* Fix some unwanted renamings

* fix end of file

* regenerate changes

* regenerate changes

* fix merge error

* regenerate changes

* fix tests

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* readd reload4j to tungsten as juniper depends on it

* Regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* re-add reload4j dependency to network-contrail, as juniper depends on it

* regenerate changes

* regenerate changes

* regenerate changes

* fix typo

* regenerate changes

* regenerate changes

* Fix end of files

* regenerate changes

* add log4j2 to cloud-utils-SHADED.jar

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* regenerate changes

* Regenerate changes

* Regenerate changes

* Regenerate changes

* regenerate changes

* Regenerate changes

* regenerate changes

* Regenerate changes

* Regenerate changes

* Regenerate changes

* regenerate changes

* Regenerate changes

* Regenerate changes

* fix some tests

* Regenerate changes

* Regenerate changes

* fix test

* Regenerate changes

* Regenerate changes
2024-02-08 09:55:41 -03:00
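The logger conventions from the entry above boil down to a small inheritance pattern; an illustrative sketch, not the actual classes:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

class Base {
    // Non-static and protected: subclasses inherit the field, and
    // getClass() makes log lines carry the concrete subclass name.
    protected Logger logger = LogManager.getLogger(getClass());
}

class Child extends Base {
    void run() {
        logger.info("logged under Child, not Base"); // inherited logger
    }
}

class StaticHolder {
    // Where a logger must stay static, the convention names it LOGGER.
    protected static final Logger LOGGER = LogManager.getLogger(StaticHolder.class);
}
```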
Vishesh 399bd0a067
Upgrade to Mockito 4 and handle Mockito deprecations (#8427) 2024-02-06 14:20:37 +01:00
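For context, a generic example of what a Mockito 4 migration typically touches (not a diff from this PR): `org.mockito.Matchers` is replaced by `org.mockito.ArgumentMatchers`, and `MockitoAnnotations.initMocks` gives way to `openMocks`:

```java
import static org.mockito.ArgumentMatchers.anyString; // was org.mockito.Matchers
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.mockito.MockitoAnnotations;

class ExampleTest {
    @SuppressWarnings("unchecked")
    void example() throws Exception {
        // was: MockitoAnnotations.initMocks(this);
        AutoCloseable mocks = MockitoAnnotations.openMocks(this);
        List<String> list = mock(List.class);
        when(list.contains(anyString())).thenReturn(true);
        mocks.close();
    }
}
```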
Bryan Lima b0910fc61d
Add dynamic secondary storage selection (#7659) 2023-12-04 09:52:32 +01:00
John Bampton f090c77f41
misc: fix spelling (#7549)
Co-authored-by: Stephan Krug <stekrug@icloud.com>
2023-11-02 09:23:53 +01:00
Abhishek Kumar 543c54c718
api,server,ui: snapshot copy, multi-zone replica (#7873)
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.

Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API, `copySnapshot`, has been added. The copySnapshot response returns zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.

In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot is taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter, `zoneids`, has been added to the `createSnapshot` and `createSnapshotPolicy` APIs (see the call sketch after this entry).

As snapshots can be present in multiple zones (secondary stores), a new parameter, `zoneid`, has been added to delete the snapshot copy in a specific zone.

`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.

Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.

`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.

----
New API added
`copySnapshot`

Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form

Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
2023-10-23 09:01:58 +02:00
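A hedged sketch of exercising the new `zoneids` parameter on `createSnapshot`; the endpoint, uuids, and the omitted request signing are placeholders, not values from this PR:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class CreateSnapshotMultiZone {
    public static void main(String[] args) throws Exception {
        String endpoint = "http://mgmt-server:8080/client/api"; // placeholder
        String query = "command=createSnapshot"
                + "&volumeid=VOLUME-UUID"              // placeholder volume
                + "&zoneids=ZONE-2-UUID,ZONE-3-UUID"   // additional zones to copy to
                + "&response=json";
        // NOTE: a real call must be signed or carry api/secret keys.
        HttpRequest req = HttpRequest.newBuilder(URI.create(endpoint + "?" + query))
                .GET().build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}
```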
sato03 e437d1016f
Snapshot removal and storage cleanup logs (#8031) 2023-10-16 16:20:09 +02:00
Rohit Yadav 8a34afa8ab Merge remote-tracking branch 'origin/4.18' 2023-10-11 21:00:06 +05:30
Rohit Yadav 8350ce5aa4
storage: allow VM snapshots without memory for KVM when global setting allows (#8062)
This removes conditional logic that a comment noted should be removed
once PR #5297 was merged, which applies to ACS 4.18+. Only when the
global setting is enabled and memory isn't selected can a VM snapshot
be allowed for VMs on KVM that have qemu-guest-agent running.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2023-10-11 20:56:45 +05:30
Vishesh 84277e783b
remove powermock from engine (#7975) 2023-09-20 10:11:28 +02:00
John Bampton 6f4503488b
pre-commit: apply `end-of-file-fixer` to all files (#7551) 2023-08-02 13:47:21 +02:00
Vishesh 594c70dde0
Sync precommit config from main (#7732)
Co-authored-by: John Bampton <jbampton@users.noreply.github.com>
Co-authored-by: dahn <daan@onecht.net>
2023-07-07 11:18:16 +02:00
John Bampton 11d45654a6
misc: fix spelling (#7206)
This PR fixes spelling mistakes
2023-05-23 11:06:16 +05:30
John Bampton c2e17310d6
Add three more `pre-commit` checks (#7083)
Co-authored-by: dahn <daan@onecht.net>
2023-03-27 13:28:55 +02:00
Daniel Augusto Veronezi Salvador 7936eb04e9
server: Fix delete parent snapshot (#6630)
ACS + XenServer works with differential snapshots. ACS takes a full volume snapshot, and each subsequent snapshot is referenced as a child of the previous one until the chain reaches the limit defined in the global setting snapshot.delta.max; then a new full snapshot is taken. PR #5297 introduced disk-only snapshots for KVM volumes. Among the changes, the delete process was also refactored. Before the changes, when one removed a snapshot with children, ACS marked it as Destroyed and kept the Image entry in the table cloud.snapshot_store_ref as Ready. When ACS rotated the snapshots (the max delta was reached) and all the children were already marked as removed, ACS would remove the whole hierarchy, completing the differential snapshot cycle. After the changes, snapshots with children stopped being marked as removed and the differential snapshot cycle was no longer completed.

This PR intends to honor the differential snapshot cycle for XenServer again, marking snapshots as removed when they are deleted while having children, following the differential snapshot cycle.

Also, when one takes a volume snapshot and ACS backs it up to the secondary storage, ACS inserts 2 entries in the table cloud.snapshot_store_ref (Primary and Image). When one deletes a volume snapshot, ACS first tries to remove the snapshot from the secondary storage and mark the Image entry as removed; then it tries to remove the snapshot from the primary storage and mark the Primary entry as removed. If ACS cannot remove the snapshot from the primary storage, it will keep the snapshot as BackedUp, even though it no longer exists in the secondary storage and there is no SNAPSHOT.DELETE entry in cloud.usage_event. In the end, after the garbage collector flow, the snapshot will be marked as BackedUp, with a value in the removed field, and will still be rated. This PR also addresses the correction for this situation.

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-10-13 12:31:11 +05:30
Abhishek Kumar a21efe75df
vmware: fix vm snapshot with datastore cluster, drs (#6643)
Fixes #6595
Sync the volume datastore, path and chain info while calculating the snapshot chain size after the snapshot operation completes in vCenter.
2022-08-31 16:00:14 +05:30
Harikrishna 2c05b63495
kvm: Fix for Revert volume snapshot (#6527)
This PR fixes issue #6209, where the snapshot revert operation fails after certain volume operations like migrate VM with volume / migrate volume / reinstall VM.

The root cause is that, after these volume operations, the primary storage entry for that volume gets deleted. We have fixed it here to fetch the primary datastore entry for the volume and continue the operation.
2022-07-20 11:34:02 +05:30
Pearl Dsilva 48f7f10089
xen: Fix volume snapshot deletion when it has child snapshots (#6296) 2022-04-22 14:36:08 -03:00
slavkap 4004dfcfd8
StorPool storage plugin (#6007)
* StorPool storage plugin

Adds volume storage plugin for StorPool SDS

* Added support for alternative endpoint

Added option to switch to alternative endpoint for SP primary storage

* renamed all classes from Storpool to StorPool

* Address review

* removed unnecessary else

* Removed check about the storage provider

We don't need this check; we can tell whether the snapshot is on StorPool by
its name from the path

* Check that the current plugin supports all functionality before upgrading CS

* Smoke tests for StorPool plug-in

* Fixed conflicts

* Fixed conflicts and added missed Apache license header

* Removed whitespaces in smoke tests

* Added StorPool plugin jar for Debian

the StorPool jar will be included in the cloudstack-agent package for
Debian/Ubuntu
2022-04-14 11:12:01 -03:00
Daniel Augusto Veronezi Salvador 39fad2d9d7
KVM disk-only based snapshot of volumes instead of taking VM's full snapshot and extracting disks (#5297)
* Refactor create volume snapshot with running VM

* Refactor create volume snapshot with stopped VM

* Refactor create volume from snapshot

* Refactor create template from snapshot

* Refactor volume migration (migrateVolume/ migrateVirtualMachineWithVolume)

* Refactor snapshot deletion

* Refactor snapshot reversion

* Adjusts and fix cherry-pick conflicts

* Remove diffuse tests

* Add validation to add the '--delete' flag to the 'virsh blockcommit' command only if the libvirt version is 6.0.0 or higher

* Expunge temporary snapshot only if template creation is from snapshot

* Extract strings to constant

* Remove unused imports

* Fix error on revert backed up snapshot

* Turn method's return to void as it is not used

* Rename method in SnapshotHelper

* Fix folder creation when using SharedMountPoint pool

* Remove static import

* Remove unused method

* Cover taking a snapshot on CentOS 7

* Handle right snapshot flag according to qemu version

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-04-12 08:14:27 -03:00
slavkap 2b075ed39e
Storage-based Snapshots for KVM VMs (#3724)
* VM snapshots of running KVM instances using storage provider plugins for disk snapshots

Added a new virtual machine snapshot strategy which uses storage provider plugins to take/revert/delete snapshots.
You can take a VM snapshot without VM memory on a KVM instance, using storage provider implementations for disk snapshots.
Revert and delete are also added as functionality, along with Freeze/Thaw commands for KVM instances.
The snapshots will be consistent, because we freeze the VM during the snapshotting. Backup to secondary storage is executed after the VM is thawed, and only if it is enabled in global settings.

* Removed duplicated functionality

Set a few methods in DefaultVMSnapshotStrategy to protected to reuse them
without duplicating the code. Removed code that is actually not needed

* Added requirements in global setting kvm.vmstoragesnapshot.enabled

Added more information to the kvm.vmstoragesnapshot.enabled global setting,
noting that it needs installation of:
- qemu version 1.6+
- qemu-guest-agent installed on guest virtual machine

when the option is enabled

* Added Apache license header

* Removed commented code

* If "kvm.vmstoragesnapshot.enabled" is null should be considered as false

* removed unused imports, replaced default template

Removed unused imports which were causing failures, and replaced the template
with CentOS 8

* "kvm.vmstoragesnapshot.enabled" set to dynamic

* Get the status of freeze/thaw commands, not the return code

Check the status to see whether the freeze/thaw of the guest VM succeeded, rather than
looking at the return code. Code refactoring

* removed "CreatingKVM" VMsnapshot state and events related to it

* renamed AllocatedKVM to AllocatedVM

the states should not be associated with a hypervisor type

* loggin the result of "drive-backup" command

* Check which VM snapshot strategy could handle the vm snapshots

gets the best match of VM snapshot strategy which could handle the vm
snapshots on KVM.
Other storage plugins could integrate with this functionality to support group snapshots

* Added poolId in canHandle for KVM hypervisors

Added poolId to the canHandle method, used to check whether all volumes are on
the same PowerFlex storage pool

* skip smoke tests if the hypervisor's OS type is CentOS

This PR works with functionality included in qemu-kvm-ev which
does not come by default on CentOS. The smoke tests will be skipped if
the hypervisor OS is CentOS

* Added missed import in smoke test

* Suggested change to use `org.apache.commons.lang.StringUtils.isNotBlank`

* Fix getting device on Ubuntu

On Ubuntu the device isn't provided, and we have to get it from the
node-name parameter. The drive-backup command (on Ubuntu) also needs a job-id, which
is the value of node-name (this extra parameter works on Ubuntu and CentOS as well).

* Removed new snapshot states and functionality for NFS

* throw CloudRuntimeException

provide a proper error message when deleting a VM snapshot fails

* exclude GROUP snapshots when listing snapshots

* Skip tests if there is a pool with NFS/Local storage

* address comments
2022-04-07 21:42:12 -03:00
Harikrishna f15cab16da
server: Decouple service (compute) offering and disk offering (#5008)
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, if a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Also, creating a compute offering takes a few disk-related parameters which in any case go only to the corresponding disk offering. I think this design was initially made to address the compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless, should consider new storage tags and a new size, and should place the volume in the appropriate state as defined in the disk offering.

more details are mentioned here https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring

* Schema changes and disk offering column change from "type" to "compute_only"

* Few more changes

* Decoupled service offering and disk offering

* Remove diskofferingid from vminstance VO

* Decouple service offering and disk offering states

* diskoffering getsize() is only for strict disk offerings

* Fix deployVM flow

* Added new API params to compute offering creation

* Add diskofferingstrictness to serviceoffering vo under quota

* Added an overrideDiskOfferingId parameter to the deploy VM API which will override the disk offering for the root disk in both the template and ISO cases

Added a diskSizeStrictness parameter to the create disk offering API which will decide whether to restrict resize or disk offering change of a volume

* Fix User vm response to show proper service offering and disk offerings

* Added disk size strictness in disk offering response

* Added disk offering strictness to the service offering response

* Remove comments

* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form

* Added diskoffering details to the service offering response

* Added UI changes in deployvm wizard to accept override disk offering id

* Fix delete compute offering

* Fix VM deployment from custom service offering

* Move uselocalstorage column access from service offering to disk offering

* UI: Separated compute and disk related parameters in add compute offering wizard, also added association to disk offering

* Fixed diskoffering automatic selection on add compute offering wizard

* UI: move compute only toggle button outside the box in add compute offering wizard

* Added a volumeId parameter to the listDiskOfferings API; the disksizestrictness flag of the current disk offering is honored while listing disk offerings

* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration

* Added disk offering change checks during resize volume operation

* Added new API changeofferingforVolume API and corresponding changes

* Add UI form for changeOfferingForVolume API

* Fix UI conflicts

* Fix service offering usage as disk offering

* Fix unit test failures

* fix user_vm_view

* Addressed review comments

* Fixed service_offering_view

* Fix service offering edit flow

* Fix service offering constructor to address custom offering

* Fix domain_router_view to get proper service offering id

* Removed unused import

* Addressed review comments and fixed update service offering flow with storage tags

* Added marvin test cases for checking disk offering strictness

* review comments addressed

* Remove system_use column from disk offering join

* update volume_view to update system_use column from service offering and not disk offering

* Fix changeOfferingForVolume API for custom disk offering

* Fix global setting implementation

* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view

* Changes for override root disk offering in deployvm wizard in case of custom offering

* Fix a unit test case

* Fixed recent unit test cases with new serviceofferingvo constructor

* Fix unit test in VolumeApiServiceImpl

* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow

* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering

* Fix smoke test failures

* Added tool tip for migrate volume UI form

* Address review comments and fix UI form of deploy VM in case of ISO.

* Fixed resize volume UI form for data disk

* UI changes to disable override root disk size when override root disk offering is enabled

* UI fix in deploy vm wizard

* Fix listdiskoffering after rebasing with main

* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list

* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form

* Fix false response on updateDiskOffering API

* Added search field for changeofferingforvolume UI form

* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Removed DB changes from 4.16 upgrade file

* Resolving merge conflicts with main 4.17

* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.

* UI: Added automigrate checkbox in scale VM form

* Added since attributes to new API params

* Added shrinkOK parameter to changeofferingforvolume API

* Added shrinkOk param to UI in changeOfferingforVolume form

* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form

* Removed old foreign key constraint on IDs of service offering and disk offering

* Allow resize and automigrate of root volume if required in all cases of service offering change

* Allow only resize to higher disk size from UI

* Fixing Vue syntax error

* Make UI changes to provide a root disk size box when the linked disk offering is custom

* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms

* Fix resize volume operation to update the VM settings

* Fix migratevolume form to pick selected storage pool id in list diskofferings API
2022-01-27 15:08:42 +05:30
Daniel Augusto Veronezi Salvador d26ce157db
Fix camel case (#5898)
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-01-26 19:20:18 -03:00
Rohit Yadav a1a3aff2b5 Merge remote-tracking branch 'origin/4.15' into main 2021-08-31 14:29:30 +05:30
slavkap 961e85eb60
Fix creating volumes from snapshots without backup to secondary storage (#5349)
* Fix creating volumes from snapshots without backup

When a few snapshots are created only on primary storage and one tries to create
a volume or a template from the snapshot, only the first operation is
successful. That is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it didn't cover the functionality to create a volume from a snapshot
which is kept only on Ceph

* Address review
2021-08-31 12:46:57 +05:30
Rohit Yadav f58b72f6f7 Merge remote-tracking branch 'origin/4.15' 2021-06-27 18:25:46 +05:30
slavkap d82909318f
server: Fix of delete of Ceph's snapshots from secondary storage (#5130)
This PR fixes #4797; the deletion will be handled by DefaultSnapshotStrategy::deleteSnapshot
2021-06-25 12:04:36 +05:30
Rohit Yadav 4742ac15f7 Merge remote-tracking branch 'origin/4.15' 2021-04-29 21:50:40 +05:30
dahn be255e4203
server: protect against stray snapshot-details without snapshot (#4924)
This PR makes sure no orphaned snapshot details are considered in the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed, this would be a bit complex

Co-authored-by: Daan Hoogland <dahn@onecht.net>
2021-04-29 20:40:29 +05:30