listConfigurations is not available to all roles and is therefore not suitable for implementing generic UI functionality.
This PR makes the default UI page size part of the listCapabilities API response so that the UI can read it regardless of the account's role.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
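A minimal, hypothetical sketch of the idea (the response key name and default value are assumptions, not the actual CloudStack code): the page size is simply exposed as one more entry in the map backing the listCapabilities response.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: expose the default UI page size as one more capability entry
// so any role can read it from listCapabilities.
public class CapabilitiesSketch {
    // Assumed stand-in for the global "default.page.size" setting.
    private static final long DEFAULT_UI_PAGE_SIZE = 20L;

    public static Map<String, Object> buildCapabilities() {
        Map<String, Object> capabilities = new HashMap<>();
        // ... existing capability entries ...
        capabilities.put("defaultuipagesize", DEFAULT_UI_PAGE_SIZE); // assumed key name
        return capabilities;
    }

    public static void main(String[] args) {
        System.out.println(buildCapabilities());
    }
}
```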
This PR allows migration of public templates that are created from snapshots / volumes. Data migration across secondary stores initially excluded all public templates on the premise that public templates are automatically synced when a new image store is added; however, this assumption does not hold for templates marked as "public" when created from snapshots / volumes. Such templates can be identified by their url being null.
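A minimal sketch of the eligibility test described above (field names are assumptions, not the actual data-motion code): a public template with a null url is one created from a snapshot or volume and therefore has to be migrated explicitly.

```java
// Sketch only; field names are assumptions, not the real template entity.
class TemplateMigrationSketch {
    static boolean needsExplicitMigration(boolean isPublic, String url) {
        // Templates created from snapshots/volumes have no registration URL, so they
        // are not auto-synced to new image stores and must be migrated explicitly.
        return isPublic && url == null;
    }

    public static void main(String[] args) {
        System.out.println(needsExplicitMigration(true, null));                         // true
        System.out.println(needsExplicitMigration(true, "http://example/tmpl.qcow2"));  // false
    }
}
```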
* Filter disk / service offerings by domain at DB level
* Search for tags in the db
* Update search to include host tags
* Differentiate between tags
* Refactor
* server: do not remove volume from DB if it cannot be expunged from primary or secondary storage
* server/VolumeApiServiceImpl.java: move to method
* update #5373
* server: skip zone check for PERHOST iso during attachIso
Hypervisor tools ISOs (vmware-tools.iso, xs-tools.iso) are marked as PERHOST in the DB. They are active but not downloaded to the secondary storages, and hence have no template-zone entry.
This change skips the template-zone check for such templates (see the sketch after this commit list).
Fixes #5265
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* inverted check
* use constants in TemplateManager
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
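A minimal sketch of the skipped check, assuming a simplified template type enum (the real code uses the template type constants in TemplateManager and the template-zone lookup):

```java
// Sketch: PERHOST ISOs (vmware-tools.iso, xs-tools.iso) live on the host and have
// no template-zone entry, so the zone check is bypassed for them.
class AttachIsoZoneCheckSketch {
    enum TemplateType { USER, SYSTEM, PERHOST }

    static boolean requiresZoneCheck(TemplateType type) {
        return type != TemplateType.PERHOST;
    }

    public static void main(String[] args) {
        System.out.println(requiresZoneCheck(TemplateType.PERHOST)); // false -> skip check
        System.out.println(requiresZoneCheck(TemplateType.USER));    // true
    }
}
```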
* Fix shrinking of volumes with QCOW2 format
When volumes are in QCOW2 format, shrinking is handled on the agent side. In some storage plugins the volume format is recorded in the DB as QCOW2 while the actual on-disk format is RAW. Until now this prevented those plugins from shrinking the volumes; shrinking is now delegated to the storage plugins.
* Addressed change suggested by @nvazquez
Log the full exception instead of only the exception message
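A rough sketch of the decision described above (enum simplified, not the plugin API): shrink requests for volumes recorded as QCOW2 are no longer rejected up front but deferred to the agent/storage plugin, which knows the actual on-disk format.

```java
class ShrinkDelegationSketch {
    enum ImageFormat { QCOW2, RAW, VHD }

    // Sketch: decide whether the shrink is deferred to the agent/plugin side.
    static boolean delegateShrinkToAgent(ImageFormat dbFormat) {
        return dbFormat == ImageFormat.QCOW2;
    }

    public static void main(String[] args) {
        System.out.println(delegateShrinkToAgent(ImageFormat.QCOW2)); // true
        System.out.println(delegateShrinkToAgent(ImageFormat.VHD));   // false
    }
}
```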
* server: fix failure to apply userdata when enabling static NAT
* server: fix VM expunge failing because applyUserdata fails
* configdrive: fix ISO not being recognized when plugging a new nic
* configdrive: detach and re-attach the configdrive ISO, as it changes when a new nic is plugged or the VM is migrated
* configdrive test: (1) password file does not exist in recreated ISO; (2) vm hostname should be changed after migration
* configdrive: use centos55 template with sshkey and configdrive support
* configdrive: disklabel is 'config-2' for configdrive ISO
* configdrive: use copy for configdrive ISO and move for other template/volume/iso
* configdrive: use public-keys.txt
* configdrive test: fix (1) update_template ; (2) ssh into vm by keypair
This PR fixes the issue where a nic has the wrong gateway after updating a VM nic.
Steps to reproduce the issue:
(1) create a shared network (in an advanced zone or an advanced zone with sg)
(2) create a new shared network (with the same startip/endip/netmask, but a different gateway)
(3) create a vm in the new network
(4) stop the vm and update the vm nic ip address
Expected result:
The vm has the correct gateway and netmask (of the second network)
Actual result:
The vm has the wrong gateway and netmask (of the first network)
* Cover a case where resizing root disk failed; add isNotPossibleToResize method.
* remove format from resize validation
* Revert if-conditional changes that removed ImageFormat.ISO validation
* Add JUnit tests for VolumeApiServiceImpl.isNotPossibleToResize
* Fix checkstyle of test Class
* Use _templateDao.findByIdIncludingRemoved instead of _templateDao.findById
* Prevent null serviceOfferingView and Mock findByIdIncludingRemoved instead of findById
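A hedged sketch of a unit test in the spirit of the ones added above; the real VolumeApiServiceImpl.isNotPossibleToResize takes volume and disk-offering objects and checks more conditions, here reduced to a simplified stand-in for illustration only.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Illustrative only: a simplified stand-in for the resize pre-check, not the actual
// isNotPossibleToResize signature or rule set.
public class ResizePrecheckSketchTest {

    // Assumed rule for the sketch: a ROOT volume backed by an offering with a fixed
    // root disk size cannot be resized.
    private boolean isNotPossibleToResize(boolean isRootVolume, boolean offeringHasFixedSize) {
        return isRootVolume && offeringHasFixedSize;
    }

    @Test
    public void rootVolumeWithFixedOfferingSizeCannotBeResized() {
        assertTrue(isNotPossibleToResize(true, true));
    }

    @Test
    public void dataVolumeCanBeResized() {
        assertFalse(isNotPossibleToResize(false, true));
    }
}
```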
This PR fixes the problem of the chain info not being updated, or being set to null, after volume migrations.
Problem: While fetching the volume chain info, the management server assumes the datastore name is a UUID (true only for NFS storages added by CloudStack), but the datastore can have any name.
Solution: To fetch the volume chain info, use the datastore name instead of the UUID.
The fix applies to the flow of the following API operations:
migrateVirtualMachine
migrateVirtualMachineWithVolume
migrateVolume
* Fix some UEFI-related issues
1 - fix attach/detach of an ISO for a VM with UEFI boot type
2 - if the OS type of an ISO is categorized as "Other", the bus type of the disk
will be set to "sata"
* Simplify the validation of OS types
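A tiny sketch of rule 2 above (values simplified; the real code maps the guest OS category to a disk controller type):

```java
class IsoBusTypeSketch {
    // Sketch: ISOs whose guest OS category is "Other" get a SATA bus; otherwise the
    // previously determined bus type is kept.
    static String busForIso(String guestOsCategory, String currentBus) {
        return "Other".equalsIgnoreCase(guestOsCategory) ? "sata" : currentBus;
    }

    public static void main(String[] args) {
        System.out.println(busForIso("Other", "ide"));  // sata
        System.out.println(busForIso("CentOS", "ide")); // ide
    }
}
```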
Fixes: #4990
When a VM associated with a backup offering is destroyed/expunged, the backup offering isn't unassigned, and backup usage is generated despite the VM having no backups. This PR prevents usage record generation when no backups are present for a VM with an associated backup offering, by ensuring a backup usage event is generated only when the backup size is > 0.
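A minimal sketch of the guard described above (method and field names assumed):

```java
class BackupUsageGuardSketch {
    // Sketch: only emit a backup usage event when the VM actually has backup data.
    static boolean shouldGenerateBackupUsage(Long backupSizeInBytes) {
        return backupSizeInBytes != null && backupSizeInBytes > 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldGenerateBackupUsage(0L));    // false -> no usage record
        System.out.println(shouldGenerateBackupUsage(1024L)); // true
    }
}
```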
* server: fixes NPE on empty vmware.root.disk.controller config
When the global config vmware.root.disk.controller is set to empty and a template is registered as deploy-as-is, the server throws an NPE while deploying a VM. This change fixes the problem by using the default value of the config in that case.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* use StringUtils utility
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* fix indentation
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
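A minimal sketch of the fallback, using org.apache.commons.lang3.StringUtils as mentioned above (the default value and method names here are illustrative assumptions, not the exact server code):

```java
import org.apache.commons.lang3.StringUtils;

class RootDiskControllerFallbackSketch {
    // Assumed default; the real default of vmware.root.disk.controller may differ.
    private static final String DEFAULT_CONTROLLER = "osdefault";

    // Sketch: never hand an empty controller name to the deploy-as-is path.
    static String resolveRootDiskController(String configuredValue) {
        return StringUtils.isBlank(configuredValue) ? DEFAULT_CONTROLLER : configuredValue;
    }

    public static void main(String[] args) {
        System.out.println(resolveRootDiskController(""));         // osdefault
        System.out.println(resolveRootDiskController("lsilogic")); // lsilogic
    }
}
```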
Fixes: #4972
This PR sets a system VM's agent state to Disconnected when it is stopped. Currently, when a system VM (Console Proxy VM / Secondary Storage VM) is stopped, its agent state still appears as 'Up'.
* server: destroy ssvm, cpvm on last host maintenance
When a single or the last UP host enters maintenance, just stopping the SSVM and CPVM will leave the VMs behind on the hypervisor side. As these system VMs will be recreated anyway, they can be destroyed.
Fixes #3719
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix methods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* immediately destroy systemvms
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix destroy
Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance (see the sketch after this commit list).
The flag is set to true only for the delete commands for the SSVM and CPVM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unit test fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing return statement
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
The VM should be stopped with cleanup before calling expunge, otherwise the server may throw an error when the host is in the PrepareForMaintenance state.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* rename
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
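A rough sketch of the bypassHostMaintenance flag mentioned in the "fix destroy" commit above (field and method names assumed; the real flag lives on the agent Command class):

```java
// Illustrative sketch: commands carry a marker telling the agent to process them even
// while the host is in maintenance; only SSVM/CPVM delete commands set it.
class CommandSketch {
    private boolean bypassHostMaintenance = false;

    boolean isBypassHostMaintenance() {
        return bypassHostMaintenance;
    }

    void setBypassHostMaintenance(boolean bypass) {
        this.bypassHostMaintenance = bypass;
    }
}

class AgentDispatchSketch {
    // Sketch: a command is handled if the host is not in maintenance, or if the
    // command explicitly bypasses maintenance.
    static boolean canHandle(boolean hostInMaintenance, CommandSketch cmd) {
        return !hostInMaintenance || cmd.isBypassHostMaintenance();
    }

    public static void main(String[] args) {
        CommandSketch deleteSystemVm = new CommandSketch();
        deleteSystemVm.setBypassHostMaintenance(true);
        System.out.println(canHandle(true, deleteSystemVm));       // true
        System.out.println(canHandle(true, new CommandSketch()));  // false
    }
}
```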
* forceha: fix VM not being started if it is powered off from inside the guest
Steps to reproduce the issue:
(1) make sure force.ha is true in global settings; if not, change it to true and restart the management server
(2) create a service offering with HA not enabled
(3) create a vm
(4) log into the vm and power it off via the CLI
Expected result: the VM is started again by CloudStack
Actual result: the VM is not started.
* forceha: fix VMs still running on the host after the host is force-removed
A host can be force-removed, and the VMs are then marked stopped in CloudStack, but they are not actually stopped on the host:
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
"success": true
}
```
After some minutes, the VMs are still running on the host:
```
root@mgt01:~# ssh node63 virsh list
Id Name State
---------------------------
1 i-2-19-VM running
2 i-2-11-VM running
```
The error message is:
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```
* forceha: Make ForceHA dynamic
Support for a datastore cluster as primary storage already exists, but changes made to the datastore cluster in vCenter, such as the addition or removal of a datastore, are not synchronised with CloudStack directly; it requires removing the primary storage from CloudStack and adding it again.
This PR adds synchronisation of the datastore cluster without needing to remove and re-add it.
1. A new API, syncStoragePool, is introduced which takes the datastore cluster storage pool UUID as a parameter. It checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server creates a new child storage pool in the database under the datastore cluster. If the new child storage pool was already added as an individual storage pool, the existing storage pool entry is converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastore.
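A simplified sketch of the reconciliation in points 2 and 3 above (identifiers only; the real syncStoragePool works on storage pool entities and vCenter datastore objects):

```java
import java.util.Set;

class DatastoreClusterSyncSketch {
    // Sketch: compare the child datastores vCenter reports with the child pools
    // CloudStack knows about, and print the action each difference would trigger.
    static void reconcile(Set<String> vcenterChildDatastores, Set<String> cloudstackChildPools) {
        for (String ds : vcenterChildDatastores) {
            if (!cloudstackChildPools.contains(ds)) {
                System.out.println("New child datastore -> create/convert child storage pool: " + ds);
            }
        }
        for (String pool : cloudstackChildPools) {
            if (!vcenterChildDatastores.contains(pool)) {
                System.out.println("Removed from cluster -> keep as individual storage pool: " + pool);
            }
        }
    }

    public static void main(String[] args) {
        reconcile(Set.of("ds-1", "ds-3"), Set.of("ds-1", "ds-2"));
    }
}
```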
This NPE may happen when a VM is marked removed in the DB but its nics on a shared network are not. This can usually happen due to a failed VM expunge, or when an admin manually marks a VM as removed in the DB but does not clean up the nics/network resources.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
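A minimal sketch of the guard implied above (DAO and entity names are stand-ins): skip a nic whose owning VM row is already marked removed instead of dereferencing a null lookup result.

```java
import java.util.Map;

class OrphanNicGuardSketch {
    // Sketch: vmsById plays the role of the VM DAO; removed VMs simply have no entry.
    static void cleanupNic(Map<Long, String> vmsById, long nicInstanceId) {
        String vm = vmsById.get(nicInstanceId);
        if (vm == null) {
            // VM already removed in the DB; skip the orphaned nic instead of hitting an NPE.
            return;
        }
        System.out.println("cleaning up nic of " + vm);
    }

    public static void main(String[] args) {
        cleanupNic(Map.of(1L, "i-2-1-VM"), 2L); // orphaned nic, safely skipped
    }
}
```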
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives an option to separate the first right subnet from the rest, emitting individual connection statements for each right subnet for better cross-compatibility.
Backported from PR: #4137
update per PR suggestion
Fixes #3138
Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This PR addresses the issue raised at #4545 (Fail to change Service offering from local <> shared storage).
When upgrading a VM service offering, it is validated that the new offering has the same storage scope (local or shared) as the current offering. The validation makes sense in that it prevents running root disks with an offering that does not match the current storage pool; however, it only compares the two offerings and does not consider that volumes can be migrated between local <> shared storage pools.
The idea behind this implementation is that CloudStack should check the scope of the storage pool in which the ROOT volume is currently allocated; thus, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings supported for that pool.
This PR also changes the API command that lists offerings for a VM to follow the same idea and list based on the storage pool in which the volume is allocated, not the previous offering.
Fixes: #4545
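A small sketch of the changed validation (enum and flags simplified): compare the new offering's storage type with the scope of the pool the ROOT volume actually sits on, instead of with the old offering.

```java
class OfferingScopeCheckSketch {
    enum ScopeType { HOST, CLUSTER, ZONE }

    // Sketch: an offering that uses local storage matches only HOST-scoped pools.
    static boolean offeringMatchesRootVolumePool(boolean newOfferingUsesLocalStorage, ScopeType rootVolumePoolScope) {
        boolean poolIsLocal = rootVolumePoolScope == ScopeType.HOST;
        return newOfferingUsesLocalStorage == poolIsLocal;
    }

    public static void main(String[] args) {
        System.out.println(offeringMatchesRootVolumePool(false, ScopeType.CLUSTER)); // true
        System.out.println(offeringMatchesRootVolumePool(true, ScopeType.ZONE));     // false
    }
}
```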
This PR makes sure no orphaned snapshot details are considered by the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed, this would be a bit complex.
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Fixes: #4808, #4941
This PR adds a force flag to the attachIso / detachIso commands, primarily for VMware. When trying to detach an ISO, or to attach an ISO while another is already present, the operation fails because on the ACS side we either answer the CD-ROM disconnect question returned by ESXi with No (for the detach operation) or do not answer the question at all (for the attach operation).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
A volume can have either an associated disk offering (for DATA disks and ROOT disks of VMs created from an ISO) or a compute/service offering (for ROOT disks of VMs created from templates).
This fix simplifies and corrects the check so that the appropriate response keys are returned in these cases.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes CLOUDSTACK-10434. Some APIs lack an access check; they are listed in the table below, and the patch adds the access check for each API in the table. Anyone can update the table and change an API's label to "no" if they think it does not need an access check.
| API | Lacks access check? |
| --- | --- |
| VolumeApiServiceImpl#updateVolume | yes |
| VolumeApiServiceImpl#detachVolumeViaDestroyVM | yes |
| VolumeApiServiceImpl#takeSnapshot | yes |
| VolumeApiServiceImpl#migrateVolume | yes |
| AccountManagerImpl#createApiKeyAndSecretKey | yes |
| LoadBalancingRulesManagerImpl#applyLBStickinessPolicy | yes |
| LoadBalancingRulesManagerImpl#applyLBHealthCheckPolicy | yes |
| TemplateManagerImpl#createPrivateTemplate | yes |
| SnapshotManagerImpl#updateSnapshotPolicy | |
Co-authored-by: lujie <lujie@foxmail.com>
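A minimal sketch of the kind of check added for the APIs listed above (ownership-only; the real patch goes through CloudStack's AccountManager access check, which also applies role and ACL rules):

```java
class AccessCheckSketch {
    // Sketch: reject callers that neither own the resource nor are admins.
    static void checkAccess(long callerAccountId, boolean callerIsAdmin, long resourceOwnerAccountId) {
        if (!callerIsAdmin && callerAccountId != resourceOwnerAccountId) {
            throw new SecurityException("Caller is not permitted to operate on this resource");
        }
    }

    public static void main(String[] args) {
        checkAccess(5L, false, 5L); // owner: allowed
        checkAccess(5L, true, 7L);  // admin: allowed
        // checkAccess(5L, false, 7L) would throw SecurityException
    }
}
```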
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
A VM can have multiple ROOT volumes and some can be on a zone-wide store, therefore iterate over all of them until a cluster ID is found (see the sketch after this commit list).
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake, VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
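A simplified sketch of the lookup from the "find vm clusterid with multiple ROOT volumes" commit above (types reduced to the cluster ids of the pools hosting each ROOT volume):

```java
import java.util.Arrays;
import java.util.List;

class VmClusterLookupSketch {
    // Sketch: zone-wide pools have no cluster id (null); return the first non-null one.
    static Long findClusterId(List<Long> rootVolumePoolClusterIds) {
        for (Long clusterId : rootVolumePoolClusterIds) {
            if (clusterId != null) {
                return clusterId;
            }
        }
        return null; // all ROOT volumes are on zone-wide storage
    }

    public static void main(String[] args) {
        System.out.println(findClusterId(Arrays.asList(null, 3L, 4L))); // 3
    }
}
```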
Fixes a regression introduced in 71c5dbcf49 which caused the capacity bytes of certain pools, such as SolidFire, to be updated by StatsCollector when they shouldn't be.
Fixes #4911
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes: #4462
Problem Statement:
In the case of VMware, when a VM with multiple data disks is destroyed (without expunge) and then recovered, the previous data disks are not attached to the VM as they were before the destroy; only the root disk is attached.
Root cause:
All data disks were removed as part of the VM destroy, whereas only the volumes selected for deletion (while destroying the VM) are supposed to be detached and destroyed.
Solution:
During VM destroy, detach and destroy only the volumes selected for deletion; detach the other volumes when the VM is expunged.
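A small sketch of the destroy/expunge split described in the solution (names simplified):

```java
import java.util.List;

class DestroyVmVolumesSketch {
    // Sketch: at destroy time only the user-selected data volumes are detached and
    // destroyed; the remaining data disks stay attached until the VM is expunged.
    static void onDestroy(List<String> dataVolumes, List<String> selectedForDeletion) {
        for (String volume : dataVolumes) {
            if (selectedForDeletion.contains(volume)) {
                System.out.println("detach + destroy now: " + volume);
            } else {
                System.out.println("keep attached until expunge: " + volume);
            }
        }
    }

    public static void main(String[] args) {
        onDestroy(List.of("DATA-1", "DATA-2"), List.of("DATA-2"));
    }
}
```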