This PR fixes the problem of the volume chain info not being updated, or being set to null, after volume migrations.
Problem: While fetching the volume chain info, the management server assumes the datastore name is a UUID (this is true only for NFS storage added by CloudStack), but a datastore can have any name.
Solution: Use the datastore name instead of the UUID to fetch the volume chain info.
The fix is made in the flow of the following API operations:
migrateVirtualMachine
migrateVirtualMachineWithVolume
migrateVolume
* Fix of some UEFI related issues
1 - fix attach/detach ISO for VMs with UEFI boot type
2 - if the OS type of an ISO is categorized as "Other", the bus type of the disk
will be set to "sata"
* Simplify the validation of OS types
This PR introduces new granularity levels to configure VM dynamic scalability. Previously, a VM was configured to be dynamically scalable based on the template and a global setting. Now this option can also be configured at the service offering and VM level.
A VM can dynamically scale only when the flag is ON at all levels: VM, template, service offering, and global setting. If any of the flags is set to false, the VM cannot scale. This result is persisted in the DB for each VM and honoured for that VM until it is updated.
We are introducing the 'dynamicscalingenabled' parameter, with permitted values of true or false, for the deployVirtualMachine and createServiceOffering APIs (a usage sketch is shown after the UI changes below).
Following are the API parameter changes:
createServiceOffering API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
deployVirtualMachine API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
Following are the UI changes:
Service offering creation has an ON/OFF switch for dynamic scaling enabled, with default value true.
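As a sketch of the new parameter in use (the IDs are placeholders; the parameter name follows the API changes listed above):
```
(localcloud) 🐱 > create serviceoffering name=fixed-offering displaytext=fixed-offering cpunumber=1 cpuspeed=1000 memory=1024 dynamicscalingenabled=false
(localcloud) 🐱 > deploy virtualmachine zoneid=<zone-uuid> templateid=<template-uuid> serviceofferingid=<offering-uuid> dynamicscalingenabled=false
```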
Inclusivity changes for CloudStack
- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.
This PR updates the default git branch to 'main', as part of #4887.
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes: #4990
When a VM associated with a backup offering is destroyed/expunged, the backup offering isn't unassigned, and despite the VM having no backups present, backup usage is generated. This PR prevents usage record generation when there are no backups present for a VM with a backup offering associated to it. This is done by ensuring that a usage event for backups is generated only when the backup size > 0.
* server: fixes NPE on empty vmware.root.disk.controller config
When the global config vmware.root.disk.controller is set to empty and a template is registered with deploy-as-is, the server throws an NPE while deploying a VM. This change fixes the problem by using the default value of the config in this case.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* use StringUtils utility
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* fix indentation
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Fixes: #4972
This PR sets system VMs' agent state to Disconnected when they are stopped. Currently, when a system VM (Console Proxy VM / Secondary Storage VM) is stopped, the agent state still appears to be 'Up'.
* server: destroy ssvm, cpvm on last host maintenance
When a single or last UP host enters maintenance, just stopping the SSVM and CPVM will leave VMs behind on the hypervisor side. As these system VMs will be recreated, they can be destroyed.
Fixes #3719
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix methods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* immediately destroy systemvms
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix destroy
Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set to true only for the delete commands for the SSVM and CPVM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unit test fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing return statement
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
The VM should be stopped with cleanup before calling expunge, else the server may throw an error when the host is in the PrepareForMaintenance state.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* rename
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* forceha: fix VM not started if it is powered off from inside
Steps to reproduce the issue:
(1) make sure force.ha is true in global setting. if not, change it to true, and restart mgt server
(2) create a service offering , ha is not enabled
(3) create a vm
(4) log into the vm, and power off via cli.
expected result: vm is started again by cloudstack
actual result: vm is not started.
* forceha: fix VMs still running if host is force-removed
A host can be force-removed; however, the VMs are then stopped in CloudStack but not stopped on the host.
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
"success": true
}
```
After some minutes, VMs are still running on the host:
```
root@mgt01:~# ssh node63 virsh list
Id Name State
---------------------------
1 i-2-19-VM running
2 i-2-11-VM running
```
The error message is:
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```
* forceha: Make ForceHA dynamic
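Dynamic here means the setting takes effect without a management server restart; for example, it can be flipped at runtime via the standard updateConfiguration API:
```
(localcloud) 🐱 > update configuration name=force.ha value=true
```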
Support for a datastore cluster as primary storage already exists. However, any changes to the datastore cluster at vCenter, like the addition/removal of a datastore, are not synchronised with CloudStack directly; that requires removing the primary storage from CloudStack and adding it again.
Here, synchronisation of the datastore cluster is fixed without the need to remove and re-add it.
1. A new API, syncStoragePool, is introduced, which takes the datastore cluster storage pool UUID as the parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server will create a new child storage pool in the database under the datastore cluster. If the new child datastore was already added as an individual storage pool, the existing storage pool entry is converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastores.
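For illustration, invoking the new API via CloudMonkey might look like this (the id parameter name is an assumption based on the description above):
```
(localcloud) 🐱 > sync storagepool id="<datastore-cluster-pool-uuid>"
```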
This NPE may happen when a VM is marked removed in the DB but its
nics on a shared network are not. This can usually happen due to a failed
VM expunge or when an admin manually marks a VM as removed in the DB but
does not clean up the nics/network resources.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives an option to separate the first right subnet from the rest, emitting individual statements for each right subnet for better cross-compatibility.
Backported from PR: #4137
update per PR suggestion
Fixes #3138
Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
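A sketch of the new options when creating a VPN customer gateway; the ikeversion and splitconnections parameter names are assumptions based on the description above, and the addresses/policies are placeholders:
```
(localcloud) 🐱 > create vpncustomergateway name=gw1 gateway=203.0.113.10 cidrlist=10.0.0.0/24,10.0.1.0/24 ipsecpsk=<psk> ikepolicy="aes128-sha1;modp1536" esppolicy="aes128-sha1" ikeversion=ikev2 splitconnections=true
```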
This PR addresses the issue raised at #4545 (fail to change service offering from local <> shared storage).
When upgrading a VM's service offering, it is validated that the new offering has the same storage scope (local or shared) as the current offering. I think that the validation makes sense as a way of preventing ROOT disks from running with an offering that does not match the current storage pool. However, the validation only compares the two offerings and does not consider that it is possible to migrate volumes between local <> shared storage pools.
The idea behind this implementation is that CloudStack should check the scope of the storage pool in which the ROOT volume is currently allocated; this way, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings that are supported for such a pool.
This PR also fixes the API command that lists offerings for a VM, so that it follows the same idea and lists offerings based on the storage pool in which the volume is allocated, not on the previous offering.
Fixes: #4545
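For example, listing the offerings valid for a given VM, which now reflects the ROOT volume's current storage pool scope:
```
(localcloud) 🐱 > list serviceofferings virtualmachineid="<vm-uuid>" filter=name,storagetype
```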
This PR makes sure no orphaned snapshot details are considered in the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed, this would be a bit complex.
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Fixes: #4808, #4941
This PR adds a force flag to the attachIso / detachIso commands, especially for VMware, where it is noticed that detaching an ISO, or attaching an ISO when another is already present, fails to do the necessary operation: on the ACS end we either answer the question returned by ESXi for the CDRom disconnect operation with No (for the detach operation) or do not answer the question at all (for the attach operation).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
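A sketch of the flag in use, assuming it is exposed as forced on both APIs (parameter name is an assumption based on the description above):
```
(localcloud) 🐱 > detach iso virtualmachineid="<vm-uuid>" forced=true
(localcloud) 🐱 > attach iso id="<iso-uuid>" virtualmachineid="<vm-uuid>" forced=true
```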
A volume can either have an associated disk offering (for DATA disks, and ROOT disks of VMs created from ISOs) or a compute/service offering (for ROOT disks of VMs created from templates).
This fix simplifies and corrects the check to return the appropriate response keys in these cases.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes CLOUDSTACK-10434. I think some APIs lack an access check; they are listed in the table below, and I also give the patch to add the access check for each API in the table. Anyone could change this table: if you think an API does not need an access check, change its label to "no".
| API | Lack? |
| --- | --- |
| VolumeApiServiceImpl#updateVolume | yes |
| VolumeApiServiceImpl#detachVolumeViaDestroyVM | yes |
| VolumeApiServiceImpl#takeSnapshot | yes |
| VolumeApiServiceImpl#migrateVolume | yes |
| AccountManagerImpl#createApiKeyAndSecretKey | yes |
| LoadBalancingRulesManagerImpl#applyLBStickinessPolicy | yes |
| LoadBalancingRulesManagerImpl#applyLBHealthCheckPolicy | yes |
| TemplateManagerImpl#createPrivateTemplate | yes |
| SnapshotManagerImpl#updateSnapshotPolicy | |
Co-authored-by: lujie <lujie@foxmail.com>
* prevent other VM disks from getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
A VM can have multiple ROOT volumes, and some can be on a zone-wide store, so iterate over all of them until a cluster ID is found.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake, VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes a regression introduced in 71c5dbcf49
which would cause the capacity bytes of certain pools, such as SolidFire,
to be updated when they shouldn't get updated by StatsCollector.
Fixes #4911
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes: #4462
Problem Statement:
In the case of VMware, when a VM having multiple data disks is destroyed (without expunge) and then recovered, the previous data disks are not attached to the VM as they were before the destroy. Only the root disk is attached to the VM.
Root cause:
All data disks were removed as part of the VM destroy. Only the volumes which are selected for deletion (while destroying the VM) are supposed to be detached and destroyed.
Solution:
During VM destroy, detach and destroy only the volumes which are selected during the destroy. Detach the other volumes during the expunge of the VM.
If the VM details contain rootdisksize, the volume entry in the DB should reflect the correct size when a VM reset is performed.
Fixes #3957
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
When calling the listUsageRecords API, records per domain are fetched recursively. This is not the case if you specify a domain ID.
This PR adds a new parameter (isRecursive) to enable fetching records recursively when passing the domain ID.
Fixes #4517
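For example (the domain ID and dates are placeholders):
```
(localcloud) 🐱 > list usagerecords domainid="<domain-uuid>" isrecursive=true startdate=2021-05-01 enddate=2021-05-31
```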
Adds capacity checks to the RandomAllocator (host allocator).
Factors out the host CPU capability and capacity check with respect to the service offering into CapacityManager.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes #4244:
- deploying VMs from ISOs and from templates with UEFI boot type
- deploying VMs from ISOs and from templates with UEFI boot type with volumes in RAW format
This PR aims at introducing a persistence mode in L2 networks and enhancing the behavior in Isolated networks.
Doc PR apache/cloudstack-documentation#183
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
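As an illustrative sketch only, assuming the persistence mode is carried by the existing ispersistent flag on the network offering (see the doc PR above for the authoritative usage):
```
(localcloud) 🐱 > create networkoffering name=L2-persistent displaytext=L2-persistent guestiptype=L2 traffictype=Guest ispersistent=true
```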
This contains 3 main changes
(1) add NETWORK_STATS_ethX for all nics with public ips in VPC VRs (current: NETWORK_STATS_eth1)
(2) DO NOT create records in user_statistics for each VPC tier (only one record per public nic per VPC VR)
(3) send NetworkUsageCommand before unplugging a NIC with public IPs from VPC VR
Public IP addresses dedicated to one domain should not be accessible
by other domains. Also, the root admin should be able to display all
public IP addresses in the system.
Currently, the following issues exist:
1. A public IP address assigned to one domain can be accessed by
other sibling domains.
2. If use.system.public.ip is false, then child domains should not
see the public IPs of the ROOT domain.
Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 59,
"publicipaddress": [
```
After fix
```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 10,
```
Fixes https://github.com/apache/cloudstack/issues/4566
Sets `memoryintfreekbs` to zero if it is greater than `memorykbs`. Caused by KVM returning the RSS memory of the process running the VM rather than the free memory inside the VM.
Co-authored-by: dahn <daan.hoogland@gmail.com>
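The affected fields can be inspected via the VM metrics listing, e.g.:
```
(localcloud) 🐱 > list virtualmachinesmetrics filter=name,memorykbs,memoryintfreekbs
```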
This PR fixes the issue pertaining to volume resize on VMware for deploy-as-is templates. VMware deploy-as-is templates are those that are deployed as per the specification in the imported OVF. Hence, an overridden root disk size will not be adhered to for such templates. Moreover, when we deploy VMs in a stopped state and resize the volume, the root disk doesn't get resized; the volume size is merely updated in the DB.
This PR also includes the following (for deploy as-is templates):
- Disables overriding the root disk size during VM deployment on the UI
- Disables selection of compute offerings with a root disk size specified, at the time of deployment
- Provides users with the option to deploy a VM in a stopped state via the UI (so as to give users an option to resize the volumes before starting the VM)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* Update vm_template table removed field when template is deleted
* Update method name
* address comment
* Extracted code to separate methods
* Address test failure
* refactor test cleanup
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* vpc: dnsmasq is not started if use.external.dns is true
* Revert "vpc: dnsmasq is not started if use.external.dns is true"
This reverts commit ee58fe0787.
* #4806 vpc: fix zone dns1/dns2 missing in VPC VR when restarting the VPC or VPC VR
* Fix NPE when the CloudStack agent fails to connect to the mgt server
If the `ramOvercommitRatio` field is missing in the user_vm_details table,
the agent throws an NPE after restarting.
This is because user_vm_details has a 'cpuOvercommitRatio' entry for all
VMs, but for some VMs the 'ramOvercommitRatio' field is missing in the table.
* code feedback
* server: delete template on storage over capacity threshold
While deleting a template for a specific zone, the check should be done only for writable secondary storages, not for storages within the available capacity threshold.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix for ISOs and refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove writable store check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix exception message
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This fixes the osType ID returned in the listUsageRecords API response to
be the UUID instead of the internal DB ID, and also returns the OS category ID
(UUID) and name.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* novnc: Add client IP check for novnc console in cloudstack 4.16
* novnc ip check: fix restart of CPVM or mgt server not updating the novnc param
* novnc ip check: move to method
* Updated libvirt's native reboot operation for VMs on KVM using an ACPI event, and added a 'forced' reboot option to stop and start the VM (using the rebootVirtualMachine API)
* Added 'forced' reboot option for System VM and Router
- New parameter 'forced' in rebootSystemVm API, to stop and then start System VM
- New parameter 'forced' in rebootRouter API, to force stop and then start Router
* Added force reboot tests for User VM, System VM and Router
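For example, using the 'forced' parameter named above on each of the three APIs:
```
(localcloud) 🐱 > reboot virtualmachine id="<vm-uuid>" forced=true
(localcloud) 🐱 > reboot systemvm id="<systemvm-uuid>" forced=true
(localcloud) 🐱 > reboot router id="<router-uuid>" forced=true
```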
* server: fix failure to remove template/iso if upload from local fails
When uploading a template/ISO/volume from local fails, the install_path will not be a full file path, so removing it will fail.
```
mysql> select install_path from template_store_ref;
+--------------------------------------------------------------------+
| install_path |
+--------------------------------------------------------------------+
| template/tmpl/1/3/805f4763-248e-40ec-b79a-b868cc480d0a.qcow2 |
| template/tmpl/1/4/c7e32c9e-5e72-3726-85cf-aa5ccd84118d.qcow2 |
| template/tmpl/2/201/bc4f4f08-138a-31b8-af1a-d4450eff7982.qcow2 |
| template/tmpl/2/202 |
| template/tmpl/2/203/203-2-d47f8cde-a2a8-31e7-a826-2628ad98a6c8.iso |
| template/tmpl/2/204 |
| template/tmpl/5/205 |
| template/tmpl/2/206 |
| template/tmpl/2/207 |
| template/tmpl/2/208 |
| template/tmpl/2/209 |
| template/tmpl/2/210 |
+--------------------------------------------------------------------+
12 rows in set (0.00 sec)
mysql> select install_path from volume_store_ref;
+---------------------------------------------------------+
| install_path |
+---------------------------------------------------------+
| volumes/2/22 |
| volumes/2/19/f93face9-6521-4184-b89a-cb07f86bbae8.qcow2 |
| volumes/2/23 |
| volumes/2/24 |
+---------------------------------------------------------+
4 rows in set (0.00 sec)
```
* server: disallow removing template/iso in NotUpload and UploadInProgress state
While finding pools for volume migration, list the following compatible storages (see the example below):
- all zone-wide storages of the same hypervisor.
- when the volume is attached to a VM, then all storages from the same cluster as that of VM.
- for detached volume, all storages that belong to clusters of the same hypervisor.
Fixes #4692, Fixes #4400
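As an illustration, these rules surface through the findStoragePoolsForMigration API; the CloudMonkey form below is a sketch with a placeholder volume ID:
```
(localcloud) 🐱 > find storagepoolsformigration id="<volume-uuid>"
```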