This PR fixes #5047, which can be reproduced on Zones with _(I) Advanced Networks, (II) Security Groups enabled for the Zone, (III) a network offering without Security Groups_; for instance, `DefaultSharedNetworkOffering`, which does not list Security Group as a supported service.
The issue is due to the following code inside the method `VirtualMachineManagerImpl.orchestrateReboot`:
[VirtualMachineManagerImpl.java#L3340](280c13a4bb/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java (L3340)).
```
final Answer rebootAnswer = cmds.getAnswer(RebootAnswer.class);
if (rebootAnswer != null && rebootAnswer.getResult()) {
    if (dc.isSecurityGroupEnabled() && vm.getType() == VirtualMachine.Type.User) {
        List<Long> affectedVms = new ArrayList<Long>();
        affectedVms.add(vm.getId());
        _securityGroupManager.scheduleRulesetUpdateToHosts(affectedVms, true, null);
    }
    return;
}
```
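As a non-authoritative sketch of one way to tighten this (not necessarily the merged change), the ruleset update could additionally require that at least one of the VM's networks actually provides the SecurityGroup service; the helper name and the `_nicsDao`/`_networkModel` usage below are assumptions for illustration.
```
// Minimal sketch (assumed names): schedule a ruleset update only when one of the
// VM's networks actually offers the SecurityGroup service, not merely when the
// zone-level flag is enabled.
private boolean vmUsesSecurityGroups(final VirtualMachine vm) {
    for (final NicVO nic : _nicsDao.listByVmId(vm.getId())) {
        if (_networkModel.areServicesSupportedInNetwork(nic.getNetworkId(), Network.Service.SecurityGroup)) {
            return true;
        }
    }
    return false;
}
```
The guard above would then become `dc.isSecurityGroupEnabled() && vmUsesSecurityGroups(vm) && vm.getType() == VirtualMachine.Type.User`.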
Fixes: #4972
This PR sets a system VM's agent state to Disconnected when the VM is stopped. Currently, when a system VM (Console Proxy VM / Secondary Storage VM) is stopped, the agent state still appears as 'Up'.
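A minimal sketch of the idea, assuming the stopped system VM's agent host id (`agentHostId`) has already been resolved; the lookup itself is omitted here.
```
// Minimal sketch (assumed context): once the system VM stop is confirmed, mark
// its agent as disconnected so it no longer shows as 'Up'.
if (vm.getType() == VirtualMachine.Type.ConsoleProxy
        || vm.getType() == VirtualMachine.Type.SecondaryStorageVm) {
    _agentMgr.disconnectWithoutInvestigation(agentHostId, Status.Event.ShutdownRequested);
}
```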
* server: destroy ssvm, cpvm on last host maintenance
When a single or last UP host enters maintenance, merely stopping the SSVM and CPVM leaves the VMs behind on the hypervisor side. As these system VMs will be recreated anyway, they can be destroyed instead.
Fixes #3719
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix methods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* immediately destroy systemvms
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix destroy
Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set to true only for the delete commands of the SSVM and CPVM.
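A rough illustration of the flag; the field and accessor names are assumptions for illustration, not a verbatim excerpt.
```
// Rough sketch: a per-command flag that lets selected commands reach the host
// agent even while the host is in maintenance. Only the SSVM/CPVM delete
// commands would set it to true.
public abstract class Command {
    private boolean bypassHostMaintenance = false;

    public boolean isBypassHostMaintenance() {
        return bypassHostMaintenance;
    }

    public void setBypassHostMaintenance(final boolean bypass) {
        this.bypassHostMaintenance = bypass;
    }
}
```
The agent-side dispatch would then honour this flag when deciding whether to reject commands for a host in maintenance.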
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unit test fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing return statement
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
The VM should be stopped with cleanup before calling expunge, otherwise the server may throw an error while the host is in the PrepareForMaintenance state.
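A minimal sketch of the intended ordering, assuming the `VirtualMachineManager` (`_itMgr`) calls shown here are the ones used for this.
```
// Minimal sketch (assumed context): stop with cleanup first, then expunge, so the
// expunge does not fail while the host is in PrepareForMaintenance.
_itMgr.advanceStop(systemVm.getUuid(), true);  // true = clean up even if the agent cannot be reached
_itMgr.expunge(systemVm.getUuid());
```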
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* rename
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* forceha: fix vm not started if it is powered off from inside
Steps to reproduce the issue:
(1) Make sure force.ha is set to true in the global settings; if not, change it to true and restart the management server.
(2) Create a service offering with HA not enabled.
(3) Create a VM.
(4) Log into the VM and power it off via the CLI.
Expected result: the VM is started again by CloudStack.
Actual result: the VM is not started.
* forceha: fix vms still running if host is force-removed
A host can be force-removed; however, the VMs are then stopped in CloudStack but not stopped on the host.
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
  "success": true
}
```
After some minutes, VMs are still running on the host:
```
root@mgt01:~# ssh node63 virsh list
 Id   Name        State
----------------------------
 1    i-2-19-VM   running
 2    i-2-11-VM   running
```
The error message is:
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```
* forceha: Make ForceHA dynamic
This PR addresses the issue raised in #4545 (Fail to change Service offering from local <> shared storage).
When upgrading a VM's service offering, it is validated that the new offering has the same storage scope (local or shared) as the current offering. The validation makes sense as a way of preventing ROOT disks from running with an offering that does not match the current storage pool. However, the validation only compares the two offerings and does not consider that it is possible to migrate Volumes between local <> shared storage pools.
The idea behind this implementation is that CloudStack should check the scope of the storage pool in which the ROOT volume is currently allocated; this way, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings supported for that pool.
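A minimal sketch of that check, assuming the DAO names and the local-storage getter shown below; this is illustrative, not the merged code.
```
// Minimal sketch (assumed names): validate the new offering against the scope of
// the pool the ROOT volume currently lives on, instead of against the old offering.
final VolumeVO rootVolume = _volsDao.findByInstanceAndType(vm.getId(), Volume.Type.ROOT).get(0);
final StoragePoolVO currentPool = _storagePoolDao.findById(rootVolume.getPoolId());
if (currentPool.isLocal() != newServiceOffering.isUseLocalStorage()) {
    throw new InvalidParameterValueException(
            "The new offering's storage scope does not match the ROOT volume's current storage pool");
}
```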
This PR also fixes an issue in the API command that lists offerings for a VM: it should follow the same idea and list offerings based on the storage pool where the volume is allocated, not on the previous offering.
Fixes: #4545
This PR makes sure no orphaned snapshot details are considered in the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed, this would be a bit complicated.
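A minimal sketch of the guard, assuming the snapshot DAO helpers and the detail accessor shown here.
```
// Minimal sketch (assumed names): ignore a snapshot detail row at startup when its
// parent snapshot record has already been marked as removed.
final SnapshotVO parent = _snapshotDao.findByIdIncludingRemoved(detail.getResourceId());
if (parent == null || parent.getRemoved() != null) {
    continue;  // orphaned detail, skip it in the cleanup job
}
```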
Co-authored-by: Daan Hoogland <dahn@onecht.net>
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
A VM can have multiple ROOT volumes, and some of them can be on a zone-wide store; therefore, iterate over all of them until a cluster ID is found.
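A minimal sketch of that loop, assuming the volume and storage pool DAO names shown here.
```
// Minimal sketch (assumed names): a VM may have several ROOT volumes, some on
// zone-wide storage, so iterate until one of them yields a cluster ID.
Long clusterId = null;
for (final VolumeVO root : _volsDao.findByInstanceAndType(vm.getId(), Volume.Type.ROOT)) {
    if (root.getPoolId() == null) {
        continue;  // volume not yet allocated to a pool
    }
    final StoragePoolVO pool = _storagePoolDao.findById(root.getPoolId());
    if (pool != null && pool.getClusterId() != null) {
        clusterId = pool.getClusterId();
        break;
    }
}
```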
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake; VMware won't have pods.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes: #4462
Problem Statement:
In the case of VMware, when a VM with multiple data disks is destroyed (without expunge) and the VM is then recovered, the previous data disks are not attached to the VM as they were before the destroy. Only the root disk is attached to the VM.
Root cause:
All data disks were removed as part of the VM destroy. Only the volumes selected for deletion (while destroying the VM) are supposed to be detached and destroyed.
Solution:
During VM destroy, detach and destroy only the volumes that were selected for deletion. Detach the remaining volumes during the expunge of the VM, as sketched below.
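A minimal sketch of that rule, where `volumeIdsToDestroy` is the user's selection and `detachAndDestroyVolume` is a hypothetical helper used only for illustration.
```
// Minimal sketch (assumed names): during destroy, only the selected data disks are
// detached and destroyed; the rest stay attached and are detached at expunge time.
for (final VolumeVO vol : _volsDao.findByInstanceAndType(vm.getId(), Volume.Type.DATADISK)) {
    if (volumeIdsToDestroy.contains(vol.getId())) {
        detachAndDestroyVolume(vol);  // hypothetical helper, not a CloudStack API
    }
    // other data disks are left attached until the VM is expunged
}
```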
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster during a stopped VM migration, and to find the target datastore using that host in the target cluster.
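A minimal sketch of the VMware side using vim25 types, assuming the target host, resource pool, and datastore in the destination cluster have already been resolved; `morTargetHost`, `morTargetPool`, and `morTargetDatastore` are placeholders.
```
// Minimal sketch (vim25 types; MOR variables are placeholders): a relocate spec
// for an inter-cluster move without shared storage must carry an explicit host,
// and the datastore is picked so that this host can see it.
final VirtualMachineRelocateSpec relocateSpec = new VirtualMachineRelocateSpec();
relocateSpec.setHost(morTargetHost);            // required when clusters do not share storage
relocateSpec.setPool(morTargetPool);            // resource pool of the target cluster
relocateSpec.setDatastore(morTargetDatastore);  // datastore resolved via the target host
// the spec is then handed to RelocateVM_Task for the stopped VM
```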
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* 4.14:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
This PR fixes an issue when moving a VM from one account to another.
Steps to reproduce the issue:
(1) Create a VM with multiple shared networks (in an advanced zone, or an advanced zone with security groups).
(2) Create another account (in the same domain, which can also access the shared networks).
(3) Move the VM to the new account, passing a list of network IDs.
Expected result: the VM has NICs on the networks in the same order as specified in the API request, and the NICs keep the same IPs as before.
Actual result: the network order is not the same as specified, and the IPs are changed.
For Basic networks, isolation methods are not provided, and an exception is thrown when trying to encode the VLAN id. That's why we have to check, before encoding, that the list of isolation methods is not empty.
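A minimal sketch of the guard, assuming the surrounding variable names (`physicalNetwork`, `vlanId`, `uri`).
```
// Minimal sketch (assumed names): encode the VLAN id only when the physical
// network actually declares a VLAN isolation method; Basic networks provide none.
final List<String> isolationMethods = physicalNetwork.getIsolationMethods();
if (isolationMethods != null && !isolationMethods.isEmpty()
        && "VLAN".equalsIgnoreCase(isolationMethods.get(0))) {
    uri = BroadcastDomainType.Vlan.toUri(vlanId);
}
```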
* support for handling incremental snaps (on DB entries) on xen
* Addressed comments
* Update NfsSecondaryStorageResource.java
Adjusted spacing in comment/log.
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This feature enables the following:
- Balanced migration of data objects from a source Image store to destination Image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
After a VM is shut down, the power state isn't updated immediately, which prevents changing the service offering.
This PR updates the power state immediately after the VM is confirmed to be shut down.
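A minimal sketch of the update, assuming the `VMInstanceDao#updatePowerState` signature and field names shown here.
```
// Minimal sketch (assumed names/signature): as soon as the stop is confirmed,
// persist PowerOff so a follow-up service offering change is not blocked by a
// stale PowerOn record.
if (stopAnswer != null && stopAnswer.getResult()) {
    _vmDao.updatePowerState(vm.getId(), vm.getPowerHostId(),
            VirtualMachine.PowerState.PowerOff, new Date());
}
```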
Fixes: #3159
When removing a secondary NIC from a Running VM, if the default NIC is updated to that secondary NIC before the NIC removal completes, the VM ends up without a default NIC (and cannot be started) once both operations have finished.
This is because the UpdateDefaultNic API is not handled as a VM work job (AddNicToVMCmd and RemoveNicFromVMCmd are), so it is processed before the NIC is removed. The result is that the secondary NIC becomes the default NIC and then gets removed.
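A rough sketch of the direction, following the existing VM work job submission pattern; `VmWorkUpdateDefaultNic` is a hypothetical work class named only for illustration.
```
// Rough sketch (VmWorkUpdateDefaultNic is hypothetical): submit the default-NIC
// update through the per-VM work job queue so it is serialized with NIC add/remove.
final VmWorkJobVO workJob = new VmWorkJobVO(context.getContextId());
workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
workJob.setCmd(VmWorkUpdateDefaultNic.class.getName());
workJob.setVmType(VirtualMachine.Type.Instance);
workJob.setVmInstanceId(vm.getId());
_jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
```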