Updating the op_it_work table entry appropriately in the DB once the unfinished work item is completed.
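A minimal sketch of the idea, assuming a hypothetical WorkItemDao and Step enum standing in for the persistence layer behind the op_it_work table (the names are illustrative, not the actual CloudStack classes):

    // Illustrative sketch only: WorkItem, Step and WorkItemDao are hypothetical
    // stand-ins for the persistence layer behind the op_it_work table.
    public class WorkItemCompletion {

        enum Step { PREPARE, STARTED, DONE }

        static class WorkItem {
            final long id;
            Step step;
            WorkItem(long id, Step step) { this.id = id; this.step = step; }
        }

        interface WorkItemDao {
            WorkItem findById(long id);
            void update(WorkItem item);
        }

        // Once the previously unfinished work item completes, persist the terminal
        // step so cleanup logic no longer treats the entry as pending work.
        static void markCompleted(WorkItemDao dao, long workItemId) {
            WorkItem item = dao.findById(workItemId);
            if (item != null && item.step != Step.DONE) {
                item.step = Step.DONE;
                dao.update(item);
            }
        }
    }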
(cherry picked from commit c6a8659ac2)
Signed-off-by: Animesh Chaturvedi <animesh@apache.org>
Use the cache store that is already there instead of randomly picking one in case there are multiple
cache stores in the scope.
(cherry picked from commit e00241f41d)
Signed-off-by: Animesh Chaturvedi <animesh@apache.org>
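A minimal sketch of the selection preference described in the entry above, with an illustrative CacheStore type rather than the actual CloudStack interfaces:

    import java.util.List;

    // Illustrative sketch: prefer a cache store that already holds the object over a
    // random pick when several cache stores are in scope.
    public class CacheStoreSelector {

        static class CacheStore {
            final long id;
            final boolean alreadyHasObject; // the object is already staged on this store
            CacheStore(long id, boolean alreadyHasObject) {
                this.id = id;
                this.alreadyHasObject = alreadyHasObject;
            }
        }

        static CacheStore select(List<CacheStore> storesInScope) {
            for (CacheStore store : storesInScope) {
                if (store.alreadyHasObject) {
                    return store; // reuse the existing copy instead of creating another one
                }
            }
            // fall back deterministically when nothing is cached yet
            return storesInScope.isEmpty() ? null : storesInScope.get(0);
        }
    }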
encoded. This caused the createStoragePool or addImageStore command to fail if special
characters were present. Updated the code to pass user, password and domain as part
of details while adding primary or secondary storage. Also made changes on the server side to
handle it.
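A rough client-side sketch of the approach, assuming a generic key/value details map is sent with the add-storage command (the parameter names here are illustrative):

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch: instead of embedding credentials in the storage URL (where
    // special characters would have to survive URL-encoding), they travel as separate
    // key/value details alongside the createStoragePool / addImageStore request.
    public class AddStorageRequest {

        static Map<String, String> buildDetails(String user, String password, String domain) {
            Map<String, String> details = new HashMap<>();
            details.put("user", user);
            details.put("password", password);
            details.put("domain", domain);
            return details;
        }

        public static void main(String[] args) {
            Map<String, String> details = buildDetails("admin", "p@ss&word!", "vsphere.local");
            String url = "vmfs://vcenter.example.com/DC/Cluster"; // URL stays free of credentials
            System.out.println("url=" + url + " details=" + details);
        }
    }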
(cherry picked from commit f0b861fede)
Signed-off-by: Animesh Chaturvedi <animesh@apache.org>
The fix is to fail the start operation if a VM snapshot is in progress.
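A minimal sketch of the guard, with a hypothetical status lookup standing in for the actual snapshot-state check:

    // Illustrative sketch: the start path checks for an in-flight VM snapshot and
    // fails fast instead of racing with it.
    public class StartVmGuard {

        interface VmSnapshotStatus {
            boolean isSnapshotInProgress(long vmId); // hypothetical lookup
        }

        static void startVm(long vmId, VmSnapshotStatus status) {
            if (status.isSnapshotInProgress(vmId)) {
                throw new IllegalStateException(
                    "VM snapshot in progress for VM " + vmId + "; rejecting start");
            }
            // ... proceed with the normal start sequence ...
        }
    }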
(cherry picked from commit 775fa0f0d0)
Signed-off-by: Animesh Chaturvedi <animesh@apache.org>
For a VMware VM, during root volume preparation, if we are switching to a new volume, force-expunge the root disk that was created from the old template.
Otherwise the storage garbage collector will later try to expunge the old disk marked for expunge and fail with a 'Cannot delete file' exception,
since in VMware the new root vmdk has the same name and is now in use.
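A rough sketch of the behaviour, with hypothetical volume and expunge types rather than the actual VMware storage code:

    // Illustrative sketch: when root volume preparation switches the VM to a new
    // volume, the disk created from the old template is expunged immediately instead
    // of being left for the storage garbage collector.
    public class RootVolumeSwitch {

        static class Volume {
            final long id;
            final String vmdkName;
            Volume(long id, String vmdkName) { this.id = id; this.vmdkName = vmdkName; }
        }

        interface VolumeService {
            void expungeNow(Volume volume); // hypothetical synchronous force-expunge
        }

        static void switchRootVolume(Volume oldRoot, Volume newRoot, VolumeService service) {
            // Force-expunge the old root disk right away: if it were only marked for
            // expunge, the garbage collector would later fail with 'Cannot delete file'
            // because the new root vmdk carries the same name and is already in use.
            service.expungeNow(oldRoot);
            // ... continue preparing newRoot for the VM ...
        }
    }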
(cherry picked from commit 94ea2736f4)
Signed-off-by: Animesh Chaturvedi <animesh@apache.org>
Introduces a force option in delete network to forcibly delete a
network. This comes in handy in rare cases where the network fails to implement
and is left in Shutdown state, but the network shutdown to roll back the
implement process fails as well.
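A rough sketch of how such a force flag can short-circuit the normal path (illustrative names, not the actual network manager API):

    // Illustrative sketch: with forced=true, deletion proceeds even when the usual
    // shutdown/rollback of a half-implemented network fails.
    public class DeleteNetwork {

        interface NetworkManager {
            boolean shutdownNetwork(long networkId);       // hypothetical
            void cleanupNetworkResources(long networkId);  // hypothetical best-effort cleanup
            void removeNetwork(long networkId);            // hypothetical final removal
        }

        static boolean deleteNetwork(long networkId, boolean forced, NetworkManager manager) {
            boolean shutdownOk = manager.shutdownNetwork(networkId);
            if (!shutdownOk && !forced) {
                return false; // normal path: refuse to delete a network that would not shut down
            }
            if (!shutdownOk) {
                // forced path: best-effort cleanup of whatever the failed implement left behind
                manager.cleanupNetworkResources(networkId);
            }
            manager.removeNetwork(networkId);
            return true;
        }
    }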
Create two storage pools, one with storage tag X, one with storage tag Y.
Create a service offering with storage tag X.
Create a disk offering with storage tag Y.
Attempt to deploy a virtual machine with a datadisk using the given offerings; it fails.
The deployment planner keeps a global 'avoid' object. It loops through each volume to
be created, asking the storage allocators for matching pools and passing this avoid object.
The first disk matches a pool or pools, the allocator adds ALL other pools to the avoid object, and
the deployment planner attaches the matching pools to a list for that disk.
The second disk matches a pool, the allocator adds all other pools to the avoid object, and the
deployment planner says "wait, the matching pool is in avoid, can't use it". Oops. In fact, at this
point ALL pools are in avoid (unless there are other pools that have both tags).
Need to remove the matching pool from the avoid set during each select phase.
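A simplified sketch of the fix, using plain collections instead of the real planner and allocator classes: after the allocator returns, the pools it actually matched are pulled back out of the shared avoid set before the next volume is processed.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch of the avoid-set handling; Volume, Pool and the allocator
    // call are stand-ins for the real deployment planner types.
    public class AvoidSetPlanner {

        static class Pool {
            final String name;
            final Set<String> tags;
            Pool(String name, Set<String> tags) { this.name = name; this.tags = tags; }
            public String toString() { return name; }
        }

        static class Volume {
            final String name;
            final String requiredTag;
            Volume(String name, String requiredTag) { this.name = name; this.requiredTag = requiredTag; }
            public String toString() { return name; }
        }

        // The "allocator": returns pools matching the volume's tag and, as described
        // above, adds every non-matching pool to the shared avoid set.
        static List<Pool> allocate(Volume volume, List<Pool> allPools, Set<Pool> avoid) {
            List<Pool> matching = new ArrayList<>();
            for (Pool pool : allPools) {
                if (pool.tags.contains(volume.requiredTag)) {
                    matching.add(pool);
                } else {
                    avoid.add(pool);
                }
            }
            return matching;
        }

        static Map<Volume, List<Pool>> plan(List<Volume> volumes, List<Pool> allPools) {
            Set<Pool> avoid = new HashSet<>();
            Map<Volume, List<Pool>> result = new LinkedHashMap<>();
            for (Volume volume : volumes) {
                List<Pool> matching = allocate(volume, allPools, avoid);
                // The fix: pools matched for this volume must not stay poisoned in the
                // avoid set just because a previous volume's allocation put them there.
                avoid.removeAll(matching);
                matching.removeIf(avoid::contains); // with the fix this no longer empties the list
                result.put(volume, matching);
            }
            return result;
        }

        public static void main(String[] args) {
            Pool poolX = new Pool("pool-X", Set.of("X"));
            Pool poolY = new Pool("pool-Y", Set.of("Y"));
            Volume root = new Volume("ROOT", "X");
            Volume data = new Volume("DATA", "Y");
            // Without avoid.removeAll(matching), DATA would end up with no candidate pools.
            System.out.println(plan(List.of(root, data), List.of(poolX, poolY)));
        }
    }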
This happens when concurrent operations are performed on the same VM. Earlier this was not seen because all VM operations were synchronized at the agent layer. With the execute.in.sequence
global config set to false, that restriction is no longer there. In the latest code, operations on a single VM are synchronized by maintaining a job queue. In some scenarios the destroy VM operation
was not going through this job queue mechanism and so was failing due to simultaneous operations.
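A simplified sketch of the idea, serializing all operations for a given VM through a single-threaded queue; this is an in-memory analogue of the per-VM job queue, not the actual CloudStack job framework.

    import java.util.Map;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Illustrative sketch: every operation on a VM (start, stop, destroy, ...) is
    // submitted to that VM's own single-threaded queue, so two operations on the same
    // VM can never run concurrently even though different VMs proceed in parallel.
    public class VmJobQueue {

        private final Map<Long, ExecutorService> queues = new ConcurrentHashMap<>();

        public <T> Future<T> submit(long vmId, Callable<T> operation) {
            ExecutorService queue = queues.computeIfAbsent(vmId,
                    id -> Executors.newSingleThreadExecutor());
            return queue.submit(operation);
        }

        public static void main(String[] args) throws Exception {
            VmJobQueue jobQueue = new VmJobQueue();
            // A destroy submitted while a start is in flight simply waits its turn.
            Future<String> start = jobQueue.submit(42L, () -> "started VM 42");
            Future<String> destroy = jobQueue.submit(42L, () -> "destroyed VM 42");
            System.out.println(start.get());
            System.out.println(destroy.get());
            System.exit(0); // the executors in this toy example use non-daemon threads
        }
    }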