The KVM HA runner uses any NFS secondary storage resource available to a
host to store its HA data. This causes template deletes to fail, because
the KVMHA directory is not empty and cannot be removed. So if a KVMHA
directory is found, delete its contents before trying to delete it.
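As a rough illustration of the cleanup order, here is a minimal sketch assuming
a plain java.io.File view of the secondary storage mount and a hypothetical
helper name (deleteKvmHaDir); the real agent code may differ:

    import java.io.File;

    // Hypothetical helper: empty the KVMHA directory before removing it,
    // so the surrounding template delete no longer fails on a non-empty
    // directory. The directory name "KVMHA" is taken from the description
    // above; everything else is assumed.
    public class KvmHaCleanup {
        public static boolean deleteKvmHaDir(File secStorageMount) {
            File kvmHaDir = new File(secStorageMount, "KVMHA");
            if (!kvmHaDir.isDirectory()) {
                return true; // nothing to clean up
            }
            File[] children = kvmHaDir.listFiles();
            if (children != null) {
                for (File child : children) {
                    child.delete(); // heartbeat files; no subdirectories expected
                }
            }
            return kvmHaDir.delete(); // now empty, so the delete can succeed
        }
    }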
Tested with a new 4.1 zone deployment. Verified the bug was reproducible
with 4.1 HEAD, applied the patch, ran through adding two NFS primary
storages, verified KVM heartbeat was working on them, then ran various
secondary storage operations (register template, download volume, take
snapshot) and verified that they worked, and that KVM heartbeat
operations were NOT acting on them.
Signed-off-by: Chip Childers <chip.childers@gmail.com>
pools the proper way won't cause problems for the KVM HA Monitor, this patch
closes holes. Call the KVMStoragePool deleteStoragePool that properly removes
the pool from the KVMHA hashmap, instead of the pool's direct delete() call.
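A toy sketch of why the manager-level call matters (class and method names here
are illustrative stand-ins, not the real CloudStack API): the HA monitor tracks
heartbeat pools in a map, and only a manager-level deleteStoragePool() removes
the entry, whereas pool.delete() leaves it stale.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: removal must go through the manager so the
    // KVMHA map stays consistent with the pools that actually exist.
    class KvmHaPoolTrackingSketch {
        private final Map<String, Object> kvmHaPools = new ConcurrentHashMap<>();

        void addPool(String poolUuid, Object pool) {
            kvmHaPools.put(poolUuid, pool); // heartbeat monitor iterates this map
        }

        boolean deleteStoragePool(String poolUuid) {
            kvmHaPools.remove(poolUuid); // drop the HA entry first
            // ... then tear down the underlying libvirt pool / unmount here
            return true;
        }
    }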
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
deleted NFS pools, causing failures when defining new storage pools. Sometimes
a storage pool has never been used on a host, and getStoragePool fails when
copying templates or during storage migration. deleteStoragePool(pool) often
fails silently, leaving no pool defined in libvirt but a mountpoint left
behind. This patch handles some of these exceptions and surfaces any issues
via logging.
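The defensive pattern described above, sketched against the plain libvirt-java
API (the surrounding CloudStack wrapper code is assumed):

    import org.apache.log4j.Logger;
    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;

    public class PoolLookupSketch {
        private static final Logger s_logger = Logger.getLogger(PoolLookupSketch.class);

        // A pool that has never been used on this host may simply not be
        // defined in libvirt, so treat the lookup failure as recoverable
        // and surface it in the logs instead of letting it propagate.
        public static StoragePool lookupPool(Connect conn, String uuid) {
            try {
                return conn.storagePoolLookupByUUIDString(uuid);
            } catch (LibvirtException e) {
                s_logger.warn("Storage pool " + uuid + " is not defined on this host: "
                        + e.getMessage());
                return null; // caller can (re)define and mount the pool, then retry
            }
        }
    }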
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
The collection of network usage from the VPC virtual router on KVM does not
work, because there is no corresponding procedure to handle the VPC virtual
router case (cmd.isForVpc() == true).
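Only cmd.isForVpc() comes from the commit itself; the rest of this sketch uses
placeholder names to show the shape of the missing branch:

    // Toy dispatcher: the fix is essentially to stop treating every router
    // the same and branch on cmd.isForVpc() to a VPC-specific usage path.
    class NetworkUsageDispatchSketch {
        interface UsageCommand {
            boolean isForVpc();
            String routerIp();
        }

        long collect(UsageCommand cmd) {
            if (cmd.isForVpc()) {
                return collectVpcUsage(cmd.routerIp());    // VPC virtual router path
            }
            return collectRouterUsage(cmd.routerIp());     // existing non-VPC path
        }

        private long collectVpcUsage(String ip) { return 0L; }    // placeholder
        private long collectRouterUsage(String ip) { return 0L; } // placeholder
    }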
Reviewed-by: Kishan Kavala <kishan@apache.org>
Reported-by: Wei Zhou <w.zhou@leaseweb.com>
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
KVM to manager. This adds collection of available storage on KVM, not
just used storage.
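In libvirt-java terms, the extra number is already available on the pool info;
a minimal sketch (how the values are sent to the management server is assumed):

    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;
    import org.libvirt.StoragePoolInfo;

    public class LocalStorageStatsSketch {
        // Report both used and available bytes for a pool, not just used.
        public static long[] usedAndAvailable(StoragePool pool) throws LibvirtException {
            StoragePoolInfo info = pool.getInfo();
            long used = info.allocation;     // bytes already allocated
            long available = info.available; // bytes still free in the pool
            return new long[] { used, available };
        }
    }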
Bugfix-for: 4.0.2, 4.1, master
Submitted-by: Ted Smith <darnoth@gmail.com>
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
Detail: When we stop a VM, its definition is no longer valid. Therefore, we
need to catch the exception thrown by libvirt when looking up the
non-existent domain by UUID while checking whether it is shut down.
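A sketch of that guard using the libvirt-java API directly (the surrounding
agent code is assumed):

    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;

    public class DomainLookupSketch {
        // Returns true when libvirt no longer knows the domain, which is the
        // expected outcome for a stopped, non-persistent VM.
        public static boolean isDomainGone(Connect conn, String uuid) {
            try {
                conn.domainLookupByUUIDString(uuid);
                return false; // domain is still known to libvirt
            } catch (LibvirtException e) {
                // "no such domain" is expected once the VM has been stopped,
                // since its definition is no longer valid
                return true;
            }
        }
    }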
BUG-ID: CLOUDSTACK-600
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
Detail: A previous patch fixed an issue where we were defining VMs to persist
locally on KVM hosts, which can cause issues if the agent isn't running and
libvirt decides to start the VM unbeknownst to CloudStack. The previous patch
stopped defining VMs as persistent. This patch adds compatibility for existing
CloudStack environments, removing the persistent definition on stop if needed.
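A minimal sketch of the compatibility step with libvirt-java (error handling
and the surrounding stop logic are assumed):

    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class UndefineOnStopSketch {
        // When stopping a VM that an older agent defined persistently, drop
        // the stored definition so libvirt cannot start it behind
        // CloudStack's back after a hypervisor reboot.
        public static void undefineIfPersistent(Domain dm) throws LibvirtException {
            if (dm.isPersistent() == 1) {
                dm.undefine(); // removes the stored XML definition
            }
        }
    }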
BUG-ID: CLOUDSTACK-600
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
We used to define domains as persistent in libvirt, which caused their XML
definitions to remain there after a reboot of the hypervisor.
However, we don't do anything with those already-defined domains; in fact, we
wipe all defined domains when starting the agent.
Some users nevertheless reported that libvirt started these domains after a
reboot, before the CloudStack agent was started.
By starting domains from the XML description and not defining them, we prevent
them from ever being stored in libvirt.
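In libvirt-java terms the change looks roughly like this (a sketch, not the
actual agent code):

    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class TransientStartSketch {
        public static Domain start(Connect conn, String domainXml) throws LibvirtException {
            // Old approach: the definition persists across hypervisor reboots.
            //   Domain dm = conn.domainDefineXML(domainXml);
            //   dm.create();
            // New approach: a transient domain that exists only while running.
            return conn.domainCreateXML(domainXml, 0);
        }
    }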
This patch includes the packaging work from master, tailored for 4.1.
Not everything has been tested yet, but it should generate DEB packages.
Signed-off-by: Wido den Hollander <wido@42on.com>
A minor issue, but listApis won't list the "listApis" API itself.
So we manually addAll getCommands() from the class to cmdClass (the
list of cmd classes).
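Roughly, with illustrative names (only getCommands() and the idea of a cmd
class list come from the description above):

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class ApiDiscoverySketch {
        private final Set<Class<?>> cmdClasses = new HashSet<>();

        // Commands implemented by the discovery service itself,
        // e.g. the "listApis" command class.
        List<Class<?>> getCommands() {
            return new ArrayList<>();
        }

        void buildApiList(List<Class<?>> pluggableCmdClasses) {
            cmdClasses.addAll(pluggableCmdClasses);
            cmdClasses.addAll(getCommands()); // make sure listApis itself is included
        }
    }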
Signed-off-by: Chip Childers <chip.childers@gmail.com>
cloud-defined resources on the host has caused various problems. As a
backward-compatible fix, if an existing pool with a different name collides
(by path) with a pool being created, the pool will be redefined with the name
CloudStack knows about. This is actually what brought up the bug: a persisted
storage pool that CloudStack wasn't managing.
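A sketch of the collision handling against the libvirt-java API; the real code
inspects the pool XML properly, while here a crude contains() check stands in
for "same target path" and only inactive (defined) pools are scanned:

    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;

    public class PoolCollisionSketch {
        public static void redefineCollidingPool(Connect conn, String expectedName,
                String path, String newPoolXml) throws LibvirtException {
            for (String name : conn.listDefinedStoragePools()) {
                if (name.equals(expectedName)) {
                    continue;
                }
                StoragePool existing = conn.storagePoolLookupByName(name);
                if (existing.getXMLDesc(0).contains("<path>" + path + "</path>")) {
                    // Same backing path under a different name: drop the old
                    // definition and redefine under the name CloudStack uses.
                    existing.undefine();
                    conn.storagePoolDefineXML(newPoolXml, 0);
                    return;
                }
            }
        }
    }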
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
There is no need for getIpDeployer to depend on whether a NetScaler
device is allocated (network is in the Implemented state) or not allocated
(network is in the Shutdown state).