This bug also occurs in XenServer 5.6 FP1, but there it is masked by bug 11564.
There are two issues here:
1. In XenServer 5.6 FP1/SP2, when a slave host is down (e.g. its NIC is unplugged) and the master has not yet detected that the slave is down (which takes about 10 minutes),
template.createClone sometimes fails, complaining that it cannot contact the slave host; it looks like XenServer tries to start the VM on that slave. We now use VM.create instead: the VM.create API lets us specify the host, so it will not try to contact the slave (see the first sketch after this list).
2. In XenServer 5.6 FP1/SP2, a host tag was introduced on the VDI. However, in the HA case, VM.reset_powerstate and VM.destroy do not remove this host tag, so when CloudStack tries to start the VM on another host, XenServer complains that the VDI is already in use (or mounted as RW). CloudStack now removes the host tag from the VDI manually in FenceCommand (see the second sketch after this list).
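A minimal sketch of the workaround for issue 1, using the XenServer Java SDK. VM.create, VM.getRecord and the VM.Record.affinity field are SDK names; the class name and the idea of copying the whole template record are illustrative assumptions, not CloudStack's exact code:

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.Host;
    import com.xensource.xenapi.VM;

    // Sketch: instead of template.createClone(conn, name), build a VM.Record from
    // the template and pin it to a reachable host, so XenServer never tries to
    // contact the dead slave while picking a host.
    public class CreateVmOnHost {
        public static VM createOnHost(Connection conn, VM template, Host goodHost, String name) throws Exception {
            VM.Record rec = template.getRecord(conn);   // start from the template's settings
            rec.nameLabel = name;
            rec.isATemplate = false;
            rec.affinity = goodHost;                    // VM.create lets us name the host here
            return VM.create(conn, rec);                // no call ever goes to the unreachable slave
        }
    }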
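A second sketch, for the manual cleanup in issue 2. VDI.getSmConfig and VDI.removeFromSmConfig are SDK calls; the "host_" key prefix and the surrounding class are assumptions about where the stale marker lives, not necessarily the exact key CloudStack strips:

    import java.util.Map;

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.VDI;

    // Sketch: while fencing a VM whose host died, drop the stale attach marker
    // from each of its VDIs so the VM can be started on another host.
    public class FenceCleanup {
        public static void clearHostTag(Connection conn, VDI vdi) throws Exception {
            Map<String, String> smConfig = vdi.getSmConfig(conn);
            for (String key : smConfig.keySet()) {
                if (key.startsWith("host_")) {          // assumed format of the per-host RW marker
                    vdi.removeFromSmConfig(conn, key);  // clear it so another host may attach RW
                }
            }
        }
    }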
status 11552: resolved fixed
Added two new values, "all" and "default", to the global config "network.loadbalancer.haproxy.stats.visibility". With this change, it can take six possible values:
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link-local network (for Xen and KVM).
disabled - stats disabled.
all - stats available on the public, guest, and link-local networks. (Newly added)
default - stats available on the serving HTTP port; this does not need any specific HTTP port. (Newly added)
Except for default and disabled, the other four values require the stats port to be configured.
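For example, an operator could switch the setting through the standard updateConfiguration API (the management-server address below is a placeholder):

    http://<management-server>:8080/client/api?command=updateConfiguration&name=network.loadbalancer.haproxy.stats.visibility&value=all

As with most global configs, the new value may only take effect after a management server restart and after the router's haproxy config is regenerated.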
Force-stopping the router releases all the resources it used, but the router may
still be running. Add a "stop_pending" column in the database, and stop the
router when it comes back.
The admin can choose to force-destroy such a router, then recover the
network using the restartNetwork command with cleanup=false (example below).
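For example (the management-server address and network id are placeholders):

    http://<management-server>:8080/client/api?command=restartNetwork&id=<network-id>&cleanup=false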
Remove the heartbeat entry for a Primary Storage when putting that Primary Storage into maintenance mode.
Create the heartbeat entry for a Primary Storage when cancelling maintenance for that Primary Storage (see the sketch below).
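A rough sketch of how toggling the heartbeat entry could look through a XenServer host plugin call. Host.callPlugin is an SDK method, but the plugin name, function name and argument keys below are assumptions for illustration, not necessarily what CloudStack actually calls:

    import java.util.HashMap;
    import java.util.Map;

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.Host;

    // Sketch: ask the host-side heartbeat script to add or drop an SR from the
    // list of primary storages it writes heartbeats to.
    public class HeartbeatEntry {
        public static String setEntry(Connection conn, Host host, String hostUuid, String srUuid, boolean add) throws Exception {
            Map<String, String> args = new HashMap<String, String>();
            args.put("host", hostUuid);                 // assumed argument names
            args.put("sr", srUuid);
            args.put("add", String.valueOf(add));
            // assumed plugin/function names
            return host.callPlugin(conn, "vmopspremium", "setup_heartbeat_file", args);
        }
    }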
status 11275: resolved fixed
status 11036: resolved fixed
1) Use row locks instead of a global lock when updating the resource_count table. When updating resource_count for an account, make sure that we lock the account plus all related domains (see the sketch after this list).
2) Insert resource_count records for an account/domain at the moment the account/domain is created.
3) As part of the DB upgrade, insert missing resource_count records for all non-removed accounts/domains.
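A minimal sketch of the row-locking idea in plain JDBC. The resource_count column names follow the description above; the caller is assumed to already know the account id and the ids of all related (ancestor) domains, and transaction wiring is simplified:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Collections;
    import java.util.List;

    // Sketch: lock only the resource_count rows for the account and its related
    // domains inside one transaction, instead of taking a global lock.
    public class ResourceCountLock {
        public static void incrementLocked(Connection conn, long accountId, List<Long> domainIds,
                                           String type, long delta) throws Exception {
            // Assumes domainIds is non-empty (the account's domain and its ancestors).
            String in = String.join(",", Collections.nCopies(domainIds.size(), "?"));
            String where = " WHERE type = ? AND (account_id = ? OR domain_id IN (" + in + "))";
            conn.setAutoCommit(false);
            try {
                // Row locks: SELECT ... FOR UPDATE on exactly the rows we are about to change.
                try (PreparedStatement lock = conn.prepareStatement("SELECT id FROM resource_count" + where + " FOR UPDATE")) {
                    bind(lock, 1, type, accountId, domainIds);
                    try (ResultSet rs = lock.executeQuery()) { /* rows stay locked until commit */ }
                }
                // Update the locked rows only.
                try (PreparedStatement update = conn.prepareStatement("UPDATE resource_count SET count = count + ?" + where)) {
                    update.setLong(1, delta);
                    bind(update, 2, type, accountId, domainIds);
                    update.executeUpdate();
                }
                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }

        private static void bind(PreparedStatement stmt, int i, String type, long accountId, List<Long> domainIds) throws Exception {
            stmt.setString(i++, type);
            stmt.setLong(i++, accountId);
            for (Long domainId : domainIds) {
                stmt.setLong(i++, domainId);
            }
        }
    }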