Changes:
To make sure migration does not pick a host whose number of running VMs already exceeds the max guest VM limit:
- Changed manual migration to call host allocators to return a list of hosts suitable for migration. Host allocators check for the max guest VM limit.
- Earlier we returned hosts that simply had enough capacity; now the host allocators make additional checks along with capacity. The list of hosts returned therefore contains hosts that have enough capacity AND satisfy all other conditions such as host tags, the max guests limit, etc. In other words, the allocators do not return hosts that fail any of these conditions even if they have capacity (see the sketch after this list).
- Therefore, the list of hosts returned for manual migration is now marked as 'suitable' hosts instead of 'hasenoughCapacity' in the HostResponse.
- HA migration already calls allocators, so no change is needed there.
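A minimal sketch of the idea, using placeholder names (the real CloudStack allocator and response classes differ):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the names below are placeholders, not the actual CloudStack classes.
class MigrationHostFilter {

    interface HostAllocator {
        // Applies every check: capacity, host tags, max guests limit, etc.
        boolean isSuitable(HostInfo host);
    }

    static class HostInfo {
        final String name;
        boolean suitableForMigration;   // replaces the old capacity-only flag in the response
        HostInfo(String name) { this.name = name; }
    }

    // Manual migration now delegates to the allocators instead of doing a pure capacity check.
    static List<HostInfo> listHostsForMigration(List<HostInfo> candidates, List<HostAllocator> allocators) {
        List<HostInfo> suitable = new ArrayList<HostInfo>();
        for (HostInfo host : candidates) {
            boolean ok = true;
            for (HostAllocator allocator : allocators) {
                if (!allocator.isSuitable(host)) {
                    ok = false;
                    break;
                }
            }
            host.suitableForMigration = ok;   // marked 'suitable', not just 'has enough capacity'
            if (ok) {
                suitable.add(host);
            }
        }
        return suitable;
    }
}
```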
Changes:
- Adding a new table 'hypervisor_capabilities' that will record capabilities for each hypervisor version. Added db schema changes for this.
- Currently a few capabilities have been added, namely 'max_guests_limit' and 'security_group_enabled'.
- Added a new column 'hypervisor_version' to host table. StartupRouting command now takes in this parameter. It should be set when a host connects.
- If a host's hypervisor version is not present, we find all the capabilities rows for that hypervisor type and use the first record.
- 'max_guests_limit' is the maximum number of running guest VMs that a host can have for the given hypervisor.
- Host allocators use this limit and skip a host if the number of running VMs on that host exceeds this limit (see the sketch after this list).
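A minimal sketch of the max-guests check, with placeholder types standing in for the hypervisor_capabilities row rather than the actual CloudStack DAO/entity classes; treating "exceeds" as skip-when-at-limit is an assumption:

```java
// Illustrative sketch only: placeholder type for a hypervisor_capabilities record.
class MaxGuestsCheck {

    static class HypervisorCapabilities {
        String hypervisorType;
        String hypervisorVersion;        // matched against the new host.hypervisor_version column
        long maxGuestsLimit;             // 'max_guests_limit'
        boolean securityGroupEnabled;    // 'security_group_enabled'
    }

    /**
     * Host allocators skip a host once its running guest VM count reaches the
     * hypervisor's max_guests_limit. When the host's hypervisor version is not
     * present, the caller looks up all capability rows for that hypervisor type
     * and passes the first record found.
     */
    static boolean canHostMoreGuests(long runningGuestVms, HypervisorCapabilities capabilities) {
        return runningGuestVms < capabilities.maxGuestsLimit;
    }
}
```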
status 11326: resolved fixed
Also added more logging to the agent rebalance code.
Conflicts:
server/src/com/cloud/agent/manager/ClusteredAgentManagerImpl.java
When we apply rules or start a new VM, we may encounter running routers that
we cannot program. That can be due to a network issue, the host being down, vCenter being
disconnected, etc. To keep the routers synchronized, we stop such a router, but only
when the other router has been successfully updated. If neither router can be
communicated with, we simply give up and report it to the user.
Conflicts:
server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java
Now the logic is: if we can only connect to one of the two redundant routers, we
stop the one that cannot be connected. If we fail to program both routers, we
just let it go.
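A minimal sketch of that decision, with illustrative interfaces (the actual logic lives in VirtualNetworkApplianceManagerImpl and is more involved):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: placeholder interface for the redundant-router handling above.
class RedundantRouterSync {

    interface Router {
        // Returns false when the router cannot be reached or programmed
        // (network issue, host down, vCenter disconnected, ...).
        boolean applyRules();
        void stop();
    }

    static void applyRulesOnRedundantPair(Router primary, Router backup) {
        List<Router> failed = new ArrayList<Router>();
        for (Router router : new Router[] { primary, backup }) {
            if (!router.applyRules()) {
                failed.add(router);
            }
        }
        if (failed.size() == 1) {
            // The other router was updated successfully: stop the unreachable one
            // so the pair does not drift out of sync.
            failed.get(0).stop();
        }
        // If both routers failed to be programmed, just let it go (nothing is stopped).
    }
}
```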
Description :
APIs:
- Two new APIs, authorizeSecurityGroupEgress and revokeSecurityGroupEgress, are added. These two APIs are similar to the ingress rule APIs.
- authorizeSecurityGroupEgress: Authorizes a particular egress rule for this security group. Usage of the API is very similar to that of authorizeSecurityGroupIngress, except that instead of a source CIDR there is a destination CIDR. By default, as with ingress, all outgoing flows are blocked.
- revokeSecurityGroupEgress: Similar to the revokeSecurityGroupIngress API; it removes the egress rule.
- The listSecurityGroup API response has changed. It includes the egress rule list in addition to the existing ingress rules in the output of the API (see the usage sketch after this list).
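A hedged usage sketch: it only assembles the query string for the new egress call, assuming the parameters mirror authorizeSecurityGroupIngress with the CIDR interpreted as a destination; the exact parameter names are not confirmed here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: parameter names are assumed to mirror the ingress API.
class EgressApiExample {
    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<String, String>();
        params.put("command", "authorizeSecurityGroupEgress");
        params.put("securitygroupid", "<security-group-id>");  // placeholder value
        params.put("protocol", "tcp");
        params.put("startport", "80");
        params.put("endport", "80");
        params.put("cidrlist", "10.1.1.0/24");                 // destination CIDR (assumed parameter name)

        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> entry : params.entrySet()) {
            if (query.length() > 0) {
                query.append('&');
            }
            query.append(entry.getKey()).append('=').append(entry.getValue());
        }
        System.out.println("GET /client/api?" + query);
    }
}
```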
Hypervisors :
- It is implemented in Xen and KVM.
Pending Tasks: Blocking based on destination security groups.
Previous commits: c9fda641673df7701f44963ef27e1d488f121219 , 24e4e44b8f0712a37147a3777833de3f9e24829e
- adding support for Netscaler VPX & MPX load balancers
- implemented for virtual networking
- works only with a newly fetched public IP; inline support is not added yet
more details will be added in the bug
Added a new value "link-local" to the global config network.loadbalancer.haproxy.stats.visibility. With this change it can take the new "link-local" value in addition to the existing three values: global, guest-network, and disabled.
global - stats visible from the public network
guest-network - stats visible only to the guest network
link-local - stats visible only to the link-local network
disabled - stats disabled.
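A minimal sketch of how the four values could map to where the HAProxy stats listener binds on the router (illustrative only; the real config generation in the router scripts differs):

```java
// Illustrative mapping from network.loadbalancer.haproxy.stats.visibility to a bind address.
class HaproxyStatsVisibility {

    static String statsBindAddress(String visibility, String publicIp, String guestIp, String linkLocalIp) {
        if ("global".equals(visibility)) {
            return publicIp;       // stats reachable from the public network
        } else if ("guest-network".equals(visibility)) {
            return guestIp;        // stats reachable only from the guest network
        } else if ("link-local".equals(visibility)) {
            return linkLocalIp;    // stats reachable only from the link-local network
        } else {
            return null;           // "disabled" (or unknown): no stats listener is configured
        }
    }
}
```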