Changes:
To make sure migration does not pick a host whose number of running VMs already exceeds the max guest VM limit:
- Changed manual migration to call the host allocators to return a list of hosts suitable for migration. The host allocators check the max guest VM limit.
- Earlier we returned hosts with enough capacity, but now the host allocators make other checks along with capacity. So the list of hosts returned contains hosts that have enough capacity AND satisfy all other conditions such as host tags, the max guests limit, etc. In other words, the allocators do not return hosts that fail any condition, even if they have capacity.
- Therefore, we now mark the list of hosts returned for manual migration as 'suitable' hosts instead of 'hasEnoughCapacity' in the HostResponse (see the sketch after this list).
- HA migration already calls allocators, so no change is needed there.
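A rough illustration of the flow; the class and method names below are hypothetical, not the actual CloudStack interfaces. The migration path now asks the allocators to vet every candidate instead of filtering on capacity alone:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch only; CloudStack's real HostAllocator API differs.
    interface HostAllocator {
        // True only if the host passes every check: capacity, host tags,
        // max guests limit, and so on.
        boolean isSuitable(Host host);
    }

    class Host {
        final String name;
        Host(String name) { this.name = name; }
    }

    class ManualMigration {
        // A host is reported as 'suitable' only when all allocators accept it.
        static List<Host> listSuitableHosts(List<Host> candidates,
                                            List<HostAllocator> allocators) {
            List<Host> suitable = new ArrayList<>();
            for (Host h : candidates) {
                boolean ok = true;
                for (HostAllocator a : allocators) {
                    if (!a.isSuitable(h)) { ok = false; break; }
                }
                if (ok) {
                    suitable.add(h);
                }
            }
            return suitable;
        }
    }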
Changes:
- Adding a new table 'hypervisor_capabilities' that will record capabilities for each hypervisor version. Added DB schema changes for this.
- Currently a few capabilities have been added, namely 'max_guests_limit' and 'security_group_enabled'.
- Added a new column 'hypervisor_version' to the host table. The StartupRouting command now takes in this parameter. It should be set when a host connects.
- If a host's hypervisor version is not present, we find all the capabilities rows for that hypervisor type and use the first record.
- 'max_guests_limit' is the maximum number of running guest VMs that a host can have for the given hypervisor.
- Host allocators use this limit and skip a host if the number of running VMs on that host exceeds it (see the sketch below).
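A minimal sketch of the lookup and the limit check; the class and field names are illustrative, not the real schema or DAO classes:

    import java.util.List;

    // Illustrative sketch; not the actual CloudStack DAO or schema classes.
    class HypervisorCapabilities {
        final String hypervisorType;
        final String hypervisorVersion;
        final long maxGuestsLimit;
        HypervisorCapabilities(String type, String version, long limit) {
            this.hypervisorType = type;
            this.hypervisorVersion = version;
            this.maxGuestsLimit = limit;
        }
    }

    class CapabilitiesLookup {
        // If the host did not report a hypervisor_version, fall back to the
        // first capabilities record found for its hypervisor type.
        static HypervisorCapabilities forHost(String type, String version,
                                              List<HypervisorCapabilities> rows) {
            HypervisorCapabilities firstOfType = null;
            for (HypervisorCapabilities c : rows) {
                if (!c.hypervisorType.equals(type)) {
                    continue;
                }
                if (firstOfType == null) {
                    firstOfType = c;
                }
                if (version != null && version.equals(c.hypervisorVersion)) {
                    return c;
                }
            }
            return firstOfType;
        }

        // Allocators skip the host once its running-VM count reaches the limit.
        static boolean underMaxGuestsLimit(HypervisorCapabilities caps,
                                           long runningVmsOnHost) {
            return runningVmsOnHost < caps.maxGuestsLimit;
        }
    }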
status 11326: resolved fixed
Also added more logging to the agent rebalance code.
Conflicts:
server/src/com/cloud/agent/manager/ClusteredAgentManagerImpl.java
When we apply rules or start a new VM, we may encounter some running routers
that we can't program. That can be due to a network issue, the host being down,
vCenter being disconnected, etc. To keep the pair in sync, we would stop them,
but only when there is another router we've successfully updated. If we are
unable to communicate with both routers, we simply give up and report it to the
user.
Conflicts:
server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java
Now the logic is: if we can only connect to one of the two redundant routers,
we stop the one that can't be connected to. If we fail to program both routers,
we just let it go.
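Sketched in code, the policy looks roughly like this; the types and method names are made up for illustration:

    import java.util.ArrayList;
    import java.util.List;

    class Router {
        final String name;
        Router(String name) { this.name = name; }
    }

    class RedundantRouterUpdater {
        // Try to program both routers. If exactly one succeeds, stop the
        // unreachable one to keep the pair consistent; if both fail, give up.
        void applyRules(List<Router> pair) {
            List<Router> failed = new ArrayList<>();
            for (Router r : pair) {
                if (!program(r)) {
                    failed.add(r);
                }
            }
            if (failed.size() == pair.size()) {
                // Both routers unreachable: report to the user, change nothing.
                reportFailure(pair);
            } else {
                for (Router r : failed) {
                    stop(r); // only stopped because its peer was updated
                }
            }
        }

        boolean program(Router r) { return true; } // placeholder
        void stop(Router r) {}
        void reportFailure(List<Router> pair) {}
    }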
Description:
APIs:
- Two new APIs, authorizeSecurityGroupEgress and revokeSecurityGroupEgress, are added. These two APIs are similar to the ingress rule APIs.
- authorizeSecurityGroupEgress: Authorizes a particular egress rule for this security group. Usage of the API is very similar to that of authorizeSecurityGroupIngress, except that instead of a source CIDR there is a destination CIDR. By default, like ingress, all outgoing flows are blocked.
- revokeSecurityGroupEgress: Similar to the revokeSecurityGroupIngress API; it removes the egress rule.
- The listSecurityGroup API's response changed. It includes the egress rule list in addition to the existing ingress rules in the output of the API (an example egress call is sketched after this list).
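For example, an egress authorization call might look like the following. This assumes the parameters mirror authorizeSecurityGroupIngress, with cidrlist now meaning the destination CIDR; the endpoint, ID, and values are placeholders, not confirmed by this commit:

    // Placeholder request; parameter names assume symmetry with the
    // ingress API and are not confirmed by this commit message.
    class EgressApiExample {
        public static void main(String[] args) {
            String call = "http://<management-server>:8080/client/api"
                    + "?command=authorizeSecurityGroupEgress"
                    + "&securitygroupid=<sg-id>"
                    + "&protocol=TCP"
                    + "&startport=80"
                    + "&endport=80"
                    + "&cidrlist=10.1.1.0/24"; // destination CIDR for egress
            System.out.println(call);
        }
    }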
Hypervisors:
- It is implemented for Xen and KVM.
Pending Tasks: Blocking using destination security groups.
Previous commits: c9fda641673df7701f44963ef27e1d488f121219 , 24e4e44b8f0712a37147a3777833de3f9e24829e
- adding support for NetScaler VPX & MPX load balancers
- implemented for virtual networking
- works only with a newly fetched public IP; inline support is not added yet
more details will be added in the bug
Added new value "link-local" to the global config network.loadbalancer.haproxy.stats.visibility. With this change it can take the new "link-local" value in addition to the existing 3 values: global, guest-network, disabled.
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link-local network.
disabled - stats disabled.
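A sketch of what the value could control, i.e. which address the haproxy stats listener binds to; the addresses and helper names are illustrative, not the actual implementation:

    // Hedged sketch: maps the config value to the address the haproxy
    // stats listener binds to. Addresses and names are illustrative only.
    class HaproxyStatsBinding {
        static String statsBindAddress(String visibility,
                                       String guestIp,
                                       String linkLocalIp) {
            switch (visibility) {
                case "global":        return "0.0.0.0";   // reachable from public network
                case "guest-network": return guestIp;      // e.g. the router's guest NIC IP
                case "link-local":    return linkLocalIp;  // e.g. a 169.254.x.x address
                case "disabled":      return null;         // no stats listener configured
                default: throw new IllegalArgumentException(visibility);
            }
        }
    }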
Changes:
- Changes to updateHostCmd to accept a hosttags parameter
- Changes to wipe out existing tags and save the new ones in the host_tags DB table
- UpdateHost is an admin-only operation, so only the root admin can update host tags (see the sketch below)
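The wipe-and-replace behavior, sketched with hypothetical DAO names:

    import java.util.List;

    // Hedged sketch of the replace-on-update behavior for host tags;
    // DAO names are illustrative, not the actual CloudStack classes.
    class HostTagsUpdater {
        // Wipe the host's existing tags, then persist the new set, so the
        // host_tags table always reflects the latest updateHost call.
        void updateHostTags(long hostId, List<String> newTags, HostTagsDao dao) {
            dao.deleteTags(hostId);
            for (String tag : newTags) {
                dao.persistTag(hostId, tag.trim());
            }
        }
    }

    interface HostTagsDao {
        void deleteTags(long hostId);
        void persistTag(long hostId, String tag);
    }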
Changes:
- CreateTemplate and RegisterTemplate now support adding a template tag. It is a string value. This is a root-admin only action: only the admin can add template tags.
- ListTemplates will return the template tag in the response.
- HostAllocator changed to use the template tag along with the existing tag on the service offering. If both tags are present, the allocator now finds hosts satisfying both tags. If no host has both tags, allocation will fail (see the sketch after this list).
- DB changes to add a new column to the vm_template table.
- DB upgrade changes for the upgrade from 2.2.10 to 2.2.11
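A minimal sketch of the combined tag check; the names are illustrative:

    import java.util.Set;

    class TagFilter {
        // A host qualifies only if it carries the service-offering tag AND
        // the template tag, whenever each of those tags is present.
        static boolean hostMatches(Set<String> hostTags,
                                   String offeringTag,
                                   String templateTag) {
            if (offeringTag != null && !hostTags.contains(offeringTag)) {
                return false;
            }
            if (templateTag != null && !hostTags.contains(templateTag)) {
                return false;
            }
            return true;
        }
    }

Under this rule a host missing either required tag is filtered out, which is why allocation fails when no host carries both.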
previous commit: c9fda641673df7701f44963ef27e1d488f121219 (this was filed under bug 1067 due to a typing error)
changes:
1) partially implemented listing of egress rules along with ingress rules.
2) partially implemented egress rules for KVM
Added global config to enable/disable rp_filter for domR.
previous commit: d966906374d4a0cb8fa57326a1f7625c871f64fd
Test Case 1:
1) Set the network.disable.rpfilter global config to true
2) Restart the domR
3) Check the settings reflected in the proc filesystem
- for public interfaces like eth2, eth3: /proc/sys/net/ipv4/conf/eth2/rp_filter should have the value 0, and all other interfaces should have the value 1
Test Case 2:
1) Set the network.disable.rpfilter global config to false
2) Restart the domR
3) Check the settings reflected in the proc filesystem
- for public interfaces like eth2, eth3: /proc/sys/net/ipv4/conf/eth2/rp_filter should have the value 1, and all other interfaces should also have the value 1 (a sketch for automating this check follows)
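A small sketch of how step 3 could be automated; the interface names are examples and Java 11 is assumed for Files.readString:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    class RpFilterCheck {
        // Reads the rp_filter value for one interface from the proc paths
        // used in the test cases above.
        static int rpFilter(String iface) throws IOException {
            String path = "/proc/sys/net/ipv4/conf/" + iface + "/rp_filter";
            return Integer.parseInt(Files.readString(Paths.get(path)).trim());
        }

        public static void main(String[] args) throws IOException {
            // With network.disable.rpfilter=true, public NICs should read 0.
            System.out.println("eth2 rp_filter = " + rpFilter("eth2"));
            System.out.println("eth0 rp_filter = " + rpFilter("eth0"));
        }
    }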
It's very likely caused by a StartRouterCmd sent to the running router. I can
reproduce it by issuing a StartRouterCmd to a running redundant router. And
this patch should fix the following exception:
Exception: com.cloud.exception.ResourceUnavailableException: Resource
[VirtualNetworkApplianceManagerImpl$$EnhancerByCGLIB$$565b4d45:0] is
unreachable: There are already two redundant routers with IP 10.91.32.126, they
are r-5-VM(5) and r-4-VM(4)
status 11214: resolved fixed
Bug 11186- Cannot restart existing VM if the cluster is disabled after the
VM has been created
Changes:
- We should not check the cluster 'allocation_state' while starting an
existing VM, provided it has storage already allocated. But if volumes are
deleted and new storage needs to be allocated, then we will not allow the VM
start (see the sketch below).
- However we should still prohibit adding new VMs in that cluster.
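Sketched as a predicate; the names are illustrative:

    enum AllocationState { ENABLED, DISABLED }

    class ClusterCheck {
        // Starting an existing VM skips the cluster allocation_state check
        // as long as its volumes are already allocated; a VM needing fresh
        // storage, or a brand-new VM, is still blocked in a disabled cluster.
        static boolean canStartVm(AllocationState clusterState,
                                  boolean vmHasAllocatedStorage) {
            if (vmHasAllocatedStorage) {
                return true; // restart allowed even if cluster is disabled
            }
            return clusterState == AllocationState.ENABLED;
        }
    }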