- Made the Netscaler, SRX, and F5 network elements pluggable services
- Added an abstract load balancer device manager, ExternalLoadBalancerDeviceManager
- Made both the F5 and Netscaler pluggable services extend ExternalLoadBalancerDeviceManager
- Added an abstract firewall device manager, ExternalFirewallDeviceManager
- Made the SRX pluggable service extend ExternalFirewallDeviceManager
- Added APIs to configure and manage Netscaler devices
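A rough sketch of the resulting hierarchy (the manager and element class names come from this change; the method shown is an illustrative assumption, not the actual CloudStack interface):

    // The abstract managers hold the device bookkeeping shared by external devices;
    // each vendor's pluggable service extends the matching manager.
    abstract class ExternalLoadBalancerDeviceManager {
        // hypothetical hook a load balancer plugin implements
        abstract boolean applyLoadBalancerRules(long networkId);
    }

    abstract class ExternalFirewallDeviceManager {
        // hypothetical hook a firewall plugin implements
        abstract boolean applyFirewallRules(long networkId);
    }

    class NetscalerElement extends ExternalLoadBalancerDeviceManager {
        boolean applyLoadBalancerRules(long networkId) { return true; /* drive the NetScaler device */ }
    }

    class F5BigIpElement extends ExternalLoadBalancerDeviceManager {
        boolean applyLoadBalancerRules(long networkId) { return true; /* drive the F5 device */ }
    }

    class JuniperSrxElement extends ExternalFirewallDeviceManager {
        boolean applyFirewallRules(long networkId) { return true; /* drive the SRX device */ }
    }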
Added two new values, "all" and "default", to the global config "network.loadbalancer.haproxy.stats.visibility". With this change, it can take six possible values:
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link-local network (for Xen and KVM).
disabled - stats disabled.
all - stats available on the public, guest, and link-local networks. (newly added)
default - stats available on the serving HTTP port; this does not need a separate stats port. (newly added)
Except for "default" and "disabled", the other four values require the stats port to be configured.
Changes:
- Fixing API doc, response name, and errorMessage
- Adding separate events for egress rules
- Egress rules use the same database table as ingress rules, with a new "type" column
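A minimal sketch of the shared-table idea (class and field names here are assumptions, not the actual VO or schema):

    // Ingress and egress rules share one table; the new "type" column is the discriminator.
    class SecurityRule {
        enum Type { INGRESS, EGRESS }

        long securityGroupId;
        Type type;           // new column distinguishing ingress from egress
        String protocol;
        int startPort;
        int endPort;
        String allowedCidr;
    }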
Pending Tasks:
- DB upgrade
- Rename the security_ingress_rule database table to a generic name, and rename some of the Java classes from ingress to a generic name
- Retesting on KVM
- Adding support for Netscaler VPX & MPX load balancers
- Implemented for virtual networking
- Works only with a newly acquired public IP; inline support is not added yet
More details will be added in the bug.
Added a new value, "link-local", to the global config network.loadbalancer.haproxy.stats.visibility. With this change it can take the new value "link-local" in addition to the existing three values: global, guest-network, and disabled.
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link-local network.
disabled - stats disabled.
previous commit: c9fda641673df7701f44963ef27e1d488f121219 (this was filed under bug 1067 due to a typing error)
changes: 1) Partially implemented listing of egress rules along with ingress rules.
2) Partially implemented egress rules for KVM.
haproxy tuning:
0. Test case:
httpd running in 5 user VMs, all created on a XenServer host (16 cores, 42G memory, 10G network).
domR running on another host with the same hardware configuration.
The test application, ab, running on another host behind a separate switch.
1. haproxy is not a memory-intensive app: I can get 4625.96 connections/s with 1G of memory. It is, however, a very CPU-intensive app; domR always uses around 100% CPU on the host.
2. By default you can't get a better connections/s rate, because ip_conntrack_max and tw_bucket are too small; you will see errors in domR like:
"TCP: time wait bucket table overflow" or "nf_conntrack: table full, dropping packet".
So I increased these numbers from 65536 to 1000000 (see the sketch after this list), after which I can steadily get around 4600 connections/s when memory is >= 1G.
Here are the connections per second, tested with "ab -n 1000000 -c 100 http://192.168.170.152:880/test.html":
domR memory conn/s
128M: 3545.55
256M: 4081.38
512M: 4318.18
1G: 4625.96
7G: 4745.53
3. If I enable notrack both for the connections between domR and the user VMs and for the public network, which tells iptables in domR not to track those connections during the test, I get a better number, around
5800 connections/s. But we can't enable notrack, as iptables is used to track throughput in domR.
4. In short, with this commit the connection rate of haproxy can be increased from 1000-2000/s to about 4700/s when domR's memory is 1G or larger.
5. How many CPUs need to be assigned to domR to get this number? Not determined yet; since the CPU is shared by all the VMs on the host, busy neighbor VMs will impact haproxy's performance.
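The tuning itself amounts to raising the conntrack and time-wait bucket limits inside domR. A hedged sketch of the equivalent commands, written in Java purely for illustration (the exact sysctl key names vary by kernel version, and in domR this is applied by scripts, not Java code):

    // Raises the two limits behind the "nf_conntrack: table full" and
    // "TCP: time wait bucket table overflow" errors from 65536 to 1000000.
    // Assumed keys: older kernels use net.ipv4.netfilter.ip_conntrack_max,
    // newer ones net.netfilter.nf_conntrack_max.
    public class DomRTuning {
        public static void main(String[] args) throws Exception {
            String[] settings = {
                "net.ipv4.netfilter.ip_conntrack_max=1000000",
                "net.ipv4.tcp_max_tw_buckets=1000000"
            };
            for (String s : settings) {
                new ProcessBuilder("sysctl", "-w", s).inheritIO().start().waitFor();
            }
        }
    }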
status 9336: resolved fixed
The following changes were made:
* deleteSecurityGroup/authorizeSecurityGroupIngress - removed the account/domainId parameters, as a security group is now uniquely identified by its id
* Removed the account_name field from the securityGroup DB table; removed allowed_security_group/allowed_sec_grp_acct from security_ingress_rule.
These values were used only for API response generation, for performance purposes; caching was added at the API level to improve performance
* Added missing security checks for securityGroups/ingressRules
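For illustration, a deleteSecurityGroup call now needs only the security group id; the host/port and the id value below are placeholders, not part of this change:

    // Builds the simplified API call: the group is identified by id alone,
    // no account/domainId parameters. Issue the resulting URL with any HTTP client.
    public class DeleteSecurityGroupExample {
        public static void main(String[] args) {
            long securityGroupId = 42L;  // placeholder id
            String url = "http://localhost:8096/client/api"
                       + "?command=deleteSecurityGroup"
                       + "&id=" + securityGroupId
                       + "&response=json";
            System.out.println(url);
        }
    }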