- Added support for NetScaler VPX & MPX load balancers
- Implemented for virtual networking
- Works only with newly acquired public IPs; inline mode is not supported yet
1) Added new APIs: createFirewallRule, deleteFirewallRule, listFirewallRules
2) Modified existing APIs - added a boolean openFirewall parameter to createPortForwardingRule/createIpForwardingRule/createRemoteAccessVpn. If the parameter is set to true, the firewall is opened on the domR before the actual PF rule is created there (see the sketch after this list).
Modified the backend calls accordingly.
3) Schema changes for the firewall_rules table:
* startPort/endPort can be null now
* added icmp_type and icmp_code fields (may be set only when the protocol is icmp)
4) Added a new manager - FirewallManagerImpl
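A minimal sketch of the two behaviors above, assuming simplified names (FirewallOpener, openFirewall, validate); the real CloudStack classes and signatures differ:

    // Hypothetical sketch; FirewallOpener and the parameter names are assumptions.
    public class FirewallRuleSketch {

        /** Stand-in for the firewall side of FirewallManagerImpl. */
        interface FirewallOpener {
            void openFirewall(long ipAddressId, String protocol, Integer startPort, Integer endPort);
        }

        /**
         * Mirrors the openFirewall parameter added to createPortForwardingRule and
         * friends: open the firewall on the domR first, then create the PF rule.
         */
        static void createPortForwardingRule(FirewallOpener fw, boolean openFirewall,
                                             long ipAddressId, String protocol,
                                             Integer startPort, Integer endPort) {
            validate(protocol, startPort, endPort, null, null);
            if (openFirewall) {
                fw.openFirewall(ipAddressId, protocol, startPort, endPort);
            }
            // ... create the actual PF rule on the domR here ...
        }

        /**
         * Matches the firewall_rules schema change: icmp_type/icmp_code may be set
         * only for icmp, and start/end ports may now be null (e.g. for icmp rules).
         */
        static void validate(String protocol, Integer startPort, Integer endPort,
                             Integer icmpType, Integer icmpCode) {
            boolean isIcmp = "icmp".equalsIgnoreCase(protocol);
            if (!isIcmp && (icmpType != null || icmpCode != null)) {
                throw new IllegalArgumentException("icmp_type/icmp_code are valid only for icmp");
            }
            if (!isIcmp && (startPort == null || endPort == null)) {
                throw new IllegalArgumentException("startPort/endPort are required for " + protocol);
            }
        }
    }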
Conflicts:
api/src/com/cloud/api/BaseCmd.java
client/tomcatconf/commands.properties.in
server/src/com/cloud/api/ApiResponseHelper.java
server/src/com/cloud/configuration/DefaultComponentLibrary.java
server/src/com/cloud/network/lb/LoadBalancingRulesManagerImpl.java
server/src/com/cloud/network/rules/RulesManagerImpl.java
1. Are multiple hosts connected to the same local storage pool due to a 2.1.x bug?
2. Were any premium upgrades missed?
If either answer is true, the management server stops and asks the user to contact Cloud.com support.
Uses a new "system-integrity-checker" target in components.xml/components-premium.xml.
All checkers must be explicitly specified in the XML file; they execute before any components load.
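A minimal sketch of what such a checker might look like; the interface shape, the class name LocalStoragePoolChecker, and the check body are illustrative assumptions, not CloudStack's actual wiring:

    // Illustrative sketch; CloudStack's real checker interface may differ.
    interface SystemIntegrityChecker {
        // Runs before any components load; throw to stop the management server.
        void check();
    }

    public class LocalStoragePoolChecker implements SystemIntegrityChecker {
        @Override
        public void check() {
            // Hypothetical check: a local storage pool must be used by exactly one host.
            // If a 2.1.x bug left multiple hosts attached, refuse to start.
            if (countLocalPoolsWithMultipleHosts() > 0) {
                throw new IllegalStateException(
                        "Multiple hosts are connected to the same local storage pool; "
                        + "please contact Cloud.com support");
            }
        }

        private int countLocalPoolsWithMultipleHosts() {
            return 0; // DB lookup elided in this sketch
        }
    }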
status 10860: resolved fixed
Changes:
- Added a new investigator, 'ManagementIPSystemVMInvestigator', that checks whether a VM is alive, but only for system VMs that have a management IP address (see the sketch after this list).
- If no management IP is found, the ping test cannot be performed, so this investigator returns null in that case.
- Renamed the current implementation InvestigatorImpl to 'UserVmDomRInvestigator'; it performs the ping test for user VMs only.
- Corrected the ping test code, which was checking a hard-coded string. Now, if the ping answer is negative, we simply return null.
- Added the new investigator to components.xml.
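A minimal sketch of the tri-state contract described above (true = alive, null = cannot tell); the method shape is an assumption based on this description - the real CloudStack Investigator takes VM/host objects rather than a bare IP:

    // Sketch of the "is the VM alive?" contract; assumed names and signature.
    public class ManagementIpInvestigatorSketch {
        /**
         * @return true if the VM answered the ping; null if we cannot tell
         *         (no management IP, or a negative ping answer).
         */
        public Boolean isVmAlive(String managementIp) {
            if (managementIp == null) {
                // No management IP: the ping test cannot be done, so do not guess.
                return null;
            }
            // A negative ping answer no longer maps to "dead"; return null and
            // let another investigator decide.
            return ping(managementIp) ? Boolean.TRUE : null;
        }

        private boolean ping(String ip) {
            try {
                return java.net.InetAddress.getByName(ip).isReachable(2000);
            } catch (java.io.IOException e) {
                return false;
            }
        }
    }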
- This NPE happened when starting the domR while its volume's podId was null.
- A volume's podId should never be null.
- This looks like a side effect of another bug, most likely the other DomR and basic networking issue (9578).
- While reviewing this bug, we found that the RecreateHostAllocator is no longer needed. Using it is actually harmful, since this allocator ignores the plan passed to it by the DeploymentPlanner and lists pods all over again.
- Changed components.xml to use FirstFitRoutingAllocator instead; a simplified sketch of the difference follows.
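A minimal sketch, assuming a simplified DeploymentPlan carrying only a podId; the real planner/allocator APIs carry much more state:

    // Simplified sketch; names and structure are assumptions.
    class DeploymentPlan {
        final Long podId; // chosen by the DeploymentPlanner; may be null
        DeploymentPlan(Long podId) { this.podId = podId; }
    }

    class FirstFitStyleAllocator {
        /** Respect the pod the planner already chose instead of re-listing pods. */
        Long choosePod(DeploymentPlan plan, java.util.List<Long> allPodIds) {
            if (plan.podId != null) {
                return plan.podId; // honor the planner's decision
            }
            // Fall back to scanning pods only when the plan carries no pod,
            // guarding against the null-podId case behind the NPE above.
            return allPodIds.isEmpty() ? null : allPodIds.get(0);
        }
    }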
- Added a new 'allocation_state' flag to zone, pod, cluster, and host.
- The possible values for this flag are 'Enabled' and 'Disabled'.
- When a new zone, pod, cluster, or host is added, allocation_state is 'Disabled' by default.
- For existing zones, pods, clusters, and hosts, the state is 'Enabled'.
- All Add/Update/List commands for zones, pods, clusters, and hosts can now take a new 'allocationstate' parameter.
- If 'allocation_state' is 'Disabled', allocators skip that zone, pod, cluster, or host (see the sketch after this list).
- For a root admin, listZones lists all zones, including the 'Disabled' ones. For any other user, 'Disabled' zones are not included in the response.
- Every use case that creates/deploys/adds/registers a resource and takes a zone as a parameter now checks whether the zone is 'Disabled'. If it is, the operation can be performed only by the root admin. Adding volumes, snapshots, and templates are examples of this use case.
- To let the root admin test a particular pod/cluster/host, the deployVM command takes a 'host_id' parameter that can be passed in only by the root admin.
If this parameter is passed in by the admin, allocators do not search for hosts and use that host only. Storage pools are searched for in the cluster of that host.
If the VM cannot be deployed to that host, the allocators and deployVM fail without retrying.
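A minimal sketch of the skip logic and the host_id override, with assumed names (AllocationState, AllocatorSketch, pickHost); the actual CloudStack allocator code differs:

    // Illustrative sketch; the enum and method names are assumptions.
    enum AllocationState { Enabled, Disabled }

    class AllocatorSketch {
        /** If the root admin passed host_id, use that host only and do not retry. */
        Long pickHost(Long adminHostId, java.util.List<Host> candidates) {
            if (adminHostId != null) {
                return adminHostId; // deployment fails outright if this host cannot take the VM
            }
            for (Host h : candidates) {
                // Skip any host whose zone, pod, cluster, or host state is Disabled.
                if (h.zone == AllocationState.Enabled && h.pod == AllocationState.Enabled
                        && h.cluster == AllocationState.Enabled && h.state == AllocationState.Enabled) {
                    return h.id;
                }
            }
            return null; // nothing allocatable
        }

        static class Host {
            long id;
            AllocationState zone, pod, cluster, state;
        }
    }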