Description:
This is work in progress. This set of changes will not
compile. Checking in for a team-wide code sync-up.
Changes are underway to test whether VMWareResource can be
leveraged to talk to the VSM, instead of creating a
new resource for the VSM as we have been doing up
until now.
Reviewed by: Sateesh Chodapuneedi, Devdeep Singh
Description:
This is the first in a series of commits for integrating the
Cloudstack Management Server with the Nexus 1000v Virtual
Supervisor Module.
These changes introduce the necessary API command interfaces
to work with a Cisco N1KV VSM. The backend logic is still to
be put in and will be incorporated in subsequent commits.
Please do not attempt to use these APIs until then. Also,
these are not yet added to commands.xml, so they are
not currently exposed.
Additional APIs will be added if required.
These changes will not break any current management server
functionality.
Given below is a description of the changes put in here:
Added Cisco N1KV commands to core/api:
These are the added commands (a minimal sketch of one such command follows the list) -
AddCiscoNexusVSMCmd
DeleteCiscoNexusVSMCmd
ConfigureCiscoNexusVSMCmd
ListCiscoNexusVSMCmd
ListCiscoNexusVSMNetworksCmd
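As a rough illustration only (not the actual CloudStack command classes, whose base classes and @Parameter annotations are omitted), such a command carries the VSM connection parameters and delegates to a VSM service interface; the field and method names below are assumptions:
    // Illustrative skeleton; the real AddCiscoNexusVSMCmd builds on the
    // CloudStack BaseCmd machinery. Field and service names are assumptions.
    public class AddCiscoNexusVSMCmdSketch {
        private String ipaddress;  // management IP of the VSM
        private String username;   // VSM admin user
        private String password;   // VSM admin password

        // The real command delegates to the VSM element service; this only
        // shows the shape of that call.
        public void execute(CiscoNexusVSMServiceSketch service) {
            service.addCiscoNexusVSM(ipaddress, username, password);
        }
    }

    interface CiscoNexusVSMServiceSketch {
        void addCiscoNexusVSM(String ip, String user, String password);
    }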
Added a Network Element service file for Cisco N1KV.
Declared the interface functions that we'll need for
the N1KV VSM.
Defined a DeviceVO file for the Cisco Nexus Element.
Created a response file for Cisco Nexus VSM.
Created new event types for external Switching Management devices.
Put in logic to call interface methods in ListCiscoNexusVSMNetworksCmd
and ListCiscoNexusVSMCmd.
NOT VSM RELATED:
Fixed a minor typo in some of the event types for external load balancers.
Added properties of a VSM to the VSM VO class.
Replaced the "url" input parameter with "ipaddress"
in the AddCiscoNexusVSMCmd API.
Added a new file, CiscoNexusVSMElement.java, which
contains the implementation of the functions declared
in the VSMElementService interface, including the
implementations backing the Nexus VSM API commands.
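A minimal sketch of that split, assuming simple per-VSM operations (the commit only names the interface and the implementing class, so the method names and signatures below are assumptions):
    // Sketch only; the real VSMElementService methods use CloudStack types.
    interface VSMElementServiceSketch {
        boolean deleteCiscoNexusVSM(long vsmId);
        boolean enableCiscoNexusVSM(long vsmId);
        boolean disableCiscoNexusVSM(long vsmId);
    }

    class CiscoNexusVSMElementSketch implements VSMElementServiceSketch {
        @Override public boolean deleteCiscoNexusVSM(long vsmId)  { /* look up the VSM VO and remove it */ return true; }
        @Override public boolean enableCiscoNexusVSM(long vsmId)  { /* mark the VSM enabled */ return true; }
        @Override public boolean disableCiscoNexusVSM(long vsmId) { /* mark the VSM disabled */ return true; }
    }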
Added a class for Port Profiles (PortProfile.java).
The fields in this class are not yet correctly
declared; the required changes will be made
going forward.
Added CiscoNexusVSMDeviceManagerImpl class.
Added CiscoNexusVSMResource class.
Created a new class that provides a package for
connecting to Cisco Nexus VSMs. This will be a
set of Java wrapper functions that let us
connect/disconnect, send commands, and
receive the results of those commands via
XML-RPC. These functions are yet to be
implemented and will be checked in in future
commits.
Added two new classes, VSMCommand and
VSMResponse, to encapsulate XML-RPC commands
and responses to and from a Cisco Nexus VSM.
Put in the following function stubs inside the
CiscoNexusVSMService class (a rough sketch follows the list):
connectToVSM()
disconnectFromVSM()
executeVSMCommand()
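The commit only names these stubs; a plausible shape for them, with signatures and parameters that are assumptions and no real XML-RPC plumbing, is:
    // Sketch of the CiscoNexusVSMService stubs. Signatures and parameters are
    // assumptions; the XML-RPC transport is still to be implemented.
    class CiscoNexusVSMServiceStubSketch {
        private String vsmIp;
        private boolean connected;

        void connectToVSM(String ip, String user, String password) {
            // would open the XML-RPC session to the VSM management IP
            this.vsmIp = ip;
            this.connected = true;
        }

        void disconnectFromVSM() {
            // would tear down the XML-RPC session
            this.connected = false;
        }

        String executeVSMCommand(String command) {
            // would marshal the command, send it over XML-RPC and return the raw response
            if (!connected) {
                throw new IllegalStateException("Not connected to VSM " + vsmIp);
            }
            return ""; // placeholder until the transport is implemented
        }
    }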
Added a new field to the Type enum of the "Host"
interface for Cisco Nexus VSMs.
Added two parameters to AddCiscoNexusVSMCommand
vsmName
zoneId
Modified the CiscoNexusVSMDeviceVO constructor to
take a zoneId as a parameter when creating
the VO object.
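A sketch of that VO change; only fields named in this commit message are shown, and the constructor shape is an assumption:
    // Sketch only; the real CiscoNexusVSMDeviceVO carries persistence
    // annotations and more fields.
    class CiscoNexusVSMDeviceVOSketch {
        private String vsmName;
        private String ipaddress;
        private long zoneId;   // newly added constructor parameter

        CiscoNexusVSMDeviceVOSketch(String vsmName, String ipaddress, long zoneId) {
            this.vsmName = vsmName;
            this.ipaddress = ipaddress;
            this.zoneId = zoneId;
        }

        long getZoneId() { return zoneId; }
        String getVsmName() { return vsmName; }
        String getIpaddress() { return ipaddress; }
    }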
Added new interface and class for the DeviceDao
implementation for Cisco Nexus VSM devices:
CiscoNexusVSMDeviceDao
CiscoNexusVSMDeviceDaoImpl
Removed the vsmvCenterDomainId property, since it's
going to be the same as vsmDomainId, which is the VSM's
switch domain ID.
Have started putting in query functions in the
CiscoNexusVSMDeviceDao interface.
Put in DAO implementations of some of those functions in the CiscoNexusVSMDeviceDaoImpl class (a sketch follows below).
Added a vsmName parameter to the CiscoNexusVSMDeviceVO class.
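A minimal sketch of the DAO interface, under the assumption that the query functions are simple lookups by name, IP address and zone (the commit does not list them); the real interface and Impl build on CloudStack's generic DAO framework, which is omitted here:
    // Sketch only; query-method names are assumptions, and the real
    // CiscoNexusVSMDeviceDaoImpl uses CloudStack's DAO machinery.
    interface CiscoNexusVSMDeviceDaoSketch {
        CiscoNexusVSMDeviceVOSketch findByIpAddress(String ipaddress);
        CiscoNexusVSMDeviceVOSketch findByName(String vsmName);
        java.util.List<CiscoNexusVSMDeviceVOSketch> listByZoneId(long zoneId);
    }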
status cs-14510: resolved fixed
This fix ensures that VPXs provisioned on the SDX by CloudStack will be made part of both
tagged and untagged public networks.
Fixed issues with vif scripts on 5.6FP1
Fixed IPv6 issue on 5.6FP1
Plus other various fixes and improvements
Starting to remove debug code
NOTE: Network is configured correctly but instances do not start. Possibly an indefinite wait occurring on some commands
2) Added a new API - changeServiceForSystemVm - to support service offering upgrades for system VMs
3) Removed global config parameters that are no longer in use: consoleproxy.ram.size, consoleproxy.cpu.mhz, secstorage.vm.ram.size, secstorage.vm.cpu.mhz
status 14463: resolved fixed
Moved all static (config-related) variables to non-static in the SRX server resource, since we can have more than
one SRX device in a zone.
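A minimal sketch of that pattern (class and field names here are illustrative, not the actual SRX resource code): configuration that used to live in static fields becomes per-instance state, so each SRX device in a zone gets its own resource object with its own settings.
    // Illustrative only; not the actual SRX server resource class.
    class SrxResourceSketch {
        // Previously static fields would be shared by every SRX resource in the
        // management server, so a second SRX device would clobber the first one's config.
        private String ip;
        private String username;
        private String password;

        void configure(java.util.Map<String, Object> params) {
            this.ip = (String) params.get("ip");
            this.username = (String) params.get("username");
            this.password = (String) params.get("password");
        }
    }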
Description:
1) Moved RuntimeCloudException from api/ to utils/.
Added a simple constructor to RuntimeCloudException.
Modified all classes that extended RuntimeException
to extend RuntimeCloudException instead (a sketch of the
pattern follows the list). These classes are listed below:
ServerApiException
CloudAuthenticationException
CloudExecutionException
AsyncCommandQueued
HypervisorVersionChangedException
RuntimeCloudException
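A minimal sketch of the unchecked side of this change; the constructor shapes are assumptions, and the real classes carry more context:
    // Sketch only; constructor shapes are assumptions.
    class RuntimeCloudExceptionSketch extends RuntimeException {
        RuntimeCloudExceptionSketch(String message) { super(message); }  // the newly added simple constructor
        RuntimeCloudExceptionSketch(String message, Throwable cause) { super(message, cause); }
    }

    // One of the listed classes, now extending RuntimeCloudException
    // instead of RuntimeException directly.
    class CloudExecutionExceptionSketch extends RuntimeCloudExceptionSketch {
        CloudExecutionExceptionSketch(String message) { super(message); }
    }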
2) Added an overloaded constructor to CloudException.
Modified all classes that extend Exception to extend CloudException instead (a sketch of the pattern follows the list).
These classes are listed below:
ConcurrentOperationException
ConflictingNetworkSettingsException
ConnectionException
DiscoveryException
InsufficientCapacityException
ManagementServerException
ResourceUnavailableException
VirtualMachineMigrationException
AgentControlChannelException
OperationTimedoutException
UnsupportedVersionException
UsageServerException
UnableDeleteHostException
AgentAuthnException
HttpCallException
ActiveFencingException
ClusterInvalidSessionException
GreTunnelException
OvsVlanExhaustedException
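A minimal sketch of the checked side; again the constructor shapes are assumptions:
    // Sketch only; the real CloudException carries additional context.
    class CloudExceptionSketch extends Exception {
        CloudExceptionSketch(String message) { super(message); }
        CloudExceptionSketch(String message, Throwable cause) { super(message, cause); }  // the overloaded constructor
    }

    // One of the listed classes, now extending CloudException instead of Exception directly.
    class DiscoveryExceptionSketch extends CloudExceptionSketch {
        DiscoveryExceptionSketch(String message, Throwable cause) { super(message, cause); }
    }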
- configuring a unique persistence profile for each LB rule that has a sticky method applied
- removing the source-based sticky method for the source-based LB method, which is not supported by F5
- converted all mandatory params to optional, and internally fill in default values before sending to haproxy; each default value is documented in the parameter's description (see the sketch after this list).
- accept holdtime without units.
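A rough illustration of the last two items, with parameter handling simplified and names that are assumptions: missing optional params are filled with defaults before the configuration is sent to haproxy, and holdtime is accepted with or without a unit suffix.
    // Illustrative only; parameter names and defaults are assumptions.
    class StickyParamSketch {
        static String withDefault(java.util.Map<String, String> params, String name, String def) {
            String v = params.get(name);
            return (v == null || v.isEmpty()) ? def : v;
        }

        // Accept holdtime as "30", "30m", "2h", etc.; a bare number gets a default unit appended.
        static String normalizeHoldtime(String holdtime, String defaultUnit) {
            return holdtime.matches("\\d+") ? holdtime + defaultUnit : holdtime;
        }
    }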
Changes:
- We do not need these global settings anymore. They will be hidden starting with 3.0
- The default traffic label will be picked from the global setting, which is null by default. When the traffic label is null, the resource uses the tag on the default gateway (see the sketch after this list)
- Changes to invoke the discoverer to reload the resource object on host connection
- Since a zone can have many physical networks, there can be multiple guest and public networks. Only the zone-wide storage and management traffic labels will be stored in host_details henceforth.
- If traffic labels are updated, the discoverer should update host_details
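A minimal sketch of that label resolution, with method and setting names that are assumptions: a label set on the physical network wins, otherwise the global default is used, and a null result means the resource should use the tag on the default gateway.
    // Illustrative only; names are assumptions.
    class TrafficLabelSketch {
        // Returns null when neither a network-level nor a global label is set,
        // which the resource interprets as "use the tag on the default gateway".
        static String resolveLabel(String networkLabel, String globalDefaultLabel) {
            if (networkLabel != null && !networkLabel.isEmpty()) {
                return networkLabel;
            }
            return globalDefaultLabel; // null by default
        }
    }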
Summary of Changes:
- created a generic way for LB rule validations, so that LB device-specific (like Haproxy) validations can be done synchronously.
- Removed asynchronous validations from Haproxy and made them synchronous.
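A minimal sketch of such a generic validation hook (interface, method names and the example check are assumptions): each LB device plugs in its own validator, which is invoked synchronously on the API path so that an unsupported rule fails immediately instead of failing later inside an async job.
    // Illustrative only; names and the example check are assumptions.
    interface LoadBalancerRuleValidatorSketch {
        // Throws IllegalArgumentException for rules the device cannot support.
        void validateRule(String lbAlgorithm, String stickyMethod, java.util.Map<String, String> stickyParams);
    }

    class HaproxyRuleValidatorSketch implements LoadBalancerRuleValidatorSketch {
        @Override
        public void validateRule(String lbAlgorithm, String stickyMethod, java.util.Map<String, String> stickyParams) {
            // Runs before any async provisioning job is queued.
            if (stickyMethod == null) {
                return; // nothing to validate in this simplified example
            }
            // example check only; the real Haproxy validations differ
            if (!"LbCookie".equals(stickyMethod) && !"AppCookie".equals(stickyMethod) && !"SourceBased".equals(stickyMethod)) {
                throw new IllegalArgumentException("Unsupported sticky method: " + stickyMethod);
            }
        }
    }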
Summary of changes :
- Added a new flag, -s, to the ipassoc command to indicate whether the IP address is
used for SNAT.
- SNAT is completely decoupled from the first-IP flag (-f), which is used
to decide if the IP address is the first IP address of the interface.
- -s and -f are independent; SNAT can be enabled on a non-first IP
as well.
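A rough illustration of how the two flags combine independently when the argument string is built (the layout below is an assumption, not the exact ipassoc invocation produced by the router resource):
    // Illustrative only; not the exact ipassoc argument format.
    class IpAssocArgsSketch {
        static String buildArgs(String publicIp, String device, boolean firstIpOnInterface, boolean sourceNat) {
            StringBuilder args = new StringBuilder("-A");
            if (firstIpOnInterface) {
                args.append(" -f");   // first IP on this interface
            }
            if (sourceNat) {
                args.append(" -s");   // this IP is used for SNAT
            }
            args.append(" -l ").append(publicIp).append(" -c ").append(device);
            return args.toString();
        }
    }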
Reviewed-by:Prasanna.Santhanam@citrix.com
status 13332: resolved fixed
Fixing the regression caused by the fix done for NetScaler in basic zone deployment. ELB does
not need USIP to be set, as a 0.0.0.0/0 port ingress security group rule needs to be configured.
Reviewed-by:Prasanna.Santhanam@citrix.com
This fix configures Inat and LB rules on the NetScaler device to send the source IP received on the packets
as is, so that the configured security groups can take effect.
Summary of changes:
- Multiple routing tables, one per public interface, are added (previously there was only one routing table). When a packet is sent out of a public interface, the corresponding per-interface routing table is used. The per-interface routing table is modified whenever an IP/interface is added or deleted.
- A new parameter is added to the ipassoc command to include the default gateway for every interface/IP. Previously only one public interface was used to send traffic out, and its default gateway was obtained at boot time.
- In the DNAT case, on the reverse path (from the guest VM to the outside, or when the DNAT packet is received from eth0), the public IP/source IP is not available until POSTROUTING. To overcome this, DNAT connections are marked with the routing table number at connection-creation time; on the reverse path, the routing table number from the DNAT connection mark is used to select the per-interface routing table.
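A rough sketch of the kind of per-interface policy routing and connection marking described above; table names, mark values and the exact rule set are assumptions, and the real work is done by the router's shell scripts, shown here only as command strings:
    // Illustrative only; the concrete commands issued by the router scripts differ.
    class PerInterfaceRoutingSketch {
        // Commands that give a public interface (e.g. eth2) its own routing table
        // and steer DNAT reply traffic back through it via a connection mark.
        static String[] commandsFor(String dev, String gateway, int tableNum) {
            String table = "Table_" + dev;
            return new String[] {
                // per-interface routing table with its own default gateway
                "ip route add default via " + gateway + " dev " + dev + " table " + table,
                // marked packets use this interface's table
                "ip rule add fwmark " + tableNum + " table " + table,
                // mark new connections arriving on the public interface with the table number
                "iptables -t mangle -A PREROUTING -i " + dev + " -m state --state NEW -j CONNMARK --set-mark " + tableNum,
                // restore the mark on the reverse path so DNAT replies pick the right table
                "iptables -t mangle -A PREROUTING -i eth0 -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark",
            };
        }
    }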