This patch adds RBD (RADOS Block Device) support for primary storage in combination with KVM.
To get this patch working you need:
- libvirt-java 0.4.8
- libvirt with RBD storage pool support (>0.9.13)
- Qemu with RBD support (>0.14)
The primary storage does not support all of CloudStack's functions yet; for example, snapshotting
is disabled because backing up an RBD snapshot is not possible in the way CloudStack wants to do it.
Creating templates from RBD volumes works well; creating a VM from a template, however, is still hit-and-miss.
NFS primary storage is still required: you are not able to run your System VMs from RBD, so they will
need to run on NFS.
Other than these points, you can run instances with RBD-backed disks.
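For reference, a minimal sketch of defining an RBD-backed storage pool through libvirt-java, roughly what the KVM agent does via the libvirt RBD pool support; the pool name and monitor host are placeholders, and cephx authentication is omitted:

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    // Define and start an RBD storage pool via libvirt-java.
    public class RbdPoolSketch {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            String xml =
                "<pool type='rbd'>" +
                  "<name>cloudstack-rbd</name>" +                      // placeholder pool name
                  "<source>" +
                    "<name>rbd</name>" +                                // Ceph pool backing the volumes
                    "<host name='ceph-mon.example.com' port='6789'/>" + // placeholder monitor
                  "</source>" +
                "</pool>";
            StoragePool pool = conn.storagePoolDefineXML(xml, 0);
            pool.create(0);  // activate the pool so volumes can be created in it
            System.out.println("pool active: " + pool.isActive());
        }
    }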
[Dropped VMware support in this commit, due to the current lack of VMware support in VPC]
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
[Problem]
CloudStack uses a significant amount of third-party software. As part of the move to the ASF, there is a certain set of licenses that are compatible with ASF policy. We need to make sure that every dependency we have is in that set; if it's not, we have to remove it.
[Solution]
First set: Removing JnetPcap.
[Reviewers]
Edison Su, David Nalley
[Testing]
[Test Cases]
Executed ANT build-all successfully after removing JnetPcap and its dependencies.
[Platform]
Fedora release
Signed-off-by: Pradeep <pradeep.soundararajan@citrix.com>
2) Added services api support for plugging/unplugging the nics to VpcElement
Conflicts:
api/src/com/cloud/network/NetworkService.java
core/src/com/cloud/vm/VMInstanceVO.java
server/src/com/cloud/network/NetworkManagerImpl.java
server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java
server/test/com/cloud/network/MockNetworkManagerImpl.java
CS-15016 SSH connections to VSM are not cleared [once the connection limit is exceeded, further connections to the VSM fail]
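The fix boils down to releasing SSH sessions deterministically. A minimal sketch, assuming the trilead-ssh2 client; the host, credentials, and method layout are illustrative, not the actual resource code:

    import com.trilead.ssh2.Connection;
    import com.trilead.ssh2.Session;

    // Always release the SSH session and connection in a finally block so
    // the VSM's connection limit is not exhausted by leaked sessions.
    public class VsmSshSketch {
        public static void runCommand(String host, String user, String password, String cmd)
                throws Exception {
            Connection conn = new Connection(host);
            Session sess = null;
            try {
                conn.connect();
                if (!conn.authenticateWithPassword(user, password)) {
                    throw new Exception("authentication to VSM failed");
                }
                sess = conn.openSession();
                sess.execCommand(cmd);
            } finally {
                if (sess != null) {
                    sess.close();   // free the channel on the VSM
                }
                conn.close();       // free the TCP connection
            }
        }
    }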
Conflicts:
core/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
Enabling virtual NIC association with a distributed vNetwork only when Nexus dvSwitch is enabled.
Conflicts:
core/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Avoid detection of the public traffic label for basic zones. Check switch types along with the global parameter for enabling a particular VMware vSwitch type. Move credentials information into the resource and load it during resource configuration. Cleanup.
Conflicts:
server/src/com/cloud/hypervisor/vmware/VmwareServerDiscoverer.java
Description:
Port profile shaping policies will be fetched
from the Nexus vSwitch instead of vCenter.
ACLs and policies won't be synced to vCenter.
Get the physical network label while adding a cluster.
Cleanup.
Conflicts:
core/src/com/cloud/hypervisor/vmware/manager/VmwareManager.java
server/test/com/cloud/network/MockNetworkManagerImpl.java
Description:
Code changes to manage Cisco Nexus 1000v in CloudStack.
VmwareResource has been modified to leverage Nexus vSwitch.
Providing the following global configuration parameters:
vmware.use.nexus.vswitch -
This decides whether the Nexus vSwitch in the VMware
cluster environment is used/managed by CloudStack
for its network infrastructure needs.
vmware.guest.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch
in the VMware cluster environment for guest traffic.
vmware.private.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch
in the VMware cluster environment for private traffic.
vmware.public.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch
in the VMware cluster environment for public traffic.
Functional Specification -
http://wiki.cloudstack.org/display/RelOps/Cisco+Nexus+1000v+Support+in+CloudStack+-+Functional+Specification
Documentation / README for usage instructions -
http://wiki.cloudstack.org/display/RelOps/Configuration+instructions+for+CloudStack+Deployment+with+Nexus+vSwitch
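A minimal sketch of how these settings could be consulted in code, assuming CloudStack's ConfigurationDao.getValue(); the wrapper class and method names are illustrative, only the setting keys come from this change:

    import com.cloud.configuration.dao.ConfigurationDao;

    // Illustrative wrapper around the new global settings; the real code
    // reads them wherever the VMware resource layer is configured.
    public class NexusSwitchConfigSketch {
        private final ConfigurationDao configDao;

        public NexusSwitchConfigSketch(ConfigurationDao configDao) {
            this.configDao = configDao;
        }

        public boolean useNexusVswitch() {
            // Absent or non-"true" values mean the Nexus vSwitch is not managed.
            return Boolean.parseBoolean(configDao.getValue("vmware.use.nexus.vswitch"));
        }

        public String guestTrafficVswitchType() {
            return configDao.getValue("vmware.guest.network.vswitch.type");
        }
    }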
Conflicts:
core/src/com/cloud/hypervisor/vmware/manager/VmwareManager.java
core/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
server/src/com/cloud/hypervisor/vmware/VmwareServerDiscoverer.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/HostMO.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/HypervisorHostHelper.java
vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContext.java
Description:
1. Changed AddCiscoNexusVSMCmd to:
a. Extend BaseCmd instead of BaseAsyncCmd.
b. Take in more required parameters (viz.
vCenterDCName and vCenterIpAddress).
1a. Changed DeleteCiscoNexusVSMCmd to also
extend BaseCmd.
2. Put in changes that ensure that
when a VSM is added, it is disabled by default.
3. Fixed code that was leading to exceptions
related to DB reads/writes to VSM related tables.
4. Added new API Constants in ApiConstants.java.
NOTE - Always initialize new attributes in
ApiConstants.java to lower-case values;
never use upper case there. Also, regardless
of what names you give attributes in a
*Cmd.java class, you pass in parameters via
API calls by specifying <key>=<value>, where
the <key> is the value you specified in
ApiConstants.java (see the sketch after this list).
5. Modified the addCiscoNexusVSM() function in
CiscoNexusVSMDeviceManagerImpl.java to write VSM
records to the db.
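To make the convention in the NOTE concrete, a small, self-contained illustration; the constant name and value are hypothetical, not copied from ApiConstants.java:

    // Hypothetical illustration of the ApiConstants convention: the
    // constant's *value* (always lower case) is the <key> an API caller
    // passes, regardless of the Java field name in the *Cmd class.
    public class ApiConstantsSketch {
        // As it would appear in ApiConstants.java:
        public static final String VCENTER_DC_NAME = "vcenterdcname";

        public static void main(String[] args) {
            // A caller therefore invokes:
            //   command=addCiscoNexusVSM&vcenterdcname=DC1&...
            // not vCenterDCName=DC1, even if the Cmd field is vCenterDCName.
            System.out.println("key=" + VCENTER_DC_NAME);
        }
    }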
Description:
Adding a netconf helper class for adding and
deleting port profiles. These functions still need
to be parameterized further, and error handling
needs to be taken care of.
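A minimal sketch of the helper's eventual shape; the method names and parameter lists are assumptions, since the commit only says it adds and deletes port profiles:

    // Hypothetical shape of the netconf helper; signatures are assumptions.
    public interface NetconfHelperSketch {
        // Create a port profile on the VSM, e.g. bound to a VLAN.
        void addPortProfile(String profileName, int vlanId) throws Exception;

        // Remove a previously created port profile.
        void deletePortProfile(String profileName) throws Exception;
    }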
Description:
This is work in progress. This set of changes will not
compile. Checking in for team-wide code sync-up.
Changes are underway to test whether VmwareResource can be
leveraged to talk to the VSM, instead of creating a
new resource for the VSM, as we have been doing up
until now.
Reviewed by: Sateesh Chodapuneedi, Devdeep Singh
Description:
This is the first in a series of commits for integrating the
CloudStack Management Server with the Nexus 1000v Virtual
Supervisor Module.
These changes introduce the necessary API command interfaces
to work with a Cisco N1KV VSM. The backend logic is still to
be put in and will be incorporated in subsequent commits.
Please do not attempt to use these APIs until then. Also,
these are not yet filled in into commands.xml, so they are
not currently exposed.
Additional APIs would be added if required.
These changes will not break any current management server
functionality.
Given below is a description of the changes put in here:
Added Cisco N1KV commands to core/api:
These are the added commands -
AddCiscoNexusVSMCmd
DeleteCiscoNexusVSMCmd
ConfigureCiscoNexusVSMCmd
ListCiscoNexusVSMCmd
ListCiscoNexusVSMNetworksCmd
Added a Network Element service file for Cisco N1KV.
Declared the interface functions that we'll need for
the N1KV VSM.
Defined a DeviceVO file for the Cisco Nexus Element.
Created a response file for Cisco Nexus VSM.
Created new event types for external Switching Management devices.
Put in logic to call interface methods in ListCiscoNexusVSMNetworksCmd
and ListCiscoNexusVSMCmd
NOT VSM RELATED:
Fixed minor typo in some of the event types for external load balancers.
Added properties of a VSM in the VSM VO class.
Replaced the "url" input parameter by "ipaddress"
in the AddCiscoNexusVSMCmd API.
Added a new file - CiscoNexusVSMElement.java to
contain the implementation of the functions
declared in the VSMElementService interface, and
put in implementations of the functions for the
Nexus VSM API commands. These functions are
defined in the CiscoNexusVSMElement class.
Added a class for Port Profiles (PortProfile.java).
The fields in this class are still not correctly
declared as of now. We'll make the required changes
going forward.
Added CiscoNexusVSMDeviceManagerImpl class.
Added CiscoNexusVSMResource class.
Created a new class to provide a package to
connect to Cisco Nexus VSMs. This will be a
set of Java wrapper functions that allow us
to connect/disconnect and send commands and
receive the results of those commands via
XML-RPC. These functions are yet to be
implemented, and will be checked in in future
commits.
Added two new classes, VSMCommand and
VSMResponse, to encapsulate XML-RPC commands
and responses to and from a Cisco Nexus VSM.
Put in the following function stubs inside the
CiscoNexusVSMService class:
connectToVSM()
disconnectFromVSM()
executeVSMCommand()
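As a sketch only, the stubs above could take the following shape; parameter and return types are assumptions, the commit names just the three methods:

    // Hypothetical signatures for the three stubs named above.
    public interface CiscoNexusVSMServiceSketch {
        // Open an XML-RPC session to the VSM's management address.
        boolean connectToVSM(String mgmtIpAddress, String username, String password);

        // Tear down the session.
        void disconnectFromVSM();

        // Send one command and return the raw response for the caller to parse.
        String executeVSMCommand(String command);
    }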
Added new field in the Type enum of the "Host"
interface, for Cisco Nexus VSMs.
Added two parameters to AddCiscoNexusVSMCommand
vsmName
zoneId
Modified the CiscoNexusVSMDeviceVO constructor to
take in a zoneId as a parameter when creating
the VO object.
Added a new interface and class for the DeviceDao
implementation for Cisco Nexus VSM devices
(a sketch follows at the end of this change list):
CiscoNexusVSMDeviceDao
CiscoNexusVSMDeviceDaoImpl
Removed the vsmvCenterDomainId property, since it's
going to be the same as vsmDomainId, which is the VSM's
switch domain ID.
Have started putting in query functions in the
CiscoNexusVSMDeviceDao interface.
Put in DAO implementations of some of those functions in the CiscoNexusVSMDeviceDaoImpl class.
Added a vsmName parameter to the CiscoNexusVSMDeviceVO class.
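Following CloudStack's DAO convention, the new interface presumably extends GenericDao; a minimal sketch that compiles against the CiscoNexusVSMDeviceVO defined elsewhere in this change (the finder shown is hypothetical, since the query functions are not listed here):

    import com.cloud.utils.db.GenericDao;

    // Sketch of the DAO; only the class names come from this commit.
    public interface CiscoNexusVSMDeviceDao extends GenericDao<CiscoNexusVSMDeviceVO, Long> {
        // Hypothetical finder; the actual query functions are not listed above.
        CiscoNexusVSMDeviceVO findByManagementIpAddress(String ipAddress);
    }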
status cs-14510: resolved fixed
This fix ensures VPXs provisioned on the SDX by CloudStack will be made part of both
tagged and untagged public networks
Fixed issues with vif scripts on 5.6FP1
Fixed ipv6 issue on 5.6FP1
Plus various other fixes and improvements
Starting to remove debug code
NOTE: Network is configured correctly but instances do not start; possibly an indefinite wait occurring on some commands
2) Added new api - changeServiceForSystemVm - to support service offering upgrade for system vms
3) Removed global config parameters that are not in use anymore: consoleproxy.ram.size, consoleproxy.cpu.mhz, secstorage.vm.ram.size, secstorage.vm.cpu.mhz
status 14463; resolved fixed
Moved all static (config-related) variables to non-static in the SRX server resource, as we can have more than
one SRX device in a zone
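The reasoning: static fields are shared by every instance of the resource class, so a second SRX device in the zone would overwrite the first one's settings. A minimal sketch of the shape of the change; field names are illustrative:

    // Illustrative only; the real SRX resource has many more fields.
    public class SrxResourceSketch {
        // Before: private static String _ip;  // one value shared by ALL SRX resources
        private String _ip;        // after: one value per configured SRX device
        private String _username;  // likewise per-instance
        private String _password;
    }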
Description:
1) Moved RuntimeCloudException from api/ to utils/.
Added a simple constructor to RuntimeCloudException
(a sketch follows at the end of this change list).
Modified all classes that extended RuntimeException
to extend RuntimeCloudException. These classes
are listed below:
ServerApiException
CloudAuthenticationException
CloudExecutionException
AsyncCommandQueued
HypervisorVersionChangedException
RuntimeCloudException
2) Added an overloaded constructor to CloudException.
Modified all classes that extend Exception to extend CloudException instead.
These classes are listed below:
ConcurrentOperationException
ConflictingNetworkSettingsException
ConnectionException
DiscoveryException
InsufficientCapacityException
ManagementServerException
ResourceUnavailableException
VirtualMachineMigrationException
AgentControlChannelException
OperationTimedoutException
UnsupportedVersionException
UsageServerException
UnableDeleteHostException
AgentAuthnException
HttpCallException
ActiveFencingException
ClusterInvalidSessionException
GreTunnelException
OvsVlanExhaustedException
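A minimal sketch of the new unchecked base class; the exact constructor set beyond the "simple constructor" mentioned in item 1 is an assumption:

    // Unchecked exceptions now derive from this instead of RuntimeException
    // directly; checked ones derive from CloudException.
    public class RuntimeCloudException extends RuntimeException {
        public RuntimeCloudException() {
            super();                    // the added simple constructor
        }

        public RuntimeCloudException(String message) {
            super(message);             // assumed convenience constructor
        }

        public RuntimeCloudException(String message, Throwable cause) {
            super(message, cause);      // assumed convenience constructor
        }
    }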
- configuring a unique persistence profile for each LB rule that has a sticky method applied
- removing the source-based sticky method for the source-based LB method, which is not supported by F5
- converted all mandatory params to optional; defaults are filled in internally before sending to haproxy, and each default value is available through the parameter description
- accept holdtime without units
Changes:
- We do not need these global settings anymore; they will be hidden as of 3.0
- The default traffic label will be picked from the global setting, which is null by default. When the traffic label is null, the resource uses the tag on the default gateway
- Changes to invoke the discoverer to reload the resource object on host connection
- Since a zone can have many physical networks, there can be multiple guest and public networks. Only the zone-wide storage and management traffic labels will be stored in host_details henceforth
- If traffic labels are updated, the discoverer should update host_details
Summary of Changes:
- Created a generic way to do LB rule validations, so that LB-device-specific validations (e.g. HAProxy) can be done synchronously.
- Removed asynchronous validations from HAProxy; they are now done synchronously.
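A minimal sketch of what such a synchronous, device-generic validation hook could look like; the interface name, method, and exception choice are assumptions, not the committed API:

    import com.cloud.exception.InvalidParameterValueException;
    import com.cloud.network.lb.LoadBalancingRule;

    // Hypothetical hook: a device plugin validates a rule at API time and
    // throws immediately instead of failing later inside an async job.
    public interface LbRuleValidatorSketch {
        void validate(LoadBalancingRule rule) throws InvalidParameterValueException;
    }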
Summary of changes:
- Added a new flag, -s, to the ipassoc command to indicate whether the IP address is used for SNAT.
- SNAT is completely decoupled from the first flag; the first flag (-f) only decides whether the IP address is the first IP address of the interface.
- -s and -f are independent, so SNAT can be enabled on a non-first IP as well.
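A minimal sketch of how the flags might be composed on the management side; only -s and -f come from this change, the surrounding layout is illustrative:

    // Illustrative only: composes the -f/-s flags described above.
    public class IpAssocFlagsSketch {
        public static String buildFlags(boolean firstIpOnInterface, boolean sourceNat) {
            StringBuilder flags = new StringBuilder();
            if (firstIpOnInterface) {
                flags.append(" -f");   // first IP configured on this interface
            }
            if (sourceNat) {
                flags.append(" -s");   // use this IP for SNAT (independent of -f)
            }
            return flags.toString();
        }

        public static void main(String[] args) {
            // SNAT on a non-first IP is a valid combination.
            System.out.println("ipassoc" + buildFlags(false, true));
        }
    }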
Reviewed-by:Prasanna.Santhanam@citrix.com
status 13332: resolved fixed
Fixing the regression caused by the fix done for NetScaler in basic zone deployments. ELB does
not need USIP to be set; a 0.0.0.0/0 port ingress security group rule needs to be configured
Reviewed-by:Prasanna.Santhanam@citrix.com
This fix configures INAT and LB rules on the NetScaler device to send the source IP received in the packets
as is, so that the configured security groups can take effect
Summary of changes:
- Multiple routing tables, one per public interface, are added (previously there was only one routing table). When a packet is sent out of a public interface, the corresponding per-interface routing table is used. The per-interface routing table is modified whenever an IP/interface is added or deleted.
- A new parameter is added to the ipassoc command to include the default gateway for every interface/IP. Previously only one public interface was used to send traffic out, with the default gateway obtained at boot time.
- In the DNAT case, on the reverse path (from the guest VM to the outside, or when a DNAT packet is received from eth0), the public IP/source IP is not available until POSTROUTING. To overcome this, DNAT connections are marked with the routing table number at connection-creation time; on the reverse path, the routing table number from the DNAT connection mark is used to select the per-interface routing table.