distributed routing. Fix ensures there is an appropriate flag in the other
config of the network to indicate the bridge type.
Conflicts:
server/src/com/cloud/network/vpc/VpcManagerImpl.java
to flow rules and applies them on the bridge
Add an event subscriber in OvsTunnelManager that listens for
replaceNetworkAcl events. On each event it sends the updated policy info to all
the hosts in the VPC.
- get the hosts over which a VPC spans, given a VPC id
- get the VMs in the VPC
- get the hosts over which a network spans
- get the VPCs of which a host is a part
- get the VMs of a VPC on a host
Introduces the capability to build a physical topology representation of a
VPC. This JSON file is encapsulated in
OvsVpcPhysicalTopologyConfigCommand and is used to send the full topology
to hypervisor hosts. On the hypervisor this JSON config can be used to set up
tunnels, configure the bridge, add flow rules, etc.
Ovs guru, to use a different broadcast scheme VS://vpcid.grekey for the
networks in a VPC that use distributed routing
each VIF and tunnel interface to carry the network UUID in other/options
config
distributed routing mode, and set up flows appropriately
Script to handle the VPC topology sent from the management server in JSON
format. From the JSON file, the required configuration (tunnel setup and flow
rule setup) is applied on the bridge, as sketched below.
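A minimal Python sketch of what such a handler could do, assuming a hypothetical JSON layout with "hosts" and "flows" keys; the actual field names in OvsVpcPhysicalTopologyConfigCommand and the real script's logic may differ:

    # Hypothetical sketch: parse a VPC topology JSON and apply tunnel/flow config.
    import json
    import subprocess

    def apply_vpc_topology(json_path, bridge):
        with open(json_path) as f:
            topo = json.load(f)
        # Create a GRE tunnel port per remote host listed in the topology (assumed layout).
        for host in topo.get("hosts", []):
            port = "t%s" % host["id"]
            subprocess.check_call(["ovs-vsctl", "--may-exist", "add-port", bridge, port,
                                   "--", "set", "interface", port, "type=gre",
                                   "options:remote_ip=%s" % host["ip"]])
        # Install any flow rules carried in the topology (assumed layout).
        for flow in topo.get("flows", []):
            subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])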
Use the interface wildcard "+" in iptables to cover potentially used VLAN interfaces and allow output on the physical interface.
you will see
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-out bond2+ --physdev-is-bridged
instead of
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-out bond2.1234 --physdev-is-bridged
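For illustration, a rule with the wildcard could be programmed as below; the chain name is an assumption, not the one security_group.py actually uses:

    # Hypothetical sketch: accept bridged traffic leaving via bond2 or any
    # bond2.* VLAN sub-interface by using the "+" wildcard.
    import subprocess

    subprocess.check_call([
        "iptables", "-A", "BF-cloudbr0",           # chain name is an assumption
        "-m", "physdev", "--physdev-is-bridged",
        "--physdev-out", "bond2+",                 # matches bond2, bond2.1234, ...
        "-j", "ACCEPT",
    ])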
Anthony
1) VXLAN will use the bridge scheme 'brvx-<vni>'. Multiple physical networks can host the guest
traffic type with VXLAN isolation, so long as they don't use the same VNI range.
2) Guest traffic labels can be a physical interface if a bridge by the given name is not found.
Normally we take the traffic label name, find the matching bridge, then resolve that to a
physical interface; then we create guest bridges on that interface. Now we can just
specify the interface directly.
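A rough sketch of that resolution order (not the actual BridgeVifDriver code): use the label as a bridge if one exists, otherwise treat it as a physical interface.

    # Hypothetical sketch: resolve a guest traffic label to a bridge or, failing
    # that, to a physical interface to create guest bridges on.
    import os

    def resolve_traffic_label(label):
        if os.path.isdir("/sys/class/net/%s/bridge" % label):
            return ("bridge", label)        # label names an existing bridge
        if os.path.isdir("/sys/class/net/%s" % label):
            return ("interface", label)     # label names a physical interface
        raise ValueError("no bridge or interface named %s" % label)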
XenServer 6.1/6.2 introduce a new virtual platform, so there are two virtual platforms. The Windows PV driver version must match the virtual platform,
so this patch tracks PV driver versions in VM details and template details.
Anthony
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh, a script that handles bridge/VXLAN interface manipulation (a rough sketch of the idea follows this note)
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses same tables that VLAN uses to store vNet allocation status.
Known Issue and/or TODO:
- Some resources still say 'VLAN' in the log even if VXLAN is used
- In the UI, "Network - GuestNetworks" doesn't display the VNI
-- VLAN ID field displays "N/A"
- Documentation!
Signed-off-by: Toshiaki Hatano <haeena@haeena.net>
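A minimal Python sketch of the kind of manipulation modifyvxlan.sh performs; the multicast group derivation and interface names here are assumptions, and the real script is a shell script with more error handling:

    # Hypothetical sketch: create a VXLAN interface for a VNI and enslave it
    # to a per-VNI guest bridge named brvx-<vni>.
    import subprocess

    def create_vxlan_bridge(vni, phys_dev):
        vxlan_if = "vxlan%d" % vni
        bridge = "brvx-%d" % vni
        group = "239.0.%d.%d" % ((vni >> 8) & 0xFF, vni & 0xFF)  # assumed mapping
        subprocess.check_call(["ip", "link", "add", vxlan_if, "type", "vxlan",
                               "id", str(vni), "group", group, "dev", phys_dev])
        subprocess.check_call(["brctl", "addbr", bridge])
        subprocess.check_call(["brctl", "addif", bridge, vxlan_if])
        subprocess.check_call(["ip", "link", "set", vxlan_if, "up"])
        subprocess.check_call(["ip", "link", "set", bridge, "up"])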
There still exist two issues after Edison's commits.
(1) Migration from new hosts to old hosts failed.
The bridge name on old host is set to cloudVirBr* if network.bridge.name.schema is set to 3.0 in /etc/cloudstack/agent/agent.properties, but the actual bridge name is breth*-* after running cloudstack-agent-upgrade.
(2) All ports of VMs (Basic zone, or Advanced zone with security groups) on old hosts are open, because the iptables rules are bound to the device (bridge) name, which is changed by cloudstack-agent-upgrade.
After this, the KVM upgrade steps are:
a. Install 4.2 cloudstack agent on each kvm host
b. Run "cloudstack-agent-upgrade". This script will upgrade all the existing bridge name to new bridge name, and update related firewall rules.
c. install a libvirt hook:
c1. mkdir /etc/libvirt/hooks
c2. cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
c3. chmod +x /etc/libvirt/hooks/qemu
c4. service libvirtd restart
c5. service cloudstack-agent restart
Signed-off-by: Wei Zhou <w.zhou@leaseweb.com>
CloudStack used to access the VNC server in XenServer dom0 to get the VM console; it now uses the XenServer console API, so the getvncport plugin is not needed any more.
Remove the code related to getvncport in XenServer.
Detail: KVM recently got a patch that did away with a few dozen SSH calls
when programming the virtual router (CLOUDSTACK-3163), saving several seconds
for each VM served by the virtual router when the router is rebooted. This
patch updates Xen to use the same method, and cleans up the old script refs.
Reviewed-by: Sheng Yang, Prasanna Santhanam
util.pread2(['ebtables', '-A', vm_chain, '-i', vif, '-s', '!', vm_mac, '-j', 'DROP'])
If a user changes the VM MAC address, all egress packets from the VM will be dropped, but those egress packets still contaminate the bridge cache with the fake MAC.
This patch moves the rule to the ebtables nat table PREROUTING chain, so egress packets with a modified MAC no longer contaminate the bridge cache (a sketch of the moved rule follows).
Anthony
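A hedged sketch of the moved rule, using subprocess in place of the util.pread2 helper above; the exact chain wiring in the real patch may differ:

    # Hypothetical sketch: drop egress frames whose source MAC differs from the
    # VM's MAC in the ebtables nat PREROUTING path, so spoofed frames never
    # reach the bridge's MAC learning cache.
    import subprocess

    def add_mac_spoof_drop(vm_chain, vif, vm_mac):
        subprocess.check_call(["ebtables", "-t", "nat", "-N", vm_chain])
        subprocess.check_call(["ebtables", "-t", "nat", "-A", "PREROUTING",
                               "-i", vif, "-j", vm_chain])
        subprocess.check_call(["ebtables", "-t", "nat", "-A", vm_chain,
                               "-s", "!", vm_mac, "-j", "DROP"])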
When 'security_group.py cleanup_rules' is called by the KVM Agent it will clean up all Instances
not in the "running" state according to libvirt.
However, when a snapshot is created of an Instance it will go to the "paused" state while the snapshot
is created.
This leads to Security Rules being removed when an Instance is being snapshotted and the cleanup process
is initiated.
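A hedged sketch of the kind of check that avoids this, assuming the libvirt Python bindings: treat paused domains (e.g. mid-snapshot) as alive so their rules are not cleaned up.

    # Hypothetical sketch: list domains whose rules should be kept, counting
    # paused domains as alive rather than only running ones.
    import libvirt

    def get_alive_vm_names(uri="qemu:///system"):
        keep_states = (libvirt.VIR_DOMAIN_RUNNING, libvirt.VIR_DOMAIN_PAUSED)
        conn = libvirt.open(uri)
        try:
            return [dom.name() for dom in conn.listAllDomains()
                    if dom.state()[0] in keep_states]
        finally:
            conn.close()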
Normally, in a DHCP reply, the target IP is the IP allocated to the VM.
But Windows 2008 32-bit has a special field in the DHCP reply which makes the reply use 255.255.255.255 as the target IP, and that is blocked by the SG rules (a hedged sketch of an allow rule follows).
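For illustration only, an allow rule of roughly this shape would let such broadcast replies through; the chain name and the exact rule used by security_group.py are assumptions:

    # Hypothetical sketch: allow DHCP server-to-client replies addressed to the
    # broadcast address so Windows 2008 32-bit guests can complete DHCP.
    import subprocess

    def allow_broadcast_dhcp_reply(vm_chain):
        subprocess.check_call([
            "iptables", "-I", vm_chain,
            "-p", "udp", "--sport", "67", "--dport", "68",
            "-d", "255.255.255.255",
            "-j", "ACCEPT",
        ])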
createipAlias.sh/deleteipAlias.sh won't be copied to the XenServer host.
The directory of the scripts should be ".." rather than "../../.." in all the XenServer patch files.
Corrected the path to ".." because the scripts are located at scripts/vm/hypervisor/xenserver/xenserver56/patch
Fixed by updating the patch files that have
entries to copy scripts onto XenServer. Here we added
Add-To-VCPUs-Params-Live.sh
Added a check on host params for whether the host restricts Dynamic Memory Control (DMC), to be able to allow scaling up a VM.
If DMC is not enabled then static max and min are set to the SO values.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
This feature enables adding guest IP ranges (public IPs) from different subnets.
In order to provide the DHCP service to a different subnet we create an IP alias on the router. This allows the router to listen to the DHCP requests from the guest VMs and respond accordingly. Every time a VM is deployed in the new subnet we configure an IP alias on the router. CloudStack uses dnsmasq to provide the DHCP service, so we need to configure dnsmasq to issue IPs on the new subnets. Added a new class DnsmasqConfigurator which generates the dnsmasq config file; this file replaces the old config on the router.
The details of the alias IPs are stored in the DB in the nic_ip_alias table. Every time a new subnet is added, one of the IPs from the subnet is used to configure the IP alias.
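A minimal sketch of the kind of config such a configurator could emit; the tag names, lease time and input layout are illustrative assumptions, not the actual generated file:

    # Hypothetical sketch: emit one dhcp-range per subnet the router serves,
    # including the alias subnets, tagged so options can be set per subnet.
    def build_dnsmasq_ranges(subnets):
        # subnets: list of dicts like {"tag": ..., "start": ..., "end": ..., "netmask": ...}
        lines = []
        for s in subnets:
            lines.append("dhcp-range=set:%s,%s,%s,%s,infinite"
                         % (s["tag"], s["start"], s["end"], s["netmask"]))
        return "\n".join(lines)

    # Example output line:
    # dhcp-range=set:alias1,10.1.2.2,10.1.2.254,255.255.255.0,infinite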
I have pushed the code to https://github.com/bvbharatk/cloud-stack/tree/Cloudstack-702 and also rebased the code with master.
I need to test the code for an advanced SG-enabled network using KVM.
I have added the unit test.
Marvin tests are at https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=53e4965
Also accommodated some of the changes suggested by Koushik.
Corrected the import statements. Renamed the IpAlias command to the createIpAlias command.
This feature supports only IPv4.
Start/stop of the VM/DHCP server is done. VM migration is not done yet.
A new command (PvlanSetupCommand) is sent for setting up PVLAN for VMs. Currently
it focuses on the OVS implementation. It needs to be made more abstract, and the vSwitch part added.
Detail: Added exception handling around iptables chain flushing, along
with a call to default_network_rules() to re-initialize.
Testing:
On the agent, ls /var/run/cloud and pick one of the VMs to test with. Make a
backup of its logfile (e.g. cp /var/run/cloud/i-2-1722.log /tmp )
Destroy the firewall ruleset for that VM with
/usr/lib64/cloud/common/scripts/vm/network/security_group.py destroy_network_rules_for_vm --vmname i-2-1722-VM --vif vnet10
Now copy the log file back, edit the file and decrement the last field by 1
ACS should notice the out-of-date sequence ID and push a new ruleset for
the VM within 60 seconds.
BUG-ID: CLOUDSTACK-1685
Bugfix-for: John Kinsella
Reviewed-by:
Reported-by:
Signed-off-by: John Kinsella <jlk@stratosec.co> 1363286927 -0700
Detail: A grep in security_group.py wasn't defined tightly enough and could
potentially delete rules for VMs other than the intended one
BUG-ID: CLOUDSTACK-309
Bugfix-for: master
Reviewed-by:
Reported-by: Francois Scala
Signed-off-by: John Kinsella <jlk@stratosec.co> 1363222521 -0700
Detail: Code was attempting to concatenate an exception to a string.
Updated to convert it to text and concatenate that.
BUG-ID: CLOUDSTACK-1052
Bugfix-for: master
Reported-by: Noa Resare
Signed-off-by: John Kinsella <jlk@stratosec.co> 1363218769 -0700
Detail: This gets rid of the patchdisk method of passing cmdline and
authorized_keys to KVM system VMs. It instead passes them to a virtio socket,
which the KVM guest reads from the character device /dev/vport0p1 during
cloud-early-config. Tested to work on CentOS 6.3 and Ubuntu 12.04. Should
work with even older versions of libvirt.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1362691685 -0700
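A rough sketch of the guest-side read during cloud-early-config; the real systemvm code is shell, and the buffer size here is arbitrary:

    # Hypothetical sketch: read the cmdline/authorized_keys payload that the
    # host wrote into the virtio-serial channel exposed as /dev/vport0p1.
    def read_virtio_payload(dev="/dev/vport0p1", max_bytes=16384):
        with open(dev, "rb", buffering=0) as port:
            return port.read(max_bytes).decode("utf-8", "replace")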
cause writing the heartbeat to take longer than expected. By comparing
the last successful heartbeat epoch with the current epoch we check if
the timeout value has been met.
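A hedged sketch of that comparison; the heartbeat file path and timeout are hypothetical, and the real check lives in the KVM heartbeat script:

    # Hypothetical sketch: only declare the heartbeat failed once the time since
    # the last successful heartbeat epoch actually exceeds the timeout.
    import time

    def heartbeat_expired(hb_file="/var/run/cloud/kvmheartbeat", timeout=60):
        try:
            with open(hb_file) as f:
                last_epoch = int(f.read().strip())
        except (IOError, ValueError):
            return True  # an unreadable heartbeat counts as expired
        return (time.time() - last_epoch) > timeout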
Checks the args length so it doesn't throw an IndexError when no args are
passed. Also logs to security_group.log when executed with no args or an unknown
command (a rough sketch follows below).
Review: https://reviews.apache.org/r/9588
Reviewed-by: Rohit Yadav <bhaisaab@apache.org>
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
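A rough sketch of the guard; the log path, command names and dispatch are illustrative assumptions, not the actual security_group.py code:

    # Hypothetical sketch: validate argv before indexing into it, and log
    # missing or unknown commands instead of raising IndexError.
    import logging
    import sys

    logging.basicConfig(filename="/var/log/cloudstack/agent/security_group.log",  # assumed path
                        level=logging.DEBUG)

    KNOWN_COMMANDS = {"default_network_rules", "destroy_network_rules_for_vm",
                      "cleanup_rules"}  # subset, for illustration

    def main(argv):
        if len(argv) < 2:
            logging.debug("security_group.py called with no command")
            return 1
        cmd = argv[1]
        if cmd not in KNOWN_COMMANDS:
            logging.debug("security_group.py called with unknown command: %s", cmd)
            return 1
        # ... dispatch to the handler for cmd ...
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv))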
- Fixed new join DAO impls as Spring components
- Fixed the component context xml to load the API rate limit checker
- Fixed root pom.xml for a duplicate plugin
- Fixed the list data centers method
- Fixed the following conflicts:
api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkOfferingCmd.java
api/src/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java
api/src/org/apache/cloudstack/api/command/user/template/DeleteTemplateCmd.java
api/src/org/apache/cloudstack/api/command/user/template/ExtractTemplateCmd.java
plugins/api/discovery/src/org/apache/cloudstack/discovery/ApiDiscoveryServiceImpl.java
server/src/com/cloud/api/ApiDBUtils.java
server/src/com/cloud/api/ApiServer.java
server/src/com/cloud/api/query/QueryManagerImpl.java
server/src/com/cloud/configuration/DefaultComponentLibrary.java
server/src/com/cloud/server/ManagementServerImpl.java
server/src/com/cloud/storage/swift/SwiftManagerImpl.java
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Detail: several scripts in the scripts directory weren't marked executable.
Normally this is handled/fixed in the packaging, but harder to deal with in
development environments, so marking them executable.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1358446902 -0700
Detail: Users can experience long delays during VM migration, because the
Linux bridge by default has a forwarding delay set. This means that the
network will likely miss any gratuitous ARP from qemu notifying the network that
the MAC has moved. This change is a common recommendation for virtualization
running on Linux bridges (see the sketch below).
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1357259186 -0700
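For illustration, the change amounts to turning the forwarding delay off on the guest bridge; a sketch with a hypothetical bridge name:

    # Hypothetical sketch: disable the Linux bridge forwarding delay so a
    # migrated VM's gratuitous ARP is forwarded immediately instead of being
    # dropped while the port sits in the listening/learning states.
    import subprocess

    def disable_forward_delay(bridge="cloudbr0"):
        subprocess.check_call(["brctl", "setfd", bridge, "0"])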
- Since we're always getting the first item from the list, use head -1 to get the first
of the results instead of processing them again
- Remove an unnecessary pop (why was it even there?)
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
work)
CloudStack seems to let you create guest traffic types on multiple
physical networks. However, when I try this with KVM I always end up
bridging to whatever device is used for guest.network.device. This pulls
the traffic label (NicTO.getName()) and uses that bridge to ensure that
we get on the correct physical network, rather than just always using
the guest.network.device.
This also changes the bridge naming scheme from cloudVirBr + vlanid to
br + physicalinterface + "-" + vlanid. This is because we should be able
to support the same VLAN numbers per physical network, and the previous
bridge name would not support this and would collide (a sketch of the naming follows).
Signed-off-by: Edison Su <sudison@gmail.com>
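A purely illustrative sketch of the naming change described above:

    # Hypothetical sketch: the old name collides when two physical networks
    # reuse a VLAN ID; the new name includes the physical interface, so it doesn't.
    def old_bridge_name(vlan_id):
        return "cloudVirBr%s" % vlan_id          # e.g. cloudVirBr100

    def new_bridge_name(phys_if, vlan_id):
        return "br%s-%s" % (phys_if, vlan_id)    # e.g. breth1-100 vs breth2-100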
host if there are multiple storage pools in a cluster.
The issue is as follows:
1. When CloudStack detects that a host is not responding to ping
requests it'll send a fence command for this host to another host in the
cluster.
2. The agent takes a long time to respond to this check if the storage
is fenced. This is because the agent checks if the first host is writing
to its heartbeat file on all pools in the cluster. It is doing this in a
sequential manner on all storage pools.
Making a fix to get rid of the sleep/wait during HA. The behavior is now
similar to XenServer.
RB: https://reviews.apache.org/r/6133/
Send-by: devdeep.singh@citrix.com
The default_network_rules_systemvm method in security_group.py only created the appropriate rules for
a single bridge.
This however leads to traffic not being forwarded to the virtual machine in the case where the system VMs
(both console proxy & secondary storage) have different bridges in basic networking.
This patch makes sure rules are generated for all target devices based on their source device/bridge.
It however excludes the LinkLocalBridge since no filtering is needed on that bridge (a rough sketch follows).
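A hedged sketch of iterating over all of the system VM's (vif, bridge) pairs rather than a single bridge; the chain handling, rule shape and link-local bridge name are assumptions:

    # Hypothetical sketch: apply the default system-VM accept rules once per
    # source device/bridge pair, skipping the link-local bridge.
    import subprocess

    LINK_LOCAL_BRIDGE = "cloud0"  # assumed name of the link-local bridge

    def default_network_rules_systemvm(vm_chain, vif_bridge_pairs):
        # vif_bridge_pairs: list of (vif, bridge) tuples; vm_chain is assumed to exist.
        for vif, bridge in vif_bridge_pairs:
            if bridge == LINK_LOCAL_BRIDGE:
                continue  # no filtering needed on the link-local bridge
            for direction in ("--physdev-in", "--physdev-out"):
                subprocess.check_call([
                    "iptables", "-A", vm_chain,
                    "-m", "physdev", "--physdev-is-bridged", direction, vif,
                    "-j", "ACCEPT",
                ])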
CS-14902: Fixing ovs-vif-flows.py to avoid it kicking in with exponential backoff timeouts if OVS is not running
Also removing unnecessary copies of the same script
Fixed issues with vif scripts on 5.6FP1
Fixed ipv6 issue on 5.6FP1
Plus other various fixes and improvements
Starting to remove debug code
NOTE: Network is configured correctly but instances do not start. Possibly an indefinite wait is occurring on some commands