CLOUDSTACK-8891: Fixed default iptables rules on VR for guest traffic
The VR's default iptables rules in the INPUT chain were only partially configured: in CsAddress.py the rules are set up while configuring the public interface, but the post-configuration step for the guest interface was missed. Fixed by running post-configuration for the guest interface as well, so that its iptables rules are configured.
Testing:
1. Deployed a VM in the network.
2. iptables rules on the VR were configured correctly.
3. The VM got a DHCP IP address from the VR.
* pr/867:
CLOUDSTACK-8891: Fixed default iptables rules on VR for guest traffic
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8864: Not able to add TCP port forwarding rule in VPN for specific ports
Setting port forwarding rules for ports 500, 1701, and 4500 after enabling VPN gives the error message "The range specified, xxxx, conflicts with rule xxxx which has xxxx." This happens because the rules added for VPN don't have a matching condition that allows port forwarding rules.
Added a unit test to verify the detectRulesConflict function of FirewallManagerImpl.
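For illustration, a self-contained sketch of the kind of matching condition involved; the names and the purpose comparison here are assumptions, not the actual FirewallManagerImpl patch:

public class RuleConflictSketch {
    static boolean overlaps(int aStart, int aEnd, int bStart, int bEnd) {
        return aStart <= bEnd && bStart <= aEnd;
    }

    // A bare range-overlap test would reject port forwarding rules for
    // 500/1701/4500 once the system VPN rules exist; also comparing the
    // rules' purposes lets rules created for different purposes coexist.
    static boolean conflicts(int aStart, int aEnd, String aPurpose,
                             int bStart, int bEnd, String bPurpose) {
        return overlaps(aStart, aEnd, bStart, bEnd) && aPurpose.equals(bPurpose);
    }

    public static void main(String[] args) {
        // System VPN rule on port 500 vs. a user port forwarding rule on 500:
        System.out.println(conflicts(500, 500, "Vpn", 500, 500, "PortForwarding")); // false
    }
}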
* pr/851:
CLOUDSTACK-8864: Not able to add TCP port forwarding rule in VPN for specific ports
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8863: VM doesn't reconnect to internet post VR RESTART/STOP-START/RECREATE
An ongoing ICMP request/reply session is broken while the VR is down; the expectation is that it resumes once the VR is back up. Investigation revealed that the ongoing ICMP packets are sent out of eth2 without being NATed after a VR stop/start, restart, or recreate.
tcpdump output on eth2 from the VR after a restart/stop-start/recreate:
root@r-4-VM:~# tcpdump -i eth2 icmp -n -vvv
tcpdump: listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
06:22:52.749770 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.200.67 > 173.194.33.163: ICMP echo request, id 30996, seq 81, length 64
06:22:53.749782 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.200.67 > 173.194.33.163: ICMP echo request, id 30996, seq 82, length 64
06:22:54.749771 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.200.67 > 173.194.33.163: ICMP echo request, id 30996, seq 83, length 64
06:22:55.749775 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.200.67 > 173.194.33.163: ICMP echo request, id 30996, seq 84, length 64
06:22:56.749765 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.200.67 > 173.194.33.163: ICMP echo request, id 30996, seq 85, length 64
06:22:57.749776 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.200.67 > 173.194.33.163: ICMP echo request, id 30996, seq 86, length 64
^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel
root@r-4-VM:~#
root@r-4-VM:~# grep icmp /proc/net/ip_conntrack
icmp 1 29 src=192.168.200.67 dst=173.194.33.163 type=8 code=0 id=30996 [UNREPLIED] src=173.194.33.163 dst=192.168.200.67 type=0 code=0 id=30996 mark=0 use=2
The conntrack entry above is stuck in the [UNREPLIED] state from before the restart, so NAT is never applied to the pre-existing flow. This gets fixed by flushing the conntrack table.
Screenshots (not reproduced here):
Before fix: ping session doesn't resume; stopping and restarting the ping works; 120 packets lost.
After fix: ping session resumes; 27 packets lost.
* pr/836:
CLOUDSTACK-8863: VM doesn't reconnect to internet post VR RESTART/STOP-START/RECREATE
Signed-off-by: Remi Bergsma <github@remi.nl>
Fixed box location on vagrant files for devcloud4 (CLOUDSTACK-8898)
The centos-6.5 box is no longer available at its old location.
* pr/875:
Added fix to binary installation vagrant files (CLOUDSTACK-8898)
Fixed box location on vagrant files
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
[4.6][BLOCKER]CLOUDSTACK-8890: Added isEmpty() check to prevent NullPointerException.
Check if the list is empty before trying to get the first entry. If the list is empty, for example when dealing with projects, the caller's user id is used instead.
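A minimal sketch of the guard, with hypothetical names:

import java.util.List;

public class FirstUserSketch {
    // Projects have no users of their own, so the fetched list can be empty;
    // fall back to the caller's user id instead of blindly taking entry 0.
    static long resolveUserId(List<Long> accountUserIds, long callerUserId) {
        if (accountUserIds == null || accountUserIds.isEmpty()) {
            return callerUserId;
        }
        return accountUserIds.get(0);
    }
}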
Tests to verify working order:
1. Deploy ACS
2. Create project
3. Create resource in project -> Should succeed!
* pr/878:
Added isEmpty() check to prevent nullPointerException.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
[4.6][BLOCKER]CLOUDSTACK-8763: Resolved POD/ZONE deletion failure.
Instead of having checkIfPodIsDeletable() and checkIfZoneIsDeletable() each build their own raw SQL query, I've refactored them to use DAO queries.
This resolves the SQLException thrown by both methods.
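As a sketch of the shape of the refactor (the DAO interfaces here are trimmed-down assumptions; the commit list below names the methods actually added):

import java.util.List;

public class ZoneDeletableSketch {
    interface VolumeDao { List<Long> findByDc(long dcId); }
    interface IPAddressDao { long countIPs(long dcId, boolean onlyCountAllocated); }

    // Each resource check becomes a typed DAO call instead of a hand-written
    // SQL string, which is what was raising the SQLException.
    static void checkIfZoneIsDeletable(long dcId, VolumeDao volumeDao, IPAddressDao ipDao) {
        if (!volumeDao.findByDc(dcId).isEmpty()) {
            throw new IllegalStateException("There are still volumes in this zone.");
        }
        if (ipDao.countIPs(dcId, true) > 0) {
            throw new IllegalStateException("There are still allocated public IPs in this zone.");
        }
        // ...similar DAO-backed checks for hosts, pods, clusters, VMs, etc.
    }
}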
Test to confirm working order:
- deploy ACS
- Add zones / pods. -> Try to delete without resources. -> Direct success.
- Add resources to zones / pods. -> Try to delete with resources in the pod / zone. -> Correct exception thrown. (The error message states why the zone / pod cannot be removed, e.g. there are still hosts or VMs or .... )
* pr/845:
Added unit tests for checkIfPodIsDeletable() and checkIfZoneIsDeletable().
Updated Dao classes with correct field names.
Refactored checkIfZoneIsDeletable().
Added findByDc(long dcId) to VolumeDao and VolumeDaoImpl.
Added countIPs(long dcId, boolean onlyCountAllocated) to IPAddressDao and IPAddressDaoImpl.
Added countIPs(long dcId, boolean onlyCountAllocated) to DataCenterIpAddressDao and DataCenterIpAddressDaoImpl.
Refactored checkIfPodIsDeletable().
Added findByPodId(Long podId) to HostDao and HostDaoImpl.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
CLOUDSTACK-8726: Automation for quickly attaching multiple data disks to a new VM
Attach multiple Volumes simultaneously to a Running VM ... === TestName: test_attach_multiple_volumes | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 196.931s
OK
* pr/683:
changed the testcase skip code into setup method
Imparting changes mentioned by nitt10prashant
Automation for multiple disk attachments to instance
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-8893: Fixing script as per the latest functionality
Please check https://issues.apache.org/jira/browse/CLOUDSTACK-8893 for more details.
* pr/871:
Modified test description
CLOUDSTACK-8893: Fixing script as per the latest functionality
Signed-off-by: sanjeev <sanjeev@apache.org>
[4.6][BLOCKER]CLOUDSTACK-8883: Resolved connect/reconnect issue.
Hi!
@wilderrodrigues by implementing Callable you switched a couple of methods and fields. I switched them some more!
The reason why the Agent wouldn't reconnect was due to two facts.
Problem 1: Selector was blocking.
In the while loop at [1], _selector.select() was blocking when the connection was lost. This meant that _isStartup = false; at [2] was never executed, so the call to isStartup() at [3] always returned true, resulting in an infinite loop.
Resolution 1: Move the call to cleanUp() [4] before the check for isStartup() having turned false. cleanUp() will close() the _selector, which causes _isStartup to be set to false.
Problem 2: _isStartup & _isRunning were set to true even when init() threw a ConnectException.
The exception was caught, but only logged; no further action was taken. As a result _isStartup & _isRunning were still set to true, so the Agent believed it had connected successfully even though it hadn't.
Resolution 2: Add a return to the catch block [5] so that _isStartup & _isRunning are not set to true.
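A toy sketch of both resolutions together (hypothetical names, not the real Agent/NioConnection code):

public class ReconnectSketch {
    private volatile boolean isStartup;

    void start(String host, int port) {
        try {
            init(host, port);             // may throw ConnectException
        } catch (java.net.ConnectException e) {
            System.err.println("Connect failed: " + e.getMessage());
            return;                       // Resolution 2: bail out instead of
        }                                 // marking the agent as started
        isStartup = true;
    }

    void shutdownLoop() {
        do {
            cleanUp();                    // Resolution 1: cleanUp() runs before the
        } while (isStartup());            // isStartup() check, so the loop can end
    }

    void init(String host, int port) throws java.net.ConnectException { /* connect */ }
    void cleanUp() { isStartup = false; } // close() on the selector resets the flag
    boolean isStartup() { return isStartup; }
}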
Steps to test:
1. Deploy ACS.
2. Try all combinations of stopping/starting the management server and agent.
[1] b34f86c8d5/utils/src/main/java/com/cloud/utils/nio/NioConnection.java (L128)
[2] b34f86c8d5/utils/src/main/java/com/cloud/utils/nio/NioConnection.java (L176)
[3] b34f86c8d5/agent/src/com/cloud/agent/Agent.java (L404)
[4] b34f86c8d5/agent/src/com/cloud/agent/Agent.java (L399)
[5] b34f86c8d5/utils/src/main/java/com/cloud/utils/nio/NioConnection.java (L91)
* pr/863:
Added return statement to stop start() if there has been an ConnectException.
Call cleanUp() before looping isStartup().
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
CLOUDSTACK-8826: XenServer - Use device id passed as part of attach volume API properly
If a device id is passed as part of the API call and is available, use it; otherwise fall back on XenServer to automatically assign one.
For an ISO the device id used is 3, and it is processed before any other entry to avoid conflicts.
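A sketch of the selection logic under assumed names:

import java.util.Set;

public class DeviceIdSketch {
    static final long ISO_DEVICE_ID = 3; // ISOs always get id 3, processed first

    // Returns the caller's device id when supplied and free; null means
    // "fall back on XenServer to auto-assign one".
    static Long chooseDeviceId(Long requested, Set<Long> idsInUse) {
        if (requested != null && !idsInUse.contains(requested)) {
            return requested;
        }
        return null;
    }
}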
Signed-off-by: Koushik Das <koushik@apache.org>
CLOUDSTACK-8851 Redundant VR getting started in the same cluster or host even when there are suitable hosts available
We are not populating the deployment destination of the previous RVR in the avoid set of the RVR being created. This sometimes resulted in both RVRs getting deployed to the same host, even when one could have gone to a different one.
Now we update the avoid set of the deployment plan to fix this issue.
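In sketch form (hypothetical names, not the actual DeploymentPlan/avoid-set API):

import java.util.HashSet;
import java.util.Set;

public class RvrAvoidSketch {
    // Seed the new RVR's avoid set with the host already running its peer,
    // so the planner cannot co-locate both routers.
    static Set<Long> avoidSetFor(Long peerRouterHostId) {
        Set<Long> avoidHostIds = new HashSet<>();
        if (peerRouterHostId != null) {
            avoidHostIds.add(peerRouterHostId);
        }
        return avoidHostIds;
    }
}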
* pr/839:
CLOUDSTACK-8851 Redundant VR getting started in the same cluster or host even when there are suitable hosts available
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8840: Systemd service for the Usage Server
There was already an incomplete systemd service file for the Usage Server.
This new one replaces both the sysvinit script and the old systemd service file.
* pr/820:
CLOUDSTACK-8840: Do not include old systemd wrapper
CLOUDSTACK-8840: Fix the source path of the service file
CLOUDSTACK-8840: Systemd service for the Usage Server
Signed-off-by: Wido den Hollander <wido@widodh.nl>
CLOUDSTACK-8820: Support for VMware vCenter 6 data center
CLOUDSTACK-8820: Error shown when trying to add an advanced zone using a VMware ESXi 6.0 host
Summary: In vCenter 6.0, the response headers need to be fetched after service login to obtain the server cookie, unlike in previous versions of vCenter.
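A sketch using the standard JAX-WS response context; the vim25 stub type and the cookie header name are assumptions:

import java.util.List;
import java.util.Map;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.handler.MessageContext;

public class VcenterCookieSketch {
    // After calling login() on the vim25 port, pull the session cookie out of
    // the JAX-WS response context; with vCenter 6 it is only present post-login.
    @SuppressWarnings("unchecked")
    static String extractSessionCookie(Object vimPort /* vim25 stub, assumed */) {
        Map<String, List<String>> headers = (Map<String, List<String>>)
                ((BindingProvider) vimPort).getResponseContext()
                        .get(MessageContext.HTTP_RESPONSE_HEADERS);
        List<String> cookies = headers == null ? null : headers.get("Set-cookie");
        return (cookies == null || cookies.isEmpty()) ? null : cookies.get(0);
    }
}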
* pr/806:
CLOUDSTACK-8820: Updated the code for vCenter6 data center support.
CLOUDSTACK-8820: Showing error when try to add advance zone using VMWare ESXi 6.0 host Summary: In vCenter 6.0, response headers need to be fetched after service login for server cookie unlike previous versions of vCenter.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8625: Systemd profile for CloudStack Agent
With CentOS 7 and Ubuntu 16.04 (to be released) using systemd,
it is preferred that CloudStack's Agent is also started via systemd.
This commit includes a service file for the CloudStack Agent with
a wrapper script which actually executes Java.
It no longer uses jsvc for daemonizing, so that requirement
has also been dropped from the CentOS 7 packaging.
The Agent log output to stdout has also been modified to no longer
include the timestamp as this is done by journalctl.
This has been tested on a CentOS 7.1 machine and the Agent starts,
stops and restarts properly.
* pr/813:
CLOUDSTACK-8625: Remove the need of a wrapper script for the Agent
CLOUDSTACK-8625: Updated spec file for systemd profile
CLOUDSTACK-8443: Install the systemd wrapper script in RPM
CLOUDSTACK-8625: Systemd profile for CloudStack Agent
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Replaced all occurrences of Charset.forName("UTF-8") with StringUtils.getPreferredCharset().
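For example (the commented line assumes cloud-utils' com.cloud.utils.StringUtils on the classpath):

import java.nio.charset.Charset;

public class CharsetSketch {
    public static void main(String[] args) {
        String s = "cloudstack";
        byte[] before = s.getBytes(Charset.forName("UTF-8"));            // old pattern
        // byte[] after = s.getBytes(StringUtils.getPreferredCharset()); // new pattern
        System.out.println(before.length);
    }
}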
* pr/825:
Replaced all occurences of Charset.forName(UTF-8) with StringUtils.getPreferredCharset().
Signed-off-by: Daan Hoogland <daan@onecht.net>
CLOUDSTACK-8834: Fixed unable to download template when in multiple zones
We were listing image stores by zone id, which resulted in listing only one image store.
If the template's download state in that image store is not DOWNLOADED, the template download fails.
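In sketch form (types and state names are trimmed-down assumptions):

import java.util.List;

public class TemplateStoreSketch {
    interface TemplateStoreRef { String getDownloadState(); String getDownloadUrl(); }

    // Scan the template's references across ALL image stores rather than only
    // the single store selected by zone id, and serve from a completed copy.
    static String findDownloadUrl(List<TemplateStoreRef> refsAcrossZones) {
        for (TemplateStoreRef ref : refsAcrossZones) {
            if ("DOWNLOADED".equals(ref.getDownloadState())) {
                return ref.getDownloadUrl();
            }
        }
        return null; // no image store holds a fully downloaded copy yet
    }
}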
* pr/804:
CLOUDSTACK-8834: Fixed unable to download Template , when in multi zones We were listing image stores by zone id which was resulting in listing of only one image store If in that image store its download state is not DOWNLOADED then download template is failing
Signed-off-by: Wido den Hollander <wido@widodh.nl>
CLOUDSTACK-8645: Improve logging of RBD functionality in KVM
A simple commit which changes a couple of log lines.
* pr/821:
CLOUDSTACK-8645: Improve logging of RBD functionality in KVM
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Tagging tests appropriately to pick them for running on basic zone
Adding an additional tag to the list of tags.
* pr/819:
Tagging tests appropriately to pick them for running on basic zone
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>