Using Source NAT option on Private Gateway does not work
This fixes #2680
## Description
<!--- Describe your changes in detail -->
When you use the Source NAT feature of Private Gateways on a VPC, it should source-NAT all traffic from CloudStack VMs going towards IPs reachable through Private Gateways.
The change in this PR stops adding the source CIDR to SNAT rules. This should be discussed/reviewed, but I can see no reason why the source CIDR is needed: there can only be one SNAT IP per interface, except for static (one-to-one) NAT, which still works with this change in place. The outbound interface is what matters in the rule.
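For illustration, using the addresses from the reproduction below, the SNAT rule the VR installs would change roughly like this (a sketch, not the literal VR script output):
~~~
# Before: only packets sourced from the private gateway subnet itself match
iptables -t nat -A POSTROUTING -s 10.101.141.0/30 -o eth3 -j SNAT --to-source 10.101.141.2
# After: all packets leaving eth3 match, regardless of source
iptables -t nat -A POSTROUTING -o eth3 -j SNAT --to-source 10.101.141.2
~~~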
<!-- For new features, provide link to FS, dev ML discussion etc. -->
<!-- In case of bug fix, the expected and actual behaviours, steps to reproduce. -->
##### SUMMARY
<!-- Explain the problem/feature briefly -->
There is a bug in the Private Gateway functionality when Source NAT is enabled for the Private Gateway. When the SNAT rule is added to iptables, it carries the source CIDR of the private gateway subnet. Since no VMs live in that subnet, the SNAT never matches.
##### STEPS TO REPRODUCE
<!--
For bugs, show exactly how to reproduce the problem, using a minimal test-case. Use Screenshots if accurate.
For new features, show how the feature would be used.
-->
<!-- Paste example playbooks or commands between quotes below -->
Below is an example:
- VMs have IP addresses in the 10.0.0.0/24 subnet.
- The Private Gateway address is 10.101.141.2/30
In the outputs below, the SOURCE field for the new SNAT rule (eth3) only matches traffic sourced from 10.101.141.0/30. Since the VMs have addresses in 10.0.0.0/24, they don't get SNATed as they should when talking across the private gateway. The SOURCE should be set to ANYWHERE.
##### BEFORE ADDING PRIVATE GATEWAY
~~~
Chain POSTROUTING (policy ACCEPT 1 packets, 52 bytes)
pkts bytes target prot opt in out source destination
2 736 SNAT all -- any eth2 10.0.0.0/24 anywhere to:10.0.0.1
16 1039 SNAT all -- any eth1 anywhere anywhere to:46.99.52.18
~~~
<!-- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
~~~
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 SNAT all -- any eth3 anywhere anywhere to:10.101.141.2
2 736 SNAT all -- any eth2 anywhere anywhere to:10.0.0.1
23 1515 SNAT all -- any eth1 anywhere anywhere to:46.99.52.18
~~~
##### ACTUAL RESULTS
<!-- What actually happened? -->
<!-- Paste verbatim command output between quotes below -->
~~~
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 SNAT all -- any eth3 10.101.141.0/30 anywhere to:10.101.141.2
2 736 SNAT all -- any eth2 10.0.0.0/24 anywhere to:10.0.0.1
23 1515 SNAT all -- any eth1 anywhere anywhere to:46.99.52.18
~~~
## Types of changes
<!--- What types of changes does your code introduce? Put an `x` in all the boxes that apply: -->
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] New feature (non-breaking change which adds functionality)
- [X] Bug fix (non-breaking change which fixes an issue)
- [ ] Enhancement (improves an existing feature and functionality)
- [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
## GitHub Issue/PRs
<!-- If this PR is to fix an issue or another PR on GH, uncomment the section and provide the id of issue/PR -->
<!-- When "Fixes: #<id>" is specified, the issue/PR will automatically be closed when this PR gets merged -->
<!-- For addressing multiple issues/PRs, use multiple "Fixes: #<id>" -->
Fixes: #2680
## Screenshots (if appropriate):
## How Has This Been Tested?
<!-- Please describe in detail how you tested your changes. -->
<!-- Include details of your testing environment, and the tests you ran to -->
<!-- see how your change affects other areas of the code, etc. -->
## Checklist:
<!--- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!--- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
- [x] I have read the [CONTRIBUTING](https://github.com/apache/cloudstack/blob/master/CONTRIBUTING.md) document.
- [x] My code follows the code style of this project.
- [ ] My change requires a change to the documentation.
- [ ] I have updated the documentation accordingly.
Testing
- [ ] I have added tests to cover my changes.
- [ ] All relevant new and existing integration tests have passed.
- [ ] A full integration testsuite with all tests that can run on my environment has passed.
* packaging: use libuuid x86_64 package for cloudstack-common
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 64-bit links are packaged
* post scan filter to exclude libuuid.so.1
* Revert "packaging: use libuuid x86_64 package for cloudstack-common"
This reverts commit b3fb8957fe.
* post scan filter to exclude libuuid.so.1 (centos63)
* revert removal of 32 bit support for vhd-util libs
It is not possible to deploy an Advanced zone with Security Groups and the VXLAN isolation method on KVM; the exception "Unable to convert network offering with specified id to network profile" is logged.
In some environments, running the keystore cert renewal (as the root user)
over an already connected agent connection may cause an exception
such as: `sudo: sorry, you must have a tty to run sudo`. Since all
agents (KVM, CPVM and SSVM) run as the root user, we don't need to run
the renewal scripts with sudo.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Now the KVM agent checks whether a storage pool is already mounted before calling storagePoolCreateXML().
Signed-off-by: Kai Takahashi <k-takahashi@creationline.com>
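A minimal sketch of the guard described above, using the libvirt Java bindings; the helper and the pool-XML handling are illustrative, not the actual agent code:
~~~
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.libvirt.Connect;
import org.libvirt.LibvirtException;

public class PoolMountGuard {
    /** Returns true if targetPath appears as a mount point in /proc/mounts. */
    static boolean isMounted(String targetPath) throws IOException {
        // /proc/mounts has one mount per line: "device mountpoint fstype options 0 0"
        return Files.lines(Paths.get("/proc/mounts"))
                .anyMatch(line -> line.split("\\s+")[1].equals(targetPath));
    }

    /** Create (and thereby mount) the pool only when not already mounted. */
    static void createPoolIfNeeded(Connect conn, String poolXml, String targetPath)
            throws IOException, LibvirtException {
        if (!isMounted(targetPath)) {
            conn.storagePoolCreateXML(poolXml, 0); // errors out on an already-mounted path
        }
    }
}
~~~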
Changes in PR #2508 caused network restart to fail in a Nuage setup,
as the new VR takes the same IP as the old one while the old VR is still running,
and Nuage doesn't support multiple VMs having the same IP.
We therefore delay provisioning the interfaces in VSD until the old VR interface is released.
* Create unit test cases for 'ConfigDriveBuilder' class
* add method 'getProgramToGenerateIso' as suggested by Rohit and Daan
* fix encoding for base64 to StandardCharsets.US_ASCII
* fix MockServerTest.testIsMockServerCanUpgradeConnectionToSsl()
This is another method that has been causing Jenkins to fail for almost a month.
This fixes capacity calculation and ensures that every VM has its capacity
calculated individually, with the initial overcommit ratio override of 1.0f.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When configuring pre-setup primary storage, we can enter the name-label of the storage that is already set up on the host and is going to be used by ACS. The problem is that any string of characters can be used there, and this string does not need to be a UUID. When listing volumes from a primary storage in such a state, the list will return all of the volumes in the cloud, because the “API framework” ignores the value as it is not of UUID type.
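A hedged sketch of the kind of validation that would address this; the names are illustrative, not the actual API-layer code:
~~~
import java.util.UUID;

class NameLabelCheck {
    // Only treat the pre-setup name-label as a UUID filter when it really
    // parses as one; silently ignoring a non-UUID value is what caused
    // every volume in the cloud to be listed.
    static boolean isUuid(String value) {
        try {
            UUID.fromString(value);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
~~~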
Example: A VM that uses managed storage is stopped. The VM is then started on a different host in the same cluster. The Start operation fails.
To get around this issue, you must either start the VM up on the same host or on a host in a different cluster.
The cause is a slightly erroneous check in VolumeOrchestrator.prepare.
To solve this issue, we should be checking whether the cluster ID changes, not whether the host ID changes, as sketched below.
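A minimal sketch of the corrected condition; the method and parameters are hypothetical stand-ins for the VolumeOrchestrator.prepare internals:
~~~
class ManagedStoragePrepareCheck {
    /**
     * The buggy check compared host IDs, so starting the VM on another
     * host in the SAME cluster wrongly triggered a re-prepare and failed.
     * Re-prepare managed storage only when the destination cluster differs.
     */
    static boolean needsPrepare(Long lastClusterId, long destClusterId) {
        return lastClusterId != null && lastClusterId.longValue() != destClusterId;
    }
}
~~~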
This introduces a new global setting, `vm.configdrive.primarypool.enabled`, to toggle the creation/hosting of config drive ISO files on primary storage; the default is false, meaning they are hosted on secondary storage. Support is limited on the hypervisor resource side and, in the current implementation, to `KVM` only. The next big change is that the config drive is created at a temporary location by the management server and shipped to either the KVM or SSVM agent via the cmd-answer pattern, whose data is not logged. This saves us from adding a genisoimage dependency to the cloudstack-agent pkg.
The APIs to reset the SSH public key, password and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating an existing ISO. If there are objections, I'll reinstate the detach-and-attach-a-new-config-ISO strategy as a way of updating. In the refactored implementation, the folder name is changed to the lower-cased configdrive. During VM start, migration or shutdown/removal, the KVM agent will handle cleanup tasks if primary storage is enabled for use; otherwise the SSVM agent will handle them.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a user shuts down their VM from the guest OS (and VM HA is enabled), the VM just powers itself back on. Our environment is on KVM hosts.
CloudStack does not know the difference between a VM failing and a VM being shut down from within the guest OS.
This is a major pain point for all our users, especially since they don't pay for VMs while they are shut off. It is not intuitive for end users to understand why they can't shut down VMs from within the guest OS, especially when they all come from (non-CloudStack) VMware and Hyper-V environments where this is not an issue.
However, if a host fails, we need VM HA to still work.
This PR creates a configuration option "ha.vm.restart.hostup". With this option set to false, if CloudStack sees a VM shut down out-of-band but the host it was on is still online, it won't power the VM back on. The logic is that since the host is online, the VM was most likely shut down from the guest OS.
When a host actually fails, standard VM HA logic takes over and powers on the VMs that were on it (if they have VM HA enabled).
If the "ha.vm.restart.hostup" option is true (the default, matching current functionality), everything works as before, and even an in-guest shutdown causes CloudStack to power the VM back on.
In 4.11.0, I added the ability to online migrate volumes from NFS to managed storage. This actually works for Ceph to managed storage in a private 4.8 branch, as well. I thought I had brought along all of the necessary code from that private 4.8 branch to make Ceph to managed storage functional in 4.11.0, but missed one piece (which is fixed by this PR).
* Primary Storage count for an account does not decrease when a Data Disk is deleted
When a data disk is created and not attached to a running VM, "deleteVolume" does not decrement the used primary storage count in the account's accounting information. The property that is not being decremented is called "primarystoragetotal"; it can be retrieved via the "listAccounts" API method. A sketch of the missing accounting follows the reproduction steps below.
Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with the listAccounts API. It is the same as before deleting the data disk, but it should have returned to the value seen in step 2!
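A hedged sketch of the missing accounting; the counter interface below is an illustrative stand-in for CloudStack's resource-limit service, where the real fix would hook in:
~~~
class PrimaryStorageAccounting {
    /** Illustrative stand-in for the resource-limit service. */
    interface ResourceCounter {
        void decrement(long accountId, String resourceType, long delta);
    }

    /** On delete of a detached data disk, release what creation charged. */
    static void onVolumeDeleted(ResourceCounter counter, long accountId, long volumeSizeBytes) {
        // Without this, "primarystoragetotal" keeps the deleted disk's size.
        counter.decrement(accountId, "primarystoragetotal", volumeSizeBytes);
    }
}
~~~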
* formatting and cleanups
* fix imports that were wrongly changed during rebase
When the agent loses connection with the management server, the
reconnection logic waits for any pending tasks to finish. However,
when such tasks do finish, they fail to send an `Answer` back to the
management server. Therefore, from the management server's
perspective, such pending operations are stuck in an FSM state and
need manual removal or fixing.
This is by design: the management server's side of the cmd-answer
request pattern is code/execution dependent, so even if the answer
were sent when the management server came back up (reconnected), the
management server would fail to acknowledge and process it, due to
missing listeners or to not being in the right state to handle the
answer.
Historically, the agent would wait for internal tasks to complete
before reconnecting, but I found no reason why it should wait at all.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes config drive to use the VM's user-provided host name instead of
the internal VM instance ID for hostname-related config in both the
cloudstack and openstack metadata bundled in the ISO.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This ensures that the password server runs on the dhcp-server-identifier
IP, which by default is not the VRRP virtual IP (10.1.1.1) but the
actual IP of the interface. When DHCP client discovery happens, the
`dhcp-server-identifier` option carries this non-VIP address, which is
used by the password reset script to query the guest VM password.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-9184: Fixes #2631 VMware dvs portgroup autogrowth
This deprecates the vmware.ports.per.dvportgroup global setting.
The vSphere Auto Expand feature (introduced in vSphere 5.0) takes
care of dynamically increasing/decreasing the dvPorts when running
out of distributed ports. However, on vSphere 4.0/4.1 (if used),
where this feature does not exist, the new default value of 8 has an
impact on existing deployments. Action item for vSphere 4.0/4.1: the
admin should change the global setting "vmware.ports.per.dvportgroup"
from 8 to a number suited to their environment, because the proposed
default of 8 would be far too low without the auto expand feature.
The previous default of 256 usually needed no modification after
deployment, but 8 is low enough that the admin needs to update it
immediately after upgrade.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a rolling restart of VRs when isolated and VPC networks
are restarted with the cleanup option. A "make redundant" option is now
shown for isolated networks in the UI.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Support ConfigDrive user data on L2 networks.
Add a UI checkbox to create an L2 network offering with config drive.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This returns the boolean value of the `isuserdefined` key rather than
converting it to a string. Fixes #2529.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10147 Disabled Xenserver Cluster can still deploy VM's. Added code to skip disabled clusters when selecting a host (#2442)
(cherry picked from commit c3488a51db)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10318: Bug on sorting ACL rules list in chrome (#2478)
(cherry picked from commit 4412563f19)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10284: Creating a snapshot from VM Snapshot generates an error if hypervisor is not KVM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10221: Allow IPv6 when creating a Basic Network (#2397)
Since CloudStack 4.10 Basic Networking supports IPv6 and thus
should be allowed to be specified when creating a network.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
(cherry picked from commit 9733a10ecd)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10214: Unable to remove local primary storage (#2390)
Allow admins to remove primary storage pool.
Cherry-picked from eba2e1d8a1
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* dateutil: consistency of tzdate input and output (#2392)
Signed-off-by: Yoan Blanc <yoan.blanc@exoscale.ch>
Signed-off-by: Daan Hoogland <daan.hoogland@shapeblue.com>
(cherry picked from commit 2ad5202823)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10054: Volume download times out in 3600 seconds (#2244)
(cherry picked from commit bb607d07a9)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* When creating a new account (via domain admin) it is possible to select “root admin” as the role for the new user (#2606)
* create account with domain admin showing 'root admin' role
Domain admins should not be able to assign the root admin role to new users. Therefore, the ‘root admin’ role (or any other role of the same type) should not be visible to domain admins.
* License and formatting
* Break long sentence into multiple lines
* Fix wording of method 'getCurrentAccount'
* fix typo in variable name
* [CLOUDSTACK-10259] Missing float part of secondary storage data in listAccounts
* [CLOUDSTACK-9338] ACS not accounting resources of VMs with custom service offering
ACS accounts resources properly when deploying VMs with custom service offerings. However, other methods (such as updateResourceCount) do not execute the resource accounting properly, and these methods update the resource count for an account in the database. Therefore, if a user deploys VMs with custom service offerings and later calls the “updateResourceCount” method, the method will only account for VMs with normal service offerings and store that as the number of resources used by the account. This results in a smaller resource count for the given account than the real used value. The problem becomes worse: if the user starts deleting these VMs, it is possible to reach negative values of allocated resources, breaking all resource limiting for accounts. This is a very serious attack vector for public cloud providers!
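A hedged sketch of the idea behind the fix: for custom (dynamic) offerings the CPU/RAM figures live in per-VM details rather than on the offering row, so any recalculation must read them from there. All names, including the detail key, are assumptions:
~~~
import java.util.Map;

class CustomOfferingAccounting {
    /** Resolve the CPU count a VM actually uses (illustrative). */
    static int resolveCpu(boolean offeringIsCustom, int offeringCpu, Map<String, String> vmDetails) {
        if (offeringIsCustom) {
            // Custom offerings carry the user-chosen values per VM,
            // e.g. under a "cpuNumber" detail key (key name assumed).
            return Integer.parseInt(vmDetails.get("cpuNumber"));
        }
        return offeringCpu;
    }
}
~~~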
* [CLOUDSTACK-10230] User should not be able to use removed “Guest OS type” (#2404)
* [CLOUDSTACK-10230] User is able to change to a “Guest OS type” that has been removed
Users are able to change the OS type of VMs to a “Guest OS type” that has been removed. This becomes a security issue when we try to force users to use HVM VMs (the Meltdown/Spectre mitigation). A removed “guest OS type” should not be usable by any user in the cloud.
* Remove trailing lines that are breaking build due to checkstyle compliance
* Remove unused imports
* fix classes that were in the wrong folder structure
* Updates to capacity management
This allows Ubuntu 18.04 to be used as a KVM host. In addition, when
the hypervisor version key is missing, the UI now displays the host OS
and version details, which is useful for showing the KVM host OS and
version.
When cache mode 'none' is used for empty cdrom drives, system VMs
and guest VMs fail to start on newer libvirtd versions, such as Ubuntu
bionic's. The fix is to ensure that cachemode is not declared when
drives are empty upon starting the VM. A similar issue is logged at
Red Hat: https://bugzilla.redhat.com/show_bug.cgi?id=1342999
The workaround is to ensure that we don't configure cachemode for
cdrom devices at all. This also fixes a live VM migration issue.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
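For illustration, the empty-cdrom disk definition would omit the cache attribute entirely, along the lines of this hypothetical libvirt XML:
~~~
<disk type='file' device='cdrom'>
  <!-- no cache= attribute on the driver element while the drive is empty -->
  <driver name='qemu' type='raw'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
~~~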