* Change logrotate interval to hourly
The logrotate config specifies an hourly interval, but it relies
on the timer service to invoke it, and the timer frequency
is set to 12h, so it won't be invoked every hour.
So change the timer frequency to hourly.
* Add change to vpc router
Inclusivity changes for CloudStack
- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.
This PR updates the default git branch to 'main', as part of #4887.
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes #5058
When starting a VM, the old entries in the databag for that VM (with the same MAC addresses) should be removed and then set again, to avoid duplicated records in the dhcpentry databag and in /etc/dhcphosts.txt.
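For illustration, a minimal Python sketch of the idea, assuming the databag is modeled as a list of dicts keyed by 'mac_address' (names are hypothetical, not the actual VR code):

```python
def refresh_dhcp_entry(databag, entry):
    # Drop stale records sharing a MAC address with the starting VM,
    # then re-add its current entry, so no duplicates survive.
    databag = [e for e in databag if e['mac_address'] != entry['mac_address']]
    databag.append(entry)
    return databag
```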
Testing with Isolated networks:
(1) stop vm, change vm ip address, start vm
vm info is updated in /etc/dhcphosts.txt and /etc/cloudstack/dhcpentry.json
(2) stop vm, expunge vm.
vm is removed from /etc/dhcphosts.txt and /var/lib/misc/dnsmasq.leases
Testing with VPC:
(1) create vm in 2 vpc tiers
vm has 2 entries in /etc/dhcphosts.txt, and /etc/cloudstack/dhcpentry.json
(2) stop vm, change ip addresses, change nics order, start vm
entries are updated in /etc/dhcphosts.txt and /etc/cloudstack/dhcpentry.json
(3) remove a nic from vm (hot unplug)
vm nic is removed from /etc/dhcphosts.txt and /var/lib/misc/dnsmasq.leases
entry in /etc/cloudstack/dhcpentry.json is updated.
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives the option of separating the first right subnet from the rest, emitting an individual conn statement for each right subnet for better cross-compatibility.
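A hedged Python sketch of how split connections might be emitted (helper name and structure are assumptions, simplified from the actual strongswan config generation):

```python
def vpn_conn_sections(name, right_subnets, ike_version="ike", split=False):
    # With split connections, each right subnet gets its own conn
    # statement; otherwise all subnets share a single conn.
    groups = [[s] for s in right_subnets] if split else [right_subnets]
    sections = []
    for i, subnets in enumerate(groups):
        sections.append(
            "conn %s-%d\n  keyexchange=%s\n  rightsubnet=%s\n"
            % (name, i, ike_version, ",".join(subnets)))
    return sections
```

In strongswan, keyexchange accepts ike (autoselect), ikev1, or ikev2, which is what the IKE version option maps to.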
Backported from PR: #4137
update per PR suggestion
Fixes #3138
Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This PR prepares marvin and the tests for python3. It was part of #4479 until it was decided to drop nose2 from that PR.
Re-PR of #4543 and #3730 to enable cooperation
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Gabriel Beims Bräscher <gabriel@apache.org>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
There is a potential security issue with allowing HTTP access to the VR from anywhere.
This PR restricts HTTP access to the VR to the internal network only.
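A minimal sketch of the kind of rules involved, written in the style of the VR's firewall rule list (interface name and CIDR are placeholders, not the actual change):

```python
def allow_http_from_guest_only(fw, guest_cidr, guest_dev="eth0"):
    # Accept HTTP only on the guest interface from the guest network,
    # and drop it from everywhere else (e.g. the public side).
    fw.append(["filter", "", "-A INPUT -i %s -s %s -p tcp --dport 80 -j ACCEPT"
               % (guest_dev, guest_cidr)])
    fw.append(["filter", "", "-A INPUT -p tcp --dport 80 -j DROP"])
```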
This contains 3 main changes:
(1) add NETWORK_STATS_ethX for all nics with public ips in VPC VRs (current: NETWORK_STATS_eth1)
(2) DO NOT create records in user_statistics for each VPC tier (only one record per public nic per VPC VR)
(3) send NetworkUsageCommand before unplugging a NIC with public IPs from VPC VR
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pools as primary storage in CloudStack (for the KVM hypervisor) and enabled VM/Volume operations on that pool (using the pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created under the "/config" directory on the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when a VM is synced to Stopped state on PowerReportMissing (after the grace period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore
- Check whether the router filesystem is writable before performing health checks (see the sketch after this list)
=> Introduced a new test "filesystem.writable.test" to check whether the filesystem is writable
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable
- Fixed an NPE issue where the template is null for DATA disks. Copy the template to the target storage for the ROOT disk (with template id), and skip DATA disk(s)
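As referenced above, a minimal sketch of a writability test in the spirit of filesystem_writable_check.py (probe filename and structure are assumptions):

```python
import os
import uuid

def filesystem_writable(path):
    # Try to create and remove a small probe file to verify the
    # partition holding 'path' is writable.
    probe = os.path.join(path, ".fs_check_" + uuid.uuid4().hex)
    try:
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
        return True
    except OSError:
        return False

# The health checks touch two different partitions, so both get tested:
# filesystem_writable("/var/cache/cloud") and filesystem_writable("/root")
```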
* Addressed some issues for a few operations on PowerFlex storage pools.
- Updated the migrate volume operation to sync the status and wait for the migration to complete.
- Updated VM Snapshot naming, for uniqueness of the ScaleIO volume name when more than one volume exists in the VM.
- Added a sync lock while spooling the managed storage template before volume creation from the template (non-direct download).
- Updated the resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
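A hedged sketch of the renewal logic (the actual plugin is Java; the login callable and thresholds here simply mirror the token rules described above):

```python
import time

class GatewayClient:
    TOKEN_LIFETIME = 8 * 3600   # token valid for 8 hours...
    IDLE_LIMIT = 10 * 60        # ...unless idle for 10 minutes

    def __init__(self, login):
        self._login = login     # callable that authenticates and returns a token
        self._renew()

    def _renew(self):
        self.token = self._login()
        self.issued = self.last_used = time.time()

    def get_token(self):
        now = time.time()
        if (now - self.issued > self.TOKEN_LIFETIME
                or now - self.last_used > self.IDLE_LIMIT):
            self._renew()       # expired: re-authenticate with the gateway
        self.last_used = now
        return self.token
```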
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for the PowerFlex provider
- Allow VM Snapshot for a stopped VM on the KVM hypervisor and PowerFlex/ScaleIO storage pool
- Minor improvements in the PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
- Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
Steps to reproduce the issue
(1) create two VMs, wei-001 and wei-002, and start them
(2) check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR
They have entries for both wei-001 and wei-002
(3) stop wei-002, and restart the VR (or restart the network with cleanup).
check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR
They have entries for wei-001 only (as wei-002 is stopped)
(4) expunge wei-002. When it is done,
check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR
They do not have entries for wei-001.
VR health check fails at dhcp_check.py and dns_check.py
Before:
```
root@r-27-VM:/var/cache/cloud# /opt/cloud/bin/configure.py monitor_service.json
ERROR:root:Command 'ip link show eth0 | grep 'state DOWN'' returned non-zero exit status 1
```
With this change:
```
root@r-27-VM:/var/cache/cloud# /opt/cloud/bin/configure.py monitor_service.json
root@r-27-VM:/var/cache/cloud#
```
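A hedged Python sketch of why the error goes away: grep exits 1 when nothing matches, which for this check simply means the link is not DOWN (function name is hypothetical, not the actual dhcp_check.py code):

```python
import subprocess

def interface_is_down(dev):
    result = subprocess.run(
        "ip link show %s | grep 'state DOWN'" % dev,
        shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if result.returncode > 1:
        # grep exit codes: 0 = match, 1 = no match, >1 = real failure
        raise RuntimeError("link check failed for %s" % dev)
    return result.returncode == 0   # matched: the interface is DOWN
```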
* vpc: fix ips on wrong interfaces after rebooting vpc vrs
* #4467: Rename to updateNicWithDeviceId
* CLSTACK-8923 vr: Force a restart of keepalived if conntrackd is not running or configuration has changed
Currently the SSVM checks connectivity to only one management server.
Since we can have multiple management servers in a comma-separated
list, change the script so that it checks connectivity
to all management servers.
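A minimal Python sketch of the idea, assuming the management server list arrives comma-separated and the agent port is 8250 (the real check lives in a shell script):

```python
import socket

def check_mgmt_servers(host_list, port=8250):
    # Verify a TCP connection to every management server in the list.
    failed = []
    for host in host_list.split(','):
        try:
            with socket.create_connection((host.strip(), port), timeout=5):
                pass
        except OSError:
            failed.append(host.strip())
    return failed   # empty list means all management servers are reachable
```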
The DNS entry "data-server" was not added in /etc/hosts.
Since the VR is now considered a "dhcpsrvr" (?), we need to apply this commit to add this DNS entry.
/etc/hosts is fully rewritten by this script.
Fixes: #4308
(cherry picked from commit dc65f31f9f)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
As discussed in #3937 (comment),
a rule for port forwarding in the VPC router might not be needed.
This fixes the failing health check result for network VRs.
This upgrades the systemvmtemplate base to Debian 10 with openjdk-11 and a newer strongswan package.
Fixes #3654
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a guest VM adds a secondary NIC, it gets the wrong hostname "infiniteh" from the DHCP server.
infiniteh --> infinite
cat /etc/dhcphosts.txt
02:00:0b:ef:00:04,set:192_168_4_18,192.168.4.18,gumd-tes3,infiniteh
The previous setup of many hours would not work, due to some internal dnsmasq issues - the lease was set correctly, but dnsmasq was setting the dhcp-renew-time (and rebind time) to less than 2 years from the date the lease was issued.
Using "infinite" as the value (instead of a number) works as expected - and (atm) the renew date is set to the year 2088, etc.
Co-authored-by: dahn <daan.hoogland@gmail.com>
This adds support for JDK11 in CloudStack 4.14+:
- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Since 4.11.3, haproxy is always restarted when adding/deleting an lb rule.
When haproxy is started, the processes are
```
root@r-854-VM:~# ps aux |grep haproxy
root 22272 0.0 0.2 4036 668 ? Ss 07:52 0:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy 22274 0.0 2.3 38444 5856 ? S 07:52 0:00 /usr/sbin/haproxy-master
haproxy 22275 0.0 0.3 38444 880 ? Ss 07:52 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
```
When haproxy is reloaded, the processes are
```
root@r-854-VM:~# ps aux |grep haproxy
root 22272 0.0 0.2 4168 632 ? Ss 07:52 0:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy 22283 0.0 2.3 38444 5884 ? S 07:53 0:00 /usr/sbin/haproxy-master
haproxy 22286 0.0 0.3 38444 880 ? Ss 07:53 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 22275
```
We need to change the pid file from /var/run/haproxy.pid to /run/haproxy.pid, so that haproxy is reloaded instead of restarted.
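A minimal sketch of a graceful reload under that assumption (paths mirror the ps output above; the helper is not the actual VR code):

```python
import os
import subprocess

HAPROXY_PID = "/run/haproxy.pid"   # must match the -p path haproxy runs with

def reload_haproxy(cfg="/etc/haproxy/haproxy.cfg"):
    old_pid = ""
    if os.path.exists(HAPROXY_PID):
        with open(HAPROXY_PID) as f:
            old_pid = f.read().strip()
    cmd = ["/usr/sbin/haproxy", "-D", "-f", cfg, "-p", HAPROXY_PID]
    if old_pid:
        cmd += ["-sf", old_pid]    # old workers finish, new ones take over
    subprocess.check_call(cmd)
```

If the pid file read here differs from the one haproxy actually writes (/var/run vs /run), old_pid stays empty and haproxy is effectively restarted, which is the bug being fixed.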
When we create a VM in a network with redundant VRs, the lease file in the VM (for example /var/lib/dhcp/dhclient.eth0.leases) shows that the dhcp-server-identifier is the guest IP (not the VIP/gateway) of the master VR. That is the IP address the VM fetches its password and metadata from.
If we stop the master VR (the backup then becomes master) or restart the network with cleanup (VRs will be re-created), the guest IP of the master VR changes, so VMs are not able to get the metadata/ssh-key using the IPs in the DHCP lease file.
Setting up the metadata/password/DHCP server on the gateway instead of the guest IP in redundant VRs fixes these issues.
Fixes #3409
* Complete API implementation
* Complete UI integration
* Complete marvin test
* Complete Secondary storage GC background task
* improve UI labels
* slight reword and add another missing description
* improve download message clarity
* Address comments
* multiple fixes and cleanups
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix more bugs, let it return ip rule list in another log file
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix missing iprule bug
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* add support for ARCHIVE type of object to be linked/setup on secstorage
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Fix retrieving files for Xenserver
* Update get_diagnostics_files.py
* Fix bug where executable scripts weren't handled
* Fixed error on script cmd generation
* Do not filter name for log files as it would override similar prefix script names
* Addressed code review comments
* log error instead of printstacktrace
* Treat script as executable and shell script
* Check missing script name case and write to output instead of catching exception
* Use shell = true instead of shlex to support any executable
* fix xenserver bug
* don't set dir permission for vmware
* Code review comments - refactoring
* Add check for possible NPE
* Remove unused import after rebase
* Add better description for configs
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
Co-authored-by: Rohit Yadav <rohit@apache.org>
Co-authored-by: Anurag Awasthi <anurag.awasthi@shapeblue.com>
* Increase lease time to infinite
Lease time is set to effectively infinite (36000+ days) since we fully control the VM lifecycle via CloudStack.
Infinite time helps avoid some edge cases which could cause a DHCPNAK to be sent to VMs, since
(RHEL) systems lose routes when they receive a DHCPNAK.
When a VM is expunged, its active lease and DHCP/DNS config are properly removed from the related files in the VR.
* desc fix
While searching for an existing route, don't include the throw keyword
in the command used with `ip route show`.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This change avoids forking curl to save the password and instead calls
the HTTP POST URL directly within Python code. This may reduce a bottleneck
during bursts of VM launches that require passwords.
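A hedged sketch of the in-process POST, with an assumed local endpoint and header protocol (not necessarily the password server's exact API):

```python
import urllib.request

def save_password(ip, password, server="127.0.0.1", port=8080):
    # POST directly from Python instead of forking a curl process.
    req = urllib.request.Request(
        "http://%s:%d/" % (server, port),
        data=b"",   # payload carried in headers here (assumption)
        headers={"DomU_Request": "save_password",
                 "DomU_IP": ip,
                 "DomU_Password": password})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status == 200
```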
Fixes #3182
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In some virtual routers, 'hostname -f' returns 'localhost'. The hostname is also 'localhost' in `/var/log/messages`. This change fixes the issue in new VRs.
In order to reduce the memory footprint and improve boot speed/predictability,
the following changes have been made:
- add vm.min_free_kbytes to sysctl
- periodically clear the disk cache, depending on memory size (see the sketch after this list)
- only start guest services specific to hypervisor
- use systemvm code to determine hypervisor type (not systemd)
- start cloud service at end of post init rather than through systemd
- reduce initial threads started for httpd
- fix vmtools config file
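As referenced in the list above, a minimal sketch of the periodic cache clearing, with an assumed free-memory threshold:

```python
def maybe_drop_caches(threshold_kb=131072):
    # Free page cache, dentries and inodes when MemFree falls below
    # the threshold (128 MB here, an assumption).
    with open('/proc/meminfo') as f:
        meminfo = dict(line.split(':', 1) for line in f)
    free_kb = int(meminfo['MemFree'].strip().split()[0])
    if free_kb < threshold_kb:
        with open('/proc/sys/vm/drop_caches', 'w') as f:
            f.write('3\n')
```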
Fixes #3039
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
On a VMware zone, hitting CTRL over the console proxy also sends a mask of
the Meta key. This causes Ctrl+A, Ctrl+E and many other functions
not to work in the console.
Read https://github.com/apache/cloudstack/issues/3229 for
details.
The fix is to ignore the Meta key flag passed by the SDK if Control was pressed.
The jQuery implementation sets the meta key to the control key to support
IE.
Fixes #3229
This does not remove VM entries in dbags when hostnames match. The
current codebase already removes the entry when a VM is stopped/removed, so
we don't need to handle lazy removal. This will allow a VM on
multiple tiers in a VPC to get dns/dhcp rules as expected.
This also fixes dhcp_release to act on the specific interface, and
removes the dhcp/dns entry when a NIC is removed from a guest VM.
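A minimal sketch of the per-interface release (dhcp_release takes the interface, address and MAC; the wrapper itself is hypothetical):

```python
import subprocess

def release_lease(device, ip, mac):
    # Release the lease on the specific guest interface so dnsmasq
    # forgets only the NIC that was removed.
    subprocess.check_call(["dhcp_release", device, ip, mac])
```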
Fixes #3273
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The VR code has provision for inserting rules at the top or bottom by specifying "front" as the second parameter to self.fw.append. However, there are a number of cases where someone, unaware of this, has added a rule with the pattern self.fw.append(["mangle", "", "-I PREROUTING".... This causes the check for the rule already being present to fail, and duplicate rules end up being added.
This PR fixes two of these cases, which apply to adding static NAT rules. I am aware of more such cases, but I don't have the ability to easily test the outcome of fixing them. I'm happy to add those if you're confident that the automated tests will be sufficient. Searching for "-I (case sensitive) finds them.
The code for dealing with "front" is included below to show that this shouldn't have any ill effects:
```python
if fw[1] == "front":
    cpy = cpy.replace('-A', '-I')
```
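For illustration, a hedged before/after of the call pattern being fixed (the rule text is a made-up placeholder, not the actual static NAT rule):

```python
# Problematic: hard-coded -I bypasses the duplicate detection,
# so the rule is re-added on every pass.
self.fw.append(["mangle", "", "-I PREROUTING -j MARK --set-mark 0x1"])

# Preferred: append with "front"; the framework rewrites -A to -I itself.
self.fw.append(["mangle", "front", "-A PREROUTING -j MARK --set-mark 0x1"])
```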
Fixes #3177
Since the CloudStack virtual router was redesigned in version 4.6, it has been observed that the DHCP leases file is not persistent across network operations. This causes conflicts with guest VMs' static IPs: these static IPs are not renewed by the DHCP server (dnsmasq) running on isolated and VPC networks' virtual routers. On stopping or destroying a VM, its dhcp/dns records are not removed from the virtual router, causing ghost effects.
Fixes #3272, Fixes #3354
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR resolves 2 issues related to Virtual Routers with multiple public interfaces, and works around a third.
- Fixes #3353 - Adds missing throw routes for eth0/eth1 to eth3+ when there is more than one public IP
- Fixes #3168 - Incorrect marks set on some static NAT rules (some code references were changed from hex(int(interfacenum)) to hex(100 + int(interfacenum)); this change just adds the remaining ones)
- Fixes #3352 - Workaround that sends gratuitous ARP messages when an HA VR becomes master, to work around the problem of the MAC address being different between HA VRs. If that issue is fixed properly (i.e. a database entry for the subsequent interfaces so they can be static) then this is unnecessary, though it should not cause any problems.
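A tiny sketch of the mark computation change mentioned above (the function name is hypothetical):

```python
def interface_mark(interfacenum):
    # Offset by 100 so the mark lines up with the routing tables used
    # for the additional public interfaces, e.g. eth3 -> hex(103) -> '0x67'.
    return hex(100 + int(interfacenum))
```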