When starting a VM or migrating a VM (away from a host in host maintenance), CloudStack checks the capacity of all hosts and chooses one. If there are hundreds of hosts on the platform, this can take several seconds. By the time CloudStack has chosen a host and starts/migrates the VM onto it, the resource consumption of that host may already have changed. This typically happens when we start/migrate multiple VMs.
It would be better to double-check the host capacity when starting a VM on a host.
This PR includes the fix for CPU core capacity when starting/migrating a VM.
When we calculate the resource consumption of a host, we need to take the VMs in the following states into account: Running, Starting, Stopping, Migrating (to the host), as well as VMs that are Migrating away from the host. This is because, when a VM is stopped, its resources on the host are only released once the VM has actually stopped; and when a VM is migrated, the resources on the destination host are reserved before the migration starts, while the resources on the source host are released only after the migration succeeds.
In CloudStack, there is a task named CapacityChecked which runs every 5 minutes (capacity.check.period = 300000 ms by default) and recalculates the capacity of all hosts. However, it only takes VMs in the Running and Starting states into consideration. We have faced several issues during host maintenance because of this.
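To make the intended accounting concrete, here is a minimal Python sketch of the logic (illustrative only; the actual implementation lives in the Java capacity code, and the field names here are assumptions):

```python
# States whose VMs must be counted against a host's used capacity; VMs that
# are migrating away still hold their resources on the source host until the
# migration succeeds. Field names (host_id, last_host_id, ...) are assumptions.
COUNTED_STATES = {"Running", "Starting", "Stopping", "Migrating"}

def used_capacity(host_id, vms):
    used_cpu_cores = 0
    used_ram_mb = 0
    for vm in vms:
        on_host = vm.host_id == host_id                  # covers VMs migrating *to* the host
        migrating_away = vm.state == "Migrating" and vm.last_host_id == host_id
        if vm.state in COUNTED_STATES and (on_host or migrating_away):
            used_cpu_cores += vm.cpu_cores
            used_ram_mb += vm.ram_mb
    return used_cpu_cores, used_ram_mb
```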
Steps to reproduce the issue
(1) Migrate N VMs from host A to host B; the CPU/RAM usage on host B increases before the migration starts.
(2) The capacity check recalculates the capacity of all hosts; the used capacity of host B is reset to its original value (not including the VMs in the Migrating state).
(3) Migrate some more VMs from other hosts to host B; the migrations are allowed by CloudStack (because the used capacity is incorrect). If the actual used memory exceeds the physical memory on the host, critical issues can follow (for example, libvirt dies).
When we create a VM in a network with redundant VRs, the lease file in the VM (for example /var/lib/dhcp/dhclient.eth0.leases) shows that the dhcp-server-identifier is the guest IP (not the VIP/gateway) of the master VR. That is also the IP address the VM fetches its password and metadata from.
If we stop the master VR (the backup then becomes master) or restart the network with cleanup (the VRs are recreated), the guest IP of the master VR changes, so VMs are no longer able to get metadata/ssh-keys using the IPs in the DHCP lease file.
Setting up the metadata/password/DHCP server on the gateway instead of the guest IP in redundant VRs fixes these issues.
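A minimal sketch of the idea, in Python for illustration (the helper and field names are hypothetical; the real change is in the VR configuration code):

```python
def dhcp_and_metadata_address(network):
    # On networks with redundant VRs, advertise the shared gateway/VIP, which
    # survives a master/backup failover or a restart-with-cleanup, instead of
    # the master VR's own guest IP. Hypothetical helper and field names.
    if network.is_redundant:
        return network.gateway
    return network.router_guest_ip
```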
Fixes #3409
Steps to reproduce the issue
(1) Create an account (test).
(2) Create a VM with the account (test).
(3) Log in as admin and upgrade the VM to another offering.
(4) The resource count (CPU, memory) increases for the admin account instead of for the account (test) that owns the VM; a sketch of the expected accounting follows below.
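A Python sketch of the expected accounting, for illustration only (helper names are hypothetical; the actual change is in the Java service layer):

```python
def update_resource_count_on_upgrade(vm, old_offering, new_offering, resource_limit):
    # Charge the CPU/memory delta to the VM's owner (the "test" account),
    # not to the admin account that issued the API call.
    owner_account_id = vm.account_id
    resource_limit.decrement(owner_account_id, "cpu", old_offering.cpu)
    resource_limit.decrement(owner_account_id, "memory", old_offering.ram_mb)
    resource_limit.increment(owner_account_id, "cpu", new_offering.cpu)
    resource_limit.increment(owner_account_id, "memory", new_offering.ram_mb)
```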
* Increase lease time to infinite
The lease time is set to effectively infinite (36000+ days), since the VM lifecycle is fully controlled via CloudStack.
An infinite lease helps avoid some edge cases that could cause a DHCPNAK to be sent to VMs, since (RHEL) systems lose their routes when they receive a DHCPNAK.
When a VM is expunged, its active lease and DHCP/DNS config are properly removed from the related files in the VR.
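For illustration, a dhcp-host entry on the VR carrying such a lease would look roughly like 02:00:4c:7f:00:01,10.1.1.15,VM-example,infinite (values are hypothetical; the last field of a dnsmasq dhcp-host entry is the lease time, and dnsmasq also accepts the literal infinite there).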
* desc fix
Logrotate should only touch security_group.log and resizevolume.log
as the agent.log is already rotated by log4j inside the Agent.
Having two systems trying to rotate agent.log leads to all kinds of
issues like having binary (compressed) data in the middle of a plain-text
log file.
In addition we do not have to rotate the logs every day, only when they
grow larger than 10M. On fairly idle hypervisors this should not cause
those logs to rotate every day.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
After commit fbf488497f, the admin needs to specify an IPv4 or IPv6 address when adding an IP to a NIC, which breaks backward compatibility. If no IP is specified, an IPv4 address should be returned.
When I add a secondary IP to a NIC on a shared network in an advanced zone with security groups, the network rules for the new IP are not applied on KVM hypervisors.
This is because "--action -A" is not recognized by security_group.py after commit ac73e7e671; changing it to "--action=-A" fixes it.
When adding an ACL rule with a protocol number, the iptables rules in the VPC VR are not applied correctly.
For example, when adding an ingress ACL rule (protocol number: 50, cidr: 2.2.2.2/32), we expect the iptables rule "-A ACL_INBOUND_eth2 -s 2.2.2.2/32 -p esp -j ACCEPT",
but the actual rule is "-A ACL_INBOUND_eth2 -j DROP".
This is because the rules in the JSON sent to the VR are not correct:
network_acl.json.a8c52dca-0278-4e1c-b72b-987ca7121f4f.gz:{"device":"eth2","mac_address":"02:00:7d:27:00:02","private_gateway_acl":false,"nic_ip":"192.168.11.12","nic_netmask":"28","ingress_rules":[{"type":"protocol","protocol":50,"cidr":"ACCEPT","allowed":false},{"type":"all","cidr":"0.0.0.0/0","allowed":true},],"egress_rules":[],"type":"networkacl"}
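For comparison, a correctly serialized entry for this rule would look something like the following (illustrative, using the same field names as above): {"type":"protocol","protocol":50,"cidr":"2.2.2.2/32","allowed":true}.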
Fixes: #3602
Fixes issue #3590 by using the last element of the array obtained from the snapshot "path" String to retrieve the snapshot id. Additionally, it uses the volumePath as the volume id, which should always be the correct value. The error raised in issue #3590 was related to the wrong use of the variable "path", which in some cases contained a different set of substrings.
The proposed change has been tested and evaluated. The values used for opening the RBD connection and executing the rollback were stable across the tests. Rollback was run on multiple snapshots, and the VM could be started with content matching the reverted ROOT snapshot.
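A Python sketch of the parsing change, for illustration (the actual code is Java, and the example path is hypothetical):

```python
def snapshot_id_from_path(path):
    # The snapshot id is whatever follows the last '/', regardless of how many
    # prefix segments the "path" string happens to contain.
    return path.split("/")[-1]

print(snapshot_id_from_path("pool/volume-uuid/snapshot-uuid"))  # hypothetical path -> snapshot-uuid
```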
While searching for an existing route, don't use the throw keyword when running the command with `ip route show`.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Diagnostics are hard when a snapshot operation fails with a null pointer exception, because neither a stack trace nor the location of the error is logged, e.g.:
2019-10-21 12:55:00,056 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-131:ctx-80420156 job-10033827/job-10033828 ctx-4864e2f5) (logid:21454564) copy snasphot failed: java.lang.NullPointerException
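The general pattern is easy to illustrate in Python (an analogous sketch only; the actual change concerns the Java log4j calls in the storage motion code):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("AncientDataMotionStrategy")

try:
    raise TypeError("simulated failure")                 # stand-in for the NullPointerException
except Exception as e:
    logger.debug("copy snapshot failed: %s", e)          # message only: no stack trace to diagnose from
    logger.debug("copy snapshot failed", exc_info=True)  # logs the full stack trace as well
```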
We've encountered a corner case where `bridge -o link show` returned two lines per bridge instead of one: get_bridge_physdev in security_group.py returned bond0.701\nbond0.701.
Although this may very well be something specific to the hypervisor, we should limit the number of lines returned.
I therefore added a mere `| head -1` to the function.
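A simplified Python sketch of the idea (the grep/awk stages here are assumptions and differ from the real pipeline in security_group.py; the relevant part is the trailing head -1):

```python
import subprocess

def get_bridge_physdev(bridge):
    # Keep only the first line, even if the bridge is listed twice.
    cmd = "bridge -o link show | grep ' master %s ' | awk '{print $2}' | head -1" % bridge
    return subprocess.check_output(cmd, shell=True).decode().strip()
```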
* server: Do NOT clean up dhcp and dns when stopping a VM
According to a comment in PR #3608, DHCP and DNS entries should only be cleaned up when a VM is expunged.
Revert part of commit 8fb388e931.
* server: cleanup dns/dhcp entries in removeNic instead of finalizeExpunge
This changes the behaviour so that DHCP and DNS rules for a VM's NICs are not cleaned up in the VR when the VM is stopped, but only when the VM is expunged, because stopped VMs in CloudStack still retain their IPs and records.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Since version 10.0.0 of Debian has become stable, the URL of the Debian 9.9.0 ISO files has changed from "current" to "archive".
The old URL returns a 404 and breaks the build of the systemvm templates.
The UserIpv6AddressVO entity is not used; it is probably legacy code/a legacy table.
Therefore, remove the verification that counts the IPs from UserIpv6AddressVO when checking whether the network can be used for deploying new VMs in the UI [1].
[1] com.cloud.network.NetworkModelImpl.canUseForDeploy(Network)
Fixes NPE when trying to find suitable storage pools for a volume
when the volume is not attached to a VM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a network IP range is removed, the "vlan" stays mapped in pod_vlan_map; therefore, the method that lists the VLANs by pod id can return null VLANs.
This PR adds proper verifications to avoid a null pointer exception when deploying VRs on a pod with removed VLANs. The exception was thrown in getPlaceholderNicForRouter.
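A sketch of the kind of guard added, in Python for illustration (the actual fix is in the Java code around getPlaceholderNicForRouter; the helper name is hypothetical):

```python
def find_placeholder_vlan(vlans_for_pod):
    # pod_vlan_map may still reference IP ranges that were removed, so the
    # lookup can yield null/None entries; skip them instead of dereferencing.
    for vlan in vlans_for_pod:
        if vlan is None:
            continue
        return vlan
    return None
```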
Problem: In VMware, appliances that have options which must be answered before deployment can be configured through the vSphere vCenter user interface, but not through the CloudStack user interface.
Root cause: CloudStack does not handle vApp configuration options during deployments when the appliance contains configurable options. These configurations are mandatory for deploying a VM from the appliance on VMware vSphere vCenter: vCenter detects that there are mandatory configurations the administrator must set before deploying the VM from the appliance (highlighted in red in the vCenter interface).
Solution:
On template registration, after the template is downloaded to secondary storage, its OVF file is examined and OVF properties are extracted from it when available.
The extracted OVF properties are stored in the new table 'template_ovf_properties'.
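For illustration, a minimal Python sketch of this kind of extraction (CloudStack's actual implementation is in Java; this simply reads the user-configurable properties from an OVF descriptor's ProductSection):

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

def list_ovf_properties(ovf_path):
    # Collect ovf:Property elements that are marked user-configurable.
    props = []
    for prop in ET.parse(ovf_path).iter("{%s}Property" % OVF_NS):
        if prop.get("{%s}userConfigurable" % OVF_NS) == "true":
            props.append({
                "key": prop.get("{%s}key" % OVF_NS),
                "type": prop.get("{%s}type" % OVF_NS),
            })
    return props
```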
A new optional section is added to the VM deployment wizard in the UI:
If the selected template does not contain OVF properties, then the optional section is not displayed on the wizard.
If the selected template contains OVF properties, then the optional new section is displayed. Each OVF property is displayed and the user must complete every property before proceeding to the next section.
If any configuration property is empty, a dialog is displayed indicating that there are empty properties which must be set before proceeding.
The specific OVF properties set on deployment are stored in the 'user_vm_details' table with the prefix 'ovfproperties-'.
The VM is configured with a vApp configuration section containing the values that the user provided in the wizard.