Currently ACS waits a hard-coded 15 seconds for a hot-plugged NIC in the VR to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the NIC adapter type (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in a VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios.
It also provides a fallback if VMware introduces new NIC adapter types in the future that take
longer to be detected by the guest OS.
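A minimal sketch of such a setting, assuming CloudStack's ConfigKey framework; the key name, default and description below are illustrative, not the actual setting introduced:
```java
import org.apache.cloudstack.framework.config.ConfigKey;

// Hypothetical ConfigKey for the NIC hotplug wait timeout (name/default illustrative).
public static final ConfigKey<Integer> NicHotplugWaitTimeout = new ConfigKey<>(
        "Advanced", Integer.class,
        "vmware.nic.hotplug.wait.timeout",   // hypothetical key name
        "15000",                             // default keeps today's 15 s, in ms
        "Wait timeout (ms) for a hot-plugged NIC to be detected by the guest OS",
        true);                               // dynamic: changeable without restart
```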
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
This improves the metrics view feature by improving the rendering performance
of metrics view tables: the logic is reimplemented at the backend and the data
is served via APIs. In large environments, the older implementation would
make several API calls, which increased both network and database load.
List of APIs introduced for improving the performance:
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
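As a usage illustration, a single signed GET against one of the new endpoints now returns all rows for a view; the host, key and signature below are placeholders, and a real request must be signed with the caller's API/secret key pair:
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

static void fetchHostMetrics() throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder(URI.create(
            "http://mgmt-server:8080/client/api?command=listHostsMetrics"
            + "&response=json&apiKey=APIKEY&signature=SIGNATURE"))  // placeholders
        .GET().build();
    // One response carries metrics for all hosts, replacing many per-host calls.
    HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
    System.out.println(resp.body());
}
```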
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.9:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
CLOUDSTACK-9788: Fix exception listNetworks with pagesize=0
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Fix HVM VM restart bug in XenServer
CLOUDSTACK-9363: Fix HVM VM restart bug in XenServer
Here is the longer description of the problem:
By default XenServer limits HVM guests to only 4 disks. Two of those are reserved for the ROOT disk (deviceId=0) and the CD-ROM (deviceId=3), which means that we can only attach 2 data disks. This limit, however, is removed when XenTools is installed on the guest. The information that a guest has XenTools installed and can handle more than 4 disks is stored in the VM metadata on XenServer. When a VM is shut down, CloudStack removes the VM and all the metadata associated with it from XenServer. Now, when you start the VM again, even if it has XenTools installed, it will default to only 4 attachable disks.
This problem manifests itself when you have an HVM VM and you stop and start it with more than 2 data disks attached. The VM fails to start, and the only way to start it is to detach the extra disks and then reattach them after the VM starts.
In this fix, I am removing the check that enforces this limit before creating a `VBD`. This will not affect the current workflow and will fix the HVM issue.
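To make the limit concrete (illustrative arithmetic based on the description above, not CloudStack code):
```java
// Why an HVM guest without XenTools metadata can take only two data disks:
int maxHvmVbds = 4;                 // XenServer default for HVM guests
int reservedForRoot = 1;            // deviceId=0: ROOT disk
int reservedForCdrom = 1;           // deviceId=3: CD-ROM
int attachableDataDisks = maxHvmVbds - reservedForRoot - reservedForCdrom; // = 2
```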
@koushik-das this is related to the "autodetect" feature that you introduced a while back (https://issues.apache.org/jira/browse/CLOUDSTACK-8826). I would love your review on this fix.
* pr/1829:
Fix HVM VM restart bug in XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9618: Load Balancer configuration page does not have "Source" method in the drop-down list
If we create an isolated network with a NetScaler-backed service offering for the Load Balancing service, the load balancing configuration UI does not show "Source" as one of the supported LB methods in the drop-down list; it only shows the "Round-Robin" and "LeastConnection" methods. However, the API successfully creates an LB rule with "Source" as the LB method.
* pr/1786:
CLOUDSTACK-9618: Load Balancer configuration page does not have "Source" method in the drop down list
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Reverting a VM to a disk-only snapshot in XenServer corrupts the VM
Stale NFS secondary storage on XS leads to volume creation failure from snapshot
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM
* pr/977:
Fixes for testing VM Snapshots on KVM. Related to PR 977
CLOUDSTACK-8746: vm snapshot implementation for KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9359: IPv6 for Basic Networking
This PR is a proposal for adding very basic IPv6 to Basic Networking. The main goal of this PR is that the API returns a valid IPv6 address over which the Instance is reachable.
The GUI will show the IPv6 address after deployment of the Instance.

If the _vlan_ table has a proper IPv6 CIDR configured, the DirectPodBasedNetworkGuru will calculate the IPv6 address the Instance will obtain using EUI-64 and SLAAC: https://tools.ietf.org/search/rfc4862
In this case the _vlan_ table contained:
<pre>mysql> select * from vlan \G
*************************** 1. row ***************************
                 id: 1
               uuid: 90e0716c-5261-4992-bb9d-0afd3006f476
            vlan_id: vlan://untagged
       vlan_gateway: 172.16.0.1
       vlan_netmask: 255.255.255.0
        description: 172.16.0.10-172.16.0.250
          vlan_type: DirectAttached
     data_center_id: 1
         network_id: 204
physical_network_id: 200
        ip6_gateway: 2001:980:7936:112::1
           ip6_cidr: 2001:980:7936:112::/64
          ip6_range: NULL
            removed: NULL
            created: 2016-07-19 20:39:41
1 row in set (0.00 sec)
mysql></pre>
It will then log:
<pre>2016-10-04 11:42:44,998 DEBUG [c.c.n.g.DirectPodBasedNetworkGuru] (Work-Job-Executor-1:ctx-1975ec54 job-186/job-187 ctx-0d967d88) (logid:275c4961) Found IPv6 CIDR 2001:980:7936:112::/64 for VLAN 1
2016-10-04 11:42:45,009 INFO [c.c.n.g.DirectPodBasedNetworkGuru] (Work-Job-Executor-1:ctx-1975ec54 job-186/job-187 ctx-0d967d88) (logid:275c4961) Calculated IPv6 address 2001:980:7936:112:4ba:80ff:fe00:e9 using EUI-64 for NIC 6a05deab-b5d9-4116-80da-c94b48333e5e</pre>
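The EUI-64 derivation itself is small enough to sketch; the method below is illustrative, not the actual DirectPodBasedNetworkGuru code. It flips the universal/local bit of the MAC's first octet, inserts ff:fe in the middle of the MAC, and appends the result to the /64 prefix:
```java
// Illustrative EUI-64 interface-ID derivation per RFC 4862/RFC 4291.
static String eui64Address(String prefix64, String mac) {
    String[] p = mac.split(":");                    // e.g. 06:ba:80:00:00:e9
    int first = Integer.parseInt(p[0], 16) ^ 0x02;  // flip universal/local bit
    return String.format("%s%02x%s:%sff:fe%s:%s%s",
            prefix64, first, p[1], p[2], p[3], p[4], p[5]);
}

// eui64Address("2001:980:7936:112:", "06:ba:80:00:00:e9")
//   -> "2001:980:7936:112:04ba:80ff:fe00:00e9"
//   (zero-compressed: 2001:980:7936:112:4ba:80ff:fe00:e9, as in the log above)
```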
The template has to be configured accordingly:
- No IPv6 Privacy Extensions
- Use SLAAC
- Follow RFC4862
This is also described in: https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+in+Basic+Networking
The next steps after this will be:
- Security Grouping to prevent IPv6 Address Spoofing
- Security Grouping to filter ICMP, UDP and TCP traffic
* pr/1700:
CLOUDSTACK-676: IPv6 In -and Egress filtering for Basic Networking
CLOUDSTACK-676: IPv6 Basic Security Grouping for KVM
CLOUDSTACK-9359: IPv6 for Basic Networking with KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
This PR is to address a few issues in #1600 (which was recently merged to master for 4.10).
In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one scenario, the source snapshot in question comes from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).
This usually worked in the regression tests due to a bit of "luck": we retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage whose ID matches (the ID of the secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today, because I had removed that primary storage), then a NullPointerException is thrown.
I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.
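A hedged sketch of the shape of that guard (simplified, not the literal patch), using CloudStack's DataStoreRole to tell primary from secondary storage:
```java
// Only query snapshot details for primary storage; a snapshot on secondary
// storage (DataStoreRole.Image) has no applicable primary-storage details.
Map<String, String> srcDetails = null;
if (snapshotInfo.getDataStore().getRole() == DataStoreRole.Primary) {
    srcDetails = getSnapshotDetails(snapshotInfo);
}
```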
While fixing that, I noticed a couple more problems:
1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (likewise ignored by VolumeServiceImpl when it's not for a primary-storage driver).
I have corrected those issues, as well.
I then came across one more problem:
When using a SAN snapshot and copying it to secondary storage or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code, but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).
I went ahead and changed that, as well.
JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619
* pr/1749:
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This commit implements basic Security Grouping for KVM in
Basic Networking.
It does not implement full Security Grouping yet, but it does:
- Prevent IP-Address source spoofing
- Allow DHCPv6 clients, but disallow DHCPv6 servers
- Disallow Instances to send out Router Advertisements
The Security Grouping allows ICMPv6 packets as described by RFC4890
as they are essential for IPv6 connectivity.
Following RFC4890 it allows:
- Router Solicitations
- Router Advertisements (incoming only)
- Neighbor Advertisements
- Neighbor Solicitations
- Packet Too Big
- Time Exceeded
- Destination Unreachable
- Parameter Problem
- Echo Request
ICMPv6 is an essential part of IPv6; without it, connectivity will break or be very
unreliable.
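For reference, the RFC4890 message types listed above correspond to these ICMPv6 type numbers (values are from the RFC; the map itself is only illustrative):
```java
import java.util.Map;

// ICMPv6 types essential for IPv6 connectivity, per RFC4890.
static final Map<String, Integer> ALLOWED_ICMPV6_TYPES = Map.of(
        "destination-unreachable", 1,
        "packet-too-big",          2,
        "time-exceeded",           3,
        "parameter-problem",       4,
        "echo-request",            128,
        "router-solicitation",     133,
        "router-advertisement",    134,  // incoming only
        "neighbor-solicitation",   135,
        "neighbor-advertisement",  136);
```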
For now it allows any UDP and TCP packet to be sent into the Instance, which
effectively opens up the firewall completely.
Future commits will implement Security Grouping further, allowing control of UDP and TCP
ports for IPv6 as can be done with IPv4.
Regardless of egress filtering (which can't be configured yet), outbound DNS
to port 53 over UDP or TCP is always allowed.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
(1) add support to create/delete/revert VM snapshots on running VMs with QCOW2 format (see the sketch below)
(2) add a new API to create a volume snapshot from a VM snapshot
(3) delete the metadata of VM snapshots before stopping/migrating, and recover the VM snapshots after starting/migrating
(4) enable deleting a VM snapshot on a stopped VM, or when the VM snapshot is not listed in the QCOW2 image
(5) enable smoke tests for VM snapshots on KVM
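A minimal sketch of (1), assuming the libvirt Java binding (org.libvirt) used by the KVM agent; the domain name and snapshot XML are illustrative, not the actual patch:
```java
import org.libvirt.Connect;
import org.libvirt.Domain;

static void takeVmSnapshot() throws Exception {
    Connect conn = new Connect("qemu:///system");
    Domain dom = conn.domainLookupByName("i-2-10-VM");  // hypothetical instance name
    // Internal snapshot: with QCOW2-backed disks, disk and VM state are stored
    // inside the image, so a running VM can later be reverted to this point.
    String snapXml = "<domainsnapshot><name>vmsnap-1</name></domainsnapshot>";
    dom.snapshotCreateXML(snapXml);
}
```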
* The CloudStack root pom change to use Amazon WS 11.1.16
caused our client to fail, as it was depending on classes
which are no longer present.
The latest client version uses Gson instead.
* increase robustness of nuagevsp tests
- test_nuage_internal_dns: move vm2 creation upwards
- test_nuage_static_nat: delete vm in test step to avoid sut restriction
BUG-ID: CLOUDSTACK-9729
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Bugfix-for: master
- Switches Travis to use jdk1.8
- Changes java-version to 1.8
- Change jdk/maven version to 1.8
- Switch to F5/java8 compatible library release
- Switch packaging to use jdk 1.8, and jre 1.8 in init/systemd scripts
- Switch systemvm to openjdk-8-jre
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
While remotely executing commands/scripts over SSH in the VR, ACS uses the system VM key file.
ACS fetches this key file using the VMwareContext object, which encapsulates a vCenter connection handle.
This is inefficient because the dependency on getServiceContext() means acquiring a vCenter connection
handle that is not required just to fetch a file from the management server's file system.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
Issue
=====
While listing datacenters associated with a zone, only zone ID validation is required.
There is no need for additional checks such as whether the zone is a legacy zone.
Fix
===
Removed the unnecessary checks on the zone ID; now we just check whether a zone with the specified ID exists.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
CLOUDSTACK-9456: Migrate master to Spring 4.x
This change makes CloudStack use Spring 4:
```
- Bump spring-framework version to 4.x and Jetty to a version that runs with JDK7
- Bump servlet dependency version
- Migrate various XMLs to use version-independent schema URIs
```
Outstanding issue:
- Testing of various non-standard plugins such as network and storage plugins etc.
Since this is a big change, pinging for review -- @jburwell @karuturi @wido @murali-reddy @abhinandanprateek @DaanHoogland @GaborApatiNagy @JayapalUradi @kishankavala @K0zka @nvazquez @rafaelweingartner @pyr and others
@blueorangutan package
* pr/1638:
CLOUDSTACK-9456: Update Spring version in maven poms
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
XenServer 7 Support
This PR adds support for XenServer 7. I have manually run the following tests:
- Create a new cluster with XenServer7
- Add Primary storage: Should create an SR on XS7
- Add another XS7 host to the Pool
- Add host2 to CloudStack
- Create VM1 from template
- Create VM2 from template
- Ping/SSH VM1 to VM2 and vice-versa
- Stop/Delete/Expunge VM2
- Create Data disk
- Attach it to VM1
- Create VM snapshot of VM1
- Restore VM snapshot of VM1
- Delete VM snapshot of VM1
- Create Volume snapshot of Datadisk
- Create volume snapshot of Root disk
- Create new template from snapshot of root disk
- Create volume from snapshot of datadisk
- Detach datadisk volume
- Delete datadisk volume
- Acquire a public IP
- Create a static NAT to VM1
- Live migrate VM1 while there is traffic on the VM
- Delete VM1
* pr/1711:
[CLOUDSTACK-9662] Add support for XenServer 7
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Issue
====
Exception occurred while creating the CPVM in a VMware setup using standard vSwitches.
StartCommand failed due to Exception: com.vmware.vim25.AlreadyExists
message: [] com.vmware.vim25.AlreadyExistsFaultMsg: The specified key, name, or identifier already exists
Fix
===
Ensure synchronization while attempting to create a port group, so that simultaneous attempts are not made with the same port group name on the same ESXi host.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
- Bump spring-framework version to 4.x and Jetty to a version that runs with JDK8
- Bump servlet dependency version
- Migrate Spring XMLs to version 4; fix schema locations that are 3.0-dependent
in various XMLs.
- Fix failing tests due to spring upgrade
(Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
* Fix test DeploymentPlanningManagerImplTest
* Fix GloboDNS test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9403 : Support for shared networks in Nuage VSP plugin
This is the first phase of Shared Network support in CloudStack through the NuageVsp Network Plugin. A shared network is a type of virtual network that is shared between multiple accounts, i.e. a shared network can be accessed by virtual machines that belong to many different accounts. This basic functionality will be supported with the below common use case:
- A shared network can be used for monitoring purposes. A shared network can be assigned to a domain and can be used for monitoring VMs belonging to all accounts in that domain.
With the current NuageVsp plugin implementation, each shared network needs its own unique IP address range and cannot overlap with another shared network.
In VSD, it is implemented in the following manner:
- In order to have tenant isolation for shared networks, we have to create a Shared L3 Subnet for each shared network and instantiate it across the relevant enterprises. A shared network will only exist under an enterprise when it is needed, i.e. when the first VM is spun up under that ACS domain inside that shared network.
PR contents:
1) Support for shared networks with tenant isolation on master with Nuage VSP SDN Plugin.
2) Marvin test coverage for shared networks on master with Nuage VSP SDN Plugin.
3) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
4) PEP8 & PyFlakes compliance of our Marvin test code.
* pr/1579:
CLOUDSTACK-9403: Support for shared networks in Nuage VSP plugin
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- When processing a static NAT rule, add a mangle table rule to mark the traffic
from the guest VM when it has an associated static NAT rule, so that the traffic gets
routed using the route table of the device which has the public IP associated.
- Fix the case where nic_device_id is empty when an IP is getting disassociated,
resulting in an empty deviceid in ips.json.
- Add utility methods in CsRule and CsRoute to add 'ip rule' and 'ip route' rules respectively.
- Ensure traffic from all public interfaces is connection-marked with the device number, and restored
for the reverse traffic; use the connection-mark number to do a device-specific routing table lookup,
and fill the device-specific routing table with a default route.
- Component tests for testing multiple public interfaces of the VR.
- Upgrades Maven dependency version to v1.55
- Fixes bouncycastle usages and issues
- Adds timeout to jetty/annotation scanning
- Fixes servlet issue, uses servlet 3.1.0
- Downgrade javassist used by reflections to fix annotation process errors
- Make console-proxy-rdp bc dependency same as rest of the codebase
- Picks up PR #1510 by Daan
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CloudStack 9586: When using local storage with XenServer, prepareTemplate does not work with multiple primary stores
The race condition happens whenever there are multiple primary storages and CS tries to mount the secondary store to the XenServer host simultaneously.
Due to the synchronized block, one mount succeeds and the other thread gets the already-mounted SR. Without the fix, the two threads would try to mount it in parallel and one would fail on XenServer.
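A hedged sketch of the shape of the fix (simplified; the XenAPI helper names are hypothetical, not the literal plugin code):
```java
// Serialize secondary-storage SR creation so concurrent prepareTemplate calls
// reuse the SR mounted by the first thread instead of racing on the host.
private static final Object srMountLock = new Object();

SR getOrCreateNfsSr(Connection conn, String nfsUri) throws Exception {
    synchronized (srMountLock) {
        SR existing = findSrByNameLabel(conn, nfsUri);  // hypothetical helper
        if (existing != null) {
            return existing;                            // already mounted: reuse it
        }
        return createNfsSr(conn, nfsUri);               // hypothetical helper
    }
}
```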
* pr/1765:
CloudStack 9586: When using local storage with XenServer, prepareTemplate does not work with multiple primary stores. The race condition happens whenever there are multiple primary storages and CS tries to mount the secondary store to the XenServer host simultaneously. Due to the synchronized block, one mount succeeds and the other thread gets the already-mounted SR. Without the fix, the two threads would try to mount it in parallel and one would fail on XenServer.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool
In a recent management server crash, it was found that the largest contributor
to the memory leak was VmwareContextPool, where a registry (an ArrayList) is held
that grows indefinitely. The list itself is never consumed or used anywhere. There
exists a hashmap (the pool) that returns a list of contexts for an existing poolkey
(address/username), which is used instead.
This fixes the issue by removing the ArrayList registry and limiting the
length of the context list for a given poolkey.
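A hedged sketch of that second half (simplified; the cap value and cleanup call are illustrative, not the literal patch):
```java
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;

// Idle contexts per poolkey (address/username), with a bounded length so the
// pool can no longer grow indefinitely the way the old registry did.
private final Map<String, Deque<VmwareContext>> pool = new ConcurrentHashMap<>();
private static final int MAX_IDLE_PER_KEY = 10;  // illustrative cap

void returnContext(String poolKey, VmwareContext context) {
    Deque<VmwareContext> idle =
            pool.computeIfAbsent(poolKey, k -> new ConcurrentLinkedDeque<>());
    if (idle.size() < MAX_IDLE_PER_KEY) {
        idle.offerLast(context);  // retain for reuse
    } else {
        context.close();          // hypothetical cleanup of the excess context
    }
}
```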
@blueorangutan package
* pr/1729:
CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Introduced a global configuration flag 'cluster.threshold.enabled'. By default the flag is true.
If the value is false, then a VM can be started in a cluster even if the cluster thresholds are
crossed. However, for a new VM deployment the cluster threshold will always be honoured.
The race condition happens whenever there are multiple primary storages and CS tries to mount the secondary store to the XenServer host simultaneously.
Due to the synchronized block, one mount succeeds and the other thread gets the already-mounted SR. Without the fix, the two threads would try to mount it in parallel and one would fail on XenServer.
In a recent management server crash, it was found that the largest contributor
to the memory leak was VmwareContextPool, where a registry (an ArrayList) is held
that grows indefinitely. The list itself is never consumed or used anywhere. There
exists a hashmap (the pool) that returns a list of contexts for an existing poolkey
(address/username), which is used instead.
This fixes the issue by removing the ArrayList registry and limiting the
length of the context list for a given poolkey.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule with the same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file
Multiple Internal LB rules (more than one Internal LB rule with the same source IP address) are not resolved in the corresponding InternalLbVm instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is added to the corresponding InternalLbVm instance, it replaces the existing one. Thus, traffic corresponding to these unresolved (old) Internal LB rules gets dropped by the InternalLbVm instance.
PR contents:
1) Fix for this bug.
2) Marvin test coverage for Internal LB feature on master with native ACS setup (component directory) including validations for this bug fix.
3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins directory) to validate this bug fix.
4) PEP8 & PyFlakes compliance with the added Marvin test code.
* pr/1577:
CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Support for underlay features (Source & Static NAT to underlay) with the Nuage VSP SDN Plugin, including Marvin test coverage for the corresponding Source & Static NAT features on master. Moreover, our Marvin tests are written in such a way that they can validate our supported feature set with both the Nuage VSP SDN platform's overlay and underlay infra.
PR contents:
1) Support for Source NAT to underlay feature on master with Nuage VSP SDN Plugin.
2) Support for Static NAT to underlay feature on master with Nuage VSP SDN Plugin.
3) Marvin test coverage for Source & Static NAT to underlay on master with Nuage VSP SDN Plugin.
4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
5) PEP8 & PyFlakes compliance with our Marvin test code.
* pr/1580:
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID
There is a difference in instance-id metadata between baremetal and other hypervisors.
On Baremetal
[root@ip-172-17-0-144 ~]# curl http://8.37.203.221/latest/meta-data/instance-id
6021
on Xen
[root@ip-172-17-2-103 ~]# curl http://172.17.0.252/latest/meta-data/instance-id
cbeb517a-e833-4a0c-b1e8-9ed70200fbbf
In both cases it should be the VM's UUID.
* pr/1738:
CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Marvin tests for Source NAT and Static NAT features verification with NuageVsp (both overlay and underlay infra).
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>, Frank Maximus <frank.maximus@nuagenetworks.net>
This commit adds an additional VirtIO channel with the name
'org.qemu.guest_agent.0' to all Instances.
With the Qemu Guest Agent the Hypervisor gains more control over the Instance if
these tools are present inside the Instance, for example:
* Power control
* Flushing filesystems
* Fetching Network information
In the future this should allow safer snapshots on KVM since we can instruct the
Instance to flush the filesystems prior to snapshotting the disk.
More information: http://wiki.qemu.org/Features/QAPI/GuestAgent
Keep in mind that on Ubuntu AppArmor still needs to be disabled since the default
AppArmor profile doesn't allow libvirt to write into /var/lib/libvirt/qemu
This commit does not add any communication methods through API calls; it merely
adds the channel to the Instances and installs the Guest Agent in the SSVMs.
With the addition of the Qemu Guest Agent channel, a second channel appears in /dev
on an SSVM as a VirtIO port.
The order in which the ports are defined in the XML matters for the naming inside
the SSVM; by not relying on /dev/vportXX but looking for a static name, the
SSVM still boots properly if the order in the XML definition is changed.
An SSVM with both ports attached will have something like this:
root@v-215-VM:~# ls -l /dev/virtio-ports
total 0
lrwxrwxrwx 1 root root 11 May 13 21:41 org.qemu.guest_agent.0 -> ../vport0p2
lrwxrwxrwx 1 root root 11 May 13 21:41 v-215-VM.vport -> ../vport0p1
root@v-215-VM:~# ls -l /dev/vport*
crw------- 1 root root 251, 1 May 13 21:41 /dev/vport0p1
crw------- 1 root root 251, 2 May 13 21:41 /dev/vport0p2
root@v-215-VM:~#
In this case the SSVM port points to /dev/vport0p1, but if the order in the XML
is different it might point to /dev/vport0p2.
By looking for a port name with a pre-defined pattern in /dev/virtio-ports, we
do not rely on the order in the XML definition.
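A minimal sketch of that lookup (illustrative, not the actual systemvm code), resolving the port by its stable name under /dev/virtio-ports:
```java
import java.io.File;

// Find a VirtIO port by a name suffix (e.g. ".vport" for the agent port),
// instead of depending on the XML ordering behind /dev/vportXX.
static File findVirtioPort(String suffix) {
    File[] ports = new File("/dev/virtio-ports").listFiles();
    if (ports != null) {
        for (File port : ports) {
            if (port.getName().endsWith(suffix)) {
                return port;  // symlink to the correct /dev/vportXpY
            }
        }
    }
    return null;
}
```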
Signed-off-by: Wido den Hollander <wido@widodh.nl>
CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor
## Introduction
[JIRA TICKET](https://issues.apache.org/jira/browse/CLOUDSTACK-9379)
It is desired to support nested virtualization at the VM level for the VMware hypervisor. The current behaviour supports enabling/disabling nested virtualization globally by modifying the global config `'vmware.nested.virtualization'`. We wish to improve this feature by having control at the VM level instead of only a global control.
A new global configuration is added to enable/disable per-VM nested virtualization control: `'vmware.nested.virtualization.perVM'`. Default value: false.
After a VM deployment or start command, the VM params include the `'nestedVirtualizationFlag'` key, and its value is:
- true -> nested virtualization enabled
- false -> nested virtualization disabled
**We determine whether nested virtualization is enabled or disabled by examining these 3 values:**
- **(1)** global configuration `'vmware.nested.virtualization'` value
- **(2)** global configuration `'vmware.nested.virtualization.perVM'` value
- **(3)** `'nestedVirtualizationFlag'` value in `user_vm_details` if present, `null` if not.
Using these 3 values, there are different use cases:
- **(1)** = TRUE, **(2)** = TRUE, **(3)** is null -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = TRUE, **(2)** = FALSE, **(3)** indifferent -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** is null -> _DISABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = FALSE, **(2)** = FALSE, **(3)** indifferent -> _DISABLED_
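The eight cases collapse into a single expression; a sketch with illustrative names (the per-VM flag wins only when per-VM control is enabled and the flag is actually set):
```java
// (1) globalNV     = vmware.nested.virtualization
// (2) perVmControl = vmware.nested.virtualization.perVM
// (3) vmFlag       = nestedVirtualizationFlag from user_vm_details (null if absent)
static boolean nestedVirtualizationEnabled(boolean globalNV, boolean perVmControl, Boolean vmFlag) {
    return (perVmControl && vmFlag != null) ? vmFlag : globalNV;
}
```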
* pr/1542:
CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC
In VmwareResource, the findRouterEthDeviceIndex() method finds the ethernet interface index given
the MAC address. This method is used once a NIC is plugged, to determine its ethernet interface.
It reads "/proc/sys/net/ipv4/conf" from the VR and loops through the devices to find the right
ethernet interface. However, the current logic reads it once and then loops through that device list.
It is observed that a device may not show up in '/proc/sys/net/ipv4/conf' immediately once the NIC is plugged
into the VM from vCenter.
The fix ensures that, while waiting for 15 sec in the loop, the latest content of /proc/sys/net/ipv4/conf
is re-read, so that the right device list is processed.
Manually tested VPC scenarios of adding new tiers, which use findRouterEthDeviceIndex to find the guest/public network ethernet index.
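A hedged sketch of the shape of the fix (simplified; both helpers are hypothetical): re-list /proc/sys/net/ipv4/conf on every retry instead of reading it once before the loop:
```java
// Retry for up to 15 s, refreshing the device list from the VR each iteration,
// since a freshly hot-plugged NIC may appear in /proc/sys/net/ipv4/conf late.
long deadline = System.currentTimeMillis() + 15_000L;
int ethIndex = -1;
while (ethIndex < 0 && System.currentTimeMillis() < deadline) {
    String devices = executeInVR("ls /proc/sys/net/ipv4/conf");  // hypothetical helper
    ethIndex = findIndexByMac(devices, macAddress);              // hypothetical helper
    if (ethIndex < 0) {
        Thread.sleep(1000);
    }
}
```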
* pr/1681:
CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9503: Increased the VR script timeout. Most of the changes are about converting int/long time values to joda Duration.
* pr/1745:
CLOUDSTACK-9503: Increased the VR script timeout. Most of the changes are about converting int/long time values to joda Duration.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9502: DS template copies don't get deleted in VMware ESXi with multiple clusters and zone-wide storage (include CLOUDSTACK-9386 into 4.9 release branch)
Include #1560 into 4.9 release branch
* pr/1676:
CLOUDSTACK-9502: DS template copies don’t get deleted in VMware ESXi with multiple clusters and zone wide storage
Signed-off-by: John Burwell <meaux@cockamamy.net>
* 4.9:
CLOUDSTACK-8830: Fix for VM snapshots in VMware: could not create a VM snapshot until 12 minutes after VM creation because vCenter sent a null name on the snapshot recent task
CLOUDSTACK-8830 - [VMware] VM snapshot fails for 12 min after instance creation (Targeted for 4.9)
Continuing work by @maneesha-p in #798
This closes #798
* pr/1677:
CLOUDSTACK-8830: Fix for VM snapshots in VMware: could not create a VM snapshot until 12 minutes after VM creation because vCenter sent a null name on the snapshot recent task
Signed-off-by: John Burwell <meaux@cockamamy.net>
Made the changes to improve logging.
CLOUDSTACK-9465: Several log refactoring/improvement suggestions.
There are two scenarios of logging which need refactoring/improvement:
Method invocation replaced by variable
This means that in the logging code the method invocation result is already assigned to a variable; for simplicity, the method invocation should be replaced by the variable.
Delete variable which must be null
The variable in the logging code is null, so there is no need to put the variable there.
* pr/1705:
Made the changes to improve logging.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Support Backup of Snapshots for Managed Storage
```
This PR adds the ability to pass a new parameter, locationType,
to the createSnapshot API command. Depending on the locationType,
we decide where the snapshot should go in the case of managed storage.
There are two possible values for the locationType param
1) `Primary`: The standard operation for managed storage is to
keep the snapshot on the device (primary). For non-managed storage, this will
give an error, as this option is only supported for managed storage.
2) `Secondary`: Applicable only to managed storage. This will
keep the snapshot on the secondary storage. For non-managed
storage, this will result in an error.
The reason for implementing this feature is to avoid a single
point of failure for primary storage. Right now in case of managed
storage, if the primary storage goes down, there is no easy way
to recover data as all snapshots are also stored on the primary.
This feature allows us to mitigate that risk.
```
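As a usage illustration (the URL, volume ID and signing are placeholders; a real call must be signed with the account's API/secret key pair):
```java
// Request that the snapshot of a managed-storage volume be kept on secondary storage.
String request = "http://mgmt-server:8080/client/api"
        + "?command=createSnapshot"
        + "&volumeid=<volume-uuid>"   // placeholder
        + "&locationType=Secondary"   // or Primary, the managed-storage default
        + "&response=json";
```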
* pr/1600:
Support Backup of Snapshots for Managed Storage
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This PR adds the ability to pass a new parameter, locationType,
to the “createSnapshot” API command. Depending on the locationType,
we decide where the snapshot should go in the case of managed storage.
There are two possible values for the locationType param
1) `Standard`: The standard operation for managed storage is to
keep the snapshot on the device. For non-managed storage, this will
be to upload it to secondary storage. This option will be the
default.
2) `Archive`: Applicable only to managed storage. This will
keep the snapshot on the secondary storage. For non-managed
storage, this will result in an error.
The reason for implementing this feature is to avoid a single
point of failure for primary storage. Right now in case of managed
storage, if the primary storage goes down, there is no easy way
to recover data as all snapshots are also stored on the primary.
This feature allows us to mitigate that risk.
In VmwareResource, the findRouterEthDeviceIndex() method finds the ethernet interface index given
the MAC address. This method is used once a NIC is plugged, to determine its ethernet interface.
It reads "/proc/sys/net/ipv4/conf" from the VR and loops through the devices to find the right
ethernet interface. However, the current logic reads it once and then loops through that device list.
It is observed that a device may not show up in '/proc/sys/net/ipv4/conf' immediately once the NIC is plugged
into the VM from vCenter. The fix ensures that, while waiting for 15 sec in the loop, the latest
content of /proc/sys/net/ipv4/conf is re-read, so that the right device list is processed.
CLOUDSTACK-9438: Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9438
### Introduction
Since #1361 it has been possible to configure the NFS version for secondary storage mounts.
However, changing the NFS version required inserting a new detail in the `image_store_details` table, with `name = 'nfs.version'` and `value = X` (where X is the desired NFS version), and then restarting the management server for the change to take effect.
Our improvement aims to make the NFS version changeable from the UI instead of via the previously described workflow.
### Proposed solution
Basically, the NFS version is defined as an image store ConfigKey, which implied:
* Adding a new Config scope: **ImageStore**
* Making `ImageStoreDetailsDao` extend `ResourceDetailsDaoBase` and `ImageStoreDetailVO` implement `ResourceDetail`
* Inserting a `'display'` column on the `image_store_details` table
* Extending `ListCfgsCmd` and `UpdateCfgCmd` to support the **ImageStore** scope, which implied:
** Injecting `ImageStoreDetailsDao` and `ImageStoreDao` into the `ConfigurationManagerImpl` class, in the `cloud-server` module.
### Important
It is important to mention that the `ImageStoreDaoImpl` and `ImageStoreDetailsDaoImpl` classes were moved from the `cloud-engine-storage` to the `cloud-engine-schema` module so that Spring can find those beans to inject into `ConfigurationManagerImpl` in the `cloud-server` module.
We had these Maven dependencies between modules:
* `cloud-server --> cloud-engine-schema`
* `cloud-engine-storage --> cloud-secondary-storage --> cloud-server`
As `ImageStoreDaoImpl` and `ImageStoreDetailsDao` were defined in `cloud-engine-storage` and are needed in the `cloud-server` module (to be injected into `ConfigurationManagerImpl`), adding a dependency from `cloud-server` to `cloud-engine-storage` would have introduced a dependency cycle. To avoid this cycle, we moved those classes to the `cloud-engine-schema` module.
* pr/1615:
CLOUDSTACK-9438: Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
By adding a Random Number Generator device to Instances we can prevent
entropy starvation inside the guest.
The default source is /dev/random on the host, but this can be configured
to another source when present, for example a hardware RNG.
When enabled it will add the following to the Instance's XML definition:
<rng model='virtio'>
  <rate period='1000' bytes='2048' />
  <backend model='random'>/dev/random</backend>
</rng>
If the Instance has the proper support, which most modern distributions have,
it will have a /dev/hwrng device which it can use for gathering entropy.
More information: https://libvirt.org/formatdomain.html#elementsRng
CLOUDSTACK-9422: Granular 'vmware.create.full.clone' as Primary Storage setting
### Introduction
For VMware, it is possible to decide whether to create VMs as full clones on the ESX HV by adjusting the `vmware.create.full.clone` global setting. We would like to introduce this property as a primary storage detail and use its value instead of the global setting's value.
We propose introducing a `fullCloneFlag` on the `PrimaryDataStoreTO` sent in the `CopyCommand`. This way we can reconfigure `VmwareStorageProcessor` and `VmwareStorageSubsystemCommandHandler`, similar to what was done for `nfsVersion` but refactored to be more general.
* pr/1602:
CLOUDSTACK-9422: Granular VMware VM creation as full clones on HV
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
On snapshot backup, this converts the RBD raw format on disk to QCOW2 for compression.
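A minimal sketch of the conversion, assuming it shells out to qemu-img (illustrative; the source and destination paths are hypothetical):
```java
// Convert an RBD snapshot (raw) to a compressed QCOW2 file on secondary storage.
ProcessBuilder pb = new ProcessBuilder(
        "qemu-img", "convert", "-f", "raw", "-O", "qcow2", "-c",
        "rbd:rbd/volume-1234@snap-1",              // hypothetical RBD source
        "/mnt/secondary/snapshots/snap-1.qcow2");  // hypothetical destination
pb.inheritIO().start().waitFor();
```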
* pr/1645:
CLOUDSTACK-9461
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Adding support for cross-cluster storage migration for managed storage when using XenServer
This PR adds support for cross-cluster storage migration of VMs that make use of managed storage with XenServer.
Managed storage is when you have a 1:1 mapping between a virtual disk and a volume on a SAN (in the case of XenServer, an SR is placed on this SAN volume and a single virtual disk is placed in the SR).
Managed storage allows features such as storage QoS and SAN-side snapshots to work (sort of analogous to VMware VVols).
This PR focuses on enabling VMs that are using managed storage to be migrated across XenServer clusters.
I have successfully run the following tests on this branch:
TestVolumes.py
TestSnapshots.py
TestVMSnapshots.py
TestAddRemoveHosts.py
TestVMMigrationWithStorage.py (which is a new test that is being added with this PR)
* pr/1671:
Adding support for cross-cluster storage migration for managed storage when using XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>