Displays a VR tab that lists VRs for the network in the detail views for
isolated networks, shared networks, and VPCs.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9811: fix duplicated nics on VR caused by nic name p<slot_number>p<port_number>
* pr/2011:
CLOUDSTACK-9811: fix duplicated nics on VR caused by nic name p<slot_number>p<port_number>
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
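A minimal sketch of the naming issue in Python (the regex and helper below are illustrative assumptions, not the actual VR script code): the VR-side handling has to recognise predictable device names of the form p<slot_number>p<port_number> as well as the classic ethN names, otherwise the same nic can end up being processed twice.
```python
# Illustrative only: device-name pattern assumed for this sketch.
import re

NIC_NAME = re.compile(r"^(eth\d+|p\d+p\d+)$")  # eth0, eth1, ... or p3p1, p5p2, ...

def is_guest_nic_name(device):
    return bool(NIC_NAME.match(device))
```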
[4.9] CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
The router.aggregation.command.each.timeout setting in the global configuration is only applied to newly created KVM hosts.
For existing KVM hosts, changing the value has no effect.
We need to propagate the configuration to existing hosts when the cloudstack-agent connects.
* pr/1856:
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
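A rough Python sketch of the idea (the real implementation is in the Java management server and KVM agent; the hook and parameter names here are assumptions): the current global value is handed to a KVM agent every time it connects, so existing hosts pick up changes without being re-added.
```python
# Hypothetical connect hook, illustrating only the propagation step.
def on_kvm_agent_connect(host, global_config, send_setup_params):
    timeout = global_config.get("router.aggregation.command.each.timeout")
    if timeout is not None:
        # Push the current value to the connecting agent so existing hosts honour it.
        send_setup_params(host, {"router.aggregation.command.each.timeout": timeout})
```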
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
* rotate both daily and by size by using maxsize instead of size
* decrease the max size to 10M for rsyslog files
* remove delaycompress for rsyslog files
* increase rotate to 10 for cloud.log
* pr/1915:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9363: Fix HVM VM restart bug in XenServer
Here is the longer description of the problem:
By default XenServer limits HVM guests to only 4 disks. Two of those are reserved for the ROOT disk (deviceId=0) and the CD-ROM (deviceId=3), which means that we can only attach 2 data disks. This limit, however, is removed when Xentools is installed on the guest. The information that a guest has Xentools installed and can handle more than 4 disks is stored in the VM metadata on XenServer. When a VM is shut down, CloudStack removes the VM and all the metadata associated with the VM from XenServer. Now, when you start the VM again, even if it has Xentools installed, it will default to only 4 attachable disks.
This problem manifests itself when you have an HVM VM and you stop and start it with more than 2 data disks attached. The VM fails to start, and the only way to start it is to detach the extra disks and then reattach them after the VM starts.
In this fix, I am removing the check which is done before creating a `VBD` and which enforces this limit. This will not affect the current workflow and will fix the HVM issue.
@koushik-das this is related to the "autodetect" feature that you introduced a while back (https://issues.apache.org/jira/browse/CLOUDSTACK-8826). I would love your review on this fix.
* pr/1829:
Fix HVM VM restart bug in XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
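A minimal Python sketch of the change described above (the actual code is Java in the XenServer resource layer; names here are illustrative): the hard-coded four-device pre-check for HVM guests is dropped, deferring to the device limit XenServer itself reports.
```python
def validate_vbd_creation(is_hvm, attached_device_count):
    # Old, removed behaviour (sketch): refuse a new VBD once an HVM guest
    # already had 4 devices (ROOT disk, CD-ROM and 2 data disks).
    # if is_hvm and attached_device_count >= 4:
    #     raise RuntimeError("HVM guest already has the maximum number of disks")
    return True  # defer to the allowed-VBD-devices list reported by XenServer
```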
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Reverting a VM to a disk-only snapshot in XenServer corrupts the VM
Stale NFS secondary storage on XS leads to volume creation failure from snapshot
Fixed various concerns raised in #672
* pr/1941:
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9789: Fix failure to release a secondary guest IP when an associated static NAT does not actually use it
* pr/1947:
CLOUDSTACK-9789: Fix failure to release a secondary guest IP when an associated static NAT does not actually use it
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
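A small illustrative check in Python (field names are assumptions; the real logic is in the Java IP address management code): releasing a secondary guest IP should only be refused when a static NAT rule actually references that IP.
```python
def can_release_secondary_ip(secondary_ip, static_nat_rules):
    # Only block the release when a rule really points at this secondary IP.
    return all(rule.get("dest_ip") != secondary_ip for rule in static_nat_rules)
```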
CLOUDSTACK-9628: Fix Template Size in Swift as Secondary Storage
CloudStack incorrectly uses the physical size as the size of the
template. Ideally, the size should reflect the virtual size. This
PR fixes that issue.
* pr/1770:
CLOUDSTACK-9628: Use correct virtualsize with Swift as secondary storage
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
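A hedged sketch of the distinction in Python (assuming a QCOW2/raw image and that qemu-img is available; CloudStack's actual code paths differ): the size recorded for the template should be the image's virtual size, not the physical size of the stored object.
```python
import json, os, subprocess

def template_sizes(path):
    physical = os.path.getsize(path)  # size of the stored file/object
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--output=json", path]))
    virtual = info["virtual-size"]    # what the template size should reflect
    return physical, virtual
```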
CLOUDSTACK-9691: Fixed unhandled exception in list snapshot command when a primary store related to it is deleted
@mike-tutkowski After adding support for snapshots on SolidFire there are many places which are prone to these NullPointerExceptions, resulting in various issues. The root cause of these issues is that we get the primary storage associated with a snapshot and then figure out how to handle it, but if that store has been deleted this results in NullPointerExceptions. Without SolidFire we don't need to access primary storage.
Should we handle these as issues are found, or could there be another way to fix all of them?
* pr/1847:
CLOUDSTACK-9691: Added test list_snapshots_with_removed_data_store
CLOUDSTACK-9691: Fixed unhandled exception in list snapshot command when a primary store related to it is deleted
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
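A minimal Python illustration of the defensive lookup (the real fix is in the Java snapshot listing code; names here are assumptions): a snapshot may reference a primary store that has since been deleted, so the store must be looked up defensively instead of being dereferenced unconditionally.
```python
def snapshot_store_info(snapshot, find_store_by_id):
    store = find_store_by_id(snapshot.get("store_id"))
    if store is None:
        # The primary store was removed; return nothing instead of failing
        # with the equivalent of a NullPointerException.
        return None
    return store
```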
CLOUDSTACK-9655 The template which is registered in all zones will be deleted by deleting 1 template on any zone
Added an extra warning message if it's a cross-zone template ("This is a
cross zone template and will be deleted from all the zones. Are you sure
you want to proceed?").
* pr/1818:
CLOUDSTACK-9655 The template which is registered in all zones will be deleted by deleting 1 template on any zone
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%
This issue occurs when a volume in Ready state is moved across storage
pools.
While finding out whether the storage pool has enough space, there is a check
that considers the size of non-Ready volumes only. This is correct if the volume
to be attached to a VM is in the same storage pool. But if the volume
is in another storage pool and has to be moved to the VM's storage pool,
the size of the volume should be considered in the space check.
Fixed by computing the asking size when the volume is not in Ready state or when the
volume is on a different storage pool.
Testing:
I couldn't write unit tests for it. This class is not in a unit-testable state.
Manually tested in the environment below:
1. XenServer 6.5 setup with 2 clusters and one host in each of them.
2. Added storage tags to the primary storage.
3. Created two service offerings with the storage tags.
4. Deployed two VMs using the offerings created in step 3.
5. At this stage, there are two VMs, one on each host, with root disks on the corresponding primary storage.
6. Created a data disk and attached it to vm1.
7. Detached the data disk. Now the data disk is in the primary storage of vm1's cluster (let us say primary1).
8. Attached this data disk to vm2 (running on a host in a different cluster).
9. The volume should be moved to the primary storage of the other cluster and op_host_capacity should be updated accordingly.
* pr/873:
CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
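The check described above, condensed into a small Python sketch (field names are assumptions; the real logic lives in the Java storage capacity code): a volume's size counts against the target pool when the volume is not yet Ready, or when it is Ready but currently lives on a different pool and would have to be migrated in.
```python
def asking_size(volume, target_pool_id):
    if volume["state"] != "Ready":
        return volume["size"]
    if volume["pool_id"] != target_pool_id:
        # Ready volume on another pool: moving it here consumes space here.
        return volume["size"]
    return 0  # already accounted for on this pool
```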
[4.9] CLOUDSTACK-9770: fix missing ip routes in VR
In the network VR, the routes to the current subnets are missing from the corresponding ip route table.
This is caused by a typo in commit 6749785cab.
In the VPC VR, it works fine.
* pr/1929:
CLOUDSTACK-9770: fix missing ip routes in VR
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
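A rough sketch of what the VR configuration ends up doing per guest nic (the device, CIDR, and table values are assumptions; the actual logic is in the VR's Python scripts): make sure a route for the nic's own subnet exists in the corresponding routing table.
```python
import subprocess

def ensure_subnet_route(device, subnet_cidr, table):
    # e.g. ensure_subnet_route("eth2", "10.1.1.0/24", "Table_eth2")  # assumed values
    subprocess.call(["ip", "route", "add", subnet_cidr,
                     "dev", device, "table", table])
```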
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable
Unhides snapshot.backup.rightafter from global configuration
If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage.
This is the same as PR #1644 applied to 4.9, as per @jburwell
* pr/1697:
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
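A minimal sketch of the behaviour in Python (names assumed; the real code is Java): the snapshot is only copied to secondary storage right after creation when snapshot.backup.rightafter is true.
```python
def after_snapshot_created(snapshot, config, backup_to_secondary):
    # Default is true, per the description above.
    if config.get("snapshot.backup.rightafter", True):
        backup_to_secondary(snapshot)
```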
CLOUDSTACK-8805: Domains become inactive automatically.
Handled the '%' case by treating it as a literal character rather than a wildcard character.
* pr/775:
CLOUDSTACK-8805: Domains become inactive automatically. Handled the '%' case by treating it as a literal character rather than a wildcard character.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
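An illustrative Python helper for the escaping idea (not the actual Java/DAO code): a '%' that appears in a user-supplied domain name must be escaped so SQL LIKE treats it literally instead of as a wildcard, which is what made unrelated domains match and get deactivated.
```python
def escape_like(term, escape_char="\\"):
    # Escape the escape character first, then the LIKE wildcards.
    return (term.replace(escape_char, escape_char * 2)
                .replace("%", escape_char + "%")
                .replace("_", escape_char + "_"))

# e.g. WHERE name LIKE CONCAT(<escaped>, '%') ESCAPE '\\'
```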
[4.9] CLOUDSTACK-9692: Fix password server issue in redundant VRs
The password server in RVRs has wrong parameters because the gateway of the guest nics is None.
In this case, we should get the gateway from /var/cache/cloud/cmdline.
This issue is caused by commit 45642b8382
* pr/1871:
CLOUDSTACK-9692: Fix password server issue in redundant VRs
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
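A hedged sketch of the fallback (the file format and parameter handling are assumptions based on the description; the real code is in the VR's Python scripts): when the guest nic's gateway is None, read the gateway from /var/cache/cloud/cmdline before starting the password server.
```python
def guest_gateway(nic_gateway, cmdline_path="/var/cache/cloud/cmdline"):
    if nic_gateway and nic_gateway != "None":
        return nic_gateway
    # Fall back to the gateway recorded on the system VM's cmdline
    # (assumed here to be space-separated key=value tokens).
    with open(cmdline_path) as f:
        for token in f.read().split():
            if token.startswith("gateway="):
                return token.split("=", 1)[1]
    return None
```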
Use a separate lvcreate command on XenServer 7 hosts: check the XenServer release
version and pass different parameters based on it.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
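An illustrative Python sketch only (the exact flags used by the plugin are not quoted in this log; the extra flag below is an assumption): build the lvcreate command differently depending on the XenServer release reported by the host.
```python
def lvcreate_cmd(lv_name, size_mb, vg_name, xs_version):
    cmd = ["lvcreate", "-n", lv_name, "-L", "%dM" % size_mb, vg_name]
    if xs_version.startswith("7"):
        # Assumption for illustration: newer LVM on XenServer 7 may prompt,
        # so pass a flag that keeps the command non-interactive.
        cmd.insert(1, "--yes")
    return cmd
```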
4.9 multiplex testing
This PR merges several PRs in order to speed up test efforts.
* pr/1854:
CLOUDSTACK-9688: Fix VR smoke test failure in vpc_vpn
CLOUDSTACK-9688: Fix failing test_volumes on centos7/kvm
CLOUDSTACK-9676 Start instance fails after reverting to a VM snapshot, when there are child VM snapshots
CLOUDSTACK-9612: Fixed issue in restarting redundant network with cleanup. Restarting, with cleanup, an RVR network that was updated from an isolated network failed. Corrected the column name string issue.
CLOUDSTACK-9676 Start instance fails after reverting to a VM snapshot, when there are child VM snapshots
CLOUDSTACK-9673 Exception occurred while creating the CPVM in the VMware setup over standard vSwitches
CLOUDSTACK-9617: Fixed enabling remote access after PF or LB configured on vpn tcp ports
CLOUDSTACK-9615: Fixed applying ingress rules without ports
CLOUDSTACK-9639: Unable to create shared network with VLAN isolation
CLOUDSTACK-9597: Should not fetch resource count for removed entity
CLOUDSTACK-9649: In the management server log there is an error related to 0.0.0.0 IP address
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The test_vpc_vpn test uses a CIDR that overlaps with the base test environment's
CIDR, causing intermittent failures. This changes the CIDR so it does not overlap
with the underlying infra, avoiding future failures.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
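A quick way to express the non-overlap requirement in Python (the concrete CIDR values below are hypothetical, not the ones used by the test):
```python
import ipaddress

def overlaps(cidr_a, cidr_b):
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# The test's VPC CIDR must not overlap the environment's own CIDR:
assert not overlaps("10.2.0.0/16", "192.168.100.0/24")  # hypothetical values
```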
Due to OS/hypervisor/environmental configuration, detaching a disk/device
using libvirt can be successful without updating the domain configuration (xml).
This leads to reattachment failure as the device is blocked until the next
reboot. This fixes a specific environment case by performing stop/start on
the VM only in case of KVM, which will recreate a fresh domain config (xml)
as KVM VMs have transient domain configs (xmls don't persist).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9673 : Exception occurred while creating the CPVM in VMware setup over standard vSwitches
Jira
===
CLOUDSTACK-9673 : Exception occurred while creating the CPVM in VMware setup over standard vSwitches
Issue
====
Exception occurred while creating the CPVM in the VMware setup using standard vSwitches.
```
StartCommand failed due to Exception: com.vmware.vim25.AlreadyExists
message: [] com.vmware.vim25.AlreadyExistsFaultMsg: The specified key, name, or identifier already exists
```
Fix
===
Ensure synchronization while attempting to create a port group, so that simultaneous attempts are not made with the same port group name on the same ESXi host.
Testing
======
Successfully ran manual tests (deploy user instance) on top of the latest master commit `17653a86fad67447a4f13e455e336694ad5c1735`.
This code change is involved in virtual network creation over VMware standard vSwitches. Existing functional tests cover this functionality.
* pr/1827:
CLOUDSTACK-9673 Exception occurred while creating the CPVM in the VMware setup over standard vSwitches
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
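A minimal Python illustration of the serialization idea (the actual fix is in the Java VMware resource code; the callables here are placeholders): creation attempts for the same port group name on the same ESXi host are funnelled through one lock, so a second thread finds the group already created instead of racing into AlreadyExists.
```python
import threading
from collections import defaultdict

_pg_locks = defaultdict(threading.Lock)

def create_port_group_synchronized(host, name, exists, create):
    """exists(host, name) and create(host, name) are caller-supplied callables."""
    with _pg_locks[(host, name)]:
        if not exists(host, name):
            create(host, name)
```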
CLOUDSTACK-9676 Start instance fails after reverting to a VM snapshot when there are child VM snapshots
Jira
===
CLOUDSTACK-9676 Start instance fails after reverting to a VM snapshot when there are child VM snapshots
Issue
====
Start instance fails after reverting to a VM snapshot when there are one or more child VM snapshots in the snapshot tree of the VM.
Per the code, the method hasSnapshot() is supposed to detect the presence of any snapshot for the VM. But we are checking only the current snapshot instead of checking for the presence of any snapshot in the snapshot tree.
The failure to detect all snapshots means CloudStack reconfigures the VM in the wrong way, assuming there are no snapshots for the VM.
This results in a start failure.
Fix
===
Ensure correct detection of VM snapshots in the VM snapshot tree
* pr/1828:
CLOUDSTACK-9676 Start instance fails after reverting to a VM snapshot, when there are child VM snapshots
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
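A small Python sketch of the corrected detection (the data shape is assumed; the real method is hasSnapshot() in the Java code): the whole snapshot tree of the VM is walked, so child snapshots are found as well, not just the current one.
```python
def has_snapshot(children_of, vm_id):
    """children_of maps a VM id or snapshot id to its child snapshots (assumed shape)."""
    stack = list(children_of.get(vm_id, []))
    while stack:
        snapshot = stack.pop()
        if snapshot.get("state") != "Expunged":
            return True
        stack.extend(children_of.get(snapshot["id"], []))
    return False
```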
CLOUDSTACK-9597: Should not fetch resource count for removed entity
Fetch the resource count by domain and account, excluding the removed ones.
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
* pr/1764:
CLOUDSTACK-9597: Should not fetch resource count for removed entity
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
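A one-line illustration in Python (row shape assumed; the real change filters the database query): removed domains and accounts are excluded when summing resource counts.
```python
def total_resource_count(rows):
    # Skip rows whose owning entity has been removed.
    return sum(r["count"] for r in rows if r.get("removed") is None)
```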
CLOUDSTACK-9649: In the management server log there is an error
ISSUE
============
In the management server log there is an error
2016-10-01 00:07:31,670 ERROR [c.c.h.v.r.VmwareResource] (DirectAgent-417:ctx-e8c89b3f strmg-esx-01, cmd: GetRouterAlertsCommand) (logid:7beb3819) Command failed due to Exception: java.io.IOException
Message: There was a problem while connecting to 0.0.0.0:3922
In the case of a basic zone with a VMware ESXi host, NIC 2 always gets 0.0.0.0 as its IP address. It looks like we are generating an error when connecting through this invalid IP.
2016-10-01 04:37:31,680 DEBUG [c.c.a.m.AgentManagerImpl] (RouterStatusMonitor-1:ctx-8880f9c8) (logid:946838b8) Details from executing class com.cloud.agent.api.routing.GetRouterAlertsCommand: Command failed due to Exception: java.io.IOException
Message: There was a problem while connecting to 0.0.0.0:3922
2016-10-01 04:37:31,680 WARN [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-8880f9c8) (logid:946838b8) Unable to get alerts from router r-4-VM Command failed due to Exception: java.io.IOException
Message: There was a problem while connecting to 0.0.0.0:3922
2016-10-01 04:37:31,682 DEBUG [c.c.n.ExternalDeviceUsageManagerImpl] (ExternalNetworkMonitor-1:ctx-913c7bae) (logid:1b926a60) External devices stats collector is running...
Root Cause:
The link-local address is not used in basic zone mode (VMware); 0.0.0.0 is just shown as a placeholder address. In getRouterAlerts, before sending GetRouterAlertsCommand, a check on the IP was added to skip the command if the IP is '0.0.0.0'.
* pr/1811:
CLOUDSTACK-9649: In the management server log there is an error related to 0.0.0.0 IP address
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
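A tiny sketch of the added guard in Python (names assumed; the real check is in the Java getRouterAlerts path): the command is skipped when the router's IP is the 0.0.0.0 placeholder.
```python
def should_send_router_alerts_command(router_ip):
    return bool(router_ip) and router_ip != "0.0.0.0"
```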