CLOUDSTACK-9783: Improve metrics view performance
This improves the rendering performance of the metrics view tables by re-implementing the frontend logic at the backend and serving the data via APIs. In large environments, the older implementation made several API calls, increasing both network and database load.
APIs introduced to improve performance by re-implementing the frontend logic at the backend (a usage sketch follows the list):
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
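As a usage sketch (hedged: this assumes Marvin's auto-generated cloudstackAPI bindings for the new calls and an initialized apiclient; the response field names are illustrative, not taken from the PR), one of the metrics APIs can be fetched in a single request:
````
# Sketch: Marvin generates a cmd class per API, so listHostsMetrics is
# issued like any other list API -- one call replaces the many per-host
# calls the old frontend logic made.
from marvin.cloudstackAPI import listHostsMetrics

cmd = listHostsMetrics.listHostsMetricsCmd()
hosts = apiclient.listHostsMetrics(cmd)
for h in hosts or []:
    # Field names here are illustrative assumptions.
    print(h.name, h.cpuused, h.memoryused)
````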
Pinging for review - @abhinandanprateek @DaanHoogland @borisstoyanov @karuturi @rashmidixit
Marvin test results:
=== TestName: test_list_clusters_metrics | Status : SUCCESS ===
=== TestName: test_list_hosts_metrics | Status : SUCCESS ===
=== TestName: test_list_infrastructure_metrics | Status : SUCCESS ===
=== TestName: test_list_pstorage_metrics | Status : SUCCESS ===
=== TestName: test_list_vms_metrics | Status : SUCCESS ===
=== TestName: test_list_volumes_metrics | Status : SUCCESS ===
=== TestName: test_list_zones_metrics | Status : SUCCESS ===
* pr/1944:
CLOUDSTACK-9783: Improve metrics view performance
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Summary: CLOUDSTACK-8921
The snapshot_store_ref table should store the actual size of the backed-up snapshot in secondary storage.
Calling SR scan to make sure the size is updated correctly.
CLOUDSTACK-9854: Fix test_primary_storage test failure due to live migration
Fix for the test_primary_storage integration tests on the simulator.
When finding storage pool migration options for a volume of a running VM, the API returns None because the simulator hypervisor doesn't support live migration.
````
2017-03-28 06:07:55,958 - DEBUG - ========Sending GET Cmd : findStoragePoolsForMigration=======
2017-03-28 06:07:55,977 - DEBUG - Response : None
2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: test_03_migration_options_storage_tags: ['Traceback (most recent call last):\n', ' File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 329, in run\n testMethod()\n', ' File "/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py", line 30, in test_wrapper\n return test(self, *args, **kwargs)\n', ' File "/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py", line 547, in test_03_migration_options_storage_tags\n pools_suitable = filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 'NoneType' object is not iterable\n"]
````
So we simply stop the VM before sending the findStoragePoolsForMigration command, as sketched below.
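A minimal sketch of the adjusted flow (hedged: assumes Marvin's standard VirtualMachine helper and the auto-generated findStoragePoolsForMigration binding; virtual_machine and volume are names from the test context):
````
from marvin.cloudstackAPI import findStoragePoolsForMigration

# Stop the VM first: the simulator hypervisor cannot live-migrate the
# volumes of a running VM, which is what made the API return None.
virtual_machine.stop(self.apiclient)

cmd = findStoragePoolsForMigration.findStoragePoolsForMigrationCmd()
cmd.id = volume.id
pools_response = self.apiclient.findStoragePoolsForMigration(cmd)
# With the VM stopped, the response is iterable again.
pools_suitable = [p for p in pools_response if p.suitableformigration]
````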
* pr/2021:
CLOUDSTACK-9854: Fix test_primary_storage test failure due to live migration
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This removes network details from the guest VM template's OVF XML before
deploying a VM, which would otherwise fail in dvSwitch-based VMware
environments with no dummy/existing vSwitch.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This removes NIC/network-specific details when exporting the systemvmtemplate
for VMware (OVA file). Keeping these details causes the SSVMs to fail to deploy
in dvSwitch-based VMware environments that have no vSwitch portgroups (dummy
etc.); a sketch of the idea follows below.
Tested this on a local Trillian env.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
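As an illustration of the idea behind both OVF changes above (not the actual patch; the element name and namespace are assumptions about common OVF 1.x descriptors), a sketch that strips network sections from an OVF file:
````
import xml.etree.ElementTree as ET

OVF_NS = 'http://schemas.dmtf.org/ovf/envelope/1'  # common OVF 1.x namespace

def strip_network_sections(ovf_path):
    # Drop <NetworkSection> nodes so the descriptor carries no
    # portgroup/vswitch references that may not exist on the target host.
    tree = ET.parse(ovf_path)
    root = tree.getroot()
    for section in root.findall('{%s}NetworkSection' % OVF_NS):
        root.remove(section)
    tree.write(ovf_path)
````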
Fix for test_snapshots.py using nfs2 instead of nfs template
Fix for the Marvin test failure introduced in #1847.
Cc: @borisstoyanov @rhtyd @karuturi
* pr/1961:
Fix for test failure
Fix for test_snapshots.py using nfs2 instead of nfs template
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Displays a VR tab that lists the VRs for the network in the detail views for
isolated networks, shared networks, and VPCs.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9811: fix duplicated nics on VR caused by nic name p<slot_number>p<port_number>
* pr/2011:
CLOUDSTACK-9811: fix duplicated nics on VR caused by nic name p<slot_number>p<port_number>
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9811: fixed an issue if the dev is not in the databag
Defend against the specified dev not being in the databag, as sketched below.
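A minimal sketch of the defensive pattern (hedged: the actual databag layout and helper names in the VR python code differ; apply_config is a hypothetical helper):
````
import logging

def process_dev(dbag, dev):
    # Guard: the requested device may be absent from the databag,
    # e.g. when no config was ever written for that nic.
    if dev not in dbag:
        logging.info("Device %s not found in databag, skipping", dev)
        return
    for entry in dbag[dev]:
        apply_config(dev, entry)  # hypothetical helper
````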
* pr/2003:
changed the order fix to be closer to the original code
CLOUDSTACK-9811: fixed an issue if the dev is not in the databag
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Some VM deployment failures happen when multiple VMs are deployed at a time, mainly due to NetworkModel code that iterates over all the VLANs in the pod. This causes each deployVM thread to hold the global lock on Network longer, and the resulting delay causes more threads to choose the same host and fail because capacity is no longer available on that host.
The following changes reduce the delays during VM deployment that in turn cause some VM deployment failures when multiple VMs are launched at a time:
In the planner, remove the clusters that do not contain a host with a matching service offering tag. This saves iterations over clusters that have no matching tagged host (a sketch of this filter follows the list).
In NetworkModel, do not query the VLANs for the pod within the loop. Also optimize the logic that queries the IP/IPv6 addresses.
In DeploymentPlanningManagerImpl, do not process the affinity group if the plan already has a hostId provided.
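A sketch of the planner-side filter from the first item (illustrative Python with assumed cluster/host attributes; the real change is in the Java deployment planner):
````
def prune_clusters(clusters, offering_host_tag):
    # Keep only clusters with at least one host whose tags match the
    # service offering's host tag; other clusters cannot satisfy the plan.
    if not offering_host_tag:
        return clusters
    return [c for c in clusters
            if any(offering_host_tag in h.tags for h in c.hosts)]
````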
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying Vm. free memory = total memory - (all vm memory)
With memory over-provisioning set to 1, when the management server starts VMs in parallel on one host, the memory allocated on that KVM host can exceed its actual physical memory.
Fixed by checking the free memory on the host before starting a VM, as sketched below.
Added a test case to check memory usage on the host.
Verified VM deployment on a host with enough capacity and on one without.
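A sketch of the capacity check (illustrative; the real check lives in the KVM agent/capacity Java code):
````
def has_enough_memory(host_total_mem, running_vm_mems, requested_mem):
    # free memory = total memory - (all vm memory), as described above
    free_mem = host_total_mem - sum(running_vm_mems)
    return requested_mem <= free_mem

# Example in GiB: a 64 GiB host running six 8 GiB VMs has 16 GiB free,
# so a 16 GiB VM still fits but a 32 GiB VM does not.
assert has_enough_memory(64, [8] * 6, 16)
assert not has_enough_memory(64, [8] * 6, 32)
````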
* pr/847:
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying Vm. free memory = total memory - (all vm memory)
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Updated the hardcoded value with the max data volumes limit from the hypervisor capabilities.
* pr/1953:
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-5806: add presetup to storage types that support over provisioning
Ideally this should be configurable via global settings
* pr/1958:
CLOUDSTACK-5806: add presetup to storage types that support over provisioning Ideally this should be configurable via global settings
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug as configurable
Jira
===
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug as configurable
Description
=========
Currently ACS waits 15 seconds (hardcoded) for a hot-plugged NIC in a VR running on VMware to be detected by the guest OS.
The time taken to detect the hot-plugged NIC depends on the VMware NIC adapter type (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios. This is specific to VRs running on the VMware hypervisor.
Also, if VMware introduces new NIC adapter types in the future that take longer to be detected by the guest OS, it is good to have
the flexibility of configuring the wait timeout as a fallback mechanism.
Fix
===
Introduce a new configuration parameter (via ConfigKey), "vmware.nic.hotplug.wait.timeout": the wait timeout (in milliseconds) for a hot-plugged NIC of a VM to be detected by the guest OS. This replaces the hardcoded timeout and gives admins flexibility in the scenarios listed above; a sketch of the resulting wait loop follows.
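An illustrative sketch of the wait loop (hedged: the actual implementation reads the ConfigKey in the VMware resource Java code; nic_is_up stands in for the guest OS check):
````
import time

def wait_for_nic(nic_is_up, timeout_ms):
    # Poll until the guest OS reports the hot-plugged NIC, or until the
    # configurable vmware.nic.hotplug.wait.timeout elapses (previously a
    # hardcoded 15 seconds).
    deadline = time.time() + timeout_ms / 1000.0
    while time.time() < deadline:
        if nic_is_up():
            return True
        time.sleep(1)
    return False  # the caller treats this as a NIC hotplug failure
````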
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
* pr/1861:
CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug as configurable
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
moved logrotate from cron.daily to cron.hourly for vpcrouter in cloud-early-config
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
[4.9] CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
The router.aggregation.command.each.timeout global configuration is only applied to newly created KVM hosts.
For existing KVM hosts, changing the value is not effective.
We need to propagate the configuration to existing hosts when the cloudstack-agent connects.
* pr/1856:
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This adds support for virtio-scsi on KVM hosts, either
for guests that are associated with a new os_type of 'Other PV Virtio-SCSI (64-bit)',
or when a VM or template is registered with a detail parameter rootDiskController=scsi (a registration sketch follows below).
Update cloudstack add template dialog to allow for selecting rootDiskController with KVM
Update cloudstack kvm virtio-scsi to enable discard=unmap
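A sketch of registering a template with that detail parameter (hedged: assumes Marvin's auto-generated registerTemplate binding and its list-of-dicts convention for map parameters; the URL and ids are placeholders):
````
from marvin.cloudstackAPI import registerTemplate

cmd = registerTemplate.registerTemplateCmd()
cmd.name = 'centos-virtio-scsi'
cmd.displaytext = 'CentOS using a virtio-scsi root disk'
cmd.url = 'http://example.com/centos.qcow2'  # placeholder
cmd.format = 'QCOW2'
cmd.hypervisor = 'KVM'
cmd.ostypeid = ostype_id   # placeholder os type id
cmd.zoneid = zone_id       # placeholder zone id
# Map parameter: ask KVM to attach the root disk via the scsi controller.
cmd.details = [{'rootDiskController': 'scsi'}]
template = apiclient.registerTemplate(cmd)
````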
CLOUDSTACK-9821: Fixed issue in deploying vm in basic zone
Fixed an issue in deploying a VM in a basic zone.
There is an issue with the ipset command on XenServer 6.5: in util.pread2, 'ipset' and '-N' are passed as separate arguments to 'bash -c', which expects the whole command as a single string, so the command fails.
util.pread2(['/bin/bash', '-c', 'ipset', '-N ', tmpname , type])
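A minimal sketch of a corrected invocation (an assumption based on the description above, keeping the bash -c wrapper):
````
# 'bash -c' takes the entire command line as a single string argument:
util.pread2(['/bin/bash', '-c', 'ipset -N %s %s' % (tmpname, type)])
````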
* pr/1991:
CLOUDSTACK-9821: Fixed issue in deploying vm in basic zone
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>