CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.
Updated the template_spool_ref table with the correct template (VMware OVA file) size.
* pr/1880:
CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9783: Improve metrics view performance
This improves the metrics view feature by improving the rendering performance
of the metrics view tables: the frontend logic is re-implemented at the backend
and the data is served via APIs. In large environments, the older implementation
would make several API calls, increasing both network and database load.
APIs introduced to improve performance by re-implementing the frontend logic at the backend (see the sketch after this list):
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
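For illustration, a minimal sketch of calling one of these APIs over the standard CloudStack HTTP interface; the endpoint and key pair are hypothetical placeholders. A single listHostsMetrics round trip replaces the many per-host calls the old frontend made:
````
import base64, hashlib, hmac, urllib.parse, urllib.request

ENDPOINT = "http://mgmt-server:8080/client/api"  # hypothetical endpoint
API_KEY, SECRET_KEY = "apiKey", "secretKey"      # hypothetical credentials

def call(command, **params):
    # Standard CloudStack request signing: sort the parameters, lowercase
    # the query string, HMAC-SHA1 it with the secret key, base64-encode.
    params.update(command=command, apikey=API_KEY, response="json")
    query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
                     for k, v in sorted(params.items()))
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest))
    return urllib.request.urlopen("%s?%s&signature=%s"
                                  % (ENDPOINT, query, signature)).read()

# One round trip returns metrics for every host.
print(call("listHostsMetrics"))
````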
Marvin test results:
=== TestName: test_list_clusters_metrics | Status : SUCCESS ===
=== TestName: test_list_hosts_metrics | Status : SUCCESS ===
=== TestName: test_list_infrastructure_metrics | Status : SUCCESS ===
=== TestName: test_list_pstorage_metrics | Status : SUCCESS ===
=== TestName: test_list_vms_metrics | Status : SUCCESS ===
=== TestName: test_list_volumes_metrics | Status : SUCCESS ===
=== TestName: test_list_zones_metrics | Status : SUCCESS ===
* pr/1944:
CLOUDSTACK-9783: Improve metrics view performance
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9854: Fix test_primary_storage test failure due to live migration
Fix for the test_primary_storage integration tests on the simulator.
When finding storage pool migration options for a volume of a running VM, the API returns None because the simulator hypervisor does not support live migration.
````
2017-03-28 06:07:55,958 - DEBUG - ========Sending GET Cmd : findStoragePoolsForMigration=======
2017-03-28 06:07:55,977 - DEBUG - Response : None
2017-03-28 06:07:55,983 - CRITICAL - EXCEPTION: test_03_migration_options_storage_tags: ['Traceback (most recent call last):\n', ' File "/opt/python/2.7.12/lib/python2.7/unittest/case.py", line 329, in run\n testMethod()\n', ' File "/home/travis/.local/lib/python2.7/site-packages/marvin/lib/decoratorGenerators.py", line 30, in test_wrapper\n return test(self, *args, **kwargs)\n', ' File "/home/travis/build/apache/cloudstack/test/integration/smoke/test_primary_storage.py", line 547, in test_03_migration_options_storage_tags\n pools_suitable = filter(lambda p : p.suitableformigration, pools_response)\n', "TypeError: 'NoneType' object is not iterable\n"]
````
So we simply stop the VM before sending the findStoragePoolsForMigration command, as in the sketch below.
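A minimal sketch of the adjusted test flow in Marvin style, assuming an apiclient, vm, and volume already created by the test setup:
````
from marvin.cloudstackAPI import findStoragePoolsForMigration

# Stop the VM first: on the simulator, querying migration options for a
# volume of a running VM returns None (no live-migration support).
vm.stop(apiclient)

cmd = findStoragePoolsForMigration.findStoragePoolsForMigrationCmd()
cmd.id = volume.id  # volume of the now-stopped VM
pools_response = apiclient.findStoragePoolsForMigration(cmd)
pools_suitable = [p for p in pools_response if p.suitableformigration]
````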
* pr/2021:
CLOUDSTACK-9854: Fix test_primary_storage test failure due to live migration
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Fix for test_snapshots.py using nfs2 instead of nfs template
Fix for the Marvin test failure introduced in #1847.
Cc: @borisstoyanov @rhtyd @karuturi
* pr/1961:
Fix for test failure
Fix for test_snapshots.py using nfs2 instead of nfs template
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9811: fix duplicated nics on VR caused by nic name p<slot_number>p<port_number>
* pr/2011:
CLOUDSTACK-9811: fix duplicated nics on VR caused by nic name p<slot_number>p<port_number>
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9811: fixed an issue if the dev is not in the databag
Defend against the specified dev not being in the databag.
* pr/2003:
changed the order fix to be closer to the original code
CLOUDSTACK-9811: fixed an issue if the dev is not in the databag
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying VM. free memory = total memory - (all VM memory)
With memory over-provisioning set to 1, when the management server starts VMs in parallel on one host, the memory allocated on that host can exceed the actual physical memory of the KVM host.
Fixed by checking the free memory on the host before starting a VM (a sketch of the check follows).
Added a test case to check memory usage on the host.
Verified VM deployment on a host with enough capacity and also on one without.
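A minimal sketch of the free-memory check described above, with hypothetical names for the host and VM memory figures:
````
def has_enough_free_memory(host_total_mem, vm_allocations, requested_mem):
    # free memory = total memory - (all vm memory), per the formula above
    free_mem = host_total_mem - sum(vm_allocations)
    return requested_mem <= free_mem

# e.g. a 16 GiB host already running two 6 GiB VMs cannot take a third
assert not has_enough_free_memory(16 << 30, [6 << 30, 6 << 30], 6 << 30)
````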
* pr/847:
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying Vm. free memory = total memory - (all vm memory)
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Updated the hardcoded value with the max data volumes limit from the hypervisor capabilities.
* pr/1953:
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-5806: add presetup to storage types that support over provisioning
Ideally this should be configurable via global settings
* pr/1958:
CLOUDSTACK-5806: add presetup to storage types that support over provisioning Ideally this should be configurable via global settings
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug as configurable
Jira
===
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug as configurable
Description
=========
Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in a VR running on VMware to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of VMware NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios. This is specific to VRs running on the VMware hypervisor.
Also, if VMware introduces new NIC adapter types in the future that take longer to be detected by the guest OS, it is good to have the flexibility of configuring the wait timeout as a fallback mechanism.
Fix
===
Introduce a new configuration parameter (via ConfigKey), "vmware.nic.hotplug.wait.timeout", described as "Wait timeout (milli seconds) for hot plugged NIC of VM to be detected by guest OS.", as a fallback instead of the hard-coded timeout, to give admins flexibility in the scenarios listed above. A sketch of the wait loop it governs follows.
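A minimal sketch of such a polling loop, with a hypothetical nic_detected probe; the 15000 ms default matches the previously hard-coded 15 seconds:
````
import time

def wait_for_nic(nic_detected, timeout_ms=15000, poll_ms=1000):
    # Poll until the guest OS reports the hot-plugged NIC, or give up once
    # the (now configurable) timeout expires.
    deadline = time.time() + timeout_ms / 1000.0
    while time.time() < deadline:
        if nic_detected():
            return True
        time.sleep(poll_ms / 1000.0)
    return False  # treated as a NIC hotplug failure by the caller
````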
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
* pr/1861:
CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug as configurable
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
moved logrotate from cron.daily to cron.hourly for vpcrouter in cloud-early-config
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
[4.9] CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
The router.aggregation.command.each.timeout global configuration is only applied on newly created KVM hosts.
For existing KVM hosts, changing the value is not effective.
We need to propagate the configuration to existing hosts when the cloudstack-agent connects.
* pr/1856:
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9821: Fixed issue in deploying vm in basic zone
Fixed an issue deploying VMs in a basic zone with XenServer 6.5.
The ipset command fails because, in util.pread2, 'ipset' and '-N' are passed to '/bin/bash -c' as separate list elements; bash treats only the first argument after -c as the command string, so it runs a bare 'ipset' and the command fails:
util.pread2(['/bin/bash', '-c', 'ipset', '-N ', tmpname , type])
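For illustration, a small sketch of why that invocation fails and one way to correct it; subprocess stands in for util.pread2, the names are placeholders, and the exact upstream fix may differ:
````
import subprocess

tmpname, settype = "tmp_set", "iptreemap"  # placeholder values

# Broken: with 'bash -c', only the first argument after -c is the script;
# '-N ', tmpname and settype become $0/$1/$2, so bash runs a bare 'ipset'.
# subprocess.check_call(['/bin/bash', '-c', 'ipset', '-N ', tmpname, settype])

# Fixed: hand bash one complete command string...
subprocess.check_call(['/bin/bash', '-c', 'ipset -N %s %s' % (tmpname, settype)])
# ...or skip the shell and pass the argument vector directly.
subprocess.check_call(['ipset', '-N', tmpname, settype])
````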
* pr/1991:
CLOUDSTACK-9821: Fixed issue in deploying vm in basic zone
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug as configurable
Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in a VR to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios.
Also, if VMware introduces new NIC adapter types in the future that take longer to be detected by the guest OS, it is good to have the flexibility of configuring the wait timeout as a fallback mechanism.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
CLOUDSTACK-9784 : GPU detail not displayed in GPU tab of management server UI.
ISSUE
==================
When the GPU tab of a host is selected in the management server UI, no GPU details are displayed.
RESOLUTION
==================
In the JavaScript file "system.js", while fetching the GPU details, the sort functionality in the dataprovider returns an undefined value, which throws an exception. The fix handles the undefined output gracefully to avoid the exception.
**Screenshot before applying fix :**

**Screenshot after applying fix :**

* pr/1942:
CLOUDSTACK-9784 : GPU detail not displayed in GPU tab of management server UI.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9601: Upgrade: change logic for update path for files
For going from version A to version D, it used to run the SQL files in
this order: A -> B -> C -> D -> A-cleanup -> B-cleanup -> C-cleanup ->
D-cleanup. If you had upgraded each version separately you would have
run A -> A-cleanup -> B -> B-cleanup -> C -> C-cleanup -> D ->
D-cleanup.
This changes the logic to follow the same path when jumping over
versions, as sketched below.
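A tiny sketch of the two orderings, with placeholder version names:
````
versions = ["A", "B", "C", "D"]

# Old path when jumping from A to D: all upgrades, then all cleanups.
old_path = versions + ["%s-cleanup" % v for v in versions]

# New path: run each version's cleanup right after it, matching what
# separate step-by-step upgrades would have executed.
new_path = [step for v in versions for step in (v, "%s-cleanup" % v)]

print(old_path)  # ['A', 'B', 'C', 'D', 'A-cleanup', 'B-cleanup', ...]
print(new_path)  # ['A', 'A-cleanup', 'B', 'B-cleanup', ...]
````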
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
* pr/1768:
Upgrade: change logic for update path for files
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9783: Improve metrics view performance
This improves the metrics view feature by improving the rendering performance
of the metrics view tables: the frontend logic is re-implemented at the backend
and the data is served via APIs. In large environments, the older implementation
would make several API calls, increasing both network and database load.
List of APIs introduced for improving the performance:
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8841: Storage XenMotion from XS 6.2 to XS 6.5 fails.
Removed the host version check in the API, because:
Case 1 (lower to higher version):
Migration from a lower version to a higher version is valid.
Case 2 (higher to lower version):
In this case the system (host) will not allow it.
So there is no need to check the version in the API. Additionally, the CloudStack user interface (UI) does not allow migration between different hypervisor versions, but sometimes a user wants to migrate from a lower to a higher version; now this can be done via the API.
ACS Link ==>
https://issues.apache.org/jira/browse/CLOUDSTACK-8841
* pr/815:
CLOUDSTACK-8841: Storage XenMotion from XS 6.2 to XS 6.5 fails.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Security group ingress/egress issues with xenserver 6.2
There is an issue with the ipset set type nethash on XenServer 6.2. Fixed it by using nethash with ipset version 6, which is what XenServer 6.5 ships, and iptreemap with ipset version 4.x (see the sketch after the test notes).
1. Tested configuring egress/ingress rules.
2. Tested the traffic for the configured rules from the VM.
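For illustration, a hypothetical helper capturing the version-to-set-type mapping described above:
````
def ipset_set_type(ipset_major_version):
    # ipset v6 (XenServer 6.5) supports nethash; the older v4.x
    # (XenServer 6.2) needs iptreemap instead.
    return "nethash" if ipset_major_version >= 6 else "iptreemap"

assert ipset_set_type(6) == "nethash"
assert ipset_set_type(4) == "iptreemap"
````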
* pr/843:
CLOUDSTACK-8871: fixed issue with the xenserver 6.2 ipset nethash
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
The NPE is seen when the VM-destroy and storage-cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes are handled as part of VM destroy (see the sketch below).
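A minimal sketch of the filtering idea, assuming hypothetical volume objects with a volume_type attribute:
````
def volumes_for_storage_cleanup(expunged_volumes):
    # ROOT volumes are removed by the VM-destroy path; cleaning them here
    # as well let two threads race on the same volume, producing the NPE.
    return [v for v in expunged_volumes if v.volume_type != "ROOT"]
````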
* pr/1825:
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Fix public IPs not being removed from the VR when deprovisioned
This PR replaces #1706. It does not remove the IP from the database, but it does deprovision the IP correctly from the VR when the public IP is removed.
* pr/1907:
Fix public IPs not being removed from the VR when deprovisioned
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9757: Fixed issue in traffic from additional public subnet
Acquire an IP from the additional public subnet and configure NAT on that IP.
After this, pick any VM from that network and access the additional public subnet from that VM. Traffic is supposed to go via the additional public subnet interface in the VR.
* pr/1922:
CLOUDSTACK-9757: Fixed issue in traffic from additional public subnet
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
CLOUDSTACK-9788: Fix exception listNetworks with pagesize=0
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Fix HVM VM restart bug in XenServer
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
* rotate both daily and by size by using maxsize instead of size
* decrease the max size to 10M for rsyslog files
* remove delaycompress for rsyslog files
* increase rotate to 10 for cloud.log (illustrated below)
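For illustration, a hypothetical logrotate excerpt showing the maxsize approach; unlike size, maxsize still rotates on the daily schedule and additionally rotates early when the threshold is crossed:
````
/var/log/cloud.log {
    # rotate on the daily schedule, and earlier if the file exceeds 10M
    daily
    maxsize 10M
    # keep ten rotated copies
    rotate 10
    compress
    missingok
}
````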
* pr/1915:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>