CLOUDSTACK-9134: set device_id as the first device_id not in use instead of nic count
When we restart VPC tiers, the old NICs are removed and new NICs are created.
However, the device_id of a new NIC was set to the NIC count, which may already be in use.
This commit uses the first device_id not in use as the device_id of the new NIC.
The same issue also happens when we add multiple networks to a VM and then remove some of them.
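For illustration, a minimal sketch of the allocation idea (a hypothetical helper, not the actual CloudStack code):

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class DeviceIdExample {
        // Pick the lowest device_id not taken by an existing NIC, instead of
        // blindly using the NIC count (which collides when ids have gaps).
        static int firstUnusedDeviceId(List<Integer> usedDeviceIds) {
            Set<Integer> used = new HashSet<>(usedDeviceIds);
            int candidate = 0;
            while (used.contains(candidate)) {
                candidate++;
            }
            return candidate;
        }
    }

With NICs left at device_ids [0, 2] after a removal, the NIC count (2) collides with an existing NIC, while the sketch returns the free id 1.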
* pr/1209:
CLOUDSTACK-9134: set device_id as the first device_id not in use instead of nic count
Signed-off-by: Daan Hoogland <daan@onecht.net>
CLOUDSTACK-6276 Fixing affinity groups for projects
With some contributions from @resmo and @ustcweizhou.
This closes https://github.com/apache/cloudstack/pull/508
To test manually (needs at least 2 hosts):
Create a project
Create an affinity group in that project
Deploy a VM with that affinity group
Deploy a second VM with that affinity group
They should end up on different hosts
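A rough CloudMonkey sketch of those steps (UUIDs are placeholders, and the projectid parameter on createAffinityGroup is exactly what this fix enables):

    create project name=proj1 displaytext=proj1
    create affinitygroup name=ag1 type="host anti-affinity" projectid=<project-uuid>
    deploy virtualmachine zoneid=<zone-uuid> templateid=<template-uuid> serviceofferingid=<offering-uuid> projectid=<project-uuid> affinitygroupids=<ag-uuid>
    deploy virtualmachine zoneid=<zone-uuid> templateid=<template-uuid> serviceofferingid=<offering-uuid> projectid=<project-uuid> affinitygroupids=<ag-uuid>

The second deployment should land on a different host than the first.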
Ran old and new tests for affinity groups on the simulator
Test create affinity group as admin in project ... === TestName: test_01_admin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as domain admin for projects ... === TestName: test_02_doadmin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as user for projects ... === TestName: test_03_user_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) for projects ... === TestName: test_4_user_create_aff_grp_existing_name_for_project | Status : SUCCESS ===
ok
#Delete Affinity Group by id. ... === TestName: test_01_delete_aff_grp_by_id | Status : SUCCESS ===
ok
#Delete Affinity Group by id should fail for user not in project ... === TestName: test_02_delete_aff_grp_by_id_another_user | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_01_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups with more vms than hosts. ... === TestName: test_02_deploy_vm_anti_affinity_group_fail_on_not_enough_hosts | Status : SUCCESS ===
ok
List affinity group for a vm for projects ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm for projects ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id for projects ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name for projects ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id for projects ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name for projects ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group for projects ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 16 tests in 581.706s
OK
Deploy vm as Admin in Affinity Group belonging to regular user (should fail) ... === TestName: test_01_deploy_vm_another_user | Status : SUCCESS ===
ok
Create Affinity Group as admin for regular user ... === TestName: test_02_create_aff_grp_user | Status : SUCCESS ===
ok
List Affinity Groups as admin for all the users ... === TestName: test_03_list_aff_grp_all_users | Status : SUCCESS ===
ok
List Affinity Groups belonging to admin user ... === TestName: test_04_list_all_admin_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing account id and domain id ... === TestName: test_05_list_all_users_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing group id ... === TestName: test_06_list_all_users_aff_grp_by_id | Status : SUCCESS ===
ok
Delete Affinity Group belonging to regular user ... === TestName: test_07_delete_aff_grp_of_other_user | Status : SUCCESS ===
ok
Test create affinity group as admin ... === TestName: test_01_admin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as domain admin ... === TestName: test_02_doadmin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as user ... === TestName: test_03_user_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) ... === TestName: test_04_user_create_aff_grp_existing_name | Status : SUCCESS ===
ok
Test create affinity group with existing name but within different account ... === TestName: test_05_create_aff_grp_same_name_diff_acc | Status : SUCCESS ===
ok
Test create affinity group of non-existing type ... === TestName: test_06_create_aff_grp_nonexisting_type | Status : SUCCESS ===
ok
Delete Affinity Group by name ... === TestName: test_01_delete_aff_grp_by_name | Status : SUCCESS ===
ok
Delete Affinity Group as admin for an account ... === TestName: test_02_delete_aff_grp_for_acc | Status : SUCCESS ===
ok
Delete Affinity Group which has vms in it ... === TestName: test_03_delete_aff_grp_with_vms | Status : SUCCESS ===
ok
Delete Affinity Group with id which does not belong to this user ... === TestName: test_05_delete_aff_grp_id | Status : SUCCESS ===
ok
Delete Affinity Group by name which does not belong to this user ... === TestName: test_06_delete_aff_grp_name | Status : SUCCESS ===
ok
Delete Affinity Group by id. ... === TestName: test_08_delete_aff_grp_by_id | Status : SUCCESS ===
ok
Root admin should be able to delete affinity group of other users ... === TestName: test_09_delete_aff_grp_root_admin | Status : SUCCESS ===
ok
Deploy VM without affinity group ... === TestName: test_01_deploy_vm_without_aff_grp | Status : SUCCESS ===
ok
Deploy VM by aff grp name ... === TestName: test_02_deploy_vm_by_aff_grp_name | Status : SUCCESS ===
ok
Deploy VM by aff grp id ... === TestName: test_03_deploy_vm_by_aff_grp_id | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_04_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
Deploy vms by affinity group id ... === TestName: test_05_deploy_vm_by_id | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by name ... === TestName: test_06_deploy_vm_aff_grp_of_other_user_by_name | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by id ... === TestName: test_07_deploy_vm_aff_grp_of_other_user_by_id | Status : SUCCESS ===
ok
Deploy vm in multiple affinity groups ... === TestName: test_08_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy multiple vms in multiple affinity groups ... === TestName: test_09_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy VM by aff grp name and id ... === TestName: test_10_deploy_vm_by_aff_grp_name_and_id | Status : SUCCESS ===
ok
List affinity group for a vm ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupnames ... === TestName: test_02_update_aff_grp_by_names | Status : SUCCESS ===
ok
Update the list of affinityGroups for vm which is not associated ... === TestName: test_03_update_aff_grp_for_vm_with_no_aff_grp | Status : SUCCESS ===
ok
Update the list of Affinity Groups to empty list ... SKIP: Skip - Failing - work in progress
Update the list of Affinity Groups on running vm ... === TestName: test_05_update_aff_grp_on_running_vm | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 42 tests in 976.432s
OK (SKIP=1)
* pr/1134:
CLOUDSTACK-6276 Removing unused parameter in integration test for projects
CLOUDSTACK-6276 Removing unused parameter in integration test
CLOUDSTACK-6276 Fixing affinity groups for projects
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8825, CLOUDSTACK-8824: Fixed issues when vm.allocation.algorithm is set to firstfitleastconsumed
Fixed the following issues when vm.allocation.algorithm is set to firstfitleastconsumed:
1. VM deployment failure if there is only ZWPS (zone-wide primary storage) in the setup
2. VM migration is impossible from the UI
To test:
1. Create a setup with ZWPS only
2. Set vm.allocation.algorithm to firstfitleastconsumed in global settings
3. Deploy a virtual machine
Observation: VM deployment fails.
After this fix it succeeds.
Second scenario:
1. Create a CloudStack setup with two hosts (migration needs at least two hosts)
2. Try migrating a VM from the UI
Observation: there is an error response in the logs, and the UI shows nothing available.
After the fix it succeeds.
Regarding BVT, I am not sure whether tests exist for the firstfitleastconsumed VM allocation algorithm.
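For context, the general idea behind firstfitleastconsumed (an illustrative sketch with hypothetical names, not the actual allocator code) is to order the candidates by consumed capacity, least consumed first, and then take the first that fits:

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    class LeastConsumedFirstExample {
        interface Candidate {
            long usedCapacity();
            boolean fits(long requested);
        }

        // Sort candidates by consumed capacity ascending, then first-fit.
        static Optional<Candidate> pick(List<Candidate> candidates, long requested) {
            return candidates.stream()
                    .sorted(Comparator.comparingLong(Candidate::usedCapacity))
                    .filter(c -> c.fits(requested))
                    .findFirst();
        }
    }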
* pr/787:
CLOUDSTACK-8825, CLOUDSTACK-8824: Fixed following issues if vm.allocation.algorithm is set to firstfitleastconsumed: 1. VM deployment failure if there is only ZWPS in setup 2. VM migration is impossible from UI
Signed-off-by: Remi Bergsma <github@remi.nl>
The labeling was broken: only labels assigned at zone creation
were used, and changing labels did not work. Tested by changing
a label and checking the result.
As a bonus, fixed the consistency of the KVM traffic label in Dutch
compared to the other Dutch traffic labels, and copied in the translated
OVM3 label for the other languages.
Two things have been changed:
* We look at power_state_update_time instead of update_time. It did not make sense at all to look at update_time (see the sketch after this list).
* Due to a DB update optimisation, powerState is only updated while the consecutive-same-state count is below MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT. That is why we cannot rely on this information unless we make sure it is up to date.
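A minimal sketch of the first point (hypothetical helper, not the actual CloudStack code): staleness is decided from the time of the last power state report, not from the row's generic update_time.

    import java.util.Date;

    class PowerStateExample {
        // Trust a cached power state only if it was reported recently;
        // power_state_update_time tracks the last report, while update_time
        // can change for unrelated reasons.
        static boolean isPowerStateFresh(Date powerStateUpdateTime, long graceMillis) {
            return powerStateUpdateTime != null
                    && System.currentTimeMillis() - powerStateUpdateTime.getTime() <= graceMillis;
        }
    }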
Can you review it? Things that need to be discussed:
(1) This supports KVM/QCOW2 only. Does anyone want to implement it for other hypervisors/formats?
(2) The original data volume (on primary storage) will be removed.
(3) The script uses the default timeout in libvirtComputingResource. Do we need to add one to the global configuration (like copy.volume.wait, backup.snapshot.wait or create.volume.from.snapshot.wait)?
(4) In scripts/storage/qcow2/managesnapshot.sh, I use "qemu-img convert -f qcow2 -O qcow2" to copy the snapshot from secondary to primary storage (hence there is no base image file), instead of "cp -f", because convert was faster than cp in my testing.
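For illustration, the copy step in point (4) expands to something like this (paths are placeholders):

    qemu-img convert -f qcow2 -O qcow2 /mnt/secondary/snapshots/<snapshot>.qcow2 /mnt/primary/<volume-uuid>

which writes a flat qcow2 image with no backing file, unlike a plain "cp -f" of the snapshot file.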
* pr/732:
CLOUDSTACK-5863: revert volume snapshot for KVM/QCOW2
Signed-off-by: Wei Zhou <w.zhou@tech.leaseweb.com>
This reverts commit cd7218e241, reversing
changes made to f5a7395cc2.
Reason for Revert:
The noredist build failed with the error below:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project cloud-plugin-hypervisor-vmware: Compilation failure
[ERROR] /home/jenkins/acs/workspace/build-master-noredist/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[484,12] error: non-static variable logger cannot be referenced from a static context
[ERROR] -> [Help 1]
Even the normal build is broken, as reported by @koushik-das on the dev list:
http://markmail.org/message/nngimssuzkj5gpbz
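For illustration, a minimal reproduction of the compiler error quoted above (a hypothetical class, not the actual VMwareGuru code):

    import org.apache.log4j.Logger;

    class StaticContextExample {
        private final Logger logger = Logger.getLogger(StaticContextExample.class);

        static void logFromStatic() {
            // The next line does not compile: a static method has no instance,
            // so it cannot reference the instance field 'logger'.
            // logger.info("hello");  // error: non-static variable logger cannot be referenced from a static context
        }
    }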
Fix for the NicVO.java regression.
Renamed set*() methods to correct naming.
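For illustration (hypothetical field and method names, not the actual diff): a JavaBean setter has to match its field and getter naming, or callers and reflective code that expect the conventional name break.

    class NicLikeExample {
        private String ip4Address;

        public String getIp4Address() {
            return ip4Address;
        }

        // The regression had renamed setters away from this convention;
        // the fix restores the expected set* names.
        public void setIp4Address(String ip4Address) {
            this.ip4Address = ip4Address;
        }
    }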
* pr/726:
Fix for the NicVO.java regression.
Signed-off-by: Remi Bergsma <github@remi.nl>
Changed method names according to the Nic.java refactor.
Fixed NicVO.java due to a regression from the Nic.java refactor.
Fixed VMwareGuru.java after the Nic.java refactor.
See issue CLOUDSTACK-8736 for the ongoing effort to clean up the network code.
CLOUDSTACK-8656: do away with more silently ignored exceptions.
A lot of messages added, plus some restructuring of test exception assertions and try-with-resource blocks.
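The recurring pattern in these commits (an illustrative sketch, not any specific commit's diff): try-with-resources replaces hand-rolled close() calls with empty catch blocks, and previously ignored exceptions are at least logged.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import org.apache.log4j.Logger;

    class QuietExceptionExample {
        private static final Logger s_logger = Logger.getLogger(QuietExceptionExample.class);

        // The statement is closed automatically even on failure, so no empty
        // catch-on-close block is needed; the failure itself is logged,
        // not swallowed.
        static void executeUpdate(Connection conn, String sql) {
            try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
                pstmt.executeUpdate();
            } catch (SQLException e) {
                s_logger.warn("Unable to execute update: " + e.getMessage(), e);
            }
        }
    }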
* pr/654: (29 commits)
CLOUDSTACK-8656: more logging instead of sysout
CLOUDSTACK-8656: use catch block for validation
CLOUDSTACK-8656: class in json specified not found
CLOUDSTACK-8656: removed unused classes
CLOUDSTACK-8656: restructure of tests
CLOUDSTACK-8656: reorganise synchronized block
CLOUDSTACK-8656: restructure tests to ensure exception throwing
CLOUDSTACK-8656: validate the throwing of ServerApiException
CLOUDSTACK-8656: logging ignored exceptions
CLOUDSTACK-8656: try-w-r removes need for empty catch block
CLOUDSTACK-8656: try-w-r instead of clunky close-except
CLOUDSTACK-8656: deal with empty SQLException catch block by try-w-r
CLOUDSTACK-8656: unnecessary close construct removed
CLOUDSTACK-8656: message about timed buffer logging
CLOUDSTACK-8656: message about invalid number from store
CLOUDSTACK-8656: move cli test tool to separate file
CLOUDSTACK-8656: exception is the rule for some tests
CLOUDSTACK-8656: network related exception logging
CLOUDSTACK-8656: reporting ignored exceptions in server
CLOUDSTACK-8656: log in case we are on a platform not supporting UTF8
...
Signed-off-by: Remi Bergsma <github@remi.nl>
* pr/547:
CLOUDSTACK-8601. VMFS storage added as local storage can be re-added as shared storage. Fail addition of a VMFS shared storage pool in case it has already been added as local storage in CS.
Signed-off-by: Mike Tutkowski <mike.tutkowski@solidfire.com>
* pr/649:
CLOUDSTACK-8656: checkstyle: no longer used import removed
CLOUDSTACK-8656: messages on SQL exception in DbUtils!
CLOUDSTACK-8656: replace empty catch block on close by try-with-resource
CLOUDSTACK-8656: 30x legacy upgrade code exception messages
CLOUDSTACK-8656: removed redundant implements
CLOUDSTACK-8656: silent close failure of clustering socket log as info
CLOUDSTACK-8656: try with resource to eliminate empty catch clauses
CLOUDSTACK-8656: log messages on exception in legacy sql upgrade code
CLOUDSTACK-8656: removed unused input stream; there was code to close a stream that was never created
CLOUDSTACK-8656: info on error closing peering channels
CLOUDSTACK-8656: messages on errors closing streams for local templates
CLOUDSTACK-8656: handle template properties loading
Signed-off-by: Daan Hoogland <daan@onecht.net>
* pr/603:
coverity: try-with-resource and restructure in upgrade datacenter
extra try-w-r
coverity issues in old upgrade code
Signed-off-by: Daan Hoogland <daan@onecht.net>
* pr/604:
coverity 1116563: resource count leak for accounts
coverity 1116562: resource count resource leak
coverity 1116612: update network cidrs firewall rules and acls
coverity 1116610: upgrade cluster overprovisioning details
coverity 1212194: reuse of prepared statements in try-block and of course have them autoclosed
coverity 1225199: vmware dc upgrade
coverity 1288575: replace all close with try-with-resource; not strictly necessary in all but one case, but done consistently.
Signed-off-by: Daan Hoogland <daan@onecht.net>