status 9693: resolved fixed
2 more fixes with this commit:
* bug 9692 is fixed - we don't increment the resource count when a Direct IP address is allocated (see the sketch after this list).
* as part of the 2.2.2->2.2.4 upgrade, resource_count for public_ip records is recalculated to count only Virtual IP addresses
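A minimal sketch of the corrected accounting rule from bug 9692, assuming a hypothetical AddressType enum and ResourceCounter interface (illustrative stand-ins, not the actual CloudStack classes):

    // Sketch only: the types below are hypothetical, not CloudStack code.
    public class PublicIpAccounting {

        enum AddressType { VIRTUAL, DIRECT }

        interface ResourceCounter {
            void incrementPublicIpCount(long accountId);
        }

        private final ResourceCounter counter;

        PublicIpAccounting(ResourceCounter counter) {
            this.counter = counter;
        }

        // Only Virtual IP allocations count against the account's public_ip resource limit;
        // Direct IP allocations no longer bump the count.
        void onIpAllocated(long accountId, AddressType type) {
            if (type == AddressType.VIRTUAL) {
                counter.incrementPublicIpCount(accountId);
            }
        }
    }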
status 9688: resolved fixed
To verify that the rule was removed:
* make sure that there is no record with the LB id in the load_balancer table
* verify that an lb.delete event was generated for this rule
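The two checks can also be scripted against the database. Below is a hedged JDBC sketch; it assumes a local MySQL "cloud" database, that lb.delete events land in an event table with type and description columns, and that the rule's id is passed as the first argument - the actual schema and event type string may differ:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch only: table/column names and the connection URL are assumptions.
    public class VerifyLbRuleRemoval {
        public static void main(String[] args) throws Exception {
            long lbId = Long.parseLong(args[0]);
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/cloud", "cloud", "password")) {

                // 1) No record with this LB id should remain in load_balancer.
                try (PreparedStatement ps = c.prepareStatement(
                        "SELECT COUNT(*) FROM load_balancer WHERE id = ?")) {
                    ps.setLong(1, lbId);
                    ResultSet rs = ps.executeQuery();
                    rs.next();
                    System.out.println("load_balancer rows for id " + lbId + ": " + rs.getLong(1));
                }

                // 2) An lb.delete event should have been generated for the rule.
                try (PreparedStatement ps = c.prepareStatement(
                        "SELECT COUNT(*) FROM event WHERE type = 'lb.delete' AND description LIKE ?")) {
                    ps.setString(1, "%" + lbId + "%");
                    ResultSet rs = ps.executeQuery();
                    rs.next();
                    System.out.println("lb.delete events mentioning id " + lbId + ": " + rs.getLong(1));
                }
            }
        }
    }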
Changes:
- When a host connects, we check whether it has CPU and RAM entries in the capacity table. If an entry is found, its values are updated if possible; if it is not found, a new one is inserted.
- The SearchCriteria used to check whether a CPU entry is present was wrong: we were passing criteria that did not specify the capacityType. So for hostId >= 200, the search would also return capacity entries of storage pools, since pool IDs start from 200 onwards.
- Since an entry was found (although the wrong one), we tried to update it. But the update does not happen because the capacity ranges don't match, and a new CPU insert also does not happen because an entry was found.
- As a result, CPU entries are never inserted in the table for hostIds >= 200.
- As a fix, the search criteria was corrected to include the capacityType.
- During VM deployment, when the entry is not found, we got an NPE. Added a null check to avoid that.
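A minimal sketch of the corrected lookup, using hypothetical CapacityEntry/CapacityStore types in place of the real DAO; the point is that the query now keys on both the host id and the capacityType, so a host id that overlaps a pool id can no longer match a storage-pool row:

    import java.util.List;

    // Sketch only: CapacityEntry and CapacityStore are hypothetical stand-ins for the real DAO.
    public class HostCapacitySync {

        static final short CAPACITY_TYPE_RAM = 0;  // illustrative constants
        static final short CAPACITY_TYPE_CPU = 1;

        record CapacityEntry(long hostOrPoolId, short capacityType, long used, long total) {}

        interface CapacityStore {
            // Fixed criteria: keyed on BOTH the id and the capacity type.
            List<CapacityEntry> findByHostAndType(long hostId, short capacityType);
            void insert(CapacityEntry entry);
            void update(CapacityEntry entry);
        }

        private final CapacityStore store;

        HostCapacitySync(CapacityStore store) {
            this.store = store;
        }

        // Called when a host connects: update its CPU row if present, otherwise insert one.
        // With the old criteria (no capacityType), a hostId >= 200 could match a storage-pool
        // row instead, so neither the update nor the insert ever happened.
        void syncCpuCapacity(long hostId, long usedCpu, long totalCpu) {
            List<CapacityEntry> rows = store.findByHostAndType(hostId, CAPACITY_TYPE_CPU);
            CapacityEntry entry = new CapacityEntry(hostId, CAPACITY_TYPE_CPU, usedCpu, totalCpu);
            if (rows.isEmpty()) {
                store.insert(entry);
            } else {
                store.update(entry);
            }
        }

        // Null-safe lookup used during VM deployment (the NPE fix).
        Long totalCpuOrNull(long hostId) {
            List<CapacityEntry> rows = store.findByHostAndType(hostId, CAPACITY_TYPE_CPU);
            if (rows == null || rows.isEmpty()) {
                return null; // previously this path dereferenced the missing entry and threw an NPE
            }
            return rows.get(0).total();
        }
    }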
Changes:
- When the ROOT volume of a VM is found to be READY, the planner was changed to reuse the existing pool for every volume (ROOT or DATA) that is READY and whose pool is neither in maintenance nor in the avoid set.
- If the ROOT volume is not READY, we don't care about the DATA disk; both get re-allocated.
- When a pool is reused for a READY volume, the planner does not call the storage pool allocators, and such volumes are not assigned a pool in the deployment destination returned by the planner. Accordingly, the StorageManager::prepare method won't recreate these volumes, since they are not mentioned in the destination.
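A hedged sketch of the reuse decision, with simplified Volume/StoragePool types and an avoid set standing in for the planner's real structures:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Sketch only: Volume, StoragePool and the avoid set are simplified stand-ins.
    public class PoolReusePlanner {

        enum VolumeState { READY, ALLOCATED }
        enum VolumeType { ROOT, DATA }

        record StoragePool(long id, boolean inMaintenance) {}
        record Volume(long id, VolumeType type, VolumeState state, StoragePool pool) {}

        // Returns the volume -> pool assignments that go into the deployment destination.
        // Volumes that reuse their existing pool are deliberately left out of the map,
        // so StorageManager::prepare will not recreate them.
        Map<Volume, StoragePool> plan(Volume root, List<Volume> dataVolumes, Set<Long> avoidPoolIds) {
            Map<Volume, StoragePool> toAllocate = new HashMap<>();

            if (!isReusable(root, avoidPoolIds)) {
                // ROOT is not READY (or its pool is unusable): re-allocate everything.
                toAllocate.put(root, callStoragePoolAllocators(root));
                for (Volume v : dataVolumes) {
                    toAllocate.put(v, callStoragePoolAllocators(v));
                }
                return toAllocate;
            }

            // ROOT is READY on a usable pool: reuse pools for every READY volume and
            // run the allocators only for volumes that cannot be reused.
            for (Volume v : dataVolumes) {
                if (!isReusable(v, avoidPoolIds)) {
                    toAllocate.put(v, callStoragePoolAllocators(v));
                }
            }
            return toAllocate;
        }

        private boolean isReusable(Volume v, Set<Long> avoidPoolIds) {
            return v.state() == VolumeState.READY
                    && v.pool() != null
                    && !v.pool().inMaintenance()
                    && !avoidPoolIds.contains(v.pool().id());
        }

        private StoragePool callStoragePoolAllocators(Volume v) {
            return new StoragePool(-1, false); // placeholder for the real allocator call
        }
    }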
Changes:
- The reason was that the old volume's templateId was being updated before volume creation was attempted, so on the retry we didn't find a difference between the volume's templateId and the VM's templateId and did not enter the recreation logic.
- The fix is to set the new volume's templateId to the VM's templateId while creating the new volume. The old volume's templateId stays the same, and the old volume is marked as 'Destroy' when the new volume is created.
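A minimal sketch of the recreation path after the fix, with a hypothetical VolumeRecord type; only the new volume receives the VM's templateId, while the old volume keeps its templateId and is marked 'Destroy':

    // Sketch only: VolumeRecord and its states are simplified stand-ins.
    public class VolumeRecreation {

        enum State { READY, DESTROY }

        static class VolumeRecord {
            final long id;
            final long templateId;
            State state;

            VolumeRecord(long id, long templateId, State state) {
                this.id = id;
                this.templateId = templateId;
                this.state = state;
            }
        }

        // Recreate a volume from the VM's (possibly different) template.
        // Buggy behavior: the OLD volume's templateId was overwritten with vmTemplateId before
        // creation was attempted, so a failed attempt left no templateId mismatch to retry on.
        VolumeRecord recreate(VolumeRecord oldVolume, long vmTemplateId, long nextVolumeId) {
            // Fixed behavior: create the new volume with the VM's templateId...
            VolumeRecord newVolume = new VolumeRecord(nextVolumeId, vmTemplateId, State.READY);

            // ...and only then retire the old one, leaving its templateId untouched.
            oldVolume.state = State.DESTROY;
            return newVolume;
        }
    }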