Changes:
- When the ROOT volume of a VM is found to be READY, the planner now reuses the existing pool for every volume (ROOT or DATA) that is READY and whose pool is neither in maintenance nor in the avoid set.
- If the ROOT volume is not READY, the DATA disk's state is ignored; both volumes get re-allocated.
- When a pool is reused for a READY volume, the planner does not call the storage pool allocators, and such volumes are not assigned a pool in the deployment destination it returns. Accordingly, StorageManager::prepare won't recreate these volumes, since they are not mentioned in the destination (see the sketch below).
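To make the decision concrete, here is a minimal, self-contained sketch of the reuse check in Java. All type and field names (Volume, Pool, VolumeState, inAvoidSet, ...) are illustrative stand-ins, not the actual CloudStack planner classes:

    import java.util.ArrayList;
    import java.util.List;

    // Stand-in types for illustration; the real VolumeVO/StoragePoolVO
    // classes carry much more state.
    enum VolumeState { READY, ALLOCATED }

    class Pool { boolean inMaintenance; boolean inAvoidSet; }

    class Volume { boolean isRoot; VolumeState state; Pool pool; }

    class PlannerSketch {
        // Returns the volumes that must still go through the pool allocators.
        static List<Volume> volumesToAllocate(List<Volume> volumes) {
            boolean rootReady = volumes.stream()
                    .anyMatch(v -> v.isRoot && v.state == VolumeState.READY);

            List<Volume> toAllocate = new ArrayList<>();
            for (Volume v : volumes) {
                boolean reusable = rootReady
                        && v.state == VolumeState.READY
                        && v.pool != null
                        && !v.pool.inMaintenance
                        && !v.pool.inAvoidSet;
                if (!reusable) {
                    // If ROOT is not READY, rootReady is false and every
                    // volume lands here, i.e. all get re-allocated.
                    toAllocate.add(v);
                }
                // Reused volumes are skipped: they never reach the allocators
                // and never appear in the deployment destination, so
                // StorageManager::prepare does not recreate them.
            }
            return toAllocate;
        }
    }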
Changes:
- The cause was that the old volume's templateId was being updated before volume creation was attempted, so on the retry no difference was found between the volume's templateId and the VM's templateId, and the recreation logic was never entered.
- The fix is to set the new volume's templateId to the VM's templateId while creating the new volume. The old volume's templateId stays the same, and the old volume is marked 'Destroy' when the new volume is created (see the sketch below).
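A minimal sketch of the corrected flow, assuming hypothetical field names (templateId, state) rather than the actual VolumeVO API:

    // Illustrative only; the real code goes through DAOs and state machines.
    class Vol {
        long templateId;
        String state = "Ready";
    }

    class RecreateSketch {
        static Vol recreate(Vol oldVol, long vmTemplateId) {
            // Old bug: oldVol.templateId was overwritten with vmTemplateId
            // BEFORE creation was attempted, so after a failure the retry saw
            // templateId == vmTemplateId and skipped the recreation logic.

            Vol newVol = new Vol();
            newVol.templateId = vmTemplateId; // fix: stamp the NEW volume only
            // ... attempt to create newVol on the storage pool here ...

            oldVol.state = "Destroy";         // old volume keeps its templateId
            return newVol;
        }
    }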
Partial Changes:
- Changed host allocators/planner to use cpu.overprovisioning.factor
- Removed the following behavior: when adding a new host, total_cpu in op_host_capacity was set to actual_cpu * cpu.overprovisioning.factor. Now it is set to actual_cpu.
- The listCapacities response now calculates total CPU as actual * cpu.overprovisioning.factor. (This does not change the reported value: listCapacities previously pulled total CPU from the op_host_capacity table, where the factor was already applied. Since the table now stores the raw value, the factor must be applied on top of the DB entry.)
- HostResponse has a new field, 'cpuWithOverprovisioning', that returns the CPU after applying cpu.overprovisioning.factor (see the sketch after this list).
- The DB upgrade from 222 to 224 now updates total_cpu in op_host_capacity to actual_cpu for each Routing host.
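The arithmetic is small but easy to get backwards, so here is a hedged sketch of where the factor now applies; class names and numbers are made up for illustration, not the actual CloudStack capacity code:

    class CpuCapacitySketch {
        // What is now written to op_host_capacity.total_cpu on host add
        // (previously this was actualCpuMhz * factor).
        static long storedTotalCpu(long actualCpuMhz) {
            return actualCpuMhz;
        }

        // What listCapacities and the new HostResponse field
        // 'cpuWithOverprovisioning' now report.
        static long cpuWithOverprovisioning(long actualCpuMhz, double factor) {
            return (long) (actualCpuMhz * factor);
        }

        public static void main(String[] args) {
            long actual = 2 * 2000;  // e.g. 2 cores at 2000 MHz
            double factor = 2.0;     // cpu.overprovisioning.factor
            System.out.println(storedTotalCpu(actual));                   // 4000
            System.out.println(cpuWithOverprovisioning(actual, factor)); // 8000
        }
    }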
Conflicts:
server/src/com/cloud/agent/manager/AgentManagerImpl.java
server/src/com/cloud/api/ApiDBUtils.java
server/src/com/cloud/api/ApiResponseHelper.java
server/src/com/cloud/deploy/BareMetalPlanner.java
server/src/com/cloud/server/ManagementServerImpl.java
status 9453: resolved fixed
1) Problem #1: in 2.1.x there was a bug where PF rules for expunged VMs were not deleted. These kinds of rules are ignored during the DB upgrade.
2) Problem #2: spaces were not trimmed from PF/LB ports in 2.1.x, and the DB upgrade code was failing because of that (both fixes are sketched below).
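Both fixes amount to small guards in the upgrade path. A sketch under assumed stand-in types, since the real upgrade code operates on raw JDBC rows:

    import java.util.Set;

    // Stand-in for a row of the 2.1.x PF/LB rule table.
    class RuleRow {
        Long vmInstanceId;   // dangling when the VM was already expunged
        String publicPort;   // 2.1.x data may contain spaces, e.g. " 80 "
    }

    class UpgradeSketch {
        // Problem #1: 2.1.x never deleted PF rules of expunged VMs, so
        // rules pointing at a missing VM are skipped during the upgrade.
        static boolean shouldMigrate(RuleRow rule, Set<Long> liveVmIds) {
            return rule.vmInstanceId != null
                    && liveVmIds.contains(rule.vmInstanceId);
        }

        // Problem #2: untrimmed port strings made parsing fail during the
        // upgrade; trimming before parsing fixes it.
        static int parsePort(String rawPort) {
            return Integer.parseInt(rawPort.trim());
        }
    }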