Changes:
- Added a new parameter to pass in a deployment plan during VM start
- If a hostId is passed in to DeployVMCmd (only allowed for a root admin, to test a host), a plan is built to start the VM in that host's datacenter, pod and cluster, and on that host (a sketch follows this list)
- If a plan is passed in during start but the VM's root volume is READY, the root volume's plan takes precedence and the passed-in plan is not used.
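A minimal sketch of the host-pinned plan described above; HostVO accessors, findHostById and the DeploymentPlan constructor shown here are illustrative assumptions, not the actual API:
    // Hypothetical sketch: build a plan from the hostId passed to DeployVMCmd.
    DeploymentPlan planFromRequestedHost(long hostId) {
        HostVO host = findHostById(hostId);                // assumed lookup helper
        if (host == null) {
            throw new IllegalArgumentException("Host " + hostId + " not found");
        }
        // Pin the VM to the host's datacenter, pod, cluster and the host itself.
        // If the VM's root volume is READY, this plan is ignored and the root
        // volume's existing placement wins.
        return new DeploymentPlan(host.getDataCenterId(), host.getPodId(),
                                  host.getClusterId(), host.getId());
    }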
status 9693: resolved fixed
2 more fixes with this commit:
* bug 9692 is fixed - we don't increment the resource count when a direct IP address is allocated.
* as a part of the 2.2.2->2.2.4 upgrade, resource_count for public_ip records is recalculated to count only virtual IP addresses (a hedged sketch follows)
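A hedged JDBC sketch of what such a recalculation could look like; the schema details and the predicate that distinguishes virtual from direct IPs are assumptions, not the actual upgrade SQL:
    // Assumed recount: one public_ip resource_count row per account.
    // Requires java.sql.Connection / PreparedStatement.
    void recalcPublicIpCounts(Connection conn) throws SQLException {
        // The predicate selecting "virtual" IPs is a placeholder assumption;
        // the real upgrade script defines the actual condition.
        String sql = "UPDATE resource_count rc SET rc.count = "
                   + " (SELECT COUNT(*) FROM user_ip_address ip "
                   + "  WHERE ip.account_id = rc.account_id "
                   + "    AND ip.allocated IS NOT NULL) "          // placeholder predicate
                   + "WHERE rc.type = 'public_ip'";
        try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.executeUpdate();
        }
    }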
Conflicts:
server/src/com/cloud/network/NetworkManager.java
status 9688: resolved fixed
To verify that the rule was removed:
* make sure that there is no record with the lb id in the load_balancer table (an illustrative check follows this list)
* verify that an lb.delete event was generated for this rule
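An illustrative JDBC check for the first point, assuming direct access to the cloud database (not part of the fix itself):
    // Returns true when no load_balancer row with the given id remains.
    // Requires java.sql.Connection / PreparedStatement / ResultSet.
    boolean lbRuleRemoved(Connection conn, long lbId) throws SQLException {
        try (PreparedStatement pstmt =
                 conn.prepareStatement("SELECT 1 FROM load_balancer WHERE id = ?")) {
            pstmt.setLong(1, lbId);
            try (ResultSet rs = pstmt.executeQuery()) {
                return !rs.next();
            }
        }
    }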
Changes:
- When a host connects, we check whether it has CPU and RAM entries in the capacity table. If an entry is found, its values are updated if possible; if not, a new one is inserted.
- The SearchCriteria used to check whether a CPU entry is present was wrong: the criteria did not specify capacityType. So for hostId >= 200, the search would also return the capacity entries of storage pools, since pool IDs start from 200 onwards.
- Since an entry was found (although the wrong one), we tried to update it, but the update did not happen because the capacity ranges don't match. And a new insert for CPU also did not happen, since an entry was found.
- As a result, CPU entries were never inserted in the table for hostIds >= 200.
- As a fix, corrected the search criteria (a sketch follows this list).
- During VM deployment, when the entry is not found, we got an NPE. Added a null check to avoid that.
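A hedged sketch of the corrected lookup; DAO, field and helper names are illustrative, the point is only that capacityType is now part of the criteria:
    // Before the fix the criteria matched on the id alone, so for ids >= 200 a
    // storage pool's capacity row could be returned instead of the host's CPU row.
    SearchCriteria<CapacityVO> sc = capacityDao.createSearchCriteria();
    sc.addAnd("hostOrPoolId", SearchCriteria.Op.EQ, host.getId());
    sc.addAnd("capacityType", SearchCriteria.Op.EQ, Capacity.CAPACITY_TYPE_CPU);  // the missing constraint
    List<CapacityVO> cpuEntries = capacityDao.search(sc, null);
    if (cpuEntries.isEmpty()) {
        capacityDao.persist(newCpuCapacityEntry(host));   // insert when genuinely missing (assumed helper)
    } else {
        updateIfRangesMatch(cpuEntries.get(0), host);     // assumed helper
    }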
Changes:
- Added a new column `source_template_id` to the vm_template table to carry the parent/source template ID from which the template was created
- Added the column in the DB upgrade from 224 to 225 (a sketch follows this list)
- Changed the code to save the source_template_id if there is one associated with the volume, or with the volume from which the snapshot was taken
- The API response returns the sourcetemplateid field, if set, in all template use cases.
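A minimal sketch of the schema change as a 224-to-225 upgrade step; the exact DDL (column type, nullability) is an assumption:
    // Requires java.sql.Connection / PreparedStatement.
    void addSourceTemplateIdColumn(Connection conn) throws SQLException {
        try (PreparedStatement pstmt = conn.prepareStatement(
                "ALTER TABLE vm_template ADD COLUMN source_template_id bigint unsigned")) {
            pstmt.executeUpdate();
        }
    }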
status 9623: resolved fixed
Also set ram_size to 1024 for the console proxy offering during the upgrade
Conflicts:
core/src/com/cloud/vm/SecondaryStorageVmVO.java
server/src/com/cloud/agent/manager/allocator/impl/UserConcentratedAllocator.java
server/src/com/cloud/consoleproxy/ConsoleProxyManagerImpl.java
server/src/com/cloud/storage/allocator/LocalStoragePoolAllocator.java
server/src/com/cloud/storage/secondary/SecondaryStorageManagerImpl.java
- CreateZone API creates a zoneToken, inserts it in the DB and returns it in the response (a sketch follows this list)
- UpdateZone API takes in a 'details' map that is loaded into data_center_details
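The zone token is just a unique opaque string stored with the zone and echoed in the response; a sketch of the kind of generation involved (the setter and DAO names are assumptions):
    // Illustrative only; the real CreateZone implementation may differ.
    String zoneToken = UUID.randomUUID().toString();   // requires java.util.UUID
    zone.setZoneToken(zoneToken);                      // assumed setter on the zone VO
    dataCenterDao.update(zone.getId(), zone);          // persist, then return the token in the response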
The port remains 8250.
The keystore is saved at /etc/cloud/management/cloud.keystore. We also include one
fail-safe keystore/certificate for fallback if we are unable to generate the
certificate and keystore. If the fail-safe keystore is used, a warning and a call trace are shown.
Note that you need to upgrade the agent as well as the system VM images.
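A self-contained sketch of the fallback behaviour described above, using the standard java.security.KeyStore API; the fail-safe resource name and the logger field are assumptions:
    // Assumes a surrounding class with an s_logger field and imports for
    // java.io.FileInputStream, java.io.InputStream and java.security.KeyStore.
    KeyStore loadManagementKeystore(char[] password) {
        try (InputStream in = new FileInputStream("/etc/cloud/management/cloud.keystore")) {
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(in, password);
            return ks;
        } catch (Exception e) {
            // Generated keystore unavailable: warn with the call trace and fall back.
            s_logger.warn("Unable to load generated keystore, using fail-safe keystore", e);
        }
        try (InputStream in = getClass().getResourceAsStream("/failsafe.keystore")) {  // assumed resource name
            if (in == null) {
                throw new IllegalStateException("Fail-safe keystore not found on classpath");
            }
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(in, password);
            return ks;
        } catch (Exception e) {
            throw new RuntimeException("No usable keystore", e);
        }
    }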
Changes:
- Added a new Investigator, 'ManagementIPSystemVMInvestigator', that checks whether a VM is alive only for system VMs that have a management IP address.
- If no management IP is found, the ping test cannot be done, so this investigator returns null in that case.
- The current implementation, InvestigatorImpl, is renamed to 'UserVmDomRInvestigator' and does the ping test for user VMs only.
- Corrected the ping test code that was checking a hard-coded string. Now if the ping answer is negative, we just return null (a sketch follows this list).
- Added the new investigator to components.xml
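A hedged sketch of the null-return contract described above; the method shape and helper names are assumptions based on this description, not the actual class:
    // null means "no verdict": either this investigator does not apply, or the
    // ping could not be done / came back negative without proving the VM is down.
    Boolean isVmAlive(VirtualMachine vm, Host host) {
        if (!isSystemVm(vm)) {
            return null;                                  // handled by UserVmDomRInvestigator
        }
        String mgmtIp = getManagementIp(vm);              // assumed lookup helper
        if (mgmtIp == null) {
            return null;                                  // no management IP, ping test impossible
        }
        Boolean pingResult = pingFromHost(host, mgmtIp);  // assumed helper
        return Boolean.TRUE.equals(pingResult) ? Boolean.TRUE : null;
    }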
Changes:
- When the ROOT volume of a VM is found to be READY, changed the planner to reuse the pool for every volume (root or data) that is READY and whose pool is not in maintenance and not in the avoid set (a sketch follows this list)
- If the ROOT volume is not ready, we don't care about the DATA disk; both get re-allocated.
- When a pool is reused for a ready volume, the planner does not call the storage pool allocators, and such volumes are not assigned a pool in the deployment destination returned by the planner. Accordingly, the StorageManager::prepare method won't recreate these volumes, since they are not mentioned in the destination.
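A hedged sketch of the reuse check; the VO accessors and avoid-set API here are assumptions, the intent follows the description above:
    // A READY volume keeps its current pool only if the pool is usable.
    boolean canReuseExistingPool(VolumeVO vol, StoragePoolVO pool, ExcludeList avoid) {
        return vol.getState() == Volume.State.Ready
            && pool != null
            && !pool.isInMaintenance()          // assumed accessor
            && !avoid.shouldAvoid(pool);        // assumed accessor
    }
    // Volumes that pass this check are not handed to the storage pool allocators
    // and are left out of the deployment destination's volume-to-pool map, so
    // StorageManager.prepare() does not recreate them.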
Changes:
- Reason: the old volume's templateId was being updated before volume creation was attempted. So on the retry we didn't find a difference between the volume's templateId and the VM's templateId and did not enter the recreation logic.
- Fix: update the new volume's templateId with the VM's templateId while creating the new volume. The old volume's templateId stays the same and the old volume is marked 'Destroy' when the new volume is created (a sketch follows this list).
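A minimal sketch of the ordering the fix enforces; entity, DAO and helper names are assumptions:
    // The old volume keeps its templateId (so a retry still detects the mismatch
    // against the VM's templateId); only the new volume takes the VM's templateId.
    VolumeVO recreateRootVolume(VolumeVO oldVol, VMInstanceVO vm) {
        VolumeVO newVol = allocateNewRootVolume(vm);   // assumed helper
        newVol.setTemplateId(vm.getTemplateId());
        volumeDao.persist(newVol);
        markAsDestroy(oldVol);                         // assumed state-transition helper
        return newVol;
    }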
Changes:
- Changed host allocators/planner to use cpu.overprovisioning.factor
- Removed the following behavior: while adding a new host, we were setting total_cpu in op_host_capacity to actual_cpu * cpu.overprovisioning.factor. Now we set it to actual_cpu.
- The ListCapacities response now calculates the total CPU as actual * cpu.overprovisioning.factor. (This does not add anything new - listCapacities was previously pulling total CPU from the op_host_capacity DB, which already had cpu.overprovisioning.factor applied. Now we apply it over the DB entry; see the sketch after this list.)
- HostResponse has a new field, 'cpuWithOverprovisioning', that returns the CPU after applying cpu.overprovisioning.factor
- The DB upgrade from 222 to 224 now updates total_cpu in op_host_capacity to actual_cpu for each Routing host.
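A sketch of where the factor now gets applied; the config key cpu.overprovisioning.factor comes from the notes above, the method itself is illustrative:
    // op_host_capacity now stores the raw actual_cpu; the overprovisioning factor
    // is applied when capacity is read (allocators, listCapacities, HostResponse),
    // not when the host is added.
    long cpuWithOverprovisioning(long actualCpu, double cpuOverprovisioningFactor) {
        return (long) (actualCpu * cpuOverprovisioningFactor);
    }

    // e.g. listCapacities: totalShown = cpuWithOverprovisioning(entry.getTotalCapacity(), factor);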