Changes:
- Added a new column `source_template_id` to the vm_template table to carry the ID of the parent/source template from which the template was created
- Added the column in the DB upgrade from 224 to 225
- Changed code to save the source_template_id if there is one associated with the volume, or with the volume from which the snapshot was taken
- API response returns the sourcetemplateid field, if set, in all template use cases.
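A minimal sketch of how the new column might flow through, assuming hypothetical accessor and DAO names (setSourceTemplateId, _templateDao); the real code may differ:

    // When a template is created from a volume (or from a snapshot's source
    // volume), carry the volume's template id over as the parent/source id.
    Long sourceTemplateId = volume.getTemplateId();        // may be null
    template.setSourceTemplateId(sourceTemplateId);
    _templateDao.persist(template);

    // API response: expose the field only when it is set.
    if (template.getSourceTemplateId() != null) {
        response.setSourceTemplateId(template.getSourceTemplateId());
    }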
- CreateZone API creates a zoneToken, inserts it in the DB, and returns it in the response
- UpdateZone API takes in a 'details' map that is loaded into data_center_details
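A sketch of both changes, under the assumption that the token is a random UUID and the details map is persisted row by row (DataCenterDetailVO, _zoneDao, _zoneDetailsDao are illustrative names):

    // CreateZone: generate the token, persist it, return it in the response.
    String zoneToken = UUID.randomUUID().toString();
    zone.setZoneToken(zoneToken);
    _zoneDao.persist(zone);
    response.setZoneToken(zoneToken);

    // UpdateZone: copy the 'details' map into data_center_details.
    for (Map.Entry<String, String> detail : details.entrySet()) {
        _zoneDetailsDao.persist(new DataCenterDetailVO(zone.getId(),
                detail.getKey(), detail.getValue()));
    }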
- Local fix to not log the content of ModifySshKeyCommand.
- For commands that should not log their parameters, added a facility to indicate this (see the sketch below).
- For such commands, the parameters are removed from the log.
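One way this facility could look; the marker method name is hypothetical, the mechanism (commands declare that their parameters must not be logged, and the dispatcher honors it) is as described above:

    // Hypothetical opt-out marker on the command:
    public class ModifySshKeyCommand extends Command {
        @Override
        public boolean logParams() { return false; }   // keys must never hit the log
    }

    // Dispatcher side: drop the parameters for such commands.
    String line = cmd.logParams()
            ? cmd.getClass().getSimpleName() + " " + _gson.toJson(cmd)
            : cmd.getClass().getSimpleName();
    s_logger.debug("Executing " + line);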
Changes:
- Changed host allocators/planner to use cpu.overprovisioning.factor
- Removed the following behavior: while adding a new host, we were setting total_cpu in op_host_capacity to actual_cpu * cpu.overprovisioning.factor. Now we set it to actual_cpu.
- The ListCapacities response now calculates total CPU as actual_cpu * cpu.overprovisioning.factor. (This adds nothing new: listCapacities previously pulled total CPU from the op_host_capacity table, which already had the factor applied; now the factor is applied on top of the DB entry.)
- HostResponse has a new field, 'cpuWithOverprovisioning', that returns the CPU after applying cpu.overprovisioning.factor
- The DB upgrade from 222 to 224 now updates total_cpu in op_host_capacity to be the actual_cpu for each Routing host.
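The read-side math, as a sketch (DAO and getter names illustrative): storage keeps the raw value, and the factor is applied where the value is reported.

    // op_host_capacity now stores actual_cpu; apply the factor on the way out.
    long actualCpu = capacity.getTotalCapacity();
    float factor = Float.parseFloat(
            _configDao.getValue("cpu.overprovisioning.factor"));
    long cpuWithOverprovisioning = (long) (actualCpu * factor);
    hostResponse.setCpuWithOverprovisioning(String.valueOf(cpuWithOverprovisioning));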
status 9336: resolved fixed
Following changes were made:
* deleteSecurityGroup/authorizeSecurityGroupIngress - removed the account/domainId parameters, as a security group is now uniquely identified by its id
* removed the account_name field from the securityGroup DB table; removed allowed_security_group/allowed_sec_grp_acct from security_ingress_rule. These values were used only to generate API responses, for performance; caching was added at the API level to keep performance up (see the sketch below)
* Added missing security checks for securityGroups/ingressRules
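A sketch of the API-level cache that replaces the denormalized columns; a per-request map suffices because the values are only needed while building one response (names illustrative):

    // Resolve each owner's account at most once per API call.
    Map<Long, Account> accountCache = new HashMap<Long, Account>();
    Account owner = accountCache.get(group.getAccountId());
    if (owner == null) {
        owner = _accountDao.findById(group.getAccountId());
        accountCache.put(group.getAccountId(), owner);
    }
    response.setAccountName(owner.getAccountName());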
Since private and public keys are logged, this is a security concern.
Changes: Added the capability for 'Command' instances to exclude certain fields from being logged, using the GSON @Expose annotation.
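A minimal sketch of the @Expose-based exclusion: the logging Gson is built with excludeFieldsWithoutExposeAnnotation(), so any field left unannotated (the keys, here) is never serialized into the log line. Field names are illustrative:

    import com.google.gson.Gson;
    import com.google.gson.GsonBuilder;
    import com.google.gson.annotations.Expose;

    public class ModifySshKeyCommand extends Command {
        @Expose private String hostIp;     // safe to log
        private String publicKey;          // no @Expose: excluded from the log
        private String privateKey;         // no @Expose: excluded from the log
    }

    // Logging serializer: fields without @Expose are dropped.
    Gson logGson = new GsonBuilder()
            .excludeFieldsWithoutExposeAnnotation().create();
    s_logger.debug("Executing " + logGson.toJson(cmd));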
Changes:
While starting a System VM:
- If the ROOT volume is READY, we check whether the volume's templateId matches the system VM's template.
- If it does not match, we update the volume's templateId and ask the deployment planner to reassign a pool to this volume even though it is READY.
In general:
- If a ROOT volume is READY, we remove its entry from the DeployDestination before calling StorageManager::prepare()
- StorageManager creates a volume if a pool is assigned to it in the DeployDestination passed to it.
- If a volume has no pool assigned to it in the DeployDestination, it means the volume is READY and already has a pool allocated to it (see the sketch below).
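A sketch of the READY-volume handling described above, assuming DeployDestination keeps a volume-to-pool map behind getStorageForDisks() and that systemVmTemplateId is a plain long (both assumptions):

    if (rootVolume.getState() == Volume.State.Ready) {
        if (rootVolume.getTemplateId() != systemVmTemplateId) {
            // Stale template: retag the volume and leave it in the
            // DeployDestination so the planner reassigns a pool.
            rootVolume.setTemplateId(systemVmTemplateId);
            _volumeDao.update(rootVolume.getId(), rootVolume);
        } else {
            // Volume already has a pool; drop it from the destination so
            // StorageManager.prepare() does not try to create it again.
            destination.getStorageForDisks().remove(rootVolume);
        }
    }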
Also changed 'RegisterTemplate' to take in a new optional parameter, 'checksum'.
Its value is stored as-is in the 'checksum' column of the vm_template table.
Changes:
- When migration fails, we try to clean up on the destination host agent. The AgentUnavailableException thrown during this cleanup was not caught.
- Because of that, other cleanup, such as reverting the allocated capacity and the VM state, was skipped.
- The fix is to catch the AgentUnavailableException so that the rest of the cleanup can happen (see the sketch below).
- Also corrected the exceptions thrown in the various migration-failure cases.
- In case the VM is still starting, HA should schedule a retry; introduced a special migration exception to handle this.
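The shape of the fix, as a sketch; cleanupCommand and the surrounding names are placeholders, the point is that the catch keeps the remaining rollback running:

    try {
        // Best-effort cleanup on the destination agent.
        _agentMgr.send(destHostId, cleanupCommand);
    } catch (AgentUnavailableException e) {
        s_logger.warn("Agent unavailable for migration cleanup on host "
                + destHostId + "; continuing with local cleanup", e);
        // Deliberately swallowed: capacity and VM state must still be reverted.
    }
    // ...revert allocated capacity and VM state here, as before...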
status 7704: resolved fixed
For a user VM:
* for the default network, take the limit from the corresponding service offering
* for all additional networks, take the limits from the network offerings
For domainRouter/SSVM/CPVM:
* get the info from the network offering
Added a new config parameter, "vm.network.throttling.rate". If nw_rate is NULL for the service offering, this parameter is used for the VM's default network.
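The lookup order for the default network's rate, sketched with illustrative getter names:

    Integer rateMbps = serviceOffering.getRateMbps();   // nw_rate column
    if (rateMbps == null) {
        // nw_rate is NULL: fall back to the new global config parameter.
        rateMbps = Integer.valueOf(
                _configDao.getValue("vm.network.throttling.rate"));
    }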
1) No longer do multiple searches involving the "domain" table; only one join with domain is done.
2) Join with the domain table only when the command is executed by a domain admin.
3) Added an index on the "path" field of the "domain" table.
4) No longer join with the account table, as account_id is already present in the vm_instance table.
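In SQL terms the reworked listing query looks roughly like the sketch below; it assumes vm_instance carries domain_id alongside account_id, with the domain join added only for domain admins and d.path being the newly indexed column:

    String sql = "SELECT vi.* FROM vm_instance vi";
    if (caller.isDomainAdmin()) {
        // Single join with domain, scoped through the indexed path column.
        sql += " INNER JOIN domain d ON vi.domain_id = d.id"
             + " WHERE d.path LIKE ?";   // e.g. '/ROOT/dept1/%'
    }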
- Added a new flag, 'allocation_state', to zone, pod, cluster, and host
- The possible values for this flag are 'Enabled' and 'Disabled'
- When a new zone, pod, cluster, or host is added, allocation_state is 'Disabled' by default.
- For existing zones, pods, clusters, and hosts, the state is 'Enabled'.
- All Add/Update/List commands for each of zone, pod, cluster, and host can now take a new parameter, 'allocationstate'
- If 'allocation_state' is 'Disabled', allocators skip that zone, pod, cluster, or host (see the sketch after this list).
- For a root admin, ListZones lists all zones, including the 'Disabled' ones. For any other user, the 'Disabled' zones are not included in the response.
- For any use case that creates/deploys/adds/registers a resource and takes a zone as a parameter, we now check whether the zone is 'Disabled'. If so, the operation can be performed only by the root admin. Adding volumes, snapshots, and templates are examples of this use case.
- To let the root admin test a particular pod/cluster/host, the deployVM command takes a 'host_id' parameter that can be passed in only by the root admin.
If this parameter is passed in by the admin, allocators do not search for hosts and use only that host. Storage pools are searched in that host's cluster.
If the VM cannot be deployed to that host, deployVM fails without retrying.
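A sketch of the allocator-side check, assuming the state is exposed as a String (accessor names illustrative); the same test applies at each scope:

    for (HostVO host : candidateHosts) {
        if ("Disabled".equals(zone.getAllocationState())
                || "Disabled".equals(pod.getAllocationState())
                || "Disabled".equals(cluster.getAllocationState())
                || "Disabled".equals(host.getAllocationState())) {
            continue;   // skip candidates under any Disabled scope
        }
        // ...normal capacity checks...
    }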
The following was done as part of this check-in:
1) NetworkOffering/Network:
* Add PF (port forwarding) service support to the default Guest network offering.
* Add one more additional network: Public.
* Allow enabling an external firewall in a Basic zone.
2) Don't allow deploying a VM in the Public network.
3) Allow adding VLAN IP ranges to Public networks in a Basic zone.
4) Associate IP - allow associating with Direct VMs.
5) Allow creating PF/static NAT rules. The rules are sent to the External Firewall element only.
6) Add PF support to the External Firewall element.
This is root-admin-only functionality.
---------------------
Service API changes:
---------------------
- ManagementServer will expose a new API:
  Pair<List<HostVO>, List<Long>> listHostsForMigrationOfVM(UserVm vm, Long startIndex, Long pageSize)
  The API returns the list of all hosts in the VM's cluster minus the current host, plus a list of hostIds that appear to have enough CPU and RAM capacity to host this VM.
- ListHostsCmd will call this service API if virtualmachineid is present in the request.
- MigrateVmCmd is a new command that takes in virtualmachineid and a destination hostid
- UserVmService will expose a new API: UserVm migrateVirtualMachine(UserVm vm, Host destinationHost)
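Pulling the two signatures together, the new service surface is sketched below; only the signatures come from this change, the surrounding interface declarations are for illustration:

    public interface ManagementServer {
        // All hosts in the VM's cluster minus the current one, plus the ids
        // of those with enough spare CPU and RAM.
        Pair<List<HostVO>, List<Long>> listHostsForMigrationOfVM(UserVm vm,
                Long startIndex, Long pageSize);
    }

    public interface UserVmService {
        UserVm migrateVirtualMachine(UserVm vm, Host destinationHost);
    }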
------------------------------------
API throws error in following cases:
------------------------------------
- The user is not a root admin ('Permission denied').
- The VM uses local storage; we cannot migrate it, so 'listHosts' throws an error.
- Migration of the VM to the chosen host fails.
- The API currently supports migration for XenServer only, so an error is thrown if the hypervisor is not XenServer (e.g. KVM, vSphere).
- The destination host is not in the same cluster as the source host.
- The VM is not in the Running state.