* 4.6:
CLOUDSTACK-9020: UI enhancements for the metrics view
CLOUDSTACK-9064: Users should be able to create multiple guest shared networks with the same VLAN ID, the same physical network and the same network, just with different IP ranges.
CLOUDSTACK-9078: Gave scripts executable permissions.
CLOUDSTACK-9076: Changed ownership of directory /var/lib/cloudstack to cloud.
Cannot list vlanipranges by keyword
CLOUDSTACK-9002: VM deployment is successful even when the dhcp entry command fails - Fixed
Reason: The return value of the call to the accept() function in the applyRules() function of BasicNetworkTopology.java was not checked for success or failure. As a result, even when it failed, no exception was thrown and VM deployment went ahead without any errors.
Fix: Added the necessary checks.
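A minimal sketch of the fix, with simplified, hypothetical types (the real code lives in BasicNetworkTopology.applyRules()): the boolean returned by accept() is now checked, so a failed dhcp entry command aborts deployment instead of being ignored.

    interface RuleApplier {
        boolean accept(Object visitor); // false => the dhcp entry command failed
    }

    class ApplyRulesSketch {
        void applyRules(RuleApplier applier, Object visitor) {
            // Previously this return value was ignored and deployment continued.
            if (!applier.accept(visitor)) {
                throw new RuntimeException("dhcp entry command failed; aborting VM deployment");
            }
        }
    }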
* pr/995:
CLOUDSTACK-9002: VM deployment is successful even when dhcp entry command fails - Fixed
Signed-off-by: Remi Bergsma <github@remi.nl>
Two things have been changed:
* We look at power_state_update_time instead of update_time; looking at update_time did not make sense at all.
* Due to a DB update optimisation, powerState is only updated while the consecutive same-state update count is below MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT. That is why we cannot rely on this information unless we make sure it is up to date (sketched below).
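A hedged illustration of the resulting check; names and the cap value are hypothetical, not the exact CloudStack code:

    import java.util.Date;

    class PowerStateFreshnessSketch {
        static final int MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT = 10; // illustrative value

        // The cached power state can only be trusted while the DB optimisation is
        // still writing updates (count below the cap) and the last write is recent.
        boolean isUpToDate(Date powerStateUpdateTime, int sameStateUpdateCount, long graceMillis) {
            long age = System.currentTimeMillis() - powerStateUpdateTime.getTime();
            return sameStateUpdateCount < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT
                    && age <= graceMillis;
        }
    }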
CLOUDSTACK-8816: entityuuid missing in some of the events. In some of the generated events the entity UUID was missing, making it difficult to find the entity. Fixed the same.
Tested it on a RabbitMQ instance.
These are the events before and after the fix:
Before
--------------------------------------------------------------------------------
routing_key: management-server.ActionEvent.ACCOUNT-DELETE.Account.*
exchange: cloudstack-events
message_count: 2
payload:
{"eventDateTime":"2015-09-04 17:59:24 +0530","status":"Scheduled","description":"deleting User test4 (id: 28) and accountId \u003d 28","event":"ACCOUNT.DELETE","Account":"c09e2e81-8edc-4c27-b072-25005b522b63","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 304
payload_encoding: string
redelivered: False
--------------------------------------------------------------------------------
routing_key: management-server.AsyncJobEvent.complete.Account.*
exchange: cloudstack-events
message_count: 0
payload: {"cmdInfo":"{\"id\":\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\",\"response\":\"json\",\"sessionkey\":\"5ig1ItP2_5v-mgY4cVJbJN5hw_w\",\"ctxDetails\":\"
{\\\"interface com.cloud.user.Account\\\":\\\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\\\"}
\",\"cmdEventType\":\"ACCOUNT.DELETE\",\"expires\":\"2015-09-07T11:11:56+0000\",\"ctxUserId\":\"2\",\"signatureversion\":\"3\",\"httpmethod\":\"GET\",\"uuid\":\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\",\"ctxAccountId\":\"2\",\"ctxStartEventId\":\"447\"}","instanceType":"Account","jobId":"5004989d-0cde-4922-8afa-66bf38b75ea7","status":"SUCCEEDED","processStatus":"0","commandEventType":"ACCOUNT.DELETE","resultCode":"0","command":"org.apache.cloudstack.api.command.admin.account.DeleteAccountCmd","jobResult":"org.apache.cloudstack.api.response.SuccessResponse/null/
{\"success\":true}
","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 914
payload_encoding: string
redelivered: False
--------------------------------------------------------------------------------
After
--------------------------------------------------------------------------------
routing_key: management-server.ActionEvent.ACCOUNT-DELETE.Account.e5e2db91-414d-484c-99d5-c4e265c14ad8
exchange: cloudstack-events
message_count: 13
payload: {"eventDateTime":"2015-09-07 17:32:26 +0530","status":"Completed","description":"Successfully completed deleting account. Account Id: 45","event":"ACCOUNT.DELETE","entityuuid":"e5e2db91-414d-484c-99d5-c4e265c14ad8","entity":"com.cloud.user.Account","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 344
payload_encoding: string
redelivered: True
--------------------------------------------------------------------------------
routing_key: management-server.AsyncJobEvent.complete.Account.e5e2db91-414d-484c-99d5-c4e265c14ad8
exchange: cloudstack-events
message_count: 12
payload: {"cmdInfo":"{\"id\":\"e5e2db91-414d-484c-99d5-c4e265c14ad8\",\"response\":\"json\",\"sessionkey\":\"8AJVbn8HIpg5LZ_VaVfSPs_QN2k\",\"ctxDetails\":\"{\\\"interface com.cloud.user.Account\\\":\\\"e5e2db91-414d-484c-99d5-c4e265c14ad8\\\"}\",\"cmdEventType\":\"ACCOUNT.DELETE\",\"expires\":\"2015-09-07T12:17:42+0000\",\"ctxUserId\":\"2\",\"signatureversion\":\"3\",\"httpmethod\":\"GET\",\"uuid\":\"e5e2db91-414d-484c-99d5-c4e265c14ad8\",\"ctxAccountId\":\"2\",\"ctxStartEventId\":\"465\"}","instanceType":"Account","instanceUuid":"e5e2db91-414d-484c-99d5-c4e265c14ad8","jobId":"0bb08486-6d9f-4e9f-bfef-b7463c42e71b","status":"SUCCEEDED","processStatus":"0","commandEventType":"ACCOUNT.DELETE","resultCode":"0","command":"org.apache.cloudstack.api.command.admin.account.DeleteAccountCmd","jobResult":"org.apache.cloudstack.api.response.SuccessResponse/null/{\"success\":true}","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 968
payload_encoding: string
redelivered: True
--------------------------------------------------------------------------------
* pr/782:
CLOUDSTACK-8816: System VM reboot event doesn't have UUIDs. Fixed the same.
CLOUDSTACK-8816: Project UUID is not showing for some operations in RabbitMQ.
CLOUDSTACK-8816: entity UUID missing in create network event
CLOUDSTACK-8816: instance UUID is missing in events for delete account
CLOUDSTACK-8816: Fixed entityUuid missing in some events
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
This reverts commit cd7218e241, reversing
changes made to f5a7395cc2.
Reason for Revert:
noredist build failed with the below error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project cloud-plugin-hypervisor-vmware: Compilation failure
[ERROR] /home/jenkins/acs/workspace/build-master-noredist/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[484,12] error: non-static variable logger cannot be referenced from a static context
[ERROR] -> [Help 1]
Even the normal build is broken, as reported by @koushik-das on the dev list:
http://markmail.org/message/nngimssuzkj5gpbz
CLOUDSTACK-8754: VM migration triggered by dynamic scaling is failing. This is caused by a serialization failure of the VmWorkMigrateForScale object. Replaced the DeployDestination member of VmWorkMigrateForScale with serializable types.
Haven't added any unit tests, as there was no clean way to do it: simply adding a (de)serialization test wouldn't catch a newly added member that breaks serializability.
* pr/725:
CLOUDSTACK-8754: VM migration triggered by dynamic scaling is failing. This is caused by a serialization failure of the VmWorkMigrateForScale object. Replaced the DeployDestination member of VmWorkMigrateForScale with serializable types.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
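A hedged sketch of the shape of this fix (field names hypothetical, not the actual VmWorkMigrateForScale source): the work-job payload carries plain serializable IDs instead of a DeployDestination.

    import java.io.Serializable;

    class VmWorkMigrateForScaleSketch implements Serializable {
        private static final long serialVersionUID = 1L;
        // Before: a DeployDestination member -- not serializable, so the work
        // job payload could not be round-tripped through the job queue.
        private Long zoneId;
        private Long podId;
        private Long clusterId;
        private Long hostId;
    }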
Changed method names according to the Nic.java refactor.
Fixed NicVO.java due to a regression from the Nic.java refactor.
Fixed VMwareGuru.java after the Nic.java refactor.
See issue CLOUDSTACK-8736 for ongoing effort to clean up network code.
Instead of putting the host into Alert state immediately, the investigators should be allowed to run for some time, based on the alert.wait global config.
If the host state still cannot be determined at the end of this interval, put the host into Alert. Also updated some of the log messages.
This closes #621
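A hedged sketch of the investigation loop described above (hypothetical names; the real code is spread across the HA investigators): only declare Alert once alert.wait has elapsed without a verdict.

    import java.util.concurrent.TimeUnit;
    import java.util.function.Supplier;

    class HostInvestigationSketch {
        enum HostState { UP, DOWN, UNKNOWN, ALERT }

        HostState determineState(long alertWaitSeconds, Supplier<HostState> investigators)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + TimeUnit.SECONDS.toMillis(alertWaitSeconds);
            while (System.currentTimeMillis() < deadline) {
                HostState state = investigators.get();
                if (state != HostState.UNKNOWN) {
                    return state; // investigators reached a verdict within alert.wait
                }
                Thread.sleep(1000); // give the investigators more time
            }
            return HostState.ALERT; // still undetermined after alert.wait
        }
    }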
- The new implementation uses nanoseconds; places where the Profiler is used as a monitor and/or a stopwatch will see the difference in the returned values.
- Also added a getDuration() method, which returns the time in nanoseconds in case someone wants to use it instead.
- Added an extra test to check that getDuration() works correctly with nanoseconds.
- Fixed the test that checks the time in milliseconds: added an error margin to make the test more robust.
Signed-off-by: Daan Hoogland <daan@onecht.net>
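A hedged sketch of a nanosecond-based profiler with the added accessor (not the exact com.cloud.utils.Profiler source):

    class ProfilerSketch {
        private long startNs = -1;
        private long stopNs = -1;

        void start() { startNs = System.nanoTime(); }
        void stop()  { stopNs = System.nanoTime(); }

        // New accessor: raw duration in nanoseconds.
        long getDuration() {
            return (startNs < 0 || stopNs < 0) ? -1 : stopNs - startNs;
        }

        // Millisecond view for callers that relied on the old behaviour.
        long getDurationInMillis() {
            return getDuration() / 1_000_000;
        }
    }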
During the ping task, while scanning and updating the status of all VMs on the host that are stuck in a transitional state
and missing from the power report, do so only for VMs that are not removed.
(cherry picked from commit de7173a0ed)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
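A hedged sketch of the skip (simplified types, not the real VM sync code): the transitional-state fixup now ignores VMs that are already marked removed.

    import java.util.Date;
    import java.util.List;

    class PingTaskScanSketch {
        static class VmRecord {
            Date removed;               // non-null once the VM row is soft-deleted
            boolean transitionalState;  // e.g. Starting/Stopping/Migrating
        }

        void fixupMissingVms(List<VmRecord> missingFromPowerReport) {
            for (VmRecord vm : missingFromPowerReport) {
                if (vm.removed != null) {
                    continue; // do not rewrite state for removed VMs
                }
                if (vm.transitionalState) {
                    // ... update the stuck state as before ...
                }
            }
        }
    }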
* removed the "is redundant" flag form the addVpcRouterToGuestNetwork() method
* removed the "is redundant" flag from the removeVpcRouterFromGuestNetwork() method
* changed the path of the master.py file in the keepalived.conf.temp file
* the call to routerDao.addRouterToGuestNetwork() in the VpcRouterDeploymentDefinition is not needed. That step will be performed once a VM is created
- In addition, when restarting a VPC the routers will have the guest net configured, if any exists.
* Pushing the POM.xml as well, to use the old Jetty for now. Could not fix the logging problem. Will replace the POM with master version after VPC is done.
Fixed on 4.4 and master but not on 4.5, cherry-picked on 4.5 using commit
fbafc957dc
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
engine/orchestration/src/com/cloud/agent/manager/DirectAgentAttache.java
When migrating a volume to a storage pool, the source and destination pools cannot be a local pool and a cluster/zone-wide pool, or vice versa.
CloudStack detects this and throws an exception. However, the end user only sees an
unexpected exception and not the reason for the failure. Fixed it by making sure the
reason for the failure is correctly captured and shown to the end user.
(cherry picked from commit cffae8eef0)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
server/src/com/cloud/storage/VolumeApiServiceImpl.java
When migration fails, throw the exception instead of returning NULL.
(cherry picked from commit a5a65c7b55)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
If the VM has been cold migrated across different VMware DCs, unregister the VM from the source host.
(cherry picked from commit 15b348632d)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Before registering a VM, check whether a different CS VM with the same name exists in vCenter.
(cherry picked from commit 33179cce56)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Separate global config to enable/disable Storage Migration during normal deployment
Introduced a configuration parameter named enable.storage.migration
(cherry picked from commit c55bc0b2d1)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
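A minimal sketch of how such a global config is typically declared, assuming the framework's ConfigKey constructor (category, type, name, default, description, isDynamic); the exact declaration in the commit may differ:

    import org.apache.cloudstack.framework.config.ConfigKey;

    class StorageMigrationConfigSketch {
        static final ConfigKey<Boolean> EnableStorageMigration = new ConfigKey<>(
                "Storage", Boolean.class, "enable.storage.migration", "true",
                "Enable/disable storage migration across primary storage during VM deployment",
                true);
    }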
During vmsync, if the StopCommand (issued as part of a PowerOff/PowerMissing report) fails to stop the VM (since the VM is running on the HV),
don't transition the VM state to "Stopped" in the CS DB. Also added a check to throw ConcurrentOperationException if the VM state is not
"Running" after the start operation.
Changes:
- When there is HA, we try to redeploy the affected VM using the regular planners; if that fails, we retry using the special planner for HA, which skips the disable-threshold check (see the sketch after this list). Because of the job framework, the InsufficientCapacityException was getting masked and the special planners were not called; the job framework needs to be fixed to rethrow the correct exception.
- The VM work job framework was also not setting the DeploymentPlanner on the VmWorkJob, so the HA planner passed in by the HA manager was not getting used.
- Now the job framework sets the planner passed in by any caller of the VM start operation on the job.
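A hedged sketch of the intended HA flow (hypothetical names); the point of the fix is that InsufficientCapacityException must reach this catch block instead of being masked by the job framework.

    class HaRedeploySketch {
        static class InsufficientCapacityException extends Exception {}

        interface Planner {
            Long pickHost(long vmId) throws InsufficientCapacityException;
        }

        Long redeploy(long vmId, Planner regular, Planner haSpecial)
                throws InsufficientCapacityException {
            try {
                return regular.pickHost(vmId);
            } catch (InsufficientCapacityException e) {
                // Special HA planner: skips the disable-threshold check.
                return haSpecial.pickHost(vmId);
            }
        }
    }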
By default, the iptables rules are updated to ACCEPT egress traffic.
If the network egress default policy is false, CS removes the ACCEPT rule and adds the DROP rule, which
is the egress default rule when there are no other egress rules.
If the CS network egress default policy is true, CS won't configure any default rule for egress, because the
router already comes up accepting egress traffic. If there are already egress rules for the network, then those
egress rules get applied on the VR.
For an isolated network without the firewall service, the VR by default allows egress traffic (guest network --> public network).
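A hedged sketch of the default-rule decision described above (simplified; not the actual VR configuration code):

    class EgressDefaultRuleSketch {
        enum DefaultRule { ACCEPT, DROP, NONE }

        DefaultRule ruleFor(boolean egressDefaultPolicyAllow, boolean hasUserEgressRules) {
            if (hasUserEgressRules) {
                return DefaultRule.NONE; // the user's egress rules are applied on the VR
            }
            if (egressDefaultPolicyAllow) {
                return DefaultRule.NONE; // VR already comes up accepting egress traffic
            }
            return DefaultRule.DROP;     // replace the initial ACCEPT with DROP
        }
    }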
Root cause:
when vmsync reports a system VM as down, CCP doesn't release the VM's resources before starting it.
Fix:
make sure cleanup is called for a VM when it is reported as Stopped.
Unnecessary exception in MS logs while removing the default NIC from a VM. The following changes are made:
1. Changed the exception from CloudRuntimeException to InvalidParameterValueException.
2. Moved the validation logic out of VirtualMachineManagerImpl into UserVmManagerImpl.
3. Handled InvalidParameterValueException from async API calls so that it is not logged as ERROR in MS logs (see the sketch after this list).
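A hedged sketch of change 3 (logger and job wrapper simplified, exception type stubbed): client errors surface to the caller without an ERROR-level entry.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    class AsyncJobErrorHandlingSketch {
        static class InvalidParameterValueException extends RuntimeException {
            InvalidParameterValueException(String msg) { super(msg); }
        }

        private static final Logger LOG = Logger.getLogger("AsyncJob");

        void runJob(Runnable job) {
            try {
                job.run();
            } catch (InvalidParameterValueException e) {
                LOG.log(Level.INFO, "Invalid parameter: " + e.getMessage()); // was ERROR
                throw e; // still fail the job for the API caller
            }
        }
    }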
When a VM is stopped or started on the hypervisor (outside CloudStack), the state of the VM is not updated in the CloudStack DB. The
ping task was not checking the resource (host) status by default. The power
state of the VMs is returned as part of the resource status. Fixed the issue by
making sure the ping task tries at least once to get the resource status.
(cherry picked from commit 55b4ead495)
1. While destroying a ROOT volume, look up the associated VM under the DC and not just the cluster.
2. In case of VMware, during VM start, if a volume is being recreated there is no need to detach the old volume,
because we now expunge it immediately and don't wait for the storage cleanup task to run.
- The check for whether a network is implemented changed from 'state == Implementing || Implemented' to 'state == Implemented'.
The earlier check was a hack to prevent the issue described below.
- At the time of implementing a network (using the implementNetwork() method), if the VR needs to be deployed it follows
the same path as regular VM deployment. This leads to a nested call to implementNetwork() while preparing the VR NICs. This
flow creates issues in dealing with network state transitions: the original call puts the network in the "Implementing" state
and then the nested call tries to put it into the same state again, resulting in issues. To avoid this, the implementNetwork()
call for the VR is replaced with the code sketched below.
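The changelog does not reproduce the actual replacement code; a hedged reconstruction of the idea (simplified, hypothetical names):

    class VrNicPrepareSketch {
        enum State { ALLOCATED, IMPLEMENTING, IMPLEMENTED }

        static class Network { State state; }

        void prepareVrNic(Network network) {
            if (network.state == State.IMPLEMENTED || network.state == State.IMPLEMENTING) {
                // The outer implementNetwork() call owns the state transition;
                // just prepare the NIC without re-entering implementNetwork().
            } else {
                throw new IllegalStateException("network is not being implemented");
            }
        }
    }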
Clean up the rules, then destroy. The fix adds a provision to specify whether cleanup is needed on network
shutdown. The VR is marked as not requiring network rules cleanup on
network shutdown, as the VR is destroyed and recreated.
Ran the simulator tests that exercise the network lifecycle.
requires storage migration, resulting in failure of the VM migration. This also improves
the hostsformigration API. Previously we listed all hosts, found suitable storage
pools for all volumes, and then checked whether VM migration to that host required
storage migration. Now the process is updated:
we check only those volumes which are not in a zone-wide primary store,
verifying by comparing the cluster ID of the volume's pool to the host's cluster ID. If the pool
is local or the cluster IDs differ, we verify whether the host has suitable
storage pools for the volumes of the VM to be migrated.
Changes:
The pod ID in which the router should be started was not being saved to the DB because the VO's setter method did not follow the setXXX format. When the planner loaded the router from the DB it always got a null podId, which allowed the planner to deploy the router in any pod. If the router happened to start in a different pod than the user VM, the VM failed to start since the DHCP service check failed.
Fixed the VO's setPodId method that was causing the DB save operation to fail.
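A hedged sketch of the bug class (hypothetical VO and field names): the persistence layer only recognises conventional set&lt;Field&gt; methods, so a mis-named setter meant the pod ID was never written to the DB.

    class RouterVoSketch {
        private Long podIdToDeployIn;

        // Before (never invoked by the save path -- wrong name):
        // public void podIdToDeployIn(Long podId) { this.podIdToDeployIn = podId; }

        // After (matches the setXXX convention, so the value is persisted):
        public void setPodIdToDeployIn(Long podId) {
            this.podIdToDeployIn = podId;
        }

        public Long getPodIdToDeployIn() {
            return podIdToDeployIn;
        }
    }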
List of changes:
1. Created a separate thread pool for handling cron and ping tasks (see the sketch after this list). The size of the pool is based on direct.agent.pool.size. The existing direct agent pool runs all commands other than cron and ping.
2. For normal tasks (generated as part of user/admin API calls), if the throttle limit is reached, tasks are queued for subsequent execution once threads are available.
3. For cron and ping tasks (internally generated by the MS, e.g. ping, VM sync), if the throttle limit is reached they are rejected. Since these are internally generated, they can be rejected without any issues.
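A hedged sketch of the two-pool split (pool sizes illustrative, names hypothetical): normal tasks queue when the pool is saturated; cron/ping tasks are rejected and will simply be generated again on the next interval.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    class DirectAgentPoolsSketch {
        int poolSize = 500; // from direct.agent.pool.size

        ThreadPoolExecutor normalPool = new ThreadPoolExecutor(
                poolSize, poolSize, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // queue past the throttle limit

        ThreadPoolExecutor cronPingPool = new ThreadPoolExecutor(
                poolSize, poolSize, 60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.AbortPolicy()); // reject when saturated
    }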
Added a new flag 'checkBeforeCleanup' to StopCommand; based on it, a check is done to see whether the VM is running on the HV host.
If the VM is running, it is not stopped and the operation bails out.
Also modified the MS code to call the StopCommand with the appropriate value for the flag based on the context.
Currently it is only set to 'true' when called from the new power-state-based vmsync logic; for the rest it
is set to 'false', meaning no change in behaviour.
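A hedged sketch of the flag's effect (surrounding command/agent types simplified):

    class StopCommandSketch {
        private final boolean checkBeforeCleanup;

        StopCommandSketch(boolean checkBeforeCleanup) {
            this.checkBeforeCleanup = checkBeforeCleanup;
        }

        // Returns false when the stop is abandoned because the VM is still live.
        boolean execute(boolean vmRunningOnHost) {
            if (checkBeforeCleanup && vmRunningOnHost) {
                return false; // vmsync must not stop a VM that is actually running
            }
            // ... proceed with the real stop ...
            return true;
        }
    }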
When the flag is updated on the resource, generate usage events again accordingly.
Also, when the display flag is false in the deployVM cmd, it should be false for the volumes associated with the VM as well.
While a template is downloading, template_store_ref has a leftover entry not in the Ready
state; when creating a VM from that template, the code checks neither the
zone ID nor the template_store_ref state.
Conflicts:
engine/orchestration/src/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java
CLOUDSTACK-4762: Enabling vGPU support for XenServer.
This feature enables GPU passthrough and vGPU functionality.
With its help, admins/users can leverage
the power of the GPU graphics unit by deploying a virtual machine with GPU or
vGPU support, or by changing the service offering of an existing VM
at any later point in time. These GPU/vGPU-enabled VMs are able to run
graphical applications.
For now, this feature is only supported with the XenServer hypervisor, but
it can be extended to add support for other hypervisors.
PrepareForMigrationCommand, so that the destination hypervisor can
mount the pool. This further exposed an issue for KVM where the ISO
was not getting cleaned up upon successful migration; fixed as well.
Introduces a force option in delete network to forcibly delete a
network. This comes in handy in rare cases where a network fails to implement
and is in the Shutdown state, but the network shutdown meant to roll back the
implement process fails as well.
Conflicts:
api/src/org/apache/cloudstack/api/command/user/network/DeleteNetworkCmd.java
server/src/com/cloud/user/DomainManagerImpl.java
This happens because concurrent operations are performed on the same VM. Earlier this was not seen, as all VM operations were synchronized at the agent layer. With the execute.in.sequence
global config set to false, this restriction is no longer there. In the latest code, operations on a single VM are synchronized by maintaining a job queue. In some scenarios the destroy VM operation
was not going through this job queue mechanism, resulting in failures due to simultaneous operations.
Adding the missing file
During HA and maintenance, call different planners (if the original planners are not able to find capacity) which skip some heuristics.
Changes:
- The VM profile owner passed in to the planner should be the VM's account, not the caller.
- Do not do the access check for Root Admin.
Conflicts:
server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java
Link-local IP addresses do not get released, resulting in all the link-local
IP addresses being consumed eventually.
The fix ensures NICs with reservation strategy 'Start' go through the
release phase of the NIC lifecycle, so that release is performed before
the NIC is removed, avoiding resource leaks.
1. Egress default policy rules are sent to the firewall provider; it is up to the
provider to configure the rules.
2. The default policy rules are sent for both the allow and the deny default policy.
3. On network shutdown, the rules are sent for deletion.
4. For VR and SRX, traffic is denied by default, so no default rule to deny traffic is required.
Changes:
- During VM migration, while finding a new host within the cluster, we need to set the storage pool ID on the deployment plan too.
- This indicates to the planner that the volumes are ready and there is no need to find new pools.
- This in turn prevents the threshold check done during pool allocation; this step is not needed since no pools have to be newly allocated.
- Thus the migration won't fail because the threshold check fails.
Changes:
- Added a 'virtualmachineid' parameter to the createVolume API to specify a VM for the volume. The VM should be in the 'Running' or 'Stopped' state.
- This parameter is used only when the createVolume API is called with the snapshotid parameter.
- When this parameter is set, the volume is created from the snapshot in the pod/cluster of the VM, and the volume is then attached to the VM in the same request.
- If attaching the volume fails but creation succeeded, the API errors out but the created volume remains available; the user may attach the same volume later.
- When a VM is provided but no storage pool is available in the VM's pod/cluster, the volume is not created and the API fails.
Volume create usage event and resource count weren't getting registered. Check the VM's type rather than assuming it is a UserVm, since the code is called from VirtualMachineManager.
The parameter is optional and true by default to preserve the original behavior
Conflicts:
api/src/org/apache/cloudstack/api/command/user/vpc/CreateVPCCmd.java
engine/orchestration/src/org/apache/cloudstack/engine/orchestration/NetworkOrchestrator.java