- Unit test to demonstrate denial of service attack
The NioConnection uses blocking handlers for various events such as connect,
accept, read and write. If a client connects to the NioServer (used by the
agent manager to service agents on port 8250) but fails to participate in the
SSL handshake or simply sits idle, it blocks the main IO/selector loop in
NioConnection. Such a client could be either malicious or aggressive.
This unit test demonstrates such a malicious client performing a
denial-of-service attack on the NioServer, preventing it from serving any
other client.
- Use non-blocking SSL handshake
- Uses non-blocking socket config in NioClient and NioServer/NioConnection
- Scalable connectivity from agents and peer clustered-management server
- Replaces the blocking SSL handshake code with non-blocking code
- Protects from denial-of-service issues that can degrade mgmt server responsiveness
due to an aggressive/malicious client
- Uses separate executor services for handling SSL handshakes (see the sketch below)
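A minimal sketch of the technique, not the actual NioConnection code: the
SSLEngine handshake is driven in small non-blocking steps and its CPU-heavy
delegated tasks are pushed to a dedicated executor, so a stalled client can
never block the selector thread. Class and buffer names are illustrative.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult.HandshakeStatus;

    public class NonBlockingHandshake {
        private final ExecutorService sslHandshakeExecutor = Executors.newFixedThreadPool(4);

        /** Advances the handshake by one step; returns true once it has finished. */
        public boolean step(SocketChannel channel, SSLEngine engine,
                            ByteBuffer netIn, ByteBuffer netOut, ByteBuffer appIn) throws IOException {
            HandshakeStatus status = engine.getHandshakeStatus();
            switch (status) {
            case NEED_TASK:
                // Offload delegated tasks so the IO/selector loop stays responsive
                // even if a client stalls mid-handshake.
                Runnable task;
                while ((task = engine.getDelegatedTask()) != null) {
                    sslHandshakeExecutor.submit(task);
                }
                return false;
            case NEED_UNWRAP:
                if (channel.read(netIn) < 0) {
                    throw new IOException("connection closed during SSL handshake");
                }
                netIn.flip();
                engine.unwrap(netIn, appIn);
                netIn.compact();
                return false;
            case NEED_WRAP:
                netOut.clear();
                engine.wrap(ByteBuffer.allocate(0), netOut);
                netOut.flip();
                // A real implementation would re-register for OP_WRITE if the
                // write is partial instead of assuming it completes here.
                channel.write(netOut);
                return false;
            default:
                return true; // NOT_HANDSHAKING: handshake is complete
            }
        }
    }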
Cherry-picked and backported from 9c7518698d
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Two things have been changed:
* We now look at power_state_update_time instead of update_time; it did not make sense to look at update_time at all.
* Due to a DB update optimisation, the power state is only written while the consecutive same-state update count is < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT. That is why we cannot rely on this information unless we make sure it is up to date.
During ping task, while scanning and updating status of all VMs on the host that are stuck in a transitional state
and are missing from the power report, do so only for VMs that are not removed.
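A hedged sketch of the scan described above; the field names are illustrative
stand-ins for the VM table columns, not the exact CloudStack identifiers.

    import java.util.Date;

    class TransitionalVmScan {
        static class VmRow {
            Long removed;                // non-null once the row is soft-deleted
            String state;                // e.g. "Starting", "Stopping", "Migrating"
            Date powerStateUpdateTime;   // last time a power report refreshed this VM
        }

        /**
         * A VM qualifies for the "stuck in transitional state" handling only if it
         * is not removed and its power report is genuinely stale. Staleness is
         * judged on power_state_update_time, not update_time, because only the
         * former tracks power reports (and it stops being refreshed once the same
         * state has been seen MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT times).
         */
        static boolean isStuck(VmRow vm, Date staleCutoff) {
            if (vm.removed != null) {
                return false;            // never act on removed VMs
            }
            boolean transitional = "Starting".equals(vm.state)
                    || "Stopping".equals(vm.state)
                    || "Migrating".equals(vm.state);
            return transitional && vm.powerStateUpdateTime.before(staleCutoff);
        }
    }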
(cherry picked from commit de7173a0ed)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixed on 4.4 and master but not on 4.5, cherry-picked on 4.5 using commit
fbafc957dc
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
engine/orchestration/src/com/cloud/agent/manager/DirectAgentAttache.java
pool, the source and destination pools cannot be local and cluster/zone-scoped
respectively (or vice versa). CloudStack detects this and throws an exception.
However, the end user only sees an unexpected exception and not the reason for
the failure. Fixed by making sure the reason for the failure is correctly
captured and shown to the end user.
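An illustrative sketch of surfacing the real reason to the caller; the
exception type and message are placeholders for what VolumeApiServiceImpl
actually throws.

    class PoolScopeCheck {
        static void validateScopes(boolean sourceIsLocal, boolean destIsLocal, String volumeUuid) {
            if (sourceIsLocal != destIsLocal) {
                // Capture the concrete reason so the API response carries it,
                // instead of a generic "unexpected exception".
                throw new IllegalArgumentException("Cannot migrate volume " + volumeUuid
                        + ": source and destination pools must have the same scope"
                        + " (local vs. cluster/zone-wide)");
            }
        }
    }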
(cherry picked from commit cffae8eef0)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
server/src/com/cloud/storage/VolumeApiServiceImpl.java
When migration fails, throw the exception instead of returning NULL.
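A minimal sketch with a hypothetical result type, showing the pattern of
failing loudly instead of returning NULL when the migration result reports an
error.

    class MigrationResultHandling {
        interface MigrationResult {
            boolean isSuccess();
            String getErrorMessage();
            String getMigratedVolumeUuid();
        }

        static String handle(MigrationResult result) {
            if (result == null || !result.isSuccess()) {
                String reason = (result == null) ? "no result returned" : result.getErrorMessage();
                // Previously a null was returned here and the failure surfaced
                // later in an obscure way; now the caller sees the actual error.
                throw new RuntimeException("Volume migration failed: " + reason);
            }
            return result.getMigratedVolumeUuid();
        }
    }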
(cherry picked from commit a5a65c7b55)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
If a VM has been cold migrated across different VMware DCs, then unregister the VM from the source host.
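A hedged sketch of that flow; the client interface and method names are
hypothetical, not the VMware SDK calls used in the actual change.

    class ColdMigrationCleanup {
        interface VCenterClient {
            void unregisterVm(String datacenter, String vmName);
        }

        static void afterColdMigration(VCenterClient vcenter, String vmName,
                                       String sourceDc, String targetDc) {
            if (!sourceDc.equals(targetDc)) {
                // The VM now lives in a different VMware datacenter, so the stale
                // registration on the source host must be removed.
                vcenter.unregisterVm(sourceDc, vmName);
            }
        }
    }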
(cherry picked from commit 15b348632d)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Before registering a VM, check whether a different CS VM with the same name exists in vCenter.
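A hedged sketch of the pre-registration check; the client interface is a
hypothetical stand-in for the vCenter query used in the actual change.

    class DuplicateNameCheck {
        interface VCenterClient {
            /** Returns the internal name recorded on the vCenter VM with this name, or null. */
            String findOwnerOfVmName(String vmName);
        }

        static void ensureNameIsFree(VCenterClient vcenter, String vmName, String myInternalName) {
            String owner = vcenter.findOwnerOfVmName(vmName);
            if (owner != null && !owner.equals(myInternalName)) {
                // A different CloudStack VM already uses this name in vCenter;
                // registering another one would clobber it.
                throw new IllegalStateException("A different VM named " + vmName
                        + " already exists in vCenter (owned by " + owner + ")");
            }
        }
    }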
(cherry picked from commit 33179cce56)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Separate global config to enable/disable Storage Migration during normal deployment
Introduced a configuration parameter named enable.storage.migration
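A hedged sketch of how such a flag might gate the deployment path; the
property lookup below is a stand-in for the CloudStack configuration
framework where enable.storage.migration is actually registered.

    class StorageMigrationGate {
        static boolean storageMigrationEnabled() {
            return Boolean.parseBoolean(System.getProperty("enable.storage.migration", "true"));
        }

        static void checkStorageMigrationAllowed(boolean planNeedsStorageMigration) {
            if (planNeedsStorageMigration && !storageMigrationEnabled()) {
                // Refuse the plan instead of silently migrating storage.
                throw new IllegalStateException(
                        "Storage migration is disabled via enable.storage.migration");
            }
        }
    }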
(cherry picked from commit c55bc0b2d1)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
During vmsync, if the StopCommand (issued as part of a PowerOff/PowerMissing report) fails to stop the VM (because the VM is still running on the hypervisor),
don't transition the VM state to "Stopped" in the CS db. Also added a check to throw ConcurrentOperationException if the VM state is not
"Running" after the start operation.
Changes:
- When there is HA, we try to redeploy the affected VM using the regular planners and, if that fails, we retry using the special planner for HA (which skips checking the disable threshold).
Now, because of the job framework, the InsufficientCapacityException gets masked and the special planners are not called. The job framework needs to be fixed to rethrow the correct exception.
- Also, the VM work job framework was not setting the DeploymentPlanner on the VmWorkJob, so the HA planner being passed by the HA manager was not getting used.
- Now the job framework sets the planner passed in by any caller of the VM start operation on the job (see the sketch below).
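A hedged sketch of the two fixes; the job and planner types are simplified
placeholders for VmWorkJob and DeploymentPlanner.

    class VmWorkJobPlannerFix {
        static class VmWorkJob {
            String plannerName;   // planner requested by the caller, e.g. the HA manager
            Exception failure;    // original exception raised inside the job
        }

        /** Fix 1: carry the caller-supplied planner on the job instead of dropping it. */
        static VmWorkJob submitStartJob(String plannerName) {
            VmWorkJob job = new VmWorkJob();
            job.plannerName = plannerName;    // e.g. the special HA planner
            return job;
        }

        /** Fix 2: rethrow the original exception so InsufficientCapacityException is not masked. */
        static void unwrapJobResult(VmWorkJob job) throws Exception {
            if (job.failure != null) {
                throw job.failure;            // lets the HA path retry with the HA planner
            }
        }
    }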
By default, the iptables rules are updated to add an ACCEPT rule for egress traffic.
If the network egress default policy is false, CS removes the ACCEPT rule and adds the DROP rule, which
is the default egress rule when there are no other egress rules.
If the CS network egress default policy is true, CS won't configure any default rule for egress, because
the router already comes up accepting egress traffic. If there are already egress rules for the network, then those
egress rules get applied on the VR.
For an isolated network without the firewall service, the VR by default allows egress traffic (guest network --> public network).
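A sketch of the rule selection described above; the chain name and rule
strings are illustrative, not the exact output of the VR scripts.

    import java.util.ArrayList;
    import java.util.List;

    class EgressDefaultRules {
        static List<String> rulesFor(boolean egressDefaultAllow, boolean hasUserEgressRules) {
            List<String> rules = new ArrayList<>();
            if (hasUserEgressRules) {
                return rules;  // user-defined egress rules are applied instead of a default
            }
            if (!egressDefaultAllow) {
                // Default policy is deny: drop the boot-time ACCEPT and install DROP.
                rules.add("iptables -D FW_EGRESS -j ACCEPT");
                rules.add("iptables -A FW_EGRESS -j DROP");
            }
            // Default policy allow: nothing to add, the router already ACCEPTs egress.
            return rules;
        }
    }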
root cause:
when vmsync reports a system VM as down, CCP doesn't release the VM's resources before starting it.
fix:
make sure cleanup is called for a VM when it is reported as Stopped
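A hedged sketch of that fix; the method names are placeholders for the actual
orchestration calls.

    class SystemVmRecovery {
        interface Orchestrator {
            void cleanup(String vmUuid);   // release host/network/storage resources
            void start(String vmUuid);     // normal start workflow
        }

        /** Invoked when vmsync reports a system VM as Stopped. */
        static void onReportedStopped(Orchestrator orchestrator, String vmUuid) {
            // Always release the VM's resources before starting it again, so the
            // start path does not collide with stale allocations.
            orchestrator.cleanup(vmUuid);
            orchestrator.start(vmUuid);
        }
    }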