mirror of https://github.com/apache/cloudstack.git

commit 94449f4e20
Merge branch 'main' into cks-enhancements-upstream
@@ -54,10 +54,11 @@ github:
    - gpordeus
    - hsato03
    - bernardodemarco
    - abh1sar
    - FelipeM525
    - lucas-a-martins
    - nicoschmdt
    - abh1sar
    - sudo87

  protected_branches: ~
@@ -4,9 +4,9 @@ Contributing to Apache CloudStack (ACS)
 Summary
 -------
 This document covers how to contribute to the ACS project. ACS uses GitHub PRs to manage code contributions.
-These instructions assume you have a GitHub.com account, so if you don't have one you will have to create one. Your proposed code changes will be published to your own fork of the ACS project and you will submit a Pull Request for your changes to be added.
+These instructions assume you have a GitHub.com account, so if you don't have one you will have to create one. Your proposed code changes will be published to your own fork of the ACS project, and you will submit a Pull Request for your changes to be added.

-_Lets get started!!!_
+_Let's get started!!!_

 Bug fixes
 ---------
@@ -26,7 +26,7 @@ No back porting / cherry-picking features to existing branches!

 PendingReleaseNotes file
 ------------------------
-When developing a new feature or making a (major) change to a existing feature you are encouraged to append this to the PendingReleaseNotes file so that the Release Manager can
+When developing a new feature or making a (major) change to an existing feature you are encouraged to append this to the PendingReleaseNotes file so that the Release Manager can
 use this file as a source of information when compiling the Release Notes for a new release.

 When adding information to the PendingReleaseNotes file make sure that you write a good and understandable description of the new feature or change which you have developed.
@@ -38,9 +38,9 @@ Fork the code

 In your browser, navigate to: [https://github.com/apache/cloudstack](https://github.com/apache/cloudstack)

-Fork the repository by clicking on the 'Fork' button on the top right hand side. The fork will happen and you will be taken to your own fork of the repository. Copy the Git repository URL by clicking on the clipboard next to the URL on the right hand side of the page under '**HTTPS** clone URL'. You will paste this URL when doing the following `git clone` command.
+Fork the repository by clicking on the 'Fork' button on the top right hand side. The fork will happen, and you will be taken to your own fork of the repository. Copy the Git repository URL by clicking on the clipboard next to the URL on the right hand side of the page under '**HTTPS** clone URL'. You will paste this URL when doing the following `git clone` command.

-On your computer, follow these steps to setup a local repository for working on ACS:
+On your computer, follow these steps to set up a local repository for working on ACS:

 ```bash
 $ git clone https://github.com/YOUR_ACCOUNT/cloudstack.git
@@ -92,9 +92,9 @@ $ git rebase main

 Make a GitHub Pull Request to contribute your changes
 -----------------------------------------------------

-When you are happy with your changes and you are ready to contribute them, you will create a Pull Request on GitHub to do so. This is done by pushing your local changes to your forked repository (default remote name is `origin`) and then initiating a pull request on GitHub.
+When you are happy with your changes, and you are ready to contribute them, you will create a Pull Request on GitHub to do so. This is done by pushing your local changes to your forked repository (default remote name is `origin`) and then initiating a pull request on GitHub.

-Please include JIRA id, detailed information about the bug/feature, what all tests are executed, how the reviewer can test this feature etc. Incase of UI PRs, a screenshot is preferred.
+Please include JIRA id, detailed information about the bug/feature, what all tests are executed, how the reviewer can test this feature etc. In case of UI PRs, a screenshot is preferred.

 > **IMPORTANT:** Make sure you have rebased your `feature_x` branch to include the latest code from `upstream/main` _before_ you do this.
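As a rough, illustrative sketch of that push-and-open-a-PR step (reusing the `feature_x` branch and the `origin`/`upstream` remote names from this guide; adjust them to your own setup):

```bash
# Rebase on the latest upstream/main, then publish the branch to your fork.
$ git fetch upstream
$ git rebase upstream/main
$ git push origin feature_x
# Finally, open a Pull Request on GitHub from YOUR_ACCOUNT/cloudstack:feature_x against apache/cloudstack:main.
```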

INSTALL.md

@@ -37,6 +37,7 @@ Setup up NodeJS (LTS):

 Start the MySQL service:

     $ service mysqld start
     $ mysql_secure_installation

 ### Using jenv and/or pyenv for Version Management
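A purely illustrative sketch of what that version pinning can look like with `jenv`/`pyenv` (the version placeholders are assumptions, not values taken from this document):

```bash
# Illustrative usage only: pin the JDK and Python versions for this checkout.
# Replace <jdk-version> / <python-version> with versions already installed via jenv/pyenv.
$ cd /path/to/cloudstack
$ jenv local <jdk-version>
$ pyenv local <python-version>
```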
@@ -86,13 +87,33 @@ Start the management server:

If this works, you've successfully set up a single server Apache CloudStack installation.

Open the following URL on your browser to access the Management Server UI:

    http://localhost:8080/client/

To access the Management Server UI, follow this procedure:

The default credentials are: user: admin, password: password, and the domain field should be left blank, which defaults to the ROOT domain.

## To bring up CloudStack UI

Move to the UI directory:

    $ cd /path/to/cloudstack/ui

To install dependencies:

    $ npm install

To build the project:

    $ npm run build

For development mode:

    $ npm start

Make sure to set CS_URL=http://localhost:8080/client in the .env.local file in the ui directory.

You should then be able to access the CloudStack UI at http://localhost:5050
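A small sketch of that configuration step, assuming your shell is in the `ui` directory (the `CS_URL` value and the port come from the lines above):

```bash
# Point the UI dev server at the local management server, then start it;
# the dev server is what you reach at http://localhost:5050.
$ echo "CS_URL=http://localhost:8080/client" > .env.local
$ npm start
```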

## Building with non-redistributable plugins

CloudStack supports several plugins that depend on libraries with distribution restrictions.

@@ -20,6 +20,19 @@ import os
import logging
import sys
import socket

# ---- This snippet of code adds the sources path and the waf configured PYTHONDIR to the Python path ----
# ---- We do this so cloud_utils can be looked up in the following order:
# ---- 1) Sources directory
# ---- 2) waf configured PYTHONDIR
# ---- 3) System Python path
for pythonpath in (
    "@PYTHONDIR@",
    os.path.join(os.path.dirname(__file__),os.path.pardir,os.path.pardir,"python","lib"),
):
    if os.path.isdir(pythonpath): sys.path.insert(0,pythonpath)
# ---- End snippet of code ----

from cloudutils.cloudException import CloudRuntimeException, CloudInternalException
from cloudutils.utilities import initLoging, bash
from cloudutils.configFileOps import configFileOps

@@ -20,6 +20,19 @@ import sys
import os
import subprocess
from threading import Timer

# ---- This snippet of code adds the sources path and the waf configured PYTHONDIR to the Python path ----
# ---- We do this so cloud_utils can be looked up in the following order:
# ---- 1) Sources directory
# ---- 2) waf configured PYTHONDIR
# ---- 3) System Python path
for pythonpath in (
    "@PYTHONDIR@",
    os.path.join(os.path.dirname(__file__),os.path.pardir,os.path.pardir,"python","lib"),
):
    if os.path.isdir(pythonpath): sys.path.insert(0,pythonpath)
# ---- End snippet of code ----

from xml.dom.minidom import parse
from cloudutils.configFileOps import configFileOps
from cloudutils.networkConfig import networkConfig

@ -342,7 +342,7 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
logger.info("Attempted to connect to the server, but received an unexpected exception, trying again...", e);
|
||||
}
|
||||
}
|
||||
shell.updateConnectedHost();
|
||||
shell.updateConnectedHost(((NioClient)connection).getHost());
|
||||
scavengeOldAgentObjects();
|
||||
}
|
||||
|
||||
|
|
@ -617,15 +617,11 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
}
|
||||
|
||||
protected void reconnect(final Link link) {
|
||||
reconnect(link, null, null, false);
|
||||
reconnect(link, null, false);
|
||||
}
|
||||
|
||||
protected void reconnect(final Link link, String preferredHost, List<String> avoidHostList, boolean forTransfer) {
|
||||
protected void reconnect(final Link link, String preferredMSHost, boolean forTransfer) {
|
||||
if (!(forTransfer || reconnectAllowed)) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (!reconnectAllowed) {
|
||||
logger.debug("Reconnect requested but it is not allowed {}", () -> getLinkLog(link));
|
||||
return;
|
||||
}
|
||||
|
|
@ -637,19 +633,26 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
serverResource.disconnected();
|
||||
logger.info("Lost connection to host: {}. Attempting reconnection while we still have {} commands in progress.", shell.getConnectedHost(), commandsInProgress.get());
|
||||
stopAndCleanupConnection(true);
|
||||
String host = preferredMSHost;
|
||||
if (org.apache.commons.lang3.StringUtils.isBlank(host)) {
|
||||
host = shell.getNextHost();
|
||||
}
|
||||
List<String> avoidMSHostList = shell.getAvoidHosts();
|
||||
do {
|
||||
final String host = shell.getNextHost();
|
||||
connection = new NioClient(getAgentName(), host, shell.getPort(), shell.getWorkers(), shell.getSslHandshakeTimeout(), this);
|
||||
logger.info("Reconnecting to host: {}", host);
|
||||
try {
|
||||
connection.start();
|
||||
} catch (final NioConnectionException e) {
|
||||
logger.info("Attempted to re-connect to the server, but received an unexpected exception, trying again...", e);
|
||||
stopAndCleanupConnection(false);
|
||||
if (CollectionUtils.isEmpty(avoidMSHostList) || !avoidMSHostList.contains(host)) {
|
||||
connection = new NioClient(getAgentName(), host, shell.getPort(), shell.getWorkers(), shell.getSslHandshakeTimeout(), this);
|
||||
logger.info("Reconnecting to host: {}", host);
|
||||
try {
|
||||
connection.start();
|
||||
} catch (final NioConnectionException e) {
|
||||
logger.info("Attempted to re-connect to the server, but received an unexpected exception, trying again...", e);
|
||||
stopAndCleanupConnection(false);
|
||||
}
|
||||
}
|
||||
shell.getBackoffAlgorithm().waitBeforeRetry();
|
||||
host = shell.getNextHost();
|
||||
} while (!connection.isStartup());
|
||||
shell.updateConnectedHost();
|
||||
shell.updateConnectedHost(((NioClient)connection).getHost());
|
||||
logger.info("Connected to the host: {}", shell.getConnectedHost());
|
||||
}
|
||||
|
||||
|
|
@ -922,7 +925,7 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
return new SetupCertificateAnswer(true);
|
||||
}
|
||||
|
||||
private void processManagementServerList(final List<String> msList, final String lbAlgorithm, final Long lbCheckInterval) {
|
||||
private void processManagementServerList(final List<String> msList, final List<String> avoidMsList, final String lbAlgorithm, final Long lbCheckInterval) {
|
||||
if (CollectionUtils.isNotEmpty(msList) && StringUtils.isNotEmpty(lbAlgorithm)) {
|
||||
try {
|
||||
final String newMSHosts = String.format("%s%s%s", com.cloud.utils.StringUtils.toCSVList(msList), IAgentShell.hostLbAlgorithmSeparator, lbAlgorithm);
|
||||
|
|
@ -934,6 +937,7 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
throw new CloudRuntimeException("Could not persist received management servers list", e);
|
||||
}
|
||||
}
|
||||
shell.setAvoidHosts(avoidMsList);
|
||||
if ("shuffle".equals(lbAlgorithm)) {
|
||||
scheduleHostLBCheckerTask(0);
|
||||
} else {
|
||||
|
|
@ -942,16 +946,18 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
}
|
||||
|
||||
private Answer setupManagementServerList(final SetupMSListCommand cmd) {
|
||||
processManagementServerList(cmd.getMsList(), cmd.getLbAlgorithm(), cmd.getLbCheckInterval());
|
||||
processManagementServerList(cmd.getMsList(), cmd.getAvoidMsList(), cmd.getLbAlgorithm(), cmd.getLbCheckInterval());
|
||||
return new SetupMSListAnswer(true);
|
||||
}
|
||||
|
||||
private Answer migrateAgentToOtherMS(final MigrateAgentConnectionCommand cmd) {
|
||||
try {
|
||||
if (CollectionUtils.isNotEmpty(cmd.getMsList())) {
|
||||
processManagementServerList(cmd.getMsList(), cmd.getLbAlgorithm(), cmd.getLbCheckInterval());
|
||||
processManagementServerList(cmd.getMsList(), cmd.getAvoidMsList(), cmd.getLbAlgorithm(), cmd.getLbCheckInterval());
|
||||
}
|
||||
migrateAgentConnection(cmd.getAvoidMsList());
|
||||
Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("MigrateAgentConnection-Job")).schedule(() -> {
|
||||
migrateAgentConnection(cmd.getAvoidMsList());
|
||||
}, 3, TimeUnit.SECONDS);
|
||||
} catch (Exception e) {
|
||||
String errMsg = "Migrate agent connection failed, due to " + e.getMessage();
|
||||
logger.debug(errMsg, e);
|
||||
|
|
@ -972,25 +978,26 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
throw new CloudRuntimeException("No other Management Server hosts to migrate");
|
||||
}
|
||||
|
||||
String preferredHost = null;
|
||||
String preferredMSHost = null;
|
||||
for (String msHost : msHostsList) {
|
||||
try (final Socket socket = new Socket()) {
|
||||
socket.connect(new InetSocketAddress(msHost, shell.getPort()), 5000);
|
||||
preferredHost = msHost;
|
||||
preferredMSHost = msHost;
|
||||
break;
|
||||
} catch (final IOException e) {
|
||||
throw new CloudRuntimeException("Management server host: " + msHost + " is not reachable, to migrate connection");
|
||||
}
|
||||
}
|
||||
|
||||
if (preferredHost == null) {
|
||||
if (preferredMSHost == null) {
|
||||
throw new CloudRuntimeException("Management server host(s) are not reachable, to migrate connection");
|
||||
}
|
||||
|
||||
logger.debug("Management server host " + preferredHost + " is found to be reachable, trying to reconnect");
|
||||
logger.debug("Management server host " + preferredMSHost + " is found to be reachable, trying to reconnect");
|
||||
shell.resetHostCounter();
|
||||
shell.setAvoidHosts(avoidMsList);
|
||||
shell.setConnectionTransfer(true);
|
||||
reconnect(link, preferredHost, avoidMsList, true);
|
||||
reconnect(link, preferredMSHost, true);
|
||||
}
|
||||
|
||||
public void processResponse(final Response response, final Link link) {
|
||||
|
|
@ -1003,14 +1010,21 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
for (final IAgentControlListener listener : controlListeners) {
|
||||
listener.processControlResponse(response, (AgentControlAnswer)answer);
|
||||
}
|
||||
} else if (answer instanceof PingAnswer && (((PingAnswer) answer).isSendStartup()) && reconnectAllowed) {
|
||||
logger.info("Management server requested startup command to reinitialize the agent");
|
||||
sendStartup(link);
|
||||
} else if (answer instanceof PingAnswer) {
|
||||
processPingAnswer((PingAnswer) answer);
|
||||
} else {
|
||||
updateLastPingResponseTime();
|
||||
}
|
||||
}
|
||||
|
||||
private void processPingAnswer(final PingAnswer answer) {
|
||||
if ((answer.isSendStartup()) && reconnectAllowed) {
|
||||
logger.info("Management server requested startup command to reinitialize the agent");
|
||||
sendStartup(link);
|
||||
}
|
||||
shell.setAvoidHosts(answer.getAvoidMsList());
|
||||
}
|
||||
|
||||
public void processReadyCommand(final Command cmd) {
|
||||
final ReadyCommand ready = (ReadyCommand)cmd;
|
||||
// Set human readable sizes;
|
||||
|
|
@ -1027,7 +1041,7 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
}
|
||||
|
||||
verifyAgentArch(ready.getArch());
|
||||
processManagementServerList(ready.getMsHostList(), ready.getLbAlgorithm(), ready.getLbCheckInterval());
|
||||
processManagementServerList(ready.getMsHostList(), ready.getAvoidMsHostList(), ready.getLbAlgorithm(), ready.getLbCheckInterval());
|
||||
|
||||
logger.info("Ready command is processed for agent [id: {}, uuid: {}, name: {}]", getId(), getUuid(), getName());
|
||||
}
|
||||
|
|
@ -1374,26 +1388,26 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
|
|||
if (msList == null || msList.length < 1) {
|
||||
return;
|
||||
}
|
||||
final String preferredHost = msList[0];
|
||||
final String preferredMSHost = msList[0];
|
||||
final String connectedHost = shell.getConnectedHost();
|
||||
logger.debug("Running preferred host checker task, connected host={}, preferred host={}",
|
||||
connectedHost, preferredHost);
|
||||
if (preferredHost == null || preferredHost.equals(connectedHost) || link == null) {
|
||||
connectedHost, preferredMSHost);
|
||||
if (preferredMSHost == null || preferredMSHost.equals(connectedHost) || link == null) {
|
||||
return;
|
||||
}
|
||||
boolean isHostUp = false;
|
||||
try (final Socket socket = new Socket()) {
|
||||
socket.connect(new InetSocketAddress(preferredHost, shell.getPort()), 5000);
|
||||
socket.connect(new InetSocketAddress(preferredMSHost, shell.getPort()), 5000);
|
||||
isHostUp = true;
|
||||
} catch (final IOException e) {
|
||||
logger.debug("Host: {} is not reachable", preferredHost);
|
||||
logger.debug("Host: {} is not reachable", preferredMSHost);
|
||||
}
|
||||
if (isHostUp && link != null && commandsInProgress.get() == 0) {
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("Preferred host {} is found to be reachable, trying to reconnect", preferredHost);
|
||||
logger.debug("Preferred host {} is found to be reachable, trying to reconnect", preferredMSHost);
|
||||
}
|
||||
shell.resetHostCounter();
|
||||
reconnect(link);
|
||||
reconnect(link, preferredMSHost, false);
|
||||
}
|
||||
} catch (Throwable t) {
|
||||
logger.error("Error caught while attempting to connect to preferred host", t);
|
||||
|
|
|
|||
|
|
@ -66,6 +66,7 @@ public class AgentShell implements IAgentShell, Daemon {
|
|||
private String _zone;
|
||||
private String _pod;
|
||||
private String _host;
|
||||
private List<String> _avoidHosts;
|
||||
private String _privateIp;
|
||||
private int _port;
|
||||
private int _proxyPort;
|
||||
|
|
@ -76,7 +77,6 @@ public class AgentShell implements IAgentShell, Daemon {
|
|||
private volatile boolean _exit = false;
|
||||
private int _pingRetries;
|
||||
private final List<Agent> _agents = new ArrayList<Agent>();
|
||||
private String hostToConnect;
|
||||
private String connectedHost;
|
||||
private Long preferredHostCheckInterval;
|
||||
private boolean connectionTransfer = false;
|
||||
|
|
@ -121,7 +121,7 @@ public class AgentShell implements IAgentShell, Daemon {
|
|||
if (_hostCounter >= hosts.length) {
|
||||
_hostCounter = 0;
|
||||
}
|
||||
hostToConnect = hosts[_hostCounter % hosts.length];
|
||||
String hostToConnect = hosts[_hostCounter % hosts.length];
|
||||
_hostCounter++;
|
||||
return hostToConnect;
|
||||
}
|
||||
|
|
@ -143,11 +143,10 @@ public class AgentShell implements IAgentShell, Daemon {
|
|||
}
|
||||
|
||||
@Override
|
||||
public void updateConnectedHost() {
|
||||
connectedHost = hostToConnect;
|
||||
public void updateConnectedHost(String connectedHost) {
|
||||
this.connectedHost = connectedHost;
|
||||
}
|
||||
|
||||
|
||||
@Override
|
||||
public void resetHostCounter() {
|
||||
_hostCounter = 0;
|
||||
|
|
@ -166,6 +165,16 @@ public class AgentShell implements IAgentShell, Daemon {
|
|||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void setAvoidHosts(List<String> avoidHosts) {
|
||||
_avoidHosts = avoidHosts;
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<String> getAvoidHosts() {
|
||||
return _avoidHosts;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getPrivateIp() {
|
||||
return _privateIp;
|
||||
|
|
|
|||
|
|
@ -16,6 +16,7 @@
|
|||
// under the License.
|
||||
package com.cloud.agent;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Properties;
|
||||
|
||||
|
|
@ -63,9 +64,13 @@ public interface IAgentShell {
|
|||
|
||||
String[] getHosts();
|
||||
|
||||
void setAvoidHosts(List<String> hosts);
|
||||
|
||||
List<String> getAvoidHosts();
|
||||
|
||||
long getLbCheckerInterval(Long receivedLbInterval);
|
||||
|
||||
void updateConnectedHost();
|
||||
void updateConnectedHost(String connectedHost);
|
||||
|
||||
String getConnectedHost();
|
||||
|
||||
|
|
|
|||
|
|
@ -816,7 +816,7 @@ public class AgentProperties{
|
|||
* Data type: Integer.<br>
|
||||
* Default value: <code>null</code>
|
||||
*/
|
||||
public static final Property<Integer> SSL_HANDSHAKE_TIMEOUT = new Property<>("ssl.handshake.timeout", null, Integer.class);
|
||||
public static final Property<Integer> SSL_HANDSHAKE_TIMEOUT = new Property<>("ssl.handshake.timeout", 30, Integer.class);
|
||||
|
||||
public static class Property <T>{
|
||||
private String name;
|
||||
|
|
|
|||
|
|
@ -358,7 +358,7 @@ public class AgentShellTest {
|
|||
AgentShell shell = new AgentShell();
|
||||
shell.setHosts("test");
|
||||
shell.getNextHost();
|
||||
shell.updateConnectedHost();
|
||||
shell.updateConnectedHost("test");
|
||||
|
||||
Assert.assertEquals(expected, shell.getConnectedHost());
|
||||
}
|
||||
|
|
|
|||
|
|
@ -741,6 +741,13 @@ public class EventTypes {
|
|||
//Purge resources
|
||||
public static final String EVENT_PURGE_EXPUNGED_RESOURCES = "PURGE.EXPUNGED.RESOURCES";
|
||||
|
||||
// Management Server
|
||||
public static final String EVENT_MS_MAINTENANCE_PREPARE = "MS.MAINTENANCE.PREPARE";
|
||||
public static final String EVENT_MS_MAINTENANCE_CANCEL = "MS.MAINTENANCE.CANCEL";
|
||||
public static final String EVENT_MS_SHUTDOWN_PREPARE = "MS.SHUTDOWN.PREPARE";
|
||||
public static final String EVENT_MS_SHUTDOWN_CANCEL = "MS.SHUTDOWN.CANCEL";
|
||||
public static final String EVENT_MS_SHUTDOWN = "MS.SHUTDOWN";
|
||||
|
||||
// OBJECT STORE
|
||||
public static final String EVENT_OBJECT_STORE_CREATE = "OBJECT.STORE.CREATE";
|
||||
public static final String EVENT_OBJECT_STORE_DELETE = "OBJECT.STORE.DELETE";
|
||||
|
|
@ -1235,6 +1242,12 @@ public class EventTypes {
|
|||
entityEventDetails.put(EVENT_UPDATE_IMAGE_STORE_ACCESS_STATE, ImageStore.class);
|
||||
entityEventDetails.put(EVENT_LIVE_PATCH_SYSTEMVM, "SystemVMs");
|
||||
|
||||
entityEventDetails.put(EVENT_MS_MAINTENANCE_PREPARE, "ManagementServer");
|
||||
entityEventDetails.put(EVENT_MS_MAINTENANCE_CANCEL, "ManagementServer");
|
||||
entityEventDetails.put(EVENT_MS_SHUTDOWN_PREPARE, "ManagementServer");
|
||||
entityEventDetails.put(EVENT_MS_SHUTDOWN_CANCEL, "ManagementServer");
|
||||
entityEventDetails.put(EVENT_MS_SHUTDOWN, "ManagementServer");
|
||||
|
||||
//Object Store
|
||||
entityEventDetails.put(EVENT_OBJECT_STORE_CREATE, ObjectStore.class);
|
||||
entityEventDetails.put(EVENT_OBJECT_STORE_UPDATE, ObjectStore.class);
|
||||
|
|
|
|||
|
|
@ -1171,6 +1171,7 @@ public class ApiConstants {
|
|||
public static final String PENDING_JOBS_COUNT = "pendingjobscount";
|
||||
public static final String AGENTS_COUNT = "agentscount";
|
||||
public static final String AGENTS = "agents";
|
||||
public static final String LAST_AGENTS = "lastagents";
|
||||
|
||||
public static final String PUBLIC_MTU = "publicmtu";
|
||||
public static final String PRIVATE_MTU = "privatemtu";
|
||||
|
|
|
|||
|
|
@ -30,6 +30,7 @@ public enum ApiErrorCode {
|
|||
UNSUPPORTED_ACTION_ERROR(432),
|
||||
API_LIMIT_EXCEED(429),
|
||||
|
||||
SERVICE_UNAVAILABLE(503),
|
||||
INTERNAL_ERROR(530),
|
||||
ACCOUNT_ERROR(531),
|
||||
ACCOUNT_RESOURCE_LIMIT_ERROR(532),
|
||||
|
|
|
|||
|
|
@ -41,6 +41,7 @@ import org.apache.cloudstack.api.response.ResourceIconResponse;
|
|||
import org.apache.cloudstack.api.response.SecurityGroupResponse;
|
||||
import org.apache.cloudstack.api.response.ServiceOfferingResponse;
|
||||
import org.apache.cloudstack.api.response.TemplateResponse;
|
||||
import org.apache.cloudstack.api.response.UserDataResponse;
|
||||
import org.apache.cloudstack.api.response.UserResponse;
|
||||
import org.apache.cloudstack.api.response.UserVmResponse;
|
||||
import org.apache.cloudstack.api.response.VpcResponse;
|
||||
|
|
@ -149,6 +150,9 @@ public class ListVMsCmd extends BaseListRetrieveOnlyResourceCountCmd implements
|
|||
@Parameter(name = ApiConstants.USER_DATA, type = CommandType.BOOLEAN, description = "Whether to return the VMs' user data or not. By default, user data will not be returned.", since = "4.18.0.0")
|
||||
private Boolean showUserData;
|
||||
|
||||
@Parameter(name = ApiConstants.USER_DATA_ID, type = CommandType.UUID, entityType = UserDataResponse.class, required = false, description = "the instances by userdata", since = "4.20.1")
|
||||
private Long userdataId;
|
||||
|
||||
/////////////////////////////////////////////////////
|
||||
/////////////////// Accessors ///////////////////////
|
||||
/////////////////////////////////////////////////////
|
||||
|
|
@ -243,6 +247,10 @@ public class ListVMsCmd extends BaseListRetrieveOnlyResourceCountCmd implements
|
|||
return CollectionUtils.isEmpty(viewDetails);
|
||||
}
|
||||
|
||||
public Long getUserdataId() {
|
||||
return userdataId;
|
||||
}
|
||||
|
||||
public EnumSet<VMDetails> getDetails() throws InvalidParameterValueException {
|
||||
if (isViewDetailsEmpty()) {
|
||||
if (_queryService.ReturnVmStatsOnVmList.value()) {
|
||||
|
|
|
|||
|
|
@ -82,6 +82,14 @@ public class ManagementServerResponse extends BaseResponse {
|
|||
@Param(description = "the Management Server Peers")
|
||||
private List<PeerManagementServerNodeResponse> peers;
|
||||
|
||||
@SerializedName(ApiConstants.LAST_AGENTS)
|
||||
@Param(description = "the last agents this Management Server is responsible for, before shutdown or preparing for maintenance", since = "4.21.0.0")
|
||||
private List<String> lastAgents;
|
||||
|
||||
@SerializedName(ApiConstants.AGENTS)
|
||||
@Param(description = "the agents this Management Server is responsible for", since = "4.21.0.0")
|
||||
private List<String> agents;
|
||||
|
||||
@SerializedName(ApiConstants.AGENTS_COUNT)
|
||||
@Param(description = "the number of host agents this Management Server is responsible for", since = "4.21.0.0")
|
||||
private Long agentsCount;
|
||||
|
|
@ -134,6 +142,14 @@ public class ManagementServerResponse extends BaseResponse {
|
|||
return ipAddress;
|
||||
}
|
||||
|
||||
public List<String> getLastAgents() {
|
||||
return lastAgents;
|
||||
}
|
||||
|
||||
public List<String> getAgents() {
|
||||
return agents;
|
||||
}
|
||||
|
||||
public Long getAgentsCount() {
|
||||
return this.agentsCount;
|
||||
}
|
||||
|
|
@ -190,6 +206,14 @@ public class ManagementServerResponse extends BaseResponse {
|
|||
this.ipAddress = ipAddress;
|
||||
}
|
||||
|
||||
public void setLastAgents(List<String> lastAgents) {
|
||||
this.lastAgents = lastAgents;
|
||||
}
|
||||
|
||||
public void setAgents(List<String> agents) {
|
||||
this.agents = agents;
|
||||
}
|
||||
|
||||
public void setAgentsCount(Long agentsCount) {
|
||||
this.agentsCount = agentsCount;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -28,12 +28,11 @@ import org.apache.cloudstack.api.BaseResponseWithAssociatedNetwork;
|
|||
import org.apache.cloudstack.api.EntityReference;
|
||||
|
||||
import com.cloud.network.Network;
|
||||
import com.cloud.projects.ProjectAccount;
|
||||
import com.cloud.serializer.Param;
|
||||
import com.google.gson.annotations.SerializedName;
|
||||
|
||||
@SuppressWarnings("unused")
|
||||
@EntityReference(value = {Network.class, ProjectAccount.class})
|
||||
@EntityReference(value = {Network.class})
|
||||
public class NetworkResponse extends BaseResponseWithAssociatedNetwork implements ControlledEntityResponse, SetResourceIconResponse {
|
||||
|
||||
@SerializedName(ApiConstants.ID)
|
||||
|
|
|
|||
|
|
@ -149,6 +149,10 @@ public class StoragePoolResponse extends BaseResponseWithAnnotations {
|
|||
@Param(description = "whether this pool is managed or not")
|
||||
private Boolean managed;
|
||||
|
||||
@SerializedName(ApiConstants.DETAILS)
|
||||
@Param(description = "the storage pool details")
|
||||
private Map<String, String> details;
|
||||
|
||||
public Map<String, String> getCaps() {
|
||||
return caps;
|
||||
}
|
||||
|
|
@ -407,4 +411,12 @@ public class StoragePoolResponse extends BaseResponseWithAnnotations {
|
|||
public void setManaged(Boolean managed) {
|
||||
this.managed = managed;
|
||||
}
|
||||
|
||||
public Map<String, String> getDetails() {
|
||||
return details;
|
||||
}
|
||||
|
||||
public void setDetails(Map<String, String> details) {
|
||||
this.details = details;
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -16,13 +16,27 @@
|
|||
# specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import os
|
||||
import sys
|
||||
# ---- This snippet of code adds the sources path and the waf configured PYTHONDIR to the Python path ----
|
||||
# ---- We do this so cloud_utils can be looked up in the following order:
|
||||
# ---- 1) Sources directory
|
||||
# ---- 2) waf configured PYTHONDIR
|
||||
# ---- 3) System Python path
|
||||
for pythonpath in (
|
||||
"@PYTHONDIR@",
|
||||
os.path.join(os.path.dirname(__file__),os.path.pardir,os.path.pardir,"python","lib"),
|
||||
):
|
||||
if os.path.isdir(pythonpath): sys.path.insert(0,pythonpath)
|
||||
# ---- End snippet of code ----
|
||||
|
||||
from cloudutils.syscfg import sysConfigFactory
|
||||
from cloudutils.utilities import initLoging, UnknownSystemException
|
||||
from cloudutils.cloudException import CloudRuntimeException, CloudInternalException
|
||||
from cloudutils.globalEnv import globalEnv
|
||||
from cloudutils.serviceConfigServer import cloudManagementConfig
|
||||
from optparse import OptionParser
|
||||
|
||||
if __name__ == '__main__':
|
||||
initLoging("@MSLOGDIR@/setupManagement.log")
|
||||
glbEnv = globalEnv()
|
||||
|
|
|
|||
|
|
@ -19,18 +19,22 @@
|
|||
|
||||
package com.cloud.agent.api;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
public class PingAnswer extends Answer {
|
||||
private PingCommand _command = null;
|
||||
|
||||
private boolean sendStartup = false;
|
||||
private List<String> avoidMsList;
|
||||
|
||||
protected PingAnswer() {
|
||||
}
|
||||
|
||||
public PingAnswer(PingCommand cmd, boolean sendStartup) {
|
||||
public PingAnswer(PingCommand cmd, List<String> avoidMsList, boolean sendStartup) {
|
||||
super(cmd);
|
||||
_command = cmd;
|
||||
this.sendStartup = sendStartup;
|
||||
this.avoidMsList = avoidMsList;
|
||||
}
|
||||
|
||||
public PingCommand getCommand() {
|
||||
|
|
@ -44,4 +48,8 @@ public class PingAnswer extends Answer {
|
|||
public void setSendStartup(boolean sendStartup) {
|
||||
this.sendStartup = sendStartup;
|
||||
}
|
||||
|
||||
public List<String> getAvoidMsList() {
|
||||
return avoidMsList;
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -35,6 +35,7 @@ public class ReadyCommand extends Command {
|
|||
private String hostUuid;
|
||||
private String hostName;
|
||||
private List<String> msHostList;
|
||||
private List<String> avoidMsHostList;
|
||||
private String lbAlgorithm;
|
||||
private Long lbCheckInterval;
|
||||
private Boolean enableHumanReadableSizes;
|
||||
|
|
@ -90,6 +91,14 @@ public class ReadyCommand extends Command {
|
|||
this.msHostList = msHostList;
|
||||
}
|
||||
|
||||
public List<String> getAvoidMsHostList() {
|
||||
return avoidMsHostList;
|
||||
}
|
||||
|
||||
public void setAvoidMsHostList(List<String> avoidMsHostList) {
|
||||
this.avoidMsHostList = avoidMsHostList;
|
||||
}
|
||||
|
||||
public String getLbAlgorithm() {
|
||||
return lbAlgorithm;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -26,12 +26,14 @@ import com.cloud.agent.api.Command;
|
|||
public class SetupMSListCommand extends Command {
|
||||
|
||||
private List<String> msList;
|
||||
private List<String> avoidMsList;
|
||||
private String lbAlgorithm;
|
||||
private Long lbCheckInterval;
|
||||
|
||||
public SetupMSListCommand(final List<String> msList, final String lbAlgorithm, final Long lbCheckInterval) {
|
||||
public SetupMSListCommand(final List<String> msList, final List<String> avoidMsList, final String lbAlgorithm, final Long lbCheckInterval) {
|
||||
super();
|
||||
this.msList = msList;
|
||||
this.avoidMsList = avoidMsList;
|
||||
this.lbAlgorithm = lbAlgorithm;
|
||||
this.lbCheckInterval = lbCheckInterval;
|
||||
}
|
||||
|
|
@ -40,6 +42,10 @@ public class SetupMSListCommand extends Command {
|
|||
return msList;
|
||||
}
|
||||
|
||||
public List<String> getAvoidMsList() {
|
||||
return avoidMsList;
|
||||
}
|
||||
|
||||
public String getLbAlgorithm() {
|
||||
return lbAlgorithm;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -16,7 +16,6 @@
|
|||
// under the License.
|
||||
package com.cloud.agent;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
import org.apache.cloudstack.framework.config.ConfigKey;
|
||||
|
|
@ -173,8 +172,4 @@ public interface AgentManager {
|
|||
void propagateChangeToAgents(Map<String, String> params);
|
||||
|
||||
boolean transferDirectAgentsFromMS(String fromMsUuid, long fromMsId, long timeoutDurationInMs);
|
||||
|
||||
List<String> getLastAgents();
|
||||
|
||||
void setLastAgents(List<String> lastAgents);
|
||||
}
|
||||
|
|
|
|||
|
|
@ -214,13 +214,13 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
|
||||
protected final ConfigKey<Integer> Workers = new ConfigKey<>("Advanced", Integer.class, "workers", "5",
|
||||
"Number of worker threads handling remote agent connections.", false);
|
||||
protected final ConfigKey<Integer> Port = new ConfigKey<>("Advanced", Integer.class, "port", "8250", "Port to listen on for remote agent connections.", false);
|
||||
protected final ConfigKey<Integer> Port = new ConfigKey<>("Advanced", Integer.class, "port", "8250", "Port to listen on for remote (indirect) agent connections.", false);
|
||||
protected final ConfigKey<Integer> RemoteAgentSslHandshakeTimeout = new ConfigKey<>("Advanced",
|
||||
Integer.class, "agent.ssl.handshake.timeout", "30",
|
||||
"Seconds after which SSL handshake times out during remote agent connections.", false);
|
||||
"Seconds after which SSL handshake times out during remote (indirect) agent connections.", false);
|
||||
protected final ConfigKey<Integer> RemoteAgentMaxConcurrentNewConnections = new ConfigKey<>("Advanced",
|
||||
Integer.class, "agent.max.concurrent.new.connections", "0",
|
||||
"Number of maximum concurrent new connections server allows for remote agents. " +
|
||||
"Number of maximum concurrent new connections server allows for remote (indirect) agents. " +
|
||||
"If set to zero (default value) then no limit will be enforced on concurrent new connections",
|
||||
false);
|
||||
protected final ConfigKey<Integer> AlertWait = new ConfigKey<>("Advanced", Integer.class, "alert.wait", "1800",
|
||||
|
|
@ -255,9 +255,7 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
|
||||
_executor = new ThreadPoolExecutor(agentTaskThreads, agentTaskThreads, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), new NamedThreadFactory("AgentTaskPool"));
|
||||
|
||||
_connectExecutor = new ThreadPoolExecutor(100, 500, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), new NamedThreadFactory("AgentConnectTaskPool"));
|
||||
// allow core threads to time out even when there are no items in the queue
|
||||
_connectExecutor.allowCoreThreadTimeOut(true);
|
||||
initConnectExecutor();
|
||||
|
||||
maxConcurrentNewAgentConnections = RemoteAgentMaxConcurrentNewConnections.value();
|
||||
|
||||
|
|
@ -273,10 +271,6 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
logger.debug("Created DirectAgentAttache pool with size: {}.", directAgentPoolSize);
|
||||
_directAgentThreadCap = Math.round(directAgentPoolSize * DirectAgentThreadCap.value()) + 1; // add 1 to always make the value > 0
|
||||
|
||||
_monitorExecutor = new ScheduledThreadPoolExecutor(1, new NamedThreadFactory("AgentMonitor"));
|
||||
|
||||
newAgentConnectionsMonitor = Executors.newScheduledThreadPool(1, new NamedThreadFactory("NewAgentConnectionsMonitor"));
|
||||
|
||||
initializeCommandTimeouts();
|
||||
|
||||
return true;
|
||||
|
|
@ -351,10 +345,27 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
_hostMonitors.remove(id);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onManagementServerPreparingForMaintenance() {
|
||||
logger.debug("Management server preparing for maintenance");
|
||||
if (_connection != null) {
|
||||
_connection.block();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onManagementServerCancelPreparingForMaintenance() {
|
||||
logger.debug("Management server cancel preparing for maintenance");
|
||||
if (_connection != null) {
|
||||
_connection.unblock();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onManagementServerMaintenance() {
|
||||
logger.debug("Management server maintenance enabled");
|
||||
_monitorExecutor.shutdownNow();
|
||||
newAgentConnectionsMonitor.shutdownNow();
|
||||
if (_connection != null) {
|
||||
_connection.stop();
|
||||
|
||||
|
|
@ -371,10 +382,8 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
public void onManagementServerCancelMaintenance() {
|
||||
logger.debug("Management server maintenance disabled");
|
||||
if (_connectExecutor.isShutdown()) {
|
||||
_connectExecutor = new ThreadPoolExecutor(100, 500, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), new NamedThreadFactory("AgentConnectTaskPool"));
|
||||
_connectExecutor.allowCoreThreadTimeOut(true);
|
||||
initConnectExecutor();
|
||||
}
|
||||
|
||||
startDirectlyConnectedHosts(true);
|
||||
if (_connection != null) {
|
||||
try {
|
||||
|
|
@ -385,9 +394,28 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
}
|
||||
|
||||
if (_monitorExecutor.isShutdown()) {
|
||||
_monitorExecutor = new ScheduledThreadPoolExecutor(1, new NamedThreadFactory("AgentMonitor"));
|
||||
_monitorExecutor.scheduleWithFixedDelay(new MonitorTask(), mgmtServiceConf.getPingInterval(), mgmtServiceConf.getPingInterval(), TimeUnit.SECONDS);
|
||||
initAndScheduleMonitorExecutor();
|
||||
}
|
||||
if (newAgentConnectionsMonitor.isShutdown()) {
|
||||
initAndScheduleAgentConnectionsMonitor();
|
||||
}
|
||||
}
|
||||
|
||||
private void initConnectExecutor() {
|
||||
_connectExecutor = new ThreadPoolExecutor(100, 500, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), new NamedThreadFactory("AgentConnectTaskPool"));
|
||||
// allow core threads to time out even when there are no items in the queue
|
||||
_connectExecutor.allowCoreThreadTimeOut(true);
|
||||
}
|
||||
|
||||
private void initAndScheduleMonitorExecutor() {
|
||||
_monitorExecutor = new ScheduledThreadPoolExecutor(1, new NamedThreadFactory("AgentMonitor"));
|
||||
_monitorExecutor.scheduleWithFixedDelay(new MonitorTask(), mgmtServiceConf.getPingInterval(), mgmtServiceConf.getPingInterval(), TimeUnit.SECONDS);
|
||||
}
|
||||
|
||||
private void initAndScheduleAgentConnectionsMonitor() {
|
||||
final int cleanupTimeInSecs = Wait.value();
|
||||
newAgentConnectionsMonitor = Executors.newScheduledThreadPool(1, new NamedThreadFactory("NewAgentConnectionsMonitor"));
|
||||
newAgentConnectionsMonitor.scheduleAtFixedRate(new AgentNewConnectionsMonitorTask(), cleanupTimeInSecs, cleanupTimeInSecs, TimeUnit.SECONDS);
|
||||
}
|
||||
|
||||
private AgentControlAnswer handleControlCommand(final AgentAttache attache, final AgentControlCommand cmd) {
|
||||
|
|
@ -426,16 +454,6 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
return attache;
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<String> getLastAgents() {
|
||||
return lastAgents;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void setLastAgents(List<String> lastAgents) {
|
||||
this.lastAgents = lastAgents;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer sendTo(final Long dcId, final HypervisorType type, final Command cmd) {
|
||||
final List<ClusterVO> clusters = _clusterDao.listByDcHyType(dcId, type.toString());
|
||||
|
|
@ -779,6 +797,7 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
ManagementServerHostVO msHost = _mshostDao.findByMsid(_nodeId);
|
||||
if (msHost != null && (ManagementServerHost.State.Maintenance.equals(msHost.getState()) || ManagementServerHost.State.PreparingForMaintenance.equals(msHost.getState()))) {
|
||||
_monitorExecutor.shutdownNow();
|
||||
newAgentConnectionsMonitor.shutdownNow();
|
||||
return true;
|
||||
}
|
||||
|
||||
|
|
@ -792,12 +811,8 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
}
|
||||
}
|
||||
|
||||
_monitorExecutor.scheduleWithFixedDelay(new MonitorTask(), mgmtServiceConf.getPingInterval(), mgmtServiceConf.getPingInterval(), TimeUnit.SECONDS);
|
||||
|
||||
final int cleanupTime = Wait.value();
|
||||
newAgentConnectionsMonitor.scheduleAtFixedRate(new AgentNewConnectionsMonitorTask(), cleanupTime,
|
||||
cleanupTime, TimeUnit.MINUTES);
|
||||
|
||||
initAndScheduleMonitorExecutor();
|
||||
initAndScheduleAgentConnectionsMonitor();
|
||||
return true;
|
||||
}
|
||||
|
||||
|
|
@ -1304,6 +1319,8 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
if (!indirectAgentLB.compareManagementServerList(host.getId(), host.getDataCenterId(), agentMSHostList, lbAlgorithm)) {
|
||||
final List<String> newMSList = indirectAgentLB.getManagementServerList(host.getId(), host.getDataCenterId(), null);
|
||||
ready.setMsHostList(newMSList);
|
||||
final List<String> avoidMsList = _mshostDao.listNonUpStateMsIPs();
|
||||
ready.setAvoidMsHostList(avoidMsList);
|
||||
ready.setLbAlgorithm(indirectAgentLB.getLBAlgorithmName());
|
||||
ready.setLbCheckInterval(indirectAgentLB.getLBPreferredHostCheckInterval(host.getClusterId()));
|
||||
logger.debug("Agent's management server host list is not up to date, sending list update: {}", newMSList);
|
||||
|
|
@ -1608,7 +1625,8 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
if (host!= null && host.getStatus() != Status.Up && gatewayAccessible) {
|
||||
requestStartupCommand = true;
|
||||
}
|
||||
answer = new PingAnswer((PingCommand)cmd, requestStartupCommand);
|
||||
final List<String> avoidMsList = _mshostDao.listNonUpStateMsIPs();
|
||||
answer = new PingAnswer((PingCommand)cmd, avoidMsList, requestStartupCommand);
|
||||
} else if (cmd instanceof ReadyAnswer) {
|
||||
final HostVO host = _hostDao.findById(attache.getId());
|
||||
if (host == null) {
|
||||
|
|
@ -1929,25 +1947,19 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
|
|||
logger.trace("Agent New Connections Monitor is started.");
|
||||
final int cleanupTime = Wait.value();
|
||||
Set<Map.Entry<String, Long>> entrySet = newAgentConnections.entrySet();
|
||||
long cutOff = System.currentTimeMillis() - (cleanupTime * 60 * 1000L);
|
||||
if (logger.isDebugEnabled()) {
|
||||
List<String> expiredConnections = newAgentConnections.entrySet()
|
||||
.stream()
|
||||
.filter(e -> e.getValue() <= cutOff)
|
||||
.map(Map.Entry::getKey)
|
||||
.collect(Collectors.toList());
|
||||
logger.debug("Currently {} active new connections, of which {} have expired - {}",
|
||||
entrySet.size(),
|
||||
expiredConnections.size(),
|
||||
StringUtils.join(expiredConnections));
|
||||
}
|
||||
for (Map.Entry<String, Long> entry : entrySet) {
|
||||
if (entry.getValue() <= cutOff) {
|
||||
if (logger.isTraceEnabled()) {
|
||||
logger.trace("Cleaning up new agent connection for {}", entry.getKey());
|
||||
}
|
||||
newAgentConnections.remove(entry.getKey());
|
||||
}
|
||||
long cutOff = System.currentTimeMillis() - (cleanupTime * 1000L);
|
||||
List<String> expiredConnections = newAgentConnections.entrySet()
|
||||
.stream()
|
||||
.filter(e -> e.getValue() <= cutOff)
|
||||
.map(Map.Entry::getKey)
|
||||
.collect(Collectors.toList());
|
||||
logger.debug("Currently {} active new connections, of which {} have expired - {}",
|
||||
entrySet.size(),
|
||||
expiredConnections.size(),
|
||||
StringUtils.join(expiredConnections));
|
||||
for (String connection : expiredConnections) {
|
||||
logger.trace("Cleaning up new agent connection for {}", connection);
|
||||
newAgentConnections.remove(connection);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -151,11 +151,11 @@ public class ClusteredAgentManagerImpl extends AgentManagerImpl implements Clust
|
|||
super();
|
||||
}
|
||||
|
||||
protected final ConfigKey<Boolean> EnableLB = new ConfigKey<>(Boolean.class, "agent.lb.enabled", "Advanced", "false", "Enable agent load balancing between management server nodes", true);
|
||||
protected final ConfigKey<Boolean> EnableLB = new ConfigKey<>(Boolean.class, "agent.lb.enabled", "Advanced", "false", "Enable direct agents load balancing between management server nodes", true);
|
||||
protected final ConfigKey<Double> ConnectedAgentThreshold = new ConfigKey<>(Double.class, "agent.load.threshold", "Advanced", "0.7",
|
||||
"What percentage of the agents can be held by one management server before load balancing happens", true, EnableLB.key());
|
||||
protected final ConfigKey<Integer> LoadSize = new ConfigKey<>(Integer.class, "direct.agent.load.size", "Advanced", "16", "How many agents to connect to in each round", true);
|
||||
protected final ConfigKey<Integer> ScanInterval = new ConfigKey<>(Integer.class, "direct.agent.scan.interval", "Advanced", "90", "Interval between scans to load agents", false,
|
||||
"What percentage of the direct agents can be held by one management server before load balancing happens", true, EnableLB.key());
|
||||
protected final ConfigKey<Integer> LoadSize = new ConfigKey<>(Integer.class, "direct.agent.load.size", "Advanced", "16", "How many direct agents to connect to in each round", true);
|
||||
protected final ConfigKey<Integer> ScanInterval = new ConfigKey<>(Integer.class, "direct.agent.scan.interval", "Advanced", "90", "Interval between scans to load direct agents", false,
|
||||
ConfigKey.Scope.Global, 1000);
|
||||
|
||||
@Override
|
||||
|
|
@ -1395,7 +1395,7 @@ public class ClusteredAgentManagerImpl extends AgentManagerImpl implements Clust
|
|||
return false;
|
||||
}
|
||||
|
||||
long transferStartTime = System.currentTimeMillis();
|
||||
long transferStartTimeInMs = System.currentTimeMillis();
|
||||
if (CollectionUtils.isEmpty(getDirectAgentHosts(fromMsId))) {
|
||||
logger.info("No direct agent hosts available on management server node {} (id: {}), to transfer", fromMsId, fromMsUuid);
|
||||
return true;
|
||||
|
|
@ -1417,7 +1417,7 @@ public class ClusteredAgentManagerImpl extends AgentManagerImpl implements Clust
|
|||
}
|
||||
logger.debug("Transferring {} direct agents from management server node {} (id: {}) of zone {}", directAgentHostsInDc.size(), fromMsId, fromMsUuid, dc);
|
||||
for (HostVO host : directAgentHostsInDc) {
|
||||
long transferElapsedTimeInMs = System.currentTimeMillis() - transferStartTime;
|
||||
long transferElapsedTimeInMs = System.currentTimeMillis() - transferStartTimeInMs;
|
||||
if (transferElapsedTimeInMs >= timeoutDurationInMs) {
|
||||
logger.debug("Stop transferring remaining direct agents from management server node {} (id: {}), timed out", fromMsId, fromMsUuid);
|
||||
return false;
|
||||
|
|
@ -1486,6 +1486,18 @@ public class ClusteredAgentManagerImpl extends AgentManagerImpl implements Clust
|
|||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onManagementServerPreparingForMaintenance() {
|
||||
logger.debug("Management server preparing for maintenance");
|
||||
super.onManagementServerPreparingForMaintenance();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onManagementServerCancelPreparingForMaintenance() {
|
||||
logger.debug("Management server cancel preparing for maintenance");
|
||||
super.onManagementServerCancelPreparingForMaintenance();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onManagementServerMaintenance() {
|
||||
logger.debug("Management server maintenance enabled");
|
||||
|
|
|
|||
|
|
@ -21,7 +21,7 @@ import org.apache.cloudstack.framework.config.Configurable;
|
|||
|
||||
public interface ManagementServiceConfiguration extends Configurable {
|
||||
ConfigKey<Integer> PingInterval = new ConfigKey<Integer>("Advanced", Integer.class, "ping.interval", "60",
|
||||
"Interval to send application level pings to make sure the connection is still working", false);
|
||||
"Interval in seconds to send application level pings to make sure the connection is still working", false);
|
||||
ConfigKey<Float> PingTimeout = new ConfigKey<Float>("Advanced", Float.class, "ping.timeout", "2.5",
|
||||
"Multiplier to ping.interval before announcing an agent has timed out", true);
|
||||
public int getPingInterval();
|
||||
|
|
|
|||
|
|
@ -183,6 +183,13 @@ public interface HostDao extends GenericDao<HostVO, Long>, StateDao<Status, Stat
|
|||
*/
|
||||
List<String> listByMs(long msId);
|
||||
|
||||
/**
|
||||
* Retrieves the last host ids/agents this {@see ManagementServer} has responsibility over.
|
||||
* @param msId the id of the {@see ManagementServer}
|
||||
* @return the last host ids/agents this {@see ManagementServer} has responsibility over
|
||||
*/
|
||||
List<String> listByLastMs(long msId);
|
||||
|
||||
/**
|
||||
* Retrieves the hypervisor versions of the hosts in the datacenter which are in Up state in ascending order
|
||||
* @param datacenterId data center id
|
||||
|
|
@ -200,7 +207,7 @@ public interface HostDao extends GenericDao<HostVO, Long>, StateDao<Status, Stat
|
|||
boolean isHostUp(long hostId);
|
||||
|
||||
List<Long> findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(final Long zoneId, final Long clusterId,
|
||||
final List<ResourceState> resourceStates, final List<Type> types,
|
||||
final Long msId, final List<ResourceState> resourceStates, final List<Type> types,
|
||||
final List<Hypervisor.HypervisorType> hypervisorTypes);
|
||||
|
||||
List<HypervisorType> listDistinctHypervisorTypes(final Long zoneId);
|
||||
|
|
|
|||
|
|
@ -129,6 +129,7 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
|
|||
protected SearchBuilder<HostVO> ResponsibleMsSearch;
|
||||
protected SearchBuilder<HostVO> ResponsibleMsDcSearch;
|
||||
protected GenericSearchBuilder<HostVO, String> ResponsibleMsIdSearch;
|
||||
protected GenericSearchBuilder<HostVO, String> LastMsIdSearch;
|
||||
protected SearchBuilder<HostVO> HostTypeClusterCountSearch;
|
||||
protected SearchBuilder<HostVO> HostTypeZoneCountSearch;
|
||||
protected SearchBuilder<HostVO> ClusterStatusSearch;
|
||||
|
|
@ -209,6 +210,11 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
|
|||
ResponsibleMsIdSearch.and("managementServerId", ResponsibleMsIdSearch.entity().getManagementServerId(), SearchCriteria.Op.EQ);
|
||||
ResponsibleMsIdSearch.done();
|
||||
|
||||
LastMsIdSearch = createSearchBuilder(String.class);
|
||||
LastMsIdSearch.selectFields(LastMsIdSearch.entity().getUuid());
|
||||
LastMsIdSearch.and("lastManagementServerId", LastMsIdSearch.entity().getLastManagementServerId(), SearchCriteria.Op.EQ);
|
||||
LastMsIdSearch.done();
|
||||
|
||||
HostTypeClusterCountSearch = createSearchBuilder();
|
||||
HostTypeClusterCountSearch.and("cluster", HostTypeClusterCountSearch.entity().getClusterId(), SearchCriteria.Op.EQ);
|
||||
HostTypeClusterCountSearch.and("type", HostTypeClusterCountSearch.entity().getType(), SearchCriteria.Op.EQ);
|
||||
|
|
@ -1569,6 +1575,13 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
|
|||
return customSearch(sc, null);
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<String> listByLastMs(long msId) {
|
||||
SearchCriteria<String> sc = LastMsIdSearch.create();
|
||||
sc.addAnd("lastManagementServerId", SearchCriteria.Op.EQ, msId);
|
||||
return customSearch(sc, null);
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<String> listOrderedHostsHypervisorVersionsInDatacenter(long datacenterId, HypervisorType hypervisorType) {
|
||||
PreparedStatement pstmt;
|
||||
|
|
@ -1745,13 +1758,15 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
|
|||
}
|
||||
|
||||
@Override
|
||||
public List<Long> findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(final Long zoneId, final Long clusterId,
|
||||
public List<Long> findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(final Long zoneId,
|
||||
final Long clusterId, final Long managementServerId,
|
||||
final List<ResourceState> resourceStates, final List<Type> types,
|
||||
final List<Hypervisor.HypervisorType> hypervisorTypes) {
|
||||
GenericSearchBuilder<HostVO, Long> sb = createSearchBuilder(Long.class);
|
||||
sb.selectFields(sb.entity().getId());
|
||||
sb.and("zoneId", sb.entity().getDataCenterId(), SearchCriteria.Op.EQ);
|
||||
sb.and("clusterId", sb.entity().getClusterId(), SearchCriteria.Op.EQ);
|
||||
sb.and("msId", sb.entity().getManagementServerId(), SearchCriteria.Op.EQ);
|
||||
sb.and("resourceState", sb.entity().getResourceState(), SearchCriteria.Op.IN);
|
||||
sb.and("type", sb.entity().getType(), SearchCriteria.Op.IN);
|
||||
if (CollectionUtils.isNotEmpty(hypervisorTypes)) {
|
||||
|
|
@ -1767,6 +1782,9 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
|
|||
if (clusterId != null) {
|
||||
sc.setParameters("clusterId", clusterId);
|
||||
}
|
||||
if (managementServerId != null) {
|
||||
sc.setParameters("msId", managementServerId);
|
||||
}
|
||||
if (CollectionUtils.isNotEmpty(hypervisorTypes)) {
|
||||
sc.setParameters("hypervisorTypes", hypervisorTypes.toArray());
|
||||
}
|
||||
|
|
|
|||
|
|
@ -104,6 +104,7 @@ public class HostDaoImplTest {
|
|||
public void testFindHostIdsByZoneClusterResourceStateTypeAndHypervisorType() {
|
||||
Long zoneId = 1L;
|
||||
Long clusterId = 2L;
|
||||
Long msId = 1L;
|
||||
List<ResourceState> resourceStates = List.of(ResourceState.Enabled);
|
||||
List<Host.Type> types = List.of(Host.Type.Routing);
|
||||
List<Hypervisor.HypervisorType> hypervisorTypes = List.of(Hypervisor.HypervisorType.KVM);
|
||||
|
|
@ -117,10 +118,11 @@ public class HostDaoImplTest {
|
|||
Mockito.doReturn(sb).when(hostDao).createSearchBuilder(Long.class);
|
||||
Mockito.doReturn(mockResults).when(hostDao).customSearch(Mockito.any(SearchCriteria.class), Mockito.any());
|
||||
List<Long> hostIds = hostDao.findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(
|
||||
zoneId, clusterId, resourceStates, types, hypervisorTypes);
|
||||
zoneId, clusterId, msId, resourceStates, types, hypervisorTypes);
|
||||
Assert.assertEquals(mockResults, hostIds);
|
||||
Mockito.verify(sc).setParameters("zoneId", zoneId);
|
||||
Mockito.verify(sc).setParameters("clusterId", clusterId);
|
||||
Mockito.verify(sc).setParameters("msId", msId);
|
||||
Mockito.verify(sc).setParameters("resourceState", resourceStates.toArray());
|
||||
Mockito.verify(sc).setParameters("type", types.toArray());
|
||||
Mockito.verify(sc).setParameters("hypervisorTypes", hypervisorTypes.toArray());
|
||||
|
|
|
|||
|
|
@ -22,14 +22,16 @@ import java.util.List;
|
|||
|
||||
import javax.inject.Inject;
|
||||
|
||||
import com.cloud.dc.dao.DataCenterDao;
|
||||
import org.apache.cloudstack.engine.subsystem.api.storage.ClusterScope;
|
||||
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
|
||||
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
|
||||
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
|
||||
import org.apache.cloudstack.storage.volume.datastore.PrimaryDataStoreHelper;
|
||||
|
||||
import com.cloud.agent.AgentManager;
|
||||
import com.cloud.agent.api.Answer;
|
||||
import com.cloud.agent.api.DeleteStoragePoolCommand;
|
||||
import com.cloud.dc.dao.DataCenterDao;
|
||||
import com.cloud.host.HostVO;
|
||||
import com.cloud.host.dao.HostDao;
|
||||
import com.cloud.hypervisor.Hypervisor.HypervisorType;
|
||||
|
|
@@ -37,8 +39,12 @@ import com.cloud.resource.ResourceManager;
import com.cloud.storage.StorageManager;
import com.cloud.storage.StoragePool;
import com.cloud.storage.StoragePoolHostVO;
import com.cloud.storage.VMTemplateStoragePoolVO;
import com.cloud.storage.VMTemplateStorageResourceAssoc;
import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.template.TemplateManager;
import com.cloud.utils.Pair;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -59,6 +65,10 @@ public class BasePrimaryDataStoreLifeCycleImpl {
    protected DataCenterDao zoneDao;
    @Inject
    protected StoragePoolHostDao storagePoolHostDao;
    @Inject
    private PrimaryDataStoreDao primaryDataStoreDao;
    @Inject
    private TemplateManager templateMgr;

    private List<HostVO> getPoolHostsList(ClusterScope clusterScope, HypervisorType hypervisorType) {
        List<HostVO> hosts;
@@ -81,7 +91,7 @@ public class BasePrimaryDataStoreLifeCycleImpl {
            try {
                storageMgr.connectHostToSharedPool(host, store.getId());
            } catch (Exception e) {
                logger.warn("Unable to establish a connection between " + host + " and " + store, e);
                logger.warn("Unable to establish a connection between {} and {}", host, store, e);
            }
        }
    }
@@ -99,7 +109,7 @@ public class BasePrimaryDataStoreLifeCycleImpl {

            if (answer != null) {
                if (!answer.getResult()) {
                    logger.debug("Failed to delete storage pool: " + answer.getResult());
                    logger.debug("Failed to delete storage pool: {}", answer.getResult());
                } else if (HypervisorType.KVM != hypervisorType) {
                    break;
                }
@@ -108,4 +118,42 @@ public class BasePrimaryDataStoreLifeCycleImpl {
        }
        dataStoreHelper.switchToCluster(store, clusterScope);
    }

    private void evictTemplates(StoragePoolVO storagePoolVO) {
        List<VMTemplateStoragePoolVO> unusedTemplatesInPool = templateMgr.getUnusedTemplatesInPool(storagePoolVO);
        for (VMTemplateStoragePoolVO templatePoolVO : unusedTemplatesInPool) {
            if (templatePoolVO.getDownloadState() == VMTemplateStorageResourceAssoc.Status.DOWNLOADED) {
                templateMgr.evictTemplateFromStoragePool(templatePoolVO);
            }
        }
    }

    private void deleteAgentStoragePools(StoragePool storagePool) {
        List<StoragePoolHostVO> poolHostVOs = storagePoolHostDao.listByPoolId(storagePool.getId());
        for (StoragePoolHostVO poolHostVO : poolHostVOs) {
            DeleteStoragePoolCommand deleteStoragePoolCommand = new DeleteStoragePoolCommand(storagePool);
            final Answer answer = agentMgr.easySend(poolHostVO.getHostId(), deleteStoragePoolCommand);
            if (answer != null && answer.getResult()) {
                logger.info("Successfully deleted storage pool: {} from host: {}", storagePool.getId(), poolHostVO.getHostId());
            } else {
                if (answer != null) {
                    logger.error("Failed to delete storage pool: {} from host: {} , result: {}", storagePool.getId(), poolHostVO.getHostId(), answer.getResult());
                } else {
                    logger.error("Failed to delete storage pool: {} from host: {}", storagePool.getId(), poolHostVO.getHostId());
                }
            }
        }
    }

    protected boolean cleanupDatastore(DataStore store) {
        StoragePool storagePool = (StoragePool)store;
        StoragePoolVO storagePoolVO = primaryDataStoreDao.findById(storagePool.getId());
        if (storagePoolVO == null) {
            return false;
        }

        evictTemplates(storagePoolVO);
        deleteAgentStoragePools(storagePool);
        return true;
    }
}
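A minimal sketch, assuming a concrete lifecycle subclass, of where the new cleanupDatastore(...) helper would typically be invoked; the surrounding method and log message are illustrative, only cleanupDatastore(store) and its behaviour come from the code above.

// Hypothetical subclass usage: run template eviction and per-host pool deletion
// before the datastore record itself is removed.
@Override
public boolean deleteDataStore(DataStore store) {
    if (!cleanupDatastore(store)) {
        // pool row is already gone from the database; nothing to evict or delete on hosts
        logger.debug("Storage pool {} not found, skipping cleanup", store.getId());
    }
    // remaining teardown of the primary data store would follow here
    return true;
}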
@ -43,7 +43,6 @@ import com.cloud.storage.StoragePool;
|
|||
import com.cloud.storage.StoragePoolHostVO;
|
||||
import com.cloud.storage.StorageService;
|
||||
import com.cloud.storage.dao.StoragePoolHostDao;
|
||||
import com.cloud.utils.Pair;
|
||||
import com.cloud.utils.exception.CloudRuntimeException;
|
||||
|
||||
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
|
||||
|
|
@ -60,6 +59,7 @@ import javax.inject.Inject;
|
|||
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Optional;
|
||||
|
||||
public class DefaultHostListener implements HypervisorHostListener {
|
||||
protected Logger logger = LogManager.getLogger(getClass());
|
||||
|
|
@@ -133,9 +133,11 @@ public class DefaultHostListener implements HypervisorHostListener {
    @Override
    public boolean hostConnect(long hostId, long poolId) throws StorageConflictException {
        StoragePool pool = (StoragePool) this.dataStoreMgr.getDataStore(poolId, DataStoreRole.Primary);
        Pair<Map<String, String>, Boolean> nfsMountOpts = storageManager.getStoragePoolNFSMountOpts(pool, null);
        Map<String, String> detailsMap = storagePoolDetailsDao.listDetailsKeyPairs(poolId);
        Map<String, String> nfsMountOpts = storageManager.getStoragePoolNFSMountOpts(pool, null).first();

        ModifyStoragePoolCommand cmd = new ModifyStoragePoolCommand(true, pool, nfsMountOpts.first());
        Optional.ofNullable(nfsMountOpts).ifPresent(detailsMap::putAll);
        ModifyStoragePoolCommand cmd = new ModifyStoragePoolCommand(true, pool, detailsMap);
        cmd.setWait(modifyStoragePoolCommandWait);
        HostVO host = hostDao.findById(hostId);
        logger.debug("Sending modify storage pool command to agent: {} for storage pool: {} with timeout {} seconds", host, pool, cmd.getWait());
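A small standalone illustration of the merge order used in hostConnect above, with made-up keys: NFS mount options, when present, are overlaid on the pool's stored details, and the combined map is what ModifyStoragePoolCommand now carries to the agent.

// Assumed sample data; only the Optional/putAll merge mirrors the code above.
Map<String, String> detailsMap = new HashMap<>(Map.of("some_pool_detail", "value-from-db"));
Map<String, String> nfsMountOpts = Map.of("nfsopts", "vers=4.1,nconnect=4");
Optional.ofNullable(nfsMountOpts).ifPresent(detailsMap::putAll);
// detailsMap now holds both entries and is passed to new ModifyStoragePoolCommand(true, pool, detailsMap)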
@@ -1107,9 +1107,19 @@ public class ClusterManagerImpl extends ManagerBase implements ClusterManager, C
        if (_mshostId != null) {
            final ManagementServerHostVO mshost = _mshostDao.findByMsid(_msId);
            if (mshost != null) {
                final ManagementServerStatusVO mshostStatus = mshostStatusDao.findByMsId(mshost.getUuid());
                mshostStatus.setLastJvmStop(new Date());
                mshostStatusDao.update(mshostStatus.getId(), mshostStatus);
                ManagementServerStatusVO mshostStatus = mshostStatusDao.findByMsId(mshost.getUuid());
                if (mshostStatus != null) {
                    mshostStatus.setLastJvmStop(new Date());
                    mshostStatusDao.update(mshostStatus.getId(), mshostStatus);
                } else {
                    logger.warn("Found a management server host [{}] without a status. This should never happen!", mshost);
                    mshostStatus = new ManagementServerStatusVO();
                    mshostStatus.setMsId(mshost.getUuid());
                    mshostStatus.setLastSystemBoot(new Date());
                    mshostStatus.setLastJvmStart(new Date());
                    mshostStatus.setUpdated(new Date());
                    mshostStatusDao.persist(mshostStatus);
                }

                ManagementServerHost.State msHostState = ManagementServerHost.State.Down;
                if (ManagementServerHost.State.Maintenance.equals(mshost.getState()) || ManagementServerHost.State.PreparingForMaintenance.equals(mshost.getState())) {
@@ -85,7 +85,7 @@ public class ConfigDepotImpl implements ConfigDepot, ConfigDepotAdmin {
    List<ScopedConfigStorage> _scopedStorages;
    Set<Configurable> _configured = Collections.synchronizedSet(new HashSet<Configurable>());
    Set<String> newConfigs = Collections.synchronizedSet(new HashSet<>());
    LazyCache<String, String> configCache;
    LazyCache<Ternary<String, ConfigKey.Scope, Long>, String> configCache;

    private HashMap<String, Pair<String, ConfigKey<?>>> _allKeys = new HashMap<String, Pair<String, ConfigKey<?>>>(1007);
@@ -275,15 +275,10 @@ public class ConfigDepotImpl implements ConfigDepot, ConfigDepotAdmin {
        return _configDao;
    }

    protected String getConfigStringValueInternal(String cacheKey) {
        String[] parts = cacheKey.split("-");
        String key = parts[0];
        ConfigKey.Scope scope = ConfigKey.Scope.Global;
        Long scopeId = null;
        try {
            scope = ConfigKey.Scope.valueOf(parts[1]);
            scopeId = Long.valueOf(parts[2]);
        } catch (IllegalArgumentException ignored) {}
    protected String getConfigStringValueInternal(Ternary<String, ConfigKey.Scope, Long> cacheKey) {
        String key = cacheKey.first();
        ConfigKey.Scope scope = cacheKey.second();
        Long scopeId = cacheKey.third();
        if (!ConfigKey.Scope.Global.equals(scope) && scopeId != null) {
            ScopedConfigStorage scopedConfigStorage = getScopedStorage(scope);
            if (scopedConfigStorage == null) {
@@ -298,8 +293,8 @@ public class ConfigDepotImpl implements ConfigDepot, ConfigDepotAdmin {
        return null;
    }

    private String getConfigCacheKey(String key, ConfigKey.Scope scope, Long scopeId) {
        return String.format("%s-%s-%d", key, scope, (scopeId == null ? 0 : scopeId));
    protected Ternary<String, ConfigKey.Scope, Long> getConfigCacheKey(String key, ConfigKey.Scope scope, Long scopeId) {
        return new Ternary<>(key, scope, scopeId);
    }

    @Override
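A short sketch of why the typed Ternary key is safer than the old formatted string: configuration names may themselves contain '-' (as the new "test.1-1" test case shows), so splitting the cache key on '-' could misread the name, scope and scope id. Values below are illustrative.

// Old scheme: name, scope and scope id were folded into one string and parsed back by split("-").
String legacyKey = String.format("%s-%s-%d", "test.1-1", ConfigKey.Scope.Global, 0); // "test.1-1-Global-0"
// legacyKey.split("-")[0] yields "test.1", not the real name "test.1-1"

// New scheme: the three parts stay separate, so no parsing is needed.
Ternary<String, ConfigKey.Scope, Long> cacheKey = new Ternary<>("test.1-1", ConfigKey.Scope.Global, null);
String name = cacheKey.first();            // "test.1-1"
ConfigKey.Scope scope = cacheKey.second(); // Global
Long scopeId = cacheKey.third();           // null for a global setting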
@ -89,6 +89,12 @@ public class ConfigDepotImplTest {
|
|||
runTestGetConfigStringValue("test", "value");
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testGetConfigStringValue_nameWithCharacters() {
|
||||
runTestGetConfigStringValue("test.1-1", "value");
|
||||
runTestGetConfigStringValue("test_1#2", "value");
|
||||
}
|
||||
|
||||
private void runTestGetConfigStringValueExpiry(long wait, int configDBRetrieval) {
|
||||
String key = "test1";
|
||||
String value = "expiry";
@ -237,7 +237,7 @@ public class AsyncJobManagerImpl extends ManagerBase implements AsyncJobManager,
|
|||
}
|
||||
}
|
||||
|
||||
throw new CloudRuntimeException("Maintenance or Shutdown has been initiated on this management server. Can not accept new jobs");
|
||||
throw new CloudRuntimeException("Maintenance or Shutdown has been initiated on this management server. Can not accept new async jobs");
|
||||
}
|
||||
|
||||
private boolean checkSyncQueueItemAllowed(SyncQueueItemVO item) {
|
||||
|
|
|
|||
|
|
@ -49,8 +49,12 @@ public class GenericPresetVariable {
|
|||
fieldNamesToIncludeInToString.add("name");
|
||||
}
|
||||
|
||||
/***
|
||||
* Converts the preset variable into a valid JSON object that will be injected into the JS interpreter.
|
||||
* This method should not be overridden or changed.
|
||||
*/
|
||||
@Override
|
||||
public String toString() {
|
||||
public final String toString() {
|
||||
return ReflectionToStringBuilderUtils.reflectOnlySelectedFields(this, fieldNamesToIncludeInToString.toArray(new String[0]));
|
||||
}
|
||||
}
@ -40,8 +40,12 @@ public class Resource {
|
|||
this.domainId = domainId;
|
||||
}
|
||||
|
||||
/***
|
||||
* Converts the preset variable into a valid JSON object that will be injected into the JS interpreter.
|
||||
* This method should not be overridden or changed.
|
||||
*/
|
||||
@Override
|
||||
public String toString() {
|
||||
public final String toString() {
|
||||
return ToStringBuilder.reflectionToString(this, ToStringStyle.JSON_STYLE);
|
||||
}
@ -191,6 +191,9 @@ public class VeeamBackupProvider extends AdapterBase implements BackupProvider,
|
|||
public boolean removeVMFromBackupOffering(final VirtualMachine vm) {
|
||||
final VeeamClient client = getClient(vm.getDataCenterId());
|
||||
final VmwareDatacenter vmwareDC = findVmwareDatacenterForVM(vm);
|
||||
if (vm.getBackupExternalId() == null) {
|
||||
throw new CloudRuntimeException("The VM does not have a backup job assigned.");
|
||||
}
|
||||
try {
|
||||
if (!client.removeVMFromVeeamJob(vm.getBackupExternalId(), vm.getInstanceName(), vmwareDC.getVcenterHost())) {
|
||||
logger.warn("Failed to remove VM from Veeam Job id: " + vm.getBackupExternalId());
@ -108,9 +108,7 @@ public final class LibvirtCreatePrivateTemplateFromVolumeCommandWrapper extends
|
|||
} else {
|
||||
logger.debug("Converting RBD disk " + disk.getPath() + " into template " + command.getUniqueName());
|
||||
|
||||
final QemuImgFile srcFile =
|
||||
new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(primary.getSourceHost(), primary.getSourcePort(), primary.getAuthUserName(),
|
||||
primary.getAuthSecret(), disk.getPath()));
|
||||
final QemuImgFile srcFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(primary, disk.getPath()));
|
||||
srcFile.setFormat(PhysicalDiskFormat.RAW);
|
||||
|
||||
final QemuImgFile destFile = new QemuImgFile(tmpltPath + "/" + command.getUniqueName() + ".qcow2");
@ -161,11 +161,7 @@ public final class LibvirtGetVolumesOnStorageCommandWrapper extends CommandWrapp
|
|||
QemuImg qemu = new QemuImg(0);
|
||||
QemuImgFile qemuFile = new QemuImgFile(disk.getPath(), disk.getFormat());
|
||||
if (StoragePoolType.RBD.equals(pool.getType())) {
|
||||
String rbdDestFile = KVMPhysicalDisk.RBDStringBuilder(pool.getSourceHost(),
|
||||
pool.getSourcePort(),
|
||||
pool.getAuthUserName(),
|
||||
pool.getAuthSecret(),
|
||||
disk.getPath());
|
||||
String rbdDestFile = KVMPhysicalDisk.RBDStringBuilder(pool, disk.getPath());
|
||||
qemuFile = new QemuImgFile(rbdDestFile, disk.getFormat());
|
||||
}
|
||||
return qemu.info(qemuFile, secure);
@ -410,9 +410,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
|
|||
KVMStoragePool srcPool = srcDisk.getPool();
|
||||
|
||||
if (srcPool.getType() == StoragePoolType.RBD) {
|
||||
srcFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(srcPool.getSourceHost(), srcPool.getSourcePort(),
|
||||
srcPool.getAuthUserName(), srcPool.getAuthSecret(),
|
||||
srcDisk.getPath()),srcDisk.getFormat());
|
||||
srcFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(srcPool, srcDisk.getPath()), srcDisk.getFormat());
|
||||
} else {
|
||||
srcFile = new QemuImgFile(srcDisk.getPath(), srcDisk.getFormat());
|
||||
}
@ -23,6 +23,7 @@ import org.apache.commons.lang3.StringUtils;
|
|||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
public class KVMPhysicalDisk {
|
||||
private String path;
|
||||
|
|
@@ -32,10 +33,17 @@ public class KVMPhysicalDisk {
    private String vmName;
    private boolean useAsTemplate;

    public static String RBDStringBuilder(String monHost, int monPort, String authUserName, String authSecret, String image) {
        String rbdOpts;
    public static final String RBD_DEFAULT_DATA_POOL = "rbd_default_data_pool";

        rbdOpts = "rbd:" + image;
    public static String RBDStringBuilder(KVMStoragePool storagePool, String image) {
        String monHost = storagePool.getSourceHost();
        int monPort = storagePool.getSourcePort();
        String authUserName = storagePool.getAuthUserName();
        String authSecret = storagePool.getAuthSecret();
        Map<String, String> details = storagePool.getDetails();
        String dataPool = (details == null) ? null : details.get(RBD_DEFAULT_DATA_POOL);

        String rbdOpts = "rbd:" + image;
        rbdOpts += ":mon_host=" + composeOptionForMonHosts(monHost, monPort);

        if (authUserName == null) {
@@ -46,6 +54,10 @@ public class KVMPhysicalDisk {
            rbdOpts += ":key=" + authSecret;
        }

        if (dataPool != null) {
            rbdOpts += String.format(":rbd_default_data_pool=%s", dataPool);
        }

        rbdOpts += ":rbd_default_format=2";
        rbdOpts += ":client_mount_timeout=30";
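Given the construction above, the option string for a pool whose details carry rbd_default_data_pool would look roughly as follows; the monitor host, credentials and pool names are made up.

// Assumed pool: source host "ceph-mon", port 6789, cephx user "admin", key "secret",
// details containing rbd_default_data_pool=ec-data.
String opts = KVMPhysicalDisk.RBDStringBuilder(pool, "volume1");
// opts ~= "rbd:volume1:mon_host=ceph-mon\:6789:auth_supported=cephx:id=admin:key=secret"
//         + ":rbd_default_data_pool=ec-data:rbd_default_format=2:client_mount_timeout=30"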
@ -53,28 +53,6 @@ import com.cloud.vm.VirtualMachine;
|
|||
public class KVMStoragePoolManager {
|
||||
protected Logger logger = LogManager.getLogger(getClass());
|
||||
|
||||
private class StoragePoolInformation {
|
||||
String name;
|
||||
String host;
|
||||
int port;
|
||||
String path;
|
||||
String userInfo;
|
||||
boolean type;
|
||||
StoragePoolType poolType;
|
||||
Map<String, String> details;
|
||||
|
||||
public StoragePoolInformation(String name, String host, int port, String path, String userInfo, StoragePoolType poolType, Map<String, String> details, boolean type) {
|
||||
this.name = name;
|
||||
this.host = host;
|
||||
this.port = port;
|
||||
this.path = path;
|
||||
this.userInfo = userInfo;
|
||||
this.type = type;
|
||||
this.poolType = poolType;
|
||||
this.details = details;
|
||||
}
|
||||
}
|
||||
|
||||
private KVMHAMonitor _haMonitor;
|
||||
private final Map<String, StoragePoolInformation> _storagePools = new ConcurrentHashMap<String, StoragePoolInformation>();
|
||||
private final Map<String, StorageAdaptor> _storageMapper = new HashMap<String, StorageAdaptor>();
|
||||
|
|
@@ -303,14 +281,33 @@ public class KVMStoragePoolManager {
        } catch (Exception e) {
            StoragePoolInformation info = _storagePools.get(uuid);
            if (info != null) {
                pool = createStoragePool(info.name, info.host, info.port, info.path, info.userInfo, info.poolType, info.details, info.type);
                pool = createStoragePool(info.getName(), info.getHost(), info.getPort(), info.getPath(), info.getUserInfo(), info.getPoolType(), info.getDetails(), info.isType());
            } else {
                throw new CloudRuntimeException("Could not fetch storage pool " + uuid + " from libvirt due to " + e.getMessage());
            }
        }

        if (pool instanceof LibvirtStoragePool) {
            addPoolDetails(uuid, (LibvirtStoragePool) pool);
        }

        return pool;
    }

    /**
     * As the class {@link LibvirtStoragePool} is constrained to the {@link org.libvirt.StoragePool} class, there is no way of saving a generic parameter such as the details, hence,
     * this method was created to always make available the details of libvirt primary storages for when they are needed.
     */
    private void addPoolDetails(String uuid, LibvirtStoragePool pool) {
        StoragePoolInformation storagePoolInformation = _storagePools.get(uuid);
        Map<String, String> details = storagePoolInformation.getDetails();

        if (MapUtils.isNotEmpty(details)) {
            logger.trace("Adding the details {} to the pool with UUID {}.", details, uuid);
            pool.setDetails(details);
        }
    }

    public KVMStoragePool getStoragePoolByURI(String uri) {
        URI storageUri = null;
@ -667,9 +667,7 @@ public class KVMStorageProcessor implements StorageProcessor {
|
|||
} else {
|
||||
logger.debug("Converting RBD disk " + disk.getPath() + " into template " + templateName);
|
||||
|
||||
final QemuImgFile srcFile =
|
||||
new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(primary.getSourceHost(), primary.getSourcePort(), primary.getAuthUserName(),
|
||||
primary.getAuthSecret(), disk.getPath()));
|
||||
final QemuImgFile srcFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(primary, disk.getPath()));
|
||||
srcFile.setFormat(PhysicalDiskFormat.RAW);
|
||||
|
||||
final QemuImgFile destFile = new QemuImgFile(tmpltPath + "/" + templateName + ".qcow2");
|
||||
|
|
@ -1022,9 +1020,7 @@ public class KVMStorageProcessor implements StorageProcessor {
|
|||
logger.debug("Attempting to create " + snapDir.getAbsolutePath() + " recursively for snapshot storage");
|
||||
FileUtils.forceMkdir(snapDir);
|
||||
|
||||
final QemuImgFile srcFile =
|
||||
new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(primaryPool.getSourceHost(), primaryPool.getSourcePort(), primaryPool.getAuthUserName(),
|
||||
primaryPool.getAuthSecret(), rbdSnapshot));
|
||||
final QemuImgFile srcFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(primaryPool, rbdSnapshot));
|
||||
srcFile.setFormat(snapshotDisk.getFormat());
|
||||
|
||||
final QemuImgFile destFile = new QemuImgFile(snapshotFile);
@ -960,17 +960,55 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Creates a physical disk depending on the {@link StoragePoolType}:
|
||||
* <ul>
|
||||
* <li>
|
||||
* <b>{@link StoragePoolType#RBD}</b>
|
||||
* <ul>
|
||||
* <li>
|
||||
* If it is an erasure code pool, utilizes QemuImg to create the physical disk through the method
|
||||
* {@link LibvirtStorageAdaptor#createPhysicalDiskByQemuImg(String, KVMStoragePool, PhysicalDiskFormat, Storage.ProvisioningType, long, byte[])}
|
||||
* </li>
|
||||
* <li>
|
||||
* Otherwise, utilize Libvirt to create the physical disk through the method
|
||||
* {@link LibvirtStorageAdaptor#createPhysicalDiskByLibVirt(String, KVMStoragePool, PhysicalDiskFormat, Storage.ProvisioningType, long)}
|
||||
* </li>
|
||||
* </ul>
|
||||
* </li>
|
||||
* <li>
|
||||
* {@link StoragePoolType#NetworkFilesystem} and {@link StoragePoolType#Filesystem}
|
||||
* <ul>
|
||||
* <li>
|
||||
* If the format is {@link PhysicalDiskFormat#QCOW2} or {@link PhysicalDiskFormat#RAW}, utilizes QemuImg to create the physical disk through the method
|
||||
* {@link LibvirtStorageAdaptor#createPhysicalDiskByQemuImg(String, KVMStoragePool, PhysicalDiskFormat, Storage.ProvisioningType, long, byte[])}
|
||||
* </li>
|
||||
* <li>
|
||||
* If the format is {@link PhysicalDiskFormat#DIR} or {@link PhysicalDiskFormat#TAR}, utilize Libvirt to create the physical disk through the method
|
||||
* {@link LibvirtStorageAdaptor#createPhysicalDiskByLibVirt(String, KVMStoragePool, PhysicalDiskFormat, Storage.ProvisioningType, long)}
|
||||
* </li>
|
||||
* </ul>
|
||||
* </li>
|
||||
* <li>
|
||||
* For the rest of the {@link StoragePoolType} types, utilizes the Libvirt method
|
||||
* {@link LibvirtStorageAdaptor#createPhysicalDiskByLibVirt(String, KVMStoragePool, PhysicalDiskFormat, Storage.ProvisioningType, long)}
|
||||
* </li>
|
||||
* </ul>
|
||||
*/
|
||||
@Override
|
||||
public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool,
|
||||
PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
|
||||
|
||||
logger.info("Attempting to create volume " + name + " (" + pool.getType().toString() + ") in pool "
|
||||
+ pool.getUuid() + " with size " + toHumanReadableSize(size));
|
||||
logger.info("Attempting to create volume {} ({}) in pool {} with size {}", name, pool.getType().toString(), pool.getUuid(), toHumanReadableSize(size));
|
||||
|
||||
StoragePoolType poolType = pool.getType();
|
||||
if (poolType.equals(StoragePoolType.RBD)) {
|
||||
return createPhysicalDiskByLibVirt(name, pool, PhysicalDiskFormat.RAW, provisioningType, size);
|
||||
} else if (poolType.equals(StoragePoolType.NetworkFilesystem) || poolType.equals(StoragePoolType.Filesystem)) {
|
||||
if (StoragePoolType.RBD.equals(poolType)) {
|
||||
Map<String, String> details = pool.getDetails();
|
||||
String dataPool = (details == null) ? null : details.get(KVMPhysicalDisk.RBD_DEFAULT_DATA_POOL);
|
||||
|
||||
return (dataPool == null) ? createPhysicalDiskByLibVirt(name, pool, PhysicalDiskFormat.RAW, provisioningType, size) :
|
||||
createPhysicalDiskByQemuImg(name, pool, PhysicalDiskFormat.RAW, provisioningType, size, passphrase);
|
||||
} else if (StoragePoolType.NetworkFilesystem.equals(poolType) || StoragePoolType.Filesystem.equals(poolType)) {
|
||||
switch (format) {
|
||||
case QCOW2:
|
||||
case RAW:
|
||||
|
|
@ -1018,18 +1056,25 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
|
|||
}
|
||||
|
||||
|
||||
private KVMPhysicalDisk createPhysicalDiskByQemuImg(String name, KVMStoragePool pool,
|
||||
PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
|
||||
String volPath = pool.getLocalPath() + "/" + name;
|
||||
private KVMPhysicalDisk createPhysicalDiskByQemuImg(String name, KVMStoragePool pool, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size,
|
||||
byte[] passphrase) {
|
||||
String volPath;
|
||||
String volName = name;
|
||||
long virtualSize = 0;
|
||||
long actualSize = 0;
|
||||
QemuObject.EncryptFormat encryptFormat = null;
|
||||
List<QemuObject> passphraseObjects = new ArrayList<>();
|
||||
|
||||
final int timeout = 0;
|
||||
QemuImgFile destFile;
|
||||
|
||||
if (StoragePoolType.RBD.equals(pool.getType())) {
|
||||
volPath = pool.getSourceDir() + File.separator + name;
|
||||
destFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(pool, volPath));
|
||||
} else {
|
||||
volPath = pool.getLocalPath() + File.separator + name;
|
||||
destFile = new QemuImgFile(volPath);
|
||||
}
|
||||
|
||||
QemuImgFile destFile = new QemuImgFile(volPath);
|
||||
destFile.setFormat(format);
|
||||
destFile.setSize(size);
|
||||
Map<String, String> options = new HashMap<String, String>();
|
||||
|
|
@ -1312,11 +1357,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
|
|||
|
||||
|
||||
QemuImgFile srcFile;
|
||||
QemuImgFile destFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(destPool.getSourceHost(),
|
||||
destPool.getSourcePort(),
|
||||
destPool.getAuthUserName(),
|
||||
destPool.getAuthSecret(),
|
||||
disk.getPath()));
|
||||
QemuImgFile destFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(destPool, disk.getPath()));
|
||||
destFile.setFormat(format);
|
||||
|
||||
if (srcPool.getType() != StoragePoolType.RBD) {
|
||||
|
|
@ -1591,11 +1632,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
|
|||
try {
|
||||
srcFile = new QemuImgFile(sourcePath, sourceFormat);
|
||||
String rbdDestPath = destPool.getSourceDir() + "/" + name;
|
||||
String rbdDestFile = KVMPhysicalDisk.RBDStringBuilder(destPool.getSourceHost(),
|
||||
destPool.getSourcePort(),
|
||||
destPool.getAuthUserName(),
|
||||
destPool.getAuthSecret(),
|
||||
rbdDestPath);
|
||||
String rbdDestFile = KVMPhysicalDisk.RBDStringBuilder(destPool, rbdDestPath);
|
||||
destFile = new QemuImgFile(rbdDestFile, destFormat);
|
||||
|
||||
logger.debug("Starting copy from source image " + srcFile.getFileName() + " to RBD image " + rbdDestPath);
|
||||
|
|
@ -1638,9 +1675,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
|
|||
We let Qemu-Img do the work here. Although we could work with librbd and have that do the cloning
|
||||
it doesn't benefit us. It's better to keep the current code in place which works
|
||||
*/
|
||||
srcFile =
|
||||
new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(srcPool.getSourceHost(), srcPool.getSourcePort(), srcPool.getAuthUserName(), srcPool.getAuthSecret(),
|
||||
sourcePath));
|
||||
srcFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(srcPool, sourcePath));
|
||||
srcFile.setFormat(sourceFormat);
|
||||
destFile = new QemuImgFile(destPath);
|
||||
destFile.setFormat(destFormat);
|
||||
|
|
|
|||
|
|
@ -56,8 +56,8 @@ public class LibvirtStoragePool implements KVMStoragePool {
|
|||
protected String authSecret;
|
||||
protected String sourceHost;
|
||||
protected int sourcePort;
|
||||
|
||||
protected String sourceDir;
|
||||
protected Map<String, String> details;
|
||||
|
||||
public LibvirtStoragePool(String uuid, String name, StoragePoolType type, StorageAdaptor adaptor, StoragePool pool) {
|
||||
this.uuid = uuid;
|
||||
|
|
@ -311,7 +311,11 @@ public class LibvirtStoragePool implements KVMStoragePool {
|
|||
|
||||
@Override
|
||||
public Map<String, String> getDetails() {
|
||||
return null;
|
||||
return this.details;
|
||||
}
|
||||
|
||||
public void setDetails(Map<String, String> details) {
|
||||
this.details = details;
|
||||
}
|
||||
|
||||
@Override
|
||||
|
|
|
|||
|
|
@ -0,0 +1,75 @@
|
|||
// Licensed to the Apache Software Foundation (ASF) under one
|
||||
// or more contributor license agreements. See the NOTICE file
|
||||
// distributed with this work for additional information
|
||||
// regarding copyright ownership. The ASF licenses this file
|
||||
// to you under the Apache License, Version 2.0 (the
|
||||
// "License"); you may not use this file except in compliance
|
||||
// with the License. You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing,
|
||||
// software distributed under the License is distributed on an
|
||||
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
// KIND, either express or implied. See the License for the
|
||||
// specific language governing permissions and limitations
|
||||
// under the License.
|
||||
package com.cloud.hypervisor.kvm.storage;
|
||||
|
||||
import com.cloud.storage.Storage;
|
||||
|
||||
import java.util.Map;
|
||||
|
||||
class StoragePoolInformation {
|
||||
private String name;
|
||||
private String host;
|
||||
private int port;
|
||||
private String path;
|
||||
private String userInfo;
|
||||
private boolean type;
|
||||
private Storage.StoragePoolType poolType;
|
||||
private Map<String, String> details;
|
||||
|
||||
public StoragePoolInformation(String name, String host, int port, String path, String userInfo, Storage.StoragePoolType poolType, Map<String, String> details, boolean type) {
|
||||
this.name = name;
|
||||
this.host = host;
|
||||
this.port = port;
|
||||
this.path = path;
|
||||
this.userInfo = userInfo;
|
||||
this.type = type;
|
||||
this.poolType = poolType;
|
||||
this.details = details;
|
||||
}
|
||||
|
||||
public String getName() {
|
||||
return name;
|
||||
}
|
||||
|
||||
public String getHost() {
|
||||
return host;
|
||||
}
|
||||
|
||||
public int getPort() {
|
||||
return port;
|
||||
}
|
||||
|
||||
public String getPath() {
|
||||
return path;
|
||||
}
|
||||
|
||||
public String getUserInfo() {
|
||||
return userInfo;
|
||||
}
|
||||
|
||||
public boolean isType() {
|
||||
return type;
|
||||
}
|
||||
|
||||
public Storage.StoragePoolType getPoolType() {
|
||||
return poolType;
|
||||
}
|
||||
|
||||
public Map<String, String> getDetails() {
|
||||
return details;
|
||||
}
|
||||
}
|
||||
|
|
@ -17,43 +17,73 @@
|
|||
package com.cloud.hypervisor.kvm.storage;
|
||||
|
||||
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
|
||||
import org.junit.Assert;
|
||||
import org.junit.Test;
|
||||
import org.junit.runner.RunWith;
|
||||
import org.mockito.Mock;
|
||||
import org.mockito.Mockito;
|
||||
|
||||
import junit.framework.TestCase;
|
||||
import org.mockito.junit.MockitoJUnitRunner;
|
||||
|
||||
|
||||
@RunWith(MockitoJUnitRunner.class)
|
||||
public class KVMPhysicalDiskTest extends TestCase {
|
||||
public class KVMPhysicalDiskTest {
|
||||
@Mock
|
||||
KVMStoragePool kvmStoragePoolMock;
|
||||
|
||||
private final String authUserName = "admin";
|
||||
|
||||
private final String authSecret = "supersecret";
|
||||
|
||||
@Test
|
||||
public void testRBDStringBuilder() {
|
||||
assertEquals(KVMPhysicalDisk.RBDStringBuilder("ceph-monitor", 8000, "admin", "supersecret", "volume1"),
|
||||
"rbd:volume1:mon_host=ceph-monitor\\:8000:auth_supported=cephx:id=admin:key=supersecret:rbd_default_format=2:client_mount_timeout=30");
|
||||
String monHosts = "ceph-monitor";
|
||||
int monPort = 8000;
|
||||
|
||||
Mockito.doReturn(monHosts).when(kvmStoragePoolMock).getSourceHost();
|
||||
Mockito.doReturn(monPort).when(kvmStoragePoolMock).getSourcePort();
|
||||
Mockito.doReturn(authUserName).when(kvmStoragePoolMock).getAuthUserName();
|
||||
Mockito.doReturn(authSecret).when(kvmStoragePoolMock).getAuthSecret();
|
||||
|
||||
String expected = "rbd:volume1:mon_host=ceph-monitor\\:8000:auth_supported=cephx:id=admin:key=supersecret:rbd_default_format=2:client_mount_timeout=30";
|
||||
String result = KVMPhysicalDisk.RBDStringBuilder(kvmStoragePoolMock, "volume1");
|
||||
|
||||
Assert.assertEquals(expected, result);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testRBDStringBuilder2() {
|
||||
String monHosts = "ceph-monitor1,ceph-monitor2,ceph-monitor3";
|
||||
int monPort = 3300;
|
||||
|
||||
Mockito.doReturn(monHosts).when(kvmStoragePoolMock).getSourceHost();
|
||||
Mockito.doReturn(monPort).when(kvmStoragePoolMock).getSourcePort();
|
||||
Mockito.doReturn(authUserName).when(kvmStoragePoolMock).getAuthUserName();
|
||||
Mockito.doReturn(authSecret).when(kvmStoragePoolMock).getAuthSecret();
|
||||
|
||||
String expected = "rbd:volume1:" +
|
||||
"mon_host=ceph-monitor1\\:3300\\;ceph-monitor2\\:3300\\;ceph-monitor3\\:3300:" +
|
||||
"auth_supported=cephx:id=admin:key=supersecret:rbd_default_format=2:client_mount_timeout=30";
|
||||
String actualResult = KVMPhysicalDisk.RBDStringBuilder(monHosts, monPort, "admin", "supersecret", "volume1");
|
||||
assertEquals(expected, actualResult);
|
||||
String actualResult = KVMPhysicalDisk.RBDStringBuilder(kvmStoragePoolMock, "volume1");
|
||||
|
||||
Assert.assertEquals(expected, actualResult);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testRBDStringBuilder3() {
|
||||
String monHosts = "[fc00:1234::1],[fc00:1234::2],[fc00:1234::3]";
|
||||
int monPort = 3300;
|
||||
|
||||
Mockito.doReturn(monHosts).when(kvmStoragePoolMock).getSourceHost();
|
||||
Mockito.doReturn(monPort).when(kvmStoragePoolMock).getSourcePort();
|
||||
Mockito.doReturn(authUserName).when(kvmStoragePoolMock).getAuthUserName();
|
||||
Mockito.doReturn(authSecret).when(kvmStoragePoolMock).getAuthSecret();
|
||||
|
||||
String expected = "rbd:volume1:" +
|
||||
"mon_host=[fc00\\:1234\\:\\:1]\\:3300\\;[fc00\\:1234\\:\\:2]\\:3300\\;[fc00\\:1234\\:\\:3]\\:3300:" +
|
||||
"auth_supported=cephx:id=admin:key=supersecret:rbd_default_format=2:client_mount_timeout=30";
|
||||
String actualResult = KVMPhysicalDisk.RBDStringBuilder(monHosts, monPort, "admin", "supersecret", "volume1");
|
||||
assertEquals(expected, actualResult);
|
||||
String actualResult = KVMPhysicalDisk.RBDStringBuilder(kvmStoragePoolMock, "volume1");
|
||||
|
||||
Assert.assertEquals(expected, actualResult);
|
||||
}
|
||||
|
||||
@Test
|
||||
|
|
@ -64,18 +94,18 @@ public class KVMPhysicalDiskTest extends TestCase {
|
|||
LibvirtStoragePool pool = Mockito.mock(LibvirtStoragePool.class);
|
||||
|
||||
KVMPhysicalDisk disk = new KVMPhysicalDisk(path, name, pool);
|
||||
assertEquals(disk.getName(), name);
|
||||
assertEquals(disk.getPath(), path);
|
||||
assertEquals(disk.getPool(), pool);
|
||||
assertEquals(disk.getSize(), 0);
|
||||
assertEquals(disk.getVirtualSize(), 0);
|
||||
Assert.assertEquals(disk.getName(), name);
|
||||
Assert.assertEquals(disk.getPath(), path);
|
||||
Assert.assertEquals(disk.getPool(), pool);
|
||||
Assert.assertEquals(disk.getSize(), 0);
|
||||
Assert.assertEquals(disk.getVirtualSize(), 0);
|
||||
|
||||
disk.setSize(1024);
|
||||
disk.setVirtualSize(2048);
|
||||
assertEquals(disk.getSize(), 1024);
|
||||
assertEquals(disk.getVirtualSize(), 2048);
|
||||
Assert.assertEquals(disk.getSize(), 1024);
|
||||
Assert.assertEquals(disk.getVirtualSize(), 2048);
|
||||
|
||||
disk.setFormat(PhysicalDiskFormat.RAW);
|
||||
assertEquals(disk.getFormat(), PhysicalDiskFormat.RAW);
|
||||
Assert.assertEquals(disk.getFormat(), PhysicalDiskFormat.RAW);
|
||||
}
|
||||
}
|
||||
|
|
|
|||
(File diff suppressed because it is too large)
|
|
@ -54,7 +54,7 @@ public final class XenServer56FenceCommandWrapper extends CommandWrapper<FenceCo
|
|||
for (final VM vm : vms) {
|
||||
logger.info("Fence command for VM " + command.getVmName());
|
||||
vm.powerStateReset(conn);
|
||||
vm.destroy(conn);
|
||||
xenServer56.destroyVm(vm, conn);
|
||||
}
|
||||
return new FenceAnswer(command);
|
||||
} catch (final XmlRpcException e) {
|
||||
|
|
|
|||
|
|
@ -66,7 +66,7 @@ public final class XenServer56FP1FenceCommandWrapper extends CommandWrapper<Fenc
|
|||
}
|
||||
logger.info("Fence command for VM " + command.getVmName());
|
||||
vm.powerStateReset(conn);
|
||||
vm.destroy(conn);
|
||||
xenServer56.destroyVm(vm, conn);
|
||||
for (final VDI vdi : vdis) {
|
||||
final Map<String, String> smConfig = vdi.getSmConfig(conn);
|
||||
for (final String key : smConfig.keySet()) {
|
||||
|
|
|
|||
|
|
@ -69,7 +69,7 @@ public final class CitrixCreateVMSnapshotCommandWrapper extends CommandWrapper<C
|
|||
try {
|
||||
// check if VM snapshot already exists
|
||||
final Set<VM> vmSnapshots = VM.getByNameLabel(conn, command.getTarget().getSnapshotName());
|
||||
if (vmSnapshots == null || vmSnapshots.size() > 0) {
|
||||
if (vmSnapshots == null || !vmSnapshots.isEmpty()) {
|
||||
return new CreateVMSnapshotAnswer(command, command.getTarget(), command.getVolumeTOs());
|
||||
}
|
||||
|
||||
|
|
@ -98,6 +98,7 @@ public final class CitrixCreateVMSnapshotCommandWrapper extends CommandWrapper<C
|
|||
vm = citrixResourceBase.getVM(conn, vmName);
|
||||
vmState = vm.getPowerState(conn);
|
||||
} catch (final Exception e) {
|
||||
logger.debug("Failed to find VM with name: {} due to:", vmName, e);
|
||||
if (!snapshotMemory) {
|
||||
vm = citrixResourceBase.createWorkingVM(conn, vmName, guestOSType, platformEmulator, listVolumeTo);
|
||||
}
|
||||
|
|
@ -107,7 +108,7 @@ public final class CitrixCreateVMSnapshotCommandWrapper extends CommandWrapper<C
|
|||
return new CreateVMSnapshotAnswer(command, false, "Creating VM Snapshot Failed due to can not find vm: " + vmName);
|
||||
}
|
||||
|
||||
// call Xenserver API
|
||||
// call XenServer API
|
||||
if (!snapshotMemory) {
|
||||
task = vm.snapshotAsync(conn, vmSnapshotName);
|
||||
} else {
|
||||
|
|
@ -136,7 +137,7 @@ public final class CitrixCreateVMSnapshotCommandWrapper extends CommandWrapper<C
|
|||
vmSnapshot = Types.toVM(ref);
|
||||
try {
|
||||
Thread.sleep(5000);
|
||||
} catch (final InterruptedException ex) {
|
||||
} catch (final InterruptedException ignored) {
|
||||
|
||||
}
|
||||
// calculate used capacity for this VM snapshot
|
||||
|
|
@ -144,7 +145,7 @@ public final class CitrixCreateVMSnapshotCommandWrapper extends CommandWrapper<C
|
|||
try {
|
||||
final long size = citrixResourceBase.getVMSnapshotChainSize(conn, volumeTo, command.getVmName(), vmSnapshotName);
|
||||
volumeTo.setSize(size);
|
||||
} catch (final CloudRuntimeException cre) {
|
||||
} catch (final CloudRuntimeException ignored) {
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -161,13 +162,13 @@ public final class CitrixCreateVMSnapshotCommandWrapper extends CommandWrapper<C
|
|||
} else {
|
||||
msg = e.toString();
|
||||
}
|
||||
logger.warn("Creating VM Snapshot " + command.getTarget().getSnapshotName() + " failed due to: " + msg, e);
|
||||
logger.warn("Creating VM Snapshot {} failed due to: {}", command.getTarget().getSnapshotName(), msg, e);
|
||||
return new CreateVMSnapshotAnswer(command, false, msg);
|
||||
} finally {
|
||||
try {
|
||||
if (!success) {
|
||||
if (vmSnapshot != null) {
|
||||
logger.debug("Delete existing VM Snapshot " + vmSnapshotName + " after making VolumeTO failed");
|
||||
logger.debug("Delete existing VM Snapshot {} after making VolumeTO failed", vmSnapshotName);
|
||||
final Set<VBD> vbds = vmSnapshot.getVBDs(conn);
|
||||
for (final VBD vbd : vbds) {
|
||||
final VBD.Record vbdr = vbd.getRecord(conn);
|
||||
|
|
@ -176,16 +177,14 @@ public final class CitrixCreateVMSnapshotCommandWrapper extends CommandWrapper<C
|
|||
vdi.destroy(conn);
|
||||
}
|
||||
}
|
||||
vmSnapshot.destroy(conn);
|
||||
citrixResourceBase.destroyVm(vmSnapshot, conn, true);
|
||||
}
|
||||
}
|
||||
if (vmState == VmPowerState.HALTED) {
|
||||
if (vm != null) {
|
||||
vm.destroy(conn);
|
||||
}
|
||||
if (vmState == VmPowerState.HALTED && vm != null) {
|
||||
citrixResourceBase.destroyVm(vm, conn);
|
||||
}
|
||||
} catch (final Exception e2) {
|
||||
logger.error("delete snapshot error due to " + e2.getMessage());
|
||||
logger.error("delete snapshot error due to {}", e2.getMessage());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -66,7 +66,7 @@ public final class CitrixDeleteVMSnapshotCommandWrapper extends CommandWrapper<D
|
|||
if (command.getTarget().getType() == VMSnapshot.Type.DiskAndMemory) {
|
||||
vdiList.add(snapshot.getSuspendVDI(conn));
|
||||
}
|
||||
snapshot.destroy(conn);
|
||||
citrixResourceBase.destroyVm(snapshot, conn, true);
|
||||
for (final VDI vdi : vdiList) {
|
||||
vdi.destroy(conn);
|
||||
}
|
||||
|
|
|
|||
|
|
@ -51,12 +51,12 @@ public final class CitrixRevertToVMSnapshotCommandWrapper extends CommandWrapper
|
|||
final VMSnapshot.Type vmSnapshotType = command.getTarget().getType();
|
||||
final Boolean snapshotMemory = vmSnapshotType == VMSnapshot.Type.DiskAndMemory;
|
||||
final Connection conn = citrixResourceBase.getConnection();
|
||||
PowerState vmState = null;
|
||||
VM vm = null;
|
||||
PowerState vmState;
|
||||
VM vm;
|
||||
try {
|
||||
|
||||
final Set<VM> vmSnapshots = VM.getByNameLabel(conn, command.getTarget().getSnapshotName());
|
||||
if (vmSnapshots == null || vmSnapshots.size() == 0) {
|
||||
if (vmSnapshots == null || vmSnapshots.isEmpty()) {
|
||||
return new RevertToVMSnapshotAnswer(command, false, "Cannot find vmSnapshot with name: " + command.getTarget().getSnapshotName());
|
||||
}
|
||||
|
||||
|
|
@ -66,6 +66,7 @@ public final class CitrixRevertToVMSnapshotCommandWrapper extends CommandWrapper
|
|||
try {
|
||||
vm = citrixResourceBase.getVM(conn, vmName);
|
||||
} catch (final Exception e) {
|
||||
logger.debug("Failed to find VM with name: {} due to:", vmName, e);
|
||||
vm = citrixResourceBase.createWorkingVM(conn, vmName, command.getGuestOSType(), command.getPlatformEmulator(), listVolumeTo);
|
||||
}
|
||||
|
||||
|
|
@ -77,7 +78,7 @@ public final class CitrixRevertToVMSnapshotCommandWrapper extends CommandWrapper
|
|||
citrixResourceBase.revertToSnapshot(conn, vmSnapshot, vmName, vm.getUuid(conn), snapshotMemory, citrixResourceBase.getHost().getUuid());
|
||||
vm = citrixResourceBase.getVM(conn, vmName);
|
||||
final Set<VBD> vbds = vm.getVBDs(conn);
|
||||
final Map<String, VDI> vdiMap = new HashMap<String, VDI>();
|
||||
final Map<String, VDI> vdiMap = new HashMap<>();
|
||||
// get vdi:vbdr to a map
|
||||
for (final VBD vbd : vbds) {
|
||||
final VBD.Record vbdr = vbd.getRecord(conn);
|
||||
|
|
@ -88,7 +89,7 @@ public final class CitrixRevertToVMSnapshotCommandWrapper extends CommandWrapper
|
|||
}
|
||||
|
||||
if (!snapshotMemory) {
|
||||
vm.destroy(conn);
|
||||
citrixResourceBase.destroyVm(vm, conn);
|
||||
vmState = PowerState.PowerOff;
|
||||
} else {
|
||||
vmState = PowerState.PowerOn;
|
||||
|
|
@ -103,7 +104,7 @@ public final class CitrixRevertToVMSnapshotCommandWrapper extends CommandWrapper
|
|||
|
||||
return new RevertToVMSnapshotAnswer(command, listVolumeTo, vmState);
|
||||
} catch (final Exception e) {
|
||||
logger.error("revert vm " + vmName + " to snapshot " + command.getTarget().getSnapshotName() + " failed due to " + e.getMessage());
|
||||
logger.error("revert vm {} to snapshot {} failed due to {}", vmName, command.getTarget().getSnapshotName(), e.getMessage());
|
||||
return new RevertToVMSnapshotAnswer(command, false, e.getMessage());
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -73,7 +73,7 @@ public final class CitrixStartCommandWrapper extends CommandWrapper<StartCommand
|
|||
for (final VM v : vms) {
|
||||
final VM.Record vRec = v.getRecord(conn);
|
||||
if (vRec.powerState == VmPowerState.HALTED) {
|
||||
v.destroy(conn);
|
||||
citrixResourceBase.destroyVm(v, conn, true);
|
||||
} else if (vRec.powerState == VmPowerState.RUNNING) {
|
||||
final String host = vRec.residentOn.getUuid(conn);
|
||||
final String msg = "VM " + vmName + " is runing on host " + host;
|
||||
|
|
|
|||
|
|
@ -141,7 +141,7 @@ public final class CitrixStopCommandWrapper extends CommandWrapper<StopCommand,
|
|||
for (final VIF vif : vifs) {
|
||||
networks.add(vif.getNetwork(conn));
|
||||
}
|
||||
vm.destroy(conn);
|
||||
citrixResourceBase.destroyVm(vm, conn);
|
||||
final SR sr = citrixResourceBase.getISOSRbyVmName(conn, command.getVmName(), false);
|
||||
citrixResourceBase.removeSR(conn, sr);
|
||||
final SR configDriveSR = citrixResourceBase.getISOSRbyVmName(conn, command.getVmName(), true);
|
||||
|
|
|
|||
|
|
@@ -18,6 +18,10 @@
package org.apache.cloudstack.maintenance;

public interface ManagementServerMaintenanceListener {
    void onManagementServerPreparingForMaintenance();

    void onManagementServerCancelPreparingForMaintenance();

    void onManagementServerMaintenance();

    void onManagementServerCancelMaintenance();
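A minimal sketch of a listener implementation, assuming registration through the maintenance manager's registerListener(...); the class name and the callback bodies are illustrative, the four methods are the interface members introduced above.

public class MaintenanceAwareComponent implements ManagementServerMaintenanceListener {
    @Override
    public void onManagementServerPreparingForMaintenance() {
        // stop taking on new work while agents are rebalanced away from this node
    }
    @Override
    public void onManagementServerCancelPreparingForMaintenance() {
        // roll back whatever preparation was started
    }
    @Override
    public void onManagementServerMaintenance() {
        // node is now in maintenance
    }
    @Override
    public void onManagementServerCancelMaintenance() {
        // resume normal operation
    }
}
// e.g. managementServerMaintenanceManager.registerListener(new MaintenanceAwareComponent());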
@@ -44,6 +44,10 @@ public interface ManagementServerMaintenanceManager {
    void unregisterListener(ManagementServerMaintenanceListener listener);

    void onPreparingForMaintenance();

    void onCancelPreparingForMaintenance();

    void onMaintenance();

    void onCancelMaintenance();
@ -53,6 +53,9 @@ import com.cloud.agent.api.Command;
|
|||
import com.cloud.cluster.ClusterManager;
|
||||
import com.cloud.cluster.ManagementServerHostVO;
|
||||
import com.cloud.cluster.dao.ManagementServerHostDao;
|
||||
import com.cloud.event.ActionEvent;
|
||||
import com.cloud.event.EventTypes;
|
||||
import com.cloud.host.HostVO;
|
||||
import com.cloud.host.dao.HostDao;
|
||||
import com.cloud.serializer.GsonHelper;
|
||||
import com.cloud.utils.StringUtils;
|
||||
|
|
@ -108,6 +111,25 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean stop() {
|
||||
ManagementServerHostVO msHost = msHostDao.findByMsid(ManagementServerNode.getManagementServerId());
|
||||
if (msHost != null) {
|
||||
updateLastManagementServerForHosts(msHost.getMsid());
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
private void updateLastManagementServerForHosts(long msId) {
|
||||
List<HostVO> hosts = hostDao.listHostsByMs(msId);
|
||||
for (HostVO host : hosts) {
|
||||
if (host != null) {
|
||||
host.setLastManagementServerId(msId);
|
||||
hostDao.update(host.getId(), host);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void registerListener(ManagementServerMaintenanceListener listener) {
|
||||
synchronized (_listeners) {
|
||||
|
|
@ -124,6 +146,26 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onPreparingForMaintenance() {
|
||||
synchronized (_listeners) {
|
||||
for (final ManagementServerMaintenanceListener listener : _listeners) {
|
||||
logger.info("Invoke, on preparing for maintenance for listener " + listener.getClass());
|
||||
listener.onManagementServerPreparingForMaintenance();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onCancelPreparingForMaintenance() {
|
||||
synchronized (_listeners) {
|
||||
for (final ManagementServerMaintenanceListener listener : _listeners) {
|
||||
logger.info("Invoke, on cancel preparing for maintenance for listener " + listener.getClass());
|
||||
listener.onManagementServerCancelPreparingForMaintenance();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onMaintenance() {
|
||||
synchronized (_listeners) {
|
||||
|
|
@ -243,6 +285,7 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
this.maintenanceStartTime = System.currentTimeMillis();
|
||||
this.lbAlgorithm = lbAlorithm;
|
||||
jobManager.disableAsyncJobs();
|
||||
onPreparingForMaintenance();
|
||||
waitForPendingJobs();
|
||||
}
|
||||
|
||||
|
|
@ -257,8 +300,13 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
jobManager.enableAsyncJobs();
|
||||
cancelWaitForPendingJobs();
|
||||
ManagementServerHostVO msHost = msHostDao.findByMsid(ManagementServerNode.getManagementServerId());
|
||||
if (msHost != null && State.Maintenance.equals(msHost.getState())) {
|
||||
onCancelMaintenance();
|
||||
if (msHost != null) {
|
||||
if (State.PreparingForMaintenance.equals(msHost.getState())) {
|
||||
onCancelPreparingForMaintenance();
|
||||
}
|
||||
if (State.Maintenance.equals(msHost.getState())) {
|
||||
onCancelMaintenance();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -284,6 +332,7 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
}
|
||||
|
||||
@Override
|
||||
@ActionEvent(eventType = EventTypes.EVENT_MS_SHUTDOWN_PREPARE, eventDescription = "preparing for shutdown")
|
||||
public ManagementServerMaintenanceResponse prepareForShutdown(PrepareForShutdownCmd cmd) {
|
||||
ManagementServerHostVO msHost = msHostDao.findById(cmd.getManagementServerId());
|
||||
if (msHost == null) {
|
||||
|
|
@ -294,19 +343,18 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
throw new CloudRuntimeException("Management server is not in the right state to prepare for shutdown");
|
||||
}
|
||||
|
||||
checkAnyMsInPreparingStates("prepare for shutdown");
|
||||
|
||||
final Command[] cmds = new Command[1];
|
||||
cmds[0] = new PrepareForShutdownManagementServerHostCommand(msHost.getMsid());
|
||||
String result = clusterManager.execute(String.valueOf(msHost.getMsid()), 0, gson.toJson(cmds), true);
|
||||
logger.info("PrepareForShutdownCmd result : " + result);
|
||||
if (!result.startsWith("Success")) {
|
||||
throw new CloudRuntimeException(result);
|
||||
}
|
||||
executeCmd(msHost, cmds);
|
||||
|
||||
msHostDao.updateState(msHost.getId(), State.PreparingForShutDown);
|
||||
return prepareMaintenanceResponse(cmd.getManagementServerId());
|
||||
}
|
||||
|
||||
@Override
|
||||
@ActionEvent(eventType = EventTypes.EVENT_MS_SHUTDOWN, eventDescription = "triggering shutdown")
|
||||
public ManagementServerMaintenanceResponse triggerShutdown(TriggerShutdownCmd cmd) {
|
||||
ManagementServerHostVO msHost = msHostDao.findById(cmd.getManagementServerId());
|
||||
if (msHost == null) {
|
||||
|
|
@ -319,22 +367,20 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
}
|
||||
|
||||
if (State.Up.equals(msHost.getState())) {
|
||||
checkAnyMsInPreparingStates("trigger shutdown");
|
||||
msHostDao.updateState(msHost.getId(), State.PreparingForShutDown);
|
||||
}
|
||||
|
||||
final Command[] cmds = new Command[1];
|
||||
cmds[0] = new TriggerShutdownManagementServerHostCommand(msHost.getMsid());
|
||||
String result = clusterManager.execute(String.valueOf(msHost.getMsid()), 0, gson.toJson(cmds), true);
|
||||
logger.info("TriggerShutdownCmd result : " + result);
|
||||
if (!result.startsWith("Success")) {
|
||||
throw new CloudRuntimeException(result);
|
||||
}
|
||||
executeCmd(msHost, cmds);
|
||||
|
||||
msHostDao.updateState(msHost.getId(), State.ShuttingDown);
|
||||
return prepareMaintenanceResponse(cmd.getManagementServerId());
|
||||
}
|
||||
|
||||
@Override
|
||||
@ActionEvent(eventType = EventTypes.EVENT_MS_SHUTDOWN_CANCEL, eventDescription = "cancelling shutdown")
|
||||
public ManagementServerMaintenanceResponse cancelShutdown(CancelShutdownCmd cmd) {
|
||||
ManagementServerHostVO msHost = msHostDao.findById(cmd.getManagementServerId());
|
||||
if (msHost == null) {
|
||||
|
|
@ -347,17 +393,14 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
|
||||
final Command[] cmds = new Command[1];
|
||||
cmds[0] = new CancelShutdownManagementServerHostCommand(msHost.getMsid());
|
||||
String result = clusterManager.execute(String.valueOf(msHost.getMsid()), 0, gson.toJson(cmds), true);
|
||||
logger.info("CancelShutdownCmd result : " + result);
|
||||
if (!result.startsWith("Success")) {
|
||||
throw new CloudRuntimeException(result);
|
||||
}
|
||||
executeCmd(msHost, cmds);
|
||||
|
||||
msHostDao.updateState(msHost.getId(), State.Up);
|
||||
return prepareMaintenanceResponse(cmd.getManagementServerId());
|
||||
}
|
||||
|
||||
@Override
|
||||
@ActionEvent(eventType = EventTypes.EVENT_MS_MAINTENANCE_PREPARE, eventDescription = "preparing for maintenance")
|
||||
public ManagementServerMaintenanceResponse prepareForMaintenance(PrepareForMaintenanceCmd cmd) {
|
||||
if (StringUtils.isNotBlank(cmd.getAlgorithm())) {
|
||||
indirectAgentLB.checkLBAlgorithmName(cmd.getAlgorithm());
|
||||
|
|
@ -381,10 +424,7 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
throw new CloudRuntimeException("Management server is not in the right state to prepare for maintenance");
|
||||
}
|
||||
|
||||
final List<ManagementServerHostVO> preparingForMaintenanceMsList = msHostDao.listBy(State.PreparingForMaintenance);
|
||||
if (CollectionUtils.isNotEmpty(preparingForMaintenanceMsList)) {
|
||||
throw new CloudRuntimeException("Cannot prepare for maintenance, there are other management servers preparing for maintenance");
|
||||
}
|
||||
checkAnyMsInPreparingStates("prepare for maintenance");
|
||||
|
||||
if (indirectAgentLB.haveAgentBasedHosts(msHost.getMsid())) {
|
||||
List<String> indirectAgentMsList = indirectAgentLB.getManagementServerList();
|
||||
|
|
@ -396,23 +436,16 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
|
|||
}
|
||||
}
|
||||
|
||||
List<String> lastAgents = hostDao.listByMs(cmd.getManagementServerId());
|
||||
agentMgr.setLastAgents(lastAgents);
|
||||
|
||||
final Command[] cmds = new Command[1];
|
||||
cmds[0] = new PrepareForMaintenanceManagementServerHostCommand(msHost.getMsid(), cmd.getAlgorithm());
|
||||
String result = clusterManager.execute(String.valueOf(msHost.getMsid()), 0, gson.toJson(cmds), true);
|
||||
logger.info("PrepareForMaintenanceCmd result : " + result);
|
||||
if (!result.startsWith("Success")) {
|
||||
agentMgr.setLastAgents(null);
|
||||
throw new CloudRuntimeException(result);
|
||||
}
|
||||
executeCmd(msHost, cmds);
|
||||
|
||||
msHostDao.updateState(msHost.getId(), State.PreparingForMaintenance);
|
||||
return prepareMaintenanceResponse(cmd.getManagementServerId());
|
||||
}
|
||||
|
||||
@Override
|
||||
@ActionEvent(eventType = EventTypes.EVENT_MS_MAINTENANCE_CANCEL, eventDescription = "cancelling maintenance")
public ManagementServerMaintenanceResponse cancelMaintenance(CancelMaintenanceCmd cmd) {
ManagementServerHostVO msHost = msHostDao.findById(cmd.getManagementServerId());
if (msHost == null) {

@@ -425,15 +458,29 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
final Command[] cmds = new Command[1];
cmds[0] = new CancelMaintenanceManagementServerHostCommand(msHost.getMsid());
String result = clusterManager.execute(String.valueOf(msHost.getMsid()), 0, gson.toJson(cmds), true);
logger.info("CancelMaintenanceCmd result : " + result);
executeCmd(msHost, cmds);

msHostDao.updateState(msHost.getId(), State.Up);
return prepareMaintenanceResponse(cmd.getManagementServerId());
}

private void executeCmd(ManagementServerHostVO msHost, Command[] cmds) {
if (msHost == null) {
throw new CloudRuntimeException("Management server node not specified, to execute the cmd");
}
if (cmds == null || cmds.length <= 0) {
throw new CloudRuntimeException(String.format("Cmd not specified, to execute on the management server node %s", msHost));
}
String result = clusterManager.execute(String.valueOf(msHost.getMsid()), 0, gson.toJson(cmds), false);
if (result == null) {
String msg = String.format("Unable to reach or execute %s on the management server node: %s", cmds[0], msHost);
logger.warn(msg);
throw new CloudRuntimeException(msg);
}
logger.info(String.format("Cmd %s - result: %s", cmds[0], result));
if (!result.startsWith("Success")) {
throw new CloudRuntimeException(result);
}

msHostDao.updateState(msHost.getId(), State.Up);
agentMgr.setLastAgents(null);
return prepareMaintenanceResponse(cmd.getManagementServerId());
}

@Override

@@ -445,9 +492,17 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
if (msHost == null) {
msHost = msHostDao.findByMsid(ManagementServerNode.getManagementServerId());
}
onCancelPreparingForMaintenance();
msHostDao.updateState(msHost.getId(), State.Up);
}

private void checkAnyMsInPreparingStates(String operation) {
final List<ManagementServerHostVO> preparingForMaintenanceOrShutDownMsList = msHostDao.listBy(State.PreparingForMaintenance, State.PreparingForShutDown);
if (CollectionUtils.isNotEmpty(preparingForMaintenanceOrShutDownMsList)) {
throw new CloudRuntimeException(String.format("Cannot %s, there are other management servers preparing for maintenance/shutdown", operation));
}
}

private ManagementServerMaintenanceResponse prepareMaintenanceResponse(Long managementServerId) {
ManagementServerHostVO msHost;
Long[] msIds;

@@ -465,8 +520,8 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
boolean maintenanceInitiatedForMS = Arrays.asList(maintenanceStates).contains(msHost.getState());
boolean shutdownTriggeredForMS = Arrays.asList(shutdownStates).contains(msHost.getState());
msIds = new Long[]{msHost.getMsid()};
List<String> agents = hostDao.listByMs(managementServerId);
long agentsCount = hostDao.countByMs(managementServerId);
List<String> agents = hostDao.listByMs(msHost.getMsid());
long agentsCount = agents.size();
long pendingJobCount = countPendingJobs(msIds);
return new ManagementServerMaintenanceResponse(msHost.getUuid(), msHost.getState(), maintenanceInitiatedForMS, shutdownTriggeredForMS, pendingJobCount == 0, pendingJobCount, agentsCount, agents);
}

@@ -535,7 +590,6 @@ public class ManagementServerMaintenanceManagerImpl extends ManagerBase implemen
// No more pending jobs. Good to terminate
if (managementServerMaintenanceManager.isShutdownTriggered()) {
logger.info("MS is Shutting Down Now");
// update state to down ?
System.exit(0);
}
if (managementServerMaintenanceManager.isPreparingForMaintenance()) {
@@ -17,7 +17,23 @@
package org.apache.cloudstack.maintenance;

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyBoolean;
import static org.mockito.ArgumentMatchers.anyLong;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;

import java.util.ArrayList;
import java.util.List;

import org.apache.cloudstack.agent.lb.IndirectAgentLB;
import org.apache.cloudstack.api.command.CancelMaintenanceCmd;
import org.apache.cloudstack.api.command.CancelShutdownCmd;
import org.apache.cloudstack.api.command.PrepareForMaintenanceCmd;
import org.apache.cloudstack.api.command.PrepareForShutdownCmd;
import org.apache.cloudstack.api.command.TriggerShutdownCmd;
import org.apache.cloudstack.framework.jobs.AsyncJobManager;
import org.apache.cloudstack.management.ManagementServerHost;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@@ -27,6 +43,11 @@ import org.mockito.Mockito;
import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;

import com.cloud.agent.AgentManager;
import com.cloud.cluster.ClusterManager;
import com.cloud.cluster.ManagementServerHostVO;
import com.cloud.cluster.dao.ManagementServerHostDao;
import com.cloud.host.dao.HostDao;
import com.cloud.utils.exception.CloudRuntimeException;

@@ -40,6 +61,21 @@ public class ManagementServerMaintenanceManagerImplTest {
@Mock
AsyncJobManager jobManagerMock;

@Mock
IndirectAgentLB indirectAgentLBMock;

@Mock
AgentManager agentManagerMock;

@Mock
ClusterManager clusterManagerMock;

@Mock
HostDao hostDao;

@Mock
ManagementServerHostDao msHostDao;

private long prepareCountPendingJobs() {
long expectedCount = 1L;
Mockito.doReturn(expectedCount).when(jobManagerMock).countPendingNonPseudoJobs(1L);

@@ -53,13 +89,6 @@ public class ManagementServerMaintenanceManagerImplTest {
Assert.assertEquals(expectedCount, count);
}

@Test
public void cancelShutdown() {
Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.cancelShutdown();
});
}

@Test
public void prepareForShutdown() {
Mockito.doNothing().when(jobManagerMock).disableAsyncJobs();

@@ -74,4 +103,463 @@ public class ManagementServerMaintenanceManagerImplTest {
spy.cancelShutdown();
Mockito.verify(jobManagerMock).enableAsyncJobs();
}

@Test
public void cancelShutdown() {
Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.cancelShutdown();
});
}

@Test
public void triggerShutdown() {
Mockito.doNothing().when(jobManagerMock).disableAsyncJobs();
Mockito.lenient().when(spy.isShutdownTriggered()).thenReturn(false);
spy.triggerShutdown();
Mockito.verify(jobManagerMock).disableAsyncJobs();

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.triggerShutdown();
});
}

@Test
public void prepareForShutdownCmdNoMsHost() {
Mockito.when(msHostDao.findById(1L)).thenReturn(null);
PrepareForShutdownCmd cmd = mock(PrepareForShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForShutdown(cmd);
});
}

@Test
public void prepareForShutdownCmdMsHostWithNonUpState() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.Maintenance);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
PrepareForShutdownCmd cmd = mock(PrepareForShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForShutdown(cmd);
});
}

@Test
public void prepareForShutdownCmdOtherMsHostsInPreparingState() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
Mockito.when(msHost1.getState()).thenReturn(ManagementServerHost.State.Up);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost2);
Mockito.when(msHostDao.listBy(any())).thenReturn(msHostList);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost1);
PrepareForShutdownCmd cmd = mock(PrepareForShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForShutdown(cmd);
});
}

@Test
public void prepareForShutdownCmdNullResponseFromClusterManager() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.Up);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
Mockito.when(msHostDao.listBy(any())).thenReturn(msHostList);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
PrepareForShutdownCmd cmd = mock(PrepareForShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn(null);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForShutdown(cmd);
});
}

@Test
public void prepareForShutdownCmdFailedResponseFromClusterManager() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.Up);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
Mockito.when(msHostDao.listBy(any())).thenReturn(msHostList);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
PrepareForShutdownCmd cmd = mock(PrepareForShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn("Failed");

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForShutdown(cmd);
});
}

@Test
public void prepareForShutdownCmdSuccessResponseFromClusterManager() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.Up);
Mockito.when(msHostDao.listBy(any())).thenReturn(new ArrayList<>());
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
Mockito.when(hostDao.listByMs(anyLong())).thenReturn(new ArrayList<>());
PrepareForShutdownCmd cmd = mock(PrepareForShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn("Success");

spy.prepareForShutdown(cmd);
Mockito.verify(clusterManagerMock, Mockito.times(1)).execute(anyString(), anyLong(), anyString(), anyBoolean());
}

@Test
public void cancelShutdownCmdNoMsHost() {
Mockito.when(msHostDao.findById(1L)).thenReturn(null);
CancelShutdownCmd cmd = mock(CancelShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.cancelShutdown(cmd);
});
}

@Test
public void cancelShutdownCmdMsHostNotInShutdownState() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.Up);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
CancelShutdownCmd cmd = mock(CancelShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.cancelShutdown(cmd);
});
}

@Test
public void cancelShutdownCmd() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.ReadyToShutDown);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
CancelShutdownCmd cmd = mock(CancelShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn("Success");

spy.cancelShutdown(cmd);
Mockito.verify(clusterManagerMock, Mockito.times(1)).execute(anyString(), anyLong(), anyString(), anyBoolean());
}

@Test
public void triggerShutdownCmdNoMsHost() {
Mockito.when(msHostDao.findById(1L)).thenReturn(null);
TriggerShutdownCmd cmd = mock(TriggerShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.triggerShutdown(cmd);
});
}

@Test
public void triggerShutdownCmdMsHostWithNotRightState() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.PreparingForMaintenance);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
TriggerShutdownCmd cmd = mock(TriggerShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.triggerShutdown(cmd);
});
}

@Test
public void triggerShutdownCmdMsInUpStateAndOtherMsHostsInPreparingState() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
Mockito.when(msHost1.getState()).thenReturn(ManagementServerHost.State.Up);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost2);
Mockito.when(msHostDao.listBy(any())).thenReturn(msHostList);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost1);
TriggerShutdownCmd cmd = mock(TriggerShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.triggerShutdown(cmd);
});
}

@Test
public void triggerShutdownCmd() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.ReadyToShutDown);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
TriggerShutdownCmd cmd = mock(TriggerShutdownCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn("Success");

spy.triggerShutdown(cmd);
Mockito.verify(clusterManagerMock, Mockito.times(1)).execute(anyString(), anyLong(), anyString(), anyBoolean());
}

@Test
public void prepareForMaintenanceAndCancelFromMaintenanceState() {
Mockito.doNothing().when(jobManagerMock).disableAsyncJobs();
spy.prepareForMaintenance("static");
Mockito.verify(jobManagerMock).disableAsyncJobs();

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance("static");
});

ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.Maintenance);
Mockito.when(msHostDao.findByMsid(anyLong())).thenReturn(msHost);
Mockito.doNothing().when(jobManagerMock).enableAsyncJobs();
spy.cancelMaintenance();
Mockito.verify(jobManagerMock).enableAsyncJobs();
Mockito.verify(spy, Mockito.times(1)).onCancelMaintenance();
}

@Test
public void prepareForMaintenanceAndCancelFromPreparingForMaintenanceState() {
Mockito.doNothing().when(jobManagerMock).disableAsyncJobs();
spy.prepareForMaintenance("static");
Mockito.verify(jobManagerMock).disableAsyncJobs();

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance("static");
});

ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.PreparingForMaintenance);
Mockito.when(msHostDao.findByMsid(anyLong())).thenReturn(msHost);
Mockito.doNothing().when(jobManagerMock).enableAsyncJobs();
spy.cancelMaintenance();
Mockito.verify(jobManagerMock).enableAsyncJobs();
Mockito.verify(spy, Mockito.times(1)).onCancelPreparingForMaintenance();
}

@Test
public void cancelMaintenance() {
Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.cancelMaintenance();
});
}

@Test
public void cancelPreparingForMaintenance() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHostDao.findByMsid(anyLong())).thenReturn(msHost);

spy.cancelPreparingForMaintenance(null);
Mockito.verify(jobManagerMock).enableAsyncJobs();
Mockito.verify(spy, Mockito.times(1)).onCancelPreparingForMaintenance();
}

@Test
public void prepareForMaintenanceCmdNoOtherMsHostsWithUpState() {
Mockito.when(msHostDao.listBy(any())).thenReturn(new ArrayList<>());
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getAlgorithm()).thenReturn("test algorithm");
Mockito.doNothing().when(indirectAgentLBMock).checkLBAlgorithmName(anyString());

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance(cmd);
});
}

@Test
public void prepareForMaintenanceCmdOnlyOneMsHostsWithUpState() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.Up)).thenReturn(msHostList);
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getAlgorithm()).thenReturn("test algorithm");
Mockito.doNothing().when(indirectAgentLBMock).checkLBAlgorithmName(anyString());

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance(cmd);
});
}

@Test
public void prepareForMaintenanceCmdNoMsHost() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost1);
msHostList.add(msHost2);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.Up)).thenReturn(msHostList);
Mockito.when(msHostDao.findById(1L)).thenReturn(null);
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance(cmd);
});
}

@Test
public void prepareForMaintenanceCmdMsHostWithNonUpState() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
Mockito.when(msHost1.getState()).thenReturn(ManagementServerHost.State.Maintenance);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost1);
msHostList.add(msHost2);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.Up)).thenReturn(msHostList);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost1);
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance(cmd);
});
}

@Test
public void prepareForMaintenanceCmdOtherMsHostsInPreparingState() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
Mockito.when(msHost1.getState()).thenReturn(ManagementServerHost.State.Up);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList1 = new ArrayList<>();
msHostList1.add(msHost1);
msHostList1.add(msHost2);
ManagementServerHostVO msHost3 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList2 = new ArrayList<>();
msHostList2.add(msHost3);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.Up)).thenReturn(msHostList1);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.PreparingForMaintenance, ManagementServerHost.State.PreparingForShutDown)).thenReturn(msHostList2);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost1);
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance(cmd);
});
}

@Test
public void prepareForMaintenanceCmdNoIndirectMsHosts() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
Mockito.when(msHost1.getState()).thenReturn(ManagementServerHost.State.Up);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost1);
msHostList.add(msHost2);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.Up)).thenReturn(msHostList);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.PreparingForMaintenance, ManagementServerHost.State.PreparingForShutDown)).thenReturn(new ArrayList<>());
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost1);
Mockito.when(msHostDao.listNonUpStateMsIPs()).thenReturn(new ArrayList<>());
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(indirectAgentLBMock.haveAgentBasedHosts(anyLong())).thenReturn(true);
Mockito.when(indirectAgentLBMock.getManagementServerList()).thenReturn(new ArrayList<>());

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance(cmd);
});
}

@Test
public void prepareForMaintenanceCmdNullResponseFromClusterManager() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
Mockito.when(msHost1.getState()).thenReturn(ManagementServerHost.State.Up);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost1);
msHostList.add(msHost2);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.Up)).thenReturn(msHostList);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.PreparingForMaintenance, ManagementServerHost.State.PreparingForShutDown)).thenReturn(new ArrayList<>());
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost1);
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(indirectAgentLBMock.haveAgentBasedHosts(anyLong())).thenReturn(false);
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn(null);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance(cmd);
});
}

@Test
public void prepareForMaintenanceCmdFailedResponseFromClusterManager() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
Mockito.when(msHost1.getState()).thenReturn(ManagementServerHost.State.Up);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost1);
msHostList.add(msHost2);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.Up)).thenReturn(msHostList);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.PreparingForMaintenance, ManagementServerHost.State.PreparingForShutDown)).thenReturn(new ArrayList<>());
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost1);
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(indirectAgentLBMock.haveAgentBasedHosts(anyLong())).thenReturn(false);
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn("Failed");

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.prepareForMaintenance(cmd);
});
}

@Test
public void prepareForMaintenanceCmdSuccessResponseFromClusterManager() {
ManagementServerHostVO msHost1 = mock(ManagementServerHostVO.class);
Mockito.when(msHost1.getState()).thenReturn(ManagementServerHost.State.Up);
ManagementServerHostVO msHost2 = mock(ManagementServerHostVO.class);
List<ManagementServerHostVO> msHostList = new ArrayList<>();
msHostList.add(msHost1);
msHostList.add(msHost2);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.Up)).thenReturn(msHostList);
Mockito.when(msHostDao.listBy(ManagementServerHost.State.PreparingForMaintenance, ManagementServerHost.State.PreparingForShutDown)).thenReturn(new ArrayList<>());
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost1);
PrepareForMaintenanceCmd cmd = mock(PrepareForMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(indirectAgentLBMock.haveAgentBasedHosts(anyLong())).thenReturn(false);
Mockito.when(hostDao.listByMs(anyLong())).thenReturn(new ArrayList<>());
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn("Success");

spy.prepareForMaintenance(cmd);
Mockito.verify(clusterManagerMock, Mockito.times(1)).execute(anyString(), anyLong(), anyString(), anyBoolean());
}

@Test
public void cancelMaintenanceCmdNoMsHost() {
Mockito.when(msHostDao.findById(1L)).thenReturn(null);
CancelMaintenanceCmd cmd = mock(CancelMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.cancelMaintenance(cmd);
});
}

@Test
public void cancelMaintenanceCmdMsHostNotInMaintenanceState() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.Up);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
CancelMaintenanceCmd cmd = mock(CancelMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);

Assert.assertThrows(CloudRuntimeException.class, () -> {
spy.cancelMaintenance(cmd);
});
}

@Test
public void cancelMaintenanceCmd() {
ManagementServerHostVO msHost = mock(ManagementServerHostVO.class);
Mockito.when(msHost.getState()).thenReturn(ManagementServerHost.State.Maintenance);
Mockito.when(msHostDao.findById(1L)).thenReturn(msHost);
CancelMaintenanceCmd cmd = mock(CancelMaintenanceCmd.class);
Mockito.when(cmd.getManagementServerId()).thenReturn(1L);
Mockito.when(clusterManagerMock.execute(anyString(), anyLong(), anyString(), anyBoolean())).thenReturn("Success");

spy.cancelMaintenance(cmd);
Mockito.verify(clusterManagerMock, Mockito.times(1)).execute(anyString(), anyLong(), anyString(), anyBoolean());
}
}
@@ -189,13 +189,6 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
super();
}

private Double findRatioValue(final String value) {
if (value != null) {
return Double.valueOf(value);
}
return 1.0;
}

private void updateHostMetrics(final HostMetrics hostMetrics, final HostJoinVO host) {
hostMetrics.addCpuAllocated(host.getCpuReservedCapacity() + host.getCpuUsedCapacity());
hostMetrics.addMemoryAllocated(host.getMemReservedCapacity() + host.getMemUsedCapacity());

@@ -767,14 +760,10 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
if (AllowListMetricsComputation.value()) {
List<Ternary<Long, Long, Long>> cpuList = new ArrayList<>();
List<Ternary<Long, Long, Long>> memoryList = new ArrayList<>();
for (final Host host : hostDao.findByClusterId(clusterId)) {
if (host == null || host.getType() != Host.Type.Routing) {
continue;
}
updateHostMetrics(hostMetrics, hostJoinDao.findById(host.getId()));
HostJoinVO hostJoin = hostJoinDao.findById(host.getId());
cpuList.add(new Ternary<>(hostJoin.getCpuUsedCapacity(), hostJoin.getCpuReservedCapacity(), hostJoin.getCpus() * hostJoin.getSpeed()));
memoryList.add(new Ternary<>(hostJoin.getMemUsedCapacity(), hostJoin.getMemReservedCapacity(), hostJoin.getTotalMemory()));
for (final HostJoinVO host : hostJoinDao.findByClusterId(clusterId, Host.Type.Routing)) {
updateHostMetrics(hostMetrics, host);
cpuList.add(new Ternary<>(host.getCpuUsedCapacity(), host.getCpuReservedCapacity(), host.getCpus() * host.getSpeed()));
memoryList.add(new Ternary<>(host.getMemUsedCapacity(), host.getMemReservedCapacity(), host.getTotalMemory()));
}
try {
Double imbalance = ClusterDrsAlgorithm.getClusterImbalance(clusterId, cpuList, memoryList, null);

@@ -955,11 +944,8 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
if (cluster == null) {
continue;
}
for (final Host host: hostDao.findByClusterId(cluster.getId())) {
if (host == null || host.getType() != Host.Type.Routing) {
continue;
}
updateHostMetrics(hostMetrics, hostJoinDao.findById(host.getId()));
for (final HostJoinVO host: hostJoinDao.findByClusterId(cluster.getId(), Host.Type.Routing)) {
updateHostMetrics(hostMetrics, host);
}
}
} else {
@@ -31,11 +31,11 @@ public class ManagementServerMetricsResponse extends ManagementServerResponse {
private Integer availableProcessors;

@SerializedName(MetricConstants.LAST_AGENTS)
@Param(description = "the last agents this Management Server is responsible for, before preparing for maintenance", since = "4.18.1")
@Param(description = "the last agents this Management Server is responsible for, before shutdown or preparing for maintenance", since = "4.21.0.0")
private List<String> lastAgents;

@SerializedName(MetricConstants.AGENTS)
@Param(description = "the agents this Management Server is responsible for", since = "4.18.1")
@Param(description = "the agents this Management Server is responsible for", since = "4.21.0.0")
private List<String> agents;

@SerializedName(MetricConstants.AGENT_COUNT)
@@ -34,7 +34,7 @@ name=storage-volume-<providername>
parent=storage
```
### Spring Bean Context Configuration
This provides instructions of which provider implementation class to load when the Spring bean initilization is running.
This provides instructions of which provider implementation class to load when the Spring bean initialization is running.
```
<!-- resources/META-INF/cloudstack/storage-volume-<providername>/spring-storage-volume-<providername>-context.xml -->
<beans xmlns="http://www.springframework.org/schema/beans"
@@ -5,6 +5,12 @@ All notable changes to Linstor CloudStack plugin will be documented in this file
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2025-03-13]

### Fixed

- Implemented missing delete datastore, to correctly cleanup on datastore removal

## [2025-02-21]

### Fixed
@@ -286,7 +286,10 @@ public class LinstorPrimaryDataStoreLifeCycleImpl extends BasePrimaryDataStoreLi
@Override
public boolean deleteDataStore(DataStore store) {
return dataStoreHelper.deletePrimaryDataStore(store);
if (cleanupDatastore(store)) {
return dataStoreHelper.deletePrimaryDataStore(store);
}
return false;
}

/* (non-Javadoc)
@@ -18,7 +18,6 @@
*/
package org.apache.cloudstack.storage.datastore.lifecycle;

import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URLDecoder;

@@ -30,6 +29,7 @@ import java.util.UUID;
import javax.inject.Inject;

import com.cloud.utils.StringUtils;
import org.apache.cloudstack.engine.subsystem.api.storage.ClusterScope;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.HostScope;

@@ -48,8 +48,6 @@ import org.apache.cloudstack.storage.volume.datastore.PrimaryDataStoreHelper;
import org.apache.commons.collections.CollectionUtils;

import com.cloud.agent.AgentManager;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.DeleteStoragePoolCommand;
import com.cloud.agent.api.StoragePoolInfo;
import com.cloud.capacity.CapacityManager;
import com.cloud.dc.ClusterVO;

@@ -63,9 +61,6 @@ import com.cloud.storage.Storage;
import com.cloud.storage.StorageManager;
import com.cloud.storage.StoragePool;
import com.cloud.storage.StoragePoolAutomation;
import com.cloud.storage.StoragePoolHostVO;
import com.cloud.storage.VMTemplateStoragePoolVO;
import com.cloud.storage.VMTemplateStorageResourceAssoc;
import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.template.TemplateManager;
import com.cloud.utils.UriUtils;

@@ -111,7 +106,7 @@ public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCy
List<org.apache.cloudstack.storage.datastore.api.StoragePool> storagePools = client.listStoragePools();
for (org.apache.cloudstack.storage.datastore.api.StoragePool pool : storagePools) {
if (pool.getName().equals(storagePoolName)) {
logger.info("Found PowerFlex storage pool: " + storagePoolName);
logger.info("Found PowerFlex storage pool: {}", storagePoolName);
final org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics poolStatistics = client.getStoragePoolStatistics(pool.getId());
pool.setStatistics(poolStatistics);

@@ -164,7 +159,7 @@ public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCy
throw new CloudRuntimeException("Cluster Id must also be specified when the Pod Id is specified for Cluster-wide primary storage.");
}

URI uri = null;
URI uri;
try {
uri = new URI(UriUtils.encodeURIComponent(url));
if (uri.getScheme() == null || !uri.getScheme().equalsIgnoreCase("powerflex")) {

@@ -174,12 +169,8 @@ public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCy
throw new InvalidParameterValueException(url + " is not a valid uri");
}

String storagePoolName = null;
try {
storagePoolName = URLDecoder.decode(uri.getPath(), "UTF-8");
} catch (UnsupportedEncodingException e) {
logger.error("[ignored] we are on a platform not supporting \"UTF-8\"!?!", e);
}
String storagePoolName;
storagePoolName = URLDecoder.decode(uri.getPath(), StringUtils.getPreferredCharset());
if (storagePoolName == null) { // if decoding fails, use getPath() anyway
storagePoolName = uri.getPath();
}

@@ -187,7 +178,7 @@ public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCy
final String storageHost = uri.getHost();
final int port = uri.getPort();
String gatewayApiURL = null;
String gatewayApiURL;
if (port == -1) {
gatewayApiURL = String.format("https://%s/api", storageHost);
} else {

@@ -321,37 +312,11 @@ public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCy
@Override
public boolean deleteDataStore(DataStore dataStore) {
StoragePool storagePool = (StoragePool)dataStore;
StoragePoolVO storagePoolVO = primaryDataStoreDao.findById(storagePool.getId());
if (storagePoolVO == null) {
return false;
if (cleanupDatastore(dataStore)) {
ScaleIOGatewayClientConnectionPool.getInstance().removeClient(dataStore);
return dataStoreHelper.deletePrimaryDataStore(dataStore);
}

List<VMTemplateStoragePoolVO> unusedTemplatesInPool = templateMgr.getUnusedTemplatesInPool(storagePoolVO);
for (VMTemplateStoragePoolVO templatePoolVO : unusedTemplatesInPool) {
if (templatePoolVO.getDownloadState() == VMTemplateStorageResourceAssoc.Status.DOWNLOADED) {
templateMgr.evictTemplateFromStoragePool(templatePoolVO);
}
}

List<StoragePoolHostVO> poolHostVOs = storagePoolHostDao.listByPoolId(dataStore.getId());
for (StoragePoolHostVO poolHostVO : poolHostVOs) {
DeleteStoragePoolCommand deleteStoragePoolCommand = new DeleteStoragePoolCommand(storagePool);
final Answer answer = agentMgr.easySend(poolHostVO.getHostId(), deleteStoragePoolCommand);
if (answer != null && answer.getResult()) {
logger.info("Successfully deleted storage pool: {} from host: {}", storagePool, poolHostVO.getHostId());
} else {
if (answer != null) {
logger.error("Failed to delete storage pool: {} from host: {} , result: {}", storagePool, poolHostVO.getHostId(), answer.getResult());
} else {
logger.error("Failed to delete storage pool: {} from host: {}", storagePool, poolHostVO.getHostId());
}
}
}

ScaleIOGatewayClientConnectionPool.getInstance().removeClient(dataStore);

return dataStoreHelper.deletePrimaryDataStore(dataStore);
return false;
}

@Override
@@ -39,7 +39,7 @@ independent parts:
* ./src/com/... directory tree: agent related classes and commands send from management to agent
* ./src/org/... directory tree: management related classes

The plugin is intended to be self contained and non-intrusive, thus ideally deploying it would consist of only
The plugin is intended to be self-contained and non-intrusive, thus ideally deploying it would consist of only
dropping the jar file into the appropriate places. This is the reason why all StorPool related communication
(ex. data copying, volume resize) is done with StorPool specific commands even when there is a CloudStack command
that does pretty much the same.

@@ -183,7 +183,7 @@ This storage tag may be used later, when defining service or disk offerings.
<td>takeSnapshot + copyAsync (S => S)</td>
</tr>
<tr>
<td>Create volume from snapshoot</td>
<td>Create volume from snapshot</td>
<td>create volume from snapshot</td>
<td>management + agent(?)</td>
<td>copyAsync (S => V)</td>

@@ -279,7 +279,7 @@ In this case only snapshots won't be downloaded to secondary storage.

#### If bypass option is enabled

The snapshot exists only on PRIMARY (StorPool) storage. From this snapshot it will be created a template on SECONADRY.
The snapshot exists only on PRIMARY (StorPool) storage. From this snapshot it will be created a template on SECONDARY.

#### If bypass option is disabled

@@ -290,7 +290,7 @@ This is independent of StorPool as snapshots exist on secondary.
### Creating ROOT volume from templates

When creating the first volume based on the given template, if snapshot of the template does not exists on StorPool it will be first downloaded (cached) to PRIMARY storage.
This is mapped to a StorPool snapshot so, creating succecutive volumes from the same template does not incur additional
This is mapped to a StorPool snapshot so, creating successive volumes from the same template does not incur additional
copying of data to PRIMARY storage.

This cached snapshot is garbage collected when the original template is deleted from CloudStack. This cleanup is done
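Editorial aside, not part of the patch above: the first-use caching that this README hunk describes can be pictured with a minimal Java sketch. The class and method names below (TemplateSnapshotCacheSketch, downloadTemplateToPrimary, createVolumeFromSnapshot) are hypothetical placeholders assuming an in-memory map keyed by template id; the real plugin tracks this state elsewhere.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of "download the template to PRIMARY once, then reuse the cached snapshot".
class TemplateSnapshotCacheSketch {
    // templateId -> name of the snapshot cached on PRIMARY storage
    private final Map<Long, String> cachedSnapshots = new ConcurrentHashMap<>();

    String volumeFromTemplate(long templateId) {
        // First request copies the template to PRIMARY and records the snapshot;
        // later requests reuse that snapshot, so no additional copy happens.
        String snapshot = cachedSnapshots.computeIfAbsent(templateId, this::downloadTemplateToPrimary);
        return createVolumeFromSnapshot(snapshot);
    }

    void onTemplateDeleted(long templateId) {
        // Garbage collection: drop the cached snapshot when the template is deleted.
        cachedSnapshots.remove(templateId);
    }

    private String downloadTemplateToPrimary(long templateId) {
        return "template-snapshot-" + templateId; // placeholder for the real copy operation
    }

    private String createVolumeFromSnapshot(String snapshot) {
        return "volume-from-" + snapshot; // placeholder for the real volume creation
    }
}
```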
@@ -114,11 +114,7 @@ public final class StorPoolDownloadVolumeCommandWrapper extends CommandWrapper<S
if (isRBDPool) {
KVMStoragePool srcPool = srcDisk.getPool();
String rbdDestPath = srcPool.getSourceDir() + "/" + srcDisk.getName();
srcPath = KVMPhysicalDisk.RBDStringBuilder(srcPool.getSourceHost(),
srcPool.getSourcePort(),
srcPool.getAuthUserName(),
srcPool.getAuthSecret(),
rbdDestPath);
srcPath = KVMPhysicalDisk.RBDStringBuilder(srcPool, rbdDestPath);
} else {
srcPath = srcDisk.getPath();
}
pom.xml

@@ -50,7 +50,7 @@
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<project.systemvm.template.location>https://download.cloudstack.org/systemvm</project.systemvm.template.location>
<project.systemvm.template.version>4.20.0.0</project.systemvm.template.version>
<project.systemvm.template.version>4.20.1.0</project.systemvm.template.version>
<sonar.organization>apache</sonar.organization>
<sonar.host.url>https://sonarcloud.io</sonar.host.url>
@@ -39,11 +39,11 @@ class sysConfigAgentFactory:
return sysConfigAgentUbuntu(glbEnv)
elif distribution == "CentOS" or distribution == "RHEL5":
return sysConfigEL5(glbEnv)
elif distribution == "Fedora" or distribution == "RHEL6":
elif distribution == "RHEL6":
return sysConfigEL6(glbEnv)
elif distribution == "RHEL7":
return sysConfigEL7(glbEnv)
elif distribution in ["RHEL8", "RHEL9"]:
elif distribution in ["Fedora", "RHEL8", "RHEL9", "RHEL10"]:
return sysConfigEL(glbEnv)
elif distribution == "SUSE":
return sysConfigSUSE(glbEnv)

@@ -183,9 +183,10 @@ class sysConfigEL5(sysConfigAgentRedhatBase):
networkConfigRedhat(self),
libvirtConfigRedhat(self),
firewallConfigAgent(self),
nfsConfig(self),
cloudAgentConfig(self)]

#it covers RHEL6/Fedora13/Fedora14
#it covers RHEL6
class sysConfigEL6(sysConfigAgentRedhatBase):
def __init__(self, glbEnv):
super(sysConfigEL6, self).__init__(glbEnv)

@@ -124,6 +124,10 @@ class Distribution:
version.find("Red Hat Enterprise Linux release 9") != -1 or version.find("Linux release 9.") != -1 or
version.find("Linux release 9") != -1):
self.distro = "RHEL9"
elif (version.find("Red Hat Enterprise Linux Server release 10") != -1 or version.find("Scientific Linux release 10") != -1 or
version.find("Red Hat Enterprise Linux release 10") != -1 or version.find("Linux release 10.") != -1 or
version.find("Linux release 10") != -1):
self.distro = "RHEL10"
elif version.find("CentOS") != -1:
self.distro = "CentOS"
else:
@@ -99,7 +99,7 @@ if [[ -f $destdir/template.properties ]]; then
failed 2 "Data already exists at destination $destdir"
fi

destfiles=$(find $destdir -name \*.$ext)
destfiles=$(sudo find $destdir -name \*.$ext)
if [[ "$destfiles" != "" ]]; then
failed 2 "Data already exists at destination $destdir"
fi

@@ -108,12 +108,12 @@ tmpfolder=/tmp/cloud/templates/
mkdir -p $tmpfolder
tmplfile=$tmpfolder/$localfile

sudo touch $tmplfile
touch $tmplfile
if [[ $? -ne 0 ]]; then
failed 2 "Failed to create temporary file in directory $tmpfolder -- is it read-only or full?\n"
fi

destcap=$(df -P $destdir | awk '{print $4}' | tail -1 )
destcap=$(sudo df -P $destdir | awk '{print $4}' | tail -1 )
[ $destcap -lt $DISKSPACE ] && echo "Insufficient free disk space for target folder $destdir: avail=${destcap}k req=${DISKSPACE}k" && failed 4

localcap=$(df -P $tmpfolder | awk '{print $4}' | tail -1 )

@@ -146,9 +146,9 @@ fi

tmpltfile=$destdir/$localfile
tmpltsize=$(ls -l $tmpltfile | awk -F" " '{print $5}')
tmpltsize=$(sudo ls -l $tmpltfile | awk -F" " '{print $5}')
if [[ "$ext" == "qcow2" ]]; then
vrtmpltsize=$($qemuimgcmd info $tmpltfile | grep -i 'virtual size' | sed -ne 's/.*(\([0-9]*\).*/\1/p' | xargs)
vrtmpltsize=$(sudo $qemuimgcmd info $tmpltfile | grep -i 'virtual size' | sed -ne 's/.*(\([0-9]*\).*/\1/p' | xargs)
else
vrtmpltsize=$tmpltsize
fi
@@ -18,7 +18,7 @@
# The CloudStack management server needs sudo permissions
# without a password.

Cmnd_Alias CLOUDSTACK = /bin/mkdir, /bin/mount, /bin/umount, /bin/cp, /bin/chmod, /usr/bin/keytool, /bin/keytool, /bin/touch
Cmnd_Alias CLOUDSTACK = /bin/mkdir, /bin/mount, /bin/umount, /bin/cp, /bin/chmod, /usr/bin/keytool, /bin/keytool, /bin/touch, /bin/find, /bin/df, /bin/ls, /bin/qemu-img

Defaults:@MSUSER@ !requiretty
@@ -94,7 +94,7 @@ public class ApiDispatcher {
if (asyncJobManager.isAsyncJobsEnabled()) {
asyncCreationDispatchChain.dispatch(new DispatchTask(cmd, params));
} else {
throw new CloudRuntimeException("Maintenance or Shutdown has been initiated on this management server. Can not accept new jobs");
throw new CloudRuntimeException("Maintenance or Shutdown has been initiated on this management server. Can not accept new async creation jobs");
}
}
@@ -750,6 +750,11 @@ public class ApiServer extends ManagerBase implements HttpRequestHandler, ApiSer
// BaseAsyncCreateCmd: cmd params are processed and create() is called, then same workflow as BaseAsyncCmd.
// BaseAsyncCmd: cmd is processed and submitted as an AsyncJob, job related info is serialized and returned.
if (cmdObj instanceof BaseAsyncCmd) {
if (!asyncMgr.isAsyncJobsEnabled()) {
String msg = "Maintenance or Shutdown has been initiated on this management server. Can not accept new jobs";
logger.warn(msg);
throw new ServerApiException(ApiErrorCode.SERVICE_UNAVAILABLE, msg);
}
Long objectId = null;
String objectUuid = null;
if (cmdObj instanceof BaseAsyncCreateCmd) {
@@ -165,6 +165,7 @@ import org.apache.cloudstack.storage.datastore.db.TemplateDataStoreDao;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.lang3.EnumUtils;
import org.apache.commons.lang3.ObjectUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Component;

@@ -1305,6 +1306,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
Long storageId = null;
StoragePoolVO pool = null;
Long userId = cmd.getUserId();
Long userdataId = cmd.getUserdataId();
Map<String, String> tags = cmd.getTags();

boolean isAdmin = false;

@@ -1377,6 +1379,10 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
userVmSearchBuilder.and("templateId", userVmSearchBuilder.entity().getTemplateId(), Op.EQ);
}

if (userdataId != null) {
userVmSearchBuilder.and("userdataId", userVmSearchBuilder.entity().getUserDataId(), Op.EQ);
}

if (hypervisor != null) {
userVmSearchBuilder.and("hypervisorType", userVmSearchBuilder.entity().getHypervisorType(), Op.EQ);
}

@@ -1569,6 +1575,10 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
userVmSearchCriteria.setParameters("templateId", templateId);
}

if (userdataId != null) {
userVmSearchCriteria.setParameters("userdataId", userdataId);
}

if (display != null) {
userVmSearchCriteria.setParameters("display", display);
}
@@ -3149,28 +3159,41 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
List<StoragePoolResponse> poolResponses = ViewResponseHelper.createStoragePoolResponse(getCustomStats, storagePools.first().toArray(new StoragePoolJoinVO[storagePools.first().size()]));
Map<String, Long> poolUuidToIdMap = storagePools.first().stream().collect(Collectors.toMap(StoragePoolJoinVO::getUuid, StoragePoolJoinVO::getId, (a, b) -> a));
for (StoragePoolResponse poolResponse : poolResponses) {
Long poolId = poolUuidToIdMap.get(poolResponse.getId());
DataStore store = dataStoreManager.getPrimaryDataStore(poolResponse.getId());

if (store != null) {
DataStoreDriver driver = store.getDriver();
if (driver != null && driver.getCapabilities() != null) {
Map<String, String> caps = driver.getCapabilities();
if (Storage.StoragePoolType.NetworkFilesystem.toString().equals(poolResponse.getType()) &&
HypervisorType.VMware.toString().equals(poolResponse.getHypervisor())) {
StoragePoolDetailVO detail = _storagePoolDetailsDao.findDetail(poolUuidToIdMap.get(poolResponse.getId()), Storage.Capability.HARDWARE_ACCELERATION.toString());
if (detail != null) {
caps.put(Storage.Capability.HARDWARE_ACCELERATION.toString(), detail.getValue());
}
}
poolResponse.setCaps(caps);
}
addPoolDetailsAndCapabilities(poolResponse, store, poolId);
}
setPoolResponseNFSMountOptions(poolResponse, poolUuidToIdMap.get(poolResponse.getId()));

setPoolResponseNFSMountOptions(poolResponse, poolId);
}

response.setResponses(poolResponses, storagePools.second());
return response;
}

private void addPoolDetailsAndCapabilities(StoragePoolResponse poolResponse, DataStore store, Long poolId) {
Map<String, String> details = _storagePoolDetailsDao.listDetailsKeyPairs(store.getId(), true);
poolResponse.setDetails(details);

DataStoreDriver driver = store.getDriver();
if (ObjectUtils.anyNull(driver, driver.getCapabilities())) {
return;
}

Map<String, String> caps = driver.getCapabilities();
if (Storage.StoragePoolType.NetworkFilesystem.toString().equals(poolResponse.getType()) && HypervisorType.VMware.toString().equals(poolResponse.getHypervisor())) {
StoragePoolDetailVO detail = _storagePoolDetailsDao.findDetail(poolId, Storage.Capability.HARDWARE_ACCELERATION.toString());
if (detail != null) {
caps.put(Storage.Capability.HARDWARE_ACCELERATION.toString(), detail.getValue());
}
}
poolResponse.setCaps(caps);
}

private Pair<List<StoragePoolJoinVO>, Integer> searchForStoragePoolsInternal(ListStoragePoolsCmd cmd) {
ScopeType scopeType = ScopeType.validateAndGetScopeType(cmd.getScope());
StoragePoolStatus status = StoragePoolStatus.validateAndGetStatus(cmd.getStatus());
@@ -5441,7 +5464,11 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
mgmtResponse.addPeer(createPeerManagementServerNodeResponse(peer));
}
}
mgmtResponse.setAgentsCount((long) hostDao.countByMs(mgmt.getMsid()));
List<String> lastAgents = hostDao.listByLastMs(mgmt.getMsid());
mgmtResponse.setLastAgents(lastAgents);
List<String> agents = hostDao.listByMs(mgmt.getMsid());
mgmtResponse.setAgents(agents);
mgmtResponse.setAgentsCount((long) agents.size());
mgmtResponse.setPendingJobsCount(jobManager.countPendingNonPseudoJobs(mgmt.getMsid()));
mgmtResponse.setIpAddress(mgmt.getServiceIP());
mgmtResponse.setObjectName("managementserver");
@@ -446,7 +446,9 @@ public class NetworkHelperImpl implements NetworkHelper {
final int retryIndex = 5;
final ExcludeList[] avoids = new ExcludeList[5];
avoids[0] = new ExcludeList();
avoids[0].addPod(routerToBeAvoid.getPodIdToDeployIn());
if (routerToBeAvoid.getPodIdToDeployIn() != null) {
avoids[0].addPod(routerToBeAvoid.getPodIdToDeployIn());
}
avoids[1] = new ExcludeList();
avoids[1].addCluster(_hostDao.findById(routerToBeAvoid.getHostId()).getClusterId());
avoids[2] = new ExcludeList();
@ -60,9 +60,10 @@ import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
|
|||
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
|
||||
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
|
||||
import org.apache.cloudstack.utils.identity.ManagementServerNode;
|
||||
|
||||
import org.apache.commons.collections.CollectionUtils;
|
||||
import org.apache.commons.lang.ObjectUtils;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
|
||||
import org.springframework.stereotype.Component;
|
||||
|
||||
import com.cloud.agent.AgentManager;
|
||||
|
|
@ -92,7 +93,6 @@ import com.cloud.capacity.CapacityVO;
|
|||
import com.cloud.capacity.dao.CapacityDao;
|
||||
import com.cloud.cluster.ClusterManager;
|
||||
import com.cloud.configuration.Config;
|
||||
import com.cloud.configuration.ConfigurationManager;
|
||||
import com.cloud.cpu.CPU;
|
||||
import com.cloud.dc.ClusterDetailsDao;
|
||||
import com.cloud.dc.ClusterDetailsVO;
|
||||
|
|
@ -125,7 +125,6 @@ import com.cloud.exception.DiscoveryException;
|
|||
import com.cloud.exception.InsufficientServerCapacityException;
|
||||
import com.cloud.exception.InvalidParameterValueException;
|
||||
import com.cloud.exception.PermissionDeniedException;
|
||||
import com.cloud.exception.ResourceInUseException;
|
||||
import com.cloud.exception.ResourceUnavailableException;
|
||||
import com.cloud.exception.StorageConflictException;
|
||||
import com.cloud.exception.StorageUnavailableException;
|
||||
|
|
@ -170,7 +169,6 @@ import com.cloud.storage.StorageService;
|
|||
import com.cloud.storage.VMTemplateVO;
|
||||
import com.cloud.storage.Volume;
|
||||
import com.cloud.storage.VolumeVO;
|
||||
import com.cloud.storage.dao.DiskOfferingDao;
|
||||
import com.cloud.storage.dao.GuestOSCategoryDao;
|
||||
import com.cloud.storage.dao.StoragePoolHostDao;
|
||||
import com.cloud.storage.dao.VMTemplateDao;
|
||||
|
|
@ -203,6 +201,7 @@ import com.cloud.utils.net.NetUtils;
|
|||
import com.cloud.utils.ssh.SSHCmdHelper;
|
||||
import com.cloud.utils.ssh.SshException;
|
||||
import com.cloud.vm.UserVmManager;
|
||||
import com.cloud.utils.StringUtils;
|
||||
import com.cloud.vm.VMInstanceVO;
|
||||
import com.cloud.vm.VirtualMachine;
|
||||
import com.cloud.vm.VirtualMachine.State;
|
||||
|
|
@ -236,8 +235,6 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
@Inject
|
||||
private CapacityDao _capacityDao;
|
||||
@Inject
|
||||
private DiskOfferingDao diskOfferingDao;
|
||||
@Inject
|
||||
private ServiceOfferingDao serviceOfferingDao;
|
||||
@Inject
|
||||
private HostDao _hostDao;
|
||||
|
|
@ -296,8 +293,6 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
@Inject
|
||||
private VMTemplateDao _templateDao;
|
||||
@Inject
|
||||
private ConfigurationManager _configMgr;
|
||||
@Inject
|
||||
private ClusterVSMMapDao _clusterVSMMapDao;
|
||||
@Inject
|
||||
private UserVmDetailsDao userVmDetailsDao;
|
||||
|
|
@ -312,9 +307,9 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
|
||||
private final long _nodeId = ManagementServerNode.getManagementServerId();
|
||||
|
||||
private final HashMap<String, ResourceStateAdapter> _resourceStateAdapters = new HashMap<String, ResourceStateAdapter>();
|
||||
private final HashMap<String, ResourceStateAdapter> _resourceStateAdapters = new HashMap<>();
|
||||
|
||||
private final HashMap<Integer, List<ResourceListener>> _lifeCycleListeners = new HashMap<Integer, List<ResourceListener>>();
|
||||
private final HashMap<Integer, List<ResourceListener>> _lifeCycleListeners = new HashMap<>();
|
||||
private HypervisorType _defaultSystemVMHypervisor;
|
||||
|
||||
private static final int ACQUIRE_GLOBAL_LOCK_TIMEOUT_FOR_COOPERATION = 30; // seconds
|
||||
|
|
@@ -324,11 +319,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
private SearchBuilder<HostGpuGroupsVO> _gpuAvailability;

private void insertListener(final Integer event, final ResourceListener listener) {
List<ResourceListener> lst = _lifeCycleListeners.get(event);
if (lst == null) {
lst = new ArrayList<ResourceListener>();
_lifeCycleListeners.put(event, lst);
}
List<ResourceListener> lst = _lifeCycleListeners.computeIfAbsent(event, k -> new ArrayList<>());

if (lst.contains(listener)) {
throw new CloudRuntimeException("Duplicate resource lisener:" + listener.getClass().getSimpleName());

@@ -370,9 +361,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
@Override
public void unregisterResourceEvent(final ResourceListener listener) {
synchronized (_lifeCycleListeners) {
final Iterator it = _lifeCycleListeners.entrySet().iterator();
while (it.hasNext()) {
final Map.Entry<Integer, List<ResourceListener>> items = (Map.Entry<Integer, List<ResourceListener>>)it.next();
for (Map.Entry<Integer, List<ResourceListener>> items : _lifeCycleListeners.entrySet()) {
final List<ResourceListener> lst = items.getValue();
lst.remove(listener);
}

@@ -381,7 +370,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
protected void processResourceEvent(final Integer event, final Object... params) {
final List<ResourceListener> lst = _lifeCycleListeners.get(event);
if (lst == null || lst.size() == 0) {
if (lst == null || lst.isEmpty()) {
return;
}
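Note on the hunk above: the explicit get/null-check/put sequence in insertListener is collapsed into a single Map.computeIfAbsent call. A minimal, self-contained sketch of the same idiom (a hypothetical listener registry, not the CloudStack classes):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ListenerRegistry {
    private final Map<Integer, List<String>> listenersByEvent = new HashMap<>();

    // Old pattern: look up, create the bucket on a miss, then put it back.
    void registerOld(Integer event, String listener) {
        List<String> lst = listenersByEvent.get(event);
        if (lst == null) {
            lst = new ArrayList<>();
            listenersByEvent.put(event, lst);
        }
        lst.add(listener);
    }

    // New pattern: computeIfAbsent creates the bucket only when it is missing.
    void registerNew(Integer event, String listener) {
        listenersByEvent.computeIfAbsent(event, k -> new ArrayList<>()).add(listener);
    }
}
```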
@ -422,7 +411,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
|
||||
@DB
|
||||
@Override
|
||||
public List<? extends Cluster> discoverCluster(final AddClusterCmd cmd) throws IllegalArgumentException, DiscoveryException, ResourceInUseException {
|
||||
public List<? extends Cluster> discoverCluster(final AddClusterCmd cmd) throws IllegalArgumentException, DiscoveryException {
|
||||
final long dcId = cmd.getZoneId();
|
||||
final long podId = cmd.getPodId();
|
||||
final String clusterName = cmd.getClusterName();
|
||||
|
|
@ -432,10 +421,10 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
CPU.CPUArch arch = cmd.getArch();
|
||||
|
||||
if (url != null) {
|
||||
url = URLDecoder.decode(url);
|
||||
url = URLDecoder.decode(url, com.cloud.utils.StringUtils.getPreferredCharset());
|
||||
}
|
||||
|
||||
URI uri = null;
|
||||
URI uri;
|
||||
|
||||
// Check if the zone exists in the system
|
||||
final DataCenterVO zone = _dcDao.findById(dcId);
|
||||
|
|
@ -519,7 +508,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
discoverer.putParam(allParams);
|
||||
}
|
||||
|
||||
final List<ClusterVO> result = new ArrayList<ClusterVO>();
|
||||
final List<ClusterVO> result = new ArrayList<>();
|
||||
|
||||
ClusterVO cluster = new ClusterVO(dcId, podId, clusterName);
|
||||
cluster.setHypervisorType(hypervisorType.toString());
|
||||
|
|
@ -540,7 +529,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
result.add(cluster);
|
||||
|
||||
if (clusterType == Cluster.ClusterType.CloudManaged) {
|
||||
final Map<String, String> details = new HashMap<String, String>();
|
||||
final Map<String, String> details = new HashMap<>();
|
||||
// should do this nicer perhaps ?
|
||||
if (hypervisorType == HypervisorType.Ovm3) {
|
||||
final Map<String, String> allParams = cmd.getFullUrlParams();
|
||||
|
|
@ -578,8 +567,8 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
throw new InvalidParameterValueException(url + " is not a valid uri");
|
||||
}
|
||||
|
||||
final List<HostVO> hosts = new ArrayList<HostVO>();
|
||||
Map<? extends ServerResource, Map<String, String>> resources = null;
|
||||
final List<HostVO> hosts = new ArrayList<>();
|
||||
Map<? extends ServerResource, Map<String, String>> resources;
|
||||
resources = discoverer.find(dcId, podId, cluster.getId(), uri, username, password, null);
|
||||
|
||||
if (resources != null) {
|
||||
|
|
@ -670,7 +659,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
private List<HostVO> discoverHostsFull(final Long dcId, final Long podId, Long clusterId, final String clusterName, String url, String username, String password,
|
||||
final String hypervisorType, final List<String> hostTags, final Map<String, String> params, final boolean deferAgentCreation) throws IllegalArgumentException, DiscoveryException,
|
||||
InvalidParameterValueException {
|
||||
URI uri = null;
|
||||
URI uri;
|
||||
|
||||
// Check if the zone exists in the system
|
||||
final DataCenterVO zone = _dcDao.findById(dcId);
|
||||
|
|
@ -810,7 +799,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
throw new InvalidParameterValueException(url + " is not a valid uri");
|
||||
}
|
||||
|
||||
final List<HostVO> hosts = new ArrayList<HostVO>();
|
||||
final List<HostVO> hosts = new ArrayList<>();
|
||||
logger.info("Trying to add a new host at {} in data center {}", url, zone);
|
||||
boolean isHypervisorTypeSupported = false;
|
||||
for (final Discoverer discoverer : _discoverers) {
|
||||
|
|
@ -872,7 +861,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
return null;
|
||||
}
|
||||
|
||||
HostVO host = null;
|
||||
HostVO host;
|
||||
if (deferAgentCreation) {
|
||||
host = (HostVO)createHostAndAgentDeferred(resource, entry.getValue(), true, hostTags, false);
|
||||
} else {
|
||||
|
|
@ -1099,7 +1088,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
// don't allow to remove the cluster if it has non-removed storage
|
||||
// pools
|
||||
final List<StoragePoolVO> storagePools = _storagePoolDao.listPoolsByCluster(cmd.getId());
|
||||
if (storagePools.size() > 0) {
|
||||
if (!storagePools.isEmpty()) {
|
||||
logger.debug("{} still has storage pools, can't remove", cluster);
|
||||
throw new CloudRuntimeException(String.format("Cluster: %s cannot be removed. Cluster still has storage pools", cluster));
|
||||
}
|
||||
|
|
@ -1166,7 +1155,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
}
|
||||
}
|
||||
|
||||
Cluster.ClusterType newClusterType = null;
|
||||
Cluster.ClusterType newClusterType;
|
||||
if (clusterType != null && !clusterType.isEmpty()) {
|
||||
try {
|
||||
newClusterType = Cluster.ClusterType.valueOf(clusterType);
|
||||
|
|
@ -1182,7 +1171,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
}
|
||||
}
|
||||
|
||||
Grouping.AllocationState newAllocationState = null;
|
||||
Grouping.AllocationState newAllocationState;
|
||||
if (allocationState != null && !allocationState.isEmpty()) {
|
||||
try {
|
||||
newAllocationState = Grouping.AllocationState.valueOf(allocationState);
|
||||
|
|
@@ -1244,12 +1233,13 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
}
}
final int retry = 40;
boolean lsuccess = true;
boolean lsuccess;
for (int i = 0; i < retry; i++) {
lsuccess = true;
try {
Thread.sleep(5 * 1000);
} catch (final Exception e) {
Thread.currentThread().wait(5 * 1000);
} catch (final InterruptedException e) {
logger.debug("thread unexpectedly interrupted during wait, while updating cluster");
}
hosts = listAllUpAndEnabledHosts(Host.Type.Routing, cluster.getId(), cluster.getPodId(), cluster.getDataCenterId());
for (final HostVO host : hosts) {

@@ -1258,12 +1248,12 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
break;
}
}
if (lsuccess == true) {
if (lsuccess) {
success = true;
break;
}
}
if (success == false) {
if (!success) {
throw new CloudRuntimeException("PrepareUnmanaged Failed due to some hosts are still in UP status after 5 Minutes, please try later ");
}
} finally {
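The hunk above narrows the catch from Exception to InterruptedException and drops the old Thread.currentThread().wait(...) call, which would itself fail with IllegalMonitorStateException since the thread's monitor is not held there. A further refinement that is common but not part of this change is restoring the interrupt flag; a small sketch under that assumption:

```java
public final class RetryWaitExample {
    // Polls a condition in 5-second steps, up to 40 attempts, and preserves the
    // thread's interrupt status instead of swallowing it.
    static void waitForCondition(java.util.function.BooleanSupplier done) {
        for (int i = 0; i < 40 && !done.getAsBoolean(); i++) {
            try {
                Thread.sleep(5 * 1000L);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag for callers
                return;
            }
        }
    }
}
```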
@ -1384,7 +1374,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
|
||||
/* TODO: move below to listener */
|
||||
if (host.getType() == Host.Type.Routing) {
|
||||
if (vms.size() == 0) {
|
||||
if (vms.isEmpty()) {
|
||||
return true;
|
||||
}
|
||||
|
||||
|
|
@ -1412,7 +1402,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
String logMessage = String.format(
|
||||
"Unsupported host.maintenance.local.storage.strategy: %s. Please set a strategy according to the global settings description: "
|
||||
+ "'Error', 'Migration', or 'ForceStop'.",
|
||||
HOST_MAINTENANCE_LOCAL_STRATEGY.value().toString());
|
||||
HOST_MAINTENANCE_LOCAL_STRATEGY.value());
|
||||
logger.error(logMessage);
|
||||
throw new CloudRuntimeException("There are active VMs using the host's local storage pool. Please stop all VMs on this host that use local storage.");
|
||||
}
|
||||
|
|
@@ -1469,14 +1459,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
ServiceOfferingVO offeringVO = serviceOfferingDao.findById(vm.getServiceOfferingId());
final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm, null, offeringVO, null, null);
plan.setMigrationPlan(true);
DeployDestination dest = null;
DeploymentPlanner.ExcludeList avoids = new DeploymentPlanner.ExcludeList();
avoids.addHost(host.getId());
try {
dest = deploymentManager.planDeployment(profile, plan, avoids, null);
} catch (InsufficientServerCapacityException e) {
throw new CloudRuntimeException(String.format("Maintenance failed, could not find deployment destination for VM: %s.", vm), e);
}
DeployDestination dest = getDeployDestination(vm, profile, plan, host);
Host destHost = dest.getHost();

try {

@@ -1487,6 +1470,22 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
}
}

private DeployDestination getDeployDestination(VMInstanceVO vm, VirtualMachineProfile profile, DataCenterDeployment plan, HostVO hostToAvoid) {
DeployDestination dest;
DeploymentPlanner.ExcludeList avoids = new DeploymentPlanner.ExcludeList();
avoids.addHost(hostToAvoid.getId());
try {
dest = deploymentManager.planDeployment(profile, plan, avoids, null);
} catch (InsufficientServerCapacityException e) {
throw new CloudRuntimeException(String.format("Maintenance failed, could not find deployment destination for VM [id=%s, name=%s].", vm.getId(), vm.getInstanceName()),
e);
}
if (dest == null) {
throw new CloudRuntimeException(String.format("Maintenance failed, could not find deployment destination for VM [id=%s, name=%s], using plan: %s.", vm.getId(), vm.getInstanceName(), plan));
}
return dest;
}

@Override
public boolean maintain(final long hostId) throws AgentUnavailableException {
final Boolean result = propagateResourceEvent(hostId, ResourceState.Event.AdminAskMaintenance);

@@ -1535,15 +1534,15 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
List<VMInstanceVO> migratingInVMs = _vmDao.findByHostInStates(hostId, State.Migrating);

if (migratingInVMs.size() > 0) {
if (!migratingInVMs.isEmpty()) {
throw new CloudRuntimeException("Host contains incoming VMs migrating. Please wait for them to complete before putting to maintenance.");
}

if (_vmDao.findByHostInStates(hostId, State.Starting, State.Stopping).size() > 0) {
if (!_vmDao.findByHostInStates(hostId, State.Starting, State.Stopping).isEmpty()) {
throw new CloudRuntimeException("Host contains VMs in starting/stopping state. Please wait for them to complete before putting to maintenance.");
}

if (_vmDao.findByHostInStates(hostId, State.Error, State.Unknown).size() > 0) {
if (!_vmDao.findByHostInStates(hostId, State.Error, State.Unknown).isEmpty()) {
throw new CloudRuntimeException("Host contains VMs in error/unknown/shutdown state. Please fix errors to proceed.");
}
@@ -1564,25 +1563,22 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
if(StringUtils.isBlank(HOST_MAINTENANCE_LOCAL_STRATEGY.value())) {
return false;
}
return HOST_MAINTENANCE_LOCAL_STRATEGY.value().toLowerCase().equals(WorkType.Migration.toString().toLowerCase());
return HOST_MAINTENANCE_LOCAL_STRATEGY.value().equalsIgnoreCase(WorkType.Migration.toString());
}

protected boolean isMaintenanceLocalStrategyForceStop() {
if(StringUtils.isBlank(HOST_MAINTENANCE_LOCAL_STRATEGY.value())) {
return false;
}
return HOST_MAINTENANCE_LOCAL_STRATEGY.value().toLowerCase().equals(WorkType.ForceStop.toString().toLowerCase());
return HOST_MAINTENANCE_LOCAL_STRATEGY.value().equalsIgnoreCase(WorkType.ForceStop.toString());
}

/**
* Returns true if the host.maintenance.local.storage.strategy is the Default: "Error", blank, empty, or null.
*/
protected boolean isMaintenanceLocalStrategyDefault() {
if (StringUtils.isBlank(HOST_MAINTENANCE_LOCAL_STRATEGY.value().toString())
|| HOST_MAINTENANCE_LOCAL_STRATEGY.value().toLowerCase().equals(State.Error.toString().toLowerCase())) {
return true;
}
return false;
return StringUtils.isBlank(HOST_MAINTENANCE_LOCAL_STRATEGY.value())
|| HOST_MAINTENANCE_LOCAL_STRATEGY.value().equalsIgnoreCase(State.Error.toString());
}

/**
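The three strategy checks above now use String.equalsIgnoreCase instead of lower-casing both sides, and the if/return-true/return-false block is folded into a single boolean expression. A trivial standalone sketch of the resulting shape (a hypothetical helper, not project code):

```java
import org.apache.commons.lang3.StringUtils;

public final class StrategyCheckExample {
    // Blank/empty or "Error" counts as the default maintenance strategy.
    static boolean isDefaultStrategy(String value) {
        return StringUtils.isBlank(value) || "Error".equalsIgnoreCase(value);
    }
}
```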
@ -1733,7 +1729,6 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
* Return true if host goes into Maintenance mode. There are various possibilities for VMs' states
|
||||
* on a host. We need to track the various VM states on each run and accordingly transit to the
|
||||
* appropriate state.
|
||||
*
|
||||
* We change states as follows -
|
||||
* 1. If there are no VMs in running, migrating, starting, stopping, error, unknown states we can move
|
||||
* to maintenance state. Note that there cannot be incoming migrations as the API Call prepare for
|
||||
|
|
@ -1907,7 +1902,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
guestOSDetail.setValue(String.valueOf(guestOSCategory.getId()));
|
||||
_hostDetailsDao.update(guestOSDetail.getId(), guestOSDetail);
|
||||
} else {
|
||||
final Map<String, String> detail = new HashMap<String, String>();
|
||||
final Map<String, String> detail = new HashMap<>();
|
||||
detail.put("guest.os.category.id", String.valueOf(guestOSCategory.getId()));
|
||||
_hostDetailsDao.persist(hostId, detail);
|
||||
}
|
||||
|
|
@ -2057,9 +2052,9 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
|
||||
@Override
|
||||
public List<HypervisorType> getSupportedHypervisorTypes(final long zoneId, final boolean forVirtualRouter, final Long podId) {
|
||||
final List<HypervisorType> hypervisorTypes = new ArrayList<HypervisorType>();
|
||||
final List<HypervisorType> hypervisorTypes = new ArrayList<>();
|
||||
|
||||
List<ClusterVO> clustersForZone = new ArrayList<ClusterVO>();
|
||||
List<ClusterVO> clustersForZone;
|
||||
if (podId != null) {
|
||||
clustersForZone = _clusterDao.listByPodId(podId);
|
||||
} else {
|
||||
|
|
@ -2068,7 +2063,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
|
||||
for (final ClusterVO cluster : clustersForZone) {
|
||||
final HypervisorType hType = cluster.getHypervisorType();
|
||||
if (!forVirtualRouter || forVirtualRouter && hType != HypervisorType.BareMetal && hType != HypervisorType.Ovm) {
|
||||
if (!forVirtualRouter || (hType != HypervisorType.BareMetal && hType != HypervisorType.Ovm)) {
|
||||
hypervisorTypes.add(hType);
|
||||
}
|
||||
}
|
||||
|
|
@ -2104,7 +2099,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
|
||||
if (isValid) {
|
||||
final List<ClusterVO> clusters = _clusterDao.listByDcHyType(zoneId, defaultHyper.toString());
|
||||
if (clusters.size() <= 0) {
|
||||
if (clusters.isEmpty()) {
|
||||
isValid = false;
|
||||
}
|
||||
}
|
||||
|
|
@ -2121,7 +2116,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
HypervisorType defaultHype = getDefaultHypervisor(zoneId);
|
||||
if (defaultHype == HypervisorType.None) {
|
||||
final List<HypervisorType> supportedHypes = getSupportedHypervisorTypes(zoneId, false, null);
|
||||
if (supportedHypes.size() > 0) {
|
||||
if (!supportedHypes.isEmpty()) {
|
||||
Collections.shuffle(supportedHypes);
|
||||
defaultHype = supportedHypes.get(0);
|
||||
}
|
||||
|
|
@ -2245,10 +2240,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
final String cidrNetmask = NetUtils.getCidrSubNet("255.255.255.255", cidrSize);
|
||||
final long cidrNetmaskNumeric = NetUtils.ip2Long(cidrNetmask);
|
||||
final long serverNetmaskNumeric = NetUtils.ip2Long(serverPrivateNetmask);
|
||||
if (serverNetmaskNumeric > cidrNetmaskNumeric) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
return serverNetmaskNumeric <= cidrNetmaskNumeric;
|
||||
}
|
||||
|
||||
private HostVO getNewHost(StartupCommand[] startupCommands) {
|
||||
|
|
@ -2262,11 +2254,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
|
||||
host = findHostByGuid(startupCommand.getGuidWithoutResource());
|
||||
|
||||
if (host != null) {
|
||||
return host;
|
||||
}
|
||||
|
||||
return null;
|
||||
return host; // even when host == null!
|
||||
}
|
||||
|
||||
protected HostVO createHostVO(final StartupCommand[] cmds, final ServerResource resource, final Map<String, String> details, List<String> hostTags,
|
||||
|
|
@ -2297,11 +2285,11 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
}
|
||||
}
|
||||
|
||||
long dcId = -1;
|
||||
long dcId;
|
||||
DataCenterVO dc = _dcDao.findByName(dataCenter);
|
||||
if (dc == null) {
|
||||
try {
|
||||
dcId = Long.parseLong(dataCenter);
|
||||
dcId = Long.parseLong(dataCenter != null ? dataCenter : "-1");
|
||||
dc = _dcDao.findById(dcId);
|
||||
} catch (final NumberFormatException e) {
|
||||
logger.debug("Cannot parse " + dataCenter + " into Long.");
|
||||
|
|
@ -2315,7 +2303,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
HostPodVO p = _podDao.findByName(pod, dcId);
|
||||
if (p == null) {
|
||||
try {
|
||||
final long podId = Long.parseLong(pod);
|
||||
final long podId = Long.parseLong(pod != null ? pod : "-1");
|
||||
p = _podDao.findById(podId);
|
||||
} catch (final NumberFormatException e) {
|
||||
logger.debug("Cannot parse " + pod + " into Long.");
|
||||
|
|
@ -2334,9 +2322,9 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
clusterId = Long.valueOf(cluster);
|
||||
} catch (final NumberFormatException e) {
|
||||
if (podId != null) {
|
||||
ClusterVO c = _clusterDao.findBy(cluster, podId.longValue());
|
||||
ClusterVO c = _clusterDao.findBy(cluster, podId);
|
||||
if (c == null) {
|
||||
c = new ClusterVO(dcId, podId.longValue(), cluster);
|
||||
c = new ClusterVO(dcId, podId, cluster);
|
||||
c = _clusterDao.persist(c);
|
||||
}
|
||||
clusterId = c.getId();
|
||||
|
|
@ -2439,7 +2427,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
for (Long hostId : hostIds) {
|
||||
DetailVO hostDetailVO = _hostDetailsDao.findDetail(hostId, name);
|
||||
|
||||
if (hostDetailVO == null || Boolean.parseBoolean(hostDetailVO.getValue()) == false) {
|
||||
if (hostDetailVO == null || !Boolean.parseBoolean(hostDetailVO.getValue())) {
|
||||
clusterSupportsResigning = false;
|
||||
|
||||
break;
|
||||
|
|
@ -2531,7 +2519,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
}
|
||||
|
||||
if (logger.isDebugEnabled()) {
|
||||
new Request(-1l, -1l, cmds, true, false).logD("Startup request from directly connected host: ", true);
|
||||
new Request(-1L, -1L, cmds, true, false).logD("Startup request from directly connected host: ", true);
|
||||
}
|
||||
|
||||
if (old) {
|
||||
|
|
@ -2601,7 +2589,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
}
|
||||
|
||||
if (logger.isDebugEnabled()) {
|
||||
new Request(-1l, -1l, cmds, true, false).logD("Startup request from directly connected host: ", true);
|
||||
new Request(-1L, -1L, cmds, true, false).logD("Startup request from directly connected host: ", true);
|
||||
}
|
||||
|
||||
if (old) {
|
||||
|
|
@ -2702,8 +2690,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
throw new InvalidParameterValueException("Can't find zone with id " + zoneId);
|
||||
}
|
||||
|
||||
final Map<String, String> details = hostDetails;
|
||||
final String guid = details.get("guid");
|
||||
final String guid = hostDetails.get("guid");
|
||||
final List<HostVO> currentHosts = listAllUpAndEnabledHostsInOneZoneByType(hostType, zoneId);
|
||||
for (final HostVO currentHost : currentHosts) {
|
||||
if (currentHost.getGuid().equals(guid)) {
|
||||
|
|
@ -2719,7 +2706,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
return createHostVO(cmds, null, null, null, ResourceStateAdapter.Event.CREATE_HOST_VO_FOR_CONNECTED);
|
||||
}
|
||||
|
||||
private void checkIPConflicts(final HostPodVO pod, final DataCenterVO dc, final String serverPrivateIP, final String serverPrivateNetmask, final String serverPublicIP, final String serverPublicNetmask) {
|
||||
private void checkIPConflicts(final HostPodVO pod, final DataCenterVO dc, final String serverPrivateIP, final String serverPublicIP) {
|
||||
// If the server's private IP is the same as is public IP, this host has
|
||||
// a host-only private network. Don't check for conflicts with the
|
||||
// private IP address table.
|
||||
|
|
@ -2748,7 +2735,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
// If the server's public IP address is already in the database,
|
||||
// return false
|
||||
final List<IPAddressVO> existingPublicIPs = _publicIPAddressDao.listByDcIdIpAddress(dc.getId(), serverPublicIP);
|
||||
if (existingPublicIPs.size() > 0) {
|
||||
if (!existingPublicIPs.isEmpty()) {
|
||||
throw new IllegalArgumentException("The public ip address of the server (" + serverPublicIP + ") is already in use in zone: " + dc.getName());
|
||||
}
|
||||
}
|
||||
|
|
@ -2785,7 +2772,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
|
||||
final HostPodVO pod = _podDao.findById(host.getPodId());
|
||||
final DataCenterVO dc = _dcDao.findById(host.getDataCenterId());
|
||||
checkIPConflicts(pod, dc, ssCmd.getPrivateIpAddress(), ssCmd.getPublicIpAddress(), ssCmd.getPublicIpAddress(), ssCmd.getPublicNetmask());
|
||||
checkIPConflicts(pod, dc, ssCmd.getPrivateIpAddress(), ssCmd.getPublicIpAddress());
|
||||
host.setType(com.cloud.host.Host.Type.Routing);
|
||||
host.setDetails(details);
|
||||
host.setCaps(ssCmd.getCapabilities());
|
||||
|
|
@ -2823,8 +2810,8 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
throw new UnableDeleteHostException("Failed to set primary storage into maintenance mode");
|
||||
}
|
||||
} catch (final Exception e) {
|
||||
logger.debug("Failed to set primary storage into maintenance mode, due to: " + e.toString());
|
||||
throw new UnableDeleteHostException("Failed to set primary storage into maintenance mode, due to: " + e.toString());
|
||||
logger.debug("Failed to set primary storage into maintenance mode", e);
|
||||
throw new UnableDeleteHostException("Failed to set primary storage into maintenance mode, due to: " + e);
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -2968,7 +2955,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
if (result.getReturnCode() != 0) {
|
||||
throw new CloudRuntimeException(String.format("Could not restart agent on %s due to: %s", host, result.getStdErr()));
|
||||
}
|
||||
logger.debug("cloudstack-agent restart result: " + result.toString());
|
||||
logger.debug("cloudstack-agent restart result: {}", result);
|
||||
} catch (final SshException e) {
|
||||
throw new CloudRuntimeException("SSH to agent is enabled, but agent restart failed", e);
|
||||
}
|
||||
|
|
@ -2989,7 +2976,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
}
|
||||
|
||||
@Override
|
||||
public boolean executeUserRequest(final long hostId, final ResourceState.Event event) throws AgentUnavailableException {
|
||||
public boolean executeUserRequest(final long hostId, final ResourceState.Event event) {
|
||||
if (event == ResourceState.Event.AdminAskMaintenance) {
|
||||
return doMaintain(hostId);
|
||||
} else if (event == ResourceState.Event.AdminCancelMaintenance) {
|
||||
|
|
@ -3315,7 +3302,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
public HostStats getHostStatistics(final Host host) {
|
||||
final Answer answer = _agentMgr.easySend(host.getId(), new GetHostStatsCommand(host.getGuid(), host.getName(), host.getId()));
|
||||
|
||||
if (answer != null && answer instanceof UnsupportedAnswer) {
|
||||
if (answer instanceof UnsupportedAnswer) {
|
||||
return null;
|
||||
}
|
||||
|
||||
|
|
@@ -3351,20 +3338,16 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
@Override
public String getHostTags(final long hostId) {
final List<String> hostTags = _hostTagsDao.getHostTags(hostId).parallelStream().map(HostTagVO::getTag).collect(Collectors.toList());
if (hostTags == null) {
return null;
} else {
return com.cloud.utils.StringUtils.listToCsvTags(hostTags);
}
return StringUtils.listToCsvTags(hostTags);
}

@Override
public List<PodCluster> listByDataCenter(final long dcId) {
final List<HostPodVO> pods = _podDao.listByDataCenterId(dcId);
final ArrayList<PodCluster> pcs = new ArrayList<PodCluster>();
final ArrayList<PodCluster> pcs = new ArrayList<>();
for (final HostPodVO pod : pods) {
final List<ClusterVO> clusters = _clusterDao.listByPodId(pod.getId());
if (clusters.size() == 0) {
if (clusters.isEmpty()) {
pcs.add(new PodCluster(pod, null));
} else {
for (final ClusterVO cluster : clusters) {
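In getHostTags above, the collected list can never be null (Collectors.toList() yields an empty list when there are no tags), so the removed null branch was dead code; listToCsvTags is CloudStack's own utility. A JDK-only sketch of the same join, purely for illustration:

```java
import java.util.List;
import java.util.stream.Collectors;

public final class TagsCsvExample {
    // An empty list joins to "" rather than null, which is why the old
    // null check never triggered.
    static String toCsv(List<String> tags) {
        return tags.stream().map(String::trim).collect(Collectors.joining(","));
    }
}
```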
@ -3409,7 +3392,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
public boolean isHostGpuEnabled(final long hostId) {
|
||||
final SearchCriteria<HostGpuGroupsVO> sc = _gpuAvailability.create();
|
||||
sc.setParameters("hostId", hostId);
|
||||
return _hostGpuGroupsDao.customSearch(sc, null).size() > 0 ? true : false;
|
||||
return !_hostGpuGroupsDao.customSearch(sc, null).isEmpty();
|
||||
}
|
||||
|
||||
@Override
|
||||
|
|
@ -3474,7 +3457,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
// Update GPU group capacity
|
||||
final TransactionLegacy txn = TransactionLegacy.currentTxn();
|
||||
txn.start();
|
||||
_hostGpuGroupsDao.persist(hostId, new ArrayList<String>(groupDetails.keySet()));
|
||||
_hostGpuGroupsDao.persist(hostId, new ArrayList<>(groupDetails.keySet()));
|
||||
_vgpuTypesDao.persist(hostId, groupDetails);
|
||||
txn.commit();
|
||||
}
|
||||
|
|
@ -3482,7 +3465,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
@Override
|
||||
public HashMap<String, HashMap<String, VgpuTypesInfo>> getGPUStatistics(final HostVO host) {
|
||||
final Answer answer = _agentMgr.easySend(host.getId(), new GetGPUStatsCommand(host.getGuid(), host.getName()));
|
||||
if (answer != null && answer instanceof UnsupportedAnswer) {
|
||||
if (answer instanceof UnsupportedAnswer) {
|
||||
return null;
|
||||
}
|
||||
if (answer == null || !answer.getResult()) {
|
||||
|
|
@ -3523,7 +3506,7 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
|
|||
@ActionEvent(eventType = EventTypes.EVENT_HOST_RESERVATION_RELEASE, eventDescription = "releasing host reservation", async = true)
|
||||
public boolean releaseHostReservation(final Long hostId) {
|
||||
try {
|
||||
return Transaction.execute(new TransactionCallback<Boolean>() {
|
||||
return Transaction.execute(new TransactionCallback<>() {
|
||||
@Override
|
||||
public Boolean doInTransaction(final TransactionStatus status) {
|
||||
final PlannerHostReservationVO reservationEntry = _plannerHostReserveDao.findByHostId(hostId);
|
||||
|
|
@@ -752,21 +752,21 @@ public class StatsCollector extends ManagerBase implements ComponentMethodInterc
logger.debug(String.format("%s is running...", this.getClass().getSimpleName()));
long msid = ManagementServerNode.getManagementServerId();
ManagementServerHostVO mshost = null;
ManagementServerHostStatsEntry hostStatsEntry = null;
ManagementServerHostStatsEntry msHostStatsEntry = null;
try {
mshost = managementServerHostDao.findByMsid(msid);
// get local data
hostStatsEntry = getDataFrom(mshost);
managementServerHostStats.put(mshost.getUuid(), hostStatsEntry);
msHostStatsEntry = getDataFrom(mshost);
managementServerHostStats.put(mshost.getUuid(), msHostStatsEntry);
// send to other hosts
clusterManager.publishStatus(gson.toJson(hostStatsEntry));
clusterManager.publishStatus(gson.toJson(msHostStatsEntry));
} catch (Throwable t) {
// pokemon catch to make sure the thread stays running
logger.error("Error trying to retrieve management server host statistics", t);
}
try {
// send to DB
storeStatus(hostStatsEntry, mshost);
storeStatus(msHostStatsEntry, mshost);
} catch (Throwable t) {
// pokemon catch to make sure the thread stays running
logger.error("Error trying to store management server host statistics", t);

@@ -834,11 +834,11 @@ public class StatsCollector extends ManagerBase implements ComponentMethodInterc
}

private void getDataBaseStatistics(ManagementServerHostStatsEntry newEntry, long msid) {
newEntry.setLastAgents(_agentMgr.getLastAgents());
List<String> lastAgents = _hostDao.listByLastMs(msid);
newEntry.setLastAgents(lastAgents);
List<String> agents = _hostDao.listByMs(msid);
newEntry.setAgents(agents);
int count = _hostDao.countByMs(msid);
newEntry.setAgentCount(count);
newEntry.setAgentCount(agents.size());
}

private void getMemoryData(@NotNull ManagementServerHostStatsEntry newEntry) {
@ -19,10 +19,13 @@ package org.apache.cloudstack.agent.lb;
|
|||
import java.util.ArrayList;
|
||||
import java.util.Arrays;
|
||||
import java.util.Comparator;
|
||||
import java.util.EnumSet;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.EnumSet;
|
||||
import java.util.concurrent.ExecutorService;
|
||||
import java.util.concurrent.Executors;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import javax.inject.Inject;
|
||||
import javax.naming.ConfigurationException;
|
||||
|
|
@ -33,6 +36,8 @@ import org.apache.cloudstack.agent.lb.algorithm.IndirectAgentLBStaticAlgorithm;
|
|||
import org.apache.cloudstack.config.ApiServiceConfiguration;
|
||||
import org.apache.cloudstack.framework.config.ConfigKey;
|
||||
import org.apache.cloudstack.framework.config.Configurable;
|
||||
import org.apache.cloudstack.managed.context.ManagedContextRunnable;
|
||||
import org.apache.commons.collections.CollectionUtils;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
|
||||
import com.cloud.agent.AgentManager;
|
||||
|
|
@ -40,6 +45,7 @@ import com.cloud.agent.api.Answer;
|
|||
import com.cloud.agent.api.MigrateAgentConnectionCommand;
|
||||
import com.cloud.cluster.ManagementServerHostVO;
|
||||
import com.cloud.cluster.dao.ManagementServerHostDao;
|
||||
import com.cloud.dc.DataCenter;
|
||||
import com.cloud.dc.DataCenterVO;
|
||||
import com.cloud.dc.dao.ClusterDao;
|
||||
import com.cloud.dc.dao.DataCenterDao;
|
||||
|
|
@ -49,20 +55,20 @@ import com.cloud.host.dao.HostDao;
|
|||
import com.cloud.hypervisor.Hypervisor;
|
||||
import com.cloud.resource.ResourceState;
|
||||
import com.cloud.utils.component.ComponentLifecycleBase;
|
||||
import com.cloud.utils.concurrency.NamedThreadFactory;
|
||||
import com.cloud.utils.exception.CloudRuntimeException;
|
||||
|
||||
import org.apache.commons.collections.CollectionUtils;
|
||||
|
||||
public class IndirectAgentLBServiceImpl extends ComponentLifecycleBase implements IndirectAgentLB, Configurable {
|
||||
|
||||
public static final ConfigKey<String> IndirectAgentLBAlgorithm = new ConfigKey<>(String.class,
|
||||
"indirect.agent.lb.algorithm", "Advanced", "static",
|
||||
"The algorithm to be applied on the provided 'host' management server list that is sent to indirect agents. Allowed values are: static, roundrobin and shuffle.",
|
||||
"The algorithm to be applied on the provided management server list in the 'host' config that that is sent to indirect agents. Allowed values are: static, roundrobin and shuffle.",
|
||||
true, ConfigKey.Scope.Global, null, null, null, null, null, ConfigKey.Kind.Select, "static,roundrobin,shuffle");
|
||||
|
||||
public static final ConfigKey<Long> IndirectAgentLBCheckInterval = new ConfigKey<>("Advanced", Long.class,
|
||||
"indirect.agent.lb.check.interval", "0",
|
||||
"The interval in seconds after which agent should check and try to connect to its preferred host. Set 0 to disable it.",
|
||||
"The interval in seconds after which indirect agent should check and try to connect to its preferred host (the first management server from the propagated list provided in the 'host' config)." +
|
||||
" Set 0 to disable it.",
|
||||
true, ConfigKey.Scope.Cluster);
|
||||
|
||||
private static Map<String, org.apache.cloudstack.agent.lb.IndirectAgentLBAlgorithm> algorithmMap = new HashMap<>();
|
||||
|
|
@ -85,6 +91,8 @@ public class IndirectAgentLBServiceImpl extends ComponentLifecycleBase implement
|
|||
ResourceState.ErrorInMaintenance, ResourceState.PrepareForMaintenance);
|
||||
private static final List<Host.Type> agentValidHostTypes = List.of(Host.Type.Routing, Host.Type.ConsoleProxy,
|
||||
Host.Type.SecondaryStorage, Host.Type.SecondaryStorageVM);
|
||||
private static final List<Host.Type> agentNonRoutingHostTypes = List.of(Host.Type.ConsoleProxy,
|
||||
Host.Type.SecondaryStorage, Host.Type.SecondaryStorageVM);
|
||||
private static final List<Hypervisor.HypervisorType> agentValidHypervisorTypes = List.of(
|
||||
Hypervisor.HypervisorType.KVM, Hypervisor.HypervisorType.LXC);
|
||||
|
||||
|
|
@@ -246,8 +254,18 @@ public class IndirectAgentLBServiceImpl extends ComponentLifecycleBase implement
agentBasedHosts.add(host);
}

private List<Long> getAllAgentBasedNonRoutingHostsFromDB(final Long zoneId, final Long msId) {
return hostDao.findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(zoneId, null, msId,
agentValidResourceStates, agentNonRoutingHostTypes, agentValidHypervisorTypes);
}

private List<Long> getAllAgentBasedRoutingHostsFromDB(final Long zoneId, final Long clusterId, final Long msId) {
return hostDao.findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(zoneId, clusterId, msId,
agentValidResourceStates, List.of(Host.Type.Routing), agentValidHypervisorTypes);
}

private List<Long> getAllAgentBasedHostsFromDB(final Long zoneId, final Long clusterId) {
return hostDao.findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(zoneId, clusterId,
return hostDao.findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(zoneId, clusterId, null,
agentValidResourceStates, agentValidHostTypes, agentValidHypervisorTypes);
}
@@ -287,31 +305,159 @@ public class IndirectAgentLBServiceImpl extends ComponentLifecycleBase implement
@Override
public void propagateMSListToAgents() {
logger.debug("Propagating management server list update to agents");
ExecutorService setupMSListExecutorService = Executors.newFixedThreadPool(10, new NamedThreadFactory("SetupMSList-Worker"));
final String lbAlgorithm = getLBAlgorithmName();
final Long globalLbCheckInterval = getLBPreferredHostCheckInterval(null);
List<DataCenterVO> zones = dataCenterDao.listAll();
for (DataCenterVO zone : zones) {
List<Long> zoneHostIds = new ArrayList<>();
List<Long> nonRoutingHostIds = getAllAgentBasedNonRoutingHostsFromDB(zone.getId(), null);
zoneHostIds.addAll(nonRoutingHostIds);
Map<Long, List<Long>> clusterHostIdsMap = new HashMap<>();
List<Long> clusterIds = clusterDao.listAllClusterIds(zone.getId());
for (Long clusterId : clusterIds) {
List<Long> hostIds = getAllAgentBasedHostsFromDB(zone.getId(), clusterId);
List<Long> hostIds = getAllAgentBasedRoutingHostsFromDB(zone.getId(), clusterId, null);
clusterHostIdsMap.put(clusterId, hostIds);
zoneHostIds.addAll(hostIds);
}
zoneHostIds.sort(Comparator.comparingLong(x -> x));
final List<String> avoidMsList = mshostDao.listNonUpStateMsIPs();
for (Long nonRoutingHostId : nonRoutingHostIds) {
setupMSListExecutorService.submit(new SetupMSListTask(nonRoutingHostId, zone.getId(), zoneHostIds, avoidMsList, lbAlgorithm, globalLbCheckInterval));
}
for (Long clusterId : clusterIds) {
final Long lbCheckInterval = getLBPreferredHostCheckInterval(clusterId);
final Long clusterLbCheckInterval = getLBPreferredHostCheckInterval(clusterId);
List<Long> hostIds = clusterHostIdsMap.get(clusterId);
for (Long hostId : hostIds) {
final List<String> msList = getManagementServerList(hostId, zone.getId(), zoneHostIds);
final SetupMSListCommand cmd = new SetupMSListCommand(msList, lbAlgorithm, lbCheckInterval);
final Answer answer = agentManager.easySend(hostId, cmd);
if (answer == null || !answer.getResult()) {
logger.warn("Failed to setup management servers list to the agent of ID: {}", hostId);
}
setupMSListExecutorService.submit(new SetupMSListTask(hostId, zone.getId(), zoneHostIds, avoidMsList, lbAlgorithm, clusterLbCheckInterval));
}
}
}

setupMSListExecutorService.shutdown();
try {
if (!setupMSListExecutorService.awaitTermination(300, TimeUnit.SECONDS)) {
setupMSListExecutorService.shutdownNow();
}
} catch (InterruptedException e) {
setupMSListExecutorService.shutdownNow();
logger.debug(String.format("Force shutdown setup ms list service as it did not shutdown in the desired time due to: %s", e.getMessage()));
}
}

private final class SetupMSListTask extends ManagedContextRunnable {
private Long hostId;
private Long dcId;
private List<Long> orderedHostIdList;
private List<String> avoidMsList;
private String lbAlgorithm;
private Long lbCheckInterval;

public SetupMSListTask(Long hostId, Long dcId, List<Long> orderedHostIdList, List<String> avoidMsList,
String lbAlgorithm, Long lbCheckInterval) {
this.hostId = hostId;
this.dcId = dcId;
this.orderedHostIdList = orderedHostIdList;
this.avoidMsList = avoidMsList;
this.lbAlgorithm = lbAlgorithm;
this.lbCheckInterval = lbCheckInterval;
}

@Override
protected void runInContext() {
final List<String> msList = getManagementServerList(hostId, dcId, orderedHostIdList);
final SetupMSListCommand cmd = new SetupMSListCommand(msList, avoidMsList, lbAlgorithm, lbCheckInterval);
cmd.setWait(60);
final Answer answer = agentManager.easySend(hostId, cmd);
if (answer == null || !answer.getResult()) {
logger.warn(String.format("Failed to setup management servers list to the agent of ID: %d", hostId));
}
}
}
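propagateMSListToAgents now fans the SetupMSListCommand out over a fixed 10-thread pool and bounds the wait at 300 seconds instead of sending to each agent serially. The underlying java.util.concurrent pattern, sketched here with a placeholder task body (the println stands in for the agent call):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public final class FanOutExample {
    // Submit one task per host, stop accepting new work, then wait with a deadline;
    // force shutdown if the deadline passes or the waiting thread is interrupted.
    static void fanOut(List<Long> hostIds) {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (Long hostId : hostIds) {
            pool.submit(() -> System.out.println("would send SetupMSListCommand to host " + hostId));
        }
        pool.shutdown();
        try {
            if (!pool.awaitTermination(300, TimeUnit.SECONDS)) {
                pool.shutdownNow();
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
```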
protected boolean migrateNonRoutingHostAgentsInZone(String fromMsUuid, long fromMsId, DataCenter dc,
|
||||
long migrationStartTimeInMs, long timeoutDurationInMs, final List<String> avoidMsList, String lbAlgorithm,
|
||||
boolean lbAlgorithmChanged, List<Long> orderedHostIdList) {
|
||||
List<Long> systemVmAgentsInDc = getAllAgentBasedNonRoutingHostsFromDB(dc.getId(), fromMsId);
|
||||
if (CollectionUtils.isEmpty(systemVmAgentsInDc)) {
|
||||
return true;
|
||||
}
|
||||
logger.debug(String.format("Migrating %d non-routing host agents from management server node %d (id: %s) of zone %s",
|
||||
systemVmAgentsInDc.size(), fromMsId, fromMsUuid, dc));
|
||||
ExecutorService migrateAgentsExecutorService = Executors.newFixedThreadPool(5, new NamedThreadFactory("MigrateNonRoutingHostAgent-Worker"));
|
||||
Long lbCheckInterval = getLBPreferredHostCheckInterval(null);
|
||||
boolean stopMigration = false;
|
||||
for (final Long hostId : systemVmAgentsInDc) {
|
||||
long migrationElapsedTimeInMs = System.currentTimeMillis() - migrationStartTimeInMs;
|
||||
if (migrationElapsedTimeInMs >= timeoutDurationInMs) {
|
||||
logger.debug(String.format("Stop migrating remaining non-routing host agents from management server node %d (id: %s), timed out", fromMsId, fromMsUuid));
|
||||
stopMigration = true;
|
||||
break;
|
||||
}
|
||||
|
||||
migrateAgentsExecutorService.submit(new MigrateAgentConnectionTask(fromMsId, hostId, dc.getId(), orderedHostIdList, avoidMsList, lbCheckInterval, lbAlgorithm, lbAlgorithmChanged));
|
||||
}
|
||||
|
||||
if (stopMigration) {
|
||||
migrateAgentsExecutorService.shutdownNow();
|
||||
return false;
|
||||
}
|
||||
|
||||
migrateAgentsExecutorService.shutdown();
|
||||
long pendingTimeoutDurationInMs = timeoutDurationInMs - (System.currentTimeMillis() - migrationStartTimeInMs);
|
||||
try {
|
||||
if (pendingTimeoutDurationInMs <= 0 || !migrateAgentsExecutorService.awaitTermination(pendingTimeoutDurationInMs, TimeUnit.MILLISECONDS)) {
|
||||
migrateAgentsExecutorService.shutdownNow();
|
||||
}
|
||||
} catch (InterruptedException e) {
|
||||
migrateAgentsExecutorService.shutdownNow();
|
||||
logger.debug(String.format("Force shutdown migrate non-routing agents service as it did not shutdown in the desired time due to: %s", e.getMessage()));
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
protected boolean migrateRoutingHostAgentsInCluster(long clusterId, String fromMsUuid, long fromMsId, DataCenter dc,
|
||||
long migrationStartTimeInMs, long timeoutDurationInMs, final List<String> avoidMsList, String lbAlgorithm,
|
||||
boolean lbAlgorithmChanged, List<Long> orderedHostIdList) {
|
||||
|
||||
List<Long> agentBasedHostsOfMsInDcAndCluster = getAllAgentBasedRoutingHostsFromDB(dc.getId(), clusterId, fromMsId);
|
||||
if (CollectionUtils.isEmpty(agentBasedHostsOfMsInDcAndCluster)) {
|
||||
return true;
|
||||
}
|
||||
logger.debug(String.format("Migrating %d indirect routing host agents from management server node %d (id: %s) of zone %s, " +
|
||||
"cluster ID: %d", agentBasedHostsOfMsInDcAndCluster.size(), fromMsId, fromMsUuid, dc, clusterId));
|
||||
ExecutorService migrateAgentsExecutorService = Executors.newFixedThreadPool(10, new NamedThreadFactory("MigrateRoutingHostAgent-Worker"));
|
||||
Long lbCheckInterval = getLBPreferredHostCheckInterval(clusterId);
|
||||
boolean stopMigration = false;
|
||||
for (final Long hostId : agentBasedHostsOfMsInDcAndCluster) {
|
||||
long migrationElapsedTimeInMs = System.currentTimeMillis() - migrationStartTimeInMs;
|
||||
if (migrationElapsedTimeInMs >= timeoutDurationInMs) {
|
||||
logger.debug(String.format("Stop migrating remaining indirect routing host agents from management server node %d (id: %s), timed out", fromMsId, fromMsUuid));
|
||||
stopMigration = true;
|
||||
break;
|
||||
}
|
||||
|
||||
migrateAgentsExecutorService.submit(new MigrateAgentConnectionTask(fromMsId, hostId, dc.getId(), orderedHostIdList, avoidMsList, lbCheckInterval, lbAlgorithm, lbAlgorithmChanged));
|
||||
}
|
||||
|
||||
if (stopMigration) {
|
||||
migrateAgentsExecutorService.shutdownNow();
|
||||
return false;
|
||||
}
|
||||
|
||||
migrateAgentsExecutorService.shutdown();
|
||||
long pendingTimeoutDurationInMs = timeoutDurationInMs - (System.currentTimeMillis() - migrationStartTimeInMs);
|
||||
try {
|
||||
if (pendingTimeoutDurationInMs <= 0 || !migrateAgentsExecutorService.awaitTermination(pendingTimeoutDurationInMs, TimeUnit.MILLISECONDS)) {
|
||||
migrateAgentsExecutorService.shutdownNow();
|
||||
}
|
||||
} catch (InterruptedException e) {
|
||||
migrateAgentsExecutorService.shutdownNow();
|
||||
logger.debug(String.format("Force shutdown migrate routing agents service as it did not shutdown in the desired time due to: %s", e.getMessage()));
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
|
|
@ -322,7 +468,7 @@ public class IndirectAgentLBServiceImpl extends ComponentLifecycleBase implement
|
|||
}
|
||||
|
||||
logger.debug(String.format("Migrating indirect agents from management server node %d (id: %s) to other nodes", fromMsId, fromMsUuid));
|
||||
long migrationStartTime = System.currentTimeMillis();
|
||||
long migrationStartTimeInMs = System.currentTimeMillis();
|
||||
if (!haveAgentBasedHosts(fromMsId)) {
|
||||
logger.info(String.format("No indirect agents available on management server node %d (id: %s), to migrate", fromMsId, fromMsUuid));
|
||||
return true;
|
||||
|
|
@ -342,37 +488,75 @@ public class IndirectAgentLBServiceImpl extends ComponentLifecycleBase implement
|
|||
|
||||
List<DataCenterVO> dataCenterList = dcDao.listAll();
|
||||
for (DataCenterVO dc : dataCenterList) {
|
||||
Long dcId = dc.getId();
|
||||
List<Long> orderedHostIdList = getOrderedHostIdList(dcId);
|
||||
List<Host> agentBasedHostsOfMsInDc = getAllAgentBasedHostsInDc(fromMsId, dcId);
|
||||
if (CollectionUtils.isEmpty(agentBasedHostsOfMsInDc)) {
|
||||
continue;
|
||||
}
|
||||
logger.debug(String.format("Migrating %d indirect agents from management server node %d (id: %s) of zone %s", agentBasedHostsOfMsInDc.size(), fromMsId, fromMsUuid, dc));
|
||||
for (final Host host : agentBasedHostsOfMsInDc) {
|
||||
long migrationElapsedTimeInMs = System.currentTimeMillis() - migrationStartTime;
|
||||
if (migrationElapsedTimeInMs >= timeoutDurationInMs) {
|
||||
logger.debug(String.format("Stop migrating remaining indirect agents from management server node %d (id: %s), timed out", fromMsId, fromMsUuid));
|
||||
return false;
|
||||
}
|
||||
|
||||
List<String> msList = null;
|
||||
Long lbCheckInterval = 0L;
|
||||
if (lbAlgorithmChanged) {
|
||||
// send new MS list when there is change in lb algorithm
|
||||
msList = getManagementServerList(host.getId(), dcId, orderedHostIdList, lbAlgorithm);
|
||||
lbCheckInterval = getLBPreferredHostCheckInterval(host.getClusterId());
|
||||
}
|
||||
|
||||
final MigrateAgentConnectionCommand cmd = new MigrateAgentConnectionCommand(msList, avoidMsList, lbAlgorithm, lbCheckInterval);
|
||||
agentManager.easySend(host.getId(), cmd); //answer not received as the agent disconnects and reconnects to other ms
|
||||
updateLastManagementServer(host.getId(), fromMsId);
|
||||
if (!migrateAgentsInZone(dc, fromMsUuid, fromMsId, avoidMsList, lbAlgorithm, lbAlgorithmChanged,
|
||||
migrationStartTimeInMs, timeoutDurationInMs)) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
private boolean migrateAgentsInZone(DataCenterVO dc, String fromMsUuid, long fromMsId, List<String> avoidMsList,
|
||||
String lbAlgorithm, boolean lbAlgorithmChanged, long migrationStartTimeInMs, long timeoutDurationInMs) {
|
||||
List<Long> orderedHostIdList = getOrderedHostIdList(dc.getId());
|
||||
if (!migrateNonRoutingHostAgentsInZone(fromMsUuid, fromMsId, dc, migrationStartTimeInMs,
|
||||
timeoutDurationInMs, avoidMsList, lbAlgorithm, lbAlgorithmChanged, orderedHostIdList)) {
|
||||
return false;
|
||||
}
|
||||
List<Long> clusterIds = clusterDao.listAllClusterIds(dc.getId());
|
||||
for (Long clusterId : clusterIds) {
|
||||
if (!migrateRoutingHostAgentsInCluster(clusterId, fromMsUuid, fromMsId, dc, migrationStartTimeInMs,
|
||||
timeoutDurationInMs, avoidMsList, lbAlgorithm, lbAlgorithmChanged, orderedHostIdList)) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
private final class MigrateAgentConnectionTask extends ManagedContextRunnable {
|
||||
private long fromMsId;
|
||||
Long hostId;
|
||||
Long dcId;
|
||||
List<Long> orderedHostIdList;
|
||||
List<String> avoidMsList;
|
||||
Long lbCheckInterval;
|
||||
String lbAlgorithm;
|
||||
boolean lbAlgorithmChanged;
|
||||
|
||||
public MigrateAgentConnectionTask(long fromMsId, Long hostId, Long dcId, List<Long> orderedHostIdList,
|
||||
List<String> avoidMsList, Long lbCheckInterval, String lbAlgorithm, boolean lbAlgorithmChanged) {
|
||||
this.fromMsId = fromMsId;
|
||||
this.hostId = hostId;
|
||||
this.orderedHostIdList = orderedHostIdList;
|
||||
this.avoidMsList = avoidMsList;
|
||||
this.lbCheckInterval = lbCheckInterval;
|
||||
this.lbAlgorithm = lbAlgorithm;
|
||||
this.lbAlgorithmChanged = lbAlgorithmChanged;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void runInContext() {
|
||||
try {
|
||||
List<String> msList = null;
|
||||
if (lbAlgorithmChanged) {
|
||||
// send new MS list when there is change in lb algorithm
|
||||
msList = getManagementServerList(hostId, dcId, orderedHostIdList, lbAlgorithm);
|
||||
}
|
||||
|
||||
final MigrateAgentConnectionCommand cmd = new MigrateAgentConnectionCommand(msList, avoidMsList, lbAlgorithm, lbCheckInterval);
|
||||
cmd.setWait(60);
|
||||
final Answer answer = agentManager.easySend(hostId, cmd); //may not receive answer when the agent disconnects immediately and try reconnecting to other ms host
|
||||
if (answer != null && !answer.getResult()) {
|
||||
logger.warn(String.format("Error while initiating migration of agent connection for host agent ID: %d - %s", hostId, answer.getDetails()));
|
||||
}
|
||||
updateLastManagementServer(hostId, fromMsId);
|
||||
} catch (final Exception e) {
|
||||
logger.error(String.format("Error migrating agent connection for host %d", hostId), e);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private void updateLastManagementServer(long hostId, long msId) {
|
||||
HostVO hostVO = hostDao.findById(hostId);
|
||||
if (hostVO != null) {
|
||||
|
|
@@ -394,10 +394,10 @@ public class BackupManagerImpl extends ManagerBase implements BackupManager {
boolean result = false;
try {
vm.setBackupOfferingId(null);
vm.setBackupExternalId(null);
vm.setBackupVolumes(null);
result = backupProvider.removeVMFromBackupOffering(vm);
vm.setBackupOfferingId(null);
vm.setBackupVolumes(null);
vm.setBackupExternalId(null);
if (result && backupProvider.willDeleteBackupsOnOfferingRemoval()) {
final List<Backup> backups = backupDao.listByVmId(null, vm.getId());
for (final Backup backup : backups) {

@@ -16,7 +16,7 @@
// under the License.
package org.apache.cloudstack.storage.heuristics.presetvariables;

public class Domain extends GenericHeuristicPresetVariable{
public class Domain extends GenericHeuristicPresetVariable {
private String id;

public String getId() {
@@ -36,10 +36,12 @@ public class GenericHeuristicPresetVariable {
fieldNamesToIncludeInToString.add("name");
}

/***
* Converts the preset variable into a valid JSON object that will be injected into the JS interpreter.
* This method should not be overridden or changed.
*/
@Override
public String toString() {
return String.format("GenericHeuristicPresetVariable %s",
ReflectionToStringBuilderUtils.reflectOnlySelectedFields(
this, fieldNamesToIncludeInToString.toArray(new String[0])));
public final String toString() {
return ReflectionToStringBuilderUtils.reflectOnlySelectedFields(this, fieldNamesToIncludeInToString.toArray(new String[0]));
}
}
@@ -106,7 +106,7 @@ public class IndirectAgentLBServiceImplTest {
List<Long> hostIds = hosts.stream().map(HostVO::getId).collect(Collectors.toList());
doReturn(hostIds).when(hostDao).findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(Mockito.anyLong(),
Mockito.eq(null), Mockito.anyList(), Mockito.anyList(), Mockito.anyList());
Mockito.eq(null), Mockito.eq(null), Mockito.anyList(), Mockito.anyList(), Mockito.anyList());
}

@Before

@@ -203,14 +203,14 @@ public class IndirectAgentLBServiceImplTest {
@Test
public void testGetOrderedRunningHostIdsEmptyList() {
doReturn(Collections.emptyList()).when(hostDao).findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(
Mockito.eq(DC_1_ID), Mockito.eq(null), Mockito.anyList(), Mockito.anyList(), Mockito.anyList());
Mockito.eq(DC_1_ID), Mockito.eq(null), Mockito.eq(null), Mockito.anyList(), Mockito.anyList(), Mockito.anyList());
Assert.assertTrue(agentMSLB.getOrderedHostIdList(DC_1_ID).isEmpty());
}

@Test
public void testGetOrderedRunningHostIdsOrderList() {
doReturn(Arrays.asList(host4.getId(), host2.getId(), host1.getId(), host3.getId())).when(hostDao)
.findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(Mockito.eq(DC_1_ID), Mockito.eq(null),
.findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(Mockito.eq(DC_1_ID), Mockito.eq(null), Mockito.eq(null),
Mockito.anyList(), Mockito.anyList(), Mockito.anyList());
Assert.assertEquals(Arrays.asList(host1.getId(), host2.getId(), host3.getId(), host4.getId()),
agentMSLB.getOrderedHostIdList(DC_1_ID));
|
|||
|
|
@ -0,0 +1,46 @@
|
|||
// Licensed to the Apache Software Foundation (ASF) under one
|
||||
// or more contributor license agreements. See the NOTICE file
|
||||
// distributed with this work for additional information
|
||||
// regarding copyright ownership. The ASF licenses this file
|
||||
// to you under the Apache License, Version 2.0 (the
|
||||
// "License"); you may not use this file except in compliance
|
||||
// with the License. You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing,
|
||||
// software distributed under the License is distributed on an
|
||||
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
// KIND, either express or implied. See the License for the
|
||||
// specific language governing permissions and limitations
|
||||
// under the License.
|
||||
|
||||
package org.apache.cloudstack.storage.heuristics.presetvariables;
|
||||
|
||||
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
|
||||
import org.junit.Assert;
|
||||
import org.junit.Test;
|
||||
import org.junit.runner.RunWith;
|
||||
import org.mockito.junit.MockitoJUnitRunner;
|
||||
|
||||
@RunWith(MockitoJUnitRunner.class)
|
||||
public class AccountTest {
|
||||
|
||||
@Test
|
||||
public void toStringTestReturnsValidJson() {
|
||||
Account variable = new Account();
|
||||
variable.setName("test name");
|
||||
variable.setId("test id");
|
||||
|
||||
Domain domainVariable = new Domain();
|
||||
domainVariable.setId("domain id");
|
||||
domainVariable.setName("domain name");
|
||||
variable.setDomain(domainVariable);
|
||||
|
||||
String expected = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(variable, "name", "id", "domain");
|
||||
String result = variable.toString();
|
||||
|
||||
Assert.assertEquals(expected, result);
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,41 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

package org.apache.cloudstack.storage.heuristics.presetvariables;

import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class DomainTest {

    @Test
    public void toStringTestReturnsValidJson() {
        Domain variable = new Domain();
        variable.setName("test name");
        variable.setId("test id");

        String expected = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(variable, "name", "id");
        String result = variable.toString();

        Assert.assertEquals(expected, result);
    }

}
@@ -0,0 +1,40 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

package org.apache.cloudstack.storage.heuristics.presetvariables;

import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class GenericHeuristicPresetVariableTest {

    @Test
    public void toStringTestReturnsValidJson() {
        GenericHeuristicPresetVariable variable = new GenericHeuristicPresetVariable();
        variable.setName("test name");

        String expected = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(variable, "name");
        String result = variable.toString();

        Assert.assertEquals(expected, result);
    }

}
@@ -0,0 +1,45 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

package org.apache.cloudstack.storage.heuristics.presetvariables;

import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class SecondaryStorageTest {

    @Test
    public void toStringTestReturnsValidJson() {
        SecondaryStorage variable = new SecondaryStorage();
        variable.setName("test name");
        variable.setId("test id");
        variable.setProtocol("test protocol");
        variable.setUsedDiskSize(1L);
        variable.setTotalDiskSize(2L);

        String expected = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(variable, "name", "id",
                "protocol", "usedDiskSize", "totalDiskSize");
        String result = variable.toString();

        Assert.assertEquals(expected, result);
    }

}
@@ -0,0 +1,44 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

package org.apache.cloudstack.storage.heuristics.presetvariables;

import com.cloud.hypervisor.Hypervisor;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class SnapshotTest {

    @Test
    public void toStringTestReturnsValidJson() {
        Snapshot variable = new Snapshot();
        variable.setName("test name");
        variable.setSize(1L);
        variable.setHypervisorType(Hypervisor.HypervisorType.KVM);

        String expected = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(variable, "name", "size",
                "hypervisorType");
        String result = variable.toString();

        Assert.assertEquals(expected, result);
    }

}
@@ -0,0 +1,46 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

package org.apache.cloudstack.storage.heuristics.presetvariables;

import com.cloud.hypervisor.Hypervisor;
import com.cloud.storage.Storage;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TemplateTest {

    @Test
    public void toStringTestReturnsValidJson() {
        Template variable = new Template();
        variable.setName("test name");
        variable.setTemplateType(Storage.TemplateType.USER);
        variable.setHypervisorType(Hypervisor.HypervisorType.KVM);
        variable.setFormat(Storage.ImageFormat.QCOW2);

        String expected = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(variable, "name", "templateType",
                "hypervisorType", "format");
        String result = variable.toString();

        Assert.assertEquals(expected, result);
    }

}
@@ -0,0 +1,44 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

package org.apache.cloudstack.storage.heuristics.presetvariables;

import com.cloud.storage.Storage;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class VolumeTest {

    @Test
    public void toStringTestReturnsValidJson() {
        Volume variable = new Volume();
        variable.setName("test name");
        variable.setFormat(Storage.ImageFormat.QCOW2);
        variable.setSize(1L);

        String expected = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(variable, "name", "format",
                "size");
        String result = variable.toString();

        Assert.assertEquals(expected, result);
    }

}
@@ -344,7 +344,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_all_domainuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for other users in a shared network with scope=all
+        Validate that ROOT admin is able to deploy a VM for other users in a shared network with scope=all
        """

        # Deploy VM for a user in a domain under ROOT as admin

@@ -372,7 +372,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_all_domainadminuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for a domain admin users in a shared network with scope=all
+        Validate that ROOT admin is able to deploy a VM for a domain admin users in a shared network with scope=all
        """

        # Deploy VM for an admin user in a domain under ROOT as admin

@@ -400,7 +400,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for any user in a subdomain in a shared network with scope=all
+        Validate that ROOT admin is able to deploy a VM for any user in a subdomain in a shared network with scope=all
        """

        # Deploy VM as user in a subdomain under ROOT

@@ -426,7 +426,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_all_subdomainadminuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for admin user in a domain in a shared network with scope=all
+        Validate that ROOT admin is able to deploy a VM for admin user in a domain in a shared network with scope=all
        """

        # Deploy VM as an admin user in a subdomain under ROOT

@@ -453,7 +453,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_all_ROOTuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for user in ROOT domain in a shared network with scope=all
+        Validate that ROOT admin is able to deploy a VM for user in ROOT domain in a shared network with scope=all
        """

        # Deploy VM as user in ROOT domain

@@ -482,7 +482,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_nosubdomainaccess_domainuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for domain user in a shared network with scope=domain with no subdomain access
+        Validate that ROOT admin is able to deploy a VM for domain user in a shared network with scope=domain with no subdomain access
        """

        # Deploy VM as user in a domain that has shared network with no subdomain access

@@ -510,7 +510,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_nosubdomainaccess_domainadminuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for domain admin user in a shared network with scope=domain with no subdomain access
+        Validate that ROOT admin is able to deploy a VM for domain admin user in a shared network with scope=domain with no subdomain access
        """

        # Deploy VM as an admin user in a domain that has shared network with no subdomain access

@@ -538,7 +538,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_nosubdomainaccess_subdomainuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for sub domain user in a shared network with scope=domain with no subdomain access
+        Validate that ROOT admin is NOT able to deploy a VM for sub domain user in a shared network with scope=domain with no subdomain access
        """

        # Deploy VM as user in a subdomain under a domain that has shared network with no subdomain access

@@ -569,7 +569,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_nosubdomainaccess_subdomainadminuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for sub domain admin user in a shared network with scope=domain with no subdomain access
+        Validate that ROOT admin is NOT able to deploy a VM for sub domain admin user in a shared network with scope=domain with no subdomain access
        """

        # Deploy VM as an admin user in a subdomain under a domain that has shared network with no subdomain access

@@ -599,7 +599,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_nosubdomainaccess_parentdomainuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for parent domain user in a shared network with scope=domain with no subdomain access
+        Validate that ROOT admin is NOT able to deploy a VM for parent domain user in a shared network with scope=domain with no subdomain access
        """

        # Deploy VM as user in parentdomain of a domain that has shared network with no subdomain access

@@ -629,7 +629,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_nosubdomainaccess_parentdomainadminuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for parent domain admin user in a shared network with scope=domain with no subdomain access
+        Validate that ROOT admin is NOT able to deploy a VM for parent domain admin user in a shared network with scope=domain with no subdomain access
        """

        # Deploy VM as an admin user in parentdomain of a domain that has shared network with no subdomain access

@@ -659,7 +659,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_nosubdomainaccess_ROOTuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for parent domain admin user in a shared network with scope=domain with no subdomain access
+        Validate that ROOT admin is NOT able to deploy a VM for parent domain admin user in a shared network with scope=domain with no subdomain access
        """

        # Deploy VM as user in ROOT domain

@@ -691,7 +691,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_withsubdomainaccess_domainuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for domain user in a shared network with scope=domain with subdomain access
+        Validate that ROOT admin is able to deploy a VM for domain user in a shared network with scope=domain with subdomain access
        """

        # Deploy VM as user in a domain that has shared network with subdomain access
@@ -719,7 +719,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_withsubdomainaccess_domainadminuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for domain admin user in a shared network with scope=domain with subdomain access
+        Validate that ROOT admin is able to deploy a VM for domain admin user in a shared network with scope=domain with subdomain access
        """

        # Deploy VM as an admin user in a domain that has shared network with subdomain access

@@ -747,7 +747,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_withsubdomainaccess_subdomainuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for subdomain user in a shared network with scope=domain with subdomain access
+        Validate that ROOT admin is able to deploy a VM for subdomain user in a shared network with scope=domain with subdomain access
        """

        # Deploy VM as user in a subdomain under a domain that has shared network with subdomain access

@@ -774,7 +774,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_withsubdomainaccess_subdomainadminuser(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for subdomain admin user in a shared network with scope=domain with subdomain access
+        Validate that ROOT admin is able to deploy a VM for subdomain admin user in a shared network with scope=domain with subdomain access
        """

        # Deploy VM as an admin user in a subdomain under a domain that has shared network with subdomain access

@@ -801,7 +801,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_withsubdomainaccess_parentdomainuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for parent domain user in a shared network with scope=domain with subdomain access
+        Validate that ROOT admin is NOT able to deploy a VM for parent domain user in a shared network with scope=domain with subdomain access
        """

        # Deploy VM as user in parentdomain of a domain that has shared network with subdomain access

@@ -831,7 +831,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_withsubdomainaccess_parentdomainadminuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for parent domain admin user in a shared network with scope=domain with subdomain access
+        Validate that ROOT admin is NOT able to deploy a VM for parent domain admin user in a shared network with scope=domain with subdomain access
        """

        # Deploy VM as an admin user in parentdomain of a domain that has shared network with subdomain access

@@ -861,7 +861,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_domain_withsubdomainaccess_ROOTuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=domain with subdomain access
+        Validate that ROOT admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=domain with subdomain access
        """

        # Deploy VM as user in ROOT domain

@@ -893,7 +893,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_account_domainuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for user in the same domain but in a different account in a shared network with scope=account
+        Validate that ROOT admin is NOT able to deploy a VM for user in the same domain but in a different account in a shared network with scope=account
        """

        # Deploy VM as user in a domain under the same domain but different account from the account that has a shared network with scope=account

@@ -923,7 +923,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_account_domainadminuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for admin user in the same domain but in a different account in a shared network with scope=account
+        Validate that ROOT admin is NOT able to deploy a VM for admin user in the same domain but in a different account in a shared network with scope=account
        """

        # Deploy VM as admin user for a domain that has an account with shared network with scope=account

@@ -953,7 +953,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_account_user(self):
        """
-        Valiate that ROOT admin is able to deploy a VM for regular user in a shared network with scope=account
+        Validate that ROOT admin is able to deploy a VM for regular user in a shared network with scope=account
        """

        # Deploy VM as account with shared network with scope=account

@@ -981,7 +981,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_account_differentdomain(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for a admin user in a shared network with scope=account which the admin user does not have access to
+        Validate that ROOT admin is NOT able to deploy a VM for a admin user in a shared network with scope=account which the admin user does not have access to
        """

        # Deploy VM as an admin user in a subdomain under ROOT

@@ -1011,7 +1011,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser(self):
        """
-        Valiate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain in a shared network with scope=account which the user does not have access to
+        Validate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain in a shared network with scope=account which the user does not have access to
        """

        # Deploy VM as user in ROOT domain

@@ -1043,7 +1043,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_domainuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for a domain user in a shared network with scope=all
+        Validate that Domain admin is able to deploy a VM for a domain user in a shared network with scope=all
        """

        # Deploy VM for a user in a domain under ROOT as admin

@@ -1070,7 +1070,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_domainadminuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for a domain admin user in a shared network with scope=all
+        Validate that Domain admin is able to deploy a VM for a domain admin user in a shared network with scope=all
        """

        # Deploy VM for an admin user in a domain under ROOT as admin
@@ -1097,7 +1097,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_subdomainuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for a sub domain user in a shared network with scope=all
+        Validate that Domain admin is able to deploy a VM for a sub domain user in a shared network with scope=all
        """

        # Deploy VM as user in a subdomain under ROOT

@@ -1123,7 +1123,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_subdomainadminuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for a sub domain admin user in a shared network with scope=all
+        Validate that Domain admin is able to deploy a VM for a sub domain admin user in a shared network with scope=all
        """

        # Deploy VM as an admin user in a subdomain under ROOT

@@ -1149,7 +1149,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_ROOTuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=all
+        Validate that Domain admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=all
        """

        # Deploy VM as user in ROOT domain

@@ -1177,7 +1177,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_crossdomainuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for user in other domain in a shared network with scope=all
+        Validate that Domain admin is NOT able to deploy a VM for user in other domain in a shared network with scope=all
        """

        # Deploy VM as user in ROOT domain

@@ -1208,7 +1208,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_domainuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for domain user in a shared network with scope=Domain and no subdomain access
+        Validate that Domain admin is able to deploy a VM for domain user in a shared network with scope=Domain and no subdomain access
        """

        # Deploy VM as user in a domain that has shared network with no subdomain access

@@ -1235,7 +1235,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_domainadminuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for domain admin user in a shared network with scope=Domain and no subdomain access
+        Validate that Domain admin is able to deploy a VM for domain admin user in a shared network with scope=Domain and no subdomain access
        """

        # Deploy VM as an admin user in a domain that has shared network with no subdomain access

@@ -1263,7 +1263,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_subdomainuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for sub domain user in a shared network with scope=Domain and no subdomain access
+        Validate that Domain admin is NOT able to deploy a VM for sub domain user in a shared network with scope=Domain and no subdomain access
        """

        # Deploy VM as user in a subdomain under a domain that has shared network with no subdomain access

@@ -1293,7 +1293,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_subdomainadminuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for sub domain admin user in a shared network with scope=Domain and no subdomain access
+        Validate that Domain admin is NOT able to deploy a VM for sub domain admin user in a shared network with scope=Domain and no subdomain access
        """

        # Deploy VM as an admin user in a subdomain under a domain that has shared network with no subdomain access

@@ -1323,7 +1323,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_parentdomainuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for parent domain user in a shared network with scope=Domain and no subdomain access
+        Validate that Domain admin is NOT able to deploy a VM for parent domain user in a shared network with scope=Domain and no subdomain access
        """

        # Deploy VM as user in parentdomain of a domain that has shared network with no subdomain access

@@ -1353,7 +1353,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_parentdomainadminuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for parent domain admin user in a shared network with scope=Domain and no subdomain access
+        Validate that Domain admin is NOT able to deploy a VM for parent domain admin user in a shared network with scope=Domain and no subdomain access
        """

        # Deploy VM as an admin user in parentdomain of a domain that has shared network with no subdomain access

@@ -1383,7 +1383,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_ROOTuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=Domain and no subdomain access
+        Validate that Domain admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=Domain and no subdomain access
        """

        # Deploy VM as user in ROOT domain

@@ -1414,7 +1414,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_withsubdomainaccess_domainuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for regular user in domain in a shared network with scope=Domain and subdomain access
+        Validate that Domain admin is able to deploy a VM for regular user in domain in a shared network with scope=Domain and subdomain access
        """

        # Deploy VM as user in a domain that has shared network with subdomain access

@@ -1441,7 +1441,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_withsubdomainaccess_domainadminuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for admin user in domain in a shared network with scope=Domain and subdomain access
+        Validate that Domain admin is able to deploy a VM for admin user in domain in a shared network with scope=Domain and subdomain access
        """

        # Deploy VM as an admin user in a domain that has shared network with subdomain access
@@ -1468,7 +1468,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_withsubdomainaccess_subdomainuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for regular user in subdomain in a shared network with scope=Domain and subdomain access
+        Validate that Domain admin is able to deploy a VM for regular user in subdomain in a shared network with scope=Domain and subdomain access
        """

        # Deploy VM as user in a subdomain under a domain that has shared network with subdomain access

@@ -1494,7 +1494,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_withsubdomainaccess_subdomainadminuser(self):
        """
-        Valiate that Domain admin is able to deploy a VM for admin user in subdomain in a shared network with scope=Domain and subdomain access
+        Validate that Domain admin is able to deploy a VM for admin user in subdomain in a shared network with scope=Domain and subdomain access
        """

        # Deploy VM as an admin user in a subdomain under a domain that has shared network with subdomain access

@@ -1520,7 +1520,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_withsubdomainaccess_parentdomainuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for regular user in parent domain in a shared network with scope=Domain and subdomain access
+        Validate that Domain admin is NOT able to deploy a VM for regular user in parent domain in a shared network with scope=Domain and subdomain access
        """

        # Deploy VM as user in parentdomain of a domain that has shared network with subdomain access

@@ -1549,7 +1549,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_withsubdomainaccess_parentdomainadminuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for admin user in parent domain in a shared network with scope=Domain and subdomain access
+        Validate that Domain admin is NOT able to deploy a VM for admin user in parent domain in a shared network with scope=Domain and subdomain access
        """

        # Deploy VM as an admin user in parentdomain of a domain that has shared network with subdomain access

@@ -1579,7 +1579,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_withsubdomainaccess_ROOTuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=Domain and subdomain access
+        Validate that Domain admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=Domain and subdomain access
        """

        # Deploy VM as user in ROOT domain

@@ -1610,7 +1610,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_account_domainuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for user in the same domain but belonging to a different account in a shared network with scope=account
+        Validate that Domain admin is NOT able to deploy a VM for user in the same domain but belonging to a different account in a shared network with scope=account
        """

        # Deploy VM as user in a domain under the same domain but different account from the account that has a shared network with scope=account

@@ -1639,7 +1639,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_account_domainadminuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for an admin user in the same domain but belonging to a different account in a shared network with scope=account
+        Validate that Domain admin is NOT able to deploy a VM for an admin user in the same domain but belonging to a different account in a shared network with scope=account
        """

        # Deploy VM as admin user for a domain that has an account with shared network with scope=account

@@ -1668,7 +1668,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_account_user(self):
        """
-        Valiate that Domain admin is able to deploy a VM for an regular user in a shared network with scope=account
+        Validate that Domain admin is able to deploy a VM for an regular user in a shared network with scope=account
        """

        # Deploy VM as account with shared network with scope=account

@@ -1695,7 +1695,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_account_differentdomain(self):
        """
-        Valiate that Domain admin is able NOT able to deploy a VM for an regular user from a differnt domain in a shared network with scope=account
+        Validate that Domain admin is able NOT able to deploy a VM for an regular user from a differnt domain in a shared network with scope=account
        """

        # Deploy VM as an admin user in a subdomain under ROOT

@@ -1724,7 +1724,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_domainadmin_scope_account_ROOTuser(self):
        """
-        Valiate that Domain admin is NOT able to deploy a VM for an regular user in ROOT domain in a shared network with scope=account
+        Validate that Domain admin is NOT able to deploy a VM for an regular user in ROOT domain in a shared network with scope=account
        """

        # Deploy VM as user in ROOT domain

@@ -1754,7 +1754,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_regularuser_scope_all_anotherusersamedomain(self):
        """
-        Valiate that regular user is able NOT able to deploy a VM for another user in the same domain in a shared network with scope=all
+        Validate that regular user is able NOT able to deploy a VM for another user in the same domain in a shared network with scope=all
        """

        # Deploy VM for a user in a domain under ROOT as admin

@@ -1782,7 +1782,7 @@ class TestSharedNetworkImpersonation(cloudstackTestCase):
    @attr("simulator_only", tags=["advanced"], required_hardware="false")
    def test_deployVM_in_sharedNetwork_as_regularuser_scope_all_crossdomain(self):
        """
-        Valiate that regular user is able NOT able to deploy a VM for another user in a different domain in a shared network with scope=all
+        Validate that regular user is able NOT able to deploy a VM for another user in a different domain in a shared network with scope=all
        """

        # Deploy VM for a user in a domain under ROOT as admin
@@ -32,8 +32,8 @@
    "format": "qcow2",
    "headless": true,
    "http_directory": "http",
-   "iso_checksum": "sha512:04a2a128852c2dff8bb71779ad325721385051eb1264d897bdb5918ab207a9b1de636ded149c56c61a09eb8c7f428496815e70d3be31b1b1cf4c70bf6427cedd",
-   "iso_url": "https://cdimage.debian.org/mirror/cdimage/release/12.9.0/arm64/iso-cd/debian-12.9.0-arm64-netinst.iso",
+   "iso_checksum": "sha512:022895e699231c94abf7012f86cabc587dc576f07f856c87609d5d40c1f921d805a5a862cba94c1a47d09aaa565ec445222e338e73d1fa1affc4fc5908bb50ad",
+   "iso_url": "https://cdimage.debian.org/mirror/cdimage/release/12.10.0/arm64/iso-cd/debian-12.10.0-arm64-netinst.iso",
    "net_device": "virtio-net",
    "output_directory": "../dist",
    "qemu_binary": "qemu-system-aarch64",
@@ -31,8 +31,8 @@
    "format": "qcow2",
    "headless": true,
    "http_directory": "http",
-   "iso_checksum": "sha512:04a2a128852c2dff8bb71779ad325721385051eb1264d897bdb5918ab207a9b1de636ded149c56c61a09eb8c7f428496815e70d3be31b1b1cf4c70bf6427cedd",
-   "iso_url": "https://cdimage.debian.org/mirror/cdimage/release/12.9.0/arm64/iso-cd/debian-12.9.0-arm64-netinst.iso",
+   "iso_checksum": "sha512:022895e699231c94abf7012f86cabc587dc576f07f856c87609d5d40c1f921d805a5a862cba94c1a47d09aaa565ec445222e338e73d1fa1affc4fc5908bb50ad",
+   "iso_url": "https://cdimage.debian.org/mirror/cdimage/release/12.10.0/arm64/iso-cd/debian-12.10.0-arm64-netinst.iso",
    "net_device": "virtio-net",
    "output_directory": "../dist",
    "qemu_binary": "qemu-system-aarch64",
@@ -27,8 +27,8 @@
    "format": "qcow2",
    "headless": true,
    "http_directory": "http",
-   "iso_checksum": "sha512:9ebe405c3404a005ce926e483bc6c6841b405c4d85e0c8a7b1707a7fe4957c617ae44bd807a57ec3e5c2d3e99f2101dfb26ef36b3720896906bdc3aaeec4cd80",
-   "iso_url": "https://cdimage.debian.org/mirror/cdimage/release/12.9.0/amd64/iso-cd/debian-12.9.0-amd64-netinst.iso",
+   "iso_checksum": "sha512:cb089def0684fd93c9c2fbe45fd16ecc809c949a6fd0c91ee199faefe7d4b82b64658a264a13109d59f1a40ac3080be2f7bd3d8bf3e9cdf509add6d72576a79b",
+   "iso_url": "https://cdimage.debian.org/mirror/cdimage/release/12.10.0/amd64/iso-cd/debian-12.10.0-amd64-netinst.iso",
    "net_device": "virtio-net",
    "output_directory": "../dist",
    "qemuargs": [
@@ -58,7 +58,7 @@ docker run -ti --name cloudstack --link cloudstack-mysql:mysql -d -p 8080:8080 -
### Marvin

Use marvin to deploy or test your CloudStack environment.
-Use Marvin with cloudstack connection thru the API port (8096)
+Use Marvin with cloudstack connection through the API port (8096)

```
docker pull cloudstack/marvin
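In practice, "the API port (8096)" is CloudStack's unauthenticated integration port, so a quick smoke test does not even need Marvin itself. A minimal sketch, assuming a management server reachable on localhost with the integration port enabled (host, port and command here are illustrative, not values taken from this repository):

```python
import json
from urllib.request import urlopen

# The integration port accepts API calls without signing or credentials.
url = "http://localhost:8096/client/api?command=listZones&response=json"
with urlopen(url) as resp:
    zones = json.loads(resp.read().decode("utf-8"))

print(zones.get("listzonesresponse", {}))
```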
@@ -99,7 +99,7 @@ tag:latest = main branch
docker build -f Dockerfile.centos6 -t cloudstack/management_centos6 .
```

-2. on jenkins, database and systemvm.iso are pre-deployed. the inital start require privileged container to
+2. on jenkins, database and systemvm.iso are pre-deployed. the initial start require privileged container to
mount systemvm.iso and copy ssh_rsa.pub into it.

```
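A loose illustration of the privileged start described above, assuming the management image built in the previous step; the flags shown are generic docker options, and the exact volumes and commands used on jenkins are not part of this hunk:

```python
import subprocess

# Run the management server container with extended privileges so that
# systemvm.iso can be loop-mounted inside it.
subprocess.check_call([
    "docker", "run", "--privileged", "-d",
    "-p", "8080:8080",
    "--name", "cloudstack",
    "cloudstack/management_centos6",
])
```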
@@ -1069,7 +1069,7 @@ test_data = {
        "format": "raw",
        "hypervisor": "kvm",
        "ostype": "Other Linux (64-bit)",
-       "url": "https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img",
+       "url": "https://cloud-images.ubuntu.com/releases/jammy/release/ubuntu-22.04-server-cloudimg-amd64.img",
        "requireshvm": "True",
        "ispublic": "True",
        "isextractable": "False"
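For context, this entry is the kind of dictionary a Marvin test feeds to template registration. A rough sketch of the usual pattern, where the dictionary key, apiclient and zone are assumed to be provided by the surrounding test harness rather than shown in this hunk:

```python
from marvin.lib.base import Template

# Register the cloud image described by the test_data entry above and wait
# for it to finish downloading before the test uses it.
template = Template.register(
    apiclient,
    test_data["template"],   # key name assumed; use whichever key holds the dict above
    zoneid=zone.id,
    hypervisor="kvm",
)
template.download(apiclient)
```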
@@ -72,7 +72,7 @@ systems - these are virtual/physical infrastructure mapped to cobbler profiles b
When a new image needs to be added we create a 'distro' in cobbler and associate that with a profile's kickstart. Any new systems to be hooked-up to be serviced by the profile can then be added easily by cmd line.

-b. Puppet master - Cobbler reimages machines on-demand but it is upto puppet recipes to do configuration management within them. The configuration management is required for kvm hypervisors (kvm agent for eg:) and for the cloudstack management server which needs mysql, cloudstack, etc. The puppetmasterd daemon on the driver-vm is responsible for 'kicking' nodes to initiate configuration management on themselves when they come alive.
+b. Puppet master - Cobbler reimages machines on-demand, but it is upto puppet recipes to do configuration management within them. The configuration management is required for kvm hypervisors (kvm agent for eg:) and for the cloudstack management server which needs mysql, cloudstack, etc. The puppetmasterd daemon on the driver-vm is responsible for 'kicking' nodes to initiate configuration management on themselves when they come alive.

So the driver-vm is also the repository of all the puppet recipes for various modules that need to be configured for the test infrastructure to work. The modules are placed in /etc/puppet and bear the same structure as our GitHub repo. When we need to affect a configuration change on any of our systems we only change the GitHub repo and the systems in place are affected upon next run.
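The "added easily by cmd line" step mentioned above usually amounts to registering the host against an existing profile and re-syncing cobbler; a rough sketch, with the profile name, MAC and IP as placeholders:

```python
import subprocess

# Attach a new physical host to an existing cobbler profile/kickstart,
# then sync so the DHCP/PXE records are regenerated.
subprocess.check_call([
    "cobbler", "system", "add",
    "--name", "kvm-host-01",
    "--profile", "rhel-kvm",
    "--mac", "52:54:00:12:34:56",
    "--ip-address", "10.1.1.21",
    "--hostname", "kvm-host-01",
])
subprocess.check_call(["cobbler", "sync"])
```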
@@ -80,7 +80,7 @@ c. dnsmasq - DNS is controlled by cobbler but its configuration of hosts is set
d. dhcp - DHCP is also done by dnsmasq. All configuration is in /etc/dnsmasq.conf. static mac-ip-name mappings are given for hypervisors while the virtual instances get dynamic ips

-e. ipmitool - ipmi for power management is setup on all the test servers and the ipmitool provides a convienient cli for booting the machines on the network into PXEing.
+e. ipmitool - ipmi for power management is setup on all the test servers and the ipmitool provides a convenient cli for booting the machines on the network into PXEing.

f. jenkins-slave - jenkins slave.jar is placed on the driver-vm as a service in /etc/init.d to react to jenkins schedules and to post reports to. The slave runs in headless mode as the driver-vm does not run X.
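A small sketch of the ipmitool usage described in point e, wrapped in Python for consistency with the other examples here; the BMC address and credentials are placeholders:

```python
import subprocess

def pxe_reboot(bmc_ip, user, password):
    """Force a test server to PXE boot on its next start, then power-cycle it."""
    base = ["ipmitool", "-I", "lanplus", "-H", bmc_ip, "-U", user, "-P", password]
    subprocess.check_call(base + ["chassis", "bootdev", "pxe"])
    subprocess.check_call(base + ["power", "cycle"])

pxe_reboot("10.1.1.121", "ADMIN", "password")
```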
@@ -99,7 +99,7 @@ d. multi-pod tests
marvin integration
==================

-once cloudstack has been installed and the hypervisors prepared we are ready to use marvin to stitch together zones, pods, clusters and compute and storage to put together a 'cloud'. once configured - we perform a cursory health check to see if we have all systemVMs running in all zones and that built-in templates are downloaded in all zones. Subsequently we are able to launch tests on this environment
+once cloudstack has been installed and the hypervisors prepared we are ready to use marvin to stitch together zones, pods, clusters and compute and storage to put together a 'cloud'. once configured - we perform a cursory health check to see if we have all systemVMs running in all zones and that built-in templates are downloaded in all zones. Subsequently, we are able to launch tests on this environment

Only the latest tests from git are run on the setup. This allows us to test in a pseudo-continuous fashion with a nightly build deployed on the environment. Each test run takes a few hours to finish.
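The "cursory health check" mentioned above can be expressed with the Marvin API client roughly as follows, assuming an authenticated apiclient and a zone id taken from the deployed configuration:

```python
from marvin.cloudstackAPI import listSystemVms, listTemplates

def health_check(apiclient, zoneid):
    # All system VMs (console proxy, secondary storage VM) should be running.
    cmd = listSystemVms.listSystemVmsCmd()
    cmd.zoneid = zoneid
    system_vms = apiclient.listSystemVms(cmd) or []
    assert system_vms and all(vm.state == "Running" for vm in system_vms)

    # The built-in template should have finished downloading in the zone.
    tcmd = listTemplates.listTemplatesCmd()
    tcmd.templatefilter = "featured"
    tcmd.zoneid = zoneid
    templates = apiclient.listTemplates(tcmd) or []
    assert any(t.isready for t in templates)
```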
@@ -121,7 +121,7 @@ When jenkins triggers the job following sequence of actions occur on the test in
3. we fetch the last successful marvin build from builds.a.o and install it within this virtualenv. installing a new marvin on each run helps us test with the latest APIs available.

-4. we fetch the latest version of the driver script from github:cloud-autodeploy. fetching the latest allows us to make adjustments to the infra without having to copy scripts in to the test infrastrcuture.
+4. we fetch the latest version of the driver script from github:cloud-autodeploy. fetching the latest allows us to make adjustments to the infra without having to copy scripts in to the test infrastructure.

5. based on the hypervisor chosen we choose a profile for cobbler to reimage the hosts in the infrastructure. if xen is chosen we bring up the profile of the latest xen kickstart available in cobbler. currently - this is at xen 6.0.2. if kvm is chosen we can pick between ubuntu and rhel based host OS kickstarts.
@@ -62,7 +62,7 @@ Fix issues and vulnerabilities:

npm audit

-A basic development guide and explaination of the basic components can be found
+A basic development guide and explanation of the basic components can be found
[here](docs/development.md)

## Production
@@ -484,7 +484,7 @@ This requires configuring and setting up CKS: http://docs.cloudstack.apache.org/
- [ ] Disable/enable host
- [ ] Enable/cancel maintenance mode
- [ ] Enable/disable out-of-band management
-- [ ] Enable/disale HA
+- [ ] Enable/disable HA
- [ ] Delete host (only if disabled)

**Infrastructure > Primary Storage**
@@ -58,7 +58,7 @@ This requires configuring and setting up CKS: http://docs.cloudstack.apache.org/

**VPC**
- [ ] Add VPC
-- [ ] VPC actions - updat, restart, delete
+- [ ] VPC actions - update, restart, delete
- [ ] Add security group
- [ ] Add/delete ingress/egress rule