CKS Enhancements (#9102)

CKS Enhancements:

* Ability to specify different compute or service offerings for different types of CKS cluster nodes – worker, master or etcd

* Ability to use CKS ready custom templates for CKS cluster nodes

* Add and Remove external nodes to and from a kubernetes cluster

Co-authored-by: nvazquez <nicovazquez90@gmail.com>

* Update remove node timeout global setting

* CKS/NSX: Missing variables in worker nodes

* CKS: Fix ISO attach logic

* CKS: Fix ISO attach logic

* address comment

* Fix Port - Node mapping when cluster is scaled in the presence of external node(s)

* CKS: Externalize control and worker node setup wait time and installation attempts

* Fix logger

* Add missing headers and fix end of line on files

* CKS Mark Nodes for Manual Upgrade and Filter Nodes to add to CKS cluster from the same network

* Add support to deploy CKS cluster nodes on hosts dedicated to a domain

---------

Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>

* Support unstacked ETCD

---------

Co-authored-by: nvazquez <nicovazquez90@gmail.com>

* Fix CKS cluster scaling and minor UI improvement

* Reuse k8s cluster public IP for etcd nodes and rename etcd nodes

* Fix DNS resolver issue

* Update UDP active monitor to ICMP

* Add hypervisor type to CKS cluster creation to fix cluster creation when external hosts are added

* Fix build

* Fix logger

* Modify hypervisor param description in the create CKS cluster API

* CKS delete fails when external nodes are present

* CKS delete fails when external nodes are present

* address comment

* Improve network rules cleanup on failure adding external nodes to CKS cluster

* UI: Fix etcd template was not honoured

* UI: Fix etcd template was not honoured

* Refactor

* CKS: Exclude etcd nodes when calculating port numbers

* Fix network cleanup in case of CKS cluster failure

* Externalize retries and interval for NSX segment deletion

* Fix CKS scaling when external node(s) present in the cluster

* CKS: Fix port numbers displayed against ETCD nodes

* Add node version details to every node of k8s cluster - as we now support manual upgrade

* Add node version details to every node of k8s cluster - as we now support manual upgrade

* update column name

* CKS: Exclude etcd nodes when calculating port numbers

* update param name

* update param

* UI: Fix CKS cluster creation templates listing for non admins

* CKS: Prevent etcd node start port number from coinciding with k8s cluster start port numbers

* CKS: Set default kubernetes cluster node version to the kubernetes cluster version on upgrade

* CKS: Set default kubernetes cluster node version to the kubernetes cluster version on upgrade

* consolidate query

* Fix upgrade logic

---------

Co-authored-by: nvazquez <nicovazquez90@gmail.com>

* Fix CKS cluster version upgrade

* CKS: Fix etcd port numbers being skipped

* Fix CKS cluster with etcd nodes on VPC

* Move schema and upgrade for 4.20

* Fix logger

* Fix after rebasing

* Add support for using different CNI plugins with CKS

* Add support for using different CNI plugins with CKS

* remove unused import

* Add UI support and list cni config API

* necessary UI changes

* add license

* changes to support external cni

* UI changes

* Fix NPE on restarting VPC with additional public IPs

* fix merge conflict

* add asnumber to create k8s svc layer

* support cni framework to use as-numbers

* update code

* condition to ignore undefined jinja template variables

* CKS: Do not pass AS number when network ID is passed

* Fix deletion of Userdata / CNI Configuration in projects

* CKS: Add CNI configuration details to the response and UI

* Explicit events for registering cni configuration

* Add Delete cni configuration API

* Fix CKS deployment when using VPC tiers with custom ACLs

* Fix DNS list on VR

* CKS: Use Network offering of the network passed during CKS cluster creation to get the AS number

* CKS cluster with guest IP

* Fix: Use control node guest IP as join IP for external nodes addition

* Fix DNS resolver issue

* Improve etcd indexing - start from 1

* CKS: Add external node to a CKS cluster deployed with etcd node(s) successfully

* CKS: Add external node to a CKS cluster deployed with etcd node(s) successfully

* simplify logic

* Tweak setup-kube-system script for baremetal external nodes

* Consider cordoned nodes while getting ready nodes

* Fix CKS cluster scale calculations

* Set token TTL to 0 (no expire) for external etcd

* Fix missing quotes

* Fix build

* Revert PR 9133

* Add calico commands for ens35 interface

* Address review comments: plan CKS cluster deployment based on the node type

* Add qemu-guest-agent dependency for kvm based templates

* Add marvin test for CKS clusters with different offerings per node type

* Remove test tag

* Add marvin test and fix update template for cks and since annotations

* Fix marvin test for adding and removing external nodes

* Fix since version on API params

* Address review comments

* Fix unit test

* Address review comments

* UI: Make CKS public templates visible to non-admins on CKS cluster creation

* Fix linter

* Fix merge error

* Fix positional parameters on the create kubernetes ISO script and make the ETCD version optional

* fix etcd port displayed

* Further improvements to CKS (#118)

* Multiple nics support on Ubuntu template

* Multiple nics support on Ubuntu template

* supports allocating IP to the nic when VM is added to another network - no delay

* Add option to select DNS or VR IP as resolver on VPC creation

* Add API param and UI to select option

* Add column on vpc and pass the value on the databags for CsDhcp.py to fix accordingly

* Externalize the CKS Configuration, so that end users can tweak the configuration before deploying the cluster

* Add new directory to c8 packaging for CKS config

* Remove k8s configuration from resources and make it configurable

* Revert "Remove k8s configuration from resources and make it configurable"

This reverts commit d5997033ebe4ba559e6478a64578b894f8e7d3db.

* copy conf to mgmt server and consume them from there

* Remove node from cluster

* Add missing /opt/bin directory required by external nodes

* Login to a specific Project view

* add indents

* Fix CKS HA clusters

* Fix build

---------

Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>

* Add missing headers

* Fix linter

* Address more review comments

* Fix unit test

* Fix scaling case for the same offering

* Revert "Login to a specific Project view"

This reverts commit 95e37563f4.

* Revert "Fix CKS HA clusters" (#120)

This reverts commit 8dac16aa35.

* Apply suggestions from code review about user data

Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>

* Update api/src/main/java/org/apache/cloudstack/api/command/user/userdata/BaseRegisterUserDataCmd.java

Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>

* Refactor column names and schema path

* Fix scaling for non existing previous offering per node type

* Update node offering entry if there was an existing offering but a global service offering has been provided on scale

---------

Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
This commit is contained in:
Nicolas Vazquez 2025-06-19 02:30:42 -03:00 committed by GitHub
parent 4662ffc424
commit 6adfda2818
142 changed files with 7384 additions and 638 deletions

@@ -289,6 +289,8 @@ public class EventTypes {
//registering userdata events
public static final String EVENT_REGISTER_USER_DATA = "REGISTER.USER.DATA";
public static final String EVENT_REGISTER_CNI_CONFIG = "REGISTER.CNI.CONFIG";
public static final String EVENT_DELETE_CNI_CONFIG = "DELETE.CNI.CONFIG";
//register for user API and secret keys
public static final String EVENT_REGISTER_FOR_SECRET_API_KEY = "REGISTER.USER.KEY";

@@ -44,6 +44,8 @@ public interface KubernetesCluster extends ControlledEntity, com.cloud.utils.fsm
AutoscaleRequested,
ScaleUpRequested,
ScaleDownRequested,
AddNodeRequested,
RemoveNodeRequested,
UpgradeRequested,
OperationSucceeded,
OperationFailed,
@@ -59,6 +61,8 @@ public interface KubernetesCluster extends ControlledEntity, com.cloud.utils.fsm
Stopped("All resources for the Kubernetes cluster are destroyed, Kubernetes cluster may still have ephemeral resource like persistent volumes provisioned"),
Scaling("Transient state in which resources are either getting scaled up/down"),
Upgrading("Transient state in which cluster is getting upgraded"),
Importing("Transient state in which additional nodes are added as worker nodes to a cluster"),
RemovingNodes("Transient state in which additional nodes are removed from a cluster"),
Alert("State to represent Kubernetes clusters which are not in expected desired state (operationally in active control plane, stopped cluster VM's etc)."),
Recovering("State in which Kubernetes cluster is recovering from alert state"),
Destroyed("End state of Kubernetes cluster in which all resources are destroyed, cluster will not be usable further"),
@@ -96,6 +100,17 @@ public interface KubernetesCluster extends ControlledEntity, com.cloud.utils.fsm
s_fsm.addTransition(State.Upgrading, Event.OperationSucceeded, State.Running);
s_fsm.addTransition(State.Upgrading, Event.OperationFailed, State.Alert);
s_fsm.addTransition(State.Running, Event.AddNodeRequested, State.Importing);
s_fsm.addTransition(State.Alert, Event.AddNodeRequested, State.Importing);
s_fsm.addTransition(State.Importing, Event.OperationSucceeded, State.Running);
s_fsm.addTransition(State.Importing, Event.OperationFailed, State.Running);
s_fsm.addTransition(State.Alert, Event.OperationSucceeded, State.Running);
s_fsm.addTransition(State.Running, Event.RemoveNodeRequested, State.RemovingNodes);
s_fsm.addTransition(State.Alert, Event.RemoveNodeRequested, State.RemovingNodes);
s_fsm.addTransition(State.RemovingNodes, Event.OperationSucceeded, State.Running);
s_fsm.addTransition(State.RemovingNodes, Event.OperationFailed, State.Running);
s_fsm.addTransition(State.Alert, Event.RecoveryRequested, State.Recovering);
s_fsm.addTransition(State.Recovering, Event.OperationSucceeded, State.Running);
s_fsm.addTransition(State.Recovering, Event.OperationFailed, State.Alert);
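The add/remove-node transitions above can be summarized as a plain state/event table. This is a hedged sketch for illustration only; `CksFsmSketch` and `next()` are invented names, not part of CloudStack's `s_fsm` machinery:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the new Importing/RemovingNodes transitions.
public class CksFsmSketch {
    private static final Map<String, String> FSM = new HashMap<>();

    private static void add(String state, String event, String next) {
        FSM.put(state + ":" + event, next);
    }

    static {
        add("Running", "AddNodeRequested", "Importing");
        add("Alert", "AddNodeRequested", "Importing");
        add("Importing", "OperationSucceeded", "Running");
        add("Importing", "OperationFailed", "Running"); // note: failure returns to Running, not Alert
        add("Running", "RemoveNodeRequested", "RemovingNodes");
        add("Alert", "RemoveNodeRequested", "RemovingNodes");
        add("RemovingNodes", "OperationSucceeded", "Running");
        add("RemovingNodes", "OperationFailed", "Running");
    }

    // Returns the next state, or the current state if the event is not handled here.
    public static String next(String state, String event) {
        return FSM.getOrDefault(state + ":" + event, state);
    }

    public static void main(String[] args) {
        System.out.println(next("Running", "AddNodeRequested")); // Importing
    }
}
```

Note that both `OperationSucceeded` and `OperationFailed` lead back to `Running` from the two transient states, so a failed node add/remove does not strand the cluster.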
@@ -142,4 +157,13 @@ public interface KubernetesCluster extends ControlledEntity, com.cloud.utils.fsm
Long getMaxSize();
Long getSecurityGroupId();
ClusterType getClusterType();
Long getControlNodeServiceOfferingId();
Long getWorkerNodeServiceOfferingId();
Long getEtcdNodeServiceOfferingId();
Long getControlNodeTemplateId();
Long getWorkerNodeTemplateId();
Long getEtcdNodeTemplateId();
Long getEtcdNodeCount();
Long getCniConfigId();
String getCniConfigDetails();
}

@@ -18,14 +18,23 @@ package com.cloud.kubernetes.cluster;
import org.apache.cloudstack.acl.ControlledEntity;
import java.util.Map;
import com.cloud.user.Account;
import com.cloud.uservm.UserVm;
import com.cloud.utils.component.Adapter;
public interface KubernetesServiceHelper extends Adapter {
enum KubernetesClusterNodeType {
CONTROL, WORKER, ETCD, DEFAULT
}
ControlledEntity findByUuid(String uuid);
ControlledEntity findByVmId(long vmId);
void checkVmCanBeDestroyed(UserVm userVm);
boolean isValidNodeType(String nodeType);
Map<String, Long> getServiceOfferingNodeTypeMap(Map<String, Map<String, String>> serviceOfferingNodeTypeMap);
Map<String, Long> getTemplateNodeTypeMap(Map<String, Map<String, String>> templateNodeTypeMap);
void cleanupForAccount(Account account);
}
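`getServiceOfferingNodeTypeMap` turns the raw API map parameter (index to `{node, offering}` entries) into a per-node-type lookup. A hedged sketch of that flattening, assuming the `node`/`offering` key names from `VmDetailConstants`; the real helper also validates node types and resolves offering UUIDs to internal ids, which is omitted here:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical flattening: {"0": {"node":"worker","offering":"7"}} -> {"WORKER": 7L}.
public class NodeTypeMapSketch {
    public static Map<String, Long> flatten(Map<String, Map<String, String>> raw) {
        Map<String, Long> result = new HashMap<>();
        if (raw == null) {
            return result;
        }
        for (Map<String, String> entry : raw.values()) {
            String nodeType = entry.get("node");       // e.g. "worker", "control", "etcd"
            String offeringId = entry.get("offering"); // a UUID in the real API; a plain number here
            if (nodeType != null && offeringId != null) {
                result.put(nodeType.toUpperCase(), Long.parseLong(offeringId));
            }
        }
        return result;
    }
}
```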

@@ -268,4 +268,6 @@ public interface NetworkService {
InternalLoadBalancerElementService getInternalLoadBalancerElementByNetworkServiceProviderId(long networkProviderId);
InternalLoadBalancerElementService getInternalLoadBalancerElementById(long providerId);
List<InternalLoadBalancerElementService> getInternalLoadBalancerElements();
boolean handleCksIsoOnNetworkVirtualRouter(Long virtualRouterId, boolean mount) throws ResourceUnavailableException;
}

@@ -105,4 +105,6 @@ public interface Vpc extends ControlledEntity, Identity, InternalIdentity {
String getIp6Dns1();
String getIp6Dns2();
boolean useRouterIpAsResolver();
}

@@ -48,17 +48,17 @@ public interface VpcService {
* @param vpcName
* @param displayText
* @param cidr
* @param networkDomain TODO
* @param ip4Dns1
* @param ip4Dns2
* @param displayVpc TODO
* @param useVrIpResolver
* @return
* @throws ResourceAllocationException TODO
*/
Vpc createVpc(long zoneId, long vpcOffId, long vpcOwnerId, String vpcName, String displayText, String cidr, String networkDomain,
String ip4Dns1, String ip4Dns2, String ip6Dns1, String ip6Dns2, Boolean displayVpc, Integer publicMtu, Integer cidrSize,
Long asNumber, List<Long> bgpPeerIds)
throws ResourceAllocationException;
Long asNumber, List<Long> bgpPeerIds, Boolean useVrIpResolver) throws ResourceAllocationException;
/**
* Persists VPC record in the database

@@ -62,8 +62,10 @@ import org.apache.cloudstack.api.command.user.ssh.CreateSSHKeyPairCmd;
import org.apache.cloudstack.api.command.user.ssh.DeleteSSHKeyPairCmd;
import org.apache.cloudstack.api.command.user.ssh.ListSSHKeyPairsCmd;
import org.apache.cloudstack.api.command.user.ssh.RegisterSSHKeyPairCmd;
import org.apache.cloudstack.api.command.user.userdata.DeleteCniConfigurationCmd;
import org.apache.cloudstack.api.command.user.userdata.DeleteUserDataCmd;
import org.apache.cloudstack.api.command.user.userdata.ListUserDataCmd;
import org.apache.cloudstack.api.command.user.userdata.RegisterCniConfigurationCmd;
import org.apache.cloudstack.api.command.user.userdata.RegisterUserDataCmd;
import org.apache.cloudstack.api.command.user.vm.GetVMPasswordCmd;
import org.apache.cloudstack.api.command.user.vmgroup.UpdateVMGroupCmd;
@@ -369,17 +371,23 @@ public interface ManagementService {
* The api command class.
* @return The list of userdatas found.
*/
Pair<List<? extends UserData>, Integer> listUserDatas(ListUserDataCmd cmd);
Pair<List<? extends UserData>, Integer> listUserDatas(ListUserDataCmd cmd, boolean forCks);
/**
* Registers a cni configuration.
*
* @param cmd The api command class.
* @return A VO with the registered user data.
*/
UserData registerCniConfiguration(RegisterCniConfigurationCmd cmd);
/**
* Registers a userdata.
*
* @param cmd The api command class.
* @return A VO with the registered userdata.
*/
UserData registerUserData(RegisterUserDataCmd cmd);
/**
* Deletes a userdata.
*
@@ -389,6 +397,14 @@ public interface ManagementService {
*/
boolean deleteUserData(DeleteUserDataCmd cmd);
/**
* Deletes a CNI configuration.
*
* @param cmd
* The api command class.
* @return True on success. False otherwise.
*/
boolean deleteCniConfiguration(DeleteCniConfigurationCmd cmd);
/**
* Search registered key pairs for the logged in user.
*

@@ -58,10 +58,23 @@ public interface TemplateApiService {
VirtualMachineTemplate prepareTemplate(long templateId, long zoneId, Long storageId);
/**
* Detach ISO from VM
* @param vmId id of the VM
* @param isoId id of the ISO (when passed). If it is not passed, it will get it from user_vm table
* @param extraParams forced, isVirtualRouter
* @return true when operation succeeds, false if not
*/
boolean detachIso(long vmId, Long isoId, Boolean... extraParams);
boolean detachIso(long vmId, boolean forced);
boolean attachIso(long isoId, long vmId, boolean forced);
/**
* Attach ISO to a VM
* @param isoId id of the ISO to attach
* @param vmId id of the VM to attach the ISO to
* @param extraParams: forced, isVirtualRouter
* @return true when operation succeeds, false if not
*/
boolean attachIso(long isoId, long vmId, Boolean... extraParams);
/**
* Deletes a template

@@ -145,6 +145,8 @@ public interface VirtualMachineTemplate extends ControlledEntity, Identity, Inte
boolean isDeployAsIs();
boolean isForCks();
Long getUserDataId();
UserData.UserDataOverridePolicy getUserDataOverridePolicy();

@@ -29,4 +29,5 @@ public interface UserData extends ControlledEntity, InternalIdentity, Identity {
String getUserData();
String getParams();
boolean isForCks();
}

@@ -20,6 +20,7 @@ import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import com.cloud.deploy.DeploymentPlan;
import org.apache.cloudstack.api.BaseCmd.HTTPMethod;
import org.apache.cloudstack.api.command.admin.vm.AssignVMCmd;
import org.apache.cloudstack.api.command.admin.vm.RecoverVMCmd;
@@ -111,7 +112,7 @@ public interface UserVmService {
UserVm rebootVirtualMachine(RebootVMCmd cmd) throws InsufficientCapacityException, ResourceUnavailableException, ResourceAllocationException;
void startVirtualMachine(UserVm vm) throws OperationTimedoutException, ResourceUnavailableException, InsufficientCapacityException;
void startVirtualMachine(UserVm vm, DeploymentPlan plan) throws OperationTimedoutException, ResourceUnavailableException, InsufficientCapacityException;
void startVirtualMachineForHA(VirtualMachine vm, Map<VirtualMachineProfile.Param, Object> params,
DeploymentPlanner planner) throws InsufficientCapacityException, ResourceUnavailableException,

@@ -89,6 +89,9 @@ public interface VmDetailConstants {
String DEPLOY_AS_IS_CONFIGURATION = "configurationId";
String KEY_PAIR_NAMES = "keypairnames";
String CKS_CONTROL_NODE_LOGIN_USER = "controlNodeLoginUser";
String CKS_NODE_TYPE = "node";
String OFFERING = "offering";
String TEMPLATE = "template";
// VMware to KVM VM migrations specific
String VMWARE_TO_KVM_PREFIX = "vmware-to-kvm";

@@ -85,7 +85,7 @@ public enum ApiCommandResourceType {
ObjectStore(org.apache.cloudstack.storage.object.ObjectStore.class),
Bucket(org.apache.cloudstack.storage.object.Bucket.class),
QuotaTariff(org.apache.cloudstack.quota.QuotaTariff.class),
KubernetesCluster(null),
KubernetesCluster(com.cloud.kubernetes.cluster.KubernetesCluster.class),
KubernetesSupportedVersion(null),
SharedFS(org.apache.cloudstack.storage.sharedfs.SharedFS.class);

@@ -119,6 +119,10 @@ public class ApiConstants {
public static final String CN = "cn";
public static final String COMMAND = "command";
public static final String CMD_EVENT_TYPE = "cmdeventtype";
public static final String CNI_CONFIG = "cniconfig";
public static final String CNI_CONFIG_ID = "cniconfigurationid";
public static final String CNI_CONFIG_DETAILS = "cniconfigdetails";
public static final String CNI_CONFIG_NAME = "cniconfigname";
public static final String COMPONENT = "component";
public static final String CPU_CORE_PER_SOCKET = "cpucorepersocket";
public static final String CPU_NUMBER = "cpunumber";
@@ -140,6 +144,7 @@ public class ApiConstants {
public static final String ENCRYPT_FORMAT = "encryptformat";
public static final String ENCRYPT_ROOT = "encryptroot";
public static final String ENCRYPTION_SUPPORTED = "encryptionsupported";
public static final String ETCD_IPS = "etcdips";
public static final String MIN_IOPS = "miniops";
public static final String MAX_IOPS = "maxiops";
public static final String HYPERVISOR_SNAPSHOT_RESERVE = "hypervisorsnapshotreserve";
@@ -333,6 +338,7 @@ public class ApiConstants {
public static final String LBID = "lbruleid";
public static final String LB_PROVIDER = "lbprovider";
public static final String MAC_ADDRESS = "macaddress";
public static final String MANUAL_UPGRADE = "manualupgrade";
public static final String MAX = "max";
public static final String MAX_SNAPS = "maxsnaps";
public static final String MAX_BACKUPS = "maxbackups";
@@ -344,6 +350,7 @@ public class ApiConstants {
public static final String MIGRATIONS = "migrations";
public static final String MEMORY = "memory";
public static final String MODE = "mode";
public static final String MOUNT_CKS_ISO_ON_VR = "mountcksisoonvr";
public static final String MULTI_ARCH = "ismultiarch";
public static final String NSX_MODE = "nsxmode";
public static final String NETWORK_MODE = "networkmode";
@@ -360,6 +367,7 @@ public class ApiConstants {
public static final String NIC_PACKED_VIRTQUEUES_ENABLED = "nicpackedvirtqueuesenabled";
public static final String NEW_START_IP = "newstartip";
public static final String NEW_END_IP = "newendip";
public static final String KUBERNETES_NODE_VERSION = "kubernetesnodeversion";
public static final String NUM_RETRIES = "numretries";
public static final String OFFER_HA = "offerha";
public static final String OS_DISTRIBUTION = "osdistribution";
@@ -547,6 +555,7 @@ public class ApiConstants {
public static final String USER_SECURITY_GROUP_LIST = "usersecuritygrouplist";
public static final String USER_SECRET_KEY = "usersecretkey";
public static final String USE_VIRTUAL_NETWORK = "usevirtualnetwork";
public static final String USE_VIRTUAL_ROUTER_IP_RESOLVER = "userouteripresolver";
public static final String UPDATE_IN_SEQUENCE = "updateinsequence";
public static final String VALUE = "value";
public static final String VIRTUAL_MACHINE_ID = "virtualmachineid";
@@ -563,6 +572,12 @@ public class ApiConstants {
public static final String VLAN = "vlan";
public static final String VLAN_RANGE = "vlanrange";
public static final String WORKER_SERVICE_OFFERING_ID = "workerofferingid";
public static final String WORKER_SERVICE_OFFERING_NAME = "workerofferingname";
public static final String CONTROL_SERVICE_OFFERING_ID = "controlofferingid";
public static final String CONTROL_SERVICE_OFFERING_NAME = "controlofferingname";
public static final String ETCD_SERVICE_OFFERING_ID = "etcdofferingid";
public static final String ETCD_SERVICE_OFFERING_NAME = "etcdofferingname";
public static final String REMOVE_VLAN = "removevlan";
public static final String VLAN_ID = "vlanid";
public static final String ISOLATED_PVLAN = "isolatedpvlan";
@@ -913,6 +928,7 @@ public class ApiConstants {
public static final String SPLIT_CONNECTIONS = "splitconnections";
public static final String FOR_VPC = "forvpc";
public static final String FOR_NSX = "fornsx";
public static final String FOR_CKS = "forcks";
public static final String NSX_SUPPORT_LB = "nsxsupportlb";
public static final String NSX_SUPPORTS_INTERNAL_LB = "nsxsupportsinternallb";
public static final String FOR_TUNGSTEN = "fortungsten";
@@ -1121,6 +1137,10 @@ public class ApiConstants {
public static final String MASTER_NODES = "masternodes";
public static final String NODE_IDS = "nodeids";
public static final String CONTROL_NODES = "controlnodes";
public static final String ETCD_NODES = "etcdnodes";
public static final String EXTERNAL_NODES = "externalnodes";
public static final String IS_EXTERNAL_NODE = "isexternalnode";
public static final String IS_ETCD_NODE = "isetcdnode";
public static final String MIN_SEMANTIC_VERSION = "minimumsemanticversion";
public static final String MIN_KUBERNETES_VERSION_ID = "minimumkubernetesversionid";
public static final String NODE_ROOT_DISK_SIZE = "noderootdisksize";
@@ -1129,6 +1149,8 @@ public class ApiConstants {
public static final String AUTOSCALING_ENABLED = "autoscalingenabled";
public static final String MIN_SIZE = "minsize";
public static final String MAX_SIZE = "maxsize";
public static final String NODE_TYPE_OFFERING_MAP = "nodeofferings";
public static final String NODE_TYPE_TEMPLATE_MAP = "nodetemplates";
public static final String BOOT_TYPE = "boottype";
public static final String BOOT_MODE = "bootmode";
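Given the new `nodeofferings`/`nodetemplates` constants and the `node`/`offering` detail keys added in `VmDetailConstants`, a client would likely serialize the per-node-type offerings using CloudStack's usual indexed map-parameter convention. A hedged sketch; the exact wire format for this API is an assumption, and `NodeOfferingParamSketch` is an invented helper:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical builder for the nodeofferings map parameter.
public class NodeOfferingParamSketch {
    public static String build(Map<String, String> offeringPerNodeType) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        for (Map.Entry<String, String> e : offeringPerNodeType.entrySet()) {
            if (i > 0) {
                sb.append('&');
            }
            sb.append("nodeofferings[").append(i).append("].node=").append(e.getKey())
              .append("&nodeofferings[").append(i).append("].offering=").append(e.getValue());
            i++;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> m = new LinkedHashMap<>(); // keeps insertion order
        m.put("worker", "offering-uuid-1");
        m.put("etcd", "offering-uuid-2");
        System.out.println(build(m));
    }
}
```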

@@ -104,7 +104,7 @@ public class DetachIsoCmd extends BaseAsyncCmd implements UserCmd {
@Override
public void execute() {
boolean result = _templateService.detachIso(virtualMachineId, isForced());
boolean result = _templateService.detachIso(virtualMachineId, null, isForced());
if (result) {
UserVm userVm = _entityMgr.findById(UserVm.class, virtualMachineId);
UserVmResponse response = _responseGenerator.createUserVmResponse(getResponseView(), "virtualmachine", userVm).get(0);

@@ -99,6 +99,11 @@ public class GetUploadParamsForTemplateCmd extends AbstractGetUploadParamsCmd {
description = "(VMware only) true if VM deployments should preserve all the configurations defined for this template", since = "4.15.1")
private Boolean deployAsIs;
@Parameter(name=ApiConstants.FOR_CKS,
type = CommandType.BOOLEAN,
description = "if true, the templates would be available for deploying CKS clusters", since = "4.21.0")
protected Boolean forCks;
public String getDisplayText() {
return StringUtils.isBlank(displayText) ? getName() : displayText;
}
@@ -168,6 +173,10 @@ public class GetUploadParamsForTemplateCmd extends AbstractGetUploadParamsCmd {
Boolean.TRUE.equals(deployAsIs);
}
public boolean isForCks() {
return Boolean.TRUE.equals(forCks);
}
public CPU.CPUArch getArch() {
return CPU.CPUArch.fromType(arch);
}

@@ -105,6 +105,11 @@ public class ListTemplatesCmd extends BaseListTaggedResourcesCmd implements User
since = "4.19.0")
private Boolean isVnf;
@Parameter(name = ApiConstants.FOR_CKS, type = CommandType.BOOLEAN,
description = "list templates that can be used to deploy CKS clusters",
since = "4.21.0")
private Boolean forCks;
@Parameter(name = ApiConstants.ARCH, type = CommandType.STRING,
description = "the CPU arch of the template. Valid options are: x86_64, aarch64",
since = "4.20")
@@ -202,6 +207,8 @@ public class ListTemplatesCmd extends BaseListTaggedResourcesCmd implements User
return isVnf;
}
public Boolean getForCks() { return forCks; }
public CPU.CPUArch getArch() {
if (StringUtils.isBlank(arch)) {
return null;

@@ -168,6 +168,11 @@ public class RegisterTemplateCmd extends BaseCmd implements UserCmd {
description = "(VMware only) true if VM deployments should preserve all the configurations defined for this template", since = "4.15.1")
protected Boolean deployAsIs;
@Parameter(name=ApiConstants.FOR_CKS,
type = CommandType.BOOLEAN,
description = "if true, the templates would be available for deploying CKS clusters", since = "4.21.0")
protected Boolean forCks;
@Parameter(name = ApiConstants.TEMPLATE_TYPE, type = CommandType.STRING,
description = "the type of the template. Valid options are: USER/VNF (for all users) and SYSTEM/ROUTING/BUILTIN (for admins only).",
since = "4.19.0")
@@ -295,6 +300,10 @@ public class RegisterTemplateCmd extends BaseCmd implements UserCmd {
Boolean.TRUE.equals(deployAsIs);
}
public boolean isForCks() {
return Boolean.TRUE.equals(forCks);
}
public String getTemplateType() {
return templateType;
}

@@ -46,6 +46,11 @@ public class UpdateTemplateCmd extends BaseUpdateTemplateOrIsoCmd implements Use
@Parameter(name = ApiConstants.TEMPLATE_TAG, type = CommandType.STRING, description = "the tag for this template.", since = "4.20.0")
private String templateTag;
@Parameter(name = ApiConstants.FOR_CKS, type = CommandType.BOOLEAN,
description = "indicates that the template can be used for deployment of CKS clusters",
since = "4.21.0")
private Boolean forCks;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@@ -63,6 +68,10 @@ public class UpdateTemplateCmd extends BaseUpdateTemplateOrIsoCmd implements Use
return templateTag;
}
public Boolean getForCks() {
return forCks;
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////

@@ -0,0 +1,87 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.user.userdata;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.network.NetworkModel;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.response.DomainResponse;
import org.apache.cloudstack.api.response.ProjectResponse;
import org.apache.commons.lang3.StringUtils;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public abstract class BaseRegisterUserDataCmd extends BaseCmd {
@Parameter(name = ApiConstants.NAME, type = CommandType.STRING, required = true, description = "Name of the user data")
private String name;
//Owner information
@Parameter(name = ApiConstants.ACCOUNT, type = CommandType.STRING, description = "an optional account for the user data. Must be used with domainId.")
private String accountName;
@Parameter(name = ApiConstants.DOMAIN_ID,
type = CommandType.UUID,
entityType = DomainResponse.class,
description = "an optional domainId for the user data. If the account parameter is used, domainId must also be used.")
private Long domainId;
@Parameter(name = ApiConstants.PROJECT_ID, type = CommandType.UUID, entityType = ProjectResponse.class, description = "an optional project for the user data")
private Long projectId;
@Parameter(name = ApiConstants.PARAMS, type = CommandType.STRING, description = "comma separated list of variables declared in user data content")
private String params;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
public String getName() {
return name;
}
public String getAccountName() {
return accountName;
}
public Long getDomainId() {
return domainId;
}
public Long getProjectId() {
return projectId;
}
public String getParams() {
checkForVRMetadataFileNames(params);
return params;
}
public void checkForVRMetadataFileNames(String params) {
if (StringUtils.isNotEmpty(params)) {
List<String> keyValuePairs = new ArrayList<>(Arrays.asList(params.split(",")));
keyValuePairs.retainAll(NetworkModel.metadataFileNames);
if (!keyValuePairs.isEmpty()) {
throw new InvalidParameterValueException(String.format("Params passed here have a few virtual router metadata file names %s", keyValuePairs));
}
}
}
}
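The `checkForVRMetadataFileNames` guard above uses `retainAll` to reject any param name that collides with a virtual-router metadata file name. A standalone sketch of the same check; the file-name list here is illustrative, as the real list lives in `NetworkModel.metadataFileNames`, and this version returns the offenders instead of throwing:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MetadataParamCheckSketch {
    // Illustrative subset; not the actual NetworkModel.metadataFileNames contents.
    static final List<String> METADATA_FILE_NAMES =
            Arrays.asList("user-data", "public-keys", "service-offering");

    // Returns the colliding names (empty when the params are clean).
    public static List<String> collisions(String params) {
        if (params == null || params.isEmpty()) {
            return new ArrayList<>();
        }
        List<String> keys = new ArrayList<>(Arrays.asList(params.split(",")));
        keys.retainAll(METADATA_FILE_NAMES); // keep only names that also appear in the reserved list
        return keys;
    }
}
```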

@@ -0,0 +1,74 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.user.userdata;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiErrorCode;
import org.apache.cloudstack.api.ServerApiException;
import org.apache.cloudstack.api.response.SuccessResponse;
import org.apache.cloudstack.context.CallContext;
import com.cloud.user.Account;
import com.cloud.user.UserData;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@APICommand(name = "deleteCniConfiguration", description = "Deletes a CNI Configuration", responseObject = SuccessResponse.class, entityType = {UserData.class},
requestHasSensitiveInfo = false, responseHasSensitiveInfo = false, since = "4.21.0",
authorized = {RoleType.Admin, RoleType.ResourceAdmin, RoleType.DomainAdmin, RoleType.User})
public class DeleteCniConfigurationCmd extends DeleteUserDataCmd {
public static final Logger logger = LogManager.getLogger(DeleteCniConfigurationCmd.class.getName());
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@Override
public void execute() {
boolean result = _mgr.deleteCniConfiguration(this);
if (result) {
SuccessResponse response = new SuccessResponse(getCommandName());
response.setSuccess(result);
setResponseObject(response);
} else {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to delete CNI configuration");
}
}
@Override
public long getEntityOwnerId() {
Account account = CallContext.current().getCallingAccount();
Long domainId = this.getDomainId();
String accountName = this.getAccountName();
if ((account == null || _accountService.isAdmin(account.getId())) && (domainId != null && accountName != null)) {
Account userAccount = _responseGenerator.findAccountByNameDomain(accountName, domainId);
if (userAccount != null) {
return userAccount.getId();
}
}
if (account != null) {
return account.getId();
}
return Account.ACCOUNT_ID_SYSTEM; // no account info given, parent this command to SYSTEM so ERROR events are tracked
}
}
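The `getEntityOwnerId` override above resolves ownership with a three-step precedence: an admin caller supplying both `accountName` and `domainId` acts on that account, any other authenticated caller acts on itself, and with no caller context the command parents to SYSTEM so error events are still tracked. A pure-function sketch of that precedence (the ids are illustrative, not real account ids):

```java
public class OwnerResolution {
    static final long SYSTEM = 1L; // stand-in for Account.ACCOUNT_ID_SYSTEM

    // callerId null means no calling-account context; targetId is the account
    // resolved from accountName/domainId, null when not found or not supplied
    static long resolveOwner(Long callerId, boolean callerIsAdmin, Long targetId) {
        if ((callerId == null || callerIsAdmin) && targetId != null) {
            return targetId; // admin acting on another account
        }
        if (callerId != null) {
            return callerId; // caller owns the entity
        }
        return SYSTEM; // no account info: parent to SYSTEM
    }

    public static void main(String[] args) {
        System.out.println(resolveOwner(2L, true, 42L));    // admin targets account 42
        System.out.println(resolveOwner(7L, false, 42L));   // normal user stays 7
        System.out.println(resolveOwner(null, false, null)); // falls back to SYSTEM
    }
}
```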


@ -0,0 +1,59 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.user.userdata;
import com.cloud.user.UserData;
import com.cloud.utils.Pair;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.response.ListResponse;
import org.apache.cloudstack.api.response.UserDataResponse;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import java.util.ArrayList;
import java.util.List;
@APICommand(name = "listCniConfiguration", description = "List user data for CNI plugins", responseObject = UserDataResponse.class, entityType = {UserData.class},
requestHasSensitiveInfo = false, responseHasSensitiveInfo = false, since = "4.21.0",
authorized = {RoleType.Admin, RoleType.ResourceAdmin, RoleType.DomainAdmin, RoleType.User})
public class ListCniConfigurationCmd extends ListUserDataCmd {
public static final Logger logger = LogManager.getLogger(ListCniConfigurationCmd.class.getName());
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@Override
public void execute() {
Pair<List<? extends UserData>, Integer> resultList = _mgr.listUserDatas(this, true);
List<UserDataResponse> responses = new ArrayList<>();
for (UserData result : resultList.first()) {
UserDataResponse r = _responseGenerator.createUserDataResponse(result);
r.setObjectName(ApiConstants.CNI_CONFIG);
responses.add(r);
}
ListResponse<UserDataResponse> response = new ListResponse<>();
response.setResponses(responses, resultList.second());
response.setResponseName(getCommandName());
setResponseObject(response);
}
}
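`listCniConfiguration` reuses the generic user-data listing path, passing `true` to `listUserDatas` so only CNI configurations come back, while the plain `listUserData` command passes `false`. Assuming each stored entry carries a `forCks`-style flag, the selection reduces to a simple filter; the name-to-flag map below is a stand-in for `UserDataVO` rows:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class UserDataListing {
    // Mirrors the shared listUserDatas(cmd, forCks) path: one store of
    // user-data entries, a boolean selecting CNI configurations vs. userdata
    static List<String> list(Map<String, Boolean> all, boolean forCks) {
        List<String> names = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : all.entrySet()) {
            if (e.getValue() == forCks) {
                names.add(e.getKey());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        Map<String, Boolean> all = new LinkedHashMap<>();
        all.put("cloud-init-script", false); // regular userdata
        all.put("calico-cni", true);         // CNI configuration
        System.out.println(list(all, true));  // what listCniConfiguration returns
        System.out.println(list(all, false)); // what listUserData returns
    }
}
```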


@ -61,7 +61,7 @@ public class ListUserDataCmd extends BaseListProjectAndAccountResourcesCmd {
@Override
public void execute() {
- Pair<List<? extends UserData>, Integer> resultList = _mgr.listUserDatas(this);
+ Pair<List<? extends UserData>, Integer> resultList = _mgr.listUserDatas(this, false);
List<UserDataResponse> responses = new ArrayList<>();
for (UserData result : resultList.first()) {
UserDataResponse r = _responseGenerator.createUserDataResponse(result);


@ -0,0 +1,77 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.user.userdata;
import com.cloud.user.UserData;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.response.SuccessResponse;
import org.apache.cloudstack.api.response.UserDataResponse;
import org.apache.cloudstack.context.CallContext;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@APICommand(name = "registerCniConfiguration",
description = "Register a CNI Configuration to be used with CKS cluster",
since = "4.21.0",
responseObject = SuccessResponse.class,
requestHasSensitiveInfo = false,
responseHasSensitiveInfo = false,
authorized = {RoleType.Admin, RoleType.ResourceAdmin, RoleType.DomainAdmin, RoleType.User})
public class RegisterCniConfigurationCmd extends BaseRegisterUserDataCmd {
public static final Logger logger = LogManager.getLogger(RegisterCniConfigurationCmd.class.getName());
/////////////////////////////////////////////////////
//////////////// API parameters /////////////////////
/////////////////////////////////////////////////////
@Parameter(name = ApiConstants.CNI_CONFIG, type = CommandType.STRING, description = "CNI Configuration content to be registered as User data", length = 1048576)
private String cniConfig;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
public String getCniConfig() {
return cniConfig;
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@Override
public void execute() {
UserData result = _mgr.registerCniConfiguration(this);
UserDataResponse response = _responseGenerator.createUserDataResponse(result);
response.setResponseName(getCommandName());
response.setObjectName(ApiConstants.CNI_CONFIG);
setResponseObject(response);
}
@Override
public long getEntityOwnerId() {
Long accountId = _accountService.finalyzeAccountId(getAccountName(), getDomainId(), getProjectId(), true);
if (accountId == null) {
return CallContext.current().getCallingAccount().getId();
}
return accountId;
}
}


@ -16,30 +16,20 @@
// under the License.
package org.apache.cloudstack.api.command.user.userdata;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.ServerApiException;
import org.apache.cloudstack.api.response.DomainResponse;
import org.apache.cloudstack.api.response.ProjectResponse;
import org.apache.cloudstack.api.response.SuccessResponse;
import org.apache.cloudstack.api.response.UserDataResponse;
import org.apache.cloudstack.context.CallContext;
import org.apache.commons.lang3.StringUtils;
import com.cloud.exception.ConcurrentOperationException;
import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.exception.NetworkRuleConflictException;
import com.cloud.exception.ResourceAllocationException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.network.NetworkModel;
import com.cloud.user.UserData;
@APICommand(name = "registerUserData",
@ -49,89 +39,28 @@ import com.cloud.user.UserData;
requestHasSensitiveInfo = false,
responseHasSensitiveInfo = false,
authorized = {RoleType.Admin, RoleType.ResourceAdmin, RoleType.DomainAdmin, RoleType.User})
- public class RegisterUserDataCmd extends BaseCmd {
+ public class RegisterUserDataCmd extends BaseRegisterUserDataCmd {
/////////////////////////////////////////////////////
//////////////// API parameters /////////////////////
/////////////////////////////////////////////////////
@Parameter(name = ApiConstants.NAME, type = CommandType.STRING, required = true, description = "Name of the userdata")
private String name;
@Parameter(name = ApiConstants.USER_DATA, type = CommandType.STRING, required = true, description = "User data content", length = 1048576)
protected String userData;
//Owner information
@Parameter(name = ApiConstants.ACCOUNT, type = CommandType.STRING, description = "an optional account for the userdata. Must be used with domainId.")
private String accountName;
@Parameter(name = ApiConstants.DOMAIN_ID,
type = CommandType.UUID,
entityType = DomainResponse.class,
description = "an optional domainId for the userdata. If the account parameter is used, domainId must also be used.")
private Long domainId;
@Parameter(name = ApiConstants.PROJECT_ID, type = CommandType.UUID, entityType = ProjectResponse.class, description = "an optional project for the userdata")
private Long projectId;
@Parameter(name = ApiConstants.USER_DATA,
type = CommandType.STRING,
required = true,
description = "Base64 encoded userdata content. " +
"Using HTTP GET (via querystring), you can send up to 4KB of data after base64 encoding. " +
"Using HTTP POST (via POST body), you can send up to 1MB of data after base64 encoding. " +
"You also need to change vm.userdata.max.length value",
length = 1048576)
private String userData;
@Parameter(name = ApiConstants.PARAMS, type = CommandType.STRING, description = "comma separated list of variables declared in userdata content")
private String params;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
public String getName() {
return name;
}
public String getAccountName() {
return accountName;
}
public Long getDomainId() {
return domainId;
}
public Long getProjectId() {
return projectId;
}
public String getUserData() {
return userData;
}
public String getParams() {
checkForVRMetadataFileNames(params);
return params;
}
public void checkForVRMetadataFileNames(String params) {
if (StringUtils.isNotEmpty(params)) {
List<String> keyValuePairs = new ArrayList<>(Arrays.asList(params.split(",")));
keyValuePairs.retainAll(NetworkModel.metadataFileNames);
if (!keyValuePairs.isEmpty()) {
throw new InvalidParameterValueException(String.format("Params passed here have a few virtual router metadata file names %s", keyValuePairs));
}
}
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@Override
public long getEntityOwnerId() {
- Long accountId = _accountService.finalyzeAccountId(accountName, domainId, projectId, true);
+ Long accountId = _accountService.finalyzeAccountId(getAccountName(), getDomainId(), getProjectId(), true);
if (accountId == null) {
return CallContext.current().getCallingAccount().getId();
}


@ -16,6 +16,7 @@
// under the License.
package org.apache.cloudstack.api.command.user.vpc;
import org.apache.commons.lang3.BooleanUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.cloudstack.acl.RoleType;
@ -125,6 +126,10 @@ public class CreateVPCCmd extends BaseAsyncCreateCmd implements UserCmd {
@Parameter(name=ApiConstants.AS_NUMBER, type=CommandType.LONG, since = "4.20.0", description="the AS Number of the VPC tiers")
private Long asNumber;
@Parameter(name=ApiConstants.USE_VIRTUAL_ROUTER_IP_RESOLVER, type=CommandType.BOOLEAN,
description="(optional) for NSX based VPCs: when set to true, use the VR IP as nameserver, otherwise use DNS1 and DNS2")
private Boolean useVrIpResolver;
// ///////////////////////////////////////////////////
// ///////////////// Accessors ///////////////////////
// ///////////////////////////////////////////////////
@ -205,6 +210,10 @@ public class CreateVPCCmd extends BaseAsyncCreateCmd implements UserCmd {
return asNumber;
}
public boolean getUseVrIpResolver() {
return BooleanUtils.toBoolean(useVrIpResolver);
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
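`useVrIpResolver` is an optional `Boolean` parameter, and the accessor runs it through `BooleanUtils.toBoolean`, so an omitted parameter reads as `false` instead of throwing a `NullPointerException` on unboxing. A stdlib-only sketch of that null-safe default (the real command uses commons-lang3):

```java
public class NullSafeBoolean {
    // Equivalent of BooleanUtils.toBoolean(Boolean): null collapses to false
    static boolean toPrimitive(Boolean value) {
        return Boolean.TRUE.equals(value);
    }

    public static void main(String[] args) {
        System.out.println(toPrimitive(null));         // omitted API parameter
        System.out.println(toPrimitive(Boolean.TRUE)); // explicitly requested
        // Direct unboxing would fail instead:
        // Boolean b = null; boolean broken = b;       // NullPointerException
    }
}
```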


@ -0,0 +1,51 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.response;
import com.cloud.network.router.VirtualRouter;
import com.cloud.serializer.Param;
import com.cloud.uservm.UserVm;
import com.cloud.vm.VirtualMachine;
import com.google.gson.annotations.SerializedName;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.EntityReference;
@EntityReference(value = {VirtualMachine.class, UserVm.class, VirtualRouter.class})
public class KubernetesUserVmResponse extends UserVmResponse {
@SerializedName(ApiConstants.IS_EXTERNAL_NODE)
@Param(description = "If the VM is an externally added node")
private boolean isExternalNode;
@SerializedName(ApiConstants.IS_ETCD_NODE)
@Param(description = "If the VM is an etcd node")
private boolean isEtcdNode;
@SerializedName(ApiConstants.KUBERNETES_NODE_VERSION)
@Param(description = "Kubernetes version of the node")
private String nodeVersion;
public void setExternalNode(boolean externalNode) {
isExternalNode = externalNode;
}
public void setEtcdNode(boolean etcdNode) {
isEtcdNode = etcdNode;
}
public void setNodeVersion(String nodeVersion) {
this.nodeVersion = nodeVersion;
}
}


@ -210,6 +210,11 @@ public class TemplateResponse extends BaseResponseWithTagInformation implements
since = "4.15")
private Boolean deployAsIs;
@SerializedName(ApiConstants.FOR_CKS)
@Param(description = "If true it indicates that the template can be used for CKS cluster deployments",
since = "4.21.0")
private Boolean forCks;
@SerializedName(ApiConstants.DEPLOY_AS_IS_DETAILS)
@Param(description = "VMware only: additional key/value details tied with deploy-as-is template",
since = "4.15")
@ -463,6 +468,10 @@ public class TemplateResponse extends BaseResponseWithTagInformation implements
this.deployAsIs = deployAsIs;
}
public void setForCks(Boolean forCks) {
this.forCks = forCks;
}
public void setParentTemplateId(String parentTemplateId) {
this.parentTemplateId = parentTemplateId;
}


@ -68,7 +68,7 @@ public class ListUserDataCmdTest {
Pair<List<? extends UserData>, Integer> result = new Pair<List<? extends UserData>, Integer>(userDataList, 1);
UserDataResponse userDataResponse = Mockito.mock(UserDataResponse.class);
- Mockito.when(_mgr.listUserDatas(cmd)).thenReturn(result);
+ Mockito.when(_mgr.listUserDatas(cmd, false)).thenReturn(result);
Mockito.when(_responseGenerator.createUserDataResponse(userData)).thenReturn(userDataResponse);
cmd.execute();
@ -82,7 +82,7 @@ public class ListUserDataCmdTest {
List<UserData> userDataList = new ArrayList<UserData>();
Pair<List<? extends UserData>, Integer> result = new Pair<List<? extends UserData>, Integer>(userDataList, 0);
- Mockito.when(_mgr.listUserDatas(cmd)).thenReturn(result);
+ Mockito.when(_mgr.listUserDatas(cmd, false)).thenReturn(result);
cmd.execute();


@ -0,0 +1,34 @@
//
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
//
package com.cloud.agent.api;
import com.cloud.agent.api.routing.NetworkElementCommand;
public class HandleCksIsoCommand extends NetworkElementCommand {
private boolean mountCksIso;
public HandleCksIsoCommand(boolean mountCksIso) {
this.mountCksIso = mountCksIso;
}
public boolean isMountCksIso() {
return mountCksIso;
}
}


@ -82,5 +82,8 @@ public class VRScripts {
public static final String VR_UPDATE_INTERFACE_CONFIG = "update_interface_config.sh";
public static final String ROUTER_FILESYSTEM_WRITABLE_CHECK = "filesystem_writable_check.py";
// CKS ISO mount
public static final String CKS_ISO_MOUNT_SERVE = "cks_iso.sh";
// Service management on the VR
public static final String MANAGE_SERVICE = "manage_service.sh";
}


@ -34,6 +34,7 @@ import java.util.concurrent.locks.ReentrantLock;
import javax.naming.ConfigurationException;
import com.cloud.agent.api.HandleCksIsoCommand;
import org.apache.cloudstack.agent.routing.ManageServiceCommand;
import com.cloud.agent.api.routing.UpdateNetworkCommand;
import com.cloud.agent.api.to.IpAddressTO;
@ -145,6 +146,10 @@ public class VirtualRoutingResource {
return execute((UpdateNetworkCommand) cmd);
}
if (cmd instanceof HandleCksIsoCommand) {
return execute((HandleCksIsoCommand) cmd);
}
if (cmd instanceof ManageServiceCommand) {
return execute((ManageServiceCommand) cmd);
}
@ -176,6 +181,13 @@ public class VirtualRoutingResource {
}
}
protected Answer execute(final HandleCksIsoCommand cmd) {
String routerIp = getRouterSshControlIp(cmd);
logger.info("Attempting to {} CKS ISO on the Virtual Router", cmd.isMountCksIso() ? "mount" : "unmount");
ExecutionResult result = _vrDeployer.executeInVR(routerIp, VRScripts.CKS_ISO_MOUNT_SERVE, String.valueOf(cmd.isMountCksIso()));
return new Answer(cmd, result.isSuccess(), result.getDetails());
}
private Answer execute(final SetupKeyStoreCommand cmd) {
final String args = String.format("/usr/local/cloud/systemvm/conf/agent.properties " +
"/usr/local/cloud/systemvm/conf/%s " +

debian/rules

@ -70,6 +70,7 @@ override_dh_auto_install:
mkdir -p $(DESTDIR)/usr/share/$(PACKAGE)-management/lib
mkdir -p $(DESTDIR)/usr/share/$(PACKAGE)-management/setup
mkdir -p $(DESTDIR)/usr/share/$(PACKAGE)-management/templates/systemvm
mkdir -p $(DESTDIR)/usr/share/$(PACKAGE)-management/cks/conf
mkdir $(DESTDIR)/var/log/$(PACKAGE)/management
mkdir $(DESTDIR)/var/cache/$(PACKAGE)/management
mkdir $(DESTDIR)/var/log/$(PACKAGE)/ipallocator
@ -83,6 +84,7 @@ override_dh_auto_install:
cp client/target/cloud-client-ui-$(VERSION).jar $(DESTDIR)/usr/share/$(PACKAGE)-management/lib/cloudstack-$(VERSION).jar
cp client/target/lib/*jar $(DESTDIR)/usr/share/$(PACKAGE)-management/lib/
cp -r engine/schema/dist/systemvm-templates/* $(DESTDIR)/usr/share/$(PACKAGE)-management/templates/systemvm/
cp -r plugins/integrations/kubernetes-service/src/main/resources/conf/* $(DESTDIR)/usr/share/$(PACKAGE)-management/cks/conf/
rm -rf $(DESTDIR)/usr/share/$(PACKAGE)-management/templates/systemvm/md5sum.txt
# Bundle cmk in cloudstack-management
@ -95,6 +97,12 @@ override_dh_auto_install:
chmod 0440 $(DESTDIR)/$(SYSCONFDIR)/sudoers.d/$(PACKAGE)
install -D client/target/utilities/bin/cloud-update-xenserver-licenses $(DESTDIR)/usr/bin/cloudstack-update-xenserver-licenses
install -D plugins/integrations/kubernetes-service/src/main/resources/conf/etcd-node.yml $(DESTDIR)/usr/share/$(PACKAGE)-management/cks/conf/etcd-node.yml
install -D plugins/integrations/kubernetes-service/src/main/resources/conf/k8s-control-node.yml $(DESTDIR)/usr/share/$(PACKAGE)-management/cks/conf/k8s-control-node.yml
install -D plugins/integrations/kubernetes-service/src/main/resources/conf/k8s-control-node-add.yml $(DESTDIR)/usr/share/$(PACKAGE)-management/cks/conf/k8s-control-node-add.yml
install -D plugins/integrations/kubernetes-service/src/main/resources/conf/k8s-node.yml $(DESTDIR)/usr/share/$(PACKAGE)-management/cks/conf/k8s-node.yml
# Remove configuration in /usr/share/cloudstack-management/webapps/client/WEB-INF
# This should all be in /etc/cloudstack/management
ln -s ../../..$(SYSCONFDIR)/$(PACKAGE)/management $(DESTDIR)/usr/share/$(PACKAGE)-management/conf


@ -75,5 +75,7 @@ public interface FirewallRulesDao extends GenericDao<FirewallRuleVO, Long> {
void loadDestinationCidrs(FirewallRuleVO rule);
FirewallRuleVO findByNetworkIdAndPorts(long networkId, int startPort, int endPort);
List<FirewallRuleVO> listRoutingIngressFirewallRules(long networkId);
}


@ -48,6 +48,7 @@ public class FirewallRulesDaoImpl extends GenericDaoBase<FirewallRuleVO, Long> i
protected final SearchBuilder<FirewallRuleVO> NotRevokedSearch;
protected final SearchBuilder<FirewallRuleVO> ReleaseSearch;
protected SearchBuilder<FirewallRuleVO> VmSearch;
protected SearchBuilder<FirewallRuleVO> FirewallByPortsAndNetwork;
protected final SearchBuilder<FirewallRuleVO> SystemRuleSearch;
protected final GenericSearchBuilder<FirewallRuleVO, Long> RulesByIpCount;
protected final SearchBuilder<FirewallRuleVO> RoutingFirewallRulesSearch;
@ -106,6 +107,12 @@ public class FirewallRulesDaoImpl extends GenericDaoBase<FirewallRuleVO, Long> i
RulesByIpCount.and("state", RulesByIpCount.entity().getState(), Op.EQ);
RulesByIpCount.done();
FirewallByPortsAndNetwork = createSearchBuilder();
FirewallByPortsAndNetwork.and("networkId", FirewallByPortsAndNetwork.entity().getNetworkId(), Op.EQ);
FirewallByPortsAndNetwork.and("sourcePortStart", FirewallByPortsAndNetwork.entity().getSourcePortStart(), Op.EQ);
FirewallByPortsAndNetwork.and("sourcePortEnd", FirewallByPortsAndNetwork.entity().getSourcePortEnd(), Op.EQ);
FirewallByPortsAndNetwork.done();
RoutingFirewallRulesSearch = createSearchBuilder();
RoutingFirewallRulesSearch.and("networkId", RoutingFirewallRulesSearch.entity().getNetworkId(), Op.EQ);
RoutingFirewallRulesSearch.and("purpose", RoutingFirewallRulesSearch.entity().getPurpose(), Op.EQ);
@ -408,6 +415,16 @@ public class FirewallRulesDaoImpl extends GenericDaoBase<FirewallRuleVO, Long> i
rule.setDestinationCidrsList(destCidrs);
}
@Override
public FirewallRuleVO findByNetworkIdAndPorts(long networkId, int startPort, int endPort) {
SearchCriteria<FirewallRuleVO> sc = FirewallByPortsAndNetwork.create();
sc.setParameters("networkId", networkId);
sc.setParameters("sourcePortStart", startPort);
sc.setParameters("sourcePortEnd", endPort);
return findOneBy(sc);
}
@Override
public List<FirewallRuleVO> listRoutingIngressFirewallRules(long networkId) {
SearchCriteria<FirewallRuleVO> sc = RoutingFirewallRulesSearch.create();


@ -47,5 +47,7 @@ public interface PortForwardingRulesDao extends GenericDao<PortForwardingRuleVO,
PortForwardingRuleVO findByIdAndIp(long id, String secondaryIp);
List<PortForwardingRuleVO> listByNetworkAndDestIpAddr(String ip4Address, long networkId);
PortForwardingRuleVO findByNetworkAndPorts(long networkId, int startPort, int endPort);
int expungeByVmList(List<Long> vmIds, Long batchSize);
}


@ -58,6 +58,8 @@ public class PortForwardingRulesDaoImpl extends GenericDaoBase<PortForwardingRul
AllFieldsSearch.and("vmId", AllFieldsSearch.entity().getVirtualMachineId(), Op.EQ);
AllFieldsSearch.and("purpose", AllFieldsSearch.entity().getPurpose(), Op.EQ);
AllFieldsSearch.and("dstIp", AllFieldsSearch.entity().getDestinationIpAddress(), Op.EQ);
AllFieldsSearch.and("sourcePortStart", AllFieldsSearch.entity().getSourcePortStart(), Op.EQ);
AllFieldsSearch.and("sourcePortEnd", AllFieldsSearch.entity().getSourcePortEnd(), Op.EQ);
AllFieldsSearch.done();
ApplicationSearch = createSearchBuilder();
@ -175,6 +177,15 @@ public class PortForwardingRulesDaoImpl extends GenericDaoBase<PortForwardingRul
return findOneBy(sc);
}
@Override
public PortForwardingRuleVO findByNetworkAndPorts(long networkId, int startPort, int endPort) {
SearchCriteria<PortForwardingRuleVO> sc = AllFieldsSearch.create();
sc.setParameters("networkId", networkId);
sc.setParameters("sourcePortStart", startPort);
sc.setParameters("sourcePortEnd", endPort);
return findOneBy(sc);
}
@Override
public int expungeByVmList(List<Long> vmIds, Long batchSize) {
if (CollectionUtils.isEmpty(vmIds)) {


@ -105,6 +105,9 @@ public class VpcVO implements Vpc {
@Column(name = "ip6Dns2")
String ip6Dns2;
@Column(name = "use_router_ip_resolver")
boolean useRouterIpResolver = false;
@Transient
boolean rollingRestart = false;
@ -309,4 +312,13 @@ public class VpcVO implements Vpc {
public String getIp6Dns2() {
return ip6Dns2;
}
@Override
public boolean useRouterIpAsResolver() {
return useRouterIpResolver;
}
public void setUseRouterIpResolver(boolean useRouterIpResolver) {
this.useRouterIpResolver = useRouterIpResolver;
}
}


@ -162,6 +162,9 @@ public class VMTemplateVO implements VirtualMachineTemplate {
@Column(name = "deploy_as_is")
private boolean deployAsIs;
@Column(name = "for_cks")
private boolean forCks;
@Column(name = "user_data_id")
private Long userDataId;
@ -664,6 +667,14 @@ public class VMTemplateVO implements VirtualMachineTemplate {
this.deployAsIs = deployAsIs;
}
public boolean isForCks() {
return forCks;
}
public void setForCks(boolean forCks) {
this.forCks = forCks;
}
@Override
public Long getUserDataId() {
return userDataId;


@ -23,8 +23,12 @@ import com.cloud.utils.exception.CloudRuntimeException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.cloudstack.framework.config.ConfigKey;
@ -59,6 +63,7 @@ public class Upgrade42010to42100 extends DbUpgradeAbstractImpl implements DbUpgr
@Override
public void performDataMigration(Connection conn) {
updateKubernetesClusterNodeVersions(conn);
migrateConfigurationScopeToBitmask(conn);
}
@ -88,6 +93,95 @@ public class Upgrade42010to42100 extends DbUpgradeAbstractImpl implements DbUpgr
}
}
private void updateKubernetesClusterNodeVersions(Connection conn) {
// Get the list of all non-removed Kubernetes clusters and their versions
try {
Map<Long, String> clusterAndVersion = getKubernetesClusterIdsAndVersion(conn);
updateKubernetesNodeVersions(conn, clusterAndVersion);
} catch (Exception e) {
String errMsg = "Failed to update Kubernetes cluster node versions";
logger.error(errMsg, e);
throw new CloudRuntimeException(errMsg, e);
}
}
private Map<Long, String> getKubernetesClusterIdsAndVersion(Connection conn) {
String listKubernetesClusters = "SELECT c.id, v.semantic_version FROM `cloud`.`kubernetes_cluster` c JOIN `cloud`.`kubernetes_supported_version` v ON (c.kubernetes_version_id = v.id) WHERE c.removed is NULL;";
Map<Long, String> clusterAndVersion = new HashMap<>();
try {
PreparedStatement pstmt = conn.prepareStatement(listKubernetesClusters);
ResultSet rs = pstmt.executeQuery();
while (rs.next()) {
clusterAndVersion.put(rs.getLong(1), rs.getString(2));
}
rs.close();
pstmt.close();
} catch (SQLException e) {
String errMsg = String.format("Failed to get all the kubernetes cluster ids due to: %s", e.getMessage());
logger.error(errMsg, e);
throw new CloudRuntimeException(errMsg, e);
}
return clusterAndVersion;
}
private void updateKubernetesNodeVersions(Connection conn, Map<Long, String> clusterAndVersion) {
List<Long> kubernetesClusterVmIds;
for (Map.Entry<Long, String> clusterVersionEntry : clusterAndVersion.entrySet()) {
try {
Long cksClusterId = clusterVersionEntry.getKey();
String cksVersion = clusterVersionEntry.getValue();
logger.debug(String.format("Adding CKS version %s to existing CKS cluster %s nodes", cksVersion, cksClusterId));
kubernetesClusterVmIds = getKubernetesClusterVmMapIds(conn, cksClusterId);
updateKubernetesNodeVersion(conn, kubernetesClusterVmIds, cksClusterId, cksVersion);
} catch (Exception e) {
String errMsg = String.format("Failed to update the node version for kubernetes cluster nodes for the" +
" kubernetes cluster with id: %s," +
" due to: %s", clusterVersionEntry.getKey(), e.getMessage());
logger.error(errMsg, e);
throw new CloudRuntimeException(errMsg, e);
}
}
}
private List<Long> getKubernetesClusterVmMapIds(Connection conn, Long cksClusterId) {
List<Long> kubernetesClusterVmIds = new ArrayList<>();
String getKubernetesClustersVmMap = "SELECT id FROM `cloud`.`kubernetes_cluster_vm_map` WHERE cluster_id = ?;";
try {
PreparedStatement pstmt = conn.prepareStatement(getKubernetesClustersVmMap);
pstmt.setLong(1, cksClusterId);
ResultSet rs = pstmt.executeQuery();
while (rs.next()) {
kubernetesClusterVmIds.add(rs.getLong(1));
}
rs.close();
pstmt.close();
} catch (SQLException e) {
String errMsg = String.format("Failed to get the kubernetes cluster vm map IDs for kubernetes cluster with id: %s," +
" due to: %s", cksClusterId, e.getMessage());
logger.error(errMsg, e);
throw new CloudRuntimeException(errMsg, e);
}
return kubernetesClusterVmIds;
}
private void updateKubernetesNodeVersion(Connection conn, List<Long> kubernetesClusterVmIds, Long cksClusterId, String cksVersion) {
String updateKubernetesNodeVersion = "UPDATE `cloud`.`kubernetes_cluster_vm_map` set kubernetes_node_version = ? WHERE id = ?;";
for (Long nodeVmId : kubernetesClusterVmIds) {
try {
PreparedStatement pstmt = conn.prepareStatement(updateKubernetesNodeVersion);
pstmt.setString(1, cksVersion);
pstmt.setLong(2, nodeVmId);
pstmt.executeUpdate();
pstmt.close();
} catch (Exception e) {
String errMsg = String.format("Failed to update the node version for kubernetes cluster nodes for the" +
" kubernetes cluster with id: %s," +
" due to: %s", cksClusterId, e.getMessage());
logger.error(errMsg, e);
throw new CloudRuntimeException(errMsg, e);
}
}
}
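The three helpers above fan out as: fetch the `(cluster id, semantic version)` pairs, expand each cluster to its `kubernetes_cluster_vm_map` row ids, then stamp the cluster's version on every row. With the JDBC plumbing stubbed out, the mapping the migration computes looks like:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NodeVersionMigration {
    // Expand (clusterId -> version) and (clusterId -> vm map row ids) into the
    // per-row values the UPDATE statement would write to kubernetes_node_version
    static Map<Long, String> planUpdates(Map<Long, String> clusterVersions,
                                         Map<Long, List<Long>> clusterVmIds) {
        Map<Long, String> rowToVersion = new HashMap<>();
        for (Map.Entry<Long, String> e : clusterVersions.entrySet()) {
            for (Long rowId : clusterVmIds.getOrDefault(e.getKey(), List.of())) {
                rowToVersion.put(rowId, e.getValue());
            }
        }
        return rowToVersion;
    }

    public static void main(String[] args) {
        // Illustrative ids and versions only
        Map<Long, String> versions = Map.of(10L, "1.28.4", 11L, "1.27.8");
        Map<Long, List<Long>> vmIds = Map.of(10L, List.of(100L, 101L), 11L, List.of(200L));
        System.out.println(planUpdates(versions, vmIds));
    }
}
```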
protected void migrateConfigurationScopeToBitmask(Connection conn) {
String scopeDataType = DbUpgradeUtils.getTableColumnType(conn, "configuration", "scope");
logger.info("Data type of the column scope of table configuration is {}", scopeDataType);


@ -65,6 +65,9 @@ public class UserDataVO implements UserData {
@Column(name = GenericDao.REMOVED_COLUMN)
private Date removed;
@Column(name = "for_cks")
private boolean forCks;
@Override
public long getDomainId() {
return domainId;
@@ -105,6 +108,11 @@ public class UserDataVO implements UserData {
return params;
}
@Override
public boolean isForCks() {
return forCks;
}
public void setAccountId(long accountId) {
this.accountId = accountId;
}
@@ -132,4 +140,6 @@ public class UserDataVO implements UserData {
public Date getRemoved() {
return removed;
}
public void setForCks(boolean forCks) {
this.forCks = forCks;
}
}


@@ -34,10 +34,47 @@ INSERT INTO `cloud`.`role_permissions` (uuid, role_id, rule, permission, sort_or
SELECT uuid(), role_id, 'quotaCreditsList', permission, sort_order
FROM `cloud`.`role_permissions` rp
WHERE rp.rule = 'quotaStatement'
AND NOT EXISTS(SELECT 1 FROM cloud.role_permissions rp_ WHERE rp.role_id = rp_.role_id AND rp_.rule = 'quotaCreditsList');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.host', 'last_mgmt_server_id', 'bigint unsigned DEFAULT NULL COMMENT "last management server this host is connected to" AFTER `mgmt_server_id`');
-----------------------------------------------------------
-- CKS Enhancements:
-----------------------------------------------------------
-- Add for_cks column to the vm_template table
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.vm_template','for_cks', 'int(1) unsigned DEFAULT "0" COMMENT "if true, the template can be used for CKS cluster deployment"');
-- Add support for different node types service offerings on CKS clusters
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','control_node_service_offering_id', 'bigint unsigned COMMENT "service offering ID for Control Node(s)"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','worker_node_service_offering_id', 'bigint unsigned COMMENT "service offering ID for Worker Node(s)"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','etcd_node_service_offering_id', 'bigint unsigned COMMENT "service offering ID for etcd Nodes"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','etcd_node_count', 'bigint unsigned COMMENT "number of etcd nodes to be deployed for the Kubernetes cluster"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','control_node_template_id', 'bigint unsigned COMMENT "template id to be used for Control Node(s)"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','worker_node_template_id', 'bigint unsigned COMMENT "template id to be used for Worker Node(s)"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','etcd_node_template_id', 'bigint unsigned COMMENT "template id to be used for etcd Nodes"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','cni_config_id', 'bigint unsigned COMMENT "user data id representing the associated cni configuration"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster','cni_config_details', 'varchar(4096) DEFAULT NULL COMMENT "user data details representing the values required for the cni configuration associated"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster_vm_map','etcd_node', 'tinyint(1) unsigned NOT NULL DEFAULT 0 COMMENT "indicates if the VM is an etcd node"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster_vm_map','external_node', 'tinyint(1) unsigned NOT NULL DEFAULT 0 COMMENT "indicates if the node was imported into the Kubernetes cluster"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster_vm_map','manual_upgrade', 'tinyint(1) unsigned NOT NULL DEFAULT 0 COMMENT "indicates if the node is marked for manual upgrade and excluded from the Kubernetes cluster upgrade operation"');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.kubernetes_cluster_vm_map','kubernetes_node_version', 'varchar(40) COMMENT "version of k8s the cluster node is on"');
ALTER TABLE `cloud`.`kubernetes_cluster` ADD CONSTRAINT `fk_cluster__control_node_service_offering_id` FOREIGN KEY `fk_cluster__control_node_service_offering_id`(`control_node_service_offering_id`) REFERENCES `service_offering`(`id`) ON DELETE CASCADE;
ALTER TABLE `cloud`.`kubernetes_cluster` ADD CONSTRAINT `fk_cluster__worker_node_service_offering_id` FOREIGN KEY `fk_cluster__worker_node_service_offering_id`(`worker_node_service_offering_id`) REFERENCES `service_offering`(`id`) ON DELETE CASCADE;
ALTER TABLE `cloud`.`kubernetes_cluster` ADD CONSTRAINT `fk_cluster__etcd_node_service_offering_id` FOREIGN KEY `fk_cluster__etcd_node_service_offering_id`(`etcd_node_service_offering_id`) REFERENCES `service_offering`(`id`) ON DELETE CASCADE;
ALTER TABLE `cloud`.`kubernetes_cluster` ADD CONSTRAINT `fk_cluster__control_node_template_id` FOREIGN KEY `fk_cluster__control_node_template_id`(`control_node_template_id`) REFERENCES `vm_template`(`id`) ON DELETE CASCADE;
ALTER TABLE `cloud`.`kubernetes_cluster` ADD CONSTRAINT `fk_cluster__worker_node_template_id` FOREIGN KEY `fk_cluster__worker_node_template_id`(`worker_node_template_id`) REFERENCES `vm_template`(`id`) ON DELETE CASCADE;
ALTER TABLE `cloud`.`kubernetes_cluster` ADD CONSTRAINT `fk_cluster__etcd_node_template_id` FOREIGN KEY `fk_cluster__etcd_node_template_id`(`etcd_node_template_id`) REFERENCES `vm_template`(`id`) ON DELETE CASCADE;
-- Add for_cks column to the user_data table to represent CNI Configuration stored as userdata
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.user_data','for_cks', 'int(1) unsigned DEFAULT "0" COMMENT "if true, the user data represent CNI configuration meant for CKS use only"');
-- Add use VR IP as resolver option on VPC
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.vpc','use_router_ip_resolver', 'tinyint(1) DEFAULT 0 COMMENT "use router ip as resolver instead of dns options"');
-----------------------------------------------------------
-- END - CKS Enhancements
-----------------------------------------------------------
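The `IDEMPOTENT_ADD_COLUMN` calls above let the upgrade script be re-run safely: adding a column that already exists is a no-op instead of an error. A minimal in-memory model of that guarantee (illustrative only; `IdempotentSchema` is not a CloudStack class):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class IdempotentSchema {
    // column name -> column definition, simulating a table's schema
    final Map<String, String> columns = new LinkedHashMap<>();

    // Adds the column only when it is absent; returns true iff the schema changed.
    boolean idempotentAddColumn(String name, String definition) {
        return columns.putIfAbsent(name, definition) == null;
    }
}
```

Re-running the same "migration" leaves the schema untouched, which is exactly why the upgrade path tolerates partial prior runs.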
-- Add table for reconcile commands
CREATE TABLE IF NOT EXISTS `cloud`.`reconcile_commands` (
`id` bigint unsigned NOT NULL UNIQUE AUTO_INCREMENT,


@@ -101,6 +101,7 @@ SELECT
IFNULL(`data_center`.`id`, 0)) AS `temp_zone_pair`,
`vm_template`.`direct_download` AS `direct_download`,
`vm_template`.`deploy_as_is` AS `deploy_as_is`,
`vm_template`.`for_cks` AS `for_cks`,
`user_data`.`id` AS `user_data_id`,
`user_data`.`uuid` AS `user_data_uuid`,
`user_data`.`name` AS `user_data_name`,


@@ -16,10 +16,12 @@
// under the License.
package com.cloud.upgrade.dao;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.when;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.junit.Test;
@@ -64,6 +66,12 @@ public class Upgrade42010to42100Test {
" ELSE 0" +
" END WHERE scope IS NOT NULL;";
when(txn.prepareAutoCloseStatement(sql)).thenReturn(pstmt);
PreparedStatement preparedStatement = Mockito.mock(PreparedStatement.class);
ResultSet resultSet = Mockito.mock(ResultSet.class);
Mockito.when(resultSet.next()).thenReturn(false);
Mockito.when(preparedStatement.executeQuery()).thenReturn(resultSet);
Mockito.when(conn.prepareStatement(anyString())).thenReturn(preparedStatement);
upgrade.performDataMigration(conn);
Mockito.verify(pstmt, Mockito.times(1)).executeUpdate();


@@ -433,6 +433,11 @@ public class TemplateObject implements TemplateInfo {
return this.imageVO.isDeployAsIs();
}
@Override
public boolean isForCks() {
return imageVO.isForCks();
}
public void setInstallPath(String installPath) {
this.installPath = installPath;
}


@@ -248,6 +248,7 @@ cp -r plugins/network-elements/cisco-vnmc/src/main/scripts/network/cisco/* ${RPM
mkdir -p ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/
mkdir -p ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/lib
mkdir -p ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/setup
mkdir -p ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/cks/conf
mkdir -p ${RPM_BUILD_ROOT}%{_localstatedir}/log/%{name}/management
mkdir -p ${RPM_BUILD_ROOT}%{_sysconfdir}/%{name}/management
mkdir -p ${RPM_BUILD_ROOT}%{_sysconfdir}/systemd/system/%{name}-management.service.d
@@ -273,7 +274,7 @@ wget https://github.com/apache/cloudstack-cloudmonkey/releases/download/$CMK_REL
chmod +x ${RPM_BUILD_ROOT}%{_bindir}/cmk
cp -r client/target/utilities/scripts/db/* ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/setup
cp -r plugins/integrations/kubernetes-service/src/main/resources/conf/* ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/cks/conf
cp -r client/target/cloud-client-ui-%{_maventag}.jar ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/
cp -r client/target/classes/META-INF/webapp ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/webapp
cp ui/dist/config.json ${RPM_BUILD_ROOT}%{_sysconfdir}/%{name}/management/
@@ -308,6 +309,11 @@ touch ${RPM_BUILD_ROOT}%{_localstatedir}/run/%{name}-management.pid
#install -D server/target/conf/cloudstack-catalina.logrotate ${RPM_BUILD_ROOT}%{_sysconfdir}/logrotate.d/%{name}-catalina
install -D server/target/conf/cloudstack-management.logrotate ${RPM_BUILD_ROOT}%{_sysconfdir}/logrotate.d/%{name}-management
install -D plugins/integrations/kubernetes-service/src/main/resources/conf/etcd-node.yml ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/cks/conf/etcd-node.yml
install -D plugins/integrations/kubernetes-service/src/main/resources/conf/k8s-control-node.yml ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/cks/conf/k8s-control-node.yml
install -D plugins/integrations/kubernetes-service/src/main/resources/conf/k8s-control-node-add.yml ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/cks/conf/k8s-control-node-add.yml
install -D plugins/integrations/kubernetes-service/src/main/resources/conf/k8s-node.yml ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/cks/conf/k8s-node.yml
# SystemVM template
mkdir -p ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/templates/systemvm
cp -r engine/schema/dist/systemvm-templates/* ${RPM_BUILD_ROOT}%{_datadir}/%{name}-management/templates/systemvm
@@ -608,6 +614,7 @@ pip3 install --upgrade /usr/share/cloudstack-marvin/Marvin-*.tar.gz
%attr(0755,root,root) %{_bindir}/%{name}-sysvmadm
%attr(0755,root,root) %{_bindir}/%{name}-setup-encryption
%attr(0755,root,root) %{_bindir}/cmk
%{_datadir}/%{name}-management/cks/conf/*.yml
%{_datadir}/%{name}-management/setup/*.sql
%{_datadir}/%{name}-management/setup/*.sh
%{_datadir}/%{name}-management/setup/server-setup.xml


@@ -23,4 +23,6 @@ public class KubernetesClusterEventTypes {
public static final String EVENT_KUBERNETES_CLUSTER_STOP = "KUBERNETES.CLUSTER.STOP";
public static final String EVENT_KUBERNETES_CLUSTER_SCALE = "KUBERNETES.CLUSTER.SCALE";
public static final String EVENT_KUBERNETES_CLUSTER_UPGRADE = "KUBERNETES.CLUSTER.UPGRADE";
public static final String EVENT_KUBERNETES_CLUSTER_NODES_ADD = "KUBERNETES.CLUSTER.NODES.ADD";
public static final String EVENT_KUBERNETES_CLUSTER_NODES_REMOVE = "KUBERNETES.CLUSTER.NODES.REMOVE";
}


@@ -16,6 +16,10 @@
// under the License.
package com.cloud.kubernetes.cluster;
import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.ManagementServerException;
import com.cloud.exception.ResourceUnavailableException;
import org.apache.cloudstack.api.command.user.kubernetes.cluster.AddNodesToKubernetesClusterCmd;
import java.util.List;
import org.apache.cloudstack.api.command.user.kubernetes.cluster.AddVirtualMachinesToKubernetesClusterCmd;
@@ -23,6 +27,7 @@ import org.apache.cloudstack.api.command.user.kubernetes.cluster.CreateKubernete
import org.apache.cloudstack.api.command.user.kubernetes.cluster.DeleteKubernetesClusterCmd;
import org.apache.cloudstack.api.command.user.kubernetes.cluster.GetKubernetesClusterConfigCmd;
import org.apache.cloudstack.api.command.user.kubernetes.cluster.ListKubernetesClustersCmd;
import org.apache.cloudstack.api.command.user.kubernetes.cluster.RemoveNodesFromKubernetesClusterCmd;
import org.apache.cloudstack.api.command.user.kubernetes.cluster.RemoveVirtualMachinesFromKubernetesClusterCmd;
import org.apache.cloudstack.api.command.user.kubernetes.cluster.ScaleKubernetesClusterCmd;
import org.apache.cloudstack.api.command.user.kubernetes.cluster.StartKubernetesClusterCmd;
@@ -82,6 +87,18 @@ public interface KubernetesClusterService extends PluggableService, Configurable
"The number of retries if fail to upgrade kubernetes cluster due to some reasons (e.g. drain node, etcdserver leader changed)",
true,
KubernetesServiceEnabled.key());
static final ConfigKey<Long> KubernetesClusterAddNodeTimeout = new ConfigKey<Long>("Advanced", Long.class,
"cloud.kubernetes.cluster.add.node.timeout",
"3600",
"Timeout (in seconds) within which the addition of an external node (VM / baremetal host) to a cluster should complete",
true,
KubernetesServiceEnabled.key());
static final ConfigKey<Long> KubernetesClusterRemoveNodeTimeout = new ConfigKey<Long>("Advanced", Long.class,
"cloud.kubernetes.cluster.remove.node.timeout",
"900",
"Timeout (in seconds) within which the removal of an external node (VM / baremetal host) from a cluster should complete",
true,
KubernetesServiceEnabled.key());
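The two timeout settings above are expressed in seconds. A hypothetical sketch of how such a setting is typically consumed on the management side, polling a completion condition until it holds or the deadline passes (`DeadlinePoller` is illustrative, not the CloudStack implementation):

```java
import java.util.function.BooleanSupplier;

class DeadlinePoller {
    // Polls the condition until it holds or timeoutSeconds elapse.
    // Returns true when the condition became true before the deadline.
    static boolean waitFor(BooleanSupplier condition, long timeoutSeconds, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutSeconds * 1000L;
        do {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return false;
            }
        } while (System.currentTimeMillis() < deadline);
        return false;
    }
}
```

With `cloud.kubernetes.cluster.remove.node.timeout` defaulting to 900, a caller would pass 900 as `timeoutSeconds` and a condition such as "node no longer appears in the cluster vm map".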
static final ConfigKey<Boolean> KubernetesClusterExperimentalFeaturesEnabled = new ConfigKey<Boolean>("Advanced", Boolean.class,
"cloud.kubernetes.cluster.experimental.features.enabled",
"false",
@@ -95,6 +112,36 @@ public interface KubernetesClusterService extends PluggableService, Configurable
true,
ConfigKey.Scope.Account,
KubernetesServiceEnabled.key());
static final ConfigKey<Long> KubernetesControlNodeInstallAttemptWait = new ConfigKey<Long>("Advanced", Long.class,
"cloud.kubernetes.control.node.install.attempt.wait.duration",
"15",
"Control Nodes: Time in seconds for the installation process to wait before it re-attempts",
true,
KubernetesServiceEnabled.key());
static final ConfigKey<Long> KubernetesControlNodeInstallReattempts = new ConfigKey<Long>("Advanced", Long.class,
"cloud.kubernetes.control.node.install.reattempt.count",
"100",
"Control Nodes: Number of times the offline installation of K8S will be re-attempted",
true,
KubernetesServiceEnabled.key());
static final ConfigKey<Long> KubernetesWorkerNodeInstallAttemptWait = new ConfigKey<Long>("Advanced", Long.class,
"cloud.kubernetes.worker.node.install.attempt.wait.duration",
"30",
"Worker Nodes: Time in seconds for the installation process to wait before it re-attempts",
true,
KubernetesServiceEnabled.key());
static final ConfigKey<Long> KubernetesWorkerNodeInstallReattempts = new ConfigKey<Long>("Advanced", Long.class,
"cloud.kubernetes.worker.node.install.reattempt.count",
"40",
"Worker Nodes: Number of times the offline installation of K8S will be re-attempted",
true,
KubernetesServiceEnabled.key());
static final ConfigKey<Integer> KubernetesEtcdNodeStartPort = new ConfigKey<Integer>("Advanced", Integer.class,
"cloud.kubernetes.etcd.node.start.port",
"50000",
"Start port for Port forwarding rules for etcd nodes",
true,
KubernetesServiceEnabled.key());
KubernetesCluster findById(final Long id);
@@ -102,9 +149,11 @@
KubernetesCluster createManagedKubernetesCluster(CreateKubernetesClusterCmd cmd) throws CloudRuntimeException;
void startKubernetesCluster(CreateKubernetesClusterCmd cmd) throws CloudRuntimeException;
void startKubernetesCluster(CreateKubernetesClusterCmd cmd) throws CloudRuntimeException, ManagementServerException, ResourceUnavailableException, InsufficientCapacityException;
void startKubernetesCluster(StartKubernetesClusterCmd cmd) throws CloudRuntimeException;
void startKubernetesCluster(StartKubernetesClusterCmd cmd) throws CloudRuntimeException, ManagementServerException, ResourceUnavailableException, InsufficientCapacityException;
boolean startKubernetesCluster(long kubernetesClusterId, Long domainId, String accountName, Long asNumber, boolean onCreate) throws CloudRuntimeException, ManagementServerException, ResourceUnavailableException, InsufficientCapacityException;
boolean stopKubernetesCluster(StopKubernetesClusterCmd cmd) throws CloudRuntimeException;
@@ -124,6 +173,10 @@
boolean addVmsToCluster(AddVirtualMachinesToKubernetesClusterCmd cmd);
boolean addNodesToKubernetesCluster(AddNodesToKubernetesClusterCmd cmd);
boolean removeNodesFromKubernetesCluster(RemoveNodesFromKubernetesClusterCmd cmd) throws Exception;
List<RemoveVirtualMachinesFromKubernetesClusterResponse> removeVmsFromCluster(RemoveVirtualMachinesFromKubernetesClusterCmd cmd);
boolean isDirectAccess(Network network);
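The `KubernetesEtcdNodeStartPort` setting above gives etcd nodes their own port-forwarding range, which is how the "exclude etcd nodes when calculating port numbers" fix in this PR keeps the worker/control SSH port layout stable. A minimal model of that layout, under the assumption that nodes are numbered sequentially within each range (`NodePortLayout` and its method names are illustrative, not the CloudStack code):

```java
class NodePortLayout {
    static final int CLUSTER_NODES_DEFAULT_START_SSH_PORT = 2222;
    // default of cloud.kubernetes.etcd.node.start.port
    static final int ETCD_NODE_START_PORT = 50000;

    // Worker/control nodes are indexed without counting etcd nodes, so etcd
    // nodes never consume ports from this range.
    static int sshPortForClusterNode(int nonEtcdNodeIndex) {
        return CLUSTER_NODES_DEFAULT_START_SSH_PORT + nonEtcdNodeIndex;
    }

    // etcd nodes draw from their own range starting at the configured port.
    static int sshPortForEtcdNode(int etcdNodeIndex) {
        return ETCD_NODE_START_PORT + etcdNodeIndex;
    }
}
```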


@@ -118,6 +118,33 @@ public class KubernetesClusterVO implements KubernetesCluster {
@Column(name = "cluster_type")
private ClusterType clusterType;
@Column(name = "control_node_service_offering_id")
private Long controlNodeServiceOfferingId;
@Column(name = "worker_node_service_offering_id")
private Long workerNodeServiceOfferingId;
@Column(name = "etcd_node_service_offering_id")
private Long etcdNodeServiceOfferingId;
@Column(name = "etcd_node_count")
private Long etcdNodeCount;
@Column(name = "control_node_template_id")
private Long controlNodeTemplateId;
@Column(name = "worker_node_template_id")
private Long workerNodeTemplateId;
@Column(name = "etcd_node_template_id")
private Long etcdNodeTemplateId;
@Column(name = "cni_config_id", nullable = true)
private Long cniConfigId = null;
@Column(name = "cni_config_details", updatable = true, length = 4096)
private String cniConfigDetails;
@Override
public long getId() {
return id;
@@ -237,7 +264,7 @@ public class KubernetesClusterVO implements KubernetesCluster {
@Override
public long getTotalNodeCount() {
return this.controlNodeCount + this.nodeCount;
return this.controlNodeCount + this.nodeCount + this.getEtcdNodeCount();
}
@Override
@@ -414,4 +441,77 @@
public Class<?> getEntityType() {
return KubernetesCluster.class;
}
public Long getControlNodeServiceOfferingId() {
return controlNodeServiceOfferingId;
}
public void setControlNodeServiceOfferingId(Long controlNodeServiceOfferingId) {
this.controlNodeServiceOfferingId = controlNodeServiceOfferingId;
}
public Long getWorkerNodeServiceOfferingId() {
return workerNodeServiceOfferingId;
}
public void setWorkerNodeServiceOfferingId(Long workerNodeServiceOfferingId) {
this.workerNodeServiceOfferingId = workerNodeServiceOfferingId;
}
public Long getEtcdNodeServiceOfferingId() {
return etcdNodeServiceOfferingId;
}
public void setEtcdNodeServiceOfferingId(Long etcdNodeServiceOfferingId) {
this.etcdNodeServiceOfferingId = etcdNodeServiceOfferingId;
}
public Long getEtcdNodeCount() {
return etcdNodeCount != null ? etcdNodeCount : 0L;
}
public void setEtcdNodeCount(Long etcdNodeCount) {
this.etcdNodeCount = etcdNodeCount;
}
public Long getEtcdNodeTemplateId() {
return etcdNodeTemplateId;
}
public void setEtcdNodeTemplateId(Long etcdNodeTemplateId) {
this.etcdNodeTemplateId = etcdNodeTemplateId;
}
public Long getWorkerNodeTemplateId() {
return workerNodeTemplateId;
}
public void setWorkerNodeTemplateId(Long workerNodeTemplateId) {
this.workerNodeTemplateId = workerNodeTemplateId;
}
public Long getControlNodeTemplateId() {
return controlNodeTemplateId;
}
public void setControlNodeTemplateId(Long controlNodeTemplateId) {
this.controlNodeTemplateId = controlNodeTemplateId;
}
public Long getCniConfigId() {
return cniConfigId;
}
public void setCniConfigId(Long cniConfigId) {
this.cniConfigId = cniConfigId;
}
public String getCniConfigDetails() {
return cniConfigDetails;
}
public void setCniConfigDetails(String cniConfigDetails) {
this.cniConfigDetails = cniConfigDetails;
}
}
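The null-safe `getEtcdNodeCount()` above matters because `etcd_node_count` is a nullable column: pre-existing clusters have no value, yet `getTotalNodeCount()` now adds the etcd count into the total. A minimal model of that accounting (`ClusterCounts` is illustrative, not the VO itself):

```java
class ClusterCounts {
    long controlNodeCount;
    long nodeCount;     // worker nodes
    Long etcdNodeCount; // nullable, like the kubernetes_cluster column

    ClusterCounts(long control, long workers, Long etcd) {
        this.controlNodeCount = control;
        this.nodeCount = workers;
        this.etcdNodeCount = etcd;
    }

    // Default a missing etcd count to 0 so arithmetic never NPEs.
    long getEtcdNodeCount() {
        return etcdNodeCount != null ? etcdNodeCount : 0L;
    }

    long getTotalNodeCount() {
        return controlNodeCount + nodeCount + getEtcdNodeCount();
    }
}
```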


@@ -42,6 +42,18 @@ public class KubernetesClusterVmMapVO implements KubernetesClusterVmMap {
@Column(name = "control_node")
boolean controlNode;
@Column(name = "etcd_node")
boolean etcdNode;
@Column(name = "external_node")
boolean externalNode;
@Column(name = "manual_upgrade")
boolean manualUpgrade;
@Column(name = "kubernetes_node_version")
String nodeVersion;
public KubernetesClusterVmMapVO() {
}
@@ -83,4 +95,36 @@
public void setControlNode(boolean controlNode) {
this.controlNode = controlNode;
}
public boolean isEtcdNode() {
return etcdNode;
}
public void setEtcdNode(boolean etcdNode) {
this.etcdNode = etcdNode;
}
public boolean isExternalNode() {
return externalNode;
}
public void setExternalNode(boolean externalNode) {
this.externalNode = externalNode;
}
public boolean isManualUpgrade() {
return manualUpgrade;
}
public void setManualUpgrade(boolean manualUpgrade) {
this.manualUpgrade = manualUpgrade;
}
public String getNodeVersion() {
return nodeVersion;
}
public void setNodeVersion(String nodeVersion) {
this.nodeVersion = nodeVersion;
}
}


@@ -18,10 +18,18 @@ package com.cloud.kubernetes.cluster;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import javax.inject.Inject;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.offering.ServiceOffering;
import com.cloud.service.dao.ServiceOfferingDao;
import com.cloud.storage.VMTemplateVO;
import com.cloud.storage.dao.VMTemplateDao;
import com.cloud.vm.VmDetailConstants;
import org.apache.cloudstack.acl.ControlledEntity;
import org.apache.cloudstack.api.ApiCommandResourceType;
import org.apache.cloudstack.framework.config.ConfigKey;
@@ -38,6 +46,8 @@ import com.cloud.utils.component.AdapterBase;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.vm.UserVmManager;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;
import org.apache.commons.lang3.ObjectUtils;
@@ -52,6 +62,10 @@ public class KubernetesServiceHelperImpl extends AdapterBase implements Kubernet
@Inject
private KubernetesClusterVmMapDao kubernetesClusterVmMapDao;
@Inject
protected ServiceOfferingDao serviceOfferingDao;
@Inject
protected VMTemplateDao vmTemplateDao;
@Inject
KubernetesClusterService kubernetesClusterService;
protected void setEventTypeEntityDetails(Class<?> eventTypeDefinedClass, Class<?> entityClass) {
@@ -110,6 +124,126 @@
}
@Override
public boolean isValidNodeType(String nodeType) {
if (StringUtils.isBlank(nodeType)) {
return false;
}
try {
KubernetesClusterNodeType.valueOf(nodeType.toUpperCase());
return true;
} catch (IllegalArgumentException e) {
return false;
}
}
@Override
public Map<String, Long> getServiceOfferingNodeTypeMap(Map<String, Map<String, String>> serviceOfferingNodeTypeMap) {
Map<String, Long> mapping = new HashMap<>();
if (MapUtils.isNotEmpty(serviceOfferingNodeTypeMap)) {
for (Map<String, String> entry : serviceOfferingNodeTypeMap.values()) {
processNodeTypeOfferingEntryAndAddToMappingIfValid(entry, mapping);
}
}
return mapping;
}
protected void checkNodeTypeOfferingEntryCompleteness(String nodeTypeStr, String serviceOfferingUuid) {
if (StringUtils.isAnyEmpty(nodeTypeStr, serviceOfferingUuid)) {
String error = String.format("Incomplete Node Type to Service Offering ID mapping: '%s' -> '%s'", nodeTypeStr, serviceOfferingUuid);
logger.error(error);
throw new InvalidParameterValueException(error);
}
}
protected void checkNodeTypeOfferingEntryValues(String nodeTypeStr, ServiceOffering serviceOffering, String serviceOfferingUuid) {
if (!isValidNodeType(nodeTypeStr)) {
String error = String.format("The provided value '%s' for Node Type is invalid", nodeTypeStr);
logger.error(error);
throw new InvalidParameterValueException(error);
}
if (serviceOffering == null) {
String error = String.format("Cannot find a service offering with ID %s", serviceOfferingUuid);
logger.error(error);
throw new InvalidParameterValueException(error);
}
}
protected void addNodeTypeOfferingEntry(String nodeTypeStr, String serviceOfferingUuid, ServiceOffering serviceOffering, Map<String, Long> mapping) {
if (logger.isDebugEnabled()) {
logger.debug("Node Type: '{}' should use Service Offering ID: '{}'", nodeTypeStr, serviceOfferingUuid);
}
KubernetesClusterNodeType nodeType = KubernetesClusterNodeType.valueOf(nodeTypeStr.toUpperCase());
mapping.put(nodeType.name(), serviceOffering.getId());
}
protected void processNodeTypeOfferingEntryAndAddToMappingIfValid(Map<String, String> entry, Map<String, Long> mapping) {
if (MapUtils.isEmpty(entry)) {
return;
}
String nodeTypeStr = entry.get(VmDetailConstants.CKS_NODE_TYPE);
String serviceOfferingUuid = entry.get(VmDetailConstants.OFFERING);
checkNodeTypeOfferingEntryCompleteness(nodeTypeStr, serviceOfferingUuid);
ServiceOffering serviceOffering = serviceOfferingDao.findByUuid(serviceOfferingUuid);
checkNodeTypeOfferingEntryValues(nodeTypeStr, serviceOffering, serviceOfferingUuid);
addNodeTypeOfferingEntry(nodeTypeStr, serviceOfferingUuid, serviceOffering, mapping);
}
protected void checkNodeTypeTemplateEntryCompleteness(String nodeTypeStr, String templateUuid) {
if (StringUtils.isAnyEmpty(nodeTypeStr, templateUuid)) {
String error = String.format("Incomplete Node Type to template ID mapping: '%s' -> '%s'", nodeTypeStr, templateUuid);
logger.error(error);
throw new InvalidParameterValueException(error);
}
}
protected void checkNodeTypeTemplateEntryValues(String nodeTypeStr, VMTemplateVO template, String templateUuid) {
if (!isValidNodeType(nodeTypeStr)) {
String error = String.format("The provided value '%s' for Node Type is invalid", nodeTypeStr);
logger.error(error);
throw new InvalidParameterValueException(error);
}
if (template == null) {
String error = String.format("Cannot find a template with ID %s", templateUuid);
logger.error(error);
throw new InvalidParameterValueException(error);
}
}
protected void addNodeTypeTemplateEntry(String nodeTypeStr, String templateUuid, VMTemplateVO template, Map<String, Long> mapping) {
if (logger.isDebugEnabled()) {
logger.debug("Node Type: '{}' should use template ID: '{}'", nodeTypeStr, templateUuid);
}
KubernetesClusterNodeType nodeType = KubernetesClusterNodeType.valueOf(nodeTypeStr.toUpperCase());
mapping.put(nodeType.name(), template.getId());
}
protected void processNodeTypeTemplateEntryAndAddToMappingIfValid(Map<String, String> entry, Map<String, Long> mapping) {
if (MapUtils.isEmpty(entry)) {
return;
}
String nodeTypeStr = entry.get(VmDetailConstants.CKS_NODE_TYPE);
String templateUuid = entry.get(VmDetailConstants.TEMPLATE);
checkNodeTypeTemplateEntryCompleteness(nodeTypeStr, templateUuid);
VMTemplateVO template = vmTemplateDao.findByUuid(templateUuid);
checkNodeTypeTemplateEntryValues(nodeTypeStr, template, templateUuid);
addNodeTypeTemplateEntry(nodeTypeStr, templateUuid, template, mapping);
}
@Override
public Map<String, Long> getTemplateNodeTypeMap(Map<String, Map<String, String>> templateNodeTypeMap) {
Map<String, Long> mapping = new HashMap<>();
if (MapUtils.isNotEmpty(templateNodeTypeMap)) {
for (Map<String, String> entry : templateNodeTypeMap.values()) {
processNodeTypeTemplateEntryAndAddToMappingIfValid(entry, mapping);
}
}
return mapping;
}
public void cleanupForAccount(Account account) {
kubernetesClusterService.cleanupForAccount(account);
}
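The helper above treats a node-type string as valid only if it parses (case-insensitively) into the node-type enum, and builds the per-node-type mappings from that. A self-contained sketch of the same pattern, with an illustrative enum standing in for `KubernetesClusterNodeType` (its exact members are an assumption here):

```java
class NodeTypeValidator {
    // Stand-in for KubernetesClusterNodeType; actual members may differ.
    enum NodeType { CONTROL, WORKER, ETCD }

    // Valid iff the (upper-cased) string names an enum constant.
    static boolean isValidNodeType(String nodeType) {
        if (nodeType == null || nodeType.trim().isEmpty()) {
            return false;
        }
        try {
            NodeType.valueOf(nodeType.toUpperCase());
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
```

Catching `IllegalArgumentException` from `Enum.valueOf` is the standard way to test enum membership without enumerating values by hand.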


@@ -14,17 +14,24 @@
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.kubernetes.cluster.actionworkers;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.lang.reflect.Field;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import javax.inject.Inject;
@@ -32,13 +39,40 @@ import javax.inject.Inject;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;
import com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType;
import com.cloud.kubernetes.cluster.KubernetesClusterService;
import com.cloud.network.dao.NetworkVO;
import com.cloud.offering.ServiceOffering;
import com.cloud.exception.ManagementServerException;
import com.cloud.exception.NetworkRuleConflictException;
import com.cloud.kubernetes.cluster.utils.KubernetesClusterUtil;
import com.cloud.network.firewall.FirewallService;
import com.cloud.network.rules.FirewallRule;
import com.cloud.network.rules.PortForwardingRuleVO;
import com.cloud.network.rules.RulesService;
import com.cloud.network.rules.dao.PortForwardingRulesDao;
import com.cloud.user.SSHKeyPairVO;
import com.cloud.user.dao.UserDataDao;
import com.cloud.utils.component.ComponentContext;
import com.cloud.utils.db.TransactionCallbackWithException;
import com.cloud.utils.net.Ip;
import com.cloud.vm.Nic;
import com.cloud.vm.NicVO;
import com.cloud.vm.VirtualMachine;
import com.cloud.vm.dao.NicDao;
import com.cloud.vm.UserVmManager;
import org.apache.cloudstack.affinity.AffinityGroupVO;
import org.apache.cloudstack.affinity.dao.AffinityGroupDao;
import org.apache.cloudstack.api.ApiCommandResourceType;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.command.user.firewall.CreateFirewallRuleCmd;
import org.apache.cloudstack.ca.CAManager;
import org.apache.cloudstack.config.ApiServiceConfiguration;
import org.apache.cloudstack.context.CallContext;
import org.apache.cloudstack.engine.orchestration.service.NetworkOrchestrationService;
import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
import org.apache.cloudstack.userdata.UserDataManager;
import org.apache.commons.codec.binary.Base64;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang3.StringUtils;
@@ -91,11 +125,14 @@ import com.cloud.utils.ssh.SshHelper;
import com.cloud.vm.UserVmDetailVO;
import com.cloud.vm.UserVmService;
import com.cloud.vm.UserVmVO;
import com.cloud.vm.VirtualMachine;
import com.cloud.vm.VmDetailConstants;
import com.cloud.vm.dao.UserVmDao;
import com.cloud.vm.dao.UserVmDetailsDao;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.CONTROL;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.ETCD;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.WORKER;
public class KubernetesClusterActionWorker {
@@ -103,13 +140,18 @@ public class KubernetesClusterActionWorker {
public static final int CLUSTER_API_PORT = 6443;
public static final int DEFAULT_SSH_PORT = 22;
public static final int CLUSTER_NODES_DEFAULT_START_SSH_PORT = 2222;
public static final int ETCD_NODE_CLIENT_REQUEST_PORT = 2379;
public static final int ETCD_NODE_PEER_COMM_PORT = 2380;
public static final int CLUSTER_NODES_DEFAULT_SSH_PORT_SG = DEFAULT_SSH_PORT;
public static final String CKS_CLUSTER_SECURITY_GROUP_NAME = "CKSSecurityGroup";
public static final String CKS_SECURITY_GROUP_DESCRIPTION = "Security group for CKS nodes";
public static final String CKS_CONFIG_PATH = "/usr/share/cloudstack-management/cks";
protected Logger logger = LogManager.getLogger(getClass());
protected final static List<KubernetesClusterNodeType> CLUSTER_NODES_TYPES_LIST = Arrays.asList(WORKER, CONTROL, ETCD);
protected StateMachine2<KubernetesCluster.State, KubernetesCluster.Event, KubernetesCluster> _stateMachine = KubernetesCluster.State.getStateMachine();
@Inject
@@ -147,6 +189,12 @@
@Inject
protected UserVmService userVmService;
@Inject
protected UserDataManager userDataManager;
@Inject
protected UserDataDao userDataDao;
@Inject
protected UserVmManager userVmManager;
@Inject
protected VlanDao vlanDao;
@Inject
protected LaunchPermissionDao launchPermissionDao;
@@ -154,6 +202,16 @@ public class KubernetesClusterActionWorker {
public ProjectService projectService;
@Inject
public VpcService vpcService;
@Inject
public PortForwardingRulesDao portForwardingRulesDao;
@Inject
protected RulesService rulesService;
@Inject
protected FirewallService firewallService;
@Inject
private NicDao nicDao;
@Inject
protected AffinityGroupDao affinityGroupDao;
protected KubernetesClusterDao kubernetesClusterDao;
protected KubernetesClusterVmMapDao kubernetesClusterVmMapDao;
@@ -163,6 +221,9 @@ public class KubernetesClusterActionWorker {
protected KubernetesCluster kubernetesCluster;
protected Account owner;
protected VirtualMachineTemplate clusterTemplate;
protected VirtualMachineTemplate controlNodeTemplate;
protected VirtualMachineTemplate workerNodeTemplate;
protected VirtualMachineTemplate etcdTemplate;
protected File sshKeyFile;
protected String publicIpAddress;
protected int sshPort;
@@ -171,6 +232,8 @@ public class KubernetesClusterActionWorker {
protected final String deploySecretsScriptFilename = "deploy-cloudstack-secret";
protected final String deployProviderScriptFilename = "deploy-provider";
protected final String autoscaleScriptFilename = "autoscale-kube-cluster";
protected final String validateNodeScript = "validate-cks-node";
protected final String removeNodeFromClusterScript = "remove-node-from-cluster";
protected final String scriptPath = "/opt/bin/";
protected File deploySecretsScriptFile;
protected File deployProviderScriptFile;
@@ -194,7 +257,10 @@ public class KubernetesClusterActionWorker {
DataCenterVO dataCenterVO = dataCenterDao.findById(zoneId);
VMTemplateVO template = templateDao.findById(templateId);
Hypervisor.HypervisorType type = template.getHypervisorType();
this.clusterTemplate = manager.getKubernetesServiceTemplate(dataCenterVO, type);
this.clusterTemplate = manager.getKubernetesServiceTemplate(dataCenterVO, type, null, KubernetesClusterNodeType.DEFAULT);
this.controlNodeTemplate = templateDao.findById(this.kubernetesCluster.getControlNodeTemplateId());
this.workerNodeTemplate = templateDao.findById(this.kubernetesCluster.getWorkerNodeTemplateId());
this.etcdTemplate = templateDao.findById(this.kubernetesCluster.getEtcdNodeTemplateId());
this.sshKeyFile = getManagementServerSshPublicKeyFile();
}
@@ -202,6 +268,11 @@ public class KubernetesClusterActionWorker {
return IOUtils.toString(Objects.requireNonNull(Thread.currentThread().getContextClassLoader().getResourceAsStream(resource)), com.cloud.utils.StringUtils.getPreferredCharset());
}
protected String readK8sConfigFile(String resource) throws IOException {
Path path = Paths.get(String.format("%s%s", CKS_CONFIG_PATH, resource));
return Files.readString(path);
}
protected String getControlNodeLoginUser() {
List<KubernetesClusterVmMapVO> vmMapVOList = getKubernetesClusterVMMaps();
if (!vmMapVOList.isEmpty()) {
@@ -254,7 +325,7 @@
}
protected void logTransitStateDetachIsoAndThrow(final Level logLevel, final String message, final KubernetesCluster kubernetesCluster,
final List<UserVm> clusterVMs, final KubernetesCluster.Event event, final Exception e) throws CloudRuntimeException {
final List<UserVm> clusterVMs, final KubernetesCluster.Event event, final Exception e) throws CloudRuntimeException {
logMessage(logLevel, message, e);
stateTransitTo(kubernetesCluster.getId(), event);
detachIsoKubernetesVMs(clusterVMs);
@@ -265,7 +336,7 @@
}
protected void deleteTemplateLaunchPermission() {
if (clusterTemplate != null && owner != null) {
if (isDefaultTemplateUsed() && owner != null) {
logger.info("Revoking launch permission for systemVM template");
launchPermissionDao.removePermissions(clusterTemplate.getId(), Collections.singletonList(owner.getId()));
}
@@ -304,11 +375,20 @@
return new File(keyFile);
}
protected KubernetesClusterVmMapVO addKubernetesClusterVm(final long kubernetesClusterId, final long vmId, boolean isControlNode) {
return Transaction.execute(new TransactionCallback<>() {
protected KubernetesClusterVmMapVO addKubernetesClusterVm(final long kubernetesClusterId, final long vmId,
boolean isControlNode, boolean isExternalNode,
boolean isEtcdNode, boolean markForManualUpgrade) {
KubernetesSupportedVersion kubernetesVersion = kubernetesSupportedVersionDao.findById(kubernetesCluster.getKubernetesVersionId());
return Transaction.execute(new TransactionCallback<KubernetesClusterVmMapVO>() {
@Override
public KubernetesClusterVmMapVO doInTransaction(TransactionStatus status) {
KubernetesClusterVmMapVO newClusterVmMap = new KubernetesClusterVmMapVO(kubernetesClusterId, vmId, isControlNode);
newClusterVmMap.setExternalNode(isExternalNode);
newClusterVmMap.setManualUpgrade(markForManualUpgrade);
newClusterVmMap.setEtcdNode(isEtcdNode);
if (!isEtcdNode) {
newClusterVmMap.setNodeVersion(kubernetesVersion.getSemanticVersion());
}
kubernetesClusterVmMapDao.persist(newClusterVmMap);
return newClusterVmMap;
}
@@ -319,6 +399,7 @@
if (controlVm != null) {
return controlVm;
}
Long etcdNodeCount = kubernetesCluster.getEtcdNodeCount();
List<KubernetesClusterVmMapVO> clusterVMs = kubernetesClusterVmMapDao.listByClusterId(kubernetesCluster.getId());
if (CollectionUtils.isEmpty(clusterVMs)) {
logger.warn(String.format("Unable to retrieve VMs for Kubernetes cluster : %s", kubernetesCluster.getName()));
@@ -329,7 +410,8 @@
vmIds.add(vmMap.getVmId());
}
Collections.sort(vmIds);
return userVmDao.findById(vmIds.get(0));
int controlNodeIndex = Objects.nonNull(etcdNodeCount) ? etcdNodeCount.intValue() : 0;
return userVmDao.findById(vmIds.get(controlNodeIndex));
}
protected String getControlVmPrivateIp() {
@@ -368,7 +450,23 @@
return address;
}
protected IpAddress acquireVpcTierKubernetesPublicIp(Network network) throws
protected IpAddress getPublicIp(Network network) throws ManagementServerException {
if (network.getVpcId() != null) {
IpAddress publicIp = getVpcTierKubernetesPublicIp(network);
if (publicIp == null) {
throw new ManagementServerException(String.format("No public IP addresses found for VPC tier : %s, Kubernetes cluster : %s", network.getName(), kubernetesCluster.getName()));
}
return publicIp;
}
IpAddress publicIp = getNetworkSourceNatIp(network);
if (publicIp == null) {
throw new ManagementServerException(String.format("No source NAT IP addresses found for network : %s, Kubernetes cluster : %s",
network.getName(), kubernetesCluster.getName()));
}
return publicIp;
}
protected IpAddress acquireVpcTierKubernetesPublicIp(Network network, boolean forEtcd) throws
InsufficientAddressCapacityException, ResourceAllocationException, ResourceUnavailableException {
IpAddress ip = networkService.allocateIP(owner, kubernetesCluster.getZoneId(), network.getId(), null, null);
if (ip == null) {
@@ -376,7 +474,19 @@
}
ip = vpcService.associateIPToVpc(ip.getId(), network.getVpcId());
ip = ipAddressManager.associateIPToGuestNetwork(ip.getId(), network.getId(), false);
kubernetesClusterDetailsDao.addDetail(kubernetesCluster.getId(), ApiConstants.PUBLIC_IP_ID, ip.getUuid(), false);
if (!forEtcd) {
kubernetesClusterDetailsDao.addDetail(kubernetesCluster.getId(), ApiConstants.PUBLIC_IP_ID, ip.getUuid(), false);
}
return ip;
}
protected IpAddress acquirePublicIpForIsolatedNetwork(Network network) throws
InsufficientAddressCapacityException, ResourceAllocationException, ResourceUnavailableException {
IpAddress ip = networkService.allocateIP(owner, kubernetesCluster.getZoneId(), network.getId(), null, null);
if (ip == null) {
return null;
}
ip = networkService.associateIPToNetwork(ip.getId(), network.getId());
return ip;
}
@@ -408,7 +518,7 @@
return new Pair<>(address.getAddress().addr(), port);
}
if (acquireNewPublicIpForVpcTierIfNeeded) {
address = acquireVpcTierKubernetesPublicIp(network);
address = acquireVpcTierKubernetesPublicIp(network, false);
if (address != null) {
return new Pair<>(address.getAddress().addr(), port);
}
@@ -501,7 +611,7 @@
CallContext vmContext = CallContext.register(CallContext.current(), ApiCommandResourceType.VirtualMachine);
vmContext.putContextParameter(VirtualMachine.class, vm.getUuid());
try {
result = templateService.detachIso(vm.getId(), true);
result = templateService.detachIso(vm.getId(), null, true);
} catch (CloudRuntimeException ex) {
logger.warn("Failed to detach binaries ISO from VM: {} in the Kubernetes cluster: {} ", vm, kubernetesCluster, ex);
} finally {
@@ -569,14 +679,14 @@
try {
String command = String.format("sudo %s/%s -u '%s' -k '%s' -s '%s'",
scriptPath, deploySecretsScriptFilename, ApiServiceConfiguration.ApiServletPath.value(), keys[0], keys[1]);
scriptPath, deploySecretsScriptFilename, ApiServiceConfiguration.ApiServletPath.value(), keys[0], keys[1]);
Account account = accountDao.findById(kubernetesCluster.getAccountId());
if (account != null && account.getType() == Account.Type.PROJECT) {
String projectId = projectService.findByProjectAccountId(account.getId()).getUuid();
command = String.format("%s -p '%s'", command, projectId);
}
Pair<Boolean, String> result = SshHelper.sshExecute(publicIpAddress, sshPort, getControlNodeLoginUser(),
pkFile, null, command, 10000, 10000, 60000);
pkFile, null, command, 10000, 10000, 60000);
return result.first();
} catch (Exception e) {
String msg = String.format("Failed to add cloudstack-secret to Kubernetes cluster: %s", kubernetesCluster.getName());
@@ -595,7 +705,7 @@
writer.close();
} catch (IOException e) {
logAndThrow(Level.ERROR, String.format("Kubernetes Cluster %s : Failed to fetch script %s",
kubernetesCluster.getName(), filename), e);
kubernetesCluster.getName(), filename), e);
}
return file;
}
@@ -612,13 +722,17 @@
copyScriptFile(nodeAddress, sshPort, autoscaleScriptFile, autoscaleScriptFilename);
}
protected void copyScriptFile(String nodeAddress, final int sshPort, File file, String desitnation) {
protected void copyScriptFile(String nodeAddress, final int sshPort, File file, String destination) {
try {
if (Objects.isNull(sshKeyFile)) {
sshKeyFile = getManagementServerSshPublicKeyFile();
}
SshHelper.scpTo(nodeAddress, sshPort, getControlNodeLoginUser(), sshKeyFile, null,
"~/", file.getAbsolutePath(), "0755");
String cmdStr = String.format("sudo mv ~/%s %s/%s", file.getName(), scriptPath, desitnation);
SshHelper.sshExecute(publicIpAddress, sshPort, getControlNodeLoginUser(), sshKeyFile, null,
cmdStr, 10000, 10000, 10 * 60 * 1000);
"~/", file.getAbsolutePath(), "0755", 20000, 30 * 60 * 1000);
// Ensure destination dir scriptPath exists and copy file to destination
String cmdStr = String.format("sudo mkdir -p %s ; sudo mv ~/%s %s/%s", scriptPath, file.getName(), scriptPath, destination);
SshHelper.sshExecute(nodeAddress, sshPort, getControlNodeLoginUser(), sshKeyFile, null,
cmdStr, 10000, 10000, 10 * 60 * 1000);
} catch (Exception e) {
throw new CloudRuntimeException(e);
}
@@ -635,20 +749,30 @@
String command = String.format("sudo /opt/bin/kubectl annotate node %s cluster-autoscaler.kubernetes.io/scale-down-disabled=true ; ", name);
commands.append(command);
}
try {
File pkFile = getManagementServerSshPublicKeyFile();
Pair<String, Integer> publicIpSshPort = getKubernetesClusterServerIpSshPort(null);
publicIpAddress = publicIpSshPort.first();
sshPort = publicIpSshPort.second();
int retryCounter = 0;
while (retryCounter < 3) {
retryCounter++;
try {
File pkFile = getManagementServerSshPublicKeyFile();
Pair<String, Integer> publicIpSshPort = getKubernetesClusterServerIpSshPort(null);
publicIpAddress = publicIpSshPort.first();
sshPort = publicIpSshPort.second();
Pair<Boolean, String> result = SshHelper.sshExecute(publicIpAddress, sshPort, getControlNodeLoginUser(),
pkFile, null, commands.toString(), 10000, 10000, 60000);
return result.first();
} catch (Exception e) {
String msg = String.format("Failed to taint control nodes on : %s : %s", kubernetesCluster.getName(), e.getMessage());
logMessage(Level.ERROR, msg, e);
return false;
Pair<Boolean, String> result = SshHelper.sshExecute(publicIpAddress, sshPort, getControlNodeLoginUser(),
pkFile, null, commands.toString(), 10000, 10000, 60000);
return result.first();
} catch (Exception e) {
String msg = String.format("Failed to taint control nodes on : %s : %s", kubernetesCluster.getName(), e.getMessage());
logMessage(Level.ERROR, msg, e);
}
try {
Thread.sleep(5 * 1000L);
} catch (InterruptedException ie) {
logger.error(String.format("Error while attempting to taint nodes on Kubernetes cluster: %s", kubernetesCluster.getName()), ie);
}
retryCounter++;
}
return false;
}
protected boolean deployProvider() {
@@ -656,7 +780,7 @@
// Since the provider creates IP addresses, don't deploy it unless the underlying network supports it
if (manager.isDirectAccess(network)) {
logMessage(Level.INFO, String.format("Skipping adding the provider as %s is not on an isolated network",
kubernetesCluster.getName()), null);
kubernetesCluster.getName()), null);
return true;
}
File pkFile = getManagementServerSshPublicKeyFile();
@@ -667,7 +791,7 @@
try {
String command = String.format("sudo %s/%s", scriptPath, deployProviderScriptFilename);
Pair<Boolean, String> result = SshHelper.sshExecute(publicIpAddress, sshPort, getControlNodeLoginUser(),
pkFile, null, command, 10000, 10000, 60000);
pkFile, null, command, 10000, 10000, 60000);
// Maybe the file isn't present. Try and copy it
if (!result.first()) {
@@ -677,12 +801,12 @@
if (!createCloudStackSecret(keys)) {
logTransitStateAndThrow(Level.ERROR, String.format("Failed to setup keys for Kubernetes cluster %s",
kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
}
// If at first you don't succeed ...
result = SshHelper.sshExecute(publicIpAddress, sshPort, getControlNodeLoginUser(),
pkFile, null, command, 10000, 10000, 60000);
pkFile, null, command, 10000, 10000, 60000);
if (!result.first()) {
throw new CloudRuntimeException(result.second());
}
@@ -698,4 +822,247 @@
public void setKeys(String[] keys) {
this.keys = keys;
}
protected ServiceOffering getServiceOfferingForNodeTypeOnCluster(KubernetesClusterNodeType nodeType,
KubernetesCluster cluster) {
Long offeringId = null;
Long defaultOfferingId = cluster.getServiceOfferingId();
Long controlOfferingId = cluster.getControlNodeServiceOfferingId();
Long workerOfferingId = cluster.getWorkerNodeServiceOfferingId();
Long etcdOfferingId = cluster.getEtcdNodeServiceOfferingId();
if (KubernetesClusterNodeType.CONTROL == nodeType) {
offeringId = controlOfferingId != null ? controlOfferingId : defaultOfferingId;
} else if (KubernetesClusterNodeType.WORKER == nodeType) {
offeringId = workerOfferingId != null ? workerOfferingId : defaultOfferingId;
} else if (KubernetesClusterNodeType.ETCD == nodeType && cluster.getEtcdNodeCount() != null && cluster.getEtcdNodeCount() > 0) {
offeringId = etcdOfferingId != null ? etcdOfferingId : defaultOfferingId;
}
if (offeringId == null) {
String msg = String.format("Cannot find a service offering for the %s nodes on the Kubernetes cluster %s", nodeType.name(), cluster.getName());
logger.error(msg);
throw new CloudRuntimeException(msg);
}
return serviceOfferingDao.findById(offeringId);
}
protected boolean isDefaultTemplateUsed() {
if (Arrays.asList(kubernetesCluster.getControlNodeTemplateId(), kubernetesCluster.getWorkerNodeTemplateId(), kubernetesCluster.getEtcdNodeTemplateId()).contains(kubernetesCluster.getTemplateId())) {
return true;
}
return false;
}
protected void provisionPublicIpPortForwardingRule(IpAddress publicIp, Network network, Account account,
final long vmId, final int sourcePort, final int destPort) throws NetworkRuleConflictException, ResourceUnavailableException {
final long publicIpId = publicIp.getId();
final long networkId = network.getId();
final long accountId = account.getId();
final long domainId = account.getDomainId();
Nic vmNic = networkModel.getNicInNetwork(vmId, networkId);
final Ip vmIp = new Ip(vmNic.getIPv4Address());
PortForwardingRuleVO pfRule = Transaction.execute((TransactionCallbackWithException<PortForwardingRuleVO, NetworkRuleConflictException>) status -> {
PortForwardingRuleVO newRule =
new PortForwardingRuleVO(null, publicIpId,
sourcePort, sourcePort,
vmIp,
destPort, destPort,
"tcp", networkId, accountId, domainId, vmId);
newRule.setDisplay(true);
newRule.setState(FirewallRule.State.Add);
newRule = portForwardingRulesDao.persist(newRule);
return newRule;
});
rulesService.applyPortForwardingRules(publicIp.getId(), account);
if (logger.isInfoEnabled()) {
logger.info(String.format("Provisioned SSH port forwarding rule: %s from port %d to %d on %s to the VM IP : %s in Kubernetes cluster : %s", pfRule.getUuid(), sourcePort, destPort, publicIp.getAddress().addr(), vmIp.toString(), kubernetesCluster.getName()));
}
}
public String getKubernetesNodeConfig(final String joinIp, final boolean ejectIso, final boolean mountCksIsoOnVR) throws IOException {
String k8sNodeConfig = readK8sConfigFile("/conf/k8s-node.yml");
final String sshPubKey = "{{ k8s.ssh.pub.key }}";
final String joinIpKey = "{{ k8s_control_node.join_ip }}";
final String clusterTokenKey = "{{ k8s_control_node.cluster.token }}";
final String ejectIsoKey = "{{ k8s.eject.iso }}";
final String routerIpKey = "{{ k8s.vr.iso.mounted.ip }}";
final String installWaitTime = "{{ k8s.install.wait.time }}";
final String installReattemptsCount = "{{ k8s.install.reattempts.count }}";
final Long waitTime = KubernetesClusterService.KubernetesWorkerNodeInstallAttemptWait.value();
final Long reattempts = KubernetesClusterService.KubernetesWorkerNodeInstallReattempts.value();
String routerIp = "";
if (mountCksIsoOnVR) {
NicVO routerNicOnNetwork = getVirtualRouterNicOnKubernetesClusterNetwork(kubernetesCluster);
if (Objects.nonNull(routerNicOnNetwork)) {
routerIp = routerNicOnNetwork.getIPv4Address();
}
}
String pubKey = "- \"" + configurationDao.getValue("ssh.publickey") + "\"";
String sshKeyPair = kubernetesCluster.getKeyPair();
if (StringUtils.isNotEmpty(sshKeyPair)) {
SSHKeyPairVO sshkp = sshKeyPairDao.findByName(owner.getAccountId(), owner.getDomainId(), sshKeyPair);
if (sshkp != null) {
pubKey += "\n - \"" + sshkp.getPublicKey() + "\"";
}
}
k8sNodeConfig = k8sNodeConfig.replace(sshPubKey, pubKey);
k8sNodeConfig = k8sNodeConfig.replace(joinIpKey, joinIp);
k8sNodeConfig = k8sNodeConfig.replace(clusterTokenKey, KubernetesClusterUtil.generateClusterToken(kubernetesCluster));
k8sNodeConfig = k8sNodeConfig.replace(ejectIsoKey, String.valueOf(ejectIso));
k8sNodeConfig = k8sNodeConfig.replace(routerIpKey, routerIp);
k8sNodeConfig = k8sNodeConfig.replace(installWaitTime, String.valueOf(waitTime));
k8sNodeConfig = k8sNodeConfig.replace(installReattemptsCount, String.valueOf(reattempts));
k8sNodeConfig = updateKubeConfigWithRegistryDetails(k8sNodeConfig);
return k8sNodeConfig;
}
protected String updateKubeConfigWithRegistryDetails(String k8sConfig) {
/* generate the /etc/containerd/config.toml file on the nodes only if the Kubernetes cluster
* is created to use a private Docker registry */
String registryUsername = null;
String registryPassword = null;
String registryUrl = null;
List<KubernetesClusterDetailsVO> details = kubernetesClusterDetailsDao.listDetails(kubernetesCluster.getId());
for (KubernetesClusterDetailsVO detail : details) {
if (detail.getName().equals(ApiConstants.DOCKER_REGISTRY_USER_NAME)) {
registryUsername = detail.getValue();
}
if (detail.getName().equals(ApiConstants.DOCKER_REGISTRY_PASSWORD)) {
registryPassword = detail.getValue();
}
if (detail.getName().equals(ApiConstants.DOCKER_REGISTRY_URL)) {
registryUrl = detail.getValue();
}
}
if (StringUtils.isNoneEmpty(registryUsername, registryPassword, registryUrl)) {
// Update runcmd in the cloud-init configuration to run a script that updates the containerd config with provided registry details
String runCmd = "- bash -x /opt/bin/setup-containerd";
String registryEp = registryUrl.split("://")[1];
k8sConfig = k8sConfig.replace("- containerd config default > /etc/containerd/config.toml", runCmd);
final String registryUrlKey = "{{registry.url}}";
final String registryUrlEpKey = "{{registry.url.endpoint}}";
final String registryAuthKey = "{{registry.token}}";
final String registryUname = "{{registry.username}}";
final String registryPsswd = "{{registry.password}}";
final String usernamePasswordKey = registryUsername + ":" + registryPassword;
String base64Auth = Base64.encodeBase64String(usernamePasswordKey.getBytes(com.cloud.utils.StringUtils.getPreferredCharset()));
k8sConfig = k8sConfig.replace(registryUrlKey, registryUrl);
k8sConfig = k8sConfig.replace(registryUrlEpKey, registryEp);
k8sConfig = k8sConfig.replace(registryUname, registryUsername);
k8sConfig = k8sConfig.replace(registryPsswd, registryPassword);
k8sConfig = k8sConfig.replace(registryAuthKey, base64Auth);
}
return k8sConfig;
}
public Map<Long, Integer> addFirewallRulesForNodes(IpAddress publicIp, int size) throws ManagementServerException {
Map<Long, Integer> vmIdPortMap = new HashMap<>();
CallContext.register(CallContext.current(), null);
try {
List<KubernetesClusterVmMapVO> clusterVmList = kubernetesClusterVmMapDao.listByClusterId(kubernetesCluster.getId());
List<KubernetesClusterVmMapVO> externalNodes = clusterVmList.stream().filter(KubernetesClusterVmMapVO::isExternalNode).collect(Collectors.toList());
int endPort = (CLUSTER_NODES_DEFAULT_START_SSH_PORT + clusterVmList.size() - externalNodes.size() - kubernetesCluster.getEtcdNodeCount().intValue() - 1);
provisionFirewallRules(publicIp, owner, CLUSTER_NODES_DEFAULT_START_SSH_PORT, endPort);
if (logger.isInfoEnabled()) {
logger.info(String.format("Provisioned firewall rule to open up port %d to %d on %s for Kubernetes cluster : %s", CLUSTER_NODES_DEFAULT_START_SSH_PORT, endPort, publicIp.getAddress().addr(), kubernetesCluster.getName()));
}
if (!externalNodes.isEmpty()) {
AtomicInteger additionalNodes = new AtomicInteger(1);
externalNodes.forEach(externalNode -> {
int port = endPort + additionalNodes.get();
try {
provisionFirewallRules(publicIp, owner, port, port);
vmIdPortMap.put(externalNode.getVmId(), port);
} catch (NoSuchFieldException | IllegalAccessException | ResourceUnavailableException | NetworkRuleConflictException e) {
throw new CloudRuntimeException(String.format("Failed to provision firewall rules for SSH access for the Kubernetes cluster : %s", kubernetesCluster.getName()), e);
}
additionalNodes.addAndGet(1);
});
}
} catch (NoSuchFieldException | IllegalAccessException | ResourceUnavailableException | NetworkRuleConflictException e) {
throw new ManagementServerException(String.format("Failed to provision firewall rules for SSH access for the Kubernetes cluster : %s", kubernetesCluster.getName()), e);
} finally {
CallContext.unregister();
}
return vmIdPortMap;
}
protected void provisionFirewallRules(final IpAddress publicIp, final Account account, int startPort, int endPort) throws NoSuchFieldException,
IllegalAccessException, ResourceUnavailableException, NetworkRuleConflictException {
List<String> sourceCidrList = new ArrayList<String>();
sourceCidrList.add("0.0.0.0/0");
CreateFirewallRuleCmd rule = new CreateFirewallRuleCmd();
rule = ComponentContext.inject(rule);
Field addressField = rule.getClass().getDeclaredField("ipAddressId");
addressField.setAccessible(true);
addressField.set(rule, publicIp.getId());
Field protocolField = rule.getClass().getDeclaredField("protocol");
protocolField.setAccessible(true);
protocolField.set(rule, "TCP");
Field startPortField = rule.getClass().getDeclaredField("publicStartPort");
startPortField.setAccessible(true);
startPortField.set(rule, startPort);
Field endPortField = rule.getClass().getDeclaredField("publicEndPort");
endPortField.setAccessible(true);
endPortField.set(rule, endPort);
Field cidrField = rule.getClass().getDeclaredField("cidrlist");
cidrField.setAccessible(true);
cidrField.set(rule, sourceCidrList);
firewallService.createIngressFirewallRule(rule);
firewallService.applyIngressFwRules(publicIp.getId(), account);
}
protected NicVO getVirtualRouterNicOnKubernetesClusterNetwork(KubernetesCluster kubernetesCluster) {
long networkId = kubernetesCluster.getNetworkId();
NetworkVO kubernetesClusterNetwork = networkDao.findById(networkId);
if (kubernetesClusterNetwork == null) {
logAndThrow(Level.ERROR, String.format("Cannot find network %s set on Kubernetes Cluster %s", networkId, kubernetesCluster.getName()));
}
NicVO routerNicOnNetwork = nicDao.findByNetworkIdAndType(networkId, VirtualMachine.Type.DomainRouter);
if (routerNicOnNetwork == null) {
logAndThrow(Level.ERROR, String.format("Cannot find a Virtual Router on Kubernetes Cluster %s network %s", kubernetesCluster.getName(), kubernetesClusterNetwork.getName()));
}
return routerNicOnNetwork;
}
protected Map<Long, Integer> getVmPortMap() {
List<KubernetesClusterVmMapVO> clusterVmList = kubernetesClusterVmMapDao.listByClusterId(kubernetesCluster.getId());
List<KubernetesClusterVmMapVO> externalNodes = clusterVmList.stream().filter(KubernetesClusterVmMapVO::isExternalNode).collect(Collectors.toList());
Map<Long, Integer> vmIdPortMap = new HashMap<>();
int defaultNodesCount = clusterVmList.size() - externalNodes.size();
AtomicInteger i = new AtomicInteger(0);
externalNodes.forEach(node -> {
vmIdPortMap.put(node.getVmId(), CLUSTER_NODES_DEFAULT_START_SSH_PORT + defaultNodesCount + i.get());
i.addAndGet(1);
});
return vmIdPortMap;
}
public Long getExplicitAffinityGroup(Long domainId, Long accountId) {
AffinityGroupVO groupVO = null;
if (Objects.nonNull(accountId)) {
groupVO = affinityGroupDao.findByAccountAndType(accountId, "ExplicitDedication");
}
if (Objects.isNull(groupVO)) {
groupVO = affinityGroupDao.findDomainLevelGroupByType(domainId, "ExplicitDedication");
}
if (Objects.nonNull(groupVO)) {
return groupVO.getId();
}
return null;
}
}
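The SSH port arithmetic above (`addFirewallRulesForNodes` and `getVmPortMap`) can be illustrated with a minimal standalone sketch. It follows the `endPort` calculation in `addFirewallRulesForNodes`: regular nodes get consecutive ports from `CLUSTER_NODES_DEFAULT_START_SSH_PORT` (2222), etcd nodes are excluded from the range, and each external node is assigned the next port after it. The class and method names here are hypothetical, not part of the CloudStack API.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical standalone sketch of the CKS SSH port numbering: regular
// (non-external, non-etcd) nodes occupy consecutive ports starting at 2222;
// external nodes are mapped to the ports immediately after that range.
public class NodePortMapSketch {

    static final int START_SSH_PORT = 2222; // CLUSTER_NODES_DEFAULT_START_SSH_PORT

    // vmIds: all cluster VM ids; externalVmIds: the subset added as external
    // nodes; etcdCount: etcd nodes, excluded from the port range.
    static Map<Long, Integer> externalNodePorts(List<Long> vmIds, List<Long> externalVmIds, int etcdCount) {
        int regularNodes = vmIds.size() - externalVmIds.size() - etcdCount;
        int endPort = START_SSH_PORT + regularNodes - 1; // last port of the regular range
        Map<Long, Integer> map = new LinkedHashMap<>();
        int next = endPort + 1;
        for (Long vmId : externalVmIds) {
            map.put(vmId, next++); // each external node gets the next free port
        }
        return map;
    }

    public static void main(String[] args) {
        List<Long> vms = new ArrayList<>(List.of(1L, 2L, 3L, 4L, 5L));
        // 5 VMs - 2 external - 1 etcd = 2 regular nodes -> ports 2222-2223;
        // external nodes follow from 2224.
        Map<Long, Integer> ports = externalNodePorts(vms, List.of(4L, 5L), 1);
        System.out.println(ports); // {4=2224, 5=2225}
    }
}
```

Note that `getVmPortMap` counts etcd nodes in `defaultNodesCount` while `addFirewallRulesForNodes` subtracts them; the sketch follows the latter, matching the "Exclude etcd nodes when calculating port numbers" change in this PR.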


@@ -0,0 +1,326 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.kubernetes.cluster.actionworkers;
import com.cloud.event.ActionEventUtils;
import com.cloud.event.EventVO;
import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.ManagementServerException;
import com.cloud.exception.NetworkRuleConflictException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.hypervisor.Hypervisor;
import com.cloud.kubernetes.cluster.KubernetesCluster;
import com.cloud.kubernetes.cluster.KubernetesClusterEventTypes;
import com.cloud.kubernetes.cluster.KubernetesClusterManagerImpl;
import com.cloud.kubernetes.cluster.KubernetesClusterService;
import com.cloud.kubernetes.cluster.KubernetesClusterVO;
import com.cloud.kubernetes.cluster.utils.KubernetesClusterUtil;
import com.cloud.network.IpAddress;
import com.cloud.network.Network;
import com.cloud.network.dao.FirewallRulesDao;
import com.cloud.network.rules.FirewallRuleVO;
import com.cloud.network.rules.PortForwardingRuleVO;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.user.Account;
import com.cloud.uservm.UserVm;
import com.cloud.utils.Pair;
import com.cloud.utils.Ternary;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.ssh.SshHelper;
import com.cloud.vm.UserVmVO;
import org.apache.cloudstack.api.ApiCommandResourceType;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.command.user.vm.RebootVMCmd;
import org.apache.cloudstack.context.CallContext;
import org.apache.commons.codec.binary.Base64;
import org.apache.logging.log4j.Level;
import javax.inject.Inject;
import java.io.File;
import java.io.IOException;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
public class KubernetesClusterAddWorker extends KubernetesClusterActionWorker {
@Inject
private FirewallRulesDao firewallRulesDao;
private long addNodeTimeoutTime;
List<Long> finalNodeIds = new ArrayList<>();
public KubernetesClusterAddWorker(KubernetesCluster kubernetesCluster, KubernetesClusterManagerImpl clusterManager) {
super(kubernetesCluster, clusterManager);
}
public boolean addNodesToCluster(List<Long> nodeIds, boolean mountCksIsoOnVr, boolean manualUpgrade) throws CloudRuntimeException {
try {
init();
addNodeTimeoutTime = System.currentTimeMillis() + KubernetesClusterService.KubernetesClusterAddNodeTimeout.value() * 1000;
Long networkId = kubernetesCluster.getNetworkId();
Network network = networkDao.findById(networkId);
if (Objects.isNull(network)) {
throw new CloudRuntimeException(String.format("Failed to find network with id: %s", networkId));
}
templateDao.findById(kubernetesCluster.getTemplateId());
IpAddress publicIp = null;
try {
publicIp = getPublicIp(network);
} catch (ManagementServerException e) {
throw new CloudRuntimeException(String.format("Failed to retrieve public IP for the network: %s ", network.getName()));
}
attachCksIsoForNodesAdditionToCluster(nodeIds, kubernetesCluster.getId(), mountCksIsoOnVr);
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.AddNodeRequested);
String controlNodeGuestIp = getControlVmPrivateIp();
Ternary<Integer, Long, Long> nodesAddedAndMemory = importNodeToCluster(nodeIds, network, publicIp, controlNodeGuestIp, mountCksIsoOnVr);
int nodesAdded = nodesAddedAndMemory.first();
updateKubernetesCluster(kubernetesCluster.getId(), nodesAddedAndMemory, manualUpgrade);
if (nodeIds.size() != nodesAdded) {
String msg = String.format("Not every node was added to the CKS cluster %s, nodes added: %s out of %s", kubernetesCluster.getUuid(), nodesAdded, nodeIds.size());
logger.info(msg);
detachCksIsoFromNodesAddedToCluster(nodeIds, kubernetesCluster.getId(), mountCksIsoOnVr);
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
ActionEventUtils.onCompletedActionEvent(CallContext.current().getCallingUserId(), CallContext.current().getCallingAccountId(),
EventVO.LEVEL_ERROR, KubernetesClusterEventTypes.EVENT_KUBERNETES_CLUSTER_NODES_ADD,
msg, kubernetesCluster.getId(), ApiCommandResourceType.KubernetesCluster.toString(), 0);
return false;
}
Pair<String, Integer> publicIpSshPort = getKubernetesClusterServerIpSshPort(null);
KubernetesClusterUtil.validateKubernetesClusterReadyNodesCount(kubernetesCluster, publicIpSshPort.first(), publicIpSshPort.second(),
getControlNodeLoginUser(), sshKeyFile, addNodeTimeoutTime, 15000);
detachCksIsoFromNodesAddedToCluster(nodeIds, kubernetesCluster.getId(), mountCksIsoOnVr);
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.OperationSucceeded);
String description = String.format("Successfully added %s nodes to Kubernetes Cluster %s", nodesAdded, kubernetesCluster.getUuid());
ActionEventUtils.onCompletedActionEvent(CallContext.current().getCallingUserId(), CallContext.current().getCallingAccountId(),
EventVO.LEVEL_INFO, KubernetesClusterEventTypes.EVENT_KUBERNETES_CLUSTER_NODES_ADD,
description, kubernetesCluster.getId(), ApiCommandResourceType.KubernetesCluster.toString(), 0);
return true;
} catch (Exception e) {
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
throw new CloudRuntimeException(e);
}
}
private void detachCksIsoFromNodesAddedToCluster(List<Long> nodeIds, long kubernetesClusterId, boolean mountCksIsoOnVr) {
if (mountCksIsoOnVr) {
detachIsoOnVirtualRouter(kubernetesClusterId);
} else {
logger.info("Detaching CKS ISO from the nodes");
List<UserVm> vms = nodeIds.stream().map(nodeId -> userVmDao.findById(nodeId)).collect(Collectors.toList());
detachIsoKubernetesVMs(vms);
}
}
public void detachIsoOnVirtualRouter(Long kubernetesClusterId) {
KubernetesClusterVO kubernetesCluster = kubernetesClusterDao.findById(kubernetesClusterId);
Long virtualRouterId = getVirtualRouterNicOnKubernetesClusterNetwork(kubernetesCluster).getInstanceId();
long isoId = kubernetesSupportedVersionDao.findById(kubernetesCluster.getKubernetesVersionId()).getIsoId();
try {
networkService.handleCksIsoOnNetworkVirtualRouter(virtualRouterId, false);
} catch (ResourceUnavailableException e) {
String err = String.format("Error trying to handle ISO %s on virtual router %s", isoId, virtualRouterId);
logger.error(err, e);
throw new CloudRuntimeException(err, e);
}
try {
templateService.detachIso(virtualRouterId, isoId, true, true);
} catch (CloudRuntimeException e) {
String err = String.format("Error trying to detach ISO %s from virtual router %s", isoId, virtualRouterId);
logger.error(err, e);
}
}
public void attachCksIsoForNodesAdditionToCluster(List<Long> nodeIds, Long kubernetesClusterId, boolean mountCksIsoOnVr) {
if (mountCksIsoOnVr) {
attachAndServeIsoOnVirtualRouter(kubernetesClusterId);
} else {
logger.info("Attaching CKS ISO to the nodes");
List<UserVm> vms = nodeIds.stream().map(nodeId -> userVmDao.findById(nodeId)).collect(Collectors.toList());
attachIsoKubernetesVMs(vms);
}
}
public void attachAndServeIsoOnVirtualRouter(Long kubernetesClusterId) {
KubernetesClusterVO kubernetesCluster = kubernetesClusterDao.findById(kubernetesClusterId);
Long virtualRouterId = getVirtualRouterNicOnKubernetesClusterNetwork(kubernetesCluster).getInstanceId();
long isoId = kubernetesSupportedVersionDao.findById(kubernetesCluster.getKubernetesVersionId()).getIsoId();
try {
templateService.attachIso(isoId, virtualRouterId, true, true);
} catch (CloudRuntimeException e) {
String err = String.format("Error trying to attach ISO %s to virtual router %s", isoId, virtualRouterId);
logger.error(err, e);
throw new CloudRuntimeException(err, e);
}
try {
networkService.handleCksIsoOnNetworkVirtualRouter(virtualRouterId, true);
} catch (ResourceUnavailableException e) {
String err = String.format("Error trying to handle ISO %s on virtual router %s", isoId, virtualRouterId);
logger.error(err, e);
throw new CloudRuntimeException(err, e);
}
}
private Ternary<Integer, Long, Long> importNodeToCluster(List<Long> nodeIds, Network network, IpAddress publicIp,
String controlNodeGuestIp, boolean mountCksIsoOnVr) {
int nodeIndex = 0;
Long additionalMemory = 0L;
Long additionalCores = 0L;
for (Long nodeId : nodeIds) {
UserVmVO vm = userVmDao.findById(nodeId);
String k8sControlNodeConfig = null;
try {
k8sControlNodeConfig = getKubernetesNodeConfig(controlNodeGuestIp, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()), mountCksIsoOnVr);
} catch (IOException e) {
logAndThrow(Level.ERROR, "Failed to read Kubernetes control node configuration file", e);
}
if (Objects.isNull(k8sControlNodeConfig)) {
logAndThrow(Level.ERROR, "Error generating worker node configuration");
}
String base64UserData = Base64.encodeBase64String(k8sControlNodeConfig.getBytes(com.cloud.utils.StringUtils.getPreferredCharset()));
Pair<Boolean, Integer> result = validateAndSetupNode(network, publicIp, owner, nodeId, nodeIndex, base64UserData);
if (Boolean.TRUE.equals(result.first())) {
ServiceOfferingVO offeringVO = serviceOfferingDao.findById(vm.getId(), vm.getServiceOfferingId());
additionalMemory += offeringVO.getRamSize();
additionalCores += offeringVO.getCpu();
String msg = String.format("VM %s added as a node on the Kubernetes Cluster %s", vm.getUuid(), kubernetesCluster.getUuid());
ActionEventUtils.onCompletedActionEvent(CallContext.current().getCallingUserId(), CallContext.current().getCallingAccountId(),
EventVO.LEVEL_INFO, KubernetesClusterEventTypes.EVENT_KUBERNETES_CLUSTER_NODES_ADD,
msg, vm.getId(), ApiCommandResourceType.VirtualMachine.toString(), 0);
}
if (Boolean.FALSE.equals(result.first())) {
logger.error(String.format("Failed to add node %s [%s] to Kubernetes cluster : %s", vm.getName(), vm.getUuid(), kubernetesCluster.getName()));
}
if (System.currentTimeMillis() > addNodeTimeoutTime) {
logger.error(String.format("Failed to add node %s to Kubernetes cluster : %s", nodeId, kubernetesCluster.getName()));
}
nodeIndex = result.second();
}
return new Ternary<>(nodeIndex, additionalMemory, additionalCores);
}
private Pair<Boolean, Integer> validateAndSetupNode(Network network, IpAddress publicIp, Account account,
Long nodeId, int nodeIndex, String base64UserData) {
int startSshPortNumber = KubernetesClusterActionWorker.CLUSTER_NODES_DEFAULT_START_SSH_PORT + (int) kubernetesCluster.getTotalNodeCount() - kubernetesCluster.getEtcdNodeCount().intValue();
int sshStartPort = startSshPortNumber + nodeIndex;
try {
if (Objects.isNull(network.getVpcId())) {
provisionFirewallRules(publicIp, owner, sshStartPort, sshStartPort);
}
provisionPublicIpPortForwardingRule(publicIp, network, account, nodeId, sshStartPort, DEFAULT_SSH_PORT);
boolean isCompatible = validateNodeCompatibility(publicIp, nodeId, sshStartPort);
if (!isCompatible) {
revertNetworkRules(network, nodeId, sshStartPort);
return new Pair<>(false, nodeIndex);
}
userVmManager.updateVirtualMachine(nodeId, null, null, null, null,
null, null, base64UserData, null, null, null,
BaseCmd.HTTPMethod.POST, null, null, null, null, null);
RebootVMCmd rebootVMCmd = new RebootVMCmd();
Field idField = rebootVMCmd.getClass().getDeclaredField("id");
idField.setAccessible(true);
idField.set(rebootVMCmd, nodeId);
userVmService.rebootVirtualMachine(rebootVMCmd);
finalNodeIds.add(nodeId);
} catch (ResourceUnavailableException | NetworkRuleConflictException | NoSuchFieldException |
InsufficientCapacityException | IllegalAccessException e) {
logger.error(String.format("Failed to activate API port forwarding rules for the Kubernetes cluster : %s", kubernetesCluster.getName()));
// remove added Firewall and PF rules
revertNetworkRules(network, nodeId, sshStartPort);
return new Pair<>( false, nodeIndex);
} catch (Exception e) {
String errMsg = String.format("Unexpected exception while trying to add the external node %s to the Kubernetes cluster %s: %s",
nodeId, kubernetesCluster.getName(), e.getMessage());
logger.error(errMsg, e);
revertNetworkRules(network, nodeId, sshStartPort);
throw new CloudRuntimeException(e);
}
return new Pair<>(true, ++nodeIndex);
}
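As a reviewer's note, the SSH port arithmetic in validateAndSetupNode can be isolated for clarity. This is a hypothetical standalone sketch, assuming a base value of 2222 for CLUSTER_NODES_DEFAULT_START_SSH_PORT (class and method names here are illustrative, not part of the patch); it mirrors how etcd nodes are excluded from the port count:

```java
public class NodeSshPortDemo {
    // Assumed base value; the real constant lives in KubernetesClusterActionWorker.
    static final int CLUSTER_NODES_DEFAULT_START_SSH_PORT = 2222;

    // New external nodes take port-forwarding slots after the existing
    // non-etcd nodes, so etcd nodes are subtracted from the total count.
    static int sshPortForNewNode(long totalNodeCount, long etcdNodeCount, int nodeIndex) {
        int startSshPortNumber = CLUSTER_NODES_DEFAULT_START_SSH_PORT
                + (int) totalNodeCount - (int) etcdNodeCount;
        return startSshPortNumber + nodeIndex;
    }

    public static void main(String[] args) {
        // A cluster with 5 nodes total, 2 of them etcd: the first added node maps to 2225
        System.out.println(sshPortForNewNode(5, 2, 0));
    }
}
```

Each successfully added node advances nodeIndex, so consecutive external nodes land on consecutive public ports.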
private void updateKubernetesCluster(long clusterId, Ternary<Integer, Long, Long> additionalNodesDetails, boolean manualUpgrade) {
int additionalNodeCount = additionalNodesDetails.first();
KubernetesClusterVO kubernetesClusterVO = kubernetesClusterDao.findById(clusterId);
kubernetesClusterVO.setNodeCount(kubernetesClusterVO.getNodeCount() + additionalNodeCount);
kubernetesClusterVO.setMemory(kubernetesClusterVO.getMemory() + additionalNodesDetails.second());
kubernetesClusterVO.setCores(kubernetesClusterVO.getCores() + additionalNodesDetails.third());
kubernetesClusterDao.update(clusterId, kubernetesClusterVO);
kubernetesCluster = kubernetesClusterVO;
finalNodeIds.forEach(id -> addKubernetesClusterVm(clusterId, id, false, true, false, manualUpgrade));
}
private boolean validateNodeCompatibility(IpAddress publicIp, long nodeId, int nodeSshPort) throws CloudRuntimeException {
File pkFile = getManagementServerSshPublicKeyFile();
try {
File validateNodeScriptFile = retrieveScriptFile(validateNodeScript);
// give the node time to come up before copying the validation script (assumed settle delay)
Thread.sleep(15 * 1000);
copyScriptFile(publicIp.getAddress().addr(), nodeSshPort, validateNodeScriptFile, validateNodeScript);
String command = String.format("%s%s", scriptPath, validateNodeScript);
Pair<Boolean, String> result = SshHelper.sshExecute(publicIp.getAddress().addr(), nodeSshPort, getControlNodeLoginUser(),
pkFile, null, command, 10000, 10000, 10 * 60 * 1000);
if (Boolean.FALSE.equals(result.first())) {
logger.error(String.format("Node with ID: %s cannot be added as a worker node as it does not have " +
"the following dependencies: %s ", nodeId, result.second()));
return false;
}
} catch (Exception e) {
logger.error(String.format("Failed to validate node with ID: %s", nodeId), e);
return false;
}
UserVmVO userVm = userVmDao.findById(nodeId);
cleanupCloudInitSemFolder(userVm, publicIp, pkFile, nodeSshPort);
return true;
}
private void cleanupCloudInitSemFolder(UserVm userVm, IpAddress publicIp, File pkFile, int nodeSshPort) {
try {
String command = String.format("sudo rm -rf /var/lib/cloud/instances/%s/sem/*", userVm.getUuid());
Pair<Boolean, String> result = SshHelper.sshExecute(publicIp.getAddress().addr(), nodeSshPort, getControlNodeLoginUser(),
pkFile, null, command, 10000, 10000, 10 * 60 * 1000);
if (Boolean.FALSE.equals(result.first())) {
logger.error(String.format("Failed to clean up previously applied userdata on node %s; this may hamper the addition of the node to the cluster", userVm.getName()));
}
} catch (Exception e) {
logger.error(String.format("Failed to clean up previously applied userdata on node %s; this may hamper the addition of the node to the cluster", userVm.getName()), e);
}
}
private void revertNetworkRules(Network network, long vmId, int port) {
logger.debug(String.format("Reverting network rules for VM ID %s on network %s", vmId, network.getName()));
FirewallRuleVO ruleVO = firewallRulesDao.findByNetworkIdAndPorts(network.getId(), port, port);
if (Objects.isNull(network.getVpcId())) {
logger.debug(String.format("Removing firewall rule %s", ruleVO.getId()));
firewallService.revokeIngressFirewallRule(ruleVO.getId(), true);
}
List<PortForwardingRuleVO> pfRules = portForwardingRulesDao.listByVm(vmId);
for (PortForwardingRuleVO pfRule : pfRules) {
logger.debug(String.format("Removing port forwarding rule %s", pfRule.getId()));
rulesService.revokePortForwardingRule(pfRule.getId(), true);
}
}
}


@@ -23,6 +23,10 @@ import java.util.stream.Collectors;
import javax.inject.Inject;
import com.cloud.bgp.BGPService;
import com.cloud.dc.ASNumberVO;
import com.cloud.dc.DataCenter;
import com.cloud.dc.dao.ASNumberDao;
import org.apache.cloudstack.annotation.AnnotationService;
import org.apache.cloudstack.annotation.dao.AnnotationDao;
import org.apache.cloudstack.api.ApiCommandResourceType;
@@ -63,6 +67,10 @@ public class KubernetesClusterDestroyWorker extends KubernetesClusterResourceMod
protected AccountManager accountManager;
@Inject
private AnnotationDao annotationDao;
@Inject
private ASNumberDao asNumberDao;
@Inject
private BGPService bgpService;
private List<KubernetesClusterVmMapVO> clusterVMs;
@@ -131,6 +139,7 @@ public class KubernetesClusterDestroyWorker extends KubernetesClusterResourceMod
Account owner = accountManager.getAccount(network.getAccountId());
User callerUser = accountManager.getActiveUser(CallContext.current().getCallingUserId());
ReservationContext context = new ReservationContextImpl(null, null, callerUser, owner);
releaseASNumber(kubernetesCluster.getZoneId(), kubernetesCluster.getNetworkId());
boolean networkDestroyed = networkMgr.destroyNetwork(kubernetesCluster.getNetworkId(), context, true);
if (!networkDestroyed) {
String msg = String.format("Failed to destroy network: %s as part of Kubernetes cluster: %s cleanup", network, kubernetesCluster);
@@ -143,6 +152,15 @@ public class KubernetesClusterDestroyWorker extends KubernetesClusterResourceMod
}
}
private void releaseASNumber(Long zoneId, long networkId) {
DataCenter zone = dataCenterDao.findById(zoneId);
ASNumberVO asNumber = asNumberDao.findByZoneAndNetworkId(zone.getId(), networkId);
if (asNumber != null) {
logger.debug(String.format("Releasing AS number %s from network %s", asNumber.getAsNumber(), networkId));
bgpService.releaseASNumber(zone.getId(), asNumber.getAsNumber(), true);
}
}
protected void deleteKubernetesClusterIsolatedNetworkRules(Network network, List<Long> removedVmIds) throws ManagementServerException {
IpAddress publicIp = getNetworkSourceNatIp(network);
if (publicIp == null) {
@@ -157,7 +175,7 @@ public class KubernetesClusterDestroyWorker extends KubernetesClusterResourceMod
if (firewallRule == null) {
logMessage(Level.WARN, "Firewall rule for API access can't be removed", null);
}
firewallRule = removeSshFirewallRule(publicIp);
firewallRule = removeSshFirewallRule(publicIp, network.getId());
if (firewallRule == null) {
logMessage(Level.WARN, "Firewall rule for SSH access can't be removed", null);
}
@@ -256,6 +274,12 @@ public class KubernetesClusterDestroyWorker extends KubernetesClusterResourceMod
}
if (cleanupNetwork) { // if network has additional VM, cannot proceed with cluster destroy
NetworkVO network = networkDao.findById(kubernetesCluster.getNetworkId());
List<KubernetesClusterVmMapVO> externalNodes = clusterVMs.stream().filter(KubernetesClusterVmMapVO::isExternalNode).collect(Collectors.toList());
if (!externalNodes.isEmpty()) {
String errMsg = String.format("Failed to delete kubernetes cluster %s as there are %s external node(s) present. Please remove the external node(s) from the cluster (and network) or delete them before deleting the cluster.", kubernetesCluster.getName(), externalNodes.size());
logger.error(errMsg);
throw new CloudRuntimeException(errMsg);
}
if (network != null) {
List<VMInstanceVO> networkVMs = vmInstanceDao.listNonRemovedVmsByTypeAndNetwork(network.getId(), VirtualMachine.Type.User);
if (networkVMs.size() > clusterVMs.size()) {


@@ -0,0 +1,183 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.kubernetes.cluster.actionworkers;
import com.cloud.event.ActionEventUtils;
import com.cloud.event.EventVO;
import com.cloud.exception.ManagementServerException;
import com.cloud.kubernetes.cluster.KubernetesCluster;
import com.cloud.kubernetes.cluster.KubernetesClusterEventTypes;
import com.cloud.kubernetes.cluster.KubernetesClusterManagerImpl;
import com.cloud.kubernetes.cluster.KubernetesClusterService;
import com.cloud.kubernetes.cluster.KubernetesClusterVO;
import com.cloud.network.IpAddress;
import com.cloud.network.Network;
import com.cloud.network.dao.FirewallRulesDao;
import com.cloud.network.rules.FirewallRuleVO;
import com.cloud.network.rules.PortForwardingRuleVO;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.utils.Pair;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.ssh.SshHelper;
import com.cloud.vm.UserVmVO;
import org.apache.cloudstack.api.ApiCommandResourceType;
import org.apache.cloudstack.context.CallContext;
import javax.inject.Inject;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Objects;
import java.util.Optional;
public class KubernetesClusterRemoveWorker extends KubernetesClusterActionWorker {
@Inject
private FirewallRulesDao firewallRulesDao;
private long removeNodeTimeoutTime;
public KubernetesClusterRemoveWorker(KubernetesCluster kubernetesCluster, KubernetesClusterManagerImpl clusterManager) {
super(kubernetesCluster, clusterManager);
}
public boolean removeNodesFromCluster(List<Long> nodeIds) {
init();
removeNodeTimeoutTime = System.currentTimeMillis() + KubernetesClusterService.KubernetesClusterRemoveNodeTimeout.value() * 1000;
Long networkId = kubernetesCluster.getNetworkId();
Network network = networkDao.findById(networkId);
if (Objects.isNull(network)) {
throw new CloudRuntimeException(String.format("Failed to find network with id: %s", networkId));
}
IpAddress publicIp = null;
try {
publicIp = getPublicIp(network);
} catch (ManagementServerException e) {
throw new CloudRuntimeException(String.format("Failed to retrieve public IP for the network: %s ", network.getName()));
}
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.RemoveNodeRequested);
boolean result = removeNodesFromCluster(nodeIds, network, publicIp);
if (!result) {
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
} else {
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.OperationSucceeded);
}
String description = result ?
String.format("Successfully removed %s nodes from the Kubernetes Cluster %s", nodeIds.size(), kubernetesCluster.getUuid()) :
String.format("Failed to remove one or more of the %s nodes from the Kubernetes Cluster %s", nodeIds.size(), kubernetesCluster.getUuid());
ActionEventUtils.onCompletedActionEvent(CallContext.current().getCallingUserId(), CallContext.current().getCallingAccountId(),
result ? EventVO.LEVEL_INFO : EventVO.LEVEL_ERROR, KubernetesClusterEventTypes.EVENT_KUBERNETES_CLUSTER_NODES_REMOVE,
description, kubernetesCluster.getId(), ApiCommandResourceType.KubernetesCluster.toString(), 0);
return result;
}
private boolean removeNodesFromCluster(List<Long> nodeIds, Network network, IpAddress publicIp) {
boolean result = true;
List<Long> removedNodeIds = new ArrayList<>();
long removedMemory = 0L;
long removedCores = 0L;
for (Long nodeId : nodeIds) {
UserVmVO vm = userVmDao.findById(nodeId);
if (vm == null) {
logger.debug(String.format("Couldn't find a VM with ID %s, skipping removal from Kubernetes cluster", nodeId));
continue;
}
try {
removeNodeVmFromCluster(nodeId, vm.getDisplayName().toLowerCase(Locale.ROOT), publicIp.getAddress().addr());
result &= removeNodePortForwardingRules(nodeId, network, vm);
if (System.currentTimeMillis() > removeNodeTimeoutTime) {
logger.error(String.format("Removal of node %s from Kubernetes cluster %s timed out", vm.getName(), kubernetesCluster.getName()));
result = false;
continue;
}
ServiceOfferingVO offeringVO = serviceOfferingDao.findById(vm.getId(), vm.getServiceOfferingId());
removedNodeIds.add(nodeId);
removedMemory += offeringVO.getRamSize();
removedCores += offeringVO.getCpu();
String description = String.format("Successfully removed the node %s from Kubernetes cluster %s", vm.getUuid(), kubernetesCluster.getUuid());
logger.info(description);
ActionEventUtils.onCompletedActionEvent(CallContext.current().getCallingUserId(), CallContext.current().getCallingAccountId(),
EventVO.LEVEL_INFO, KubernetesClusterEventTypes.EVENT_KUBERNETES_CLUSTER_NODES_REMOVE,
description, vm.getId(), ApiCommandResourceType.VirtualMachine.toString(), 0);
} catch (Exception e) {
String err = String.format("Error trying to remove node %s from Kubernetes Cluster %s: %s", vm.getUuid(), kubernetesCluster.getUuid(), e.getMessage());
logger.error(err, e);
result = false;
}
}
updateKubernetesCluster(kubernetesCluster.getId(), removedNodeIds, removedMemory, removedCores);
return result;
}
protected boolean removeNodePortForwardingRules(Long nodeId, Network network, UserVmVO vm) {
List<PortForwardingRuleVO> pfRules = portForwardingRulesDao.listByVm(nodeId);
boolean result = true;
for (PortForwardingRuleVO pfRule : pfRules) {
try {
result &= rulesService.revokePortForwardingRule(pfRule.getId(), true);
if (Objects.isNull(network.getVpcId())) {
FirewallRuleVO ruleVO = firewallRulesDao.findByNetworkIdAndPorts(network.getId(), pfRule.getSourcePortStart(), pfRule.getSourcePortEnd());
result &= firewallService.revokeIngressFirewallRule(ruleVO.getId(), true);
}
} catch (Exception e) {
String err = String.format("Failed to cleanup network rules for node %s, due to: %s", vm.getName(), e.getMessage());
logger.error(err, e);
}
}
return result;
}
private void removeNodeVmFromCluster(Long nodeId, String nodeName, String publicIp) throws Exception {
File removeNodeScriptFile = retrieveScriptFile(removeNodeFromClusterScript);
copyScriptFile(publicIp, CLUSTER_NODES_DEFAULT_START_SSH_PORT, removeNodeScriptFile, removeNodeFromClusterScript);
File pkFile = getManagementServerSshPublicKeyFile();
String command = String.format("%s%s %s %s %s", scriptPath, removeNodeFromClusterScript, nodeName, "control", "remove");
Pair<Boolean, String> result = SshHelper.sshExecute(publicIp, CLUSTER_NODES_DEFAULT_START_SSH_PORT, getControlNodeLoginUser(),
pkFile, null, command, 10000, 10000, 10 * 60 * 1000);
if (Boolean.FALSE.equals(result.first())) {
logger.error(String.format("Failed to gracefully drain worker node %s from cluster %s", nodeName, kubernetesCluster.getName()));
}
List<PortForwardingRuleVO> nodePfRules = portForwardingRulesDao.listByVm(nodeId);
Optional<PortForwardingRuleVO> nodeSshPort = nodePfRules.stream().filter(rule -> rule.getDestinationPortStart() == DEFAULT_SSH_PORT
&& rule.getVirtualMachineId() == nodeId && rule.getSourcePortStart() >= CLUSTER_NODES_DEFAULT_START_SSH_PORT).findFirst();
if (nodeSshPort.isPresent()) {
copyScriptFile(publicIp, nodeSshPort.get().getSourcePortStart(), removeNodeScriptFile, removeNodeFromClusterScript);
command = String.format("sudo %s%s %s %s %s", scriptPath, removeNodeFromClusterScript, nodeName, "worker", "remove");
result = SshHelper.sshExecute(publicIp, nodeSshPort.get().getSourcePortStart(), getControlNodeLoginUser(),
pkFile, null, command, 10000, 10000, 10 * 60 * 1000);
if (Boolean.FALSE.equals(result.first())) {
logger.error(String.format("Failed to reset node: %s from cluster %s ", nodeName, kubernetesCluster.getName()));
}
command = String.format("%s%s %s %s %s", scriptPath, removeNodeFromClusterScript, nodeName, "control", "delete");
result = SshHelper.sshExecute(publicIp, CLUSTER_NODES_DEFAULT_START_SSH_PORT, getControlNodeLoginUser(),
pkFile, null, command, 10000, 10000, 10 * 60 * 1000);
if (Boolean.FALSE.equals(result.first())) {
logger.error(String.format("Failed to gracefully delete node %s from cluster %s", nodeName, kubernetesCluster.getName()));
}
}
}
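The rule lookup above (finding the public port that forwards to the node's port 22) can be sketched with a simplified stand-in for PortForwardingRuleVO; the record, its field names, and the base port value are assumptions for illustration:

```java
import java.util.List;
import java.util.Optional;

public class SshRuleLookupDemo {
    // Simplified stand-in for PortForwardingRuleVO.
    record PfRule(long vmId, int sourcePort, int destPort) {}

    static final int DEFAULT_SSH_PORT = 22;
    static final int CLUSTER_NODES_DEFAULT_START_SSH_PORT = 2222; // assumed base

    // Pick the first rule that forwards a high public port to the node's port 22,
    // mirroring the stream filter in removeNodeVmFromCluster.
    static Optional<PfRule> findNodeSshRule(List<PfRule> rules, long nodeId) {
        return rules.stream()
                .filter(r -> r.destPort() == DEFAULT_SSH_PORT
                        && r.vmId() == nodeId
                        && r.sourcePort() >= CLUSTER_NODES_DEFAULT_START_SSH_PORT)
                .findFirst();
    }

    public static void main(String[] args) {
        List<PfRule> rules = List.of(
                new PfRule(7, 6443, 6443),   // API server rule, not SSH
                new PfRule(7, 2225, 22));    // the node's SSH rule
        System.out.println(findNodeSshRule(rules, 7).map(PfRule::sourcePort).orElse(-1));
    }
}
```

If no matching rule exists (e.g. the node was never port-mapped), the Optional is empty and the per-node worker cleanup above is skipped.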
private void updateKubernetesCluster(long clusterId, List<Long> nodesRemoved, long deallocatedRam, long deallocatedCores) {
KubernetesClusterVO kubernetesClusterVO = kubernetesClusterDao.findById(clusterId);
kubernetesClusterVO.setNodeCount(kubernetesClusterVO.getNodeCount() - nodesRemoved.size());
kubernetesClusterVO.setMemory(kubernetesClusterVO.getMemory() - deallocatedRam);
kubernetesClusterVO.setCores(kubernetesClusterVO.getCores() - deallocatedCores);
kubernetesClusterDao.update(clusterId, kubernetesClusterVO);
kubernetesClusterVmMapDao.removeByClusterIdAndVmIdsIn(clusterId, nodesRemoved);
}
}


@@ -17,23 +17,37 @@
package com.cloud.kubernetes.cluster.actionworkers;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.CONTROL;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.ETCD;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.WORKER;
import static com.cloud.utils.NumbersUtil.toHumanReadableSize;
import static com.cloud.utils.db.Transaction.execute;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
import javax.inject.Inject;
import com.cloud.deploy.DataCenterDeployment;
import com.cloud.deploy.DeploymentPlan;
import com.cloud.dc.DedicatedResourceVO;
import com.cloud.dc.dao.DedicatedResourceDao;
import com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType;
import com.cloud.network.rules.RulesService;
import com.cloud.network.rules.dao.PortForwardingRulesDao;
import com.cloud.network.rules.FirewallManager;
import com.cloud.offering.NetworkOffering;
import com.cloud.offerings.dao.NetworkOfferingDao;
import org.apache.cloudstack.api.ApiConstants;
import com.cloud.utils.db.Transaction;
import com.cloud.utils.net.Ip;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.command.user.firewall.CreateFirewallRuleCmd;
import org.apache.cloudstack.api.command.user.network.CreateNetworkACLCmd;
@@ -64,23 +78,18 @@ import com.cloud.host.HostVO;
import com.cloud.host.dao.HostDao;
import com.cloud.hypervisor.Hypervisor;
import com.cloud.kubernetes.cluster.KubernetesCluster;
import com.cloud.kubernetes.cluster.KubernetesClusterDetailsVO;
import com.cloud.kubernetes.cluster.KubernetesClusterManagerImpl;
import com.cloud.kubernetes.cluster.KubernetesClusterVO;
import com.cloud.kubernetes.cluster.utils.KubernetesClusterUtil;
import com.cloud.network.IpAddress;
import com.cloud.network.Network;
import com.cloud.network.dao.FirewallRulesDao;
import com.cloud.network.dao.LoadBalancerDao;
import com.cloud.network.dao.LoadBalancerVO;
import com.cloud.network.firewall.FirewallService;
import com.cloud.network.lb.LoadBalancingRulesService;
import com.cloud.network.rules.FirewallRule;
import com.cloud.network.rules.FirewallRuleVO;
import com.cloud.network.rules.LoadBalancer;
import com.cloud.network.rules.PortForwardingRuleVO;
import com.cloud.network.rules.RulesService;
import com.cloud.network.rules.dao.PortForwardingRulesDao;
import com.cloud.network.vpc.NetworkACL;
import com.cloud.network.vpc.NetworkACLItem;
import com.cloud.network.vpc.NetworkACLItemDao;
@@ -94,15 +103,12 @@ import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.LaunchPermissionDao;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.user.Account;
import com.cloud.user.SSHKeyPairVO;
import com.cloud.uservm.UserVm;
import com.cloud.utils.Pair;
import com.cloud.utils.component.ComponentContext;
import com.cloud.utils.db.Transaction;
import com.cloud.utils.db.TransactionCallback;
import com.cloud.utils.db.TransactionCallbackWithException;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.net.Ip;
import com.cloud.utils.net.NetUtils;
import com.cloud.utils.ssh.SshHelper;
import com.cloud.vm.Nic;
@@ -127,8 +133,6 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
@Inject
protected FirewallRulesDao firewallRulesDao;
@Inject
protected FirewallService firewallService;
@Inject
protected NetworkACLService networkACLService;
@Inject
protected NetworkACLItemDao networkACLItemDao;
@@ -143,6 +147,8 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
@Inject
protected ResourceManager resourceManager;
@Inject
protected DedicatedResourceDao dedicatedResourceDao;
@Inject
protected LoadBalancerDao loadBalancerDao;
@Inject
protected VMInstanceDao vmInstanceDao;
@@ -168,81 +174,37 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
kubernetesClusterNodeNamePrefix = getKubernetesClusterNodeNamePrefix();
}
private String getKubernetesNodeConfig(final String joinIp, final boolean ejectIso) throws IOException {
String k8sNodeConfig = readResourceFile("/conf/k8s-node.yml");
final String sshPubKey = "{{ k8s.ssh.pub.key }}";
final String joinIpKey = "{{ k8s_control_node.join_ip }}";
final String clusterTokenKey = "{{ k8s_control_node.cluster.token }}";
final String ejectIsoKey = "{{ k8s.eject.iso }}";
String pubKey = "- \"" + configurationDao.getValue("ssh.publickey") + "\"";
String sshKeyPair = kubernetesCluster.getKeyPair();
if (StringUtils.isNotEmpty(sshKeyPair)) {
SSHKeyPairVO sshkp = sshKeyPairDao.findByName(owner.getAccountId(), owner.getDomainId(), sshKeyPair);
if (sshkp != null) {
pubKey += "\n - \"" + sshkp.getPublicKey() + "\"";
}
}
k8sNodeConfig = k8sNodeConfig.replace(sshPubKey, pubKey);
k8sNodeConfig = k8sNodeConfig.replace(joinIpKey, joinIp);
k8sNodeConfig = k8sNodeConfig.replace(clusterTokenKey, KubernetesClusterUtil.generateClusterToken(kubernetesCluster));
k8sNodeConfig = k8sNodeConfig.replace(ejectIsoKey, String.valueOf(ejectIso));
k8sNodeConfig = updateKubeConfigWithRegistryDetails(k8sNodeConfig);
return k8sNodeConfig;
}
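The placeholder substitution in getKubernetesNodeConfig is plain string replacement over the cloud-init template. A minimal sketch, using the same placeholder keys but a made-up template snippet (the real template is /conf/k8s-node.yml, and the class name here is illustrative):

```java
public class NodeConfigTemplateDemo {
    // Same placeholder keys as getKubernetesNodeConfig; the template text is illustrative.
    static String render(String template, String joinIp, String clusterToken, boolean ejectIso) {
        return template
                .replace("{{ k8s_control_node.join_ip }}", joinIp)
                .replace("{{ k8s_control_node.cluster.token }}", clusterToken)
                .replace("{{ k8s.eject.iso }}", String.valueOf(ejectIso));
    }

    public static void main(String[] args) {
        String tpl = "join_ip: {{ k8s_control_node.join_ip }}\n"
                + "token: {{ k8s_control_node.cluster.token }}\n"
                + "eject_iso: {{ k8s.eject.iso }}";
        System.out.println(render(tpl, "10.1.1.10", "abcdef.0123456789abcdef", true));
    }
}
```

The rendered config is then base64-encoded before being handed to the node as userdata, as importNodeToCluster does.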
protected String updateKubeConfigWithRegistryDetails(String k8sConfig) {
/* generate the /etc/containerd/config.toml file on the nodes only if the
 * Kubernetes cluster is created to use a private Docker registry */
String registryUsername = null;
String registryPassword = null;
String registryUrl = null;
List<KubernetesClusterDetailsVO> details = kubernetesClusterDetailsDao.listDetails(kubernetesCluster.getId());
for (KubernetesClusterDetailsVO detail : details) {
if (detail.getName().equals(ApiConstants.DOCKER_REGISTRY_USER_NAME)) {
registryUsername = detail.getValue();
}
if (detail.getName().equals(ApiConstants.DOCKER_REGISTRY_PASSWORD)) {
registryPassword = detail.getValue();
}
if (detail.getName().equals(ApiConstants.DOCKER_REGISTRY_URL)) {
registryUrl = detail.getValue();
}
}
if (StringUtils.isNoneEmpty(registryUsername, registryPassword, registryUrl)) {
// Update runcmd in the cloud-init configuration to run a script that updates the containerd config with provided registry details
String runCmd = "- bash -x /opt/bin/setup-containerd";
String registryEp = registryUrl.split("://")[1];
k8sConfig = k8sConfig.replace("- containerd config default > /etc/containerd/config.toml", runCmd);
final String registryUrlKey = "{{registry.url}}";
final String registryUrlEpKey = "{{registry.url.endpoint}}";
final String registryAuthKey = "{{registry.token}}";
final String registryUname = "{{registry.username}}";
final String registryPsswd = "{{registry.password}}";
final String usernamePasswordKey = registryUsername + ":" + registryPassword;
String base64Auth = Base64.encodeBase64String(usernamePasswordKey.getBytes(com.cloud.utils.StringUtils.getPreferredCharset()));
k8sConfig = k8sConfig.replace(registryUrlKey, registryUrl);
k8sConfig = k8sConfig.replace(registryUrlEpKey, registryEp);
k8sConfig = k8sConfig.replace(registryUname, registryUsername);
k8sConfig = k8sConfig.replace(registryPsswd, registryPassword);
k8sConfig = k8sConfig.replace(registryAuthKey, base64Auth);
}
return k8sConfig;
}
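The registry credential handling above boils down to two transformations: base64 of "username:password" for the containerd auth token, and stripping the URL scheme for the endpoint. A standalone sketch (the class and method names are illustrative, not part of the patch):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RegistryAuthDemo {
    // containerd auth token: base64 of "username:password", matching the
    // {{registry.token}} substitution above.
    static String registryToken(String username, String password) {
        String pair = username + ":" + password;
        return Base64.getEncoder().encodeToString(pair.getBytes(StandardCharsets.UTF_8));
    }

    // Endpoint with the scheme stripped, as in registryUrl.split("://")[1].
    static String registryEndpoint(String registryUrl) {
        return registryUrl.split("://")[1];
    }

    public static void main(String[] args) {
        System.out.println(registryToken("admin", "secret"));
        System.out.println(registryEndpoint("https://registry.example.com:5000"));
    }
}
```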
protected DeployDestination plan(final long nodesCount, final DataCenter zone, final ServiceOffering offering) throws InsufficientServerCapacityException {
protected DeployDestination plan(final long nodesCount, final DataCenter zone, final ServiceOffering offering,
final Long domainId, final Long accountId, final Hypervisor.HypervisorType hypervisorType) throws InsufficientServerCapacityException {
final int cpu_requested = offering.getCpu() * offering.getSpeed();
final long ram_requested = offering.getRamSize() * 1024L * 1024L;
List<HostVO> hosts = resourceManager.listAllHostsInOneZoneByType(Host.Type.Routing, zone.getId());
boolean useDedicatedHosts = false;
List<HostVO> hosts = new ArrayList<>();
Long group = getExplicitAffinityGroup(domainId, accountId);
if (Objects.nonNull(group)) {
List<DedicatedResourceVO> dedicatedHosts = new ArrayList<>();
if (Objects.nonNull(accountId)) {
dedicatedHosts = dedicatedResourceDao.listByAccountId(accountId);
} else if (Objects.nonNull(domainId)) {
dedicatedHosts = dedicatedResourceDao.listByDomainId(domainId);
}
for (DedicatedResourceVO dedicatedHost : dedicatedHosts) {
hosts.add(hostDao.findById(dedicatedHost.getHostId()));
useDedicatedHosts = true;
}
}
if (hosts.isEmpty()) {
hosts = resourceManager.listAllHostsInOneZoneByType(Host.Type.Routing, zone.getId());
}
if (hypervisorType != null) {
hosts = hosts.stream().filter(x -> x.getHypervisorType() == hypervisorType).collect(Collectors.toList());
}
final Map<String, Pair<HostVO, Integer>> hosts_with_resevered_capacity = new ConcurrentHashMap<String, Pair<HostVO, Integer>>();
for (HostVO h : hosts) {
hosts_with_resevered_capacity.put(h.getUuid(), new Pair<HostVO, Integer>(h, 0));
}
boolean suitable_host_found = false;
HostVO suitableHost = null;
for (int i = 1; i <= nodesCount; i++) {
suitable_host_found = false;
for (Map.Entry<String, Pair<HostVO, Integer>> hostEntry : hosts_with_resevered_capacity.entrySet()) {
@ -269,6 +231,7 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
logger.debug("Found host {} with enough capacity: CPU={} RAM={}", h.getName(), cpu_requested * reserved, toHumanReadableSize(ram_requested * reserved));
hostEntry.setValue(new Pair<HostVO, Integer>(h, reserved));
suitable_host_found = true;
suitableHost = h;
break;
}
}
@ -284,6 +247,9 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
if (logger.isInfoEnabled()) {
logger.info("Suitable hosts found in datacenter: {}, creating deployment destination", zone);
}
if (useDedicatedHosts) {
return new DeployDestination(zone, null, null, suitableHost);
}
return new DeployDestination(zone, null, null, null);
}
String msg = String.format("Cannot find enough capacity for Kubernetes cluster(requested cpu=%d memory=%s) with offering: %s and hypervisor: %s",
@ -293,13 +259,35 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
throw new InsufficientServerCapacityException(msg, DataCenter.class, zone.getId());
}
/**
* Plan Kubernetes Cluster Deployment
* @return a map of DeployDestination per node type
*/
protected Map<String, DeployDestination> planKubernetesCluster(Long domainId, Long accountId, Hypervisor.HypervisorType hypervisorType) throws InsufficientServerCapacityException {
Map<String, DeployDestination> destinationMap = new HashMap<>();
DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId());
if (logger.isDebugEnabled()) {
logger.debug("Checking deployment destination for Kubernetes cluster: {} in zone: {}", kubernetesCluster, zone);
}
long controlNodeCount = kubernetesCluster.getControlNodeCount();
long clusterSize = kubernetesCluster.getNodeCount();
long etcdNodes = kubernetesCluster.getEtcdNodeCount();
Map<String, Long> nodeTypeCount = Map.of(WORKER.name(), clusterSize,
CONTROL.name(), controlNodeCount, ETCD.name(), etcdNodes);
for (KubernetesClusterNodeType nodeType : CLUSTER_NODES_TYPES_LIST) {
Long nodes = nodeTypeCount.getOrDefault(nodeType.name(), 0L);
if (nodes == null || nodes == 0) {
continue;
}
ServiceOffering nodeOffering = getServiceOfferingForNodeTypeOnCluster(nodeType, kubernetesCluster);
if (logger.isDebugEnabled()) {
logger.debug("Checking deployment destination for {} nodes on Kubernetes cluster : {} in zone : {}", nodeType.name(), kubernetesCluster.getName(), zone.getName());
}
DeployDestination planForNodeType = plan(nodes, zone, nodeOffering, domainId, accountId, hypervisorType);
destinationMap.put(nodeType.name(), planForNodeType);
}
return destinationMap;
}
protected void resizeNodeVolume(final UserVm vm) throws ManagementServerException {
@ -322,14 +310,33 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
}
}
protected void startKubernetesVM(final UserVm vm, final Long domainId, final Long accountId, KubernetesClusterNodeType nodeType) throws ManagementServerException {
CallContext vmContext = null;
if (!ApiCommandResourceType.VirtualMachine.equals(CallContext.current().getEventResourceType())) {
vmContext = CallContext.register(CallContext.current(), ApiCommandResourceType.VirtualMachine);
vmContext.setEventResourceId(vm.getId());
}
DeploymentPlan plan = null;
if (Objects.nonNull(domainId) && !listDedicatedHostsInDomain(domainId).isEmpty()) {
DeployDestination dest = null;
try {
Map<String, DeployDestination> destinationMap = planKubernetesCluster(domainId, accountId, vm.getHypervisorType());
dest = destinationMap.get(nodeType.name());
} catch (InsufficientCapacityException e) {
logTransitStateAndThrow(Level.ERROR, String.format("Provisioning the cluster failed due to insufficient capacity in the Kubernetes cluster: %s", kubernetesCluster.getUuid()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
}
if (dest != null) {
plan = new DataCenterDeployment(
Objects.nonNull(dest.getDataCenter()) ? dest.getDataCenter().getId() : 0,
Objects.nonNull(dest.getPod()) ? dest.getPod().getId() : null,
Objects.nonNull(dest.getCluster()) ? dest.getCluster().getId() : null,
Objects.nonNull(dest.getHost()) ? dest.getHost().getId() : null,
null,
null);
}
}
try {
userVmManager.startVirtualMachine(vm, plan);
} catch (OperationTimedoutException | ResourceUnavailableException | InsufficientCapacityException ex) {
throw new ManagementServerException(String.format("Failed to start VM in the Kubernetes cluster : %s", kubernetesCluster.getName()), ex);
} finally {
@ -344,19 +351,20 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
}
}
protected List<UserVm> provisionKubernetesClusterNodeVms(final long nodeCount, final int offset,
final String controlIpAddress, final Long domainId, final Long accountId) throws ManagementServerException,
ResourceUnavailableException, InsufficientCapacityException {
List<UserVm> nodes = new ArrayList<>();
for (int i = offset + 1; i <= nodeCount; i++) {
CallContext vmContext = CallContext.register(CallContext.current(), ApiCommandResourceType.VirtualMachine);
try {
UserVm vm = createKubernetesNode(controlIpAddress, domainId, accountId);
vmContext.setEventResourceId(vm.getId());
addKubernetesClusterVm(kubernetesCluster.getId(), vm.getId(), false, false, false, false);
if (kubernetesCluster.getNodeRootDiskSize() > 0) {
resizeNodeVolume(vm);
}
startKubernetesVM(vm, domainId, accountId, WORKER);
vm = userVmDao.findById(vm.getId());
if (vm == null) {
throw new ManagementServerException(String.format("Failed to provision worker VM for Kubernetes cluster : %s", kubernetesCluster.getName()));
@ -370,16 +378,16 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
return nodes;
}
protected List<UserVm> provisionKubernetesClusterNodeVms(final long nodeCount, final String controlIpAddress, final Long domainId, final Long accountId) throws ManagementServerException,
ResourceUnavailableException, InsufficientCapacityException {
return provisionKubernetesClusterNodeVms(nodeCount, 0, controlIpAddress, domainId, accountId);
}
protected UserVm createKubernetesNode(String joinIp, Long domainId, Long accountId) throws ManagementServerException,
ResourceUnavailableException, InsufficientCapacityException {
UserVm nodeVm = null;
DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId());
ServiceOffering serviceOffering = getServiceOfferingForNodeTypeOnCluster(WORKER, kubernetesCluster);
List<Long> networkIds = new ArrayList<Long>();
networkIds.add(kubernetesCluster.getNetworkId());
Account owner = accountDao.findById(kubernetesCluster.getAccountId());
@ -396,7 +404,7 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
String hostName = String.format("%s-node-%s", kubernetesClusterNodeNamePrefix, suffix);
String k8sNodeConfig = null;
try {
k8sNodeConfig = getKubernetesNodeConfig(joinIp, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()), false);
} catch (IOException e) {
logAndThrow(Level.ERROR, "Failed to read Kubernetes node configuration file", e);
}
@ -406,18 +414,21 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
if (StringUtils.isNotBlank(kubernetesCluster.getKeyPair())) {
keypairs.add(kubernetesCluster.getKeyPair());
}
Long affinityGroupId = getExplicitAffinityGroup(domainId, accountId);
if (kubernetesCluster.getSecurityGroupId() != null && networkModel.checkSecurityGroupSupportForNetwork(owner, zone, networkIds, List.of(kubernetesCluster.getSecurityGroupId()))) {
List<Long> securityGroupIds = new ArrayList<>();
securityGroupIds.add(kubernetesCluster.getSecurityGroupId());
nodeVm = userVmService.createAdvancedSecurityGroupVirtualMachine(zone, serviceOffering, workerNodeTemplate, networkIds, securityGroupIds, owner,
hostName, hostName, null, null, null, Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST,base64UserData, null, null, keypairs,
null, addrs, null, null, Objects.nonNull(affinityGroupId) ?
Collections.singletonList(affinityGroupId) : null, customParameterMap, null, null, null,
null, true, null, UserVmManager.CKS_NODE);
} else {
nodeVm = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, workerNodeTemplate, networkIds, owner,
hostName, hostName, null, null, null,
Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, null, null, keypairs,
null, addrs, null, null, Objects.nonNull(affinityGroupId) ?
Collections.singletonList(affinityGroupId) : null, customParameterMap, null, null, null, null, true, UserVmManager.CKS_NODE, null);
}
if (logger.isInfoEnabled()) {
logger.info("Created node VM : {}, {} in the Kubernetes cluster : {}", hostName, nodeVm, kubernetesCluster.getName());
@ -455,7 +466,7 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
final long domainId = account.getDomainId();
Nic vmNic = networkModel.getNicInNetwork(vmId, networkId);
final Ip vmIp = new Ip(vmNic.getIPv4Address());
PortForwardingRuleVO pfRule = Transaction.execute((TransactionCallbackWithException<PortForwardingRuleVO, NetworkRuleConflictException>) status -> {
PortForwardingRuleVO newRule =
new PortForwardingRuleVO(null, publicIpId,
sourcePort, sourcePort,
@ -487,11 +498,18 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
* @throws NetworkRuleConflictException
*/
protected void provisionSshPortForwardingRules(IpAddress publicIp, Network network, Account account,
List<Long> clusterVMIds, Map<Long, Integer> vmIdPortMap) throws ResourceUnavailableException,
NetworkRuleConflictException {
if (!CollectionUtils.isEmpty(clusterVMIds)) {
int defaultNodesCount = clusterVMIds.size() - vmIdPortMap.size();
int sourcePort = CLUSTER_NODES_DEFAULT_START_SSH_PORT;
for (int i = 0; i < defaultNodesCount; ++i) {
sourcePort = CLUSTER_NODES_DEFAULT_START_SSH_PORT + i;
provisionPublicIpPortForwardingRule(publicIp, network, account, clusterVMIds.get(i), sourcePort, DEFAULT_SSH_PORT);
}
for (int i = defaultNodesCount; i < clusterVMIds.size(); ++i) {
sourcePort += 1;
provisionPublicIpPortForwardingRule(publicIp, network, account, clusterVMIds.get(i), sourcePort, DEFAULT_SSH_PORT);
}
}
}
@ -513,15 +531,15 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
return rule;
}
protected FirewallRule removeSshFirewallRule(final IpAddress publicIp, final long networkId) {
FirewallRule rule = null;
List<FirewallRuleVO> firewallRules = firewallRulesDao.listByIpPurposeProtocolAndNotRevoked(publicIp.getId(), FirewallRule.Purpose.Firewall, NetUtils.TCP_PROTO);
for (FirewallRuleVO firewallRule : firewallRules) {
PortForwardingRuleVO pfRule = portForwardingRulesDao.findByNetworkAndPorts(networkId, firewallRule.getSourcePortStart(), firewallRule.getSourcePortEnd());
if (firewallRule.getSourcePortStart() == CLUSTER_NODES_DEFAULT_START_SSH_PORT || (Objects.nonNull(pfRule) && pfRule.getDestinationPortStart() == DEFAULT_SSH_PORT) ) {
rule = firewallRule;
firewallService.revokeIngressFwRule(firewallRule.getId(), true);
logger.debug("The SSH firewall rule {} with the id {} was revoked", firewallRule.getName(), firewallRule.getId());
break;
}
}
@ -540,7 +558,7 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
logger.trace("Marking PF rule {} with Revoke state", pfRule);
pfRule.setState(FirewallRule.State.Revoke);
revokedRules.add(pfRule);
logger.debug("The Port forwarding rule {} with the id {} was removed.", pfRule.getName(), pfRule.getId());
break;
}
}
@ -633,22 +651,11 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
lbService.assignToLoadBalancer(lb.getId(), null, vmIdIpMap, false);
}
protected Map<Long, Integer> createFirewallRules(IpAddress publicIp, List<Long> clusterVMIds, boolean apiRule) throws ManagementServerException {
// Firewall rule for SSH access on each node VM
Map<Long, Integer> vmIdPortMap = addFirewallRulesForNodes(publicIp, clusterVMIds.size());
if (!apiRule) {
return vmIdPortMap;
}
// Firewall rule for API access for control node VMs
CallContext.register(CallContext.current(), null);
@ -662,6 +669,7 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
} finally {
CallContext.unregister();
}
return vmIdPortMap;
}
/**
@ -676,11 +684,11 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
* @throws ManagementServerException
*/
protected void setupKubernetesClusterIsolatedNetworkRules(IpAddress publicIp, Network network, List<Long> clusterVMIds, boolean apiRule) throws ManagementServerException {
Map<Long, Integer> vmIdPortMap = createFirewallRules(publicIp, clusterVMIds, apiRule);
// Port forwarding rule for SSH access on each node VM
try {
provisionSshPortForwardingRules(publicIp, network, owner, clusterVMIds, vmIdPortMap);
} catch (ResourceUnavailableException | NetworkRuleConflictException e) {
throw new ManagementServerException(String.format("Failed to activate SSH port forwarding rules for the Kubernetes cluster : %s", kubernetesCluster.getName()), e);
}
@ -772,7 +780,8 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
// Add port forwarding rule for SSH access on each node VM
try {
Map<Long, Integer> vmIdPortMap = getVmPortMap();
provisionSshPortForwardingRules(publicIp, network, owner, clusterVMIds, vmIdPortMap);
} catch (ResourceUnavailableException | NetworkRuleConflictException e) {
throw new ManagementServerException(String.format("Failed to activate SSH port forwarding rules for the Kubernetes cluster : %s", kubernetesCluster.getName()), e);
}
@ -793,8 +802,27 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
return prefix;
}
protected String getEtcdNodeNameForCluster() {
String prefix = kubernetesCluster.getName();
if (!NetUtils.verifyDomainNameLabel(prefix, true)) {
prefix = prefix.replaceAll("[^a-zA-Z0-9-]", "");
if (prefix.isEmpty()) {
prefix = kubernetesCluster.getUuid();
}
}
prefix = prefix + "-etcd";
if (prefix.length() > 40) {
prefix = prefix.substring(0, 40);
}
return prefix;
}
protected KubernetesClusterVO updateKubernetesClusterEntry(final Long cores, final Long memory, final Long size,
final Long serviceOfferingId, final Boolean autoscaleEnabled,
final Long minSize, final Long maxSize,
final KubernetesClusterNodeType nodeType,
final boolean updateNodeOffering,
final boolean updateClusterOffering) {
return Transaction.execute((TransactionCallback<KubernetesClusterVO>) status -> {
KubernetesClusterVO updatedCluster = kubernetesClusterDao.createForUpdate(kubernetesCluster.getId());
@ -807,7 +835,16 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
if (size != null) {
updatedCluster.setNodeCount(size);
}
if (updateNodeOffering && serviceOfferingId != null && nodeType != null) {
if (WORKER == nodeType) {
updatedCluster.setWorkerNodeServiceOfferingId(serviceOfferingId);
} else if (CONTROL == nodeType) {
updatedCluster.setControlNodeServiceOfferingId(serviceOfferingId);
} else if (ETCD == nodeType) {
updatedCluster.setEtcdNodeServiceOfferingId(serviceOfferingId);
}
}
if (updateClusterOffering && serviceOfferingId != null) {
updatedCluster.setServiceOfferingId(serviceOfferingId);
}
if (autoscaleEnabled != null) {
@ -815,12 +852,14 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
}
updatedCluster.setMinSize(minSize);
updatedCluster.setMaxSize(maxSize);
kubernetesClusterDao.persist(updatedCluster);
// Prevent null attributes set by the createForUpdate method
return kubernetesClusterDao.findById(kubernetesCluster.getId());
});
}
private KubernetesClusterVO updateKubernetesClusterEntry(final Boolean autoscaleEnabled, final Long minSize, final Long maxSize) throws CloudRuntimeException {
KubernetesClusterVO kubernetesClusterVO = updateKubernetesClusterEntry(null, null, null, null, autoscaleEnabled, minSize, maxSize, null, false, false);
if (kubernetesClusterVO == null) {
logTransitStateAndThrow(Level.ERROR, String.format("Scaling Kubernetes cluster %s failed, unable to update Kubernetes cluster",
kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
@ -883,4 +922,8 @@ public class KubernetesClusterResourceModifierActionWorker extends KubernetesClu
updateLoginUserDetails(null);
}
}
protected List<DedicatedResourceVO> listDedicatedHostsInDomain(Long domainId) {
return dedicatedResourceDao.listByDomainId(domainId);
}
}


@ -19,17 +19,23 @@ package com.cloud.kubernetes.cluster.actionworkers;
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import javax.inject.Inject;
import com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.storage.VMTemplateVO;
import org.apache.cloudstack.api.ApiCommandResourceType;
import org.apache.cloudstack.api.InternalIdentity;
import org.apache.cloudstack.context.CallContext;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.ObjectUtils;
import org.apache.commons.lang3.StringUtils;
import com.cloud.dc.DataCenter;
@ -60,12 +66,17 @@ import com.cloud.vm.VirtualMachine;
import com.cloud.vm.dao.VMInstanceDao;
import org.apache.logging.log4j.Level;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.CONTROL;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.DEFAULT;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.ETCD;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.WORKER;
public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModifierActionWorker {
@Inject
protected VMInstanceDao vmInstanceDao;
private Map<String, ServiceOffering> serviceOfferingNodeTypeMap;
private Long clusterSize;
private List<Long> nodeIds;
private KubernetesCluster.State originalState;
@ -75,8 +86,12 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
private Boolean isAutoscalingEnabled;
private long scaleTimeoutTime;
protected KubernetesClusterScaleWorker(final KubernetesCluster kubernetesCluster, final KubernetesClusterManagerImpl clusterManager) {
super(kubernetesCluster, clusterManager);
}
public KubernetesClusterScaleWorker(final KubernetesCluster kubernetesCluster,
final Map<String, ServiceOffering> serviceOfferingNodeTypeMap,
final Long clusterSize,
final List<Long> nodeIds,
final Boolean isAutoscalingEnabled,
@ -84,7 +99,7 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
final Long maxSize,
final KubernetesClusterManagerImpl clusterManager) {
super(kubernetesCluster, clusterManager);
this.serviceOfferingNodeTypeMap = serviceOfferingNodeTypeMap;
this.nodeIds = nodeIds;
this.isAutoscalingEnabled = isAutoscalingEnabled;
this.minSize = minSize;
@ -123,7 +138,7 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
}
// Remove existing SSH firewall rules
FirewallRule firewallRule = removeSshFirewallRule(publicIp, network.getId());
if (firewallRule == null) {
throw new ManagementServerException("Firewall rule for node SSH access can't be provisioned");
}
@ -148,7 +163,8 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
}
// Add port forwarding rule for SSH access on each node VM
try {
Map<Long, Integer> vmIdPortMap = getVmPortMap();
provisionSshPortForwardingRules(publicIp, network, owner, clusterVMIds, vmIdPortMap);
} catch (ResourceUnavailableException | NetworkRuleConflictException e) {
throw new ManagementServerException(String.format("Failed to activate SSH port forwarding rules for the Kubernetes cluster : %s", kubernetesCluster.getName()), e);
}
@ -176,15 +192,19 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
scaleKubernetesClusterIsolatedNetworkRules(clusterVMIds);
}
private KubernetesClusterVO updateKubernetesClusterEntryForNodeType(final Long newWorkerSize, final KubernetesClusterNodeType nodeType,
final ServiceOffering newServiceOffering,
final boolean updateNodeOffering, boolean updateClusterOffering) throws CloudRuntimeException {
final ServiceOffering serviceOffering = newServiceOffering == null ?
serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId()) : newServiceOffering;
final Long serviceOfferingId = newServiceOffering == null ? null : serviceOffering.getId();
Pair<Long, Long> clusterCountAndCapacity = calculateNewClusterCountAndCapacity(newWorkerSize, nodeType, serviceOffering);
long cores = clusterCountAndCapacity.first();
long memory = clusterCountAndCapacity.second();
KubernetesClusterVO kubernetesClusterVO = updateKubernetesClusterEntry(cores, memory, newWorkerSize, serviceOfferingId,
kubernetesCluster.getAutoscalingEnabled(), kubernetesCluster.getMinSize(), kubernetesCluster.getMaxSize(), nodeType, updateNodeOffering, updateClusterOffering);
if (kubernetesClusterVO == null) {
logTransitStateAndThrow(Level.ERROR, String.format("Scaling Kubernetes cluster %s failed, unable to update Kubernetes cluster",
kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
@ -192,6 +212,58 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
return kubernetesClusterVO;
}
protected Pair<Long, Long> calculateNewClusterCountAndCapacity(Long newWorkerSize, KubernetesClusterNodeType nodeType, ServiceOffering serviceOffering) {
long cores;
long memory;
long totalClusterSize = newWorkerSize == null ? kubernetesCluster.getTotalNodeCount() : (newWorkerSize + kubernetesCluster.getControlNodeCount() + kubernetesCluster.getEtcdNodeCount());
if (nodeType == DEFAULT) {
cores = serviceOffering.getCpu() * totalClusterSize;
memory = serviceOffering.getRamSize() * totalClusterSize;
} else {
long nodeCount = getNodeCountForType(nodeType, kubernetesCluster);
Long existingOfferingId = getExistingOfferingIdForNodeType(nodeType, kubernetesCluster);
if (existingOfferingId == null) {
existingOfferingId = serviceOffering.getId();
}
ServiceOfferingVO previousOffering = serviceOfferingDao.findById(existingOfferingId);
Pair<Long, Long> previousNodesCapacity = calculateNodesCapacity(previousOffering, nodeCount);
if (WORKER == nodeType) {
nodeCount = newWorkerSize == null ? kubernetesCluster.getNodeCount() : newWorkerSize;
}
Pair<Long, Long> newNodesCapacity = calculateNodesCapacity(serviceOffering, nodeCount);
Pair<Long, Long> newClusterCapacity = calculateClusterNewCapacity(kubernetesCluster, previousNodesCapacity, newNodesCapacity);
cores = newClusterCapacity.first();
memory = newClusterCapacity.second();
}
return new Pair<>(cores, memory);
}
private long getNodeCountForType(KubernetesClusterNodeType nodeType, KubernetesCluster kubernetesCluster) {
if (WORKER == nodeType) {
return kubernetesCluster.getNodeCount();
} else if (CONTROL == nodeType) {
return kubernetesCluster.getControlNodeCount();
} else if (ETCD == nodeType) {
return kubernetesCluster.getEtcdNodeCount();
}
return kubernetesCluster.getTotalNodeCount();
}
/**
 * Recompute total cluster capacity by subtracting the previous capacity of the
 * scaled node type and adding its new capacity; other node types are unchanged.
 */
protected Pair<Long, Long> calculateClusterNewCapacity(KubernetesCluster kubernetesCluster,
Pair<Long, Long> previousNodeTypeCapacity,
Pair<Long, Long> newNodeTypeCapacity) {
long previousCores = kubernetesCluster.getCores();
long previousMemory = kubernetesCluster.getMemory();
long newCores = previousCores - previousNodeTypeCapacity.first() + newNodeTypeCapacity.first();
long newMemory = previousMemory - previousNodeTypeCapacity.second() + newNodeTypeCapacity.second();
return new Pair<>(newCores, newMemory);
}
protected Pair<Long, Long> calculateNodesCapacity(ServiceOffering offering, long nodeCount) {
return new Pair<>(offering.getCpu() * nodeCount, offering.getRamSize() * nodeCount);
}
private boolean removeKubernetesClusterNode(final String ipAddress, final int port, final UserVm userVm, final int retries, final int waitDuration) {
File pkFile = getManagementServerSshPublicKeyFile();
int retryCounter = 0;
@ -266,11 +338,12 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
}
if (newVmRequiredCount > 0) {
final DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId());
VMTemplateVO clusterTemplate = templateDao.findById(kubernetesCluster.getTemplateId());
try {
if (originalState.equals(KubernetesCluster.State.Running)) {
plan(newVmRequiredCount, zone, clusterServiceOffering, kubernetesCluster.getDomainId(), kubernetesCluster.getAccountId(), clusterTemplate.getHypervisorType());
} else {
plan(kubernetesCluster.getTotalNodeCount() + newVmRequiredCount, zone, clusterServiceOffering, kubernetesCluster.getDomainId(), kubernetesCluster.getAccountId(), clusterTemplate.getHypervisorType());
}
} catch (InsufficientCapacityException e) {
logTransitStateToFailedIfNeededAndThrow(Level.WARN, String.format("Scaling failed for Kubernetes cluster : %s in zone : %s, insufficient capacity", kubernetesCluster.getName(), zone.getName()));
@ -282,17 +355,18 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
}
}
private void scaleKubernetesClusterOffering(KubernetesClusterNodeType nodeType, ServiceOffering serviceOffering,
boolean updateNodeOffering, boolean updateClusterOffering) throws CloudRuntimeException {
validateKubernetesClusterScaleOfferingParameters();
if (!kubernetesCluster.getState().equals(KubernetesCluster.State.Scaling)) {
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.ScaleUpRequested);
}
if (KubernetesCluster.State.Created.equals(originalState)) {
kubernetesCluster = updateKubernetesClusterEntryForNodeType(null, nodeType, serviceOffering, updateNodeOffering, updateClusterOffering);
return;
}
final long size = kubernetesCluster.getTotalNodeCount();
List<KubernetesClusterVmMapVO> vmList = kubernetesClusterVmMapDao.listByClusterId(kubernetesCluster.getId());
final long size = getNodeCountForType(nodeType, kubernetesCluster);
List<KubernetesClusterVmMapVO> vmList = kubernetesClusterVmMapDao.listByClusterIdAndVmType(kubernetesCluster.getId(), nodeType);
final long tobeScaledVMCount = Math.min(vmList.size(), size);
for (long i = 0; i < tobeScaledVMCount; i++) {
KubernetesClusterVmMapVO vmMapVO = vmList.get((int) i);
@@ -310,7 +384,7 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
logTransitStateAndThrow(Level.WARN, String.format("Scaling Kubernetes cluster : %s failed, scaling action timed out", kubernetesCluster.getName()),kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
}
}
kubernetesCluster = updateKubernetesClusterEntry(null, serviceOffering);
kubernetesCluster = updateKubernetesClusterEntryForNodeType(null, nodeType, serviceOffering, updateNodeOffering, updateClusterOffering);
}
private void removeNodesFromCluster(List<KubernetesClusterVmMapVO> vmMaps) throws CloudRuntimeException {
@@ -346,7 +420,10 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
// Scale network rules to update firewall rule
try {
List<Long> clusterVMIds = getKubernetesClusterVMMaps().stream().map(KubernetesClusterVmMapVO::getVmId).collect(Collectors.toList());
List<Long> clusterVMIds = getKubernetesClusterVMMaps()
.stream()
.filter(x -> !x.isEtcdNode())
.map(KubernetesClusterVmMapVO::getVmId).collect(Collectors.toList());
scaleKubernetesClusterNetworkRules(clusterVMIds);
} catch (ManagementServerException e) {
logTransitStateAndThrow(Level.ERROR, String.format("Scaling failed for Kubernetes " +
@@ -361,10 +438,13 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
}
List<KubernetesClusterVmMapVO> vmList;
if (this.nodeIds != null) {
vmList = getKubernetesClusterVMMapsForNodes(this.nodeIds);
vmList = getKubernetesClusterVMMapsForNodes(this.nodeIds).stream().filter(vm -> !vm.isExternalNode()).collect(Collectors.toList());
} else {
vmList = getKubernetesClusterVMMaps();
vmList = vmList.subList((int) (kubernetesCluster.getControlNodeCount() + clusterSize), vmList.size());
vmList = vmList.stream()
.filter(vm -> !vm.isExternalNode() && !vm.isControlNode() && !vm.isEtcdNode())
.collect(Collectors.toList());
vmList = vmList.subList((int) (kubernetesCluster.getControlNodeCount() + clusterSize - 1), vmList.size());
}
Collections.reverse(vmList);
removeNodesFromCluster(vmList);
@@ -375,16 +455,20 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.ScaleUpRequested);
}
List<UserVm> clusterVMs = new ArrayList<>();
LaunchPermissionVO launchPermission = new LaunchPermissionVO(clusterTemplate.getId(), owner.getId());
launchPermissionDao.persist(launchPermission);
if (isDefaultTemplateUsed()) {
LaunchPermissionVO launchPermission = new LaunchPermissionVO(clusterTemplate.getId(), owner.getId());
launchPermissionDao.persist(launchPermission);
}
try {
clusterVMs = provisionKubernetesClusterNodeVms((int)(newVmCount + kubernetesCluster.getNodeCount()), (int)kubernetesCluster.getNodeCount(), publicIpAddress);
clusterVMs = provisionKubernetesClusterNodeVms((int)(newVmCount + kubernetesCluster.getNodeCount()), (int)kubernetesCluster.getNodeCount(), publicIpAddress, kubernetesCluster.getDomainId(), kubernetesCluster.getAccountId());
updateLoginUserDetails(clusterVMs.stream().map(InternalIdentity::getId).collect(Collectors.toList()));
} catch (CloudRuntimeException | ManagementServerException | ResourceUnavailableException | InsufficientCapacityException e) {
logTransitStateToFailedIfNeededAndThrow(Level.ERROR, String.format("Scaling failed for Kubernetes cluster : %s, unable to provision node VM in the cluster", kubernetesCluster.getName()), e);
}
try {
List<Long> clusterVMIds = getKubernetesClusterVMMaps().stream().map(KubernetesClusterVmMapVO::getVmId).collect(Collectors.toList());
List<Long> externalNodeIds = getKubernetesClusterVMMaps().stream().filter(KubernetesClusterVmMapVO::isExternalNode).map(KubernetesClusterVmMapVO::getVmId).collect(Collectors.toList());
List<Long> clusterVMIds = getKubernetesClusterVMMaps().stream().filter(vm -> !vm.isExternalNode() && !vm.isEtcdNode()).map(KubernetesClusterVmMapVO::getVmId).collect(Collectors.toList());
clusterVMIds.addAll(externalNodeIds);
scaleKubernetesClusterNetworkRules(clusterVMIds);
} catch (ManagementServerException e) {
logTransitStateToFailedIfNeededAndThrow(Level.ERROR, String.format("Scaling failed for Kubernetes cluster : %s, unable to update network rules", kubernetesCluster.getName()), e);
@@ -401,7 +485,7 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
}
}
private void scaleKubernetesClusterSize() throws CloudRuntimeException {
private void scaleKubernetesClusterSize(KubernetesClusterNodeType nodeType) throws CloudRuntimeException {
validateKubernetesClusterScaleSizeParameters();
final long originalClusterSize = kubernetesCluster.getNodeCount();
final long newVmRequiredCount = clusterSize - originalClusterSize;
@@ -409,7 +493,7 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
if (!kubernetesCluster.getState().equals(KubernetesCluster.State.Scaling)) {
stateTransitTo(kubernetesCluster.getId(), newVmRequiredCount > 0 ? KubernetesCluster.Event.ScaleUpRequested : KubernetesCluster.Event.ScaleDownRequested);
}
kubernetesCluster = updateKubernetesClusterEntry(null, serviceOffering);
kubernetesCluster = updateKubernetesClusterEntryForNodeType(null, nodeType, serviceOfferingNodeTypeMap.get(nodeType.name()), false, false);
return;
}
Pair<String, Integer> publicIpSshPort = getKubernetesClusterServerIpSshPort(null);
@@ -423,7 +507,9 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
} else { // upscale, same node count handled above
scaleUpKubernetesClusterSize(newVmRequiredCount);
}
kubernetesCluster = updateKubernetesClusterEntry(clusterSize, null);
boolean updateNodeOffering = serviceOfferingNodeTypeMap.containsKey(nodeType.name());
ServiceOffering nodeOffering = serviceOfferingNodeTypeMap.getOrDefault(nodeType.name(), null);
kubernetesCluster = updateKubernetesClusterEntryForNodeType(clusterSize, nodeType, nodeOffering, updateNodeOffering, false);
}
private boolean isAutoscalingChanged() {
@@ -446,37 +532,99 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
}
scaleTimeoutTime = System.currentTimeMillis() + KubernetesClusterService.KubernetesClusterScaleTimeout.value() * 1000;
final long originalClusterSize = kubernetesCluster.getNodeCount();
final ServiceOffering existingServiceOffering = serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId());
if (existingServiceOffering == null) {
logAndThrow(Level.ERROR, String.format("Scaling Kubernetes cluster %s failed, service offering for the Kubernetes cluster not found!", kubernetesCluster));
boolean scaleClusterDefaultOffering = serviceOfferingNodeTypeMap.containsKey(DEFAULT.name());
if (scaleClusterDefaultOffering) {
final ServiceOffering existingServiceOffering = serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId());
final ServiceOffering existingControlOffering = serviceOfferingDao.findById(kubernetesCluster.getControlNodeServiceOfferingId());
final ServiceOffering existingWorkerOffering = serviceOfferingDao.findById(kubernetesCluster.getWorkerNodeServiceOfferingId());
if (existingServiceOffering == null && ObjectUtils.anyNull(existingControlOffering, existingWorkerOffering)) {
logAndThrow(Level.ERROR, String.format("Scaling Kubernetes cluster : %s failed, service offering for the Kubernetes cluster not found!", kubernetesCluster.getName()));
}
}
final boolean autoscalingChanged = isAutoscalingChanged();
final boolean serviceOfferingScalingNeeded = serviceOffering != null && serviceOffering.getId() != existingServiceOffering.getId();
if (autoscalingChanged) {
boolean autoScaled = autoscaleCluster(this.isAutoscalingEnabled, minSize, maxSize);
if (autoScaled && serviceOfferingScalingNeeded) {
scaleKubernetesClusterOffering();
final boolean autoscalingChanged = isAutoscalingChanged();
ServiceOffering defaultServiceOffering = serviceOfferingNodeTypeMap.getOrDefault(DEFAULT.name(), null);
for (KubernetesClusterNodeType nodeType : Arrays.asList(CONTROL, ETCD, WORKER)) {
boolean isWorkerNodeOrAllNodes = WORKER == nodeType;
final long newVMRequired = (!isWorkerNodeOrAllNodes || clusterSize == null) ? 0 : clusterSize - originalClusterSize;
if (!scaleClusterDefaultOffering && !serviceOfferingNodeTypeMap.containsKey(nodeType.name()) && newVMRequired == 0) {
continue;
}
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.OperationSucceeded);
return autoScaled;
}
final boolean clusterSizeScalingNeeded = clusterSize != null && clusterSize != originalClusterSize;
final long newVMRequired = clusterSize == null ? 0 : clusterSize - originalClusterSize;
if (serviceOfferingScalingNeeded && clusterSizeScalingNeeded) {
if (newVMRequired > 0) {
scaleKubernetesClusterOffering();
scaleKubernetesClusterSize();
} else {
scaleKubernetesClusterSize();
scaleKubernetesClusterOffering();
Long existingNodeTypeOfferingId = getKubernetesClusterNodeTypeOfferingId(kubernetesCluster, nodeType);
boolean clusterHasExistingOfferingForNodeType = existingNodeTypeOfferingId != null;
boolean serviceOfferingScalingNeeded = isServiceOfferingScalingNeededForNodeType(nodeType, serviceOfferingNodeTypeMap, kubernetesCluster);
ServiceOffering serviceOffering = serviceOfferingNodeTypeMap.getOrDefault(nodeType.name(), defaultServiceOffering);
boolean updateNodeOffering = serviceOfferingNodeTypeMap.containsKey(nodeType.name()) ||
scaleClusterDefaultOffering && clusterHasExistingOfferingForNodeType;
boolean updateClusterOffering = isWorkerNodeOrAllNodes && scaleClusterDefaultOffering;
if (isWorkerNodeOrAllNodes && autoscalingChanged) {
boolean autoScaled = autoscaleCluster(this.isAutoscalingEnabled, minSize, maxSize);
if (autoScaled && serviceOfferingScalingNeeded) {
scaleKubernetesClusterOffering(nodeType, serviceOffering, updateNodeOffering, updateClusterOffering);
}
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.OperationSucceeded);
return autoScaled;
}
final boolean clusterSizeScalingNeeded = isWorkerNodeOrAllNodes && clusterSize != null && clusterSize != originalClusterSize;
if (serviceOfferingScalingNeeded && clusterSizeScalingNeeded) {
if (newVMRequired > 0) {
scaleKubernetesClusterOffering(nodeType, serviceOffering, updateNodeOffering, updateClusterOffering);
scaleKubernetesClusterSize(nodeType);
} else {
scaleKubernetesClusterSize(nodeType);
scaleKubernetesClusterOffering(nodeType, serviceOffering, updateNodeOffering, updateClusterOffering);
}
} else if (serviceOfferingScalingNeeded) {
scaleKubernetesClusterOffering(nodeType, serviceOffering, updateNodeOffering, updateClusterOffering);
} else if (clusterSizeScalingNeeded) {
scaleKubernetesClusterSize(nodeType);
}
} else if (serviceOfferingScalingNeeded) {
scaleKubernetesClusterOffering();
} else if (clusterSizeScalingNeeded) {
scaleKubernetesClusterSize();
}
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.OperationSucceeded);
return true;
}
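The branch above encodes an ordering rule: when the offering and the cluster size change together, the offering is applied first on scale-up (so new nodes come up on the new offering), and the size is reduced first on scale-down. A minimal standalone sketch of that rule (method names are illustrative stand-ins for `scaleKubernetesClusterOffering`/`scaleKubernetesClusterSize`):

```java
import java.util.List;

public class ScaleOrdering {
    // Mirrors the upscale/downscale branch: offering first when adding nodes,
    // size first when removing them.
    static List<String> operationOrder(long newVmRequired) {
        return newVmRequired > 0
                ? List.of("scaleOffering", "scaleSize")
                : List.of("scaleSize", "scaleOffering");
    }

    public static void main(String[] args) {
        System.out.println(operationOrder(2));   // growing the cluster
        System.out.println(operationOrder(-1));  // shrinking the cluster
    }
}
```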
private Long getKubernetesClusterNodeTypeOfferingId(KubernetesCluster kubernetesCluster, KubernetesClusterNodeType nodeType) {
if (nodeType == WORKER) {
return kubernetesCluster.getWorkerNodeServiceOfferingId();
} else if (nodeType == ETCD) {
return kubernetesCluster.getEtcdNodeServiceOfferingId();
} else if (nodeType == CONTROL) {
return kubernetesCluster.getControlNodeServiceOfferingId();
}
return null;
}
protected boolean isServiceOfferingScalingNeededForNodeType(KubernetesClusterNodeType nodeType,
Map<String, ServiceOffering> map, KubernetesCluster kubernetesCluster) {
// DEFAULT node type means only the global service offering has been set for the Kubernetes cluster
Long existingOfferingId = map.containsKey(DEFAULT.name()) ?
kubernetesCluster.getServiceOfferingId() :
getExistingOfferingIdForNodeType(nodeType, kubernetesCluster);
if (existingOfferingId == null) {
logAndThrow(Level.ERROR, String.format("The Kubernetes cluster %s does not have a global service offering set", kubernetesCluster.getName()));
}
ServiceOffering existingOffering = serviceOfferingDao.findById(existingOfferingId);
if (existingOffering == null) {
logAndThrow(Level.ERROR, String.format("Cannot find the global service offering with ID %s set on the Kubernetes cluster %s", existingOfferingId, kubernetesCluster.getName()));
}
ServiceOffering newOffering = map.containsKey(DEFAULT.name()) ? map.get(DEFAULT.name()) : map.get(nodeType.name());
return newOffering != null && newOffering.getId() != existingOffering.getId();
}
protected Long getExistingOfferingIdForNodeType(KubernetesClusterNodeType nodeType, KubernetesCluster kubernetesCluster) {
List<KubernetesClusterVmMapVO> clusterVms = kubernetesClusterVmMapDao.listByClusterIdAndVmType(kubernetesCluster.getId(), nodeType);
if (CollectionUtils.isEmpty(clusterVms)) {
return null;
}
KubernetesClusterVmMapVO clusterVm = clusterVms.get(0);
UserVmVO clusterUserVm = userVmDao.findById(clusterVm.getVmId());
if (clusterUserVm == null) {
return null;
}
return clusterUserVm.getServiceOfferingId();
}
}


@@ -24,11 +24,20 @@ import java.net.URL;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.exception.NetworkRuleConflictException;
import com.cloud.exception.PermissionDeniedException;
import com.cloud.kubernetes.cluster.KubernetesServiceHelper;
import com.cloud.network.vpc.NetworkACL;
import com.cloud.storage.VMTemplateVO;
import com.cloud.user.UserDataVO;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.InternalIdentity;
import org.apache.cloudstack.framework.ca.Certificate;
@@ -75,6 +84,10 @@ import com.cloud.vm.VmDetailConstants;
import com.cloud.vm.VmDetailConstants;
import org.apache.logging.log4j.Level;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.CONTROL;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.ETCD;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.WORKER;
public class KubernetesClusterStartWorker extends KubernetesClusterResourceModifierActionWorker {
private KubernetesSupportedVersion kubernetesClusterVersion;
@@ -128,10 +141,10 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
return haSupported;
}
private String getKubernetesControlNodeConfig(final String controlNodeIp, final String serverIp,
final String hostName, final boolean haSupported,
final boolean ejectIso) throws IOException {
String k8sControlNodeConfig = readResourceFile("/conf/k8s-control-node.yml");
private Pair<String, String> getKubernetesControlNodeConfig(final String controlNodeIp, final String serverIp,
final List<Network.IpAddresses> etcdIps, final String hostName, final boolean haSupported,
final boolean ejectIso, final boolean externalCni) throws IOException {
String k8sControlNodeConfig = readK8sConfigFile("/conf/k8s-control-node.yml");
final String apiServerCert = "{{ k8s_control_node.apiserver.crt }}";
final String apiServerKey = "{{ k8s_control_node.apiserver.key }}";
final String caCert = "{{ k8s_control_node.ca.crt }}";
@@ -139,18 +152,33 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
final String clusterToken = "{{ k8s_control_node.cluster.token }}";
final String clusterInitArgsKey = "{{ k8s_control_node.cluster.initargs }}";
final String ejectIsoKey = "{{ k8s.eject.iso }}";
final String installWaitTime = "{{ k8s.install.wait.time }}";
final String installReattemptsCount = "{{ k8s.install.reattempts.count }}";
final String externalEtcdNodes = "{{ etcd.unstacked_etcd }}";
final String etcdEndpointList = "{{ etcd.etcd_endpoint_list }}";
final String k8sServerIp = "{{ k8s_control.server_ip }}";
final String k8sApiPort = "{{ k8s.api_server_port }}";
final String certSans = "{{ k8s_control.server_ips }}";
final String k8sCertificate = "{{ k8s_control.certificate_key }}";
final String externalCniPlugin = "{{ k8s.external.cni.plugin }}";
final List<String> addresses = new ArrayList<>();
addresses.add(controlNodeIp);
if (!serverIp.equals(controlNodeIp)) {
addresses.add(serverIp);
}
boolean externalEtcd = !etcdIps.isEmpty();
final Certificate certificate = caManager.issueCertificate(null, Arrays.asList(hostName, "kubernetes",
"kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local"),
"kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local"),
addresses, 3650, null);
final String tlsClientCert = CertUtils.x509CertificateToPem(certificate.getClientCertificate());
final String tlsPrivateKey = CertUtils.privateKeyToPem(certificate.getPrivateKey());
final String tlsCaCert = CertUtils.x509CertificatesToPem(certificate.getCaCertificates());
final Long waitTime = KubernetesClusterService.KubernetesControlNodeInstallAttemptWait.value();
final Long reattempts = KubernetesClusterService.KubernetesControlNodeInstallReattempts.value();
String endpointList = getEtcdEndpointList(etcdIps);
k8sControlNodeConfig = k8sControlNodeConfig.replace(apiServerCert, tlsClientCert.replace("\n", "\n "));
k8sControlNodeConfig = k8sControlNodeConfig.replace(apiServerKey, tlsPrivateKey.replace("\n", "\n "));
k8sControlNodeConfig = k8sControlNodeConfig.replace(caCert, tlsCaCert.replace("\n", "\n "));
@@ -162,29 +190,39 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
pubKey += "\n - \"" + sshkp.getPublicKey() + "\"";
}
}
k8sControlNodeConfig = k8sControlNodeConfig.replace(installWaitTime, String.valueOf(waitTime));
k8sControlNodeConfig = k8sControlNodeConfig.replace(installReattemptsCount, String.valueOf(reattempts));
k8sControlNodeConfig = k8sControlNodeConfig.replace(sshPubKey, pubKey);
k8sControlNodeConfig = k8sControlNodeConfig.replace(clusterToken, KubernetesClusterUtil.generateClusterToken(kubernetesCluster));
k8sControlNodeConfig = k8sControlNodeConfig.replace(externalEtcdNodes, String.valueOf(externalEtcd));
String initArgs = "";
if (haSupported) {
initArgs = String.format("--control-plane-endpoint %s:%d --upload-certs --certificate-key %s ",
serverIp,
controlNodeIp,
CLUSTER_API_PORT,
KubernetesClusterUtil.generateClusterHACertificateKey(kubernetesCluster));
}
initArgs += String.format("--apiserver-cert-extra-sans=%s", serverIp);
initArgs += String.format("--apiserver-cert-extra-sans=%s", controlNodeIp);
initArgs += String.format(" --kubernetes-version=%s", getKubernetesClusterVersion().getSemanticVersion());
k8sControlNodeConfig = k8sControlNodeConfig.replace(clusterInitArgsKey, initArgs);
k8sControlNodeConfig = k8sControlNodeConfig.replace(ejectIsoKey, String.valueOf(ejectIso));
k8sControlNodeConfig = k8sControlNodeConfig.replace(etcdEndpointList, endpointList);
k8sControlNodeConfig = k8sControlNodeConfig.replace(k8sServerIp, controlNodeIp);
k8sControlNodeConfig = k8sControlNodeConfig.replace(k8sApiPort, String.valueOf(CLUSTER_API_PORT));
k8sControlNodeConfig = k8sControlNodeConfig.replace(certSans, String.format("- %s", serverIp));
k8sControlNodeConfig = k8sControlNodeConfig.replace(k8sCertificate, KubernetesClusterUtil.generateClusterHACertificateKey(kubernetesCluster));
k8sControlNodeConfig = k8sControlNodeConfig.replace(externalCniPlugin, String.valueOf(externalCni));
k8sControlNodeConfig = updateKubeConfigWithRegistryDetails(k8sControlNodeConfig);
return k8sControlNodeConfig;
return new Pair<>(k8sControlNodeConfig, controlNodeIp);
}
private UserVm createKubernetesControlNode(final Network network, String serverIp) throws ManagementServerException,
private Pair<UserVm,String> createKubernetesControlNode(final Network network, String serverIp, List<Network.IpAddresses> etcdIps, Long domainId, Long accountId, Long asNumber) throws ManagementServerException,
ResourceUnavailableException, InsufficientCapacityException {
UserVm controlVm = null;
DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId());
ServiceOffering serviceOffering = serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId());
ServiceOffering serviceOffering = getServiceOfferingForNodeTypeOnCluster(CONTROL, kubernetesCluster);
List<Long> networkIds = new ArrayList<Long>();
networkIds.add(kubernetesCluster.getNetworkId());
Pair<String, Map<Long, Network.IpAddresses>> ipAddresses = getKubernetesControlNodeIpAddresses(zone, network, owner);
@@ -205,45 +243,75 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
String suffix = Long.toHexString(System.currentTimeMillis());
String hostName = String.format("%s-control-%s", kubernetesClusterNodeNamePrefix, suffix);
boolean haSupported = isKubernetesVersionSupportsHA();
String k8sControlNodeConfig = null;
Long userDataId = kubernetesCluster.getCniConfigId();
Pair<String, String> k8sControlNodeConfigAndControlIp = new Pair<>(null, null);
try {
k8sControlNodeConfig = getKubernetesControlNodeConfig(controlNodeIp, serverIp, hostName, haSupported, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()));
k8sControlNodeConfigAndControlIp = getKubernetesControlNodeConfig(controlNodeIp, serverIp, etcdIps, hostName, haSupported, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()), Objects.nonNull(userDataId));
} catch (IOException e) {
logAndThrow(Level.ERROR, "Failed to read Kubernetes control node configuration file", e);
}
String k8sControlNodeConfig = k8sControlNodeConfigAndControlIp.first();
String base64UserData = Base64.encodeBase64String(k8sControlNodeConfig.getBytes(com.cloud.utils.StringUtils.getPreferredCharset()));
if (Objects.nonNull(userDataId)) {
logger.info("concatenating userdata");
UserDataVO cniConfigVo = userDataDao.findById(userDataId);
String cniConfig = new String(Base64.decodeBase64(cniConfigVo.getUserData()));
if (Objects.nonNull(asNumber)) {
cniConfig = substituteASNumber(cniConfig, asNumber);
}
cniConfig = Base64.encodeBase64String(cniConfig.getBytes(com.cloud.utils.StringUtils.getPreferredCharset()));
base64UserData = userDataManager.concatenateUserData(base64UserData, cniConfig, null);
}
List<String> keypairs = new ArrayList<String>();
if (StringUtils.isNotBlank(kubernetesCluster.getKeyPair())) {
keypairs.add(kubernetesCluster.getKeyPair());
}
Long affinityGroupId = getExplicitAffinityGroup(domainId, accountId);
String userDataDetails = kubernetesCluster.getCniConfigDetails();
if (kubernetesCluster.getSecurityGroupId() != null &&
networkModel.checkSecurityGroupSupportForNetwork(owner, zone, networkIds,
List.of(kubernetesCluster.getSecurityGroupId()))) {
List<Long> securityGroupIds = new ArrayList<>();
securityGroupIds.add(kubernetesCluster.getSecurityGroupId());
controlVm = userVmService.createAdvancedSecurityGroupVirtualMachine(zone, serviceOffering, clusterTemplate, networkIds, securityGroupIds, owner,
hostName, hostName, null, null, null, Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST,base64UserData, null, null, keypairs,
requestedIps, addrs, null, null, null, customParameterMap, null, null, null,
controlVm = userVmService.createAdvancedSecurityGroupVirtualMachine(zone, serviceOffering, controlNodeTemplate, networkIds, securityGroupIds, owner,
hostName, hostName, null, null, null, Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST,base64UserData, userDataId, userDataDetails, keypairs,
requestedIps, addrs, null, null, Objects.nonNull(affinityGroupId) ?
Collections.singletonList(affinityGroupId) : null, customParameterMap, null, null, null,
null, true, null, UserVmManager.CKS_NODE);
} else {
controlVm = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, clusterTemplate, networkIds, owner,
controlVm = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, controlNodeTemplate, networkIds, owner,
hostName, hostName, null, null, null,
Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, null, null, keypairs,
requestedIps, addrs, null, null, null, customParameterMap, null, null, null, null, true, UserVmManager.CKS_NODE, null);
Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, userDataId, userDataDetails, keypairs,
requestedIps, addrs, null, null, Objects.nonNull(affinityGroupId) ?
Collections.singletonList(affinityGroupId) : null, customParameterMap, null, null, null, null, true, UserVmManager.CKS_NODE, null);
}
if (logger.isInfoEnabled()) {
logger.info("Created control VM: {}, {} in the Kubernetes cluster: {}", controlVm, hostName, kubernetesCluster);
}
return controlVm;
return new Pair<>(controlVm, k8sControlNodeConfigAndControlIp.second());
}
private String substituteASNumber(String cniConfig, Long asNumber) {
final String asNumberKey = "{{ AS_NUMBER }}";
cniConfig = cniConfig.replace(asNumberKey, String.valueOf(asNumber));
return cniConfig;
}
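The substitution above is a plain token replacement over the CNI config template. A minimal standalone sketch (the `{{ AS_NUMBER }}` token matches the worker's; the sample config snippet is hypothetical):

```java
public class AsNumberSubstitution {
    // Mirrors substituteASNumber: swaps the templated token for the concrete AS number.
    static String substituteASNumber(String cniConfig, Long asNumber) {
        return cniConfig.replace("{{ AS_NUMBER }}", String.valueOf(asNumber));
    }

    public static void main(String[] args) {
        String cni = "bgp:\n  asNumber: {{ AS_NUMBER }}";
        System.out.println(substituteASNumber(cni, 65001L));
    }
}
```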
private String getKubernetesAdditionalControlNodeConfig(final String joinIp, final boolean ejectIso) throws IOException {
String k8sControlNodeConfig = readResourceFile("/conf/k8s-control-node-add.yml");
String k8sControlNodeConfig = readK8sConfigFile("/conf/k8s-control-node-add.yml");
final String joinIpKey = "{{ k8s_control_node.join_ip }}";
final String clusterTokenKey = "{{ k8s_control_node.cluster.token }}";
final String sshPubKey = "{{ k8s.ssh.pub.key }}";
final String clusterHACertificateKey = "{{ k8s_control_node.cluster.ha.certificate.key }}";
final String ejectIsoKey = "{{ k8s.eject.iso }}";
final String installWaitTime = "{{ k8s.install.wait.time }}";
final String installReattemptsCount = "{{ k8s.install.reattempts.count }}";
final Long waitTime = KubernetesClusterService.KubernetesControlNodeInstallAttemptWait.value();
final Long reattempts = KubernetesClusterService.KubernetesControlNodeInstallReattempts.value();
String pubKey = "- \"" + configurationDao.getValue("ssh.publickey") + "\"";
String sshKeyPair = kubernetesCluster.getKeyPair();
@@ -253,6 +321,8 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
pubKey += "\n - \"" + sshkp.getPublicKey() + "\"";
}
}
k8sControlNodeConfig = k8sControlNodeConfig.replace(installWaitTime, String.valueOf(waitTime));
k8sControlNodeConfig = k8sControlNodeConfig.replace(installReattemptsCount, String.valueOf(reattempts));
k8sControlNodeConfig = k8sControlNodeConfig.replace(sshPubKey, pubKey);
k8sControlNodeConfig = k8sControlNodeConfig.replace(joinIpKey, joinIp);
k8sControlNodeConfig = k8sControlNodeConfig.replace(clusterTokenKey, KubernetesClusterUtil.generateClusterToken(kubernetesCluster));
@@ -263,11 +333,84 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
return k8sControlNodeConfig;
}
private UserVm createKubernetesAdditionalControlNode(final String joinIp, final int additionalControlNodeInstance) throws ManagementServerException,
private String getInitialEtcdClusterDetails(List<String> ipAddresses, List<String> hostnames) {
String initialCluster = "%s=http://%s:%s";
StringBuilder clusterInfo = new StringBuilder();
for (int i = 0; i < ipAddresses.size(); i++) {
clusterInfo.append(String.format(initialCluster, hostnames.get(i), ipAddresses.get(i), KubernetesClusterActionWorker.ETCD_NODE_PEER_COMM_PORT));
if (i < ipAddresses.size()-1) {
clusterInfo.append(",");
}
}
return clusterInfo.toString();
}
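The helper above produces the comma-separated `name=http://ip:peerPort` string etcd expects for its initial-cluster bootstrap. A self-contained sketch, assuming `ETCD_NODE_PEER_COMM_PORT` is the conventional etcd peer port 2380 (the IPs and hostnames are illustrative):

```java
import java.util.List;

public class EtcdInitialCluster {
    static final int PEER_PORT = 2380; // assumed value of ETCD_NODE_PEER_COMM_PORT

    // Mirrors getInitialEtcdClusterDetails: "name=http://ip:port" entries joined by commas.
    static String initialCluster(List<String> ipAddresses, List<String> hostnames) {
        StringBuilder clusterInfo = new StringBuilder();
        for (int i = 0; i < ipAddresses.size(); i++) {
            clusterInfo.append(String.format("%s=http://%s:%s", hostnames.get(i), ipAddresses.get(i), PEER_PORT));
            if (i < ipAddresses.size() - 1) {
                clusterInfo.append(",");
            }
        }
        return clusterInfo.toString();
    }

    public static void main(String[] args) {
        System.out.println(initialCluster(
                List.of("10.1.1.10", "10.1.1.11"),
                List.of("etcd-1", "etcd-2")));
    }
}
```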
/**
* Builds the YAML-formatted list of etcd client endpoints injected into the control node configuration.
*
* @param ipAddresses list of etcd node guest IPs
* @return a formatted list of etcd endpoints adhering to YAML syntax
*/
private String getEtcdEndpointList(List<Network.IpAddresses> ipAddresses) {
StringBuilder endpoints = new StringBuilder();
for (int i = 0; i < ipAddresses.size(); i++) {
endpoints.append(String.format("- http://%s:%s", ipAddresses.get(i).getIp4Address(), KubernetesClusterActionWorker.ETCD_NODE_CLIENT_REQUEST_PORT));
if (i < ipAddresses.size()-1) {
endpoints.append("\n ");
}
}
return endpoints.toString();
}
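For the endpoint list, each node becomes one `- http://ip:clientPort` YAML list item, with continuation lines indented so they line up under the template key. A sketch assuming `ETCD_NODE_CLIENT_REQUEST_PORT` is the conventional etcd client port 2379 (the continuation indent is also an assumption and must match the YAML template):

```java
import java.util.List;

public class EtcdEndpoints {
    static final int CLIENT_PORT = 2379; // assumed value of ETCD_NODE_CLIENT_REQUEST_PORT

    // Mirrors getEtcdEndpointList: one YAML list item per etcd node,
    // continuation lines indented to align in the rendered template.
    static String endpointList(List<String> ipAddresses) {
        StringBuilder endpoints = new StringBuilder();
        for (int i = 0; i < ipAddresses.size(); i++) {
            endpoints.append(String.format("- http://%s:%s", ipAddresses.get(i), CLIENT_PORT));
            if (i < ipAddresses.size() - 1) {
                endpoints.append("\n  "); // assumed indent width
            }
        }
        return endpoints.toString();
    }

    public static void main(String[] args) {
        System.out.println(endpointList(List.of("10.1.1.10", "10.1.1.11")));
    }
}
```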
private List<String> getEtcdNodeHostnames() {
List<String> hostnames = new ArrayList<>();
for (int etcdNodeIndex = 1; etcdNodeIndex <= kubernetesCluster.getEtcdNodeCount(); etcdNodeIndex++) {
String suffix = Long.toHexString(System.currentTimeMillis());
hostnames.add(String.format("%s-%s-%s", getEtcdNodeNameForCluster(), etcdNodeIndex, suffix));
}
return hostnames;
}
private String getEtcdNodeConfig(final List<String> ipAddresses, final List<String> hostnames, final int etcdNodeIndex,
final boolean ejectIso) throws IOException {
String k8sEtcdNodeConfig = readK8sConfigFile("/conf/etcd-node.yml");
final String sshPubKey = "{{ k8s.ssh.pub.key }}";
final String ejectIsoKey = "{{ k8s.eject.iso }}";
final String installWaitTime = "{{ k8s.install.wait.time }}";
final String installReattemptsCount = "{{ k8s.install.reattempts.count }}";
final String etcdNodeName = "{{ etcd.node_name }}";
final String etcdNodeIp = "{{ etcd.node_ip }}";
final String etcdInitialClusterNodes = "{{ etcd.initial_cluster_nodes }}";
final Long waitTime = KubernetesClusterService.KubernetesControlNodeInstallAttemptWait.value();
final Long reattempts = KubernetesClusterService.KubernetesControlNodeInstallReattempts.value();
String pubKey = "- \"" + configurationDao.getValue("ssh.publickey") + "\"";
String sshKeyPair = kubernetesCluster.getKeyPair();
if (StringUtils.isNotEmpty(sshKeyPair)) {
SSHKeyPairVO sshkp = sshKeyPairDao.findByName(owner.getAccountId(), owner.getDomainId(), sshKeyPair);
if (sshkp != null) {
pubKey += "\n - \"" + sshkp.getPublicKey() + "\"";
}
}
String initialClusterDetails = getInitialEtcdClusterDetails(ipAddresses, hostnames);
k8sEtcdNodeConfig = k8sEtcdNodeConfig.replace(installWaitTime, String.valueOf(waitTime));
k8sEtcdNodeConfig = k8sEtcdNodeConfig.replace(installReattemptsCount, String.valueOf(reattempts));
k8sEtcdNodeConfig = k8sEtcdNodeConfig.replace(sshPubKey, pubKey);
k8sEtcdNodeConfig = k8sEtcdNodeConfig.replace(ejectIsoKey, String.valueOf(ejectIso));
k8sEtcdNodeConfig = k8sEtcdNodeConfig.replace(etcdNodeName, hostnames.get(etcdNodeIndex));
k8sEtcdNodeConfig = k8sEtcdNodeConfig.replace(etcdNodeIp, ipAddresses.get(etcdNodeIndex));
k8sEtcdNodeConfig = k8sEtcdNodeConfig.replace(etcdInitialClusterNodes, initialClusterDetails);
return k8sEtcdNodeConfig;
}
private UserVm createKubernetesAdditionalControlNode(final String joinIp, final int additionalControlNodeInstance,
final Long domainId, final Long accountId) throws ManagementServerException,
ResourceUnavailableException, InsufficientCapacityException {
UserVm additionalControlVm = null;
DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId());
ServiceOffering serviceOffering = serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId());
ServiceOffering serviceOffering = getServiceOfferingForNodeTypeOnCluster(CONTROL, kubernetesCluster);
List<Long> networkIds = new ArrayList<Long>();
networkIds.add(kubernetesCluster.getNetworkId());
Network.IpAddresses addrs = new Network.IpAddresses(null, null);
@@ -293,20 +436,24 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
if (StringUtils.isNotBlank(kubernetesCluster.getKeyPair())) {
keypairs.add(kubernetesCluster.getKeyPair());
}
Long affinityGroupId = getExplicitAffinityGroup(domainId, accountId);
if (kubernetesCluster.getSecurityGroupId() != null &&
networkModel.checkSecurityGroupSupportForNetwork(owner, zone, networkIds,
List.of(kubernetesCluster.getSecurityGroupId()))) {
List<Long> securityGroupIds = new ArrayList<>();
securityGroupIds.add(kubernetesCluster.getSecurityGroupId());
additionalControlVm = userVmService.createAdvancedSecurityGroupVirtualMachine(zone, serviceOffering, controlNodeTemplate, networkIds, securityGroupIds, owner,
hostName, hostName, null, null, null, Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, null, null, keypairs,
null, addrs, null, null, Objects.nonNull(affinityGroupId) ?
Collections.singletonList(affinityGroupId) : null, customParameterMap, null, null, null,
null, true, null, UserVmManager.CKS_NODE);
} else {
additionalControlVm = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, controlNodeTemplate, networkIds, owner,
hostName, hostName, null, null, null,
Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, null, null, keypairs,
null, addrs, null, null, Objects.nonNull(affinityGroupId) ?
Collections.singletonList(affinityGroupId) : null, customParameterMap, null, null, null, null, true, UserVmManager.CKS_NODE, null);
}
if (logger.isInfoEnabled()) {
@@ -315,15 +462,62 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
return additionalControlVm;
}
private UserVm createEtcdNode(List<Network.IpAddresses> requestedIps, List<String> etcdNodeHostnames, int etcdNodeIndex, Long domainId, Long accountId) throws ResourceUnavailableException, InsufficientCapacityException, ResourceAllocationException {
UserVm etcdNode = null;
DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId());
ServiceOffering serviceOffering = getServiceOfferingForNodeTypeOnCluster(ETCD, kubernetesCluster);
List<Long> networkIds = Collections.singletonList(kubernetesCluster.getNetworkId());
Network.IpAddresses addrs = new Network.IpAddresses(null, null);
List<String> guestIps = requestedIps.stream().map(Network.IpAddresses::getIp4Address).collect(Collectors.toList());
String k8sEtcdNodeConfig = null;
try {
k8sEtcdNodeConfig = getEtcdNodeConfig(guestIps, etcdNodeHostnames, etcdNodeIndex, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()));
} catch (IOException e) {
logAndThrow(Level.ERROR, "Failed to read Kubernetes etcd node configuration file", e);
}
String base64UserData = Base64.encodeBase64String(k8sEtcdNodeConfig.getBytes(com.cloud.utils.StringUtils.getPreferredCharset()));
List<String> keypairs = new ArrayList<String>();
if (StringUtils.isNotBlank(kubernetesCluster.getKeyPair())) {
keypairs.add(kubernetesCluster.getKeyPair());
}
Long affinityGroupId = getExplicitAffinityGroup(domainId, accountId);
String hostName = etcdNodeHostnames.get(etcdNodeIndex);
Map<String, String> customParameterMap = new HashMap<String, String>();
if (zone.isSecurityGroupEnabled()) {
List<Long> securityGroupIds = new ArrayList<>();
securityGroupIds.add(kubernetesCluster.getSecurityGroupId());
etcdNode = userVmService.createAdvancedSecurityGroupVirtualMachine(zone, serviceOffering, etcdTemplate, networkIds, securityGroupIds, owner,
hostName, hostName, null, null, null, Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, null, null, keypairs,
Map.of(kubernetesCluster.getNetworkId(), requestedIps.get(etcdNodeIndex)), addrs, null, null, Objects.nonNull(affinityGroupId) ?
Collections.singletonList(affinityGroupId) : null, customParameterMap, null, null, null,
null, true, null, null);
} else {
etcdNode = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, etcdTemplate, networkIds, owner,
hostName, hostName, null, null, null,
Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, null, null, keypairs,
Map.of(kubernetesCluster.getNetworkId(), requestedIps.get(etcdNodeIndex)), addrs, null, null, Objects.nonNull(affinityGroupId) ?
Collections.singletonList(affinityGroupId) : null, customParameterMap, null, null, null, null, true, UserVmManager.CKS_NODE, null);
}
if (logger.isInfoEnabled()) {
logger.info(String.format("Created etcd node VM ID : %s, %s in the Kubernetes cluster : %s", etcdNode.getUuid(), hostName, kubernetesCluster.getName()));
}
return etcdNode;
}
private Pair<UserVm, String> provisionKubernetesClusterControlVm(final Network network, final String publicIpAddress, final List<Network.IpAddresses> etcdIps,
final Long domainId, final Long accountId, Long asNumber) throws
ManagementServerException, InsufficientCapacityException, ResourceUnavailableException {
UserVm k8sControlVM = null;
Pair<UserVm, String> k8sControlVMAndControlIP;
k8sControlVMAndControlIP = createKubernetesControlNode(network, publicIpAddress, etcdIps, domainId, accountId, asNumber);
k8sControlVM = k8sControlVMAndControlIP.first();
addKubernetesClusterVm(kubernetesCluster.getId(), k8sControlVM.getId(), true, false, false, false);
if (kubernetesCluster.getNodeRootDiskSize() > 0) {
resizeNodeVolume(k8sControlVM);
}
startKubernetesVM(k8sControlVM, domainId, accountId, CONTROL);
k8sControlVM = userVmDao.findById(k8sControlVM.getId());
if (k8sControlVM == null) {
throw new ManagementServerException(String.format("Failed to provision control VM for Kubernetes cluster : %s" , kubernetesCluster.getName()));
@@ -331,21 +525,22 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
if (logger.isInfoEnabled()) {
logger.info("Provisioned the control VM: {} in to the Kubernetes cluster: {}", k8sControlVM, kubernetesCluster);
}
return new Pair<>(k8sControlVM, k8sControlVMAndControlIP.second());
}
private List<UserVm> provisionKubernetesClusterAdditionalControlVms(final String controlIpAddress, final Long domainId,
final Long accountId) throws
InsufficientCapacityException, ManagementServerException, ResourceUnavailableException {
List<UserVm> additionalControlVms = new ArrayList<>();
if (kubernetesCluster.getControlNodeCount() > 1) {
for (int i = 1; i < kubernetesCluster.getControlNodeCount(); i++) {
UserVm vm = null;
vm = createKubernetesAdditionalControlNode(controlIpAddress, i, domainId, accountId);
addKubernetesClusterVm(kubernetesCluster.getId(), vm.getId(), true, false, false, false);
if (kubernetesCluster.getNodeRootDiskSize() > 0) {
resizeNodeVolume(vm);
}
startKubernetesVM(vm, domainId, accountId, CONTROL);
vm = userVmDao.findById(vm.getId());
if (vm == null) {
throw new ManagementServerException(String.format("Failed to provision additional control VM for Kubernetes cluster : %s" , kubernetesCluster.getName()));
@@ -359,6 +554,35 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
return additionalControlVms;
}
private Pair<List<UserVm>, List<Network.IpAddresses>> provisionEtcdCluster(final Network network, final Long domainId, final Long accountId)
throws InsufficientCapacityException, ResourceUnavailableException, ManagementServerException {
List<UserVm> etcdNodeVms = new ArrayList<>();
List<Network.IpAddresses> etcdNodeGuestIps = getEtcdNodeGuestIps(network, kubernetesCluster.getEtcdNodeCount());
List<String> etcdHostnames = getEtcdNodeHostnames();
for (int i = 0; i < kubernetesCluster.getEtcdNodeCount(); i++) {
UserVm vm = createEtcdNode(etcdNodeGuestIps, etcdHostnames, i, domainId, accountId);
addKubernetesClusterVm(kubernetesCluster.getId(), vm.getId(), false, false, true, true);
startKubernetesVM(vm, domainId, accountId, ETCD);
vm = userVmDao.findById(vm.getId());
if (vm == null) {
throw new ManagementServerException(String.format("Failed to provision etcd node VM for Kubernetes cluster : %s" , kubernetesCluster.getName()));
}
etcdNodeVms.add(vm);
if (logger.isInfoEnabled()) {
logger.info(String.format("Provisioned etcd node VM : %s in to the Kubernetes cluster : %s", vm.getDisplayName(), kubernetesCluster.getName()));
}
}
return new Pair<>(etcdNodeVms, etcdNodeGuestIps);
}
private List<Network.IpAddresses> getEtcdNodeGuestIps(final Network network, final long etcdNodeCount) {
List<Network.IpAddresses> guestIps = new ArrayList<>();
for (int i = 1; i <= etcdNodeCount; i++) {
guestIps.add(new Network.IpAddresses(ipAddressManager.acquireGuestIpAddress(network, null), null));
}
return guestIps;
}
private Network startKubernetesClusterNetwork(final DeployDestination destination) throws ManagementServerException {
final ReservationContext context = new ReservationContextImpl(null, null, null, owner);
Network network = networkDao.findById(kubernetesCluster.getNetworkId());
@@ -406,7 +630,40 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
setupKubernetesClusterIsolatedNetworkRules(publicIp, network, clusterVMIds, true);
}
protected void setupKubernetesEtcdNetworkRules(List<UserVm> etcdVms, Network network) throws ManagementServerException, ResourceUnavailableException {
if (!Network.GuestType.Isolated.equals(network.getGuestType())) {
if (logger.isDebugEnabled()) {
logger.debug(String.format("Network : %s for Kubernetes cluster : %s is not an isolated network, therefore, no need for network rules", network.getName(), kubernetesCluster.getName()));
}
return;
}
List<Long> etcdVmIds = etcdVms.stream().map(UserVm::getId).collect(Collectors.toList());
Integer startPort = KubernetesClusterService.KubernetesEtcdNodeStartPort.value();
IpAddress publicIp = ipAddressDao.findByIpAndDcId(kubernetesCluster.getZoneId(), publicIpAddress);
for (int i = 0; i < etcdVmIds.size(); i++) {
int etcdStartPort = startPort + i;
try {
if (Objects.isNull(network.getVpcId())) {
provisionFirewallRules(publicIp, owner, etcdStartPort, etcdStartPort);
} else if (network.getNetworkACLId() != NetworkACL.DEFAULT_ALLOW) {
try {
provisionVpcTierAllowPortACLRule(network, ETCD_NODE_CLIENT_REQUEST_PORT, ETCD_NODE_CLIENT_REQUEST_PORT);
if (logger.isInfoEnabled()) {
logger.info(String.format("Provisioned ACL rule to open up port %d on %s for etcd nodes for Kubernetes cluster %s",
ETCD_NODE_CLIENT_REQUEST_PORT, publicIpAddress, kubernetesCluster.getName()));
}
} catch (NoSuchFieldException | IllegalAccessException | ResourceUnavailableException | InvalidParameterValueException | PermissionDeniedException e) {
throw new ManagementServerException(String.format("Failed to provision ACL rules for etcd client access for the Kubernetes cluster : %s", kubernetesCluster.getName()), e);
}
}
} catch (NoSuchFieldException | IllegalAccessException | ResourceUnavailableException |
NetworkRuleConflictException e) {
throw new ManagementServerException(String.format("Failed to provision firewall rules for etcd nodes for the Kubernetes cluster : %s", kubernetesCluster.getName()), e);
}
provisionPublicIpPortForwardingRule(publicIp, network, owner, etcdVmIds.get(i), etcdStartPort, DEFAULT_SSH_PORT);
}
}
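The loop above gives the i-th etcd node a dedicated public port, `startPort + i`, forwarded to SSH on the VM. A self-contained sketch of that mapping (class and method names are illustrative; only the offset scheme mirrors the code above):

```java
import java.util.ArrayList;
import java.util.List;

public class EtcdPortMapping {
    static final int DEFAULT_SSH_PORT = 22;

    // For etcdNodeCount nodes, returns {publicPort, privatePort} pairs:
    // node i is reachable via public port (startPort + i), forwarded to
    // port 22 on the VM for SSH access.
    public static List<int[]> sshForwardingPairs(int startPort, int etcdNodeCount) {
        List<int[]> pairs = new ArrayList<>();
        for (int i = 0; i < etcdNodeCount; i++) {
            pairs.add(new int[]{startPort + i, DEFAULT_SSH_PORT});
        }
        return pairs;
    }
}
```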
private void startKubernetesClusterVMs(Long domainId, Long accountId) {
List <UserVm> clusterVms = getKubernetesClusterVMs();
for (final UserVm vm : clusterVms) {
if (vm == null) {
@@ -414,7 +671,9 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
}
try {
resizeNodeVolume(vm);
KubernetesClusterVmMapVO map = kubernetesClusterVmMapDao.findByVmId(vm.getId());
KubernetesServiceHelper.KubernetesClusterNodeType nodeType = getNodeTypeFromClusterVMMapRecord(map);
startKubernetesVM(vm, domainId, accountId, nodeType);
} catch (ManagementServerException ex) {
logger.warn("Failed to start VM: {} in Kubernetes cluster: {} due to {}", vm, kubernetesCluster, ex);
// don't bail out here. proceed further to start the rest of the VMs
@@ -428,6 +687,16 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
}
}
private KubernetesServiceHelper.KubernetesClusterNodeType getNodeTypeFromClusterVMMapRecord(KubernetesClusterVmMapVO map) {
if (map.isControlNode()) {
return CONTROL;
} else if (map.isEtcdNode()) {
return ETCD;
} else {
return WORKER;
}
}
private boolean isKubernetesClusterKubeConfigAvailable(final long timeoutTime) {
if (StringUtils.isEmpty(publicIpAddress)) {
KubernetesClusterDetailsVO kubeConfigDetail = kubernetesClusterDetailsDao.findDetail(kubernetesCluster.getId(), "kubeConfigData");
@@ -468,7 +737,7 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
kubernetesClusterDao.update(kubernetesCluster.getId(), kubernetesClusterVO);
}
public boolean startKubernetesClusterOnCreate(Long domainId, Long accountId, Long asNumber) throws ManagementServerException, ResourceUnavailableException, InsufficientCapacityException {
init();
if (logger.isInfoEnabled()) {
logger.info("Starting Kubernetes cluster: {}", kubernetesCluster);
@@ -477,7 +746,9 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.StartRequested);
DeployDestination dest = null;
try {
VMTemplateVO clusterTemplate = templateDao.findById(kubernetesCluster.getTemplateId());
Map<String, DeployDestination> destinationMap = planKubernetesCluster(domainId, accountId, clusterTemplate.getHypervisorType());
dest = destinationMap.get(WORKER.name());
} catch (InsufficientCapacityException e) {
logTransitStateAndThrow(Level.ERROR, String.format("Provisioning the cluster failed due to insufficient capacity in the Kubernetes cluster: %s", kubernetesCluster.getUuid()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
}
@@ -499,16 +770,28 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
logTransitStateAndThrow(Level.ERROR, String.format("Failed to start Kubernetes cluster : %s as no public IP found for the cluster" , kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed);
}
// Allow account creating the kubernetes cluster to access systemVM template
if (isDefaultTemplateUsed()) {
LaunchPermissionVO launchPermission = new LaunchPermissionVO(kubernetesCluster.getTemplateId(), owner.getId());
launchPermissionDao.persist(launchPermission);
}
List<UserVm> etcdVms = new ArrayList<>();
List<Network.IpAddresses> etcdGuestNodeIps = new ArrayList<>();
if (kubernetesCluster.getEtcdNodeCount() > 0) {
Pair<List<UserVm>, List<Network.IpAddresses>> etcdNodesAndIps = provisionEtcdCluster(network, domainId, accountId);
etcdVms = etcdNodesAndIps.first();
etcdGuestNodeIps = etcdNodesAndIps.second();
}
List<UserVm> clusterVMs = new ArrayList<>();
Pair<UserVm, String> k8sControlVMAndIp = new Pair<>(null, null);
UserVm k8sControlVM = null;
try {
k8sControlVMAndIp = provisionKubernetesClusterControlVm(network, publicIpAddress, etcdGuestNodeIps, domainId, accountId, asNumber);
} catch (CloudRuntimeException | ManagementServerException | ResourceUnavailableException | InsufficientCapacityException e) {
logTransitStateAndThrow(Level.ERROR, String.format("Provisioning the control VM failed in the Kubernetes cluster : %s", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
}
k8sControlVM = k8sControlVMAndIp.first();
clusterVMs.add(k8sControlVM);
if (StringUtils.isEmpty(publicIpAddress)) {
publicIpSshPort = getKubernetesClusterServerIpSshPort(k8sControlVM);
@@ -518,13 +801,13 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
}
}
try {
List<UserVm> additionalControlVMs = provisionKubernetesClusterAdditionalControlVms(k8sControlVMAndIp.second(), domainId, accountId);
clusterVMs.addAll(additionalControlVMs);
} catch (CloudRuntimeException | ManagementServerException | ResourceUnavailableException | InsufficientCapacityException e) {
logTransitStateAndThrow(Level.ERROR, String.format("Provisioning additional control VM failed in the Kubernetes cluster : %s", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
}
try {
List<UserVm> nodeVMs = provisionKubernetesClusterNodeVms(kubernetesCluster.getNodeCount(), k8sControlVMAndIp.second(), domainId, accountId);
clusterVMs.addAll(nodeVMs);
} catch (CloudRuntimeException | ManagementServerException | ResourceUnavailableException | InsufficientCapacityException e) {
logTransitStateAndThrow(Level.ERROR, String.format("Provisioning node VM failed in the Kubernetes cluster : %s", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
@@ -537,6 +820,12 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
} catch (ManagementServerException e) {
logTransitStateAndThrow(Level.ERROR, String.format("Failed to setup Kubernetes cluster : %s, unable to setup network rules", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
}
try {
setupKubernetesEtcdNetworkRules(etcdVms, network);
} catch (ManagementServerException e) {
logTransitStateAndThrow(Level.ERROR, String.format("Failed to setup Kubernetes cluster : %s, unable to setup network rules for etcd nodes", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
}
attachIsoKubernetesVMs(etcdVms);
attachIsoKubernetesVMs(clusterVMs);
if (!KubernetesClusterUtil.isKubernetesClusterControlVmRunning(kubernetesCluster, publicIpAddress, publicIpSshPort.second(), startTimeoutTime)) {
String msg = String.format("Failed to setup Kubernetes cluster : %s is not in usable state as the system is unable to access control node VMs of the cluster", kubernetesCluster.getName());
@@ -574,14 +863,16 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
return true;
}
public boolean startStoppedKubernetesCluster(Long domainId, Long accountId) throws CloudRuntimeException {
init();
if (logger.isInfoEnabled()) {
logger.info("Starting Kubernetes cluster: {}", kubernetesCluster);
}
final long startTimeoutTime = System.currentTimeMillis() + KubernetesClusterService.KubernetesClusterStartTimeout.value() * 1000;
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.StartRequested);
startKubernetesClusterVMs(domainId, accountId);
try {
InetAddress address = InetAddress.getByName(new URL(kubernetesCluster.getEndpoint()).getHost());
} catch (MalformedURLException | UnknownHostException ex) {


@@ -20,7 +20,10 @@ package com.cloud.kubernetes.cluster.actionworkers;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
import com.cloud.kubernetes.cluster.KubernetesClusterVmMapVO;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.Level;
@@ -40,7 +43,7 @@ import com.cloud.utils.ssh.SshHelper;
public class KubernetesClusterUpgradeWorker extends KubernetesClusterActionWorker {
protected List<UserVm> clusterVMs = new ArrayList<>();
private KubernetesSupportedVersion upgradeVersion;
private final String upgradeScriptFilename = "upgrade-kubernetes.sh";
private File upgradeScriptFile;
@@ -65,12 +68,12 @@ public class KubernetesClusterUpgradeWorker extends KubernetesClusterActionWorke
String nodeAddress = (index > 0 && sshPort == 22) ? vm.getPrivateIpAddress() : publicIpAddress;
SshHelper.scpTo(nodeAddress, nodeSshPort, getControlNodeLoginUser(), sshKeyFile, null,
"~/", upgradeScriptFile.getAbsolutePath(), "0755");
String cmdStr = String.format("sudo ./%s %s %s %s %s %s",
upgradeScriptFile.getName(),
upgradeVersion.getSemanticVersion(),
index == 0 ? "true" : "false",
KubernetesVersionManagerImpl.compareSemanticVersions(upgradeVersion.getSemanticVersion(), "1.15.0") < 0 ? "true" : "false",
Hypervisor.HypervisorType.VMware.equals(vm.getHypervisorType()), Objects.isNull(kubernetesCluster.getCniConfigId()));
return SshHelper.sshExecute(nodeAddress, nodeSshPort, getControlNodeLoginUser(), sshKeyFile, null,
cmdStr,
10000, 10000, 10 * 60 * 1000);
@@ -144,7 +147,7 @@ public class KubernetesClusterUpgradeWorker extends KubernetesClusterActionWorke
logTransitStateDetachIsoAndThrow(Level.ERROR, String.format("Failed to upgrade Kubernetes cluster : %s, unable to get control Kubernetes node on VM : %s in ready state", kubernetesCluster.getName(), vm.getDisplayName()), kubernetesCluster, clusterVMs, KubernetesCluster.Event.OperationFailed, null);
}
}
if (!KubernetesClusterUtil.clusterNodeVersionMatches(upgradeVersion.getSemanticVersion(), publicIpAddress, sshPort, getControlNodeLoginUser(), getManagementServerSshPublicKeyFile(), hostName, upgradeTimeoutTime, 15000, vm.getId(), kubernetesClusterVmMapDao)) {
logTransitStateDetachIsoAndThrow(Level.ERROR, String.format("Failed to upgrade Kubernetes cluster : %s, unable to get Kubernetes node on VM : %s upgraded to version %s", kubernetesCluster.getName(), vm.getDisplayName(), upgradeVersion.getSemanticVersion()), kubernetesCluster, clusterVMs, KubernetesCluster.Event.OperationFailed, null);
}
if (logger.isInfoEnabled()) {
@@ -169,6 +172,7 @@ public class KubernetesClusterUpgradeWorker extends KubernetesClusterActionWorke
if (CollectionUtils.isEmpty(clusterVMs)) {
logAndThrow(Level.ERROR, String.format("Upgrade failed for Kubernetes cluster: %s, unable to retrieve VMs for cluster", kubernetesCluster));
}
filterOutManualUpgradeNodesFromClusterUpgrade();
retrieveScriptFiles();
stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.UpgradeRequested);
attachIsoKubernetesVMs(clusterVMs, upgradeVersion);
@@ -184,4 +188,14 @@ public class KubernetesClusterUpgradeWorker extends KubernetesClusterActionWorke
}
return updated;
}
protected void filterOutManualUpgradeNodesFromClusterUpgrade() {
if (CollectionUtils.isEmpty(clusterVMs)) {
return;
}
clusterVMs = clusterVMs.stream().filter(x -> {
KubernetesClusterVmMapVO mapVO = kubernetesClusterVmMapDao.getClusterMapFromVmId(x.getId());
return mapVO != null && !mapVO.isManualUpgrade();
}).collect(Collectors.toList());
}
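The filter above drops any node whose VM map row is flagged for manual upgrade, so those nodes are left out of the rolling cluster upgrade. A standalone sketch of the same selection (the record type and names are illustrative, not from this change):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ManualUpgradeFilter {
    // Illustrative stand-in for the VM map row: just the VM id and the
    // manual-upgrade flag consulted by the filter.
    record NodeRecord(long vmId, boolean manualUpgrade) {}

    // Keeps only nodes that are NOT marked for manual upgrade, mirroring
    // filterOutManualUpgradeNodesFromClusterUpgrade above.
    public static List<NodeRecord> upgradableNodes(List<NodeRecord> nodes) {
        return nodes.stream()
                .filter(n -> !n.manualUpgrade())
                .collect(Collectors.toList());
    }
}
```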
}


@@ -16,6 +16,7 @@
// under the License.
package com.cloud.kubernetes.cluster.dao;
import com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType;
import com.cloud.kubernetes.cluster.KubernetesClusterVmMapVO;
import com.cloud.utils.db.GenericDao;
@@ -31,5 +32,7 @@ public interface KubernetesClusterVmMapDao extends GenericDao<KubernetesClusterV
public int removeByClusterId(long clusterId);
List<KubernetesClusterVmMapVO> listByClusterIdAndVmType(long clusterId, KubernetesClusterNodeType nodeType);
KubernetesClusterVmMapVO findByVmId(long vmId);
}


@@ -18,6 +18,7 @@ package com.cloud.kubernetes.cluster.dao;
import java.util.List;
import com.cloud.kubernetes.cluster.KubernetesServiceHelper;
import org.springframework.stereotype.Component;
import com.cloud.kubernetes.cluster.KubernetesClusterVmMapVO;
@@ -26,6 +27,9 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.CONTROL;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.ETCD;
@Component
public class KubernetesClusterVmMapDaoImpl extends GenericDaoBase<KubernetesClusterVmMapVO, Long> implements KubernetesClusterVmMapDao {
@@ -37,6 +41,8 @@ public class KubernetesClusterVmMapDaoImpl extends GenericDaoBase<KubernetesClus
clusterIdSearch = createSearchBuilder();
clusterIdSearch.and("clusterId", clusterIdSearch.entity().getClusterId(), SearchCriteria.Op.EQ);
clusterIdSearch.and("vmIdsIN", clusterIdSearch.entity().getVmId(), SearchCriteria.Op.IN);
clusterIdSearch.and("controlNode", clusterIdSearch.entity().isControlNode(), SearchCriteria.Op.EQ);
clusterIdSearch.and("etcdNode", clusterIdSearch.entity().isEtcdNode(), SearchCriteria.Op.EQ);
clusterIdSearch.done();
vmIdSearch = createSearchBuilder();
@@ -82,6 +88,23 @@ public class KubernetesClusterVmMapDaoImpl extends GenericDaoBase<KubernetesClus
return remove(sc);
}
@Override
public List<KubernetesClusterVmMapVO> listByClusterIdAndVmType(long clusterId, KubernetesServiceHelper.KubernetesClusterNodeType nodeType) {
SearchCriteria<KubernetesClusterVmMapVO> sc = clusterIdSearch.create();
sc.setParameters("clusterId", clusterId);
if (CONTROL == nodeType) {
sc.setParameters("controlNode", true);
sc.setParameters("etcdNode", false);
} else if (ETCD == nodeType) {
sc.setParameters("controlNode", false);
sc.setParameters("etcdNode", true);
} else {
sc.setParameters("controlNode", false);
sc.setParameters("etcdNode", false);
}
return listBy(sc);
}
@Override
public KubernetesClusterVmMapVO findByVmId(long vmId) {
SearchBuilder<KubernetesClusterVmMapVO> sb = createSearchBuilder();


@@ -31,6 +31,8 @@ import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import com.cloud.kubernetes.cluster.KubernetesClusterVmMapVO;
import com.cloud.kubernetes.cluster.dao.KubernetesClusterVmMapDao;
import org.apache.cloudstack.utils.security.SSLUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.Logger;
@@ -215,10 +217,10 @@ public class KubernetesClusterUtil {
final int port, final String user, final File sshKeyFile) throws Exception {
Pair<Boolean, String> result = SshHelper.sshExecute(ipAddress, port,
user, sshKeyFile, null,
"sudo /opt/bin/kubectl get nodes | grep -w 'Ready' | wc -l",
10000, 10000, 20000);
if (Boolean.TRUE.equals(result.first())) {
return Integer.parseInt(result.second().trim().replace("\"", "")) + kubernetesCluster.getEtcdNodeCount().intValue();
} else {
if (LOGGER.isDebugEnabled()) {
LOGGER.debug(String.format("Failed to retrieve ready nodes for Kubernetes cluster %s. Output: %s", kubernetesCluster, result.second()));
@@ -331,7 +333,7 @@ public class KubernetesClusterUtil {
final String ipAddress, final int port,
final String user, final File sshKeyFile,
final String hostName,
final long timeoutTime, final long waitDuration, final long vmId, KubernetesClusterVmMapDao vmMapDao) {
int retry = 10;
while (System.currentTimeMillis() < timeoutTime && retry-- > 0) {
if (LOGGER.isDebugEnabled()) {
@@ -343,7 +345,13 @@ public class KubernetesClusterUtil {
user, sshKeyFile, null,
String.format(CLUSTER_NODE_VERSION_COMMAND, hostName.toLowerCase()),
10000, 10000, 20000);
Pair<Boolean, String> clusterVersionMatchesAndValue = clusterNodeVersionMatches(result, version);
if (Boolean.TRUE.equals(clusterVersionMatchesAndValue.first())) {
KubernetesClusterVmMapVO vmMapVO = vmMapDao.getClusterMapFromVmId(vmId);
String newNodeVersion = clusterVersionMatchesAndValue.second();
LOGGER.debug(String.format("Updating node %s Kubernetes version to %s", hostName, newNodeVersion));
vmMapVO.setNodeVersion(newNodeVersion);
vmMapDao.update(vmMapVO.getId(), vmMapVO);
return true;
}
} catch (Exception e) {
@@ -360,11 +368,11 @@ public class KubernetesClusterUtil {
return false;
}
protected static Pair<Boolean, String> clusterNodeVersionMatches(final Pair<Boolean, String> result, final String version) {
if (result == null || Boolean.FALSE.equals(result.first()) || StringUtils.isBlank(result.second())) {
return new Pair<>(false, null);
}
String response = result.second();
return new Pair<>(response.contains(String.format("v%s", version)), response);
}
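The helper now returns both the match flag and the raw `kubectl` output, so the caller can persist the node's reported version on the VM map. A standalone sketch of the containment check it performs (class and method names are illustrative):

```java
public class NodeVersionCheck {
    // Mirrors the check in clusterNodeVersionMatches: the node's reported
    // output must contain the semantic version prefixed with "v",
    // e.g. "v1.28.4" for version "1.28.4". Blank output never matches.
    public static boolean versionMatches(String nodeOutput, String version) {
        if (nodeOutput == null || nodeOutput.isBlank()) {
            return false;
        }
        return nodeOutput.contains("v" + version);
    }
}
```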
}


@@ -0,0 +1,133 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.user.kubernetes.cluster;
import com.cloud.kubernetes.cluster.KubernetesClusterEventTypes;
import com.cloud.kubernetes.cluster.KubernetesClusterService;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiCommandResourceType;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.ApiErrorCode;
import org.apache.cloudstack.api.BaseAsyncCmd;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.ServerApiException;
import org.apache.cloudstack.api.response.KubernetesClusterResponse;
import org.apache.cloudstack.api.response.UserVmResponse;
import org.apache.cloudstack.context.CallContext;
import org.apache.commons.lang3.BooleanUtils;
import javax.inject.Inject;
import java.util.List;
@APICommand(name = "addNodesToKubernetesCluster",
description = "Add nodes as workers to an existing CKS cluster.",
responseObject = KubernetesClusterResponse.class,
since = "4.21.0",
authorized = {RoleType.Admin, RoleType.ResourceAdmin, RoleType.DomainAdmin, RoleType.User})
public class AddNodesToKubernetesClusterCmd extends BaseAsyncCmd {
@Inject
public KubernetesClusterService kubernetesClusterService;
@Parameter(name = ApiConstants.NODE_IDS,
type = CommandType.LIST,
collectionType = CommandType.UUID,
entityType= UserVmResponse.class,
description = "comma separated list of (external) node (physical or virtual machines) IDs that need to be " +
"added as worker nodes to an existing managed Kubernetes cluster (CKS)",
required = true)
private List<Long> nodeIds;
@Parameter(name = ApiConstants.ID, type = CommandType.UUID, required = true,
entityType = KubernetesClusterResponse.class,
description = "the ID of the Kubernetes cluster", since = "4.21.0")
private Long clusterId;
@Parameter(name = ApiConstants.MOUNT_CKS_ISO_ON_VR, type = CommandType.BOOLEAN,
description = "(optional) VMware only, uses the CKS cluster network VR to mount the CKS ISO")
private Boolean mountCksIsoOnVr;
@Parameter(name = ApiConstants.MANUAL_UPGRADE, type = CommandType.BOOLEAN,
description = "(optional) indicates if the node is marked for manual upgrade and excluded from the Kubernetes cluster upgrade operation")
private Boolean manualUpgrade;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
public List<Long> getNodeIds() {
return nodeIds;
}
public Long getClusterId() {
return clusterId;
}
public boolean isMountCksIsoOnVr() {
return BooleanUtils.isTrue(mountCksIsoOnVr);
}
public boolean isManualUpgrade() {
return BooleanUtils.isTrue(manualUpgrade);
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@Override
public String getEventType() {
return KubernetesClusterEventTypes.EVENT_KUBERNETES_CLUSTER_NODES_ADD;
}
@Override
public String getEventDescription() {
return String.format("Adding %s nodes to the Kubernetes cluster with ID: %s", nodeIds.size(), clusterId);
}
@Override
public void execute() {
try {
kubernetesClusterService.addNodesToKubernetesCluster(this);
final KubernetesClusterResponse response = kubernetesClusterService.createKubernetesClusterResponse(getClusterId());
response.setResponseName(getCommandName());
setResponseObject(response);
} catch (Exception e) {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, String.format("Failed to add nodes to cluster ID %s due to: %s",
getClusterId(), e.getLocalizedMessage()), e);
}
}
@Override
public long getEntityOwnerId() {
return CallContext.current().getCallingAccount().getId();
}
@Override
public ApiCommandResourceType getApiResourceType() {
return ApiCommandResourceType.KubernetesCluster;
}
@Override
public Long getApiResourceId() {
return getClusterId();
}
}
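As a rough illustration of how a client would invoke the command above (parameter names taken from the class; endpoint and request signing omitted, and the helper name is hypothetical), the query string for an `addNodesToKubernetesCluster` call can be assembled like this:

```python
from urllib.parse import urlencode

def add_nodes_params(cluster_id, node_ids, mount_iso_on_vr=False, manual_upgrade=False):
    """Build the query parameters for a hypothetical addNodesToKubernetesCluster call."""
    params = {
        "command": "addNodesToKubernetesCluster",
        "id": cluster_id,                 # UUID of the CKS cluster
        "nodeids": ",".join(node_ids),    # LIST params are comma separated
    }
    if mount_iso_on_vr:                   # VMware only: mount the CKS ISO on the VR
        params["mountcksisoonvr"] = "true"
    if manual_upgrade:                    # exclude these nodes from cluster upgrades
        params["manualupgrade"] = "true"
    return urlencode(params)
```

The `nodeids` value is a single comma-separated string, which `urlencode` percent-escapes on the wire.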


@@ -16,9 +16,26 @@
// under the License.
package org.apache.cloudstack.api.command.user.kubernetes.cluster;
import java.security.InvalidParameterException;
import java.util.Map;
import java.util.Objects;
import javax.inject.Inject;
import com.cloud.dc.ASNumberVO;
import com.cloud.dc.dao.ASNumberDao;
import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.exception.ManagementServerException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.hypervisor.Hypervisor;
import com.cloud.kubernetes.cluster.KubernetesServiceHelper;
import com.cloud.network.dao.NetworkDao;
import com.cloud.network.dao.NetworkVO;
import com.cloud.offering.NetworkOffering;
import com.cloud.offerings.NetworkOfferingVO;
import com.cloud.offerings.dao.NetworkOfferingDao;
import com.cloud.utils.Pair;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.acl.SecurityChecker.AccessType;
import org.apache.cloudstack.api.ACL;
@@ -36,8 +53,11 @@ import org.apache.cloudstack.api.response.KubernetesSupportedVersionResponse;
import org.apache.cloudstack.api.response.NetworkResponse;
import org.apache.cloudstack.api.response.ProjectResponse;
import org.apache.cloudstack.api.response.ServiceOfferingResponse;
import org.apache.cloudstack.api.response.UserDataResponse;
import org.apache.cloudstack.api.response.ZoneResponse;
import org.apache.cloudstack.context.CallContext;
import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
import org.apache.cloudstack.framework.config.impl.ConfigurationVO;
import org.apache.commons.lang3.StringUtils;
import com.cloud.kubernetes.cluster.KubernetesCluster;
@@ -58,6 +78,16 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
@Inject
public KubernetesClusterService kubernetesClusterService;
@Inject
protected KubernetesServiceHelper kubernetesClusterHelper;
@Inject
private ConfigurationDao configurationDao;
@Inject
private NetworkDao networkDao;
@Inject
private NetworkOfferingDao networkOfferingDao;
@Inject
private ASNumberDao asNumberDao;
/////////////////////////////////////////////////////
//////////////// API parameters /////////////////////
@@ -83,6 +113,25 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
description = "the ID of the service offering for the virtual machines in the cluster.")
private Long serviceOfferingId;
@ACL(accessType = AccessType.UseEntry)
@Parameter(name = ApiConstants.NODE_TYPE_OFFERING_MAP, type = CommandType.MAP,
description = "(Optional) Node Type to Service Offering ID mapping. If provided, it overrides the serviceofferingid parameter",
since = "4.21.0")
private Map<String, Map<String, String>> serviceOfferingNodeTypeMap;
@ACL(accessType = AccessType.UseEntry)
@Parameter(name = ApiConstants.NODE_TYPE_TEMPLATE_MAP, type = CommandType.MAP,
description = "(Optional) Node Type to Template ID mapping. If provided, it overrides the default System VM template",
since = "4.21.0")
private Map<String, Map<String, String>> templateNodeTypeMap;
@ACL(accessType = AccessType.UseEntry)
@Parameter(name = ApiConstants.ETCD_NODES, type = CommandType.LONG,
description = "(Optional) Number of Kubernetes cluster etcd nodes, default is 0. " +
"If the number is greater than 0, etcd nodes are separate from master nodes and are provisioned accordingly",
since = "4.21.0")
private Long etcdNodes;
@ACL(accessType = AccessType.UseEntry)
@Parameter(name = ApiConstants.ACCOUNT, type = CommandType.STRING, description = "an optional account for the" +
" virtual machine. Must be used with domainId.")
@@ -90,7 +139,8 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
@ACL(accessType = AccessType.UseEntry)
@Parameter(name = ApiConstants.DOMAIN_ID, type = CommandType.UUID, entityType = DomainResponse.class,
description = "an optional domainId for the virtual machine. If the account parameter is used, domainId must also be used.")
description = "an optional domainId for the virtual machine. If the account parameter is used, domainId must also be used. " +
"Hosts dedicated to the specified domain will be used for deploying the cluster")
private Long domainId;
@ACL(accessType = AccessType.UseEntry)
@@ -144,6 +194,22 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
@Parameter(name = ApiConstants.CLUSTER_TYPE, type = CommandType.STRING, description = "type of the cluster: CloudManaged, ExternalManaged. The default value is CloudManaged.", since="4.19.0")
private String clusterType;
@Parameter(name = ApiConstants.HYPERVISOR, type = CommandType.STRING, description = "the hypervisor on which the CKS cluster is to be deployed. This is required if the zone in which the CKS cluster is being deployed has clusters with different hypervisor types.", since = "4.21.0")
private String hypervisor;
@Parameter(name = ApiConstants.CNI_CONFIG_ID, type = CommandType.UUID, entityType = UserDataResponse.class, description = "the ID of the userdata holding the CNI configuration", since = "4.21.0")
private Long cniConfigId;
@Parameter(name = ApiConstants.CNI_CONFIG_DETAILS, type = CommandType.MAP,
description = "used to specify the parameters values for the variables in userdata. " +
"Example: cniconfigdetails[0].key=accesskey&cniconfigdetails[0].value=s389ddssaa&" +
"cniconfigdetails[1].key=secretkey&cniconfigdetails[1].value=8dshfsss",
since = "4.21.0")
private Map cniConfigDetails;
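The indexed `param[i].key=...&param[i].value=...` syntax shown in the `cniconfigdetails` description is CloudStack's wire format for MAP parameters. A small sketch of that encoding (the helper name is made up for illustration):

```python
def encode_map_param(name, entries):
    """Encode a CloudStack MAP parameter: a list of dicts becomes
    'name[i].k=v' pairs joined with '&', following the indexed syntax
    shown in the cniconfigdetails parameter description."""
    return "&".join(f"{name}[{i}].{k}={v}"
                    for i, entry in enumerate(entries)
                    for k, v in entry.items())
```

Applied to the documented example, this reproduces the query fragment in the parameter description verbatim.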
@Parameter(name=ApiConstants.AS_NUMBER, type=CommandType.LONG, description="the AS Number of the network")
private Long asNumber;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@@ -202,6 +268,10 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
return controlNodes;
}
public long getEtcdNodes() {
return etcdNodes == null ? 0 : etcdNodes;
}
public String getExternalLoadBalancerIpAddress() {
return externalLoadBalancerIpAddress;
}
@@ -240,6 +310,67 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
return clusterType;
}
public Map<String, Long> getServiceOfferingNodeTypeMap() {
return kubernetesClusterHelper.getServiceOfferingNodeTypeMap(serviceOfferingNodeTypeMap);
}
public Map<String, Long> getTemplateNodeTypeMap() {
return kubernetesClusterHelper.getTemplateNodeTypeMap(templateNodeTypeMap);
}
public Hypervisor.HypervisorType getHypervisorType() {
return hypervisor == null ? null : Hypervisor.HypervisorType.getType(hypervisor);
}
private Pair<NetworkOfferingVO,NetworkVO> getKubernetesNetworkOffering(Long networkId) {
if (Objects.isNull(networkId)) {
ConfigurationVO configurationVO = configurationDao.findByName(KubernetesClusterService.KubernetesClusterNetworkOffering.key());
String offeringName = configurationVO.getValue();
return new Pair<>(networkOfferingDao.findByUniqueName(offeringName), null);
} else {
NetworkVO networkVO = networkDao.findById(getNetworkId());
if (networkVO == null) {
throw new InvalidParameterException(String.format("Failed to find network with id: %s", getNetworkId()));
}
NetworkOfferingVO offeringVO = networkOfferingDao.findById(networkVO.getNetworkOfferingId());
return new Pair<>(offeringVO, networkVO);
}
}
public Long getAsNumber() {
Pair<NetworkOfferingVO, NetworkVO> offeringAndNetwork = getKubernetesNetworkOffering(getNetworkId());
NetworkOfferingVO offering = offeringAndNetwork.first();
NetworkVO networkVO = offeringAndNetwork.second();
if (offering == null) {
throw new CloudRuntimeException("Failed to find kubernetes network offering");
}
ASNumberVO asNumberVO = null;
if (Objects.isNull(getNetworkId()) && !offering.isForVpc()) {
if (NetworkOffering.RoutingMode.Dynamic.equals(offering.getRoutingMode()) && offering.isSpecifyAsNumber() && asNumber == null) {
throw new InvalidParameterException("AS number must be specified as the network offering has specifyasnumber set");
}
} else if (Objects.nonNull(networkVO)) {
if (offering.isForVpc()) {
asNumberVO = asNumberDao.findByZoneAndVpcId(getZoneId(), networkVO.getVpcId());
} else {
asNumberVO = asNumberDao.findByZoneAndNetworkId(getZoneId(), getNetworkId());
}
}
if (Objects.nonNull(asNumberVO)) {
return asNumberVO.getAsNumber();
}
return asNumber;
}
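The decision tree in `getAsNumber()` can be summarized as: with no pre-created network, a dynamically routed offering that requires `specifyasnumber` must receive an explicit AS number; with an existing network, an AS number already allocated to the network or its VPC wins over the request parameter. A hedged sketch of that logic (the lookup callables stand in for the `ASNumberDao` queries):

```python
def resolve_as_number(network, offering, requested_asn,
                      lookup_vpc_asn, lookup_network_asn):
    """Mirror of the getAsNumber() resolution order, under assumed dict shapes."""
    if network is None:
        # No pre-created network: a dynamically routed isolated offering with
        # specifyasnumber set must be given an explicit AS number.
        if (not offering["for_vpc"]
                and offering["routing_mode"] == "Dynamic"
                and offering["specify_as_number"]
                and requested_asn is None):
            raise ValueError("asnumber must be specified")
        return requested_asn
    # Existing network: an AS number already allocated to it wins.
    allocated = (lookup_vpc_asn(network["vpc_id"]) if offering["for_vpc"]
                 else lookup_network_asn(network["id"]))
    return allocated if allocated is not None else requested_asn
```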
public Map getCniConfigDetails() {
return convertDetailsToMap(cniConfigDetails);
}
public Long getCniConfigId() {
return cniConfigId;
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@@ -290,7 +421,7 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
KubernetesClusterResponse response = kubernetesClusterService.createKubernetesClusterResponse(getEntityId());
response.setResponseName(getCommandName());
setResponseObject(response);
} catch (CloudRuntimeException e) {
} catch (CloudRuntimeException | ManagementServerException | ResourceUnavailableException | InsufficientCapacityException e) {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, e.getMessage());
}
}


@@ -0,0 +1,125 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.user.kubernetes.cluster;
import com.cloud.exception.ConcurrentOperationException;
import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.NetworkRuleConflictException;
import com.cloud.exception.ResourceAllocationException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.kubernetes.cluster.KubernetesClusterEventTypes;
import com.cloud.kubernetes.cluster.KubernetesClusterService;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiCommandResourceType;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.ApiErrorCode;
import org.apache.cloudstack.api.BaseAsyncCmd;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.ServerApiException;
import org.apache.cloudstack.api.response.KubernetesClusterResponse;
import org.apache.cloudstack.api.response.UserVmResponse;
import org.apache.cloudstack.context.CallContext;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import javax.inject.Inject;
import java.util.List;
@APICommand(name = "removeNodesFromKubernetesCluster",
description = "Removes external nodes from a CKS cluster.",
responseObject = KubernetesClusterResponse.class,
since = "4.21.0",
authorized = {RoleType.Admin, RoleType.ResourceAdmin, RoleType.DomainAdmin, RoleType.User})
public class RemoveNodesFromKubernetesClusterCmd extends BaseAsyncCmd {
@Inject
public KubernetesClusterService kubernetesClusterService;
protected static final Logger LOGGER = LogManager.getLogger(RemoveNodesFromKubernetesClusterCmd.class);
@Parameter(name = ApiConstants.NODE_IDS,
type = CommandType.LIST,
collectionType = CommandType.UUID,
entityType= UserVmResponse.class,
description = "comma separated list of node (physical or virtual machine) IDs that need to be " +
"removed from the Kubernetes cluster (CKS)",
required = true)
private List<Long> nodeIds;
@Parameter(name = ApiConstants.ID, type = CommandType.UUID, required = true,
entityType = KubernetesClusterResponse.class,
description = "the ID of the Kubernetes cluster")
private Long clusterId;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
public List<Long> getNodeIds() {
return nodeIds;
}
public Long getClusterId() {
return clusterId;
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@Override
public String getEventType() {
return KubernetesClusterEventTypes.EVENT_KUBERNETES_CLUSTER_NODES_REMOVE;
}
@Override
public String getEventDescription() {
return String.format("Removing %s nodes from the Kubernetes cluster with ID: %s", nodeIds.size(), clusterId);
}
@Override
public void execute() throws ResourceUnavailableException, InsufficientCapacityException, ServerApiException, ConcurrentOperationException, ResourceAllocationException, NetworkRuleConflictException {
try {
if (!kubernetesClusterService.removeNodesFromKubernetesCluster(this)) {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, String.format("Failed to remove node(s) from Kubernetes cluster ID: %d", getClusterId()));
}
final KubernetesClusterResponse response = kubernetesClusterService.createKubernetesClusterResponse(getClusterId());
response.setResponseName(getCommandName());
setResponseObject(response);
} catch (Exception e) {
String err = String.format("Failed to remove node(s) from Kubernetes cluster ID: %d due to: %s", getClusterId(), e.getMessage());
LOGGER.error(err, e);
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, err);
}
}
@Override
public long getEntityOwnerId() {
return CallContext.current().getCallingAccount().getId();
}
@Override
public ApiCommandResourceType getApiResourceType() {
return ApiCommandResourceType.KubernetesCluster;
}
@Override
public Long getApiResourceId() {
return getClusterId();
}
}


@@ -17,9 +17,11 @@
package org.apache.cloudstack.api.command.user.kubernetes.cluster;
import java.util.List;
import java.util.Map;
import javax.inject.Inject;
import com.cloud.kubernetes.cluster.KubernetesServiceHelper;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.acl.SecurityChecker;
import org.apache.cloudstack.api.ACL;
@@ -54,6 +56,8 @@ public class ScaleKubernetesClusterCmd extends BaseAsyncCmd {
@Inject
public KubernetesClusterService kubernetesClusterService;
@Inject
protected KubernetesServiceHelper kubernetesClusterHelper;
/////////////////////////////////////////////////////
//////////////// API parameters /////////////////////
@@ -68,6 +72,12 @@ public class ScaleKubernetesClusterCmd extends BaseAsyncCmd {
description = "the ID of the service offering for the virtual machines in the cluster.")
private Long serviceOfferingId;
@ACL(accessType = SecurityChecker.AccessType.UseEntry)
@Parameter(name = ApiConstants.NODE_TYPE_OFFERING_MAP, type = CommandType.MAP,
description = "(Optional) Node Type to Service Offering ID mapping. If provided, it overrides the serviceofferingid parameter",
since = "4.21.0")
protected Map<String, Map<String, String>> serviceOfferingNodeTypeMap;
@Parameter(name=ApiConstants.SIZE, type = CommandType.LONG,
description = "number of Kubernetes cluster nodes")
private Long clusterSize;
@@ -103,6 +113,10 @@ public class ScaleKubernetesClusterCmd extends BaseAsyncCmd {
return serviceOfferingId;
}
public Map<String, Long> getServiceOfferingNodeTypeMap() {
return kubernetesClusterHelper.getServiceOfferingNodeTypeMap(this.serviceOfferingNodeTypeMap);
}
public Long getClusterSize() {
return clusterSize;
}
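Per its description, the node-type offering map overrides the global `serviceofferingid` only for the node types it names. A minimal sketch of that override semantics, assuming the three node types named in this PR (control, worker, etcd):

```python
NODE_TYPES = ("control", "worker", "etcd")

def effective_offerings(default_offering_id, node_type_map=None):
    """Map each CKS node type to its service offering; an explicit
    entry in the node-type map wins over the global default."""
    node_type_map = node_type_map or {}
    return {t: node_type_map.get(t, default_offering_id) for t in NODE_TYPES}
```

With no map provided, every node type falls back to the cluster-wide offering, matching the pre-existing single-offering behavior.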


@@ -18,6 +18,9 @@ package org.apache.cloudstack.api.command.user.kubernetes.cluster;
import javax.inject.Inject;
import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.ManagementServerException;
import com.cloud.exception.ResourceUnavailableException;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiCommandResourceType;
@@ -102,7 +105,8 @@ public class StartKubernetesClusterCmd extends BaseAsyncCmd {
final KubernetesClusterResponse response = kubernetesClusterService.createKubernetesClusterResponse(getId());
response.setResponseName(getCommandName());
setResponseObject(response);
} catch (CloudRuntimeException ex) {
} catch (CloudRuntimeException | ManagementServerException | ResourceUnavailableException |
InsufficientCapacityException ex) {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, ex.getMessage());
}
}


@@ -18,6 +18,7 @@ package org.apache.cloudstack.api.response;
import java.util.Date;
import java.util.List;
import java.util.Map;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseResponseWithAnnotations;
@@ -58,6 +59,34 @@ public class KubernetesClusterResponse extends BaseResponseWithAnnotations imple
@Param(description = "the name of the service offering of the Kubernetes cluster")
private String serviceOfferingName;
@SerializedName(ApiConstants.WORKER_SERVICE_OFFERING_ID)
@Param(description = "the ID of the service offering of the worker nodes on the Kubernetes cluster")
private String workerOfferingId;
@SerializedName(ApiConstants.WORKER_SERVICE_OFFERING_NAME)
@Param(description = "the name of the service offering of the worker nodes on the Kubernetes cluster")
private String workerOfferingName;
@SerializedName(ApiConstants.CONTROL_SERVICE_OFFERING_ID)
@Param(description = "the ID of the service offering of the control nodes on the Kubernetes cluster")
private String controlOfferingId;
@SerializedName(ApiConstants.CONTROL_SERVICE_OFFERING_NAME)
@Param(description = "the name of the service offering of the control nodes on the Kubernetes cluster")
private String controlOfferingName;
@SerializedName(ApiConstants.ETCD_SERVICE_OFFERING_ID)
@Param(description = "the ID of the service offering of the etcd nodes on the Kubernetes cluster")
private String etcdOfferingId;
@SerializedName(ApiConstants.ETCD_SERVICE_OFFERING_NAME)
@Param(description = "the name of the service offering of the etcd nodes on the Kubernetes cluster")
private String etcdOfferingName;
@SerializedName(ApiConstants.ETCD_NODES)
@Param(description = "the number of the etcd nodes on the Kubernetes cluster")
private Long etcdNodes;
@SerializedName(ApiConstants.TEMPLATE_ID)
@Param(description = "the ID of the template of the Kubernetes cluster")
private String templateId;
@@ -106,6 +135,14 @@ public class KubernetesClusterResponse extends BaseResponseWithAnnotations imple
@Param(description = "keypair details")
private String keypair;
@SerializedName(ApiConstants.CNI_CONFIG_ID)
@Param(description = "ID of CNI Configuration associated with the cluster")
private String cniConfigId;
@SerializedName(ApiConstants.CNI_CONFIG_NAME)
@Param(description = "Name of CNI Configuration associated with the cluster")
private String cniConfigName;
@Deprecated(since = "4.16")
@SerializedName(ApiConstants.MASTER_NODES)
@Param(description = "the master nodes count for the Kubernetes cluster. This parameter is deprecated, please use 'controlnodes' parameter.")
@@ -141,7 +178,7 @@ public class KubernetesClusterResponse extends BaseResponseWithAnnotations imple
@SerializedName(ApiConstants.VIRTUAL_MACHINES)
@Param(description = "the list of virtualmachine associated with this Kubernetes cluster")
private List<UserVmResponse> virtualMachines;
private List<KubernetesUserVmResponse> virtualMachines;
@SerializedName(ApiConstants.IP_ADDRESS)
@Param(description = "Public IP Address of the cluster")
@@ -151,6 +188,10 @@ public class KubernetesClusterResponse extends BaseResponseWithAnnotations imple
@Param(description = "Public IP Address ID of the cluster")
private String ipAddressId;
@SerializedName(ApiConstants.ETCD_IPS)
@Param(description = "Public IP Addresses of the etcd nodes")
private Map<String, String> etcdIps;
@SerializedName(ApiConstants.AUTOSCALING_ENABLED)
@Param(description = "Whether autoscaling is enabled for the cluster")
private boolean isAutoscalingEnabled;
@@ -367,11 +408,67 @@ public class KubernetesClusterResponse extends BaseResponseWithAnnotations imple
this.serviceOfferingName = serviceOfferingName;
}
public void setVirtualMachines(List<UserVmResponse> virtualMachines) {
public String getWorkerOfferingId() {
return workerOfferingId;
}
public void setWorkerOfferingId(String workerOfferingId) {
this.workerOfferingId = workerOfferingId;
}
public String getWorkerOfferingName() {
return workerOfferingName;
}
public void setWorkerOfferingName(String workerOfferingName) {
this.workerOfferingName = workerOfferingName;
}
public String getControlOfferingId() {
return controlOfferingId;
}
public void setControlOfferingId(String controlOfferingId) {
this.controlOfferingId = controlOfferingId;
}
public String getControlOfferingName() {
return controlOfferingName;
}
public void setControlOfferingName(String controlOfferingName) {
this.controlOfferingName = controlOfferingName;
}
public String getEtcdOfferingId() {
return etcdOfferingId;
}
public void setEtcdOfferingId(String etcdOfferingId) {
this.etcdOfferingId = etcdOfferingId;
}
public String getEtcdOfferingName() {
return etcdOfferingName;
}
public void setEtcdOfferingName(String etcdOfferingName) {
this.etcdOfferingName = etcdOfferingName;
}
public Long getEtcdNodes() {
return etcdNodes;
}
public void setEtcdNodes(Long etcdNodes) {
this.etcdNodes = etcdNodes;
}
public void setVirtualMachines(List<KubernetesUserVmResponse> virtualMachines) {
this.virtualMachines = virtualMachines;
}
public List<UserVmResponse> getVirtualMachines() {
public List<KubernetesUserVmResponse> getVirtualMachines() {
return virtualMachines;
}
@@ -383,6 +480,10 @@ public class KubernetesClusterResponse extends BaseResponseWithAnnotations imple
this.ipAddressId = ipAddressId;
}
public void setEtcdIps(Map<String, String> etcdIps) {
this.etcdIps = etcdIps;
}
public void setAutoscalingEnabled(boolean isAutoscalingEnabled) {
this.isAutoscalingEnabled = isAutoscalingEnabled;
}
@@ -406,4 +507,12 @@ public class KubernetesClusterResponse extends BaseResponseWithAnnotations imple
public void setClusterType(KubernetesCluster.ClusterType clusterType) {
this.clusterType = clusterType;
}
public void setCniConfigId(String cniConfigId) {
this.cniConfigId = cniConfigId;
}
public void setCniConfigName(String cniConfigName) {
this.cniConfigName = cniConfigName;
}
}


@@ -0,0 +1,134 @@
#cloud-config
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
---
users:
- name: cloud
sudo: ALL=(ALL) NOPASSWD:ALL
shell: /bin/bash
ssh_authorized_keys:
{{ k8s.ssh.pub.key }}
write_files:
- path: /opt/bin/setup-etcd-node
permissions: '0700'
owner: root:root
content: |
#!/bin/bash -e
if [[ -f "/home/cloud/success" ]]; then
echo "Already provisioned!"
exit 0
fi
ISO_MOUNT_DIR=/mnt/etcddisk
BINARIES_DIR=${ISO_MOUNT_DIR}/
ATTEMPT_ONLINE_INSTALL=false
setup_complete=false
OFFLINE_INSTALL_ATTEMPT_SLEEP={{ k8s.install.wait.time }}
MAX_OFFLINE_INSTALL_ATTEMPTS={{ k8s.install.reattempts.count }}
if [[ -z $OFFLINE_INSTALL_ATTEMPT_SLEEP || $OFFLINE_INSTALL_ATTEMPT_SLEEP -eq 0 ]]; then
OFFLINE_INSTALL_ATTEMPT_SLEEP=15
fi
if [[ -z $MAX_OFFLINE_INSTALL_ATTEMPTS || $MAX_OFFLINE_INSTALL_ATTEMPTS -eq 0 ]]; then
MAX_OFFLINE_INSTALL_ATTEMPTS=100
fi
offline_attempts=1
MAX_SETUP_CRUCIAL_CMD_ATTEMPTS=3
EJECT_ISO_FROM_OS={{ k8s.eject.iso }}
crucial_cmd_attempts=1
iso_drive_path=""
while true; do
if (( "$offline_attempts" > "$MAX_OFFLINE_INSTALL_ATTEMPTS" )); then
echo "Warning: Offline install timed out!"
break
fi
set +e
output=$(blkid -o device -t TYPE=iso9660)
set -e
if [ "$output" != "" ]; then
while read -r line; do
if [ ! -d "${ISO_MOUNT_DIR}" ]; then
mkdir "${ISO_MOUNT_DIR}"
fi
retval=0
set +e
mount -o ro "${line}" "${ISO_MOUNT_DIR}"
retval=$?
set -e
if [ $retval -eq 0 ]; then
if [ -d "$BINARIES_DIR" ]; then
iso_drive_path="${line}"
break
else
umount "${line}" && rmdir "${ISO_MOUNT_DIR}"
fi
fi
done <<< "$output"
fi
if [ -d "$BINARIES_DIR" ]; then
break
fi
echo "Waiting for Binaries directory $BINARIES_DIR to be available, sleeping for $OFFLINE_INSTALL_ATTEMPT_SLEEP seconds, attempt: $offline_attempts"
sleep $OFFLINE_INSTALL_ATTEMPT_SLEEP
offline_attempts=$((offline_attempts + 1))
done
if [[ "$PATH" != *:/opt/bin && "$PATH" != *:/opt/bin:* ]]; then
export PATH=$PATH:/opt/bin
fi
if [ -d "$BINARIES_DIR" ]; then
### Binaries available offline ###
echo "Installing binaries from ${BINARIES_DIR}"
mkdir -p /opt/bin/
tar -zxf ${BINARIES_DIR}/etcd/etcd-linux-amd64.tar.gz -C /opt/bin/
mv /opt/bin/etcd*/etcd* /opt/bin/
sudo rm -rf /opt/bin/etcd-*
fi
- path: /etc/systemd/system/etcd.service
permissions: '0755'
owner: root:root
content: |
[Unit]
Description=etcd
[Service]
Type=exec
ExecStart=/opt/bin/etcd \
--name {{ etcd.node_name }} \
--initial-advertise-peer-urls http://{{ etcd.node_ip }}:2380 \
--listen-peer-urls http://{{ etcd.node_ip }}:2380 \
--advertise-client-urls http://{{ etcd.node_ip }}:2379 \
--listen-client-urls http://{{ etcd.node_ip }}:2379,http://127.0.0.1:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster {{ etcd.initial_cluster_nodes }} \
--initial-cluster-state new
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
runcmd:
- chown -R cloud:cloud /home/cloud/.ssh
- /opt/bin/setup-etcd-node
- systemctl daemon-reload
- systemctl enable --now etcd
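The `{{ etcd.initial_cluster_nodes }}` placeholder in the unit file above expands to etcd's standard `--initial-cluster` value: comma-separated `name=peer-URL` pairs using the same peer port (2380) advertised by each node. A small sketch of that expansion, assuming plain HTTP peer URLs as in the template:

```python
def initial_cluster(nodes):
    """nodes: list of (name, ip) pairs -> the etcd '--initial-cluster' value,
    e.g. 'etcd-1=http://10.1.1.10:2380,etcd-2=http://10.1.1.11:2380'."""
    return ",".join(f"{name}=http://{ip}:2380" for name, ip in nodes)
```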


@@ -40,8 +40,8 @@ write_files:
sysctl net.ipv4.conf.default.arp_ignore=0
sysctl net.ipv4.conf.all.arp_announce=0
sysctl net.ipv4.conf.all.arp_ignore=0
sysctl net.ipv4.conf.eth0.arp_announce=0
sysctl net.ipv4.conf.eth0.arp_ignore=0
sysctl net.ipv4.conf.eth0.arp_announce=0 || sysctl net.ipv4.conf.ens35.arp_announce=0 || true
sysctl net.ipv4.conf.eth0.arp_ignore=0 || sysctl net.ipv4.conf.ens35.arp_ignore=0 || true
sed -i "s/net.ipv4.conf.default.arp_announce =.*$/net.ipv4.conf.default.arp_announce = 0/" /etc/sysctl.conf
sed -i "s/net.ipv4.conf.default.arp_ignore =.*$/net.ipv4.conf.default.arp_ignore = 0/" /etc/sysctl.conf
sed -i "s/net.ipv4.conf.all.arp_announce =.*$/net.ipv4.conf.all.arp_announce = 0/" /etc/sysctl.conf
@@ -53,8 +53,14 @@ write_files:
ATTEMPT_ONLINE_INSTALL=false
setup_complete=false
OFFLINE_INSTALL_ATTEMPT_SLEEP=15
MAX_OFFLINE_INSTALL_ATTEMPTS=100
OFFLINE_INSTALL_ATTEMPT_SLEEP={{ k8s.install.wait.time }}
MAX_OFFLINE_INSTALL_ATTEMPTS={{ k8s.install.reattempts.count }}
if [[ -z $OFFLINE_INSTALL_ATTEMPT_SLEEP || $OFFLINE_INSTALL_ATTEMPT_SLEEP -eq 0 ]]; then
OFFLINE_INSTALL_ATTEMPT_SLEEP=15
fi
if [[ -z $MAX_OFFLINE_INSTALL_ATTEMPTS || $MAX_OFFLINE_INSTALL_ATTEMPTS -eq 0 ]]; then
MAX_OFFLINE_INSTALL_ATTEMPTS=100
fi
offline_attempts=1
MAX_SETUP_CRUCIAL_CMD_ATTEMPTS=3
EJECT_ISO_FROM_OS={{ k8s.eject.iso }}


@@ -1,3 +1,4 @@
## template: jinja
#cloud-config
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
@@ -60,8 +61,8 @@ write_files:
sysctl net.ipv4.conf.default.arp_ignore=0
sysctl net.ipv4.conf.all.arp_announce=0
sysctl net.ipv4.conf.all.arp_ignore=0
sysctl net.ipv4.conf.eth0.arp_announce=0
sysctl net.ipv4.conf.eth0.arp_ignore=0
sysctl net.ipv4.conf.eth0.arp_announce=0 || sysctl net.ipv4.conf.ens35.arp_announce=0 || true
sysctl net.ipv4.conf.eth0.arp_ignore=0 || sysctl net.ipv4.conf.ens35.arp_ignore=0 || true
sed -i "s/net.ipv4.conf.default.arp_announce =.*$/net.ipv4.conf.default.arp_announce = 0/" /etc/sysctl.conf
sed -i "s/net.ipv4.conf.default.arp_ignore =.*$/net.ipv4.conf.default.arp_ignore = 0/" /etc/sysctl.conf
sed -i "s/net.ipv4.conf.all.arp_announce =.*$/net.ipv4.conf.all.arp_announce = 0/" /etc/sysctl.conf
@@ -73,8 +74,14 @@ write_files:
ATTEMPT_ONLINE_INSTALL=false
setup_complete=false
OFFLINE_INSTALL_ATTEMPT_SLEEP=15
MAX_OFFLINE_INSTALL_ATTEMPTS=100
OFFLINE_INSTALL_ATTEMPT_SLEEP={{ k8s.install.wait.time }}
MAX_OFFLINE_INSTALL_ATTEMPTS={{ k8s.install.reattempts.count }}
if [[ -z $OFFLINE_INSTALL_ATTEMPT_SLEEP || $OFFLINE_INSTALL_ATTEMPT_SLEEP -eq 0 ]]; then
OFFLINE_INSTALL_ATTEMPT_SLEEP=15
fi
if [[ -z $MAX_OFFLINE_INSTALL_ATTEMPTS || $MAX_OFFLINE_INSTALL_ATTEMPTS -eq 0 ]]; then
MAX_OFFLINE_INSTALL_ATTEMPTS=100
fi
offline_attempts=1
MAX_SETUP_CRUCIAL_CMD_ATTEMPTS=3
EJECT_ISO_FROM_OS={{ k8s.eject.iso }}
@@ -230,6 +237,34 @@ write_files:
done
fi
- path: /etc/kubernetes/kubeadm-config.yaml
permissions: '0644'
owner: root:root
content: |
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs:
{{ k8s_control.server_ips }}
controlPlaneEndpoint: {{ k8s_control.server_ip }}:{{ k8s.api_server_port }}
etcd:
external:
endpoints:
{{ etcd.etcd_endpoint_list }}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- token: "{{ k8s_control_node.cluster.token }}"
ttl: "0"
nodeRegistration:
criSocket: /run/containerd/containerd.sock
localAPIEndpoint:
advertiseAddress: {{ k8s_control.server_ip }}
bindPort: {{ k8s.api_server_port }}
certificateKey: {{ k8s_control.certificate_key }}
- path: /opt/bin/deploy-kube-system
permissions: '0700'
owner: root:root
@ -245,6 +280,8 @@ write_files:
export PATH=$PATH:/opt/bin
fi
EXTERNAL_ETCD_NODES={{ etcd.unstacked_etcd }}
EXTERNAL_CNI_PLUGIN={{ k8s.external.cni.plugin }}
MAX_SETUP_CRUCIAL_CMD_ATTEMPTS=3
crucial_cmd_attempts=1
while true; do
@ -254,7 +291,11 @@ write_files:
fi
retval=0
set +e
kubeadm init --token {{ k8s_control_node.cluster.token }} --token-ttl 0 {{ k8s_control_node.cluster.initargs }} --cri-socket /run/containerd/containerd.sock
if [[ ${EXTERNAL_ETCD_NODES} == true ]]; then
kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --upload-certs
else
kubeadm init --token {{ k8s_control_node.cluster.token }} --token-ttl 0 {{ k8s_control_node.cluster.initargs }} --cri-socket /run/containerd/containerd.sock
fi
retval=$?
set -e
if [ $retval -eq 0 ]; then
@ -282,7 +323,9 @@ write_files:
if [ -d "$K8S_CONFIG_SCRIPTS_COPY_DIR" ]; then
### Network, dashboard configs available offline ###
echo "Offline configs are available!"
/opt/bin/kubectl apply -f ${K8S_CONFIG_SCRIPTS_COPY_DIR}/network.yaml
if [[ ${EXTERNAL_CNI_PLUGIN} == false ]]; then
/opt/bin/kubectl apply -f ${K8S_CONFIG_SCRIPTS_COPY_DIR}/network.yaml
fi
/opt/bin/kubectl apply -f ${K8S_CONFIG_SCRIPTS_COPY_DIR}/dashboard.yaml
rm -rf "${K8S_CONFIG_SCRIPTS_COPY_DIR}"
else
@ -297,6 +340,7 @@ write_files:
sudo touch /home/cloud/success
echo "true" > /home/cloud/success
{% if registry is defined %}
- path: /opt/bin/setup-containerd
permissions: '0755'
owner: root:root
@ -314,6 +358,7 @@ write_files:
echo "Restarting containerd service"
systemctl daemon-reload
systemctl restart containerd
{% endif %}
- path: /etc/systemd/system/deploy-kube-system.service
permissions: '0755'

View File

@ -40,8 +40,8 @@ write_files:
sysctl net.ipv4.conf.default.arp_ignore=0
sysctl net.ipv4.conf.all.arp_announce=0
sysctl net.ipv4.conf.all.arp_ignore=0
sysctl net.ipv4.conf.eth0.arp_announce=0
sysctl net.ipv4.conf.eth0.arp_ignore=0
sysctl net.ipv4.conf.eth0.arp_announce=0 || sysctl net.ipv4.conf.ens35.arp_announce=0 || true
sysctl net.ipv4.conf.eth0.arp_ignore=0 || sysctl net.ipv4.conf.ens35.arp_ignore=0 || true
sed -i "s/net.ipv4.conf.default.arp_announce =.*$/net.ipv4.conf.default.arp_announce = 0/" /etc/sysctl.conf
sed -i "s/net.ipv4.conf.default.arp_ignore =.*$/net.ipv4.conf.default.arp_ignore = 0/" /etc/sysctl.conf
sed -i "s/net.ipv4.conf.all.arp_announce =.*$/net.ipv4.conf.all.arp_announce = 0/" /etc/sysctl.conf
@ -53,8 +53,14 @@ write_files:
ATTEMPT_ONLINE_INSTALL=false
setup_complete=false
OFFLINE_INSTALL_ATTEMPT_SLEEP=30
MAX_OFFLINE_INSTALL_ATTEMPTS=40
OFFLINE_INSTALL_ATTEMPT_SLEEP={{ k8s.install.wait.time }}
MAX_OFFLINE_INSTALL_ATTEMPTS={{ k8s.install.reattempts.count }}
if [[ -z $OFFLINE_INSTALL_ATTEMPT_SLEEP || $OFFLINE_INSTALL_ATTEMPT_SLEEP -eq 0 ]]; then
OFFLINE_INSTALL_ATTEMPT_SLEEP=30
fi
if [[ -z $MAX_OFFLINE_INSTALL_ATTEMPTS || $MAX_OFFLINE_INSTALL_ATTEMPTS -eq 0 ]]; then
MAX_OFFLINE_INSTALL_ATTEMPTS=40
fi
offline_attempts=1
MAX_SETUP_CRUCIAL_CMD_ATTEMPTS=3
EJECT_ISO_FROM_OS={{ k8s.eject.iso }}
@ -87,6 +93,21 @@ write_files:
fi
fi
done <<< "$output"
else
### Download from VR ###
ROUTER_IP="{{ k8s.vr.iso.mounted.ip }}"
if [ "$ROUTER_IP" != "" ]; then
echo "Downloading CKS binaries from the VR $ROUTER_IP"
if [ ! -d "${ISO_MOUNT_DIR}" ]; then
mkdir "${ISO_MOUNT_DIR}"
fi
### Download from ROUTER_IP/cks-iso into ISO_MOUNT_DIR
AUX_DOWNLOAD_DIR=/aux-dwnld
mkdir -p $AUX_DOWNLOAD_DIR
wget -r -R "index.html*" $ROUTER_IP/cks-iso -P $AUX_DOWNLOAD_DIR || echo 'Cannot download some files from virtual router'
mv $AUX_DOWNLOAD_DIR/$ROUTER_IP/cks-iso/* $ISO_MOUNT_DIR
rm -rf $AUX_DOWNLOAD_DIR
fi
fi
if [ -d "$BINARIES_DIR" ]; then
break
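
The VR download fallback above relies on `wget -r` mirroring the URL as a `<host>/<path>` directory tree inside the auxiliary directory, which is why the files must be moved up into the ISO mount directory afterwards. A minimal sketch of that move step, with the actual download stubbed out (the router IP and binary file name here are illustrative, not taken from the code):

```shell
#!/bin/bash
# Sketch of the aux-download -> mount-dir move performed above.
# ROUTER_IP and the file name are illustrative; wget is stubbed.
set -e
ROUTER_IP="10.1.1.1"
ISO_MOUNT_DIR=$(mktemp -d)
AUX_DOWNLOAD_DIR=$(mktemp -d)
# Stub for: wget -r -R "index.html*" $ROUTER_IP/cks-iso -P $AUX_DOWNLOAD_DIR
# (recursive wget recreates the host/path hierarchy under the target dir)
mkdir -p "$AUX_DOWNLOAD_DIR/$ROUTER_IP/cks-iso"
touch "$AUX_DOWNLOAD_DIR/$ROUTER_IP/cks-iso/k8s-binaries.tar.gz"
# Strip the mirrored host/path prefix when moving into the mount dir
mv "$AUX_DOWNLOAD_DIR/$ROUTER_IP/cks-iso"/* "$ISO_MOUNT_DIR"
rm -rf "$AUX_DOWNLOAD_DIR"
ls "$ISO_MOUNT_DIR"
```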

View File

@ -0,0 +1,43 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
export PATH=$PATH:/opt/bin
node_name=$1
node_type=$2
operation=$3
if [ $operation == "remove" ]; then
if [ $node_type == "control" ]; then
# get the specific node
kubectl get nodes $node_name >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo "No node with name $node_name present in the cluster, exiting..."
exit 0
else
# Drain the node
kubectl drain $node_name --delete-local-data --force --ignore-daemonsets
fi
else
kubeadm reset -f
fi
else
sudo mkdir -p /home/cloud/.kube
sudo cp /root/.kube/config /home/cloud/.kube/
sudo chown -R cloud:cloud /home/cloud/.kube
kubectl delete node $node_name
fi
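
The helper above dispatches on `(operation, node_type)`: removing a control-plane-tracked node drains it via kubectl, removing any other node resets it locally with kubeadm, and the non-remove path cleans up the node entry after setting up the kube config. A rough standalone sketch of that dispatch, with `kubectl`/`kubeadm` stubbed so the control flow can be exercised without a cluster (node names are illustrative, and the kube-config copy step is elided):

```shell
#!/bin/bash
# Stub the cluster tools so only the dispatch logic runs.
kubectl() { echo "kubectl $*"; }
kubeadm() { echo "kubeadm $*"; }

node_action() {
  local node_name=$1 node_type=$2 operation=$3
  if [ "$operation" = "remove" ]; then
    if [ "$node_type" = "control" ]; then
      # Control path: drain the node out of the cluster
      kubectl drain "$node_name" --force --ignore-daemonsets
    else
      # Non-control path: reset this node's local kubeadm state
      kubeadm reset -f
    fi
  else
    # Non-remove path (kube-config setup elided): drop the stale node entry
    kubectl delete node "$node_name"
  fi
}

node_action node1 control remove
node_action node2 worker remove
```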

View File

@ -18,7 +18,7 @@
# Version 1.14 and below needs extra flags with kubeadm upgrade node
if [ $# -lt 4 ]; then
echo "Invalid input. Valid usage: ./upgrade-kubernetes.sh UPGRADE_VERSION IS_CONTROL_NODE IS_OLD_VERSION IS_EJECT_ISO"
echo "Invalid input. Valid usage: ./upgrade-kubernetes.sh UPGRADE_VERSION IS_CONTROL_NODE IS_OLD_VERSION IS_EJECT_ISO IS_EXTERNAL_CNI"
echo "eg: ./upgrade-kubernetes.sh 1.16.3 true false false false"
exit 1
fi
@ -35,6 +35,10 @@ EJECT_ISO_FROM_OS=false
if [ $# -gt 3 ]; then
EJECT_ISO_FROM_OS="${4}"
fi
EXTERNAL_CNI=false
if [ $# -gt 4 ]; then
EXTERNAL_CNI="${5}"
fi
export PATH=$PATH:/opt/bin
if [[ "$PATH" != *:/usr/sbin && "$PATH" != *:/usr/sbin:* ]]; then
@ -144,7 +148,9 @@ if [ -d "$BINARIES_DIR" ]; then
systemctl restart kubelet
if [ "${IS_MAIN_CONTROL}" == 'true' ]; then
/opt/bin/kubectl apply -f ${BINARIES_DIR}/network.yaml
if [[ ${EXTERNAL_CNI} == false ]]; then
/opt/bin/kubectl apply -f ${BINARIES_DIR}/network.yaml
fi
/opt/bin/kubectl apply -f ${BINARIES_DIR}/dashboard.yaml
fi
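
The optional-argument handling added above (EJECT_ISO_FROM_OS as the 4th argument, EXTERNAL_CNI as the new 5th) follows a simple positional-default pattern: each flag defaults to `false` and is only overridden when enough arguments were passed. A standalone sketch of just that parsing:

```shell
#!/bin/bash
# Positional flags beyond the required arguments default to false,
# mirroring the EJECT_ISO_FROM_OS / EXTERNAL_CNI handling above.
parse_flags() {
  EJECT_ISO_FROM_OS=false
  if [ $# -gt 3 ]; then EJECT_ISO_FROM_OS="${4}"; fi
  EXTERNAL_CNI=false
  if [ $# -gt 4 ]; then EXTERNAL_CNI="${5}"; fi
  echo "$EJECT_ISO_FROM_OS $EXTERNAL_CNI"
}

parse_flags 1.16.3 true false            # only required args: both default
parse_flags 1.16.3 true false true       # 4th arg overrides eject flag
parse_flags 1.16.3 true false true true  # 5th arg overrides CNI flag
```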

View File

@ -0,0 +1,45 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
OS=`awk -F= '/^NAME/{print $2}' /etc/os-release`
REQUIRED_PACKAGES=(cloud-init cloud-guest-utils conntrack apt-transport-https ca-certificates curl gnupg gnupg-agent \
software-properties-common lsb-release python3-json-pointer python3-jsonschema containerd.io)
declare -a MISSING_PACKAGES
if [[ $OS == *"Ubuntu"* || $OS == *"Debian"* ]]; then
for package in ${REQUIRED_PACKAGES[@]}; do
dpkg -s $package >/dev/null 2>&1
if [ $? -ne 0 ]; then
MISSING_PACKAGES+=("$package")
fi
done
else
for package in ${REQUIRED_PACKAGES[@]}; do
rpm -q "$package" >/dev/null 2>&1
if [ $? -ne 0 ]; then
MISSING_PACKAGES+=("$package")
fi
done
fi
echo ${#MISSING_PACKAGES[@]}
if (( ${#MISSING_PACKAGES[@]} )); then
echo "Following packages are missing in the node template: ${MISSING_PACKAGES[@]}"
exit 1
else
echo 0
fi
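
The template validation above reduces to "collect the missing packages into an array, print the count, and fail if any are missing". A minimal sketch of that accumulation with the package query stubbed out (the `pkg_installed` stub and package subset are illustrative; here only `curl` is treated as installed):

```shell
#!/bin/bash
# Stubbed presence check: pretend only "curl" is installed.
pkg_installed() { [ "$1" = "curl" ]; }

REQUIRED_PACKAGES=(curl conntrack containerd.io)
MISSING_PACKAGES=()
for package in "${REQUIRED_PACKAGES[@]}"; do
  if ! pkg_installed "$package"; then
    # Append as an array element (+=("...")), not string concatenation
    MISSING_PACKAGES+=("$package")
  fi
done
echo "${#MISSING_PACKAGES[@]}"   # -> 2
```

Note the `+=("$package")` form: bare `+= "$package"` would concatenate onto element 0 instead of appending a new element, so the count reported by `${#MISSING_PACKAGES[@]}` would stay at 1 no matter how many packages were missing.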

View File

@ -0,0 +1,145 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.kubernetes.cluster;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.service.dao.ServiceOfferingDao;
import com.cloud.vm.VmDetailConstants;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.CONTROL;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.ETCD;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.WORKER;
@RunWith(MockitoJUnitRunner.class)
public class KubernetesClusterHelperImplTest {
@Mock
private ServiceOfferingDao serviceOfferingDao;
@Mock
private ServiceOfferingVO workerServiceOffering;
@Mock
private ServiceOfferingVO controlServiceOffering;
@Mock
private ServiceOfferingVO etcdServiceOffering;
private static final String workerNodesOfferingId = UUID.randomUUID().toString();
private static final String controlNodesOfferingId = UUID.randomUUID().toString();
private static final String etcdNodesOfferingId = UUID.randomUUID().toString();
private static final Long workerOfferingId = 1L;
private static final Long controlOfferingId = 2L;
private static final Long etcdOfferingId = 3L;
private final KubernetesServiceHelperImpl helper = new KubernetesServiceHelperImpl();
@Before
public void setUp() {
helper.serviceOfferingDao = serviceOfferingDao;
Mockito.when(serviceOfferingDao.findByUuid(workerNodesOfferingId)).thenReturn(workerServiceOffering);
Mockito.when(serviceOfferingDao.findByUuid(controlNodesOfferingId)).thenReturn(controlServiceOffering);
Mockito.when(serviceOfferingDao.findByUuid(etcdNodesOfferingId)).thenReturn(etcdServiceOffering);
Mockito.when(workerServiceOffering.getId()).thenReturn(workerOfferingId);
Mockito.when(controlServiceOffering.getId()).thenReturn(controlOfferingId);
Mockito.when(etcdServiceOffering.getId()).thenReturn(etcdOfferingId);
}
@Test
public void testIsValidNodeTypeEmptyNodeType() {
Assert.assertFalse(helper.isValidNodeType(null));
}
@Test
public void testIsValidNodeTypeInvalidNodeType() {
String nodeType = "invalidNodeType";
Assert.assertFalse(helper.isValidNodeType(nodeType));
}
@Test
public void testIsValidNodeTypeValidNodeTypeLowercase() {
String nodeType = KubernetesServiceHelper.KubernetesClusterNodeType.WORKER.name().toLowerCase();
Assert.assertTrue(helper.isValidNodeType(nodeType));
}
private Map<String, String> createMapEntry(KubernetesServiceHelper.KubernetesClusterNodeType nodeType,
String nodeTypeOfferingUuid) {
Map<String, String> map = new HashMap<>();
map.put(VmDetailConstants.CKS_NODE_TYPE, nodeType.name().toLowerCase());
map.put(VmDetailConstants.OFFERING, nodeTypeOfferingUuid);
return map;
}
@Test
public void testNodeOfferingMap() {
Map<String, Map<String, String>> serviceOfferingNodeTypeMap = new HashMap<>();
Map<String, String> firstMap = createMapEntry(WORKER, workerNodesOfferingId);
Map<String, String> secondMap = createMapEntry(CONTROL, controlNodesOfferingId);
serviceOfferingNodeTypeMap.put("map1", firstMap);
serviceOfferingNodeTypeMap.put("map2", secondMap);
Map<String, Long> map = helper.getServiceOfferingNodeTypeMap(serviceOfferingNodeTypeMap);
Assert.assertNotNull(map);
Assert.assertEquals(2, map.size());
Assert.assertTrue(map.containsKey(WORKER.name()) && map.containsKey(CONTROL.name()));
Assert.assertEquals(workerOfferingId, map.get(WORKER.name()));
Assert.assertEquals(controlOfferingId, map.get(CONTROL.name()));
}
@Test
public void testNodeOfferingMapNullMap() {
Map<String, Long> map = helper.getServiceOfferingNodeTypeMap(null);
Assert.assertTrue(map.isEmpty());
}
@Test
public void testNodeOfferingMapEtcdNodes() {
Map<String, Map<String, String>> serviceOfferingNodeTypeMap = new HashMap<>();
Map<String, String> firstMap = createMapEntry(ETCD, etcdNodesOfferingId);
serviceOfferingNodeTypeMap.put("map1", firstMap);
Map<String, Long> map = helper.getServiceOfferingNodeTypeMap(serviceOfferingNodeTypeMap);
Assert.assertNotNull(map);
Assert.assertEquals(1, map.size());
Assert.assertTrue(map.containsKey(ETCD.name()));
Assert.assertEquals(etcdOfferingId, map.get(ETCD.name()));
}
@Test(expected = InvalidParameterValueException.class)
public void testCheckNodeTypeOfferingEntryCompletenessInvalidParameters() {
helper.checkNodeTypeOfferingEntryCompleteness(WORKER.name(), null);
}
@Test(expected = InvalidParameterValueException.class)
public void testCheckNodeTypeOfferingEntryValuesInvalidNodeType() {
String invalidNodeType = "invalidNodeTypeName";
helper.checkNodeTypeOfferingEntryValues(invalidNodeType, workerServiceOffering, workerNodesOfferingId);
}
@Test(expected = InvalidParameterValueException.class)
public void testCheckNodeTypeOfferingEntryValuesEmptyOffering() {
String nodeType = WORKER.name();
helper.checkNodeTypeOfferingEntryValues(nodeType, null, workerNodesOfferingId);
}
}

View File

@ -27,16 +27,21 @@ import com.cloud.exception.PermissionDeniedException;
import com.cloud.kubernetes.cluster.actionworkers.KubernetesClusterActionWorker;
import com.cloud.kubernetes.cluster.dao.KubernetesClusterDao;
import com.cloud.kubernetes.cluster.dao.KubernetesClusterVmMapDao;
import com.cloud.kubernetes.version.KubernetesSupportedVersion;
import com.cloud.network.Network;
import com.cloud.network.dao.FirewallRulesDao;
import com.cloud.network.rules.FirewallRule;
import com.cloud.network.rules.FirewallRuleVO;
import com.cloud.network.vpc.NetworkACL;
import com.cloud.offering.ServiceOffering;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.service.dao.ServiceOfferingDao;
import com.cloud.storage.VMTemplateVO;
import com.cloud.storage.dao.VMTemplateDao;
import com.cloud.user.Account;
import com.cloud.user.AccountManager;
import com.cloud.user.User;
import com.cloud.utils.Pair;
import com.cloud.utils.net.NetUtils;
import com.cloud.vm.VMInstanceVO;
import com.cloud.vm.dao.VMInstanceDao;
@ -45,6 +50,7 @@ import org.apache.cloudstack.api.command.user.kubernetes.cluster.AddVirtualMachi
import org.apache.cloudstack.api.command.user.kubernetes.cluster.RemoveVirtualMachinesFromKubernetesClusterCmd;
import org.apache.cloudstack.context.CallContext;
import org.apache.cloudstack.framework.config.ConfigKey;
import org.apache.commons.collections.MapUtils;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
@ -60,7 +66,14 @@ import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.CONTROL;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.DEFAULT;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.ETCD;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.WORKER;
@RunWith(MockitoJUnitRunner.class)
public class KubernetesClusterManagerImplTest {
@ -86,6 +99,9 @@ public class KubernetesClusterManagerImplTest {
@Mock
private AccountManager accountManager;
@Mock
private ServiceOfferingDao serviceOfferingDao;
@Spy
@InjectMocks
KubernetesClusterManagerImpl kubernetesClusterManager;
@ -293,4 +309,117 @@ public class KubernetesClusterManagerImplTest {
Mockito.when(kubernetesClusterDao.findById(Mockito.anyLong())).thenReturn(cluster);
Assert.assertTrue(kubernetesClusterManager.removeVmsFromCluster(cmd).size() > 0);
}
@Test
public void testValidateServiceOfferingNodeType() {
Map<String, Long> map = new HashMap<>();
map.put(WORKER.name(), 1L);
map.put(CONTROL.name(), 2L);
ServiceOfferingVO serviceOffering = Mockito.mock(ServiceOfferingVO.class);
Mockito.when(serviceOfferingDao.findById(1L)).thenReturn(serviceOffering);
Mockito.when(serviceOffering.isDynamic()).thenReturn(false);
Mockito.when(serviceOffering.getCpu()).thenReturn(2);
Mockito.when(serviceOffering.getRamSize()).thenReturn(2048);
KubernetesSupportedVersion version = Mockito.mock(KubernetesSupportedVersion.class);
Mockito.when(version.getMinimumCpu()).thenReturn(2);
Mockito.when(version.getMinimumRamSize()).thenReturn(2048);
kubernetesClusterManager.validateServiceOfferingForNode(map, 1L, WORKER.name(), null, version);
Mockito.verify(kubernetesClusterManager).validateServiceOffering(serviceOffering, version);
}
@Test(expected = InvalidParameterValueException.class)
public void testValidateServiceOfferingNodeTypeInvalidOffering() {
Map<String, Long> map = new HashMap<>();
map.put(WORKER.name(), 1L);
map.put(CONTROL.name(), 2L);
ServiceOfferingVO serviceOffering = Mockito.mock(ServiceOfferingVO.class);
Mockito.when(serviceOfferingDao.findById(1L)).thenReturn(serviceOffering);
Mockito.when(serviceOffering.isDynamic()).thenReturn(true);
kubernetesClusterManager.validateServiceOfferingForNode(map, 1L, WORKER.name(), null, null);
}
@Test
public void testClusterCapacity() {
long workerOfferingId = 1L;
long controlOfferingId = 2L;
long workerCount = 2L;
long controlCount = 2L;
int workerOfferingCpus = 4;
int workerOfferingMemory = 4096;
int controlOfferingCpus = 2;
int controlOfferingMemory = 2048;
Map<String, Long> map = Map.of(WORKER.name(), workerOfferingId, CONTROL.name(), controlOfferingId);
Map<String, Long> nodeCount = Map.of(WORKER.name(), workerCount, CONTROL.name(), controlCount);
ServiceOfferingVO workerOffering = Mockito.mock(ServiceOfferingVO.class);
Mockito.when(serviceOfferingDao.findById(workerOfferingId)).thenReturn(workerOffering);
ServiceOfferingVO controlOffering = Mockito.mock(ServiceOfferingVO.class);
Mockito.when(serviceOfferingDao.findById(controlOfferingId)).thenReturn(controlOffering);
Mockito.when(workerOffering.getCpu()).thenReturn(workerOfferingCpus);
Mockito.when(workerOffering.getRamSize()).thenReturn(workerOfferingMemory);
Mockito.when(controlOffering.getCpu()).thenReturn(controlOfferingCpus);
Mockito.when(controlOffering.getRamSize()).thenReturn(controlOfferingMemory);
Pair<Long, Long> pair = kubernetesClusterManager.calculateClusterCapacity(map, nodeCount, 1L);
Long expectedCpu = (workerOfferingCpus * workerCount) + (controlOfferingCpus * controlCount);
Long expectedMemory = (workerOfferingMemory * workerCount) + (controlOfferingMemory * controlCount);
Assert.assertEquals(expectedCpu, pair.first());
Assert.assertEquals(expectedMemory, pair.second());
}
@Test
public void testIsAnyNodeOfferingEmptyNullMap() {
Assert.assertTrue(kubernetesClusterManager.isAnyNodeOfferingEmpty(null));
}
@Test
public void testIsAnyNodeOfferingEmptyNullValue() {
Map<String, Long> map = new HashMap<>();
map.put(WORKER.name(), 1L);
map.put(CONTROL.name(), null);
map.put(ETCD.name(), 2L);
Assert.assertTrue(kubernetesClusterManager.isAnyNodeOfferingEmpty(map));
}
@Test
public void testIsAnyNodeOfferingEmpty() {
Map<String, Long> map = new HashMap<>();
map.put(WORKER.name(), 1L);
map.put(CONTROL.name(), 2L);
Assert.assertFalse(kubernetesClusterManager.isAnyNodeOfferingEmpty(map));
}
@Test
public void testCreateNodeTypeToServiceOfferingMapNullMap() {
KubernetesClusterVO clusterVO = Mockito.mock(KubernetesClusterVO.class);
Mockito.when(clusterVO.getServiceOfferingId()).thenReturn(1L);
ServiceOfferingVO offering = Mockito.mock(ServiceOfferingVO.class);
Mockito.when(serviceOfferingDao.findById(1L)).thenReturn(offering);
Map<String, ServiceOffering> mapping = kubernetesClusterManager.createNodeTypeToServiceOfferingMap(new HashMap<>(), null, clusterVO);
Assert.assertFalse(MapUtils.isEmpty(mapping));
Assert.assertTrue(mapping.containsKey(DEFAULT.name()));
Assert.assertEquals(offering, mapping.get(DEFAULT.name()));
}
@Test
public void testCreateNodeTypeToServiceOfferingMap() {
Map<String, Long> idsMap = new HashMap<>();
long workerOfferingId = 1L;
long controlOfferingId = 2L;
idsMap.put(WORKER.name(), workerOfferingId);
idsMap.put(CONTROL.name(), controlOfferingId);
ServiceOfferingVO workerOffering = Mockito.mock(ServiceOfferingVO.class);
Mockito.when(serviceOfferingDao.findById(workerOfferingId)).thenReturn(workerOffering);
ServiceOfferingVO controlOffering = Mockito.mock(ServiceOfferingVO.class);
Mockito.when(serviceOfferingDao.findById(controlOfferingId)).thenReturn(controlOffering);
Map<String, ServiceOffering> mapping = kubernetesClusterManager.createNodeTypeToServiceOfferingMap(idsMap, null, null);
Assert.assertEquals(2, mapping.size());
Assert.assertTrue(mapping.containsKey(WORKER.name()) && mapping.containsKey(CONTROL.name()));
Assert.assertEquals(workerOffering, mapping.get(WORKER.name()));
Assert.assertEquals(controlOffering, mapping.get(CONTROL.name()));
}
}

View File

@ -0,0 +1,128 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.kubernetes.cluster.actionworkers;
import com.cloud.kubernetes.cluster.KubernetesCluster;
import com.cloud.kubernetes.cluster.KubernetesClusterManagerImpl;
import com.cloud.kubernetes.cluster.KubernetesClusterVmMapVO;
import com.cloud.kubernetes.cluster.dao.KubernetesClusterVmMapDao;
import com.cloud.offering.ServiceOffering;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.service.dao.ServiceOfferingDao;
import com.cloud.utils.Pair;
import com.cloud.vm.UserVmVO;
import com.cloud.vm.dao.UserVmDao;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
import java.util.List;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.DEFAULT;
import static com.cloud.kubernetes.cluster.KubernetesServiceHelper.KubernetesClusterNodeType.CONTROL;
@RunWith(MockitoJUnitRunner.class)
public class KubernetesClusterScaleWorkerTest {
@Mock
private KubernetesCluster kubernetesCluster;
@Mock
private KubernetesClusterManagerImpl clusterManager;
@Mock
private ServiceOfferingDao serviceOfferingDao;
@Mock
private KubernetesClusterVmMapDao kubernetesClusterVmMapDao;
@Mock
private UserVmDao userVmDao;
private KubernetesClusterScaleWorker worker;
private static final Long defaultOfferingId = 1L;
@Before
public void setUp() {
worker = new KubernetesClusterScaleWorker(kubernetesCluster, clusterManager);
worker.serviceOfferingDao = serviceOfferingDao;
worker.kubernetesClusterVmMapDao = kubernetesClusterVmMapDao;
worker.userVmDao = userVmDao;
}
@Test
public void testCalculateNewClusterCountAndCapacityAllNodesScaleSize() {
long controlNodes = 3L;
long etcdNodes = 2L;
Mockito.when(kubernetesCluster.getControlNodeCount()).thenReturn(controlNodes);
Mockito.when(kubernetesCluster.getEtcdNodeCount()).thenReturn(etcdNodes);
ServiceOffering newOffering = Mockito.mock(ServiceOffering.class);
int newCores = 4;
int newMemory = 4096;
Mockito.when(newOffering.getCpu()).thenReturn(newCores);
Mockito.when(newOffering.getRamSize()).thenReturn(newMemory);
long newWorkerSize = 4L;
Pair<Long, Long> newClusterCapacity = worker.calculateNewClusterCountAndCapacity(newWorkerSize, DEFAULT, newOffering);
long expectedCores = (newCores * newWorkerSize) + (newCores * controlNodes) + (newCores * etcdNodes);
long expectedMemory = (newMemory * newWorkerSize) + (newMemory * controlNodes) + (newMemory * etcdNodes);
Assert.assertEquals(expectedCores, newClusterCapacity.first().longValue());
Assert.assertEquals(expectedMemory, newClusterCapacity.second().longValue());
}
@Test
public void testCalculateNewClusterCountAndCapacityNodeTypeScaleControlOffering() {
long controlNodes = 2L;
long kubernetesClusterId = 10L;
Mockito.when(kubernetesCluster.getId()).thenReturn(kubernetesClusterId);
Mockito.when(kubernetesCluster.getControlNodeCount()).thenReturn(controlNodes);
ServiceOfferingVO existingOffering = Mockito.mock(ServiceOfferingVO.class);
int existingCores = 2;
int existingMemory = 2048;
Mockito.when(existingOffering.getCpu()).thenReturn(existingCores);
Mockito.when(existingOffering.getRamSize()).thenReturn(existingMemory);
int remainingClusterCpu = 8;
int remainingClusterMemory = 12288;
Mockito.when(kubernetesCluster.getCores()).thenReturn(remainingClusterCpu + (controlNodes * existingCores));
Mockito.when(kubernetesCluster.getMemory()).thenReturn(remainingClusterMemory + (controlNodes * existingMemory));
Mockito.when(serviceOfferingDao.findById(1L)).thenReturn(existingOffering);
ServiceOfferingVO newOffering = Mockito.mock(ServiceOfferingVO.class);
int newCores = 4;
int newMemory = 2048;
Mockito.when(newOffering.getCpu()).thenReturn(newCores);
Mockito.when(newOffering.getRamSize()).thenReturn(newMemory);
KubernetesClusterVmMapVO controlNodeVM1 = Mockito.mock(KubernetesClusterVmMapVO.class);
Mockito.when(controlNodeVM1.getVmId()).thenReturn(10L);
UserVmVO userVmVO = Mockito.mock(UserVmVO.class);
Mockito.when(userVmVO.getServiceOfferingId()).thenReturn(defaultOfferingId);
Mockito.when(userVmDao.findById(10L)).thenReturn(userVmVO);
Mockito.when(kubernetesClusterVmMapDao.listByClusterIdAndVmType(kubernetesClusterId, CONTROL)).thenReturn(List.of(controlNodeVM1));
Pair<Long, Long> newClusterCapacity = worker.calculateNewClusterCountAndCapacity(null, CONTROL, newOffering);
long expectedCores = remainingClusterCpu + (controlNodes * newCores);
long expectedMemory = remainingClusterMemory + (controlNodes * newMemory);
Assert.assertEquals(expectedCores, newClusterCapacity.first().longValue());
Assert.assertEquals(expectedMemory, newClusterCapacity.second().longValue());
}
}

View File

@ -0,0 +1,83 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.kubernetes.cluster.actionworkers;
import com.cloud.kubernetes.cluster.KubernetesCluster;
import com.cloud.kubernetes.cluster.KubernetesClusterManagerImpl;
import com.cloud.kubernetes.cluster.KubernetesClusterVmMapVO;
import com.cloud.kubernetes.cluster.dao.KubernetesClusterVmMapDao;
import com.cloud.kubernetes.version.KubernetesSupportedVersion;
import com.cloud.uservm.UserVm;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
@RunWith(MockitoJUnitRunner.class)
public class KubernetesClusterUpgradeWorkerTest {
@Mock
private KubernetesCluster kubernetesCluster;
@Mock
private KubernetesSupportedVersion kubernetesSupportedVersion;
@Mock
private KubernetesClusterManagerImpl clusterManager;
@Mock
private KubernetesClusterVmMapDao kubernetesClusterVmMapDao;
private KubernetesClusterUpgradeWorker worker;
@Before
public void setUp() {
String[] keys = {};
worker = new KubernetesClusterUpgradeWorker(kubernetesCluster, kubernetesSupportedVersion, clusterManager, keys);
worker.kubernetesClusterVmMapDao = kubernetesClusterVmMapDao;
}
@Test
public void testFilterOutManualUpgradeNodesFromClusterUpgrade() {
long controlNodeId = 1L;
long workerNode1Id = 2L;
long workerNode2Id = 3L;
UserVm controlNode = Mockito.mock(UserVm.class);
Mockito.when(controlNode.getId()).thenReturn(controlNodeId);
UserVm workerNode1 = Mockito.mock(UserVm.class);
Mockito.when(workerNode1.getId()).thenReturn(workerNode1Id);
UserVm workerNode2 = Mockito.mock(UserVm.class);
Mockito.when(workerNode2.getId()).thenReturn(workerNode2Id);
KubernetesClusterVmMapVO controlNodeMap = Mockito.mock(KubernetesClusterVmMapVO.class);
KubernetesClusterVmMapVO workerNode1Map = Mockito.mock(KubernetesClusterVmMapVO.class);
KubernetesClusterVmMapVO workerNode2Map = Mockito.mock(KubernetesClusterVmMapVO.class);
Mockito.when(workerNode2Map.isManualUpgrade()).thenReturn(true);
Mockito.when(kubernetesClusterVmMapDao.getClusterMapFromVmId(controlNodeId)).thenReturn(controlNodeMap);
Mockito.when(kubernetesClusterVmMapDao.getClusterMapFromVmId(workerNode1Id)).thenReturn(workerNode1Map);
Mockito.when(kubernetesClusterVmMapDao.getClusterMapFromVmId(workerNode2Id)).thenReturn(workerNode2Map);
worker.clusterVMs = Arrays.asList(controlNode, workerNode1, workerNode2);
worker.filterOutManualUpgradeNodesFromClusterUpgrade();
Assert.assertEquals(2, worker.clusterVMs.size());
List<Long> ids = worker.clusterVMs.stream().map(UserVm::getId).collect(Collectors.toList());
Assert.assertTrue(ids.contains(controlNodeId) && ids.contains(workerNode1Id));
Assert.assertFalse(ids.contains(workerNode2Id));
}
}

View File

@ -27,14 +27,14 @@ public class KubernetesClusterUtilTest {
private void executeThrowAndTestVersionMatch() {
Pair<Boolean, String> resultPair = null;
boolean result = KubernetesClusterUtil.clusterNodeVersionMatches(resultPair, "1.24.0");
Assert.assertFalse(result);
Pair<Boolean, String> result = KubernetesClusterUtil.clusterNodeVersionMatches(resultPair, "1.24.0");
Assert.assertFalse(result.first());
}
private void executeAndTestVersionMatch(boolean status, String response, boolean expectedResult) {
Pair<Boolean, String> resultPair = new Pair<>(status, response);
boolean result = KubernetesClusterUtil.clusterNodeVersionMatches(resultPair, "1.24.0");
Assert.assertEquals(expectedResult, result);
Pair<Boolean, String> result = KubernetesClusterUtil.clusterNodeVersionMatches(resultPair, "1.24.0");
Assert.assertEquals(expectedResult, result.first());
}
@Test

View File

@ -474,7 +474,6 @@ public class NsxApiClient {
}
}
protected void removeSegment(String segmentName, long zoneId) {
logger.debug(String.format("Removing the segment with ID %s", segmentName));
Segments segmentService = (Segments) nsxService.apply(Segments.class);

View File

@ -94,6 +94,7 @@ import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
import com.cloud.utils.db.TransactionCallback;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.net.NetUtils;
import com.cloud.vm.NicProfile;
import com.cloud.vm.ReservationContext;
import com.cloud.vm.VMInstanceVO;
@ -121,6 +122,7 @@ import javax.naming.ConfigurationException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
@ -861,17 +863,17 @@ public class NsxElement extends AdapterBase implements DhcpServiceProvider, Dns
* Replace 0.0.0.0/0 to ANY on each occurrence
*/
protected List<String> transformCidrListValues(List<String> sourceCidrList) {
- List<String> list = new ArrayList<>();
+ Set<String> set = new HashSet<>();
if (org.apache.commons.collections.CollectionUtils.isNotEmpty(sourceCidrList)) {
for (String cidr : sourceCidrList) {
- if (cidr.equals("0.0.0.0/0")) {
- list.add("ANY");
+ if (cidr.equals(NetUtils.ALL_IP4_CIDRS) || cidr.equals(NetUtils.ALL_IP6_CIDRS)) {
+ set.add("ANY");
} else {
- list.add(cidr);
+ set.add(cidr);
}
}
}
- return list;
+ return set.stream().sorted().collect(Collectors.toList());
}
@Override

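The `transformCidrListValues` hunk above switches from a `List` to a `Set` so duplicate CIDRs collapse, maps both the IPv4 and IPv6 catch-all CIDRs to NSX's `ANY` keyword, and sorts the output for deterministic rule ordering. A standalone sketch with the `NetUtils` constant values inlined (values assumed):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class CidrTransformSketch {
    // Assumed NetUtils constant values.
    static final String ALL_IP4_CIDRS = "0.0.0.0/0";
    static final String ALL_IP6_CIDRS = "::/0";

    // Mirrors the diff: a Set drops duplicates, the catch-all CIDRs become
    // "ANY", and sorting keeps the emitted rule list deterministic.
    static List<String> transformCidrListValues(List<String> sourceCidrList) {
        Set<String> set = new HashSet<>();
        if (sourceCidrList != null) {
            for (String cidr : sourceCidrList) {
                set.add(cidr.equals(ALL_IP4_CIDRS) || cidr.equals(ALL_IP6_CIDRS) ? "ANY" : cidr);
            }
        }
        return set.stream().sorted().collect(Collectors.toList());
    }
}
```

Deduplicating here means `0.0.0.0/0` and `::/0` in the same source list produce a single `ANY` entry rather than two.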
View File

@ -199,7 +199,7 @@ public class StorageVmSharedFSLifeCycle implements SharedFSLifeCycle {
customParameterMap, null, null, null, null,
true, UserVmManager.SHAREDFSVM, null);
vmContext.setEventResourceId(vm.getId());
- userVmService.startVirtualMachine(vm);
+ userVmService.startVirtualMachine(vm, null);
} catch (InsufficientCapacityException ex) {
if (vm != null) {
expungeVm(vm.getId());
@ -243,7 +243,7 @@ public class StorageVmSharedFSLifeCycle implements SharedFSLifeCycle {
@Override
public void startSharedFS(SharedFS sharedFS) throws OperationTimedoutException, ResourceUnavailableException, InsufficientCapacityException {
UserVmVO vm = userVmDao.findById(sharedFS.getVmId());
- userVmService.startVirtualMachine(vm);
+ userVmService.startVirtualMachine(vm, null);
}
@Override

View File

@ -19,8 +19,8 @@
set -e
if [ $# -lt 6 ]; then
- echo "Invalid input. Valid usage: ./create-kubernetes-binaries-iso.sh OUTPUT_PATH KUBERNETES_VERSION CNI_VERSION CRICTL_VERSION WEAVENET_NETWORK_YAML_CONFIG DASHBOARD_YAML_CONFIG BUILD_NAME"
- echo "eg: ./create-kubernetes-binaries-iso.sh ./ 1.11.4 0.7.1 1.11.1 https://github.com/weaveworks/weave/releases/download/latest_release/weave-daemonset-k8s-1.11.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml setup-v1.11.4"
+ echo "Invalid input. Valid usage: ./create-kubernetes-binaries-iso.sh OUTPUT_PATH KUBERNETES_VERSION CNI_VERSION CRICTL_VERSION WEAVENET_NETWORK_YAML_CONFIG DASHBOARD_YAML_CONFIG BUILD_NAME [ETCD_VERSION]"
+ echo "eg: ./create-kubernetes-binaries-iso.sh ./ 1.11.4 0.7.1 1.11.1 https://github.com/weaveworks/weave/releases/download/latest_release/weave-daemonset-k8s-1.11.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml setup-v1.11.4 3.5.1"
exit 1
fi
@ -148,6 +148,14 @@ chmod ${kubeadm_file_permissions} "${working_dir}/k8s/kubeadm"
echo "Updating imagePullPolicy to IfNotPresent in yaml files..."
sed -i "s/imagePullPolicy:.*/imagePullPolicy: IfNotPresent/g" ${working_dir}/*.yaml
if [ -n "${8}" ]; then
# Install etcd dependencies
etcd_dir="${working_dir}/etcd"
mkdir -p "${etcd_dir}"
ETCD_VERSION=v${8}
wget -q --show-progress "https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz" -O ${etcd_dir}/etcd-linux-amd64.tar.gz
fi
mkisofs -o "${output_dir}/${build_name}" -J -R -l "${iso_dir}"
rm -rf "${iso_dir}"

View File

@ -5468,9 +5468,13 @@ public class ApiResponseHelper implements ResponseGenerator {
response.setZoneName(zone.getName());
response.setAsNumber(asn.getAsNumber());
ASNumberRangeVO range = asNumberRangeDao.findById(asn.getAsNumberRangeId());
- response.setAsNumberRangeId(range.getUuid());
- String rangeText = String.format("%s-%s", range.getStartASNumber(), range.getEndASNumber());
- response.setAsNumberRange(rangeText);
+ if (Objects.nonNull(range)) {
+ response.setAsNumberRangeId(range.getUuid());
+ String rangeText = String.format("%s-%s", range.getStartASNumber(), range.getEndASNumber());
+ response.setAsNumberRange(rangeText);
+ } else {
+ logger.info("Range is null for AS number: "+ asn.getAsNumber());
+ }
response.setAllocated(asn.getAllocatedTime());
response.setAllocationState(asn.isAllocated() ? "Allocated" : "Free");
if (asn.getVpcId() != null) {

View File

@ -4790,12 +4790,13 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
}
}
Boolean isVnf = cmd.getVnf();
Boolean forCks = cmd.getForCks();
return searchForTemplatesInternal(id, cmd.getTemplateName(), cmd.getKeyword(), templateFilter, false,
null, cmd.getPageSizeVal(), cmd.getStartIndex(), cmd.getZoneId(), cmd.getStoragePoolId(),
cmd.getImageStoreId(), hypervisorType, showDomr, cmd.listInReadyState(), permittedAccounts, caller,
listProjectResourcesCriteria, tags, showRemovedTmpl, cmd.getIds(), parentTemplateId, cmd.getShowUnique(),
- templateType, isVnf, cmd.getArch(), cmd.getOsCategoryId());
+ templateType, isVnf, cmd.getArch(), cmd.getOsCategoryId(), forCks);
}
private Pair<List<TemplateJoinVO>, Integer> searchForTemplatesInternal(Long templateId, String name, String keyword,
@ -4804,7 +4805,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
boolean showDomr, boolean onlyReady, List<Account> permittedAccounts, Account caller,
ListProjectResourcesCriteria listProjectResourcesCriteria, Map<String, String> tags,
boolean showRemovedTmpl, List<Long> ids, Long parentTemplateId, Boolean showUnique, String templateType,
- Boolean isVnf, CPU.CPUArch arch, Long osCategoryId) {
+ Boolean isVnf, CPU.CPUArch arch, Long osCategoryId, Boolean forCks) {
// check if zone is configured, if not, just return empty list
List<HypervisorType> hypers = null;
@ -5002,7 +5003,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
applyPublicTemplateSharingRestrictions(sc, caller);
return templateChecks(isIso, hypers, tags, name, keyword, hyperType, onlyReady, bootable, zoneId, showDomr, caller,
- showRemovedTmpl, parentTemplateId, showUnique, templateType, isVnf, searchFilter, sc);
+ showRemovedTmpl, parentTemplateId, showUnique, templateType, isVnf, forCks, searchFilter, sc);
}
/**
@ -5056,7 +5057,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
private Pair<List<TemplateJoinVO>, Integer> templateChecks(boolean isIso, List<HypervisorType> hypers, Map<String, String> tags, String name, String keyword,
HypervisorType hyperType, boolean onlyReady, Boolean bootable, Long zoneId, boolean showDomr, Account caller,
- boolean showRemovedTmpl, Long parentTemplateId, Boolean showUnique, String templateType, Boolean isVnf,
+ boolean showRemovedTmpl, Long parentTemplateId, Boolean showUnique, String templateType, Boolean isVnf, Boolean forCks,
Filter searchFilter, SearchCriteria<TemplateJoinVO> sc) {
if (!isIso) {
// add hypervisor criteria for template case
@ -5154,6 +5155,10 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
}
}
if (forCks != null) {
sc.addAnd("forCks", SearchCriteria.Op.EQ, forCks);
}
// don't return removed template, this should not be needed since we
// changed annotation for removed field in TemplateJoinVO.
// sc.addAnd("removed", SearchCriteria.Op.NULL);
@ -5248,7 +5253,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
cmd.getPageSizeVal(), cmd.getStartIndex(), cmd.getZoneId(), cmd.getStoragePoolId(), cmd.getImageStoreId(),
hypervisorType, true, cmd.listInReadyState(), permittedAccounts, caller, listProjectResourcesCriteria,
tags, showRemovedISO, null, null, cmd.getShowUnique(), null, null,
- cmd.getArch(), cmd.getOsCategoryId());
+ cmd.getArch(), cmd.getOsCategoryId(), null);
}
@Override

View File

@ -313,6 +313,7 @@ public class TemplateJoinDaoImpl extends GenericDaoBaseWithTagInformation<Templa
templateResponse.setDetails(details);
setDeployAsIsDetails(template, templateResponse);
templateResponse.setForCks(template.isForCks());
}
// update tag information

View File

@ -247,6 +247,9 @@ public class TemplateJoinVO extends BaseViewWithTagInformationVO implements Cont
@Column(name = "deploy_as_is")
private boolean deployAsIs;
@Column(name = "for_cks")
private boolean forCks;
@Column(name = "user_data_id")
private Long userDataId;
@ -529,6 +532,10 @@ public class TemplateJoinVO extends BaseViewWithTagInformationVO implements Cont
return deployAsIs;
}
public boolean isForCks() {
return forCks;
}
public Object getParentTemplateId() {
return parentTemplateId;
}

View File

@ -320,6 +320,7 @@ import com.googlecode.ipv6.IPv6Network;
public class ConfigurationManagerImpl extends ManagerBase implements ConfigurationManager, ConfigurationService, Configurable {
public static final String PERACCOUNT = "peraccount";
public static final String PERZONE = "perzone";
public static final String CLUSTER_NODES_DEFAULT_START_SSH_PORT = "2222";
@Inject
EntityManager _entityMgr;
@ -677,6 +678,17 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
}
}
protected void validateConflictingConfigValue(final String configName, final String value) {
if (configName.equals("cloud.kubernetes.etcd.node.start.port")) {
if (value.equals(CLUSTER_NODES_DEFAULT_START_SSH_PORT)) {
String errorMessage = "This range is reserved for Kubernetes cluster nodes. " +
"Please choose a value in a higher range that does not conflict with a deployed Kubernetes cluster";
logger.error(errorMessage);
throw new InvalidParameterValueException(errorMessage);
}
}
}
@Override
public boolean start() {
@ -989,6 +1001,9 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
category = config.getCategory();
}
validateIpAddressRelatedConfigValues(name, value);
validateConflictingConfigValue(name, value);
if (CATEGORY_SYSTEM.equals(category) && !_accountMgr.isRootAdmin(caller.getId())) {
logger.warn("Only Root Admin is allowed to edit the configuration " + name);
throw new CloudRuntimeException("Only Root Admin is allowed to edit this configuration.");

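The new `validateConflictingConfigValue` guard above rejects setting the etcd node start port to the SSH start port reserved for CKS cluster nodes. A simplified standalone sketch (setting name and reserved port as in the diff; exception type swapped to `IllegalArgumentException` for this sketch):

```java
class EtcdPortGuardSketch {
    // Reserved SSH start port for CKS cluster nodes, as in the diff.
    static final String CLUSTER_NODES_DEFAULT_START_SSH_PORT = "2222";

    // Throws when the etcd start-port setting would collide with the
    // reserved CKS node SSH port range; otherwise returns silently.
    static void validateConflictingConfigValue(String configName, String value) {
        if ("cloud.kubernetes.etcd.node.start.port".equals(configName)
                && CLUSTER_NODES_DEFAULT_START_SSH_PORT.equals(value)) {
            throw new IllegalArgumentException(
                "Port " + value + " is reserved for Kubernetes cluster nodes; choose a non-conflicting range");
        }
    }
}
```

Wiring it into `updateConfiguration` next to the existing IP-address validation (as the second part of the hunk does) means the conflict is caught at update time rather than at the next cluster deploy.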
View File

@ -301,7 +301,7 @@ public class NetworkMigrationManagerImpl implements NetworkMigrationManager {
copyOfVpc = _vpcService.createVpc(vpc.getZoneId(), vpcOfferingId, vpc.getAccountId(), vpc.getName(),
vpc.getDisplayText(), vpc.getCidr(), vpc.getNetworkDomain(), vpc.getIp4Dns1(), vpc.getIp4Dns2(),
- vpc.getIp6Dns1(), vpc.getIp6Dns2(), vpc.isDisplay(), vpc.getPublicMtu(), null, null, null);
+ vpc.getIp6Dns1(), vpc.getIp6Dns2(), vpc.isDisplay(), vpc.getPublicMtu(), null, null, null, vpc.useRouterIpAsResolver());
copyOfVpcId = copyOfVpc.getId();
//on resume of migration the uuid will be swapped already. So the copy will have the value of the original vpcid.

View File

@ -40,6 +40,7 @@ import java.util.UUID;
import javax.inject.Inject;
import javax.naming.ConfigurationException;
import com.cloud.dc.dao.ASNumberDao;
import org.apache.cloudstack.acl.ControlledEntity.ACLType;
import org.apache.cloudstack.acl.SecurityChecker.AccessType;
import org.apache.cloudstack.alert.AlertService;
@ -421,6 +422,8 @@ public class NetworkServiceImpl extends ManagerBase implements NetworkService, C
RoutedIpv4Manager routedIpv4Manager;
@Inject
private BGPService bgpService;
@Inject
private ASNumberDao asNumberDao;
List<InternalLoadBalancerElementService> internalLoadBalancerElementServices = new ArrayList<>();
Map<String, InternalLoadBalancerElementService> internalLoadBalancerElementServiceMap = new HashMap<>();
@ -6284,6 +6287,27 @@ public class NetworkServiceImpl extends ManagerBase implements NetworkService, C
return new ArrayList<>(this.internalLoadBalancerElementServiceMap.values());
}
@Override
public boolean handleCksIsoOnNetworkVirtualRouter(Long virtualRouterId, boolean mount) throws ResourceUnavailableException {
DomainRouterVO router = routerDao.findById(virtualRouterId);
if (router == null) {
String err = String.format("Cannot find VR with ID %s", virtualRouterId);
logger.error(err);
throw new CloudRuntimeException(err);
}
Commands commands = new Commands(Command.OnError.Stop);
commandSetupHelper.createHandleCksIsoCommand(router, mount, commands);
if (!networkHelper.sendCommandsToRouter(router, commands)) {
throw new CloudRuntimeException(String.format("Unable to send commands to virtual router: %s", router.getHostId()));
}
Answer answer = commands.getAnswer("handleCksIso");
if (answer == null || !answer.getResult()) {
logger.error(String.format("Could not handle the CKS ISO properly: %s", answer.getDetails()));
return false;
}
return true;
}
/**
* Retrieves the active quarantine for the given public IP address. It can find by the ID of the quarantine or the address of the public IP.
* @throws CloudRuntimeException if it does not find an active quarantine for the given public IP.

View File

@ -28,6 +28,7 @@ import java.util.Set;
import javax.inject.Inject;
import com.cloud.agent.api.HandleCksIsoCommand;
import com.cloud.network.rules.PortForwardingRuleVO;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.engine.orchestration.service.NetworkOrchestrationService;
@ -1425,6 +1426,13 @@ public class CommandSetupHelper {
}
}
public void createHandleCksIsoCommand(final VirtualRouter router, final boolean mount, Commands cmds) {
HandleCksIsoCommand command = new HandleCksIsoCommand(mount);
command.setAccessDetail(NetworkElementCommand.ROUTER_IP, _routerControlHelper.getRouterControlIp(router.getId()));
command.setAccessDetail(NetworkElementCommand.ROUTER_NAME, router.getInstanceName());
cmds.addCommand("handleCksIso", command);
}
public void createBgpPeersCommands(final List<? extends BgpPeer> bgpPeers, final VirtualRouter router, final Commands cmds, final Network network) {
List<BgpPeerTO> bgpPeerTOs = new ArrayList<>();

View File

@ -2072,6 +2072,7 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
* service, we need to override the DHCP response to return DNS server
* rather than virtual router itself.
*/
boolean useRouterIpResolver = getUseRouterIpAsResolver(router);
if (dnsProvided || dhcpProvided) {
if (defaultDns1 != null) {
buf.append(" dns1=").append(defaultDns1);
@ -2093,6 +2094,9 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
if (useExtDns) {
buf.append(" useextdns=true");
}
if (useRouterIpResolver) {
buf.append(" userouteripresolver=true");
}
}
if (Boolean.TRUE.equals(ExposeDnsAndBootpServer.valueIn(dc.getId()))) {
@ -2132,6 +2136,18 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
return true;
}
private boolean getUseRouterIpAsResolver(DomainRouterVO router) {
if (router == null || router.getVpcId() == null) {
return false;
}
Vpc vpc = _vpcDao.findById(router.getVpcId());
if (vpc == null) {
logger.warn(String.format("Cannot find VPC with ID %s from router %s", router.getVpcId(), router.getName()));
return false;
}
return vpc.useRouterIpAsResolver();
}
/**
* @param routerLogrotateFrequency The string to be checked if matches with any acceptable values.
* Checks if the value in the global configuration is an acceptable value to be informed to the Virtual Router.

View File

@ -1145,7 +1145,7 @@ public class VpcManagerImpl extends ManagerBase implements VpcManager, VpcProvis
@ActionEvent(eventType = EventTypes.EVENT_VPC_CREATE, eventDescription = "creating vpc", create = true)
public Vpc createVpc(final long zoneId, final long vpcOffId, final long vpcOwnerId, final String vpcName, final String displayText, final String cidr, String networkDomain,
final String ip4Dns1, final String ip4Dns2, final String ip6Dns1, final String ip6Dns2, final Boolean displayVpc, Integer publicMtu,
- final Integer cidrSize, final Long asNumber, final List<Long> bgpPeerIds) throws ResourceAllocationException {
+ final Integer cidrSize, final Long asNumber, final List<Long> bgpPeerIds, Boolean useVrIpResolver) throws ResourceAllocationException {
final Account caller = CallContext.current().getCallingAccount();
final Account owner = _accountMgr.getAccount(vpcOwnerId);
@ -1247,6 +1247,7 @@ public class VpcManagerImpl extends ManagerBase implements VpcManager, VpcProvis
vpcOff.isRedundantRouter(), ip4Dns1, ip4Dns2, ip6Dns1, ip6Dns2);
vpc.setPublicMtu(publicMtu);
vpc.setDisplay(Boolean.TRUE.equals(displayVpc));
vpc.setUseRouterIpResolver(Boolean.TRUE.equals(useVrIpResolver));
if (vpc.getCidr() == null && cidrSize != null) {
// Allocate a CIDR for VPC
@ -1305,7 +1306,7 @@ public class VpcManagerImpl extends ManagerBase implements VpcManager, VpcProvis
List<Long> bgpPeerIds = (cmd instanceof CreateVPCCmdByAdmin) ? ((CreateVPCCmdByAdmin)cmd).getBgpPeerIds() : null;
Vpc vpc = createVpc(cmd.getZoneId(), cmd.getVpcOffering(), cmd.getEntityOwnerId(), cmd.getVpcName(), cmd.getDisplayText(),
cmd.getCidr(), cmd.getNetworkDomain(), cmd.getIp4Dns1(), cmd.getIp4Dns2(), cmd.getIp6Dns1(),
- cmd.getIp6Dns2(), cmd.isDisplay(), cmd.getPublicMtu(), cmd.getCidrSize(), cmd.getAsNumber(), bgpPeerIds);
+ cmd.getIp6Dns2(), cmd.isDisplay(), cmd.getPublicMtu(), cmd.getCidrSize(), cmd.getAsNumber(), bgpPeerIds, cmd.getUseVrIpResolver());
String sourceNatIP = cmd.getSourceNatIP();
boolean forNsx = isVpcForNsx(vpc);

View File

@ -524,9 +524,13 @@ import org.apache.cloudstack.api.command.user.template.ListTemplatesCmd;
import org.apache.cloudstack.api.command.user.template.RegisterTemplateCmd;
import org.apache.cloudstack.api.command.user.template.UpdateTemplateCmd;
import org.apache.cloudstack.api.command.user.template.UpdateTemplatePermissionsCmd;
import org.apache.cloudstack.api.command.user.userdata.BaseRegisterUserDataCmd;
import org.apache.cloudstack.api.command.user.userdata.DeleteCniConfigurationCmd;
import org.apache.cloudstack.api.command.user.userdata.DeleteUserDataCmd;
import org.apache.cloudstack.api.command.user.userdata.LinkUserDataToTemplateCmd;
import org.apache.cloudstack.api.command.user.userdata.ListCniConfigurationCmd;
import org.apache.cloudstack.api.command.user.userdata.ListUserDataCmd;
import org.apache.cloudstack.api.command.user.userdata.RegisterCniConfigurationCmd;
import org.apache.cloudstack.api.command.user.userdata.RegisterUserDataCmd;
import org.apache.cloudstack.api.command.user.vm.AddIpToVmNicCmd;
import org.apache.cloudstack.api.command.user.vm.AddNicToVMCmd;
@ -4192,6 +4196,9 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
cmdList.add(DeleteUserDataCmd.class);
cmdList.add(ListUserDataCmd.class);
cmdList.add(LinkUserDataToTemplateCmd.class);
cmdList.add(RegisterCniConfigurationCmd.class);
cmdList.add(ListCniConfigurationCmd.class);
cmdList.add(DeleteCniConfigurationCmd.class);
//object store APIs
cmdList.add(AddObjectStoragePoolCmd.class);
@ -5004,7 +5011,13 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
}
@Override
- public Pair<List<? extends UserData>, Integer> listUserDatas(final ListUserDataCmd cmd) {
+ @ActionEvent(eventType = EventTypes.EVENT_DELETE_CNI_CONFIG, eventDescription = "CNI Configuration deletion")
+ public boolean deleteCniConfiguration(DeleteCniConfigurationCmd cmd) {
+ return deleteUserData(cmd);
+ }
+ @Override
+ public Pair<List<? extends UserData>, Integer> listUserDatas(final ListUserDataCmd cmd, final boolean forCks) {
final Long id = cmd.getId();
final String name = cmd.getName();
final String keyword = cmd.getKeyword();
@ -5024,6 +5037,8 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
sb.and("id", sb.entity().getId(), SearchCriteria.Op.EQ);
sb.and("name", sb.entity().getName(), SearchCriteria.Op.EQ);
sb.and("keyword", sb.entity().getName(), SearchCriteria.Op.LIKE);
sb.and("name", sb.entity().getName(), SearchCriteria.Op.EQ);
sb.and("forCks", sb.entity().isForCks(), SearchCriteria.Op.EQ);
final SearchCriteria<UserDataVO> sc = sb.create();
_accountMgr.buildACLSearchCriteria(sc, domainId, isRecursive, permittedAccounts, listProjectResourcesCriteria);
@ -5039,24 +5054,41 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
sc.setParameters("keyword", "%" + keyword + "%");
}
sc.setParameters("forCks", forCks);
final Pair<List<UserDataVO>, Integer> result = userDataDao.searchAndCount(sc, searchFilter);
return new Pair<>(result.first(), result.second());
}
@Override
@ActionEvent(eventType = EventTypes.EVENT_REGISTER_CNI_CONFIG, eventDescription = "registering CNI configuration", async = true)
public UserData registerCniConfiguration(RegisterCniConfigurationCmd cmd) {
final Account owner = getOwner(cmd);
checkForUserDataByName(cmd, owner);
final String name = cmd.getName();
String userdata = cmd.getCniConfig();
final String params = cmd.getParams();
userdata = userDataManager.validateUserData(userdata, cmd.getHttpMethod());
return createAndSaveUserData(name, userdata, params, owner, true);
}
@Override
@ActionEvent(eventType = EventTypes.EVENT_REGISTER_USER_DATA, eventDescription = "registering userdata", async = true)
public UserData registerUserData(final RegisterUserDataCmd cmd) {
final Account owner = getOwner(cmd);
checkForUserDataByName(cmd, owner);
+ checkForUserData(cmd, owner);
final String name = cmd.getName();
String userdata = cmd.getUserData();
- checkForUserData(cmd, owner);
final String params = cmd.getParams();
userdata = userDataManager.validateUserData(userdata, cmd.getHttpMethod());
- return createAndSaveUserData(name, userdata, params, owner);
+ return createAndSaveUserData(name, userdata, params, owner, false);
}
/**
@ -5076,7 +5108,7 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
* @param owner
* @throws InvalidParameterValueException
*/
- private void checkForUserDataByName(final RegisterUserDataCmd cmd, final Account owner) throws InvalidParameterValueException {
+ private void checkForUserDataByName(final BaseRegisterUserDataCmd cmd, final Account owner) throws InvalidParameterValueException {
final UserDataVO userData = userDataDao.findByName(owner.getAccountId(), owner.getDomainId(), cmd.getName());
if (userData != null) {
throw new InvalidParameterValueException(String.format("A userdata with name %s already exists for this account.", cmd.getName()));
@ -5145,7 +5177,7 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
* @param cmd
* @return Account
*/
- protected Account getOwner(final RegisterUserDataCmd cmd) {
+ protected Account getOwner(final BaseRegisterUserDataCmd cmd) {
final Account caller = getCaller();
return _accountMgr.finalizeOwner(caller, cmd.getAccountName(), cmd.getDomainId(), cmd.getProjectId());
}
@ -5158,7 +5190,7 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
return caller;
}
private SSHKeyPair createAndSaveSSHKeyPair(final String name, final String fingerprint, final String publicKey, final String privateKey, final Account owner) {
final SSHKeyPairVO newPair = new SSHKeyPairVO();
newPair.setAccountId(owner.getAccountId());
@ -5173,7 +5205,7 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
return newPair;
}
- private UserData createAndSaveUserData(final String name, final String userdata, final String params, final Account owner) {
+ private UserData createAndSaveUserData(final String name, final String userdata, final String params, final Account owner, final boolean isForCks) {
final UserDataVO userDataVO = new UserDataVO();
userDataVO.setAccountId(owner.getAccountId());
@ -5181,6 +5213,7 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
userDataVO.setName(name);
userDataVO.setUserData(userdata);
userDataVO.setParams(params);
userDataVO.setForCks(isForCks);
userDataDao.persist(userDataVO);

View File

@ -55,6 +55,7 @@ public class TemplateProfile {
TemplateType templateType;
Boolean directDownload;
Boolean deployAsIs;
Boolean forCks;
Long size;
public TemplateProfile(Long templateId, Long userId, String name, String displayText, CPU.CPUArch arch, Integer bits, Boolean passwordEnabled, Boolean requiresHvm, String url,
@ -342,6 +343,14 @@ public class TemplateProfile {
return this.deployAsIs;
}
public Boolean isForCks() {
return forCks;
}
public void setForCks(Boolean forCks) {
this.forCks = forCks;
}
public CPU.CPUArch getArch() {
return arch;
}

View File

@ -30,10 +30,10 @@ public class TemplateUploadParams extends UploadParamsBase {
Long zoneId, Hypervisor.HypervisorType hypervisorType, String chksum,
String templateTag, long templateOwnerId,
Map details, Boolean sshkeyEnabled,
- Boolean isDynamicallyScalable, Boolean isRoutingType, boolean deployAsIs) {
+ Boolean isDynamicallyScalable, Boolean isRoutingType, boolean deployAsIs, boolean forCks) {
super(userId, name, displayText, arch, bits, passwordEnabled, requiresHVM, isPublic, featured, isExtractable,
format, guestOSId, zoneId, hypervisorType, chksum, templateTag, templateOwnerId, details,
- sshkeyEnabled, isDynamicallyScalable, isRoutingType, deployAsIs);
+ sshkeyEnabled, isDynamicallyScalable, isRoutingType, deployAsIs, forCks);
setBootable(true);
}
}

View File

@ -46,6 +46,7 @@ public abstract class UploadParamsBase implements UploadParams {
private boolean isDynamicallyScalable;
private boolean isRoutingType;
private boolean deployAsIs;
private boolean forCks;
private CPU.CPUArch arch;
UploadParamsBase(long userId, String name, String displayText, CPU.CPUArch arch,
@ -55,7 +56,7 @@ public abstract class UploadParamsBase implements UploadParams {
Long zoneId, Hypervisor.HypervisorType hypervisorType, String checksum,
String templateTag, long templateOwnerId,
Map details, boolean sshkeyEnabled,
- boolean isDynamicallyScalable, boolean isRoutingType, boolean deployAsIs) {
+ boolean isDynamicallyScalable, boolean isRoutingType, boolean deployAsIs, boolean forCks) {
this.userId = userId;
this.name = name;
this.displayText = displayText;
@ -232,6 +233,10 @@ public abstract class UploadParamsBase implements UploadParams {
this.bootable = bootable;
}
void setForCks(boolean forCks) {
this.forCks = forCks;
}
void setBits(Integer bits) {
this.bits = bits;
}

View File

@ -246,6 +246,7 @@ public class HypervisorTemplateAdapter extends TemplateAdapterBase {
Long templateSize = performDirectDownloadUrlValidation(cmd.getFormat(),
hypervisor, url, cmd.getZoneIds(), followRedirects);
profile.setSize(templateSize);
profile.setForCks(cmd.isForCks());
}
profile.setUrl(url);
// Check that the resource limit for secondary storage won't be exceeded

Some files were not shown because too many files have changed in this diff.