* directdownload: fix keytool importcert
```
$ /usr/bin/keytool -importcert file /etc/cloudstack/agent/CSCERTIFICATE-full -keystore /etc/cloudstack/agent/cloud.jks -alias full -storepass DAWsfkJeeGrmhta6
Illegal option: file
keytool -importcert [OPTION]...
Imports a certificate or a certificate chain
Options:
-noprompt do not prompt
-trustcacerts trust certificates from cacerts
-protected password through protected mechanism
-alias <alias> alias name of the entry to process
-file <file> input file name
-keypass <arg> key password
-keystore <keystore> keystore name
-cacerts access the cacerts keystore
-storepass <arg> keystore password
-storetype <type> keystore type
-providername <name> provider name
-addprovider <name> add security provider by name (e.g. SunPKCS11)
[-providerarg <arg>] configure argument for -addprovider
-providerclass <class> add security provider by fully-qualified class name
[-providerarg <arg>] configure argument for -providerclass
-providerpath <list> provider classpath
-v verbose output
Use "keytool -?, -h, or --help" for this help message
```
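The failure is just a missing dash: `file` is parsed as a positional argument instead of an option. Per the usage output above, the fix is presumably to pass `-file /etc/cloudstack/agent/CSCERTIFICATE-full` rather than `file ...`.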
* DirectDownload: drop HttpsMultiTrustManager
* Management Server - Prepare for Maintenance and Cancel Maintenance improvements:
- Added new setting 'management.server.maintenance.ignore.maintenance.hosts' to ignore hosts in maintenance states while preparing the management server for maintenance. This skips the agent transfer and agent count check for hosts in maintenance.
- Rebalance indirect agents after cancel maintenance, using the rebalance parameter of the cancelMaintenance API
- Force maintenance after the maintenance window times out, using the forced parameter of the prepareForMaintenance API.
- Propagate 'indirect.agent.lb.check.interval' setting change to the host agents.
* rebase fixes
* code improvements, cleanup
* [UI] Set rebalance true by default in cancel maintenance dialog
* Update MS state after executing cluster cmd in the target MS, and some code improvements
* code improvements
* Ensure the host lb algorithm 'shuffle' is applied once before disabling the indirect agent lb check background task
* [Vmware to KVM Migration] Display virt-v2v and ovftool versions for supported hosts for migration
* Fix UI display
* Address review comments
* Fix ovftool and version display - also display versions on host details view
CKS Enhancements:
* Ability to specify different compute or service offerings for different types of CKS cluster nodes – worker, master or etcd
* Ability to use CKS ready custom templates for CKS cluster nodes
* Add and Remove external nodes to and from a kubernetes cluster
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Update remove node timeout global setting
* CKS/NSX : Missing variables in worker nodes
* CKS: Fix ISO attach logic
* address comment
* Fix Port - Node mapping when cluster is scaled in the presence of external node(s)
* CKS: Externalize control and worker node setup wait time and installation attempts
* Fix logger
* Add missing headers and fix end of line on files
* CKS Mark Nodes for Manual Upgrade and Filter Nodes to add to CKS cluster from the same network
* Add support to deploy CKS cluster nodes on hosts dedicated to a domain
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Support unstacked ETCD
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix CKS cluster scaling and minor UI improvement
* Reuse k8s cluster public IP for etcd nodes and rename etcd nodes
* Fix DNS resolver issue
* Update UDP active monitor to ICMP
* Add hypervisor type to CKS cluster creation to fix CKS cluster creation when External hosts added
* Fix build
* Fix logger
* Modify hypervisor param description in the create CKS cluster API
* CKS delete fails when external nodes are present
* address comment
* Improve network rules cleanup on failure adding external nodes to CKS cluster
* UI: Fix etcd template was not honoured
* Refactor
* CKS: Exclude etcd nodes when calculating port numbers
* Fix network cleanup in case of CKS cluster failure
* Externalize retries and interval for NSX segment deletion
* Fix CKS scaling when external node(s) present in the cluster
* CKS: Fix port numbers displayed against ETCD nodes
* Add node version details to every node of k8s cluster - as we now support manual upgrade
* update column name
* CKS: Exclude etcd nodes when calculating port numbers
* update param name
* update param
* UI: Fix CKS cluster creation templates listing for non admins
* CKS: Prevent etcd node start port number to coincide with k8s cluster start port numbers
* CKS: Set default kubernetes cluster node version to the kubernetes cluster version on upgrade
* consolidate query
* Fix upgrade logic
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix CKS cluster version upgrade
* CKS: Fix etcd port numbers being skipped
* Fix CKS cluster with etcd nodes on VPC
* Move schema and upgrade for 4.20
* Fix logger
* Fix after rebasing
* Add support for using different CNI plugins with CKS
* remove unused import
* Add UI support and list cni config API
* necessary UI changes
* add license
* changes to support external cni
* UI changes
* Fix NPE on restarting VPC with additional public IPs
* fix merge conflict
* add asnumber to create k8s svc layer
* support cni framework to use as-numbers
* update code
* condition to ignore undefined jinja template variables
* CKS: Do not pass AS number when network ID is passed
* Fix deletion of Userdata / CNI Configuration in projects
* CKS: Add CNI configuration details to the response and UI
* Explicit events for registering cni configuration
* Add Delete cni configuration API
* Fix CKS deployment when using VPC tiers with custom ACLs
* Fix DNS list on VR
* CKS: Use Network offering of the network passed during CKS cluster creation to get the AS number
* CKS cluster with guest IP
* Fix: Use control node guest IP as join IP for external nodes addition
* Fix DNS resolver issue
* Improve etcd indexing - start from 1
* CKS: Add external node to a CKS cluster deployed with etcd node(s) successfully
* simplify logic
* Tweak setup-kube-system script for baremetal external nodes
* Consider cordoned nodes while getting ready nodes
* Fix CKS cluster scale calculations
* Set token TTL to 0 (no expire) for external etcd
* Fix missing quotes
* Fix build
* Revert PR 9133
* Add calico commands for ens35 interface
* Address review comments: plan CKS cluster deployment based on the node type
* Add qemu-guest-agent dependency for kvm based templates
* Add marvin test for CKS clusters with different offerings per node type
* Remove test tag
* Add marvin test and fix update template for cks and since annotations
* Fix marvin test for adding and removing external nodes
* Fix since version on API params
* Address review comments
* Fix unit test
* Address review comments
* UI: Make CKS public templates visible to non-admins on CKS cluster creation
* Fix linter
* Fix merge error
* Fix positional parameters on the create kubernetes ISO script and make the ETCD version optional
* fix etcd port displayed
* Further improvements to CKS (#118)
* Multiple nics support on Ubuntu template
* supports allocating IP to the nic when VM is added to another network - no delay
* Add option to select DNS or VR IP as resolver on VPC creation
* Add API param and UI to select option
* Add column on vpc and pass the value on the databags for CsDhcp.py to fix accordingly
* Externalize the CKS Configuration, so that end users can tweak the configuration before deploying the cluster
* Add new directory to c8 packaging for CKS config
* Remove k8s configuration from resources and make it configurable
* Revert "Remove k8s configuration from resources and make it configurable"
This reverts commit d5997033ebe4ba559e6478a64578b894f8e7d3db.
* copy conf to mgmt server and consume them from there
* Remove node from cluster
* Add missing /opt/bin directory required by external nodes
* Login to a specific Project view
* add indents
* Fix CKS HA clusters
* Fix build
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
* Add missing headers
* Fix linter
* Address more review comments
* Fix unit test
* Fix scaling case for the same offering
* Revert "Login to a specific Project view"
This reverts commit 95e37563f4.
* Revert "Fix CKS HA clusters" (#120)
This reverts commit 8dac16aa35.
* Apply suggestions from code review about user data
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Update api/src/main/java/org/apache/cloudstack/api/command/user/userdata/BaseRegisterUserDataCmd.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Refactor column names and schema path
* Fix scaling for non existing previous offering per node type
* Update node offering entry if there was an existing offering but a global service offering has been provided on scale
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
A separate service account will be created and added in the project, if it
does not already exist, when a Kubernetes cluster is deployed in a project.
This account will have a role with limited API access.
Cleanup clusters on owner account cleanup, delete service account
if needed
When the owner account of k8s clusters is deleted, its node VMs get expunged
but the cluster entries in the DB remain present. This fixes the issue by
cleaning up all clusters for the deleted account.
Project k8s service account will be deleted on account cleanup or when
there is no active k8s cluster remaining
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* kvm: fix vm deployment from RAW template
* Update plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
---------
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, including those with or without a storage access group.
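A minimal sketch of that connectivity rule, with hypothetical names (the real checks live in CloudStack's storage and host managers):
```
import java.util.Collections;
import java.util.Set;

// Illustrative only: a pool with no storage access groups connects to every
// host; a pool with groups connects only to hosts sharing at least one group
// (hosts inherit groups set on their cluster, pod, or zone).
final class StorageAccessGroupMatcher {
    static boolean canConnect(Set<String> poolGroups, Set<String> hostGroups) {
        if (poolGroups == null || poolGroups.isEmpty()) {
            return true; // ungrouped pool connects to all hosts
        }
        return hostGroups != null && !Collections.disjoint(poolGroups, hostGroups);
    }
}
```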
* VMware - Ignore disk not found error on cleanup when the VM disk doesn't exist
* VMware - Retry powerOn on lock issues
* addressed comments
* Update CPVM reboot tests - wait for the agent to Disconnect and back Up
* Retry moveDatastoreFile when any file access issue while creating volume from snapshot
* Update full clone flag when restoring a VM using a root disk offering with a size larger than the template size
* refactored (mainly for diskInfo, which caused NPE in some cases)
* Retry moveDatastoreFile when there is any file access issue
* Reset the pool id when create volume fails on the allocated pool
- The pool id is persisted while creating the volume; when creation fails, the pool id is not reverted. On the next create volume attempt, CloudStack couldn't find any suitable primary storage, even though pools with enough capacity are available, because the pool is already assigned to the volume in Allocated state (and the storage pool compatibility check fails). Ensure the volume is not assigned to any pool if create volume fails, so the next creation job can pick a suitable pool.
* endpoint check for resize
* update the resize error through callback result instead of exception
* Add & Remove PowerFlex/ScaleIO MDMs while preparing & unpreparing the storage SDC connections (instead of start & stop scini)
* Add/Remove MDM IP addresses during Host connection/disconnection to/from storage pool when powerflex.connect.on.demand is false
* unit test fixes
* Don't remove MDM IPs from the SDC when any volumes are mapped to the SDC
* Don't remove MDM IPs when other pools of the same ScaleIO/PowerFlex cluster are connected
* rebase fixes
* update changes, to not remove/disconnect MDMs on maintenance
* import fixes after rebase
* Consider the clusters with allocation state 'Enabled' for EndPoint selection (in addition to Host state)
* logger fix
* KVM incremental snapshot feature
* fix log
* fix merge issues
* fix creation of folder
* fix snapshot update
* Check for hypervisor type during parent search
* fix some small bugs
* fix tests
* Address reviews
* do not remove StorPool snapshots
* add support for downloading diff snaps
* Add multiple zones support
* make copied snapshots have normal names
* address reviews
* Fix in progress
* continue fix
* Fix bulk delete
* change log to trace
* Start fix on multiple secondary storages for a single zone
* Fix multiple secondary storages for a single zone
* Fix tests
* fix log
* remove bitmaps when deleting snapshots
* minor fixes
* update sql to new file
* Fix merge issues
* Create new snap chain when changing configuration
* add verification
* Fix snapshot operation selector
* fix bitmap removal
* fix chain on different storages
* address reviews
* fix small issue
* fix test
---------
Co-authored-by: João Jandre <joao@scclouds.com.br>
* Don't set signingRegion as auto for creating the s3 client in ceph object store provider.
* replace getBucketAcl with doesBucketExistV2 in CephObjectStoreDriverImplTest
* KVM: add Virtual TPM model and version
* KVM: add admin-only VM setting GUEST.CPU.MODE and GUEST.CPU.MODEL
* VMware: add vTPM
* vTPM: do not set Key due to 'Cannot add multiple devices using the same device key..'
* vTPM: add unit test testTpmModel
* engine/schema: remove user vm details for guest CPU mode/model
* vTPM: extra methods per Daan's requests
* vTPM: add unit tests in VmwareResourceTest
* vTPM: update unit tests in VmwareResourceTest
* vTPM: add unit test in LibvirtComputingResourceTest
* vTPM: use the default TPM version if an invalid version is passed
* vTPM: requires UEFI on vmware and do nothing if it is not enabled/disabled
* vTPM: let users add UEFI on vmware
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* vTPM: remove template details for guest CPU mode/model
* UI: boot vm from ISO into UEFI/SECURE mode
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Dependency renamed from mockito-inline to mockito-core. Inline mocking is now the default, and the last version of mockito-inline released is 5.2.0.
assertj-core in user-authenticators/saml2 pulls in an incompatible version of byte-buddy and required an exclusion. Updating the version of assertj is left for a future PR.
The upgrade requires Java 11+, dropping support for Java 8. CloudStack documentation already says to use Java 11 and does not indicate that Java 8 is supported.
Test classes using @RunWith(MockitoJUnitRunner.class) now run in strict mode. Changes were made to tests where the stubbing intention was clear. In ManagementServerMaintenanceManagerImplTest there are 5 tests where the intention of the test is unclear; each of those statements now uses Mockito.lenient() to avoid the exception, as in the sketch below. Other cases in the tests follow a similar pattern.
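As an illustration of the pattern (a made-up test, not one of the CloudStack tests):
```
import static org.junit.Assert.assertTrue;

import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;

// Under MockitoJUnitRunner's strict mode, a stubbing the test never exercises
// fails with UnnecessaryStubbingException unless it is marked lenient().
@RunWith(MockitoJUnitRunner.class)
public class LenientStubbingExampleTest {
    @Mock
    private List<String> mockList;

    @Test
    public void unusedStubbingMustBeLenient() {
        // Not exercised below; lenient() keeps strict mode from failing the test.
        Mockito.lenient().when(mockList.size()).thenReturn(5);
        // A strict stubbing that the test actually consumes:
        Mockito.when(mockList.isEmpty()).thenReturn(true);
        assertTrue(mockList.isEmpty());
    }
}
```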
Minor clean up.
Both @Spy and Mockito.spy() should not be used together. Favored the annotation.
Both @RunWith(MockitoJUnitRunner.class) and MockitoAnnotations.openMocks(this) should not be used together. Favored the annotation.
Unnecessary extends TestCase removed.
@InjectMocks combined with new in the field declaration is unnecessary. Removed new where the issue presented.
Some of the Cmd classes like UpdateNetworkCmd have a type tree that includes fields of type Object. This appears to cause issues with injection, requiring that @Mock fields be available. This is where the following fields were added in multiple places:
```
Object job;
ResponseGenerator _responseGenerator;
```
Wrong number of parameters for Mockito.when in LibvirtRevertSnapshotCommandWrapperTest.java
* 4.20:
xenserver: do not destroy halted hypervisor vm (#9175)
define the limit of projects through the UI (#10652)
fix projects metrics on dashboard (#10651)
systemvm: Bump systemvm template version to debian 12.10 (#10628)
Enhance VPC Network Tier form to auto-populate Gateway, and Netmask (#10617)
* Readd filename string on qemuimg create
* Remove empty object on the data pool details of storage pools with no data pool
* Only use the method createPhysicalDiskByLibVirt with RBD when the pool is of erasure code type. Also added javadoc for createPhysicalDisk method
* Change literal '/' string to File.separator
* Add support for erasure code pools
* Fix null on putAll
* Update last agents during ms maintenance, and some code improvements
* Send 503 (Service Unavailable) response status when maintenance or shutdown is initiated
[Any load balancer in the clustered environment can avoid routing requests to this MS node]
* Migrate systemvm agents before routing host agents, and some code improvements
* Added events for ms maintenance and shutdown operations
* Added the following ms maintenance and shutdown improvements
- block new agent connections during prepare for maintenance of ms
- maintain the avoid MS list
- propagate updated management servers list and lb algorithm in host and indirect.agent.lb.algorithm settings respectively, to systemvm (non-routing) agents
- updated setup ms list and migrate agent connections to executor service
- migrate agent connection through executor, and send the answer to the ms host that initiated the migration
- re-initialize ssl handshake executor if it is shutdown
- don't allow prepare for maintenance or shutdown when other management server nodes are in preparing states
- don't allow trigger shutdown when management server is up and other management server nodes are in preparing states
- stop agent connections monitor on ms maintenance
- update avoid ms list in ready command
- updated connected host from the client connection
- update last agents in ms metrics from the database
- updated some agent config descriptions
- update last management server in the hosts during shutdown
- added agents and lastagents in management server response
- updated management server maintenance & shutdown unit tests
- some code improvements
* refactored code / addressed comments
* removed shutdown testcase (maybe, calling System.exit)
* Revert "removed shutdown testcase (maybe, calling System.exit)"
This reverts commit e14b071715.
* avoid system.exit during shutdown test
* code improvements
* testcase fix
* Fix cutoff time in agent connections monitor thread
Somehow deleteDatastore was never implemented. That meant templates were
never cleaned up on datastore delete, and agents were never informed
about storage pool removal.
If a -rst resource wasn't deleted because of a failed copy, a recurring
snapshot attempt couldn't proceed, because the "old" -rst resource was
still there. To prevent this, always try to remove the -rst resource
first; if it doesn't exist, this is a no-op.
* NAS B&R Plugin enhancements
* Prevent printing mount opts which may include password by removing from response
* revert marvin change
* add sanity checks to validate minimum qemu and libvirt versions
* check if the user running the script is part of the libvirt group
* revert changes of restore expunged VM
* add code coverage ignore file
* remove check
* fix issue with listing schedules and add defensive checks
* redirect logs to agent log file
* add some more debugging
* remove test file
* prevent deletion of CKS cluster when VMs are associated to a backup offering
* delete all snapshot policies when bkp offering is disassociated from a VM
* Fix `updateTemplatePermission` when the UI is set to a language other than English (#9766)
* Fix updateTemplatePermission UI in non-english language
* Improve fix
---------
* Add nobrl in the mountopts for cifs file system
* Fix restoration of VM / volumes with cifs
* add cifs utils for el8
* add cifs-utils for ubuntu cloudstack-agent
* syntax error
* remove required constraint on both vmid and id params for the delete bkp schedule command
* add use of virsh domifaddr to get VM external DHCP IP
* updates to modularize LibvirtGetVmIpAddressCommandWrapper per comments; added test cases to cover 90%+ scenarios
This PR introduces the concept of multi-scope configuration settings. Currently, in addition to the Global level, each configuration can be set at only a single scope level.
It is useful if a configuration can be set at multiple scopes. For example, a configuration set at the domain level
applies to all accounts, but it can be set for an account as well, in which case the account-level setting overrides the domain-level setting.
This is done by changing the column `scope` of table `configuration` from string (single scope) to bitmask (multiple scopes).
```
public enum Scope {
    Global(null, 1),
    Zone(Global, 1 << 1),
    Cluster(Zone, 1 << 2),
    StoragePool(Cluster, 1 << 3),
    ManagementServer(Global, 1 << 4),
    ImageStore(Zone, 1 << 5),
    Domain(Global, 1 << 6),
    Account(Domain, 1 << 7);

    final Scope parent;   // parent scope consulted when no value is set here
    final int bitValue;   // bit stored in the configuration table's scope bitmask

    Scope(Scope parent, int bitValue) {
        this.parent = parent;
        this.bitValue = bitValue;
    }
}
```
Each scope is also assigned a parent scope. When a configuration for a given scope is not defined but is available for multiple scope types, the value will be retrieved from the parent scope. If there is no parent scope or if the configuration is defined for a single scope only, the value will fall back to the global level.
Hierarchy for different scopes is defined as below :
- Global
- Zone
- Cluster
- Storage Pool
- Image Store
- Management Server
- Domain
- Account
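A minimal sketch of that fallback walk, assuming the `Scope` enum above and an illustrative flat key-value store (the real code reads the `configuration` table and resolves each parent entity's id):
```
import java.util.Map;

final class ScopedConfigLookup {
    private final Map<String, String> store; // stand-in for the configuration table

    ScopedConfigLookup(Map<String, String> store) {
        this.store = store;
    }

    // Walk up the parent chain; the most specific scope wins
    // (e.g. Account overrides Domain, which overrides Global).
    String value(String name, Scope scope, long scopeId) {
        for (Scope s = scope; s != null; s = s.parent) {
            String v = store.get(s + ":" + scopeId + ":" + name);
            if (v != null) {
                return v;
            }
            // The real code would translate scopeId to the parent entity's id here.
        }
        return store.get("Global:0:" + name); // final fallback to the global value
    }
}
```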
This PR also updates the scope of the following configurations (Storage Pool scope is added in addition to the existing Zone scope):
- pool.storage.allocated.capacity.disablethreshold
- pool.storage.allocated.resize.capacity.disablethreshold
- pool.storage.capacity.disablethreshold
Doc PR : https://github.com/apache/cloudstack-documentation/pull/476
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This is found in some PRs
plugins/storage/volume/linstor/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStorageAdaptor.java:510: poperties ==> properties
Doc PR : https://github.com/apache/cloudstack-documentation/pull/461
This PR fixes https://github.com/apache/cloudstack/issues/8638
== Description
Four new Resource Types have been added. Admin can configure corresponding resource limits for the tenants at different levels (domain, account, project)
User dashboard's Storage section will show the new resources, their limits and current usage.
1. backup - No. of backups used by the account
2. backup_storage - Backup storage allocated for the account
3. bucket - No. of buckets used by the account
4. object_storage - Object storage allocated for the account.
Some other related changes done to BnR framework:
1. Maximum number of Backups to retain can be specified while creating Backup schedules, similar to Scheduled snapshots.
2. Oldest Scheduled backup of the same interval type will be deleted once the number reaches the configured max Backups value.
3. Code refactor: Moved syncBackups method from BackupProvider to the framework BackupManagerImpl, as it is a common functionality and all providers were using duplicated code.
Changes done to the Object Storage Framework
1. Quota parameter is made mandatory while creating a bucket. Bucket quota is considered to be the allocated space and will be used to enforce Resource limits.
== Schema Changes:
1. New column `max_backups` added to `backup_schedule` table
2. New column `backup_interval_type` added to `backups` table
== Api Changes:
1. createBackup: new parameter `scheduleid`. It should be specified whenever a scheduled backup is created. This translates to the `backup_interval_type` in the `backups` table.
2. createBackupSchedule: new parameter `max_backups`, to specify the maximum number of backups to retain for the given schedule.
== Configurations:
|Setting |Scope |Default Value |Description|
|-------|--------|--------------|-----------|
|backup.max.hourly |Global |8 |Maximum recurring hourly backups to be retained for an instance|
|backup.max.daily |Global |8 |Maximum recurring daily backups to be retained for an instance|
|backup.max.weekly |Global |8 |Maximum recurring weekly backups to be retained for an instance|
|backup.max.monthly |Global |8 |Maximum recurring monthly backups to be retained for an instance|
|max.account.backups| Global| 20 | The default maximum number of backups that can be created for an account|
|max.account.backup.storage| Global| 400 | The default maximum backup storage space (in GiB) that can be used for an account|
|max.domain.backups| Global| 40 | The default maximum number of backups that can be created for a domain|
|max.domain.backup.storage| Global| 800 | The default maximum backup storage space (in GiB) that can be used for a domain|
|max.project.backups| Global| 20 | The default maximum number of backups that can be created for a project|
|max.project.backup.storage| Global| 400 | The default maximum backup storage space (in GiB) that can be used for a project|
|Setting |Scope |Default Value |Description|
|-------|--------|--------------|-----------|
|max.account.buckets| Global| 20 | The default maximum number of buckets that can be created for an account|
|max.account.object.storage| Global| 400 | The default maximum object storage space (in GiB) that can be used for an account|
|max.domain.buckets| Global| 40 | The default maximum number of buckets that can be created for a domain|
|max.domain.object.storage| Global| 800 | The default maximum object storage space (in GiB) that can be used for a domain|
|max.project.buckets| Global| 20 | The default maximum number of buckets that can be created for a project|
|max.project.object.storage| Global| 400 | The default maximum object storage space (in GiB) that can be used for a project|
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Lucas Martins <56271185+lucas-a-martins@users.noreply.github.com>
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* api,agent,server,engine-schema: scalability improvements
Following changes and improvements have been added:
- Improvements in handling of PingRoutingCommand
1. Added global config - `vm.sync.power.state.transitioning`, default value: true, to control syncing of power states for transitioning VMs. This can be set to false to prevent computation of transitioning state VMs.
2. Improved VirtualMachinePowerStateSync to allow power state sync for host VMs in a batch
3. Optimized scanning stalled VMs
- Added option to set worker threads for capacity calculation using config - `capacity.calculate.workers`
- Added caching framework based on Caffeine in-memory caching library, https://github.com/ben-manes/caffeine
- Added caching for account/user role API access; expiration after write can be configured using config - `dynamic.apichecker.cache.period` (in seconds, e.g. 60). If set to zero then there will be no caching. Default is 0. See the sketch after this list.
- Added caching for some recurring DB retrievals
1. CapacityManager - listing service offerings - beneficial in host capacity calculation
2. LibvirtServerDiscoverer existing host for the cluster - beneficial for host joins
3. DownloadListener - hypervisors for zone - beneficial for host joins
4. VirtualMachineManagerImpl - VMs in progress - beneficial for processing stalled VMs during PingRoutingCommands
- Optimized MS list retrieval for agent connect
- Optimize finding ready systemvm template for zone
- Database retrieval optimisations - fix and refactor for cases where only IDs or counts are used mainly for hosts and other infra entities. Also similar cases for VMs and other entities related to host concerning background tasks
- Changes in agent-agentmanager connection with NIO client-server classes
1. Optimized the use of the executor service
2. Refactored Agent class to better handle connections.
3. Do SSL handshakes within worker threads
4. Added global configs to control the behaviour depending on the infra. SSL handshake could be a bottleneck during agent connections. Configs - `agent.ssl.handshake.min.workers` and `agent.ssl.handshake.max.workers` can be used to control the number of new connections the management server handles at a time. `agent.ssl.handshake.timeout` can be used to set the number of seconds after which the SSL handshake times out at the MS end.
5. On the agent side, backoff and SSL handshake timeout can be controlled by agent properties. `backoff.seconds` and `ssl.handshake.timeout` properties can be used.
- Improvements in StatsCollection - minimize DB retrievals.
- Improvements in DeploymentPlanner allow for the retrieval of only desired host fields and fewer retrievals.
- Improvements in hosts connection for a storage pool. Added config - `storage.pool.host.connect.workers` to control the number of worker threads that can be used to connect hosts to a storage pool. Worker thread approach is followed currently only for NFS and ScaleIO pools.
- Minor improvements in resource limit calculations wrt DB retrievals
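As a sketch of the Caffeine-based caching mentioned in the list above (the key format and value types here are assumptions, not CloudStack's actual types):
```
import java.time.Duration;
import java.util.function.Function;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

final class ApiAccessCache {
    // Expire-after-write mirrors how dynamic.apichecker.cache.period is described.
    private final Cache<String, Boolean> cache = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofSeconds(60))
            .maximumSize(10_000)
            .build();

    boolean isAllowed(String roleApiKey, Function<String, Boolean> checker) {
        // get() computes and stores the value on a miss, atomically per key
        return cache.get(roleApiKey, checker);
    }
}
```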
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* test1, domaindetails, capacitymanager fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* test2 - agent tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* capacitymanagertest fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert marvin/setup.py
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix indent
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* use space in sql
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address duplicate
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* update host logs
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert e36c6a5d07
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix npe in capacity calculation
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* move schema changes to 4.20.1 upgrade
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix build
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add some more tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* checkstyle fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove unnecessary mocks
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* replace statics
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* engine/orchestration,utils: limit number of concurrent new agent
connections
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor - remove unused
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unregister closed connections, monitor & cleanup
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add check for outdated vm filter in power sync
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* agent: synchronize sendRequest wait
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Support for Management Server Maintenance
- New APIs: prepareForMaintenance and cancelMaintenance, with required parameter - managementserverid.
- New management server states for maintenance: PreparingForMaintenance, Maintenance.
- listHosts API with optional parameter – managementserverid, to list the hosts connected to the management server.
- Support management server maintenance when more than one active management servers available.
- Triggers transfer agents to other available management servers for maintenance, new agent command MigrateAgentConnectionCommand to initiate transfer of indirect agents.
- New global config 'management.server.maintenance.timeout', to set the timeout (in mins) for the management server maintenance window, default: 60 mins.
- UI changes: Prepare and Cancel Maintenance in Management Server section, Connected Agents tab, New fields for hosts and management servers.
* Updated pending jobs check timer task with ScheduledExecutorService
* keep maintenance state on trigger shutdown call when ms is in maintenance
* add pending jobs count to ms response
* during ms heartbeat, update state to up only when it's down
* allow vm work jobs of async job created before prepare for maintenance
* Revert "keep maintenance state on trigger shutdown call when ms is in maintenance"
This reverts commit 607e13364679eac897f4d146bb3325ea7a61ba17.
* skip maintenance test when multiple management servers are not available, and not configured in host setting for kvm
* Delete local storage properties in agent.properties during delete pool
* Fix stale entry when add local storage failed
* Smaller methods
* Comment added
* 4.20:
linstor: Fix ZFS snapshot backup (#10219)
fix listing of VMs by network (#10204)
Configure org.eclipse.jetty.server.Request.maxFormKeys from server.properties and increase the default value (#10214)
api: fix access for listSystemVmUsageHistory (#10032)
Fix NPE issues during host rolling maintenance, due to host tags and custom constrained/unconstrained service offering (#9844)
* 4.20:
merge errors fixed
Restrict the migration of volumes attached to VMs in Starting state (#9725)
server, plugin: enhance storage stats for IOPS (#10034)
Introducing granular command timeouts global setting (#9659)
Improve logging to include more identifiable information (#9873)
Adds framework layer change to allow retrieving and storing IOPS stats for storage pools. Custom `PrimaryStoreDriver` can implement method - `getStorageIopsStats` for returning IOPS stats. Existing method `getUsedIops` can also be overridden by such plugins when only used IOPS is returned.
For testing purpose, implementation has been added for simulator hypervisor plugin to return capacity and used IOPS for a pool.
For local storage pool, implementation has been added using iostat to return currently used IOPS.
StoragePoolResponse class has been updated to return IOPS values which allows showing IOPS values in UI for different storage pool related views and APIs.
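A hypothetical sketch of a driver-side implementation (the exact `getStorageIopsStats` signature on `PrimaryStoreDriver` may differ):
```
// Illustrative only: return capacity and used IOPS for a pool, as the
// simulator implementation described above does for testing.
final class SimulatedIopsStats {
    static long[] getStorageIopsStats(long poolId) {
        long capacityIops = 10_000L; // advertised capacity of the pool
        long usedIops = 1_250L;      // e.g. sampled via iostat for local storage
        return new long[] {capacityIops, usedIops};
    }
}
```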
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Improve logging to include more identifiable information for kvm plugin
* Update logging for scaleio plugin
* Improve logging to include more identifiable information for default volume storage plugin
* Improve logging to include more identifiable information for agent managers
* Improve logging to include more identifiable information for Listeners
* Replace ids with objects or uuids
* Improve logging to include more identifiable information for engine
* Improve logging to include more identifiable information for server
* Fixups in engine
* Improve logging to include more identifiable information for plugins
* Improve logging to include more identifiable information for Cmd classes
* Fix toString method for StorageFilterTO.java
* 4.20:
VR: apply iptables rules when add/remove static routes (#10064)
Certificate and VM hostname validation improvements (#10051)
set ulimit for server according to redhat spec (#10040)
kvm-storage: provide isVMMigrate information to storage plugins (#10093)
Allow config drive deletion of migrated VM, on host maintenance (#10045)
linstor: improve heartbeat check with also asking linstor (#10105)
server: simplify role change validation (#9173)
UI: create VPC network offering with conserve mode (#10082)
server: fix typo removeaccessvpn in VirtualRouterElement (#10086)
UI: remove duplicated Instance Name in Public IP details page (#10087)
UI: Fixes in the Usage UI (#10000)
SAML2: add cookie with HttpOnly too #10013 (#10047)
ui: Allow font-awesome icon usage and optimise icon size inconsistency (#9744)
Particular Linstor setups can use this information to allow
dual volume access only for live migration and not enable it in general,
which can and will lead to data corruption if for some reason
two VMs get started on two different hosts.
If a node doesn't have a DRBD connection to another node,
additionally ask the Linstor controller whether the node is alive.
Otherwise we would have simply said no, while the node might still be alive.
This is always the case in a non-hyperconverged setup.
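A minimal sketch of that decision, with hypothetical helper interfaces:
```
// Illustrative only: if there is no DRBD connection to the peer (the normal
// case for diskless nodes in non-hyperconverged setups), ask the Linstor
// controller before declaring the node dead.
final class LinstorHeartbeat {
    interface Drbd { boolean hasConnection(String node); }
    interface Controller { boolean isNodeOnline(String node); }

    private final Drbd drbd;
    private final Controller controller;

    LinstorHeartbeat(Drbd drbd, Controller controller) {
        this.drbd = drbd;
        this.controller = controller;
    }

    boolean isAlive(String node) {
        if (drbd.hasConnection(node)) {
            return true;
        }
        return controller.isNodeOnline(node);
    }
}
```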
* API to validate Quota activation rule
* Apply suggestions from code review
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>
* Use constants
---------
Co-authored-by: Henrique Sato <henrique.sato@scclouds.com.br>
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>
* 4.20:
UI: Tooltip on the host information card to display the CPU speed in MHz and the memory value in MB (to 3 decimal places) (#9971)
UI: Allow accounts of the `User` type to add other accounts or users to projects through UI (#9927)
enable to create VPC portfowarding rules with source cidr (#7081)
Add new column `last_id` to the table volumes (#9759)
Allow VMWare import via another host (#9787)
Linstor: add support for ISO block devices and direct download (#9792)
get expunged VM data for job result (#9949)
fix section divider display on auth page (#9966)
* cli changes to update user/account, list by apikeyaccess, domain level setting
* UI changes for updating user/account and searchfilter in listview
* make the api parameters and setting accessible only to root admin
* revert changes to ui/package-lock.json
* minor changes to description strings
* UT for ApiServer and AccountManagerImpl classes
* fix pre-commit failure
* Added a constant for the string System
* UT for searchForUsers and searchForAccounts
* Fix marvin test error
* Update schema to use idempotent add column
* Fix `updateTemplatePermission` when the UI is set to a language other than English (#9766)
* Fix updateTemplatePermission UI in non-english language
* Improve fix
---------
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
* Added user name uuid to logging
* Add events when api key access is changed via api or config setting
* fix the userid for api key access update event
* Fix ut failure after event logging
* Convert drop down to radio-button in edit user and account
* Add ApiKeyAccess status in User InfoCard for Users if Api key is generated
* Return apiKeyAccess in user and account response only for Root Admin
* fixed noredist build failure
* Show apikeyaccess on the left panel in the user view for root admins as well
* don't show divider if apiKeyAccess is not shown to user
* Fix events generated to set Username, Account and Domain of the caller correctly
* cli changes to update user/account, list by apikeyaccess, domain level setting
* UI changes for updating user/account and searchfilter in listview
* make the api parameters and setting accessible only to root admin
* revert changes to ui/package-lock.json
* minor changes to description strings
* UT for ApiServer and AccountManagerImpl classes
* fix pre-commit failure
* Added a constant for the string System
* UT for searchForUsers and searchForAccounts
* Fix marvin test error
* Update schema to use idempotent add column
* Added user name uuid to logging
* Add events when api key access is changed via api or config setting
* fix the userid for api key access update event
* Fix ut failure after event logging
* Convert drop down to radio-button in edit user and account
* Add ApiKeyAccess status in User InfoCard for Users if Api key is generated
* Return apiKeyAccess in user and account response only for Root Admin
* fixed noredist build failure
* Show apikeyaccess on the left panel in the user view for root admins as well
* don't show divider if apiKeyAccess is not shown to user
* Fix events generated to set Username, Account and Domain of the caller correctly
* Added DB upgrade path from 42000 to 42010
---------
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Lucas Martins <56271185+lucas-a-martins@users.noreply.github.com>
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
If a secondary storage pool is used by e.g. two concurrent
snapshot->template actions, the first action to finish removed the
netfs mount point out from under the other action.
Now the storage pool usage is ref-counted and the mount point is only
deleted when there are no more users.
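A minimal sketch of the ref-counting described above (hypothetical names):
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: concurrent users of the same netfs mount point are
// counted, and the mount point is removed only by the last user.
final class MountRefCounter {
    private final Map<String, Integer> refs = new ConcurrentHashMap<>();

    void acquire(String mountPoint) {
        refs.merge(mountPoint, 1, Integer::sum);
    }

    void release(String mountPoint, Runnable unmount) {
        refs.compute(mountPoint, (k, count) -> {
            if (count == null || count <= 1) {
                unmount.run(); // last user removes the mount point
                return null;
            }
            return count - 1;
        });
    }
}
```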
In non-hyperconverged setups, diskless nodes don't have a connection
to each other, so setting properties there had no effect.
Now it is checked whether a connection exists between the live-migration
nodes; if not, allow-two-primaries is set on the resource-definition level.
This fixes an issue when creating an OVS network
```
2024-10-29 16:02:45,089 WARN [resource.wrapper.LibvirtOvsFetchInterfaceCommandWrapper] (agentRequest-Handler-2:null) (logid:e716722e) Network interface: ''cloudbr1'' not found
```
This is a regression from a previous security release
(see "framework/cluster: improve cluster service, integration API server"):
since we now use NetworkInterface.getByName to get the network interface, we should NOT add single quotes before/after the label.
* StorPool: fix snapshot deletion
Mark the DB record as destroyed when a snapshot is deleted
* Addressed reviews
* addressed review
qemu versions prior to 7.0 have a bug with discard enabled when using the IDE bus.
It would crash the qemu process and kill the virtual machine;
this is most noticeable when installing a Windows guest from the
Windows ISO installer.
* linstor: enable discard for Linstor storage pools
All Linstor storage backends support discard, so it can be safely enabled.
* linstor: enable discard for Linstor storage pools CHANGELOG.md
* CKS: add ConfigDrive to cloud-init datasource_list in systemvm template
* systemvm template: update debian 11.7.0 iso url
* CKS: get K8S iso by LABEL=CDROM if config drive ISO is attached
* Revert "CKS: add ConfigDrive to cloud-init datasource_list in systemvm template"
This reverts commit b6863a5ce1.
* CKS: patch cloud-init in opt/cloud/bin/setup/cksnode.sh
* PR7650: move ConfigDrive before CloudStack in datasource list
* Revert "CKS: patch cloud-init in opt/cloud/bin/setup/cksnode.sh"
This reverts commit 75be03c6aa.
* CKS: fix ConfigDrive
* Make volume attachment disk controller selection consistent with VM creation and start
* Update vmware-base/src/main/java/com/cloud/hypervisor/vmware/util/VmwareHelper.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Choose disk controllers after converting osdefault
* Rename function
---------
Co-authored-by: dahn <daan.hoogland@gmail.com>
* add dedicated resource response
* populate dedicatedresources field
* change affinity group name and description when it contains dedicated resources
* display dedicatedresources on UI
* add end of line to DedicatedResourceResponse class
* remove unnecessary fully qualified names
* Prevent addition of duplicate PF rules on scale up and no rules left behind on scale down (#32)
* fix missing dependency injection
* NSX: Fix concurrency issues on port forwarding rules deletion (#37)
* Fix concurrency issues on port forwarding rules deletion
* Refactor objectExists
* Fix unit test
* Fix test
* Small fixes
* CKS: Externalize control and worker node setup wait time and installation attempts (#38)
* NSX: Add shared network support (#41)
* NSX: Fix number of physical networks for Guest traffic checks and leftover rules on CKS cluster deletion (#45)
* Fix pf rules removal on CKS cluster deletion
* Fix check for number of physical networks for guest traffic
* Fix unit test
* fix logger
* NSX: Handle CheckHealthCommand to avoid host disconnection and errors on APIs
* Remove unused string
* fix logger
* Update UDP active monitor to ICMP
* Fix NPE on restarting VPC with additional public IPs
* NSX / VPC: Reuse Source NAT IP from systemVM range on restarts
* CKS: Public IP not found for VPC networks
* Externalize retries and interval for NSX segment deletion (#67)
* remove unused import
* remove duplicate imports
* remove unused import
* revert externalizing cks settings
* fix test
* Refactor log messages
* Address comments
* Fix issue caused due to forward merge: 90fe1d
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add logs to LibvirtComputingResource's metrics collecting process
* Apply Joao's suggestions
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
* Adjust some logs
* Print memory statistics log in one line
---------
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
This introduces the multi-arch zones, allowing users to select the VM arch upon deployment.
Multi-arch zone support in CloudStack can allow admins to mix x86_64 & arm64 hosts within the same zone with the following changes proposed:
- All hosts in a cluster need to be homogeneous wrt host CPU type (amd64 vs arm64) and hypervisor
- Arch-aware templates & ISOs:
- Add support for a new arch field (default set of: amd64 and arm64); when unspecified it defaults to amd64, including for existing templates & ISOs
- Allow admins to edit the arch type of the registered template & iso
- Arch-aware clusters and host:
- Add a new arch field for clusters and hosts (KVM host agents can automatically report this; the arch of the first host of the cluster is the cluster's architecture); defaults to amd64 when not specified
- Allow admins to edit the arch of an existing cluster
- VM deployment form (UI):
- In a multi-arch zone/env, the VM deployment form can allow some kind of template/ISO filtering in the UI
- Users should be able to select arch: amd64 & arm64; but this is shown only in a multi-arch zone (env)
- VM orchestration and lifecycle operations:
- Use of VM/template's arch to correctly decide where to provision the VM (on the correct strictly arch-matching host/clusters) & other lifecycle operations (such as migration from/to arch-matching hosts)
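A minimal sketch of the strict arch-matching placement rule described above (hypothetical names):
```
// Illustrative only: a VM deploys onto clusters/hosts whose architecture
// strictly matches its template's arch; unspecified arch defaults to amd64.
final class ArchMatcher {
    enum Arch { AMD64, ARM64 }

    static boolean canDeploy(Arch templateArch, Arch clusterArch) {
        Arch effective = (templateArch == null) ? Arch.AMD64 : templateArch;
        return effective == clusterArch;
    }
}
```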
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR contains 3 features
- IPv4 Static Routing (Routed mode) #9346
Design document: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=306153967
- AS Numbers Management #9410
Design Document: https://cwiki.apache.org/confluence/display/CLOUDSTACK/BGP+AS+Numbers+Management
- Dynamic routing
Design Document: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=315492858
- Document: https://github.com/apache/cloudstack-documentation/pull/419
Rename nsx mode to routing mode
by
```
git grep -l nsx_mode |xargs sed -i "s/nsx_mode/routing_mode/g"
git grep -l nsxmode |xargs sed -i "s/nsxmode/routingmode/g"
git grep -l nsxMode |xargs sed -i "s/nsxMode/routingMode/g"
git grep -l NsxMode |xargs sed -i "s/NsxMode/RoutingMode/g"
```
- re-organize sql changes
- fix NPE as rules do not have public ip
- fix missing destination cidr in ingress rules
- disable network usage for routed network
- fix DB exception as network_id is -1 during network creation
- apply ingress/egress routing rules
- VR changes to configure nft rules for isolated network
- VR: setup nft rule for control network
- VR: flush all iptables rules
- fix NPE which is because ingress rules do not have public ip associated
- fix dest cidr is missing in nft tables
- add ip4 routing and ip4 routes to list network and list vpc response
- fix ingress rule is missing when vr is restarted
- fix icmp types in nft rules
- add tab to manage routing firewall rules
- fix ingress rules are not applied when VR is restarted
- add default rules in FORWARD chain
- fix create vpc offerings
- fix public ip is not assigned to vpc
- fix network offering is not listed when create vpc tier
- add is_routing to boot args of vpc vr
- remove table ip4_firewall in vpc vr
- release or remove subnet when remove a network
- implement fw_vpcrouter_routing
- fix wrong ip family when flushing ipv4 rules
- fix acl rules are not applied due to wrong version (should be 6 which means ip6 rules are removed)
- add default rules for vpc tiers so that tcp connections (e.g. ssh) work
- append policy rules after default rules
- remove /usr/local/cloud/systemvm/ in routers
- throw an exception when allocate subnet with cidrsize
- fix some TODOs
- add new parameters to update API
- return type Ipv4GuestSubnetNetworkMap when get or create subnet
- fix firewall rules are broken
- add domain_id and account_id to db
- add domain/account/project to ipv4 subnet response
- create ipv4 subnet for domain/account/project
- check conflict when update ipv4 subnet
- ui changes
- add parent subnet to response
- add list for ipv4 subnet
- implement some methods
- fix list subnets for guest networks by zoneid
- UI changes
- fix delete ipv4 subnet for network
- fix ipv4 subnet is set to zone guest network cidr if cidrsize is specified
- add zone info to response if parent subnet is null but network is not
- fix gateway/cidr is not set when create network with cidrsize
- fix order of nft rules in the VRs
* Routed v24
- add classes in marvin base.py
* Routed v25
- add test_01_subnet_zone
- fix dedicate to domain/account failure
- list subnets for network by keyword and subnet
* Routed v26: implement subnet auto-allocation
- add utils to split ip ranges into small subnets
- add utils to get start/end ip of a cidr (see the sketch after this list)
- implement subnet auto-generation
- add global settings
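A minimal sketch of the start/end-of-CIDR helper mentioned above (IPv4 only, illustrative names):
```
// "10.1.2.0/24" -> start 10.1.2.0, end 10.1.2.255 (as unsigned 32-bit values)
final class CidrUtils {
    static long[] startAndEndIp(String cidr) {
        String[] parts = cidr.split("/");
        int prefix = Integer.parseInt(parts[1]);
        long mask = prefix == 0 ? 0 : (0xFFFFFFFFL << (32 - prefix)) & 0xFFFFFFFFL;
        long start = ipToLong(parts[0]) & mask;
        long end = start | (~mask & 0xFFFFFFFFL);
        return new long[] {start, end};
    }

    static long ipToLong(String ip) {
        long value = 0;
        for (String octet : ip.split("\\.")) {
            value = (value << 8) | Integer.parseInt(octet);
        }
        return value;
    }
}
```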
* Routed 27: add subnet for VPC
- add db column for vpc_id
- add db record for vpc
- remove db record when delete a vpc
- add checkConflicts methods
- remove duplicated settings
- check ipv4 cidr when create subnet
* Routed v28: update smoke tests
- update test_ipv4_routing.py
- search subnets by networkid
* Routed 29: fix vpc and add more tests
- fix createnetwork in vpc
- add vpc id/name to response
- fix zone id/name are not displayed in some cases
- add smoke test for vpc
- add smoke tests for failed cases
- add smoke test for connectivity checks
- marvin: add "-q" to ssh command
* Routed 31: ui and smoke tests
- UI: add link to network in list view
- add nftables rules check in VRs
* Routed 32: add chain OUTPUT and more rules
- fix the issue that ports 80/443/8080 are not reachable from the VR itself
```
2024-06-27 10:21:52,121 INFO Executing: systemctl start cloud-password-server@172.31.1.1
2024-06-27 10:21:52,128 INFO Service cloud-password-server@172.31.1.1 start
2024-06-27 10:21:52,129 INFO Executing: ps aux
2024-06-27 10:24:02,175 ERROR Failed to update password server due to: <urlopen error [Errno 110] Connection timed out>
```
* Routed: fix dns search from VMs in Isolated networks
* Routed: fix VPC dns issue due to gateway IP is missing in cloud.conf
This is caused by NSX integration, and fixed by
https://github.com/apache/cloudstack/pull/9102/
* Routed: rename routing_mode to network_mode
* Routed: replace centos5.5 template in smoke test as dhclient does not work in the vms
// this does not work; refer to https://dominikrys.com/posts/disable-udp-checksum-validation/#ignoring-udp-checksums-with-nftables and https://forum.openwrt.org/t/udp-checksum-with-nftables/161522/11. The VM should have checksum offloading disabled.
* Routed: fix smoke test due to wrong cidrlist of egress rules and missing ingress rule from VR
* PR 9346: fix lint error schema-41910to42000.sql
* PR 9346: ui polish v1
* PR 9346: create VPC with cidrsize
* Routed: fix test failures with test_network_ipv6 and test_vpc_ipv6 due to 'ssh -q'
* Routed: fix /usr/local/cloud/systemvm/ are removed after SSVM/CPVM reboot
* Routed: fix IP of additional nics of VPC VR is not gateway
* PR 9346: fix cidrsize check when create VPC with cidrsize
* Routed: fix test/integration/smoke/test_ipv4_routing.py:279:16: E713 test for membership should be 'not in'
* PR9346: fix/Update api
* PR 9346: set response object name
* PR9346: UI refactor and small fixes
* PR9346: change return type of getNetworkMode
* PR9346: move IPv4 subnet to a separate tab
* PR9346: revert IpRangesTabGuest.vue back to original
* PR9346: fix remove ipv4 subnet on UI
* PR9346: fix test_ipv4_routing.py
* AS Number Range Management
* Create AS Number Range for a Zone
* Fix build
* Add ListASNRange and fix create ASN range
* Add List AS numbers
* Add UI for AS Numbers
* Fix UI and filter AS Numbers
* Add AS Number on Isolated network creation and refactor UI and response
* Release AS Number
* Add network offering new columns
* Add UI support to view and add AS number and configure network offering
* Automatically assign an AS number if none is specified
* update variable name
* Fix routing mode check
* UI: Only allow selecting AS number when routing mode is Dynamic and specifyAsNumber is true
* UI: Only pass AS number when supported by the network offering
* Release AS number on network deletion
* Add deleteASNRange command (#81)
* API: List ASNumbers by asnumber (#83)
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* AS number management extensions
* Support AS number on VPC tier creation based on the offering
* Fix delete AS Range
* Fix UI values
* UI: Minor fix for releasing AS number
* UI: Move management of AS Range to Zone details view
* Fix specify_as_number column in network_offering table to set the default false
* Add events for AS number operations
* Allow users to list AS Numbers and fix network form for Normal users
* Add AS number details to list networks response
* Fix Allocated time format
* support in details view too
* Fix: Do not release AS number if acquired network requires AS number
* Fix typo
* Fix allocated release
* Fix event type
* UI: Add Routing mode and Specify AS to the network offering details
* Address comment
* Fix release AS number on network deletion
* Fix
* Restore release to its place based on the boolean
* Rename boolean
* API: Add networkId as listASNumber parameter
* Add Network name to the search view filter for AS numbers
* Present allocated time in human-readable format - Public IP / AS Numbers
* Add account / domain filter for AS numbers
* Add support for AS numbers on VPC offerings
* Refactor AS number allocation to VPC and non VPC isolated networks
* Checkstyle
* Add support for AS numbers on VPC offerings
* extend vpc offering view and vpcoffering response
* merge https://github.com/shapeblue/cloudstack-playtika/pull/115 and change network_id of as_numbers to include vpc_id
* Display AS number of VPC tiers as the AS number of the VPC
* extend asnumber response and ui support
* improve UI and as number response to view VPC details
* List only dynamic routing offerings for VPC tiers when specify AS number is enabled
* Fix release AS number
* Fix AS number displayed as 0 when no AS number assigned
* Fix VPC offering creation without specify AS
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix release AS number on VPC deletion
* Update server/src/main/java/com/cloud/dc/BGPServiceImpl.java
* Fix missing column on asnumber table
* Fix listASNumbers API to support vpcid and obtain AS number from vpc for tiers
* Prevent listing 0 AS number for VPC
* Fix create Isolated Network form
* Update server/src/main/java/com/cloud/network/vpc/VpcManagerImpl.java
* Dynamic: move routingmode/specifyasn after networkmode in AddNetworkOffering.vue on UI
* Dynamic: fix ip4routing in network response
* Dynamic/systemvm: add FRR to systemvm template
* Dynamic: BGP peers (DB,VO,Dao)
* Dynamic: BGP peers (VR/server)
* Dynamic: v3
- remove BgpPeer class
- fix VPC VR having BGP peers of only 1 tier
- rename ip4_cidr to guest_ip4_cidr
- rename ip6_cidr to guest_ip6_cidr
- generate /etc/frr/frr.conf (a sketch of a generated config follows this list)
- apply BGP peers on Dynamic-Routed networks even if there are no BGP peers
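Purely as an illustration of the generated /etc/frr/frr.conf (the AS numbers, router-id, and networks below are made up, and the VR's actual template may differ):
```
// Made-up values throughout; only the shape of the generated config matters.
public class FrrConfSketch {
    public static void main(String[] args) {
        String frrConf = String.join("\n",
            "frr defaults traditional",
            "!",
            "router bgp 65500",                    // auto-assigned AS number
            " bgp router-id 10.1.1.1",             // VR guest IP
            " no bgp ebgp-requires-policy",
            " neighbor 10.0.0.2 remote-as 65001",  // zone BGP peer
            " !",
            " address-family ipv4 unicast",
            "  network 10.1.1.0/24",               // guest CIDR advertised upstream
            " exit-address-family",
            "!",
            "ip nht resolve-via-default");
        System.out.println(frrConf);
    }
}
```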
* Dynamic v4: fix vpc vr
- fix duplicated guest cidr in frr.conf in vpc vr
TODO:
- restart FRR / reload FRR (reload causes the BGP session to enter the Policy state)
- APIs for BGP peers
- assign/release BGP peer to/from network
* Dynamic v5: add apis for bgp peers
* Dynamic v6: fix bugs
- set response object name
- no longer require AS number on update
- fix checks on update
- allow regular users to list bgp peers
* Dynamic v7: move apis to bgp sub-dir
* Dynamic v8: add tab for manage BGP peers on UI
* Dynamic v9: fix update bgp with same config
* Dynamic v10: add changeBgpPeersForNetworkCmd
* Dynamic v11: create network with bgppeerids
- create network with bgppeerids
- add marvin classes
- add smoke tests
- remove uuid from bgp_peer_network_map
- fix created/removed in bgp_peer_network_map
- remove bgppeers when remove a network
- UI: fix delete bgp peer
* Dynamic v12: add test for vpc tiers
* Dynamic v13: bug fixes
- fix change BGP peers for network in Allocated state
- fix listing networks returning removed records
- fix all vpc tiers have the same settings
- remove BGP peers as part of network removal
- remove FRR settings for vpc tiers without any BGP peers
- UI: fix no error msg when change BGP peers
* Dynamic v14: assign BGP Peers for VPC instead of VPC tiers
- create vpc with bgppeerids
- do not allow create/update vpc tier with bgppeerids
- apply all bgp peers when create/delete a vpc tier
- UI: change bgp peers for vpc
- test: update tests on vpc
* Dynamic: fix build errors after merging as number PR
* Dynamic: fix TODOs
* Dynamic: fix smoke test on VPC
* Allow users to create networks with AS numbers
* Address review comments
* Move BGPService to bgp package and inject it on BaseCmd
* Revert changes for CKS and address more comments
* Display left side menu option for AS number only for root admin
* Dynamic: create/update BGP peer with details
refer to https://docs.frrouting.org/en/latest/bgp.html
* Dynamic: fix build error and remove access to ListBgpPeers cmd for regular users
* Dynamic: assign all zone BGP peers to user networks
* Dynamic: show BGP peer info of networks only for root admin
* AS number: disable specifyasnumber for non-NSX offerings
* Dynamic: pass bgppeer details to command and fix typo with ip6 addr
* Dynamic: list BGP peers by isdedicated, and fix change bgppeers for network/vpc
* Dynamic: add UI labels
* Dynamic: add bgp peers to vpc response
* Dynamic: list bgp peers by keyword, fix list by asnumber
* Dynamic: fix list bgppeers by keyword and db schema
* Dynamic: fix list bgppeers do not return dedicated peers
* Dynamic: update UI when create network/vpc offering
* Update server/src/main/java/com/cloud/configuration/ConfigurationManagerImpl.java
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update tools/marvin/setup.py
* Dynamic: network mode must be the same when updating a network with a new offering
* Dynamic: add method networkModel.isAnyServiceSupportedInNetwork
* Dynamic: rename APIs and classes
* Dynamic: fix unit tests due to previous changes
* Dynamic: validateNetworkCidrSize when auto-create subnet
* Dynamic: check AS number overlap
* Dynamic: add ActionEvent
* Dynamic: small code optimization
* Dynamic: fix ui bugs after api rename
* Dynamic: add marvin and test for ASN ranges and AS numbers
* Dynamic: add account setting use.system.bgp.peers
also
- change the default value of routed.ipv4.vpc.max.cidr.size and routed.ipv4.vpc.min.cidr.size
- change the category of settings
* static: fix UI error when deleting zone IPv4 subnets
* static: small UI polish
* Dynamic: throw exception when as number is required but not passed
* Dynamic: fix a typo when creating the FRR directory, which caused network deletion failures
* Dynamic: connect to ALL (or ALL dedicated) BGP peers if no BGP peer mapping for the network/vpc
* Dynamic: throw exception when as number is required for VPC but not passed
* Dynamic: list bgp peers by useSystemBgpPeers
* Dynamic: fix frr config in VPC VR when change bgp peers
* Dynamic: create frr config even if there are no VPC tiers
* Dynamic: list bgp peers by zoneid (required for account) and account
* Dynamic: only apply FRR config for vpc tiers with dynamic routing
* Dynamic: do not send commands to the router if the command list is empty
* Dynamic: fix 'new IPv6 address is not valid' when update bgp peer without IPv6
* Dynamic: throw exception if AS number allocation fails when creating a network/VPC with dynamic routing
* Dynamic: enable ipv6 unicast and 'ip nht resolve-via-default'
* Dynamic: delete the network/VPC if AS number allocation fails when creating it with dynamic routing
* test: add unit tests for ASN APIs
* test: add unit tests for core module
* test: add unit tests for API responses
* test: add unit tests for BgpPeerTO
* test: add minor changes
* test: add tests for create/delete/update/list RoutingFirewallRuleCmd
* Static: show ip4 routes for vpc tiers
* test: fix smoke test failure caused by type change of as number
* test: add test for Ipv4SubnetForZoneCmd
* test: add test for Ipv4SubnetForGuestNetworkCmd and BgpPeerCmd
* UI: do not show redundant router when network mode is ROUTED as RVR is not supported
* UI: hide 'Conserve mode' when networkmode is ROUTED
* test: add unit tests for ListASNumbersCmdTest
* Static: remove allocated IPv4 subnet when delete a network or vpc
* test: add unit tests for BgpPeersRules
* Dynamic: set ipv4routing from network offering
* server: list as numbers and ipv4 subnets by keyword
* server: remove dedicated bgp peers and ipv4 subnets when delete an account or domain
* server: fix dedicated ipv4 subnet is allocated to other accounts
* UI: fix allocated time format
* server: ignore project if projectid is -1 so bgppeers/ipv4subnets work in project view
* UI: add project column to bgp peers and ipv4 subnets
* server: fix list AS numbers by domain admin or normal user
* server: fix network creation when ipv4 subnet is dedicated
* UI: polish network.js
* Dynamic: fix frr config for ipv6 routing
* Static routing: support cks cluster
* Static: get/create IPv4 subnet from dedicated subnets at first
* Dynamic: add BGP peers tab
* Static: remove redundant loops
* api: add since to api and response
* server: add unit tests
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This is a simple NAS backup plugin for KVM which may later be expanded to other hypervisors. This backup plugin aims to use shared NAS storage on KVM hosts, such as NFS (or CephFS and others in future), to back up fully cloned VMs for backup & restore operations. This may NOT be as efficient and performant as some of the other B&R providers, but may be useful for some KVM environments that are okay with only full-instance backups and limited functionality.
Design & Implementation follows the `networker` B&R plugin, which is simply:
- Implement B&R plugin interfaces
- Use the cmd-answer pattern to execute backup and restore operations on the KVM host when the VM is running (or needs to be restored); instead of a B&R API client, it relies on answers from the KVM agent, which executes the operations
- Backups are full VM domain snapshots, copied to VM-specific folders on a NAS target (NFS) along with the domain XML
- Backup uses the libvirt live full disk backup feature (https://libvirt.org/kbase/live_full_disk_backup.html), orchestrated via a virsh/bash script (nasbackup.sh) as libvirt-java lacks the bindings
- Supported instance volume storage for restore operations: NFS & local storage
Refer to the doc PR for feature limitations and usage details:
https://github.com/apache/cloudstack-documentation/pull/429
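A minimal, self-contained sketch of the cmd-answer flow described above; all class and field names are illustrative stand-ins, not the plugin's actual API:
```
// Illustrative stand-ins only -- not the plugin's actual classes.
record TakeBackupCommand(String vmName, String nasMountPoint) { }
record BackupAnswer(boolean success, String details) { }

class NasBackupHandler {
    // On the KVM host agent: snapshot the running domain via the nasbackup.sh
    // helper (virsh-driven live full disk backup) and copy the disks plus the
    // dumped domain XML to a VM-specific folder on the NAS mount.
    BackupAnswer execute(TakeBackupCommand cmd) {
        java.nio.file.Path dest =
                java.nio.file.Path.of(cmd.nasMountPoint(), cmd.vmName());
        // ... invoke nasbackup.sh here and check its exit code ...
        return new BackupAnswer(true, "backup stored under " + dest);
    }
}
```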
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
This feature adds support for Ceph's RADOS Gateway (RGW) to the Object Store
feature of CloudStack.
The RGW of Ceph is Amazon S3 compliant and is therefore an easy and straightforward
implementation of basic S3 features.
Existing Ceph environments can have the RGW added as an additional feature to a
cluster already providing RBD (Block Device) to a CloudStack environment.
Introduce the BucketTO to pass to the drivers. This replaces just passing the bucket's name.
Some upcoming drivers require more information than just the bucket name to perform their actions,
for example they require the access and secret key which belong to the account of this bucket.
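A minimal sketch of such a transfer object (the field set is illustrative; the real BucketTO may carry more):
```
// Illustrative field set -- the real BucketTO may differ.
public final class BucketTO {
    private final String name;
    private final String accessKey;  // credentials of the bucket owner's account
    private final String secretKey;

    public BucketTO(String name, String accessKey, String secretKey) {
        this.name = name;
        this.accessKey = accessKey;
        this.secretKey = secretKey;
    }

    public String getName() { return name; }
    public String getAccessKey() { return accessKey; }
    public String getSecretKey() { return secretKey; }
}
```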
This is leftover code from a long time ago and this validation test has no influence
on how a URL will be used afterwards.
We should support hosts pointing to an IPv6(-only) address out of the box.
For the code it does not matter if it's IPv4 or IPv6. This is the admin's choice.
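For instance, IPv6 literals in URLs simply use the bracketed form of RFC 3986, which java.net.URI parses out of the box (the address below is from the IPv6 documentation range):
```
import java.net.URI;

public class V6Host {
    public static void main(String[] args) {
        // Bracketed IPv6 literal, as in any RFC 3986 URL.
        URI u = URI.create("https://[2001:db8::1]:8080/bucket");
        System.out.println(u.getHost());  // prints: [2001:db8::1]
    }
}
```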
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Per the docs, if the MySQL connector is JDBC4 compliant then it should use
the Connection.isValid API to test a connection.
(https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#isValid-int-)
This would significantly reduce query lag and improve API throughput, as
currently one or two SELECT 1 probes are performed every time a Connection
is handed to application logic.
This should only be accepted when the driver is JDBC4 compliant.
As per the docs, Connector/J treats a query starting with the exact marker
/* ping */ as a lightweight ping to the server instead of a full SELECT 1:
https://dev.mysql.com/doc/connector-j/en/connector-j-usagenotes-j2ee-concepts-connection-pooling.html
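A small sketch of the two validation styles (the URL and credentials are placeholders):
```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ConnectionCheck {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/cloud", "cloud", "secret");

        // JDBC4: driver-native liveness check; Connector/J answers this with
        // a protocol-level ping instead of a real query.
        boolean alive = conn.isValid(2 /* seconds */);

        // Connector/J also treats a statement starting with the exact
        // marker "/* ping */" as a lightweight server ping.
        try (Statement st = conn.createStatement()) {
            st.execute("/* ping */ SELECT 1");
        }
        System.out.println("connection valid: " + alive);
    }
}
```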
Replaces the dbcp2 connection pool library with the more performant HikariCP.
With this change, unit tests are failing but the build passes.
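A minimal HikariCP setup sketch (the pool size and URL are placeholders, not CloudStack's actual db.properties values):
```
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static void main(String[] args) {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:mysql://localhost:3306/cloud");
        cfg.setUsername("cloud");
        cfg.setPassword("secret");
        cfg.setMaximumPoolSize(250);
        // Leave connectionTestQuery unset: with a JDBC4 driver, HikariCP then
        // validates connections via Connection.isValid(), avoiding the
        // per-checkout SELECT 1 round-trips dbcp2 was configured to run.
        try (HikariDataSource ds = new HikariDataSource(cfg)) {
            // use ds.getConnection() ...
        }
    }
}
```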
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohityadav89@gmail.com>
This PR makes sure a KVM VM gets the UUID of the VM as a static serial number through SMBIOS.
Some applications, primarily on Windows servers, require a stable serial number for licensing purposes. By providing this serial number we can make sure these applications can have a license configured.
More information: https://libvirt.org/formatdomain.html#smbios-system-information
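Purely as an illustration (the UUID is a placeholder), the libvirt domain XML this maps to looks like the fragment below; see the link above for details:
```
public class SmbiosSerial {
    public static void main(String[] args) {
        String uuid = "7f3a9c5e-2b1d-4a8e-9c0f-123456789abc";  // placeholder
        // <sysinfo> carries the serial; <smbios mode='sysinfo'/> exposes it.
        String fragment = String.join("\n",
            "<sysinfo type='smbios'>",
            "  <system>",
            "    <entry name='serial'>" + uuid + "</entry>",
            "  </system>",
            "</sysinfo>",
            "<os>",
            "  <smbios mode='sysinfo'/>",
            "</os>");
        System.out.println(fragment);
    }
}
```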
This PR fixes the following issue with the Sonar check:
```
Error: Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar (default-cli) on project cloudstack:
Error:
Error: The version of Java (11.0.22) used to run this analysis is deprecated, and SonarCloud no longer supports it. Please upgrade to Java 17 or later.
Error: You can find more information here: https://docs.sonarsource.com/sonarcloud/appendices/scanner-environment/
```
Main changes:
- Support build/packaging using JDK17
- Still support JDK11 for building
- Support JRE17 for use in production installations
- Drop EL7 support
The community packages will still be packaged using JDK11.
If users want, they can build with JDK17 as well.
Signed-off-by: Wei Zhou <wei.zhou@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>