Bundling all hypervisor SystemVM templates in release packages simplifies installs but inflates build time and artifact size. This change enables downloading templates on demand when they’re not found after package installation. The download path is wired into both cloud-setup-management and the existing SystemVM template registration flow.
A repository URL prefix can be provided to support air-gapped or mirrored environments: pass --systemvm-templates-repository <URL-prefix> to cloud-setup-management, or set system.vm.templates.download.repository=<URL-prefix> in server.properties for post-setup registration.
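A minimal illustration using both mechanisms (the mirror URL is a placeholder, and the default server.properties location is assumed):

    # During initial setup, point template downloads at a local mirror
    cloud-setup-management --systemvm-templates-repository http://mirror.example.local/systemvm

    # For registration after setup, set the same prefix in server.properties
    echo 'system.vm.templates.download.repository=http://mirror.example.local/systemvm' \
      >> /etc/cloudstack/management/server.properties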
If templates are already present (bundled or preseeded), behavior is unchanged and no download is attempted.
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
The Extensions Framework in Apache CloudStack is designed to provide a flexible and standardised mechanism for integrating external systems and custom workflows into CloudStack’s orchestration process. By defining structured hook points during key operations—such as virtual machine deployment, resource preparation, and lifecycle events—the framework allows administrators and developers to extend CloudStack’s behaviour without modifying its core codebase.
The Netris Plugin introduces Netris as a network service provider in CloudStack, enabling the creation and management of Virtual Private Clouds (VPCs) and orchestrating the following network functionality:
- Network segmentation with Netris-VXLAN isolation method
- Routing between "public" IP and network segments with an ACS ROUTED mode offering
- SourceNAT, DNAT, 1:1 NAT between "public" IP and network segments with an ACS NATTED mode offering
- Routing between VPC network segments (tiers in ACS nomenclature)
- Access Lists (ACLs) between VPC tiers and "public" network (TCP, UDP, ICMP) both as global egress rules and "public" IP specific ingress rules.
- ACLs between VPC network tiers (TCP, UDP, ICMP)
- External load balancing – between VPC network tiers and "public" IP
- Internal load balancing – between VPC network tiers
- CloudStack Virtual Router services (DHCP, DNS, UserData, Password Injection, etc.)
* cloudstack: add support for EL10
This adds support for Fedora 40 and the (upcoming) EL10 distros to be used
as mgmt/usage server, MySQL/NFS server & KVM host. The Python3 version has
changed to 3.12.9, for which the python-path is no longer determined
automatically.
* python: WIP code, this fails right now
Need to discuss/check whether we can skip this code, and where/how cgroup
setup is used with the KVM agent.
* prep cloudutils to be EL10 ready
Fixes an issue for Fedora, which was running old EL6 hooks that aren't
applicable to modern Fedora versions that are closer to EL8/9/10
* Support for Management Server Maintenance
- New APIs: prepareForMaintenance and cancelMaintenance, with required parameter managementserverid (see the example after this list).
- New management server states for maintenance: PreparingForMaintenance, Maintenance.
- listHosts API with optional parameter managementserverid, to list the hosts connected to the management server.
- Support management server maintenance when more than one active management server is available.
- Triggers transfer of agents to other available management servers for maintenance; the new agent command MigrateAgentConnectionCommand initiates transfer of indirect agents.
- New global config 'management.server.maintenance.timeout', to set the timeout (in mins) for the management server maintenance window, default: 60 mins.
- UI changes: Prepare and Cancel Maintenance in Management Server section, Connected Agents tab, New fields for hosts and management servers.
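As an illustration, the workflow driven via CloudMonkey (cmk); the management server UUID is a placeholder:

    # List management servers to find the target's ID
    cmk listManagementServers

    # Enter maintenance: agents are migrated to other active management servers
    cmk prepareForMaintenance managementserverid=<ms-uuid>

    # Check which hosts are still connected to this management server
    cmk listHosts managementserverid=<ms-uuid>

    # Abort and return the server to the Up state
    cmk cancelMaintenance managementserverid=<ms-uuid>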
* Updated pending jobs check timer task with ScheduledExecutorService
* keep maintenance state on trigger shutdown call when ms is in maintenance
* add pending jobs count to ms response
* during ms heartbeat, update state to up only when it's down
* allow vm work jobs of async job created before prepare for maintenance
* Revert "keep maintenance state on trigger shutdown call when ms is in maintenance"
This reverts commit 607e13364679eac897f4d146bb3325ea7a61ba17.
* skip maintenance test when multiple management servers are not available, and not configured in host setting for kvm
* 4.20:
linstor: Fix ZFS snapshot backup (#10219)
fix listing of VMs by network (#10204)
Configure org.eclipse.jetty.server.Request.maxFormKeys from server.properties and increase the default value (#10214)
api: fix access for listSystemVmUsageHistory (#10032)
Fix NPE issues during host rolling maintenance, due to host tags and custom constrained/unconstrained service offering (#9844)
This is a simple NAS backup plugin for KVM which may later be expanded to other hypervisors. The plugin uses shared NAS storage on KVM hosts, such as NFS (or CephFS and others in future), to back up fully cloned VMs for backup & restore operations. This may NOT be as efficient and performant as some of the other B&R providers, but may be useful for KVM environments that are okay with only full-instance backups and limited functionality.
Design & Implementation follows the `networker` B&R plugin, which is simply:
- Implement B&R plugin interfaces
- Use the cmd-answer pattern to execute backup and restore operations on the KVM host while the VM is running (or needs to be restored); instead of a B&R API client, it relies on answers from the KVM agent, which executes the operations
- Backups are full VM domain snapshots, copied to a VM-specific folder on a NAS target (NFS) along with the domain XML
- Backup uses the libvirt live full disk backup feature (https://libvirt.org/kbase/live_full_disk_backup.html), orchestrated via a virsh/bash script (nasbackup.sh) as libvirt-java lacks the bindings; see the sketch after this list
- Supported instance volume storage for restore operations: NFS & local storage
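A rough sketch of the underlying libvirt flow (not the actual nasbackup.sh, which also handles NAS mounts and error paths; the domain name and NAS path are illustrative, and libvirt >= 6.0 with the backup API is assumed):

    # Keep the domain XML next to the backup so restore can redefine the VM
    virsh dumpxml myvm > /mnt/nas/backups/myvm/domain.xml

    # Kick off a live full-disk backup job (push mode by default)
    virsh backup-begin myvm

    # Watch the job until it completes
    virsh domjobinfo myvm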
Refer to the doc PR for feature limitations and usage details:
https://github.com/apache/cloudstack-documentation/pull/429
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
This feature adds support for Ceph's RADOS Gateway (RGW) to the
Object Store feature of CloudStack.
The RGW of Ceph is Amazon S3 compliant and is therefore an easy and straightforward
implementation of basic S3 features.
Existing Ceph environments can have the RGW added as an additional feature to a
cluster already providing RBD (Block Device) to a CloudStack environment.
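For illustration only, registering an RGW endpoint via CloudMonkey could look like the following, assuming the standard addObjectStoragePool API from the object storage framework; the provider name "Ceph", hostname and port are assumptions, not confirmed here:

    # Register a Ceph RGW endpoint as an S3-compatible object storage pool
    # (provider name and endpoint URL are illustrative)
    cmk addObjectStoragePool name=ceph-rgw provider=Ceph url=http://rgw.example.local:7480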
Introduce the BucketTO to pass to the drivers. This replaces just passing the bucket's name.
Some upcoming drivers require more information than just the bucket name to perform their actions;
for example, they require the access and secret key which belong to the account owning the bucket.
This is leftover code from a long time ago, and this validation test has no influence
on how a URL will ultimately be used afterwards.
We should support hosts pointing to an IPv6(-only) address out of the box.
For the code it does not matter if it's IPv4 or IPv6. This is the admin's choice.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Per the docs, if the MySQL connector is JDBC4 compliant then it should use
the Connection.isValid API to test a connection.
(https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#isValid-int-)
This should significantly reduce query latency and improve API throughput, as
currently one or two SELECT 1 validation queries are performed every time a
Connection is handed to application logic.
This should only be accepted when the driver is JDBC4 compliant.
As per the docs, connector-j can prefix /* ping */ to the validation query
(i.e. /* ping */ SELECT 1) so the driver sends a lightweight ping to the server
instead of executing a full query:
https://dev.mysql.com/doc/connector-j/en/connector-j-usagenotes-j2ee-concepts-connection-pooling.html
Replaces the dbcp2 connection pool library with the more performant HikariCP.
With this change unit tests are failing but the build is passing.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohityadav89@gmail.com>
This introduces new global settings to control how client address checks
are handled by the API layer:
proxy.header.verify: enables/disables checking of IP addresses from a
proxy-set header
proxy.header.names: a list of header names to check for allowed IP addresses
from a proxy-set header.
proxy.cidr: a list of CIDRs for which "proxy.header.names" are
honoured if the "Remote_Addr" is in this list.
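A hedged example of setting these via the updateConfiguration API; the header name and CIDR are illustrative:

    # Trust X-Forwarded-For only when the request originates from the LB subnet
    cmk updateConfiguration name=proxy.header.verify value=true
    cmk updateConfiguration name=proxy.header.names value=X-Forwarded-For
    cmk updateConfiguration name=proxy.cidr value=10.1.2.0/24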
(cherry picked from commit b65546636d)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit b1e0bf9dbd)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>