Commit Graph

38782 Commits

James Peru Mmbono 0f151532cb
Merge 72f967aa6d into 5893ba5a8c 2026-05-12 12:39:38 +05:30
Fabricio Duarte 5893ba5a8c
server: Fix NPE in findHostsForMigration when no suitable hosts are found (#13138) 2026-05-12 09:07:20 +02:00
Abhishek Kumar e1521f139b
systemvmtemplate-register: correctly update existing template name in config (#12703) 2026-05-11 13:22:54 +02:00
Suresh Kumar Anaparti a4a52c9665
Merge branch '4.22' 2026-05-08 20:57:36 +05:30
Suresh Kumar Anaparti 4359198904
KVM Host HA improvements - Fix to not cancel VM HA items when Host HA inspection is in progress, and some code improvements (#13088)
* Host HA code improvements

* Fix to not cancel VM HA items when Host HA is enabled and inspection is in progress, and some code improvements

- When Host HA inspection is in progress, the investigator returns the host status as Up, which cancels the VM HA items
- Don't cancel the VM HA items; instead, reschedule them to try again later

* Consider the Recovered/Available Host HA states along with the agent connection status to determine whether a Host HA inspection is in progress, and some code improvements
2026-05-08 19:50:50 +05:30
Suresh Kumar Anaparti ddcc0c889d
Don't delete volume on store if it is not created or doesn't exist on it (#13111) 2026-05-08 12:20:06 +05:30
Manoj Kumar 72b99a3f8c
Make resource deletion safer with name confirmation (#13104)
* enable double confirmation in delete flow for resource

* address copilot comments
2026-05-08 10:56:50 +05:30
Manoj Kumar 4425ee4234
Remove unnecessary if-else branch in template permission validation (#12683)
* consolidate if-else branch
2026-05-07 21:37:31 -03:00
dahn f6efda50d2
Update .asf.yaml: Add ingox as collaborator (#12058) 2026-05-07 17:11:54 +02:00
dependabot[bot] cbc1ae7388
Bump the github-actions-dependencies group across 1 directory with 9 updates (#13042)
Bumps the github-actions-dependencies group with 9 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [codecov/codecov-action](https://github.com/codecov/codecov-action) | `4` | `6` |
| [github/codeql-action](https://github.com/github/codeql-action) | `3` | `4` |
| [github/gh-aw](https://github.com/github/gh-aw) | `0.45.0` | `0.71.1` |
| [actions/github-script](https://github.com/actions/github-script) | `8.0.0` | `9.0.0` |
| [actions/upload-artifact](https://github.com/actions/upload-artifact) | `6.0.0` | `7.0.1` |
| [actions/download-artifact](https://github.com/actions/download-artifact) | `6.0.0` | `8.0.1` |
| [docker/login-action](https://github.com/docker/login-action) | `2` | `4` |
| [eps1lon/actions-label-merge-conflict](https://github.com/eps1lon/actions-label-merge-conflict) | `2.0.0` | `3.0.3` |
| [actions/setup-node](https://github.com/actions/setup-node) | `5` | `6` |

Updates `codecov/codecov-action` from 4 to 6
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v6)

Updates `github/codeql-action` from 3 to 4
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v3...v4)

Updates `github/gh-aw` from 0.45.0 to 0.71.1
- [Release notes](https://github.com/github/gh-aw/releases)
- [Changelog](https://github.com/github/gh-aw/blob/main/CHANGELOG.md)
- [Commits](58d1d157fb...f01a9d118a)

Updates `actions/github-script` from 8.0.0 to 9.0.0
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](ed597411d8...3a2844b7e9)

Updates `actions/upload-artifact` from 6.0.0 to 7.0.1
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](b7c566a772...043fb46d1a)

Updates `actions/download-artifact` from 6.0.0 to 8.0.1
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](018cc2cf5b...3e5f45b2cf)

Updates `docker/login-action` from 2 to 4
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v4)

Updates `eps1lon/actions-label-merge-conflict` from 2.0.0 to 3.0.3
- [Release notes](https://github.com/eps1lon/actions-label-merge-conflict/releases)
- [Changelog](https://github.com/eps1lon/actions-label-merge-conflict/blob/main/CHANGELOG.md)
- [Commits](https://github.com/eps1lon/actions-label-merge-conflict/compare/v2.0.0...v3.0.3)

Updates `actions/setup-node` from 5 to 6
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](https://github.com/actions/setup-node/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 8.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: actions/github-script
  dependency-version: 9.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: actions/setup-node
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: actions/upload-artifact
  dependency-version: 7.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: codecov/codecov-action
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: docker/login-action
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: eps1lon/actions-label-merge-conflict
  dependency-version: 3.0.3
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: github/codeql-action
  dependency-version: 4.35.1
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions-dependencies
- dependency-name: github/gh-aw
  dependency-version: 0.68.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-05-07 10:06:32 -03:00
Bernardo De Marco Gonçalves 96ca1b2a7c
Add option to control MAC address reuse for VR public NICs (#13001) 2026-05-06 13:41:11 -03:00
Daan Hoogland 3e688b0197 Merge tag '4.22.0.1' into 4.22 2026-05-06 11:13:45 +02:00
jmsperu 72f967aa6d test(backup): mock BackupDetailsDao to fix NPE in NASBackupProviderTest
Adds @Mock injection for BackupDetailsDao so NASBackupProvider's
backupDetailsDao field is wired during testDeleteBackup and
takeBackupSuccessfully, fixing the NPE flagged by @harikrishna-patnala.
2026-05-05 11:23:20 +03:00
Rene Peinthor 5b9a3d7d32
linstor: Fix a file handle resource leak opening template.properties (#13091) 2026-05-04 14:43:06 +05:30
Suresh Kumar Anaparti 519715e81a
Fix id in listguestosmapping search (#13082) 2026-05-04 14:41:35 +05:30
codingkiddo 1e512ab9c6
Skip QemuImgTest when libvirt native library cannot load (#13086)
Co-authored-by: Vinod Kumar <vinodkumar@192.168.1.3>
2026-05-03 18:45:54 +02:00
Abhishek Kumar a17bff9ba8
ui: fix webhook filters listing (#13068) 2026-05-03 18:39:41 +02:00
Suresh Kumar Anaparti 8906aa1d46
Merge branch '4.22' 2026-05-01 22:51:01 +05:30
Henrique Sato c07f1fd5d2
Number of running and stopped VMs as preset variables for `Network` type Quota tariffs (#11689)
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
2026-05-01 11:54:40 +02:00
Fabricio Duarte 1f5dba9bd2
Release reserved storage resources on VM deployment failure (#13048) 2026-04-30 20:52:35 +05:30
Bryan Lima c45596cca3
Refactor of Allocator classes (#9074)
* Refactoring Allocator classes

* Break the random and firstfit allocators into smaller methods

* Added unit tests for random and firstfit allocators

* Move random allocator from cloud-plugins to cloud-server

* Add BaseAllocator abstract class for duplicate code

* Add missing license

* Add missing license to unit test file

* Remove host allocator random dependency

* Change exception message on smoke tests

* Remove conditional as it was never actually reached in the original flow

* Fix tests

* Fix flipped parameters

* Fix NPE while listing hosts for migration when suitableHosts is null

* Remove unnecessary stubbings

* Fix checkstyle

* Remove unnecessary file

* Rename exception error messages

* Apply suggestions from code review

Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>

* Rename UserVmDetailVO references to VMInstanceDetailVO

* Remove unused imports

* Add new line at EOF

* Remove unnecessary random allocator pom

* Fix GPU allocation mistake

* Fix failing tests

---------

Co-authored-by: Fabricio Duarte <fabricio.duarte@scclouds.com.br>
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
2026-04-30 10:30:02 -03:00
Abhishek Kumar 2eef7aa9a2 adding default deny keys also when there are no other keys 2026-04-30 13:52:39 +02:00
Gean Jair Silva 92d82989e3
Correctly record the user responsible for the event (#13066)
Co-authored-by: gean.silva <gean.silva@scclouds.com.br>
2026-04-30 14:16:26 +05:30
julien-vaz a73cc9a22c
Improve Quota Statement (#10506)
* Improve Quota Statement

* Removes unused import

* Fix QuotaUsageJoinDao, QuotaResponseBuilderImpl, QuotaServiceImpl and QuotaServiceImplTest

* Reorganize imports

* Updates QuotaStatementCmd responseBuilder scope to default

* Fix log4j syntax

* Address reviews + other improvements

* Add missing SQL scripts and injections

* Change accountid and domainid logic + add unit tests

* Rename QuotaUsageDetail to QuotaTariffUsage

* Fix out of bounds exception

---------

Co-authored-by: Julien Hervot de Mattos Vaz <julien.vaz@scclouds.com.br>
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
2026-04-29 21:09:13 -03:00
Sergiy Kukunin 089eb36e47
Linstor: fix create volume from snapshot on primary storage (#13043)
* Linstor: fix create volume from snapshot on primary storage

When creating a volume from a snapshot on Linstor primary storage
(with lin.backup.snapshots=false), the operation fails with:
"Only the following image types are currently supported: VHD, OVA,
QCOW2, RAW (for PowerFlex and FiberChannel)"

Root cause: the Linstor driver does not handle SNAPSHOT -> VOLUME in
its canCopy()/copyAsync() methods. This causes DataMotionServiceImpl
to fall through to StorageSystemDataMotionStrategy (selected because
Linstor advertises STORAGE_SYSTEM_SNAPSHOT=true). That strategy's
verifyFormatWithPoolType() rejects RAW format for Linstor pools,
since RAW is only allowed for PowerFlex and FiberChannel.

Additionally, VolumeOrchestrator.createVolumeFromSnapshot() attempts
to back up the snapshot to secondary storage when the storage plugin
does not advertise CAN_CREATE_TEMPLATE_FROM_SNAPSHOT. This backup
fails because the snapshot only exists on Linstor primary storage.

Fix:
- Add CAN_CREATE_TEMPLATE_FROM_SNAPSHOT capability so the
  orchestrator skips the backup-to-secondary path
- Add canCopySnapshotToVolumeCond() to match SNAPSHOT -> VOLUME
  when both are on the same Linstor primary store
- Wire it into canCopy() to intercept at DataMotionServiceImpl
  before strategy selection, bypassing StorageSystemDataMotionStrategy
- Implement copySnapshotToVolume() which delegates to the existing
  createResourceFromSnapshot() for native Linstor snapshot restore

This follows the same pattern used by the StorPool plugin, which
handles SNAPSHOT -> VOLUME directly in its driver rather than going
through StorageSystemDataMotionStrategy.

Tested on CloudStack 4.22 with Linstor LVM_THIN storage, creating
a volume from a 1TB CNPG Postgres database snapshot. Volume creates
successfully with correct path and deletes cleanly.

* Let CloudRuntimeException propagate from copySnapshotToVolume

Remove try/catch in copySnapshotToVolume so that CloudRuntimeException
from createResourceFromSnapshot propagates to the caller, ensuring
CloudStack properly notices and reports the failure.

* Fix CAN_CREATE_TEMPLATE_FROM_SNAPSHOT breaking template creation

Setting CAN_CREATE_TEMPLATE_FROM_SNAPSHOT unconditionally to true
caused createTemplate from snapshot to take the StorPool-specific
code path in TemplateManagerImpl, which sends a CopyCommand to a
system VM that Linstor cannot handle.

Fix: make CAN_CREATE_TEMPLATE_FROM_SNAPSHOT conditional on the same
flag as STORAGE_SYSTEM_SNAPSHOT (!BackupSnapshots). When snapshots
are backed up to secondary (the default), the old template creation
flow works. When snapshots stay on primary, the direct path is used.

Also fix checkstyle: remove unused DataObject import in test.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 11:23:08 +05:30
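The SNAPSHOT -> VOLUME interception this commit describes can be sketched as follows. This is a Python stand-in for the Java driver logic; the function name and the string-based store representation are assumptions for illustration, not the Linstor driver's actual API:

```python
# Hypothetical sketch of canCopySnapshotToVolumeCond(): match only
# SNAPSHOT -> VOLUME where both objects sit on the same Linstor primary
# store, so the driver handles the copy itself and
# StorageSystemDataMotionStrategy is never selected.

def can_copy_snapshot_to_volume(src_type, dst_type, src_store, dst_store):
    return (src_type == "SNAPSHOT"
            and dst_type == "VOLUME"
            and src_store is not None
            and src_store == dst_store          # same primary store
            and src_store.startswith("linstor:"))  # illustrative store tag
```

The real driver would consult a condition like this first inside canCopy() and fall back to its existing template/volume handling otherwise.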
James Peru 9764025358 docs: move RFC out of repo per reviewer feedback
@bernardodemarco pointed out that design docs / RFCs go in the project
wiki or as a separate issue rather than into the source tree. The RFC
content has been posted as a comment on the existing tracking issue
#12899 (which is where the design discussion already lives), and the
docs/rfcs/ directory is removed from this PR.
2026-04-29 00:02:59 +03:00
James Peru d80ed16723 test(backup): mock returns no-backing-chain for rsync-failure test
Phase 6 added a hasBackingChain() check before rsync that uses
qemu-img info to detect chained incrementals. The existing
testExecuteWithRsyncFailure test mocks Script.runSimpleBashScriptForExitValue
to return 0 for any command, so the new qemu-img info check
incorrectly evaluates as "has backing chain" and routes the test
through the chain-flatten path instead of rsync — the test then
asserts a failure that never occurs.

Add a clause to the mock that returns 1 (no backing chain) for the
qemu-img info backing-filename probe, so the test continues to
exercise the rsync path it was designed for.
2026-04-28 11:36:53 +03:00
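The mock clause this commit adds can be sketched like so. This is a Python stand-in for the Mockito stub on Script.runSimpleBashScriptForExitValue; the command-matching strings are illustrative assumptions:

```python
# Stubbed exit-value runner mirroring the fix described above: return 1
# ("no backing chain") for the qemu-img backing-filename probe, and 0
# (success) for every other command, so the test keeps exercising the
# rsync path it was written for.

def exit_value_stub(command):
    if "qemu-img info" in command and "backing" in command:
        return 1   # probe reports no backing chain -> rsync path is taken
    return 0       # all other commands "succeed", as the original mock did
```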
Erik Böck e2c13da419
Remove UUID parse from param processing workflow (#13065) 2026-04-28 09:13:05 +02:00
James Peru 49edc7f22c test(backup): smoke tests for incremental NAS backup chain
Adds five new test cases to test_backup_recovery_nas.py covering the
end-to-end behaviour of the incremental NAS backup feature:

  * test_incremental_chain_cadence
      - Sets nas.backup.full.every=3, takes 5 backups, verifies the
        type pattern is FULL, INC, INC, FULL, INC.

  * test_restore_from_incremental
      - FULL + 2 INCs, each with a marker file. Restores from the
        latest INC and verifies all three markers are present
        (i.e. qemu-img convert flattened the chain correctly).

  * test_delete_middle_incremental_repairs_chain
      - Builds FULL, INC1, INC2; deletes INC1 (no force needed);
        restores from the surviving INC2 and verifies that markers
        from FULL, INC1 (which was deleted), and INC2 are all present
        — proving the rebase merged INC1's blocks into INC2.

  * test_refuse_delete_full_with_children
      - Verifies plain delete of a FULL that has children fails, and
        delete with forced=true succeeds and removes the whole chain.

  * test_stopped_vm_falls_back_to_full
      - Sets cadence to 2, takes one backup (FULL), stops the VM,
        triggers another (cadence would say INC). Verifies the second
        backup is recorded as FULL because the agent fell back when
        backup-begin couldn't run on a stopped VM.

All tests restore nas.backup.full.every to 10 in finally blocks.

Refs: apache/cloudstack#12899
2026-04-27 19:25:30 +03:00
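The cadence these tests assert can be modeled with a small sketch (assumed logic, not CloudStack code): a full backup is taken whenever chain_position + 1 reaches nas.backup.full.every, and the position resets on each full:

```python
def backup_types(count, full_every):
    """Return the FULL/INC sequence for `count` consecutive backups when a
    full is taken every `full_every` backups (chain_position resets to 0
    on each full). Illustrative model of the cadence under test."""
    types, pos = [], None
    for _ in range(count):
        if pos is None or pos + 1 >= full_every:
            types.append("FULL")
            pos = 0
        else:
            types.append("INC")
            pos += 1
    return types
```

With full_every=3 and five backups this yields FULL, INC, INC, FULL, INC, matching test_incremental_chain_cadence; full_every=1 makes every backup a full.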
James Peru b8d069e127 feat(backup): cascade-delete + chain repair for NAS incrementals
Adds the delete-with-chain-repair semantics agreed in the RFC review:

  scripts/vm/hypervisor/kvm/nasbackup.sh
    - New '-o rebase' operation: rebases an existing on-NAS qcow2 onto
      a new backing parent. Uses a SAFE rebase (no -u) so the target
      absorbs blocks of the about-to-be-deleted parent before the
      backing pointer is moved up to the grandparent. Writes the new
      backing reference relative to the target's directory so it
      survives mount-point changes.
    - New CLI flags --rebase-target, --rebase-new-backing (both passed
      mount-relative).

  RebaseBackupCommand + LibvirtRebaseBackupCommandWrapper
    - New agent command that wraps the script's rebase operation. The
      provider sends one of these per child that needs re-pointing.

  NASBackupProvider.deleteBackup
    - Now plans the chain repair before touching files via
      computeChainRepair():
        * No chain metadata     -> single-file delete (legacy behaviour)
        * Tail incremental      -> single delete, no rebase
        * Middle incremental    -> rebase immediate child onto our
                                   parent, then delete; shift
                                   chain_position of all later
                                   descendants by -1
        * Full with descendants -> refuse unless forced=true; with
                                   forced=true delete full + every
                                   descendant newest-first
    - Updates parent_backup_id, chain_position metadata in
      backup_details after each rebase so the model in the DB matches
      the on-disk chain.

This implements the cascade-delete behaviour requested in @abh1sar's
review point #7.

Refs: apache/cloudstack#12899
2026-04-27 19:24:02 +03:00
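The four delete cases planned by computeChainRepair() can be sketched as below. This is a hypothetical Python model, not the provider's real signature; `children` here stands for the backup's descendants, oldest first:

```python
def plan_delete(backup, children, forced=False):
    """Model of the chain-repair planning described above. Returns an
    (action, affected) tuple; raises when a FULL with descendants is
    deleted without forced=true."""
    if backup.get("chain_position") is None:
        return ("delete", [])                 # no chain metadata: legacy single-file delete
    if backup["type"] == "INC":
        if not children:
            return ("delete", [])             # tail incremental: no rebase needed
        return ("rebase_then_delete", [children[0]])  # middle: re-point immediate child
    if not children:
        return ("delete", [])                 # full without descendants
    if not forced:
        raise ValueError("FULL backup has descendants; retry with forced=true")
    return ("cascade_delete", list(reversed(children)))  # delete newest-first
```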
James Peru 39303fbf88 feat(backup): restore path follows incremental backing-chain
Two changes that together let an incremental NAS backup be restored
without manual chain assembly:

  scripts/vm/hypervisor/kvm/nasbackup.sh
    - qemu-img rebase now writes a backing-file path that is RELATIVE to
      the new qcow2's directory (e.g. ../<parent-ts>/root.<uuid>.qcow2)
      rather than the absolute path on the current mount point. NAS mount
      points are ephemeral (mktemp -d), so an absolute reference would
      not resolve when the backup is re-mounted at restore time. Relative
      references are resolved by qemu-img against the file's own
      directory, so the chain stays valid no matter where the NAS is
      mounted next.
    - Verifies the parent file exists on the NAS before rebasing.

  LibvirtRestoreBackupCommandWrapper
    - For file-based primary storage (local, NFS-file), the existing
      code rsync'd the source qcow2 to the volume. That copies only the
      differential blocks of an incremental, leaving a volume whose
      backing-file reference points at a path the primary storage host
      doesn't have. Now: detect a backing-chain via qemu-img info JSON
      and flatten via 'qemu-img convert -O qcow2', which follows the
      chain and produces a self-contained qcow2. Full backups continue
      to use rsync (faster, no chain to flatten).
    - The block-storage path (RBD/Linstor) already used qemu-img convert
      via the QemuImg helper, which auto-flattens chains, so that path
      needed no change.

Refs: apache/cloudstack#12899
2026-04-27 19:18:33 +03:00
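The mount-relative backing reference described above can be sketched in a few lines (the layout `<timestamp-dir>/root.<uuid>.qcow2` is taken from the commit message; the helper name is illustrative):

```python
import os

def relative_backing(child_path, parent_path):
    """Compute the backing-file reference relative to the child's own
    directory, e.g. ../<parent-ts>/root.<uuid>.qcow2, so qemu-img can
    resolve the chain no matter where the NAS is mounted next."""
    return os.path.relpath(parent_path, start=os.path.dirname(child_path))
```

Because qemu-img resolves a relative backing path against the qcow2 file's own directory, the chain survives the ephemeral `mktemp -d` mount points.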
James Peru 43e2f7504a feat(backup): on-demand bitmap recreation for incremental NAS backup
CloudStack rebuilds the libvirt domain XML on every VM start, which means
persistent QEMU dirty bitmaps don't survive a stop/start cycle. Rather
than hooking into the VM start lifecycle (intrusive across the
orchestration layer), this commit handles the missing bitmap *lazily* at
the next backup attempt:

  nasbackup.sh
    - When -M incremental is requested, the script first checks
      `virsh checkpoint-list` for the parent bitmap. If absent, it
      recreates the checkpoint on the running domain so libvirt accepts
      the <incremental> reference. The next incremental will be larger
      than usual (it captures all writes since recreate, not since the
      previous incremental) but is correct; subsequent ones return to
      normal size.
    - On recreation, emits BITMAP_RECREATED=<name> on stdout for the
      orchestrator to record.

  BackupAnswer
    + bitmapRecreated field surfaced from the agent.

  LibvirtTakeBackupCommandWrapper
    - Strips BITMAP_RECREATED= line from stdout before size parsing.
    - Sets answer.setBitmapRecreated(...).

  NASBackupChainKeys
    + BITMAP_RECREATED key for backup_details.

  NASBackupProvider
    - When the agent reports a recreated bitmap, persists it under
      backup_details and logs an info-level message so operators can
      correlate larger-than-usual incrementals with VM restarts.

This satisfies the bitmap-loss-on-VM-restart concern from the RFC review
without touching VirtualMachineManager / StartCommand / agent lifecycle.

Refs: apache/cloudstack#12899
2026-04-27 19:10:46 +03:00
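The lazy recreation decision can be sketched as follows. This is an illustrative model, not the real nasbackup.sh control flow; `checkpoints` stands for what `virsh checkpoint-list` would report:

```python
def ensure_parent_bitmap(mode, parent_bitmap, checkpoints):
    """Decide whether the parent bitmap must be recreated before an
    incremental can reference it. Returns (recreated, checkpoints).
    Recreation makes the next incremental larger than usual (all writes
    since the recreate) but still correct."""
    if mode != "incremental" or parent_bitmap in checkpoints:
        return (False, checkpoints)
    # In the real script: recreate the checkpoint on the running domain
    # and emit BITMAP_RECREATED=<name> for the orchestrator to record.
    return (True, checkpoints | {parent_bitmap})
```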
James Peru 1f2aebca36 feat(backup): orchestrate full vs incremental in NAS provider
Adds the Java side of the incremental NAS backup feature:

  TakeBackupCommand
    + mode, bitmapNew, bitmapParent, parentPath fields (null for legacy
      callers — script preserves its existing behaviour when these are
      omitted).

  BackupAnswer
    + bitmapCreated (echoed by the agent on success)
    + incrementalFallback (true when an incremental was requested but the
      agent had to fall back to full because the VM was stopped).

  LibvirtTakeBackupCommandWrapper
    - Forwards the new fields to nasbackup.sh.
    - Strips the new BITMAP_CREATED= / INCREMENTAL_FALLBACK= marker lines
      out of stdout before the existing numeric-suffix size parser runs,
      so the script can keep the same "size as last line(s)" contract.
    - Surfaces both markers on the BackupAnswer.

  NASBackupProvider
    - decideChain(vm) walks backup_details (chain_id, chain_position,
      bitmap_name) for the latest BackedUp backup of the VM and decides:
        * Stopped VM      -> full (libvirt backup-begin needs running QEMU)
        * No prior chain  -> full (chain_position=0)
        * chain_position+1 >= nas.backup.full.every -> new full
        * otherwise       -> incremental, parent=last bitmap
    - Generates timestamp-based bitmap names ("backup-<epoch>") matching
      what the script then registers as the libvirt checkpoint name.
    - persistChainMetadata() writes parent_backup_id, bitmap_name,
      chain_id, chain_position, type into the existing backup_details
      key/value table (per the RFC review — no new columns on backups).
    - Honours the agent's INCREMENTAL_FALLBACK= signal: re-records the
      backup as a full and starts a fresh chain.
    - createBackupObject() now takes a type argument so the BackupVO
      reflects the actual decision instead of always being "FULL".

Refs: apache/cloudstack#12899
2026-04-27 19:07:24 +03:00
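The decideChain(vm) decision table above can be condensed into a short sketch (assumed logic mirroring the bullet list, not the provider's actual Java method):

```python
def decide_chain(vm_running, last_position, full_every):
    """Return the mode for the next backup. `last_position` is the
    chain_position of the latest BackedUp backup, or None when the VM
    has no prior chain."""
    if not vm_running:
        return "full"          # libvirt backup-begin needs a running QEMU
    if last_position is None:
        return "full"          # no prior chain: start at chain_position=0
    if last_position + 1 >= full_every:
        return "full"          # cadence reached: start a new chain
    return "incremental"       # parent = last recorded bitmap
```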
James Peru fbb916b254 feat(backup): nasbackup.sh full+incremental modes via backup-begin
Adds four new optional CLI flags to nasbackup.sh:
  -M|--mode <full|incremental>
  --bitmap-new <name>          (checkpoint to create with this backup)
  --bitmap-parent <name>       (incremental: parent bitmap to read changes since)
  --parent-path <path>         (incremental: parent backup file for rebase)

Behavior:
  - When -M is omitted, behavior is unchanged (legacy full-only, no checkpoint
    created), so existing callers are not affected.
  - With -M full + --bitmap-new, a full backup is taken AND a libvirt
    checkpoint of that name is registered atomically (via backup-begin's
    --checkpointxml), giving the next incremental its starting bitmap.
  - With -M incremental, libvirt's <incremental> element references the
    parent bitmap; only changed blocks are written. After completion,
    qemu-img rebase wires the new file to its parent so the chain on the
    NAS is self-describing for restore.
  - Stopped VMs cannot use backup-begin; if -M incremental is requested
    while VM is stopped, the script falls back to a full and emits
    INCREMENTAL_FALLBACK= on stderr so the orchestrator can record it
    correctly in the chain.
  - The script echoes BITMAP_CREATED=<name> on success so the Java caller
    can store it under backup_details (NASBackupChainKeys.BITMAP_NAME).

Works across local file, NFS-file, and LINSTOR primary storage. Ceph RBD
running-VM support is a pre-existing limitation of this script, not
affected by this change.

Refs: apache/cloudstack#12899
2026-04-27 18:53:20 +03:00
James Peru 1981469099 feat(backup): add chain-metadata keys + nas.backup.full.every config
NASBackupChainKeys defines the keys this provider stores under the
existing backup_details kv table (parent_backup_id, bitmap_name,
chain_id, chain_position, type). This keeps the backups table
provider-agnostic per the RFC review.

nas.backup.full.every is a zone-scoped ConfigKey that controls how
often a full backup is taken; the remaining backups in the cycle are
incremental. Counts backups (not days), so it works for hourly,
daily, and ad-hoc schedules. Default 10. Set to 1 to disable
incrementals (every backup is full).

Refs: apache/cloudstack#12899
2026-04-27 18:49:38 +03:00
James Peru f2a9202d74 docs: add RFC for incremental NAS backup support (KVM)
Adds the design document for incremental NAS backups using QEMU dirty
bitmaps and libvirt's backup-begin API. Reduces daily backup storage
80-95% for large VMs.

Refs: apache/cloudstack#12899
2026-04-27 18:46:22 +03:00
Henrique Sato 6f4445c5c1
Add offering preset variables for `Network` and `VPC` Quota tariffs (#11810)
* Add offering preset variable to Network and VPC tariffs

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>

* Add tests

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
2026-04-27 09:36:37 -03:00
Suresh Kumar Anaparti ffebe8eaa6
Fix bulk power state query missing VM lifecycle state field (#13027)
* Fix bulk power state query missing VM lifecycle state field

The IdsPowerStateSelectSearch partial select did not include the VM
lifecycle state, causing isPowerStateInSyncWithInstanceState to always
return true when state was null. This prevented retry of failed
StopCommands on subsequent ping cycles.

* Add defensive check for instance host ID to prevent NPE

Co-authored-by: Sachin R Doddaguni <s_rudrappadoddagu@apple.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
2026-04-27 15:38:52 +05:30
dahn 0b169920f3
make dh group 31 default, support 22-24+31 (#12764) 2026-04-27 13:43:58 +05:30
Suresh Kumar Anaparti 856d83a15e
Merge branch '4.22' 2026-04-23 23:53:24 +05:30
dahn 64ac0822b4
merge conflict fixes (#13046)
* merge conflict fixes

* fix pre-commit issue

Co-authored-by: Daan Hoogland <dahn@apache.org>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
2026-04-23 23:46:54 +05:30
Abhisar Sinha a127a26ebd Fix Revert Instance to Snapshot with custom service offering (#12885)
* Fix revertVM with custom svc offering
2026-04-20 08:37:53 +02:00
Fabricio Duarte 89d915493f Fix NPE on external/unmanaged instance import using custom offerings (#12884)
* Fix NPE on external/unmanaged instance import using custom offerings
2026-04-20 08:37:21 +02:00
Nicolas Vazquez be89e6f7c3
[KVM] Reorder migration logs to prevent populating agent logs on migrations (#12883)
* Move logs for values of the migration settings out of the loop

* Apply suggestions from code review

Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>

---------

Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
2026-04-17 23:39:19 -03:00
Henrique Sato 3166e64891
Add support for new variables to the GUI whitelabel runtime system (#12760)
* Add support for new variables to the GUI whitelabel runtime system

* Address review
2026-04-17 10:59:50 -03:00
Wei Zhou f820d0125d
fix end of files and codespell errors 2026-04-17 13:58:21 +02:00
Wei Zhou 6c1437b7dd
fix end of file schema-42200to42210.sql 2026-04-17 13:56:17 +02:00
Daniil Zhyliaiev 4df32ae79f
fix: NsxResource.executeRequest DeleteNsxNatRuleCommand comparison bug (#12833)
Fixes an issue in NsxResource.executeRequest where Network.Service
comparison failed when DeleteNsxNatRuleCommand was executed in a
different process. Due to serialization/deserialization, the
deserialized Network.Service instance was not equal to the static
instances Network.Service.StaticNat and Network.Service.PortForwarding,
causing the comparison to always return false.

Co-authored-by: Andrey Volchkov <avolchkov@playtika.com>
(cherry picked from commit 30dd234b00)
2026-04-17 04:53:36 +05:30
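The comparison failure this commit fixes can be demonstrated with a minimal sketch. The classes below are illustrative stand-ins, not the NSX plugin's actual types; pickle plays the role of the cross-process serialization:

```python
import pickle

class Service:
    """Stand-in for a Network.Service-style class exposing shared
    'static instance' constants."""
    def __init__(self, name):
        self.name = name

STATIC_NAT = Service("StaticNat")

def is_static_nat_buggy(svc):
    # Identity comparison against the shared constant: breaks once the
    # command has been serialized to another process and deserialized,
    # because deserialization builds a brand-new object.
    return svc is STATIC_NAT

def is_static_nat_fixed(svc):
    # Compare a stable field instead of object identity.
    return svc.name == STATIC_NAT.name

roundtripped = pickle.loads(pickle.dumps(STATIC_NAT))
```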
Suresh Kumar Anaparti 2d6280b9da
Merge branch '4.22' 2026-04-17 04:35:25 +05:30
Suresh Kumar Anaparti 13a2c7793c
Merge branch '4.20' into 4.22 2026-04-17 03:12:33 +05:30