* Host HA code improvements
* Fix to avoid cancelling VM HA items when Host HA is enabled and an inspection is in progress, and some code improvements
- When a Host HA inspection is in progress, the investigator reports the host status as Up, which cancels the VM HA items
- Don't cancel the VM HA items; instead, reschedule them to try again later
* Changes to consider the Recovered/Available Host HA states, along with the agent connection status, to determine whether a Host HA inspection is in progress, and some code improvements
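A minimal sketch of the reschedule-instead-of-cancel idea; the helper names (hostHaInspectionInProgress, rescheduleVmHaWork, cancelVmHaWork) are illustrative, not taken from the PR:

```java
// Hypothetical sketch; HaWorkVO/HostVO are CloudStack types, the helpers are assumed.
private void processVmHaWork(HaWorkVO work, HostVO host) {
    if (hostHaInspectionInProgress(host)) {
        // The investigator can report "Up" while Host HA is still inspecting,
        // so don't cancel the VM HA item; retry after the inspection settles.
        rescheduleVmHaWork(work, RETRY_DELAY_SECONDS);
        return;
    }
    // Inspection finished and the host is genuinely Up: safe to cancel.
    cancelVmHaWork(work);
}
```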
Adds @Mock injection for BackupDetailsDao so NASBackupProvider's
backupDetailsDao field is wired during testDeleteBackup and
takeBackupSuccessfully, fixing the NPE flagged by @harikrishna-patnala.
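A minimal sketch of the wiring this adds, assuming a standard Mockito runner setup (the test class body shown here is illustrative):

```java
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class NASBackupProviderTest {
    @Mock
    BackupDetailsDao backupDetailsDao; // previously unset, so provider calls hit an NPE

    @InjectMocks
    NASBackupProvider backupProvider; // Mockito injects the mock into this field
}
```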
* Refactoring Allocator classes
* Break the random and firstfit allocators into smaller methods
* Add unit tests for the random and firstfit allocators
* Move random allocator from cloud-plugins to cloud-server
* Add a BaseAllocator abstract class to factor out duplicated code (see the sketch after this list)
* Add missing license
* Add missing license to unit test file
* Remove host allocator random dependency
* Change exception message on smoke tests
* Remove conditional as it was never actually reached in the original flow
* Fix tests
* Fix flipped parameters
* Fix NPE while listing hosts for migration when suitableHosts is null
* Remove unnecessary stubbings
* Fix checkstyle
* Remove unnecessary file
* Rename exception error messages
* Apply suggestions from code review
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
* Rename UserVmDetailVO references to VMInstanceDetailVO
* Remove unused imports
* Add new line at EOF
* Remove unnecessary random allocator pom
* Fix GPU allocation mistake
* Fix failing tests
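A minimal sketch of the BaseAllocator extraction mentioned above; the method name and filtering details are assumptions, not the PR's actual code:

```java
// Hypothetical sketch: logic previously duplicated in the random and
// firstfit allocators, hoisted into a shared abstract parent.
public abstract class BaseAllocator extends AdapterBase implements HostAllocator {

    // Drop hosts the deployment planner asked us to avoid; each concrete
    // allocator then applies its own ordering (random shuffle vs. first fit).
    protected List<HostVO> filterAvoidedHosts(List<HostVO> candidates, ExcludeList avoid) {
        List<HostVO> suitable = new ArrayList<>();
        for (HostVO host : candidates) {
            if (avoid == null || !avoid.shouldAvoid(host)) {
                suitable.add(host);
            }
        }
        return suitable;
    }
}
```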
---------
Co-authored-by: Fabricio Duarte <fabricio.duarte@scclouds.com.br>
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
* Linstor: fix create volume from snapshot on primary storage
When creating a volume from a snapshot on Linstor primary storage
(with lin.backup.snapshots=false), the operation fails with:
"Only the following image types are currently supported: VHD, OVA,
QCOW2, RAW (for PowerFlex and FiberChannel)"
Root cause: the Linstor driver does not handle SNAPSHOT -> VOLUME in
its canCopy()/copyAsync() methods. This causes DataMotionServiceImpl
to fall through to StorageSystemDataMotionStrategy (selected because
Linstor advertises STORAGE_SYSTEM_SNAPSHOT=true). That strategy's
verifyFormatWithPoolType() rejects RAW format for Linstor pools,
since RAW is only allowed for PowerFlex and FiberChannel.
Additionally, VolumeOrchestrator.createVolumeFromSnapshot() attempts
to back up the snapshot to secondary storage when the storage plugin
does not advertise CAN_CREATE_TEMPLATE_FROM_SNAPSHOT. This backup
fails because the snapshot only exists on Linstor primary storage.
Fix:
- Add CAN_CREATE_TEMPLATE_FROM_SNAPSHOT capability so the
orchestrator skips the backup-to-secondary path
- Add canCopySnapshotToVolumeCond() to match SNAPSHOT -> VOLUME
when both are on the same Linstor primary store
- Wire it into canCopy() to intercept at DataMotionServiceImpl
before strategy selection, bypassing StorageSystemDataMotionStrategy
- Implement copySnapshotToVolume() which delegates to the existing
createResourceFromSnapshot() for native Linstor snapshot restore
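A hedged sketch of the interception, using the generic DataObject API; the surrounding driver check (canCopyOtherwise) is an assumed name:

```java
@Override
public boolean canCopy(DataObject srcData, DataObject destData) {
    if (canCopySnapshotToVolumeCond(srcData, destData)) {
        // Claim the copy here so DataMotionServiceImpl never falls through
        // to StorageSystemDataMotionStrategy for this case.
        return true;
    }
    return canCopyOtherwise(srcData, destData); // the driver's pre-existing checks (assumed name)
}

private boolean canCopySnapshotToVolumeCond(DataObject srcData, DataObject destData) {
    return srcData.getType() == DataObjectType.SNAPSHOT
            && destData.getType() == DataObjectType.VOLUME
            && srcData.getDataStore().getRole() == DataStoreRole.Primary
            && srcData.getDataStore().getId() == destData.getDataStore().getId();
}
```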
This follows the same pattern used by the StorPool plugin, which
handles SNAPSHOT -> VOLUME directly in its driver rather than going
through StorageSystemDataMotionStrategy.
Tested on CloudStack 4.22 with Linstor LVM_THIN storage, creating
a volume from a 1TB CNPG Postgres database snapshot. Volume creates
successfully with correct path and deletes cleanly.
* Let CloudRuntimeException propagate from copySnapshotToVolume
Remove try/catch in copySnapshotToVolume so that CloudRuntimeException
from createResourceFromSnapshot propagates to the caller, ensuring
CloudStack properly notices and reports the failure.
* Fix CAN_CREATE_TEMPLATE_FROM_SNAPSHOT breaking template creation
Setting CAN_CREATE_TEMPLATE_FROM_SNAPSHOT unconditionally to true
caused createTemplate from snapshot to take the StorPool-specific
code path in TemplateManagerImpl, which sends a CopyCommand to a
system VM that Linstor cannot handle.
Fix: make CAN_CREATE_TEMPLATE_FROM_SNAPSHOT conditional on the same
flag as STORAGE_SYSTEM_SNAPSHOT (!BackupSnapshots). When snapshots
are backed up to secondary (the default), the old template creation
flow works. When snapshots stay on primary, the direct path is used.
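A hedged illustration of the conditional capability; DataStoreCapabilities is the standard enum, while the BackupSnapshots handle for lin.backup.snapshots is an assumed name:

```java
Map<String, String> caps = new HashMap<>();
// lin.backup.snapshots=false means snapshots stay on primary storage
boolean snapshotsStayOnPrimary = !LinstorConfigurationManager.BackupSnapshots.value();
caps.put(DataStoreCapabilities.STORAGE_SYSTEM_SNAPSHOT.toString(),
        Boolean.toString(snapshotsStayOnPrimary));
// Advertise the direct path only when snapshots actually live on primary
caps.put(DataStoreCapabilities.CAN_CREATE_TEMPLATE_FROM_SNAPSHOT.toString(),
        Boolean.toString(snapshotsStayOnPrimary));
```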
Also fix checkstyle: remove unused DataObject import in test.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Phase 6 added a hasBackingChain() check before rsync that uses
qemu-img info to detect chained incrementals. The existing
testExecuteWithRsyncFailure test mocks Script.runSimpleBashScriptForExitValue
to return 0 for any command, so the new qemu-img info check
incorrectly evaluates as "has backing chain" and routes the test
through the chain-flatten path instead of rsync — the test then
asserts a failure that never occurs.
Add a clause to the mock that returns 1 (no backing chain) for the
qemu-img info backing-filename probe, so the test continues to
exercise the rsync path it was designed for.
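A hedged sketch of the added clause, assuming the test mocks Script via Mockito's MockedStatic (the exact matcher strings are illustrative):

```java
try (MockedStatic<Script> script = Mockito.mockStatic(Script.class)) {
    script.when(() -> Script.runSimpleBashScriptForExitValue(Mockito.anyString()))
            .thenAnswer(invocation -> {
                String cmd = invocation.getArgument(0);
                // The backing-filename probe must exit non-zero ("no backing
                // chain") so the test keeps exercising the rsync path.
                if (cmd.contains("qemu-img info") && cmd.contains("backing")) {
                    return 1;
                }
                return 0; // every other command still "succeeds"
            });
    // ... run the wrapper under test and assert the rsync failure ...
}
```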
Adds the delete-with-chain-repair semantics agreed in the RFC review:
scripts/vm/hypervisor/kvm/nasbackup.sh
- New '-o rebase' operation: rebases an existing on-NAS qcow2 onto
a new backing parent. Uses a SAFE rebase (no -u) so the target
absorbs blocks of the about-to-be-deleted parent before the
backing pointer is moved up to the grandparent. Writes the new
backing reference relative to the target's directory so it
survives mount-point changes.
- New CLI flags --rebase-target, --rebase-new-backing (both passed
mount-relative).
RebaseBackupCommand + LibvirtRebaseBackupCommandWrapper
- New agent command that wraps the script's rebase operation. The
provider sends one of these per child that needs re-pointing.
NASBackupProvider.deleteBackup
- Now plans the chain repair before touching files via
computeChainRepair() (sketched below):
* No chain metadata -> single-file delete (legacy behaviour)
* Tail incremental -> single delete, no rebase
* Middle incremental -> rebase immediate child onto our parent, then
  delete; shift chain_position of all later descendants by -1
* Full with descendants -> refuse unless forced=true; with forced=true
  delete full + every descendant newest-first
- Updates parent_backup_id, chain_position metadata in
backup_details after each rebase so the model in the DB matches
the on-disk chain.
This implements the cascade-delete behaviour requested in @abh1sar's
review point #7.
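A minimal sketch of the planning step, assuming an illustrative enum and helpers (hasChainMetadata and isIncremental are not the PR's actual names):

```java
enum RepairKind { SINGLE_DELETE, REBASE_CHILD_THEN_DELETE, CASCADE_DELETE, REFUSE }

RepairKind computeChainRepair(Backup target, List<Backup> descendants, boolean forced) {
    if (!hasChainMetadata(target)) {
        return RepairKind.SINGLE_DELETE;            // legacy single-file backup
    }
    if (descendants.isEmpty()) {
        return RepairKind.SINGLE_DELETE;            // tail incremental, nothing to re-point
    }
    if (isIncremental(target)) {
        // Middle of the chain: rebase the immediate child onto our parent,
        // delete, then shift chain_position of later descendants by -1.
        return RepairKind.REBASE_CHILD_THEN_DELETE;
    }
    // Full backup with descendants: destructive, so require forced=true.
    return forced ? RepairKind.CASCADE_DELETE : RepairKind.REFUSE;
}
```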
Refs: apache/cloudstack#12899
Two changes that together let an incremental NAS backup be restored
without manual chain assembly:
scripts/vm/hypervisor/kvm/nasbackup.sh
- qemu-img rebase now writes a backing-file path that is RELATIVE to
the new qcow2's directory (e.g. ../<parent-ts>/root.<uuid>.qcow2)
rather than the absolute path on the current mount point. NAS mount
points are ephemeral (mktemp -d), so an absolute reference would
not resolve when the backup is re-mounted at restore time. Relative
references are resolved by qemu-img against the file's own
directory, so the chain stays valid no matter where the NAS is
mounted next.
- Verifies the parent file exists on the NAS before rebasing.
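A small worked example of the relative reference (paths illustrative); qemu-img resolves it against the child's own directory, so it stays valid wherever the NAS is mounted:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

Path child  = Paths.get("/tmp/nas.mnt.Xy12/1700000100/root.<uuid>.qcow2");
Path parent = Paths.get("/tmp/nas.mnt.Xy12/1700000000/root.<uuid>.qcow2");
// Relativize against the child's directory, not the ephemeral mount root:
String backingRef = child.getParent().relativize(parent).toString();
// -> "../1700000000/root.<uuid>.qcow2"
```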
LibvirtRestoreBackupCommandWrapper
- For file-based primary storage (local, NFS-file), the existing
code rsync'd the source qcow2 to the volume. That copies only the
differential blocks of an incremental, leaving a volume whose
backing-file reference points at a path the primary storage host
doesn't have. Now: detect a backing-chain via qemu-img info JSON
and flatten via 'qemu-img convert -O qcow2', which follows the
chain and produces a self-contained qcow2. Full backups continue
to use rsync (faster, no chain to flatten).
- The block-storage path (RBD/Linstor) already used qemu-img convert
via the QemuImg helper, which auto-flattens chains, so that path
needed no change.
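A hedged sketch of the file-storage restore decision; the process plumbing is illustrative, but "backing-filename" is the key qemu-img info emits in its JSON output for chained images:

```java
boolean hasBackingChain(String qcow2) throws IOException, InterruptedException {
    Process p = new ProcessBuilder("qemu-img", "info", "--output=json", qcow2)
            .redirectErrorStream(true).start();
    String json = new String(p.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
    p.waitFor();
    return json.contains("\"backing-filename\""); // present only for chained images
}

void restoreToFileStorage(String src, String dst) throws IOException, InterruptedException {
    if (hasBackingChain(src)) {
        // convert follows the backing chain and writes a self-contained qcow2
        new ProcessBuilder("qemu-img", "convert", "-O", "qcow2", src, dst).start().waitFor();
    } else {
        // full backup: plain copy is faster, there is no chain to flatten
        new ProcessBuilder("rsync", "-a", src, dst).start().waitFor();
    }
}
```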
Refs: apache/cloudstack#12899
CloudStack rebuilds the libvirt domain XML on every VM start, which means
persistent QEMU dirty bitmaps don't survive a stop/start cycle. Rather
than hooking into the VM start lifecycle (intrusive across the
orchestration layer), this commit handles the missing bitmap *lazily* at
the next backup attempt:
nasbackup.sh
- When -M incremental is requested, the script first checks
`virsh checkpoint-list` for the parent bitmap. If absent, it
recreates the checkpoint on the running domain so libvirt accepts
the <incremental> reference. The next incremental will be larger
than usual (it captures all writes since recreate, not since the
previous incremental) but is correct; subsequent ones return to
normal size.
- On recreation, emits BITMAP_RECREATED=<name> on stdout for the
orchestrator to record.
BackupAnswer
+ bitmapRecreated field surfaced from the agent.
LibvirtTakeBackupCommandWrapper
- Strips BITMAP_RECREATED= line from stdout before size parsing.
- Sets answer.setBitmapRecreated(...).
NASBackupChainKeys
+ BITMAP_RECREATED key for backup_details.
NASBackupProvider
- When the agent reports a recreated bitmap, persists it under
backup_details and logs an info-level message so operators can
correlate larger-than-usual incrementals with VM restarts.
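A hedged sketch of the provider-side handling; the DAO call shape and log wording are assumptions:

```java
if (answer.getBitmapRecreated() != null) {
    backupDetailsDao.addDetail(backup.getId(), NASBackupChainKeys.BITMAP_RECREATED,
            answer.getBitmapRecreated(), true);
    // Lets operators correlate a larger-than-usual incremental with a VM restart.
    logger.info("Checkpoint bitmap [{}] was recreated for VM [{}]; the next " +
            "incremental covers all writes since the restart.",
            answer.getBitmapRecreated(), vm.getUuid());
}
```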
This satisfies the bitmap-loss-on-VM-restart concern from the RFC review
without touching VirtualMachineManager / StartCommand / agent lifecycle.
Refs: apache/cloudstack#12899
Adds the Java side of the incremental NAS backup feature:
TakeBackupCommand
+ mode, bitmapNew, bitmapParent, parentPath fields (null for legacy
callers — script preserves its existing behaviour when these are
omitted).
BackupAnswer
+ bitmapCreated (echoed by the agent on success)
+ incrementalFallback (true when an incremental was requested but the
agent had to fall back to full because the VM was stopped).
LibvirtTakeBackupCommandWrapper
- Forwards the new fields to nasbackup.sh.
- Strips the new BITMAP_CREATED= / INCREMENTAL_FALLBACK= marker lines
out of stdout before the existing numeric-suffix size parser runs,
so the script can keep the same "size as last line(s)" contract.
- Surfaces both markers on the BackupAnswer.
NASBackupProvider
- decideChain(vm) walks backup_details (chain_id, chain_position,
bitmap_name) for the latest BackedUp backup of the VM and decides
(sketched after this list):
* Stopped VM -> full (libvirt backup-begin needs running QEMU)
* No prior chain -> full (chain_position=0)
* chain_position+1 >= nas.backup.full.every -> new full
* otherwise -> incremental, parent=last bitmap
- Generates timestamp-based bitmap names ("backup-<epoch>") matching
what the script then registers as the libvirt checkpoint name.
- persistChainMetadata() writes parent_backup_id, bitmap_name,
chain_id, chain_position, type into the existing backup_details
key/value table (per the RFC review — no new columns on backups).
- Honours the agent's INCREMENTAL_FALLBACK= signal: re-records the
backup as a full and starts a fresh chain.
- createBackupObject() now takes a type argument so the BackupVO
reflects the actual decision instead of always being "FULL".
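A minimal sketch of the decideChain() logic above; BackupDecision and the lookup helpers are illustrative names:

```java
BackupDecision decideChain(VirtualMachine vm) {
    if (vm.getState() != VirtualMachine.State.Running) {
        return BackupDecision.full();       // backup-begin needs a running QEMU
    }
    BackupVO last = findLatestBackedUpBackup(vm.getId()); // assumed helper
    if (last == null) {
        return BackupDecision.full();       // no prior chain, chain_position=0
    }
    int fullEvery = NasBackupFullEvery.valueIn(vm.getDataCenterId()); // zone-scoped ConfigKey
    if (chainPosition(last) + 1 >= fullEvery) {
        return BackupDecision.full();       // cycle complete, start a new chain
    }
    // Continue the chain: the parent bitmap is the previous backup's bitmap_name.
    return BackupDecision.incremental(bitmapName(last));
}
```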
Refs: apache/cloudstack#12899
NASBackupChainKeys defines the keys this provider stores under the
existing backup_details kv table (parent_backup_id, bitmap_name,
chain_id, chain_position, type). This keeps the backups table
provider-agnostic per the RFC review.
nas.backup.full.every is a zone-scoped ConfigKey that controls how
often a full backup is taken; the remaining backups in the cycle are
incremental. Counts backups (not days), so it works for hourly,
daily, and ad-hoc schedules. Default 10. Set to 1 to disable
incrementals (every backup is full).
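A minimal sketch of the setting as a standard ConfigKey (the category and dynamic flag are assumptions):

```java
public static final ConfigKey<Integer> NasBackupFullEvery = new ConfigKey<>(
        "Advanced", Integer.class, "nas.backup.full.every", "10",
        "Take a full NAS backup every N backups; the remaining backups in the "
        + "cycle are incremental. Set to 1 to disable incrementals.",
        true, ConfigKey.Scope.Zone);
```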
Refs: apache/cloudstack#12899
* Move the logging of the migration settings' values out of the loop
* Apply suggestions from code review
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Fixes an issue in NsxResource.executeRequest where Network.Service
comparison failed when DeleteNsxNatRuleCommand was executed in a
different process. Due to serialization/deserialization, the
deserialized Network.Service instance was not equal to the static
instances Network.Service.StaticNat and Network.Service.PortForwarding,
causing the comparison to always return false.
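A hedged illustration of the failure mode (the accessor name is assumed): after the command crosses process boundaries, the deserialized Service is a new object, so identity comparison fails while comparing by name still works:

```java
Network.Service svc = cmd.getService(); // deserialized in the other process (assumed accessor)

boolean byIdentity = (svc == Network.Service.StaticNat);      // false after deserialization
boolean byName = Network.Service.StaticNat.getName()
        .equals(svc.getName());                               // true, stable across processes
```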
Co-authored-by: Andrey Volchkov <avolchkov@playtika.com>
(cherry picked from commit 30dd234b00)
* initial attempt at network.loadbalancer.haproxy.idle.timeout implementation
* implement test cases
* move idleTimeout configuration test to its own test case
The list API returns a `cursor` field when more pages are available. The previous implementation only
fetched the first page and ignored pagination.
This change updates the list retrieval flow to:
- follow the `cursor` chain until no further pages exist
- accumulate items from all pages
- return a single merged result to the caller
This ensures that list operations return the complete dataset rather than just
the first page.
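A minimal sketch of the cursor-following loop; the client and page types are illustrative, not the actual API bindings:

```java
List<Item> listAll() {
    List<Item> all = new ArrayList<>();
    String cursor = null;
    do {
        Page page = client.list(cursor);  // first request passes no cursor
        all.addAll(page.getItems());
        cursor = page.getCursor();        // null or empty on the last page
    } while (cursor != null && !cursor.isEmpty());
    return all; // the complete dataset, merged across pages
}
```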
Co-authored-by: Andrey Volchkov <avolchkov@playtika.com>
* kvm: fix wrong CheckVirtualMachineAnswer when vm does not exist
* kvm: add LibvirtCheckVirtualMachineCommandWrapperTest
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Fix domain parsing for GPU
* Add Display controller to GPU class check
This adds support for the AMD Instinct MI2xx accelerator cards in the discovery script.
Co-authored-by: Piet Braat <piet@phiea.nl>