* FR-248: Instance lease, WIP commit
* insert lease expiry into the DB and use it to filter expiring VMs, add AsyncJobManager
* Add leaseDuration and leaseExpiryAction in Service offering create flow
* Update listVM cmd to allow listing only leased instances
* Add methods to fetch instances whose lease is expiring in the coming days
* Changes included:
config key set up and configured for the alert email
lease options in the create and update VM screens
handle delete protection, edit VM, create VM
validated stop and destroy, and delete protection
* Update UI screens for leased properties coming from config and service offering
* use a global lock before running the scheduler (see the sketch after this commit list)
* Unit tests
* Flow changes done in UI based on discussion
* Include view changes in schema upgrade files and use feature in various UI elements
* Added integration test for VM deployment, UI enhancements for the user persona, and bug fixes
* validate integration tests, minor UI changes and log messages
* fix build: move ConfigKey from setup to the test itself
* Disable testAlert to unblock build and trim whitespace in integration tests
* Address review comments
* Minor changes in EditVM screen
* Use ExecutorService instead of Timer and TimerTask
* Additional review comments
* Incorporate following changes:
1. Execute the lease action once on the instance
2. Cancel the lease on an instance when the feature is disabled
3. Emit relevant events when a lease is disabled, cancelled, or executed
4. Disable associating a lease after deployment
5. UI elements and flow changes
6. Changes based on feedback from the demo
* Handle PR review comments
* address review comments
* move instance.lease.enabled config to VMLeaseManager interface
* fix bug in edit instance flow and reject API requests with invalid values
* maximum allowed lease duration is 100 years
* log instance IDs for expired instances
* Fix config validation for value range and code coverage improvement
* fix lease expiry request failures in async execution
* don't use forced: true for StopVmCmd
* Update server/src/main/java/org/apache/cloudstack/vm/lease/VMLeaseManager.java
Co-authored-by: Vishesh <vishesh92@gmail.com>
* handle review comments
---------
Co-authored-by: Rohit Yadav <rohityadav89@gmail.com>
Co-authored-by: Vishesh <vishesh92@gmail.com>
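A minimal sketch of the lease-expiry scan described in the commits above (a global lock before the scheduler run, ExecutorService in place of Timer/TimerTask). It uses plain java.util.concurrent with a local lock standing in for CloudStack's cluster-wide GlobalLock, and findVmsWithExpiredLease() is a hypothetical placeholder for the real DAO query, so treat it as an illustration rather than the actual VMLeaseManager code.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the lease-expiry scan loop; names like
// findVmsWithExpiredLease() are hypothetical stand-ins for the real DAO calls.
public class LeaseExpirySchedulerSketch {

    // In CloudStack this would be a cluster-wide lock so only one management
    // server runs the scan; a local lock keeps the sketch self-contained.
    private final ReentrantLock schedulerLock = new ReentrantLock();

    // ScheduledExecutorService replaces the earlier Timer/TimerTask approach.
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    public void start(long intervalSeconds) {
        executor.scheduleWithFixedDelay(this::scanExpiredLeases, intervalSeconds,
                intervalSeconds, TimeUnit.SECONDS);
    }

    private void scanExpiredLeases() {
        if (!schedulerLock.tryLock()) {
            return; // another scan is already in progress
        }
        try {
            List<Long> expiredVmIds = findVmsWithExpiredLease();
            for (Long vmId : expiredVmIds) {
                // Execute the configured lease expiry action (stop or destroy)
                // exactly once; the real code submits this as an async job.
                System.out.println("Lease expired for VM id " + vmId);
            }
        } finally {
            schedulerLock.unlock();
        }
    }

    // Hypothetical placeholder for the query that filters VMs by lease expiry date.
    private List<Long> findVmsWithExpiredLease() {
        return List.of();
    }
}
```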
* Multiple nics support on Ubuntu template
* Multiple nics support on Ubuntu template
* support allocating an IP to the NIC when the VM is added to another network, with no delay
* Add option to select DNS or VR IP as resolver on VPC creation
* Add API param and UI to select option
* Add a column on the vpc table and pass the value in the databags so CsDhcp.py can set the resolver accordingly
* Externalize the CKS Configuration, so that end users can tweak the configuration before deploying the cluster
* Add new directory to c8 packaging for CKS config
* Remove k8s configuration from resources and make it configurable
* Revert "Remove k8s configuration from resources and make it configurable"
This reverts commit d5997033ebe4ba559e6478a64578b894f8e7d3db.
* copy conf to mgmt server and consume them from there
* Remove node from cluster
* Add missing /opt/bin directory required by external nodes
* Login to a specific Project view
* add indents
* Fix CKS HA clusters
* Fix build
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
A separate service account will be created and added to the project, if one
does not already exist, when a Kubernetes cluster is deployed in a project.
This account will have a role with limited API access.
Clean up clusters on owner account cleanup, and delete the service account
if needed.
When the owner account of k8s clusters is deleted, its node VMs get
expunged but the cluster entries remain in the DB. This fixes the issue by
cleaning up all clusters for the deleted account.
The project k8s service account will be deleted on account cleanup or when
no active k8s cluster remains (a rough sketch follows below).
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
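A rough sketch of the create-if-absent and cleanup flow described above. The account record, registry map, and names used here are hypothetical placeholders to illustrate the behaviour, not CloudStack's actual Kubernetes service classes.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: the Account type and project-account registry stand in
// for CloudStack's account management and DAO layers.
public class ProjectKubernetesServiceAccountSketch {

    record Account(String name, String role) {}

    private final Map<Long, Account> serviceAccountsByProject = new ConcurrentHashMap<>();

    // Called when a Kubernetes cluster is deployed in a project: reuse the
    // project's service account if it exists, otherwise create one with a
    // role that grants only limited API access.
    public Account ensureServiceAccount(long projectId) {
        return serviceAccountsByProject.computeIfAbsent(projectId,
                id -> new Account("project-k8s-svc-" + id, "limited-api-role"));
    }

    // Called on account/project cleanup or when the last active cluster in
    // the project is removed.
    public void cleanupServiceAccount(long projectId, long activeClustersInProject) {
        if (activeClustersInProject == 0) {
            Optional.ofNullable(serviceAccountsByProject.remove(projectId))
                    .ifPresent(acc -> System.out.println("Removed service account " + acc.name()));
        }
    }
}
```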
In Apache CloudStack, while using the listTemplates and listIsos APIs, Domain Admins and Resource Admins can retrieve templates and ISOs outside their intended scope.
Co-authored-by: bernardodemarco <bernardomg2004@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* kvm: fix vm deployment from RAW template
* Update plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
---------
Co-authored-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, whether or not they have a storage access group.
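The connectivity rule above can be read as a simple predicate. The sketch below is only an illustration of that rule, not the actual CloudStack implementation.

```java
import java.util.Set;

// Illustration of the storage access group (SAG) connectivity rule described above.
public class StorageAccessGroupRule {

    // A host may connect to a pool when the pool has no SAGs (open to all hosts),
    // or when host and pool share at least one SAG.
    static boolean canConnect(Set<String> hostGroups, Set<String> poolGroups) {
        if (poolGroups.isEmpty()) {
            return true;
        }
        return poolGroups.stream().anyMatch(hostGroups::contains);
    }

    public static void main(String[] args) {
        System.out.println(canConnect(Set.of("groupA"), Set.of()));          // true: pool has no SAG
        System.out.println(canConnect(Set.of("groupA"), Set.of("groupA")));  // true: shared SAG
        System.out.println(canConnect(Set.of(), Set.of("groupA")));          // false: pool restricted
    }
}
```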
* VMware - Ignore disk not found error on cleanup when the VM disk doesn't exist
* VMware - Retry powerOn on lock issues
* addressed comments
* Update CPVM reboot tests - wait for the agent to disconnect and come back Up
* Retry moveDatastoreFile when there is any file access issue while creating a volume from a snapshot
* Update the full clone flag when restoring a VM using a root disk offering larger than the template size
* refactored (mainly for diskInfo, which caused an NPE in some cases)
* Retry moveDatastoreFile when there is any file access issue