* keypair constants added in ApiConstants
* names parameter added
* findByNames method added in DAO
* change in impl to find and reset multiple keys
* findByNames method implemented
* log the public keys, check whether the given SSH keys exist
* new ArrayList<>
* SQL IN toArray
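A minimal sketch of what the findByNames DAO method with a SQL IN condition could look like, assuming CloudStack's GenericDaoBase/SearchBuilder idiom (VO accessors and scoping columns illustrative):
```java
// Hedged sketch: findByNames in the SSH keypair DAO, passing the names as an
// array so the SearchCriteria expands them into a SQL IN (...) clause.
public List<SSHKeyPairVO> findByNames(long accountId, long domainId, List<String> names) {
    SearchBuilder<SSHKeyPairVO> sb = createSearchBuilder();
    sb.and("accountId", sb.entity().getAccountId(), SearchCriteria.Op.EQ);
    sb.and("domainId", sb.entity().getDomainId(), SearchCriteria.Op.EQ);
    sb.and("name", sb.entity().getName(), SearchCriteria.Op.IN);
    sb.done();

    SearchCriteria<SSHKeyPairVO> sc = sb.create();
    sc.setParameters("accountId", accountId);
    sc.setParameters("domainId", domainId);
    sc.setParameters("name", names.toArray());  // the "SQL IN toArray" commit above
    return listBy(sc);
}
```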
* keypair
* null pointer exception solved with + concatenation
* null pointer exception solved with + concatenation
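The + concatenation fix in the two commits above leans on Java's string conversion of null; a tiny illustration (variable name hypothetical):
```java
Object keypairNames = null;                    // hypothetical value under test
// keypairNames.toString() would throw a NullPointerException here, while
// concatenation converts the null reference to the literal "null" instead:
String logLine = "keypairs: " + keypairNames;  // safe: "keypairs: null"
```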
* error resolved
* keypair name to names in UserVmResponse
* keypair name is set in the UserVmResponse, from the details
* null checks removed; keypair names are stored in a string, sent to resetVMSSHKeyInternal, and added to the details
* commit first eval
* deploy vm takes multiple ssh-keys
* Deploy VM UI changed to accept multiple ssh keys
* Reset SSH UI API changed
* ResetSSH.vue
* ssh keys joined, ssh added in infocard
* changes made
* schema error resolved
* potential null pointer exception removed
* Update UserVmManagerImpl.java
unnecessary check removed.
* Update DeployVMCmd.java
* Update DeployVMCmd.java
* Update ResetVMSSHKeyCmd.java
* Update UserVmJoinDaoImpl.java
* .
* arraylist
* Update DeployVMCmd.java
* Update UserVmManagerImpl.java
* Update ResetVMSSHKeyCmd.java
* Update db
* Fix list vm by keypair
* ui fixes
* Fix typos
* ui fixes
* Cleanup
* Adding deprecated and since in api params
* Adding upgrade for existing vms with ssh keys
* Handle no key for cks
* Show existing keypairs in reset ssh key form
* get keys from the right account
Co-authored-by: bicrxm <bickrombishsass@gmail.com>
* This PR/commit comprises the following:
- Support to fall back to the older systemVM template in case of no change in template across ACS versions
- Update core user to cloud in CKS
- Display details of accessing CKS nodes in the UI - K8s Access tab
- Update systemvm template from debian 11 to debian 11.2
- Update letsencrypt cert
- Remove docker dependency, as from ACS 4.16 onward k8s has deprecated support for docker; use containerd as the container runtime
* support for private registry - containerd
* Enable updating template type (only) for system owned templates via UI
* edit indents
* Address comments and move cmd from patch file to cloud-init runcmd
* temporary change
* update k8s test to use k8s version 1.21.5 (instead of 1.21.3 - due to https://github.com/kubernetes/kubernetes/pull/104530)
* support for private registry - containerd
* Enable updating template type (only) for system owned templates via UI
* smooth upgrade of cks clusters
* update pom file with temp download.cloudstack.org testing links
* fix pom
* add cgroup config for containerd
* add systemd config for kubelet
* add additional info during image registry config
* update to official links
* Update 'endpointe.url' global setting to 'endpoint.url'
* Add PR number on 'schema-41610to41700.sql'
* Use ApiServiceConfiguration.ApiServletPath.key() instead of "hardcoded" string
* vm-import: fix unmanaged instance listing
When the host and last host ID are not set for the VM, it may appear in the list of unmanaged instances.
This change fixes the behaviour by filtering the unmanaged instances list for a host using the following three criteria (a hedged sketch follows the list):
- host is set as host_id for the VM
- host is set as the last_host_id for the VM
- pod of the host is set as the pod_id for the VM and both host_id and last_host_id are NULL
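The actual change uses a SearchBuilder (next commit); as a plain predicate, the three criteria read roughly like this (helper name hypothetical, CloudStack VO accessors assumed):
```java
import java.util.Objects;

// Hedged sketch: does an unmanaged instance on this host belong to it?
static boolean belongsToHost(VMInstanceVO vm, HostVO host) {
    Long hostId = vm.getHostId();
    Long lastHostId = vm.getLastHostId();
    if (Objects.equals(hostId, host.getId())) {
        return true;                                   // host_id matches
    }
    if (Objects.equals(lastHostId, host.getId())) {
        return true;                                   // last_host_id matches
    }
    // both NULL: fall back to matching the host's pod
    return hostId == null && lastHostId == null
            && Objects.equals(vm.getPodIdToDeployIn(), host.getPodId());
}
```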
* use SearchBuilder to fix query condition
* add parentheses
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* api,server: add params for updatehypervisorcapabilities API
Allows updating the following capabilities for a hypervisor and version:
- Max DATA volumes limit
- Storage motion supported
- Max hosts per cluster
- VM snapshot enabled
* added test
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update test/integration/smoke/test_hypervisor_capabilities.py
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Add NFS version to mount command
* Remove extra line
* Extend NFS version to mount secondary storage
* Unused import
* Refactor NFS version to be granular
* Make use of the ConfigKey on the NFS version setting value
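A hedged sketch of the ConfigKey idiom referenced above (constructor argument order approximate, setting name and scope illustrative):
```java
// Hedged sketch: a granular NFS version setting, readable per image store.
static final ConfigKey<String> NfsVersion = new ConfigKey<>("Advanced", String.class,
        "secstorage.nfs.version", null,
        "NFS version to use in the mount command for secondary storage", true,
        ConfigKey.Scope.ImageStore);

// per-store value, falling back to the global default when unset
String getNfsVersion(long storeId) {
    return NfsVersion.valueIn(storeId);
}
```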
* In progress primary keys
* Refactor in progress to idempotent way
* Finish SQL changes
* Add java code to match new columns
* Fix imports
* Fix tests
* Remove comments
* Fix index name on vmsnapshot
* Fix parse from correct column on usage storage
* Fix parser columns
* Fix NPE
* Fix NPE for the rest of the occurrences
* Further fix for similar issue
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, if a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Also, creating a compute offering takes a few disk-related parameters which anyway go to the corresponding disk offering only. I think this design was initially made to address the compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done via different APIs, either migrateVolume or resizeVolume. Changing of disk offering should be seamless and should consider new storage tags and new size, and place the volume in the appropriate state as defined in the disk offering.
more details are mentioned here https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring
* Schema changes and disk offering column change from "type" to "compute_only"
* Few more changes
* Decoupled service offering and disk offering
* Remove diskofferingid from vminstance VO
* Decouple service offering and disk offering states
* DiskOffering getSize() is only for strict disk offerings
* Fix deployVM flow
* Added new API params to compute offering creation
* Add diskofferingstrictness to serviceoffering vo under quota
* Added overrideDiskOfferingId parameter in deploy VM API which will override disk offering for the root disk both in template and ISO case
Added diskSizeStrictness parameter in create Disk offering API which will decide whether to restrict resize or disk offering change of a volume
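A hedged sketch of how the new deploy-VM parameter could be declared in DeployVMCmd (constant name and description illustrative):
```java
// Hedged sketch of the overrideDiskOfferingId parameter on DeployVMCmd.
@Parameter(name = ApiConstants.OVERRIDE_DISK_OFFERING_ID,
           type = CommandType.UUID,
           entityType = DiskOfferingResponse.class,
           description = "the ID of the disk offering to use for the ROOT disk, "
                   + "overriding the disk offering linked to the template/ISO")
private Long overrideDiskOfferingId;
```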
* Fix User vm response to show proper service offering and disk offerings
* Added disk size strictness in disk offering response
* Added disk offering strictness to the service offering response
* Remove comments
* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form
* Added diskoffering details to the service offering response
* Added UI changes in deployvm wizard to accept override disk offering id
* Fix delete compute offering
* Fix VM deployment from custom service offering
* Move uselocalstorage column access from service offering to disk offering
* UI: Separated compute and disk related parameters in add compute offering wizard, also added association to disk offering
* Fixed diskoffering automatic selection on add compute offering wizard
* UI: move compute only toggle button outside the box in add compute offering wizard
* Added volumeId parameter to listDiskOfferings API and the disksizestrictness flag of the current disk offering is honored while list disk offerings
* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration
* Added disk offering change checks during resize volume operation
* Added new API changeofferingforVolume API and corresponding changes
* Add UI form for changeOfferingForVolume API
* Fix UI conflicts
* Fix service offering usage as disk offering
* Fix unit test failures
* fix user_vm_view
* Addressed review comments
* Fixed service_offering_view
* Fix service offering edit flow
* Fix service offering constructor to address custom offering
* Fix domain_router_view to get proper service offering id
* Removed unused import
* Addressed review comments and fixed update service offering flow with storage tags
* Added marvin test cases for checking disk offering strictness
* review comments addressed
* Remove system_use column from disk offering join
* update volume_view to update system_use column from service offering and not disk offering
* Fix changeOfferingForVolume API for custom disk offering
* Fix global setting implementation
* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view
* Changes for override root disk offering in deployvm wizard in case of custom offering
* Fix a unit test case
* Fixed recent unit test cases with new serviceofferingvo constructor
* Fix unit test in VolumeApiServiceImpl
* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow
* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering
* Fix smoke test failures
* Added tool tip for migrate volume UI form
* Address review comments and fix UI form of deploy VM in case of ISO.
* Fixed resize volume UI form for data disk
* UI changes to disable override root disk size when override root disk offering is enabled
* UI fix in deploy vm wizard
* Fix listdiskoffering after rebasing with main
* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list
* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form
* Fix false response on updateDiskOffering API
* Added search field for changeofferingforvolume UI form
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Removed DB changes from 4.16 upgrade file
* Resolving merge conflicts with main 4.17
* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.
* UI: Added automigrate checkbox in scale VM form
* Added since attributes to new API params
* Added shrinkOK parameter to changeofferingforvolume API
* Added shrinkOk param to UI in changeOfferingforVolume form
* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form
* Removed old foreign key constraint on IDs of service offering and disk offering
* Allow resize and automigrate of root volume if required in all cases of service offering change
* Allow only resize to higher disk size from UI
* Fixing vue syntax error
* Make UI changes to provide root disk size box when the linked disk offering is of custom
* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms
* Fix resize volume operation to update the VM settings
* Fix migratevolume form to pick selected storage pool id in list diskofferings API
* Do not fail if there are existing role permissions for annotations
* Refactor
* Improve refactor
* Do not update if there are existing role permissions for annotations
* Fix exception on upgrade
* Remove extra space from suggestion
* Apply suggestions from code review
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Use Physical size to evaluate if migration is possible
* Improve logging and consider files skipped as failure in complete migration
* skipped can't be negative
* remove useless method
* group multidisk templates for secstor migration
* use enum
* Update engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/DataMigrationUtility.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl d'Silva <pearl.dsilva@shapeblue.com>
* Enable resetting config values to default value
Provide reset button for zone, cluster, domain, account,
primary and secondary storage so that config values
can be reset to default value
* fix ui issue
* Update test/integration/smoke/test_reset_configuration_settings.py
* Update test/integration/smoke/test_reset_configuration_settings.py
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Fix metrics stats for VMs that are not running
* Improves the way to get vmIdsToRemoveStats
* Improves test
Co-authored-by: José Flauzino <jose@scclouds.com.br>
* Improve logs
* Remove unnecessary comments
* Use diamond inference
* Fix some logs
* Remove unnecessary unboxing
* Create method to handle job result
* Remove unused vars and fix some logics
* Extract code to method and few adjusts
* Use CollectionUtils
* Extract pending work job validation to method
* Create new constructors
* Extract work job and info creation to a method
* Extract submit async job to a method
* Extract find vm by id to a method
* Change log level from trace to debug
* Remove unnused methods and add logs
* Undo code removal
* Remove asserts and fix conditionals
* Address @GabrielBrascher reviews
* Remove double quotes from keys in manual json
* Undo code removal
* Add object to log
* Remove statement from try/catch
* Implement toString with ReflectionToStringBuilderUtils
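ReflectionToStringBuilderUtils is a CloudStack helper; the commons-lang3 idiom it builds on looks roughly like this (excluded fields illustrative):
```java
import org.apache.commons.lang3.builder.ReflectionToStringBuilder;
import org.apache.commons.lang3.builder.ToStringStyle;

@Override
public String toString() {
    // reflection-based toString, rendered as JSON with sensitive fields excluded
    return new ReflectionToStringBuilder(this, ToStringStyle.JSON_STYLE)
            .setExcludeFieldNames("password", "secretKey")
            .toString();
}
```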
* Fix errors related to merge main
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* api,server,engine/schema: admin listvm api clusterid
Add clusterid parameter in listVirtualMachines API for admin
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* import order
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* set clusterid only for ListVMsCmdByAdmin
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* upgrade/systemvm: add template_zone_ref entries
Fixes #5641
When registering a system VM template during an upgrade, entries in cloud.template_zone_ref must be created for the new template.
For a cross-zones template, entry for each zone must be added.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix for template-zone entry create
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Check the pool used space from the bytes used in the storage pool stats collector, for non-default primary storage pools that cannot provide stats.
Also, Update the used bytes from the pool stats answer for non-default primary storage pools if the pool can provide stats.
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* space fix
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* VPC: support LB in multiple vpc tiers if LB provider is VpcVirtualRouter
* server: fix unit test CreateNetworkOfferingTest failures
[ERROR] Tests run: 10, Failures: 0, Errors: 10, Skipped: 0, Time elapsed: 13.902 s <<< FAILURE! - in org.apache.cloudstack.networkoffering.CreateNetworkOfferingTest
[ERROR] createIsolatedNtwkOffWithVlan(org.apache.cloudstack.networkoffering.CreateNetworkOfferingTest) Time elapsed: 0.662 s <<< ERROR!
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'loadBalancerDaoImpl': Invocation of init method failed; nested exception is java.lang.NullPointerException
at org.apache.cloudstack.networkoffering.CreateNetworkOfferingTest.setUp(CreateNetworkOfferingTest.java:110)
Caused by: java.lang.NullPointerException
at org.apache.cloudstack.networkoffering.CreateNetworkOfferingTest.setUp(CreateNetworkOfferingTest.java:110)
* update #5580: use java.util.Optional
* update #5580: create method listByNetworkIdOrVpcIdAndScheme
This adds unique constraints much like other tables, instead of using
a query that may be incompatible with older MySQL 5.x servers.
Fixes #5564
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Change in constructors
* Naming changes
* Remove commented code
* Refactor code for more readability
* Addressed review comments on code refactor
* vmware, network: add maclearning option
Adds option for specifying MAC Learning property for network offering (useful for VMware Distributed Virtual Portgroup). Added global config - network.mac.learning for the default value.
MAC Learning is supported for DV portgroups for VMware Distributed vSwitches v6.6.0+ and vSphere 6.7+
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix warning msg
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* trace nics additions
* work queue patch for network to add
* add secondary key to job
* logging improvements and naming of field(s)
* several naming corrections
* extra check if net already exists for vm
* placeholder job with secondary object
* constraint on entering the same job multiple times
* error handling/warning message
* review comments applied
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Wei Zhou <wei.zhou@shapeblue.com>
Enhanced update network form in the UI.
On network offering change for an isolated network,
- VMware portgroup should be updated accordingly.
- VMs on the network should be placed on the correct VMware portgroup based on the network rate, https://docs.cloudstack.apache.org/en/latest/adminguide/service_offerings.html#network-throttling.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Enable account settings to be visible under domain settings
Account settings can't be configured under domain
level settings right now.
By default, if an account setting is not configured then
its value will be taken from the global setting.
Add a global setting "enable.account.settings.for.domain"
so that if it's enabled then all the account level settings
will be visible under domain level settings also.
If account level setting is configured then that value will
be considered else it will take domain scope value. If
domain scope value is not configured then it will pick
it up from global setting.
If domain level setting is not configured then by default
the value will be taken from global setting
Add another global setting "enable.domain.settings.for.child.domain"
so that when it's true, if a value for a domain setting is not
configured then its parent domain value is considered until
it reaches the ROOT domain. If no value is configured till the ROOT
domain then the global setting value will be taken.
Also display all the settings configured under the domain level
in the listDomains API response
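A hedged sketch of the resolution order the two settings describe (DAO and accessor names hypothetical):
```java
// Hedged sketch: resolve an account-scoped setting, walking
// account -> domain -> parent domains -> global.
String resolveSetting(String name, AccountVO account) {
    String value = accountDetailsDao.getValue(account.getId(), name);  // hypothetical DAO
    if (value != null) {
        return value;
    }
    Long domainId = account.getDomainId();
    while (domainId != null) {
        value = domainDetailsDao.getValue(domainId, name);             // hypothetical DAO
        if (value != null) {
            return value;
        }
        // climb to the parent only when enable.domain.settings.for.child.domain is true
        if (!EnableDomainSettingsForChildDomain.value()) {
            break;
        }
        domainId = domainDao.findById(domainId).getParent();
    }
    return configDao.getValue(name);                                   // global fallback
}
```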
* rename variables
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
* resource limit: Fix resource limit check on VM start
* add check to validate if cpu/memory are within limits for custom offering + exception handling
* unit tests
Co-authored-by: utchoang <hoangnm@unitech.vn>
This adds a volume(primary) storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes; snapshots should be possible,
but currently don't work for RAW volume types in CloudStack.
* plugin-storage-volume-linstor: notify libvirt guests about the resize
* Resource Icon support - backend
* Add API support for resourceicon
* update reponse params + ui support
* Add exclusive list api for icons and UI changes
* refactor upload view
* UI changes to support resource icon wherever necessary
* convert api to POST + refactor icon view
* Add response name to list API + cosmetic changes in UI
* Added support for the following:
resource icon support for vpcs, networks, domains, and projects
add icons to list view if resources support icons to be added
support for showing project icons in the project switching drop-down menu
* List resourceicon cmds to be allowed for user role too
Users to inherit account icon if present (in listUsers response)
Move common code to plugin.js
Add icon to project list view - while switching between projects - Dashboard page
Show icons against zones - Capacity Dashboard view
Show user / account icon at the login button if present
* cosmetic changes
* optimize ui code
* fix reload issue for domain view
* add access check for delete operation
* ui-related changes to show iso icons
* iso image in uservm response
* add icons to custom form's list resources
* some more custom forms aligned to show icon for resources
* cosmetic changes + add listing of icons to listDomainChildren cmd
* Add backend/server-side validation for base64 string passed for image
* change preview border
* preselect zone if there's only one
* add default icon
* show icon for network list in deploy vm view
* add custom icons if any to the import-export VM view
* preselect zone persistence on clearing cache
* prevent root vol from inheriting template/iso icon
* show template icon in the info card details
* fix icon not being shown on hard-refresh / initial traversal
* fix success message
If a vm has multiple nics belonging to different shared networks then
wrong statistics will be collected, since the network id is not considered
as part of the primary key. Make the change so that the primary key contains
the network id, so that traffic belonging to the corresponding network is shown.
If the network id is not added to the primary key then all the traffic of all
shared networks will show up in one nic.
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
* Add commons-lang3 to Utils
* Create a util to provide methods that ReflectionToStringBuilder does not have yet
* Create method to retrieve map of tags from resource
* Enable tests on volume components and remove useless tests
* Refactor VolumeObject and add unit tests
* Extract createPolicy in several methods
* Create method to copy policies between volumes and add unit tests
* Copy policies to new volume before removing old volume on volume migration
* Extract "destroySourceVolumeAfterMigration" to a method and test it
* Remove javadoc @param with no sensible information
* Rename method name to a generic name
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Extend addAnnotation and listAnnotations APIs
* Allow users to add, list and remove comments
* Add adminsonly UI and allow admins or owners to remove comments
* New annotations tab
* In progress: new comments section
* Address review comments
* Fix
* Fix annotationfilter and comments section
* Add keyword and delete action
* Fix and rename annotations tab
* Update annotation visibility API and update comments table accordingly
* Allow users seeing all the comments for their owned resources
* Extend comments for volumes and snapshots
* Extend comments to multiple entities
* Add uuid to ssh keypairs
* SSH keypair UI refactor
* Extend comments to the infrastructure entities
* Add missing entities
* Fix upgrade version for ssh keypairs
* Fix typo on DB upgrade schema
* Fix annotations table columns when there is no data
* Extend the list view of items to show if they have comments
* Remove extra test
* Add annotation permissions
* Address review comments
* Extend marvin tests for annotations
* updating ui stuff
* addition to toggle visibility
* Fix pagination on comments section
* Extend to kubernetes clusters
* Fixes after last review
* Change default value for adminsonly column
* Remove the required field for the annotationfilter parameter
* Small fixes on visibility and other fixes
* Cleanup to reduce files changed
* Rollback extra line
* Address review comments
* Fix cleanup error on smoke test
* Fix sending incorrect parameter to checkPermissions method
* Add check domain access for the calling account for domain networks
* Fix only display annotations icon if there are comments the user can see
* Simply change the Save button label to Submit
* Change order of the Tools menu to prevent users getting a 404 error on clicking the text instead of expanding
* Remove comments when removing entities
* Address review comments on marvin tests
* Allow users to list annotations for an entity ID
* Allow users to see all comments for allowed entities
* Fix search filters
* Remove username from search filter
* Add pagination to the annotations tab
* Display username for user comments
* Fix add permissions for domain and resource admins
* Fix for domain admins
* Trivial but important UI fix
* Replace pagination for annotations tab
* Add confirmation for delete comment
* Lint warnings
* Fix reduced list as domain admin
* Fix display remove comment button for non admins
* Improve display remove action button
* Remove unused parameter on groupShow
* Include a clock icon to the all comments filter except for root admin
* Move cleanup SQL to the correct file after rebasing main
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
* server: Optional destination host when migrate a vm
* #4378: migrate systemvms/routers with optional host
* Migrate vms across clusters
After enabling maintenance mode on a host, if no suitable hosts
are found in the same cluster then search for hosts in
different clusters having the same hypervisor type.
Set the global setting migrate.vm.across.clusters to true to enable this.
* search all clusters in zone when migrate vm across clusters if applicable
* Honor migrate.vm.across.clusters when migrate vm without destination
* Check MIGRATE_VM_ACROSS_CLUSTERS in zone setting
* #4534 Fix VMs being migrated to the same clusters in CloudStack, caused by dedicated resources.
* #4534 extract some code to methods
* fix #4534: an error in 'git merge'
* fix #4534: remove useless methods in FirstFitPlanner.java
* fix #4534: vms are stopped in host maintenance
* fix #4534: across-cluster migration of vms with cluster-scoped pools is supported by vmware vmotion
* fix #4534: migrating systemvms is only possible across clusters in the same pod, to avoid potential network errors.
* fix #4534: code optimization
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
Co-authored-by: Sina Kashipazha <s.kashipazha@global.leaseweb.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Sina Kashipazha <soreana@users.noreply.github.com>
This PR allows migration of public templates that are created from snapshots / volumes. Data migration across secondary stores initially excluded all public templates on the pretext that public templates are automatically synced when a new image store is added; however, this assumption isn't true for templates marked as "public" when created from snapshots / volumes. Such templates can be identified if their url is null
* Filter disk / service offerings by domain at DB level
* Search for tags in the db
* Update search to include host tags
* Differentiate between tags
* Refactor
* Fix creating volumes from snapshots without backup
When snapshots are created only on primary storage, and one tries to create
a volume or a template from the snapshot, only the first operation is
successful. That is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage
and it didn't cover the functionality to create a volume from a snapshot
which is kept only on Ceph
* Address review
* CLOUDSTACK-9175: [VMware DRS] Adding new host to DRS cluster does not participate in load balancing.
Summary: When a new host is added to a cluster, Cloudstack doesn't create all the port groups (created by cloudstack earlier in other hosts) present in the cluster. Since the new host doesn't have all the necessary networking port groups of cloudstack, it is not eligible to participate in DRS load balancing or HA.
Solution: When adding a host to the cluster in Cloudstack, use VMware API to find the list of unique port groups on a previously added host (older host in the cluster) if exists and then create them on the new host.
* Added few checks for cluster details
* Create utility to centralize byte conversions
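A minimal sketch of what such a utility could look like; the class name matches the ByteScaleUtilsTest.java commit further down, while the exact method set is assumed. Doing the arithmetic in long avoids the int overflow that the later "turn int variables into long" commit addresses.
```java
// Hedged sketch of a byte-conversion utility; constants and methods illustrative.
public final class ByteScaleUtils {
    public static final long KiB = 1024L;
    public static final long MiB = KiB * 1024L;
    public static final long GiB = MiB * 1024L;

    private ByteScaleUtils() { }

    public static long mebibytesToBytes(long mib) {
        return mib * MiB;   // long arithmetic: no overflow for realistic RAM sizes
    }

    public static long bytesToKibibytes(long bytes) {
        return bytes / KiB;
    }
}
```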
* Add/change toString definitions
* Create Libvirt handler to ScaleVmCommand
* Enable dynamic scaling of VMs with KVM
* Move config from interface to class and rename it
As every variable declared in an interface is already final,
this move is needed to mock tests in the next commits
* Configure VM max memory and cpu cores
The values are according to service offering or global configs
* Extract dpdk configuration to a method and test it
* Extract OS desc config to a method and test it
* Extract guest resource def to a method and test it
Improve libvirt def
* Refactor LibvirtVMDef.GuestResourceDef
* Refactor ScaleVmCommand
* Improve VMInstanceVO toString()
* Refactor upgradeRunningVirtualMachine method
* Turn int variables into long on utility
* Verify if VM is scalable on KVMGuru
* Rename some KVMGuruTest's methods
* Change vm's xml to work with max memory
* Verify if service offering is dynamic before scale
* Create methods to retrieve data from domain
* Create def to hotplug memory
* Adjust the way command was scaling the VM
* Fix database persistence before executing command
* Send more info to host to improve log
* Fix var name
* Fix missing "}"
* Undo unnecessary changes
* Address review
* Fix scale validation
* Add VM prepared for dynamic scaling validation
* Refactor LibvirtScaleVmCommandWrapper and improve unit tests
* Remove duplicated method
* Add RuntimeException check
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Update ByteScaleUtilsTest.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Add SharedMountPoint to KVMs supported storage pool types
* Fix live migration to iSCSI and improve logs
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* [#4398] adapt code to handle multi tag string with commas
* [#4398] remove trailing spaces
* [#4398] add multi host tag support for ingest process
* [#4398] add test for multi tag support in offerings
* [#4398] update multitag support for DeploymentPlanningManagerImpl
encapsulate the multi tag check from the Ingest Feature and DeploymentPlanningManager into
HostDaoImpl to prevent code duplication
* [#4398] move logic to HostVO and add tests
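With the check centralized, the comma-separated multi-tag test could read roughly as below (helper name hypothetical):
```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

// Hedged sketch: a host satisfies an offering when every comma-separated
// offering tag appears among the host's (possibly comma-separated) tags.
static boolean supportsOfferingTags(String hostTags, String offeringTags) {
    if (offeringTags == null || offeringTags.trim().isEmpty()) {
        return true;                                   // untagged offerings fit any host
    }
    if (hostTags == null) {
        return false;
    }
    Set<String> available = Arrays.stream(hostTags.split(","))
            .map(String::trim).collect(Collectors.toSet());
    return Arrays.stream(offeringTags.split(","))
            .map(String::trim).allMatch(available::contains);
}
```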
* rename test method
* [#4398] Change string method to Apache's StringUtils
* [#4398] modify test for multi tag support
* adapt sql for double tags
Co-authored-by: Dirk Klahre <Dirk.Klahre@Itelligence.de>
Fixes#4897
Some details tables were allowing null values for detail value which can cause NPE in some cases.
mysql> SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE FROM information_schema.columns WHERE table_schema='cloud' AND table_name LIKE '%_details' AND column_name='value' AND IS_NULLABLE='YES';
+-------------------------------+-------------+---------------+
| TABLE_NAME | COLUMN_NAME | COLUMN_TYPE |
+-------------------------------+-------------+---------------+
| account_details | value | varchar(255) |
| cluster_details | value | varchar(255) |
| data_center_details | value | varchar(1024) |
| domain_details | value | varchar(255) |
| image_store_details | value | varchar(255) |
| storage_pool_details | value | varchar(255) |
| template_deploy_as_is_details | value | text |
| user_vm_deploy_as_is_details | value | text |
| user_vm_details | value | varchar(5120) |
+-------------------------------+-------------+---------------+
9 rows in set (0.00 sec)
Brings consistency to the value column of *_details tables by preventing null values.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Add new registers in guest_os
* Create a procedure to insert guest_os and guest_os_hypervisor data
* Remove ';' as the last char of the procedure
* Set the right category_id on guest_os
Ubuntu 20.04 LTS - Ubuntu - Linux
Ubuntu 21.04 - Ubuntu - Linux
pfSense 2.4 - FreeBSD - Unix
OpenBSD 6.7 - Unix
OpenBSD 6.8 - Unix
AlmaLinux 8.3 - CentOS
* Fix SQL line's last character
* Add from with dummy table
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Global setting to select preferred storage pool
Currently all the volumes are allocated on storage pools
based on the capacity or the algorithm selected. Sometimes
we need to deploy all volumes of a particular account in a
specific storage pool, and in that case it's not possible.
With this change, we can specify the uuid of the preferred
storage pool, so that all volumes of the account will be
deployed in this pool
* code feedback
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
Currently we can send a default value of 4K/32K for the GET/POST request of the
user data field. Most new browsers and also nginx support up to 1MB of
post data.
Added a new global setting `vm.userdata.max.length` with a default value of
32KB which can be increased up to 1MB.
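A hedged sketch of how the new setting might be declared and enforced (ConfigKey arguments approximate, validation helper hypothetical):
```java
// Hedged sketch: vm.userdata.max.length as a ConfigKey plus a length check.
static final ConfigKey<Integer> VmUserDataMaxLength = new ConfigKey<>("Advanced",
        Integer.class, "vm.userdata.max.length", "32768",
        "Maximum length of the userdata accepted for a VM", true);

void validateUserData(String userData) {
    if (userData != null && userData.length() > VmUserDataMaxLength.value()) {
        throw new InvalidParameterValueException(
                "User data exceeds " + VmUserDataMaxLength.value() + " characters");
    }
}
```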
* server: skip zone check for PERHOST iso during attachIso
Hypervisor tools ISOs - vmware-tools.iso, xs-tools.iso - are marked as PERHOST in DB. They are active but not downloaded to the secondary storages and hence have no template-zone entry.
Skips the template-zone check for such templates.
Fixes #5265
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* inverted check
* use constants in TemplateManager
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Added disk provisioning type support for VMWare
* Review changes
* Fixed unit test
* Review changes
* Added missing licenses
* Review changes
* Update StoragePoolInfo.java
Removed white space
* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id
* Delete __init__.py
* Merge fix
* Fixed failing test
* Added comment about parameters
* Added error log when update fails
* Added exception when using API
* Ordering storage pool selection to prefer thick disk capable pools if available
* Removed unused parameter
* Reordering changes
* Returning storage pool details after update
* Removed multiple pool update, updated marvin test, removed duplicate enum
* Removed comment
* Removed unused import
* Removed for loop
* Added missing return statements for failed checks
* Class name change
* Null pointer
* Added more info when a deployment fails
* Null pointer
* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Small bug fix on API response and added missing bracket
* Removed datastore cluster code
* Removed unused imports, added missing signature
* Removed duplicate config key
* Revert "Added more info when a deployment fails"
This reverts commit 2486db78dc.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Externalize secondary storage capacity threshold
* Use default value as threshold when config value is lower than 0.0
* Move config to CapacityManager
* Validate config in CapacityManagerImpl
* Use config in StorageOrchestrator
* Change config description
* Remove unused import
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Enhance log messages with hostName
* Use host.toString() on most of host logs.
* Remove redundant "Host" in logs and enhance logs
* duplicated "for"
* Adopt String.format, and enhance code
* Address reviews enhancing log messages
Update server/src/main/java/com/cloud/resource/ResourceManagerImpl.java
-- server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
-- server/src/main/java/com/cloud/resource/RollingMaintenanceManagerImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Fix String.format issue and change log message from debug to warn
* Fix checkstyle issue
* Fix string.format log
* Address review: enhance logs
* Enhance log of hosts in maintenance avoid list
* Remove "VM" on logs as vm.toString() already appends VM-<details>
* Add more details of the VM when postStateTransitionEvent
* Address reviewer and enhance VMInstanceVO.toString()
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
- Added connection manager to the gateway client.
- Renew the client session on '401 Unauthorized' response.
- Refactored the gateway client calls, for GET and POST methods.
- Consume the http entity content after login/(re)authentication and close the content stream if it exists.
- Updated storage pool client connection timeout configuration 'storage.pool.client.timeout' to non-dynamic.
- Added storage pool client max connections configuration 'storage.pool.client.max.connections' (default: 100) to specify the maximum connections for the ScaleIO storage pool client.
- Updated unit tests.
and blocked the attach volume operation for uploaded volume on ScaleIO/PowerFlex storage pool
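A hedged sketch of the renew-on-401 behaviour described above, in Apache HttpClient terms (the re-login helper is hypothetical):
```java
// Hedged sketch: retry a gateway call once after renewing the session on 401.
CloseableHttpResponse executeWithReauth(CloseableHttpClient httpClient, HttpUriRequest request)
        throws IOException {
    CloseableHttpResponse response = httpClient.execute(request);
    if (response.getStatusLine().getStatusCode() == HttpStatus.SC_UNAUTHORIZED) {
        EntityUtils.consumeQuietly(response.getEntity()); // release the pooled connection
        response.close();
        renewClientSession();                             // hypothetical re-login helper
        response = httpClient.execute(request);           // single retry with fresh session
    }
    return response;
}
```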
This PR fixes the problem of not updating the chain info, or setting chain info to null, after volume migrations.
Problem: While fetching the volume chain info, the management server assumes the datastore name to be a UUID (this is true only for NFS storages added by CloudStack) but the datastore can have any name.
Solution: To fetch the volume chain info, use the datastore name instead of the UUID.
The fix is made in the flow of the following API operations:
- migrateVirtualMachine
- migrateVirtualMachineWithVolume
- migrateVolume
This PR introduces new granularity levels to configure VM dynamic scalability. Previously a VM was configured to be dynamically scalable based on the template and global setting. Now we bring this option to configure at the service offering and VM level also.
A VM can dynamically scale only when all flags are ON at the VM level, template, service offering and global setting. If any of the flags is set to false then the VM cannot be scaled. This result will be persisted in the DB for each VM and will be honoured for that VM till it is updated.
We are introducing the 'dynamicscalingenabled' parameter with permitted values of true or false for the deployVM API and createServiceOffering API.
Following are the API parameter changes:
createServiceOffering API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
deployVirtualMachine API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
Following are the UI changes:
Service offering creation has ON/OFF switch for dynamic scaling enabled with default value true
Inclusivity changes for CloudStack
- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.
This PR updates the default git branch to 'main', as part of #4887.
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes: #4990
When a VM associated with a backup offering is destroyed/expunged, the backup offering isn't unassigned, and despite the VM having no backups present, backup usage is generated. This PR prevents usage record generation when there are no backups present for a VM with a backup offering associated to it. This is done by ensuring that the usage event for backups is generated only when the backup size > 0
This PR fixes #5047 which can be reproduced on Zones with _(I) Advanced Networks, (II) Security Groups enabled for the Zone, (III) network offering without Security Groups_; for instance, `DefaultSharedNetworkOffering` which does not list Security Group as a supported service.
The issue is due to the following code inside the method `VirtualMachineManagerImpl.orchestrateReboot`:
[VirtualMachineManagerImpl.java#L3340](280c13a4bb/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java (L3340)).
```
final Answer rebootAnswer = cmds.getAnswer(RebootAnswer.class);
if (rebootAnswer != null && rebootAnswer.getResult()) {
    if (dc.isSecurityGroupEnabled() && vm.getType() == VirtualMachine.Type.User) {
        List<Long> affectedVms = new ArrayList<Long>();
        affectedVms.add(vm.getId());
        _securityGroupManager.scheduleRulesetUpdateToHosts(affectedVms, true, null);
    }
    return;
}
```
Fixes: #4972
This PR sets system VMs' agent state to disconnected when they are stopped. Currently, when a systemVM (Console Proxy VM / Secondary Storage VM) is stopped, the agent state still appears to be 'Up'
* server: destroy ssvm, cpvm on last host maintenance
When a single or last UP host enters into maintenance, just stopping the SSVM and CPVM will leave behind VMs on the hypervisor side. As these system vms will be recreated, they can be destroyed.
Fixes #3719
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix methods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* immediately destroy systemvms
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix destroy
Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set true only for delete commands for ssvm and cpvm.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unit test fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing return statement
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
VM should be stopped with cleanup before calling expunge, else the server may throw an error with the host in PrepareForMaintenance state.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* rename
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* forceha: fix vm is not started if it is powered off from inside
steps to reproduce the issue
(1) make sure force.ha is true in global setting. if not, change it to true, and restart mgt server
(2) create a service offering , ha is not enabled
(3) create a vm
(4) log into the vm, and power off via cli.
expected result: vm is started again by cloudstack
actual result: vm is not started.
* forceha: fix vms are still running if host is force-removed
when a host is force-removed, vms are stopped in cloudstack but not stopped on the host
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
"success": true
}
```
after some minutes, vms are still running on the host
```
root@mgt01:~# ssh node63 virsh list
Id Name State
---------------------------
1 i-2-19-VM running
2 i-2-11-VM running
```
error message are
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```
* forceha: Make ForceHA dynamic
Datastore cluster support as primary storage is already there, but any changes to the datastore cluster at vCenter, like addition/removal of datastores, are not synchronised with CloudStack directly. That needed removal of the primary storage from CloudStack and adding it again.
Here synchronisation of the datastore cluster is fixed without the need to remove or add the datastore cluster.
1. A new API is introduced syncStoragePool which takes datastore cluster storage pool UUID as the parameter. This API checks if there any changes in the datastore cluster and updates management server accordingly.
2. During synchronisation if a new child datastore is found in datastore cluster, then management server will create a new child storage pool in database under the datastore cluster. If the new child storage pool is already added as an individual storage pool then the existing storage pool entry will be converted to child storage pool (instead of creating a new storage pool entry)
3. During synchronisation if an existing child datastore in CloudStack is found to be removed on vCenter then the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastore.
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives an option of separating the first right subnet from the rest, and kicking out individual statements for each right subnet for better cross-compatibility.
Backported from PR: #4137
update per PR suggestion
Fixes #3138
Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This PR addresses the issue raised at #4545 (Fail to change Service offering from local <> shared storage).
When upgrading a VM service offering it is validated if the new offering has the same storage scope (local or shared) as the current offering. I think that the validation makes sense in a way of preventing running Root disks with an offering that does not match the current storage pool. However, the validation only compares both offerings and does not consider that it is possible to migrate Volumes between local <> shared storage pools.
The idea behind this implementation is that CloudStack should check the scope of the current storage pool in which the ROOT volume is allocated; thus, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings that are supported for such a pool.
This PR also fixes an issue where the API command that lists offerings for a VM should follow the same idea and list based on the storage pool that the volume is allocated and not the previous offering.
Fixes: #4545
The default length is 255, which caused a truncation of data if
the JSON object representing the backup volumes is too big.
It caused errors when backups were made on VMs with 3 volumes
or more.
`vm_instance.backup_volumes` has the type TEXT, which has a
maximal length of 65535 characters.
Fixes #4965
This PR makes sure no orphaned snapshot details are considered in the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed this would be a bit complicated
Co-authored-by: Daan Hoogland <dahn@onecht.net>
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
A VM can have multiple ROOT volumes and some can be on a zone-wide store, therefore iterate over all of them till a cluster ID is found.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake, VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
When invoking the migrateVirtualMachineWithVolume API call and a strategy isn't found, the volumes are left in Migrating state.
This PR puts the volumes back into Ready state.
This PR fixes: #4462
Problem Statement:
In case of VMware, when a VM having multiple data disks is destroyed (without expunge) and one tries to recover the VM, the previous data disks are not attached to the VM like before the destroy. Only the root disk is attached to the VM.
Root cause:
All data disks were removed as part of VM destroy. Only the volumes which are selected to delete (while destroying VM) are supposed to be detached and destroyed.
Solution:
During VM destroy, detach and destroy only volumes which are selected during VM destroy. Detach the other volumes during expunge of VM.
Fixes #4201
This PR addresses the issue of a vm snapshot being indefinitely stuck in Expunging state in case deletion fails.
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster during a stopped VM migration. Also, find target datastore using the host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Adds new/missing guest os mappings for XCP-ng/Xenserver 8.1
Copy guest OS mappings from XCP-ng/Xenserver 8.1 for XCP-ng/Xenserver 8.2
Adds Ubuntu 20.04 guest os mapping for XCP-ng/Xenserver 8.2
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #4517
Adds capacity checks for RandomAllocator (host allocator)
Factors out the host cpu capability and capacity check wrt service offering code into CapacityManager.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR aims at introducing persistence mode in L2 networks and enhancing the behavior in Isolated networks
Doc PR apache/cloudstack-documentation#183
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This contains 3 main changes
(1) add NETWORK_STATS_ethX for all nics with public ips in VPC VRs (current: NETWORK_STATS_eth1)
(2) DO NOT create records in user_statistics for each VPC tier (only one record per public nic per VPC VR)
(3) send NetworkUsageCommand before unplugging a NIC with public IPs from VPC VR
Public IP addresses dedicated to one domain should not be accessed
by other domains. Also, root admin should be able to display all
public ip addresses in system.
Currently the following issues exist:
1. Public IP address assigned to one domain can be accessed by
other sibling domains
2. If use.system.public.ip is false then child domains should not
see public ip of ROOT domain
Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 59,
"publicipaddress": [
```
After fix
```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 10,
```
* server: create DB entry for storage pool capacity when create storage pool
* Revert "server: create DB entry for storage pool capacity when create storage pool"
This reverts commit e790167bfe.
* server: create DB entry for storage pool capacity when create zone-wide storage pools
Update new systemvmtemplate for 4.15.1.0; synced:
http://download.cloudstack.org/systemvm/4.15/
A new template is necessary due to many security fixes over the last year; the 4.15.0 systemvmtemplate was created about a year ago.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* Update vm_template table removed field when template is deleted
* Update method name
* address comment
* Extracted code to separate methods
* Address test failure
* refactor test cleanup
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* Updated libvirt's native reboot operation for VM on KVM using ACPI event, and added 'forced' reboot option to stop and start the VM (using rebootVirtualMachine API)
* Added 'forced' reboot option for System VM and Router
- New parameter 'forced' in rebootSystemVm API, to stop and then start System VM
- New parameter 'forced' in rebootRouter API, to force stop and then start Router
* Added force reboot tests for User VM, System VM and Router
Duplicated volumes after failed migration in Allocated state.
Fix: Clean up the duplicate volume when the destination managed volume creation fails on the migrate volume operation
While finding pools for volume migration, list the following compatible storages:
- all zone-wide storages of the same hypervisor.
- when the volume is attached to a VM, then all storages from the same cluster as that of VM.
- for detached volume, all storages that belong to clusters of the same hypervisor.
Fixes #4692, Fixes #4400
If the template from which the VR is created got deleted, the state
is set to inactive and removed to null.
Since the template is already deleted, the VR can't be created
using this template again.
If someone restarts the network with cleanup then it will try to
deploy the VR from the old non-existing template again.
So search only for active templates which are not yet deleted.
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore
- Check the router filesystem is writable or not, before performing health checks
=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
=> The router health checks keeps the config info at "/var/cache/cloud" and updates the monitor results at "/root" for health checks, both are different partitions. So, test at both the locations.
=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not
- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)
* Addressed some issues for few operations on PowerFlex storage pool.
- Updated migration volume operation to sync the status and wait for migration to complete.
- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
- Minor improvements in the PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option when taking a snapshot of volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
- Fixes inter-cluster migration of VMs
- Allows migration of stopped VM with disks attached to different and suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (clusters of same Pod) for system VMs, VRs on VMware
- Allows storage migration for stopped system VMs, VRs on VMware within the same Pod if the StoragePool has cluster scope
Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Steps to reproduce the issue:
(1) Create 10000 service offerings (via the DB procedure below or cloudmonkey).
```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;
DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
DECLARE count INT DEFAULT 10000;
SET @offeringid = (select max(id)+1 from disk_offering);
WHILE count > 0 DO
INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500,256);
SET @offeringid = @offeringid + 1;
SET count = count - 1;
END WHILE;
END $$
DELIMITER ;
mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```
(2) Check the total time of the periodic capacity check in CloudStack.
Without this patch, it takes about 2.5 seconds (2 hosts):
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```
With this patch, it takes about 1.3 seconds (2 hosts):
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```
If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
* 4.15:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
* 4.14:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
This PR fixes an issue when moving a VM from one account to another.
Steps to reproduce the issue:
(1) create a vm with multiple shared networks (in advanced zone, or advanced zone with security groups)
(2) create another account (in the same domain, which can also access the shared networks)
(3) move the vm to the new account, with a list of network ids
Expected result: the vm has nics on the networks in the same order as specified in the API request, and the nics keep the same ips as before.
Actual result: the network order is not the same as specified, and the ips are changed.
* server: fix failure to create a vm when another vm with the same name was previously added to and removed from the network
Steps to reproduce the issue:
(1) create vm-1 on network-1
(2) add vm-1 to network-2
(3) remove vm-1 from network-2
(4) create another vm with same name vm-1 on network-2
Expected result: the operation succeeds.
Actual result: the operation fails.
* #4600: add back a removed line
Fix db upgrade path conflict, add 4.15.1.0->4.16.0.0 for master, bump
systemvmtemplate version to 4.16.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Fix for mapping guest OS type read from OVF to existing guest OS in CloudStack database while registering VMware template
* Added unit tests to String Utils methods and updated the code
* Updated the java doc section
* Updated the OS description logic to keep an equals-ignore-case match with the guest OS display name
Update the guest OS from the OVF file after upload is completed
This PR fixes the template upload from local on VMware
Co-authored-by: dahn <daan.hoogland@gmail.com>
This PR addresses an error that appears when you try to add a new host. It is unclear why there was a cast to String in the first place; presumably some classes send a HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed by using the same type everywhere? With this fix, adding a new XenServer host works fine.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Setting snapshot state to error on timeout
* Setting removed field so snapshot record is ignored by garbage collection
* Removed explicitly setting error status, renamed method from markFailed to markRemoved
* Renamed method, moved code a few lines down
* Moved remove logic
* Removed unused service
* Moved removed logic - last time, promise
For Basic networks, no isolation methods are provided, and an exception is
thrown when trying to encode the VLAN id. That is why we have to check,
before encoding, that the list of isolation methods is not empty.
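A minimal sketch of that guard, with hypothetical stand-ins for the engine's types; only the check itself is what the fix adds.

```java
import java.util.List;

public class VlanEncoder {
    // Illustrative only: the real code works on the physical network's isolation
    // methods and the vlan:// broadcast URI; names here are simplified.
    static String encodeVlanIfIsolated(List<String> isolationMethods, String vlanId) {
        if (isolationMethods == null || isolationMethods.isEmpty()) {
            return null; // Basic networks provide no isolation methods; skip encoding
        }
        return "vlan://" + vlanId;
    }
}
```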
* support for handling incremental snaps (on DB entries) on xen
* Addressed comments
* Update NfsSecondaryStorageResource.java
adjusted spacing in comment/log
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This feature enables the following:
- Balanced migration of data objects from the source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
After a vm is shutdown, the power state isn't updated immediately. This prevents changing the service offering.
This PR updates the power state immediately after the vm is confirmed to be shutdown.
Fixes: #3159
While removing a secondary nic from a Running vm, if the default nic is updated to the secondary nic before that nic is removed, the vm will be left without a default nic (and cannot be started) when both operations complete.
This is because the UpdateDefaultNic API is not handled as a vm work job (AddNicToVMCmd and RemoveNicFromVMCmd are), so it is processed before the nic is removed. The result is that the secondary nic becomes the default nic and is then removed. A generic illustration of per-VM serialization follows below.
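This is a generic illustration, not CloudStack's VmWorkJob framework: routing all NIC operations for a VM through one single-threaded queue serializes them, so an "update default NIC" can no longer interleave with a "remove NIC" already in flight.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerVmSerialQueue {
    // One single-threaded executor per VM id; operations on the same VM run in order.
    private final Map<Long, ExecutorService> queues = new ConcurrentHashMap<>();

    public Future<?> submit(long vmId, Runnable op) {
        return queues.computeIfAbsent(vmId, id -> Executors.newSingleThreadExecutor())
                     .submit(op);
        // Note: a real implementation would also shut queues down when the VM is expunged.
    }
}
```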
* engine: honour bypass VLAN id/range for L2 networks
Commit e894238d904a9c49c1140371f612a51d251efc1 (#3899) allowed private
gateways to bypass the vlan check, but the refactoring did not cover the
case for L2 networks, only Shared networks. This fix re-enables honouring
the bypass-vlan-check option for L2 guest networks (in addition to
Shared networks).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Update NetworkOrchestrator.java
This is an extension of #3732 for kvm.
This is restricted to ovs > 2.9.2; since Xen uses ovs 2.6, pvlan is unsupported there.
This also fixes an issue where vms on the same pvlan were unable to communicate when running on the same host.
This PR adds minor version support when mounting nfs on the SSVM as requested in #2861
The global setting "secstorage.nfs.version" has been changed to use the String data type which allows any minor version to be specified.
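A tiny hypothetical helper showing why the String type matters here: a minor version such as "4.1" passes through to the NFS mount options unchanged, which an integer setting could not represent.

```java
public class NfsMountOptions {
    // Hypothetical helper, not the SSVM's actual code.
    static String nfsVersionOption(String secstorageNfsVersion) {
        if (secstorageNfsVersion == null || secstorageNfsVersion.isEmpty()) {
            return ""; // let the client negotiate the version
        }
        return "vers=" + secstorageNfsVersion; // e.g. "vers=4.1" on the SSVM mount
    }
}
```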
* DB : Add support for MySQL 8
- Splits the commands to create the user and grant access on the database; the old
combined statement is no longer supported by MySQL 8.x (a sketch of the split
statements follows at the end of this section)
- `NO_AUTO_CREATE_USER` is no longer supported by MySQL 8.x, so remove
it from the db.properties connection parameters
For a mysql-server 8.x setup, the following changes were added/tested to
make it work with CloudStack in /etc/mysql/mysql.conf.d/mysqld.cnf,
followed by a restart of the mysql-server process:
```
server_id = 1
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE,NO_ENGINE_SUBSTITUTION"
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=1000
log-bin=mysql-bin
binlog-format = 'ROW'
default-authentication-plugin=mysql_native_password
```
Notice the last line above; it re-enables the old password-based
authentication used by MySQL 5.x.
Developers can set an empty password as follows:
```
sudo mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED BY '';
```
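As a sketch of the split statements themselves: the 'cloud' user/database names below are CloudStack-style examples, not taken verbatim from the deploy scripts, and the JDBC wrapper is only there to keep the example self-contained.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateDbUser {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/?useSSL=false", "root", "");
             Statement st = conn.createStatement()) {
            // MySQL 5.x allowed one combined statement that implicitly created the user:
            //   GRANT ALL ON cloud.* TO 'cloud'@'localhost' IDENTIFIED BY 'cloud';
            // MySQL 8.x rejects IDENTIFIED BY inside GRANT, so create and grant separately:
            st.execute("CREATE USER IF NOT EXISTS 'cloud'@'localhost' IDENTIFIED BY 'cloud'");
            st.execute("GRANT ALL PRIVILEGES ON cloud.* TO 'cloud'@'localhost'");
        }
    }
}
```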
In the libvirt repository, there are two related commits:
- 2019-08-23, Daniel P. Berrangé: "rpm: don't enable socket activation in upgrade if --listen present"
- 2019-08-22, Daniel P. Berrangé: "remote: forbid the --listen arg when systemd socket activation"
In libvirt.spec.in:
```
/bin/systemctl mask libvirtd.socket >/dev/null 2>&1 || :
/bin/systemctl mask libvirtd-ro.socket >/dev/null 2>&1 || :
/bin/systemctl mask libvirtd-admin.socket >/dev/null 2>&1 || :
/bin/systemctl mask libvirtd-tls.socket >/dev/null 2>&1 || :
/bin/systemctl mask libvirtd-tcp.socket >/dev/null 2>&1 || :
```
Co-authored-by: Wei Zhou <w.zhou@global.leaseweb.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR adds outputting human readable byte sizes in the management server logs, agent logs, and usage records. A non-dynamic global variable is added (display.human.readable.sizes) to control switching this feature on and off. This setting is sent to the agent on connection and is only read from the database when the management server is started up. The setting is kept in memory by the use of a static field on the NumbersUtil class and is available throughout the codebase.
Instead of seeing things like:
```
2020-07-23 15:31:58,593 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) (logid:) Seq 8-1863645820801253428: Processing: { Ans: , MgmtId: 52238089807, via: 8, Ver: v1, Flags: 10, [{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-224-VM","bytesSent":"106496","bytesReceived":"0","result":"true","details":"","wait":"0",}}] }
```
the KB, MB and GB values will be printed out:
```
2020-07-23 15:31:58,593 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) (logid:) Seq 8-1863645820801253428: Processing: { Ans: , MgmtId: 52238089807, via: 8, Ver: v1, Flags: 10, [{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-224-VM","bytesSent":"(104.00 KB) 106496","bytesReceived":"(0 bytes) 0","result":"true","details":"","wait":"0",}}] }
```
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Human+Readable+Byte+sizes
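A minimal sketch of such a formatter; the real implementation lives on NumbersUtil, so the class and method here are illustrative only. It reproduces the "(104.00 KB) 106496" form seen in the log above.

```java
public final class HumanReadable {
    private static final long KB = 1024, MB = KB * 1024, GB = MB * 1024;

    // Prefix the raw byte count with a human-readable value, as in the logs above.
    public static String toHumanReadableSize(long bytes) {
        if (bytes < KB) return String.format("(%d bytes) %d", bytes, bytes);
        if (bytes < MB) return String.format("(%.2f KB) %d", (double) bytes / KB, bytes);
        if (bytes < GB) return String.format("(%.2f MB) %d", (double) bytes / MB, bytes);
        return String.format("(%.2f GB) %d", (double) bytes / GB, bytes);
    }

    public static void main(String[] args) {
        System.out.println(toHumanReadableSize(106496)); // (104.00 KB) 106496
    }
}
```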
When the static route service is not available on the VPC and a static route is created, the static route is created in a revoked state.
Currently, the UI doesn't distinguish between active or revoked static routes.
This PR adds the missing state filter to the list routes command and only lists active routes in the UI.
It also ignores revoked routes when the private gateway is being removed but clears out the inactive routes before the gateway is removed.
Fixes #2908
This fixes an NPE caused by a merge-conflict fix from the forward-merge commit 562a7db8df, and fixes a Travis test regression:
=== TestName: test_01_reset_vm_on_reboot | Status : SUCCESS ===
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes #4056, #4107, #4113, #4133
Fixes deployment, template and network deletion.
Also allows filtering in listKubernetesSupportedVersions with a keyword
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR adds support for the OOBM Redfish protocol, implementing a Java client to send HTTP requests to Redfish supported systems.
Implementation overview:
- Redfish Java client: a Java Client for Redfish that makes Redfish actions available to the HA workflow via an OOB driver.
- OOB Redfish driver: a new out-of-band driver was created for Redfish, integrating the Redfish client with the CloudStack out-of-band management implementation (a minimal client sketch follows below).
Fixes: #3624
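A minimal sketch of a Redfish power action: the endpoint and ResetType body are from the standard DMTF Redfish spec, while the host, system id and authentication handling are illustrative, not the driver's actual code.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RedfishPowerOff {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Standard DMTF action on a computer system resource.
        String body = "{\"ResetType\": \"ForceOff\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://bmc.example.com/redfish/v1/Systems/1/Actions/ComputerSystem.Reset"))
                .header("Content-Type", "application/json")
                // Real clients authenticate via HTTP Basic or an X-Auth-Token session;
                // BMCs often use self-signed certs, so certificate handling is elided here.
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
    }
}
```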
Adding the following fixes so primate can work without issues:
- Adding pagination for listNetworkAclLists
- Adding pagination for listRoles
- Returning mshost uuid rather than msid in list hosts response
- Allowing listVirtualMachinesMetrics to respect hostid
- Fixing return all details in template response
When a host connects to a management server, the host IP address and the certificate are stored in memory on the management server. This mapping is checked periodically to determine if any certificates are due to expire.
Before a certificate is renewed, a few checks are done to determine whether the host is connected to the management server by fetching the host record from the database. The problem is that if the wrong record is fetched, the host is not checked for renewal.
This PR improves the host record fetch from the database by looking only at hosts that are not removed.
Fixes: #4129
- Create a role from any of the existing role, using new parameter roleid in createRole API
- Import a role with its rules, using a new importRole API
- New default roles for Read-Only and Support Admin & User
- No modifications allowed for Default roles
- Cleaned up old NetApp APIs from role_permissions table.
While migrating a vm, the popup lists hosts dedicated to other accounts/domains as 'Suitable' for migration, which is obviously wrong.
The same issue happens with the findHostsForMigration API.
* Enable unmanaging guest VMs
* Minor fixes
* Fix stop usage event only if VM is not stopped when unmanaging
* Rename unmanaged VMs manager
* Generate netofferingremove usage event if VM is not stopped
* Generate usage event VM snapshot primary off when unmanaging
This PR fixes an issue where an instance fails to deploy due to a null pointer when using an L2 Guest Network with DefaultL2NetworkOfferingConfigDrive on Xenserver. It also fixes migrating an instance to another host.
This has been tested by:
- Creating an L2 Guest network, using DefaultL2NetworkOfferingConfigDrive as the network offering.
- Deploying an instance using the L2 Guest network created.
- Migrating the instance away from the host and back
Repro Steps:
1. Create a VM on host1
2. Make host1 capacity full by deploying multiple VMs
3. Try Dynamic scaling on VM on host1
4. NPE occurs when MS tries to find host to migrate the VM and then scale.
Root cause: the VM profile is not initialized properly with the service offering before deployment planning.
Solution: initialize the VM profile with the service offering and also make sure custom compute parameters are handled.
Currently CloudStack uses multiple logging frameworks (log4j, java.util.logging) and logging wrappers (slf4j, Apache Commons Logging).
These changes make logging uniform, using only the log4j framework.
Removed java.util.logging, slf4j and Apache Commons Logging.
The BackupSync task would switch between databases to update backup usage
metrics in the cloud_usage.usage_backup table. The current framework
and the usage in ManagedContext cause database connection
(LegacyTransaction) leaks. When the thread runs faster, the issue is
easily reproducible and can be verified via heap dump analysis or JMX
MBeans. This is fixed by moving the task of updating backup usage
data to the usage server, publishing usage events instead of
switching between databases in a local thread while in a
ManagedContextRunnable.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Root cause:
Even though the dynamic scaling job is handled in the vmworkjob queue, which serializes multiple jobs, the database updates and usage event generation happen outside the job queue.
Solution:
Moved all updates into the job queue.
First, I have tested all the scenarios to check that nothing is broken:
Scaling on a running VM with normal compute offering
Scaling on a stopped VM with normal compute offering
Scaling on a running VM with custom compute offering
Scaling on stopped VM with custom compute offering
Scaling on stopped/running VM between custom compute offering and normal compute offering and combinations among these. Checked if the custom parameters have been populated or deleted accordingly based on the offering to which the VM is scaled
Since this is a corner scenario, I could not test the exact point where two usage events are recorded at the same time for two different API calls on the same VM.
Fixes #4090
When trying to migrate a VM across 2 clusters, if a snapshot has been deleted and garbage collection has run to update the removed field, it is not possible to migrate the instance due to a null pointer.
When expunging a Running vm, the vm is stopped with forcestop=false, which does not make sense. We should honor the global setting vm.destroy.forcestop, or always set forcestop=true.
This upgrades the systemvmtemplate base to Debian 10 with openjdk-11 and a newer strongswan package.
Fixes#3654
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Update AncientDataMotionStrategy.java
fix: when secondary storage usage is > 90%, volume migration across primary storage causes the migration to fail and the volume to be lost
* Update AncientDataMotionStrategy.java
A volume is migrated across primary storage. If no secondary storage is available (or its used capacity is > 90%), the migration is canceled.
Before this modification, if secondary storage could not be found, copyVolumeBetweenPools returned null.
copyAsync considers answer = null a sign of successful task execution, so it deleted the volume on the old primary storage. This is the root cause of the data loss, because the volume was never migrated at all.
* code in comment removed
Co-authored-by: div8cn <35140268+div8cn@users.noreply.github.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
* 4.13:
Snapshot deletion issues (#3969)
server: Cannot list affinity group if there are hosts dedicated… (#4025)
server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002)
* Fixes snapshot deletion
* Remove legacy '@Component', it is not necessary in this bean/class.
* Fix log message missing %d and remove snapshot on DB
* Remove "dummy" boolean return statement
* Manage snapshot deletion for KVM + NFS (primary storage)
* checkstyle trailing spaces
* rename options strings to *_OPTION
* Fix typo on deleteSnapshotOnSecondaryStorage and enhance log message
* Move the snapshotDao.remove(snapshotId); (#4006)
* Fix deletesnapshot workflow to handle both snapshots created on primary storage and snapshots backed up to secondary storage
* Fix extra space
* refactor out separate handling methods for secondary and primary (reducing returns)
* return false on unexpected error or log when expected
* != instead of ==
* secondary instead of backup storage
* init to null
* Handle snapshot deletion on primary storage. When primary store ref not found for snapshot do not fail the operation.
* Fix debug levels on log messages
Co-authored-by: GabrielBrascher <gabriel@apache.org>
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Remove constraint for NFS storage
* Add new property on agent.properties
* Add free disk space on the host prior template download
* Add unit tests for the free space check
* Fix free space check - retrieve available size in bytes
* Update default location for direct download
* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope
* Verify location for temporary download exists before checking free space
* In progress - refactor and extension
* Refactor and fix
* Last fixes and marvin tests
* Remove unused test file
* Improve logging
* Change default path for direct download
* Fix upload certificate
* Fix ISO failure after retry
* Fix metalink filename mismatch error
* Fix iso direct download
* Fix for direct download ISOs on local storage and shared mount point
* Last fix iso
* Fix VM migration with ISO
* Refactor volume migration to remove secondary storage intermediate
* Fix simulator issue
As previously described by PR #3929:
If vm has attached ISO, the migration fails with error message "org.libvirt.LibvirtException: Cannot access storage file /mnt/b33e5a1d-e4ea-3465-b6ac-c98dc8ff8af0/207-2-cc5fd717-2d57-3bb3-bcf6-2c930268db6c.iso"
By default, once we create a security group, we can't change its name.
This feature introduces a new API command "updateSecurityGroup"
which allows us to rename a security group, although the name of
the "default" security group cannot be changed.
This adds support for JDK11 in CloudStack 4.14+:
- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Enable PVLAN support on L2 networks
* Fix prevent null pointer on details
* Add marvin tests
* Fixes from comments
* Fix: missing pvlan type on PlugNicCommand
* Fix checks on network creation for vlans overlap
* Fix remove prefix from secondary vlan id
* Improve checks on physical network for pvlans
* Fix compatibility with previous pvlan creation
* Fix shared networks backwards pvlan compatibility
* Add ui fix for pvlan type not passed to api
* Add check for isolated vlan id overlap
* Include check for dynamic vlan reserved for secondary vlan
* Fix marvin tests errors
* Fix redundant imports
* Skip marvin test for pvlan if dvswitch is not present
* spelling
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
* server: fix resource count of primary storage if some volumes are Expunged but not removed
Steps to reproduce the issue
(1) create a vm and stop it. check resource count of primary storage
(2) download volume. resource count of primary storage is not changed.
(3) expunge the vm; the volume will be in Expunged state as there is a volume snapshot on secondary storage. The resource count of primary storage decreases.
(4) update resource count of the account (or domain), the resource count of primary storage is reset to the value in step (2).
* New feature: Add support to destroy/recover volumes
* Add integration test for volume destroy/recover
* marvin: check resource count of more types
* messages translate to JP
* Update messages for CN
* translate message for NL
* fix two issues per Daan's comments
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
The VM ingestion feature allows CloudStack to discover, on-board and import existing VMs in an infrastructure. The feature currently works only for VMware, with a hypervisor-agnostic framework which may be extended to KVM and XenServer in the future.
Add a global setting to disable creating networks with the same name in an account.
Add a global setting to disable creating a network without
specifying the start and end IPv4 or IPv6 address.
By default we can create networks with the same name in an account,
but sometimes networks with the same name should not be created.
This change adds a global setting which controls this: the default
value is true, and setting it to false prevents creating networks with the same name.
It is also possible to create a shared network without specifying the
start and end IPv4 or IPv6 address.
This change adds a global setting which prevents creating a shared
network without specifying the start and end IPv4 or IPv6 address.
If the disk size of the vm to be created is greater
than the volume size, the exception message should
display the numeric value instead of the variable name.
Fixes #3191
When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file can be deleted after template installation if it is not an actual (.qcow2, .ova, etc.) file. When the user copies a template using the copyTemplate API, the actual template file is copied across the image stores. Matching the checksum of the copied template file against the stored value from the vm_template table then results in a mismatch.
The changes set an empty checksum value for the copied template when passing it to the download service, which skips the wrong-checksum check for the copy during install.
However, this results in a change of the checksum value for the concerned template entry in the vm_template table after the template installs.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* marvin: check resource count of more types
* New feature: add flag resource.count.running.vms.only to count resource consumption of only running vms
Stopped VMs do not actually use CPU/RAM.
A new global configuration resource.count.running.vms.only is added to determine whether resource (cpu/memory) of only running vms (including Starting/Stopping) will be taken into calculation of resource consumption.
* Add integration test for resource count of only running vms
Stop asking the user (in the upgrade documentation) to remove a trailing slash for the local KVM pool - do it here in the upgrade path - so it is not needed in the docs for the upgrade to 4.14 and onwards.
When starting a vm or migrating a vm (away from a host in maintenance), cloudstack checks the capacity of all hosts and chooses one. If there are hundreds of hosts on the platform, this takes several seconds. By the time cloudstack has chosen a host and starts/migrates the vm to it, the resource consumption of that host may have changed. This normally happens when we start/migrate multiple vms.
It would be better to double-check the host capacity when starting a vm on a host.
This PR includes the fix for cpucore capacity when starting/migrating a vm.
When we calculate the resource consumption of a host, we need to take vms in the following states into account: Running, Starting, Stopping, Migrating (to the host), and vms Migrating from the host. This is because, when stopping a vm, the resources on the host are only released once the vm is stopped; when migrating a vm, the resources on the destination host are increased before the migration starts, and the resources on the source host are decreased only after the migration succeeds.
In cloudstack, there is a task named CapacityChecker which runs every 5 minutes (capacity.check.period = 300000 ms by default). It recalculates the capacity of all hosts. However, it takes only vms in Running and Starting states into consideration. We have faced some issues in host maintenance due to this.
Steps to reproduce the issue:
(1) migrate N vms from host A to host B; cpu/ram usage on B increases before the migration completes.
(2) the capacity check recalculates the capacity of hosts; the used capacity of host B is reset to its original value (not including the vms still Migrating).
(3) migrate some more vms from other hosts to host B; the migrations are allowed by cloudstack (because the used capacity is incorrect). If the actual used memory exceeds the physical memory on the host, there might be some critical issues (for example, libvirt dies).
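A minimal sketch of the accounting rule described above; State and the fields are simplified stand-ins, not CloudStack's actual classes.

```java
import java.util.EnumSet;
import java.util.Set;

public class CapacityRules {
    enum State { RUNNING, STARTING, STOPPING, MIGRATING, STOPPED }

    private static final Set<State> COUNTED =
            EnumSet.of(State.RUNNING, State.STARTING, State.STOPPING, State.MIGRATING);

    /**
     * A VM consumes capacity on a host while Running/Starting/Stopping there, and a
     * Migrating VM consumes capacity on BOTH hosts: the destination reserves before
     * the migration starts, and the source only releases after it succeeds.
     */
    static boolean countsTowards(State state, Long hostId, Long lastHostId, long candidateHostId) {
        if (state == State.MIGRATING) {
            return (hostId != null && hostId == candidateHostId)           // migrating to this host
                || (lastHostId != null && lastHostId == candidateHostId);  // migrating away from it
        }
        return COUNTED.contains(state) && hostId != null && hostId == candidateHostId;
    }
}
```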