Doc PR: https://github.com/apache/cloudstack-documentation/pull/461
This PR fixes https://github.com/apache/cloudstack/issues/8638
== Description
Four new resource types have been added. Admins can configure the corresponding resource limits for tenants at different levels (domain, account, project).
The user dashboard's Storage section shows the new resources, their limits and current usage.
1. backup - Number of backups used by the account
2. backup_storage - Backup storage allocated for the account
3. bucket - Number of buckets used by the account
4. object_storage - Object storage allocated for the account
Other related changes to the Backup and Recovery (BnR) framework:
1. The maximum number of backups to retain can be specified while creating backup schedules, similar to scheduled snapshots.
2. The oldest scheduled backup of the same interval type is deleted once the number of backups reaches the configured maximum.
3. Code refactor: moved the syncBackups method from BackupProvider to the framework's BackupManagerImpl, as it is common functionality and all providers were using duplicated code.
Changes to the Object Storage framework:
1. The quota parameter is now mandatory when creating a bucket. The bucket quota is treated as the allocated space and is used to enforce resource limits.
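For illustration only, a minimal sketch of how the framework could check and record usage against the new backup limits, assuming the long-standing `ResourceLimitService` contract (`checkResourceLimit`/`incrementResourceCount`); the enum constants mirror the resource type names above, and `owner`, `backupSizeInBytes` and the surrounding call site are placeholders, not the exact code path of this PR:
```java
// Illustrative only: assumes the existing ResourceLimitService / Resource.ResourceType contract.
// Fail fast if the account (or its domain/project) would exceed the new backup limits.
resourceLimitService.checkResourceLimit(owner, Resource.ResourceType.backup);
resourceLimitService.checkResourceLimit(owner, Resource.ResourceType.backup_storage, backupSizeInBytes);

// ... take the backup via the provider ...

// Record the new usage so the dashboard and subsequent limit checks reflect it.
resourceLimitService.incrementResourceCount(owner.getAccountId(), Resource.ResourceType.backup);
resourceLimitService.incrementResourceCount(owner.getAccountId(), Resource.ResourceType.backup_storage, backupSizeInBytes);
```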
== Schema Changes
1. New column `max_backups` added to the `backup_schedule` table
2. New column `backup_interval_type` added to the `backups` table
== API Changes
1. createBackup: new parameter `scheduleid`. It should be specified whenever a scheduled backup is created; it translates to the `backup_interval_type` in the `backups` table.
2. createBackupSchedule: new parameter `max_backups`, to specify the maximum number of backups to retain for the given schedule.
== Configurations
|Setting |Scope |Default Value |Description|
|-------|--------|--------------|-----------|
|backup.max.hourly |Global |8 |Maximum recurring hourly backups to be retained for an instance|
|backup.max.daily |Global |8 |Maximum recurring daily backups to be retained for an instance|
|backup.max.weekly |Global |8 |Maximum recurring weekly backups to be retained for an instance|
|backup.max.monthly |Global |8 |Maximum recurring monthly backups to be retained for an instance|
|max.account.backups| Global| 20 | The default maximum number of backups that can be created for an account|
|max.account.backup.storage| Global| 400 | The default maximum backup storage space (in GiB) that can be used for an account|
|max.domain.backups| Global| 40 | The default maximum number of backups that can be created for a domain|
|max.domain.backup.storage| Global| 800 | The default maximum backup storage space (in GiB) that can be used for a domain|
|max.project.backups| Global| 20 | The default maximum number of backups that can be created for a project|
|max.project.backup.storage| Global| 400 | The default maximum backup storage space (in GiB) that can be used for a project|
|Setting |Scope |Default Value |Description|
|-------|--------|--------------|-----------|
|max.account.buckets| Global| 20 | The default maximum number of buckets that can be created for an account|
|max.account.object.storage| Global| 400 | The default maximum object storage space (in GiB) that can be used for an account|
|max.domain.buckets| Global| 40 | The default maximum number of buckets that can be created for a domain|
|max.domain.object.storage| Global| 800 | The default maximum object storage space (in GiB) that can be used for a domain|
|max.project.buckets| Global| 20 | The default maximum number of buckets that can be created for a project|
|max.project.object.storage| Global| 400 | The default maximum object storage space (in GiB) that can be used for a project|
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Lucas Martins <56271185+lucas-a-martins@users.noreply.github.com>
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Show Usage Server configuration in a separate pane
* UI: Option to attach volume to an instance during create volume
* Show service ip in management server details tab
* change Schedule Snapshots to Recurring Snapshots
* Change the hypervisor order so that kvm, vmware, xenserver show up first
* Remove extra space in hypervisor names in config.java
* Fix `updateTemplatePermission` when the UI is set to a language other than English (#9766)
* Fix updateTemplatePermission UI in non-english language
* Improve fix
---------
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
* Autofill vcenter details in add cluster form
* UI: condition to display create vm-vol-snapshots to same as create vol-snapshots
* Fix alignment on wrapping in global settings tabs
* rename Autofill vCenter credentials to Autofill vCenter credentials from Zone
* Rename Service Ip to Ip Address in management server response
* Change description of kvm.snapshot.enabled to say that it applies to volume snapshots
* Return error when kvm vm snapshot is taken without snapshot memory
* Minor naming changes and grammar
* Fix tooltip for attach volume to instance button
* moved db changes from 41900to42000 to 42000to42010
* Update group_id in already present usage configuration settings
* remove "schedule" from message in create Recurring Snapshots form
* Update server/src/main/java/com/cloud/vm/snapshot/VMSnapshotManagerImpl.java
---------
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Lucas Martins <56271185+lucas-a-martins@users.noreply.github.com>
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
* api,agent,server,engine-schema: scalability improvements
The following changes and improvements have been added:
- Improvements in handling of PingRoutingCommand
1. Added global config - `vm.sync.power.state.transitioning`, default value: true, to control syncing of power states for transitioning VMs. This can be set to false to prevent computation of transitioning state VMs.
2. Improved VirtualMachinePowerStateSync to allow power state sync for host VMs in a batch
3. Optimized scanning stalled VMs
- Added option to set worker threads for capacity calculation using config - `capacity.calculate.workers`
- Added caching framework based on Caffeine in-memory caching library, https://github.com/ben-manes/caffeine
- Added caching for account/user role API access, with an expire-after-write period that can be configured using config - `dynamic.apichecker.cache.period`. If set to zero then there will be no caching. Default is 0.
- Added caching for account/user role API access with expiration after write set to 60 seconds.
- Added caching for some recurring DB retrievals
1. CapacityManager - listing service offerings - beneficial in host capacity calculation
2. LibvirtServerDiscoverer - existing host for the cluster - beneficial for host joins
3. DownloadListener - hypervisors for zone - beneficial for host joins
4. VirtualMachineManagerImpl - VMs in progress - beneficial for processing stalled VMs during PingRoutingCommands
- Optimized MS list retrieval for agent connect
- Optimize finding ready systemvm template for zone
- Database retrieval optimisations - fix and refactor for cases where only IDs or counts are used mainly for hosts and other infra entities. Also similar cases for VMs and other entities related to host concerning background tasks
- Changes in agent-agentmanager connection with NIO client-server classes
1. Optimized the use of the executor service
2. Refactored the Agent class to better handle connections
3. Do SSL handshakes within worker threads
4. Added global configs to control the behaviour depending on the infra, as the SSL handshake could be a bottleneck during agent connections. Configs `agent.ssl.handshake.min.workers` and `agent.ssl.handshake.max.workers` can be used to control the number of new connections the management server handles at a time. `agent.ssl.handshake.timeout` can be used to set the number of seconds after which the SSL handshake times out at the MS end (see the sketch after this list).
5. On the agent side, backoff and SSL handshake timeout can be controlled by the agent properties `backoff.seconds` and `ssl.handshake.timeout`.
- Improvements in StatsCollection - minimize DB retrievals.
- Improvements in DeploymentPlanner to retrieve only the desired host fields and reduce the number of retrievals.
- Improvements in hosts connection for a storage pool. Added config - `storage.pool.host.connect.workers` to control the number of worker threads that can be used to connect hosts to a storage pool. Worker thread approach is followed currently only for NFS and ScaleIO pools.
- Minor improvements in resource limit calculations wrt DB retrievals
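Not the actual implementation, just a minimal sketch of the bounded-pool pattern the SSL handshake settings map to; `sslHandshakeMinWorkers`, `sslHandshakeMaxWorkers`, `sslHandshakeTimeoutSeconds` and `doHandshake(...)` are placeholders for the configured values and the real NIO/SSLEngine handshake:
```java
import java.util.concurrent.*;

// Bounded pool: core size = min workers, max size = max workers. With a SynchronousQueue,
// extra handshakes spawn threads up to the max; beyond that the caller runs them itself,
// which throttles how many new agent connections are accepted at a time.
ExecutorService sslHandshakeExecutor = new ThreadPoolExecutor(
        sslHandshakeMinWorkers, sslHandshakeMaxWorkers,
        60, TimeUnit.SECONDS,
        new SynchronousQueue<>(),
        new ThreadPoolExecutor.CallerRunsPolicy());

Future<Void> handshake = sslHandshakeExecutor.submit(() -> {
    doHandshake(socketChannel, sslEngine); // placeholder for the actual SSL handshake
    return null;
});
try {
    handshake.get(sslHandshakeTimeoutSeconds, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    handshake.cancel(true); // give up on slow clients instead of blocking other connections
} catch (InterruptedException | ExecutionException e) {
    handshake.cancel(true);
}
```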
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* test1, domaindetails, capacitymanager fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* test2 - agent tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* capacitymanagertest fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert marvin/setup.py
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix indent
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* use space in sql
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address duplicate
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* update host logs
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert e36c6a5d07
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix npe in capacity calculation
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* move schema changes to 4.20.1 upgrade
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix build
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add some more tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* checkstyle fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove unnecessary mocks
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* replace statics
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* engine/orchestration,utils: limit number of concurrent new agent connections
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor - remove unused
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unregister closed connections, monitor & cleanup
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add check for outdated vm filter in power sync
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* agent: synchronize sendRequest wait
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Support for Management Server Maintenance
- New APIs: prepareForMaintenance and cancelMaintenance, with required parameter - managementserverid.
- New management server states for maintenance: PreparingForMaintenance, Maintenance.
- listHosts API with optional parameter - managementserverid, to list the hosts connected to the management server.
- Support management server maintenance when more than one active management server is available.
- Triggers transfer of agents to other available management servers during maintenance; new agent command MigrateAgentConnectionCommand initiates the transfer of indirect agents.
- New global config 'management.server.maintenance.timeout', to set the timeout (in minutes) for the management server maintenance window, default: 60 minutes.
- UI changes: Prepare and Cancel Maintenance in Management Server section, Connected Agents tab, New fields for hosts and management servers.
* Updated pending jobs check timer task with ScheduledExecutorService
* keep maintenance state on trigger shutdown call when ms is in maintenance
* add pending jobs count to ms response
* during ms heartbeat, update state to up only when it's down
* allow vm work jobs of async job created before prepare for maintenance
* Revert "keep maintenance state on trigger shutdown call when ms is in maintenance"
This reverts commit 607e13364679eac897f4d146bb3325ea7a61ba17.
* skip maintenance test when multiple management servers are not available, and not configured in host setting for kvm
For HA work items that are created for host state change, checks must be
done when execution is called in a new management server session.
A new column, reason, has been added to the cloud.op_ha_work table to track
the reason for HA work.
When HighAvailabilityManager starts, it finds all pending HA work items and
puts them in the Investigating state. During execution of the HA work, if it
is found in the Investigating state, checks are done to verify whether the work
is still valid. If the work is found to be invalid, it is cancelled.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* 4.20:
Maintenance mode: Add host to deployment planner avoid list to fix local storage vm migration (#9892)
Add project-user association normalization script to 4.20.1 upgrade (#10116)
fix slider component for global settings of the range type (#10187)
Clean up network permissions on account deletion (#10176)
* 4.20:
merge errors fixed
Restrict the migration of volumes attached to VMs in Starting state (#9725)
server, plugin: enhance storage stats for IOPS (#10034)
Introducing granular command timeouts global setting (#9659)
Improve logging to include more identifiable information (#9873)
Adds a framework-layer change to allow retrieving and storing IOPS stats for storage pools. A custom `PrimaryStoreDriver` can implement the method `getStorageIopsStats` to return IOPS stats. The existing method `getUsedIops` can also be overridden by such plugins when only used IOPS is returned.
For testing purposes, an implementation has been added for the simulator hypervisor plugin to return capacity and used IOPS for a pool.
For local storage pools, an implementation has been added using iostat to return currently used IOPS.
The StoragePoolResponse class has been updated to return IOPS values, which allows showing IOPS values in the UI for different storage pool related views and APIs.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Improve logging to include more identifiable information for kvm plugin
* Update logging for scaleio plugin
* Improve logging to include more identifiable information for default volume storage plugin
* Improve logging to include more identifiable information for agent managers
* Improve logging to include more identifiable information for Listeners
* Replace ids with objects or uuids
* Improve logging to include more identifiable information for engine
* Improve logging to include more identifiable information for server
* Fixups in engine
* Improve logging to include more identifiable information for plugins
* Improve logging to include more identifiable information for Cmd classes
* Fix toString method for StorageFilterTO.java
* 4.20:
UI: Fix userdata and load balancer selection (#10016)
Prevent password updates for SAML and LDAP users (#9999)
cloudstack-migrate-databases: sql AND added (#10033)
engine/schema: move SQLs to 4.20.0 to 4.20.1 upgrade (#10018)
Remove user from project before deletion (#10008)
Simplify validation for creating volume templates via UI (#9828)
* 4.20:
UI: Tooltip on the host information card to display the CPU speed in MHz and the memory value in MB (to 3 decimal places) (#9971)
UI: Allow accounts of the `User` type to add other accounts or users to projects through UI (#9927)
enable creating VPC port forwarding rules with source cidr (#7081)
Add new column `last_id` to the table volumes (#9759)
Allow VMWare import via another host (#9787)
Linstor: add support for ISO block devices and direct download (#9792)
get expunged VM data for job result (#9949)
fix section divider display on auth page (#9966)
* cli changes to update user/account, list by apikeyaccess, domain level setting
* UI changes for updating user/account and searchfilter in listview
* make the api parameters and setting accessible only to root admin
* revert changes to ui/package-lock.json
* minor changes to description strings
* UT for ApiServer and AccountManagerImpl classes
* fix pre-commit failure
* Added a constant for the string System
* UT for searchForUsers and searchForAccounts
* Fix marvin test error
* Update schema to use idempotent add column
* Added user name uuid to logging
* Add events when api key access is changed via api or config setting
* fix the userid for api key access update event
* Fix ut failure after event logging
* Convert drop down to radio-button in edit user and account
* Add ApiKeyAccess status in User InfoCard for Users if Api key is generated
* Return apiKeyAccess in user and account response only for Root Admin
* fixed noredist build failure
* Show apikeyaccess on the left panel in the user view for root admins as well
* don't show divider if apiKeyAccess is not shown to user
* Fix events generated to set Username, Account and Domain of the caller correctly
* Added DB upgrade path from 42000 to 42010
---------
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Lucas Martins <56271185+lucas-a-martins@users.noreply.github.com>
Co-authored-by: Lucas Martins <lucas.martins@scclouds.com.br>
* Improvement: management server peer states
* Update pr9885: consider new mgmt server node which has msId=managementServerNodeId
* Update pr9885: update global config description
* Update pr9885: update label on UI
* framework: Do not update mshost_peer when mgmt server is Up as it will be updated by status update
* mgmt: Update state to Up when mgmt server writes heartbeat to db
* mgmt: change Service IP to Management IP
---------
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
* Prevent addition of duplicate PF rules on scale up and no rules left behind on scale down (#32)
* fix missing dependency injection
* NSX: Fix concurrency issues on port forwarding rules deletion (#37)
* Fix concurrency issues on port forwarding rules deletion
* Refactor objectExists
* Fix unit test
* Fix test
* Small fixes
* CKS: Externalize control and worker node setup wait time and installation attempts (#38)
* NSX: Add shared network support (#41)
* NSX: Fix number of physical networks for Guest traffic checks and leftover rules on CKS cluster deletion (#45)
* Fix pf rules removal on CKS cluster deletion
* Fix check for number of physical networks for guest traffic
* Fix unit test
* fix logger
* NSX: Handle CheckHealthCommand to avoid host disconnection and errors on APIs
* NSX: Handle CheckHealthCommand to avoid host disconnection and errors on APIs
* Remove unused string
* fix logger
* Update UDP active monitor to ICMP
* Fix NPE on restarting VPC with additional public IPs
* NSX / VPC: Reuse Source NAT IP from systemVM range on restarts
* CKS: Public IP not found for VPC networks
* Externalize retries and interval for NSX segment deletion (#67)
* remove unused import
* remove duplicate imports
* remove unused import
* revert externalizing cks settings
* fix test
* Refactor log messages
* Address comments
* Fix issue caused due to forward merge: 90fe1d
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces the multi-arch zones, allowing users to select the VM arch upon deployment.
Multi-arch zone support in CloudStack can allow admins to mix x86_64 & arm64 hosts within the same zone with the following changes proposed:
- All hosts in a cluster need to be homogeneous with regard to host CPU type (amd64 vs arm64) and hypervisor
- Arch-aware templates & ISOs:
- Add support for a new arch field (default set of: amd64 and arm64); when unspecified it defaults to amd64, including for existing templates & ISOs
- Allow admins to edit the arch type of a registered template & ISO
- Arch-aware clusters and hosts:
- Add a new attribute field for clusters and hosts (KVM host agents can report this automatically; the arch of the first host of the cluster is the cluster's architecture), defaulting to amd64 when not specified
- Allow admins to edit the arch of an existing cluster
- VM deployment form (UI):
- In a multi-arch zone/env, the VM deployment form can allow template/ISO filtering in the UI
- Users should be able to select arch: amd64 & arm64; but this is shown only in a multi-arch zone (env)
- VM orchestration and lifecycle operations:
- Use the VM/template's arch to correctly decide where to provision the VM (on the correct, strictly arch-matching hosts/clusters) & other lifecycle operations (such as migration from/to arch-matching hosts)
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This handles new systemvmtemplate renames which now include the arch detail
in the file name and URL. By default, it continues to use x86_64 for
automatic systemvmtemplate seeding and upgrades only.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR contains 3 features
- IPv4 Static Routing (Routed mode) #9346
Design document: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=306153967
- AS Numbers Management #9410
Design Document: https://cwiki.apache.org/confluence/display/CLOUDSTACK/BGP+AS+Numbers+Management
- Dynamic routing
Design Document: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=315492858
- Document: https://github.com/apache/cloudstack-documentation/pull/419
Rename nsx mode to routing mode by:
```
git grep -l nsx_mode |xargs sed -i "s/nsx_mode/routing_mode/g"
git grep -l nsxmode |xargs sed -i "s/nsxmode/routingmode/g"
git grep -l nsxMode |xargs sed -i "s/nsxMode/routingMode/g"
git grep -l NsxMode |xargs sed -i "s/NsxMode/RoutingMode/g"
```
- re-organize sql changes
- fix NPE as rules do not have public ip
- fix missing destination cidr in ingress rules
- disable network usage for routed network
- fix DB exception as network_id is -1 during network creation
- apply ingress/egress routing rules
- VR changes to configure nft rules for isolated network
- VR: setup nft rule for control network
- VR: flush all iptables rules
- fix NPE which is because ingress rules do not have public ip associated
- fix dest cidr is missing in nft tables
- add ip4 routing and ip4 routes to list network and list vpc response
- fix ingress rule is missing when vr is restarted
- fix icmp types in nft rules
- add tab to manage routing firewall rules
- fix ingress rules are not applied when VR is restarted
- add default rules in FORWARD chain
- fix create vpc offerings
- fix public ip is not assigned to vpc
- fix network offering is not listed when create vpc tier
- add is_routing to boot args of vpc vr
- remove table ip4_firewall in vpc vr
- release or remove subnet when remove a network
- implement fw_vpcrouter_routing
- fix wrong ip family when flushing ipv4 rules
- fix acl rules are not applied due to wrong version (should be 6 which means ip6 rules are removed)
- add default rules for vpc tiers so that tcp connections (e.g. ssh) work
- append policy rules after default rules
- remove /usr/local/cloud/systemvm/ in routers
- throw an exception when allocate subnet with cidrsize
- fix some TODOs
- add new parameters to update API
- return type Ipv4GuestSubnetNetworkMap when get or create subnet
- fix firewall rules are broken
- add domain_id and account_id to db
- add domain/account/project to ipv4 subnet response
- create ipv4 subnet for domain/account/project
- check conflict when update ipv4 subnet
- ui changes
- add parent subnet to response
- add list for ipv4 subnet
- implement some methods
- fix list subnets for guest networks by zoneid
- UI changes
- fix delete ipv4 subnet for network
- fix ipv4 subnet is set to zone guest network cidr if cidrsize is specified
- add zone info to response if parent subnet is null but network is not
- fix gateway/cidr is not set when create network with cidrsize
- fix order of nft rules in the VRs
* Routed v24
- add classes in marvin base.py
* Routed v25
- add test_01_subnet_zone
- fix dedicate to domain/account failure
- list subnets for network by keyword and subnet
* Routed v26: implement subnet auto-allocation
- add utils for split ip ranges into small subnets
- add utils to get start/end ip of a cidr
- implement subnet auto-generation
- add global settings
* Routed 27: add subnet for VPC
- add db column for vpc_id
- add db record for vpc
- remove db record when delete a vpc
- add checkConflicts methods
- remove duplicated settings
- check ipv4 cidr when create subnet
* Routed v28: update smoke tests
- update test_ipv4_routing.py
- search subnets by networkid
* Routed 29: fix vpc and add more tests
- fix createnetwork in vpc
- add vpc id/name to response
- fix zone id/name are not displayed in some cases
- add smoke test for vpc
- add smoke tests for failed cases
- add smoke test for connectivity checks
- marvin: add "-q" to ssh command
* Routed 31: ui and smoke tests
- UI: add link to network in list view
- add nftables rules check in VRs
* Routed 32: add chain OUTPUT and more rules
- fix the issue 80/443/8080 is not reachable from VR itself
```
2024-06-27 10:21:52,121 INFO Executing: systemctl start cloud-password-server@172.31.1.1
2024-06-27 10:21:52,128 INFO Service cloud-password-server@172.31.1.1 start
2024-06-27 10:21:52,129 INFO Executing: ps aux
2024-06-27 10:24:02,175 ERROR Failed to update password server due to: <urlopen error [Errno 110] Connection timed out>
```
* Routed: fix dns search from VMs in Isolated networks
* Routed: fix VPC dns issue due to gateway IP is missing in cloud.conf
This is caused by NSX integration, and fixed by
https://github.com/apache/cloudstack/pull/9102/
* Routed: rename routing_mode to network_mode
* Routed: replace centos5.5 template in smoke test as dhclient does not work in the vms
(ignoring UDP checksums with nftables does not work; refer to
https://dominikrys.com/posts/disable-udp-checksum-validation/#ignoring-udp-checksums-with-nftables
and
https://forum.openwrt.org/t/udp-checksum-with-nftables/161522/11
- the vm should have checksum offloading disabled)
* Routed: fix smoke test due to wrong cidrlist of egress rules and missing ingress rule from VR
* PR 9346: fix lint error schema-41910to42000.sql
* PR 9346: ui polish v1
* PR 9346: create VPC with cidrsize
* Routed: fix test failures with test_network_ipv6 and test_vpc_ipv6 due to 'ssh -q'
* Routed: fix /usr/local/cloud/systemvm/ are removed after SSVM/CPVM reboot
* Routed: fix IP of additional nics of VPC VR is not gateway
* PR 9346: fix cidrsize check when create VPC with cidrsize
* Routed: fix test/integration/smoke/test_ipv4_routing.py:279:16: E713 test for membership should be 'not in'
* PR9346: fix/Update api
* PR 9346: set response object name
* PR9346: UI refactor and small fixes
* PR9346: change return type of getNetworkMode
* PR9346: move IPv4 subnet to separate tab
* PR9346: revert IpRangesTabGuest.vue back to original
* PR9346: fix remove ipv4 subnet on UI
* PR9346: fix test_ipv4_routing.py
* AS Number Range Management
* Create AS Number Range for a Zone
* Fix build
* Add ListASNRange and fix create ASN range
* Add List AS numbers
* Add UI for AS Numbers
* Fix UI and filter AS Numbers
* Add AS Number on Isolated network creation and refactor UI and response
* Release AS Number
* Add network offering new columns
* Add UI support to view and add AS number and configure network offering
* Automatically assign AS number if not specified
* update variable name
* Fix routing mode check
* UI: Only allow selecting AS number when routing mode is Dynamic and specifyAsNumber is true
* UI: Only pass AS number when supported by the network offering
* Release AS number on network deletion
* Add deleteASNRange command (#81)
* API: List ASNumbers by asnumber (#83)
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* AS number management extensions
* Support AS number on VPC tier creation based on the offering
* Fix delete AS Range
* Fix UI values
* UI: Minor fix for releasing AS number
* UI: Move management of AS Range to Zone details view
* Fix specify_as_number column in network_offering table to set the default false
* Add events for AS number operations
* Allow users to list AS Numbers and fix network form for Normal users
* Add AS number details to list networks response
* Fix Allocated time format
* Fix Allocated time format
* support in details view too
* Fix: Do not release AS number if acquired network requires AS number
* Fix: Do not release AS number if acquired network requires AS number
* Fix typo
* Fix allocated release
* Fix event type
* UI: Add Routing mode and Specify AS to the network offering details
* UI: Add Routing mode and Specify AS to the network offering details
* Address comment
* Fix release AS number of network deletion
* Fix release AS number of network deletion
* Fix
* Restore release to its place based on the boolean
* Rename boolean
* API: Add networkId as listASNumber parameter
* Add Network name to the search view filter for AS numbers
* Present allocated time in human readable format - Public IP / AS Numbers
* Add account / domain filter for AS numbers
* Add support for AS numbers on VPC offerings
* Refactor AS number allocation to VPC and non VPC isolated networks
* Checkstyle
* Add support for AS numbers on VPC offerings
* extend vpc offering view and vpcoffering response
* merge https://github.com/shapeblue/cloudstack-playtika/pull/115 and change network_id of as_numbers to include vpc_id
* Display AS number of VPC tiers as the AS number of the VPC
* extend asnumber response and ui support
* improve UI and as number response to view VPC details
* List only dynamic offerings for vpc tiers with specify as numbers
* Fix release AS number
* Fix AS number displayed as 0 when no AS number assigned
* Fix VPC offering creation without specify AS
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix release AS number on VPC deletion
* Update server/src/main/java/com/cloud/dc/BGPServiceImpl.java
* Update server/src/main/java/com/cloud/dc/BGPServiceImpl.java
* Fix missing column on asnumber table
* Fix listASNumbers API to support vpcid and obtain AS number from vpc for tiers
* Prevent listing 0 AS number for VPC
* Fix create Isolated Network form
* Update server/src/main/java/com/cloud/network/vpc/VpcManagerImpl.java
* Update server/src/main/java/com/cloud/network/vpc/VpcManagerImpl.java
* Dynamic: move routingmode/specifyasn after networkmode in AddNetworkOffering.vue on UI
* Dynamic: fix ip4routing in network response
* Dynamic/systemvm: add FRR to systemvm template
* Dynamic: BGP peers (DB,VO,Dao)
* Dynamic: BGP peers (VR/server)
* Dynamic: v3
- remove BgpPeer class
- fix vpc vr has bgp peers of only 1 tier
- rename ip4_cidr to guest_ip4_cidr
- rename ip6_cidr to guest_ip6_cidr
- generate /etc/frr/frr.conf
- apply BGP peers on Dynamic-Routed network even if there are no BGP peers
* Dynamic v4: fix vpc vr
- fix duplicated guest cidr in frr.conf in vpc vr
todo
- restart frr / reload frr (reload will cause bgp session to Policy state)
- apis for bgp peers
- assign/release bgp peer from/to network
* Dynamic v5: add apis for bgp peers
* Dynamic v6: fix bugs
- set response object name
- remove required as number when update
- fix checks when update
- allow regular users to list bgp peers
* Dynamic v7: move apis to bgp sub-dir
* Dynamic v8: add tab for manage BGP peers on UI
* Dynamic v9: fix update bgp with same config
* Dynamic v10: add changeBgpPeersForNetworkCmd
* Dynamic v11: create network with bgppeerids
- create network with bgppeerids
- add marvin classes
- add smoke tests
- remove uuid from bgp_peer_network_map
- fix created/removed in bgp_peer_network_map
- remove bgppeers when remove a network
- UI: fix delete bgp peer
* Dynamic v12: add test for vpc tiers
* Dynamic v13: bug fixes
- fix change BGP peers for network in Allocated state
- fix listing network returns removed record
- fix all vpc tiers have the same settings
- remove BGP peers as part of network removal
- remove FRR settings for vpc tiers without any BGP peers
- UI: fix no error msg when change BGP peers
* Dynamic v14: assign BGP Peers for VPC instead of VPC tiers
- create vpc with bgppeerids
- do not allow create/update vpc tier with bgppeerids
- apply all bgp peers when create/delete a vpc tier
- UI: change bgp peers for vpc
- test: update tests on vpc
* Dynamic: fix build errors after merging as number PR
* Dynamic: fix TODOs
* Dynamic: fix smoke test on VPC
* Allow creation of networks by users with as numbers
* Address review comments
* Move BGPService to bgp package and inject it on BaseCmd
* Revert changes for CKS and address more comments
* Display left side menu option for AS number only for root admin
* Dynamic: create/update BGP peer with details
refer to https://docs.frrouting.org/en/latest/bgp.html
* Dynamic: fix build error and remove access to ListBgpPeers cmd for regular users
* Dynamic: assign all zone BGP peers to user networks
* Dynamic: show BGP peer info of networks only for root admin
* AS number: disable specifyasnumber for non-NSX offerings
* Dynamic: pass bgppeer details to command and fix typo with ip6 addr
* Dynamic: list BGP peers by isdedicated, and fix change bgppeers for network/vpc
* Dynamic: add UI labels
* Dynamic: add bgp peers to vpc response
* Dynamic: list bgp peers by keyword, fix list by asnumber
* Dynamic: fix list bgppeers by keyword and db schema
* Dynamic: fix list bgppeers do not return dedicated peers
* Dynamic: update UI when create network/vpc offering
* Update server/src/main/java/com/cloud/configuration/ConfigurationManagerImpl.java
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update tools/marvin/setup.py
* Dynamic: network mode must be same when update a network with new offering
* Dynamic: add method networkModel.isAnyServiceSupportedInNetwork
* Dynamic: rename APIs and classes
* Dynamic: fix unit tests due to previous changes
* Dynamic: validateNetworkCidrSize when auto-create subnet
* Dynamic: check AS number overlap
* Dynamic: add ActionEvent
* Dynamic: small code optimization
* Dynamic: fix ui bugs after api rename
* Dynamic: add marvin and test for ASN ranges and AS numbers
* Dynamic: add account setting use.system.bgp.peers
also
- change the default value of routed.ipv4.vpc.max.cidr.size and routed.ipv4.vpc.min.cidr.size
- change the category of settings
* static: fix ui error when delete zone ipv4 subnets
* static: small UI polish
* Dynamic: throw exception when as number is required but not passed
* Dynamic: fix typo when create FRR directory which causes network deletion failures
* Dynamic: connect to ALL (or ALL dedicated) BGP peers if no BGP peer mapping for the network/vpc
* Dynamic: throw exception when as number is required for VPC but not passed
* Dynamic: list bgp peers by useSystemBgpPeers
* Dynamic: fix frr config in VPC VR when change bgp peers
* Dynamic: create frr config even if there are no VPC tiers
* Dynamic: list bgp peers by zoneid (required for account) and account
* Dynamic: only apply FRR config for vpc tiers with dynamic routing
* Dynamic: do not send commands to router if commands size is 0
* Dynamic: fix 'new IPv6 address is not valid' when update bgp peer without IPv6
* Dynamic: throw exception if fail to allocate AS number when create network/vpc with dynamic routing
* Dynamic: enable ipv6 unicast and 'ip nht resolve-via-default'
* Dynamic: delete network/vpc if fail to allocate AS number when create network/vpc with dynamic routing
* test: add unit tests for ASN APIs
* test: add unit tests for core module
* test: add unit tests for API responses
* test: add unit tests for BgpPeerTO
* test: add minor changes
* test: add tests for create/delete/update/list RoutingFirewallRuleCmd
* Static: show ip4 routes for vpc tiers
* test: fix smoke test failure caused by type change of as number
* test: add test for Ipv4SubnetForZoneCmd
* test: add test for Ipv4SubnetForGuestNetworkCmd and BgpPeerCmd
* UI: do not show redundant router when network mode is ROUTED as RVR is not supported
* UI: hide 'Conserve mode' when networkmode is ROUTED
* test: add unit tests for ListASNumbersCmdTest
* Static: remove allocated IPv4 subnet when delete a network or vpc
* test: add unit tests for BgpPeersRules
* Dynamic: set ipv4routing from network offering
* server: list as numbers and ipv4 subnets by keyword
* server: remove dedicated bgp peers and ipv4 subnets when delete an account or domain
* server: fix dedicated ipv4 subnet is allocated to other accounts
* UI: fix allocated time format
* server: ignore project if projectid is -1 so bgppeers/ipv4subnets work in project view
* UI: add project column to bgp peers and ipv4 subnets
* server: fix list AS numbers by domain admin or normal user
* server: fix network creation when ipv4 subnet is dedicated
* UI: polish network.js
* Dynamic: fix frr config for ipv6 routing
* Static routing: support cks cluster
* Static: get/create IPv4 subnet from dedicated subnets at first
* Dynamic: add BGP peers tab
* Static: remove redundant loops
* api: add since to api and response
* server: add unit tests
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This is a simple NAS backup plugin for KVM which may later be expanded for other hypervisors. This backup plugin aims to use shared NAS storage on KVM hosts such as NFS (or CephFS and others in future), which is used to back up fully cloned VMs for backup & restore operations. This may NOT be as efficient and performant as some of the other B&R providers, but may be useful for some KVM environments that are okay with only full-instance backups and limited functionality.
Design & implementation follow the `networker` B&R plugin, which is simply:
- Implement B&R plugin interfaces
- Use cmd-answer pattern to execute backup and restore operations on KVM host when VM is running (or needs to be restored) - instead of a B&R API client, relies on answers from KVM agent which executes the operations
- Backups are full VM domain snapshots, copied to a VM-specific folders on a NAS target (NFS) along with a domain XML
- Backup uses libvirt feature: https://libvirt.org/kbase/live_full_disk_backup.html orchestrated via virsh/bash script (nasbackup.sh) as the libvirt-java lacks the bindings
- Supported instance volume storage for restore operations: NFS & local storage
Refer to the doc PR for feature limitations and usage details:
https://github.com/apache/cloudstack-documentation/pull/429
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
This feature adds support for Ceph's RADOS Gateway (RGW) support for the
Object Store feature of CloudStack.
The RGW of Ceph is Amazon S3 compliant and is therefore an easy and straightforward
implementation of basic S3 features.
Existing Ceph environments can have the RGW added as an additional feature to a
cluster already providing RBD (Block Device) to a CloudStack environment.
Introduce the BucketTO to pass to the drivers. This replaces just passing the bucket's name.
Some upcoming drivers require more information than just the bucket name to perform their actions,
for example they require the access and secret key which belong to the account of this bucket.
This is leftover code from a long time ago and this validation test has no influence
on the end result or how a URL will be used afterwards.
We should support hosts pointing to an IPv6(-only) address out of the box.
For the code it does not matter if it's IPv4 or IPv6. This is the admin's choice.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Per the docs, if the MySQL connector is JDBC4 compliant then it should use
the Connection.isValid API to test a connection.
(https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#isValid-int-)
This would significantly reduce query lag and improve API throughput, as otherwise
one or two SELECT 1 statements are performed for every SQL query, every time a
Connection is given to application logic.
This should only be accepted when the driver is JDBC4 compliant.
As per the docs, the connector-j can use /* ping */ before the
SELECT 1 to have lightweight application pings to the server:
https://dev.mysql.com/doc/connector-j/en/connector-j-usagenotes-j2ee-concepts-connection-pooling.html
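For illustration, given an existing java.sql.Connection `conn` borrowed from the pool, the JDBC4 validity check and Connector/J's lightweight ping marker look like this:
```java
import java.sql.SQLException;
import java.sql.Statement;

try (Statement stmt = conn.createStatement()) {
    // JDBC4: the driver itself decides whether the connection is still usable (2s timeout).
    boolean usable = conn.isValid(2);

    // MySQL Connector/J: a statement beginning exactly with "/* ping */" is sent as a
    // lightweight protocol ping instead of a full "SELECT 1" round trip.
    stmt.execute("/* ping */ SELECT 1");
} catch (SQLException e) {
    // connection is broken; evict it from the pool
}
```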
Replaces the dbcp2 connection pool library with the more performant HikariCP.
With this, unit tests are failing but the build is passing.
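A minimal HikariCP setup consistent with the above (values are illustrative): when `connectionTestQuery` is left unset, HikariCP validates connections with the driver's JDBC4 `Connection.isValid()` instead of issuing a test query.
```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/cloud"); // illustrative DB URL
config.setUsername("cloud");
config.setPassword("secret");
config.setMaximumPoolSize(250);
// No connectionTestQuery set on purpose: with a JDBC4-compliant driver HikariCP
// uses Connection.isValid() for validation, avoiding extra SELECT 1 statements.
HikariDataSource dataSource = new HikariDataSource(config);
```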
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohityadav89@gmail.com>
Added caching for ConfigKey value retrievals based on the Caffeine
in-memory caching library.
https://github.com/ben-manes/caffeine
Currently, the expire time for a cache entry is 30s. On any update or reset of a
configuration, its cache entry is automatically invalidated.
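For reference, the expire-after-write plus explicit invalidation pattern described above, using Caffeine directly; the key and the `loadValueFromDb` loader are illustrative, not the actual ConfigDepot wiring:
```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

Cache<String, String> configCache = Caffeine.newBuilder()
        .expireAfterWrite(30, TimeUnit.SECONDS) // entries go stale 30s after being written
        .maximumSize(10_000)
        .build();

// Read-through: hit the database only on a cache miss.
String value = configCache.get("vm.stats.interval", key -> loadValueFromDb(key));

// On updateConfiguration/resetConfiguration, drop the cached entry right away.
configCache.invalidate("vm.stats.interval");
```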
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes the issue with sonar check
```
Error: Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar (default-cli) on project cloudstack:
Error:
Error: The version of Java (11.0.22) used to run this analysis is deprecated, and SonarCloud no longer supports it. Please upgrade to Java 17 or later.
Error: You can find more information here: https://docs.sonarsource.com/sonarcloud/appendices/scanner-environment/
```
Main changes:
- Support build/packaging using JDK17
- Still supports JDK11 for building
- Support JRE17 for use in production installation
- Drop EL7 support
The community packages will still be packaged using JDK11.
If users want, they can build with JDK17 as well.
Signed-off-by: Wei Zhou <wei.zhou@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* refactored field instanceId to nicId in DataCenterIpAddressVO and AcquirePodIpCmdResponse
* refactored occurrences of "instanceId" in DataCenterDaoImpl and DataCenterIpAddressDaoImpl
* Added API arg validator for RFC compliance domain name, to validate VM's host name
* Added unit tests for vm host/domain name validation
* Don't send sql exception/query from dao to upper layer, log it and send only the error message
* Updated user resources name / displayname(/text) column's charset to utf8mb4 to support emojis / unicode chars
* Check and update char set for affinity group name to utf8mb4, from the data migration in upgrade path
* Added smoke test to check resource name for vm, volume, service & disk offering, template, iso, account(first/lastname)
* Updated resource annotation charset to utf8mb4
* Updated some resources description charset to utf8mb4
* Updated sql stmt with constant
* Updated modify columns char set with idempotent procedure
* Removed delimiter (for creating procedures)
Fixes #9331
Only those VMs which have a NIC entry that is not marked removed should be
considered network VMs.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* New feature: Change storage pool scope
* Added checks for Ceph/RBD
* Update op_host_capacity table on primary storage scope change
* Storage pool scope change integration test
* pull 8875 : Addressed review comments
* Pull 8875: remove storage checks, AbstractPrimayStorageLifeCycleImpl class
* Pull 8875: Fixed integration test failure
* Pull 8875: Review comments
* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions
* Added UT for changeStoragePoolScope
* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Dao review comments
* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue
* Pull 8875: Created a new smokes test file + A single warning msg in ui
* Pull 8875: Added cleanup in test_primary_storage_scope.py
* Pull 8875: Typo in en.json
* Pull 8875: cleanup array in test_primary_storage_scope.py
* Pull:8875 Removing extra whitespace at eof of StorageManagerImplTest
* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Added license header
* Pull 8875: Fixed sql query for vmstates
* Pull 8875: Changed icon plus info on disabled mode in apidoc
* Pull 8875: Change scope should not work for local storage
* Pull 8875: Change scope completion event
* Pull 8875: Added api findAffectedVmsForStorageScopeChange
* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster
* Pull 8875: Review comments + Vm name in response
* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because of multiple volumes in the VM satisfying the criteria
* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT
* listAffectedVmsForStorageScopeChange should work if the pool is not disabled
* Fix listAffectedVmsForStorageScopeChangeTest UT
* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query
* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
* fix eof
* changeStoragePoolScopeToZone should connect pool to all Up hosts
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Create/Export OVA file of the VM on external vCenter host, to temporary conversion location (NFS)
* Fixed ova issue on untar/extract ovf from ova file
"tar -xf" cmd on ova fails with "ovf: Not found in archive" while extracting ovf file
* Updated VMware to KVM instance migration using OVA
* Refactoring and cleanup
* test fixes
* Consider zone wide pools in the destination cluster for instance conversion
* Remove local storage pool support as temporary conversion location
- OVA export is not possible as the pool is not accessible outside the host; NFS pools are supported.
* cleanup unused code
* some improvements, and refactoring
* import nic unit tests
* vmware guru unit tests
* Separate clone VM and create template file for VMware migration
- Export OVA (of the cloned VM) to the conversion location takes time.
- Do any validations with cloned VM before creating the template (and fail early).
- Updated unit tests.
* Check conversion support on host before clone vm / create template on vmware (and fail early)
* minor code improvements
* Auto select the host with instance conversion capability
* Skip instance conversion supported response param for non-KVM hosts
* Show supported conversion hosts in the UI
* Skip persistence map update if network doesn't exist
* Added support to export OVA from KVM host, through ovftool (when installed in KVM host)
* Updated importvm api param 'usemsforovaexport' to 'forcemstodownloadvmfiles', to be generic
* Updated hardcoded UI messages with message labels
* Updated UI to support importvm api param - forcemstodownloadvmfiles
* Improved instance conversion support checks on ubuntu hosts, and for windows guest vms
* Use OVF template (VM disks and spec files) for instance conversion from VMware, instead of OVA file
- this would further increase the migration performance (as it reduces the time for OVA preparation / archiving of the VM files into a single file)
* OVF export tool parallel threads code improvements
* Updated 'convert.vmware.instance.to.kvm.timeout' config default value to 3 hrs
* Config values check & code improvements
* Updated import log, with time taken and vm details
* Support for parallel downloads of VMware VM disk files while exporting OVF from MS, and other changes below.
- Skip clone for powered off VMs
- Fixes to support standalone host (with its default datacenter)
- Some code improvements
* rebase fixes
* rebase fixes
* minor improvement
* code improvements - threads configuration, and api parameter changes to import vm files
* typo fix in error msg
* Veeam: find storage pool by path for PreSetup and VMFS
* Veeam: support VMware distributed virtual switch
* Veeam: sync volumes on Solidfire after backup restoration
Users faced the issue that the backup is restored but the DATA disk is gone (the ROOT disk is ok):
```
2024-05-03 12:00:32,868 ERROR [o.a.c.b.BackupManagerImpl] (API-Job-Executor-13:ctx-aa8a1d85 job-149661 ctx-73328567) (logid:6510cf06) Failed to import VM [vmInternalName: i-169-9679-VM] from backup restoration [{"backupType":"Full","externalId":"821ca400-a5da-4282-bf3f-7c7e38a6cdb4","id":257,"uuid":"69399101-5cbd-461c-8a48-f0c70eac0b24","vmId":9679}] with hypervisor [type: VMware] due to: [Couldn't find storage pool -iqn.2010-01.com.solidfire:3p53.data-9679.221-0].
```
On managed storage, the datastore name of DATA disk is determined by the iscsi_name of the volume.
* Veeam: set correct path for DATA disks on solidfire
* Temporarily backup StorPool volume before expunge
Sometimes users delete volumes by mistake. This enhancement
provides a solution to back up the volume before it is deleted. The user
will be able to see the snapshot in the CloudStack UI/CLI and can only create a
volume from it.
A task will check (by default every 5 minutes) whether the snapshots have been
deleted from StorPool.
Global settings to enable the delay delete option:
`storpool.delete.after.interval` - The interval (in seconds) after which the StorPool snapshot will be deleted
`storpool.list.snapshots.delete.after.interval` - The interval (in seconds) to fetch the StorPool snapshots with deleteAfter flag
Minor fix when deleting snapshots
* added Apache licence
* addressed comments
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/398
Currently, an administrator can break host tag compatibility for a VM through certain operations:
* deploy/start VM on a specific host
* migrate VM
* restore VM
* scale VM
This PR allows the user to specify tags which must be checked during these operations.
Global Settings
1. `vm.strict.host.tags` - A comma-separated list of tags for strict host check (Default - empty)
2. `vm.strict.resource.limit.host.tag.check` - Determines whether the resource limits tags are considered strict or not (Default - true)
During the above operations, we now check and throw an error if host tags compatibility is being broken for tags specified in `vm.strict.host.tags`. If `vm.strict.resource.limit.host.tag.check` is set to `true`, tags set in `resource.limit.host.tags` are also checked during these operations.
With d8c7e34b38 options were added to the host.allocators.order config. Currently, it allows adding only FirstFitRouting as the value. This PR fixes the behaviour and allows other host allocators to be added.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Got an exception when deleting a zone:
```
com.cloud.utils.exception.CloudRuntimeException: The zone cannot be deleted because there are Secondary storages in this zone
```
This PR introduces the functionality of purging removed DB entries for CloudStack entities (currently only for VirtualMachine). There would be three mechanisms for purging removed resources:
Background task - CloudStack will run a background task which runs at a defined interval. Other parameters for this task can be controlled with new global settings.
API - New admin-only API purgeExpungedResources. It will allow passing the following parameters - resourcetype, batchsize, startdate, enddate. Currently, API is not supported in the UI.
Config for service offering - Service offerings can be created with the purgeresources parameter, which allows purging resources immediately on expunge.
Following new global settings have been added:
expunged.resources.purge.enabled: Default: false. Whether to run a background task to purge the expunged resources
expunged.resources.purge.resources: Default: (empty). A comma-separated list of resource types that will be considered by the background task to purge the expunged resources. Currently only VirtualMachine is supported. An empty value will result in considering all resource types for purging
expunged.resources.purge.interval: Default: 86400. Interval (in seconds) for the background task to purge the expunged resources
expunged.resources.purge.delay: Default: 300. Initial delay (in seconds) to start the background task to purge the expunged resources.
expunged.resources.purge.batch.size: Default: 50. Batch size to be used during expunged resources purging.
expunged.resources.purge.start.time: Default: (empty). Start time to be used by the background task to purge the expunged resources. Use format yyyy-MM-dd or yyyy-MM-dd HH:mm:ss.
expunged.resources.purge.keep.past.days: Default: 30. The number of days before the background task's execution time for which expunged resources must not be purged. To purge expunged resources right up to the execution time of the background task, set the value to zero.
expunged.resource.purge.job.delay: Default: 180. Delay (in seconds) to execute the purging of an expunged resource initiated by the configuration in the offering. The minimum value is 180 seconds; if a lower value is set, the minimum value will be used.
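As a rough illustration of how these settings fit together, the sketch below schedules a purge loop with the configured delay and interval, keeps rows newer than the keep-past-days cutoff, and purges in batches. The `PurgeDao` interface and its method are assumptions for the example, not the actual CloudStack code.
```java
import java.time.LocalDate;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExpungedResourcePurgeSketch {
    // Hypothetical DAO: purges up to batchSize rows removed before the cutoff, returns rows purged.
    interface PurgeDao {
        int purgeBatch(LocalDate cutoff, int batchSize);
    }

    public static void start(PurgeDao dao, long initialDelaySec, long intervalSec,
                             int batchSize, int keepPastDays) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // rows newer than (today - keep.past.days) are kept; zero purges everything up to now
            LocalDate cutoff = LocalDate.now().minusDays(keepPastDays);
            // purge batch by batch until fewer rows than a full batch remain
            while (dao.purgeBatch(cutoff, batchSize) == batchSize) {
                // loop body intentionally empty: purgeBatch does the work
            }
        }, initialDelaySec, intervalSec, TimeUnit.SECONDS);
    }
}
```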
Documentation PR: apache/cloudstack-documentation#397
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
- Changes behaviour of the details param handling via a global setting:
- listVirtualMachines API: when the details param is not provided, whether stats are returned is controlled by a new global setting `list.vm.default.details.stats`
- listVirtualMachinesMetrics API: when the details param is not provided, it uses `all` details including `stats`
- Users affected by slow listVirtualMachines API response times can set `list.vm.default.details.stats` to `false`
- Removes the ConfigKey vm.stats.increment.metrics.in.memory, which was renamed to `vm.stats.increment.metrics` in #5984, and also removes unused/unnecessary global settings via the upgrade path
- Changes the default value of the VM stats accumulation setting `vm.stats.increment.metrics` to false until a better solution emerges. Since #5984 this has been true, and during the execution of listVM APIs the stats are aggregated, which can immensely slow down list VM API calls. Costly operations such as summing of stats shouldn't be done during the course of a synchronous API, such as the list VM API.
- Fixes the UI that uses listVirtualMachinesMetrics so it does not request the `stats` detail in list view when metrics are not selected.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The user_vm_view can end up not picking the right index to join against
the user_ip_address table, causing a full table scan on the user_ip_address
table. This could be related to a MySQL bug
https://bugs.mysql.com/bug.php?id=41220
In a test environment with 20k shared networks and over 20M IPs, the
listVirtualMachines API was found to take over 17s to return a list of
just 10 VMs. However, with this fix it now takes under 200ms to
return the list.
MySQL slow query logging showed nearly 20M rows examined from the IP
address table:
```
# User@Host: cloud[cloud] @ localhost [127.0.0.1] Id: 39
# Query_time: 8.227541 Lock_time: 0.000014 Rows_sent: 12 Rows_examined: 19,667,235
SET timestamp=1715410270;
SELECT user_vm_view.id, user_vm_view.name /*snipped*/ FROM user_vm_view
WHERE user_vm_view.id IN (4,6,7,8,9,10,11,12,13,14,15,16);
```
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* kvm: replace ISO path in vm XML configuration during vm migration
* Update 9212: address comments
* kvm: fix vm migration if there are multiple image stores
* list by isEncrypted
* use filter on VO and cleanup
* add encryption type to volume response
* Update api/src/main/java/org/apache/cloudstack/api/command/user/volume/ListVolumesCmd.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Bryan Lima <bryan.lima@hotmail.com>
Co-authored-by: SadiJr <sadi@scclouds.com.br>
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>
Co-authored-by: Henrique Sato <henriquesato2003@gmail.com>
* NSX integration - skeletal code
* Fix module not loading on startup
* add upgrade path and daos
add nsx controller command
* add support for adding and listing nsx provider to a zone
* add license
* add default VPC offering and update upgrade path
* add global setting to enable nsx plugin
* add delete nsx controller operation
* add nsxresource
* add NSX resource , api client, create tier1 gw
* update db
* update response and add license
* Add support to create and delete nsx tier-1 gateway
* add license
* cleanup and add skeletal code for network creation
* add create/delete segment and UI integration
* add license
* address code smells - part 1
* fix test / build failure
* NSX integration - skeletal code
* Fix module not loading on startup
* add upgrade path and daos
add nsx controller command
* add support for adding and listing nsx provider to a zone
* add license
* add default VPC offering and update upgrade path
* add global setting to enable nsx plugin
* add delete nsx controller operation
* add nsxresource
* add NSX resource , api client, create tier1 gw
* update db
* update response and add license
* Add support to create and delete nsx tier-1 gateway
* add license
* cleanup and add skeletal code for network creation
* add create/delete segment and UI integration
* add license
* address code smells - part 1
* fix test / build failure
* add ui changes + update nsx_provider table transport zones + use NSX broadcast domain for add nics to router
* ui: fix password field, and backend changes
* add route advertisement
* update offering
* update offering
* add sleep before deletion of vpc / tier g/w for ports to be removed
* move creation of segments to design phase
* change provider to VPC router for Dhcp & dns service in an nsx offering
* Add public nic for NSX
* reserve first IP (after g/w) of subnet for router nic - NSX
* revert reserving 1st IP in vpc segments
* [NSX] Create a DHCP relay and add it to a VPC tier segment (#107)
* Create DHCP relay command and execute request
* In progress integrate with networking
* Create DHCP relay config on the network VR allocation
* Revert domain router dao changes
* Create DHCP relay con VR nic plug to NSX network
* Link DHCP relay config to segment after creation
* [NSX] Cleanup DHCP Relay config on segment deletion (#108)
* Cleanup DHCP Relay config on segment deletion
* update segment & relay name generators and call delete dhcprelay after deletion of segment
* address comment
* [NSX] Fix DHCP relay config deletion was missing zone name (#8068)
* [NSX] Refactor API wrapper operations (#8059)
* [NSX] Refactor API wrapper operations
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Nsx unit tests (#8090)
* Add tests
* add test for NsxGuestNetworkGuru
* add unit tests for NsxResource
* add unit tests for NsxElement
* cleanup
* [NSX] Refactor API wrapper operations
* update tests
* update tests - add nsxProviderServiceImpl test
* add unit test - NsxServiceImpl
* add license
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
* fix tests
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* modify NSX resource naming convention (#8095)
* modify NSX resource naming convention
* remove unused imports
* add a setup phase between design and implementation of a network for intermediary steps
* add method to all classes
* NSX: Refactor Network & VPC offering (#8110)
* [NSX] Refactor API wrapper operations
* Network offering changes for NSX
* fix services and provider combination
* address comments: rename param
* update nsx_mode parameter
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test
* [NSX] Allow NSX isolated networks (#8132)
* Add network offerings for NSX on isolated networks
* Fix offerings creation
* In progress NSX isolated network
* Fixes
* Fix NIC allocation to router
* NSX: Add Step for Adding Public traffic network for NSX During zone creation (#8126)
* NSX: Add Step for Adding Public traffic network for NSX
* address comments and cleanup
* address comment
* remove indent
* NSX: Create and Delete static NAT & Port forward rules (#8131)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* fix lint check
* fix smoke tests
* fix smoke tests
* Nsx add lb rule (#8161)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* NSX: Add support to create and delete Load balancer rules
* fix deletion of lb rules
* add header file and update protocol detail
* build failure fix
* [NSX] Add SNAT support (#8100)
* In progress add source NAT
* Fix after merge
* Fix tests
* Fix NPE on isolated network deletion
* Reserve source NAT IP when its not passed for NSX VPC
* Create source NAT rule on VR NIC allocation
* Fix update VPC and remove VPC to update and remove SNAT rule
* Fix packaging
* Address review comment
* Fix build
* fix build - unused import
* Add defensive checks
* Add missing design to NSX public guru
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix VR public NIC allocation (#8166)
* NSX: fix LB member addition and deletion and add defensive checks (#8167)
* Fix public NIC NPE on broadcast URI
* NSX: Router Public nic to get IP from systemVM Ip range (#8172)
* NSX: Router Public nic to get IP from systemVM Ip range
* Fix VR IP address and setSourceNatIp command
* NSX: hide systemVM reserved IP range SourceNAT
* fix test
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test failure
* test failure fix
* [NSX] Fix update source NAT IP (#8176)
* [NSX] Fix update source NAT IP
* Fix startup
* Fix API result
* NSX - add LB route Advertizement (#8192)
* [NSX] Add ACL types support (#8224)
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Fix DROP action rules and transform tests
* Add new ACL rules
* Fixes
* associate security policies with groups and not to DFW and add deletion of rules
* Fix name convention
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix creation of VPCs (#8320)
* Fix ACL rules creation (#8323)
* [NSX] Fix database views (#8325)
* NSX: Add CKS Support & Firewall rules for Isolated Networks (#8189)
* NSX: Add ALL LB IP to the list of route advertisements in tier1
* NSX: Support Source NAT on NSX Isolated networks
* NSX: Cks Support
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add Firewall rules
* build failure - fix unit test
* fix npes
* Add support to delete firewall rules
* update nsx cks offering
* add license
* update order of ports in PF & FW rules
* fix filter for getting transport zones
* CKS support changed - MTU updated, etc
* add LB for CKS on VPC
* address comments
* adapt upstream cks logic for vpc
* revert mtu hack
* update UI changes as per upstream fix
* change display test for CKS n/w offerings for isolated and VPC tiers
* add extra line for linter
* address comment
* revert list change
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix ui build failure
* [NSX] Address SonarCloud Bugs (#8341)
* [NSX] Address SonarCloud Bugs
* Fix NSX API connection issues
* NSX: Add unit tests to increase coverage (#8355)
* NSX: Add unit tests
* cleanup unused imports
* add more unit tests
* add tests for publicnsxnetworkguru
* add license
* fix build failures
* address sonar comment
* fix security hotspots
* NSX: Add more unit tests (#8381)
* NSX : Unit tests
* remove unused imports
* remove unused import causing build failure
* fix build failures due to unused imports
* fix build failure
* fix test assertion
* remove unused imports
* remove unused import
* Nsx UI zone bug (#8398)
* NSX: Attempt to fix NSX Zone creation bug for public networks
* fix zone wizard public traffic issue
* add proper filtering of offerings based on VPC nsx mode
* clean up console logs
* NSX: Fix code smells and reported bugs (#8409)
* NSX: Fix code smells and reported bugs
* fix override issue
* remove unused imports
* fix test
* refactor code to reduce complexity
* add license
* cleanup
* fix build failure
* fix build failure
* address comments
* test - add config to ignore certain files from test coverage
* test exclusion of classes from test cov
* revert pom changes
* [NSX] Add more unit tests (#8431)
* [NSX] Add more unit tests
* More tests
* Fix build errors
* NSX: Prevent creation of L2 and Shared networks for NSX (#8463)
* NSX: Prevent creation of L2 and Shared networks for NSX
* add checks to backend to prevent creation of l2 and shared networks in nsx zones and filter only nsx offerings when creating isolated networks
* cleanup
* NSX: Fix code smells (#8436)
* NSX: Fix code smells
* Add changes to service creation logic
* CKS: Add action to during firewall rule creation (#8498)
* NSX,UI: Deduplicate network list when creating kubernetes clusters (#8513)
* NSX: Make LB service selectable in network offering (#8512)
* NSX: Make LB service selectable in network offering
* fix label
* address comments
* address comments
* NSX: Add appropriate error message when icmp type is set to -1 for NSX (#8504)
* NSX: Add appropriate error message when icmp type is set to -1 for NSX
* address comments
* update text
* fix test
* fix test - build failure
* fix test - build failure
* NSX: Cleanup NSX resources during k8s cluster cleanup (#8528)
* fix test failure
* NSX: Improve segment deletion process (#8538)
* NSX: Add passive monitor for NSX LB to test whether a server is available (#8533)
* NSX: Add passive monitor for NSX LB to test whether a server is available
* Add active monitors too
* fix build failure
* NSX: Add check for ICMP code / type for NSX zones (#8542)
* NSX: Fix Routed Mode for Isolated and VPC networks (#8534)
* NSX: Fix Routed Mode for Isolated and VPC networks
* NSX: Fix Routed mode - add checks for ports added for FW rules
* clean up code
* fix build failure
* NSX: Add retry logic with sleep to delete segments (#8554)
* NSX: Add retry logic with sleep to delete segments
* add logs
* NSX: Fix custom ACL check (#2)
* NSX: Fix custom ACL check
* NSX: Fix custom ACL check
* Nsx vpc routed mode (#5)
* NSX: Fix VPC routed mode
* NSX: VPC route mode
* remove unnecessary changes
* Nsx: Support internal LB (#4)
* NSX: Support internal LB service in NSX
* add lb removal logic
* Fix UI issue hiding internal LB tab
* Refactor method name
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* NSX: Improve NSX resource cleanup process (#3)
* Fix unit test
* NSX: Add SourceNAT service to the default Routed offering for VPC (#13)
* Fix VPC restart with cleanup (#12)
* NSX: Fix ACL rule removal on replacement and fix rule order (#11)
* NSX: fix smoke test failure for ACLs (#9)
* Fix unit tests
* Fix NSX plugin pom XML
* NSX: Add support to re-order ACL rules (NSX FW rules) (#14)
* [WIP] NSX: Add support to re-order ACL rules (NSX FW rules)
* fix reordering of acl rules on all networks that it is associated to
* clean up and attempt test fix
* Fix tests
* Remove unused import
* tweak reorder logic
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix zone creation issue for internal load balancer
* Fix
* Fix unit test
* fix logger
* fix logger
* fix logger
* NSX: Fix VPC form to ignore source NAT IP when creating VPCs and fix label
* Move SQL changes to the newest schema file
* NSX: Last Fixes
* Fix build
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
This fixes https://github.com/apache/cloudstack/issues/8595
```
2024-02-01 16:23:52,473 INFO [c.c.n.s.SecurityGroupManagerImpl] (AgentManager-Handler-16:null) (logid:) Network Group full sync for agent 1 found 3 vms out of sync
2024-02-01 16:23:52,473 DEBUG [c.c.n.s.SecurityGroupManagerImpl] (AgentManager-Handler-16:null) (logid:) Security Group Mgr v2: scheduling ruleset updates for 3 vms (unique=3), current queue size=0
2024-02-01 16:23:52,473 DEBUG [c.c.n.s.SecurityGroupManagerImpl] (AgentManager-Handler-16:null) (logid:) Security Group Mgr v2: done scheduling ruleset updates for 3 vms: num new jobs=3 num rows insert or updated=0 time taken=0
2024-02-01 16:23:52,478 ERROR [c.c.n.s.SecurityGroupManagerImpl] (SecGrp-Worker-20:ctx-0aa3885d) (logid:472b30d2) Problem during SG work com.cloud.network.security.LocalSecurityGroupWorkQueue$LocalSecurityGroupWork@5
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: com.mysql.cj.jdbc.ClientPreparedStatement: SELECT SQL_CACHE security_group_vm_map.id, security_group_vm_map.security_group_id, security_group_vm_map.instance_id, nics.ip4_address, vm_instance.state, security_group.name FROM security_group_vm_map INNER JOIN nics ON security_group_vm_map.instance_id=nics.instance_id INNER JOIN vm_instance ON security_group_vm_map.instance_id=vm_instance.id INNER JOIN security_group ON security_group_vm_map.security_group_id=security_group.id WHERE security_group_vm_map.security_group_id = 3 AND vm_instance.state='Running'
at com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(GenericDaoBase.java:424)
at com.cloud.utils.db.GenericDaoBase.listIncludingRemovedBy(GenericDaoBase.java:938)
at com.cloud.utils.db.GenericDaoBase.listBy(GenericDaoBase.java:928)
at com.cloud.network.security.dao.SecurityGroupVMMapDaoImpl.listBySecurityGroup(SecurityGroupVMMapDaoImpl.java:134)
at jdk.internal.reflect.GeneratedMethodAccessor555.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
at com.sun.proxy.$Proxy245.listBySecurityGroup(Unknown Source)
at com.cloud.network.security.SecurityGroupManagerImpl2.generateRulesForVM(SecurityGroupManagerImpl2.java:246)
at com.cloud.network.security.SecurityGroupManagerImpl2.sendRulesetUpdates(SecurityGroupManagerImpl2.java:177)
at com.cloud.network.security.SecurityGroupManagerImpl2.work(SecurityGroupManagerImpl2.java:157)
at com.cloud.network.security.SecurityGroupManagerImpl2$WorkerThread$1.run(SecurityGroupManagerImpl2.java:75)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at com.cloud.network.security.SecurityGroupManagerImpl2$WorkerThread.run(SecurityGroupManagerImpl2.java:72)
Caused by: java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '.id, security_group_vm_map.security_group_id, security_group_vm_map.instance_id,' at line 1
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
... 28 more
```
The SearchBuilder listDomainAndTypeAndNoTagSearch in ReservationDaoImpl.java is wrongly created with ACCOUNT_ID as part of the search parameters; it should use DOMAIN_ID instead.
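For clarity, a sketch of the corrected SearchBuilder setup inside the DAO constructor might look like the fragment below. The getter names and the NULL-tag condition are assumptions for illustration; the point is that the domain condition must bind to the domain id rather than the account id.
```java
// Illustrative fragment only: bind the "domainId" condition to getDomainId(), not getAccountId().
SearchBuilder<ReservationVO> listDomainAndTypeAndNoTagSearch = createSearchBuilder();
listDomainAndTypeAndNoTagSearch.and("domainId", listDomainAndTypeAndNoTagSearch.entity().getDomainId(), SearchCriteria.Op.EQ);
listDomainAndTypeAndNoTagSearch.and("resourceType", listDomainAndTypeAndNoTagSearch.entity().getResourceType(), SearchCriteria.Op.EQ);
listDomainAndTypeAndNoTagSearch.and("tag", listDomainAndTypeAndNoTagSearch.entity().getTag(), SearchCriteria.Op.NULL);
listDomainAndTypeAndNoTagSearch.done();
```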
Additional fixes in test cases to:
- add assert
- add spacing
* Update to 4.20.0
* Update to python3
* Upgrade to JRE 17
* Upgrade to Debian 12.4.0
* VR: upgrade to python3
```
for f in `find systemvm/ -name '*.py'`; do
    if grep "print " "$f" >/dev/null; then
        2to3-2.7 -w "$f"
    else
        2to3-2.7 -p -w "$f"
    fi
done
```
* java: Use JRE17 in cloudstack packages and systemvmtemplate
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add --add-opens to JAVA_OPTS in systemd config
* Add --add-opens to JAVA_OPTS in systemd config for usage
* python3: fix "TypeError: a bytes-like object is required, not 'str'"
* python3: fix "ValueError: must have exactly one of create/read/write/append mode"
* Add --add-exports=java.base/sun.security.x509=ALL-UNNAMED for management server
* Use pip3 instead of pip for centos8
* python3: fix "TypeError: write() argument must be str, not bytes"
```
root@r-1037-VM:~# /opt/cloud/bin/passwd_server_ip.py 10.1.1.1
Traceback (most recent call last):
File "/opt/cloud/bin/passwd_server_ip.py", line 201, in <module>
serve()
File "/opt/cloud/bin/passwd_server_ip.py", line 187, in serve
initToken()
File "/opt/cloud/bin/passwd_server_ip.py", line 60, in initToken
f.write(secureToken)
TypeError: write() argument must be str, not bytes
root@r-1037-VM:~#
```
* Python3: fix "name 'file' is not defined"
```
root@r-1037-VM:~# /opt/cloud/bin/passwd_server_ip.py 10.1.1.1
Traceback (most recent call last):
File "/opt/cloud/bin/passwd_server_ip.py", line 201, in <module>
serve()
File "/opt/cloud/bin/passwd_server_ip.py", line 188, in serve
loadPasswordFile()
File "/opt/cloud/bin/passwd_server_ip.py", line 67, in loadPasswordFile
with file(getPasswordFile()) as f:
NameError: name 'file' is not defined
```
* python3: fix "TypeError: write() argument must be str, not bytes" (two more files)
* Upgrade jaxb version
* python3: fix more "TypeError: a bytes-like object is required, not str"
* python3: fix "Failed to update password server"
Failed to update password server due to: POST data should be bytes, an iterable of bytes, or a file object. It cannot be of type str.
* python3: fix "bad duration value: ikelifetime=24.0h"
Jan 15 13:57:20 systemvm ipsec[3080]: # bad duration value: ikelifetime=24.0h
* python3: fix password server "invalid save_password token"
* test: increase retries in test_vpc_vpn.py
* python3: fix passwd_server_ip.py
see error below
```
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: ----------------------------------------
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: Exception occurred during processing of request from ('10.1.1.129', 32782)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: Traceback (most recent call last):
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 650, in process_request_thread
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.finish_request(request, client_address)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 360, in finish_request
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.RequestHandlerClass(request, client_address, self)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 720, in __init__
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.handle()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/http/server.py", line 427, in handle
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.handle_one_request()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/http/server.py", line 415, in handle_one_request
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: method()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/opt/cloud/bin/passwd_server_ip.py", line 120, in do_GET
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.wfile.write(password)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 799, in write
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self._sock.sendall(b)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: TypeError: a bytes-like object is required, not 'str'
```
* python3: fix self.cl.get_router_password in Redundant VRs
```
File "/opt/cloud/bin/cs/CsDatabag.py", line 154, in get_router_password
md5.update(passwd)
TypeError: Unicode-objects must be encoded before hashing"]
```
* scripts: mark multipath scripts as executable
* systemvm template: remove hyperv packages and do not export
* VR: update default RAM size of System VMs/VRs to 512MiB
Before
```
mysql> select id,name,cpu,speed,ram_size,unique_name,system_use from service_offering where name like "System%";
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| id | name | cpu | speed | ram_size | unique_name | system_use |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| 3 | System Offering For Software Router | 1 | 500 | 256 | Cloud.Com-SoftwareRouter | 1 |
| 4 | System Offering For Software Router - Local Storage | 1 | 500 | 256 | Cloud.Com-SoftwareRouter-Local | 1 |
| 5 | System Offering For Internal LB VM | 1 | 256 | 256 | Cloud.Com-InternalLBVm | 1 |
| 6 | System Offering For Internal LB VM - Local Storage | 1 | 256 | 256 | Cloud.Com-InternalLBVm-Local | 1 |
| 7 | System Offering For Console Proxy | 1 | 500 | 1024 | Cloud.com-ConsoleProxy | 1 |
| 8 | System Offering For Console Proxy - Local Storage | 1 | 500 | 1024 | Cloud.com-ConsoleProxy-Local | 1 |
| 9 | System Offering For Secondary Storage VM | 1 | 500 | 512 | Cloud.com-SecondaryStorage | 1 |
| 10 | System Offering For Secondary Storage VM - Local Storage | 1 | 500 | 512 | Cloud.com-SecondaryStorage-Local | 1 |
| 11 | System Offering For Elastic LB VM | 1 | 128 | 128 | Cloud.Com-ElasticLBVm | 1 |
| 12 | System Offering For Elastic LB VM - Local Storage | 1 | 128 | 128 | Cloud.Com-ElasticLBVm-Local | 1 |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
10 rows in set (0.00 sec)
```
New value
```
mysql> select id,name,cpu,speed,ram_size,unique_name,system_use from service_offering where name like "System%";
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| id | name | cpu | speed | ram_size | unique_name | system_use |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| 3 | System Offering For Software Router | 1 | 500 | 512 | Cloud.Com-SoftwareRouter | 1 |
| 4 | System Offering For Software Router - Local Storage | 1 | 500 | 512 | Cloud.Com-SoftwareRouter-Local | 1 |
| 5 | System Offering For Internal LB VM | 1 | 256 | 512 | Cloud.Com-InternalLBVm | 1 |
| 6 | System Offering For Internal LB VM - Local Storage | 1 | 256 | 512 | Cloud.Com-InternalLBVm-Local | 1 |
| 7 | System Offering For Console Proxy | 1 | 500 | 1024 | Cloud.com-ConsoleProxy | 1 |
| 8 | System Offering For Console Proxy - Local Storage | 1 | 500 | 1024 | Cloud.com-ConsoleProxy-Local | 1 |
| 9 | System Offering For Secondary Storage VM | 1 | 500 | 512 | Cloud.com-SecondaryStorage | 1 |
| 10 | System Offering For Secondary Storage VM - Local Storage | 1 | 500 | 512 | Cloud.com-SecondaryStorage-Local | 1 |
| 11 | System Offering For Elastic LB VM | 1 | 128 | 512 | Cloud.Com-ElasticLBVm | 1 |
| 12 | System Offering For Elastic LB VM - Local Storage | 1 | 128 | 512 | Cloud.Com-ElasticLBVm-Local | 1 |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
10 rows in set (0.01 sec)
```
* debian12: fix test_network_ipv6 and test_vpc_ipv6
* python3: remove duplicated imports
* debian12: failed to start Apache2 server (SSLCipherSuite @SECLEVEL=0)
error message
```
[Sat Jan 20 22:51:14.595143 2024] [ssl:emerg] [pid 10200:tid 140417063888768] AH02562: Failed to configure certificate cloudinternal.com:443:0 (with chain), check /etc/ssl/certs/cert_apache.crt
[Sat Jan 20 22:51:14.595234 2024] [ssl:emerg] [pid 10200:tid 140417063888768] SSL Library Error: error:0A00018E:SSL routines::ca md too weak
AH00016: Configuration Failed
```
openssl version
```
root@s-167-VM:~# openssl version -a
OpenSSL 3.0.11 19 Sep 2023 (Library: OpenSSL 3.0.11 19 Sep 2023)
built on: Mon Oct 23 17:52:22 2023 UTC
platform: debian-amd64
options: bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -fzero-call-used-regs=used-gpr -DOPENSSL_TLS_SECURITY_LEVEL=2 -Wa,--noexecstack -g -O2 -ffile-prefix-map=/build/reproducible-path/openssl-3.0.11=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
OPENSSLDIR: "/usr/lib/ssl"
ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-3"
MODULESDIR: "/usr/lib/x86_64-linux-gnu/ossl-modules"
Seeding source: os-specific
CPUINFO: OPENSSL_ia32cap=0x80202001478bfffd:0x0
```
certificate
```
root@s-167-VM:~# keytool -printcert -rfc -file /usr/local/cloud/systemvm/certs/realhostip.crt
-----BEGIN CERTIFICATE-----
MIIFZTCCBE2gAwIBAgIHKBCduBUoKDANBgkqhkiG9w0BAQUFADCByjELMAkGA1UE
BhMCVVMxEDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAY
BgNVBAoTEUdvRGFkZHkuY29tLCBJbmMuMTMwMQYDVQQLEypodHRwOi8vY2VydGlm
aWNhdGVzLmdvZGFkZHkuY29tL3JlcG9zaXRvcnkxMDAuBgNVBAMTJ0dvIERhZGR5
IFNlY3VyZSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTERMA8GA1UEBRMIMDc5Njky
ODcwHhcNMTIwMjAzMDMzMDQwWhcNMTcwMjA3MDUxMTIzWjBZMRkwFwYDVQQKDBAq
LnJlYWxob3N0aXAuY29tMSEwHwYDVQQLDBhEb21haW4gQ29udHJvbCBWYWxpZGF0
ZWQxGTAXBgNVBAMMECoucmVhbGhvc3RpcC5jb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQCDT9AtEfs+s/I8QXp6rrCw0iNJ0+GgsybNHheU+JpL39LM
TZykCrZhZnyDvwdxCoOfE38Sa32baHKNds+y2SHnMNsOkw8OcNucHEBX1FIpOBGp
h9D6xC+umx9od6xMWETUv7j6h2u+WC3OhBM8fHCBqIiAol31/IkcqDxxsHlQ8S/o
CfTlXJUY6Yn628OA1XijKdRnadV0hZ829cv/PZKljjwQUTyrd0KHQeksBH+YAYSo
2JUl8ekNLsOi8/cPtfojnltzRI1GXi0ZONs8VnDzJ0a2gqZY+uxlz+CGbLnGnlN4
j9cBpE+MfUE+35Dq121sTpsSgF85Mz+pVhn2S633AgMBAAGjggG+MIIBujAPBgNV
HRMBAf8EBTADAQEAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAOBgNV
HQ8BAf8EBAMCBaAwMwYDVR0fBCwwKjAooCagJIYiaHR0cDovL2NybC5nb2RhZGR5
LmNvbS9nZHMxLTY0LmNybDBTBgNVHSAETDBKMEgGC2CGSAGG/W0BBxcBMDkwNwYI
KwYBBQUHAgEWK2h0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3Np
dG9yeS8wgYAGCCsGAQUFBwEBBHQwcjAkBggrBgEFBQcwAYYYaHR0cDovL29jc3Au
Z29kYWRkeS5jb20vMEoGCCsGAQUFBzAChj5odHRwOi8vY2VydGlmaWNhdGVzLmdv
ZGFkZHkuY29tL3JlcG9zaXRvcnkvZ2RfaW50ZXJtZWRpYXRlLmNydDAfBgNVHSME
GDAWgBT9rGEyk2xF1uLuhV+auud2mWjM5zArBgNVHREEJDAighAqLnJlYWxob3N0
aXAuY29tgg5yZWFsaG9zdGlwLmNvbTAdBgNVHQ4EFgQUZyJz9/QLy5TWIIscTXID
E8Xk47YwDQYJKoZIhvcNAQEFBQADggEBAKiUV3KK16mP0NpS92fmQkCLqm+qUWyN
BfBVgf9/M5pcT8EiTZlS5nAtzAE/eRpBeR3ubLlaAogj4rdH7YYVJcDDLLoB2qM3
qeCHu8LFoblkb93UuFDWqRaVPmMlJRnhsRkL1oa2gM2hwQTkBDkP7w5FG1BELCgl
gZI2ij2yxjge6pOEwSyZCzzbCcg9pN+dNrYyGEtB4k+BBnPA3N4r14CWbk+uxjrQ
6j2Ip+b7wOc5IuMEMl8xwTyjuX3lsLbAZyFI9RCyofwA9NqIZ1GeB6Zd196rubQp
93cmBqGGjZUs3wMrGlm7xdjlX6GQ9UvmvkMub9+lL99A5W50QgCmFeI=
-----END CERTIFICATE-----
Warning:
The certificate uses the SHA1withRSA signature algorithm which is considered a security risk. This algorithm will be disabled in a future update.
```
it comes from
```
$ openssl x509 -in ./systemvm/agent/certs/realhostip.crt -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 11277268652730408 (0x28109db8152828)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C = US, ST = Arizona, L = Scottsdale, O = "GoDaddy.com, Inc.", OU = http://certificates.godaddy.com/repository, CN = Go Daddy Secure Certification Authority, serialNumber = 07969287
Validity
Not Before: Feb 3 03:30:40 2012 GMT
Not After : Feb 7 05:11:23 2017 GMT
Subject: O = *.realhostip.com, OU = Domain Control Validated, CN = *.realhostip.com
```
* debian12: use ed25519 instead of rsa as ssh-rsa has been deprecated in OpenSSH
on xenserver
```
[root@pr8497-t8906-xenserver-71-xs2 ~]# ssh -i .ssh/id_rsa.cloud -p 3922 169.254.214.153
Warning: Permanently added '[169.254.214.153]:3922' (ECDSA) to the list of known hosts.
Permission denied (publickey).
```
in the CPVM
```
Jan 22 19:31:09 v-1-VM sshd[2869]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Jan 22 19:31:09 v-1-VM sshd[2869]: Connection closed by authenticating user root 169.254.0.1 port 54704 [preauth]
```
ssh-dss (DSA) is not supported either
* debian12: add PubkeyAcceptedAlgorithms=+ssh-rsa to sshd_config
* VR: install python3 packages in case of Debian 11
* pom.xml: exclude systemvm/agent/packages/* in license check
* systemvm: do not patch router/systemvm during startup
this will cause the 4.19 SYSTEM template to not work, which may be expected
- python3 VS python2 (default)
- openSSL 3.0.1 VS 1.1.1w
- openssh-server 9.1 VS 8.4
* VR: patch router/systemvm if template is debian11
This supports the debian 11 template by
- revert change in systemvm/debian/etc/ssh/sshd_config
- patch VR/systemvms during startup
- install packages during patching system vm/routers
* python3 flake: fix E502 the backslash is redundant between brackets
```
../debian/root/health_checks/router_version_check.py:55:70: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:58:61: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:67:71: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:70:60: E502 the backslash is redundant between brackets
../debian/root/health_checks/haproxy_check.py:47:71: E502 the backslash is redundant between brackets
../debian/root/health_checks/haproxy_check.py:48:64: E502 the backslash is redundant between brackets
../debian/root/health_checks/cpu_usage_check.py:43:54: E502 the backslash is redundant between brackets
../debian/root/health_checks/cpu_usage_check.py:46:58: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:31:65: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:42:57: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:45:63: E502 the backslash is redundant between brackets
```
* python3 flake: fix E275 missing whitespace after keyword
```
../debian/opt/cloud/bin/cs_firewallrules.py:29:20: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_dhcp.py:27:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_dhcp.py:36:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_guestnetwork.py:33:20: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_guestnetwork.py:35:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_vpnusers.py:37:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/merge.py:230:11: E275 missing whitespace after keyword
../debian/opt/cloud/bin/merge.py:239:19: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_remoteaccessvpn.py:24:12: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_site2sitevpn.py:24:12: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs/CsHelper.py:90:15: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs/CsAddress.py:367:15: E275 missing whitespace after keyword
```
* python3 flake: fix configure.py
```
../debian/opt/cloud/bin/configure.py:24:22: E401 multiple imports on one line
../debian/opt/cloud/bin/configure.py:43:180: E501 line too long (294 > 179 characters)
../debian/opt/cloud/bin/configure.py:46:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:63:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:65:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:72:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:310:25: E711 comparison to None should be 'if cond is not None:'
../debian/opt/cloud/bin/configure.py:312:29: E711 comparison to None should be 'if cond is None:'
../debian/opt/cloud/bin/configure.py:378:25: E711 comparison to None should be 'if cond is not None:'
../debian/opt/cloud/bin/configure.py:380:29: E711 comparison to None should be 'if cond is None:'
../debian/opt/cloud/bin/configure.py:490:29: E712 comparison to False should be 'if cond is False:' or 'if not cond:'
../debian/opt/cloud/bin/configure.py:642:16: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:644:18: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:1416:1: E305 expected 2 blank lines after class or function definition, found 1
```
* python3 flake: fix other python files
```
../debian/opt/cloud/bin/vmdata.py:97:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/vmdata.py:99:14: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/cs/CsRedundant.py:438:53: E203 whitespace before ':'
../debian/opt/cloud/bin/cs/CsRedundant.py:461:53: E203 whitespace before ':'
../debian/opt/cloud/bin/cs/CsRedundant.py:499:5: E303 too many blank lines (2)
../debian/opt/cloud/bin/cs/CsDatabag.py:189:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/cs/CsDatabag.py:193:37: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/cs/CsHelper.py:118:30: E231 missing whitespace after ','
../debian/opt/cloud/bin/cs/CsHelper.py:119:15: E225 missing whitespace around operator
../debian/opt/cloud/bin/cs/CsHelper.py:127:19: E225 missing whitespace around operator
../debian/opt/cloud/bin/cs/CsAddress.py:324:43: E221 multiple spaces before operator
../debian/opt/cloud/bin/cs/CsVpcGuestNetwork.py:28:1: E302 expected 2 blank lines, found 1
```
* python3 flake: fix CsNetfilter.py
```
../debian/opt/cloud/bin/cs/CsNetfilter.py:226:13: E117 over-indented
../debian/opt/cloud/bin/cs/CsNetfilter.py:233:180: E501 line too long (197 > 179 characters)
../debian/opt/cloud/bin/cs/CsNetfilter.py:241:14: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:242:14: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:247:18: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:247:74: E202 whitespace before '}'
../debian/opt/cloud/bin/cs/CsNetfilter.py:248:18: E201 whitespace after '{'
```
* systemvm/test: fix sys.path
```
$ bash runtests.sh
/usr/bin/python
Python 3.10.12
Running pycodestyle to check systemvm/python code for errors
Running pylint to check systemvm/python code for errors
Python 3.10.12
pylint 2.12.2
astroid 2.9.3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
Running systemvm/python unit tests
....Device "eth0" does not exist.
.....................
----------------------------------------------------------------------
Ran 25 tests in 0.008s
OK
```
* Revert "systemvm template: remove hyperv packages and do not export"
This reverts commit 4383d59d03.
* debian12: move SQL change to schema-41900to42000.sql
* debian12: update systemvm template version to 4.20 in pom.xml
* pom.xml: fix NPE if templates do not exist on download.cloudstack.org
* debian12: increase default system offering for routers to 384MiB RAM
* CKS: fix addkubernetessupportedversion failed with JRE17
```
marvin.cloudstackException.CloudstackAPIException: Execute cmd: addkubernetessupportedversion failed, due to: errorCode: 530, errorText:Cannot invoke "org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine$State.toString()" because the return value of "com.cloud.api.query.vo.TemplateJoinVO.getState()" is null
```
* python3: revert changes by 2to3 with systemvm/debian/root/health_checks/*.py
* debian12: use ISO/packages on download.cloudstack.org
* VR: Update default ram size to 384
* debian12: fix router_version_check.py after VR live-patch and add health check in test_routers.py
* debian12: fix build error after log4j 2.x merge
* VR: Update default ram size to 512MB (again)
This reverts commit 578dd2b73f and efafa8c4d6.
* systemvmtemplate: Upgrade to Debian 12.5.0
* systemvm template: increase swap to 512MB
* VR: fix health check error due to deprecated SafeConfigParser
warning below
```
root@r-20-VM:~# /opt/cloud/bin/getRouterMonitorResults.sh true
/root/monitorServices.py:59: DeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in Python 3.12. Use ConfigParser directly instead.
parser = SafeConfigParser()
```
* test: fix wget does not work in macchinina vms on vmware80u1
fixes error below
```
{Cmd: wget -t 1 -T 1 www.google.com via Host: 10.0.55.186} {returns: ["wget: '/usr/lib/libpcre.so.1' is not an ELF file", "wget: can't load library 'libpcre.so.1'"]}
```
* packaging: add message for VR memory upgrade after packages installation
---------
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Vishesh <vishesh92@gmail.com>
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management
Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities for a tagged resource. Currently, tagged resource limits can be used for the following resource types:
Host limits
- user_vm
- cpu
- memory
Storage limits
- volume
- primary_storage
The following global settings can be used to specify tags for which limits need to be enforced:
Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`
Options for specifying tagged resource limits and viewing tagged resource usage are made available in the UI.
Enhances the use of templatetag for VM deployment and template creation
Adds an option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.
Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API; when passed, the response includes the suitableforvirtualmachine param.
* Normalize logs
All classes that could inherit their loggers from their parent classes had their own loggers deleted;
Most loggers didn't have to be static, so they were normalized to be non-static;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
New class DbUpgradeAbstractImpl created so that all Upgraders extend it and inherit its logger (a sketch of the convention follows)
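A minimal sketch of the convention described above (class names are just examples, not CloudStack classes): the logger is protected, non-static, named `logger`, and initialized with `getClass()` so subclasses inherit it and log under their own class name.
```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

class BaseManagerExample {
    // protected, non-static, named "logger"; resolves to the runtime class
    protected Logger logger = LogManager.getLogger(getClass());
}

class ChildManagerExample extends BaseManagerExample {
    void doWork() {
        // no logger declared here: the inherited one already logs as ChildManagerExample
        logger.info("work done");
    }
}
```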
* Upgrade log4j
* fix errors caused by the merge
* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files
* fix sonarcloud bug
* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit
* Readd snmpTrapAppender, remove TestAppender
* Regenerate changes
* regenerate changes
* refactor last custom appender
* fix systemvm configuration xml
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Fix utils pom
* fix some tests
* regenerate changes
* Fix jar being printed on exception
* fix logging in system VMs, fix commands not having log4j2 classpath.
* regenerate changes
* Fix some unwanted renamings
* fix end of file
* regenerate changes
* regenerate changes
* fix merge error
* regenerate changes
* fix tests
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* readd reload4j to tungsten as juniper depends on it
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* re-add reload4j dependency to network-contrail, as juniper depends on it
* regenerate changes
* regenerate changes
* regenerate changes
* fix typo
* regenerate changes
* regenerate changes
* Fix end of files
* regenerate changes
* add logj42 to cloud-utils-SHADED.jar
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* fix some tests
* Regenerate changes
* Regenerate changes
* fix test
* Regenerate changes
* Regenerate changes
* StoragePoolType as a class
* Fix agent side StoragePoolType enum to class
* Handle StoragePoolType for StoragePoolJoinVO
* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented a converter class and logic to utilize the @Convert annotation (a converter sketch follows the commit list below).
* Fix UserVMJoinVO for StoragePoolType
* fixed missing imports
* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented a converter class and logic to utilize the @Convert annotation.
* Fixed equals for the enum.
* removed not needed try/catch for prepareAttribute
* Added license to the file.
* Implemented "supportsPhysicalDiskCopy" for storage adaptor.
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
* Add javadoc to StoragePoolType class
* Add unit test for StoragePoolType comparisons
* StoragePoolType "==" and ".equals()" fix.
* Fix StoragePoolType for FiberChannelAdapter
* Fix for abstract storage adaptor set up issue
* review comments
* Pass StoragePoolType object for poolType dao attribute
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@gmail.com>
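As a rough sketch of the converter approach mentioned in the commit list above (this assumes the new StoragePoolType class keeps enum-style name()/valueOf(String) accessors; it is not the actual implementation), an attribute converter usable with @Convert could look like this, with entity fields annotated `@Convert(converter = StoragePoolTypeConverterSketch.class)`:
```java
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

@Converter
public class StoragePoolTypeConverterSketch implements AttributeConverter<StoragePoolType, String> {
    @Override
    public String convertToDatabaseColumn(StoragePoolType poolType) {
        // persist the pool type under its string name, as @Enumerated(STRING) used to do
        return poolType == null ? null : poolType.name();
    }

    @Override
    public StoragePoolType convertToEntityAttribute(String dbValue) {
        return dbValue == null ? null : StoragePoolType.valueOf(dbValue);
    }
}
```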
This PR fixes several issues found while testing Veeam 11 and Veeam 12:
- Import Veeam.Backup.PowerShell and silently ignore the warning messages
- Fix issue when assigning a VM to backup offerings, caused by the separator (\r\n)
- Fix authorization failure in Veeam 12a, because v1_4 is not supported in Veeam 12a any more
- Fix exception if the backup name has a space
- Fix backup metrics in Veeam 12, because the PowerShell command does not return the needed values
- Fix incorrect datetime value, because the PowerShell command returns a datetime that is not supported in Java
- Fix issue during backup restoration if the VM has both ROOT and DATA disks.
This PR also includes the following updates:
- Add integration test test/integration/smoke/test_backup_recovery_veeam.py
- Make some UI changes
- Add zone setting backup.plugin.veeam.version. If it is not set, CloudStack will get the Veeam version via PowerShell commands.
- Add zone setting backup.plugin.veeam.task.poll.interval and backup.plugin.veeam.task.poll.max.retry
This PR fixes reordering/listing of pools when cluster details are not set, while deploying a VM or attaching a volume.
Problem:
Attaching a volume to a VM fails on infra with zone-wide pools and vm.allocation.algorithm=userdispersing, because the cluster details are not set (passed as null) while reordering/listing pools by volumes.
Solution:
Ignore cluster details when not set, while reordering / listing pools by volumes.
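A minimal sketch of the idea, with illustrative types rather than CloudStack's own: when no cluster id is known, the cluster filter is skipped so zone-wide pools are still considered instead of the lookup failing.
```java
import java.util.List;
import java.util.stream.Collectors;

class PoolFilterSketch {
    record Pool(long id, Long clusterId) { }

    static List<Pool> filterByCluster(List<Pool> pools, Long clusterId) {
        if (clusterId == null) {
            return pools; // cluster details not set: do not filter, keep zone-wide pools
        }
        return pools.stream()
                .filter(p -> clusterId.equals(p.clusterId()))
                .collect(Collectors.toList());
    }
}
```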
Fixes #8412
Add support for 8.0.0.2 explicitly to prevent falling back to the parent version
Adds a log message when hypervisor capabilities fall back to the parent version
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
GuestOS mappings are retrieved from the parent hypervisor version when a minor/patch hypervisor version doesn't exist.
Fixes #8412
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR updates the conserve mode of the default VPC tier offering to conserve_mode=1,
so both port forwarding and load balancing rules can be created on a public IP in VPC tiers.
This fixes #8313
When a public IP gets removed from quarantine, the removal reason gets saved to the database; however, it may also be useful for operators to know who removed the public IP from quarantine. For that reason, this PR extends the public IP quarantine feature so that the account that deliberately removed an IP from quarantine also gets saved to the database.
This PR adds missing indexes on `alerts` & `events` tables.
For the alerts table, some of the queries are part of a couple of APIs and some operations. I have added indexes for those. Ref:
8f39087377/engine/schema/src/main/java/com/cloud/alert/dao/AlertDaoImpl.java (L40-L45)
For the events table, we query for `resource_id` & `resource_type` in the UI for a resource's events. Indexes were missing, so I have added those.
Sometimes users need to move resources between domains. For example, in a big company, a department may be moved from one part of the company to another, changing the company's department hierarchy. The easiest way of reflecting this change in the company's cloud environment would be to move subdomains between domains, but currently ACS offers no option to do that.
This PR adds the moveDomain API, which will move subdomains between domains. Furthermore, if the domain that is being moved has any subdomains, those will also be moved, maintaining the current subdomain tree.
This PR adds the capability in CloudStack to convert VMware instance disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. The vSphere/VMware setup may be managed by CloudStack or be a standalone setup.
CloudStack will let the administrator select a VM from an existing VMware vCenter in the CloudStack environment, or from an external vCenter by requesting the vCenter IP, datacenter name and credentials.
The migrated VM will be imported as a KVM instance
The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
The migration process timeout can be set by the setting convert.instance.process.timeout
Before attempting the virt-v2v migration, CloudStack will create a clone of the source VM on VMware. The clone VM will be removed after the registration process finishes.
CloudStack will delegate the migration action to a KVM host and the host will attempt to migrate the VM by invoking virt-v2v. If the guest OS is not supported, CloudStack will handle the operation as a failure.
The migration process using virt-v2v may not be fast.
CloudStack will not perform any check of guest OS compatibility for the virt-v2v library, as indicated on: https://access.redhat.com/articles/1351473.
* 4.18:
server: Initial new vpnuser state (#8268)
UI: Removed redundant IP Address Column when create Port forwarding rules (#8275)
UI: Removed ICMP input fields for protocol number from ACL List rules modal (#8253)
server: check if there are active nics before network GC (#8204)