* Update to 4.20.0
* Update to python3
* Upgrade to JRE 17
* Upgrade to Debian 12.4.0
* VR: upgrade to python3
```
for f in `find systemvm/ -name '*.py'`; do   # quote the glob so the shell does not expand it
    if grep "print " $f >/dev/null; then
        2to3-2.7 -w $f          # file still uses print statements: run the full fixer set
    else
        2to3-2.7 -p -w $f       # print is already a function: skip the print fixer
    fi
done
```
* java: Use JRE17 in cloudstack packages and systemvmtemplate
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add --add-opens to JAVA_OPTS in systemd config
* Add --add-opens to JAVA_OPTS in systemd config for usage
* python3: fix "TypeError: a bytes-like object is required, not 'str'"
* python3: fix "ValueError: must have exactly one of create/read/write/append mode"
* Add --add-exports=java.base/sun.security.x509=ALL-UNNAMED for management server
* Use pip3 instead of pip for centos8
* python3: fix "TypeError: write() argument must be str, not bytes"
```
root@r-1037-VM:~# /opt/cloud/bin/passwd_server_ip.py 10.1.1.1
Traceback (most recent call last):
File "/opt/cloud/bin/passwd_server_ip.py", line 201, in <module>
serve()
File "/opt/cloud/bin/passwd_server_ip.py", line 187, in serve
initToken()
File "/opt/cloud/bin/passwd_server_ip.py", line 60, in initToken
f.write(secureToken)
TypeError: write() argument must be str, not bytes
root@r-1037-VM:~#
```
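The general fix for this class of error is to make the data type match the file mode: decode the bytes before writing to a text-mode file, or open the file in binary mode. A minimal sketch of the pattern, with an illustrative token value (not the actual initToken() code):
```
import os

secure_token = os.urandom(16).hex().encode()  # bytes, like the token in the traceback

# Option 1: decode the bytes before writing to a text-mode file
with open("/tmp/token", "w") as f:
    f.write(secure_token.decode())

# Option 2: open the file in binary mode and write the bytes directly
with open("/tmp/token", "wb") as f:
    f.write(secure_token)
```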
* python3: fix "name 'file' is not defined"
```
root@r-1037-VM:~# /opt/cloud/bin/passwd_server_ip.py 10.1.1.1
Traceback (most recent call last):
File "/opt/cloud/bin/passwd_server_ip.py", line 201, in <module>
serve()
File "/opt/cloud/bin/passwd_server_ip.py", line 188, in serve
loadPasswordFile()
File "/opt/cloud/bin/passwd_server_ip.py", line 67, in loadPasswordFile
with file(getPasswordFile()) as f:
NameError: name 'file' is not defined
```
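Python 3 removed the `file` builtin, so the fix is to call `open()` instead. A minimal sketch of the pattern (the path and contents are placeholders, not the real password file):
```
def getPasswordFile():
    return "/tmp/passwords"  # placeholder path for illustration

with open(getPasswordFile(), "w") as f:  # create a demo file so the sketch runs
    f.write("10.1.1.10=secret\n")

# Python 2:  with file(getPasswordFile()) as f:
# Python 3:
with open(getPasswordFile()) as f:
    for line in f:
        print(line.strip())
```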
* python3: fix "TypeError: write() argument must be str, not bytes" (two more files)
* Upgrade jaxb version
* python3: fix more "TypeError: a bytes-like object is required, not str"
* python3: fix "Failed to update password server"
Failed to update password server due to: POST data should be bytes, an iterable of bytes, or a file object. It cannot be of type str.
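In Python 3, urllib requires the POST body as bytes, so the usual fix is to encode the urlencoded string before passing it as `data`. A minimal sketch (the URL and fields are placeholders):
```
from urllib.parse import urlencode
from urllib.request import Request

data = urlencode({"ip": "10.1.1.10", "password": "secret"}).encode("utf-8")  # bytes, not str
req = Request("http://10.1.1.1:8080/", data=data)  # placeholder URL
# urllib.request.urlopen(req) would now send the body without the TypeError above
```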
* python3: fix "bad duration value: ikelifetime=24.0h"
```
Jan 15 13:57:20 systemvm ipsec[3080]: # bad duration value: ikelifetime=24.0h
```
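This is a classic 2to3 leftover: `/` became true division in Python 3, so a lifetime computed in hours renders as `24.0` instead of `24`, and the ipsec daemon rejects `ikelifetime=24.0h`. The fix pattern is integer division:
```
lifetime_seconds = 86400

# Python 2: 86400 / 3600 == 24; Python 3: 86400 / 3600 == 24.0
print("ikelifetime=%sh" % (lifetime_seconds / 3600))   # ikelifetime=24.0h (rejected)
print("ikelifetime=%sh" % (lifetime_seconds // 3600))  # ikelifetime=24h (accepted)
```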
* python3: fix password server "invalid save_password token"
* test: increase retries in test_vpc_vpn.py
* python3: fix passwd_server_ip.py
see error below
```
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: ----------------------------------------
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: Exception occurred during processing of request from ('10.1.1.129', 32782)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: Traceback (most recent call last):
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 650, in process_request_thread
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.finish_request(request, client_address)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 360, in finish_request
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.RequestHandlerClass(request, client_address, self)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 720, in __init__
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.handle()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/http/server.py", line 427, in handle
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.handle_one_request()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/http/server.py", line 415, in handle_one_request
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: method()
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/opt/cloud/bin/passwd_server_ip.py", line 120, in do_GET
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self.wfile.write(password)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: File "/usr/lib/python3.9/socketserver.py", line 799, in write
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: self._sock.sendall(b)
Jan 15 18:51:21 systemvm passwd_server_ip.py[1507]: TypeError: a bytes-like object is required, not 'str'
```
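The socket behind `wfile` only accepts bytes, so the fix pattern is to encode the string before writing it. A minimal sketch of a handler in the same shape as the one in the traceback (the password value is a placeholder):
```
from http.server import BaseHTTPRequestHandler

class PasswordHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        password = "P@ssw0rd"  # str, as looked up by the password server
        self.send_response(200)
        self.end_headers()
        # Python 2 accepted self.wfile.write(password); Python 3 needs bytes:
        self.wfile.write(password.encode("utf-8"))
```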
* python3: fix self.cl.get_router_password in Redundant VRs
```
File "/opt/cloud/bin/cs/CsDatabag.py", line 154, in get_router_password
md5.update(passwd)
TypeError: Unicode-objects must be encoded before hashing"]
```
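hashlib in Python 3 only accepts bytes, so the pattern is to encode the string before hashing. A minimal sketch (`passwd` stands in for the router password handled in CsDatabag.py):
```
import hashlib

passwd = "router-password"           # str
md5 = hashlib.md5()
# md5.update(passwd)                 # raises: Unicode-objects must be encoded
md5.update(passwd.encode("utf-8"))   # encode first, then hash
print(md5.hexdigest())
```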
* scripts: mark multipath scripts as executable
* systemvm template: remove hyperv packages and do not export
* VR: update default RAM size of System VMs/VRs to 512MiB
Before
```
mysql> select id,name,cpu,speed,ram_size,unique_name,system_use from service_offering where name like "System%";
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| id | name | cpu | speed | ram_size | unique_name | system_use |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| 3 | System Offering For Software Router | 1 | 500 | 256 | Cloud.Com-SoftwareRouter | 1 |
| 4 | System Offering For Software Router - Local Storage | 1 | 500 | 256 | Cloud.Com-SoftwareRouter-Local | 1 |
| 5 | System Offering For Internal LB VM | 1 | 256 | 256 | Cloud.Com-InternalLBVm | 1 |
| 6 | System Offering For Internal LB VM - Local Storage | 1 | 256 | 256 | Cloud.Com-InternalLBVm-Local | 1 |
| 7 | System Offering For Console Proxy | 1 | 500 | 1024 | Cloud.com-ConsoleProxy | 1 |
| 8 | System Offering For Console Proxy - Local Storage | 1 | 500 | 1024 | Cloud.com-ConsoleProxy-Local | 1 |
| 9 | System Offering For Secondary Storage VM | 1 | 500 | 512 | Cloud.com-SecondaryStorage | 1 |
| 10 | System Offering For Secondary Storage VM - Local Storage | 1 | 500 | 512 | Cloud.com-SecondaryStorage-Local | 1 |
| 11 | System Offering For Elastic LB VM | 1 | 128 | 128 | Cloud.Com-ElasticLBVm | 1 |
| 12 | System Offering For Elastic LB VM - Local Storage | 1 | 128 | 128 | Cloud.Com-ElasticLBVm-Local | 1 |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
10 rows in set (0.00 sec)
```
New value
```
mysql> select id,name,cpu,speed,ram_size,unique_name,system_use from service_offering where name like "System%";
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| id | name | cpu | speed | ram_size | unique_name | system_use |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
| 3 | System Offering For Software Router | 1 | 500 | 512 | Cloud.Com-SoftwareRouter | 1 |
| 4 | System Offering For Software Router - Local Storage | 1 | 500 | 512 | Cloud.Com-SoftwareRouter-Local | 1 |
| 5 | System Offering For Internal LB VM | 1 | 256 | 512 | Cloud.Com-InternalLBVm | 1 |
| 6 | System Offering For Internal LB VM - Local Storage | 1 | 256 | 512 | Cloud.Com-InternalLBVm-Local | 1 |
| 7 | System Offering For Console Proxy | 1 | 500 | 1024 | Cloud.com-ConsoleProxy | 1 |
| 8 | System Offering For Console Proxy - Local Storage | 1 | 500 | 1024 | Cloud.com-ConsoleProxy-Local | 1 |
| 9 | System Offering For Secondary Storage VM | 1 | 500 | 512 | Cloud.com-SecondaryStorage | 1 |
| 10 | System Offering For Secondary Storage VM - Local Storage | 1 | 500 | 512 | Cloud.com-SecondaryStorage-Local | 1 |
| 11 | System Offering For Elastic LB VM | 1 | 128 | 512 | Cloud.Com-ElasticLBVm | 1 |
| 12 | System Offering For Elastic LB VM - Local Storage | 1 | 128 | 512 | Cloud.Com-ElasticLBVm-Local | 1 |
+----+----------------------------------------------------------+------+-------+----------+----------------------------------+------------+
10 rows in set (0.01 sec)
```
* debian12: fix test_network_ipv6 and test_vpc_ipv6
* python3: remove duplicated imports
* debian12: failed to start Apache2 server (SSLCipherSuite @SECLEVEL=0)
error message
```
[Sat Jan 20 22:51:14.595143 2024] [ssl:emerg] [pid 10200:tid 140417063888768] AH02562: Failed to configure certificate cloudinternal.com:443:0 (with chain), check /etc/ssl/certs/cert_apache.crt
[Sat Jan 20 22:51:14.595234 2024] [ssl:emerg] [pid 10200:tid 140417063888768] SSL Library Error: error:0A00018E:SSL routines::ca md too weak
AH00016: Configuration Failed
```
openssl version
```
root@s-167-VM:~# openssl version -a
OpenSSL 3.0.11 19 Sep 2023 (Library: OpenSSL 3.0.11 19 Sep 2023)
built on: Mon Oct 23 17:52:22 2023 UTC
platform: debian-amd64
options: bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -fzero-call-used-regs=used-gpr -DOPENSSL_TLS_SECURITY_LEVEL=2 -Wa,--noexecstack -g -O2 -ffile-prefix-map=/build/reproducible-path/openssl-3.0.11=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
OPENSSLDIR: "/usr/lib/ssl"
ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-3"
MODULESDIR: "/usr/lib/x86_64-linux-gnu/ossl-modules"
Seeding source: os-specific
CPUINFO: OPENSSL_ia32cap=0x80202001478bfffd:0x0
```
certificate
```
root@s-167-VM:~# keytool -printcert -rfc -file /usr/local/cloud/systemvm/certs/realhostip.crt
-----BEGIN CERTIFICATE-----
MIIFZTCCBE2gAwIBAgIHKBCduBUoKDANBgkqhkiG9w0BAQUFADCByjELMAkGA1UE
BhMCVVMxEDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAY
BgNVBAoTEUdvRGFkZHkuY29tLCBJbmMuMTMwMQYDVQQLEypodHRwOi8vY2VydGlm
aWNhdGVzLmdvZGFkZHkuY29tL3JlcG9zaXRvcnkxMDAuBgNVBAMTJ0dvIERhZGR5
IFNlY3VyZSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTERMA8GA1UEBRMIMDc5Njky
ODcwHhcNMTIwMjAzMDMzMDQwWhcNMTcwMjA3MDUxMTIzWjBZMRkwFwYDVQQKDBAq
LnJlYWxob3N0aXAuY29tMSEwHwYDVQQLDBhEb21haW4gQ29udHJvbCBWYWxpZGF0
ZWQxGTAXBgNVBAMMECoucmVhbGhvc3RpcC5jb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQCDT9AtEfs+s/I8QXp6rrCw0iNJ0+GgsybNHheU+JpL39LM
TZykCrZhZnyDvwdxCoOfE38Sa32baHKNds+y2SHnMNsOkw8OcNucHEBX1FIpOBGp
h9D6xC+umx9od6xMWETUv7j6h2u+WC3OhBM8fHCBqIiAol31/IkcqDxxsHlQ8S/o
CfTlXJUY6Yn628OA1XijKdRnadV0hZ829cv/PZKljjwQUTyrd0KHQeksBH+YAYSo
2JUl8ekNLsOi8/cPtfojnltzRI1GXi0ZONs8VnDzJ0a2gqZY+uxlz+CGbLnGnlN4
j9cBpE+MfUE+35Dq121sTpsSgF85Mz+pVhn2S633AgMBAAGjggG+MIIBujAPBgNV
HRMBAf8EBTADAQEAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAOBgNV
HQ8BAf8EBAMCBaAwMwYDVR0fBCwwKjAooCagJIYiaHR0cDovL2NybC5nb2RhZGR5
LmNvbS9nZHMxLTY0LmNybDBTBgNVHSAETDBKMEgGC2CGSAGG/W0BBxcBMDkwNwYI
KwYBBQUHAgEWK2h0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3Np
dG9yeS8wgYAGCCsGAQUFBwEBBHQwcjAkBggrBgEFBQcwAYYYaHR0cDovL29jc3Au
Z29kYWRkeS5jb20vMEoGCCsGAQUFBzAChj5odHRwOi8vY2VydGlmaWNhdGVzLmdv
ZGFkZHkuY29tL3JlcG9zaXRvcnkvZ2RfaW50ZXJtZWRpYXRlLmNydDAfBgNVHSME
GDAWgBT9rGEyk2xF1uLuhV+auud2mWjM5zArBgNVHREEJDAighAqLnJlYWxob3N0
aXAuY29tgg5yZWFsaG9zdGlwLmNvbTAdBgNVHQ4EFgQUZyJz9/QLy5TWIIscTXID
E8Xk47YwDQYJKoZIhvcNAQEFBQADggEBAKiUV3KK16mP0NpS92fmQkCLqm+qUWyN
BfBVgf9/M5pcT8EiTZlS5nAtzAE/eRpBeR3ubLlaAogj4rdH7YYVJcDDLLoB2qM3
qeCHu8LFoblkb93UuFDWqRaVPmMlJRnhsRkL1oa2gM2hwQTkBDkP7w5FG1BELCgl
gZI2ij2yxjge6pOEwSyZCzzbCcg9pN+dNrYyGEtB4k+BBnPA3N4r14CWbk+uxjrQ
6j2Ip+b7wOc5IuMEMl8xwTyjuX3lsLbAZyFI9RCyofwA9NqIZ1GeB6Zd196rubQp
93cmBqGGjZUs3wMrGlm7xdjlX6GQ9UvmvkMub9+lL99A5W50QgCmFeI=
-----END CERTIFICATE-----
Warning:
The certificate uses the SHA1withRSA signature algorithm which is considered a security risk. This algorithm will be disabled in a future update.
```
it comes from
```
$ openssl x509 -in ./systemvm/agent/certs/realhostip.crt -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 11277268652730408 (0x28109db8152828)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C = US, ST = Arizona, L = Scottsdale, O = "GoDaddy.com, Inc.", OU = http://certificates.godaddy.com/repository, CN = Go Daddy Secure Certification Authority, serialNumber = 07969287
Validity
Not Before: Feb 3 03:30:40 2012 GMT
Not After : Feb 7 05:11:23 2017 GMT
Subject: O = *.realhostip.com, OU = Domain Control Validated, CN = *.realhostip.com
```
* debian12: use ed25519 instead of rsa as ssh-rsa has been deprecated in OpenSSH
on xenserver
```
[root@pr8497-t8906-xenserver-71-xs2 ~]# ssh -i .ssh/id_rsa.cloud -p 3922 169.254.214.153
Warning: Permanently added '[169.254.214.153]:3922' (ECDSA) to the list of known hosts.
Permission denied (publickey).
```
in the CPVM
```
Jan 22 19:31:09 v-1-VM sshd[2869]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Jan 22 19:31:09 v-1-VM sshd[2869]: Connection closed by authenticating user root 169.254.0.1 port 54704 [preauth]
```
ssh-dss (DSA) is not supported either
* debian12: add PubkeyAcceptedAlgorithms=+ssh-rsa to sshd_config
* VR: install python3 packages in case of Debian 11
* pom.xml: exclude systemvm/agent/packages/* in license check
* systemvm: do not patch router/systemvm during startup
this will cause the 4.19 SYSTEM template to not work, but that may be expected
- python3 VS python2 (default)
- openSSL 3.0.1 VS 1.1.1w
- openssh-server 9.1 VS 8.4
* VR: patch router/systemvm if template is debian11
This supports the Debian 11 template by
- reverting the change in systemvm/debian/etc/ssh/sshd_config
- patching VR/systemvms during startup
- installing packages while patching system VMs/routers
* python3 flake: fix E502 the backslash is redundant between brackets
```
../debian/root/health_checks/router_version_check.py:55:70: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:58:61: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:67:71: E502 the backslash is redundant between brackets
../debian/root/health_checks/router_version_check.py:70:60: E502 the backslash is redundant between brackets
../debian/root/health_checks/haproxy_check.py:47:71: E502 the backslash is redundant between brackets
../debian/root/health_checks/haproxy_check.py:48:64: E502 the backslash is redundant between brackets
../debian/root/health_checks/cpu_usage_check.py:43:54: E502 the backslash is redundant between brackets
../debian/root/health_checks/cpu_usage_check.py:46:58: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:31:65: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:42:57: E502 the backslash is redundant between brackets
../debian/root/health_checks/memory_usage_check.py:45:63: E502 the backslash is redundant between brackets
```
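E502 flags a line-continuation backslash inside brackets, where Python already continues the line implicitly; the fix is simply to drop the backslash. An illustrative before/after (not the actual health-check code):
```
# Before: E502 - the backslash is redundant between brackets
message = ("Expected version %s " \
           "but found %s" % ("4.20", "4.19"))

# After: implicit continuation inside the parentheses is enough
message = ("Expected version %s "
           "but found %s" % ("4.20", "4.19"))
```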
* python3 flake: fix E275 missing whitespace after keyword
```
../debian/opt/cloud/bin/cs_firewallrules.py:29:20: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_dhcp.py:27:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_dhcp.py:36:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_guestnetwork.py:33:20: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_guestnetwork.py:35:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_vpnusers.py:37:16: E275 missing whitespace after keyword
../debian/opt/cloud/bin/merge.py:230:11: E275 missing whitespace after keyword
../debian/opt/cloud/bin/merge.py:239:19: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_remoteaccessvpn.py:24:12: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs_site2sitevpn.py:24:12: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs/CsHelper.py:90:15: E275 missing whitespace after keyword
../debian/opt/cloud/bin/cs/CsAddress.py:367:15: E275 missing whitespace after keyword
```
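E275 fires when a keyword like `not` is followed immediately by an opening bracket. An illustrative before/after (not the actual VR code):
```
data = {}

# Before: E275 - missing whitespace after keyword
if not('rules' in data):
    print("no rules")

# After: add the space, or use the more idiomatic 'not in'
if 'rules' not in data:
    print("no rules")
```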
* python3 flake: fix configure.py
```
../debian/opt/cloud/bin/configure.py:24:22: E401 multiple imports on one line
../debian/opt/cloud/bin/configure.py:43:180: E501 line too long (294 > 179 characters)
../debian/opt/cloud/bin/configure.py:46:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:63:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:65:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:72:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/configure.py:310:25: E711 comparison to None should be 'if cond is not None:'
../debian/opt/cloud/bin/configure.py:312:29: E711 comparison to None should be 'if cond is None:'
../debian/opt/cloud/bin/configure.py:378:25: E711 comparison to None should be 'if cond is not None:'
../debian/opt/cloud/bin/configure.py:380:29: E711 comparison to None should be 'if cond is None:'
../debian/opt/cloud/bin/configure.py:490:29: E712 comparison to False should be 'if cond is False:' or 'if not cond:'
../debian/opt/cloud/bin/configure.py:642:16: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:644:18: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/configure.py:1416:1: E305 expected 2 blank lines after class or function definition, found 1
```
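The E711/E712/E721 items are all comparison-style fixes: identity checks for None/False and `isinstance()` instead of comparing types. An illustrative sketch of each rewrite (not the actual configure.py code):
```
cond = None
value = [1, 2]

# E711: comparison to None should use 'is' / 'is not'
if cond is None:
    print("no condition")

# E712: comparison to False should be 'if not cond:'
if not cond:
    print("condition is falsy")

# E721: do not compare types; use isinstance() for instance checks
if isinstance(value, list):
    print("value is a list")
```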
* python3 flake: fix other python files
```
../debian/opt/cloud/bin/vmdata.py:97:12: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/vmdata.py:99:14: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/cs/CsRedundant.py:438:53: E203 whitespace before ':'
../debian/opt/cloud/bin/cs/CsRedundant.py:461:53: E203 whitespace before ':'
../debian/opt/cloud/bin/cs/CsRedundant.py:499:5: E303 too many blank lines (2)
../debian/opt/cloud/bin/cs/CsDatabag.py:189:1: E302 expected 2 blank lines, found 1
../debian/opt/cloud/bin/cs/CsDatabag.py:193:37: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
../debian/opt/cloud/bin/cs/CsHelper.py:118:30: E231 missing whitespace after ','
../debian/opt/cloud/bin/cs/CsHelper.py:119:15: E225 missing whitespace around operator
../debian/opt/cloud/bin/cs/CsHelper.py:127:19: E225 missing whitespace around operator
../debian/opt/cloud/bin/cs/CsAddress.py:324:43: E221 multiple spaces before operator
../debian/opt/cloud/bin/cs/CsVpcGuestNetwork.py:28:1: E302 expected 2 blank lines, found 1
```
* python3 flake: fix CsNetfilter.py
```
../debian/opt/cloud/bin/cs/CsNetfilter.py:226:13: E117 over-indented
../debian/opt/cloud/bin/cs/CsNetfilter.py:233:180: E501 line too long (197 > 179 characters)
../debian/opt/cloud/bin/cs/CsNetfilter.py:241:14: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:242:14: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:247:18: E201 whitespace after '{'
../debian/opt/cloud/bin/cs/CsNetfilter.py:247:74: E202 whitespace before '}'
../debian/opt/cloud/bin/cs/CsNetfilter.py:248:18: E201 whitespace after '{'
```
* systemvm/test: fix sys.path
```
$ bash runtests.sh
/usr/bin/python
Python 3.10.12
Running pycodestyle to check systemvm/python code for errors
Running pylint to check systemvm/python code for errors
Python 3.10.12
pylint 2.12.2
astroid 2.9.3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
Running systemvm/python unit tests
....Device "eth0" does not exist.
.....................
----------------------------------------------------------------------
Ran 25 tests in 0.008s
OK
```
* Revert "systemvm template: remove hyperv packages and do not export"
This reverts commit 4383d59d03.
* debian12: move SQL change to schema-41900to42000.sql
* debian12: update systemvm template version to 4.20 in pom.xml
* pom.xml: fix NPE if templates do not exist on download.cloudstack.org
* debian12: increase default system offering for routers to 384MiB RAM
* CKS: fix addkubernetessupportedversion failed with JRE17
```
marvin.cloudstackException.CloudstackAPIException: Execute cmd: addkubernetessupportedversion failed, due to: errorCode: 530, errorText:Cannot invoke "org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine$State.toString()" because the return value of "com.cloud.api.query.vo.TemplateJoinVO.getState()" is null
```
* python3: revert changes by 2to3 with systemvm/debian/root/health_checks/*.py
* debian12: use ISO/packages on download.cloudstack.org
* VR: Update default ram size to 384
* debian12: fix router_version_check.py after VR live-patch and add health check in test_routers.py
* debian12: fix build error after log4j 2.x merge
* VR: Update default ram size to 512MB (again)
This reverts commits 578dd2b73f and efafa8c4d6.
* systemvmtemplate: Upgrade to Debian 12.5.0
* systemvm template: increase swap to 512MB
* VR: fix health check error due to deprecated SafeConfigParser
warning below
```
root@r-20-VM:~# /opt/cloud/bin/getRouterMonitorResults.sh true
/root/monitorServices.py:59: DeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in Python 3.12. Use ConfigParser directly instead.
parser = SafeConfigParser()
```
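The fix is to use `ConfigParser` directly, since `SafeConfigParser` has been only a deprecated alias since Python 3.2 and is removed in 3.12. A minimal sketch (the section and option names are illustrative, not from monitorServices.py):
```
from configparser import ConfigParser  # was: SafeConfigParser

parser = ConfigParser()
parser.read_string("[health]\ninterval = 60\n")
print(parser.get("health", "interval"))
```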
* test: fix wget does not work in macchinina vms on vmware80u1
fixes error below
```
{Cmd: wget -t 1 -T 1 www.google.com via Host: 10.0.55.186} {returns: ["wget: '/usr/lib/libpcre.so.1' is not an ELF file", "wget: can't load library 'libpcre.so.1'"]}
```
* packaging: add message for VR memory upgrade after packages installation
---------
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Vishesh <vishesh92@gmail.com>
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management
Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities for a tagged resource. Currently, tagged resource limits can be used for the following resource types:
Host limits
- user_vm
- cpu
- memory
Storage limits
- volume
- primary_storage
The following global settings can be used to specify tags for which limits need to be enforced:
Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`
Options for specifying tagged resource limits and viewing tagged resource usage are available in the UI.
Enhances the use of templatetag for VM deployment and template creation.
Adds an option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.
Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API, which when passed returns the suitableforvirtualmachine param in the response.
* Normalize logs
All classes that could inherit their loggers from their parents had their own loggers deleted;
Most loggers didn't have to be static, so most of them were normalized so that they aren't;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
New class DbUpgradeAbstractImpl created so that all Upgraders extend it and inherit its logger
* Upgrade log4j
* fix errors caused by the merge
* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files
* fix sonarcloud bug
* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit
* Readd snmpTrapAppender, remove TestAppender
* Regenerate changes
* regenerate changes
* refactor last custom appender
* fix systemvm configuration xml
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Fix utils pom
* fix some tests
* regenerate changes
* Fix jar being printed on exception
* fix logging in system VMs, fix commands not having log4j2 classpath.
* regenerate changes
* Fix some unwanted renames
* fix end of file
* regenerate changes
* regenerate changes
* fix merge error
* regenerate changes
* fix tests
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* readd reload4j to tungsten as juniper depends on it
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* re-add reload4j dependency to network-contrail, as juniper depends on it
* regenerate changes
* regenerate changes
* regenerate changes
* fix typo
* regenerate changes
* regenerate changes
* Fix end of files
* regenerate changes
* add log4j2 to cloud-utils-SHADED.jar
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* fix some tests
* Regenerate changes
* Regenerate changes
* fix test
* Regenerate changes
* Regenerate changes
* Use dualzones for ci github actions
* Update advdualzone.cfg to be similar to advanced.cfg & fixup test_metrics_api.py
* Fixup e2e tests for running with multiple zones
* Add e2e tests for listing of accounts, disk_offerings, domains, hosts, service_offerings, storage_pools, volumes
* Fixup
* another fixup
* Add test for listing volumes with tags filter
* Add check for existing volumes in test_list_volumes
* Wait for volumes to be deleted on cleanup
* Filter out volumes in Destroy state before checking the count of volumes
* CKS: retry if unable to drain node or unable to upgrade k8s node
I tried the CKS upgrade 16 times; 11 of the 16 upgrades succeeded.
2 of 16 upgrades failed due to
```
error: unable to drain node "testcluster-of7974-node-18c8c33c2c3" due to error:[error when evicting pods/"cloud-controller-manager-5b8fc87665-5nwlh" -n "kube-system": Post "https://10.0.66.18:6443/api/v1/namespaces/kube-system/pods/cloud-controller-manager-5b8fc87665-5nwlh/eviction": unexpected EOF, error when evicting pods/"coredns-5d78c9869d-h5nkz" -n "kube-system": Post "https://10.0.66.18:6443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-h5nkz/eviction": unexpected EOF], continuing command...
```
3 of 16 upgrades failed due to
```
Error from server: error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "kubernetes-dashboard", Namespace: "kubernetes-dashboard"
from server for: "/mnt/k8sdisk//dashboard.yaml": etcdserver: leader changed
```
* CKS: remove tests of creating/deleting HA clusters as they are covered by the upgrade test
* Update PR 8402 as suggested
* test: remove CKS cluster if fail to create or verify
* veeam: detach only the restored volume during backup restore
Steps to reproduce the issue
1. create a VM (A) with ROOT and DATA disk
2. assign to a backup offering
3. create backup
4. create another VM (B)
5. restore the DATA disk of VM A, and attach to VM B
6. when the operation is done, check the datastore
Without this change, the ROOT image is not removed and is left over on the datastore.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-dfb6f21c-a941-49db-9963-4f0286a17dac
total 1784840
-rw------- 1 root root 5242880000 Jan 24 09:23 ROOT-722_2-flat.vmdk
-rw------- 1 root root 499 Jan 24 09:23 ROOT-722_2.vmdk
```
With this change, the whole temporary VM is destroyed.
```
[root@ref-trl-5933-v-Mr8-wei-zhou-esxi2:/vmfs/volumes/5f60667d-18d828eb] ls -l /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2
ls: /vmfs/volumes/5f60667d-18d828eb/CS-RSTR-734bee3b-640c-4ff0-a34b-bc45358565b2: No such file or directory
```
* veeam: fix wrong disk size in debug message
* veeam: sync backup repository after operations are done
Some operations that actually succeeded raised exceptions, due to the following error:
```
2024-01-19 10:59:52,846 DEBUG [o.a.c.b.v.VeeamClient] (API-Job-Executor-42:ctx-716501bb job-4373 ctx-2359b76d) (logid:b5e19a17) Veeam response for PowerShell commands [PowerShell Import-Module Veeam.Backup.PowerShell -WarningAction SilentlyContinue;$restorePoint = Get-VBRRestorePoint ^| Where-Object { $_.Id -eq '1d99106a-b5c8-4a1e-958d-066a987caa5f' };if ($restorePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false;$repo = Get-VBRBackupRepository;Sync-VBRBackupRepository -Repository $repo;} else { ; Write-Output 'Failed to delete'; Exit 1;}] is: [^M
Restore Type Job Name State Start Time End Time Description ^M
------------ -------- ----- ---------- -------- ----------- ^M
ConfResynchronize Configuration Dat... Starting 19/01/2024 10:59:52 01/01/1900 00:00:00 ^M
^M
^M
Remove-VBRRestorePoint : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. ^M
Contact Microsoft Customer Support Services.^M
At line:1 char:196^M
+ ... orePoint) { Remove-VBRRestorePoint -Oib $restorePoint -Confirm:$false ...^M
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^M
+ CategoryInfo : ReadError: (:) [Remove-VBRRestorePoint], HostException^M
+ FullyQualifiedErrorId : ReadConsoleOutput,Veeam.Backup.PowerShell.Cmdlets.RemoveVBRRestorePoint^M
^M
].
```
* veeam: fix unable to detach volume when restore backup and attach to vm then detach the volume
It also happened when destroying the original or backup VM
```
2024-01-24 10:10:03,401 ERROR [c.c.s.r.VmwareStorageProcessor] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) Failed to detach volume!
java.lang.RuntimeException: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
at com.cloud.hypervisor.vmware.mo.DatastoreMO.moveDatastoreFile(DatastoreMO.java:290)
at com.cloud.storage.resource.VmwareStorageLayoutHelper.syncVolumeToRootFolder(VmwareStorageLayoutHelper.java:241)
at com.cloud.storage.resource.VmwareStorageProcessor.attachVolume(VmwareStorageProcessor.java:2150)
at com.cloud.storage.resource.VmwareStorageProcessor.dettachVolume(VmwareStorageProcessor.java:2408)
at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:174)
at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:71)
at com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:589)
at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2024-01-24 10:10:03,402 INFO [c.c.h.v.u.VmwareHelper] (DirectAgent-74:ctx-95b24ac7 10.0.35.53, job-25995/job-25996, cmd: DettachCommand) (logid:7260ffb8) [ignored]failed to get message for exception: Unable to access file [de52fdd3386b3d67b27b3960ecdb08f4] i-2-723-VM/7c2197c129464035bab062edec536a09-flat.vmdk
```
* vmware: create restored volume with new UUID and attach to VM
This PR fixes several issues in the testing of Veeam 11 and Veeam 12
- Import Veeam.Backup.PowerShell and silently ignore the warning messages
- Fix issue when assigning a VM to backup offerings, caused by the separator (\r\n)
- Fix authorization failure in Veeam 12a, because v1_4 is not supported in Veeam 12a any more
- Fix exception if the backup name has a space
- Fix backup metrics in Veeam 12, because the PowerShell command does not return the values needed
- Fix incorrect datetime value, because the PowerShell command returns a datetime which is not supported in Java
- Fix issue during backup restoration if the VM has both ROOT and DATA disks.
This PR also includes the following updates
- Add integration test test/integration/smoke/test_backup_recovery_veeam.py
- Make some UI changes
- Add zone setting backup.plugin.veeam.version. If it is not set, CloudStack will get veeam version via powershell commands.
- Add zone setting backup.plugin.veeam.task.poll.interval and backup.plugin.veeam.task.poll.max.retry
With this change, a fix is added for failures seen with test_08_migrate_vm or other migration-related tests because the target host is in `Connecting` state,
#8356 (comment)
#8374 (comment)
and more
Failures seen in #7344 were debugged; they occurred because one of the hosts was in Alert state, which made VM deployment with an affinity group fail.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR updates the conserve mode of the default VPC tier offering to conserve_mode=1
so we can create both port forwarding and load balancing rules on a public IP in VPC tiers.
This fixes #8313
This PR fixes the test failures with CKS HA-cluster upgrade.
In production, the CKS HA cluster should have at least 3 control VMs as well.
The etcd cluster requires 3 members to achieve reliable HA. The etcd daemon in the control VMs uses the RAFT protocol to determine node roles. During an upgrade of CKS with HA, etcd becomes unreliable if there are only 2 control VMs.
This PR adds the argument 'ipaddress' to the disassociateIpAddress API. An IP address can be disassociated by directly giving the address instead of the ID.
Fixes: #8125
This pull request (PR) implements a Distributed Resource Scheduler (DRS) for a CloudStack cluster. The primary objective of this feature is to enable automatic resource optimization and workload balancing within the cluster by live migrating the VMs as per configuration.
Administrators can also execute DRS manually for a cluster, using the UI or the API.
Adds support for two algorithms - condensed & balanced. Algorithms are pluggable allowing ACS Administrators to have customized control over scheduling.
Implementation
There are three top level components:
Scheduler
A timer task which:
Generates DRS plans for clusters
Processes DRS plans
Removes old DRS plan records
DRS Execution
We go through each VM in the cluster and use the specified algorithm to check whether DRS is required and to calculate the cost, benefit & improvement of migrating that VM to another host in the cluster. On the basis of cost, benefit & improvement, the best migration is selected for the current iteration and the VM is migrated. The maximum number of iterations (live migrations) possible on the cluster is defined by drs.iterations, a percentage (a value between 0 and 1) of the total number of workloads.
Algorithm
Every algorithm implements two methods:
needsDrs - to check if DRS is required for the cluster
getMetrics - to calculate the cost, benefit & improvement of migrating a VM to another host.
Algorithms
Condensed - Packs all the VMs on minimum number of hosts in the cluster.
Balanced - Distributes the VMs evenly across hosts in the cluster.
Algorithms use drs.imbalance to decide the amount of imbalance to allow in the cluster.
APIs Added
listClusterDrsPlan
id - ID of the DRS plan to list
clusterid - to list plans for a cluster id
generateClusterDrsPlan
id - cluster id
iterations - The maximum number of iterations in a DRS job, defined as a percentage (a value between 0 and 1) of the total number of workloads. Defaults to the value of the cluster's drs.iterations setting.
executeClusterDrsPlan
id - ID of the cluster for which DRS plan is to be executed.
migrateto - This parameter specifies the mapping between a VM and a host to migrate that VM. Format of this parameter: migrateto[vm-index].vm=<uuid>&migrateto[vm-index].host=<uuid> (see the sketch below).
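As a sketch of that indexed-map format, the flattened request parameters for migrating two VMs might look like this (hypothetical helper, placeholder UUIDs):
```
def build_execute_drs_params(cluster_id, migrations):
    # Flatten [(vm, host), ...] into the migrateto[vm-index] map format
    params = {"command": "executeClusterDrsPlan", "id": cluster_id}
    for i, (vm, host) in enumerate(migrations):
        params["migrateto[%d].vm" % i] = vm
        params["migrateto[%d].host" % i] = host
    return params

print(build_execute_drs_params(
    "cluster-uuid",
    [("vm-1-uuid", "host-a-uuid"), ("vm-2-uuid", "host-b-uuid")],
))
```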
Config Keys Added
ClusterDrsPlanExpireInterval
Key drs.plan.expire.interval
Scope Global
Default Value 30 days
Description The interval in days after which old DRS records will be cleaned up.
ClusterDrsEnabled
Key drs.automatic.enable
Scope Cluster
Default Value false
Description Enable/disable automatic DRS on a cluster.
ClusterDrsInterval
Key drs.automatic.interval
Scope Cluster
Default Value 60 minutes
Description The interval in minutes after which a periodic background thread will schedule DRS for a cluster.
ClusterDrsIterations
Key drs.max.migrations
Scope Cluster
Default Value 50
Description Maximum number of live migrations in a DRS execution.
ClusterDrsAlgorithm
Key drs.algorithm
Scope Cluster
Default Value condensed
Description DRS algorithm to execute on the cluster. This PR implements two algorithms - balanced & condensed.
ClusterDrsLevel
Key drs.imbalance
Scope Cluster
Default Value 0.5
Description Percentage (as a value between 0.0 and 1.0) of imbalance allowed in the cluster. 1.0 means no imbalance
is allowed and 0.0 means full imbalance is allowed.
ClusterDrsMetric
Key drs.imbalance.metric
Scope Cluster
Default Value memory
Description The cluster imbalance metric to use when checking the drs.imbalance.threshold. Possible values are memory and cpu.
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.
Copy functionality is similar to template copy. The source zone acts as the web server from where the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The response for copySnapshot returns zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.
In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot will be taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.
As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.
`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.
Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.
`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
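As a sketch of how the new parameter composes (placeholder UUIDs; `zoneids` is the new parameter described above), a createSnapshot request that also copies the snapshot to two additional zones might look like:
```
# Hypothetical request sketch: the snapshot is taken in the volume's base zone
# and then copied to the zones listed in 'zoneids'.
create_snapshot_params = {
    "command": "createSnapshot",
    "volumeid": "volume-uuid",
    "zoneids": "zone2-uuid,zone3-uuid",
}
print(create_snapshot_params)
```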
----
New API added
`copySnapshot`
Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form
Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
* marvin,test: fix directdownload template checksum test
When deployment of a VM with a wrong-checksum template fails, the VM may be left in Error state. This PR adds code to delete such a VM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove unnecessary logs
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #8034
Adds the following test for a backed-up snapshot (original template and VM deleted beforehand):
- Create volume from snapshot
- Create a template from the snapshot and deploy a VM using it
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes few issues:
- check the IP range of the new network instead of the network CIDR, so that two networks can use the same CIDR without IP conflicts.
- Private gateways: return the VLAN number only for root admins
- when updating an isolated network, check the new guest VM CIDR and the IPs of networks/VPC gateways associated with it
* Don't allow inadvertent deletion of hidden details via API
* Update VM details unit test ensuring system/hidden details not removed
* Update test/integration/component/test_update_vm.py
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
* VMware: add support for 8.0b (8.0.0.2)
* VMware 8: add new guest os mappings in VirtualMachineGuestOsIdentifier
The full list can be found at https://developer.vmware.com/apis/1355/vsphere
* VMware: get guest os mappings of parent version
* VMware8: remove guest os mappings for 8.0.0.2
* VMware8: fix code smells
* vmware: remove annotations in VmwareVmImplementerTest which caused 0.0% code coverage
* VMware8: add a unit test case
* VMware: add support for 8.0c (8.0.0.3)
* VMware8: move to CloudStackVersion.getVMwareParentVersion
* VMware: add support for 8.0u1 (8.0.1.0)
* Copy engine/schema/src/main/java/com/cloud/upgrade/GuestOsMapper.java from PR 6979
* Copy engine/schema/src/main/java/com/cloud/storage/dao/GuestOSHypervisorDao.java from PR 6979
* VMware: ignore the last number in VMware versions
* VMware: copy guest os mapping from 8.0 to 8.0.1
* VMware: add unit tests in VmwareVmImplementerTest.java
* Copy engine/schema/src/test/java/com/cloud/upgrade/GuestOsMapperTest.java from PR 6979
* VMware8: retry vm poweron if fails due to exception "File system specific implementation of Ioctl[file] failed"
This fixes a weird issue on VMware 8. When powering on a VM, it sometimes fails due to this error:
```
2023-04-27 07:04:43,207 ERROR [c.c.h.v.r.VmwareResource] (DirectAgent-442:ctx-cdd42b03 10.0.32.133, job-105/job-106, cmd: StartCommand) (logid:8a24a607) StartCommand failed due to [Exception: java.lang.RuntimeException
Message: File system specific implementation of Ioctl[file] failed
].
java.lang.RuntimeException: File system specific implementation of Ioctl[file] failed
at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
at com.cloud.hypervisor.vmware.mo.VirtualMachineMO.powerOn(VirtualMachineMO.java:288)
```
in vmware.log on the ESXi host, it shows
```
2023-04-27T09:20:41.713Z In(05)+ vmx - Power on failure messages: File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to lock the file
2023-04-27T09:20:41.713Z In(05)+ vmx - Cannot open the disk '/vmfs/volumes/7b29c876-ac102328/i-2-167-VM/ROOT-167.vmdk' or one of the snapshot disks it depends on.
2023-04-27T09:20:41.713Z In(05)+ vmx - Module 'Disk' power on failed.
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to start the virtual machine.
```
There is a KB article for it, but I still do not know why it happens or how to fix it:
https://kb.vmware.com/s/article/1004232
* VMware: extract to method powerOnVM
* vmware: fix mistake in logs
* vmware8: use curl instead of wget to fix test failures
```
Traceback (most recent call last):
File "/root/test_internal_lb.py", line 555, in test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80
self.execute_internallb_roundrobin_tests(vpc_offering)
File "/root/test_internal_lb.py", line 641, in execute_internallb_roundrobin_tests
client_vm, applb.sourceipaddress, max_http_requests)
File "/root/test_internal_lb.py", line 497, in run_ssh_test_accross_hosts
(e, clienthost.public_ip))
AssertionError: list index out of range: SSH failed for VM with IP Address: 10.0.52.187
```
and
```
sshClient: DEBUG: {Cmd: /usr/bin/wget -T3 -qO- --user=admin --password=password http://10.1.2.253:8081/admin?stats via Host: 10.0.52.188} {returns: ["/usr/bin/wget: '/usr/lib/libpcre.so.1' is not an ELF file", "/usr/bin/wget: can't load library 'libpcre.so.1'"]}
```
* VMware: correct guest OS names in hypervisor mappings for VMware 8.0
el9 and variants were introduced by https://github.com/apache/cloudstack/pull/7059
they are supported with guest os identifiers since VMware 8.0
see https://vdc-repo.vmware.com/vmwb-repository/dcr-public/c476b64b-c93c-4b21-9d76-be14da0148f9/04ca12ad-59b9-4e1c-8232-fd3d4276e52c/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html
* VMware: add Ubuntu 20.04 and 22.04 support for vmware 7.0+
* PR7380: only add guest os mappings for Ubuntu 20.04
* PR7380: Correct RHEL9 guest os names and others for VMware 8.0
* PR7380: correct guest os names on 8.0.0.1 as well
* PR7380: remove Windows 12 and Windows Server 2025 which are not released yet
* 4.18:
server: remove registered userdata when cleanup an account (#7777)
server: Use max secondary storage defined on the account during upload (#7441)
test: upgrade kubernetes versions to 1.25.0/1.26.0 (#7685)
kvm: Added VNI Devices as normal bridge slave devs (#7836)
noVNC: fix JP keyboard on vmware7+ which uses websocket URL (#7694)
* 4.18:
UI: allow new keys for VM details (#7793)
Refactoring StorPool's smoke tests (#7392)
UI: decode userdata in EditVM dialog (#7796)
packaging: unalias cp before package upgrade (#7722)
make NoopDbUpgrade do a systemvm template check (#7564)
UI unit test: fix expected values (#7792)
* Removed the hardcoded StorPool endpoint from tests
- removed the hardcoded endpoint of the StorPool primary storage from tests
- added the git commit information into the maven build
* Convert indents to spaces
* update git-commit-id-plugin version
Fixes the case of appending userdata when both template and VM data are either shellscript or cloudconfig
Fixes an error when appending gzip userdata
Fixes the case where manual userdata text from the VM is not decoded/encoded correctly.
Fixes the case of appending multipart data when both template and VM data contain the same format types.
Refactor - moved the validateUserData method to the UserDataManager class
Refactored the userdata test to check the resultant multipart userdata thoroughly
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
There are tools like cluster-api which create and manage kubernetes clusters on CloudStack. This PR adds the option to add unmanaged kubernetes clusters which are not managed by the CKS plugin. This helps provide a consolidated view of unmanaged clusters on CloudStack. The changes done make sure that operations for managed clusters are not executed for unmanaged clusters.
Two new APIs have also been added:
1. addVirtualMachinesToKubernetesCluster - to add VMs to unmanaged clusters.
2. removeVirtualMachinesFromKubernetesCluster - to remove VMs from unmanaged clusters.
Two APIs have been updated:
1. createKubernetesCluster - made KUBERNETES_VERSION_ID, SERVICE_OFFERING_ID, SIZE as not required for unmanaged clusters. Add an additional parameter, managed, which is true by default.
2. listKubernetesClusters - Add a parameter managed to filter on managed field.
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Guest OS mapping improvements
- Checks the OS mapping name in hypervisor (VMware, XenServer)
- Displays guest OS mappings in UI
* Added API getHypervisorGuestOsNames to list the guest OS names in the hypervisor, and code improvements
* Some static analysis fixes
* Removed commented code in listview
* Guest OS list
* UI changes for adding guest os and mappings
* Added guest os mappings in guest os form
* Added new filter to guest os mapping
* Name and description changes
* VMWare Host and cluster MO unit tests
* CheckGuestOsMapping command and answer unit tests
* GetHypervisorGuestOsNames command and answer unit tests
* VmwareResource unitests
* GuestOsMapper unittests
* icon changes
* Addressed review comments
* Renaming fixes
* Removed comments
* marvin tests for guest os operations
* Added marvin tests for OS mappings
* Document links and UI improvements
* Added deduplication for the list guest OS API
* Fixed linter failure
* Few bug fixes and UI changes
* Few improvements
* Addressed code smells
* Fixed UI issues after rebase
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Supported Virtual machine operations:
- live migration of VM to another host
- virtual machine snapshots (group snapshot without memory)
- revert VM snapshot
- delete VM snapshot
Supported Volume operations:
- attach/detach volume
- live migrate volume between two StorPool primary storages
- volume snapshot
- delete snapshot
- revert snapshot
* Live storage migration of volume in scaleIO within same storage scaleio cluster
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* working blockcopy api in libvirt
* Checking block copy status
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check if volume belongs to same or different storage scaleio cluster
* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unittests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets if they do not exist during volume migrations. Secrets are getting cleared on package upgrades.
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion
* Refactor test and change IP range
* countdown from 254 to allow multiple pseudo public nets counting up from 0
* cleanup
* location of asserts improved
---------
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
This PR adds 'name' to the updateProject API to allow renaming the 'name' field, in addition to the description, from both the API and the UI.
Fixes: #7107
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rahul Agarwal <rahul.agarwal@shapeblue.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* UI changes for management server comments
* Added support for mgmt server comments in annotations framework
* Added test for mgmt server annotation
* changed annotation to be unique for mgmt server test
* Auto Enable Disable KVM hosts
* Improve health check result
* Fix corner cases
* Script path refactor
* Fix sonar cloud reports
* Fix last code smells
* Add marvin tests
* Fix new line on agent.properties to prevent host add failures
* Send alert on auto-enable-disable and add annotations when the setting is enabled
* Address reviews
* Add a reason for enabling or disabling a host when the automatic feature is enabled
* Fix comment on the marvin test description
* Fix for disabling the feature if the admin has manually updated the host resource state before any health check result
* test: add smoke test for user role for userdata crud api
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR allows securing the console access through CloudStack to the virtual machines running on KVM. The secure access is achieved through the generated certificates for the CA Framework in CloudStack, that provides mutual TLS connections between agents. These certificates are used to also secure the connection between the console proxies and the VNC ports for VM console access.
This feature is only supported on the KVM hypervisor
Design Document: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Secure+KVM+VNC+connection+using+the+CA+framework
When using admin=True in account creation with a domain, it creates a domain admin. It would be better to run tests as a normal user.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #6983
In the case of multiple classes for an API command, ApiServer returns an API command class for the User role only when ResponseView is set to Restricted in the annotation.
This PR sets the Restricted ResponseView for the ListVMsMetrics class. It also adds a smoke test with a User role account for the listVirtualMachinesMetrics API.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
PR #6964 added some changes for the VM import test which are causing exceptions on non-VMware environments. This PR fixes those errors and correctly skips unmanage and import tests for non-VMware environments.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Add smoke test to prevent any regression such as #6951.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #6701
When volume migration is initiated by the system, the account check is not needed.
Introduces a new global setting - allow.diskoffering.change.during.scale.vm. This determines whether to allow or disallow disk offering change for root volume during scaling of a stopped or running VM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: Rohit Yadav <rohityadav89@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
```
flake8 --select=W291,W292,W293,W391 .
./test/integration/smoke/test_register_userdata.py:766:1: W391 blank line at end of file
./test/integration/component/test_network_vpc_custom_dns.py:732:1: W391 blank line at end of file
```
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.
In some cases cloud customers may still prefer to maintain their own guest-level volume encryption, if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience in terms of running volume encryption for guests without the user having to manage keys, deal with key servers and guest booting being dependent on network connectivity to them (i.e. Tang), etc, especially in cases where users are attaching/detaching data disks and moving them between VMs occasionally.
The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.
This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.
NOTE: While not required, operations can be significantly sped up by ensuring that the `rng-tools` package and service are installed and running on the management server and hypervisors. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, but otherwise only affects users who actually use the encryption feature. If you find tests or volume creation blocking on encryption, check this first.
### Management Server
##### API
* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM. This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and accepts an 'encrypt' boolean to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and accepts an 'encrypt' boolean to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off; instead, the offering should be referenced. This follows the same pattern as other disk-offering-based settings such as the IOPS of the volume. A sample call using the new 'encrypt' parameter is sketched below.
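As an illustration only, here is a minimal, hypothetical sketch of invoking listDiskOfferings with encrypt=true through the standard signed CloudStack HTTP API; the endpoint, API key, and secret key are placeholders, not values from this PR:
```
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ListEncryptedOfferings {
    public static void main(String[] args) throws Exception {
        String endpoint = "http://mgmt-server:8080/client/api"; // placeholder
        String apiKey = "<api-key>";                            // placeholder
        String secretKey = "<secret-key>";                      // placeholder

        // encrypt=true filters the list down to offerings that support encryption
        Map<String, String> params = new TreeMap<>();
        params.put("command", "listDiskOfferings");
        params.put("encrypt", "true");
        params.put("response", "json");
        params.put("apikey", apiKey);

        // Standard CloudStack request signing: sort parameters, lowercase the
        // query string, HMAC-SHA1 it with the secret key, then Base64-encode.
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (query.length() > 0) query.append('&');
            query.append(e.getKey()).append('=')
                 .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String signature = Base64.getEncoder().encodeToString(
                mac.doFinal(query.toString().toLowerCase().getBytes(StandardCharsets.UTF_8)));

        String url = endpoint + "?" + query + "&signature="
                + URLEncoder.encode(signature, StandardCharsets.UTF_8);
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // each offering reports its encryption support
    }
}
```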
##### Volume functions
A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.
Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.
Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume
Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).
##### Primary Storage Support
For storage developers, adding encryption support involves:
1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during storage allocation to match volumes that request encryption to storage that supports it.
2. Implementing the encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that request encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume, or (as is the case with the KVM storage types) as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases if encryption is requested; a condensed sketch of this idea follows this list.
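A hypothetical, simplified sketch of the driver-side hook; `VolumeInfo` and `StorageApiClient` below are invented stand-ins for this example, not the actual CloudStack types:
```
import java.util.Arrays;

class VolumeInfo {
    byte[] passphrase; String path; long size; // stand-in data object
    byte[] getPassphrase() { return passphrase; }
    String getPath() { return path; }
    long getSize() { return size; }
}

class StorageApiClient {
    // stand-in for a storage backend that exposes a native encryption flag
    void createVolume(String path, long size, boolean encrypted) {
        System.out.printf("create %s (%d bytes, encrypted=%b)%n", path, size, encrypted);
    }
}

public class ExampleDataStoreDriver {
    private final StorageApiClient storage = new StorageApiClient();

    // Called for volume lifecycle operations; the data object carries a
    // passphrase only when the offering requested encryption.
    public void createVolume(VolumeInfo volume) {
        byte[] passphrase = volume.getPassphrase();
        try {
            storage.createVolume(volume.getPath(), volume.getSize(), passphrase != null);
        } finally {
            if (passphrase != null) {
                Arrays.fill(passphrase, (byte) 0); // clear key material after use
            }
        }
    }

    public static void main(String[] args) {
        VolumeInfo v = new VolumeInfo();
        v.path = "/primary/vol-1"; v.size = 10737418240L;
        v.passphrase = "example".getBytes();
        new ExampleDataStoreDriver().createVolume(v); // prints encrypted=true
    }
}
```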
##### Scheduling
For the KVM implementations specified above, we are dependent on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup that looks for functioning `cryptsetup` and support in `qemu-img`. This is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.
The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption. This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities require a host to support encryption (for example, a snapshot backup is a simple file copy), which is why the interface has been modified to let the storage driver decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.
VM scheduling has also been modified. When a VM start is requested, if any volume that requires encryption is attached, it will filter out hosts that don't support encryption.
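A minimal sketch of that filtering rule, with an invented `Host` record standing in for the real scheduler types:
```
import java.util.List;
import java.util.stream.Collectors;

public class EncryptionHostFilter {
    record Host(String name, boolean encryptionSupported) {}

    // When any attached volume requires encryption, drop hosts that
    // did not advertise encryption support at agent startup.
    static List<Host> filter(List<Host> candidates, boolean vmNeedsEncryption) {
        if (!vmNeedsEncryption) {
            return candidates; // no constraint, all hosts remain eligible
        }
        return candidates.stream()
                .filter(Host::encryptionSupported)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Host> hosts = List.of(new Host("kvm-1", true), new Host("kvm-2", false));
        System.out.println(filter(hosts, true)); // only kvm-1 remains
    }
}
```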
##### DB Changes
A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database. The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.
#### KVM Agent
For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process, and is decrypted before the block device is visible to the guest. This may not be necessary in order to implement encryption for /your/ storage pool type - maybe you have a kernel driver that decrypts before the block device is presented to the system, or something like that. However, it seemed like the simplest, common place to terminate the encryption, and it provides the lowest surface area for decrypted guest data.
For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based storage (currently just ScaleIO), the `cryptsetup` utility is used to format the block device as LUKS for data disks, while `qemu-img` with its LUKS support is used for template copy.
Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:
1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.
2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the cloudstack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable`, and when it is closed, it is deleted. This allows us to try-with-resources any volume operations and get the KeyFile removed regardless of the outcome; a simplified sketch follows.
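The sketch below is a simplified stand-in written for this description, not the agent's actual `KeyFile` class:
```
import java.io.Closeable;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Stand-in KeyFile: the passphrase is written to a file for qemu-img to read,
// and deleted when the try-with-resources block ends.
class KeyFile implements Closeable {
    private final Path path;

    KeyFile(byte[] passphrase) throws IOException {
        // The real agent writes under its configured tmpfs; a temp file is used here.
        this.path = Files.createTempFile("keyfile", ".key");
        Files.write(path, passphrase);
    }

    Path getPath() { return path; }

    @Override
    public void close() throws IOException {
        Files.deleteIfExists(path); // key material never outlives the operation
    }
}

public class KeyFileDemo {
    public static void main(String[] args) throws IOException {
        byte[] passphrase = "example-passphrase".getBytes(StandardCharsets.UTF_8);
        try (KeyFile key = new KeyFile(passphrase)) {
            System.out.println("pass qemu-img the secret at " + key.getPath());
        } // KeyFile.close() runs here, removing the file regardless of outcome
    }
}
```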
In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`. These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.
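For reference, a sketch of the style of invocation these flags enable; the paths, secret id, and size below are illustrative examples, not values from the code:
```
import java.util.List;

public class QemuImgLuksExample {
    public static void main(String[] args) {
        List<String> create = List.of(
            "qemu-img", "create", "-f", "qcow2",
            // --object passes the passphrase as a named secret read from a key file
            "--object", "secret,id=sec0,file=/run/cloudstack/keyfile.key",
            // the encryption options reference that secret by id
            "-o", "encrypt.format=luks,encrypt.key-secret=sec0",
            "/var/lib/libvirt/images/encrypted-volume.qcow2", "10G");
        System.out.println(String.join(" ", create));
    }
}
```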
It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, for supported encryption format strings the `cryptsetup` utility has `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.
Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice Libvirt and Qemu only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume: as they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and then available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default; old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.
Every version of cryptsetup and qemu-img tested on encryption-supporting variants of EL7 and Ubuntu uses the XTS-AES 256 cipher, which is the leading industry standard and a widely used cipher today (e.g. BitLocker and FileVault).
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
This PR allows the cloud admin to set either a global or domain-specific value, "metadata.allow.expose.domain"; when set, this allows the VM to see the name and ID of the immediate domain that contains the VM in instance metadata. This can be useful for a variety of things, such as bootstrapping VM configuration and access according to domain.
This PR also deletes the CloudZonesNetworkElement because it isn't referred to anywhere, and there was initially some confusion as to whether this code needed to be updated when extending metadata. If it needs to be kept we can remove that delete from the PR.
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
This PR addresses parallel resource allocation as a generalization of the problem and solution described in #6644. Instead of a global lock on the resources, a reservation record is created, which is included in the resource count check in the ResourceLimitService/ResourceLimitManagerImpl. As a convenience, a CheckedReservation is created. This is an implementation of AutoCloseable and can be used as a guard in a try-with-resources fashion. The close method of the CheckedReservation will delete the reservation record.
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
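A minimal sketch of the guard pattern described above, with stand-in DAO and exception types invented for the example:
```
import java.util.concurrent.atomic.AtomicLong;

class ResourceLimitExceededException extends Exception {
    ResourceLimitExceededException(String msg) { super(msg); }
}

class ReservationDao {
    final AtomicLong reserved = new AtomicLong(); // stand-in for the DB table
    long persist(long amount) { return reserved.addAndGet(amount); }
    void remove(long amount) { reserved.addAndGet(-amount); }
}

// The constructor writes a reservation record and checks the limit;
// close() removes the record, so try-with-resources guards the allocation.
class CheckedReservation implements AutoCloseable {
    private final ReservationDao dao;
    private final long amount;

    CheckedReservation(ReservationDao dao, long amount, long limit)
            throws ResourceLimitExceededException {
        this.dao = dao;
        this.amount = amount;
        // the reservation itself counts toward the limit check, so two
        // parallel allocations cannot both slip under the limit
        if (dao.persist(amount) > limit) {
            dao.remove(amount);
            throw new ResourceLimitExceededException("limit exceeded");
        }
    }

    @Override
    public void close() { dao.remove(amount); } // reservation record deleted
}

public class ReservationDemo {
    public static void main(String[] args) throws Exception {
        ReservationDao dao = new ReservationDao();
        try (CheckedReservation r = new CheckedReservation(dao, 1, 10)) {
            // allocate the actual resource while the reservation is held
        }
    }
}
```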
This PR creates a new API, createConsoleAccess, to create a VM console URL, allowing other UI implementations to connect. To avoid replay attacks, console access is enhanced to use a one-time token per session.
New configuration added:
consoleproxy.extra.security.validation.enabled: Enable/disable extra security validation for console proxy using a token
Documentation PR: apache/cloudstack-documentation#284
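As an illustration of the one-time-token idea, a small self-contained sketch follows; the class and method names are invented for this example and are not the console proxy's actual code:
```
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Each token is random, tied to a single session, and invalidated on first
// use, so a captured console URL cannot be replayed.
public class OneTimeTokenStore {
    private final SecureRandom random = new SecureRandom();
    private final Set<String> activeTokens = ConcurrentHashMap.newKeySet();

    public String issueToken() {
        byte[] raw = new byte[32];
        random.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        activeTokens.add(token);
        return token;
    }

    // remove() is atomic, so a token can be redeemed exactly once
    public boolean redeem(String token) {
        return activeTokens.remove(token);
    }

    public static void main(String[] args) {
        OneTimeTokenStore store = new OneTimeTokenStore();
        String t = store.issueToken();
        System.out.println(store.redeem(t)); // true  (first use)
        System.out.println(store.redeem(t)); // false (replay rejected)
    }
}
```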
Adds option to provide custom DNS servers for isolated network, shared network and VPC tier.
New API parameters added in createNetwork API along with the corresponding response parameters.
Doc PR: apache/cloudstack-documentation#276
A few of the smoke tests fail on XCP-ng 8 when PV drivers are not installed for the VM.
This PR makes changes to use get_suitable_test_template instead of get_template, so that an appropriate template is used for the VM deployed during the test.
After volume migration, the VM becomes unusable for attach/detach volume actions.
A new template could be used in the future. As a workaround for now, tests are ordered so that the migrate volume test runs at the end.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes #6544, where networks in a project could not be listed even after network permissions were set.
* Added test cases to existing component tests to test network permissions
* Moved test_network_permissions.py from component to smoke tests
* Added test_network_permissions to travis.yml under smoke tests
- Refactor IPv6 related tests
- Adds smoke test for IPv4 network to IPv6 upgrade
- Adds smoke test for IPv6 VPC
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor and log trace
* tracelogs
* shuffle pools with real randomiser
* single retrieval of async job context
* some review comments addressed
* Apply suggestions from code review
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* log formatting
* integration test for distribution of volumes over storages
* move test to smoke tests
* imports
* sonarcloud issue # AYCOmVntKzsfKlhz0HDh
* spellos
* review comments
* review comments
* sonarcloud issues
* unittest
* import
* Update AbstractStoragePoolAllocatorTest.java
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Default ACS Xenserver template, CentOS 5.6, has IPv6 disabled.
/etc/modprobe.conf shows "options ipv6 disable=1".
To run the IPv6 network test successfully on XenServer, the smoke test will use get_test_template instead of get_template when deploying the guest VM in the IPv6 guest network.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* ms stats thread added
* initial data collection for management server
* empty list management server metrics command
* bean copy into MS metrics object
* ms status VO
* further API and DB plumbing
* minimal metrics response in API
* remove commented, refactor data collection plumbing
* javadocs
* suppress stacktrace on expected error
* update status experiment
* ms status publish framework added
* review comment addressed
* static data to DB and API, /proc/ reading
* addressing review comments
* ui for ms details
* small ui adjustment
* beanCopy
* agentcount response and system parameter
* labels
* package-lock
* add version strings to regular list API
* add shutdown time to DB
* add last start and last stop to regular list response
* distro info in regular response/session count added
* metrics as details
* add heap used and remove details map
* thread statuses
* move db upgrade to 4.17
* sysmem
* procmem
* ui demo comments applied
* javadoc
* get conf and log file locations
* loginfo
* cpuLoadStats
* no.remote
* extra spaces removed
* clusterlistener
* add unit to kb value
* revert accidental rename
* silly fqcn removed
* get mem info from bean is possible
* refactor long sequence for readability
* registerListener
* listUsageMetrics and isDbLocal
* rats
* local usage and db or not
* minimal listDbMetrics
* db vars and stats
* cleanup and #queries queried
* db stats calculation
* rat
* remove list response wrapper from single details-lists responses
* rudimentary metrics view
* metrics table cleanup
* table makeup, collection dates
* move component to appropriate location
* capitalisation removed
* rebase error resolved
* rename deamon to daemon
* small style comments applied
* another merge issue
* naming comments and boot time
* stop/start prefixed with server
* layout-fix
* listMSMetrics test and test refactor
* usage metrics test
* db metrics test
* extra validations
* Update ui/public/locales/en.json
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* descriptions of load averages and replicas
* collection time on top
* cpu load on metrics overview
* DbStatsCollection
* some parameter description texts
* labels adjusted
* new output 'kernelversion' and log info cleanup
* labels
* Update api/src/main/java/com/cloud/server/ManagementServerHostStats.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/response/DbMetricsResponse.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/dao/ManagementServerHostDao.java
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
* Update api/src/main/java/org/apache/cloudstack/api/response/ManagementServerResponse.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update api/src/main/java/org/apache/cloudstack/api/response/ManagementServerResponse.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update engine/schema/src/main/java/com/cloud/host/dao/HostDao.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/dao/ManagementServerHostDao.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
* some (more) refactorring suggestions applied
* human readable memory sizes
* rat
* actual collection time instead of query time, improved descriptions
* merge errors fixed
* optional metric values
* javadoc and logging
* names of jmx vars have changed
* vue3-compatibility
* new output parameter type
* lower retention default
* vue3 fixes
* polish comments
* polish comments 2, the reckoning
* note on usage servers
* merge conflict errors
* polish
* conditional assertion to deal with simulator restart
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Added configuration and Integration test to restrict public template access.
* Move settings to domain.
* Updated integration test.
* Changed Config key's name and description.
* Justified the variable names and removed white spaces.
* Added configuration and Integration test to restrict public template access.
* Move settings to domain.
* Changed Config key's name and description.
* Justified the variable names and removed white spaces.
* Moved configuration to domain scope.
* Added integration test to travis.
* Updated the configuration's name and description.
* Extracted public template check to a separate method.
* Fixed rebase issue.
* Apply tear down changes.
* Update .travis.yml to remove the component test
The test needs to be updated to use the new configuration name
Co-authored-by: Wei Zhou <weizhou@apache.org>
* Enhancement: create Shared networks and VPC private gateways by users
* UI bug fix: pass correct domainid in CreateSharedNetworkForm
* Update #5730: fix test failure with test_guest_vlan_range.py
* Update #5730: fix test failure with test_persistent_network.py
* Update #5730: Add since to new API commands and API parameters
* Update #5730: Get first physical network for VPC private gateway if other ways do not work
* Update #5730: code optimization (return !offering.isSpecifyVlan())
* Update #5730: fix hard-coded network offering id in test_pvlan.py
* Update #5730: skip access check on the network owner if the owner is ROOT/system
* Update #5730: overlap check on cidr/startip/endip
* Update #5730: add methods to get accountid/domainid of shared networks
* Update #5730: improve integration tests
* Update #5730: update as per GutoVeronezi's comments
* Network Sharing: give network access permission to other accounts within a domain
* network: update ip in lb/pf/dnat tables when update vm nic ip
* Update #5757: create 3 separated methods for DNAT/LB/PF update
* travis: install python3-setuptools
* Network Sharing: update integration test
* Update #5769: Remove NetworkPermission.Ops
* Update #5769: Update as per Daan's comments
* Update #5769: Update as per Suresh's comments
* Update #5769: fix UI bug that accounts/projects are not listed
* Update #5769: fix domain admin can deploy vm on L2 network of other users
* Update #5769: Remove method listPermittedNetworkIdsByDomains in NetworkPermissionDao
* Update #5769: Skip network operation permissions check for root admin
* UI: fix create Isolated/L2 network form
* Update #5730: fix create Shared network form
* Update #5769: fix domain admin can deploy vm on L2 network of other users
* test: fix test_storage_policy.py
* Update #5769: fix remove_nic in test_network_permissions.py
* Update #5769: extract some codes to a method
* Update #5769: fix add/remove nic by domain admin
* Update #5769: allow domain admin to enable/disable static nat and create port forwarding rules
* Update #5769: update integration test
* Update #5769: fix unit test AssignLoadBalancerTest.java
* Update #5769: allow normal users to share network permission to other users on UI
* Update #5769: fix small UI bug with label
* Update #5769: Support L2 network as associated network
* test: sleep 30s after restarting mgt server in test_kubernetes_supported_versions.py to fix test failures with test_secondary_storage.py
* Update #5784: revert part of changes in #2420
* Update #5757: invert if condition to reduce code indentation
* Update #5769: fix regular user cannot create L2 network
* Update #5769: Add associated network id and name in private gateway response
* Update #5769: list networks by networkfilter=Account on UI
* Update #5769: fix ui issue when list private gateways or create shared network if no isolated networks
* Update #5769: fix vue ui warnings
* Update #5679: add BaseResponseWithAssociatedNetwork and extract method setResponseAssociatedNetworkInformation
* Update #5679: extract some methods in VpcManagerImpl.java
* Update #5679: Update smoke tests as per Daan's comments
* Update #5769: fix vpc with private gateways cannot be removed when removing an account
* Update #5769: fix unit test failures after merging latest main
* Update #5769: fix schema-41610to41700.sql
* Update #5769: fix Request failed due to empty network offering list on UI
* Update #5769: Throw exception when account is not found by name
* Update #5769: display a warning message if network offering list is empty
* Update #5769: fix a UI bug caused by previous commit b286cb7677
* Update #5769: fix UI bugs due to vue3 merge
* Update #5769: fix issue due to account type refactoring
* Update #5769: fix ui bugs due to vue3
* Update #5769: fix issue due to vue3 upgrade
* Update #5769: fix issue due to vue3 upgrade part 2
* Update #5769: fix issue due to vue3 upgrade part 3
* Update #5769: highlight default scope when create shared network on UI
* Update #5769: fix domain list is not loaded on UI
* Update #5769: fix restart/delete shared network by normal users
* Update #5769: fix restart domain-scope shared network by domain admin
* Update #5769: fix 3 UI bugs (1) double networks in list; (2) icon of first items in list; (3) account/project autoselect
* Update #5769: fix 2 ui bugs; (1) selected project is not changed when change domain; (2) no network should be selected by default
* Update #5769: fix update shared networks by domain admin/regular user
* Update #5769: fix flickering warning message about the empty network offerings
* Update #5769: display associated network name in shared network info card
* Update #5769: fix create private gateway form
* Update #5769: fix network lists in project view
* Update #5769: fix duplicated networks in network dropdown
* Update #5769: fix failed to create shared network if associated L2 network is Setup
* Update #5769: check AccessType.OperateEntry on network in its implementation
* Revert "Update #5769: check AccessType.OperateEntry on network in its implementation"
This reverts commit c42c489e5b.
* Update #5769: fix keyword search in list guest vlans