The BackupSync task would switch between databases to update backup usage
metrics in the cloud_usage.usage_backup table. The current framework,
and its use within ManagedContext, causes database connection
(LegacyTransaction) leaks. When the thread runs more frequently, the issue is
easily reproducible and can be confirmed via heap dump analysis or via JMX
MBeans. This is fixed by moving the backup usage data update to the
usage server by publishing usage events, instead of
switching between databases in a local thread running in a
ManagedContextRunnable.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
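Below is a minimal sketch of the publish-instead-of-switch pattern described above; the class, queue and method names are illustrative and are not the actual CloudStack types or signatures.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch only: the sync task publishes events; the usage server, which already owns a
// cloud_usage connection, consumes them and updates usage_backup itself.
public class BackupUsageSketch {
    static class BackupUsageEvent {
        final long accountId, zoneId, backupId, sizeBytes;
        BackupUsageEvent(long accountId, long zoneId, long backupId, long sizeBytes) {
            this.accountId = accountId; this.zoneId = zoneId;
            this.backupId = backupId; this.sizeBytes = sizeBytes;
        }
    }

    static final BlockingQueue<BackupUsageEvent> EVENTS = new LinkedBlockingQueue<>();

    // Management server side (the former ManagedContextRunnable): publish only, with no
    // TransactionLegacy switch to the cloud_usage database, so no connection can leak here.
    static void backupSyncTask(Iterable<BackupUsageEvent> currentBackups) {
        for (BackupUsageEvent event : currentBackups) {
            EVENTS.offer(event);
        }
    }

    // Usage server side: consumes the events and updates cloud_usage.usage_backup on its
    // own database connection.
    static void usageServerLoop() throws InterruptedException {
        while (true) {
            BackupUsageEvent event = EVENTS.take();
            System.out.printf("usage_backup update: account=%d zone=%d backup=%d size=%d%n",
                    event.accountId, event.zoneId, event.backupId, event.sizeBytes);
        }
    }
}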
Root cause:
Even though the dynamic scaling job is handled in the vmworkjob queue, which ensures serialization of multiple jobs, the database updates and the generation of usage events happen outside the job queue.
Solution:
Moved all updates into the job queue.
First, I tested all the scenarios to verify that nothing is broken:
Scaling on a running VM with normal compute offering
Scaling on a stopped VM with normal compute offering
Scaling on a running VM with custom compute offering
Scaling on a stopped VM with a custom compute offering
Scaling a stopped/running VM between custom and normal compute offerings, and combinations of these. Verified that the custom parameters are populated or deleted according to the offering to which the VM is scaled.
Since this is a corner case, I could not test the exact point where two usage events are recorded at the same time for two different API calls on the same VM.
Fix to pick the max data volumes limit using the actual hypervisor version instead of the "default" version. Use the hypervisor version from the host table when the product_version parameter in host details doesn't exist or is empty.
Fixes #4101
In the listPublicIpAddresses API call, display the network
name if the IP is associated with a shared network
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
Allow VRs to be searched by their redundant state
Under Infrastructure -> Virtual Routers -> Search box
we can search using "MASTER" or "BACKUP", and this will display
the VRs matching that state.
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
Only admins should be able to search VMs by instance name
Customers should not see or search VMs using the instance name (i-)
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
This update turns on certificate revocation checking for uploaded certificates:
- Updated `CertServiceImpl` to be able to enable revocation checking.
- Introduced a new parameter `ENABLED_REVOCATION_CHECK` for `UploadSslCertCmd`.
- Updated `CertServiceTest`.
Even if no CRLs are specified via `PKIXParameters`, the certificates
themselves may still provide info for revocation checking:
- The AIA extension may contain a URL to the OCSP responder.
- The CRLDP extension contains a URL to the CRL.
Those extensions may need to be explicitly enabled by setting the system properties `com.sun.security.enableAIAcaIssuers` and `com.sun.security.enableCRLDP` to true. See [Java PKI Programmer's Guide](https://docs.oracle.com/en/java/javase/11/security/java-pki-programmers-guide.html).
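For reference, here is a minimal sketch of switching on PKIX revocation checking with the stock JDK APIs; it only illustrates the mechanism described above and is not the exact `CertServiceImpl` code.

import java.security.Security;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertificateFactory;
import java.security.cert.PKIXParameters;
import java.security.cert.TrustAnchor;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import java.util.Collections;

public class RevocationCheckSketch {
    public static void validate(X509Certificate cert, X509Certificate caCert) throws Exception {
        // Let the JDK use revocation data advertised by the certificates themselves.
        System.setProperty("com.sun.security.enableCRLDP", "true");       // CRL via the CRLDP extension
        System.setProperty("com.sun.security.enableAIAcaIssuers", "true");
        Security.setProperty("ocsp.enable", "true");                      // OCSP via the AIA extension

        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        CertPath path = cf.generateCertPath(Arrays.asList(cert));

        PKIXParameters params = new PKIXParameters(
                Collections.singleton(new TrustAnchor(caCert, null)));
        params.setRevocationEnabled(true);  // the switch the new UploadSslCertCmd parameter controls

        // Throws CertPathValidatorException if the certificate is revoked (or its status cannot be determined).
        CertPathValidator.getInstance("PKIX").validate(path, params);
    }
}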
Using a revoked certificate may be dangerous. One of the most common reasons why a certificate authority (CA) revokes a certificate is that the private key has been compromised. For example, the private key might have been stolen by an adversary.
If I understand correctly, the `CertServiceImpl` bean is used for operations with certificates on a load balancer. In particular, it validates a certificate chain without revocation checking while uploading a certificate. If a compromised, revoked certificate is then used by the load balancer, it may result in compromising TLS connections. However, the attacker has to be able to mount a man-in-the-middle attack to compromise the connections, so the attacker has to be quite powerful. Therefore, such an attack is definitely not easy to implement. On the other hand, the impact may be significant because of loss of confidentiality.
This has been discussed on security@cloudstack.apache.org
* 4.13:
Snapshot deletion issues (#3969)
server: Cannot list affinity group if there are hosts dedicated… (#4025)
server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002)
Though VMware does not support security groups, in a basic zone with VMware and no isolation, VMs should still be able to deploy.
Root cause:
In the case of VMware and a basic zone, the control NIC is set to 0.0.0.0, assuming the control network will be shared with the guest network.
But to have access to VMware instances, a management/private IP needs to be assigned to them.
Solution:
Assign a private IP even in the case of a basic zone with VMware.
The listZonesMetrics API does not return the same keys as listZones because the
default response view is restricted. This fixes that by ensuring that
the full response view is used for the root admin.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Validate API IOPS normal and maximum read/write values.
Ensures that the normal read/write rate cannot be greater than the maximum
read/write rate. Additionally, a global setting
'iops.maximum.rate.length' was added.
'iops.maximum.rate.length' sets the maximum IOPS read/write burst length
(seconds) accepted, thus preventing unrealistic values for a disk
offering (e.g. hours or days of burst IOPS). The default value is 0
(zero) and allows any IOPS maximum rate length. Example:
iops.maximum.rate.length = 3600 sets the maximum IOPS burst length
accepted for a disk offering to 3600 seconds (60 minutes). A sketch of
the validation logic appears after the change list below.
* Fix log String.format message from %s to %d
* Add bytes rate validation
* Refactoring to cover Read/Write Bytes and IOPS length validation
* Fix "copy-paste" issue with bytes write rate max length
Change Response view to Full for Admin user
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Remove constraint for NFS storage
* Add new property on agent.properties
* Add free disk space on the host prior template download
* Add unit tests for the free space check
* Fix free space check - retrieve available size in bytes
* Update default location for direct download
* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope
* Verify location for temporary download exists before checking free space
* In progress - refactor and extension
* Refactor and fix
* Last fixes and marvin tests
* Remove unused test file
* Improve logging
* Change default path for direct download
* Fix upload certificate
* Fix ISO failure after retry
* Fix metalink filename mismatch error
* Fix iso direct download
* Fix for direct download ISOs on local storage and shared mount point
* Last fix iso
* Fix VM migration with ISO
* Refactor volume migration to remove secondary storage intermediate
* Fix simulator issue
As previously described in PR #3929:
If a VM has an attached ISO, the migration fails with the error message "org.libvirt.LibvirtException: Cannot access storage file /mnt/b33e5a1d-e4ea-3465-b6ac-c98dc8ff8af0/207-2-cc5fd717-2d57-3bb3-bcf6-2c930268db6c.iso"
* Userdata to display the static NAT IP as the public IP instead of the VR IP
If static NAT is enabled on a VM, the metadata service should return
the static NAT IP instead of the gateway IP.
If static NAT is not enabled, it should return the gateway IP
as the public IP.
Test results:
Step to reproduce:
1. Create a vm
2. Ssh to vm.
3. Run the below command inside the vm
wget http://<VR public ip>/latest/meta-data/public-ipv4
Note down the output of the above command
4. Now acquire a new public IP and enable static NAT on that IP to this VM
5. Now run the same command mentioned above in the VM
This should display the static NAT ip instead of VR public IP
Output:
Before enabling static nat
wget http://10.10.10.40/latest/meta-data/public-ipv4
$ cat public-ipv4
10.10.10.29
After enabling static nat
wget http://10.10.10.40/latest/meta-data/public-ipv4
$ cat public-ipv4
10.11.10.30
* server: apply VM user data when releasing a public IP
Co-authored-by: Wei Zhou <ustcweizhou@gmail.com>
In 4.13, list sshkeypairs with a keyword will ignore the search by name even if a name is specified.
Fixes an issue in #3098
for example,
(local) > list sshkeypairs name=wei keyword=wei filter=name
{
"count": 3,
"sshkeypair": [
{
"name": "wei3"
},
{
"name": "wei2"
},
{
"name": "wei"
}
]
}
With this patch, it gives the correct result.
(local) > list sshkeypairs name=wei keyword=wei filter=name
{
"count": 1,
"sshkeypair": [
{
"name": "wei"
}
]
}
When both routers of a VPC are in MASTER state,
multiple alerts are sent, in proportion to the number of tiers in the VPC.
If the VPC has 3 tiers, then 6 alerts will be sent. This is not good
if the VPC has more than 10 networks in it.
Instead of checking the router status for all the tiers in the VPC,
just check the status of the router for one tier in the VPC so that
duplicate alerts can be avoided.
Currently, CloudStack sends the VM password only to the first
router in the network, even if it is the backup, and returns the result.
In some cases the first router will be the backup and the second will be the master.
Since the password server is not running on the backup, when the user resets the password
it is sent to the first router, which can be the backup.
In that case, the new password is not stored in the password server and users can't log in with the new password.
This change ensures that we send the password to both routers instead
of only the first, so that the new password is stored on the master router.
When we add a new guest OS, sometimes we miss the records in guest_os_hypervisor.
However, the guest disk model (virtio/ide) is determined by the record in that table.
This causes some new guest OSes (e.g. Debian 8/9) to use an e1000 NIC instead of virtio, and an IDE disk instead of a virtio disk.
To fix the issue permanently, pass the guest OS name from guest_os if the record for KVM is not found in guest_os_hypervisor.
Related commit: 7ac9f00eeeb4cd37ec39efeba066e799b581b1a0
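A simplified sketch of the fallback; the lookup maps stand in for the guest_os_hypervisor and guest_os tables and are illustrative only.

import java.util.Map;

public class GuestOsNameSketch {
    // kvmMappings: guest_os_hypervisor rows for hypervisor_type = 'KVM', keyed by guest_os id.
    // guestOsNames: display names from the guest_os table, keyed by guest_os id.
    static String resolveKvmGuestOsName(Map<Long, String> kvmMappings, Map<Long, String> guestOsNames, long guestOsId) {
        String mapped = kvmMappings.get(guestOsId);
        if (mapped != null) {
            return mapped;   // normal case: a record exists in guest_os_hypervisor
        }
        // Fallback added by this change: use the name from guest_os directly, so that newer
        // distros (e.g. Debian 8/9) can still be matched to virtio NIC/disk models.
        return guestOsNames.get(guestOsId);
    }
}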
By default, once we create a security group we can't change its name.
This feature introduces a new API command, "updateSecurityGroup",
which allows us to rename a security group. However, we can't
change the name of the "default" security group.
When the state of the site-to-site VPN changes, the check
is done on all the virtual routers, including the internal
load balancing VM. There is no need to check the
state of the internal load balancing VM.
This adds support for JDK11 in CloudStack 4.14+:
- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This implements the systemvm list API response creator to find and use
the host record for an SSVM/CPVM to get the agent status and other
details like the last disconnected date and agent version.
Fixes #3875
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Enable PVLAN support on L2 networks
* Fix prevent null pointer on details
* Add marvin tests
* Fixes from comments
* Fix: missing pvlan type on plugniccommand
* Fix checks on network creation for vlans overlap
* Fix remove prefix from secondary vlan id
* Improve checks on physical network for pvlans
* Fix compatibility with previous pvlan creation
* Fix shared networks backwards pvlan compatibility
* Add ui fix for pvlan type not passed to api
* Add check for isolated vlan id overlap
* Include check for dynamic vlan reserved for secondary vlan
* Fix marvin tests errors
* Fix redundant imports
* Skip marvin test for pvlan if dvswitch is not present
* spelling
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
This makes the listSystemVms API return the host status (agent state),
version, and last-pinged information. This makes it possible for UIs
to call a single API to get this information.
* server: fix resource count of primary storage if some volumes are Expunged but not removed
Steps to reproduce the issue
(1) create a vm and stop it. check resource count of primary storage
(2) download volume. resource count of primary storage is not changed.
(3) expunge the vm; the volume will be in the Expunged state as there is a volume snapshot on secondary storage. The resource count of primary storage decreased.
(4) update resource count of the account (or domain), the resource count of primary storage is reset to the value in step (2).
* New feature: Add support to destroy/recover volumes
* Add integration test for volume destroy/recover
* marvin: check resource count of more types
* messages translate to JP
* Update messages for CN
* translate message for NL
* fix two issues per Daan's comments
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
After a local template is uploaded via the browser, the generated usage event with type = "TEMPLATE.CREATE" is persisted with the data store ID instead of the zone ID in the zone_id column. The upload monitor logic is at fault: after the upload completes, it sets the data store ID in the zone ID column of the created "TEMPLATE.CREATE" usage event. The fix refactors this logic to query the DB for the data store and set its associated zone ID in the usage field.
The fix produces the same behaviour as when registering a template from a URL.
The fix also applies to uploading a VOLUME from local/via the browser.
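A condensed sketch of the corrected event generation; the helper and class names are illustrative and not the actual upload-monitor code.

public class TemplateCreateEventSketch {
    static class UsageEvent {
        final String type; final long accountId, zoneId, resourceId;
        UsageEvent(String type, long accountId, long zoneId, long resourceId) {
            this.type = type; this.accountId = accountId; this.zoneId = zoneId; this.resourceId = resourceId;
        }
    }

    // Before: the upload monitor put the data store id where the zone id belongs.
    // After: resolve the data store's zone first and publish the event with that zone id.
    static UsageEvent buildTemplateCreateEvent(long accountId, long templateId, long dataStoreId,
                                               java.util.function.LongUnaryOperator zoneIdOfDataStore) {
        long zoneId = zoneIdOfDataStore.applyAsLong(dataStoreId);  // DB lookup of the store's zone in the real code
        return new UsageEvent("TEMPLATE.CREATE", accountId, zoneId, templateId);
    }
}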
When we restart the VPC after destroying the master VR, the backup VR
becomes the master and the site-to-site connections are not in
the connected state. The passive VPN connection will be in the connected
state but the active VPN connection will be in a disconnected state.
The VM ingestion feature allows CloudStack to discover, on-board, and import existing VMs in an infrastructure. The feature currently works only for VMware, with a hypervisor-agnostic framework which may be extended to KVM and XenServer in the future.
Enabling static NAT on an IP for a VM can fail while the IP remains allocated.
When we try to enable static NAT on a second IP address for the same VM, the operation fails, but the IP address is still allocated in the DB and can't be used to enable static NAT on a different VM.
Steps to reproduce the issue:
(1) create a vpc (vpc-001) and a vpc tier (vpc-001-001)
(2) create a vm (vm-001-001) in vpc-001-001
(3) acquire a public ip (ip-1) and enable static nat to vm-001-001,
operation succeeds.
(4) acquire a public ip (ip-2) and enable static nat to vm-001-001,
operation fails but the ip is still assigned to vpc tier vpc-001-001.
Note down the ip address and the id of it.
(5) create another vpc tier vpc-001-002, and vm (vm-001-002) in the tier
(6) enabled ip-2 static nat to vm-001-002, operation should succeed
Add a global setting to disable creating networks with the same name in an account.
Add a global setting to disable creating a network without
specifying the start and end IPv4 or IPv6 address.
By default, we can create networks with the same name in an account.
Sometimes we should not create networks with the same name.
This change adds a global setting that controls whether networks with the same name can be created;
the default value is true, and setting it to false prevents creating networks with the same name.
It is also possible to create a shared network without specifying the
start and end IPv4 or IPv6 address.
This change adds a global setting which prevents creating a shared
network without specifying the start and end IPv4 or IPv6 address.
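A minimal sketch of the duplicate-name check such a setting would guard; the method name and the way the setting and the existing names are looked up are illustrative.

import java.util.List;

public class NetworkNameCheckSketch {
    static void checkDuplicateNetworkName(boolean allowDuplicateNames, long accountId, String newName,
                                          List<String> existingNetworkNamesInAccount) {
        if (allowDuplicateNames) {
            return;  // default behaviour: same-name networks are permitted in the account
        }
        if (existingNetworkNamesInAccount.stream().anyMatch(name -> name.equalsIgnoreCase(newName))) {
            throw new IllegalArgumentException("A network named '" + newName
                    + "' already exists in account " + accountId);
        }
    }
}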
Fixes #3783
As reported in the issue, creating volumes from a pure snapshot fails with an NPE. This is due to the order of calls, where disk offering access is checked before the disk offering value itself is checked. This PR fixes that.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #3191
When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file could be deleted after template installation if it is not an actual (.qcow2, .ova, etc.) file. When the user copies a template using the copyTemplate API, the actual template file will be copied across the image stores. Matching the checksum of the copied template file against the stored value from the vm_template table will result in a mismatch.
The change sets an empty checksum value for the copied template when passing it to the download service, which skips the wrong checksum check for the copied template during install.
However, this results in a change in the checksum value for the concerned template entry in the vm_template table after the template install.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* [CLOUDSTACK-10408] Fix String.replaceAll() to replace() for better performance
* Improve by using replace with a char rather than a String where possible
Co-authored-by: Rohit Yadav <rohit@apache.org>
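The difference is that String.replaceAll compiles its first argument as a regular expression on every call, while replace performs a literal substitution; a small self-contained illustration:

public class ReplaceSketch {
    public static void main(String[] args) {
        String uuid = "a1b2-c3d4-e5f6";
        // replaceAll() treats "-" as a regular expression.
        String viaRegex = uuid.replaceAll("-", "_");
        // replace(CharSequence, CharSequence) treats both arguments literally -- no regex semantics.
        String literal = uuid.replace("-", "_");
        // replace(char, char) is the simplest form when a single character is replaced.
        String singleChar = uuid.replace('-', '_');
        System.out.println(viaRegex.equals(literal) && literal.equals(singleChar)); // true
    }
}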
* marvin: check resource count of more types
* New feature: add flag resource.count.running.vms.only to count resource consumption of only running VMs
Stopped VMs do not actually use CPU/RAM.
A new global configuration, resource.count.running.vms.only, is added to determine whether only the resources (CPU/memory) of running VMs (including Starting/Stopping) are taken into account when calculating resource consumption.
* Add integration test for resource count of only running vms
The list management servers API returns a list of all the management servers but fails when trying to list by ID or name. This ensures that it fetches the details according to the parameters passed.
Fixes: #3833
The metrics API is missing a few properties that are present in the corresponding resource.
Fixes #3831
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit@apache.org>
Steps to reproduce the issue
(1) create a custom service offering
(2) create a vm with the offering
(3) update vm with displayvm=false, returns an error
(local) > update virtualmachine id=f33fd06a-7643-40d1-833f-272845d9ba09 displayvm=false
Error 530: {"updatevirtualmachineresponse":{"uuidList":[],"errorcode":530,"cserrorcode":9999}}
When starting a VM or migrating a VM (away from a host in maintenance), CloudStack will check the capacity of all hosts and choose one. If there are hundreds of hosts on the platform, this takes some seconds. By the time CloudStack chooses a host and starts/migrates the VM to it, the resource consumption of the host might have changed. This normally happens when we start/migrate multiple VMs.
It would be better to double-check the host capacity when starting a VM on a host.
This PR also includes a fix for CPU core capacity when starting/migrating a VM.
When we calculate the resource consumption of a host, we need to take VMs in the following states into account: Running, Starting, Stopping, Migrating (to the host), and VMs migrating away from the host. This is because, when stopping a VM, the resources on the host are released only once the VM is stopped; when migrating a VM, the resources on the destination host are claimed before the migration starts, and the resources on the source host are released only after the migration succeeds.
In CloudStack there is a task named CapacityChecker which runs every 5 minutes (capacity.check.period = 300000 ms by default) and recalculates the capacity of all hosts. However, it takes only VMs in the Running and Starting states into consideration. We have faced some issues in host maintenance due to this.
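A compact sketch of the state filter described above; the enum and method names are illustrative and not the actual capacity manager code.

import java.util.EnumSet;
import java.util.Set;

public class HostCapacitySketch {
    enum VmState { Running, Starting, Stopping, Migrating, Stopped }

    // VMs in these states must be counted against a host's used capacity; a VM migrating away
    // from a host also keeps holding its resources until the migration succeeds.
    static final Set<VmState> CAPACITY_STATES =
            EnumSet.of(VmState.Running, VmState.Starting, VmState.Stopping, VmState.Migrating);

    // The periodic capacity recalculation previously considered only Running and Starting,
    // which understated the used capacity of hosts receiving in-flight migrations.
    static boolean countsTowardsHostCapacity(VmState state) {
        return CAPACITY_STATES.contains(state);
    }
}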
Steps to reproduce the issue
(1) migrate N vms from host A to host B, cpu/ram resource increases before the migration.
(2) capacity check recalculate the capacity of hosts. used capacity of Host B will be reset to original value (not including the vms in Migrating).
(3) migrate some more vms from other hosts to host B; the migrations are allowed by cloudstack (because the used capacity is incorrect). If the actual used memory exceeds the physical memory on the host, there might be some critical issues (for example, libvirt dies).
Steps to reproduce the issue
(1) create an account (test)
(2) create a vm with the account (test)
(3) login with admin, and upgrade the vm to another offering
(4) the resource count (cpu,memory) of admin increases, not the account (test).
* pass domainid for list users
* passing arg in wizard
* adding userfilter to list ldap users and usersource to response
port of list ldap users tests to java
* assertion of different junit ldap methods
* broken test for directory server (and others)
* embedded context loading
* add user and query test
* UI: filter options passing filter and domain and onchange trigger
* disable tests that only work in ide
prereqs for domain-linkage fixed
move trigger to the right location in code
trigger for changing domain
* logging, comments and refactor
implement search users per domain
retrieve appropriate list of users to filter
get domain specific ldap provider
* query cloudstack users with now db filter
* recreate ldap linked account should succeed
* disable auto import users that don't exist
* ui choice and text
* import filter and potential remove from list bug fixed
* fix rights for domain admins
* list only members of linked groups, not of the principal group
* Do not show ldap user filter if not importing from ldap
do not delete un-needed items from dialog permanently
delete from temp object not from global one
* localdomain should not filterout users not imported from ldap
* several types of authentication handling errors fixed and unit tested
* conflict in output name
* add conflict source field to generic import dialog
* replace reflection by enum member call
* conflict is now called conflict 🎉
* Update message when keys are NOT being injected
* Correct the message after injectkeys.ssh is done
* Update message to a more meaningful one, since sometimes nothing is injected
* Update other 2
* typo
* * Complete API implementation
* Complete UI integration
* Complete marvin test
* Complete Secondary storage GC background task
* improve UI labels
* slight reword and add another missing description
* improve download message clarity
* Address comments
* multiple fixes and cleanups
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix more bugs, let it return ip rule list in another log file
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix missing iprule bug
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* add support for ARCHIVE type of object to be linked/setup on secstorage
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Fix retrieving files for Xenserver
* Update get_diagnostics_files.py
* Fix bug where executable scripts weren't handled
* Fixed error on script cmd generation
* Do not filter name for log files as it would override similar prefix script names
* Addressed code review comments
* log error instead of printstacktrace
* Treat script as executable and shell script
* Check missing script name case and write to output instead of catching exception
* Use shell = true instead of shlex to support any executable
* fix xenserver bug
* don't set dir permission for vmware
* Code review comments - refactoring
* Add check for possible NPE
* Remove unused import after rebase
* Add better description for configs
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
Co-authored-by: Rohit Yadav <rohit@apache.org>
Co-authored-by: Anurag Awasthi <anurag.awasthi@shapeblue.com>
* Squash commits to a single commit and rebase against master
Update marvin tests to use white list
* * Fix marvin test failure
* Add new marvin negative tests cases
* Remove hard-coded hypervisor types in marvin tests
* Fix build error after rebase and add hugepagesless
* Fix readability of python code
* Fix failing test
* Adding cleanup of vms for negative tests
* Bug fixes - change config checks properly and block extraconfig in details
* Trim to compare the keys
* CR comments
* Don't skip extraconfig without exception
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
Currently, while creating an ingress/egress rule for a security group,
we can specify only TCP/UDP/ICMP. Sometimes we need to add rules
for a different protocol number, or rules for all three of the above
protocols.
With this new feature, users can specify the protocol number or select
the "ALL" option, which will apply rules for TCP/UDP/ICMP.
Currently in CloudStack, when we click on "Acquire New IP", it will
acquire a random IP from the pool. With this enhancement, it is
possible to select the IP from a drop-down IP list for that network.
The same applies to a VPC as well.
* 4.13:
Added zone check for attach iso (#3755)
config: add isdynamic flag in configuration response (#3729)
filter hosts to query on zone wide storage (#3733)
convert protocol names to be found as labels (#3747)
Once again allow a VM to be on multiple networks from VPCs (#3754)
create template from snapshot regression (partly reverted) (#3767)
* create template from snapshot regression (partly reverted) (#3767)
* Once again allow a VM to be on multiple networks from VPCs (#3754)
to once again allow a VM to be on multiple networks from VPCs
* convert protocol names to be found as labels (#3747)
* convert protocol names to be found as labels
* format
* filter hosts to query on zone wide storage (#3733)
* config: add isdynamic flag in configuration response (#3729)
Co-authored-by: Wei Zhou <ustcweizhou@gmail.com>
* Service layer changes for new way of tracking maintenance progress
* Fixes after offline code review
* Fix marvin tests
* Change state name and add documentation
* Fix test
* Fix and add more unit tests for different caseS
* Fix and enhance Marvin Tests
* Fixes for corner cases
* More fixes and logging
* UI fixes
* Some minor changes and reducing VMs on host for more contained tests
* Fixed ssh client auth problem causing test failure
* Code review changes + fixes + some more logging
* Fix flaky tests by adding delays between host states
* Added fetching only enabled hosts for tests
* Make port blocking KVM specific and refactor to handle failure
* Make failing migrations due to tagged host instead of port blocking
* Added additional check for migrating VMs
* Refactor to use single place for methods checking maintenance states
* Add missing HA config keys
* Change time value to seconds
* Change Integer to Long
* Using ConfigKey defaultValue
* Do some code refactoring
* Simplify code
* Avgload (#2)
* Adding avgload for kvm
* Fix coding style issue
* Add getter/setter
* Fix several small errors
* Add override
* Uncomment getAverageLoad
* Override getAverageLoad()
* Checkstyle bug?
* Delete trailing spaces
* Renaming function
* Change interface to match
* Rename method in GetHostStatsAnswer
* Change method call name
* Convert double to long
* Remove trailing whitespace
* Change names around
* Make load visible to return it
* Parse string to double
* Change Long to Double
* Fix getter
* Unify naming to cpuloadaverage
* Change cpuloadaverage String to Double in listHostsMetrics
Remove some unnecessary whitespaces
* Add CPU_LOAD_AVERAGE to ApiConstants
After commit fbf488497f, admins need to specify an IPv4 or IPv6 address when adding an IP to a NIC, which breaks backward compatibility. If no IP is specified, an IPv4 address should be returned.
KVM is supported on arm64 Linux (https://www.linux-kvm.org/page/Processor_support#ARM:).
For a small (IoT) platform such as the new Raspberry Pi 4, which uses an armv8 processor
(cortex-a72), it's possible to run a Linux host with `/dev/kvm`
acceleration. This adds support for IoT IaaS in CloudStack.
This PR is from a fun weekend project where:
- I set up a Raspberry Pi 4 - 4GB RAM model with 4 CPU cores @ 1.5Ghz, 128GB SD samsung evo plus card
- Installed Ubuntu 19.10 raspi3 base image: http://cdimage.ubuntu.com/releases/19.10/release/ubuntu-19.10-preinstalled-server-arm64+raspi3.img.xz
- Built a custom Linux 5.3 kernel with KVM enabled, deb here: http://dl.rohityadav.cloud/cloudstack-rpi/kernel-19.10/ and installed the linux-image and linux-module packages
- Then installed/set up CloudStack on it (fixed some issues around jna by manually installing a newer libjna-java to /usr/share/cloudstack-agent/lib)
- Since the host processor is not x86_64, I had to build a new arm64 (or aarch64) systemvmtemplate: http://dl.rohityadav.cloud/cloudstack-rpi/systemvmtemplate/
I could finally get a 4.13 CloudStack + Adv zone/networking to run on it
and deployed a KVM based Ubuntu 19.10 environment and NFS storage.
Deployed a test vm with isolated network, VR works as expected. Console
proxy works as well, for this tested against arm64 openstack Debian 9/10
templates.
I raised the issue of enabling KVM in upstream Ubuntu arm64 build: https://bugs.launchpad.net/ubuntu/+source/linux-raspi2/+bug/1783961
Ubuntu kernel team has come back and future arm64 releases may have
KVM enabled by default.
Limitation: my aarch64 env did not support IDE, therefore the default
bus type for volumes is SCSI. With VIRTIO it sometimes
fails.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* server: Do NOT clean up dhcp and dns when stopping a vm
According to a comment in PR #3608, dhcp and dns entries are cleaned up only when a VM is expunged.
Revert part of commit 8fb388e931.
* server: cleanup dns/dhcp entries in removeNic instead of finalizeExpunge
This changes the behaviour so that DHCP and DNS rules for the NICs of a
VM are not cleaned up in the VR when it is stopped, but instead when the VM is expunged,
because stopped VMs in CloudStack still retain their IPs and records.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In case of null guest OS found for a template, don't fail prioritisation
completely (could still work based on HVM etc).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Refactor: Cleanup duplicate code
Make use of Java 8 default implementation in interfaces,
to remove code duplication between XxxCmd and XxxCmdAsAdmin.
Refactor checkFormat by pre-calculating the supported
extensions. Also make use of this in ImageStoreUtil.
Makes it easier to add new file and compression formats.
UserIpv6AddressVO is not used; it is probably legacy code/a legacy table.
Therefore, remove the verification that counts the IPs from
UserIpv6AddressVO in order to check whether the network can be used for
deploying new VMs in the UI [1].
[1] com.cloud.network.NetworkModelImpl.canUseForDeploy(Network).
Fixes NPE when trying to find suitable storage pools for a volume
when the volume is not attached to a VM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a network IP range is removed, the "vlan" stays mapped in pod_vlan_map; therefore, the method that lists the VLANs by pod id will return null VLANs.
This PR adds proper verifications to avoid a null pointer exception when deploying VRs on a pod with removed VLANs. The exception was thrown in getPlaceholderNicForRouter.
Problem: In VMware, appliances with options that must be answered before deployment are configurable through the vSphere vCenter user interface, but this is not possible from the CloudStack user interface.
Root cause: CloudStack does not handle vApp configuration options during deployments if the appliance contains configurable options. These configurations are mandatory for VM deployment from the appliance on VMware vSphere vCenter; vCenter flags the mandatory configurations that the administrator must set before deploying the VM from the appliance.
Solution:
On template registration, after it is downloaded to secondary storage, the OVF file is examined and OVF properties are extracted from the file when available.
OVF properties extracted from templates after being downloaded to secondary storage are stored on the new table 'template_ovf_properties'.
A new optional section is added to the VM deployment wizard in the UI:
If the selected template does not contain OVF properties, then the optional section is not displayed on the wizard.
If the selected template contains OVF properties, then the optional new section is displayed. Each OVF property is displayed and the user must complete every property before proceeding to the next section.
If any configuration property is empty, then a dialog is displayed indicating that there are empty properties which must be set before proceeding
The specific OVF properties set on deployment are stored on the 'user_vm_details' table with the prefix: 'ovfproperties-'.
The VM is configured with the vApp configuration section containing the values that the user provided on the wizard.
This reverts commit 7a27e35a61.
We're near 4.13 RC1 and we have low confidence that the changes from #3152
won't cause other regressions, so we are reverting this. The author may send a
PR again towards 4.14.
The regressions found are all related to template and ISO registration and
upload.
Fixes:
- This allows getUploadParamsForIso for all user role types, also fixes
authorised field for getUploadParamsForTemplate API.
- Fix global setting description to say what is used when value is empty/blank.
- For the VM running/allocated usage description, use parentheses to return the instance name and ID.
- Display template download progress when template is added to a project
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three different methods:
DataStore getRandomImageStore(List<DataStore> imageStores);
To get an image store for reading purposes. The threshold capacity check will not be used here.
DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
To get an image store for writing purposes. The threshold capacity check will be used here and the store with the most free space will be returned. If no store with used storage below the threshold is found, NULL will be returned.
List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
To get a list of image stores for writing purposes which fulfill the threshold capacity check.
Correspondingly, DataStoreManager methods have been refactored to return similar values for a given zone. An illustrative caller sketch follows the list of fixes below.
Fixes #3287 - NULL will be returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning a random secondary storage for writing, the storage with the most free space will be returned.
Fixes #3478 - For migration on VMware, all writable secondary storages will be mounted during preparation.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
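An illustrative caller of the three methods above, assuming the CloudStack types they mention (ImageStoreProviderManager, DataStore, CloudRuntimeException) are on the classpath; the helper method names are made up for the sketch.

import java.util.List;

public class ImageStorePickSketch {
    static DataStore chooseStoreForRead(ImageStoreProviderManager mgr, List<DataStore> candidates) {
        // Reads can go to any store; the capacity threshold is not applied.
        return mgr.getRandomImageStore(candidates);
    }

    static DataStore chooseStoreForWrite(ImageStoreProviderManager mgr, List<DataStore> candidates) {
        // The store with the most free space below the threshold, or null if none qualifies.
        DataStore writeStore = mgr.getImageStoreWithFreeCapacity(candidates);
        if (writeStore == null) {
            throw new CloudRuntimeException("No secondary storage with enough free capacity for writing");
        }
        return writeStore;
    }

    static List<DataStore> allWritableStores(ImageStoreProviderManager mgr, List<DataStore> candidates) {
        // E.g. for VMware migration preparation, where every writable store gets mounted.
        return mgr.listImageStoresWithFreeCapacity(candidates);
    }
}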
There are certain scenarios where the 169.254.0.0/16 subnet is used on a hypervisor for purposes
other than CloudStack's.
One such scenario is a BGP+EVPN+VXLAN setup using BGP Unnumbered, where the
169.254.0.1 address is used by Frr/Zebra BGP routing to send traffic to the
neighboring router.
The following settings can be changed in the agent.properties (default values added):
control.cidr=169.254.0.0/16
Make sure the global setting 'control.cidr' matches the values defined in agent.properties!
In the future the mgmt server can send this parameter to a KVM agent on startup, but at the moment
this framework is not in place and thus these values can't be sent to the agent in a proper manner.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Currently, when refreshing disk usage stats, all KVM agents are asked to collect stats for all volumes. In setups with multiple KVM hosts where managed storage is used, not all volumes are attached to all KVM hosts, which results in a large number of warnings in the KVM agent logs. This change introduces a filter step when managed storage is used, so that the management server only requests stats from each KVM agent for the volumes that are connected to that host.
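A sketch of the filter step; the volume fields and the way attachment is tracked are illustrative, not the actual management server code.

import java.util.List;
import java.util.stream.Collectors;

public class VolumeStatsFilterSketch {
    static class Volume {
        final String path; final boolean managed; final long attachedHostId;
        Volume(String path, boolean managed, long attachedHostId) {
            this.path = path; this.managed = managed; this.attachedHostId = attachedHostId;
        }
    }

    // Only ask a KVM agent about volumes it can actually see: non-managed (shared) volumes,
    // plus managed volumes attached to that particular host.
    static List<Volume> volumesToQueryOnHost(List<Volume> allVolumes, long hostId) {
        return allVolumes.stream()
                .filter(v -> !v.managed || v.attachedHostId == hostId)
                .collect(Collectors.toList());
    }
}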
Add CephSnapshotStrategy to handle RBD revert (rollback) snapshot. In order to support RBD revert (rbd_rollback), this PR adds a CephSnapshotStrategy class to handle Ceph/RBD snapshot actions.
Fixes: #3114
When adding an IP range for VLANs there are 3 cases -
The VLAN under consideration has a tag (like 101)
The VLAN under consideration has a tag given as a range (like 101-124)
The VLAN is untagged (i.e. the id is "untagged")
Before adding an IP range we have to check for possible overlaps and throw an exception. This needs to be done as follows -
If the VLAN tag ID is numeric or a range, we need to call the UriUtils.checkVlanUriOverlap method, which internally tries to expand the range and verifies whether there are overlaps. If the URI overlaps (i.e. there are overlapping VLAN tags), we then need to verify whether the IP range being added overlaps with previously added ranges.
If there are no overlapping tags we simply need to test for public networks being present in the VLAN.
A regression was introduced in 41fdb88#diff-6e2b61984e8fa2823bb47da3caafa4eeR3174 which caused the 'untagged' string to be compared as a numeric VLAN tag range and attempted to expand it to test overlap in UriUtils.checkVlanUriOverlap.
To fix the bug in the issue, we need to handle the untagged case separately, as it's a non-numeric tag in code. For untagged VLANs and overlapping VLAN URIs we need to check for IP ranges and gateways, which happens naturally after this change. For tagged VLANs with non-overlapping URIs we need to check if there is a public network.
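A stripped-down sketch of the corrected branching; the callbacks are illustrative, and UriUtils.checkVlanUriOverlap (referenced above) is deliberately not invoked for the untagged case.

public class VlanRangeCheckSketch {
    static void checkNewIpRange(String vlanId, boolean vlanUrisOverlap,
                                Runnable checkIpRangeAndGatewayOverlap, Runnable checkPublicNetworkPresent) {
        if ("untagged".equalsIgnoreCase(vlanId)) {
            // Never treat "untagged" as a numeric VLAN range (this is what the regression did);
            // just make sure the new range does not clash with existing ranges/gateways.
            checkIpRangeAndGatewayOverlap.run();
            return;
        }
        if (vlanUrisOverlap) {
            // Overlapping tagged VLAN URIs: verify the new IP range against existing ranges.
            checkIpRangeAndGatewayOverlap.run();
        } else {
            // Non-overlapping tagged VLANs: only check whether a public network already exists.
            checkPublicNetworkPresent.run();
        }
    }
}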
Set integration.api.port to zero (0) by default. CloudStack provides unauthenticated API access through port 8096; it should not be open to the Internet in any case.
* Allow users to share templates with Accounts or Projects through the
updateTemplate permissions API
* Change behaviour to show only supported projects and accounts with update template permissions
* Allow admins to see accounts dropdown and only hide lists for users
* Don't allow sharing project owned templates as you cannot retrieve them in list api calls
* Add revoke certificates API
* Add background task to sync certificates
* Fix marvin test and revoke certificate
* Fix certificate sent to hypervisor was missing headers
* Fix background task for uploading certificates to hosts
Fixes #3321
This change removes the exception thrown while associating an IP address with a new isolated network which is in the Allocated state. And it allows disassociating an IP address when it is used for source NAT but the network is in the Allocated state.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Currently an admin can choose which host a VM is to be started on.
They should be able to 'override' the allocation algorithm to a greater
or lesser extent at will, and be able to choose the pod, cluster or host
that they wish a new VM to be deployed in.
DeployVirtualMachine API has been extended with additional, optional
parameters podid and clusterid that will be passed to and used in the
deployment planner, when selecting a viable host. If the user supplies
a pod, a suitable host in the given pod will be selected. If the user
supplies a cluster, a suitable host in the given cluster will be selected.
Based on the parameter supplied and on passing validation, the VM will
then be deployed on the selected host, cluster or pod.
Removed the download icon when a template is not extractable.
Modified the API to allow a user from the same account as the template to change the extractable attribute on the template.
Fixes #3400
Support copying tags from a template/ISO image to the VM in the deploy VM command. Tags from the source template/ISO image are created on the VM when the deploy VM command creates the virtual machine.
Fixes: #3048
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
If there are many projects and accounts, listing projects/accounts will take a long time because the resource limits and resource counts are fetched in the process. However, the resource count/limits are sometimes not needed.
Add an option 'details' to listProjects and listAccounts. If you do not need the resource count/limits, add details=min to the API call; the API execution time will be reduced significantly.
If projects have many resource tags, it will take a long time to list projects.
Removing the resource tags information from project_view fixes this issue.
Fixes #3178
For a VPC that supports redundant VRs, when starting the second VR, the pod/cluster/host of the first VR should be added to the avoid list. This provides higher availability.
The network VRs already follow the same process.
Behaviour followed while updating disk and compute offerings by domain-admins:
- Domain-admins cannot change zones for offerings specified for domains/subdomains.
- Domain-admins can change domains (within their subdomains) for the offerings specified for their domains/subdomains.
- Domain-admins cannot change the name, display text, or sort-key for offerings specified for their domains/subdomains, or for other domains which are not child domains of the admin's domain.
Fixes UI unidentified button bug for Update offering access form
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
Problem: Currently tags cannot be applied to snapshot when it is being created but through separate “create tags” API calls. For snapshot policies tags cannot be set either at creation or through “create tags” API.
Root Cause: The “create snapshots” API does not support adding tags during creation and it can only be done through “create tags” API. Snapshot policy as a resource does not support tags and no tags can be set for them through any API.
Solution: Tag support for snapshot policy has been added. Snapshot policy with tags when executed will produce snapshots containing the same tags from snapshot policy.
Following APIs have been updated:
Both “create snapshotpolicy” and “create snapshot” now accept “tags” as a new parameter. The expected format for the “tags” parameter is similar to the “tags” parameter in the “create tags“ API.
Deletion support for tags associated with snapshots policy has been added to “delete snapshotpolicies” API.
Tags set for snapshot policies are added to the Response of “list snapshotpolicies“ API.
UI support for setting tags to snapshots and snapshot policy is provided through the corresponding menus with a new section in each form to set tags.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Problem: The VM metrics has aggregated volume bytes read/write and iops metrics but not on per volume basis.
Root Cause: The volume stats sub-system is not used to export the metrics, the support is not available for VMware.
Solution: Use the volume stats sub-system and DB table to export the metrics via the listVolumes and listVolumeMetrics API, and implement support for VMware and fix issue with network and disk metrics in the VM metrics view.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: Users don't know what keys/values to enter for template and VM details.
Root Cause: The feature does not exist that can list possible details and options.
Solution: Based on the possible VM and template details handled by the
codebase, those details were refactored and a list API is introduced
that can return users those details along with possible values. When
users add details now, they will be presented with a list of key details
and their possible options if any.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes a potential NPE when a mapped account is not found and
a move of the user to the mapped account is performed. This will now
throw a more informative exception instead of an NPE.
Fixes #2853
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes #2742
The UI supported ordering VPC Offerings but the API did not have that
support implemented. This makes the change in the updateVPCOfferings
and listVPCOfferings API calls, along with the necessary database
changes for supporting sorting of VPC Offerings.
Problem: Volume created from a snapshot does not show its disk offering.
Root Cause: The volume created from a snapshot of a root disk does not have a disk offering therefore the disk offering of the created volume from the snapshot is empty.
Solution: Refactored the createVolume API and extended the UI to allow the user to select a disk offering while creating a volume from a root disk volume snapshot. For creating volumes from a data disk volume snapshot, the disk offering given by the snapshot will be assigned. A disk offering selection has been added to the UI form for volume creation from a snapshot.
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
The 'domainid' and 'zoneid' params of the update*Offering APIs have been made string type.
To associate multiple domains or zones with an offering, a comma-separated list of domains and zones can be passed.
To make a domain-specific offering public, a value of 'public' can be given for the domainid param.
To make a zone-specific offering available for all zones, a value of 'all' can be given for the zoneid param.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
On update*Offering API call, supplied domain(s) and zone(s) will overwrite current domains and zones associated with the offering.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Problem: The disk offering change is not reflected in cloud_usage database table.
Root Cause: The resizeVolume API does not publish the volume disk offering change event to the
cloud_usage database table.
Solution: This issue has been fixed by refactoring the resizeVolume API to publish this disk offering change for volumes that are either in Allocated or Ready state.
Moves the method that publishes events for volumes in Ready state from
the VolumeStateListener class to the orchestrateResizeVolume method in
the VolumeApiService.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The usage record descriptions have CloudStack's internal integer IDs,
which makes it difficult for users to read their usage. This PR
introduces a new API boolean flag `oldformat` which, when set to true,
would return the older description format; otherwise, by default,
listUsageRecords will process and return descriptions with names and
UUIDs of resources.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: Network name is not part of the network usage response
Root Cause: Code does not set the network name
Solution: Set the network name for network usage type usage records in the API response
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: Not able to configure a sort order for the zones that are listed in various views in the UI.
Root Cause: There is no mechanism to accept a sort key for existing zones, nor a UI widget that would allow listing zones in the UI in a certain order.
Solution: The order of zones listed in various views in the UI can now be configured through the newly added “sort_key” field for the zone. It can be set using the updateZone API by providing a “sort_key” parameter for a zone, or by reordering the items in the zones list in the UI. The UI has been updated to show ordering controls in the zones list view. Database changes include updating the “data_center” table by adding a “sort_key” column (containing integer values and defaulting to zero).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: The listVolumeMetrics API response does not honor the volume detail visibility restrictions set for normal users and returns sensitive information which should only be visible to the root admin.
Root Cause: The listVolumeMetrics API response extends the ListVolumesByAdmin API internally and this results in a full display view response that is only meant for the root admin.
Solution: This has been fixed by rectifying the API response to not show ‘physical size’, 'storage type', and ‘storage pool’ information. The UI has also been fixed to hide these columns for normal users.
Problem: Admins don’t want to charge for IP address usage on certain (shared) networks.
Root Cause: There is no flag or detail for admins to provide using UI or API when creating networks to specify if they want IP address usage of the network hidden.
Solution: A new boolean hideipaddressusage flag is added to the createNetwork API, and a checkbox in the ‘Add guest network’ UI, for root admins to specify whether they want the shared network’s IP address usage to be hidden in the listUsageRecords API response. The provided flag is saved as the ‘hideIpAddressUsage’ detail in the cloud.network_details table for the network. For existing (shared) networks, root admins can also specify the same boolean API parameter hideipaddressusage with the updateNetwork API request to configure the behaviour for an existing network. When the detail/flag is true, the IP address usage for the (shared) network is not exported in the listUsageRecords API response. The listNetworks API response will include the details of a network for the root admin only. (Note: the usage is still recorded in the usage database but is not returned by the listUsageRecords API.)
The API flag works for any kind of network via the API, but the checkbox is only shown while creating shared networks in the UI.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Fixes tests path from old layout to standard maven in src/test/java/
- Removed duplicate SnapshotManagerImpl at old path `server/src/com...`
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a KVM host is added to a cluster, the cluster GUID is null. In case
the KVM host fails to be added, the GUID is not set to null and if any
other hosts are added an exception is thrown by the resource manager
that does not allow addition of hosts to a cluster with existing hosts
whose GUID is null.
In case of KVM, other hosts may be added in parallel therefore this
restriction can be safely removed.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548
Live storage migration on KVM under these conditions:
From source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage capability for KVM, if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.
Additional notes:
To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026 and https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS 7; however, this works with CentOS 6.
Fix for CentOS 7:
Create repo file on /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The code prior to this commit was looking for the storage tags in the
storage_pool_details table. However, that table is empty, so it gets null,
and the code that matches the volume tags (e.g. 'aTag') against the storage
pool tags compares against an empty set. It should instead select from
storage_pool_tags, which contains the tags of each tagged storage pool.
Problem: Users can register ISOs from URL but cannot upload local ISOs.
Root cause: CloudStack provides browser-based upload support for volumes and templates, but ISOs are not supported.
Solution:
The existing browser-based upload from local functionality for templates and volumes (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=39620237) is extended to support uploading local ISOs.
Extend the UI: A new button is created under the ISOs view: 'Upload from Local'. A new dialog form is displayed in which the user must select the ISO to upload from its local file system.
Extend the API: New 'GetUploadParamsForIso' API command is created to handle the ISO upload.
Problem: When a multi-disk OVA template is uploaded, only the root disk is recognized and VMs deployed using such template only get the root disk provisioned.
Root Cause: The template processor for multi-disk OVA was not used in the template upload processor.
Solution: Added support for local multi-disk OVA template upload. After a multi-disk OVA template is
uploaded, the mechanism that worked on multi-disk OVA templates registered using a URL is now also used to discover and create data-disk templates in the cloud.vm_template table and on the secondary storage.
To enable SSL on SSVMs :
• Upload the certificates like you usually do via the API or UI->Infrastructure tab
• Set the global settings secstorage.encrypt.copy, secstorage.ssl.cert.domain to appropriate values
along with the CPVM ones
• Restart management server (no need to destroy/restart SSVM (or the ssvm agent))
Test cases:
- Upload template and check it creates multi-disk folders on secondary
storage and entries in cloud.vm_template table
- Upload template and kill/shutdown management server. Then restart MS
to check if template sync works
- Copy template across zone of an uploaded template
Signed-off-by: Rohit Yadav rohit.yadav@shapeblue.com
This does not remove VM entries in databags when hostnames match. The
current codebase already removes the entry when a VM is stopped/removed, so
we don't need to handle lazy removal. This will allow a VM on
multiple tiers in a VPC to get dns/dhcp rules as expected.
This also fixes the issue of dhcp_release based on a specific interface and
removes the dhcp/dns entry when a NIC is removed on a guest VM.
Fixes #3273
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Improvements on upload direct download certificates
* Move upload direct download certificate logic to KVM plugin
* Extend unit test certificate expiration days
* Add marvin tests and command to revoke certificates
* Review comments
* Do not include revoke certificates API
This fixes a forward-merge regression that missed an import and caused a
build failure in b2b99ca63e
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Since the CloudStack virtual router was redesigned in version 4.6, it has been observed that the DHCP leases file is not persistent across network operations. This causes conflicts with guest VMs' static IPs, causing these static IPs not to be renewed by the DHCP server (dnsmasq) running on isolated and VPC networks' virtual routers. On stopping or destroying a VM, its DHCP/DNS records are not removed from the virtual router, causing ghost effects.
Fixes #3272, Fixes #3354
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When we dedicate a public IP range to a domain but some IPs are used by an account in the domain,
the operation should be allowed, but it currently fails.
This is because CloudStack checks whether the IPs are used by the same account by account name;
however, accountName is null when dedicating a public IP range to a domain.
Modify the code to check the account ID only when dedicating the IP range to an account.
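A minimal sketch of the corrected condition; the parameter names are illustrative.

public class DedicateIpRangeCheckSketch {
    // accountName/accountId are null when a range is dedicated to a domain, so the account
    // comparison is only performed when the range is dedicated to a specific account.
    static void checkExistingIpOwner(Long requestedAccountId, Long ipOwnerAccountId) {
        if (requestedAccountId != null && !requestedAccountId.equals(ipOwnerAccountId)) {
            throw new IllegalArgumentException("IP address in the range is already in use by another account");
        }
        // Dedicating to a domain (requestedAccountId == null): IPs already used by accounts of
        // that domain are allowed, so no exception is thrown.
    }
}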
In virtual routers, there are different dnsmasq settings for the default NIC and non-default NICs of a VM.
We need to update the DHCP information on the network VRs when the default NIC is changed.
For example, if 172.16.1.135 is a non-default NIC of VM VPC1-001-001, then
root@r-22-VM:~# cat /etc/dhcphosts.txt
02:00:1d:15:00:05,set:172_16_1_135,172.16.1.135,VPC1-001-001,710h
root@r-22-VM:~# cat /etc/dhcpopts.txt
172_16_1_135,3
172_16_1_135,6
172_16_1_135,15
If it is the default NIC, then
root@r-22-VM:~# cat /etc/dhcpopts.txt
root@r-22-VM:~# cat /etc/dhcphosts.txt
02:00:1d:15:00:05,172.16.1.135,VPC1-001-001,757h
Fixes #3201
If there are multiple IPs in different subnets assigned to a VPC, after restarting the VPC with cleanup, the VRs will be in FAULT state.
Step to reproduce:
(1) create vpc, source nat IP is 10.11.118.X
(2) assign two public IPs in other subnet to this VPC. 10.11.119.X and 10.11.119.Y
(3) deploy two vms in the vpc, and enable static nat 10.11.119.X and 10.11.119.Y to these two vms
(4) restart the vpc with cleanup. More than one NIC is allocated for 10.11.119 on the new VRs
Logs as below:
2019-05-10 14:12:24,652 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-36:ctx-839f6522 job-652 ctx-35fb4667) (logid:1ab7aa37) Allocating nic for vm VM[DomainRouter|r-85-VM] in network Ntwk[200|Public|1] with requested profile NicProfile[0-0-null-10.11.118.157-vlan://untagged
2019-05-10 14:12:24,676 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-36:ctx-839f6522 job-652 ctx-35fb4667) (logid:1ab7aa37) Allocating nic for vm VM[DomainRouter|r-85-VM] in network Ntwk[200|Public|1] with requested profile NicProfile[0-0-null-10.11.119.110-vlan://119
2019-05-10 14:12:24,699 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-36:ctx-839f6522 job-652 ctx-35fb4667) (logid:1ab7aa37) Allocating nic for vm VM[DomainRouter|r-85-VM] in network Ntwk[200|Public|1] with requested profile NicProfile[0-0-null-10.11.119.110-vlan://119
2019-05-10 14:12:24,723 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-36:ctx-839f6522 job-652 ctx-35fb4667) (logid:1ab7aa37) Allocating nic for vm VM[DomainRouter|r-85-VM] in network Ntwk[200|Public|1] with requested profile NicProfile[0-0-null-10.11.119.110-vlan://119
This is a regression issue caused by commit 1d382e0
See #3339: a runtime exception is thrown but it should be converted to an error return. Wrapping it in a CloudRuntimeException should do the trick.
Fixes#3339
* DPDK vHost User mode selection
* SQL text field and DPDK classes refactor
* Fix NullPointerException after refactor
* Fix unit test
* Refactor details type
Fixes#3315
Currently, the code allowed changing the service offering of a VM to a deleted or inactive service offering. Added a check to throw an exception in that case.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This change allows the instance Settings tab to be visible but inaccessible when the instance is running. A warning is shown when a user tries to access Settings for a running instance and the tab content is greyed out.
It also allows some admin-defined instance settings/details to be made static for users. Users will be able to see them in the instance Settings tab but cannot change their values, as the action buttons are disabled and greyed out. This can be achieved by providing a comma-separated list of details for the global setting key 'user.vm.readonly.ui.details'. A new value 'readonlyuidetails' has been added to UserVMResponse so that the UI can control the editing functionality of settings/details.
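As an illustration only (the detail keys below are examples, not a recommended list), an admin could mark such details read-only for users with cmk:
update configuration name=user.vm.readonly.ui.details value=dataDiskController,rootDiskController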
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
The updateServiceOffering and updateDiskOffering APIs have been modified to allow updating the domain(s) and zone(s) for the offering.
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
Added changes for creating service offerings for specified domain(s) and zone(s).
Fixed checkAccess for disk offerings.
Fixed list APIs for disk and service offerings.
UI changes for creating disk, service offerings for specified domain(s) and zone(s).
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
Allows creating storage offerings associated with particular domain(s) and zone(s). In the create disk/storage offering form in the UI, a multi-select control has been added to select the desired zone(s) and the domain select element has been made multi-select.
The createDiskOffering API has been modified to allow passing lists of domain and zone IDs with the keys domainids and zoneids respectively. These lists are stored in the DB in the cloud.disk_offering_details table under the 'domainids' and 'zoneids' keys as comma-separated strings of IDs. The response for the create, update and list disk offering APIs will return domainids, domainnames, zoneids and zonenames in the details object of the offering.
listDiskOfferings API has been modified to allow passing zoneid to return only offerings which are associated with the zone.
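A sample cmk flow (the IDs are placeholders) creating a disk offering limited to two domains and one zone and then listing offerings for that zone could look like:
create diskoffering name=ZonalDisk displaytext=ZonalDisk disksize=10 domainids=<domain-uuid-1>,<domain-uuid-2> zoneids=<zone-uuid>
list diskofferings zoneid=<zone-uuid>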
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
On first startup, the management server creates and saves a random
ssh keypair in the database using ssh-keygen. The command did not
request keys in PEM format, which is no longer the default format
generated by recent versions of the ssh-keygen tool.
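For reference, a keypair in PEM format can be requested explicitly from a recent ssh-keygen; this is only an illustrative invocation (path and key type are placeholders), not the exact command used by the management server:
ssh-keygen -t rsa -b 2048 -m PEM -N "" -f /tmp/id_rsa.cloud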
The systemvmtemplate always needs re-building whenever there is a change
in the cloud-early-config file. This also tries to fix that by introducing a
stage-2 bootstrap.sh to which the hypervisor detection specific changes
are refactored/moved. The initial cloud-early-config now only patches
before the other scripts are called.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Keep connection alive when on maintenance
* Refactor cancel maintenance and unit tests
* Add marvin tests
* Refactor
* Changing the way we get ssh credentials
* Add check on SSH restart and improve marvin tests
@mike-tutkowski @syed If there's something more that needs to be added/changed, we'll just open another PR for this.
For now this seems to be a very straightforward fix for the UI problem with managed storage.
* Fix iops values when creating a compute offering
* Fix iops values when creating a disk offering
Problem: Custom compute offerings do not allow setting min and max values for CPU and RAM for custom VMs.
Root Cause: Custom compute offerings cannot be created with a given range of CPU number and memory; they allow only fixed values.
Solution: The createServiceOffering API has been modified to allow setting a defined range for CPU number and memory. Also, the UI form for compute offering creation is provided with a new field named 'compute offering type' with values - Fixed, Custom Constrained, Custom Unconstrained. It allows the creation of compute offerings either with a fixed CPU speed and memory for a fixed compute offering, with a range of CPU number and memory for a custom constrained compute offering, or without predefined CPU number, CPU speed and memory for a custom unconstrained compute offering.
To allow the user to set CPU number, CPU speed and memory during VM deployment, UI form for VM deployment has been modified to provide controls to change these values. These controls are depicted in screenshots below for custom constrained and custom unconstrained compute offering types.
Sample API calls using cmk to create a constrained service offering and deploying a VM using it,
create serviceoffering name=Constrained displaytext=Constrained customized=true mincpunumber=2 maxcpunumber=4 cpuspeed=400 minmemory=256 maxmemory=1024
deploy virtualmachine displayname=ConstrainedVM serviceofferingid=60f3e500-6559-40b2-9a61-2192891c2bd6 templateid=8e0f4a3e-601b-11e9-9df4-a0afbd4a2d60 zoneid=9612a0c6-ed28-4fae-9a48-6eb207af29e3 details[0].cpuNumber=3 details[0].memory=800
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
merging
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
With this patch b766bf7
we started tracking disks in the attaching state so that other attach requests can fail gracefully. However, this missed the case where disks were in the allocated state but an attach was requested.
For the use case where users want to attach a disk that is in the allocated state but not ready, we need an allocated-attaching transition as well. We must take care of returning to the original state - allocated or ready - when the attach request has completed.
For the use case of unstarted VMs the disk must proceed as follows - "Allocated" -> Attaching -> Allocated. When the VM is started, the disk is "created" and a pool is assigned. For the use case of started VMs it is simpler and the disk proceeds as follows - Ready -> Attaching -> Ready.
Test this by creating a VM with "startvm=false", creating a disk and trying to attach it in the allocated state. This would give an exception on latest 4.11 but is fixed by this patch.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new global setting `user.vm.blacklisted.details` that
allows admins to blacklist VM details that non-admin users should not
see via the VM's settings tab.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This ensures that tags of a VM snapshot are listed in the UI, available
in the list vmsnapshots API response.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes#2689
With the current code, existing templates were not downloaded to the new secondary storage when it is added. SSVM needed to be restarted to start the download process. This PR starts templates sync for the new secondary storage when it is added.
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
The InfluxDB Java client supports batch mode in versions 2.9+ [1]. Thus, this PR updates to the latest influxdb-java client (2.15), adding support for batch mode.
[1] https://github.com/influxdata/influxdb-java
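A minimal sketch of enabling batch mode with the influxdb-java client (illustrative only, not the actual StatsCollector code; the URL, credentials, database and measurement values are made up):

import java.util.concurrent.TimeUnit;

import org.influxdb.BatchOptions;
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;

public class InfluxBatchSketch {
    public static void main(String[] args) {
        // With batch mode (influxdb-java 2.9+), points are buffered and flushed
        // in batches instead of one HTTP write per point.
        InfluxDB influx = InfluxDBFactory.connect("http://influxdb.example.com:8086", "user", "pass");
        influx.setDatabase("cloudstack");
        influx.enableBatch(BatchOptions.DEFAULTS.actions(100).flushDuration(1000));
        influx.write(Point.measurement("host_stats")
                .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
                .tag("host_id", "1")
                .addField("cpu_used", 42.0)
                .build());
        influx.close(); // flushes any remaining buffered points
    }
}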
This allows showing the complete domain, i.e. the domain path, in the accounts list view and account detail view.
Added a new key, domainpath, in AccountResponse.
Fixes#2994
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Using the db column name instead of the VO variable name was causing an issue with the SQL select statement.
This change fixes the problem by using the VO variable for the conditional.
Fixes#3208
Using the db column name instead of the VO variable name was causing an issue with the SQL select statement.
This PR fixes the problem by using the VO variable for the conditional.
Additionally, in the UI the listAll parameter was being sent twice in the listVMSnapshot API call. That is fixed with this PR.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
The issue was that an incorrect iSCSI path was being passed for managed storage pools when collecting volume stats. Storage pools normally have a UUID-based path while managed storage pools require an IQN-based path.
* server: make snapshotting on KVM non-blocking
This references and uses an already fixed solution from
https://github.com/MissionCriticalCloud/cosmic/pull/68 to make
snapshotting on KVM non-blocking.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* move StorageSubSystemCommand instanceof check above as CopyCommand is a type of StorageSubSystemCommand
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Rename ListUsageRecords API command file name to ListUsageRecordsCmd
* Refactor to use APINAME variable and remove unused s_logger field
* Remove unused import
* Migrate template to target host if needed.
Fix KVM VM local storage live migration by migrating its template to the
target host if needed.
* Address reviewer and add method that updates the DB template reference
* Remove deprecated Config.PrimaryStorageDownloadWait
* Code formatting of @Inject to follow checkstyle
* feature: add libvirt / qemu io bursting
Adds the ability to set bursting features from libvirt / qemu
This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.
https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/
* updates per rafael et al
* The snapshot.backup.rightafter configuration variable was removed by:
SHA: 6bb0ca2f85
This adds it back, now named snapshot.backup.to.secondary instead.
This global parameter allows admins to prevent automatic backups of
snapshots to secondary storage, unless they're actually needed.
Fixes#3096
* updates per review
* api: add command to list management servers
* api: add number of management servers in listInfrastructure command
* ui: add block for management servers on infra page
* api name resolution method cleanup
Offerings can co-exist where one does provide Security Grouping in the
network, but other guest Networks have no Security Grouping.
In V(X)LAN isolation environments the L2 separation is handled by V(X)LAN
and protection between Instances is handled by Security Grouping.
There are multiple scenarios possible where one network has Security Grouping
enabled because that is required in that network,
while in another network in the same zone it can be a choice to have
Security Grouping disabled and allow all traffic to flow.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* - Offline VM and Volume migration on Vmware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations
* Fix indentation of marvin test file and reformat against PEP8
* Fix a few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.
* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one
* Fix unhandled NPE during VM migration
* Revert back to distinct event descriptions for VM to host or storage pool migration
* Reformat test_primary_storage file against PEP-8 and Remove unused imports
* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
* CLOUDSTACK-4045: added a check for the network state when determining whether a new IP should be source NAT. This prevents associated IPs from being marked as source NAT when the network is in the Allocated state, which caused disassociateIpAddress to fail later
* Remove mock object that cause other tests to fail
* Remove underscores from variable types and add documentation for the created method
* Improve exception message to include network name
* Include network UUID with the Exception message and fix failing marvin test
* Rebase against latest master and format AssociateIPAddrCmd class
* netutils: Add method to verify if IPv6 Address is EUI-64
By checking if ff:fe is present in the address we can see if an IPv6 Address
is EUI-64 or not.
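A minimal sketch of that check (illustrative, not the actual NetUtils code): an EUI-64 derived interface identifier carries the bytes 0xFF 0xFE in the middle of the lower 64 bits, i.e. bytes 11 and 12 of the 16-byte address.

import java.net.InetAddress;
import java.net.UnknownHostException;

public class Eui64CheckSketch {
    // True when the IPv6 address looks like a SLAAC/EUI-64 derived address.
    static boolean isEui64(String ip) throws UnknownHostException {
        byte[] b = InetAddress.getByName(ip).getAddress();
        return b.length == 16 && (b[11] & 0xFF) == 0xFF && (b[12] & 0xFF) == 0xFE;
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(isEui64("2001:db8::0211:22ff:fe33:4455")); // true
        System.out.println(isEui64("2001:db8::1234"));                // false
    }
}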
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* ipv6: Do not allow a Secondary IPv6 address to be EUI-64
EUI-64 addresses should not be allowed as they can be used in the future by a yet-to-be-deployed
Instance which has to obtain this address because it matches its MAC.
In a /64 subnet there are more than enough other IPs available to be allocated to
Instances, therefore we can safely disallow the allocation of EUI-64 addresses.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
With IPv6 we are not using DHCP to allocate addresses; using
StateLess Address Auto Configuration (SLAAC) an Instance will calculate
its own address based on the Router Advertisements sent out by the
routers in the network.
This Advertisement contains the IPv6 subnet in use in that network and
allows calculating the stable address the Instance will obtain based
on its MAC Address.
The existing code is 'dead code' as it has been written, but was never
used by any production code.
SLAAC only works properly with subnets of exactly 64-bits large.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This regression was introduced with PR #2773 (Support IPv6 address in addIpToNic). The contributor did not take into consideration that the method “addIpToNic” was designed to add/allocate other IPs to a NIC. If users did not specify an IP, ACS should generate one for the network where the NIC is plugged into.
Even though I am fixing this regression here, it is still important to highlight that for IPV6, the user is not able to allocate an IP without specifying it.
* Add Support for InfluxDB on StatsCollector
* Code refactored to fit the inner class architecture.
Due to the inner class structure, test cases for some methods will not be
implemented. In the future it will be necessary to refactor the whole
StatsCollector architecture and extract the inner classes.
Each inner class that is a "stats collector" and sends data to InfluxDB
will extend AbstractStatsCollector to send metrics to the correct
measurement ("table"). For instance, HostCollector sends data to host_stats,
VmStatsCollector sends data to vm_stats.
Added a ping test to ensure that the target InfluxDB host is reachable
* Address PR reviews
* Enhancements and tests implemented, addressing reviewers' comments.
* Set variables to private
A corner case was found on 4.11.2 for #2493 leading to an infinite loop in the PrepareForMaintenance state.
To prevent such cases, in which failed migrations are detected but VMs are still running on the host, this feature adds a new cluster setting host.maintenance.retries, which is the number of retries before marking the host as ErrorInMaintenance if migration errors persist (see the example after the test steps below).
How Has This Been Tested?
- 2 KVM hosts, pick one which has running VMs as H
- Block migration ports on H to simulate failures on migrations:
iptables -I OUTPUT -j REJECT -m state --state NEW -m tcp -p tcp --dport 49152:49215 -m comment --comment 'test block migrations'
iptables -I OUTPUT -j REJECT -m state --state NEW -m tcp -p tcp --dport 16509 -m comment --comment 'test block migrations'
- Put host H in Maintenance
- Observe that host is indefinitely in PrepareForMaintenance state (after this fix it goes into ErrorInMaintenance after retrying host.maintenance.retries times)
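For example, assuming the setting is cluster-scoped as described above, it could be tuned with cmk (the cluster ID is a placeholder):
update configuration name=host.maintenance.retries value=3 clusterid=<cluster-uuid>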
This is important because it helps in communicating back the exact
error to the API callee.
The current behavior is that ParamProcessWorker#processParameters catches
the exception and returns an exception of the incorrect type without the
proper message.
There is no reason to not send userdata+password to the VR as all
Instances in CloudStack are Dual-Stacked. They have IPv4 and IPv6
so they can query their metadata over IPv4 at the VR.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* Allow KVM VM live migration with ROOT volume on file
* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests
* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
Users reported that they weren't getting all APIs listed in cloudmonkey when running a sync. After some debugging, I found that the problem is that the ApiDiscoveryService is calling ApiRateLimitServiceImpl.checkAccess(), so the results of the listApis command are being truncated because CloudStack believes the user has exceeded their API throttling rate.
I enabled throttling with a 25 requests per second limit. I then created a test role with only list* permissions and assigned it to a test user. When this user calls listApis, they will typically receive anywhere from 15-18 results. Checking the logs, you see "The given user has reached his/her account api limit, please retry after 218 ms.".
I raised the limit to 200 requests per second, restarted the management server and tried again. This time I got 143 results and no log messages about the user being throttled.
These files were not in the right directory and thus not being executed
by Maven.
By moving the files we make sure these tests are run again.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This commit allows deploying VMs with a specific IPv4 address.
DirectPodBasedNetworkGuru does not support requesting a custom
IP-Address while creating a new NIC/Instance, throwing the following
error:
Error 530: Does not support custom ip allocation at this time:
NicProfile[0-0-null-null-null
Some use-cases prefer the ability to request the IPv4 address which the
Instance will get.
This implementation adds unit test cases to cover it, and it was manually
tested in Basic Networking. I can perform more tests if requested.
With earlier work in Basic Networking and the security group provider IPv6 is
supported and we can allow IPv6 to be supplied in networks with SG enabled.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This should not be in DEBUG as people would want to know that the host was skipped
because it didn't have enough slots available to run the VM.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
After upgrading existing environments to 4.11, ConfigDrive cannot be enabled for existing zones due to a missing entry in the 'physical_network_service_providers' table.
It does this by reusing the DateUtil helpers. DateUtil uses java.time.* as it knows how to deal
with timezones correctly.
The format expected by signatureVersion=3&expires=.... is quite limited.
It should accept the following formats, which contain a timezone and/or milliseconds:
2018-10-01T08:12:14Z
2018-10-01T08:12:14+01:00
2018-10-01T08:12:14+0100
2018-10-01T08:12:14.000Z
2018-10-01T08:12:14.000+01:00
2018-10-01T08:12:14.000+0100
As far as I know, only 2018-10-01T08:12:14+0100 is accepted by the current codebase.
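A minimal java.time sketch of the intent (illustrative only, not the actual DateUtil code) that accepts all of the variants listed above:

import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;

public class ExpiresParsingSketch {
    // ISO local date-time (optional millis), followed by "Z", "+01:00" or "+0100".
    private static final DateTimeFormatter EXPIRES = new DateTimeFormatterBuilder()
            .append(DateTimeFormatter.ISO_LOCAL_DATE_TIME)
            .optionalStart().appendOffset("+HH:MM", "Z").optionalEnd()
            .optionalStart().appendOffset("+HHMM", "Z").optionalEnd()
            .toFormatter();

    public static void main(String[] args) {
        String[] samples = {
            "2018-10-01T08:12:14Z", "2018-10-01T08:12:14+01:00", "2018-10-01T08:12:14+0100",
            "2018-10-01T08:12:14.000Z", "2018-10-01T08:12:14.000+01:00", "2018-10-01T08:12:14.000+0100"
        };
        for (String s : samples) {
            System.out.println(OffsetDateTime.parse(s, EXPIRES).toInstant());
        }
    }
}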
This PR echoes other pull requests I made earlier this year. #2392 and #2867
Signed-off-by: Yoan Blanc <yoan.blanc@exoscale.ch>
* Remove some unused Classes
These classes were deleted because they have no references in our code base. They are not in Spring execution flow nor instantiated with "new":
- com.cloud.agent.api.CheckStateAnswer
- com.cloud.agent.api.StartupVMMAgentCommand
- com.cloud.agent.api.routing.UserDataCommand
- remove from description at
com.cloud.configuration.Config.ExecuteInSequenceNetworkElementCommands
enum
- com.cloud.agent.api.storage.UpgradeDiskCommand
- com.cloud.agent.api.storage.CreatePrivateTemplateCommand
- com.cloud.agent.api.storage.DestroyAnswer
- Note: "FIXME: Should have an DestroyAnswer" at
com.cloud.storage.resource.StoragePoolResource
- com.cloud.agent.api.storage.UpgradeDiskAnswer
- com.cloud.agent.api.storage.ManageVolumeAvailabilityAnswer
- com.cloud.agent.api.storage.ManageVolumeAvailabilityCommand
- com.cloud.exception.UsageServerException
- com.cloud.info.SecStorageVmLoadInfo
- com.cloud.serializer.SerializerHelper
* PR#1448 update description of 'execute.in.sequence.network.element.commands' param
Update the description of the 'execute.in.sequence.network.element.commands' parameter to reflect an unused command that has been removed. The removed command class is 'UserDataCommand'.
* Add cloud schema to update SQL
Prevents errors while migrating VM from ISO:
Test 1: Deploy VM from ISO -> Live migrate VM to another host -> ERROR
Test 2: Register ISO using Direct Download on KVM -> Deploy VM from ISO -> Live migrate VM to another host -> ERROR
- Prevent NullPointerException migrating VM from ISO
- Prevent mount secondary storage on ISO direct downloads on KVM
This force stops old VRs when performing a rolling restart with
cleanup=true. This will ensure that VRs are powered off quickly rather than
waiting longer for the normal ACPI shutdown. During testing, it was found
that on VMware VM stops are slow compared to XenServer and KVM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Unify checksum API output for templates and ISOs: do not list the checksum algorithm on:
- KVM direct downloads
- in-progress normal template downloads. The algorithm is shown in the listTemplates API, but after the template is downloaded it is no longer shown.
This adds a global setting for admins who may not want the rolling
restart of routers or are seeing any issues around it. In future, this
setting may be removed.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes#2719 where a private gateway IP might be incorrectly
programmed on a guest network nic. The VR now checks ipassoc
requests by MAC address rather than the provided nic/device id, in case
the latter is wrong.
The root cause is that the device id information is lost when aggregated
commands are created upon starting of a new VPC VR; without the correct
device id in the ip_associations json it mis-programs the VR.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
These boolean-return methods are named "getXXX".
Other boolean-return methods are named "isXXX".
Considering these methods return boolean values, it is clearer and more consistent to rename them as "isXXX".
(rebase #2602 and #2816)
* CLOUDSTACK-9473: storage pool capacity check when volume is resized or migrated
The storage pool capacity check is not being called on volume resize and migration.
This may lead to the allocated percentage of storage exceeding 100%.
Setup:
1 VMware cluster with 2 Hosts.
Executed Steps:
Applied the following global settings:
storage.overprovisioning.factor = 1
pool.storage.allocated.capacity.disablethreshold = 1
pool.storage.capacity.disablethreshold = 1
Restarted management server
Executed resize and migrate volume operations and observed that the storage pool capacity check is not performed on resizeVolume and migrateVolume.
Result:
Root cause analysis shows the storage pool capacity check is not called when doing migration and resizing.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Removing UserVmDetailsDao duplicate field;
Found the following repeated field in the UserVmManagerImpl class
@Inject
private UserVmDetailsDao _vmDetailsDao;
@Inject
private UserVmDetailsDao _uservmDetailsDao;
Refactored to a single field;
@Inject
private UserVmDetailsDao userVmDetailsDao;
Similar to this PR: https://github.com/apache/cloudstack/pull/2750/files
* 4.11:
Changed the implementation of isVolumeOnManagedStorage(VolumeInfo) to check if the data store in question is for primary storage (and added a unit test from Daan Hoogland)
vmware: reboot VR after mac updates (#2794)
This re-introduces the rebooting of VR after setup of nics/macs in
case of VMware. It also adds a minor enhancement to show the console
esp. for root admins when VRs and systemvms are in starting state.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Previously, the ethernet device index was used as rt_table index and
packet marking id/integer. With eth0 that is sometimes used as link-local
interface, the rt_table index `0` would fail as `0` is already defined
as a catchall (unspecified). The fwmarking on packets on eth0 with 0x0
would also fail. This fixes the routing issues by adding 100 to the
ethernet device index so the value is non-zero; for example, the
relationship between rt_table index and ethernet device would then be:
100 -> Table_eth0 -> eth0 -> fwmark 100 or 0x64
101 -> Table_eth1 -> eth1 -> fwmark 101 or 0x65
102 -> Table_eth2 -> eth2 -> fwmark 102 or 0x66
This would maintain the legacy design of routing based on packet mark
and appropriate routing table rules per table/ids. This also fixes a
minor NPE issue around listing of snapshots.
This also backports fixes to smoketests from master.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Cleanup and code-formatting of POM files
* Remove obsolete mycila license-maven-plugin
* Remove obsolete console-proxy/plugin project
* Move console-proxy-rdbconsole under console-proxy parent
* Use correct parent path for rdpconsole
* Order alphabetically items in setnextversion.sh
* Unify License header in POMs
* Alphabetic order of modules definition
* Extract all defined versions into parent pom
* Remove obsolete files: version-info.in, configure-info.in
* Remove redundant defaultGoal
* Remove useless checkstyle plugin from checkstyle project
* Order alphabetically items in pom.xml
* Add additional SPACEs to fix debian build
* Don't execute checkstyle on parent projects
* Use UTF-8 encoding in building checkstyle project
* Extract plugin versions into properties
* Execute PMD plugin on all the projects with -Penablefindbugs
* Upgrade maven plugins to latest version
* Make sure to always look for apache parent pom from repository
* Fix incorrect version grep in debian packaging
* Fix rebase conflicts
* Fix rebase conflicts
* Remove PMD for now to be fixed on another PR
* Fix limitation on tag matching in 'migrateVolume' with disk offering replacement
When the feature to enable disk offering replacement during volume migration was created, we were forcing the tags of the new disk offering to be exactly the same as the tags of the target storage pool. However, that is not how ACS manages volume allocation. This change modifies this validation to make it consistent with volume allocation.
* Address Nitin's suggestions
* Apply Daan's suggestion regarding "doesTargetStorageSupportDiskOffering" method
* fix problem
There was a concurrency problem with the “moveNetworkAclItem” API method. If two users were changing the ACL rule order at the same time, this could lead to inconsistent results.
To solve the problem we added a “consistency check” parameter, which is used to hold a consistency hash. This hash is created by applying an MD5 hash function to a String built from all ACL rule UUIDs concatenated in their order, which is defined by the ‘number’ field.
We also lock the editing of the ACL while executing the change. This allows us to handle race conditions nicely and present good feedback to the user.
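A minimal sketch of that consistency-hash idea (illustrative only; 'Rule' below is a stand-in, not the real ACL item class):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Comparator;
import java.util.List;

public class AclConsistencyHashSketch {
    static class Rule {
        final String uuid;
        final int number;
        Rule(String uuid, int number) { this.uuid = uuid; this.number = number; }
    }

    // Concatenate the rule UUIDs in 'number' order and MD5 the resulting string.
    static String consistencyHash(List<Rule> rules) throws NoSuchAlgorithmException {
        StringBuilder concatenated = new StringBuilder();
        rules.stream()
             .sorted(Comparator.comparingInt((Rule r) -> r.number))
             .forEach(r -> concatenated.append(r.uuid));
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(concatenated.toString().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}

The caller sends this hash along with the move request; if the server computes a different hash over the current rule order, the ACL changed in between and the move can be rejected.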
This is a new feature for CS that allows Admin users improved
troubleshooting of network issues in CloudStack hosted networks.
Description: For troubleshooting purposes, CloudStack administrators may wish to execute network utility commands remotely on system VMs, or request system VMs to ping/traceroute/arping to specific addresses over specific interfaces. An API command to provide such functionalities is being developed without altering any existing APIs. The targeted system VMs for this feature are the Virtual Router (VR), Secondary Storage VM (SSVM) and the Console Proxy VM (CPVM).
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+Remote+Diagnostics+API
ML discussion:
https://markmail.org/message/xt7owmb2c6iw7tva
Fixes the version in pom etc. to be consistent with versioning pattern as X.Y.Z.0-SNAPSHOT after a minor release.
Signed-off-by: Khosrow Moossavi <khos2ow@gmail.com>
As rolling restart does not deallocate an IP before configuring it on a new VR, the code must allow it to be reused on a non-redundant VPC's gateway nic.
Increase ping counts to reduce intermittent failures in smoketests.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This ensures that fewer mount points are created on hosts for either
primary storage pools or secondary storage pools.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.11:
comment on unencryption
ui: fix create VPC dialog box failure when zone is SG enabled (#2704)
CLOUDSTACK-10381: Fix password reset / reset ssh key with ConfigDrive
isisnot=
extra message
debug message
imports
update without decrypt doesn't work
set unsensitive attributes as not 'Secure'
remove old config artifacts from update path
It is not possible to deploy an Advanced zone with Security Groups and the VXLAN isolation method on KVM. The exception "Unable to convert network offering with specified id to network profile" is logged.
Changes in PR #2508 have caused network restart to fail in a Nuage setup,
as the new VR takes the same IP as the old one, and the old VR is still running.
Nuage doesn't support multiple VMs having the same IP.
We delay provisioning the interfaces in VSD until the old VR interface is released.
* Create unit test cases for 'ConfigDriveBuilder' class
* add method 'getProgramToGenerateIso' as suggested by rohit and Daan
* fix encoding for base64 to StandardCharsets.US_ASCII
* fix MockServerTest.testIsMockServerCanUpgradeConnectionToSsl()
This is another method that has been causing Jenkins to fail for almost a month
This fixes and ensures that every VM has its capacity individually
calculated, with the initial override of 1.0f as overcommit ratio.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When configuring pre-setup primary storage we can enter the name-label of the storage that is going to be used by ACS and is already set up on the host. The problem is that any String of characters can be used there, and this String does not need to be a UUID. When listing volumes from a primary storage in such a condition, the list will return all of the volumes in the cloud, because the “API framework” ignores the value as it is not of UUID type.
This introduces a new global setting `vm.configdrive.primarypool.enabled` to toggle creation/hosting of config drive ISO files on primary storage; the default is false, causing them to be hosted on secondary storage. The current support is limited on the hypervisor resource side and in the current implementation limited to `KVM` only. The next big change is that the config drive is created at a temporary location by the management server and shipped to either the KVM or SSVM agent via the cmd-answer pattern, the data of which is not logged in logs. This saves us from adding a genisoimage dependency on the cloudstack-agent pkg.
The APIs to reset ssh public key, password and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating the existing ISO. If there are objections I'll re-add the strategy of detach+attach of a new config ISO as a way of updating. In the refactored implementation, the folder name is changed to lower-cased configdrive. During VM start, migration or shutdown/removal, if primary storage is enabled for use, the KVM agent will handle cleanup tasks, otherwise the SSVM agent will handle them.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Primary Storage count for an account does not decrease when a Data Disk is deleted
When a data disk is created and not attached to a running VM, "deleteVolume" will not decrement the count for used primary storage in the account's accounting information. The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method.
Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with the listAccounts API - It is the same as before deleting the data disk (it should have returned to the value seen in step 2!)
* formatting and cleanups
* fix imports that were wrongly changed during rebase
This fixes config drive to use VM's user provided host-name instead of
the internal VM instance ID for hostname related config in both
cloudstack and openstack metadata bundled in the ISO.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-9184: Fixes#2631 VMware dvs portgroup autogrowth
This deprecates the vmware.ports.per.dvportgroup global setting.
The vSphere Auto Expand feature (introduced in vSphere 5.0) will take
care of dynamically increasing/decreasing the dvPorts when running out
of distributed ports. But in the case of vSphere 4.1/4.0 (if used), as this
feature is not there, the new default value (=> 8) has an impact on
existing deployments. Action item for vSphere 4.1/4.0: the admin should
modify the global configuration setting "vmware.ports.per.dvportgroup"
from 8 to a number suited to their environment, because the proposed
default value of 8 would be very low without the auto expand feature.
The current default value of 256 may not need immediate
modification after deployment, but 8 would be very low, which means
the admin needs to update it immediately after upgrade.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a rolling restart of VRs when networks are restarted
with cleanup option for isolated and VPC networks. A make redundant option is
shown for isolated networks now in UI.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
These Boolean-return methods are named "getXXX", but other Boolean-return methods are named "isXXX", such as the following two methods. Since they return boolean values, renaming them to "isXXX" is clearer than "getXXX".
Supporting ConfigDrive user data on L2 networks.
Add UI checkbox to create L2 network offering with config drive.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This returns the boolean value of the `isuserdefined` key rather than
converting it to a string. Fixes#2529.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10147 Disabled Xenserver Cluster can still deploy VM's. Added code to skip disabled clusters when selecting a host (#2442)
(cherry picked from commit c3488a51db)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10318: Bug on sorting ACL rules list in chrome (#2478)
(cherry picked from commit 4412563f19)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10284:Creating a snapshot from VM Snapshot generates error if hypervisor is not KVM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10221: Allow IPv6 when creating a Basic Network (#2397)
Since CloudStack 4.10 Basic Networking supports IPv6 and thus
should be allowed to be specified when creating a network.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
(cherry picked from commit 9733a10ecd)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10214: Unable to remove local primary storage (#2390)
Allow admins to remove primary storage pool.
Cherry-picked from eba2e1d8a1
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* dateutil: constistency of tzdate input and output (#2392)
Signed-off-by: Yoan Blanc <yoan.blanc@exoscale.ch>
Signed-off-by: Daan Hoogland <daan.hoogland@shapeblue.com>
(cherry picked from commit 2ad5202823)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10054:Volume download times out in 3600 seconds (#2244)
(cherry picked from commit bb607d07a9)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* When creating a new account (via domain admin) it is possible to select “root admin” as the role for the new user (#2606)
* create account with domain admin showing 'root admin' role
Domain admins should not be able to assign the role of root admin to new users. Therefore, the role ‘root admin’ (or any other of the same type) should not be visible to domain admins.
* License and formatting
* Break long sentence into multiple lines
* Fix wording of method 'getCurrentAccount'
* fix typo in variable name
* [CLOUDSTACK-10259] Missing float part of secondary storage data in listAccounts
* [CLOUDSTACK-9338] ACS not accounting resources of VMs with custom service offering
ACS is accounting the resources properly when deploying VMs with custom service offerings. However, there are other methods (such as updateResourceCount) that do not execute the resource accounting properly, and these methods update the resource count for an account in the database. Therefore, if a user deploys VMs with custom service offerings, and later this user calls the “updateResourceCount” method, it (the method) will only account for VMs with normal service offerings, and update this as the number of resources used by the account. This will result in a smaller number of resources to be accounted for the given account than the real used value. The problem becomes worse because if the user starts to delete these VMs, it is possible to reach negative values of resources allocated (breaking all of the resource limiting for accounts). This is a very serious attack vector for public cloud providers!
* [CLOUDSTACK-10230] User should not be able to use removed “Guest OS type” (#2404)
* [CLOUDSTACK-10230] User is able to change to “Guest OS type” that has been removed
Users are able to change the OS type of VMs to “Guest OS type” that has been removed. This becomes a security issue when we try to force users to use HVM VMs (Meltdown/Spectre thing). A removed “guest os type” should not be usable by any users in the cloud.
* Remove trailing lines that are breaking build due to checkstyle compliance
* Remove unused imports
* fix classes that were in the wrong folder structure
* Updates to capacity management
Currently, users are not able to change the disk offering of VMs' root volumes. It might be interesting to allow such changes, so users would be able to move a VM initially deployed in shared storage to local storage and vice versa. It is also interesting to enable changing the quality of service offered to root disks.
We are allowing only administrators to execute the change of root volumes disk offerings during volume migration between storage. Therefore, we perform all at once, the migration of storage and the disk offering to reflect the new place.
* [CLOUDSTACK-5235] Force users to enter old password when updating password
* Formatting for checkstyle
* Remove an unused import in AccountManagerImpl
* Apply Nitin's suggestions
* Change 'oldPassword' to 'currentPassword'
* Second review of Resmo
* Fix typos found by Nitin
* create account with domain admin showing 'root admin' role
Domain admins should not be able to assign the role of root admin to new users. Therefore, the role ‘root admin’ (or any other of the same type) should not be visible to domain admins.
* License and formatting
* Break long sentence into multiple lines
* Fix wording of method 'getCurrentAccount'
* fix typo in variable name
* [CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of work developed on PR #2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage systems types (local/shared) even when the disk offering would not allow such operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin inform a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.
The API method will have the following parameters:
* storageid (required)
* volumeid (required)
* livemigrate(optional)
* newdiskofferingid (optional) – this is the new parameter
The expected behavior is the following:
* If “newdiskofferingid” is not provided the current behavior is maintained. Override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the service offering, and a warning message will be logged.
We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.
* Code formatting
* Adding a test to cover migration with new disk offering (#4)
* Adding a test to cover migration with new disk offering
* Update test_volumes.py
* Update test_volumes.py
* fix test_11_migrate_volume_and_change_offering
* Fix typo in Java doc
* CLOUDSTACK-10289: Config Drive Metadata: Use VM UUID instead of VM id
* CLOUDSTACK-10288: Config Drive Userdata: support for binary userdata
* CLOUDSTACK-10358: SSH keys are missing on Config Drive disk in some cases
* fix https://issues.apache.org/jira/browse/CLOUDSTACK-10356
* del patch file
* Update ResourceCountDaoImpl.java
* fix some format
* fix code
* fix error message in VolumeOrchestrator
* add check null stmt
* del import unuse class
* use BooleanUtils to check Boolean
* fix error message
* delete unuse function
* delete the deprecated function updateDomainCount
* add error log and throw exception in ProjectManagerImpl.java
CloudStack SSO (using security.singlesignon.key) does not work anymore with CloudStack 4.11, since commit 9988c26, which introduced a regression due to a refactoring: every API request that is not "validated" generates the same error (401 - Unauthorized) and invalidates the session.
However, CloudStack UI executes a call to listConfigurations in method bypassLoginCheck. A non-admin user does not have the permissions to execute this request, which causes an error 401:
{"listconfigurationsresponse":{"uuidList":[],"errorcode":401,"errortext":"unable to verify user credentials and/or request signature"}}
The session (already created by SSO) is then invalidated and the user cannot access to CloudStack UI (error "Session Expired").
Before 9988c26 (up to CloudStack 4.10), an error 432 was returned (and ignored):
{"errorresponse":{"uuidList":[],"errorcode":432,"cserrorcode":9999,"errortext":"The user is not allowed to request the API command or the API command does not exist"}}
Even if the call to listConfigurations were removed, another call to listIdps also leads to an error 401 for user accounts if the SAML plugin is not enabled.
This pull request aims to fix the SSO issue, by restoring errors 432 (instead of 401 + invalidate session) for commands not available. However, if an API command is explicitly denied using ACLs or if the session key is incorrect, it still generates an error 401 and invalidates the session.
* CLOUDSTACK-10359: Change the inconsistent method names.
The two methods are named "getXXX".
The two methods check the status of variables.
"getCustomized" is not as intuitive as "isCustomized".
"getIsSystem" is not as intuitive as "isSystem" as well.
* Add the missing changes of all usages of method getIsSystem.
This extends securing of KVM hosts to securing of libvirt on KVM
host as well for TLS enabled live VM migration. To simplify implementation
securing of host implies that both host and libvirtd processes are
secured with management server's CA plugin issued certificates.
Based on whether keystore and certificates files are available at
/etc/cloudstack/agent, the KVM agent determines whether to use TLS or
TCP based uris for live VM migration. It is also enforced that a secured
host will allow live VM migration to/from other secured host, and an
unsecured hosts will allow live VM migration to/from other unsecured
host only.
Post upgrade the KVM agent on startup will expose its security state
(secured detail is sent as true or false) to the managements server that
gets saved in host_details for the host. This host detail can be accessed
via the listHosts response, and in the UI unsecured KVM hosts will show
up with the host state of ‘unsecured’. Further, a button has been added
that allows admins to provision/renew certificates to KVM hosts and can
be used to secure any unsecured KVM host.
The `cloudstack-setup-agent` was modified to accept a new flag `-s`
which will reconfigure libvirtd with following settings:
listen_tcp=0
listen_tls=1
tcp_port="16509"
tls_port="16514"
auth_tcp="none"
auth_tls="none"
key_file = "/etc/pki/libvirt/private/serverkey.pem"
cert_file = "/etc/pki/libvirt/servercert.pem"
ca_file = "/etc/pki/CA/cacert.pem"
For a connected KVM host agent, when the certificates are
renewed/provisioned, a background task is scheduled that waits until all
of the agent tasks finish, after which the libvirt process is restarted and
finally the agent is restarted via AgentShell.
There are no API or DB changes.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
I found this empty test while working on other PRs. Empty/ignored tests do not help us. I am removing it. In the future, if we manage to improve these classes, we can work on unit test cases for them.
This allows CloudStack to use a console proxy domain instead of public
IP address even when ssl is not enabled but console proxy url/domain
is defined in global settings.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add stack traces information
* update stack trace info
* update stack trace to make them consistent
* update stack traces
* update stacktraces
* update stacktraces for other similar situations
* fix some other situations
* enhance other situations
* [CLOUDSTACK-10230] User is able to change to “Guest OS type” that has been removed
Users are able to change the OS type of VMs to “Guest OS type” that has been removed. This becomes a security issue when we try to force users to use HVM VMs (Meltdown/Spectre thing). A removed “guest os type” should not be usable by any users in the cloud.
* [CLOUDSTACK-10226] CloudStack is not importing Local storage properly
CloudStack is importing as Local storage any XenServer SR that is of type LVM or EXT. This causes a problem when one wants to use both Direct attach storage and local storage. Moreover, CloudStack was not importing all of the local storage that a host has available when local storage is enabled. It was only importing the First SR it sees.
To fix the first problem we started ignoring SRs that have the flag shared=true when discovering local storages. SRs configured to be shared are used as direct attached storage, and therefore should not be imported again as local ones.
To fix the second problem, we started loading all Local storage and importing them accordingly to ACS.
* Cleanups and formatting
Several fixes addressed:
- Detach ISO fails when trying to detach a direct download ISO
- Fix for metalink support on SSVM agents (this closes CLOUDSTACK-10238)
- Reinstall VM from bypassed registered template (this closes CLOUDSTACK-10250)
- Fix upload certificate error message even though operation was successful
- Fix metalink download, checksum retry logic and metalink SSVM downloader
The new CA framework introduced basic support for comma-separated
list of management servers for agent, which makes an external LB
unnecessary.
This extends that feature to implement LB sorting algorithms that
sort the management server list before it is sent to the agents.
This adds central intelligence in the management server and adds
additional enhancements to the Agent class to be algorithm aware and
to have a background mechanism to check/fall back to the preferred
management server (assumed to be the first in the list). This supports
any indirect agent such as the KVM, CPVM and SSVM agents, and would
provide support for management server host migration during upgrade
(when, instead of in-place, new hosts are used to set up the new management server).
This FR introduces two new global settings:
- `indirect.agent.lb.algorithm`: The algorithm for the indirect agent LB.
- `indirect.agent.lb.check.interval`: The preferred host check interval
for the agent's background task that checks and switches to agent's
preferred host.
The indirect.agent.lb.algorithm supports following algorithm options:
- static: use the list as provided.
- roundrobin: evenly spreads hosts across management servers based on
host's id.
- shuffle: (pseudo) randomly sorts the list (not recommended for production).
Any changes to the global settings `indirect.agent.lb.algorithm` and
`host` do not require restarting the management server(s) or the
agents. A message bus based system dynamically reacts to changes in these
global settings and propagates them to all connected agents.
The comma-separated management server list is propagated to agents in the
following cases:
- Addition of a host (including ssvm and cpvm systemvms).
- Connection or reconnection by the agents to a management server.
- After admin changes the 'host' and/or the
'indirect.agent.lb.algorithm' global settings.
On the agent side, the 'host' setting is saved in its properties file as:
`host=<comma separated addresses>@<algorithm name>`.
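For example (addresses and interval are illustrative values), the relevant agent.properties entries might look like:
host=10.1.1.1,10.1.1.2,10.1.1.3@roundrobin
host.lb.check.interval=300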
First the agent connects to the management server and sends its current
management server list, which is compared by the management server; if the
comparison fails, a new/updated list is sent for the agent to persist.
From the agent's perspective, the first address in the propagated list
will be considered the preferred host. A new background task can be
activated by configuring the `indirect.agent.lb.check.interval` which is
a cluster level global setting from CloudStack and admins can also
override this by configuring the 'host.lb.check.interval' in the
`agent.properties` file.
Every time the agent gets the ms-host list and the algorithm, the host-specific
background check interval is also sent, and the agent dynamically reconfigures
the background task without the need to restart agents.
Note: The 'static' and 'roundrobin' algorithms strictly check for the
order as expected by them; however, the 'shuffle' algorithm just checks
the content and not the order of the comma-separated ms host addresses.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- new flag `-T, --use-timestamp` to use `timestamp` when POM version contains SNAPSHOT
- in the final artifacts (jar) name
- in the final package (rpm, deb) name
- in `/etc/cloudstack-release` file of SystemVMs
- in the Management Server > About dialog
- if there's a "branding" string in the POM version (e.g. `x.y.z.a-NAME[-SNAPSHOT]`),
the branding name will be used in the final generated package name such as the following:
- `cloudstack-management-x.y.z.a-NAME.NUMBER.el7.centos.x86_64`
- `cloudstack-management_x.y.z.a-NAME-NUMBER~xenial_all.deb`
- branding string can be overridden with the newly added `-b, --brand` flag
- handle the new format version for VR version
- fix long opts (they were broken)
- tolerate and show a warning message for unrecognized flags
- usage help reformat
* Deprecate Version class in favor of CloudStackVersion
* CLOUDSTACK-8855 Improve Error Message for Host Alert State
* [CLOUDSTACK-9846] create column to save the content of alert messages
Remove declaration of throws CloudRuntimeException
I also removed some unused variables and comments left behind
This closes#837
* Isolate a problematic test "smoke/test_certauthority_root"
* Refactored nuage tests
Added simulator support for ConfigDrive
Allow all nuage tests to run against simulator
Refactored nuage tests to remove code duplication
* Move test data from test_data.py to nuage_test_data.py
Nuage test data is now contained in nuage_test_data.py instead of
test_data.py
Removed all nuage test data from nuage_test_data.py
* CLOUD-1252 fixed cleanup of vpc tier network
* Import libVSD into the codebase
* CLOUDSTACK-1253: Volumes are not expunged in simulator
* Fixed some merge issues in test_nuage_vsp_mngd_subnets test
* Implement GetVolumeStatsCommand in Simulator
* Add vspk as marvin nuagevsp dependency, after removing libVSD dependency
* correct libVSD files for license purposes
pep8 pyflakes compliant
* [CLOUDSTACK-10314] Add Text-Field to each ACL Rule
It is interesting to have a text field (e.g. CHAR-256) added to each ACL rule, which allows entering a "reason" for each FW rule created. This is valuable for customer documentation, as well as a best practice for evidence towards auditing the system
* Formatting to make check style happy and code clean ups
* [CLOUDSTACK-10240] ACS cannot migrate a volume from local to shared storage.
CloudStack is logically restricting the migration of local storages to shared storage and vice versa. This restriction is a logical one and can be removed for XenServer deployments. Therefore, we will enable migration of volumes between local-shared storages in XenServers independently of their service offering. This will work as an override mechanism to the disk offering used by volumes. If administrators want to migrate local volumes to a shared storage, they should be able to do so (the hypervisor already allows that). The same the other way around.
* Cleanups implemented while working on [CLOUDSTACK-10240]
* Fix test case test_03_migrate_options_storage_tags
The changes applied were:
- When loading hypervisors capabilities we must use "default" instead of nulls
- "Enable" storage migration for simulator hypervisor
- Remove restriction on "ClusterScopeStoragePoolAllocator" to find shared pools
This makes the keystore utility scripts execute as sudoer
in case the process uid/owner is not root but still a sudoer user.
It also fails addHost while securing a KVM host if the keystore fails to be
set up for any reason.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-10269: On deletion of role set name to null (#2444)
CLOUDSTACK-10146 checksum in java instead of script (#2405)
CLOUDSTACK-10222: Clean snaphosts from primary storage when taking (#2398)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
During deletion of a role, set its name to null. This fixes a concurrent
exception issue where previously the deleted role would be renamed
with a timestamp.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
ACS is accounting the resources properly when deploying VMs with custom service offerings. However, there are other methods (such as updateResourceCount) that do not execute the resource accounting properly, and these methods update the resource count for an account in the database. Therefore, if a user deploys VMs with custom service offerings, and later this user calls the “updateResourceCount” method, it (the method) will only account for VMs with normal service offerings, and update this as the number of resources used by the account. This will result in a smaller number of resources to be accounted for the given account than the real used value. The problem becomes worse because if the user starts to delete these VMs, it is possible to reach negative values of resources allocated (breaking all of the resource limiting for accounts). This is a very serious attack vector for public cloud providers!
This fixes a move-refactoring error introduced in #2283.
For instance, the class DatadiskTO is supposed to be in the com.cloud.agent.api.to package. However, the folder structure it was placed in is com.cloud.agent.api.api.to.
Skip tests for cloud-plugin-hypervisor-ovm3:
For some unknown reason, there are quite a lot of broken test cases for cloud-plugin-hypervisor-ovm3. They might have appeared after some dependency upgrade and were overlooked by the person updating them. I checked them to see if they could be fixed, but these tests are not developed in a clear and clean manner. On top of that, we (at least I) do not see people using the OVM3 hypervisor with ACS. Therefore, I decided to skip them.
Indentation corrected to use spaces instead of tabs in XML files
Remove maven standard module (which only a few were using) and get rid of Maven customization for the project structure.
- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
This fixes regression failures seen in Trillian, fixes NPEs that cause Travis related failures.
This also removes the aria2 dependency from rpms that require users to enable/install epel-release.
This finally updates the checksums for 4.11 systemvmtemplates in db upgrade path.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
Added support for root disks on managed storage with KVM
Added support for volume snapshots with managed storage on KVM
Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.
Included support for Reinstall VM on KVM with managed storage
Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
Added support to download (extract) a managed-storage volume to a QCOW2 file
When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
Included support for the KVM auto-convergence feature
The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
Added a new Global Setting: kvm.storage.live.migration.wait
For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)
Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
Added an SIOC API plug-in to support VMware SIOC
When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
Added the ability to revert a volume to a snapshot
Enabled cluster-scoped managed storage
Added support for VMware dynamic discovery
Extending Config Drive support
* Added support for VMware
* Build configdrive.iso on ssvm
* Added support for VPC and Isolated Networks
* Moved implementation to new Service Provider
* UI fix: add support for urlencoded userdata
* Add support for building systemvm behind a proxy
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
During storage expunge, the domain resource counter for primary storage space is not updated for the domain. This leads to a situation where the domain resource statistics for primary storage only increase and never decrease (the counter becomes overfilled).
The global scheduled task resourcecount.check.interval > 0 provides a workaround but does not truly fix the problem, because when accounts inside domains use primary storage allocation/deallocation intensively, it leads to operations being blocked.
NB: Unable to implement Marvin tests because Marvin places a weird primary storage volume size of 100 in the database when creating a VM from a template. It might be a sign that a new issue should be opened for that bug.
CloudStack volumes and templates are one single virtual disk in the case of XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes, attaching an uploaded volume that contains multiple disks to a VM will result in only one VMDK being attached to the VM.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks
This behavior needs to be improved for VMware to support OVA files with multiple disks for both uploaded volumes and templates. i.e. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created based on the other VMDK disks in the OVA file and attached to the instance.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes regression introduced in PR #2295:
- Pass assign=true to fetch new public IP
- Use wait_until instead of sleep+wait in tests
- Loop through list of public IP ranges to match the systemvm gateway
- Fix potential NPE seen when adding simulator host(s)
- Removes aria2 installation from setup_agent.sh using yum, it's already a
dependency of the cloudstack-agent package
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR introduces several features and fixes some bugs:
- account tags feature
- fixed resource tag bugs which happened during tag search (wrong entries were found because of MySQL string-to-number conversion - see #905; this PR does more and also fixes resource access - a vulnerability during listing of resource tags)
- some marvin improvements (speed, sanity)
Improved resource tags code:
1. Enhanced listTags security
2. Added support for account tags (account tags are required to support tags common for all users of an account)
3. Improved the tag management code (refactoring and cleanup)
Marvin:
1. Fixed Marvin wait timeout between async pools. To decrease polling interval and improve CI speed.
2. Fixed /tmp/ to /tmp in zone configuration files.
3. Fixed + to os.path.join in log class.
4. Fixed + to os.path.join in deployDataCenter class.
5. Fixed typos in tag tests.
6. Modified Tags base class delete method.
Deploy Datacenter script:
1. Improved deployDatacenter. Added option logdir to specify where script places results of evaluation.
ConfigurationManagerImpl:
1. Added logging to ConfigurationManagerImpl to log when vlan is not found. Added test stubs for tags. Found accidental exception during simulator running after CI.
tests_tags.py:
1. Fixed stale undeleted tags.
2. Changed region:India to scope:TestName.
This can otherwise cause problems in Basic Networking where multiple
IPv4 ranges are configured in a POD.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This feature allows using templates and ISOs avoiding secondary storage as an intermediate cache on KVM. The virtual machine deployment process is enhanced to support bypassed registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.
Template and ISO registration:
- When hypervisor is KVM, a checkbox is displayed with 'Direct Download' label.
- API methods registerTemplate and registerISO are both extended with this new parameter directdownload.
- On template or ISO registration, no download job is sent to SSVM agent, CloudStack would only persist an entry on template_store_ref indicating that template or ISO has been marked as 'Direct Download' (bypassing Secondary Storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machine from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check if the template/ISO location can be reached. Metalinks are also supported by this feature. In case of a metalink, it is fetched and a URL check is performed on each of its URLs.
- Checksum should be provided as indicated on #2246: {ALGORITHM}CHKSUMHASH
- After template or ISO is registered, it would be displayed in the UI
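A minimal illustration of parsing the {ALGORITHM}CHKSUMHASH checksum format mentioned above (the helper below is a sketch for clarity, not the actual CloudStack class):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

class ChecksumParser {
    // Accepts values such as {SHA-256}f2ca... or {md5}ada776... and splits them
    // into the algorithm name and the hex digest.
    private static final Pattern FORMAT = Pattern.compile("^\\{(\\w[\\w-]*)\\}(\\p{XDigit}+)$");

    static String[] parse(String checksum) {
        Matcher m = FORMAT.matcher(checksum.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("Checksum must be of the form {ALGORITHM}HASH");
        }
        return new String[] { m.group(1), m.group(2) };
    }
}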
Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack would delegate template downloading to destination storage pool via destination host by a new pluggable download manager.
Download manager would handle template downloading depending on URL protocol. In case of HTTP, request headers can be set by the user via vm_template_details. Those details should be persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE
In case of HTTPS, a new API method uploadTemplateDirectDownloadCertificate is added to allow users to import a client certificate into all KVM hosts' keystores before deployment.
After the template or ISO is downloaded to primary storage, the usual entry is persisted in template_spool_ref indicating the mapping between the template/ISO and the storage pool.
Steps to reproduce issue
Deploy a VM
Take snapshot of the root volume
Delete the snapshot
Before the garbage collector has run, shut down the VM and assign the VM to another user.
When the garbage collector executes, an NPE shows up in the logs.
This happens when the root disk size is overridden. The primary storage limit check should be performed based on overridden size instead of template size. Enabled root disk resize tests to run on simulator as well.
This feature allows admins to dedicate a range of public IP addresses to the SSVM and CPVM, such that they can be subject to specific external firewall rules. The option to dedicate a public IP range to the System VMs (SSVM & CPVM) is added to the createVlanIpRange API method and the UI.
Solution:
Global setting 'system.vm.public.ip.reservation.mode.strictness' is added to determine if the use of the system VM reservation is strict (when true) or preferred (false), false by default.
When a range has been dedicated to System VMs, CloudStack should apply IPs from that range to
the public interfaces of the CPVM and the SSVM depending on global setting's value:
If the global setting is set to false: then CloudStack will use any unused and unreserved public IP
addresses for system VMs only when the pool of reserved IPs has been exhausted
If the global setting is set to true: then CloudStack will fail to deploy the system VM when the pool
of reserved IPs has been exhausted, citing the lack of available IPs.
UI Changes
Under Infrastructure -> Zone -> Physical Network -> Public -> IP Ranges, the 'Account' button label is renamed to 'Set reservation'.
When that button is clicked, the displayed dialog is also reworked, including a new 'System VMs' checkbox which indicates whether the range should be dedicated to the CPVM and SSVM, and a note indicating its usage.
When clicking the button for any created range, the UI dialog displayed indicates whether the IP range is dedicated to system VMs or not.
The first PR (#1176) intended to solve #CLOUDSTACK-9025 was only tackling the problem for CloudStack deployments that use single hypervisor types (restricted to XenServer). Additionally, the lack of information regarding that solution (poor documentation, test cases and description in PRs and Jira ticket) led the code to be removed in #1124 after a long discussion and analysis in #1056. That piece of code seemed logicless (and it was!). It would receive a hostId and then change that hostId for another hostId of the zone without doing any check; it was not even checking the hypervisor type or the storage the host was plugged into.
The problem reported in #CLOUDSTACK-9025 is caused by partial snapshots that are taken in XenServer. This means, we do not take a complete snapshot, but a partial one that contains only the modified data. This requires rebuilding the VHD hierarchy when creating a template out of the snapshot. The point is that the first hostId received is not a hostId, but a system VM ID(SSVM). That is why the code in #1176 fixed the problem for some deployment scenarios, but would cause problems for scenarios where we have multiple hypervisors in the same zone. We need to execute the creation of the VHD that represents the template in the hypervisor, so the VHD chain can be built using the parent links.
This commit changes the method com.cloud.hypervisor.XenServerGuru.getCommandHostDelegation(long, Command). From now on we replace the hostId that is intended to execute the “copy command” that will create the VHD of the template according to some conditions that were already in place. The idea is that starting with XenServer 6.2.0 hotFix ESP1004 we need to execute the command in the hypervisor host and not from the SSVM. Moreover, the method was improved to make it readable and understandable; test cases were also created assuring that from XenServer 6.2.0 hotFix ESP1004 and upward versions we change the hostId that will be used to execute the “copy command”.
Furthermore, we are not selecting a random host from a zone anymore. A new method called “findHostConnectedToSnapshotStoragePoolToExecuteCommand” was introduced in the HostDao; using this method we look for a host in the cluster that uses the storage pool where the volume from which the snapshot is taken resides. By doing this, we guarantee that the host connected to the primary storage where all of the snapshot's parent VHDs are stored is the one used to create the template.
Consider using Disabled hosts when no Enabled hosts are found
This also closes #2317
Fix for CLOUDSTACK-7931 enforces a valid integer value to be configured for integration.api.port and usage.sanity.check.interval. These global configs can't be reset back to null(default).
While creating the response object for the 'listDomain' API, several database calls are triggered to fetch details like parent domain, project limit, IP limit, etc. These database calls are triggered for each record found in the main fetch query, which is causing the response to slow down.
Fix:
The database transactions are reduced to improve the response time of the listDomain API.
This extends work presented on #2048 on which the ability to extend the management range is provided.
Aim
This PR allows separating the management network subnet on which the SSVM and CPVM reside from the virtual routers' management subnet.
Detailed use case
PCI compliance requires that network elements are defined as ‘in scope’ or ‘out of scope’, for compliance purposes. The SSVM and CPVM are both in scope as they allow public HTTP or HTTPS connections. The virtual routers have been defined as out of scope as they have been placed entirely in a firewalled network's segment. However, all of the system VM types share management network. As SSVM and CPVM are both in scope this would bring the virtual routers into scope as well, requiring individual audits of every virtual router. As this is not practical, the ‘management network’ which the SSVM and CPVM are on, and the management network which the virtual routers are on, must be separated by a firewall.
Description
With this feature it is possible to dedicate a created range to the SSVM and CPVM (system VMs) and provide a VLAN ID for the range.
A new boolean global configuration is added: system.vm.management.ip.reservation.mode.strictness. If enabled, the use of System VMs management IP reservation is strict, preferred if not. Default value is false (preferred).
Strict reservation: System VMs should try to get a private IP from a range marked for system VMs. If none is available, deployment fails.
Preferred reservation: System VMs will try to get a private IP from a range marked for system VMs. If none is available, an IP from a range not marked for system VMs is taken.
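A rough sketch of the selection logic described above (names are illustrative only, not the actual allocator code):

import java.util.List;

class SystemVmIpPicker {
    static String pick(List<String> reservedFree, List<String> unreservedFree, boolean strict) {
        if (!reservedFree.isEmpty()) {
            return reservedFree.get(0);          // always prefer the range marked for system VMs
        }
        if (strict) {
            throw new IllegalStateException("No free IP in the range reserved for system VMs");
        }
        if (unreservedFree.isEmpty()) {
            throw new IllegalStateException("No free private IP available");
        }
        return unreservedFree.get(0);            // preferred mode falls back to unreserved IPs
    }
}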
The DB queries in the listTemplate API could be optimized to get unique results from the database, which could help in reducing the listTemplate API response time.
In CLOUDSTACK-9886, we are reading ping.interval and ping.timeout using ConfigDao, which involves reading the DB directly. So it is replaced with a ConfigKey-based approach.
Snapshot on primary storage not cleaned up after Storage migration. This happens in the following scenario:
Steps To Reproduce
Create an instance on the local storage on any host
Create a scheduled snapshot of the volume:
Wait until ACS created the snapshot. ACS is creating a snapshot on local storage and is transferring this snapshot to secondary storage. But the latest snapshot on local storage will stay there. This is as expected.
Migrate the instance to another XenServer host with ACS UI and Storage Live Migration
The Snapshot on the old host on local storage will not be cleaned up and is staying on local storage. So local storage will fill up with unneeded snapshots.
Using cloudmonkey, when invoking the update template api call, it does not display the isdynamicallyscalable field as part of its template response.
Fix done:
The org.apache.cloudstack.api.response.TemplateResponse isdynamicallyscalable field is now populated in the newUpdateResponse method of server/src/com/cloud/api/query/dao/TemplateJoinDaoImpl.java.
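A minimal sketch of the kind of change, assuming the view and response objects expose the corresponding getter/setter (this is not the exact diff):

// Illustrative fragment only: copy the dynamically-scalable flag from the
// template view object onto the API response inside newUpdateResponse.
void copyDynamicScalingFlag(TemplateJoinVO template, TemplateResponse response) {
    response.setDynamicallyScalable(template.isDynamicallyScalable());
}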
Unit test:
the Unit test server/test/com/cloud/api/query/dao/TemplateJoinDaoImplTest.java testNewUpdateResponse() verifies that the TemplateResponse is populated correctly.
Marvin test:
the Marvin nosetest integration/smoke/test_templates.py test_02_edit_template(self) confirms that the template_response.isdynamicallyscalable field gets populated with the correct user data.
Test scenario:
Using cloudmonkey, when invoking the 'update template' API call, it should now display the isdynamicallyscalable field as part of its template response.
Consider this scenario:
1. User launches a VM from a Template and keeps it running
2. Admin logs in and deletes that template [CloudPlatform does not check existing / running VMs etc. while the deletion is done]
3. User resets the VM
4. CloudPlatform fails to start the VM as it cannot find the corresponding template.
It throws error as
java.lang.RuntimeException: Job failed due to exception Resource [Host:11] is unreachable: Host 11: Unable to start instance due to can't find ready template: 209 for data center 1
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:113)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:495)
The client is requesting better handling of this scenario. We need to check existing / running VMs when the template is deleted and warn the admin about the possible issue that may occur.
REPRO STEPS
==================
1. Launches a VM from Template and keep it running
2. Now delete that template
3. Reset the VM
4. CloudPlatform fails to start the VM as it cannot find the corresponding template.
EXPECTED BEHAVIOR
==================
The cloud platform should throw a warning message when a template is deleted while it is being used by existing / running VMs
ACTUAL BEHAVIOR
==================
The cloud platform does not throw any warning.
* Cleanup and Improve NetUtils
This class had many unused methods, inconsistent names and redundant code.
This commit cleans up code, renames a few methods and constants.
The global/account setting 'api.allowed.source.cidr.list' is set
to 0.0.0.0/0,::/0 by default to preserve the current behavior and thus
allow API calls for accounts from all IPv4 and IPv6 subnets.
Users can set it to a comma-separated list of IPv4/IPv6 subnets to
restrict API calls for Admin accounts to certain parts of their network(s).
This is to improve Security. Should an attacker steal the Access/Secret key
of an account he/she still needs to be in a subnet from where accounts are
allowed to perform API calls.
This is a good security measure for APIs which are connected to the public internet.
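An IPv4-only sketch of such a check, assuming Apache commons-net's SubnetUtils is available (the actual check in CloudStack also has to handle IPv6 entries such as ::/0):

import org.apache.commons.net.util.SubnetUtils;

class SourceCidrCheck {
    static boolean isAllowed(String callerIp, String allowedCidrList) {
        for (String entry : allowedCidrList.split(",")) {
            String cidr = entry.trim();
            if (cidr.equals("0.0.0.0/0")) {
                return true;                         // default: allow everyone
            }
            SubnetUtils subnet = new SubnetUtils(cidr);
            subnet.setInclusiveHostCount(true);
            if (subnet.getInfo().isInRange(callerIp)) {
                return true;
            }
        }
        return false;
    }
}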
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Ran into an issue today where we passed both the "id" and "domainid" parameters into "listAccounts" and received a response despite the account id passed not belonging to the domainid passed.
Allow usage of "domainid" AND "id" in "listAccounts"
- Adding "AccountDoa::findActiveAccountById"
- Adding "AccountDaoImpl::findActiveAccountById"
- Removing seemingly pointless "listForDomain" parameter
- Updating "typeNEQ" value from "5" to "Account.ACCOUNT_TYPE_PROJECT"
(which is "5")
- Only attempt to load domain for "path" query parameter once
"searchForAccountsInternal" input validation logic pseudo-code:
- If "domainid" set, check immediately
- If "id" not set:
- and user is admin and "listall" is true
- if "domainid" not set, use caller domain id
- force "isrecursive" true
- else use caller account id
- Else if "domainid" and "name" set
- verify existence of account and that user has access
- Else:
- if "domainid" not set, locate account by "id"
- else, locate account by "id" and "domainid"
- verify account found and caller has access rights
The createSnapshotPolicy API creates multiple entries in the DB for the same parameters if multiple threads are executed in parallel.
STEPS TO REPRODUCE :
Create a new machine having a root and a data disk.
Make sure that no existing snapshot policy is present for the volume.
Execute multiple threads in parallel for the createSnapshotPolicy API with all required parameters exactly the same.
Verify the snapshot_policy table in the DB; there will be multiple entries for the same policy.
Execute the same threads once more with any API parameter changed; the existing entries get modified in the DB and no new entries are added.
Add resource type name in request and response for listResources API call.
This adds in the response a new attribute typename with the String value for the corresponding resource enum.
{
"capacitytotal": 0,
"capacityused": 0,
"percentused": "0",
"type": 19,
"typename": "gpu",
"zoneid": "381d0a95-ed4a-4ad9-b41c-b97073c1a433",
"zonename": "ch-dk-2"
}
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
Python's base64 requires that the string length be a multiple of 4 characters but
the Apache codec does not. The RFC states that padding is not mandatory, so the data should
not make the VR script (vmdata.py) fail.
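A minimal sketch of the padding idea (illustrative only; not necessarily where the actual fix was applied):

import java.util.Base64;

class Base64Padding {
    // Pads a base64 string to a multiple of 4 characters so that strict
    // decoders, such as Python's base64 module, accept it.
    static String pad(String encoded) {
        int remainder = encoded.length() % 4;
        return remainder == 0 ? encoded : encoded + "====".substring(remainder);
    }

    public static void main(String[] args) {
        String unpadded = "aGVsbG8";                           // "hello" without the trailing '='
        System.out.println(new String(Base64.getDecoder().decode(pad(unpadded))));
    }
}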
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes the issue in which there's a leak when doing API call for listing VPC with domain account and projectId=-1.
Note for reviewers: The code formatting changed so many lines in the commit but the actual change is in line 2467-2471.
This fixes the issue that a non-root user cannot tag a network ACL item.
After the fix, a non-root user still cannot tag a globally defined
ACL item, only the ACLs they have access to.
This includes test related fixes and code review fixes based on
reviews from @rafaelweingartner, @marcaurele, @wido and @DaanHoogland.
This also includes VMware disk-resize limitation bug fix based on comments
from @sateesh-chodapuneedi and @priyankparihar.
This also includes the final changes to systemvmtemplate and fixes to
code based on issues found via test failures.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In a VMware 55u3 environment it was found that CPVM and SSVM would
get the same public IP. After another investigative review of
fetchNewPublicIp method, it was found that it would always pick up the
first IP from the sql query list/result.
The cause was found to be that with the new changes no table/row locks
are taken and the first item is used without looping through the list of
available free IPs. The previous implementation, which put the
IP address in the Allocating state, did not check that it was a free IP.
In this refactoring/fix, the first free IP is marked as Allocating,
and if assignment is requested it is changed to the Allocated state.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes incorrect total host memory in listHosts and related host
responses, regression introduced in #2120.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes test failures around VMware with the new systemvmtemplate.
In addition:
- Does not skip rVR related test cases for VMware
- Removes rc.local
- Processes unprocessed cmd_line.json
- Fixed NPEs around VMware tests/code
- On VMware, use udevadm to reconfigure nic/mac address than rebooting
- Fix proper acpi shutdown script for faster systemvm shutdowns
- Give at least 256MB of swap for VRs to avoid OOM on VMware
- Fixes smoke tests for environment related failures
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
On XenServer, both redundant router's vifs were getting deleted when any
PF rule is removed from any of the acquired public IPs. This fix
ensures that lastIp is set to `false` when processed by hypervisor
resources to avoid removing of VIFs when VPCs have any source nat IP.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Several systemvmtemplate optimizations
- Uses new macchinina template for running smoke tests
- Switch to latest Debian 9.3.0 release for systemvmtemplate
- Introduce a new `get_test_template` that uses tiny test template
such as macchinina as defined test_data.py
- rVR related fixes and improvements
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Fixes timezone issue where dates show up as invalid in UI
- Introduces new event timeline listing/filtering of events
- Several UI improvements to add columns in list views
- Bulk operations support in instance list view to shutdown and destroy
multiple-selected VMs (limitation: after operation, redundant entries
may show up in the list view, refreshing VM list view fixes that)
- Align table thead/tbody to avoid splitting of tables
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The `assignDedicateIpAddress` method previously marked the newly fetched
IP as Allocated but now it does not. This breaks VPCs,
where source NAT IPs are retained as Allocating instead of Allocated after
creation.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem:
When a VM is deployed within an Affinity group which has the cluster dedicated to a subdomain (the zone is dedicated to the parent domain), it succeeds. We can also stop the VM and remove the affinity group, but adding the affinity group back fails.
Root cause:
During VM deployment there is a clear check on the affinity type (account/domain). Here, since the acl_type is "domain", it does not expect the entities to have the same owner.
But during the update of affinity on a VM there is no specific check for acl_type "domain".
Solution:
The fix is to make the access check similar to VM deployment, where the entities are not expected to have the same owner if the acl_type is "domain".
This feature allows CloudStack administrators to create layer 2 networks on CloudStack. As these networks are purely layer 2, they don't require IP addresses or Virtual Router, only VLAN is necessary (provided by administrator or assigned by CloudStack). Also, network services should be handled externally, e.g. DNS, DHCP, as they are not provided by L2 networks.
As a consequence, a new Guest Network type is created within CloudStack: L2
Description:
Network offerings and networks support new guest type: L2.
L2 Network offering creation allows administrator to select Specify VLAN or let CloudStack assign it dynamically.
L2 Network creation allows administrator to specify VLAN tag (if network offerings allows it) or simply create network.
VM deployments on L2 networks:
VMs do not get IP addresses or any network services
No Virtual Router deployed on network
If Specify VLAN = true for network offering, network gets implemented using a dynamically assigned VLAN
UI changes
A new button is added on Networks tab, available for admins, to allow L2 networks creation
At present, the management IP range can only be expanded within the same subnet. Based on the existing range, either the last IP can be extended forward or the first IP can be extended backward, but we cannot add an entirely different range from the same subnet. So the expansion of the range is subnet bound, which is fixed. When the range gets exhausted and a user wants to deploy more system VMs, the operation fails. The purpose of this feature is to expand the range of management network IPs within the existing subnet. It also supports deleting and listing the IP ranges.
Please refer to the FS here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Expansion+of+Management+IP+Range
ISSUE :
Duplicate public VLAN for two different admin accounts.
STEPS TO REPRODUCE :
Start multiple threads for executing createVlanIpRange API.
Make sure multiple threads run in parallel.
Verify the vlan table in the DB; duplicate entries for the same VLAN and IP address range will be found - only the id and uuid differ, all other fields have the same values.
The following entries will be observed in the vlan table:
`mysql> select * from vlan where vlan_id like 'vlan://77' and removed is null;
+----+--------------------------------------+-----------+--------------+-----------------+---------------------------+----------------+----------------+------------+---------------------+-------------+----------+-----------+---------+---------------------+
| id | uuid | vlan_id | vlan_gateway | vlan_netmask | description | vlan_type | data_center_id | network_id | physical_network_id | ip6_gateway | ip6_cidr | ip6_range | removed | created |
+----+--------------------------------------+-----------+--------------+-----------------+---------------------------+----------------+----------------+------------+---------------------+-------------+----------+-----------+---------+---------------------+
| 15 | 6a205b78-d162-43e3-8da9-86a3ff60f40e | vlan://77 | 10.112.63.65 | 255.255.255.192 | 10.112.63.66-10.112.63.70 | VirtualNetwork | 1 | 200 | 200 | NULL | NULL | NULL | NULL | 2017-12-13 12:55:51 |
| 17 | ff8b5175-b247-45a5-b8d3-feb6a1ca64d0 | vlan://77 | 10.112.63.65 | 255.255.255.192 | 10.112.63.66-10.112.63.70 | VirtualNetwork | 1 | 200 | 200 | NULL | NULL | NULL | NULL | 2017-12-13 12:55:51 |
+----+--------------------------------------+-----------+--------------+-----------------+---------------------------+----------------+----------------+------------+---------------------+-------------+----------+-----------+---------+---------------------+
MySQLTransactionRollbackException is seen frequently in logs
Root Cause
Attempts to lock rows in the core data access layer of the database fail if there is a possibility of deadlock. However, operations are not retried in case of deadlock, so retries are introduced here.
Solution
Operations will be retried after some wait time in case of a deadlock exception.
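A minimal sketch of such a retry loop (attempt count and wait time are placeholders, not the values used by the fix):

import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

class DeadlockRetry {
    static <T> T withRetries(Callable<T> op, int maxAttempts, long waitMillis) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (java.sql.SQLTransactionRollbackException e) {   // deadlock reported by MySQL
                if (attempt >= maxAttempts) {
                    throw e;
                }
                TimeUnit.MILLISECONDS.sleep(waitMillis);              // wait, then retry
            }
        }
    }
}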
This change adds allocatediops to the ListStoragePool API. This applies to managed storage where we have a guaranteed minimum IOPS set. This is useful for monitoring if we have reached the IOPS limit on a storage cluster.
Related to 86bbe211f2 and CLOUDSTACK-494. Currently we cannot have several VPCs in one account with different VPN customer gateway configurations for the same gateway IP.
Root Cause:
Earlier, it was failing with an ArrayIndexOutOfBoundsException when the list was empty and the first element was accessed.
The error was only observed in the log, but did not show in the UI as no exception was thrown.
Hence the API call was, in turn, successful.
Solution:
Added an empty check before sending device details, which reports that either the required GPU device is not available or it is out of capacity.
Create Snapshot with the quiesce option set to true fails with InvalidParameterValueException.
The --quiescevm option is only supported with the NetApp plugin on VMware. When NetApp is not present, such snapshots would be stuck in the Allocated state, and deletion was not supported in the Allocated state. To fix this issue, deletion is now supported in the Allocated state as well.
Depending on the timezone in which you run CS (timezones before GMT), you could see some jobs marked as failed because the parent job got a null result despite its child job having completed successfully. The child job was deleted by the CleanupTask ahead of time, due to a missing datetime conversion to the GMT timezone.
Jobs are failing with this message: Job failed with un-handled exception
The fix corrects any datetime used in the code that should use the GMT timezone instead of the local one, since all DB datetimes should be stored in GMT.
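A minimal illustration of the idea behind the fix (not the actual code): format datetimes in GMT so comparisons against DB values do not depend on the management server's local timezone.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

class GmtDates {
    static String toGmtString(Date date) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));    // always serialize in GMT
        return fmt.format(date);
    }
}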
…Protocol which Stops VR
When we run the createPortForwardingRule API with the protocol input set to "halt", the PF rule is added; however, halt is executed on the VR, and hence the VR is stopped.
The following entry was added to the firewall_rules table and the virtual router went to halt (stopped):
mysql> select * from firewall_rules where id = 7
*************************** 1. row ***************************
id: 7
uuid: XXXXXXXXXXXXXXXXXXXXXXXXXXX
ip_address_id: 13
start_port: 222
end_port: 222
state: Revoke
protocol: halt
purpose: PortForwarding
account_id: 2
domain_id: 1
network_id: 208
xid: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
created: 2017-09-04 04:48:16
icmp_code: NULL
icmp_type: NULL
related: NULL
type: User
vpc_id: NULL
traffic_type
The tags field is included in the listUsageRecords response so that it can be used in billing reports, e.g.
"tags":[
{"key":"city","value":"Toronto","resourcetype":"UserVm","resourceid":"a0cca906-f985-4b56-ad11-f33e59c4c733","account":"admin","domainid":"dec39eb8-4f81-11e7-8315-067fa0000031","domain":"ROOT"}
,
{"key":"region","value":"canada","resourcetype":"UserVm","resourceid":"a0cca906-f985-4b56-ad11-f33e59c4c733","account":"admin","domainid":"dec39eb8-4f81-11e7-8315-067fa0000031","domain":"ROOT"}
* CLOUDSTACK-9972: Enhance listVolume API to include physical size and utilization.
Also fixed pool, cluster and pod info
* CLOUDSTACK-9972: Fix volume_view and duplicate API constant
* CLOUDSTACK-9972: Backport Do not allow vms to be deployed on hosts that are in disabled pod
* CLOUDSTACK-9972: Fix localization missing keys
* CLOUDSTACK-9972: Fix sql path
- Migrate to embedded Jetty server.
- Improve ServerDaemon implementation.
- Introduce a new server.properties file for easier configuration.
- Have a single /etc/default/cloudstack-management to configure env.
- Reduce shaded jar file, removing unnecessary dependencies.
- Upgrade to Spring 5.x, upgrade several jar dependencies.
- Does not shade and include mysql-connector, used from classpath instead.
- Upgrade and use bouncycastle as a separate un-shaded jar dependency.
- Remove tomcat related configuration and files.
- Have both embedded UI assets in uber jar and separate webapp directory.
- Refactor systemd and init scripts, cleanup packaging.
- Made cloudstack-setup-databases faster, using `urandom`.
- Remove unmaintained distro packagings.
- Moves creation and usage of the server keystore into the CA manager; this
  removes the need to create/store cloud.jks in the conf folder and
  the db.cloud.keyStorePassphrase in the db.properties file. This also
  removes the need for the --keystore-passphrase in the
  cloudstack-setup-encryption script.
- GZip contents dynamically in embedded Jetty
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The description of the parameter said that the parameter has to be bigger than 6; however, in the code we were only accepting values bigger than 10. This PR changes the validation method to accept any number >= 6. We also fix other inconsistencies in error messages presented to users in nearby validations.
Allow security policies to apply on port groups:
- Accepts security policies while creating network offering
- Deployed network will have security policies from the network offering
applied on the port group (in vmware environment)
- Global settings as fallback when security policies are not defined for a network
offering
- Default promiscuous mode security policy set to REJECT as it's the default
for standard/default vswitch
Portgroup vlan-trunking options for dvswitch: This allows admins to define
a network with comma-separated vlan ids and vlan
ranges such as vlan://200-400,21,30-50 and use the provided vlan range to
configure vlan-trunking for a portgroup in a dvswitch-based environment.
VLAN overlap checks are performed for:
- isolated network against existing shared and isolated networks
- dedicated vlan ranges for the physical/public network for the zone
- shared network against existing isolated network
Allow shared networks to bypass vlan overlap checks: This allows admins
to create shared networks with a `bypassvlanoverlapcheck` API flag
which when set to 'true' will create a shared network without
performing vlan overlap checks against isolated network and against
the vlans allocated to the datacenter's physical network (vlan ranges).
Notes:
- No vlan-range overlap checks are performed when creating shared networks
- Multiple vlan id/ranges should include the vlan:// scheme prefix
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Refactors V3 x509 cert generator to put basic constraint and key usage
extensions when CA cert is created
- Refactors root CA provider to use V3 generator to generate CA cert
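For reference, a sketch of the two V3 extensions involved, assuming BouncyCastle is on the classpath (this is not the actual provider code):

import org.bouncycastle.asn1.x509.BasicConstraints;
import org.bouncycastle.asn1.x509.Extension;
import org.bouncycastle.asn1.x509.KeyUsage;
import org.bouncycastle.cert.CertIOException;
import org.bouncycastle.cert.X509v3CertificateBuilder;

class CaExtensions {
    // Marks the certificate as a CA certificate that may sign certificates and CRLs.
    static void addCaExtensions(X509v3CertificateBuilder builder) throws CertIOException {
        builder.addExtension(Extension.basicConstraints, true, new BasicConstraints(true));
        builder.addExtension(Extension.keyUsage, true,
                new KeyUsage(KeyUsage.keyCertSign | KeyUsage.cRLSign));
    }
}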
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10046 digest helper for calculating checksums
* CLOUDSTACK-10046 cleanup unused checksum code
* CLOUDSTACK-10046 padding method proof of concept
* CLOUDSTACK-10046 only compare checksums if old value is valid
* Adding positive and negative tests for md5, sha-1 and sha-256, for xen, vmware and kvm hypervisors.
KVM Results:
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 189, in test_02_1_create_template_with_checksum_sha1_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{sha-1}bf580a13f791d86acf3449a7b457a91a14389264" didn\'t match the given value, "{sha-1}someInvalidValue"\n']
=== TestName: test_02_1_create_template_with_checksum_sha1_negative | Status : SUCCESS ===
=== TestName: test_02_create_template_with_checksum_sha1 | Status : SUCCESS ===.
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 203, in test_03_1_create_template_with_checksum_sha256_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{SHA-256}efc03633f2b8f5db08acbcc5dc1be9028572dfd8f1c6c8ea663f0ef94b458c5" didn\'t match the given value, "{SHA-256}someInvalidValue"\n']
=== TestName: test_03_1_create_template_with_checksum_sha256_negative | Status : SUCCESS ===
=== TestName: test_03_create_template_with_checksum_sha256 | Status : SUCCESS ===
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 217, in test_04_1_create_template_with_checksum_md5_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{md5}ada77653dcf1e59495a9e1ac670ad95f" didn\'t match the given value, "{md5}someInvalidValue"\n']
=== TestName: test_04_1_create_template_with_checksum_md5_negative | Status : SUCCESS ===
=== TestName: test_04_create_template_with_checksum_md5 | Status : SUCCESS ===
* Adding additional test with no checksum added when registering template
Result:
test_05_create_template_with_no_checksum (integration.smoke.test_templates.TestCreateTemplateWithChecksum) ... === TestName: test_05_create_template_with_no_checksum | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 42.320s
OK
* Fixing negative tests exception handling
* Adding tests for ISO checksum validation and fixing a zero prefix failure test in templates
* CLOUDSTACK-10046 padding
* CLOUDSTACK-10046 usability additions
* yet another IDE artifact hindering checkstyle
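An illustrative digest helper in the spirit of the CLOUDSTACK-10046 changes (not the actual class): compute a file checksum and left-pad the hex string with zeros so values with leading zeros still compare correctly.

import java.math.BigInteger;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

class DigestExample {
    static String checksum(String algorithm, String path, int hexLength) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        byte[] digest = md.digest(Files.readAllBytes(Paths.get(path)));
        StringBuilder hex = new StringBuilder(new BigInteger(1, digest).toString(16));
        while (hex.length() < hexLength) {
            hex.insert(0, '0');                  // e.g. pad MD5 digests to 32 hex chars
        }
        return hex.toString();
    }
}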
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
Bug: https://issues.apache.org/jira/browse/CLOUDSTACK-9832
Detail:
When the VPC offering does not contain VpcVirtualRouter as a SourceNat provider,
then we will not add the interface in the public network to the VpcVR.
CLOUDSTACK-9832: Move isSrcNat check to VpcManager
* CLOUDSTACK-9899 adding a global setting for not checking URLs from the MS
* CLOUDSTACK-9899 refactor HttpTemplateDownloader constructor cleanup
* CLOUDSTACK-9899 refactor HttpTemplateDownloader.download() cleanup
* CLOUDSTACK-9899 add the new config key to configurable
* CLOUDSTACK-9899 refactor download method
* CLOUDSTACK-9899 less verbose setting comment
* CLOUDSTACK-9899 debug message to indicate checking happened
* CLOUDSTACK-9899 typo flase -> false
This causes VM deployment failure on the host that was disabled while adding the storage repository.
In the attachCluster function of the PrimaryDataStoreLifeCycle, we were only selecting hosts that are up and in the Enabled state. If we select all hosts that are up instead, the DB gets populated properly and this issue is fixed. Also added a unit test for the attachCluster function.
Fix: Block VMOperations if Host in PrepareForMaintenance mode. VM operations (Stop, Reboot, Destroy, Migrate to host) are not allowed when Host in PrepareForMaintenance mode.
Added ability to specify mac in deployVirtualMachine and
addNicToVirtualMachine api endpoints.
Validates mac address to be in the form of:
aa:bb:cc:dd:ee:ff , aa-bb-cc-dd-ee-ff , or aa.bb.cc.dd.ee.ff.
Ensures that mac address is a Unicast mac.
Ensures that the mac address is not already allocated for the
specified network.
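A sketch of the described validation (regex and names are illustrative, not the actual implementation):

import java.util.regex.Pattern;

class MacValidator {
    // Accepts aa:bb:cc:dd:ee:ff, aa-bb-cc-dd-ee-ff or aa.bb.cc.dd.ee.ff.
    private static final Pattern MAC = Pattern.compile(
            "^\\p{XDigit}{2}([:.-])(\\p{XDigit}{2}\\1){4}\\p{XDigit}{2}$");

    static boolean isValidUnicastMac(String mac) {
        if (mac == null || !MAC.matcher(mac).matches()) {
            return false;
        }
        int firstOctet = Integer.parseInt(mac.substring(0, 2), 16);
        return (firstOctet & 0x01) == 0;         // unicast: multicast bit must not be set
    }
}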
When we have a VPN customer gateway that is resolved by a hostname, we should be able to register the VPN customer gateway with its hostname instead of the IP address. This is useful when the remote device IP is dynamically assigned and customers use DDNS to resolve it.
Root Cause:
Some global parameters contain a NULL value and the code doesn't handle the NULL check,
so it fails with an exception. Hence nothing appears in the field (ERROR).
Solution:
Added required NULL check.
While downloading the template for the first time, the install path was not available. During the first download after migration, the template is synced to S3 storage and the template install path is updated in the DB. But while generating the extract URL we still take the install path from the TemplateDataStoreVO object cached in the process.
Configure a PF rule with Private port: start port 20, end port 25 || Public port: start port 20, end port 25.
Trigger the updatePortForwardingRule API.
The API fails with the following error: "Unable to update the private port of port forwarding rule as the rule has port range"
Solution:
The port range gets modified.
- All tests should pass on KVM, Simulator
- Add test cases covering FSM state transitions and actions
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Removed three bg thread tasks, uses FSM event-trigger based scheduling
- On successful recovery, kicks VM HA
- Improves overall HA scheduling and task submission, lower DB access
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Host-HA offers investigation, fencing and recovery mechanisms for hosts that
are malfunctioning for any reason. It uses Activity and Health checks to determine
current host state based on which it may degrade a host or try to recover it. On
failing to recover it, it may try to fence the host.
The core feature is implemented in a hypervisor agnostic way, with two separate
implementations of the driver/provider for Simulator and KVM hypervisors. The
framework also allows for implementation of other hypervisor specific provider
implementation in future.
The Host-HA provider implementation for KVM hypervisor uses the out-of-band
management sub-system to issue IPMI calls to reset (recover) or poweroff (fence)
a host.
The Host-HA provider implementation for Simulator provides a means of testing
and validating the core framework implementation.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself to `NioServer` to handle agent connections securely. The
framework adds the assumption in `NioClient` that a keystore, if available
with the known name `cloud.jks`, will be used for SSL negotiations and
handshake.
This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificate to CloudStack agents such as
the KVM, CPVM and SSVM agents and also for the management server for
peer clustering.
Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
  global setting. Newly provisioned agents (KVM/CPVM/SSVM etc) will get a
  randomized comma-separated list to which they will attempt connection
  or reconnection in the provided order. This removes the need for a TCP LB on
  port 8250 (default) of the management server(s).
- All fresh deployment will enforce two-way SSL authentication where
connecting agents will be required to present certificates issued
by the 'root' CA plugin.
- Existing environment on upgrade will continue to use one-way SSL
authentication and connecting agents will not be required to present
certificates.
- A script `keystore-setup` is responsible for initial keystore setup
and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing the provided
  certificate payload into the Java keystore file.
- Agent security (keystore, certificates etc) are setup initially using
SSH, and later provisioning is handled via an existing agent connection
using command-answers. The supported clients and agents are limited to
CPVM, SSVM, and KVM agents, and clustered management server (peering).
- Certificate revocation does not revoke an existing agent-mgmt server
  connection; however, it rejects a revoked certificate used during the SSL
  handshake.
- Older `cloudstackmanagement.keystore` is deprecated and will no longer
be used by mgmt server(s) for SSL negotiations and handshake. New
keystores will be named `cloud.jks`, any additional SSL certificates
should not be imported in it for use with tomcat etc. The `cloud.jks`
keystore is strictly used for agent-server communications.
- Management server keystores are validated and renewed on start up only;
  their validity is the same as that of the CA certificates.
New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial
Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates
Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed
UI changes:
- Button to download/save the CA certificates.
Misc changes:
- Upgrades bouncycastle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes issue of enabling dynamic roles based on the global setting
only. This also fixes application of the default role/permissions mapping
on upgrade from 4.8 and previous versions to 4.9+.
Previously, it would make an additional check to ensure commands.properties
is not in the classpath; however, this creates confusion for admins who
may skip/skim through the release notes/docs and assume that merely changing the
global setting was not enough.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Change default configuration for router.aggregation.command.each.timeout from 3 to 600 seconds (#2223)
(cherry picked from commit 17bc6afc82)
This fixes some test_nic failures caused due to short aggregation command timeout
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows changing permission for existing role permissions, as those were static and could not be changed once created. It also provides the ability to change these permissions in the UI using a drop down menu for each permission rule, in which admin can select ‘Allow’ or ‘Deny’ permission.
Changes in the API:
This feature modifies behaviour of updateRolePermission API method:
New optional parameters ‘ruleid’ and ‘permission’ are introduced, they are mutual exclusive to ‘ruleorder’ parameter. This defines two use cases:
Update role permission: ‘ruleid’ and ‘permission’ parameters needed
Update rules order: ‘ruleorder’ parameter needed
Parameter ‘ruleorder’ is now optional
updateRolePermission providing ‘ruleorder’ parameter should be sent via POST
The default value of the account-level global config vmsnapshot.expire.interval is -1, which conforms to legacy behaviour. A positive value will expire the VM snapshots for the respective account after that many hours.
Errored and Abandoned templates should also be displayed in the UI so that the user is able to delete the template even before the cleanup thread has run. Refer - CLOUDSTACK-9608
ISSUE: Featured templates/ISOs created by the root/admin user are not visible to domain admin users.
STEPS TO REPRODUCE
Mark a template as featured and try to view it from a domain admin user
The issue occurs for both templates and ISOs registered before and after the upgrade
Templates/ISOs whose owner is the ROOT admin, public: Yes, featured: Yes
Log in to UI (as a domain admin, such as an admin of “TEST/TEST1” domain)
Choose “Templates”.
Error message will be shown on UI
CloudStack has several background polling tasks that are spread across
the codebase, the aim of this work is to provide a single manager to
handle submission, execution and handling of background tasks. With
the framework implemented, existing oobm background task has been
refactored to use this manager.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
There are no cpuspeed, cpunumber or memory details in the listUsageRecords output as documented.
In the DB (cloud_usage table) we have cpu_speed, cpu_cores and ram fields, but these are not populated for all the VMs. These fields are only populated for the VMs which are deployed with custom service offerings.
Include the timezone in the datetime format of snapshot events, to be consistent
with all other events.
"eventDateTime" was added by @chipchilders in commit 14ee684ce3 and was
updated the same day to add the timezone (commit bf967eb622) except for
Snapshots.
Unable to create a service offering with networkrate=0 (unlimited network throttling); it fails with the error "Failed to create service offering xxxxxxx: specify the network rate value more than 0".
Updating the password via the updateUser API is now not allowed via the integration port.
(cherry picked from commit d206336e1a)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
capacity_type for local storage in op_host_capacity
is still enabled
(cherry picked from commit e06e3b7cd4)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Removed code which nullifies vm_instance_id.
Also modified QueryManagerImpl to ignore volumes which do not have a uuid, to avoid duplicate volume listing.
(cherry picked from commit 3cced927c4)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Check if an ACL service provider is configured when a network is associated with an ACL.
(cherry picked from commit bbff9f1575)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
registerTemplate and getUploadParamsForTemplate APIs:
Any string is allowed as the hypervisor type from the API.
HypervisorType.getType() tries to validate it against the enum values and, if nothing
matches, sets the type to None.
Added a check to not allow the None hypervisor type when registering.
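A self-contained sketch of the behaviour described above; the enum below only mirrors the fallback-to-None idea and is illustrative, not the actual CloudStack HypervisorType:
```java
// Illustrative sketch: unknown hypervisor strings fall back to None and are rejected at registration time.
class HypervisorTypeGuard {
    enum HypervisorType {
        XenServer, KVM, VMware, None;

        static HypervisorType getType(String name) {
            for (HypervisorType t : values()) {
                if (t.name().equalsIgnoreCase(name)) {
                    return t;
                }
            }
            return None; // unknown strings fall back to None
        }
    }

    static HypervisorType requireKnownHypervisor(String hypervisor) {
        HypervisorType type = HypervisorType.getType(hypervisor);
        if (type == HypervisorType.None) {
            throw new IllegalArgumentException("Hypervisor type '" + hypervisor + "' is not supported for registration");
        }
        return type;
    }
}
```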
(cherry picked from commit cc06c5189a)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
If the destination pool is in maintenance mode, do not allow a volume to be migrated to
that storage pool. Fixed for volume migration and VM migration with volume.
(cherry picked from commit 8ef94819da)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Updated hardcoded value with max data volumes limit from hypervisor capabilities.
(cherry picked from commit 93f5b6e8a3)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
It would be useful to return the provider name in the list storage pools response. This will be useful, for example, to identify the different storages that are in use and their scope.
This commit contains the following changes:
(1) add CPU CORE information in op_host_capacity
(2) add capacity name in the CapacityResponse
(3) add allocatedCapacity for CPU/MEMORY/CPU CORE for zones
(4) sort CapacityResponse by zonename and CapacityType
This allows native CloudStack users to change password in UI when LDAP
is enabled. Overall changes:
- A new usersource returned in the listUsers response
- Removed ldap check in the UI, replaced with check based on user source
- DB changes to include user.source in user_view
- Changed UI error message for non-native users trying to change password
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When dedicating a resource (cluster or host) to a domain, the affinity
group which is created is visible to everyone rather than only to the domain
that the cluster is dedicated to.
CLOUDSTACK-9669:egress destination cidr VR python script changes
CLOUDSTACK-9669:egress destination API and orchestration changes
CLOUDSTACK-9669: Added the ipset package in systemvm template
CLOUDSTACK-9669:Added licence header for new files
CLOUDSTACK-9669: replacing 0.0.0.0/0 with the network cidr
ipset member add with 0.0.0.0/0 fails, so 0.0.0.0/0 is replaced with the network CIDR.
In the source CIDR, 0.0.0.0/0 is nothing but the network CIDR.
Updated the default egress-all CIDR with the network CIDR.
given is running out of capacity. If a host id is specified, the deployment should happen
on the given host and should fail if that host is out of capacity. Currently we retry the
deployment on the entire zone, excluding the given host, if we fail once. That retry,
which tries other hosts, should only be attempted if no host id was given.
Also introduces the global setting
allow.deploy.vm.if.deploy.on.given.host.fails with which the old behaviour
can be restored.
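A minimal sketch of the retry decision described above, assuming a hypothetical helper and a boolean carrying the value of allow.deploy.vm.if.deploy.on.given.host.fails:
```java
// Illustrative only; not the actual deployment planner code.
class DeployRetryPolicy {
    static boolean shouldRetryOnOtherHosts(Long requestedHostId, boolean allowDeployIfGivenHostFails) {
        if (requestedHostId == null) {
            return true; // no host pinned: retrying elsewhere in the zone is fine
        }
        // a host was pinned: only retry elsewhere if the operator opted in via the new global setting
        return allowDeployIfGivenHostFails;
    }
}
```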
* 4.9:
Do not set gateway to 0.0.0.0 for windows clients
CLOUDSTACK-9904: Fix log4j to have @AGENTLOG@ replaced
Ignore bogus default gateway: when a shared network is secondary, the default gateway gets overwritten by a bogus one. dnsmasq does the right thing and replaces it with its own default, which is not good for us, so check for '0.0.0.0'.
Activate NioTest following changes in CLOUDSTACK-9348 PR #1549
CLOUDSTACK-9828: GetDomRVersionCommand fails to get the correct version as output Fix tries to return the output as a single command, instead of appending output from two commands
CLOUDSTACK-3223 Exception observed while creating CPVM in VMware Setup with DVS
CLOUDSTACK-9787: Fix wrong return value in NetUtils.isNetworkAWithinNetworkB
Following parameters are moved to configdepot.
snapshot.max.hourly
snapshot.max.daily
snapshot.max.weekly
snapshot.max.monthly
enable.secure.session.cookie
json.content.type
A root volume can be replaced by a different root volume without the VM it belongs to being expunged.
From dev@:
For example: let's say we have a system VM running on NFS primary storage. We then put this primary storage into maintenance mode, which creates the system VM (with the same name) on a different primary storage (we do not create a new row in the cloud.vm_instance table for this VM). While this VM works, the original root disk of the system VM remains on the original primary storage and is not destroyed by the code in StorageManagerImpl.cleanupStorage(boolean) in 4.10, because 4.10 (as shown above) only asks for non-root volumes to consider for deletion. In the 4.9 version of the code, the original root disk is cleaned up in StorageManagerImpl.cleanupStorage(boolean). The problem with 4.10 relying on a root disk always being deleted when the VM it belongs to is deleted is that, in a situation like this, the system VM doesn't get deleted at this point; it gets a new root disk hosted by a different primary storage, so its original root disk is stranded.
When sending the DHCP offerings to the VRs this setting wasn't read properly, which made
it default to 'all'.
This caused all DHCP offerings to be sent to all VRs instead of just those in the POD.
As VR provisioning can be very time consuming, this fix can drastically reduce the deployment
time of the VR.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
1. Removed XenServerGuestOsMemoryMap from CitrixHelper.java
This Java file was holding a static in-memory map named XenServerGuestOsMemoryMap, which was the source of the XenServer dynamic memory values (max and min). These values were moved to the guest_os_details table.
2. The DAO layer was modified to access these values.
3. The VirtualMachineTO object was modified to populate the dynamic memory values.
4. The addGuestOs and updateGuestOs APIs have been modified to update the memory values.
Secure and hidden config values were decrypted before being returned
in the API. This is not desired, as they are secure configs.
Now encrypted strings are returned for secure and hidden configs if encryption
is enabled.
* 4.9:
CLOUDSTACK-9647: NIC adapter type becomes e1000 , even after changing the global parameter "vmware.systemvm.nic.device.type" to vmxnet3 for VPC VR
removed code which nullifies vm_instance_id
Also modified QueryManagerImpl to ignore volume which does not have uuid. This is to avoid duplicate volume listing.
* 4.9:
CLOUDSTACK-9857: With this change if agent dies the systemd will catch it properly and show process as exited
CLOUDSTACK-9805: Display VR list in network details
CLOUDSTACK-9356: FIX Cannot add users in VPC VPN
LDAP auto-creation of accounts is broken due to the security fix for
CLOUDSTACK-9369.
There was an explicit check to not allow login in case the
user doesn't exist; removed that check.
- commented some occurrences of cloud.com as being harmless
* examples
* identifiers (internal)
- changed the URL for vhd-util download
- changed comments from 'cloud.com' to 'Apache CloudStack'
Reviewed-By: Rashmi Dixit
Problem: All the hosts suitable for VM migration are not shown in the UI. This could
confuse the user as the target host might never be shown in the UI.
Root Cause: The API (findHostsForMigration) always returned page 1 results, which would
always be <= the default.page.size global parameter. Therefore, in the case of a large
number of hosts where the result can map to multiple pages, this issue would arise.
Solution: 1. Replace drop-down with listView widget.
2. Allow lazy-loading of records on listView's scroll.
3. Show additional parameters (CPU/Memory used) to assist admin in decision making.
4. Provide 'Search by host name' to limit the results.
Added a change where, if no hosts are found, an empty row with a message will
appear.
There are some VM deployment failures when multiple VMs are deployed at a time, mainly due to NetworkModel code that iterates over all the VLANs in the pod. This causes each deployVM thread to hold the global lock on Network longer and causes delays. This delay in turn causes more threads to choose the same host and fail since capacity is not available on that host.
The following changes reduce delays during VM deployments, which in turn cause some VM deployment failures when multiple VMs are launched at a time:
In the Planner, remove the clusters that do not contain a host with a matching service offering tag. This saves some iterations over clusters that don't have a matching tagged host.
In NetworkModel, do not query the VLANs for the pod within the loop. Also optimized the logic to query the IP/IPv6.
In DeploymentPlanningManagerImpl, do not process the affinity group if the plan already has a hostId.
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM. Updated hardcoded value with max data volumes limit from hypervisor capabilities.
* pr/1953:
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-5806: add presetup to storage types that support over provisioning
Ideally this should be configurable via global settings
* pr/1958:
CLOUDSTACK-5806: add presetup to storage types that support over provisioning Ideally this should be configurable via global settings
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
The addHost API succeeds without providing the host tag info, even though we recommend the host tag be mandatory for bare metal.
In the current implementation the host tag check happens at the VM deployment stage, but it would be good to make the host tag a mandatory field when adding the host itself.
This improves the metrics view feature by improving the rendering performance
of metrics view tables, reimplementing the logic at the backend with the data
served via APIs. In large environments, the older implementation would
make several API calls, which increases both network and database load.
List of APIs introduced for improving the performance:
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8841: Storage XenMotion from XS 6.2 to XS 6.5 fails. Removed the host version check in the API, because:
Case 1 (lower to higher version):
Migration from a lower version to a higher version is valid.
Case 2 (higher to lower version):
In this case the system (host) will not allow it.
So there is no need to check the version in the API. Additionally, the CloudStack user interface (UI) does not allow migration between different versions of hypervisors, but sometimes the user wants to migrate from a lower to a higher version; now they can do it via the API.
ACS Link ==>
https://issues.apache.org/jira/browse/CLOUDSTACK-8841
* pr/815:
CLOUDSTACK-8841: Storage XenMotion from XS 6.2 to XS 6.5 fails.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle
only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
* pr/1825:
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9757: Fixed issue in traffic from additional public subnet. Acquire an IP from the additional public subnet and configure NAT on that IP.
After this, pick any VM from that network and access the additional public subnet from it. Traffic is supposed to go via the additional public subnet interface in the VR.
* pr/1922:
CLOUDSTACK-9757: Fixed issue in traffic from additional public subnet
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
CLOUDSTACK-9788: Fix exception listNetworks with pagesize=0
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Fix HVM VM restart bug in XenServer
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Reverting VM to disk only snapshot in Xenserver corrupts VM
Stale NFS secondary storage on XS leads to volume creation failure from snapshot
Fixed various concerns raised in #672
* pr/1941:
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8857: listProjects doesn't return the tags vmstopped or vmrunning when their value is zero.
Added the appropriate tags to the response.
tested this manually by creating projects, launching vms from project accounts and then listing the projects.
* pr/838:
CLOUDSTACK-8857 listProjects doesn't return tags vmstopped or vmrunning when their value is zero
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8856: Primary Storage Used (type tag with value 2) related tag is not showing in the listCapacity API response.
* pr/865:
CLOUDSTACK-8856 Primary Storage Used(type tag with value 2) related tag is not showing in listCapacity api response.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
CLOUDSTACK-9789: Fix releasing secondary guest IP fails with associated static nat which is actually not used
CLOUDSTACK-9628: Use correct virtualsize with Swift as secondary storage
CLOUDSTACK-9789: Fix releasing secondary guest IP fails with associated static nat which is actually not used
* pr/1947:
CLOUDSTACK-9789: Fix releasing secondary guest IP fails with associated static nat which is actually not used
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9724: Fixed missing additional public IP on tier network with cleanup. In a VPC tier network, acquire an IP and configure the PF service on it; the VR will now have two IP addresses on the interface.
Now restart the VPC tier network with the cleanup option. After the router comes up, the public interface has only one IP (the source NAT IP).
Fixed the above issue.
* pr/1885:
CLOUDSTACK-9724: Fixed missing additional public ip on tier network with cleanup
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required based on persistent VR changes.
* pr/1882:
CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required based on persistent VR changes.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9768: Time displayed for events in the UI is incorrect. For example, when we log in using the Japanese language the time displayed in the events is GMT instead of JST, whereas with the English language the time is JST, as expected.
Example:
The time displayed in the event is 10:40 if you are logged in using the English language,
whereas the time in the event shows 19:40 if you log in with the Japanese language.
* pr/1926:
CLOUDSTACK-9768: Time displayed for events in UI is incorrect
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9766: Executing the deleteSnapshot API with an already deleted snapshot does not throw any exception or failure message. If we try to delete a snapshot which is already deleted, no proper error appears in the log and it just tries to delete the already deleted snapshot again.
Steps to reproduce :
-------
1-create a snapshot
2-delete the snapshot
3-try to delete snapshot which is deleted in step 2
Expected Result
-------------
The result should show a proper error message. A request to delete an already deleted snapshot should not be placed.
* pr/1924:
CLOUDSTACK-9766 : Executing deleteSnapshot api with already deleted snapshot does not throw any exception or failure message
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9711: Fixed error reporting while adding a VPN user
If configuring a VPN user in one of the networks fails, the failure is ignored; the failure should be shown in the API response.
* pr/1874:
CLOUDSTACK-9711: Fixed error reporting while adding vpn user
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
CLOUDSTACK-9691: Added test list_snapshots_with_removed_data_store
CLOUDSTACK-9691: Fixed unhandled exception in the list snapshot command when a primary store related to it is deleted
CLOUDSTACK-9790: fix NPE in case of Basic zone. This PR fixes the creation of a basic zone.
https://issues.apache.org/jira/browse/CLOUDSTACK-9790
* pr/1952:
CLOUDSTACK-9790: fix NPE in case of Basic zone.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9682: Block VM migration to a storage which is in maintenance mode. If
the destination pool is in maintenance mode, do not allow a volume to be migrated to
that storage pool. Fixed for volume migration and VM migration with volume.
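A minimal sketch of the pre-check described above, using a hypothetical status enum; the real validation lives in the volume/VM migration code paths:
```java
// Illustrative only; not the actual CloudStack storage pool status handling.
class MigrationTargetCheck {
    enum StoragePoolStatus { Up, Maintenance }

    static void ensureDestinationNotInMaintenance(StoragePoolStatus destinationStatus) {
        if (destinationStatus == StoragePoolStatus.Maintenance) {
            throw new IllegalStateException("destination storage pool is in maintenance mode; migration is not allowed");
        }
    }
}
```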
* pr/1838:
CLOUDSTACK-9682: Block VM migration to a storage which is in maintenance mode. If the destination pool is in maintenance mode do not allow a volume to be migrated to the storage pool. Fixed it for volume migration and vm migration with volume.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9679:Allow master user to manage subordinate user uploaded template
* pr/1834:
CLOUDSTACK-9679:Allow master user to manage subordinate user uploaded template
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9574: Redesign storage views
## Part 1: Redesign storage tags
### Actual behavior
Primary storage tags are being saved as an entry in `storage_pool_details` with:
* name = TAG_NAME
* value = "true"
When a boolean property is defined in `storage_pool_details` and has value = "true", it is displayed as a tag.


### Goal
Redesign `Storage Tags` for Primary Storage view, to list only tags, as it is done in Host Tags (Hosts view).
## Part 2: Remove details from listImageStores API call response and UI
### Description
In the Secondary Storage view we propose removing the `Details` field, as the `Setting` tab lists details for a given image store. We also remove details from the response of the `listImageStores` API method.
* pr/1747:
CLOUDSTACK-9574: Redesign storage tags and remove details from listImageStores response and UI
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
BUG-ID: CLOUDSTACK-9678: listNetworkOfferings API is listing all the offerings which have the same prefix in their name
* pr/1833:
BUG-ID:CLOUDSTACK-9678 listNetworkOfferings API is listing all the offerings which has same prefix in their name
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
snapshots to exist together
Reverting VM to disk only snapshot in Xenserver corrupts VM
Stale NFS secondary storage on XS leads to volume creation failure from snapshot
CLOUDSTACK-8886: Limitations in listUsageRecords output, listUsageRecords does not return domain. As @kansal is inactive, created a new branch and raised the PR. This is a continuation of PR #858.
This closes #858
Problem: Only the domainid is returned by the usageReports API call. In the CloudStack documentation it mentions "domain" as being in the usage response. The API should really be returning the domain, as the account information has both account and accountid.
Fix: Missing setDomainName at the time of creating the response.
* pr/1939:
CLOUDSTACK-8886: Limitations in listUsageRecords output, listUsageRecords does not return domain - Fixed and tests added
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Updated StrongSwan VPN Implementation. This PR is a merge of @jayapalu's changes in #872 and the changes I had to make to get the functionality working.
I have done pretty extensive testing of this code so far and we are looking to be in pretty good shape. One thing to note is that a `Diffie-Hellman` group **is required** in order for this feature to work correctly. It is not highlighted in the tests below, but I have shown that the `PFS` is not required for this feature to work. In #872 I have shown a more exhaustive set of tests of this code, but I have limited this set of tests to a recommended `IKE` and `ESP` configuration in order to reduce the noise and test the other areas of functionality.
**Test Results**
I am testing this functionality by creating two VPCs with VMs in each and creating a S2S VPN connection between the two VPCs. Then I SSH into a VM in one VPC and I ping the private IP of a VM in the other VPC. Then I tear it down and try a different configuration.
_Setup_
```
VPC 1 VPC 2
===== =====
VPN Gateway VPN Gateway
VPN Customer Gateway VPN Customer Gateway
VPN Connection <---> VPN Connection
- Passive = True - Passive = False
```
_Legend_
`SKIP` => At least one of the VPN Connections did not come up, so no test was run.
`OK` => The ping test was successful over the S2S VPN connection.
`FAIL` => The ping test failed over the S2S VPN connection.
`Passive` => Specifies if either the `<vpc_1> : <vpc_2>` sides of the VPN Connection is set to passive.
`Conn State` => Specifies the connection status of the `<vpc_1> : <vpc_2>` VPN Connection in the UI.
`Requires Reset` => If the ping test does not result in an `OK`, then a VPN Connection Reset is performed on either `<vpc_1> : <vpc_2>` sides of the VPN Connection based on which side is not showing `Connected`. The results in the `Status` column is the final result after the reset is performed.
_Results_
```
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| Status | IKE & ESP | DPD | Encap | IKE Life | ESP Life | Passive | Conn State | Requires Reset |
+========+======================+=======+=======+==========+==========+===============+=============================+================+
| OK | aes128-sha1;modp1536 | True | False | 86400 | 3600 | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | True | 86400 | 3600 | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | | 3600 | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | 86400 | | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | | | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | 86400 | 3600 | False : False | Connected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | 86400 | 3600 | True : True | Disconnected : Disconnected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | 86400 | 3600 | False : True | Connected : Disconnected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | False | False | 86400 | 3600 | False : False | Connected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | False | False | 86400 | 3600 | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | False | False | 86400 | 3600 | True : True | Disconnected : Disconnected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | False | False | 86400 | 3600 | False : True | Connected : Disconnected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| SKIP | aes128-sha1 | True | False | 86400 | 3600 | True : False | Disconnected : Error | True : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| SKIP | aes128-sha1 | False | False | 86400 | 3600 | True : False | Disconnected : Error | True : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| FAIL | aes128-sha1 | True | False | 86400 | 3600 | True : True | Disconnected : Disconnected | True : True |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| SKIP | aes128-sha1 | True | False | 86400 | 3600 | False : False | Connected : Error | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
```
* pr/1741:
complete implementation of the StrongSwan VPN feature
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
If the destination pool is in maintenance mode, do not allow a volume to be migrated to
that storage pool. Fixed for volume migration and VM migration with volume.
This issue occurs when a volume in Ready state is moved across storage
pools.
While finding out whether the storage pool has enough space, there is a check that
considers the size of non-Ready volumes only. This is fine if the volume
to be attached to a VM is in the same storage pool. But if the volume
is in another storage pool and has to be moved to the VM's storage pool,
the size of the volume should be considered in the space check.
Fixed by computing the asking size when the volume is not in Ready state or when the
volume is on a different storage pool.
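A minimal sketch of the space-check change described above; field and method names are made up:
```java
// Illustrative only; returns the size that must be accounted for on the destination pool.
class AskingSizeCalculator {
    // Ready volumes already on the destination pool take no extra space; volumes that are not
    // Ready, or that live on a different pool, must be counted in full.
    static long askingSize(long volumeSize, boolean volumeIsReady, long volumePoolId, long destinationPoolId) {
        boolean alreadyOnDestination = volumeIsReady && volumePoolId == destinationPoolId;
        return alreadyOnDestination ? 0L : volumeSize;
    }
}
```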
registerTemplate and getUploadParamsForTemplate APIs:
Any string is allowed as the hypervisor type from the API.
HypervisorType.getType() tries to validate it against the enum values and, if nothing
matches, sets the type to None.
Added a check to not allow the None hypervisor type when registering.
[4.10] CLOUDSTACK-7985: assignVM in Advanced zone with Security Groups. This commit contains the following changes:
(1) implementation of assignVM in an Advanced zone with Security Groups
(2) keep the default NIC on the shared network when assigning a VM
(3) allow migrating a VM from/to a project
(4) UI change for selecting account/project/network
* pr/844:
CLOUDSTACK-7985: assignVM in Advanced zone with Security Groups
CLOUDSTACK-7985: keep the default nic on shared network when assignVM
CLOUDSTACK-7985: (1) allow migrate vm from/to project; (2) UI change for selecting account/project/network
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This closes #1644
* 4.9:
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable
Unhides snapshot.backup.rightafter from global configuration
If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage.
This is the same as PR #1644 applied to 4.9, as per @jburwell
* pr/1697:
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9738: [Vmware] Optimize VM expunge process for instances with VM snapshots
## Description
It was noticed that expunging instances with many VM snapshots took a lot of time, as the hypervisor received as many tasks as the instance had VM snapshots, apart from the delete VM task. We propose a way to optimize this process for instances with VM snapshots by sending only one delete task to the hypervisor, which will delete the VM and its snapshots.
## Use cases
1. deleteVMSnapshot -> no changes to current behavior
2. destroyVM with expunge=false -> no action on VM snapshots is performed at the moment. When the VM cleanup thread is executed it will perform the same sequence as (3). If the instance is recovered before being expunged by the cleanup thread it will remain intact with the VM snapshot chain present.
3. destroyVM with expunge=true:
* the VM snapshot is marked with a removed timestamp and state = Expunging in the DB
* the VM is deleted on the hypervisor
* pr/1905:
CLOUDSTACK-9738: [Vmware] Optimize vm expunge process for instances with vm snapshots
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
CLOUDSTACK-8805: Domains become inactive automatically. Handled the '%' case by replacing that with a literal character rather than a wildcard character.
CLOUDSTACK-8805: Domains become inactive automatically. Handled the '%' case by replacing that with a literal character rather than a wildcard character.
* pr/775:
CLOUDSTACK-8805: Domains become inactive automatically. Handled the '%' case by replacing that with a literal character rather than a wildcard character.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM
* pr/977:
Fixes for testing VM Snapshots on KVM. Related to PR 977
CLOUDSTACK-8746: vm snapshot implementation for KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This commit implements basic Security Grouping for KVM in
Basic Networking.
It does not implement full Security Grouping yet, but it does:
- Prevent IP-Address source spoofing
- Allow DHCPv6 clients, but disallow DHCPv6 servers
- Disallow Instances to send out Router Advertisements
The Security Grouping allows ICMPv6 packets as described by RFC4890
as they are essential for IPv6 connectivity.
Following RFC4890 it allows:
- Router Solicitations
- Router Advertisements (incoming only)
- Neighbor Advertisements
- Neighbor Solicitations
- Packet Too Big
- Time Exceeded
- Destination Unreachable
- Parameter Problem
- Echo Request
ICMPv6 is an essential part of IPv6; without it, connectivity will break or be very
unreliable.
For now it allows any UDP and TCP packet to be sent in to the Instance, which
effectively opens up the firewall completely.
Future commits will implement Security Grouping further, which allows controlling UDP and TCP
ports for IPv6 as can be done with IPv4.
Regardless of the egress filtering (which can't be done yet), it will always allow outbound DNS
to port 53 over UDP or TCP.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This commit adds the initial functionality for IPv6 in Basic Networking.
When a valid IPv6 CIDR is configured for the POD/VLAN, the DirectPodBasedNetworkGuru
will use the EUI-64 calculation to calculate the IPv6 address the Instance will obtain.
For this it is required that the physical routers in the Layer 2 network (POD/VLAN) send out
Router Advertisements with the same subnet as configured in CloudStack.
An example subnet could be 2001:db8::/64
Using radvd, a Linux router could send out Router Advertisements using this configuration:
interface eth0
{
MinRtrAdvInterval 5;
MaxRtrAdvInterval 60;
AdvSendAdvert on;
AdvOtherConfigFlag off;
IgnoreIfMissing off;
prefix 2001:db8::/64 {
};
RDNSS 2001:db8:ffff::53 {
};
};
An Instance with MAC address 06:7a:88:00:00:8b will obtain the IPv6 address 2001:db8:100::47a:88ff:fe00:8b
Windows, Linux and FreeBSD all use the same calculation for their IPv6 addresses; this is specified
in RFC4862 (IPv6 Stateless Address Autoconfiguration).
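For reference, a self-contained sketch of the EUI-64/SLAAC calculation described above (illustrative only, not the DirectPodBasedNetworkGuru code): split the MAC in half, insert ff:fe, and flip the universal/local bit of the first octet.
```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative EUI-64 calculation; prefix64 holds the first 8 bytes of the /64 prefix.
class Eui64 {
    static InetAddress ipv6From(byte[] prefix64, String mac) throws UnknownHostException {
        String[] parts = mac.split(":");
        byte[] addr = new byte[16];
        System.arraycopy(prefix64, 0, addr, 0, 8);
        // first half of the MAC with the universal/local bit flipped, then ff:fe, then the second half
        addr[8]  = (byte) ((Integer.parseInt(parts[0], 16) ^ 0x02) & 0xff);
        addr[9]  = (byte) Integer.parseInt(parts[1], 16);
        addr[10] = (byte) Integer.parseInt(parts[2], 16);
        addr[11] = (byte) 0xff;
        addr[12] = (byte) 0xfe;
        addr[13] = (byte) Integer.parseInt(parts[3], 16);
        addr[14] = (byte) Integer.parseInt(parts[4], 16);
        addr[15] = (byte) Integer.parseInt(parts[5], 16);
        return InetAddress.getByAddress(addr);
    }
}
```
With a 2001:db8:100::/64 prefix and MAC 06:7a:88:00:00:8b this yields 2001:db8:100::47a:88ff:fe00:8b, matching the example above.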
Under Linux it is mandatory that IPv6 Privacy Extensions are disabled:
$ sysctl -w net.ipv6.conf.all.use_tempaddr=0
Windows should be configured to use the MAC Address as the identifier for the EUI-64/SLAAC calculation.
$ netsh interface ipv6 set privacy state=disabled store=persistent
$ netsh interface ipv6 set global randomizeidentifiers=disabled store=persistent
The IPv6 address is stored in the 'nics' table and is then returned by the API and shown in the UI.
Searching for a conflicting IPv6 address is NOT required, as each IPv6 address is based on the MAC address
of the Instance and is therefore unique.
Security Grouping has not been implemented yet and will follow in an upcoming commit.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
(1) add support to create/delete/revert VM snapshots on running VMs with QCOW2 format
(2) add a new API to create a volume snapshot from a VM snapshot
(3) delete the metadata of VM snapshots before stopping/migrating and recover VM snapshots after starting/migrating
(4) enable deleting of a VM snapshot on a stopped VM or when the VM snapshot is not listed in the qcow2 image
(5) enable smoke tests for VM snapshots on KVM
CLOUDSTACK-9456: Migrate master to Spring 4.x. This change makes CloudStack use Spring 4:
```
- Bump spring-framework version to 4.x and Jetty to version that runs with JDK7
- Bump servlet dependency version
- Migrates various xmls to use version independent schema uris
```
Outstanding issue:
- Testing of various non-standard plugins such as network and storage plugins etc.
Since this is a big change, pinging for review -- @jburwell @karuturi @wido @murali-reddy @abhinandanprateek @DaanHoogland @GaborApatiNagy @JayapalUradi @kishankavala @K0zka @nvazquez @rafaelweingartner @pyr and others
@blueorangutan package
* pr/1638:
CLOUDSTACK-9456: Update Spring version in maven poms
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9649: In the management server log there is an error
ISSUE
============
In the management server log there is an error
2016-10-01 00:07:31,670 ERROR [c.c.h.v.r.VmwareResource] (DirectAgent-417:ctx-e8c89b3f strmg-esx-01, cmd: GetRouterAlertsCommand) (logid:7beb3819) Command failed due to Exception: java.io.IOException
Message: There was a problem while connecting to 0.0.0.0:3922
In case of basic zone and VMWare ESXi host, the NIC 2 always gets 0.0.0.0 as IP address. Looks like we are generating an error for connecting through this invalid IP.
2016-10-01 04:37:31,680 DEBUG [c.c.a.m.AgentManagerImpl] (RouterStatusMonitor-1:ctx-8880f9c8) (logid:946838b8) Details from executing class com.cloud.agent.api.routing.GetRouterAlertsCommand: Command failed due to Exception: java.io.IOException
Message: There was a problem while connecting to 0.0.0.0:3922
2016-10-01 04:37:31,680 WARN [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-8880f9c8) (logid:946838b8) Unable to get alerts from router r-4-VM Command failed due to Exception: java.io.IOException
Message: There was a problem while connecting to 0.0.0.0:3922
2016-10-01 04:37:31,682 DEBUG [c.c.n.ExternalDeviceUsageManagerImpl] (ExternalNetworkMonitor-1:ctx-913c7bae) (logid:1b926a60) External devices stats collector is running...
Root Cause:
Link local is not used in basic zone mode (VMware), so 0.0.0.0 is just shown as a placeholder address. In getRouterAlerts, before sending GetRouterAlertsCommand, added a check for the IP so the command is skipped if the IP is '0.0.0.0'.
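The guard amounts to something like the sketch below (illustrative only, not the actual management server code):
```java
// Illustrative only: skip sending GetRouterAlertsCommand when the router IP is the 0.0.0.0 placeholder.
class RouterAlertsGuard {
    static boolean shouldSendGetRouterAlerts(String routerIp) {
        return routerIp != null && !"0.0.0.0".equals(routerIp);
    }
}
```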
* pr/1811:
CLOUDSTACK-9649: In the management server log there is an error related to 0.0.0.0 IP address
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9639: Unable to create shared network with vLan isolation
Description:
=========
Creating a shared network fails with an error.
While creating a shared network it fails with the error "The new IP range you have specified has overlapped with the guest network in the zone: XYZ. Please specify a different gateway/netmask"
Steps to Reproduce:
===============
1. Create an isolated network with a subnet eg: 10.1.1.0/24
2. Create a shared network with the same subnet but different VLAN, we should observe this issue.
Expected Behaviour:
===============
It shouldn't restrict the creation of the guest network with the same subnet as long as they are segmented by VLAN.
ACTUAL BEHAVIOR:
===============
It doesn't allow the creation of shared guest networks if there is any isolated guest network using the same subnet although it allows using the same subnet in multiple shared networks.
Cause:
=====
The issue happens because, when the shared network is being created, we check whether any guest network is already implemented with the same CIDR and throw the error without also checking whether they have the same VLAN. Creating a shared network with the same CIDR but a different VLAN should be allowed.
Fix:
===
When creating a shared network, if there is any existing guest network with the same CIDR, we check whether they have the same VLAN; if they are in the same VLAN, we don't allow creating it. If they have the same CIDR but a different VLAN, we allow creating the network.
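A minimal sketch of the overlap check described in the fix; the names are illustrative and this is not the actual network service code:
```java
// Illustrative only: reject creation only when an existing guest network uses both the same CIDR and the same VLAN.
class SharedNetworkOverlapCheck {
    static boolean conflicts(String newCidr, String newVlan, String existingCidr, String existingVlan) {
        boolean sameCidr = newCidr != null && newCidr.equals(existingCidr);
        boolean sameVlan = newVlan != null && newVlan.equalsIgnoreCase(existingVlan);
        return sameCidr && sameVlan;
    }
}
```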
* pr/1804:
CLOUDSTACK-9639: Unable to create shared network with vLan isolation
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9617: Fixed enabling remote access after PF or LB configured on vpn tcp ports. Enabling Remote Access VPN fails when there is a port forwarding rule on the reserved ports (1701, 500, 4500) under the TCP protocol on the source NAT IP.
* pr/1782:
CLOUDSTACK-9617: Fixed enabling remote access after PF or LB configured on vpn tcp ports
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9683: system.vm.default.hypervisor will pin the hypervisor for VR too with this fix
* pr/1839:
CLOUDSTACK-9683: system.vm.default.hypervisor will pin the hypervisor for VR too with this fix
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Bump spring-framework version to 4.x and Jetty to a version that runs with JDK8
- Bump servlet dependency version
- Migrate spring xmls to version 4; fixes schema locations that are 3.0
dependent in various xmls.
- Fix failing tests due to the spring upgrade
(Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
* Fix test DeploymentPlanningManagerImplTest
* Fix GloboDNS test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9626: Instance fails to start after unsuccessful compute offering upgrade
ISSUE
============
Instance fails to start after an unsuccessful compute offering upgrade.
TROUBLESHOOTING
==================
We observed that the VM instance gets the compute values "cpuNumber", "cpuSpeed" and "memory" removed from the table "user_vm_details", which causes the instance to fail to start up next time on XenServer.
`mysql> select * from user_vm_details where vm_id=10;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id  | vm_id | name                               | value                                           | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 218 | 10    | platform                           | viridian:true;acpi:1;apic:true;pae:true;nx:true | 1       |
| 219 | 10    | hypervisortoolsversion             | xenserver56                                     | 1       |
| 220 | 10    | Message.ReservedCapacityFreed.Flag | true                                            | 1       |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
3 rows in set (0.00 sec)`
`2016-11-29 06:49:03,667 ERROR [c.c.a.ApiAsyncJobDispatcher] (API-Job-Executor-12:ctx-49c25b1d job-125) (logid:114a2f1b) Unexpected exception while executing org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin
java.lang.NullPointerException
at com.cloud.vm.UserVmManagerImpl.upgradeRunningVirtualMachine(UserVmManagerImpl.java:1664)
at com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1631)
at com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy197.upgradeVirtualMachine(Unknown Source)
at org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin.execute(ScaleVMCmdByAdmin.java:48)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
at com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:554)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:502)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)`
REPRO STEPS
==================
1. Set global setting enable.dynamic.scale.vm to true
2. Create a custom compute offering A
3. Create a VM instance applying A, i.e. cpuNumber=1, cpuSpeed=1000, memory=512M
4. Create another custom compute offering B
5. Change the service offering to B, i.e. cpuNumber=2, cpuSpeed=2000, memory=4096M (ensure 4 times the previous memory size); scaling will then fail
6. Stop the VM instance; it will never start up again
EXPECTED BEHAVIOR
==================
The VM instance starts up successfully
ACTUAL BEHAVIOR
==================
The instance fails to start
RCA:
The rollback does not take care of restoring the old service offering details. In case of failure we remove the new service offering details, but restoring the old service offering details is missing.
Before Fix:
`user_vm_details before upgrade.
mysql> select * from user_vm_details where vm_id =9;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id | vm_id | name | value | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 118 | 9 | platform | viridian:true;acpi:1;apic:true;pae:true;nx:true | 1 |
| 119 | 9 | hypervisortoolsversion | xenserver56 | 1 |
| 120 | 9 | Message.ReservedCapacityFreed.Flag | false | 1 |
| 121 | 9 | cpuNumber | 1 | 1 |
| 122 | 9 | cpuSpeed | 1000 | 1 |
| 123 | 9 | memory | 256 | 1 |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
6 rows in set (0.00 sec)
user_vm_details after unsuccessful upgrade.
mysql> select * from user_vm_details where vm_id =9;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id | vm_id | name | value | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 133 | 9 | platform | viridian:true;acpi:1;apic:true;pae:true;nx:true | 1 |
| 134 | 9 | hypervisortoolsversion | xenserver56 | 1 |
| 135 | 9 | Message.ReservedCapacityFreed.Flag | false | 1 |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
3 rows in set (0.00 sec)`
After fix:
`
mysql> select * from user_vm_details where vm_id =9;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id | vm_id | name | value | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 166 | 9 | cpuNumber | 1 | 1 |
| 167 | 9 | platform | viridian:true;acpi:1;apic:true;pae:true;nx:true | 1 |
| 168 | 9 | cpuSpeed | 1000 | 1 |
| 169 | 9 | Message.ReservedCapacityFreed.Flag | false | 1 |
| 170 | 9 | memory | 256 | 1 |
| 171 | 9 | hypervisortoolsversion | xenserver56 | 1 |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
6 rows in set (0.00 sec)
`
* pr/1796:
CLOUDSTACK-9626: Instance fails to start after unsuccessful compute offering upgrade.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle
only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
CLOUDSTACK-9594: API "list templates templatefilter=all" reveals all templates.
Using a "list templates templatefilter=all" API call, any domain admin can see all templates of all domains in ACS. The information returned includes the account and domain of the template's owner.
The template data shows what that VM is using and any hints from the label. This would give an advantage in choosing which attack vectors to use. The account and domain can possibly be used in a brute force attack to guess the password and login information.
Test Scenario:
created two accounts in different domain.
```
mysql> select account_id,username,api_key from user where id in (4,5);
+------------+-----------+----------------------------------------------------------------------------------------+
| account_id | username | api_key |
+------------+-----------+----------------------------------------------------------------------------------------+
| 4 | sudadmin1 | 3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg |
| 5 | sudadmin | N5uHVOrg1Ek1F1a_5OXTz4WpLG3ewHqcbPUSBjQ-2CTJdxmUe2go0S8fyqH4Np0scYiehYg2KqthZXCWEyKx1A |
+------------+-----------+----------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
mysql> select account_name,domain_id from account where id in (4,5);
+--------------+-----------+
| account_name | domain_id |
+--------------+-----------+
| sudadmin | 2 |
| sudadmin1 | 3 |
+--------------+-----------+
2 rows in set (0.00 sec)
```
User sudadmin registered a private template named 'Debian'.
http://10.147.59.107:8080/client/api?apikey=N5uHVOrg1Ek1F1a_5OXTz4WpLG3ewHqcbPUSBjQ-2CTJdxmUe2go0S8fyqH4Np0scYiehYg2KqthZXCWEyKx1A&command=listTemplates&templatefilter=self&signature=ODt7zEWCLL20z1FT%2FIkd1molRaM%3D
listTemplate with "templatefilter=self", lists the newly registered template.
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>1</count>
<template>
<id>51026d32-60ee-4e25-8ffd-3fa3c57fc14c</id>
<name>Debian</name>
<displaytext>Debian</displaytext>
<ispublic>false</ispublic>
<created>2016-11-10T17:18:00-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>false</isfeatured>
<crossZones>false</crossZones>
<ostypeid>38c1fc84-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>Debian GNU/Linux 7(64-bit)</ostypename>
<account>sudadmin</account>
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<status>Download Complete</status>
<size>2621440000</size>
<templatetype>USER</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>SUDDOMAIN</domain>
<domainid>a350c00d-4048-4876-ae09-74ad4b7bb28c</domainid>
<isextractable>false</isextractable>
<checksum>e87a6d7291b999c92baa9623c9c3c207</checksum>
<details>{hypervisortoolsversion=xenserver61}</details>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>false</isdynamicallyscalable>
</template>
</listtemplatesresponse>
```
User: sudadmin1
listTemplate with "templatefilter=self" does not list any template.
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=self&signature=RfKsdg3RxDkqJotbTlHU2RdbdPA%3D
`<listtemplatesresponse cloud-stack-version="4.8.0"/>
`
NO TEMPLATES
**listTemplate with "templatefilter=all" lists all templates**
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=all&signature=l5tubfyABT67d1jY702dvtZODbc%3D
Result:
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>3</count>
<template>
<id>38451a02-a687-11e6-a8c8-06f654000053</id>
<name>CentOS 5.6(64-bit) no GUI (XenServer)</name>
<displaytext>CentOS 5.6(64-bit) no GUI (XenServer)</displaytext>
<ispublic>true</ispublic>
....
</template>
<template>
<id>51026d32-60ee-4e25-8ffd-3fa3c57fc14c</id>
<name>Debian</name>
<displaytext>Debian</displaytext>
<ispublic>false</ispublic>
<created>2016-11-10T17:18:00-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>false</isfeatured>
<crossZones>false</crossZones>
<ostypeid>38c1fc84-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>Debian GNU/Linux 7(64-bit)</ostypename>
**<account>sudadmin</account>**
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<size>2621440000</size>
<templatetype>USER</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>SUDDOMAIN</domain>
<domainid>a350c00d-4048-4876-ae09-74ad4b7bb28c</domainid>
<isextractable>false</isextractable>
<checksum>e87a6d7291b999c92baa9623c9c3c207</checksum>
<details>{hypervisortoolsversion=xenserver61}</details>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>false</isdynamicallyscalable>
</template>
<template>
<id>5f6af7bb-d965-4b9b-ab45-6d455b0d6bbe</id>
<name>SystemVM Template (XenServer)</name>
<displaytext>SystemVM Template (XenServer)</displaytext>
<ispublic>false</ispublic>
.....
</template>
</listtemplatesresponse>
```
**After Fix:**
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=all&signature=l5tubfyABT67d1jY702dvtZODbc%3D
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>1</count>
<template>
<id>38451a02-a687-11e6-a8c8-06f654000053</id>
<name>CentOS 5.6(64-bit) no GUI (XenServer)</name>
<displaytext>CentOS 5.6(64-bit) no GUI (XenServer)</displaytext>
<ispublic>true</ispublic>
<created>2016-11-10T09:32:44-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>true</isfeatured>
<crossZones>true</crossZones>
<ostypeid>38a2bfd6-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>CentOS 5.6 (64-bit)</ostypename>
<account>system</account>
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<size>21474836480</size>
<templatetype>BUILTIN</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>ROOT</domain>
<domainid>383e0ea6-a687-11e6-a8c8-06f654000053</domainid>
<isextractable>true</isextractable>
<checksum>905cec879afd9c9d22ecc8036131a180</checksum>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>true</isdynamicallyscalable>
</template>
</listtemplatesresponse>
```
The bug has been fixed considering the points below:
1. templatefilter=all or isofilter=all is applicable only to admin and domain admin.
2. With templatefilter=all or isofilter=all, the visibility of templates in the system is as follows:
- admin should be able to see all templates/ISOs in the system.
- domain admin should be able to see all public templates and templates under its domain tree (including sub-domains).
- domain admin in a project context should be able to see all public templates, templates registered
as the project account, and templates which are shared (using the updateTemplatePermission API) with the project account.
Also modified "test/integration/component/test_escalation_listTemplateDomainAdmin.py".
This Marvin test was written for this scenario, but "templatefilter=all" was not used for the second account.
* pr/1763:
CLOUDSTACK-9594: reverted changes introduced in CLOUDSTACK-9376
CLOUDSTACK-9594: API "list templates templatefilter=all" reveals all templates of all domains
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9637: Template create from snapshot does not populate vm_template_details
**ISSUE**
============
Template create from snapshot does not populate vm_template_details
**REPRO STEPS**
==================
1. Register a template A and specify property:
Root disk controller: scsi
NIC adapter type: E1000
Keyboard type: us
2. Create a vm instance from template A
3. Take volume snapshot for vm instance
4. Delete VM instance
5. Switch to "Storage->Snapshots", convert snapshot to a template B
6. Observe template B does not inherit property from template A, the table vm_template_details is empty
**SOLUTION**: Retrieve and add source template details to VMTemplateVO.
Before Fix:
```
mysql> select id,name,source_template_id from vm_template where id=202;
+-----+--------+--------------------+
| id | name | source_template_id |
+-----+--------+--------------------+
| 202 | Debian | NULL |
+-----+--------+--------------------+
1 row in set (0.00 sec)
mysql> select * from vm_template_details where template_id=202;
+----+-------------+--------------------+-------+---------+
| id | template_id | name | value | display |
+----+-------------+--------------------+-------+---------+
| 1 | 202 | keyboard | us | 1 |
| 2 | 202 | nicAdapter | E1000 | 1 |
| 3 | 202 | rootDiskController | scsi | 1 |
+----+-------------+--------------------+-------+---------+
3 rows in set (0.00 sec)
mysql> select id,name,source_template_id from vm_template where source_template_id=202;
+-----+----------------+--------------------+
| id | name | source_template_id |
+-----+----------------+--------------------+
| 203 | derived-debian | 202 |
+-----+----------------+--------------------+
1 row in set (0.00 sec)
mysql> select * from vm_template_details where template_id=203;
Empty set (0.00 sec)
After Fix:
mysql> select id,name,source_template_id from vm_template where source_template_id=202;
+-----+--------------------------+--------------------+
| id | name | source_template_id |
+-----+--------------------------+--------------------+
| 203 | derived-debian | 202 |
| 204 | debian-derived-after-fix | 202 |
+-----+--------------------------+--------------------+
2 rows in set (0.00 sec)
mysql> select * from vm_template_details where template_id=204;
+----+-------------+--------------------+-------+---------+
| id | template_id | name | value | display |
+----+-------------+--------------------+-------+---------+
| 4 | 204 | keyboard | us | 1 |
| 5 | 204 | nicAdapter | E1000 | 1 |
| 6 | 204 | rootDiskController | scsi | 1 |
+----+-------------+--------------------+-------+---------+
3 rows in set (0.00 sec)
```
**Marvin Test :** test_template_from_snapshot_with_template_details.py
**Result:**
```
test_01_create_template_snampshot (integration.component.test_template_from_snapshot_with_template_details.TestCreateTemplate) ... === TestName: test_01_create_template_snampshot | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 864.523s
OK
```
* pr/1805:
CLOUDSTACK-9637: Template create from snapshot does not populate vm_template_details
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
[CLOUDSTACK-9643] Now returning OS info with the list snapshot response
This commit adds the ID and display name of the OS on the volume.
* pr/1618:
CLOUDSTACK-9643: Now returning os info with the list snapshot response
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
[CLOUDSTACK-9644] Adding missing bits field to TemplateResponse
This pull request adds a bits field for template size, and sets it equal to the ISO size.
* pr/1622:
CLOUDSTACK-9644: Adding missing bits field to TemplateResponse
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9646: No usage is generated for uploaded templates/volumes from local
Published usage events on successful upload of a template or volume.
* pr/1809:
CLOUDSTACK-9646: No usage is generated for uploaded templates/volumes from local
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Upgrades Maven dependency version to v1.55
- Fixes bouncycastle usages and issues
- Adds timeout to jetty/annotation scanning
- Fixes servlet issue, uses servlet 3.1.0
- Downgrade javassist used by reflections to fix annotation process errors
- Make console-proxy-rdp bc dependency same as rest of the codebase
- Picks up PR #1510 by Daan
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Introduced a global configuration flag 'cluster.threshold.enabled'. By default the flag is true.
If the value is false, then a VM can be started in a cluster even if the cluster thresholds are
crossed. However, for a new VM deployment the cluster threshold will always be honoured.
CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule with the same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file
Multiple Internal LB rules (more than one Internal LB rule with the same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is added to the corresponding InternalLbVm instance, it replaces the existing one. Thus, traffic corresponding to these unresolved (old) Internal LB rules is dropped by the InternalLbVm instance.
PR contents:
1) Fix for this bug.
2) Marvin test coverage for Internal LB feature on master with native ACS setup (component directory) including validations for this bug fix.
3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins directory) to validate this bug fix.
4) PEP8 & PyFlakes compliance with the added Marvin test code.
* pr/1577:
CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Support for underlay features (Source & Static NAT to underlay) with the Nuage VSP SDN Plugin, including Marvin test coverage for the corresponding Source & Static NAT features on master. Moreover, our Marvin tests are written in such a way that they can validate our supported feature set with both the Nuage VSP SDN platform's overlay and underlay infra.
PR contents:
1) Support for Source NAT to underlay feature on master with Nuage VSP SDN Plugin.
2) Support for Static NAT to underlay feature on master with Nuage VSP SDN Plugin.
3) Marvin test coverage for Source & Static NAT to underlay on master with Nuage VSP SDN Plugin.
4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
5) PEP8 & PyFlakes compliance with our Marvin test code.
* pr/1580:
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Marvin tests for Source NAT and Static NAT features verification with NuageVsp (both overlay and underlay infra).
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>, Frank Maximus <frank.maximus@nuagenetworks.net>
CLOUDSTACK-9071: Properly parse stats.output.uri in StatsCollector
Both host and path could have been NULL, which causes the StatsCollector
not to start properly.
By checking that the Strings are not empty or null we make sure the StatsCollector
always runs and does not prevent the Management Server from starting.
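For illustration, a minimal standalone sketch of the described guard (class and method names are hypothetical, not the actual StatsCollector code):
```
import java.net.URI;
import java.net.URISyntaxException;

public class StatsOutputUriCheck {
    // Illustrative guard: only enable the external stats output when both
    // host and path are present; otherwise skip it instead of failing startup.
    static boolean canUseStatsUri(String statsOutputUri) {
        if (statsOutputUri == null || statsOutputUri.trim().isEmpty()) {
            return false;
        }
        try {
            URI uri = new URI(statsOutputUri);
            String host = uri.getHost();
            String path = uri.getPath();
            return host != null && !host.isEmpty() && path != null && !path.isEmpty();
        } catch (URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(canUseStatsUri(""));                       // false -> skip external output
        System.out.println(canUseStatsUri("influxdb://10.0.0.1/cs")); // true  -> configure it
    }
}
```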
* pr/1673:
CLOUDSTACK-9071: Properly parse stats.output.uri in StatsCollector
Signed-off-by: John Burwell <meaux@cockamamy.net>
CLOUDSTACK-9594: API "list templates templatefilter=all" reveals all templates of all domains
The bug has been fixed considering the points below:
1. templatefilter=all or isofilter=all is applicable only to admin
and domain admin.
2. With templatefilter=all or isofilter=all, the visibility
of templates in the system is as follows:
a. admin should be able to see all templates/ISOs in the system.
b. domain admin should be able to see all public templates and
templates under its domain tree (including sub-domains).
c. domain admin in a project context should be able to see all public
templates, templates registered as the project account, and templates
which are shared (using the updateTemplatePermission API) with the project account.
Modified
"test/integration/component/test_escalation_listTemplateDomainAdmin.py".
This Marvin test was written for this scenario, but "templatefilter=all"
was not used for the second account.
CLOUDSTACK-9553 Usage event is not getting recorded for snapshots in a specific scenario
* pr/1714:
CLOUDSTACK-9553 Usage event is not getting recorded for snapshots in a specific scenario
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
A constructor signature changed between the 4.8 and 4.9+ branches, which caused
a failure in a unit test introduced by PR #1694. This fixes the unit test by
passing null as the additional parameter (the test does not need an instantiated
object).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9509: Host Connects Without Storage
KVM hosts with shared storage failures were accepted by the mgmt server with the
host state as Up, even though there was no primary/shared storage available on
them. This patch offers a quick fix by throwing an exception in the storage monitor
which connects the storage pool on the host. The failure is trapped by the agent manager,
which disconnects the agent without any further investigation.
Based on lab tests, the KVM agent may take up to 2 minutes to attempt an NFS mount when
the storage is inaccessible (firewalled, or shut down) before returning with
an error. It is safe to assume that this won't add pressure on the mgmt server due to
several reconnection attempts, and the KVM agent would retry reconnection every 2
minutes.
For such KVM hosts, where the failure happens due to storage issues, they will be
briefly put in the Alert state but will mostly be in the Connecting state, during which
the KVM host attempts to mount/reconfigure the NFS storage pool.
/cc @jburwell @karuturi
@blueorangutan package
* pr/1694:
CLOUDSTACK-9509: Host Connects Without Storage
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR adds the ability to pass a new parameter, locationType,
to the “createSnapshot” API command. Depending on the locationType,
we decide where the snapshot should go in the case of managed storage.
There are two possible values for the locationType param:
1) `Standard`: The standard operation for managed storage is to
keep the snapshot on the device. For non-managed storage, this will
be to upload it to secondary storage. This option will be the
default.
2) `Archive`: Applicable only to managed storage. This will
keep the snapshot on the secondary storage. For non-managed
storage, this will result in an error.
The reason for implementing this feature is to avoid a single
point of failure for primary storage. Right now in case of managed
storage, if the primary storage goes down, there is no easy way
to recover data as all snapshots are also stored on the primary.
This feature allows us to mitigate that risk.
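For illustration, a minimal sketch of how the two values and the managed-storage-only rule for `Archive` could be modeled (the enum and method names are hypothetical, not the actual API classes):
```
public class SnapshotLocation {
    enum LocationType { STANDARD, ARCHIVE }

    // Resolve the requested locationType, defaulting to Standard, and reject
    // Archive for non-managed storage as described above.
    static LocationType resolve(String param, boolean managedStorage) {
        LocationType type = (param == null || param.isEmpty())
                ? LocationType.STANDARD
                : LocationType.valueOf(param.toUpperCase());
        if (type == LocationType.ARCHIVE && !managedStorage) {
            throw new IllegalArgumentException("locationType=Archive is only supported on managed storage");
        }
        return type;
    }

    public static void main(String[] args) {
        System.out.println(resolve(null, false));      // STANDARD (default)
        System.out.println(resolve("Archive", true));  // ARCHIVE (managed storage only)
    }
}
```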
CLOUDSTACK-9438: Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9438
### Introduction
From #1361 it was possible to configure NFS version for secondary storage mount.
However, changing the NFS version requires inserting a new detail into the `image_store_details` table, with `name = 'nfs.version'` and `value = X` where X is the desired NFS version, and then restarting the management server for the change to take effect.
Our improvement aims to make NFS version changeable from UI, instead of previously described workflow.
### Proposed solution
Basically, NFS version is defined as an image store ConfigKey, which implied:
* Adding a new Config scope: **ImageStore**
* Making `ImageStoreDetailsDao` extend `ResourceDetailsDaoBase` and `ImageStoreDetailVO` implement `ResourceDetail`
* Inserting a `'display'` column in the `image_store_details` table
* Extending `ListCfgsCmd` and `UpdateCfgCmd` to support the **ImageStore** scope, which implied:
  * Injecting `ImageStoreDetailsDao` and `ImageStoreDao` into the `ConfigurationManagerImpl` class, in the `cloud-server` module.
### Important
It is important to mention that the `ImageStoreDaoImpl` and `ImageStoreDetailsDaoImpl` classes were moved from the `cloud-engine-storage` to the `cloud-engine-schema` module in order for Spring to find those beans and inject them into `ConfigurationManagerImpl` in the `cloud-server` module.
We had this maven dependencies between modules:
* `cloud-server --> cloud-engine-schema`
* `cloud-engine-storage --> cloud-secondary-storage --> cloud-server`
As `ImageStoreDaoImpl` and `ImageStoreDetailsDao` were defined in `cloud-engine-storage` and they were needed in the `cloud-server` module (to be injected into `ConfigurationManagerImpl`), adding a dependency from `cloud-server` to `cloud-engine-storage` would introduce a dependency cycle. To avoid this cycle, we moved those classes to the `cloud-engine-schema` module.
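For illustration, a small standalone sketch of the resulting behaviour, where a per-image-store `nfs.version` detail overrides the global default (names are hypothetical, not the real ConfigKey/DAO wiring):
```
import java.util.HashMap;
import java.util.Map;

public class NfsVersionResolver {
    private final String globalDefault;
    private final Map<Long, String> imageStoreDetails = new HashMap<>();

    NfsVersionResolver(String globalDefault) {
        this.globalDefault = globalDefault;
    }

    // Simulates setting the detail for one image store, e.g. via updateConfiguration with ImageStore scope.
    void setStoreDetail(long imageStoreId, String nfsVersion) {
        imageStoreDetails.put(imageStoreId, nfsVersion);
    }

    // The store-scoped value wins; otherwise fall back to the global default.
    String nfsVersionFor(long imageStoreId) {
        return imageStoreDetails.getOrDefault(imageStoreId, globalDefault);
    }

    public static void main(String[] args) {
        NfsVersionResolver resolver = new NfsVersionResolver("3");
        resolver.setStoreDetail(42L, "4.1");
        System.out.println(resolver.nfsVersionFor(42L)); // 4.1 (per-store override)
        System.out.println(resolver.nfsVersionFor(7L));  // 3   (global default)
    }
}
```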
* pr/1615:
CLOUDSTACK-9438: Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9504: System VMs on Managed Storage
This PR makes it easier to spin up system VMs on managed storage.
Managed storage is when you have a dedicated volume on a SAN for a particular virtual disk (making it easier to deliver QoS).
For example, with this PR, you'd likely have a single virtual disk for a system VM. On XenServer, that virtual disk resides by itself in a storage repository (no other virtual disks share this storage repository).
It was possible in the past to spin up system VMs that used managed storage, but this PR facilitates the use case by making changes to the System Service Offering dialog (and by putting in some parameter checks in the management server).
JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9504
* pr/1642:
Added support for system VMs to make use of managed storage
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Currently the logic about volume deletion seems to be that an event
should be emitted when the volume delete is requested, not when the
deletion completes.
The VolumeStateListener specifically ignores destroy events for ROOT
volumes, assuming that the ROOT volume only gets deleted when the
instance is destroyed and the UserVmManager should take care of it.
When deleting an account, all of its resources get destroyed, but the
instance expunging circumvents the UserVmManager, and thus we miss the
VOLUME_DESTROY usage event. The account manager now attempts to
properly destroy the VM before expunging it. This way the destroy
logic is respected, including the event emission.
KVM hosts with shared storage failures were accepted by the mgmt server with the
host state as Up, even though there was no primary/shared storage available on
them. This patch offers a quick fix by throwing an exception in the storage monitor
which connects the storage pool on the host. The failure is trapped by the agent manager,
which disconnects the agent without any further investigation.
Based on lab tests, the KVM agent may take up to 2 minutes to attempt an NFS mount when
the storage is inaccessible (firewalled, or shut down) before returning with
an error. It is safe to assume that this won't add pressure on the mgmt server due to
several reconnection attempts, and the KVM agent would retry reconnection every 2
minutes.
For such KVM hosts, where the failure happens due to storage issues, they will be
briefly put in the Alert state but will mostly be in the Connecting state, during which
the KVM host attempts to mount/reconfigure the NFS storage pool.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Unhides snapshot.backup.rightafter from global configuration
If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage
Both host and path could have been NULL, which causes the StatsCollector
not to start properly.
By checking that the Strings are not empty or null we make sure the StatsCollector
always runs and does not prevent the Management Server from starting.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Conflicts:
server/src/com/cloud/server/StatsCollector.java
We use some JSP files just for translation of strings in the UI. This is
achievable purely in JavaScript. This removes those JSPs and simplifies
translation usage and workflow (purely JS based). The l10n JS (dictionary)
files are generated from existing messages.properties files during the client-ui
code generation phase.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9422: Granular 'vmware.create.full.clone' as Primary Storage setting
### Introduction
For VMware, it is possible to decide whether VMs are created as full clones on the ESX HV by adjusting the `vmware.create.full.clone` global setting. We would like to introduce this property as a primary storage detail, and use its value instead of the global setting's value.
We propose introducing `fullCloneFlag` on `PrimaryDataStoreTO`, sent on `CopyCommand`. This way we can reconfigure `VmwareStorageProcessor` and `VmwareStorageSubsystemCommandHandler` similarly to what was done for `nfsVersion`, but refactored to be more general.
* pr/1602:
CLOUDSTACK-9422: Granular VMware vms creation as full clones on HV
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Export UUID for zone creation event completion.
Action events have support for exporting their "first class entities" over the event bus, if it's turned on. This works mostly for BaseAsyncCmds and any synchronous command that operates on already-existing entities. They are populated by the ApiServer and found by ActionEventUtils.
Where it tends to falter is when dealing with synchronous creation commands. This commit makes createZone export the zone UUID over the event bus once the zone has been created.
* pr/1661:
Export UUID for zone creation event completion.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Adding support for cross-cluster storage migration for managed storage when using XenServer
This PR adds support for cross-cluster storage migration of VMs that make use of managed storage with XenServer.
Managed storage is when you have a 1:1 mapping between a virtual disk and a volume on a SAN (in the case of XenServer, an SR is placed on this SAN volume and a single virtual disk placed in the SR).
Managed storage allows features such as storage QoS and SAN-side snapshots to work (sort of analogous to VMware VVols).
This PR focuses on enabling VMs that are using managed storage to be migrated across XenServer clusters.
I have successfully run the following tests on this branch:
TestVolumes.py
TestSnapshots.py
TestVMSnapshots.py
TestAddRemoveHosts.py
TestVMMigrationWithStorage.py (which is a new test that is being added with this PR)
* pr/1671:
Adding support for cross-cluster storage migration for managed storage when using XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Added checks to prevent network update when the router state is unknown or when
the new offering removes a service that is in use.
Added a new param forced to the updateNetwork API. The network will
undergo a forced update when this param is set to true.
CLOUDSTACK-8751 Clean network config like firewall rules etc, when network services are removed during network update.
Often, patch and security releases do not require schema migrations or
data migrations. However, if an empty upgrade class and associated
scripts are not defined, the upgrade process will break. With this
change, if a release does not have an upgrade, a noop DbUpgrade is added
to the upgrade path. This approach allows the upgrade to proceed and
for the database to properly reflect the installed version. This change
should make the release process simpler as RMs no longer need to
remember to create this boilerplate code when starting a new release.
Beginning with the 4.8.2.0 and 4.9.1.0 releases, the project will
formally adopt a four (4) position release number to properly accommodate
releases that contain only CVE fixes. The DatabaseUpgradeChecker and
Version classes made assumptions that they would always parse and
compare three (3) position version numbers. This change adds the
CloudStackVersion value object that supports both three (3) and four (4)
position version numbers. It encapsulates version comparison logic, as well as
the rules that allow three (3) and four (4) position numbers to interoperate
(a comparison sketch follows the list below).
* Modifies DatabaseUpgradeChecker to derive an upgrade path for
a version that was not explicitly specified. It determines the
first release before it that has database migrations and uses
that release's upgrade list as the basis for the version being calculated. A
noop upgrade is then added to the list which causes no schema changes
or data migrations, but will update the database to the new version.
* Adds unit tests for the upgrade path calculation logic in
DatabaseUpgradeChecker
* Removes dummy upgrade logic for the 4.8.2.0 introduced in previous
versions of this patch
* Introduces the CloudStackVersion value object which parses and
compares three (3) and four (4) position version numbers. This class
is intended to replace com.cloud.maint.Version.
* Adds the junit-dataprovider dependency -- allowing test data to be
concisely generated separately from the execution of a test case.
Used extensively in the CloudStackVersionTest.
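For illustration, a rough standalone sketch of comparing three (3) and four (4) position version numbers by padding to four positions (illustrative only, not the actual CloudStackVersion implementation):
```
public class VersionCompareSketch {
    // Parse "a.b.c" or "a.b.c.d" into four positions; a missing fourth position defaults to 0.
    static int[] parse(String version) {
        String[] parts = version.split("\\.");
        if (parts.length != 3 && parts.length != 4) {
            throw new IllegalArgumentException("Expected 3 or 4 positions: " + version);
        }
        int[] positions = new int[4];
        for (int i = 0; i < parts.length; i++) {
            positions[i] = Integer.parseInt(parts[i]);
        }
        return positions;
    }

    // Compare position by position, so 4.8.2 and 4.8.2.0 are treated as equal.
    static int compare(String a, String b) {
        int[] va = parse(a);
        int[] vb = parse(b);
        for (int i = 0; i < 4; i++) {
            if (va[i] != vb[i]) {
                return Integer.compare(va[i], vb[i]);
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(compare("4.9.1.0", "4.9.0") > 0);  // true
        System.out.println(compare("4.8.2", "4.8.2.0") == 0); // true
    }
}
```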
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
[4.9/LTS] Add upgrade path from 4.9.0 to 4.9.1, change version to 4.9.1.0-SNAPSHOT
This adds a db upgrade path from 4.9.0 to 4.9.1 and fixes a typo in the default user role description (CLOUDSTACK-9449)
/cc @karuturi @jburwell -- this will cause issues when fwd-merged to master, I can do the fwd-merging if you would like to avoid fixing the conflicts yourself
@blueorangutan package
* pr/1646:
Updating pom.xml version numbers for release 4.9.1.0-SNAPSHOT
cloudstack: upgrade path from 4.9.0 to 4.9.1
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Increases allowed max and permgen memory flags to maven-surefire plugins.
This fixes unit test failures in cloud-server.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit fd7273b446)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Increases allowed max and permgen memory flags to maven-surefire plugins.
This fixes unit test failures in cloud-server.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Simplifies change password transactional logic without using pessimistic locks
- Adds a re-enter password field in the UI to validate the ipmi/oobm password
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9180: Optimize concurrent VM deployment operation on same network
Check if VR needs to be allocated for a given network and only acquire lock if required
Refer to the bug for details.
* pr/1251:
CLOUDSTACK-9180: Optimize concurrent VM deployment operation on same network Check if VR needs to be allocated for a given network and only acquire lock if required
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9238: Fix URL length to 2048 for all url fields in VO
I will update the PR to add max field length in the API commands too
* pr/1567:
API: update url field max length
not needed on host table
Fix URL length to 2048 for all url fields in VO
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.7:
CLOUDSTACK-9376: Restrict listTemplates API with filter=all for root admin
CLOUDSTACK-9369: Restrict default login to ldap/native users
Add lsb-release dependency to mgmt server and agent on Debian/Ubuntu.
Emit template UUID and class type over event bus when deleting templates.
Emit template UUID and class type over event bus when deleting templates
New version of #1378 for the 4.7 branch instead of 4.6.
* pr/1564:
Emit template UUID and class type over event bus when deleting templates.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
- Restricts default login auth handler to ldap and native-cloudstack users
- Refactors and create re-usable method to find domain by id/path
- Adds unit test for refactored method in DomainManagerImpl
- Adds smoke test for login handler
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The behavior is now consistent with template creation. This commit
also adds a unit test for this functionality to make sure that it will
always happen.
CLOUDSTACK-9368: Fix for Support configurable NFS version for Secondary Storage mounts
## Description
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9368
This pull request addresses a problem introduced in #1361 in which the NFS version couldn't be changed after host resources were configured on startup (for hosts using `VmwareResource`), and as the host parameters didn't include the `nfs.version` key, it was set to `null`.
## Proposed solution
In this proposed solution, `nfsVersion` would be passed in `NfsTO` through `CopyCommand` to `VmwareResource`, which will check whether the NFS version is already configured. If not, it will use the one sent in the command and set it on its storage processor and storage handler. After those setups, it will proceed with executing the command.
* pr/1518:
CLOUDSTACK-9368: Fix for Support configurable NFS version for Secondary Storage mounts
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Refactor system VM default network creation
Two small commits which move the retrieval of the default network for the console proxy and the SSVM into a separate protected method. It's a small change that makes the code more readable/maintainable and also makes the class more suitable for overriding should one want to do this. It's forward-ported from our 4.2 branch.
No new tests since this should not change any functionality, and thus should be covered by the existing unit tests.
Now on the master branch (#1359 was on the wrong branch).
* pr/1360:
Refactor ssvm default network retrieval.
Refactor console proxy default network retrieval.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Dynamically load drivers before creating our DB connections
Solution to the mailing thread titled "MySQL : No suitable driver found for jdbc:mysql".
It doesn't harm that we explicitly load the MySQL driver, and for those who use a commons-dbcp version < 1.4 this would fix it as well. Since JDBC 4.0, the JDBC driver can auto-register itself, but in some weird cases (like ours) it's not working. Therefore we need to explicitly load the JDBC driver.
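For illustration, a minimal example of loading the driver explicitly before opening a connection (the connection URL and credentials are placeholders):
```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ExplicitDriverLoad {
    public static void main(String[] args) throws Exception {
        // With JDBC 4.0 the driver normally self-registers via ServiceLoader,
        // but loading it explicitly avoids "No suitable driver found" in setups
        // where auto-registration does not kick in.
        Class.forName("com.mysql.jdbc.Driver");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/cloud", "cloud", "password")) {
            System.out.println("Connected: " + !conn.isClosed());
        } catch (SQLException e) {
            System.err.println("Connection failed: " + e.getMessage());
        }
    }
}
```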
* pr/1553:
Dynamic loading of DB driver + support for other DB providers
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9203 Implement security group move on updateVM API call, cherry-picked from an exoscale internal fix
Conflicts:
api/src/org/apache/cloudstack/api/command/user/vm/UpdateVMCmd.java
server/src/com/cloud/vm/UserVmManager.java
server/src/com/cloud/vm/UserVmManagerImpl.java
* pr/1297:
CLOUDSTACK-9203 refactorred DeployVM code to be used by UpdateVM as well
CLOUDSTACK-9203 security group update on running instance
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9380: fix NPE in listDomains API for a mistake
The issue happens if volumeTotal is NULL in the database.
This is caused by commit 0407fb334f for CLOUDSTACK-7847.
* pr/1550:
CLOUDSTACK-9380: fix NPE in listDomains API for a mistake
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9358: StringIndexOutOfBoundsException on events
Fixes JSON deserialization of `cmdInfo` (the current process fails with `StringIndexOutOfBoundsException` when `cmdEventType` is the last parameter in the JSON string).
A `StringIndexOutOfBoundsException` is thrown in some cases during event publication.
Example: a stopVirtualMachine API request is executed, and fails with:
```
2016-04-15 09:24:43,080 ERROR [o.a.c.f.m.MessageDispatcher] (catalina-exec-1:ctx-840cbaa7 ctx-8daf0e9c ctx-f63af073) Unexpected exception when calling com.cloud.api.ApiServer.handleAsyncJobPublishEvent
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor307.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.cloudstack.framework.messagebus.MessageDispatcher.dispatch(MessageDispatcher.java:75)
at org.apache.cloudstack.framework.messagebus.MessageDispatcher.onPublishMessage(MessageDispatcher.java:45)
at org.apache.cloudstack.framework.messagebus.MessageBusBase$SubscriptionNode.notifySubscribers(MessageBusBase.java:441)
at org.apache.cloudstack.framework.messagebus.MessageBusBase.publish(MessageBusBase.java:178)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl.publishOnEventBus(AsyncJobManagerImpl.java:1052)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl.submitAsyncJob(AsyncJobManagerImpl.java:180)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl.submitAsyncJob(AsyncJobManagerImpl.java:168)
at com.cloud.api.ApiServer.queueCommand(ApiServer.java:687)
at com.cloud.api.ApiServer.handleRequest(ApiServer.java:528)
at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:296)
at com.cloud.api.ApiServlet$1.run(ApiServlet.java:127)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:124)
at com.cloud.api.ApiServlet.doGet(ApiServlet.java:86)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1720)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1679)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: -468
at java.lang.String.substring(String.java:1967)
at com.cloud.api.ApiServer.handleAsyncJobPublishEvent(ApiServer.java:270)
at sun.reflect.GeneratedMethodAccessor307.invoke(Unknown Source)
... 41 more
```
The VM is not stopped, and the API returns the following error:
HTTP/1.1 530 : InvocationTargetException when invoking event handler for subject: job.eventpublish
The error comes from JSON parsing in 339355594c/server/src/com/cloud/api/ApiServer.java (L270)
The `substring()` method throws a StringIndexOutOfBoundsException when `cmdEventType` is the last field in the JSON object.
Since the JSON object is the serialization of a `HashMap`, the order of the fields is not guaranteed. In my tests, I observed that there is no error with Java 7, but it fails with Java 8, which has a different HashMap implementation.
For example, with request http://127.0.0.1:8096/client/api?command=stopVirtualMachine&id=734ef752-fe99-4d46-8f31-18130496473c, cmdInfo is:
```
{"ctxUserId":"1","httpmethod":"GET","ctxStartEventId":"105","id":"734ef752-fe99-4d46-8f31-18130496473c","ctxDetails":"{\"interface com.cloud.vm.VirtualMachine\":\"734ef752-fe99-4d46-8f31-18130496473c\"}","ctxAccountId":"1","uuid":"734ef752-fe99-4d46-8f31-18130496473c","cmdEventType":"VM.STOP"}
```
and it fails.
Please note that the error is not present for all API requests, depending on the parameters fields.
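For illustration, one robust way to read `cmdEventType` regardless of field order is to deserialize `cmdInfo` into a map instead of slicing the raw string. This sketch uses Gson (already on the management server classpath) and is illustrative only, not necessarily the exact change applied in ApiServer:
```
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

import java.lang.reflect.Type;
import java.util.Map;

public class CmdInfoParsing {
    public static void main(String[] args) {
        String cmdInfo = "{\"ctxUserId\":\"1\",\"httpmethod\":\"GET\","
                + "\"id\":\"734ef752-fe99-4d46-8f31-18130496473c\","
                + "\"cmdEventType\":\"VM.STOP\"}";

        // Deserialize the whole map instead of locating cmdEventType with
        // substring(), so its position in the JSON string no longer matters.
        Type mapType = new TypeToken<Map<String, String>>() { }.getType();
        Map<String, String> info = new Gson().fromJson(cmdInfo, mapType);

        System.out.println(info.get("cmdEventType")); // VM.STOP
    }
}
```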
* pr/1503:
CLOUDSTACK-9358: StringIndexOutOfBoundsException on events
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Bug-ID: CLOUDSTACK-8870: Skip external device usage collection if no external devices exist
The external network device usage monitor thread runs every 5 minutes by default (based on the global config external.network.stats.interval) and runs a coalesce query to acquire a lock. When no external devices exist, there is no need to run usage collection.
Added a test case to verify that the usage collection task is not run when there are no External LB or External FW devices.
* pr/846:
Bug-ID: CLOUDSTACK-8870: Skip external device usage collection if no external devices exist
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-6928: fix issue disk I/O throttling not applied
Disk I/O throttling (for KVM) is not applied in the merge of 4.2.
Tests passed:
(1) start vm
(2) attach volume
(3) start vm with volume
(4) migrate vm (with volume)
* pr/1410:
CLOUDSTACK-6928: fix issue disk I/O throttling not applied
Signed-off-by: Will Stevens <williamstevens@gmail.com>
cherry-picked from an exoscale internal fix:
Implement security group move on updateVM API call
Prevent security group update while instance is running
Conflicts:
api/src/org/apache/cloudstack/api/command/user/vm/UpdateVMCmd.java
server/src/com/cloud/vm/UserVmManager.java
server/src/com/cloud/vm/UserVmManagerImpl.java
CLOUDSTACK-9365 : updateVirtualMachine with userdata should not error when a VM is attached to multiple networks from which one or more doesn't support userdata
* pr/1523:
Marvin script for cloudstack-9365
CLOUDSTACK-9365 : updateVirtualMachine with userdata should not error when a VM is attached to multiple networks from which one or more doesn't support userdata
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Taking fast and efficient volume snapshots with XenServer (and your storage provider)
A XenServer storage repository (SR) and virtual disk image (VDI) each have UUIDs that are immutable.
This poses a problem for SAN snapshots if you intend to mount the underlying snapshot SR alongside the source SR (duplicate UUIDs).
VMware has a solution for this called re-signaturing (so, in other words, the snapshot UUIDs can be changed).
This PR only deals with the CloudStack side of things, but it works in concert with a new XenServer storage manager created by CloudOps (this storage manager enables re-signaturing of XenServer SR and VDI UUIDs).
I have written Marvin integration tests to go along with this, but cannot yet check those into the CloudStack repo as they rely on SolidFire hardware.
If anyone would like to see these integration tests, please let me know.
JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9281
Here's a video I made that shows this feature in action:
https://www.youtube.com/watch?v=YQ3pBeL-WaA&list=PLqOXKM0Bt13DFnQnwUx8ZtJzoyDV0Uuye&index=13
* pr/1403:
Faster logic to see if a cluster supports resigning
Support for backend snapshots with XenServer
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9366: Capacity of one zone-wide primary storage ignored
The Disable and Remove Host operations disable the primary storage capacity.
Steps to replicate:
Base Condition: There exists a host and a storage pool with the same id
Steps:
1. Find a host and a storage pool having the same id
2. Disable the host
3. CPU(1) and MEMORY(0) capacity in op_host_capacity for the above host is disabled
4. STORAGE(3) capacity in op_host_capacity for the storage pool with the same id as the above host is also disabled
RCA:
The 'host_id' column in the 'op_host_capacity' table is used for storing both the storage pool id (for STORAGE capacity) and the host id (MEMORY and CPU). While disabling a HOST we also disable the capacity associated with the host.
Ideally, while disabling capacity we should only disable MEMORY and CPU capacity, but we are not doing so.
Code Path:
ResourceManagerImpl.doDeleteHost() -> ResourceManagerImpl.resourceStateTransitTo() -> CapacityDaoImpl.updateCapacityState(null, null, null, host.getId(), capacityState.toString())
updateCapacityState is disabling all entries which match the host_id. This will also disable an entry having a storage pool id the same as the host id.
Changes:
Introduced a new capacityType parameter in the updateCapacityState method and the necessary changes to add a capacity_type clause in the SQL (see the sketch after this list).
Also fixed incorrect SQL builder logic (an unused code path, which is why it never surfaced).
Added a Marvin test to check host and storage pool capacity when the host is disabled.
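For illustration, a small standalone sketch of the idea behind the SQL change, building the update statement with an optional capacity_type clause so that only host-scoped rows (CPU, MEMORY) are touched (the method itself is hypothetical, not the actual CapacityDaoImpl code):
```
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;

public class CapacityStateSql {
    // Build the update; when capacity types are supplied, restrict the rows
    // touched so STORAGE rows sharing the same host_id (a pool id) are left alone.
    static String buildUpdate(List<Integer> capacityTypes) {
        StringBuilder sql = new StringBuilder(
                "UPDATE op_host_capacity SET capacity_state = ? WHERE host_id = ?");
        if (capacityTypes != null && !capacityTypes.isEmpty()) {
            StringJoiner in = new StringJoiner(",", " AND capacity_type IN (", ")");
            capacityTypes.forEach(t -> in.add(String.valueOf(t)));
            sql.append(in);
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        // 0 = MEMORY, 1 = CPU (per the capacity_type mapping in the queries above)
        System.out.println(buildUpdate(Arrays.asList(0, 1)));
    }
}
```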
Test Result:
```
Before Fix:
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Enabled | MEMORY | 8589934592 | HOST |
| 3 | Enabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
9 rows in set (0.00 sec)
Disable Host 3 from UI.
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Disabled | MEMORY | 8589934592 | HOST |
| 3 | Disabled | CPU | 32000 | HOST |
| 3 | Disabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
After Fix:
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Enabled | MEMORY | 8589934592 | HOST |
| 3 | Enabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
3 rows in set (0.01 sec)
Disable Host 3 from UI.
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Disabled | MEMORY | 8589934592 | HOST |
| 3 | Disabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
3 rows in set (0.00 sec)
Sudhansus-MAC:cloudstack sudhansu$ nosetests-2.7 --with-marvin --marvin-config=setup/dev/advanced.cfg test/integration/component/maint/test_capacity_host_delete.py
==== Marvin Init Started ====
=== Marvin Parse Config Successful ===
=== Marvin Setting TestData Successful===
==== Log Folder Path: /tmp//MarvinLogs//Apr_22_2016_22_42_27_X4VBWD. All logs will be available here ====
=== Marvin Init Logging Successful===
==== Marvin Init Successful ====
===final results are now copied to: /tmp//MarvinLogs/test_capacity_host_delete_9RHSNB===
Sudhansus-MAC:cloudstack sudhansu$ cat /tmp//MarvinLogs/test_capacity_host_delete_9RHSNB/results.txt
test_01_op_host_capacity_disable_host (integration.component.maint.test_capacity_host_delete.TestHosts) ... === TestName: test_01_op_host_capacity_disable_host | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 0.168s
OK
```
* pr/1516:
CLOUDSTACK-9366: Capacity of one zone-wide primary storage ignored
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Introduced a new capacityType parameter in the updateCapacityState method and the necessary changes to add a capacity_type clause in the SQL.
Also fixed incorrect SQL builder logic (an unused code path, which is why it never surfaced).
Added a Marvin test to check host and storage pool capacity when the host is disabled.
Added conditions to ensure the capacity_type clause is added only when the capacity_type list length is greater than 0.
Added checks in the Marvin test to ensure the capacity exists for a host before disabling it.
Added checks to avoid an index out of range exception.
* 4.7:
Fix Sync of template.properties in Swift
Configure rVPC for router.redundant.vrrp.interval advert_int setting
Have rVPCs use the router.redundant.vrrp.interval setting
Resolve conflict as forceencap is already in master
Split the cidr lists so we won't hit the iptables-restore limits
Check the existence of 'forceencap' parameter before use
Do not load previous firewall rules as we replace everything anyway
Wait for dnsmasq to finish restart
Remove duplicate spaces, and thus duplicate rules.
Restore iptables at once using iptables-restore instead of calling iptables numerous times
Add iptables conversion script.
Fix Sync of template.properties in Swift
When using Swift as secondary storage, we create a template.properties file which stores the metadata about the template. Currently the data present in the file is incorrect, which leads to templates becoming unavailable after they are downloaded. This fix makes sure that template.properties has the correct "path" set so that templates are available.
I've also done a bit of cleanup and made the code a bit cleaner.
* pr/1331:
Fix Sync of template.properties in Swift
Signed-off-by: Will Stevens <williamstevens@gmail.com>
It defaults to 1, which is hardcoded in the template:
./cosmic/cosmic-core/systemvm/patches/debian/config/opt/cloud/templates/keepalived.conf.templ
As non-VPC redundant routers use this setting, I think it makes sense to use it for rVPCs as well.
We also need a change to pickup the cmd_line parameter and use it in the Python code that configures the router.
CLOUDSTACK-8800 : Improved the listVirtualMachines API call to include memory utilization information for a VM
This PR introduces the changes proposed in PR #780 with some work to make the code null safe.
During this PR, I have also removed some unused code.
* pr/1444:
Removed unnecessary check when creating the “userVmResponse” object.
Fixed issues from CLOUDSTACK-8800 that were introduced in PR 780
CLOUDSTACK-8800 : Improved the listVirtualMachines API call to include memory utilization information for a VM for xenserver,kvm and for vmware.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Notify listeners when a host has been added to a cluster, is about to be removed from a cluster, or has been removed from a cluster
This PR addresses the following JIRA ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-8813
The problem is that notifications need to be sent when a host is added to, is about to be removed from, or has been removed from a cluster.
Such notifications can be used for many purposes. For example, they can allow storage plug-ins to update ACLs on their storage systems. They can also allow us to clean up IQNs from ESXi hosts that are no longer needed.
* pr/816:
CLOUDSTACK-8813: Notify listeners when a host has been added to a cluster, is about to be removed from a cluster, or has been removed from a cluster
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9299: Out-of-band Management for CloudStack
Support access to a host's out-of-band management interface (e.g. IPMI, iLO,
DRAC, etc.) to manage host power operations (on/off etc.) and query the current
power state in CloudStack.
Given the wide range of out-of-band management interfaces such as iLO and iDRAC,
the service implementation allows for development of separate drivers as plugins.
This feature comes with an ipmitool-based driver that uses
ipmitool (http://linux.die.net/man/1/ipmitool) to communicate with any
out-of-band management interface that supports IPMI 2.0.
This feature allows the following common use-cases:
- Restarting stalled/failed hosts
- Powering off under-utilised hosts
- Powering on hosts for provisioning or to increase capacity
- Allowing system administrators to see the current power state of the host
For testing this feature, please install `ipmitool` (using yum/apt/brew) and `ipmisim`:
https://pypi.python.org/pypi/ipmisim
The default ipmitool location is assumed to be /usr/bin; if this is different in your env please fix the global setting (see the FS for details on the various global settings).
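For illustration, here is the kind of ipmitool invocation such a driver wraps, shown via a small Java sketch that queries the chassis power state (the host and credentials are placeholders, and the exact options used by the driver may differ):
```
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class IpmiPowerStatus {
    public static void main(String[] args) throws Exception {
        // Equivalent to: ipmitool -I lanplus -H 10.1.1.20 -U admin -P password chassis power status
        ProcessBuilder pb = new ProcessBuilder(
                "/usr/bin/ipmitool", "-I", "lanplus",
                "-H", "10.1.1.20", "-U", "admin", "-P", "password",
                "chassis", "power", "status");
        pb.redirectErrorStream(true);
        Process process = pb.start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // e.g. "Chassis Power is on"
            }
        }
        System.out.println("exit code: " + process.waitFor());
    }
}
```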
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Out-of-band+Management+for+CloudStack
/cc @jburwell @swill @abhinandanprateek @murali-reddy @borisstoyanov
* pr/1502:
maven: ignore utils/testsmallfileinactive for rat checking
CLOUDSTACK-9378: Fix for #1497
HypervisorUtilsTest: increase timeout to 8 seconds
travis: Use patched version of ipmitool for tests
CLOUDSTACK-9299: Out-of-band Management for CloudStack
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.8:
CLOUDSTACK-9287 - Improve test by checking if pvt gw is removed and fix typos
Handle private gateways more reliably
CLOUDSTACK-9287 - Fix RVR public interface
CLOUDSTACK-9287 - Add integration test to cover the private gateway related changes
CLOUDSTACK-9287 - Refactor the interface state configuration
CLOUDSTACK-9287 - Check if the nic profile has already been removed from a certain router
CLOUDSTACK-9287 - Bring up the private gw interface on state change to master
CLOUDSTACK-9287 - Make sure private gw interface is not used for default gw
CLOUDSTACK-9287 - Add integration test to cover the private gw interface/mac address issues
CLOUDSTACK-9287 - Put private gateway interface down on backup router
CLOUDSTACK-9287 - Generate new mac address if router is redundant and nic profile exists
Add private gateway IP to router initialization config
apply static routes on change to master state
* 4.7:
CLOUDSTACK-9287 - Improve test by checking if pvt gw is removed and fix typos
Handle private gateways more reliably
CLOUDSTACK-9287 - Fix RVR public interface
CLOUDSTACK-9287 - Add integration test to cover the private gateway related changes
CLOUDSTACK-9287 - Refactor the interface state configuration
CLOUDSTACK-9287 - Check if the nic profile has already been removed from a certain router
CLOUDSTACK-9287 - Bring up the private gw interface on state change to master
CLOUDSTACK-9287 - Make sure private gw interface is not used for default gw
CLOUDSTACK-9287 - Add integration test to cover the private gw interface/mac address issues
CLOUDSTACK-9287 - Put private gateway interface down on backup router
CLOUDSTACK-9287 - Generate new mac address if router is redundant and nic profile exists
Add private gateway IP to router initialization config
apply static routes on change to master state
Handle private gateways more reliably
When initialising a VPC router we need to know which IP/device corresponds to a private gateway. This is to solve a problem when stop/starting a VPC router (which gets the private gateway config as a guest network and as a result breaks the functionality). You read it right: the private gateway is sent as type=guest after reboot and type=public initially.
Before this change, you could add a private gw to a running router but you couldn't restart it (it would mix up the tiers). Now the private gateway is detected properly and it works just fine.
Booting without private gateway:
```
root@r-167-VM:~# cat /etc/cloudstack/cmdline.json
{
"config": {
"baremetalnotificationapikey": "V2l1u3wKJVan01h8kq63-5Y5Ia3VLEW1v_Z6i-31QIRJXlt5vkqaqf6DVcdK0jP3u79SW6X9pqJSLSwQP2c2Rw",
"baremetalnotificationsecuritykey": "OXI16srCrxFBi-xOtEwcYqwLlMfSFTlTg66YHtXBBqR7HNN1us3HP5zWOKxfVmz4a3C1kUNLPrUH13gNmZlu4w",
"disable_rp_filter": "true",
"dns1": "8.8.8.8",
"domain": "cs2cloud",
"eth0ip": "169.254.0.42",
"eth0mask": "255.255.0.0",
"host": "192.168.22.61",
"name": "r-167-VM",
"port": "8080",
"privategateway": "None",
"redundant_router": "false",
"template": "domP",
"type": "vpcrouter",
"vpccidr": "10.0.0.0/24"
},
"id": "cmdline"
```
Booting with private gateway:
```
root@r-167-VM:~# cat /etc/cloudstack/cmdline.json
{
"config": {
"baremetalnotificationapikey": "V2l1u3wKJVan01h8kq63-5Y5Ia3VLEW1v_Z6i-31QIRJXlt5vkqaqf6DVcdK0jP3u79SW6X9pqJSLSwQP2c2Rw",
"baremetalnotificationsecuritykey": "OXI16srCrxFBi-xOtEwcYqwLlMfSFTlTg66YHtXBBqR7HNN1us3HP5zWOKxfVmz4a3C1kUNLPrUH13gNmZlu4w",
"disable_rp_filter": "true",
"dns1": "8.8.8.8",
"domain": "cs2cloud",
"eth0ip": "169.254.2.227",
"eth0mask": "255.255.0.0",
"host": "192.168.22.61",
"name": "r-167-VM",
"port": "8080",
"privategateway": "10.201.10.1",
"redundant_router": "false",
"template": "domP",
"type": "vpcrouter",
"vpccidr": "10.0.0.0/24"
},
"id": "cmdline"
```
And:
```
cat cmdline
vpccidr=10.0.0.0/24 domain=cs2cloud dns1=8.8.8.8 privategateway=10.201.10.1 template=domP name=r-167-VM eth0ip=169.254.2.227 eth0mask=255.255.0.0 type=vpcrouter disable_rp_filter=true baremetalnotificationsecuritykey=OXI16srCrxFBi-xOtEwcYqwLlMfSFTlTg66YHtXBBqR7HNN1us3HP5zWOKxfVmz4a3C1kUNLPrUH13gNmZlu4w baremetalnotificationapikey=V2l1u3wKJVan01h8kq63-5Y5Ia3VLEW1v_Z6i-31QIRJXlt5vkqaqf6DVcdK0jP3u79SW6X9pqJSLSwQP2c2Rw host=192.168.22.61 port=8080
```
Logs:
```
2016-02-24 20:08:45,723 DEBUG [c.c.n.r.VpcVirtualNetworkApplianceManagerImpl] (Work-Job-Executor-4:ctx-458d4c52 job-1402/job-1403 ctx-d5355fca) (logid:5772906c) Set privategateway field in cmd_line.json to 10.201.10.1
```
* pr/1474:
Handle private gateways more reliably
Add private gateway IP to router initialization config
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9287 - Fix unique mac address per rVPC router
This is work by @wilderrodrigues, see PR #1413. It contains important fixes and I think it needs to be included, so I am sending the PR again.
* pr/1483:
CLOUDSTACK-9287 - Improve test by checking if pvt gw is removed and fix typos
CLOUDSTACK-9287 - Fix RVR public interface
CLOUDSTACK-9287 - Add integration test to cover the private gateway related changes
CLOUDSTACK-9287 - Refactor the interface state configuration
CLOUDSTACK-9287 - Check if the nic profile has already been removed from a certain router
CLOUDSTACK-9287 - Bring up the private gw interface on state change to master
CLOUDSTACK-9287 - Make sure private gw interface is not used for default gw
CLOUDSTACK-9287 - Add integration test to cover the private gw interface/mac address issues
CLOUDSTACK-9287 - Put private gateway interface down on backup router
CLOUDSTACK-9287 - Generate new mac address if router is redundant and nic profile exists
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Support access to a host’s out-of-band management interface (e.g. IPMI, iLO,
DRAC, etc.) to manage host power operations (on/off etc.) and query the current
power state in CloudStack.
Given the wide range of out-of-band management interfaces such as iLO and iDRAC,
the service implementation allows for development of separate drivers as plugins.
This feature comes with an ipmitool-based driver that uses
ipmitool (http://linux.die.net/man/1/ipmitool) to communicate with any
out-of-band management interface that supports IPMI 2.0.
This feature allows the following common use-cases:
- Restarting stalled/failed hosts
- Powering off under-utilised hosts
- Powering on hosts for provisioning or to increase capacity
- Allowing system administrators to see the current power state of the host
For testing this feature `ipmisim` can be used:
https://pypi.python.org/pypi/ipmisim
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Out-of-band+Management+for+CloudStack
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This would fix a regression from recent mvn version changes. Without this
patch, users get redirected to error.jsp as the jstl-1.2 jar is not installed.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Removes commands.properties file
- Fixes apidocs and marvin to be independent of commands.properties usage
- Removes bundling of commands.properties in deb/rpm packaging
- Removes file references across codebase
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows root administrators to define new roles and associate API
permissions to them.
A limited form of role-based access control for the CloudStack management server
API is provided through a properties file, commands.properties, embedded in the
WAR distribution. Therefore, customizing API permissions requires unpacking the
distribution and modifying this file consistently on all servers. The old system
also does not permit the specification of additional roles.
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Dynamic+Role+Based+API+Access+Checker+for+CloudStack
DB-Backed Dynamic Role Based API Access Checker for CloudStack brings the following
changes, features and use-cases:
- Moves the API access definitions from commands.properties to the mgmt server DB
- Allows defining custom roles (such as a read-only ROOT admin) beyond the
current set of four (4) roles
- All roles will resolve to one of the four known roles types (Admin, Resource
Admin, Domain Admin and User) which maintains this association by requiring
all new defined roles to specify a role type.
- Allows changes to roles and API permissions per role at runtime including additions or
removal of roles and/or modifications of permissions, without the need
of restarting management server(s)
Upgrade/installation notes:
- The feature will be enabled by default for new installations, existing
deployments will continue to use the older static role based api access checker
with an option to enable this feature
- During fresh installation or upgrade, the upgrade paths will add four default
roles based on the four default role types
- For ease of migration, at the time of upgrade commands.properties will be used
to add the existing set of permissions to the default roles. cloud.account
will have a new role_id column which will be populated based on the default roles
as well
Dynamic-roles migration tool: scripts/util/migrate-dynamicroles.py
- Allows admins to migrate to the dynamic role based checker at a future date
- Performs a harder one-way migrate and update
- Migrates rules from existing commands.properties file into db and deprecates it
- Enables an internal hidden switch to enable dynamic role based checker feature
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Validate API arguments based on annotations. Introduces:
- NotNullOrEmpty: for doing null and empty string checks
- PositiveNumber: number > 0 (natural number)
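For illustration, a small self-contained sketch of annotation-driven validation using the two annotation names above (the reflective validator itself is hypothetical, not the actual implementation):
```
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class AnnotationValidationSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface NotNullOrEmpty { }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface PositiveNumber { }

    // Hypothetical command object carrying annotated arguments.
    static class SampleCmd {
        @NotNullOrEmpty String name = "web-vm";
        @PositiveNumber long size = 20;
    }

    // Walk the annotated fields and enforce the two rules described above.
    static void validate(Object cmd) throws IllegalAccessException {
        for (Field field : cmd.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            Object value = field.get(cmd);
            if (field.isAnnotationPresent(NotNullOrEmpty.class)
                    && (value == null || value.toString().isEmpty())) {
                throw new IllegalArgumentException(field.getName() + " must not be null or empty");
            }
            if (field.isAnnotationPresent(PositiveNumber.class)
                    && ((Number) value).longValue() <= 0) {
                throw new IllegalArgumentException(field.getName() + " must be a positive number");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        validate(new SampleCmd()); // passes; an empty name or zero size would throw
        System.out.println("validation passed");
    }
}
```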
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8901: PrepareTemplate job thread hard-coded to max 8 threads
The thread pool was hardcoded to use 8 threads in
com.cloud.template.TemplateManagerImpl.configure(String, Map<String, Object>):
_preloadExecutor = Executors.newFixedThreadPool(8, new NamedThreadFactory("Template-Preloader"));
Added the change to pick the thread pool size from configuration.
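For illustration, a minimal sketch of replacing the hard-coded pool size with a configured value (the property name used here is hypothetical):
```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TemplatePreloaderPool {
    static ExecutorService createPreloadExecutor() {
        // Read the pool size from configuration, keeping 8 as the default.
        int poolSize = Integer.parseInt(
                System.getProperty("template.preloader.pool.size", "8"));
        return Executors.newFixedThreadPool(poolSize, runnable -> {
            Thread thread = new Thread(runnable, "Template-Preloader");
            thread.setDaemon(true);
            return thread;
        });
    }

    public static void main(String[] args) {
        ExecutorService executor = createPreloadExecutor();
        executor.submit(() -> System.out.println("preloading template..."));
        executor.shutdown();
    }
}
```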
* pr/880:
CLOUDSTACK-8901: PrepareTemplate job thread hard-coded to max 8 threads
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9351: Add ids parameter to resource listing API calls
## General behaviour
A new parameter is added to each method; its type is a list of IDs of the entity, and it is mutually exclusive with id. (Similar to the <code>id</code> and <code>ids</code> parameters in the <code>listVirtualMachines</code> method.)
### API Methods affected
* <code>listTemplates</code>: new parameter **<code>ids</code>**, mutually exclusive with <code>id</code>
* <code>listVolumes</code>: new parameter **<code>ids</code>**, mutually exclusive with <code>id</code>
* <code>listSnapshots</code>: new parameter **<code>ids</code>**, mutually exclusive with <code>id</code>
* <code>listVMSnapshots</code>: new parameter **<code>vmsnapshotids</code>**, mutually exclusive with <code>vmsnapshotid</code>
* pr/1497:
CLOUDSTACK-9351: Add marvin test and add it to travis file
CLOUDSTACK-9351: Add ids parameter to resource listing API calls
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9350: KVM-HA- Fix CheckOnHost for Local storage
- KVM-HA- Fix CheckOnHost for Local storage
- Also skip HA on VMs that are using local storage
* pr/1496:
CLOUDSTACK-9350: KVM-HA- Fix CheckOnHost for Local storage - Also skip HA on VMs that are using local storage
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CID-1338387: Deletion of method endPointSelector.selectHypervisorHost
Following the discussions and analysis presented on PR #1056 created by @DaanHoogland,
this PR is intended to push the changes that were discussed there regarding the deletion of the endPointSelector.selectHypervisorHost method.
* pr/1124:
Deletion of method endPointSelector.selectHypervisorHost
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Fixes JSON deserialization of cmdInfo (the current process fails with
StringIndexOutOfBoundsException when cmdEventType is the last parameter
in the JSON string).
CLOUDSTACK-8847: ListServiceOfferings is returning incompatible tagged offerings when called with VM id
When calling listServiceOfferings with a VM id as parameter, it returns incompatible tagged offerings. It should only list compatible tagged offerings. Compatible means the new service offering should contain all the tags of the existing service offering (the existing offering's tags are a SUBSET of the new offering's). If that is the case, the offering should be listed in the result and the VM can be upgraded to it.
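For illustration, a small standalone sketch of the compatibility rule described above, using a simple tag-subset check (illustrative only, not the actual query logic):
```
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OfferingTagCompatibility {
    // A candidate offering is compatible when it contains every tag of the
    // current offering (current tags are a subset of the candidate's tags).
    static boolean isCompatible(Set<String> currentTags, Set<String> candidateTags) {
        return candidateTags.containsAll(currentTags);
    }

    public static void main(String[] args) {
        Set<String> current = new HashSet<>(Arrays.asList("ssd"));
        Set<String> candidateA = new HashSet<>(Arrays.asList("ssd", "ha"));
        Set<String> candidateB = new HashSet<>(Arrays.asList("ha"));
        System.out.println(isCompatible(current, candidateA)); // true  -> listed
        System.out.println(isCompatible(current, candidateB)); // false -> filtered out
    }
}
```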
* pr/1321:
CLOUDSTACK-8847: ListServiceOfferings is returning incompatible tagged offerings when called with VM id
Signed-off-by: Will Stevens <williamstevens@gmail.com>
[4.7] vmware: improve support for disks
- Improve disk chain usage while attaching, migrating disks
- Gets root disk controller based on diskDeviceBusName from the volume's chain info
* pr/1365:
vmware: improve support for disks
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9323: Fix cancel host maintenance
Fix cancel host maintenance so that, if maintenance is cancelled, the host comes back to the normal state gracefully.
Added Marvin tests for host maintenance.
* pr/1454:
CLOUDSTACK-9323: Fix Cancel maintenance so that, if maintenance is cancelled, the host comes back to the normal state gracefully. Added Marvin tests for host maintenance.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.8:
Removed sleeps and used validateList as requested.
Added required_hardware="false" attr above test_02_root_volume_attach_detach
Modified test_volumes.py to include a hypervisor test for root attach/detach testing
Let hypervisor type KVM and Simulator detach root volumes. Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
* 4.7:
Removed sleeps and used validateList as requested.
Added required_hardware="false" attr above test_02_root_volume_attach_detach
Modified test_volumes.py to include a hypervisor test for root attach/detach testing
Let hypervisor type KVM and Simulator detach root volumes. Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
CLOUDSTACK-9349: Enable root disk detach for KVM with new Marvin tests
This PR addresses KVM detach/attach of ROOT disks from VMs (CLOUDSTACK-9349). In short, this allows the KVM hypervisor (and the Simulator, added as a valid hypervisor for ease of development and Marvin testing) to detach a root volume and then reattach a root volume by passing the deviceid=0 flag to the attachVolume API. I have also written a Marvin integration test that verifies this feature works for both KVM and the Simulator.
Below are the Marvin results files of the full Marvin test_volumes.py run. All tests pass, including the new root detach/attach tests, on our KVM lab running with the patches in this PR.
[test_volumes_KIR4G3.zip](https://github.com/apache/cloudstack/files/223799/test_volumes_KIR4G3.zip)
* pr/1500:
Removed sleeps and used validateList as requested.
Added required_hardware="false" attr above test_02_root_volume_attach_detach
Modified test_volumes.py to include a hypervisor test for root attach/detach testing
Let hypervisor type KVM and Simulator detach root volumes. Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Log asynchronous responses in the API log
Currently API responses for synchronous calls are logged, but asynchronous call responses are not. This pull request makes a minor modification that logs the response, including the jobid, of all asynchronous requests.
As an example, here is what a stopVirtualMachine request looked like in the logs:
2016-04-27 10:43:11,084 INFO [a.c.c.a.ApiServer] (catalina-exec-3:ctx-37d9f693 ctx-d2368de3) (logid:3a0fad97) (userId=2 accountId=2 sessionId=AF8B1F726ACB5C3A637B8B300AA218A7) 10.103.0.207 -- GET command=stopVirtualMachine&id=f63b6fcc-e0b0-480f-8f7a-cba329634ba1&forced=false&response=json&_=1461771791036 200
After this modification, here is what the logs look like:
2016-04-27 13:37:11,338 INFO [a.c.c.a.ApiServer] (catalina-exec-6:ctx-915b5c84 ctx-a03152fa) (logid:66249df0) (userId=2 accountId=2 sessionId=9EF127EED5CA6E74797DFE487D980FAF) 10.103.0.207 -- GET command=stopVirtualMachine&id=f63b6fcc-e0b0-480f-8f7a-cba329634ba1&forced=false&response=json&_=1461782231194 200 {"stopvirtualmachineresponse":{"jobid":"5b9f4a9b-eabe-4fa4-849d-3d004bb65634"}}
* pr/1522:
Log responses from asynchronous api commands
Signed-off-by: Will Stevens <williamstevens@gmail.com>
4.9 mvn version safe upgrade only
Upgrades Maven dependency versions that can be safely upgraded without breaking console-proxy/crypto usage.
Bisected changes from: https://github.com/apache/cloudstack/pull/1397
cc @swill @DaanHoogland
* pr/1510:
maven: fix dependency version support by JDK7
further maven dependency updates from Daan
framework/quota: fix checkstyle issue
maven: Upgrade dependency versions
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Fixed: Error given when creating a VPN user in one network if the VR for another network is stopped.
Problem: Adding a VPN user to an IP on an isolated network fails with an exception if the end-user has two isolated networks and one of them is not in use (i.e. its VR is stopped).
Fix: Currently, all VRs that are not in the Running state cause an error to be thrown. Added a check for routers in the Stopped and Stopping states; configurations will be applied to them when they are restarted. A minimal sketch of this state handling follows below.
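A minimal sketch of the state handling described above, assuming a simplified router-state enum (not the actual CloudStack types):
```java
// Sketch only: Stopped/Stopping routers are deferred instead of failing the call.
enum RouterState { Running, Stopped, Stopping, Unknown }

class VpnUserConfigurator {
    boolean shouldApplyNow(RouterState state) {
        if (state == RouterState.Running) {
            return true;   // apply the VPN user configuration immediately
        }
        if (state == RouterState.Stopped || state == RouterState.Stopping) {
            return false;  // defer; config is applied when the router restarts
        }
        throw new IllegalStateException("Router in unexpected state: " + state);
    }
}
```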
* pr/826:
Fixed Coverity Issue: Unintentional Integer Overflow
Fixed: Error given when creating VPN user in one network if VR for another network is stopped
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9322: Support for Internal LB functionality with Nuage VSP SDN Plugin, including Marvin test coverage
Task: https://issues.apache.org/jira/browse/CLOUDSTACK-9322
PR contents:
1) UI changes to support LB provider InternalLbVm during VPC offering creation.
2) Bug fix for CLOUDSTACK-9320.
3) Nuage VSP SDN Plugin related enhancements for VPC network functionality.
4) Marvin test coverage for Internal LB support on master with Nuage VSP SDN Plugin.
5) Enhancements to our existing Marvin test code (nuagevsp plugins directory).
6) PEP8 & PyFlakes compliance with our Marvin test code.
Test run:
CloudStack$ nosetests --with-marvin --marvin-config=nuage_ant.cfg test/integration/plugins/nuagevsp/ -a tags=nuagevsp
Test results:
Test user data and password reset functionality with Nuage VSP SDN plugin ... === TestName: test_nuage_UserDataPasswordReset | Status : SUCCESS ===
ok
Test Nuage VSP VPC Offering with different combinations of LB service providers ... === TestName: test_01_nuage_internallb_vpc_Offering | Status : SUCCESS ===
ok
Test Nuage VSP VPC Network Offering with and without Internal LB service ... === TestName: test_02_nuage_internallb_vpc_network_offering | Status : SUCCESS ===
ok
Test Nuage VSP VPC Networks with and without Internal LB service ... === TestName: test_03_nuage_internallb_vpc_networks | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with different combinations of Internal LB rules ... === TestName: test_04_nuage_internallb_rules | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality by performing (wget) traffic tests within a VPC ... === TestName: test_05_nuage_internallb_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with different LB algorithms by performing (wget) traffic tests ... === TestName: test_06_nuage_internallb_algorithms_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with restarts of VPC network components by performing (wget) ... === TestName: test_07_nuage_internallb_vpc_network_restarts_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with InternalLbVm appliance operations by performing (wget) ... === TestName: test_08_nuage_internallb_appliance_operations_traffic | Status : SUCCESS ===
ok
Test Basic VPC Network Functionality with Nuage VSP SDN plugin ... === TestName: test_nuage_vpc_network | Status : SUCCESS ===
ok
Test Nuage VSP SDN plugin with basic Isolated Network functionality ... === TestName: test_nuage_vsp | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 11 tests in 12094.705s
OK
Test run logs:
[results.txt](https://github.com/apache/cloudstack/files/187587/results.txt)
[runinfo.txt](https://github.com/apache/cloudstack/files/187588/runinfo.txt)
Test config file:
[nuage_ant.txt](https://github.com/apache/cloudstack/files/222711/nuage_ant.txt)
Note: Attached the Marvin config file as .txt instead of .cfg.
PEP8 & PyFlakes Compliance:
CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/nuageTestCase.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_password_reset.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vpc_internal_lb.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vpc_network.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vsp.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
#CLOUDSTACK-9322
* pr/1452:
CLOUDSTACK-9322 : Marvin tests for Internal Lb with Nuage VSP
CLOUDSTACK-9320 : InternalLBVM is not getting destroyed when the last Internal Load Balancer rule is removed for the corresponding source IP address
CLOUDSTACK-9322 : Changes to support InternalLbVm with Nuage VSP plugin
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9130: Make RebootCommand similar to start/stop/migrate agent commands w.r.t. "execute in sequence" flag
RebootCommand now behaves in the same way as start/stop/migrate agent commands with respect to sequential/parallel execution.
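As a rough sketch (not the actual Command class hierarchy), the behaviour hinges on the command's "execute in sequence" flag:
```java
// Sketch only: a command exposes whether it must run sequentially on the agent.
abstract class AgentCommand {
    abstract boolean executeInSequence();
}

class RebootCommandSketch extends AgentCommand {
    private final boolean executeInSequence;

    RebootCommandSketch(boolean executeInSequence) {
        this.executeInSequence = executeInSequence;
    }

    @Override
    boolean executeInSequence() {
        // When false, reboots may run in parallel, like start/stop/migrate.
        return executeInSequence;
    }
}
```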
* pr/1200:
CLOUDSTACK-9130: Make RebootCommand similar to start/stop/migrate agent commands w.r.t. "execute in sequence" flag RebootCommand now behaves in the same way as start/stop/migrate agent commands w.r.t. to sequential/parallel execution.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
- Improve disk chain usage while attaching, migrating disks
- Gets root disk controller based diskDeviceBusName from volume's chain info
- Refactor and move VirtualMachineDiskInfo to cloud-utils
- Allows mixing of scsi controller types
- Fixes a NPE case with map passed as null, for example in case of detach volume
command
- Use an osdefault translator that allows use of recently added OS types whose enums are not available in the SDK
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Removed unused code from com.cloud.api.ApiServer
**Removed \_ from variable names**: private variables with \_ at the beginning are common in C++ but not in Java.
**Removed unused code from ApiServer:**
- com.cloud.api.ApiServer.getPluggableServices(): unused method;
- com.cloud.api.ApiServer.getApiAccessCheckers(): unused method;
**Methods and variables access level reviewed:**
- com.cloud.api.ApiServer.handleAsyncJobPublishEvent(String, String ,Object): this method was private but the annotation @MessageHandler requests public methods, as can be seen in org.apache.cloudstack.framework.messagebus.MessageDispatcher.buildHandlerMethodCache(Class\<?\>), which searches methods with the @MessageHandler annotation and changes
it to be accessible (setAccessible(true)). Thus, there is no reason for handleAsyncJobPublishEvent to be a private method and mislead other developers about the use of the method;
- Global variables and methods called just by this class (ApiServer) were changed to private.
**Changed variables and methods from static to non-static (if possible):** as some variables/methods are used just by one object of this class, instantiated by Spring, they were changed to non-static.
With that, calls from com.cloud.api.ApiServlet.ApiServlet() that used static methods from ApiServer, were changed from ApiServer.\<staticMethodName\> to \_apiServer.\<methodName\> that refers to the org.apache.cloudstack.api.ApiServerService interface. Thus, methods com.cloud.api.ApiServer.getJSONContentType() and com.cloud.api.ApiServer.isSecureSessionCookieEnabled() had to be added in the interface (org.apache.cloudstack.api.ApiServerService, interface implemented by class ApiServer).
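An illustrative sketch of the dispatcher behaviour referenced above: handler methods are discovered by annotation and made accessible via reflection, so even a private handler would still be invoked. Names and signatures are simplified and do not mirror the real MessageDispatcher:
```java
// Sketch only: reflective dispatch to annotated handler methods.
import java.lang.annotation.*;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface MessageHandler {}

class DispatcherSketch {
    static void dispatch(Object subscriber, String subject, Object args) throws Exception {
        for (Method m : subscriber.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(MessageHandler.class)) {
                m.setAccessible(true);            // private handlers would still be called
                m.invoke(subscriber, subject, args);
            }
        }
    }
}

class ExampleSubscriber {
    @MessageHandler
    private void onJobEvent(String subject, Object args) { // private, yet still invoked
        System.out.println("handled " + subject);
    }
}
```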
* pr/1263:
The goal of this PR is to review com.cloud.api.ApiServer class, with the following actions:
Signed-off-by: Will Stevens <williamstevens@gmail.com>
The goal of this PR is to review the com.cloud.api.ApiServer class, with the following actions:
Removed "_" at the beginning of global variable names:
Variables were changed from "_<variablename>" to "<variablename>", as
this convention (private variables with "_") is common in C++ but not in
Java.
Removed unused code from ApiServer:
- com.cloud.api.ApiServer.getPluggableServices():
Unused method.
- com.cloud.api.ApiServer.getApiAccessCheckers():
Unused method.
Methods and variables access level reviewed:
- com.cloud.api.ApiServer.handleAsyncJobPublishEvent(String, String,
Object):
This method was private but the annotation @MessageHandler requests
public methods, as can be seen in
org.apache.cloudstack.framework.messagebus.MessageDispatcher.buildHandlerMethodCache(Class<?>),
which searches for methods with the @MessageHandler annotation and makes
them accessible ("setAccessible(true)"). Thus, there is no reason for
handleAsyncJobPublishEvent to be a private method.
- Global variables and methods called just by this class (ApiServer)
were changed to private.
Changed variables and methods from static to non-static (if possible):
As some variables/methods are used just by one object of this class
(instantiated by Spring), they were changed to non-static.
With that, calls from com.cloud.api.ApiServlet.ApiServlet() that used
static methods from ApiServer were changed from
ApiServer.<staticMethodName> to _apiServer.<methodName> that refers to
the org.apache.cloudstack.api.ApiServerService interface. Thus, methods
com.cloud.api.ApiServer.getJSONContentType() and
com.cloud.api.ApiServer.isSecureSessionCookieEnabled() had to be
included in the interface (org.apache.cloudstack.api.ApiServerService,
interface implemented by class ApiServer).
However, com.cloud.api.ApiServer.isEncodeApiResponse() was kept static,
as its call hierarchy would have to be changed (more than planned for
this PR).
SecurityGroupRulesCmd code cleanup
Wrote a test and cleaned up some duplicate code, with the objective of evaluating the Jenkins pull request process at builds.a.o.
Worthwhile to keep, IMHO.
* pr/1287:
SecurityGroupRulesCmd code cleanup review comments handled
deal with PMD warnings
code cleanup
security rules test
remove autogenerated pydev files
Signed-off-by: Koushik Das <koushik@apache.org>
- In case of redundant VPCs, the ACL items are revoked in the first iteration. Since the second iteration
is needed in order to remove the private network, we have to check if the nic profile is gone before trying
to revoke the ACL items again, which would throw an NPE (see the sketch after this list).
- Some variable extraction in order to ease debugging.
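A minimal sketch of that guard, with invented names (not the actual VpcManagerImpl code):
```java
// Sketch only: skip ACL revocation when the nic profile was already removed
// by a previous iteration over the redundant routers.
class PrivateGatewayCleanup {
    static boolean shouldRevokeAclItems(Object nicProfile, boolean aclAlreadyRevoked) {
        if (nicProfile == null) {
            return false;              // nic already gone; revoking again would NPE
        }
        return !aclAlreadyRevoked;
    }
}
```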
CLOUDSTACK-9333: Exclude clusters from OVF operations
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9333
## Introduction
In some environments there is a need to exclude certain VMware clusters from performing OVF operations. These operations are part of:
* create template
* create volume snapshot
* copy template, volume, images from primary storage to secondary storage
* migrate volume
* participate when a template gets cached over to primary storage.
In ESX/ESXi, OVF operations are low priority and bound to a single CPU and most likely get throttled to certain IOPS and network limits.
If the hypervisor chosen for OVF operations is weak or overloaded, this results in significantly longer execution of such an OVF command and therefore degraded performance of the underlying CloudStack API call.
### Proposed solution
It is proposed to add a way to exclude hosts from selected clusters for OVF operations.
To exclude a cluster, it is necessary to insert a record in <code>cluster_details</code> specifying the property **vmware.exclude_from_ovf** as follows (supposing we want to exclude cluster X); a small host-selection sketch follows the table below:
| cluster_id | name| value |
|:-------------:|:-------------:|:-------------:|
|X|vmware.exclude_from_ovf|true|
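A hedged sketch of how host selection could honour that flag; the types and lookups below are illustrative only, not the actual EndPointSelector code:
```java
// Sketch only: filter out hosts whose cluster sets vmware.exclude_from_ovf=true.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class OvfHostFilter {
    static List<Long> eligibleHosts(List<Long> hostIds,
                                    Map<Long, Long> hostToCluster,
                                    Map<Long, Map<String, String>> clusterDetails) {
        return hostIds.stream()
            .filter(hostId -> {
                Long clusterId = hostToCluster.get(hostId);
                Map<String, String> details = clusterDetails.getOrDefault(clusterId, Map.of());
                return !"true".equalsIgnoreCase(details.get("vmware.exclude_from_ovf"));
            })
            .collect(Collectors.toList());
    }
}
```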
* pr/1457:
CLOUDSTACK-9333: Exclude clusters for OVF operations
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Add ability to download templates in Swift
This PR adds the ability to download templates when using Swift as secondary storage. It uses the "temp_url" feature of Swift so that templates can be downloaded without authentication.
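For reference, a Swift temp URL is classically an HMAC-SHA1 signature over the request method, expiry time and object path, appended as query parameters. A self-contained sketch (the endpoint, path and key below are placeholders, not CloudStack code):
```java
// Sketch only: build a Swift TempURL-style signed URL for a GET download.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

class SwiftTempUrl {
    static String build(String endpoint, String path, String key, long expiresEpochSeconds)
            throws Exception {
        String payload = "GET\n" + expiresEpochSeconds + "\n" + path;
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] raw = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        StringBuilder sig = new StringBuilder();
        for (byte b : raw) {
            sig.append(String.format("%02x", b));
        }
        return endpoint + path + "?temp_url_sig=" + sig + "&temp_url_expires=" + expiresEpochSeconds;
    }
}
```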
* pr/1332:
Add ability to download templates in Swift
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9298: Improve performance of resource retrieval for resources that have tags associated, targeting volumes, VMs and templates
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9298
## Description of the problem
When retrieving a large number of resources that have tags associated with them, the retrieval methods took too long. Our goal is to improve the performance of these methods by avoiding a database query for each tag and managing that information in memory.
API methods to improve: <code>listTemplates</code>, <code>listVolumes</code>, <code>listVirtualMachines</code>
To achieve this, it is necessary to include new columns in <code>template_view</code>, <code>volume_view</code> and <code>user_vm_view</code>:
* tag_account_name
* tag_domain_name
* tag_domain_uuid
* pr/1425:
CLOUDSTACK-9298: Remove user definer from view creations
CLOUDSTACK-9298: Add @MappedSuperClass support for persistence inheritance
CLOUDSTACK-9298: Improve ListTemplatesCmd, ListVolumesCmd and ListVMsCmd performance
Signed-off-by: Will Stevens <williamstevens@gmail.com>
It is now broken into separate methods for more readability and
flexibility. Each zone type (basic, advanced) has its own method for
getting the default network when creating the VM.
CLOUDSTACK-8800 : Improved the listVirtualMachines API call to include memory utilization information for a VM, for XenServer, KVM and VMware.
* pr/780:
CLOUDSTACK-8800 : Improved the listVirtualMachines API call to include memory utilization information for a VM for xenserver,kvm and for vmware.
Signed-off-by: Kishan Kavala <kishan@apache.org>
Removed unused variables from "NetworkStateListener" class
We removed the following variables from "com.cloud.network.NetworkStateListener":
. UsageEventDao _usageEventDao
. NetworkDao _networkDao
We changed the EventBus s_eventBus variable to private, changed the constructor not to use those variables, and applied this change in the classes com.cloud.network.IpAddressManagerImpl and org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.
* pr/1261:
Removed unused variables from class NetworkStateListener
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Removal of class AgentBasedStandaloneConsoleProxyManager
The AgentBasedStandaloneConsoleProxyManager class is neither manually
instantiated anywhere in the code nor created via Spring. Further checking of the
interface which the class implements shows that the use of classes
implementing the EventBus interface relies on instances generated by the
Spring framework, which rules out the possibility of the class being
created by EJB. Since the class is unused, it was removed.
* pr/855:
Add Javadoc to AgentBasedStandaloneConsoleProxyManager
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8858: listVolumes API fails for a particular domain with NPE.
Summary: The listVolumes API fails when a volume's associated VM instance has a NULL or invalid state. Fix the code to guard against this situation, since it should not block volume listing. A minimal sketch of the guard follows below.
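A minimal sketch of the guard, with invented helper names (illustrative only; the real fix guards the listVolumes code path):
```java
// Sketch only: a volume whose attached VM row is missing or has no resolvable
// state should not break the whole listing; its VM fields are simply left empty.
class VolumeVmInfo {
    String vmName;
    String vmState;
}

class VolumeListingGuard {
    static VolumeVmInfo safeVmInfo(String instanceName, String instanceState) {
        VolumeVmInfo info = new VolumeVmInfo();
        if (instanceName == null || instanceState == null) {
            return info;               // leave VM fields unset instead of throwing an NPE
        }
        info.vmName = instanceName;
        info.vmState = instanceState;
        return info;
    }
}
```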
* pr/830:
CLOUDSTACK-8858: listVolumes API fails for a particular domain with NPE.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.7:
CLOUDSTACK-9254: Make longer names display pretty
CLOUDSTACK-9245 - Deletes ACL items when destroying the VPC or deleting the ACL itself
CLOUDSTACK-9245 - Formatting NetworkACLServiceImpl class
CLOUDSTACK-9245 - Formatting VpcManagerImpl class
CLOUDSTACK-9245 - Formatting NetworkACLManagerImpl class
More VR performance!
* 4.7:
CLOUDSTACK-9237: Create LB Healthcheck issues - button alignment and error message goes outside the window
CLOUDSTACK-9235: Autoscale button is missing in VPC
CLOUDSTACK-9229: Autoscale policy creation failing in VPC due to zoneid missing in createAutoScaleVmProfile
CLOUDSTACK-8860: improve error messages in VM deployment code path
Improved the error messages in the VM deployment code path: added more data to the error messages and also fixed some errors that used internal ids so that they use uuids instead.
* pr/864:
CLOUDSTACK-8860: improve error messages in VM deployment code path.
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9132: API createVolume takes empty string for name parameter
Steps to Reproduce:
================
Create a volume using createVolume API where parameter name is empty.
It creates a volume with empty name.
But the name parameter is mandatory. (Issue)
Expected Behaviour:
================
It shouldn't create a volume with an empty name. Error should be returned.
Solution:
=======
Added a condition to check for an empty string: if the name is empty, a random name is generated for the volume. Made the name field optional in the UI as well as in the API. A minimal sketch follows below.
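A minimal sketch of that behaviour, with an invented helper name:
```java
// Sketch only: fall back to a generated name when the request omits one or
// passes a blank string.
import java.util.UUID;

class VolumeNameDefaulter {
    static String resolveName(String requestedName) {
        if (requestedName == null || requestedName.trim().isEmpty()) {
            return "volume-" + UUID.randomUUID().toString().substring(0, 8);
        }
        return requestedName;
    }
}
```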
* pr/1319:
CLOUDSTACK-9132: API createVolume takes empty string for name parameter
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.7:
CLOUDSTACK-9222 Prevent cloud.log.1 filling up the disk
Add integration test for restartVPC with cleanup, and Private Gateway enabled.
Nullpointer Exception in NicProfileHelperImpl
NicProfileHelperImpl NullPointerException when ipVO is null
When a VPC has a private gateway and one would like to restart the VPC with **cleanup**, it would fail.
This PR adds a NullPointer check and verifies it with an integration test.
```
test_01_vpc_privategw_acl (integration.smoke.test_privategw_acl.TestPrivateGwACL) ... === TestName: test_01_vpc_privategw_acl | Status : SUCCESS ===
ok
test_02_vpc_privategw_static_routes (integration.smoke.test_privategw_acl.TestPrivateGwACL) ... === TestName: test_02_vpc_privategw_static_routes | Status : SUCCESS ===
ok
test_03_vpc_privategw_restart_vpc_cleanup (integration.smoke.test_privategw_acl.TestPrivateGwACL) ... === TestName: test_03_vpc_privategw_restart_vpc_cleanup | Status : SUCCESS ===
ok
test_04_rvpc_privategw_static_routes (integration.smoke.test_privategw_acl.TestPrivateGwACL) ... === TestName: test_04_rvpc_privategw_static_routes | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 4 tests in 2945.055s
OK
```
* pr/1328:
Add integration test for restartVPC with cleanup, and Private Gateway enabled.
Nullpointer Exception in NicProfileHelperImpl
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.7:
Fix unable to setup more than one Site2Site VPN Connection
FIX S2S VPN rVPC: Check only redundant routers in state MASTER
PEP8 of integration/smoke/test_vpc_vpn
Add S2S VPN test for Redundant VPC
Make integration/smoke/test_vpc_vpn Hypervisor independent
FIX VPN: non-working ipsec commands
[UI] MADNESS
[DB] Add force_encap field to s2s_customer_gateway table
[ROUTER] Add forceencaps field to python router ipsec config method
[TEST] unittest needs rework
[MARVIN] Add forceencap field to VpnCustomerGateway class in marvin base
[CORE] Add Force UDP Encapsulation option to Site2Site VPN
CLOUDSTACK-9186: Root admin cannot see VPC created by Domain admin user
CLOUDSTACK-9192: UpdateVpnCustomerGateway is failing
CLOUDSTACK-6485 prevent ip assignment of private gw iface
CLOUDSTACK-9204 Do not error when staticroute is already gone
make both check lines consistent
CLOUDSTACK-9181 Prevent syntax error in checkrouter.sh
CLOUDSTACK-9202 Bump ssh timeout
[4.7] FIX Site2SiteVPN on redundant VPC
This PR:
- fixes the inability to setup more than one Site2Site VPN connection from a VPC
- fixes starting of Site2Site VPN on redundant VPC
- fixes Site2Site VPN state checking on redundant VPC
- improves the vpc_vpn test to allow multiple hypervisors
- adds an integration test for Site2Site VPN on redundant VPC
Tested it on 4.7 single Xen server zone:
command:
```
nosetests --with-marvin --marvin-config=/data/shared/marvin/mct-zone1-xen1.cfg -a tags=advanced,required_hardware=true /tmp/test_vpc_vpn.py
```
results:
```
Test Site 2 Site VPN Across redundant VPCs ... === TestName: test_01_redundant_vpc_site2site_vpn | Status : SUCCESS ===
ok
Test Remote Access VPN in VPC ... === TestName: test_01_vpc_remote_access_vpn | Status : SUCCESS ===
ok
Test Site 2 Site VPN Across VPCs ... === TestName: test_01_vpc_site2site_vpn | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 3 tests in 1490.076s
OK
```
Also performed numerous manual inspections of the state of VPN connections and connectivity between VPCs.
* pr/1276:
Fix unable to setup more than one Site2Site VPN Connection
FIX S2S VPN rVPC: Check only redundant routers in state MASTER
PEP8 of integration/smoke/test_vpc_vpn
Add S2S VPN test for Redundant VPC
Make integration/smoke/test_vpc_vpn Hypervisor independent
FIX VPN: non-working ipsec commands
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-6485 prevent ip assignment of private gw iface
Prevent IP address assignment of the gateway to the gateway interface on the VPC router by setting vpcid to null in the network. This was fixed in 4.4 by 1f209ff226, reimplemented for 4.7.
* pr/1299:
CLOUDSTACK-6485 prevent ip assignment of private gw iface
Signed-off-by: Remi Bergsma <github@remi.nl>
Added conditions to check if the name is empty or blank.
If it is empty or blank, then it generates a random name.
Made the name field optional in the UI as well as in the API.
Added required unit tests.
Prevent IP address assignment of the gateway to the gateway interface on the VPC router by setting vpcid to null in the network.
Was fixed in 4.4 by 1f209ff226
Reimplemented for 4.7