- All tests should pass on KVM, Simulator
- Add test cases covering FSM state transitions and actions
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Host-HA offers investigation, fencing and recovery mechanisms for hosts that are
malfunctioning for any reason. It uses activity and health checks to determine the
current host state, based on which it may degrade a host or try to recover it. If
recovery fails, it may try to fence the host.
The core feature is implemented in a hypervisor-agnostic way, with two separate
driver/provider implementations for the Simulator and KVM hypervisors. The
framework also allows other hypervisor-specific providers to be implemented in
the future.
The Host-HA provider implementation for the KVM hypervisor uses the out-of-band
management sub-system to issue IPMI calls to reset (recover) or power off (fence)
a host.
The Host-HA provider implementation for Simulator provides a means of testing
and validating the core framework implementation.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself into `NioServer` to handle agent connections securely. The
framework assumes in `NioClient` that a keystore, if available under
the known name `cloud.jks`, will be used for SSL negotiations and
handshake.
This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificates to CloudStack agents such as
the KVM, CPVM and SSVM agents, and also for the management server for
peer clustering.
Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
global setting. Newly provisioned agents (KVM/CPVM/SSVM etc.) will get a
randomized comma-separated list to which they will attempt connection
or reconnection in the provided order (see the sketch after this list).
This removes the need for a TCP LB on port 8250 (default) of the
management server(s).
- All fresh deployments will enforce two-way SSL authentication where
connecting agents will be required to present certificates issued
by the 'root' CA plugin.
- Existing environments will, on upgrade, continue to use one-way SSL
authentication, and connecting agents will not be required to present
certificates.
- A script `keystore-setup` is responsible for initial keystore setup
and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing a provided
certificate payload into the Java keystore file.
- Agent security (keystore, certificates etc.) is set up initially using
SSH, and later provisioning is handled via an existing agent connection
using command-answers. The supported clients and agents are limited to
the CPVM, SSVM, and KVM agents, and clustered management servers (peering).
- Certificate revocation does not terminate an existing agent-mgmt server
connection; however, a revoked certificate presented during an SSL
handshake is rejected.
- The older `cloudstackmanagement.keystore` is deprecated and will no longer
be used by the mgmt server(s) for SSL negotiations and handshake. New
keystores will be named `cloud.jks`; additional SSL certificates (for use
with Tomcat etc.) should not be imported into it. The `cloud.jks`
keystore is strictly used for agent-server communications.
- Management server keystores are validated and renewed on startup only;
their validity is the same as that of the CA certificate.
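As referenced in the note above on the comma-separated 'host' setting, the following is a minimal, illustrative sketch (not the actual agent code) of how a connecting agent could consume a randomized list of management servers; the helper names and the hard-coded port are assumptions for the example:

import random
import socket

AGENT_PORT = 8250  # default port for agent connections

def pick_connection_order(host_setting):
    # Split the comma-separated 'host' value and randomize the order in
    # which this agent attempts connection or reconnection.
    hosts = [h.strip() for h in host_setting.split(",") if h.strip()]
    random.shuffle(hosts)
    return hosts

def connect_to_any(host_setting):
    # Walk the shuffled list until one management server accepts.
    for host in pick_connection_order(host_setting):
        try:
            return socket.create_connection((host, AGENT_PORT), timeout=10)
        except OSError:
            continue  # try the next management server in the list
    return None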
New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial
Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates
Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed
UI changes:
- Button to download/save the CA certificates.
Misc changes:
- Upgrades the Bouncy Castle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
After integrating XenServer (XS) with CCP, the folder /var/log/cloud/ gets created, but the logs in it are not rotated, which eventually fills up the root file system. This is a known issue; http://support.citrix.com/article/CTX138064 describes the issue and its solution. Based on that article, the corresponding changes were added to CloudStack.
ip6tables no longer takes '--icmpv6-type any' as an argument.
To allow all ICMPv6 traffic with ip6tables it has to be invoked this way:
$ ip6tables -I i-2-14-VM -p icmpv6 -s ::/0 -j ACCEPT
All ICMPv6 traffic is now allowed into the Instance.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This fixes the agreed-upon URL on download.cloudstack.org in various
SQL files and misc scripts.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This commit implements Ingress and Egress filtering for IPv6 in
Basic Networking.
It allows for opening and closing ports just as can be done with IPv4.
Rules have to be specified twice, once for IPv4 and once for IPv6, for
example:
- 22 until 22: 0.0.0.0/0
- 22 until 22: ::/0
Egress filtering works the same as with IPv4. When no rule is applied all
traffic is allowed. Otherwise only the specified traffic (with DNS being
the exception) is allowed.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This commit implements basic Security Grouping for KVM in
Basic Networking.
It does not implement full Security Grouping yet, but it does:
- Prevent IP-Address source spoofing
- Allow DHCPv6 clients, but disallow DHCPv6 servers
- Disallow Instances to send out Router Advertisements
The Security Grouping allows ICMPv6 packets as described by RFC4890
as they are essential for IPv6 connectivity.
Following RFC4890 it allows:
- Router Solicitations
- Router Advertisements (incoming only)
- Neighbor Advertisements
- Neighbor Solicitations
- Packet Too Big
- Time Exceeded
- Destination Unreachable
- Parameter Problem
- Echo Request
ICMPv6 is an essential part of IPv6; without it, connectivity will break or be very
unreliable.
For now it allows any UDP and TCP packet to be sent in to the Instance, which
effectively opens up the firewall completely.
Future commits will implement Security Grouping further, allowing UDP and TCP
ports to be controlled for IPv6 as can be done with IPv4.
Regardless of the egress filtering (which can't be done yet), it will always allow outbound DNS
to port 53 over UDP or TCP.
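As a rough illustration only (not the agent's actual implementation), the ICMPv6 types listed above could be allowed with per-type ip6tables rules, driven here from Python in the style of security_group.py; the chain name is an assumption and direction handling (e.g. Router Advertisements incoming only) is omitted:

import subprocess

# ICMPv6 types recommended by RFC4890, matching the list above.
ALLOWED_ICMPV6_TYPES = [
    "router-solicitation",
    "router-advertisement",
    "neighbour-solicitation",
    "neighbour-advertisement",
    "packet-too-big",
    "time-exceeded",
    "destination-unreachable",
    "parameter-problem",
    "echo-request",
]

def allow_icmpv6(chain):
    # Insert one ACCEPT rule per ICMPv6 type into the given chain.
    for icmp_type in ALLOWED_ICMPV6_TYPES:
        subprocess.call(["ip6tables", "-I", chain, "-p", "icmpv6",
                         "--icmpv6-type", icmp_type, "-j", "ACCEPT"])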
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Use a separate lvcreate command on XenServer 7 hosts that checks and passes
different parameters based on the XenServer release version.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Allow DNS queries over TCP when egress filtering is configured.
When using DNSSEC, more and more queries are done over TCP, which
requires 53/TCP to be allowed.
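A hedged sketch of the kind of exception described here (the chain name and the Python wrapper are illustrative, not the actual egress implementation):

import subprocess

def always_allow_dns(chain):
    # Permit outbound DNS over both UDP and TCP (port 53) regardless of
    # the configured egress rules, so DNSSEC lookups that fall back to
    # TCP keep working.
    for proto in ("udp", "tcp"):
        subprocess.call(["iptables", "-I", chain, "-p", proto,
                         "--dport", "53", "-j", "ACCEPT"])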
Signed-off-by: Wido den Hollander <wido@widodh.nl>
We use some JSP files just for translation of strings in the UI. This is
achievable purely in JavaScript. This removes those JSPs and simplifies
translation usage and workflow (purely JS based). The l10n js (dictionary)
files are generated from the existing messages.properties files during the
client-ui code generation phase.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The old commands.properties format included the full class name, such as:
createAccount=com.cloud.api.commands.CreateAccountCmd;1
The migration script did not consider this format and failed. With this fix
the migration script will process both formats, including a commands.properties
file with mixed formats, for example (a parsing sketch follows the sample run below):
$ cat commands.properties
### Account commands
createAccount=1
deleteAccount=2
markDefaultZoneForAccount=com.cloud.api.commands.MarkDefaultZoneForAccountCmd;3
$ python scripts/util/migrate-dynamicroles.py -d -f commands.properties
Apache CloudStack Role Permission Migration Tool
(c) Apache CloudStack Authors and the ASF, under the Apache License, Version 2.0
Running this migration tool will remove any default-role permissions from cloud.role_permissions. Do you want to continue? [y/N]y
The commands.properties file has been deprecated and moved at: commands.properties.deprecated
Running SQL query: DELETE FROM `cloud`.`role_permissions` WHERE `role_id` in (1,2,3,4);
Running SQL query: INSERT INTO `cloud`.`role_permissions` (`uuid`, `role_id`, `rule`, `permission`, `sort_order`) values (UUID(), 1, '*', 'ALLOW', 0);
Running SQL query: INSERT INTO `cloud`.`role_permissions` (`uuid`, `role_id`, `rule`, `permission`, `sort_order`) values (UUID(), 2, 'deleteAccount', 'ALLOW', 0);
Running SQL query: INSERT INTO `cloud`.`role_permissions` (`uuid`, `role_id`, `rule`, `permission`, `sort_order`) values (UUID(), 2, 'markDefaultZoneForAccount', 'ALLOW', 1);
Static role permissions from commands.properties have been migrated into the db
Running SQL query: UPDATE `cloud`.`configuration` SET value='true' where name='dynamic.apichecker.enabled'
Dynamic role based API checker has been enabled!
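A simplified, illustrative sketch of the format handling described above (not the migration script's actual code), accepting both entry styles:

def parse_rule(line):
    # Return (api_name, level) for a commands.properties entry, e.g.
    #   createAccount=1
    #   createAccount=com.cloud.api.commands.CreateAccountCmd;1
    name, _, value = line.partition("=")
    if ";" in value:
        # Old format: full class name followed by the role level.
        value = value.split(";")[-1]
    return name.strip(), int(value.strip())

assert parse_rule("createAccount=1") == ("createAccount", 1)
assert parse_rule(
    "markDefaultZoneForAccount=com.cloud.api.commands.MarkDefaultZoneForAccountCmd;3"
) == ("markDefaultZoneForAccount", 3)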
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- More detailed error if the host file is not found or cannot be opened
- Use mkstemp and mkdtemp for improved security
- Improve resource cleanup in error conditions in the unit tests
As requested here: https://github.com/apache/cloudstack/pull/1495
No scripts are using Perl, so that install requirement can be removed.
The new scripts use standard Python packages only.
Includes extensive unit tests.
Due to PR #1054 this patch fixes the dynamic-roles migration script
to use the mysql-connector-python dependency.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
writeIfNotHere requires an array of strings, not a string
* pr/1456:
writeIfNotHere requires an array of strings, not a string
Signed-off-by: Will Stevens <williamstevens@gmail.com>
- Removes commands.properties file
- Fixes apidocs and marvin to be independent of commands.properties usage
- Removes bundling of commands.properties in deb/rpm packaging
- Removes file references across codebase
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows root administrators to define new roles and associate API
permissions with them.
Previously, a limited form of role-based access control for the CloudStack
management server API was provided through a properties file, commands.properties,
embedded in the WAR distribution. Customizing API permissions therefore required
unpacking the distribution and modifying this file consistently on all servers.
The old system also did not permit the specification of additional roles.
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Dynamic+Role+Based+API+Access+Checker+for+CloudStack
The DB-backed Dynamic Role Based API Access Checker for CloudStack brings the
following changes, features and use-cases:
- Moves the API access definitions from commands.properties to the mgmt server DB
- Allows defining custom roles (such as a read-only ROOT admin) beyond the
current set of four (4) roles
- All roles resolve to one of the four known role types (Admin, Resource
Admin, Domain Admin and User); this association is maintained by requiring
all newly defined roles to specify a role type
- Allows changes to roles and API permissions per role at runtime, including
addition or removal of roles and/or modification of permissions, without the
need to restart management server(s)
Upgrade/installation notes:
- The feature will be enabled by default for new installations; existing
deployments will continue to use the older static role based API access checker,
with an option to enable this feature
- During fresh installation or upgrade, the upgrade paths will add four default
roles based on the four default role types
- For ease of migration, at the time of upgrade commands.properties will be used
to add the existing set of permissions to the default roles. cloud.account
will have a new role_id column which will be populated based on the default
roles as well
Dynamic-roles migration tool: scripts/util/migrate-dynamicroles.py
- Allows admins to migrate to the dynamic role based checker at a future date
- Performs a hard, one-way migration and update
- Migrates rules from the existing commands.properties file into the DB and
deprecates the file
- Enables an internal hidden switch to enable the dynamic role based checker feature
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
kvm: Acquire lock when running security group Python script
It could happen that when multiple instances are starting at the same
time on a KVM host, the Agent spawns multiple instances of security_group.py
which all try to modify iptables/ebtables rules.
This fails, with one of the two processes erroring out.
The instance is still started, but it doesn't have any IP connectivity due
to the failed programming of the security groups.
This modification lets the script acquire an exclusive lock on a file so that
only one instance of the script talks to iptables/ebtables at once.
Other instances of the script will poll every 500ms to check if they can
obtain the lock and otherwise execute anyway after 15 seconds.
* pr/1408:
kvm: Acquire lock when running security group Python script
Signed-off-by: Will Stevens <williamstevens@gmail.com>
It could happen that when multiple instances are starting at the same
time on a KVM host, the Agent spawns multiple instances of security_group.py
which all try to modify iptables/ebtables rules.
This fails, with one of the two processes erroring out.
The instance is still started, but it doesn't have any IP connectivity due
to the failed programming of the security groups.
This modification lets the script acquire an exclusive lock on a file so that
only one instance of the script talks to iptables/ebtables at once.
Other instances of the script will poll every 500ms to check if they can
obtain the lock and otherwise execute anyway after 15 seconds.
The lock is released as soon as the script exits, which is usually within
a few hundred ms.
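A minimal sketch of the locking approach described above (the lock file path and helper name are illustrative; the script's real implementation may differ):

import fcntl
import time

LOCK_FILE = "/var/run/cloud/security_group.lock"  # illustrative path
MAX_WAIT_SECONDS = 15
POLL_INTERVAL = 0.5

def obtain_lock():
    # Try to take an exclusive lock; poll every 500ms and, after 15
    # seconds, proceed anyway, mirroring the behaviour described above.
    fd = open(LOCK_FILE, "w")
    waited = 0.0
    while waited < MAX_WAIT_SECONDS:
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return fd  # lock held; released when the file is closed on exit
        except IOError:
            time.sleep(POLL_INTERVAL)
            waited += POLL_INTERVAL
    return fd  # timed out: execute anyway, as the script does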
* 4.7:
CLOUDSTACK-9172 Added cross zones check to delete template and iso
Check the existence of 'forceencap' parameter before use
systemvm: set default umask 022 in injectkeys.sh
The default umask of 0022 is set in Ubuntu and by other packages. Set the same
in the CentOS startup scripts, and use umask 022 in the injectkeys.sh script.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This is a mandatory argument but it was NOT passed which caused the
re-programming of security groups to fail.
Simple fix to just add the argument since the variable is available
there.
* 4.6:
Revert "Change references of people.apache.org to home.apache.org in the test code"
Change references of people.apache.org to home.apache.org in the test code This closes #1123 Signed-off-by: SrikanteswaraRao Talluri <talluri@apache.org>
CLOUDSTACK-9077 Fix injectkeys.sh to work on CentOS7
CLOUDSTACK-9065: fix bug when creating packaging with noredist flag
The S3 implementation is far from finished; this commit focuses on the basics.
- Upgrade the AWS SDK to the latest version.
- Rewrite the S3 Template downloader.
- Rewrite the S3Utils utility class.
- Improve the addImageStoreS3 API command.
- Split various classes for convenience.
- Various minor improvements and code optimizations.
A side effect of the new AWS SDK is that it uses the V4 signature by default. Therefore an option was added to specify the Signer, so it stays compatible with previous versions.
CLOUDSTACK-9029: Proper support to identify CentOS 7 version number
https://issues.apache.org/jira/browse/CLOUDSTACK-9029
* pr/1033:
CLOUDSTACK-9029: Proper support to identify CentOS 7 version number
Signed-off-by: Remi Bergsma <github@remi.nl>
To configure firewall rules, CloudStack modifies `/etc/sysctl.conf` and
executes those modifications. This may be harmful for several reasons:
1. `/etc/sysctl.conf` may be managed by some configuration management
system. Such a system will constantly restore the previous version.
2. `/etc/sysctl.conf` may contain additional properties that have been
changed later by some system administrator (for example, once a
firewall has been configured, forwarding may have been activated
while it is disabled in `/etc/sysctl.conf`). Executing the file
again at a later time may disrupt the system.
3. Entries are added again and again. `/etc/sysctl.conf` will contain
the same directives repeated several times.
Using a configuration file is not needed, as `sysctl` is able to directly
modify sysctl values with the `-w` flag.
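For illustration only, applying a value directly with `sysctl -w` (wrapped in Python here; the key in the example is a common one, not necessarily what CloudStack sets):

import subprocess

def set_sysctl(key, value):
    # Apply the value directly instead of appending it to
    # /etc/sysctl.conf and re-executing the whole file.
    subprocess.check_call(["sysctl", "-w", "%s=%s" % (key, value)])

# Example: enable forwarding without touching /etc/sysctl.conf.
set_sysctl("net.ipv4.ip_forward", "1")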
Signed-off-by: Vincent Bernat <Vincent.Bernat@exoscale.ch>
Guys, can you review it? Things that need to be discussed:
(1) This supports KVM/QCOW2 only. Does anyone want to implement it for other hypervisors/formats?
(2) The original data volume (on primary storage) will be removed.
(3) The script uses the default timeout in LibvirtComputingResource. Do we need to add one to the global configuration (like copy.volume.wait, backup.snapshot.wait or create.volume.from.snapshot.wait)?
(4) In scripts/storage/qcow2/managesnapshot.sh, "qemu-img convert -f qcow2 -O qcow2" is used to copy the snapshot from secondary to primary (hence there is no base image file), instead of "cp -f", because convert was faster than cp in my testing (see the sketch below).
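A hedged sketch of the copy described in point (4); the paths and helper name are illustrative:

import subprocess

def revert_volume_from_snapshot(snapshot_path, volume_path):
    # Copy the QCOW2 snapshot from secondary storage over the primary
    # storage volume with 'qemu-img convert' (faster than 'cp' in the
    # author's testing, and the result has no backing file).
    subprocess.check_call(["qemu-img", "convert", "-f", "qcow2",
                           "-O", "qcow2", snapshot_path, volume_path])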
* pr/732:
CLOUDSTACK-5863: revert volume snapshot for KVM/QCOW2
Signed-off-by: Wei Zhou <w.zhou@tech.leaseweb.com>
This setting works on CentOS 6 / RHEL 6 but does nothing, as the
"cpu" cgroup is not mounted. On CentOS 7 / RHEL 7 systemd does
mount cgroups and "cpu" is co-mounted with "cpuacct". Hence, if
we specify "cpu" this results in an error, because it can
only use them both, or none.
By removing the setting, we rely on the default of qemu, which
is:
cgroup_controllers = ["cpu", "devices", "memory", "blkio", "cpuacct", "net_cls"]
Only if the controllers are really mounted will they be used, so this will
work on both version 6 and 7.
The 'fix script' didn't work well, as after a reboot you'd still have qemu
throwing errors. Now we can handle the co-mounted cgroups.
In dev environments, there is no /etc/cloudstack/management/db.properties file,
which forces you to specify all parameters on the command line. This commit
sets some defaults, like port 3306, user root and localhost.
When available, it will still get settings from the config file, and it will
also allow you to override them on the command line. So it is fully backwards
compatible.
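An illustrative sketch of the fallback behaviour described above (defaults, then the config file, then command-line overrides); the key names and helper are assumptions for the example, not the script's actual code:

import os

DEFAULTS = {"db.cloud.host": "localhost",
            "db.cloud.port": "3306",
            "db.cloud.username": "root"}

def load_db_settings(path="/etc/cloudstack/management/db.properties",
                     overrides=None):
    # Start from defaults, layer the config file on top when it exists
    # (it usually doesn't in dev environments), then apply any
    # command-line overrides.
    settings = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    settings[key.strip()] = value.strip()
    settings.update(overrides or {})
    return settings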
- Changed location of the update_host_passwd script
- Updated the patch files for XenServer
- Updated the script path on LibvirtComputing class
- Removed the hostIP from the LibvirtUpdateHostPasswordCommandWrapper execute() method
- Adding update_host_passwd to VRScripts
- Add accessor method to host password on CitrixResourceBase
- Add implementation to CitrixUpdateHostPasswordCommandWrapper
- Improve testUpdateHostPasswordCommand() unit test on CitrixRequestWrapperTest
- Add line to patch files on xenserver directory
Concerning the LibVirt change:
- I forgot to assign the return of the getDefaultHypervisorScriptsDir() method to the hypervisorScriptsDir variable
- Modifying the LibvirtUpdateHostPasswordCommandWrapper in order to execute the script on the host
- Adding the script path to LibvirtComputingResource
- Adding the host IP address as an instance variable on UpdateHostPasswordCommand
- Improving the Unit Test (LibvirtComputingResourceTest) to get it covering the new code
The script that installs the system vm templates sets the uuid column
for the template being installed, however it does not set the respective
url column. This commit changes that.
Signed-off-by: Remi Bergsma <apache@remi.nl>
This closes #348
VLAN id 4095 is commonly used as a 'tag passthrough' in virtualization environments
(VMware, specifically). This vlan id is incompatible with Linux, but we can
allow the admin to manually configure the bridge if the same passthrough is
desired.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit aee35c96a8)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Router VMs don't have a chain rule with the -def suffix; this fixes the name and
properly removes VR VMs not running on a host
- Before trying to remove DNATs, filter empty/None elements from the list
- destroy_ebtables_rules should check which kind of action is requested
(-A for add or -D for remove) and execute based on that
- Before executing any command, log it for debugging purposes
- Method to clean up the bridge, may be used in future
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 3925512115)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The SG Python script depends on ebtables-save, which is not available on Debian-based
distros (Ubuntu and Debian for example). The commit uses /proc/modules
to find available bridge tables (one of nat, filter or broute) and then
finds VMs that need to be removed. Further, it uses set() to remove duplicate VMs
so we don't try to remove a VM's rules more than once, which would lead to unwanted
errors in the log.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit d66677101c)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes the issue of Security Groups not working in the case of XenServer 6.5:
- Uses the nethash ipset data structure to store CIDRs (more efficient than iphash and
avoids overflow errors in case users add /8 or /4 ingress/egress CIDRs)
- Supports the ipset versions on both 6.2 and 6.5, which have different outputs. This
fixes the issue of destroy_network_rules_for_vm failing
- Implements defensive filtering of the list, instead of popping the last item without
checking if it's None or empty
- Greps using names that are 'quoted' to avoid bash errors
- Before setting up a new network rule, tries to clean up and remove the old ipset entry
- Indentation, whitespace and naming fixes
PS. This is my 1000th commit to the 🐵 project :)
This closes #186
(cherry picked from commit d91d161107)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
scripts/vm/hypervisor/xenserver/vmops
scripts: filter output instead of popping string from list
This is a defensive enhancement for the KVM SG script that filters out empty strings
instead of popping the last item, which may or may not be an empty string.
Squashed commits:
(cherry picked from commit f4cbc4c010)
(cherry picked from commit 64ab3554a1)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The recently discussed improvement has the risk that if 'sync' hangs, the reboot may be delayed in the same way as the 'reboot' command would be. To work around this, we're adding a 5-second timeout. If it cannot sync within 5 seconds, it will not succeed anyway and we should proceed with the reset.
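One way to express the bounded sync, sketched in Python for illustration (the actual change is in the shell heartbeat script; coreutils 'timeout' is assumed to be available):

import subprocess

def sync_with_timeout(seconds=5):
    # Give 'sync' a bounded amount of time; if it hangs (e.g. because the
    # storage is unreachable) we proceed with the reset anyway.
    return subprocess.call(["timeout", str(seconds), "sync"]) == 0

if not sync_with_timeout():
    pass  # could not flush buffers in time; the reset goes ahead regardless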
@snuf: Could we use your OVM3 heartbeat script for other hypervisors as well? One way to do it seems like a nice idea :-)
1. provide compatibility with the Big Cloud Fabric (BCF) controller
L2 Connectivity Service in both VPC and non-VPC modes
2. virtual network terminology updates: VNS --> BCF_SEGMENT
3. uses HTTPS with trust-always certificate handling
4. topology sync support with BCF controller
5. support multiple (two) BCF controllers with HA
6. support VM migration
7. support Firewall, Static NAT, and Source NAT with NAT enabled option
8. add VifDriver for Indigo Virtual Switch (IVS)
This closes #151
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
As discussed with @wido, @pyr and @nuxro, added an extra log line.
Tested it and it logs fine (tested to local disk) when syncing first:
Apr 3 15:31:23 mcctest2 heartbeat: kvmheartbeat.sh system because it was unable to write the heartbeat to the storage
By the way, it did also log to the agent.log but this extra log has the benefit of ending up in the system log so you'll probably find it easier there. Existing logs:
2015-04-03 15:27:23,943 WARN [kvm.resource.KVMHAMonitor] (Thread-24:null) write heartbeat failed: timeout, retry: 0
2015-04-03 15:28:23,944 WARN [kvm.resource.KVMHAMonitor] (Thread-24:null) write heartbeat failed: timeout, retry: 1
2015-04-03 15:29:23,946 WARN [kvm.resource.KVMHAMonitor] (Thread-24:null) write heartbeat failed: timeout, retry: 2
2015-04-03 15:30:23,948 WARN [kvm.resource.KVMHAMonitor] (Thread-24:null) write heartbeat failed: timeout, retry: 3
2015-04-03 15:31:23,950 WARN [kvm.resource.KVMHAMonitor] (Thread-24:null) write heartbeat failed: timeout, retry: 4
2015-04-03 15:31:23,950 WARN [kvm.resource.KVMHAMonitor] (Thread-24:null) write heartbeat failed: timeout; reboot the host
This closes #145
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When storage cannot be reached, it does not make sense to reboot, as the system will try to flush buffers, unmount NFS mounts, etc. This will not work and thus causes a long delay. With this change, the box will reboot immediately (like pressing the reset button).
This is a plugin that adds OVM3 support, ranging from 3.3.1 to 3.3.2. Basic
functionality is in here, advanced networking etc.
Snapshots only work when a VM is stopped, due to the semantics of OVM's raw
image implementation (so snapshots should work on a storage level underneath the
hypervisor, shrug).
This closes #113
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When there is a large VR configuration (aggregated commands), copying data to the VR using the vmops plugin failed
because of the ARG_MAX size limitation. The configuration data size is around 300KB.
Updated this to create a file on the host via scp with the file contents.
Then the file is copied from the host to the VR using the vmops createFileInDomr method.
On the host the file gets created in /tmp/ with the name VR-<UUID>.cfg; once it has been copied to the VR this file is removed.
This makes sure dom0 in xenserver doesn't get hammered
when copying templates. It doesn't make sense to use
the cache of dom0 as the template does not fit in
memory. The directIO flags prevent it from trying.
Some try/except blocks in security_group.py catch a lot of exceptions. There
was already one fixed in CLOUDSTACK-1052. Here is another one. We use
logging.exception() to log those exceptions.
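For illustration, the pattern looks like this (the function and message are placeholders, not the actual script code):

import logging

def add_network_rules(vm_name):
    try:
        pass  # call out to iptables/ebtables here
    except Exception:
        # logging.exception() records the full traceback instead of the
        # broad try/except silently swallowing the error.
        logging.exception("failed to program rules for %s", vm_name)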
Signed-off-by: Vincent Bernat <Vincent.Bernat@exoscale.ch>
Signed-off-by: Pierre-Luc Dion <pdion891@apache.org>
Recent versions of libvirt (at least 0.9.8) will return an int when
queried for the ID of a domain, not a string. This breaks some parts of
the `security_group.py` script, which expects a string containing an
int. Notably, this breaks the part handling VM reboots, which is
therefore not executed.
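A small sketch of the kind of coercion that keeps both behaviours working (the helper is illustrative; the actual fix in security_group.py may differ):

import libvirt  # python-libvirt bindings used by security_group.py

def get_domain_id(conn, vm_name):
    # Recent libvirt returns the domain ID as an int; older code expected
    # a string containing an int. Coercing explicitly keeps string
    # comparisons working either way.
    dom = conn.lookupByName(vm_name)
    return str(dom.ID())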
Signed-off-by: Vincent Bernat <Vincent.Bernat@exoscale.ch>
Signed-off-by: Sebastien Goasguen <runseb@gmail.com>
1. Integrate System Seed Template into MSI Installer
2. Added progress bar status messages for custom actions at needed places.
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>
OVS distributed routing: ensure the bridge is deleted when the last VM from the
VPC is deleted on a host. This fix ensures that the bridge is destroyed.
Create the tunnels if they have not been created already when an
OvsVpcPhysicalTopologyConfigCommand update is received.
Currently if the tunnel creation fails, there is no retry logic. The fix
uses OvsVpcPhysicalTopologyConfigCommand updates as an opportunity to ensure
proper tunnels are established between the hosts.
For OvsVpcPhysicalTopologyConfigCommand and OvsVpcRoutingPolicyConfigCommand,
the fix ensures only the latest updates are applied (new OpenFlow rules) to the
bridge enabled for distributed routing.
Fix the mismatch between ovs-host-setup and ovs_host_setup used by the Libvirt
resource and the scripts.
Plug the NIC into the OVS bridges created for the tunnel network.
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/OvsVifDriver.java
distributed routing. The fix ensures there is an appropriate flag in the
other-config of the network to indicate the bridge type.
Conflicts:
server/src/com/cloud/network/vpc/VpcManagerImpl.java
to flow rules and applies them on the bridge
Add an event subscriber in OvsTunnelManager that listens to
replaceNetworkAcl events. On such an event, it sends the updated policy info to all
the hosts in the VPC.
- get the hosts across which a VPC spans, given the VPC id
- get the VMs in the VPC
- get the hosts across which a network spans
- get the VPCs of which a host is part
- get the VMs of a VPC on a host
Introduces the capability to build a physical topology representation of a
VPC. This JSON file is encapsulated in
OvsVpcPhysicalTopologyConfigCommand and is used to send the full topology
to hypervisor hosts. On the hypervisor this JSON config can be used to set up
tunnels, configure the bridge, add flow rules etc.
OVS guru, to use a different broadcast scheme, VS://vpcid.gerkey, for the
networks in a VPC that use distributed routing.
Each VIF and tunnel interface carries the network UUID in the other/options
config.
distributed routing mode, and set up flows appropriately.
Script to handle the VPC topology sent from the management server in JSON
format. From the JSON file, the required configuration (tunnel setup and flow
rules) is set up on the bridge.
on hyperv. There were multiple issues here. Upload volume was actually
failing because the post-download check for the VHD on the CIFS share was
unsuccessful. Also, the agent code wasn't parsing the volume path correctly;
fixed that too.
Fixing rebase issues after integrating with wmi v2 implementation.
Removing the executable attribute from some files.
Remove the unused wmi v1 interface file.
Unit test for DestroyCommand implementation in hyperv agent.
Fixed VM state changes w.r.t wmi version 2 changes
If a VM is already running, deploy virtual machine shouldn't fail and throw an exception.
Don't run vhd-util on templates which are present on CIFS. Hyperv uses cifs as secondary storage
Add a SCSI controller by default. This is needed so that data volumes can be added/removed
on a running vm.
Remove the hard coded path in the agent code.
RAT fixes for the hyperv agent. Added the missing headers in files where they were missing.
Use the interface wildcard "+" in iptables to cover potentially used VLAN interfaces, allowing output on the physical interface.
You will see
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-out bond2+ --physdev-is-bridged
instead of
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-out bond2.1234 --physdev-is-bridged
Anthony
1) vxlan will use the bridge scheme 'brvx-<vni>'. Multiple physical networks can host the guest
traffic type with vxlan isolation, as long as they don't use the same VNI range.
2) Guest traffic labels can be a physical interface if a bridge with the given name is not found.
Normally we take the traffic label name, find the matching bridge, then resolve that to a
physical interface. Then we create guest bridges on that interface. Now we can just
specify the interface.
XS 6.1/6.2 introduce a new virtual platform, so there are two virtual platforms; the Windows PV driver version must match the virtual platform.
This patch tracks PV driver versions in the VM details and template details.
Anthony