This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself into `NioServer` to handle agent connections securely. It also
adds the assumption in `NioClient` that a keystore, if available with
the known name `cloud.jks`, will be used for SSL negotiations and
handshakes.
This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificates to CloudStack agents such as
the KVM, CPVM and SSVM agents, and also for the management server for
peer clustering.
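As an illustration of what the 'root' provider does on first run, here
is a minimal sketch of creating a self-signed CA certificate with
Bouncy Castle; the DN, key size and validity below are illustrative
placeholders, not the plugin's actual defaults.

    import java.math.BigInteger;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.cert.X509Certificate;
    import java.util.Date;

    import org.bouncycastle.asn1.x500.X500Name;
    import org.bouncycastle.cert.X509v3CertificateBuilder;
    import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
    import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
    import org.bouncycastle.operator.ContentSigner;
    import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;

    public class RootCaSketch {
        public static X509Certificate createSelfSignedCa() throws Exception {
            // Generate the CA key pair (key size would come from ca.framework.cert.keysize)
            KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
            generator.initialize(2048);
            KeyPair keyPair = generator.generateKeyPair();

            // Self-signed: issuer and subject are the same DN (cf. ca.plugin.root.issuer.dn)
            X500Name dn = new X500Name("CN=example-cloudstack-ca");  // placeholder DN
            Date notBefore = new Date();
            Date notAfter = new Date(notBefore.getTime() + 365L * 24 * 60 * 60 * 1000);

            X509v3CertificateBuilder builder = new JcaX509v3CertificateBuilder(
                    dn, BigInteger.valueOf(System.currentTimeMillis()),
                    notBefore, notAfter, dn, keyPair.getPublic());

            // Sign with the CA's own private key (cf. ca.framework.cert.signature.algorithm)
            ContentSigner signer = new JcaContentSignerBuilder("SHA256withRSA")
                    .build(keyPair.getPrivate());
            return new JcaX509CertificateConverter().getCertificate(builder.build(signer));
        }
    }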
Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
  global setting. Newly provisioned agents (KVM/CPVM/SSVM etc) will get a
  randomized comma-separated list to which they will attempt connection
  or reconnection in the provided order (see the sketch after this list).
  This removes the need for a TCP LB on port 8250 (default) of the
  management server(s).
- All fresh deployments will enforce two-way SSL authentication where
  connecting agents will be required to present certificates issued
  by the 'root' CA plugin.
- Existing environments will, on upgrade, continue to use one-way SSL
  authentication, and connecting agents will not be required to present
  certificates.
- A script `keystore-setup` is responsible for initial keystore setup
and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing a provided
  certificate payload into the Java keystore file.
- Agent security (keystore, certificates etc) is set up initially using
  SSH, and later provisioning is handled via an existing agent connection
  using command-answers. The supported clients and agents are limited to
  CPVM, SSVM and KVM agents, and the clustered management server (peering).
- Certificate revocation does not revoke an existing agent-mgmt server
  connection; however, a revoked certificate is rejected during the SSL
  handshake.
- The older `cloudstackmanagement.keystore` is deprecated and will no longer
  be used by the mgmt server(s) for SSL negotiations and handshake. New
  keystores will be named `cloud.jks`; any additional SSL certificates
  should not be imported into it for use with tomcat etc. The `cloud.jks`
  keystore is strictly used for agent-server communications.
- Management server keystores are validated and renewed on start-up only;
  their validity is the same as that of the CA certificates.
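Regarding the 'host' setting item above, here is a minimal sketch (not
the actual NioClient code) of how an agent might shuffle the
comma-separated management server list once and then try each address
in order until a connection succeeds:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class MgmtServerPickerSketch {
        public static Socket connect(String hostSetting, int port) throws IOException {
            // e.g. hostSetting = "10.1.1.11,10.1.1.12,10.1.1.13" from the 'host' global setting
            List<String> hosts = Arrays.asList(hostSetting.split(","));
            Collections.shuffle(hosts);  // randomized order per agent

            // Try each management server in the shuffled order until one accepts the connection
            for (String host : hosts) {
                try {
                    Socket socket = new Socket();
                    socket.connect(new InetSocketAddress(host.trim(), port), 5000);
                    return socket;
                } catch (IOException e) {
                    // fall through and try the next management server
                }
            }
            throw new IOException("No management server reachable on port " + port);
        }
    }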
New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues an X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial
Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates
Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed
UI changes:
- Button to download/save the CA certificates.
Misc changes:
- Upgrades the Bouncy Castle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
agent: Enable IPv6 connectivity for KVM Agent to Management Server
IPv4 is still preferred, so if the hostname of the Management Server
returns an A and an AAAA-record the Agent will still connect to the
server over IPv4.
This however allows the use of a hostname which only has
an AAAA-record. In that case the Agent will connect to the Management
Server over IPv6.
* pr/1488:
agent: Enable IPv6 connectivity for KVM Agent to Management Server
Signed-off-by: Will Stevens <williamstevens@gmail.com>
IPv4 is still preferred, so if the hostname of the Management Server
returns an A and an AAAA-record the Agent will still connect to the
server over IPv4.
This however allows the use of a hostname which only has
an AAAA-record. In that case the Agent will connect to the Management
Server over IPv6.
This allows us to have the Agent connect to the Management Server
over IPv6 if it is listening on :::8250.
With this patch it is possible to deploy an IPv6-only KVM Agent where
IPv4 traffic is still forwarded over the bridges, but the KVM Agent
itself does not have IPv4 connectivity.
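A minimal sketch of the resolution preference described above
(illustrative, not the Agent's actual code): resolve all records for
the hostname and prefer an IPv4 address when one exists, otherwise
fall back to IPv6.

    import java.net.Inet4Address;
    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class PreferIpv4Sketch {
        public static InetAddress resolve(String mgmtServerHost) throws UnknownHostException {
            InetAddress[] addresses = InetAddress.getAllByName(mgmtServerHost);
            // Prefer an A record (IPv4) if the hostname has both A and AAAA records
            for (InetAddress address : addresses) {
                if (address instanceof Inet4Address) {
                    return address;
                }
            }
            // Only AAAA records returned: connect over IPv6
            return addresses[0];
        }
    }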
- Properties object population using PropertiesUtil.loadFromFile
- Test added
- The separate FileNotFoundException handling block was removed, as the
  next IOException block catches it and it only logs
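For reference, the plain-Java equivalent of loading a Properties object
from a file looks roughly like this (the actual change uses CloudStack's
PropertiesUtil.loadFromFile helper):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class PropertiesLoadSketch {
        public static Properties load(File file) throws IOException {
            Properties properties = new Properties();
            // FileNotFoundException is a subclass of IOException, so a
            // separate catch block is not needed if both are only logged.
            try (InputStream in = new FileInputStream(file)) {
                properties.load(in);
            }
            return properties;
        }
    }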
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
- Minor cleanups on the method body
- Java 1.5 for loop
- paramName and paramValue variables to make the code more readable
- NumbersUtil replaced by NumberUtils
- Test case for parseCommand
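A small sketch of the kind of cleanup described above (illustrative
names, not the actual AgentShell code): an enhanced for loop over
key=value tokens, readable paramName/paramValue variables, and
commons-lang NumberUtils for numeric parsing.

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.commons.lang.math.NumberUtils;

    public class ParseCommandSketch {
        public static Map<String, String> parseCommand(String[] args) {
            Map<String, String> params = new HashMap<String, String>();
            for (String arg : args) {                 // Java 1.5 style for loop
                String[] tokens = arg.split("=", 2);
                if (tokens.length != 2) {
                    continue;
                }
                String paramName = tokens[0];         // readable variable names
                String paramValue = tokens[1];
                params.put(paramName, paramValue);
            }
            return params;
        }

        public static int getPort(Map<String, String> params, int defaultPort) {
            // NumberUtils.toInt falls back to the default on null or bad input
            return NumberUtils.toInt(params.get("port"), defaultPort);
        }
    }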
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
- Javadoc changed - the old one was copy-pasted from AgentShell
- start and stop methods removed - they did the same as the overridden methods
- _counter removed as it was only written, but never read
- The removal from the _asleep map was moved to a finally block, to make sure
  the entry is removed even if the thread gets interrupted
- Tests created for the above scenarios.
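A sketch of the finally-block change, with hypothetical names modelled
on the description above: the entry is removed from the map in finally,
so it is cleaned up even if the thread is interrupted while waiting.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SleepTrackerSketch {
        private final Map<Long, Thread> _asleep = new ConcurrentHashMap<Long, Thread>();

        public void sleepFor(long id, long millis) throws InterruptedException {
            _asleep.put(id, Thread.currentThread());
            try {
                Thread.sleep(millis);
            } finally {
                // Removed in a finally block so the entry does not leak
                // even if the thread gets interrupted while sleeping.
                _asleep.remove(id);
            }
        }
    }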
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
The managed context framework provides a simple way to add logic
to ACS at the various entry points of the system. As threads are
launched and run, listeners can be registered for onEntry or onLeave
of the managed context. This framework will be used specifically
to handle DB transaction checking and setting up the CallContext.
This framework is needed to transition away from ACS custom AOP to
Spring AOP.
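A hypothetical sketch of the listener idea described above (interface
and method names here are illustrative, not necessarily the framework's
actual API): a listener is registered once and invoked on entry and
exit of every managed context.

    // Hypothetical shapes, for illustration only.
    interface ManagedContextListenerSketch {
        void onEntry();   // e.g. open a CallContext, check for a dangling DB transaction
        void onLeave();   // e.g. close the CallContext, roll back anything left open
    }

    class ManagedContextSketch {
        private final java.util.List<ManagedContextListenerSketch> listeners =
                new java.util.concurrent.CopyOnWriteArrayList<ManagedContextListenerSketch>();

        void register(ManagedContextListenerSketch listener) {
            listeners.add(listener);
        }

        void runInContext(Runnable work) {
            for (ManagedContextListenerSketch l : listeners) {
                l.onEntry();
            }
            try {
                work.run();
            } finally {
                for (ManagedContextListenerSketch l : listeners) {
                    l.onLeave();
                }
            }
        }
    }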
Update ImageFormat enum to include VHDX format introduced with Hyper-V
Server 2012.
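Illustrative shape of the change (not the full CloudStack enum, which
has more values):

    public enum ImageFormat {
        RAW, VHD, VHDX, ISO, QCOW2, OVA
    }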
Remove existing Hyper-V plugin, because it does not work and is dead
code.
Remove references to existing Hyper-V plugin from config files.
Remove Hypervisor.HypervisorType.Hyperv special cases from manager code
that are unused or unsupported.
Specifically, there is no CIFS secondary storage class
"CifsSecondaryStorageResource". Also, the Hyper-V plugin's
ServerResource is contacted by the management server and not the other
way around.
Add Hyper-V support to the ListHypervisorsCmd API call
Signed-off-by: Edison Su <sudison@gmail.com>
Since we use JSVC we don't execute the main method, but it is still
there for manually running the Agent.
Initializing log4j in the start method makes sure it also works with JSVC.
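A rough sketch of the point above (not the actual AgentShell code;
the config file path is a placeholder): under JSVC the daemon
lifecycle calls start() rather than main(), so log4j must be
configured there, and main() stays only for running the Agent by hand.

    import org.apache.log4j.PropertyConfigurator;

    public class AgentDaemonSketch {
        public void start() {
            // JSVC enters here, not through main(), so log4j must be configured here.
            PropertyConfigurator.configure("/etc/cloudstack/agent/log4j.properties");  // placeholder path
            // ... connect to the management server, start worker threads, etc.
        }

        public void stop() {
            // ... shut down threads and close the connection
        }

        public static void main(String[] args) {
            // Still present for running the Agent manually (without JSVC).
            new AgentDaemonSketch().start();
        }
    }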
* send StartupAnswer right after StartupCommand is received
* if the post processor goes wrong, send a ReadyCommand with the error message to the agent; the agent will then exit
The example configuration file said 'workers' was the directive, but the code said
'threads'.
Now we accept both to prevent configuration errors, but the example config remains 'workers'.
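A minimal sketch of accepting both keys (illustrative, including the
assumption that 'workers' is read first):

    import java.util.Properties;

    public class WorkerCountSketch {
        public static int getWorkers(Properties props, int defaultValue) {
            // Accept both spellings; the example config keeps documenting 'workers'.
            String value = props.getProperty("workers");
            if (value == null) {
                value = props.getProperty("threads");
            }
            if (value == null) {
                return defaultValue;
            }
            return Integer.parseInt(value.trim());
        }
    }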
This patch adds RBD (RADOS Block Device) support for primary storage in combination with KVM.
To get this patch working you need:
- libvirt-java 0.4.8
- libvirt with RBD storage pool support (>0.9.13)
- Qemu with RBD support (>0.14)
The primary storage does not support all the functions of CloudStack yet; for example snapshotting is disabled
because backing up an RBD snapshot is not possible in the way CloudStack wants to do it.
Creating templates from RBD volumes works well; creating a VM from a template, however, is still hit-and-miss.
NFS primary storage is also still required, as you are not able to run your System VMs from RBD; they will need
to run on NFS.
Other than these points, you can run instances with RBD-backed disks.
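For context, defining an RBD-backed storage pool through libvirt-java
looks roughly like this; the monitor host, pool name and secret UUID
below are placeholders.

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class RbdPoolSketch {
        public static StoragePool createRbdPool() throws Exception {
            Connect conn = new Connect("qemu:///system");
            // Placeholder monitor host, Ceph pool name and secret UUID.
            String xml =
                "<pool type='rbd'>" +
                "  <name>cloudstack-rbd</name>" +
                "  <source>" +
                "    <name>rbd</name>" +
                "    <host name='ceph-mon.example.org' port='6789'/>" +
                "    <auth username='admin' type='ceph'>" +
                "      <secret uuid='00000000-0000-0000-0000-000000000000'/>" +
                "    </auth>" +
                "  </source>" +
                "</pool>";
            return conn.storagePoolCreateXML(xml, 0);
        }
    }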
Speed up fencing of a host if there are multiple storage pools in a cluster.
The issue is as follows:
1. When CloudStack detects that a host is not responding to ping
requests it'll send a fence command for this host to another host in the
cluster.
2. The agent takes a long time to respond to this check if the storage
is fenced. This is because the agent checks if the first host is writing
to its heartbeat file on all pools in the cluster. It does this in a
sequential manner on all storage pools.
The fix gets rid of the sleep/wait during HA. The behavior is now
similar to XenServer.
RB: https://reviews.apache.org/r/6133/
Send-by:devdeep.singh@citrix.com
[Problem]
CloudStack uses a significant amount of third party software. As part of the move to ASF there is a certain set of licenses that are compatible with ASF policy. We need to make sure that every dependency we have is in that set. If it's not we have to remove it.
[Solution]
First set: Removing JnetPcap.
[Reviewers]
Edison Su, David Nalley
[Testing]
[Test Cases]
Executed ANT build-all successfully after removing JnetPcap and its respective dependencies.
[Platform]
Fedora release
Signed-off-by: Pradeep <pradeep.soundararajan@citrix.com>
The default_network_rules_systemvm method in security_group.py only created the appropriate rules for
one bridge.
This however leads to traffic not being forwarded to the virtual machine when the system VMs
(console & storage) have different bridges in basic networking.
This patch makes sure rules are generated for all target devices based on their source device/bridge.
It however excludes the LinkLocalBridge since no filtering is needed on that bridge.
1. Putting a host into Maintenance sends a Maintenance command to the host, telling the host not to reconnect to the mgt server
2. Cancelling Maintenance will ssh into the KVM host and restart cloud-agent, which will reconnect the host
Changes:
- Added a new interface 'PluggableService'
- Any component that can be packaged separately from CloudStack can implement this interface and provide its own property file listing the API commands the component supports
- As an example, VirtualNetworkApplianceService has been made pluggable and a new configureRouter command is added
- ComponentLocator reads all the pluggable services from componentLibrary or from components.xml and instantiates the services.
- As an example, DefaultComponentLibrary adds the pluggable service 'VirtualNetworkApplianceService'
- Also, components.xml.in has an entry to show how a pluggable service can be added, but it is commented out.
- APIServer now reads the commands for each pluggable service and when a command for such a service is called, APIServer sets the required instance of the pluggable service in the command.
- To do this a new annotation '@PlugService' is added that is processed by APIServer. This eliminates the dependency on BaseCmd to instantiate the service instances.
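A hypothetical sketch of what a pluggable service might look like (the
method name and property-file name below are illustrative; the real
interface and file format may differ):

    // Illustrative sketch only: a separately packaged component implements
    // PluggableService and ships a property file listing the API commands it
    // supports, which the APIServer then reads.
    public interface PluggableService {
        // Classpath location of the property file listing supported API commands.
        String getPropertiesFile();
    }

    class VirtualNetworkApplianceServiceSketch implements PluggableService {
        @Override
        public String getPropertiesFile() {
            return "virtualrouter_commands.properties";   // hypothetical file name
        }
    }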
1. create a VM from an iso/template whose guest OS type is "Windows*"
2. attach a disk to the Windows VM
3. install the virtio disk driver
4. stop the VM, and create a template from it, choosing the guest OS type "Other PV"
5. create a VM from the template created at step 4. This VM will then have a virtio disk and a virtio NIC. The virtio NIC driver needs to be installed after the VM boots up.
Description:
APIs:
- Two new APIs, authorizeSecurityGroupEgress and revokeSecurityGroupEgress, are added. These two APIs are similar to the ingress rule APIs.
- authorizeSecurityGroupEgress: Authorizes a particular egress rule for this security group. Usage of the API is very similar to that of authorizeSecurityGroupIngress except that instead of a source cidr there will be a destination cidr. By default, like ingress, all outgoing flows are blocked.
- revokeSecurityGroupEgress: It is similar to the revokeSecurityGroupIngress API; it removes the egress rule.
- The listSecurityGroup API's response changed. It includes the egress list apart from the existing ingress rules in the output of the API.
Hypervisors:
- It is implemented for Xen and KVM.
Pending tasks: Blocking using destination security groups.
Previous commits: c9fda641673df7701f44963ef27e1d488f121219, 24e4e44b8f0712a37147a3777833de3f9e24829e
Previous commit: c9fda641673df7701f44963ef27e1d488f121219 (this was under bug 1067, a typing error)
Changes: 1) partially implemented listing of egress rules along with ingress rules.
2) partially implemented egress rules for KVM