For the ResizeVolume API command:
1. If hypervisor resource throws an exception, handle the NPE thrown by the job framework.
2. Improve user error message in case of RuntimeException by throwing the exception instead of 'Unexpected Exception'.
We don't need an external script to investigate the format of the RBD volume;
we only have to ask libvirt to resize the volume, and it will ask librbd to
do so.
1. Fixed JSON response deserialization. While creating a mock, a JSON string can be passed; it will be deserialized into a response object and returned from the agent layer.
For example, for a mock corresponding to StopCommand, a response like "{"com.cloud.agent.api.StopAnswer":{"result":false,"wait":0}}" can be passed (see the sketch after these examples).
2. Added the ability to mock PingCommand (returned as part of the getCurrentStatus() agent method). As part of this, a mocked VM state report can be returned.
For example: {"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"newStates":{},"_hostVmStateReport":{"v-2-VM":{"state":"PowerOn","host":"SimulatedAgent.e6df7732-69b2-429b-9b6a-3e24dddfa2e0"},"i-2-5-VM":{"state":"PowerOff","host":"SimulatedAgent.e6df7732-69b2-429b-9b6a-3e24dddfa2e0"}},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType":"Routing","hostId":3,"contextMap":{},"wait":0}}
in the XenServer pool for GRE tunnel networks
The fix uses the XenServer-recommended way,
Network.other_config:assume_network_is_shared=true,
which ensures the bridge is created automatically on hosts in the pool for
GRE tunnel networks. The fix also gets rid of error-prone custom logic that ensured
the bridge was created by plugging a VIF connected to the
GRE tunnel network into dom0.
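A rough sketch of how that flag could be set through the XenServer Java SDK; the addToOtherConfig call is an assumption based on the generated XenAPI bindings, not a verbatim excerpt of the fix, and real code would also remove any pre-existing key first:

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.Network;

    // Hedged sketch only: marks the GRE tunnel network as shared so XenServer
    // creates the bridge on every pool member automatically.
    public class SharedNetworkFlagSketch {
        static void markShared(Connection conn, Network network) throws Exception {
            // Assumed binding; production code should remove an existing key before adding it.
            network.addToOtherConfig(conn, "assume_network_is_shared", "true");
        }
    }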
Conflicts:
plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/CitrixResourceBase.java
xen cluster
XenServer does not create a bridge automatically when a VIF from domU is connected
to an internal network, so there is logic to force bridge creation by
creating a VIF in dom0 connected to the GRE tunnel network. But there was no
logic to delete that VIF after the bridge got created. This fix ensures
the dom0 VIF is deleted once at least one domU VIF is connected to the
network.
In situations where libvirt lost the storage pool the KVM Agent will re-create the
storage pool in libvirt.
This can happen when libvirt is restarted, for example.
The object returned internally was missing essential information like the sourceDir
(aka the Ceph pool), the monitor IPs, cephx information and such.
In this case the first operation on this newly created pool would fail. All operations
afterwards would succeed.
to get nic info of 0.0.0.0. To get it, it iterates through all nics and returns the last NIC in
the list if it doesn't match any IP address. In case the last NIC doesn't have a unicastAddress,
the Hyper-V agent will fail to start. We don't need the IP address during initialization. It gets
initialized with the StartupCommand later.
last VM from the VPC is deleted on a host
OVS distributed routing: ensure the bridge is deleted when the last VM from the
VPC on a host is deleted. This fix ensures that the bridge is
destroyed.
the folder column. For an SMB share the SMB credentials are in the query string of the path.
Before adding the path, the SMB share's query string should be cleaned up.
that case cloudstack was not doing anything and not updating the state of the VMs to Stopped.
Now the agent returns an empty hostvmstatereport list. The management server will then update the
VM state to Stopped (instead of not acting upon the returned state).
Check if the switch name detected from the traffic label for management, storage, and control traffic is null before falling back to the default value.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
We used to create the snapshot after the copy from Secondary Storage,
but it could be that we never use the snapshot.
Now we check if the snapshot exists prior to performing the cloning operation
Since we use qemu-img to copy from RBD to Secondary Storage we no
longer have to force to RAW images, but can stick with QCOW2
When the snapshot backups are in QCOW2 format they can easily be deployed
again when restoring from a backup.
The KVMStorageProcessor no longer has a hardcoded if-statement which sets
RBD volumes to RAW; this is now handled in the LibvirtStorageAdaptor.
The Management Server still sends QCOW2 as the format. That's a fix for later.
fix mismatch between ovs-host-setup and ovs_host_setup used in the Libvirt resource and
scripts
plug the nic into the OVS bridges created for the tunnel network.
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/OvsVifDriver.java
In case some environments have different performance, or we find some commands
take too long to execute, one global configuration item is introduced to
specify the "timeout in seconds per command in aggregation commands".
By default it is 3 seconds. If the admin feels it's too long, it can be adjusted to as
low as 1 second, which still runs well on my machine.
Conflicts:
setup/db/db/schema-430to440.sql
Added a new flag 'checkBeforeCleanup' to StopCommand; based on it, a check is done to see if the VM is running on the HV host (see the sketch below).
If the VM is running then it is not stopped and the operation bails out.
Also modified the MS code to call the StopCommand with the appropriate value for the flag based on the context.
Currently it is only set to 'true' when called from the new vmsync logic based on the power state of the VM. For the rest it
is set to 'false', meaning no change in behaviour.
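A simplified sketch of the intended flow; the classes below are illustrative stand-ins, not the actual CloudStack StopCommand/StopAnswer types:

    // Illustrative stand-ins, not the real CloudStack types.
    class StopCommandSketch {
        private final String vmName;
        private final boolean checkBeforeCleanup;

        StopCommandSketch(String vmName, boolean checkBeforeCleanup) {
            this.vmName = vmName;
            this.checkBeforeCleanup = checkBeforeCleanup;
        }

        String getVmName() { return vmName; }
        boolean checkBeforeCleanup() { return checkBeforeCleanup; }
    }

    class StopAnswerSketch {
        final boolean success;
        final String details;
        StopAnswerSketch(boolean success, String details) { this.success = success; this.details = details; }
    }

    class HypervisorResourceSketch {
        StopAnswerSketch execute(StopCommandSketch cmd) {
            // With the flag set (the vmsync power-state path), bail out instead of
            // stopping a VM that is still running on the host.
            if (cmd.checkBeforeCleanup() && isVmRunning(cmd.getVmName())) {
                return new StopAnswerSketch(false, "VM " + cmd.getVmName() + " is still running; not stopping");
            }
            // ... normal stop path continues here ...
            return new StopAnswerSketch(true, null);
        }

        private boolean isVmRunning(String vmName) {
            return false; // hypervisor-specific check elided
        }
    }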
This reduces the amount of time and storage it takes dramatically. We no longer
do a full copy, but a sparse copy. The destination image is still in RAW
format, but we only copy over used blocks.
Qemu is also better at doing this than us doing it in Java code.
Otherwise an RBDException will be thrown with the message that the snapshot
isn't protected.
modified: plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
clusters
across the pool members an internal network, once created, is visible to all the
members but the bridge is not necessarily created. This fix enables
plugging a temp VIF connected to the internal network into dom0 and then
unplugging it. This action creates a bridge for the network on the host.
This saves the step of writing to a temporary image in /tmp first before
writing to RBD.
This is possible due to a new version of librbd. With the rbd_default_format
setting we can now force qemu-img to create format 2 RBD images.
This is available since Ceph version 0.67.5 (Dumpling).
Add executeInVR() with timeout interface to VirtualRouterDeployer
AggregationControlCommand with Action.Finish may take longer than a normal command,
since it executes all the commands in one go, and it may result in an
SSH timeout for SshHelper or other mechanisms communicating with the VR.
Introduce a new executeInVR() interface with an added timeout period for waiting
for the FinishAggregationCommand to complete execution.
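A sketch of what the extended deployer contract could look like; the names and the result type are simplified stand-ins for the CloudStack originals:

    // Simplified stand-ins for the CloudStack interfaces.
    interface ExecutionResultSketch {
        boolean isSuccess();
        String getDetails();
    }

    interface VirtualRouterDeployerSketch {
        // Existing entry point: uses the default per-command timeout.
        ExecutionResultSketch executeInVR(String routerIp, String script, String args);

        // New overload: callers such as the aggregation-finish path can wait longer
        // than the default SSH timeout for the command to complete.
        ExecutionResultSketch executeInVR(String routerIp, String script, String args, int timeoutSeconds);
    }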
to flow rules and applies them on the bridge
add an event subscriber in OvsTunnelManager that listens to
replaceNetworkAcl events. On an event it sends the updated policy info to all
the hosts in the VPC
- get the hosts a VPC spans, given the vpc id
- get the VMs in the VPC
- get the hosts a network spans
- get the VPCs a host is part of
- get the VMs of a VPC on a host
introduces the capability to build a physical topology representation of a
VPC. This json file is encapsulated in
OvsVpcPhysicalTopologyConfigCommand, and is used to send the full topology
to hypervisor hosts. On the hypervisor this json config can be used to set up
tunnels, configure bridges, add flow rules etc.
Ovs GURU, to use a different broadcast scheme VS://vpcid.gerkey for the
networks in a VPC that use distributed routing
each VIF and tunnel interface to carry the network UUID in other/options
config
storage pool (SMB) and attached to a running vm can be live migrated to another shared storage
pool. Also a vm and its volumes can be live migrated to another host and storage pool respectively.
2) Corrected some logging in MidoNetPublicNetworkGuru - removed the .toString method call on the objects in the log body, as toString is called on the object by default when using log4j
CLOUDSTACK-4762 : Enabling VGPU support for XenServer.
This feature is to enable the GPU-passthrough and vGPU functionality,
with the help of this feature, admins/users will be able to leverage
the GPU graphics unit power by deploying a virtual machine with GPU or
vGPU support, or by changing the service offering of an existing VM
at any later point of time. These GPU/vGPU-enabled VMs are able to run
graphical applications.
For now, this feature is only supported with XenServer hypervisor but
can be extended to add the support of other hypervisors.
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
Update volume path for every volume belonging to the VM to the corresponding new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/org/apache/cloudstack/storage/motion/VmwareStorageMotionStrategy.java
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
If a volume has been renamed upon migration update its volumePath to that of the new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
With VirtIO enabled on KVM, FreeBSD 10 supports VirtIO for both the
network and the disks. This frees us from IDE and E1000, which should
also improve performance.
By default all network disks are in RAW format. Gluster works fine with
QCOW2 which has some advantages.
Disks are by default in QCOW2 format. It is possible to run into
a mismatch, where the disk is in QCOW2 format, but QEMU gets started
with format=raw. This causes the virtual machines to lockup on boot.
Failures to start a virtual machine can be verified by checking the log
of the virtual machine and comparing it with the output of 'qemu-img info'.
In /var/log/libvirt/qemu/<VM>.log find the URL for the drive:
-drive file=gluster+tcp://...,format=raw,..
Compare this with the 'qemu-img info' output of the same file, mounted
under /mnt/<pool-uuid>/<img-uuid>:
# qemu-img info /mnt/<pool-uuid>/<img-uuid>
...
file format: qcow2
...
This change passes the format when creating a disk located on RBD
(RAW only) or Gluster (QCOW2).
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The support for Gluster as Primary Storage is mostly based on the
implementation for NFS. As with NFS, libvirt can address a Gluster environment
through the 'netfs' pool-type.
PrepareForMigrationCommand, so that the destination hypervisor can
mount the pool. This further exposed an issue for KVM where the iso
was not getting cleaned up upon successful migration; fixed as well.
By default only the Integers between -128 and 127 are cached (unless overridden by the java.lang.Integer.IntegerCache.high system property).
If the inbound or outbound values are higher, the reference comparison won't work.
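A minimal demonstration of the behaviour:

    public class IntegerCacheDemo {
        public static void main(String[] args) {
            Integer small1 = 100, small2 = 100;   // within -128..127: same cached instance
            Integer big1 = 1000, big2 = 1000;     // outside the cache: distinct objects

            System.out.println(small1 == small2);  // true (cached)
            System.out.println(big1 == big2);      // false (reference comparison fails)
            System.out.println(big1.equals(big2)); // true (value comparison)
        }
    }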
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
- minor resource leak cleaned up
- cpu-speed reading method extracted
- test added
- logging added in case of exception
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
This saves us a lot of code and libvirt is probably a better
place to do this.
libvirt-java now has the support we want, so we can now resize volumes
with libvirt.
(C)LVM volumes can't be resized using libvirt, so we have to
invoke a resize script for that.
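A minimal sketch of such a resize through libvirt-java; the connection URI, pool and volume names are placeholders:

    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;
    import org.libvirt.StorageVol;

    public class ResizeVolumeSketch {
        public static void main(String[] args) throws LibvirtException {
            Connect conn = new Connect("qemu:///system");
            StoragePool pool = conn.storagePoolLookupByName("my-pool");
            StorageVol vol = pool.storageVolLookupByName("my-volume");

            long newSizeBytes = 20L * 1024 * 1024 * 1024; // grow to 20 GiB
            vol.resize(newSizeBytes, 0);                  // libvirt asks the backend (e.g. librbd) to do the resize
            conn.close();
        }
    }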
By default the client_mount_timeout setting in librados is 300 seconds,
but that causes the connection to the Ceph cluster to block for 5 minutes
if the Ceph cluster is not available.
This patch is not ideal, but it mitigates the problem for now.
At a later point all this librados/librbd code should go back to libvirt
again, but the current versions of libvirt in the distributions are
too old for all the features we require.
For now this should prevent the CloudStack agent blocking for 5 minutes
when the Ceph cluster isn't available.
This is also tracked at the Ceph tracker: http://tracker.ceph.com/issues/6507
To obtain network read/write statistics, multiply the sample duration by the
average of the particular performance metric obtained over the sample period.
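Toy arithmetic for the calculation, with made-up numbers:

    public class NetworkStatsSketch {
        public static void main(String[] args) {
            double avgKBps = 512.0;          // average of the performance metric over the sample period
            int sampleDurationSeconds = 300; // length of the sample window

            double totalKB = avgKBps * sampleDurationSeconds;
            System.out.println("Approx. bytes transferred: " + (long) (totalKB * 1024));
        }
    }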
Added a fix for the exception and listing. Details are mentioned under the bug.
Post the fix, the simulator works fine.
Signed-off-by: Santhosh Edukulla <Santhosh.Edukulla@citrix.com>
Signed-off-by: Koushik Das <koushik@apache.org>
Introduced by:
commit ac65f8fddf
Author: Hugo Trippaers <htrippaers@schubergphilis.com>
Date: Mon Jan 20 18:03:02 2014 +0100
CLOUDSTACK-5884 make getTargetSwitch(NicTO nicTo) do all the work to select
switch name, type and vlan token. Change preference to use the tags set on the
physical network.
the management server was restarted. The template.properties file created for the
template has the format field in upper-case. This caused the template service
not to recognise the format, and it removed the entry from the template_store_ref
table in the db. Fixed the format field in template.properties.
null pointer exception was getting generated when a VolumeTO object was
serialized to create an answer object. If local storage is used, the uri
field will be null. Added null checks for the same.
We hit these exceptions whenever a session held by the management server with the old 5.1 vCenter server
is used to make resource calls to the new 5.5 vCenter.
Validate a vCenter session context before it is being used to make a resource call.
And if the context is invalid then discard the context and retrieve a new one.
During the invalidation of an old context handle the context disconnect better
by catching the appropriate exception and returning a newly created context.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareContextFactory.java
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareSecondaryStorageResourceHandler.java
vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContext.java
For a detached volume, don't try to find the associated VM on the hypervisor/peer hypervisor host.
By default create a worker VM to perform snapshot operations.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
on hyperv. There were multiple issues here. Upload volume was actually
failing because the post download check for vhd on the cifs share was
unsuccessful. Also the agent code wasn't parsing the volume path correctly.
Fixed it too.
Don't package the OVF and VMDK files into OVA after a template is created from volume.
Since the packaging process involves reading from and writing to the NFS mount, it doubles the amount of data that needs to be moved around
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
Instead of injecting an object of VolumeOrchestrationService into VmwareResource, we now populate the command object (MigrateVolumeCommand here) with the required information. Thus we don't need the volume orchestration service to query that information from the resource.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
is attached a hard disk drive is created on the scsi controller. On detach
the data disk is removed from the drive but the disk drive is left behind.
On reattach the agent was again trying to create a disk drive while it was
already present. Fixed the agent code to look up the disk drive while
attaching, and only if one is not found to create the drive for
attaching a data disk.
The agent was always creating a disk with image format vhdx, but the
cloudstack management server defaults to image format vhd for hyperv.
Updated the agent code to be consistent with what cs expects. All disks
are now created with image format vhd.
When VM is not running, existing code is unable to retrieve associated cluster's Id. Now we will try to get this information using previous host where the VM was running.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java
DetachISO succeeds even though the detach operation fails because the cdrom is locked by the VM, as it was mounted inside the VM.
Detect whether the cdrom is locked or not. If locked, fail the detach operation and warn the user to unmount before detaching the iso/cdrom device.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
deployment fails because of that, as cloudstack tries to deploy it on a host which is
actually down. An investigator wasn't present for hyper-v, so cloudstack wasn't able to
determine the status of the host. Wrote an investigator for hyper-v which checks with
other hosts in the cluster for the status of the host being investigated.
host is put in maintenance mode. The migrate flag wasn't set to true in
the maintain answer. This caused cloudstack not to schedule a migration
work item for vms on the host. Made a change to set the migrate flag to
true in the maintain answer.
brought back up after being down for a few hours, snapshot jobs do not get
triggered with reason "there is other active snapshot tasks on the
instance to which the volume is attached".
The systemvm iso file is copied only when a systemvm or router vm is to be started on
a host. The file gets copied to the secondary storage. The mount point used is the one
that has permissions for regular user to mount a share.
CLOUDSTACK-5275: The failure was because a secondary storage wasn't available when the
host was added. When a setup is done through wizard the hosts get added before the
secondary storage. CS was trying to copy the systemvm iso to secondary and it used to
fail if it wasn't available. Made a change to copy the iso only when a systemvm is
being started on a host.
CLOUDSTACK-5202: Made changes to clean up mount points on stop and start.
All the three are related fixes; so putting a fix in one commit.
While VRs are upgraded, cloudstack should still be able to work with older
/newer versions of the scripts within VRs. To allow this, the simulator
needs to send the version strings for the domr version response or the
VR start is interrupted.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
In VMware during VM start the existing disk information is used to configure the VMs. So even if a new disk is created using the new template VM continues to use the old disk.
Once the old root disk is marked for destroy force expunge it and sync the new disk into the VM folder before VM start
- dividing a long by a long resulted in loss of precision for both network and IO (see the sketch below)
- unit tests included
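A minimal illustration of the precision loss and the fix:

    public class PrecisionDemo {
        public static void main(String[] args) {
            long bytes = 7L;
            long seconds = 2L;

            System.out.println(bytes / seconds);          // 3   -- integer division truncates
            System.out.println((double) bytes / seconds); // 3.5 -- cast before dividing keeps precision
        }
    }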
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
replace vlanid with broadcast uri to support vxlan, to identify whether the id is a VLAN ID or a VNI
Signed-off-by: ynojima <mail@ynojima.net>
Signed-off-by: Hugo Trippaers <htrippaers@schubergphilis.com>
Fixing rebase issues after integrating with wmi v2 implementation.
Removing the executable attribute from some files.
Remove the unused wmi v1 interface file.
Unit test for DestroyCommand implementation in hyperv agent.
Fixed VM state changes w.r.t wmi version 2 changes
If a VM is already running, deploy virtual machine shouldn't fail and throw an exception.
Don't run vhd-util on templates which are present on CIFS. Hyperv uses cifs as secondary storage
Add a SCSI controller by default. This is needed so that data volumes can be added/removed
on a running vm.
Remove the hard coded path in the agent code.
Rat fixes for hyper agent. Added the missing headers in files where it was missing.
Copy the iso to the secondary storage and let the hypervisor agent know of its
location during setup. The agent will copy it over once it handles the setup
command.
Changes for attaching the systemvm iso to the virtual router while booting it -
part 2. The agent copies over the systemvm iso during setup. When a
virtual router is being booted it attaches the iso to it.
Hyperv unit tests for the agent. Unit tests are written using NSubstitute and XUnit and
they test the create, stop and start commands in the agent.
Fix to make sure the hyperv agent and the functional tests are working after the unit tests update.
Fixing the warnings while running unit tests for the hyperv agent.
Added a new switch for functional tests.
Update the unit test to create a fake vhd file on the fly and run the test. The file is removed when the test completes.
Fix for functional tests. The test was failing to build on java 1.6.
Fix to bring up SSVM and Console Proxy systemvms
Fix to discover the seeded template to bring up the systemvm's for the first startup and fixed UNC path issues
Fixed the UNC path for copying the files from CIFS, and from seeded template
Fixed the issues for ssvm and cpvm to wait until they get configured and then return the status. Made the checksum method return true.
Fixed HypervDirectConnect resource to figure out the status of systemvms, Need to fix this issue by connecting to public/control ip instead of local ip
checksum is failing for the copied system vm images, currently bypassing.
Implemented commands that are required for VR to bootup and Vm deployment to work
Modified hyperv agent code, to deploy VR with Boot Args, boot args passed to VR using KVP Exchange Component.
Fix for VR to boot up and get configured with boot args, Fixed issue in VolumeOrchestrator
Implemented SetFirewallRulesCommand in HyperV Resource
Implemented VR network commands to provide the necessary services from VR
Fixed hyperv localstorage path encode url issue. encode is converting space to '+'
architecture allows additional functionality to be easily added. Incorporating the plugin in CloudStack will allow
the community to participate in improving the features available with Hyper-V. The plugin uses a Direct Connect
Agent architecture described here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Progress
Add ability to pass kvp data via the key cloudstack-vm-userdata
Rearrange code to make it clearer what .NET objects are being used.
Test failures are easier to deal with if test key is not deleted.
Acquire management/pod ip for control ip when VR deploys in HyperV
Fixed deletion of VMs on a hyperv host when the mgmt server gets restarted due to HA
Implementation for attach iso command. Attaches an iso to a given vm.
Detail: getPhysicalDisk() was not matching on volumes with .raw, so
instead setting disk format to QCOW2.
BUG-ID: CLOUDSTACK-5018
Bugfix-for:
Reviewed-by:
Reported-by:
Signed-off-by: John Kinsella <jlk@stratosec.co> 1383287538 -0700
Now a VPN connection can be created as "passive", which enables the remote
peer to initiate the connection. So it's possible for a VPC VR to
establish the connection to another VPC VR of CloudStack.
A test case is also included.
The test case creates 2 VPCs and uses VPN to connect them.
1) vxlan will use the bridge scheme 'brvx-<vni>'. Multiple physical networks can host the guest
traffic type with vxlan isolation, so long as they don't use the same VNI range.
2) Guest traffic labels can be a physical interface if a bridge by the given name is not found.
Normally we take the traffic label name, find the matching bridge, then resolve that to a
physical interface. Then we create guest bridges on that interface. Now we can just
specify the interface (see the sketch below).
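A hedged sketch of the naming scheme and the label fallback described above; the helper names are made up for illustration:

    import java.util.Collections;
    import java.util.Set;

    public class VxlanLabelSketch {
        // Per-VNI guest bridge, independent of the physical network it rides on.
        static String guestBridgeFor(long vni) {
            return "brvx-" + vni;
        }

        // If no bridge matches the traffic label, the label itself is treated as the
        // physical interface on which guest bridges are created.
        static String physicalInterfaceFor(String label, Set<String> knownBridges) {
            if (knownBridges.contains(label)) {
                return "<interface enslaved by bridge " + label + ">"; // real lookup is system-specific
            }
            return label;
        }

        public static void main(String[] args) {
            System.out.println(guestBridgeFor(5001)); // brvx-5001
            System.out.println(physicalInterfaceFor("eth2", Collections.singleton("cloudbr0"))); // eth2
        }
    }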
When a ROOT volume is created from base template, if a folder already exists for the ROOT volume's VM then replace the old ROOT disk files with the new one.
The simulator uses the default planners of cloudstack and does not
require a separate planner context (as of now). This was just c&p from
baremetal planners.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
xs 6.1/6.2 introduce a new virtual platform, so there are two virtual platforms. The windows PV driver version must match the virtual platform;
this patch tracks PV driver versions in vm details and template details.
Anthony
Detail: Checks for other Ethernet interface names use startsWith(),
whereas the p1p1 style interface uses a regex that doesn't allow for
trailing characters, and so blocks vlan IDs. Fixed.
BUG-ID: CLOUDSTACK-4884
Bugfix-for: 4.2.1
Reviewed-by:
Reported-by:
Signed-off-by: John Kinsella <jlk@stratosec.co> 1381965250 -0700
Introduction of a new Transaction API that is more consistent with the style
of Spring's transaction management. The existing Transaction class was renamed
to TransactionLegacy. All of the non-DAO code in the management server has been
updated to use the new Transaction API.
I don't think host kernel version has any bearing on it. Original code
was tested with CentOS 6.3 and 6.4, but it seems to succeed or fail per-host,
e.g. a fast host might work and a slow host might not. I was getting intermittent
failures with ubuntu 12.04.3 prior to this patch.
These changes are a joint effort between Edison and me to refactor some
of the code around snapshotting VM volumes and creating
templates/volumes from VM volume snapshots. In general, we were working
towards allowing PrimaryDataStoreDrivers to create snapshots on primary
storage and not requiring the snapshots to be transferred to secondary
storage.
High level changes:
-Added uuid to NfsTO, SwiftTO & S3TO to cut down on the requirement of
PrimaryDataStoreTO and ImageStoreTO which don't really serve much of a
purpose
-Initial work towards enabling reverting VM volumes from snapshots
-Added hypervisor commands for introducing and forgetting new hypervisor
objects (snapshots, templates & volumes)
Signed-off-by: Edison Su <sudison@gmail.com>
ACS is now comprised of a hierarchy of spring application contexts.
Each plugin can contribute configuration files to add to an existing
module or create its own module.
Additionally, for the mgmt server, ACS custom AOP is no longer used
and instead we use Spring AOP to manage interceptors.
The managed context framework provides a simple way to add logic
to ACS at the various entry points of the system. As threads are
launched and run, listeners can be registered for onEntry or onLeave
of the managed context. This framework will be used specifically
to handle DB transaction checking and setting up the CallContext.
This framework is needed to transition away from ACS custom AOP to
Spring AOP.
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh script that handles bridge/vxlan interface manipulation
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses same tables that VLAN uses to store vNet allocation status.
Known Issue and/or TODO:
- Some resource still says 'VLAN' in log even if VXLAN is used
- in UI, "Network - GuestNetworks" dosen't display VNI
-- VLAN ID field displays "N/A"
- Documentation!
Signed-off-by: Toshiaki Hatano <haeena@haeena.net>
Libvirt reports:
org.libvirt.LibvirtException: Storage volume not found: no storage vol
with matching name
in some cases, if the volume is created on one kvm host, while accessed
from other host.
This is possibly due to concurrent (read/write) access to the storage.
The current fix is to try several times, and wait for 30 seconds for
each retry.
If the issue is still there, then we need to sync the storage pool access
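A hedged sketch of the retry loop using libvirt-java; the retry count is illustrative, not the exact value in the fix:

    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;
    import org.libvirt.StorageVol;

    public class VolumeLookupRetrySketch {
        static StorageVol lookupWithRetry(StoragePool pool, String volName)
                throws LibvirtException, InterruptedException {
            LibvirtException last = null;
            for (int attempt = 1; attempt <= 3; attempt++) {
                try {
                    return pool.storageVolLookupByName(volName);
                } catch (LibvirtException e) {
                    last = e;                // "Storage volume not found: ..."
                    if (attempt == 3) {
                        break;
                    }
                    pool.refresh(0);         // ask libvirt to rescan the pool
                    Thread.sleep(30_000);    // wait before the next attempt
                }
            }
            throw last;
        }
    }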
CLOUDSTACK-4457:
CLOUDSTACK-4459:
harden kvm getVolume. It's possible that a volume created on one kvm host won't show up on another host; try more times to refresh the storage pool if the volume doesn't show up
Conflicts:
engine/storage/integration-test/test/org/apache/cloudstack/storage/test/FakeDriverTestConfiguration.java
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
There still exist two issues after Edison's commits.
(1) Migration from new hosts to old hosts failed.
The bridge name on old host is set to cloudVirBr* if network.bridge.name.schema is set to 3.0 in /etc/cloudstack/agent/agent.properties, but the actual bridge name is breth*-* after running cloudstack-agent-upgrade.
(2) all ports of vms (Basic zone, or Advanced zone with security groups) on old hosts are open, because the iptables rules are binding to device (bridge) name which is changed by cloudstack-agent-upgrade.
After this, the KVM upgrade steps :
a. Install 4.2 cloudstack agent on each kvm host
b. Run "cloudstack-agent-upgrade". This script will upgrade all the existing bridge name to new bridge name, and update related firewall rules.
c. install a libvirt hook:
c1. mkdir /etc/libvirt/hooks
c2. cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
c3. chmod +x /etc/libvirt/hooks/qemu
c4. service libvirtd restart
c5. service cloudstack-agent restart
Signed-off-by: Wei Zhou <w.zhou@leaseweb.com>
The migrate method from libvirt supports passing down a different XML for running
the instance on the target hypervisor.
This enables VNC to bind to the private IP address of the hypervisor, and during
migration this will be changed to the private IP address of the target host.
This way VNC doesn't listen world-wide and is much safer.
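A small sketch of the idea, rewriting the VNC listen address in the domain XML before handing it to the migration call (a plain regex replace, assuming a single-quoted listen attribute):

    import java.util.regex.Pattern;

    public class VncListenRewriteSketch {
        static String replaceListenAddress(String domainXml, String targetPrivateIp) {
            // Replace whatever address VNC currently listens on with the target host's private IP.
            return Pattern.compile("listen='[^']*'")
                          .matcher(domainXml)
                          .replaceAll("listen='" + targetPrivateIp + "'");
        }

        public static void main(String[] args) {
            String xml = "<graphics type='vnc' port='5900' listen='10.0.0.11'/>";
            System.out.println(replaceListenAddress(xml, "10.0.0.12"));
        }
    }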
when secondary storage is mounted as read-only, changing permissions of files on it will fail. But we should still stick to the current mount point instead of
returning a wrong mount point /mnt/sec
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh script that handles bridge/vxlan interface manipulation
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses same tables that VLAN uses to store vNet allocation status.
Known Issue:
- Some resource still says 'VLAN' in log even if VXLAN is used
- in UI, "Network - GuestNetworks" dosen't display VNI
-- VLAN ID field displays "N/A"
This failed due to a RAW -> QCOW2 conversion (again).
The current code still makes too many assumptions about everything always
being QCOW2 while that is not always true.
UI support for baremetal PXE server
CloudStack CLOUDSTACK-1364
UI support for baremetal DHCP server
Conflicts:
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BareMetalPingServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalKickStartServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalPxeManagerImpl.java
KVM - Create template from volume
Vmware - Create template from volume / Create template from snapshot
send the physical size in the copycommand which accordingly will populate template store ref and the usage_event tables with the right physical size
Signed off by : nitin mehta<nitin.mehta@citrix.com>
XS Creating templates from volume - send the physical size in the copycommand which accordingly will populate template store ref and the usage_event tables with the right physical size
Signed off by : nitin mehta<nitin.mehta@citrix.com>
If all the VM's volumes are on a zone-wide primary storage pool then live migration of the VM would not involve storage migration. Hence the MigrateVM API would be called instead of MigrateVMWithVolume. So far PrepareForMigrationCommand handled scenarios of a VM moving across hosts within a cluster, but with zone-wide primary storage in the picture this command needs to handle scenarios of a VM moving across clusters. Try to find the VM in the datacenter if not found within the cluster.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Simulator should revert back to CLOUD_DB after its operations on
SIMULATOR_DB or the cloudstack connections go to the simulator instead
of cloud.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 3d39716c8f)
Although libvirt supports resizing RBD volumes (and other formats) the
Java bindings (libvirt-java) don't.
Right now we use the Java bindings for librbd to handle the resizing for us,
but in the future this should be done by libvirt rather than these
Java bindings.
- ManagementServerSimulatorImpl is not injected by default context.
configureSimulatorCmd API was loaded as part of it. Use
SimulatorManagerImpl as PluggableService to inject configureSimulator
API.
- Remove unused ManagementServerSimulatorImpl.
- Rename ConfigureSimulator to ConfigureSimulatorCmd for uniformity with
all API Cmds
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 0c294a50a8)
Make VSM-specific input parameters optional while adding a VMware cluster where no traffic is chosen to use the Nexus 1000v dvSwitch when the cloud-level vSwitch is Nexus 1000v.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CS used to access the vnc server in xenserver dom0 to get the VM console; now CS moves to using the XenServer console API. The getvncport plugin is not needed any more.
Remove the code related to getvncport in XenServer.
While finding endpoint to send the migrate volume command, in case of zone wide storage pool, storage engine is picking a host among all connected hosts randomly. Hence a host may be from different cluster t
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Detail: Cloudstack tries to stop VMs all the time, for all sorts of reasons,
but usually just to get into a known state. Libvirt throws an exception of
'Domain not found' when attempting to stop a VM that doesn't exist. This causes
problems for troubleshooting real issues. Domain not found should equate
to success if trying to stop.
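A hedged sketch of that behaviour with libvirt-java; matching on the exception message is illustrative, the real code may inspect the libvirt error code instead:

    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class StopDomainSketch {
        static boolean stopIfRunning(Connect conn, String vmName) throws LibvirtException {
            Domain dm;
            try {
                dm = conn.domainLookupByName(vmName);
            } catch (LibvirtException e) {
                if (e.getMessage() != null && e.getMessage().contains("Domain not found")) {
                    return true;   // already gone -- the desired end state, so treat as success
                }
                throw e;
            }
            dm.destroy();          // hard stop; real code tries a graceful shutdown first
            return true;
        }
    }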
BUG-ID: CLOUDSTACK-4011
Bugfix-for: 4.2
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1375825281 -0600
Detail: KVM recently got a patch that did away with a few dozen ssh calls
when programming virtual router (CLOUDSTACK-3163), saving several seconds
for each vm served by the virtual router when the router is rebooted. This
patch updates Xen to use the same method, and cleans up the old script refs.
Reviewed-by: Sheng Yang, Prasanna Santhanam
The method getCommandHostDelegation(long hostId, Command cmd) was overridden in VmwareGuru.java as part of
commit bfe30cd2e3. Earlier there was no HV specific implementation and copy
volume from secondary to primary worked fine. With the Vmware specific change the code was getting hit even
in case of XS and other hypervisors and failed with NPE.
Now there is a check in the Vmware implementation to check if the HV is of type Vmware.
Marked the new system template as dynamicallyScalable
- handled upgrade case
- moved "dynamicallyScalable" flag to vm_instance table from user_vm_details to support dynamic scaling of system vm
Signed off by : Nitin Mehta<nitin.mehta@citrix.com>
details have information on cores per socket, create an instance accordingly. The vm record
is populated with information on how many cores should be allowed in a socket.
a host to the pool it may take a little longer to join the pool. Increased the time the
resource waits and checks to make sure the host has joined as slave to the pool.
Include the createTemplateFromSnapshot in the storageprocessor.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit e8383121c60e7c815ba1990df1e8a440f7d87188)
Send success answer of type CopyCmdAnswer with full snapshot backup path in secondary store (image store role)
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
- Move vnetBridge clean up function from LibvirtComputingResource to BridgeVifDriver
-- since only BridgeVifDriver has to handle this event
- LibvirtComputingResource now properly call VifDriver.unplug() when it receives UnPlugCommand
- Remove not working and no longer used method getVnet(String) from VirtualMachineName
- Remove not working and no longer used method getVnet() from StopCommand
- Remove unused constructor StopCommand(VirtualMachine, String, boolean) from StopCommand
- Remove unused member vnet from StopCommand
- Remove unused member _modifyVlanPath from OvsVifDriver
Tested with 2 KVM hosts and confirmed it correctly manipulates vnetBridge on start, stop, migrate, plug, and unplug events
Signed-off-by: Hugo Trippaers <htrippaers@schubergphilis.com>
Fix the name of the interface id in extra config, so the nic is looked up using the nic uuid instead of the vm id.
getVlanInfo should return null when the nic is plugged on a Lswitch network.
On a distributed portgroup all vms should be connected to a single
portgroup and set a specific vlan on the port. On a standard switch each nic should have its own portgroup with a dedicated vlan (not related to CS vlans)
For legacy zones insist on the old model of asking for credentials for each cluster being added.
For non-legacy zones check if username & password are provided. If either or both not provided, try to retrieve & use the credentials from database, which are provided earlier while adding VMware DC to zone.
This lets the user skip specifying credentials while adding a VMware cluster to a zone, if the credentials are the same as those provided while adding the VMware DC to the zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Description:
a) Fixing NPE when wrong path is provided for primary datastore.
b) No error dialog shows up in GUI when wrong path is provided,
after NPE fix - propagating exception upward.
c) If the KVM agent is down, an invalid datastore gets logged in
storage_pool table and doesn't get removed, so it shows up
in the GUI in the list of datastores - fixing this as well.
listVmwareDcs API is added. Takes 1 parameter (mandatory) which is zoneId.
Returns list of VMware DCs associated with specified zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Following Hotadd memory checks are included now.
1) Check if guest operating system supports memory hotadd
2) Check if virtual machine is using hardware version 7 or later before enabling memory hotadd
Following Hotadd CPU checks are included now.
1) Check if guest operating system supports cpu hotadd
2) Check if virtual machine is using hardware version 8 or later.
3) Check if virtual machine has only 1 core per socket. If hardware version is 7, then only 1 core per socket is supported. Hot adding multi-core vcpus is not allowed if hardware version is 7.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
The problem was because in cloudstack when a vm is stopped it gets destroyed on the host. For a
windows vm the timeoffset (which can be set by changing the timezone from within the vm) is stored
in the platform:timeoffset attribute of the vm record. The information is lost when the vm is destroyed.
Made change to read and persist the platform:timeoffset vm attribute when an instance is stopped.
The value is persisted in the user_vm_details table. When the vm is started again the attribute is
set for the vm instance that gets created.
Retrieve maximum hotadd memory limit & hotadd memory increment size from running VM's configuration to validate dynamic scale memory limit while scaling up a VM.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Now the ESXi server license would be fetched to see if the HOTPLUG feature is licensed or not.
Throw an Exception if the feature is not licensed.
Also added FeatureKeyConstants enum type to maintain list of various features to be checked whether licensed or not.
Check if the associated VMware DC matches the one specified in the API params. This check would yield success as the association exists between the same entities (zone and VMware DC).
This scenario would result if the API addVmwareDc is called more than once with the same parameters. For such a scenario, return a success response with the VmwareDatacenter object read from the database. Also return success if addVmwareDC is called again with the same zone and VMware DC where the zone already has resources (clusters etc.) added to it.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
on adding sec. storage no need to report back to resourceManager since
the sec. storage is no longer a directly connected host.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Currently it is very hard to see which tasks are pending in cloudstack
that need to be executed by xapi. Ideally we would like to have a
central overview of all tasks. But for now being able to
enable trace logging should give some insight into what is going on.
- writeToFile removed since no references to it
- readFileAsString replaced with FileUtils.readFileToString
- minor code duplication removed in dependent method getNicStats
- unit test added
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
instead of a dynamically configured one.
Refactor bits of the HypervisorHostHelper to better
support multiple BroadcastDomainTypes
Cleanup some parts of the code that was using tab indents and removed some dead code.
Read the global configuration setting while configuring VmwareManager.
Also enabling autoExpand feature for supported distributed virtual switch versions.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
The file cloudstack-oss.asf/client/target/cloud-client-ui-4.2.0-SNAPSHOT/WEB-INF/classes/scripts/util/ipmi.py
is not marked as executable, so we cannot execute it directly.
Invoke "/usr/bin/python" for it. The location of the python binary may be different on
different machines.
1. Baremetal doesn't have secondary storage, so we don't need to check it.
2. The new "AddBaremetalHostCmd" hasn't been used by the UI, so keep the validity
checking out for now. "AddHostCmd" still works.
3. Baremetal hasn't implemented the multiple ip range feature (CLOUDSTACK-702);
return true for now for a single range.
Introduced the SimulatorStorageProcessor to handle image store related
commands. Right now only mock implementations are provided: no error
handling or logging, and runtime exception scenarios are limited. SystemVMs are
able to start up but the default built-in template has trouble in going
to Ready state.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
CLOUDSTACK-3042 - handle Scaling up of vm memory/CPU based on the presence of XS tools in the template
This also takes care of updating the VM after XS tools are installed in the vm and setting memory values accordingly to support dynamic scaling after a stop/start of the VM
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>
for the number of commands participating in the Vm deployment process, as parallel deployment is supported on the hypervisor side.
The behavior is controlled by global config variables:
"execute.in.sequence.hypervisor.commands" (false by default) sets/resets the synchronization for commands:
=========================
StartCommand
StopCommand
CreateCommand
CopyVolumeCommand
"execute.in.sequence.network.element.commands" (false by default) sets/resets the synchronization for commands:
==========================
DhcpEntryCommand
SavePasswordCommand
UserDataCommand
VmDataCommand
As a part of the fix, increased the global lock timeout to 30 mins in several VR scripts:
===========================
edithosts.sh
savepassword.sh
userdata.sh
to support situations when multiple concurrent calls to the script are being made.
During Scale up of VM, memory/cpu calculations should consider the memory/cpu overprovisioning factors which are set per cluster.
CLOUDSTACK-2939: CPU limit is not getting set for vm after scaleup to a service offering which have cpu cap enabled
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>
Currently XcpServerDiscoverer.java only allows up to XenServer 6.1.0. Added
code to support XenServer 6.2.0. Also, added support to allow the RC build
of XenServer 6.2.0.
Signed-off-by: venkataswamybabu budumuru <venkataswamybabu.budumuru@citrix.com>
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>
Made vcenter as required parameter to addVmwareDc API.
Removed stale parameter url from addVmwareDc API.
Improved Exception handling, logging remote exception details.
Provide mock implementations for all network commands in the
MockNetworkManagerImpl of the simulator
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Renaming the method in the command objects to be uniform with
PlugNicCommand/UnplugNicCommand.getVmName
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
And the components-simulator.xml configuration. Both are unused as we
use Spring injection now.
Remove unused rebootVM method
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
inactive VM definitions block a new VM starting. Definitions aren't supposed to
be persistent, but sometimes a crash or failed migration can leave behind a
definition.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1370299734 -0600
Whitespace cleanup
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-2701 - Enable storage migration for VMware resources
Removing injections which are not required in Vmware Manager
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware Added
Resource changes to perform VM live migration along with virtual disks across the clusters in a zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware
Added VmwareStorageMotionStrategy to application context.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for volume live migration across datastores
Unit tests for vmware storage motion. These test the VmwareStorageMotionStrategy.
CLOUDSTACK-659
Fixing migrate volume.
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware Added
Resource changes to perform VM live migration along with virtual disks across the clusters in a zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for volume live migration across datastores
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware Added
Resource changes to perform VM live migration along with virtual disks across the clusters in a zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for volume live migration across datastores
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware Added
Resource changes to perform VM live migration along with virtual disks across the clusters in a zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for volume live migration across datastores
Added vm type to prepareNetworkFromNicInfo in MigrateWithStorageCommand implementation.
CLOUDSTACK-2701 - Enable storage migration for VMware resources
Fixing attach volume and delete volume cases for volumes that are moved off original path in datastore when created.
If volume is not found in root directory or datastore, do search in sub folders.
CLOUDSTACK-2701 - Enable storage migration for VMware resources
Sending command MigrateWithStorageCommand to source host instead of target host for the case of migration of VM within cluster.
CLOUDSTACK-2701 - Enable storage migration for VMware resources
Searching for virtual disk during device tear down.
Adding dependency of 'cloud-engine-storage' to vmware hypervisor plugin.
Add hypervisor capability storage_motion_supported for VMware 5.0 and 5.1
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware
Added VmwareStorageMotionStrategy to deal with storage motion tasks.
Added target host parameter to MigrateWithStorageCommand.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware Added
Resource changes to perform VM live migration along with virtual disks across the clusters in a zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware
Added VmwareStorageMotionStrategy to application context.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for volume live migration across datastores
Unit tests for vmware storage motion. These test the VmwareStorageMotionStrategy.
CLOUDSTACK-659
Fixing migrate volume.
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware Added
Resource changes to perform VM live migration along with virtual disks across the clusters in a zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for volume live migration across datastores
Added vm type to prepareNetworkFromNicInfo in MigrateWithStorageCommand implementation.
Adding dependency of 'cloud-engine-storage' to vmware hypervisor plugin.
Add hypervisor capability storage_motion_supported for VMware 5.0 and 5.1
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware
Added VmwareStorageMotionStrategy to deal with storage motion tasks.
Added target host parameter to MigrateWithStorageCommand.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware Added
Resource changes to perform VM live migration along with virtual disks across the clusters in a zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware
Added VmwareStorageMotionStrategy to application context.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for volume live migration across datastores
Unit tests for vmware storage motion. These test the VmwareStorageMotionStrategy.
CLOUDSTACK-659
Fixing migrate volume.
Adding dependency of 'cloud-engine-storage' to vmware hypervisor plugin.
Add hypervisor capability storage_motion_supported for VMware 5.0 and 5.1
Adding dependency of 'cloud-engine-storage' to vmware hypervisor plugin.
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware
Added VmwareStorageMotionStrategy to deal with storage motion tasks.
Added target host parameter to MigrateWithStorageCommand.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware Added
Resource changes to perform VM live migration along with virtual disks across the clusters in a zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-659
Support for storage migration in Cloudstack deployment over VMware
Added VmwareStorageMotionStrategy to application context.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for volume live migration across datastores
CLOUDSTACK-659
Fixing migrate volume.
Unit tests for vmware storage motion. These test the VmwareStorageMotionStrategy.
CLOUDSTACK-2701 - Enable storage migration for VMware resources
Sending command MigrateWithStorageCommand to source host instead of target host for the case of migration of VM within cluster.
CLOUDSTACK-2701 - Enable storage migration for VMware resources
Moved 2 methods that are not specific to VMware but to Volume from VmwareManager to VolumeManager.
Moved datastore volume path constructing code to separate method.
Added check for source and target host, if they are from different DCs/vCenter instances.
Updated error message & removed stale comment
CLOUDSTACK-2701 - Enable storage migration for VMware resources
Injecting component VolumeManager into VmwareResource.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Introduced 2 new API command classes AddVmwareDcCmd & RemoveVmwareDcCmd.
The new APIs are addVmwareDc & removeVmwareDc, these APIs will associate a Vmware datacenter to a cloudstack zone & dis-associate a Vmware datacenter from a cloudstack zone.
Constraint checks are added for infrastructure change operations in zone that encompass resources like clusters.
Constraint checks are added in discoverer and manager classes.
Added a service 'VmwareDatacenterService' to expose methods that will have API implementation for 2 APIs addVmwareDc & removeVmwareDc
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Support for DB changes for Vmware datacenter objects
Support for DB changes to store mapping with CloudStack zones.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
DevCloud is an XCP (Kronos) based xen. For this we use the XcpOssResource
whose memory limits were not set explicitly. Instead the base
CitrixResourceBase would set the limits. static_min was set to 128MB,
failing the start of the cpvm and ssvm, whose offerings have 100MB set as
the max limit.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
RBD format 2 supports cloning (aka layering) where one base image can serve
as a parent image for multiple child images.
This enables fast deployment of a large number of virtual machines, but it also
saves space on the Ceph cluster and improves performance due to better caching.
Qemu-img doesn't support RBD format 2 (yet), so to enable these functions the
RADOS/RBD Java bindings are required.
This patch also enables deployment of System VMs on RBD storage pools. Since we
no longer require a patchdisk for passing the boot arguments we are able to deploy
these VMs on RBD.
Detail: We do two strange things, #1, when a vm is created, we create the uuid
for the domain by converting the name into a uuid, then subsequently, any time
we look for a domain, we convert its name to a uuid and domainLookupByUUID. This
is an unnecessary obfuscation, so instead we just do a domainLookupByName. As
a bonus, we are now setting the domain's uuid to be the cloudstack uuid. It is
no longer used anywhere, but will be less confusing to admins. This shouldn't
affect upgrades, since we can always lookup existing VMs by name.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1369287049 -0600
Detail: Undefine VM after migration. Lingering domain definitions cause
migrations back to the original host to fail, since domain already exists.
BUG-ID: CLOUDSTACK-2640
Bugfix-for: 4.1.0,4.2.0
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1369285950 -0600
When dmc is enabled allow cloudstack to scale the VM between dynamic
ranges. This requires a further commit where there is enough memory
available between dynamic-max and static-max for the VM to scale under
memory pressure. See the TODO in Xenserver56FP1Resource.java
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
23e54bb0 introduced multiple hypervisors support for cpu and memory
overcommit. Here the HypervisorGuru base which determines the min, max
range for the memory for all hypervisors computes the minCpu using the
MemoryOverCommit ratio and minMemory using the CpuOverCommit ratio.
Minor typo/logic issue but massive damage across all HV if enabled ;)
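A toy illustration of the corrected pairing, with made-up numbers; each minimum is divided by its own overcommit ratio:

    public class OvercommitPairingSketch {
        public static void main(String[] args) {
            int maxCpuMHz = 2000;
            long maxRamBytes = 4L * 1024 * 1024 * 1024;
            float cpuOvercommitRatio = 4.0f;
            float ramOvercommitRatio = 2.0f;

            int minCpuMHz = (int) (maxCpuMHz / cpuOvercommitRatio);       // was wrongly divided by the RAM ratio
            long minRamBytes = (long) (maxRamBytes / ramOvercommitRatio); // was wrongly divided by the CPU ratio

            System.out.println("minCpu=" + minCpuMHz + " MHz, minRam=" + minRamBytes + " bytes");
        }
    }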
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
When setting memory constraints on Xen guests we should honor:
static-min <= dynamic-min <= dynamic-max <= static-max
While our VmSpec allows the guests to lie between dynamic-min and
dynamic-max, the memory set by the resource pinned static-min to be
equal to dynamic-max. This restricts the hypervisor from ensuring
optimized memory handling of guests.
see: http://wiki.xen.org/wiki/XCP_FAQ_Dynamic_Memory_Control#How_does_XCP_choose_targets_for_guests_in_dynamic_range_mode.3F
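A toy check of the ordering constraint, illustrative only:

    public class MemoryLimitOrderingSketch {
        static boolean isValid(long staticMin, long dynamicMin, long dynamicMax, long staticMax) {
            return staticMin <= dynamicMin && dynamicMin <= dynamicMax && dynamicMax <= staticMax;
        }

        public static void main(String[] args) {
            long mib = 1024L * 1024;
            // Healthy dynamic range.
            System.out.println(isValid(512 * mib, 512 * mib, 1024 * mib, 1024 * mib));  // true
            // Old behaviour: static-min pinned to dynamic-max leaves no room to scale down.
            System.out.println(isValid(1024 * mib, 512 * mib, 1024 * mib, 1024 * mib)); // false
        }
    }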
Another fix was related to the restrict_dmc option. When enabled (true)
this option disallows scaling a vm. The logic was handled in reverse,
allowing scaling when restrict_dmc was on. This control flow is now
fixed.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
A missing default constructor fails the agent manager when reloading the XCP
resource on reboot of the management server. This is fixed by adding the
default constructor, as other Xen resources do, and including a new resource
a la the XenServer ones for XCP 1.6.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
- Changes merged from planner_reserve branch
- Exposing deploymentplanner as an optional parameter while creating a service offering
- changes to DeploymentPlanningManagerImpl to make sure host reserve-release happens between conflicting planner usages.
Fixed by updating the patch files that have
entries to copy scripts onto xenserver. Here we added
Add-To-VCPUs-Params-Live.sh
Added a check on host params whether the host restricts Dynamic Memory Control (DMC), to be able to allow scaling up the VM.
If DMC is not enabled then static max and min are set to the SO values.
Signed Off by - Nitin Mehta <nitin.mehta@citrix.com>
This feature enables adding guest IP ranges (public IPs) from different subnets.
In order to provide the DHCP service to a different subnet we create an IP alias on the router. This allows the router to listen to the DHCP requests from the guest VMs and respond accordingly. Every time a VM is deployed in the new subnet we configure an IP alias on the router. CloudStack uses dnsmasq to provide the DHCP service, so we need to configure dnsmasq to issue IPs on the new subnets. Added a new class, dnsmasqconfigurator, which generates the dnsmasq config file; this file replaces the old config on the router.
The details of the alias IPs are stored in the db in the nic_ip_alias table. Every time a new subnet is added, one of the IPs from the subnet is used to configure the IP alias.
I have pushed the code to https://github.com/bvbharatk/cloud-stack/tree/Cloudstack-702 , also rebased the code with master.
I need to test the code for an advanced SG-enabled network using KVM.
I have added the unit test.
Marvin tests are at https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=53e4965
Also accommodated some of the changes suggested by Koushik:
corrected the import statements and renamed the IpAlias command to createIpAlias command.
This feature supports only IPv4.
Start/stop of VM and DHCP server are done; VM migration is not done yet.
A new command (PvlanSetupCommand) is sent for setting up PVLAN for VMs. Currently
it focuses on the OVS implementation; it needs to be made more abstract and a vSwitch part added.
so that callers of startVM in LibvirtComputingResource know that a vm failed
to start
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1366224775 -0600
pools the proper way won't cause problems for the KVM HA Monitor, this patch closes
holes. Call deleteStoragePool on the KVM storage pool manager, which properly removes the pool
from the KVM HA hashmap, instead of calling the pool's delete() directly.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1366172318 -0600
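A hedged sketch of the intent: pool removal goes through the manager so the HA monitor's map stays in sync. The interface and field names below are illustrative, not the actual CloudStack classes:
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class PoolDeleteSketch {
        interface Pool {
            String getUuid();
            void delete();
        }

        static final Map<String, Pool> haMonitoredPools = new ConcurrentHashMap<>();

        // Correct path: drop the pool from the HA map, then delete the underlying pool.
        static void deleteStoragePool(Pool pool) {
            haMonitoredPools.remove(pool.getUuid());
            pool.delete();
        }
        // Calling pool.delete() directly would leave a stale entry in haMonitoredPools.
    }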
commit 7ce45ea108
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 15 18:36:33 2013 +0530
Fixed indentation and line ending
commit 0232048f90
Merge: 735c4c8 97911e9
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 15 17:05:59 2013 +0530
Merge branch 'master' into cisco-vnmc-api-integration
Conflicts:
api/src/org/apache/cloudstack/api/ApiConstants.java
client/tomcatconf/commands.properties.in
setup/db/db/schema-410to420.sql
tools/marvin/marvin/integration/lib/base.py
commit 735c4c8955
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 15 15:20:37 2013 +0530
Fixed unit tests based on recent changes in the Vnmc resource code
commit f166f2d0bf
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 15 14:50:25 2013 +0530
added tests to register vnmc and asa appliance in cloudstack
commit f38be4810e
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 8 18:42:06 2013 +0530
Removed unwanted files
commit 902ce426c1
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 8 17:59:30 2013 +0530
Fixed auto-wiring of components for Cisco Vnmc
commit 08467ee307
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 8 16:04:54 2013 +0530
Fixed compilation issues, incorrect merges from last commit
commit 67f11d46ad
Merge: 3422cee c9c68e1
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 8 15:11:10 2013 +0530
Merge branch 'master' into cisco-vnmc-api-integration
commit 3422ceefb6
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 8 14:42:32 2013 +0530
Correctly associating nat, acl policy sets to edge security profile in VNMC
commit 9c1e193fca
Author: Koushik Das <koushik.das@citrix.com>
Date: Sun Apr 7 21:22:22 2013 +0530
Passing correct subnet mask while creating edge firewall in VNMC
commit 05e3d04b55
Author: Koushik Das <koushik.das@citrix.com>
Date: Tue Apr 2 17:50:57 2013 +0530
Added changes related to icmp
commit bcecb589de
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Apr 1 13:57:21 2013 +0530
Some xml file renames
commit 9c1ee93f2e
Author: Koushik Das <koushik.das@citrix.com>
Date: Sat Mar 30 15:54:25 2013 +0530
Fixed PF and static NAT rule creation in VNMC
commit 7e6159fa05
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 27 18:53:49 2013 +0530
Added more unit tests for Cisco Vnmc element
commit fc0ed9adb6
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 27 16:48:28 2013 +0530
Cleaning up VNMC config as part of network shutdown
commit 5a427d48e2
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 27 02:22:54 2013 +0530
Added unit test for Vnmc network element implement() method
commit 48cbf34d3b
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 27 02:20:45 2013 +0530
Passing correct gateway ip while creating vservice node and guest port profile in Nexus
commit 2c386c61ef
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 22 13:50:52 2013 +0530
Nexus 1000v fix
commit 4d2168bfa9
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 22 00:30:01 2013 +0530
Egress firewall rule
commit e81ab3a2f4
Author: Koushik Das <koushik.das@citrix.com>
Date: Thu Mar 21 10:50:29 2013 +0530
More tests for VnmcResource class
commit 9e9c179212
Author: Koushik Das <koushik.das@citrix.com>
Date: Thu Mar 21 00:25:10 2013 +0530
Fixed build issue from master merge
commit f0c1af2b5c
Merge: 4f305c2 873ec27
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 20 16:20:10 2013 +0530
Merge branch 'master' into cisco-vnmc-api-integration
Conflicts:
api/src/com/cloud/network/Network.java
api/src/org/apache/cloudstack/api/ApiConstants.java
client/tomcatconf/components-nonoss.xml.in
client/tomcatconf/nonossComponentContext.xml.in
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/VmwareManagerImpl.java
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
setup/db/db/schema-410to420.sql
vmware-base/src/com/cloud/hypervisor/vmware/mo/HypervisorHostHelper.java
commit 4f305c2beb
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 20 15:09:34 2013 +0530
Initial set of tests, will add more in subsequent commits
commit 50bfcc1f75
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 20 15:02:14 2013 +0530
Updated pom to copy xmls to target location during build
commit 45bc92b826
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 20 14:58:59 2013 +0530
Fixed compilation issue as missed out adding this file
commit 2ce7cdc756
Author: Koushik Das <koushik.das@citrix.com>
Date: Sun Mar 17 17:02:25 2013 +0530
Creating vservice node and associating it with port profile in nexus for guest VMs
commit 387545caff
Author: Koushik Das <koushik.das@citrix.com>
Date: Sat Mar 16 11:14:43 2013 +0530
Added license headers to XML files
commit 43e2997421
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Mar 13 11:51:59 2013 +0530
Changes related to instantiating the dao components
commit 99e88ecbf9
Author: Koushik Das <koushik.das@citrix.com>
Date: Tue Mar 12 23:40:35 2013 +0530
Fix build errors after merge from master
commit 7c20b120c2
Author: Koushik Das <koushik.das@citrix.com>
Date: Tue Mar 12 23:31:46 2013 +0530
Fixing poms and other xmls
commit ee868759a8
Merge: 9c94b6d a1b33ca
Author: Koushik Das <koushik.das@citrix.com>
Date: Tue Mar 12 14:44:59 2013 +0530
Merge branch 'master' into cisco-vnmc-api-integration
Conflicts:
api/src/com/cloud/network/Network.java
api/src/org/apache/cloudstack/api/ApiConstants.java
plugins/pom.xml
setup/db/create-schema.sql
commit 9c94b6d231
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 8 22:20:23 2013 +0530
Fixed XML to create static route in VNMC correctly
commit ef069b3323
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 8 15:26:26 2013 +0530
Added logic for revoking ACL, PF and Static NAT rules
commit 4c65b70668
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 8 13:51:37 2013 +0530
Renamed delete-acl-rule -> delete-rule
commit aa94eca516
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 8 00:38:52 2013 +0530
- Creating static routes in VNMC as part of edge firewall configuration
- Passing order parameter while creating rules so that they are evaluated in a specific order
- Added methods in VnmcResource for listing acl policies and rules belonging to various policies. This is used to compute order while creating various rules in VNMC
commit cc824e8585
Author: Koushik Das <koushik.das@citrix.com>
Date: Thu Mar 7 12:16:29 2013 +0530
Adding appropriate ACL rules for PF and static NAT
commit fb23c50365
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 1 17:21:45 2013 +0530
Added logic for deleting various VNMC artifacts. Added/updated relevant xmls as well.
commit 970c21a9a3
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 1 01:54:10 2013 +0530
Added implementation for delete of asa and vnmc apis
commit 22e1455142
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 1 01:19:43 2013 +0530
List asa api to return guest network if associated. From this it can be inferred if asa is available or not
commit 32223736c9
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Mar 1 00:50:55 2013 +0530
Added Vmware cluster info along with asa1kv appliance.
This is used to select the correct n1kv vsm for configuring the guest network
commit deed3cc951
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Feb 25 18:03:59 2013 +0530
Added support for static NAT rules.
- Xmls for creating static nat rules in VNMC
- applyStaticNats implementation in VNMC network element
- handler for static nat in resource class
commit 681f0b7b50
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Feb 25 10:44:13 2013 +0530
Added implementation for firewall and port forwarding rules in Cisco VNMC element class
commit 66b01a6589
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Feb 22 19:19:44 2013 +0530
VNMC xml for deleting NAT policy
commit 5d98686768
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Feb 22 19:16:41 2013 +0530
Added support for PF/DNAT rules.
Created methods in VNMCConnection class to create PF rules. Also moved out common code for PF and source NAT in methods.
Updated the corresponding VNMC resource class.
commit 8db2fbeb8f
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Feb 22 18:21:45 2013 +0530
Added xml for creating NAT policy set in VNMC
commit f2da0d50ca
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Feb 22 18:17:53 2013 +0530
Added VNMC XMLs for supporting PF/DNAT rules.
Also moved out some XMLs related to source NAT to common files so that these can be used for both source NAT and DNAT
commit 124a48819d
Author: Koushik Das <koushik.das@citrix.com>
Date: Thu Feb 21 17:53:12 2013 +0530
Separated out creation of ACL policy set and policy in VNMC
commit 1e38515f35
Author: Koushik Das <koushik.das@citrix.com>
Date: Thu Feb 21 11:54:44 2013 +0530
Added changes to create ingress fw rules in VNMC
commit cb2fba9e7c
Author: Koushik Das <koushik.das@citrix.com>
Date: Thu Feb 14 16:23:05 2013 +0530
Source NAT in VNMC
commit 720fe2f908
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Feb 13 14:16:47 2013 +0530
Fix Vnmc test file
commit d6dbe790c6
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Feb 13 12:07:03 2013 +0530
Added db tables for asa1kv devices and their mapping with guest network
commit 3fd7e30f6e
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Feb 13 11:52:12 2013 +0530
Changes:
- Added implementation for add/list asa1kv APIs
- Added agent command for associating asa1kv appliance with logical edge firewall in VNMC
- Added handler for the above agent command in VNMC resource class
- Updated VNMC element class to support the above
commit d08e2a1faf
Author: Koushik Das <koushik.das@citrix.com>
Date: Wed Feb 13 11:40:58 2013 +0530
Added lifecycle APIs for Cisco Asa 1000v appliance.
Added corresponding Dao and VO classes.
Also added mapping Dao and VO for guest network and asa appliance
commit 6b999ec867
Author: Koushik Das <koushik.das@citrix.com>
Date: Tue Feb 12 00:05:39 2013 +0530
Changes:
a. Added handlers for CreateLogicalEdgeFirewall and ConfigureNexusVSMForASA commands
b. Logic for add/list vnmc device API
c. Partial implementation for network element implement()
commit 0656250308
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Feb 11 23:48:19 2013 +0530
Moved VNMC provider creation to Network.java. The plugin code would have been the ideal place to keep it but current state of the code doesn't allow it.
commit dc402eaa7a
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Feb 11 23:35:19 2013 +0530
Added new commands for the following:
a. Logical edge firewall creation in VNMC
b. Asa1kv vservice node creation and updating asa1kv inside port profile with guest network vlan id in n1kv VSM
commit d6cdfe35f8
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Feb 11 23:06:36 2013 +0530
Added helper method to create port profile in n1kv VSM with additional parameters VDC tenant and edge security profile
Added helper method to create a vservice node in n1kv VSM
commit db42da17e9
Author: Koushik Das <koushik.das@citrix.com>
Date: Mon Feb 11 22:44:01 2013 +0530
Added database table for storing VNMC devices
commit f991436335
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Feb 8 16:00:15 2013 +0530
Added support for network offering creation with VNMC as provider for firewall, port forwarding, source nat
commit 74de210359
Author: Koushik Das <koushik.das@citrix.com>
Date: Fri Feb 8 15:06:11 2013 +0530
Added name attribute for the VNMC lifecycle commands
commit 6ce25ef11d
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 16:44:28 2013 -0800
Fix licensing
commit 392cd8ed63
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 16:38:19 2013 -0800
cisco-vnmc: Fix api to use new conventions
commit 6b142bbaab
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:33:33 2013 -0800
WIP: configure ASA port profile
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit 1ae21ea49a
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:33:01 2013 -0800
WIP rename device to resource to better reflect nature of VNMC
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit 84d218f972
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:32:54 2013 -0800
WIP: fixes for associating ASA1000v to tenant
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit d74c6a9ac2
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:32:45 2013 -0800
WIP: fixes for associating ASA1000v to tenant
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit 9350d10849
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:32:29 2013 -0800
WIP: admin commands for adding / listing VNMC
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit a8031a0cfe
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:30:41 2013 -0800
WIP ASA 1000v listing
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit f9cc674b9c
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:30:36 2013 -0800
WIP : edge firewall
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit 6a0964af00
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:30:30 2013 -0800
WIP : edge security policy
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit e32295e8cf
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:30:24 2013 -0800
WIP : dhcp server policy
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit 446a9b8491
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:30:18 2013 -0800
WIP : dhcp server policy
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit e35e0eb59b
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:30:14 2013 -0800
Move unit test
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit 2b43a3e74e
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:30:08 2013 -0800
Move unit test
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
commit 11b804a894
Author: Chiradeep Vittal <chiradeep@apache.org>
Date: Wed Jan 16 15:29:54 2013 -0800
WIP: XML control of VNMC
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
Add a flag VmDetailConstants.NESTED_VIRTUALIZATION_FLAG
Add an advanced config option VmwareEnableNestedVirtualization
Depending on the settings of the flags and the capabilities of the target hypervisor, the nested virtualization option will be set on guest VMs. It's a global setting intended only for developers to support cloud-in-a-cloud deployments.
For now it replaces ConsoleProxyManagerImpl with StaticConsoleProxyManager
Usage: mvn install -Dquickcloud
QuickCloud: rename deploy profile
QuickCloud: remove cyclic dependency introduced in nonoss build by moving SecondaryStorageDiscoverer into services
However with this fix, developers will be unable to run 'PremiumSecondaryStorageResource' (for VMWare installations) using mvn exec:java.
Instead they will have to use the exploded archive from systemvm.zip
- Supports DHCP, Source NAT, Static NAT, Firewall rules, Port Forwarding
- Renamed MidokuraMidonet to MidoNet
- Related Jira ticket is CLOUDSTACK-996
Signed-off-by: Dave Cahill <dcahill@midokura.com>
Signed-off-by: Hugo Trippaers <htrippaers@schubergphilis.com>
deleted NFS pools, causing failures when defining new storage pools. Sometimes
a storage pool has never been used on a host, and getStoragePool fails when
copying templates or in storage migration. deleteStoragePool(pool) often fails
silently, leaving no pool defined in libvirt, but a mountpoint left behind.
This patch handles some of these exceptions and brings forward any issues via
logging.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1364603486 -0600
Addition of two new resource types i.e. Primary and Secondary storage space in the existing pool of
resource types.
Added methods to set the limits on these resources using updateResourceLimit
API command and to get a count using updateResourceCount. Also added calls in the
Templates, Volumes, Snapshots life cycle to check these limits and to increment/decrement the new
resource types
Resource Name :: Resource type number
Primary Storage 10
Secondary Storage 11
Also added jUnit Tests for the same.
Reviewed by : nitin mehta<nitin.mehta@citrix.com>
Fixed inconsistency in cluster limit check for Vmware clusters.
While adding a new VMware cluster the limit check was done correctly, but while adding a host to an existing cluster there was an issue with the limit check.
KVM to manager. This adds collection of available storage to KVM, not
just used.
Bugfix-for: 4.0.2, 4.1, master
Submitted-by: Ted Smith <darnoth@gmail.com>
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1363966235 -0600
The collection of network usage from VPC virtual router on KVM does not work,
because there is no corresponding procedure to deal with VPC virtual router
(cmd.isForVpc() = true).
Reviewed-by: Kishan Kavala <kishan@apache.org>
Reported-by: Wei Zhou <w.zhou@leaseweb.com>
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
cloud-defined resources on the host has caused various problems. As a backward
compatible fix, if an existing pool with a different name collides with a pool
being created (by path), the pool will be redefined with the name cloudstack
knows about. This is actually what brought up the bug, a persisted storage pool
cloudstack wasn't managing.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1363210149 -0600
Detail: When we stop a VM, its definition is no longer valid. Therefore, we
need to catch the exception thrown by libvirt when trying to look up a
non-existent domain by UUID while checking whether it is shut down.
BUG-ID: CLOUDSTACK-600
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1363201066 -0600
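A minimal sketch with the libvirt Java bindings; treating the failed lookup as "already stopped" is the idea here, and the helper name is illustrative:
    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class StoppedVmCheckSketch {
        // Returns true when the domain can no longer be found, i.e. it is shut down
        // and its definition is gone.
        static boolean isGone(Connect conn, String uuid) {
            try {
                Domain dm = conn.domainLookupByUUIDString(uuid);
                dm.free();
                return false;
            } catch (LibvirtException e) {
                // Looking up a non-existent domain throws; treat it as already stopped.
                return true;
            }
        }
    }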
Detail: A previous patch fixed an issue where we are defining VMs to persist
locally on KVM hosts, which can cause issues if the agent isn't running and
libvirt decides to start the VM unbeknownst to cloudstack. The previous patch
stopped defining VMs as persistent. This patch adds compatibility for existing
cloudstack environments, removing the persistent definition on stop if needed.
BUG-ID: CLOUDSTACK-600
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1363194656 -0600
Detail: This gets rid of the patchdisk method of passing cmdline and
authorized_keys to KVM system VMs. It instead passes them to a virtio socket,
which the KVM guest reads from the character device /dev/vport0p1 during
cloud-early-config. Tested to work on CentOS 6.3 and Ubuntu 12.04. Should
work with even older versions of libvirt.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1362691685 -0700
Currently the KVM agent silently ignores many exceptions, and there's no
way to see what really happened. This patch logs, at trace level, what
was silently ignored. It also fixes a huge bare Exception
catch, which is very harmful because it also catches RuntimeException.
Detail: This device can be used for remotely controlling the system vms through
a local socket on the host. We will attempt to replace the KVM patchdisk with
it. Tested, successfully deploys VM, and if system vm has proper driver it
will create a /dev/vport0p1 device within the VM. We will be updating the
system VM in 4.2/5.0 and will support this.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1362527352 -0700
Libvirt-java 0.4.9 works just fine with JNA 3.2.4 which is in
all distributions.
Future libvirt versions require at least JNA 3.5.1 due to new methods
and memory management, but that is not our concern now.
By depending on the JNA in the distribution and adding it to the classpath
we can work just fine.
- Building map of {trafficType, vifDriver} at configure time (a sketch follows below)
- Use the relevant VIF driver for the given traffic type when plug() is called
- Inform all VIF drivers when unplug() is called, as we no longer know the traffic type
- Refactor VIF driver choosing code and add unit tests
- Basic unit tests, just test default case
- Also slight refactor of unit test code, and use jUnit 4 instead of 3, to match rest of codebase
Signed-off-by: Hugo Trippaers <htrippaers@schubergphilis.com>
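A rough sketch of the map-based selection described above, assuming simplified VifDriver and TrafficType types; the real interfaces live in the KVM plugin:
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class VifDriverMapSketch {
        interface VifDriver {
            void plug(String nic);
            void unplug(String nic);
        }

        enum TrafficType { Guest, Public, Management, Storage }

        // Built once at configure time from the agent properties.
        private final Map<TrafficType, VifDriver> trafficVifDrivers = new HashMap<>();
        private VifDriver defaultVifDriver;

        VifDriver getVifDriver(TrafficType type) {
            VifDriver driver = trafficVifDrivers.get(type);
            return driver != null ? driver : defaultVifDriver;
        }

        // unplug() no longer knows the traffic type, so every known driver is asked.
        void unplugAll(String nic) {
            Set<VifDriver> all = new HashSet<>(trafficVifDrivers.values());
            if (defaultVifDriver != null) {
                all.add(defaultVifDriver);
            }
            for (VifDriver driver : all) {
                driver.unplug(nic);
            }
        }
    }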
Enhanced baremetal servers support on Cisco UCS
change UcsXxxDao to Spring xml loading
change ListxxxCmd to inherit ListCmd
change API response in line with current API architecture
adding missing db schema to db upgrade schema
Conflicts:
client/pom.xml
plugins/hypervisors/ucs/src/com/cloud/ucs/database/UcsBladeDaoImpl.java
plugins/hypervisors/ucs/src/com/cloud/ucs/database/UcsManagerDaoImpl.java
Enhanced baremetal servers support on Cisco UCS
change API response in line with new API response convention
Conflicts:
api/src/org/apache/cloudstack/api/ApiConstants.java
Some concepts included:
* the replace.properties location used by maven is parameterized to allow
for a build that does not modify the currently git tracked files
* package naming is updated along the lines of what was discussed on the
-dev mailing list and between committers at the Build a Cloud Day in Belgium
* package version pattern is updated (since we redo all package names,
we might as well drop the epoch)
The recently added overcommit feature breaks compatibility between older management servers
and 4.2 agents.
This patch fixes that by falling back if needed.
CLOUDSTACK-657 VMware vNetwork Distributed Virtual Switch support in CloudStack
This is 5th patch for feature 'Support for VMware dvSwitch in CloudStack'.
This patch contains
1)Changes to addCluster done in vmware discoverer to support vswitch type provided as parameters. Also performing validation of vswitch type parameter provided with addCluster api call. Checks for physical network configuration for vmware cluster is added.
2)Changes to vmware resource to use specified vswitch type while preparing network for guest and public traffic types.
3)Changes to vmware manager to introduce new global parameter vmware.ports.per.dvportgroup. Some cleanup.
Virtual switch type could be chosen at zone level or at cluster level for specific traffic type.
autoExpand of dvPortGroup is available in the code but disabled, as it breaks because vCenter 4.1 does not support the autoExpand feature. It will be enabled once vSphere 5.1 SDK support is added to CloudStack.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
This is 1st patch for feature 'Support for VMware dvSwitch in CloudStack'.
This contains 3 newly introduced classes. Added apache license header for all 3 files.
[1]TrafficLabel and [2]VmwareTrafficLabel classes are to define and encapsulate virtual switch type per traffic type along with other network label fields (VLAN ID and physical network).
[3]DistributedVirtualSwitchMO class is wrapper class for vSphere API calls specific to a distributed virtual switch in a vCenter datacenter.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Thanks to Devdeep for pointing this out.
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
(cherry picked from commit c7935a9ab6)
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Supporting kickstart in CloudStack baremetal
able to start vm
Conflicts:
client/tomcatconf/componentContext.xml.in
server/src/com/cloud/baremetal/BareMetalTemplateAdapter.java
server/src/com/cloud/baremetal/BareMetalVmManagerImpl.java
server/src/com/cloud/vm/UserVmManagerImpl.java
allow spring to do DI for simulator plugin. componentContext.xml will
have simulator components disabled by default.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Detail: Removing references to /usr/lib/cloud and /usr/lib64/cloud so that old
systemvm.iso files aren't found by accident. systemvm.iso should exist in
/usr/share/cloudstack-common/vms now.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1360860243 -0700
Detail: If your traffic label points to a bridge that is on a tagged interface
rather than a real physical interface, cloudstack may not parse the physical
interface correctly, bringing up tagged interfaces on the tagged interface.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1360798665 -0700
This is a security failsafe, so even if destination does not exist we mkdir the path
with 0700 permission. If path exists mkdir -m 700 -p won't do anything.
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
- Assumption is that mkdir is available on xen host
- We don't know what kind of file we're copying, dirs would have 0777 permission
by default
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
We used to define domains persistent in libvirt, which caused XML definitions
to stay there after a reboot of the hypervisor.
We however don't do anything with those already defined domains, actually, we wipe
all defined domains when starting the agent.
Some users however reported that libvirt started these domains after a reboot
before the CloudStack agent was started.
By starting domains from the XML description and not defining them we prevent
them from ever being stored in libvirt.
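A minimal sketch of the transient-start approach with the libvirt Java bindings (illustrative helper, not the exact agent code):
    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class TransientDomainSketch {
        // Start the domain straight from its XML description without defining it,
        // so nothing is persisted in libvirt across hypervisor reboots.
        static Domain startTransient(Connect conn, String domainXml) throws LibvirtException {
            return conn.domainCreateXML(domainXml, 0);
            // Previously: conn.domainDefineXML(domainXml) followed by create(),
            // which left a persistent definition behind.
        }
    }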
Due to incorrect logic the private network traffic label specified was not getting used; instead some default was getting used (vSwitch0 or privateEthernetPortProfile). The fix passes the correct label in the format vSwitchX or vSwitchX,<vlan_id>, and based on that the correct switch is used.
- Fixed new join dao impls as spring components
- Fixed component context xml to load api rate limit checker
- Fixed root pom.xml for duplicate plugin
- Fixed list data centers method
- Fixed following conflicts:
api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkOfferingCmd.java
api/src/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java
api/src/org/apache/cloudstack/api/command/user/template/DeleteTemplateCmd.java
api/src/org/apache/cloudstack/api/command/user/template/ExtractTemplateCmd.java
plugins/api/discovery/src/org/apache/cloudstack/discovery/ApiDiscoveryServiceImpl.java
server/src/com/cloud/api/ApiDBUtils.java
server/src/com/cloud/api/ApiServer.java
server/src/com/cloud/api/query/QueryManagerImpl.java
server/src/com/cloud/configuration/DefaultComponentLibrary.java
server/src/com/cloud/server/ManagementServerImpl.java
server/src/com/cloud/storage/swift/SwiftManagerImpl.java
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Detail: There are several places in the code that do a
"brctl show | grep bridgeName" or similar, which causes all sorts
of problems when you have for example a cloudVirBr50 and a
cloudVirBr5000. This patch attempts to stop relying on the output
of brctl, instead favoring sysfs and /sys/devices/virtual/net.
It cuts a lot of bash out altogether by using java File. It was
tested in my devcloud-kvm against current 4.0, as well as by the
customer reporting initial bug.
BUG-ID: CLOUDSTACK-938
Fix-For: 4.0.1
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
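A hedged sketch of the sysfs-based checks; the paths are standard Linux sysfs locations and the helper names are illustrative:
    import java.io.File;

    public class BridgeSysfsSketch {
        // Exact-name lookups via sysfs instead of grepping "brctl show" output,
        // so cloudVirBr50 can never accidentally match cloudVirBr5000.
        static boolean bridgeExists(String bridge) {
            return new File("/sys/devices/virtual/net/" + bridge).exists();
        }

        static boolean isInterfaceEnslaved(String bridge, String iface) {
            return new File("/sys/class/net/" + bridge + "/brif/" + iface).exists();
        }

        public static void main(String[] args) {
            System.out.println(bridgeExists("cloudbr0"));
            System.out.println(isInterfaceEnslaved("cloudbr0", "eth0"));
        }
    }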
Description: When selecting a storage adaptor, cleanupDisk assumes that
libvirt is being used. Therefore, we pass a StoragePoolType that maps to
libvirt. This is the only place in LibvirtComputingResource where the
StoragePoolType can't be pulled from somewhere else.
BUG-ID: CLOUDSTACK-1011
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
and CloudException in one place, and introduced ApiErrorCode to handle the mapping of
CloudStack API error codes to standard HTTP codes.
Signed-off-by: Min Chen <min.chen@citrix.com>
Detail: This merges the resizevolume feature branch, which provides the
ability to migrate a disk between disk offerings, thereby changing its
size, or specifying a new size if current disk offering is custom.
BUG-ID: CLOUDSTACK-644
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1358358209 -0700
Although I still think the templates aren't well maintained, I just
added 12.04 since this is an LTS and people probably want it in the
list of templates.
This system should be more generic I think though.
Create OvsVifDriver to deal with openvswitch specifics for plugging
interfaces.
Create a parameter to set the bridge type to use in
LibvirtComputingResource.
Create several functions to get bridge information from openvswitch.
Add a check to detect the libvirt version and throw an exception when
the version is too low (< 0.9.11).
Fix classpath loading in Script.findScript to deal with missing path
separators at the end.
Add notification to the BridgeVifDriver that the lswitch broadcast type is
not supported.
- Makes plugins self contained so they decide their properties file format
- PluggableService creates the contract that implementing entity will return a
properties map which is apiname:rolemask (both are strings)
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
NetworkManager's exclusive focus is now
- handling plugins during orchestration, and
- to deal with ip address allocation.
Those classes that used to refer to NetworkManager to get access to the datamodel now refer to NetworkModel
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
Modify the spec file to package the agent files and the scripts
Some changes to the poms to put the java dependencies in the right place.
Move the agent script to the dedicated os dir in packaging.
Issue seen during system VM template upgrade and restoreVM command
scenarios for VMware. In these cases CS tries to recreate the root disk with
the same name as the existing one; in the case of VMware this results in creation
of a vmdk file with the same name for both the existing and the new root volume.
This results in undesired behavior when storage cleanup thread tries to
cleanup old volume. Made the vmdk file name unique by adding the volume
id to it. This will ensure that during volume recreation in the scenarios
mentioned vmdk will get created with a new name and there will be
no undesired side effects of running the storage cleanup thread.
- Fix interface to return array of strings, or filenames
- Fix StaticRoleBased ACL adapter to process config files by going through all pluggable services
- Refactor interface names
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Automates name field filling using the following Python program, which reads from
various *commands.properties.in files and populates name fields based on the
name/cmd class mapping defined in them.
import os
search_pattern = "@APICommand("
pattern_len = len(search_pattern)
prop_files = [
    "client/tomcatconf/cisconexusvsm_commands.properties.in",
    "client/tomcatconf/f5bigip_commands.properties.in",
    "client/tomcatconf/junipersrx_commands.properties.in",
    "client/tomcatconf/netapp_commands.properties.in",
    "client/tomcatconf/netscalerloadbalancer_commands.properties.in",
    "client/tomcatconf/nicira-nvp_commands.properties.in",
    "client/tomcatconf/simulator_commands.properties.in",]
file_prefixes = [
    "plugins/hypervisors/vmware/src/",
    "plugins/network-elements/f5/src/",
    "plugins/network-elements/juniper-srx/src/",
    "plugins/file-systems/netapp/src/",
    "plugins/network-elements/netscaler/src/",
    "plugins/network-elements/nicira-nvp/src/",
    "plugins/hypervisors/simulator/src/",]
counter = 0
for prop_file in prop_files:
    f = open(prop_file, 'r')
    data = f.read()
    f.close()
    file_prefix = file_prefixes[counter]
    apis = filter(lambda x: x.strip()!='' and (not x.startswith('#')), data.split('\n'))
    for api in apis:
        api_name = api.split('=')[0].strip()
        cmd_name = file_prefix + api.split('=')[1].split(';')[0].replace('.', '/').strip() + ".java"
        if not os.path.exists(cmd_name):
            print cmd_name, api_name
        f = open(cmd_name, 'r')
        d = f.read()
        f.close()
        idx = d.find(search_pattern) + pattern_len
        new_str = d[:idx] + "name = \"%s\", " % api_name + d[idx:]
        f = open(cmd_name, 'w')
        f.write(new_str)
        f.close()
    counter += 1
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Entities correlated to Identity carry a uuid and those
correlated to InternalIdentity carry an id. Entities that carry
both correlate to both Identity and InternalIdentity.
This refactors entities wherever possible to ensure the VO only
implements the first class entity.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Multiple fixes:
1. changes to the mvn configuration
a. include simulator to client.war
b. activate simulator by profile
2. templates for simulator
3. developer prefill for simulator
a. Use deploydb-simulator to set up the simulator db
4. Inherit components-simulator.xml from components.xml
5. ListVolumesCommand missed for MockStorageManager
6. Include simulator properties into utils/db.properties
TODO:
Secondary storage VMs don't come up because ComponentLocator doesn't
retain a unique set of adapters by name. Fix this in a subsequent
checkin.
BootArgs carry router priority for master and backup and simulator will
reuse the priority to decide RvR status. Deprecating the odd/even logic.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Detail:
To induce latency for a command you have to use an API call like so
http://localhost:8096/client/api?command=configureSimulator&zoneid=1&podid=1&name=CheckRouterCommand&value=wait:80|timeout:0
(This is a hidden API command just for the simulator)
You will see the configuration reflected in the mockconfiguration table of
the simulator db. You can introduce the latency at runtime without restarting
the management server.
mysql> select * from mockconfiguration;
+----+----------------+--------+------------+---------+--------------------+-------------------+
| id | data_center_id | pod_id | cluster_id | host_id | name               | values            |
+----+----------------+--------+------------+---------+--------------------+-------------------+
|  1 |              1 |      1 |       NULL |    NULL | CheckRouterCommand | wait:80|timeout:0 |
+----+----------------+--------+------------+---------+--------------------+-------------------+
1 row in set (0.00 sec)
By providing the optional zoneid, podid, clusterid, hostid you can induce the
latency at various levels. This delay happens before the command is
processed; after execution the command's Answer is returned to the management
server.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
The simulator, just like any hypervisor, should be a plugin.
Resurrecting it to aid API refactoring tests. WIP
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Detail: Instead of using LibvirtStorageAdaptor for everything, you can create
your own storage adaptor and use it. We select storage adaptor based on storage
pool type, thus we needed to adjust LibvirtComputingResource to pass pool type
to everything in KVMStoragePoolManager. This in turn required that we pass the
info necessary to LibvirtComputingResource as well, so a few agent Commands were
modified.
Note this patch in and of itself shouldn't change any existing behavior, just
allow for new storage adaptors to be selected based on storage pool type.
Reviewed-by: Edison Su
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1355769696 -0700
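A rough sketch of selecting a storage adaptor by pool type, with simplified placeholder types; the real StorageAdaptor interface and pool types live in the KVM plugin:
    import java.util.HashMap;
    import java.util.Map;

    public class StorageAdaptorSelectionSketch {
        interface StorageAdaptor { /* createStoragePool, getStoragePool, ... */ }

        enum StoragePoolType { Filesystem, NetworkFilesystem, CLVM, RBD }

        private final Map<StoragePoolType, StorageAdaptor> adaptors = new HashMap<>();
        private StorageAdaptor defaultAdaptor; // the libvirt-backed adaptor

        // Fall back to the libvirt adaptor when no specific adaptor is registered
        // for the pool type, which preserves the existing behavior.
        StorageAdaptor getStorageAdaptor(StoragePoolType type) {
            StorageAdaptor adaptor = adaptors.get(type);
            return adaptor != null ? adaptor : defaultAdaptor;
        }
    }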
Detail: If source image is qcow2, and we want a qcow2 image, then doing a
convert strips off compression and any snapshots the user had in that image. If
a backing file exists, we stick with convert so we can pull in both the backing
file and the COW image; otherwise we just cp the qcow2 file, which is also faster.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1354755241 -0700
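A small sketch of the decision described above; the helper and command arrays are illustrative, using only what plain cp and qemu-img accept:
    public class TemplateCopySketch {
        // Copying keeps compression and internal snapshots when both source and
        // destination are qcow2 and there is no backing file; otherwise convert.
        static String[] buildCopyCommand(String srcPath, String dstPath,
                                         boolean srcIsQcow2, boolean dstIsQcow2,
                                         boolean hasBackingFile) {
            if (srcIsQcow2 && dstIsQcow2 && !hasBackingFile) {
                return new String[] { "cp", srcPath, dstPath };
            }
            return new String[] { "qemu-img", "convert", "-O", "qcow2", srcPath, dstPath };
        }
    }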
- Refactor VPN and VM APIs to admin and user pkgs
- Namespace: org.apache.cloudstack
- Fix refactored apis in commands*.in
- Fix comments etc.
- Expand tabs, remove trailing whitespace
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Detail: This patch deletes any patchdisk found when deleting root volume for
system VM.
BUG-ID: CLOUDSTACK-566
Bugfix-for: 4.0.1
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1354222335 -0700
stopped
Detail: This patch fixed an issue with hosts trying to stop system vms that were
already not running and deleting a patch disk for the system vm running on
another host. It got applied to 4.0 but not master.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1354222160 -0700
Detail: Because of the way most other primary storage types work with cloudstack
(i.e. backing stores) CLVM actually copies the template to a local logical
volume on primary storage, then uses that. This causes all of your primary
storage to be littered with a copy of every template used. Since we're not
using these, dump the template direct to the newly created logical volume.
This is faster as well since the template is sparse; we're not creating a fat
template on primary storage and then copying that to a logical volume when we
deploy from template.
BUG-ID: CLOUDSTACK-508
Bugfix-for: 4.1
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1353221260 -0700
Detail: In com.cloud.hypervisor.kvm.resource.BridgeVifDriver.java, in 2 places
an if block should have evaluated to true if trafficLabel was null; however, it
was causing a NullPointerException instead.
BUG-ID : NONE
Bugfix-for: 4.0
Reviewed-by: Marcus Sorensen
Reported-by: Dave Cahill
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1352307750 -0700
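A tiny sketch of the null-safe ordering; the helper name is illustrative:
    public class TrafficLabelCheckSketch {
        // The null test has to come first; dereferencing trafficLabel before the
        // null check is what produced the NullPointerException.
        static boolean useDefaultBridge(String trafficLabel) {
            return trafficLabel == null || trafficLabel.isEmpty();
        }
    }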
Detail: There was a regression in functionality introduced by
915babd970 where the public
bridge could not also be the private bridge. This had several
additional consequences, this patch should revert the behavior
back while keeping the functionality enhancements introduced by that
commit.
BUG-ID : NONE
Reviewed-by: Dave Cahill
Reported-by: Dave Cahill via cloudstack-dev
Signed-off-by: Marcus Sorensen <shadowsor@gmail.com> 1351574006 -0600
called.
VifDriver.unplug must be called in MigrateCommand, which hooks VM
migration on the source host, because plug will be called in
PrepareForMigration on the destination host. But that operation is missing
in the current LibvirtComputingResource.
Signed-off-by: Edison Su <sudison@gmail.com>
On a KVM computing host, vifdriver.unplug would always fail (throwing
LibvirtException) and network cleanup would not be called. This was
because the code first undefines the computing domain and then tries to
query the destroyed machine definition to fetch NIC information. IMHO, the
kvm plugin code swallows LibvirtException too much.
Signed-off-by: Edison Su <sudison@gmail.com>
The vmware modules should be listed as provided so they are never
packaged. However this also means that you have to put them in the
web-inf/lib directory by hand.
Set the version of the api in the central pom for easy reference.
Add wsdl4j as a runtime requirement. It is actually required by the
vmware implementation but it is easier to list it as a requirement for
the component here as vmware is not in any maven repo.
Put the dependency on vim back in the dependencies. It is not required
for compile, but is required at runtime by apputils.
vmware-lib-jaxrpc is now provided by axis-jaxrpc-1.4.jar, the former is
the same as latter (bit by bit) and only difference is the file name.
- Fix dependency in vmware-base's pom.xml
- Fix dependency in hypervisor-plugin-vmware's pom.xml
- Fix install-non-oss.sh by reverting commit:
2e6ddc6c36.
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Since only the cephx user like 'admin' was passed we couldn't define two RBD storage pools
using the cephx user admin, even if they were running on different Ceph clusters.
By adding the monitor hostname and poolname to the secret's usage (which we don't even use) it becomes
unique.
Fixes the hard coded path in the vmware plugin.
The systemvm.iso file would copy the script only to /opt/cloud/bin.
Same is the path used for vpc_netusage.sh
Signed-off-by: Rohit Yadav <rohit.yadav@citrix.com>
work)
Cloudstack seems to let you create guest traffic types on multiple
physical networks. However, when I try this with KVM I end up always
bridging to whatever device is used for guest.network.device. This pulls
the traffic label (NicTO.getName()) and uses that bridge to ensure that
we get on the correct physical network, rather than just always using
the guest.network.device.
This also changes the bridge naming scheme from cloudVirBr + vlanid to
br + physicalinterface + "-" + vlanid. This is because we should be able
to support the same vlan numbers per physical network, and the previous
bridge name would not support this and collide.
Signed-off-by: Edison Su <sudison@gmail.com>
create
The code is unable to detect an existing pool, because we use a random
UUID each time. New Libvirt doesn't allow multiple pools to be defined
to the same storage. This patch generates a UUID based on the storage
path, so that it can be detected as existing and reused. It also cleans
up no-op code and adjusts the naming of a few things to clean up any
confusion.
Signed-off-by: Edison Su <sudison@gmail.com>
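A minimal sketch of deriving a stable UUID from the storage path (illustrative helper; java.util.UUID provides name-based UUIDs):
    import java.nio.charset.StandardCharsets;
    import java.util.UUID;

    public class PoolUuidSketch {
        // The same storage path always maps to the same UUID, so an already
        // defined pool can be detected and reused instead of redefined.
        static String poolUuid(String storagePath) {
            return UUID.nameUUIDFromBytes(storagePath.getBytes(StandardCharsets.UTF_8)).toString();
        }

        public static void main(String[] args) {
            System.out.println(poolUuid("nfs-server:/export/primary")); // stable across restarts
        }
    }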
Since /root is r-x permissions, Java fails to mkdir /root/.ssh (even
though the agent is running as root) because it looks for the writable
permission. This patch modifies the 'chmod 700 /root/.ssh' shell command
that we already use into 'mkdir -m 700 /root/.ssh', to be able to create
the directory as root even though write permissions are not set on
/root. This seemed cleaner/safer than adding writable to /root.
Signed-off-by: Edison Su <sudison@gmail.com>
The default value for local.storage.path does not exist by default
in CentOS 6, which silently results in a NullPointerException.
Without this log message, the administrator can't figure out
the reason at all.
Signed-off-by: Edison Su <sudison@gmail.com>
/root/.ssh is created with perms '600' if it doesn't already exist. This causes
a problem in that it can't write out id_rsa.cloud:
2012-08-27 16:35:40,227 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null)
Processing command: com.cloud.agent.api.ModifySshKeysCommand
2012-08-27 16:35:40,228 DEBUG [kvm.resource.LibvirtComputingResource]
(agentRequest-Handler-4:null) Failed to create file: java.io.IOException:
Permission denied
Doing 'chmod u+x /root/.ssh' fixed the above, so it seems that even though the
agent is running as root it cares about being able to chdir into /root/.ssh
Signed-off-by: Sheng Yang <sheng.yang@citrix.com>
Implements
SetupGuestNetworkCommand,SetNetworkACLCommand,SetSourceNatCommand,IpAssocVpcCommand,SetPortForwardingRulesVpcCommand.
Passes basic functionality, though I'm sure there may be some honing to
do.
Also fixes a few minor things found along the way:
vpc_guestnw.sh wasn't successfully setting up apache due to default
listen IP of 10.1.1.1
vpc_guestnw.sh was referencing a 'logger_it' function, replaced with
'logger -t cloud'
system vms were running with OS type "Debian GNU/Linux 5.0(32-bit)",
which was not found in the KVMGuestOsMapper
the Xen implementation of SetupGuestNetworkCommand had apparently
copied its catch message from UnPlug Nic, fixed string
Send-by: Marcus Sorensen
RB: https://reviews.apache.org/r/6883
This is part 1 in enabling VPC for KVM. The various commands needing
implementation will be submitted individually unless I'm told to do
otherwise, in case I don't complete all of the commands, such that
someone else can take over or build on my work.
RB: https://reviews.apache.org/r/6859
Send-by: shadowsor@gmail.com
Add BridgeVifDriver and move current vif implementation to it.
- remove dependency on VirtualRoutingResource.
- factor out some of the networking code in LibvirtComputingResource
to BridgeVifDriver.
Add base class for KVM VifDriver.
Add VifDriver Interface for KVM.
RB: https://reviews.apache.org/r/6285
Send-by: Tomoe Sugihara <tomoe@midokura.com>
We used to generate a UUID when this wasn't set, but since we aren't writing to
agent.properties anymore we have to make sure the UUID is persistent across restarts.
Libvirt can also return a bunch of emulators for e.g. ARM and S390.
We filter those out since we do not support these architectures.
This way we don't try to start an x86_64 instance with an S390 emulator.
Since we are using libvirt for handling our storage pools we should rely on that information as well.
Before fetching the capacity we refresh the pool so libvirt has the most up-to-date information.
This is not needed with newly created pools since libvirt does a refresh on creation.
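A minimal sketch of the refresh-then-read flow with the libvirt Java bindings; the pool name and output format are illustrative:
    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;
    import org.libvirt.StoragePoolInfo;

    public class PoolCapacitySketch {
        // Refresh first so libvirt reports current numbers, then read capacity,
        // allocation and available space from libvirt itself.
        static void printCapacity(Connect conn, String poolName) throws LibvirtException {
            StoragePool pool = conn.storagePoolLookupByName(poolName);
            pool.refresh(0);
            StoragePoolInfo info = pool.getInfo();
            System.out.println("capacity=" + info.capacity
                    + " allocation=" + info.allocation
                    + " available=" + info.available);
            pool.free();
        }
    }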
[Dropped Vmware support in this commit, due to lack of VMware support in VPC now]
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java