the management server was restarted. The template.properties file created for the
template has the format field in upper-case. This caused the template service
not to recognise the format, and it removed the entry from the template_store_ref
table in the db. Fixed the format field in template.properties.
A null pointer exception was generated when a VolumeTO object was
serialized to create an answer object. If local storage is used, the uri
field will be null. Added null checks for the same.
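A minimal sketch of the guard, assuming a simplified stand-in for VolumeTO with just a name and a nullable uri field; the real CloudStack class carries many more fields:

    // Hypothetical, simplified stand-in for CloudStack's VolumeTO. Shows the
    // guard needed when local storage leaves the uri field unset.
    public class VolumeTOSketch {
        private final String name;
        private final String uri; // null when the volume is on local storage

        public VolumeTOSketch(String name, String uri) {
            this.name = name;
            this.uri = uri;
        }

        // Serialize into the string carried by the answer object; check every
        // field that may legitimately be null before dereferencing it.
        public String serialize() {
            StringBuilder sb = new StringBuilder("volume[name=").append(name);
            if (uri != null) {
                sb.append(",uri=").append(uri);
            }
            return sb.append("]").toString();
        }

        public static void main(String[] args) {
            // Local-storage volume: uri is null, serialization must not NPE.
            System.out.println(new VolumeTOSketch("ROOT-42", null).serialize());
        }
    }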
introduces a force option in delete network to forcefully delete a
network. This comes in handy in rare cases where a network fails to implement
and is left in the Shutdown state, but the network shutdown meant to roll back
the implement process fails as well.
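Roughly, the force flag short-circuits the state check that otherwise blocks deletion; the enum values and method names below are illustrative assumptions, not the actual CloudStack network manager API:

    // Illustrative only; not the real CloudStack network manager API.
    enum NetworkState { ALLOCATED, IMPLEMENTED, SHUTDOWN }

    class ForceDeleteSketch {
        boolean deleteNetwork(NetworkState state, boolean forced) {
            if (state == NetworkState.SHUTDOWN && !forced) {
                // Normal path: refuse to delete while the rollback of a
                // failed implement may still be pending.
                return false;
            }
            // forced == true proceeds with cleanup even though the earlier
            // shutdown/rollback failed.
            releaseResources();
            return true;
        }

        private void releaseResources() {
            // ... release IPs, nics and the isolation (vlan/vxlan) here ...
        }
    }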
We hit these exceptions whenever a session that the management server held with the old 5.1 vCenter server
was used to make resource calls to the new 5.5 vCenter.
Validate a vCenter session context before it is used to make a resource call,
and if the context is invalid, discard it and retrieve a new one.
During invalidation of an old context, handle the context disconnect better
by catching the appropriate exception and returning a newly created context.
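The validate-then-reconnect pattern might look like the sketch below; the context interface and its methods are hypothetical placeholders, not the real CloudStack VmwareContext API:

    // Placeholder context type, not the real VmwareContext API.
    class VCenterSessionSketch {
        interface Context {
            boolean validate(); // cheap call proving the session is still alive
            void close();       // disconnect; may throw if the session is gone
        }

        private Context cached;

        Context getValidContext() {
            if (cached != null) {
                if (cached.validate()) {
                    return cached;      // session still good, reuse it
                }
                try {
                    cached.close();     // discard the stale session (e.g. old 5.1)
                } catch (RuntimeException e) {
                    // The server side may have dropped the session already;
                    // swallow the disconnect failure and reconnect below.
                }
                cached = null;
            }
            cached = connect();         // retrieve a brand new session
            return cached;
        }

        private Context connect() {
            // Stand-in for authenticating against vCenter.
            return new Context() {
                public boolean validate() { return true; }
                public void close() { }
            };
        }
    }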
vxlan code. Users can set a physical network to isolation type 'vxlan',
put public traffic on that physical network, and it will still attempt
to use 'vlan' isolation on the KVM hosts. This is going to be an issue
for other isolation types as well, but I'm not familiar with them, so
I'm just fixing vxlan for now.
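The shape of the fix, as a sketch: derive the isolation method from the physical network instead of assuming vlan. Names are illustrative, and other isolation types are deliberately left falling through to vlan, matching the scope stated above:

    class IsolationSelectionSketch {
        String isolationFor(String physicalNetworkIsolation) {
            // Before: always "vlan". After: vxlan is honoured; other
            // isolation types remain unhandled for now, per the note above.
            if ("vxlan".equalsIgnoreCase(physicalNetworkIsolation)) {
                return "vxlan";
            }
            return "vlan";
        }
    }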
For a detached volume, don't try to find the associated VM on the hypervisor/peer hypervisor host.
By default, create a worker VM to perform snapshot operations.
on hyperv. There were multiple issues here. Upload volume was failing
because the post-download check for the vhd on the cifs share was
unsuccessful. Also, the agent code wasn't parsing the volume path correctly;
fixed that too.
Don't package the OVF and VMDK files into OVA after a template is created from volume.
Since the packaging process involves reading from and writing to the NFS mount, it doubles the amount of data that needs to be moved around.
Instead of injecting an object of VolumeOrchestrationService into VmwareResource, we now populate the command object (MigrateVolumeCommand here) with the required information. Thus we don't need the volume orchestration service to query that information from the resource.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
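A sketch of the design change above (populating the command instead of injecting the service); the class shapes are simplified stand-ins for the real MigrateVolumeCommand and VmwareResource:

    class MigrateVolumeCommandSketch {
        final String volumePath;
        final String targetPool;
        final String chainInfo; // info the resource previously had to query

        MigrateVolumeCommandSketch(String volumePath, String targetPool, String chainInfo) {
            this.volumePath = volumePath;
            this.targetPool = targetPool;
            this.chainInfo = chainInfo;
        }
    }

    class VmwareResourceSketch {
        // No VolumeOrchestrationService field: the command is self-sufficient.
        void execute(MigrateVolumeCommandSketch cmd) {
            System.out.printf("migrating %s (%s) to %s%n",
                    cmd.volumePath, cmd.chainInfo, cmd.targetPool);
        }
    }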
When a data disk is attached, a hard disk drive is created on the scsi controller. On detach,
the data disk is removed from the drive but the disk drive is left behind.
On reattach, the agent was again trying to create a disk drive while it was
already present. Fixed the agent code to look up the disk drive while
attaching, and to create the drive for attaching a data disk only if one
is not found.
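The look-up-before-create pattern, sketched with a plain map standing in for the agent's Hyper-V calls:

    import java.util.HashMap;
    import java.util.Map;

    class ScsiDriveSketch {
        private final Map<Integer, String> drivesBySlot = new HashMap<>();

        void attachDataDisk(int slot, String diskPath) {
            String drive = drivesBySlot.get(slot);
            if (drive == null) {
                // Old behavior created a drive unconditionally and failed on
                // reattach because detach had left the drive behind.
                drive = "drive-" + slot;
                drivesBySlot.put(slot, drive);
            }
            System.out.println("attaching " + diskPath + " to " + drive);
        }
    }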
The agent was always creating a disk with image format vhdx, but the
cloudstack management server defaults to image format vhd for hyperv.
Updated the agent code to be consistent with what CS expects. All disks
are now created with image format vhd.
When the VM is not running, the existing code is unable to retrieve the associated cluster's id. Now we try to get this information using the previous host where the VM was running.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
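A sketch of the fallback described above, with hypothetical lookups in place of the real DAO calls:

    import java.util.Optional;

    class ClusterLookupSketch {
        Optional<Long> clusterIdOf(Long currentHostId, Long lastHostId) {
            if (currentHostId != null) {
                return Optional.of(clusterOfHost(currentHostId)); // running VM
            }
            if (lastHostId != null) {
                // Stopped VM: fall back to the host it last ran on.
                return Optional.of(clusterOfHost(lastHostId));
            }
            return Optional.empty();
        }

        private long clusterOfHost(long hostId) {
            return hostId; // placeholder for the real host -> cluster lookup
        }
    }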
DetachISO was succeeding even though the detach operation was failing, because the cdrom was locked by the VM as it was mounted inside the VM.
Detect whether the cdrom is locked; if it is, fail the detach operation and warn the user to unmount it before detaching the iso/cdrom device.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
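A sketch of failing fast on a locked cdrom; the exception type and lock check are stand-ins for whatever the vSphere API actually reports:

    class IsoDetachSketch {
        static class CdromLockedException extends RuntimeException {
            CdromLockedException(String msg) { super(msg); }
        }

        void detachIso(boolean lockedByGuest) {
            if (lockedByGuest) {
                // Previously this case was silently reported as success.
                throw new CdromLockedException(
                        "cdrom is locked by the VM; unmount it inside the guest first");
            }
            // ... proceed with the actual detach ...
        }
    }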
deployment fails because of that, as cloudstack tries to deploy it on a host which is
actually down. An investigator wasn't present for hyper-v, so cloudstack wasn't able to
determine the status of the host. Wrote an investigator for hyper-v which checks with
other hosts in the cluster for the status of the host being investigated.
host is put in maintenance mode. The migrate flag wasn't set to true in
the maintain answer. This caused cloudstack not to schedule migration
work items for VMs on the host. Made a change to set the migrate flag to
true in the maintain answer.
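A minimal stand-in showing why the flag matters; the real maintain answer class lives in the CloudStack agent API:

    class MaintainAnswerSketch {
        private final boolean success;
        private final boolean migrate;

        MaintainAnswerSketch(boolean success, boolean migrate) {
            this.success = success;
            this.migrate = migrate;
        }

        // The server schedules migration work items only when this holds,
        // which is why the flag must be set to true.
        boolean shouldScheduleMigration() {
            return success && migrate;
        }
    }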
During VM deployment, when the base template is being cloned to create the VM ROOT disk, get the disk path,
i.e. the base file name of the VM's ROOT disk, from vCenter.
vmware.reserve.cpu/mem are set to true. Instead of setting
the overcommit values to one on upgrade, we populate them
from the global values.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
brought back up after being down for a few hours, snapshot jobs do not get
triggered, with the reason "there is other active snapshot tasks on the
instance to which the volume is attached".
The systemvm iso file is copied only when a systemvm or router vm is to be started on
a host. The file gets copied to the secondary storage. The mount point used is the one
that has permissions for a regular user to mount a share.
CLOUDSTACK-5275: The failure was because secondary storage wasn't available when the
host was added. When a setup is done through the wizard, the hosts get added before the
secondary storage. CS was trying to copy the systemvm iso to secondary storage and it used to
fail if it wasn't available. Made a change to copy the iso only when a systemvm is
being started on a host.
CLOUDSTACK-5202: Made changes to clean up mount points on stop and start.
All three are related fixes, so putting them in one commit.
service and not used for LB
The fix adds a boolean flag to the addNetscalerLoadBalancer API, which
will mark the added NetScaler for exclusive GSLB service. A NetScaler marked
as an exclusive GSLB service provider is not picked as any guest network's
LB provider.
Conflicts:
engine/schema/src/com/cloud/network/dao/ExternalLoadBalancerDeviceVO.java
plugins/network-elements/f5/src/com/cloud/network/element/F5ExternalLoadBalancerElement.java
plugins/network-elements/netscaler/src/com/cloud/api/commands/AddNetscalerLoadBalancerCmd.java
plugins/network-elements/netscaler/src/com/cloud/api/response/NetscalerLoadBalancerResponse.java
plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
server/src/com/cloud/network/ExternalLoadBalancerDeviceManager.java
server/src/com/cloud/network/ExternalLoadBalancerDeviceManagerImpl.java
setup/db/db/schema-421to430.sql
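A sketch of the selection rule in the GSLB fix above: NetScalers flagged exclusive-GSLB are filtered out of the LB candidates. The device class and flag name are simplified stand-ins for the real device VO:

    import java.util.ArrayList;
    import java.util.List;

    class GslbSelectionSketch {
        static class NetscalerDevice {
            final String name;
            final boolean exclusiveGslbProvider; // the new boolean flag

            NetscalerDevice(String name, boolean exclusiveGslbProvider) {
                this.name = name;
                this.exclusiveGslbProvider = exclusiveGslbProvider;
            }
        }

        // A device marked exclusive-GSLB is never offered as a guest
        // network's LB provider.
        static List<NetscalerDevice> lbCandidates(List<NetscalerDevice> devices) {
            List<NetscalerDevice> result = new ArrayList<>();
            for (NetscalerDevice d : devices) {
                if (!d.exclusiveGslbProvider) {
                    result.add(d);
                }
            }
            return result;
        }
    }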
1. Egress default policy rules are sent to the firewall provider. It is up to the
provider to configure the rules.
2. The default policy rules are sent for both allow and deny default policies.
3. On network shutdown, rules for deletion are sent.
4. For VR and SRX, traffic is denied by default, so no default rule to deny traffic is required.
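Taken together, points 1 through 4 reduce to a small decision about when an explicit default rule must be pushed; the enum and method names below are illustrative:

    enum EgressDefaultPolicy { ALLOW, DENY }

    class EgressDefaultRuleSketch {
        // Providers such as VR and SRX already deny egress by default, so an
        // explicit default rule is needed only when it would change behavior.
        boolean needsDefaultRule(EgressDefaultPolicy policy, boolean providerDeniesByDefault) {
            return policy == EgressDefaultPolicy.ALLOW || !providerDeniesByDefault;
        }
    }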