mirror of https://github.com/apache/cloudstack.git
Merge branch 'master' into javelin
This commit is contained in:
commit
387c6fc135
29
README.html
29
README.html
|
|
@ -1,29 +0,0 @@
|
|||
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
|
||||
<!--
|
||||
Licensed to the Apache Software Foundation (ASF) under one
|
||||
or more contributor license agreements. See the NOTICE file
|
||||
distributed with this work for additional information
|
||||
regarding copyright ownership. The ASF licenses this file
|
||||
to you under the Apache License, Version 2.0 (the
|
||||
"License"); you may not use this file except in compliance
|
||||
with the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing,
|
||||
software distributed under the License is distributed on an
|
||||
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations
|
||||
under the License.
|
||||
-->
|
||||
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
|
||||
<body>
|
||||
<a href="http://cloud.com"><img src="docs/images/logo_cloud.jpg"></a>
|
||||
<p>Welcome to CloudStack. Here's where you can find more information:</p>
|
||||
<ul>
|
||||
<li><a href="http://www.cloudstack.org">Community</a> - forums, code, bugbase, blog, events, links to outside resources, IRC, and more</li>
|
||||
<li><a href="http://cloud.mindtouch.us">Documentation and Knowledge Base</a> - installation steps, guides, references, troubleshooting tips</li>
|
||||
</ul>
|
||||
</body>
|
||||
</html>
|
||||
120
README.md
120
README.md
|
|
@ -1,3 +1,29 @@
|
|||
Apache CloudStack (Incubating) Version 4.0.0
|
||||
|
||||
# About Apache CloudStack (Incubating)
|
||||
|
||||
Apache CloudStack (Incubating) is software designed to deploy
|
||||
and manage large networks of virtual machines, as a highly
|
||||
available, highly scalable Infrastructure as a Service (IaaS)
|
||||
cloud computing platform. CloudStack is used by a number of
|
||||
service providers to offer public cloud services, and by many
|
||||
companies to provide an on-premises (private) cloud offering.
|
||||
|
||||
Apache CloudStack currently supports the most popular hypervisors:
|
||||
VMware, Oracle VM, KVM, XenServer and Xen Cloud Platform.
|
||||
CloudStack also offers bare metal management of servers,
|
||||
using PXE to provision OS images and IPMI to manage the server.
|
||||
Apache CloudStack offers three methods for managing cloud
|
||||
computing environments: an easy to use Web interface, command
|
||||
line tools, and a full-featured RESTful API.
|
||||
|
||||
Visit us at [cloudstack.org](http://incubator.apache.org/cloudstack).
|
||||
|
||||
## Mailing lists
|
||||
[Development Mailing List](mailto:cloudstack-dev-subscribe@incubator.apache.org)
|
||||
[Users Mailing list](mailto:cloudstack-users-subscribe@incubator.apache.org)
|
||||
[Commits mailing list](mailto:cloudstack-commits-subscribe@incubator.apache.org)
|
||||
|
||||
# License
|
||||
|
||||
Licensed to the Apache Software Foundation (ASF) under one
|
||||
|
|
@ -17,42 +43,88 @@ KIND, either express or implied. See the License for the
|
|||
specific language governing permissions and limitations
|
||||
under the License.
|
||||
|
||||
# Apache CloudStack
|
||||
# Building CloudStack
|
||||
|
||||
Apache CloudStack is a massively scalable free/libre open source Infrastructure as a Service cloud platform.
|
||||
By default, CloudStack will only build with supporting packages
|
||||
that are appropved by the ASF as being compatible with the Apache
|
||||
Software License Version 2.
|
||||
|
||||
Visit us at [cloudstack.org](http://cloudstack.org) or join #cloudstack on irc.freenode.net
|
||||
## Default build
|
||||
|
||||
## Binary Downloads
|
||||
To build the default build target, use maven3 and execute:
|
||||
|
||||
Downloads are available from:
|
||||
http://cloudstack.org/download.html
|
||||
maven install
|
||||
|
||||
## Supported Hypervisors
|
||||
## Including optional third party libraries in your build
|
||||
|
||||
* XenServer
|
||||
* KVM
|
||||
* VMware ESX/ESXi (via vCenter)
|
||||
* Oracle VM
|
||||
* XCP
|
||||
If you want to build this software against one of the optional
|
||||
third party libraries, follow the instructions below:
|
||||
|
||||
## Mailing lists
|
||||
[Development Mailing List](mailto:cloudstack-dev-subscribe@incubator.apache.org)
|
||||
[Users Mailing list](mailto:cloudstack-users-subscribe@incubator.apache.org)
|
||||
[Commits mailing list](mailto:cloudstack-commits-subscribe@incubator.apache.org)
|
||||
These third parties jars are non available in Maven central, and
|
||||
need to be located and downloaded by the developer themselves.
|
||||
The libraries to download are listed below, by the feature that
|
||||
they support.
|
||||
|
||||
#Maven build
|
||||
Some third parties jars are non available in Maven central.
|
||||
So install it with: cd deps&&sh ./install-non-oss.sh
|
||||
Now you are able to activate nonoss build with adding -Dnonoss to maven cli.
|
||||
For F5 load balancing support:
|
||||
cloud-iControl.jar
|
||||
|
||||
For Netscaler support:
|
||||
cloud-netscaler.jar
|
||||
cloud-netscaler-sdx.jar
|
||||
|
||||
For NetApp Storage Support:
|
||||
cloud-manageontap.jar
|
||||
|
||||
For VMware Support:
|
||||
vmware-vim.jar
|
||||
vmware-vim25.jar
|
||||
vmware-apputils.jar
|
||||
|
||||
Once downloaded (and named the same as listed above), they can be
|
||||
installed into your local maven repository with the following command:
|
||||
|
||||
cd deps&&sh ./install-non-oss.sh
|
||||
|
||||
To perform the build, run the following command:
|
||||
|
||||
mvn -Dnonoss install
|
||||
|
||||
## Running a developer environment
|
||||
|
||||
To run the webapp client:
|
||||
|
||||
mvn org.apache.tomcat.maven:tomcat7-maven-plugin:2.0-beta-1:run -pl :cloud-client-ui -am -Pclient
|
||||
|
||||
Then hit: http://localhost:8080/cloud-client-ui/
|
||||
|
||||
to run webapp client:
|
||||
mvn org.apache.tomcat.maven:tomcat7-maven-plugin:2.0-beta-1:run -pl :cloud-client-ui -am -Pclient -Dnonoss
|
||||
then hit: http://localhost:8080/cloud-client-ui/
|
||||
or add in your ~/.m2/settings.xml
|
||||
<pluginGroups>
|
||||
<pluginGroup>org.apache.tomcat.maven</pluginGroup>
|
||||
</pluginGroups>
|
||||
and save your fingers with mvn tomcat7:run -pl :cloud-client-ui -am -Pclient -Dnonoss
|
||||
and save your fingers with mvn tomcat7:run -pl :cloud-client-ui -am -Pclient
|
||||
|
||||
Optionally add -Dnonoss to either of the commands above.
|
||||
|
||||
If you want to use ide debug: replace mvn with mvnDebug and attach your ide debugger to port 8000
|
||||
|
||||
# Notice of Cryptographic Software
|
||||
|
||||
This distribution includes cryptographic software. The country in which you currently
|
||||
reside may have restrictions on the import, possession, use, and/or re-export to another
|
||||
country, of encryption software. BEFORE using any encryption software, please check your
|
||||
country's laws, regulations and policies concerning the import, possession, or use, and
|
||||
re-export of encryption software, to see if this is permitted. See http://www.wassenaar.org/
|
||||
for more information.
|
||||
|
||||
The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has
|
||||
classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which
|
||||
includes information security software using or performing cryptographic functions with
|
||||
asymmetric algorithms. The form and manner of this Apache Software Foundation distribution
|
||||
makes it eligible for export under the License Exception ENC Technology Software
|
||||
Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section
|
||||
740.13) for both object code and source code.
|
||||
|
||||
The following provides more details on the included cryptographic software:
|
||||
|
||||
TODO
|
||||
|
||||
|
|
|
|||
|
|
@ -1,89 +0,0 @@
|
|||
-- Licensed to the Apache Software Foundation (ASF) under one
|
||||
-- or more contributor license agreements. See the NOTICE file
|
||||
-- distributed with this work for additional information
|
||||
-- regarding copyright ownership. The ASF licenses this file
|
||||
-- to you under the Apache License, Version 2.0 (the
|
||||
-- "License"); you may not use this file except in compliance
|
||||
-- with the License. You may obtain a copy of the License at
|
||||
--
|
||||
-- http://www.apache.org/licenses/LICENSE-2.0
|
||||
--
|
||||
-- Unless required by applicable law or agreed to in writing,
|
||||
-- software distributed under the License is distributed on an
|
||||
-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
-- KIND, either express or implied. See the License for the
|
||||
-- specific language governing permissions and limitations
|
||||
-- under the License.
|
||||
DROP TABLE IF EXISTS `cloud`.`mockhost`;
|
||||
DROP TABLE IF EXISTS `cloud`.`mocksecstorage`;
|
||||
DROP TABLE IF EXISTS `cloud`.`mockstoragepool`;
|
||||
DROP TABLE IF EXISTS `cloud`.`mockvm`;
|
||||
DROP TABLE IF EXISTS `cloud`.`mockvolume`;
|
||||
|
||||
CREATE TABLE `cloud`.`mockhost` (
|
||||
`id` bigint unsigned NOT NULL auto_increment,
|
||||
`name` varchar(255) NOT NULL,
|
||||
`private_ip_address` char(40),
|
||||
`private_mac_address` varchar(17),
|
||||
`private_netmask` varchar(15),
|
||||
`storage_ip_address` char(40),
|
||||
`storage_netmask` varchar(15),
|
||||
`storage_mac_address` varchar(17),
|
||||
`public_ip_address` char(40),
|
||||
`public_netmask` varchar(15),
|
||||
`public_mac_address` varchar(17),
|
||||
`guid` varchar(255) UNIQUE,
|
||||
`version` varchar(40) NOT NULL,
|
||||
`data_center_id` bigint unsigned NOT NULL,
|
||||
`pod_id` bigint unsigned,
|
||||
`cluster_id` bigint unsigned COMMENT 'foreign key to cluster',
|
||||
`cpus` int(10) unsigned,
|
||||
`speed` int(10) unsigned,
|
||||
`ram` bigint unsigned,
|
||||
`capabilities` varchar(255) COMMENT 'host capabilities in comma separated list',
|
||||
`vm_id` bigint unsigned,
|
||||
`resource` varchar(255) DEFAULT NULL COMMENT 'If it is a local resource, this is the class name',
|
||||
PRIMARY KEY (`id`)
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
|
||||
|
||||
CREATE TABLE `cloud`.`mocksecstorage` (
|
||||
`id` bigint unsigned NOT NULL auto_increment,
|
||||
`url` varchar(255),
|
||||
`capacity` bigint unsigned,
|
||||
`mount_point` varchar(255),
|
||||
PRIMARY KEY (`id`)
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
|
||||
|
||||
CREATE TABLE `cloud`.`mockstoragepool` (
|
||||
`id` bigint unsigned NOT NULL auto_increment,
|
||||
`guid` varchar(255),
|
||||
`mount_point` varchar(255),
|
||||
`capacity` bigint,
|
||||
`pool_type` varchar(40),
|
||||
PRIMARY KEY (`id`)
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
|
||||
|
||||
|
||||
CREATE TABLE `cloud`.`mockvm` (
|
||||
`id` bigint unsigned NOT NULL auto_increment,
|
||||
`name` varchar(255),
|
||||
`host_id` bigint unsigned,
|
||||
`type` varchar(40),
|
||||
`state` varchar(40),
|
||||
`vnc_port` bigint unsigned,
|
||||
`memory` bigint unsigned,
|
||||
`cpu` bigint unsigned,
|
||||
PRIMARY KEY (`id`)
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
|
||||
|
||||
|
||||
CREATE TABLE `cloud`.`mockvolume` (
|
||||
`id` bigint unsigned NOT NULL auto_increment,
|
||||
`name` varchar(255),
|
||||
`size` bigint unsigned,
|
||||
`path` varchar(255),
|
||||
`pool_id` bigint unsigned,
|
||||
`type` varchar(40),
|
||||
PRIMARY KEY (`id`)
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
|
||||
|
||||
|
|
@ -33,28 +33,34 @@ import com.cloud.simulator.MockHost;
|
|||
import com.cloud.utils.component.Manager;
|
||||
|
||||
public interface MockAgentManager extends Manager {
|
||||
public static final long DEFAULT_HOST_MEM_SIZE = 8 * 1024 * 1024 * 1024L; // 8G, unit of
|
||||
// Mbytes
|
||||
public static final int DEFAULT_HOST_CPU_CORES = 4; // 2 dual core CPUs (2 x
|
||||
// 2)
|
||||
public static final int DEFAULT_HOST_SPEED_MHZ = 8000; // 1 GHz CPUs
|
||||
boolean configure(String name, Map<String, Object> params) throws ConfigurationException;
|
||||
public static final long DEFAULT_HOST_MEM_SIZE = 8 * 1024 * 1024 * 1024L; // 8G,
|
||||
// unit
|
||||
// of
|
||||
// Mbytes
|
||||
public static final int DEFAULT_HOST_CPU_CORES = 4; // 2 dual core CPUs (2 x
|
||||
// 2)
|
||||
public static final int DEFAULT_HOST_SPEED_MHZ = 8000; // 1 GHz CPUs
|
||||
|
||||
Map<AgentResourceBase, Map<String, String>> createServerResources(Map<String, Object> params);
|
||||
boolean configure(String name, Map<String, Object> params) throws ConfigurationException;
|
||||
|
||||
boolean handleSystemVMStart(long vmId, String privateIpAddress, String privateMacAddress, String privateNetMask, long dcId, long podId, String name, String vmType, String url);
|
||||
Map<AgentResourceBase, Map<String, String>> createServerResources(Map<String, Object> params);
|
||||
|
||||
boolean handleSystemVMStop(long vmId);
|
||||
boolean handleSystemVMStart(long vmId, String privateIpAddress, String privateMacAddress, String privateNetMask,
|
||||
long dcId, long podId, String name, String vmType, String url);
|
||||
|
||||
GetHostStatsAnswer getHostStatistic(GetHostStatsCommand cmd);
|
||||
Answer checkHealth(CheckHealthCommand cmd);
|
||||
Answer pingTest(PingTestCommand cmd);
|
||||
|
||||
Answer prepareForMigrate(PrepareForMigrationCommand cmd);
|
||||
|
||||
MockHost getHost(String guid);
|
||||
boolean handleSystemVMStop(long vmId);
|
||||
|
||||
Answer maintain(MaintainCommand cmd);
|
||||
GetHostStatsAnswer getHostStatistic(GetHostStatsCommand cmd);
|
||||
|
||||
Answer checkHealth(CheckHealthCommand cmd);
|
||||
|
||||
Answer pingTest(PingTestCommand cmd);
|
||||
|
||||
Answer prepareForMigrate(PrepareForMigrationCommand cmd);
|
||||
|
||||
MockHost getHost(String guid);
|
||||
|
||||
Answer maintain(MaintainCommand cmd);
|
||||
|
||||
Answer checkNetworkCommand(CheckNetworkCommand cmd);
|
||||
}
|
||||
|
|
|
|||
|
|
@ -61,321 +61,408 @@ import com.cloud.utils.Pair;
|
|||
import com.cloud.utils.component.Inject;
|
||||
import com.cloud.utils.concurrency.NamedThreadFactory;
|
||||
import com.cloud.utils.db.DB;
|
||||
import com.cloud.utils.db.Transaction;
|
||||
import com.cloud.utils.exception.CloudRuntimeException;
|
||||
import com.cloud.utils.net.NetUtils;
|
||||
|
||||
@Local(value = { MockAgentManager.class })
|
||||
public class MockAgentManagerImpl implements MockAgentManager {
|
||||
private static final Logger s_logger = Logger.getLogger(MockAgentManagerImpl.class);
|
||||
@Inject HostPodDao _podDao = null;
|
||||
@Inject MockHostDao _mockHostDao = null;
|
||||
@Inject MockVMDao _mockVmDao = null;
|
||||
@Inject SimulatorManager _simulatorMgr = null;
|
||||
@Inject AgentManager _agentMgr = null;
|
||||
@Inject MockStorageManager _storageMgr = null;
|
||||
@Inject ResourceManager _resourceMgr;
|
||||
private SecureRandom random;
|
||||
private Map<String, AgentResourceBase> _resources = new ConcurrentHashMap<String, AgentResourceBase>();
|
||||
private ThreadPoolExecutor _executor;
|
||||
private static final Logger s_logger = Logger.getLogger(MockAgentManagerImpl.class);
|
||||
@Inject
|
||||
HostPodDao _podDao = null;
|
||||
@Inject
|
||||
MockHostDao _mockHostDao = null;
|
||||
@Inject
|
||||
MockVMDao _mockVmDao = null;
|
||||
@Inject
|
||||
SimulatorManager _simulatorMgr = null;
|
||||
@Inject
|
||||
AgentManager _agentMgr = null;
|
||||
@Inject
|
||||
MockStorageManager _storageMgr = null;
|
||||
@Inject
|
||||
ResourceManager _resourceMgr;
|
||||
private SecureRandom random;
|
||||
private Map<String, AgentResourceBase> _resources = new ConcurrentHashMap<String, AgentResourceBase>();
|
||||
private ThreadPoolExecutor _executor;
|
||||
|
||||
private Pair<String, Long> getPodCidr(long podId, long dcId) {
|
||||
try {
|
||||
|
||||
HashMap<Long, List<Object>> podMap = _podDao
|
||||
.getCurrentPodCidrSubnets(dcId, 0);
|
||||
List<Object> cidrPair = podMap.get(podId);
|
||||
String cidrAddress = (String) cidrPair.get(0);
|
||||
Long cidrSize = (Long)cidrPair.get(1);
|
||||
return new Pair<String, Long>(cidrAddress, cidrSize);
|
||||
} catch (PatternSyntaxException e) {
|
||||
s_logger.error("Exception while splitting pod cidr");
|
||||
return null;
|
||||
} catch(IndexOutOfBoundsException e) {
|
||||
s_logger.error("Invalid pod cidr. Please check");
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
private Pair<String, Long> getPodCidr(long podId, long dcId) {
|
||||
try {
|
||||
|
||||
private String getIpAddress(long instanceId, long dcId, long podId) {
|
||||
Pair<String, Long> cidr = this.getPodCidr(podId, dcId);
|
||||
return NetUtils.long2Ip(NetUtils.ip2Long(cidr.first()) + instanceId);
|
||||
}
|
||||
|
||||
private String getMacAddress(long dcId, long podId, long clusterId, int instanceId) {
|
||||
return NetUtils.long2Mac((dcId << 40 + podId << 32 + clusterId << 24 + instanceId));
|
||||
}
|
||||
public synchronized int getNextAgentId(long cidrSize) {
|
||||
return random.nextInt((int)cidrSize);
|
||||
}
|
||||
|
||||
@Override
|
||||
@DB
|
||||
public Map<AgentResourceBase, Map<String, String>> createServerResources(
|
||||
Map<String, Object> params) {
|
||||
|
||||
Map<String, String> args = new HashMap<String, String>();
|
||||
Map<AgentResourceBase, Map<String,String>> newResources = new HashMap<AgentResourceBase, Map<String,String>>();
|
||||
AgentResourceBase agentResource;
|
||||
long cpuCore = Long.parseLong((String)params.get("cpucore"));
|
||||
long cpuSpeed = Long.parseLong((String)params.get("cpuspeed"));
|
||||
long memory = Long.parseLong((String)params.get("memory"));
|
||||
long localStorageSize = Long.parseLong((String)params.get("localstorage"));
|
||||
synchronized (this) {
|
||||
long dataCenterId = Long.parseLong((String)params.get("zone"));
|
||||
long podId = Long.parseLong((String)params.get("pod"));
|
||||
long clusterId = Long.parseLong((String)params.get("cluster"));
|
||||
long cidrSize = getPodCidr(podId, dataCenterId).second();
|
||||
HashMap<Long, List<Object>> podMap = _podDao.getCurrentPodCidrSubnets(dcId, 0);
|
||||
List<Object> cidrPair = podMap.get(podId);
|
||||
String cidrAddress = (String) cidrPair.get(0);
|
||||
Long cidrSize = (Long) cidrPair.get(1);
|
||||
return new Pair<String, Long>(cidrAddress, cidrSize);
|
||||
} catch (PatternSyntaxException e) {
|
||||
s_logger.error("Exception while splitting pod cidr");
|
||||
return null;
|
||||
} catch (IndexOutOfBoundsException e) {
|
||||
s_logger.error("Invalid pod cidr. Please check");
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
int agentId = getNextAgentId(cidrSize);
|
||||
String ipAddress = getIpAddress(agentId, dataCenterId, podId);
|
||||
String macAddress = getMacAddress(dataCenterId, podId, clusterId, agentId);
|
||||
MockHostVO mockHost = new MockHostVO();
|
||||
mockHost.setDataCenterId(dataCenterId);
|
||||
mockHost.setPodId(podId);
|
||||
mockHost.setClusterId(clusterId);
|
||||
mockHost.setCapabilities("hvm");
|
||||
mockHost.setCpuCount(cpuCore);
|
||||
mockHost.setCpuSpeed(cpuSpeed);
|
||||
mockHost.setMemorySize(memory);
|
||||
String guid = UUID.randomUUID().toString();
|
||||
mockHost.setGuid(guid);
|
||||
mockHost.setName("SimulatedAgent." + guid);
|
||||
mockHost.setPrivateIpAddress(ipAddress);
|
||||
mockHost.setPublicIpAddress(ipAddress);
|
||||
mockHost.setStorageIpAddress(ipAddress);
|
||||
mockHost.setPrivateMacAddress(macAddress);
|
||||
mockHost.setPublicMacAddress(macAddress);
|
||||
mockHost.setStorageMacAddress(macAddress);
|
||||
mockHost.setVersion(this.getClass().getPackage().getImplementationVersion());
|
||||
mockHost.setResource("com.cloud.agent.AgentRoutingResource");
|
||||
mockHost = _mockHostDao.persist(mockHost);
|
||||
|
||||
_storageMgr.getLocalStorage(guid, localStorageSize);
|
||||
private String getIpAddress(long instanceId, long dcId, long podId) {
|
||||
Pair<String, Long> cidr = this.getPodCidr(podId, dcId);
|
||||
return NetUtils.long2Ip(NetUtils.ip2Long(cidr.first()) + instanceId);
|
||||
}
|
||||
|
||||
agentResource = new AgentRoutingResource();
|
||||
if (agentResource != null) {
|
||||
try {
|
||||
params.put("guid", mockHost.getGuid());
|
||||
agentResource.start();
|
||||
agentResource.configure(mockHost.getName(),
|
||||
params);
|
||||
private String getMacAddress(long dcId, long podId, long clusterId, int instanceId) {
|
||||
return NetUtils.long2Mac((dcId << 40 + podId << 32 + clusterId << 24 + instanceId));
|
||||
}
|
||||
|
||||
newResources.put(agentResource, args);
|
||||
} catch (ConfigurationException e) {
|
||||
s_logger
|
||||
.error("error while configuring server resource"
|
||||
+ e.getMessage());
|
||||
}
|
||||
}
|
||||
}
|
||||
return newResources;
|
||||
}
|
||||
|
||||
|
||||
@Override
|
||||
public boolean configure(String name, Map<String, Object> params)
|
||||
throws ConfigurationException {
|
||||
try {
|
||||
random = SecureRandom.getInstance("SHA1PRNG");
|
||||
_executor = new ThreadPoolExecutor(1, 5, 1, TimeUnit.DAYS, new LinkedBlockingQueue<Runnable>(), new NamedThreadFactory("Simulator-Agent-Mgr"));
|
||||
//ComponentLocator locator = ComponentLocator.getCurrentLocator();
|
||||
//_simulatorMgr = (SimulatorManager) locator.getComponent(SimulatorManager.Name);
|
||||
} catch (NoSuchAlgorithmException e) {
|
||||
s_logger.debug("Failed to initialize random:" + e.toString());
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean handleSystemVMStart(long vmId, String privateIpAddress, String privateMacAddress, String privateNetMask, long dcId, long podId, String name, String vmType, String url) {
|
||||
_executor.execute(new SystemVMHandler(vmId, privateIpAddress, privateMacAddress, privateNetMask, dcId, podId, name, vmType, _simulatorMgr, url));
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean handleSystemVMStop(long vmId) {
|
||||
_executor.execute(new SystemVMHandler(vmId));
|
||||
return true;
|
||||
}
|
||||
|
||||
private class SystemVMHandler implements Runnable {
|
||||
private long vmId;
|
||||
private String privateIpAddress;
|
||||
private String privateMacAddress;
|
||||
private String privateNetMask;
|
||||
private long dcId;
|
||||
private long podId;
|
||||
private String guid;
|
||||
private String name;
|
||||
private String vmType;
|
||||
private SimulatorManager mgr;
|
||||
private String mode;
|
||||
private String url;
|
||||
public SystemVMHandler(long vmId, String privateIpAddress, String privateMacAddress, String privateNetMask, long dcId, long podId, String name, String vmType,
|
||||
SimulatorManager mgr, String url) {
|
||||
this.vmId = vmId;
|
||||
this.privateIpAddress = privateIpAddress;
|
||||
this.privateMacAddress = privateMacAddress;
|
||||
this.privateNetMask = privateNetMask;
|
||||
this.dcId = dcId;
|
||||
this.guid = "SystemVM-" + UUID.randomUUID().toString();
|
||||
this.name = name;
|
||||
this.vmType = vmType;
|
||||
this.mgr = mgr;
|
||||
this.mode = "Start";
|
||||
this.url = url;
|
||||
this.podId = podId;
|
||||
}
|
||||
|
||||
public SystemVMHandler(long vmId) {
|
||||
this.vmId = vmId;
|
||||
this.mode = "Stop";
|
||||
}
|
||||
|
||||
@Override
|
||||
@DB
|
||||
public void run() {
|
||||
if (this.mode.equalsIgnoreCase("Stop")) {
|
||||
MockHost host = _mockHostDao.findByVmId(this.vmId);
|
||||
if (host != null) {
|
||||
String guid = host.getGuid();
|
||||
if (guid != null) {
|
||||
AgentResourceBase res = _resources.get(guid);
|
||||
if (res != null) {
|
||||
res.stop();
|
||||
_resources.remove(guid);
|
||||
}
|
||||
}
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
String resource = null;
|
||||
if (vmType.equalsIgnoreCase("secstorage")) {
|
||||
resource = "com.cloud.agent.AgentStorageResource";
|
||||
}
|
||||
MockHostVO mockHost = new MockHostVO();
|
||||
mockHost.setDataCenterId(this.dcId);
|
||||
mockHost.setPodId(this.podId);
|
||||
mockHost.setCpuCount(DEFAULT_HOST_CPU_CORES);
|
||||
mockHost.setCpuSpeed(DEFAULT_HOST_SPEED_MHZ);
|
||||
mockHost.setMemorySize(DEFAULT_HOST_MEM_SIZE);
|
||||
mockHost.setGuid(this.guid);
|
||||
mockHost.setName(name);
|
||||
mockHost.setPrivateIpAddress(this.privateIpAddress);
|
||||
mockHost.setPublicIpAddress(this.privateIpAddress);
|
||||
mockHost.setStorageIpAddress(this.privateIpAddress);
|
||||
mockHost.setPrivateMacAddress(this.privateMacAddress);
|
||||
mockHost.setPublicMacAddress(this.privateMacAddress);
|
||||
mockHost.setStorageMacAddress(this.privateMacAddress);
|
||||
mockHost.setVersion(this.getClass().getPackage().getImplementationVersion());
|
||||
mockHost.setResource(resource);
|
||||
mockHost.setVmId(vmId);
|
||||
mockHost = _mockHostDao.persist(mockHost);
|
||||
|
||||
if (vmType.equalsIgnoreCase("secstorage")) {
|
||||
AgentStorageResource storageResource = new AgentStorageResource();
|
||||
try {
|
||||
Map<String, Object> params = new HashMap<String, Object>();
|
||||
Map<String, String> details = new HashMap<String, String>();
|
||||
params.put("guid", this.guid);
|
||||
details.put("guid", this.guid);
|
||||
storageResource.configure("secondaryStorage", params);
|
||||
storageResource.start();
|
||||
//on the simulator the ssvm is as good as a direct agent
|
||||
_resourceMgr.addHost(mockHost.getDataCenterId(), storageResource, Host.Type.SecondaryStorageVM, details);
|
||||
_resources.put(this.guid, storageResource);
|
||||
} catch (ConfigurationException e) {
|
||||
s_logger.debug("Failed to load secondary storage resource: " + e.toString());
|
||||
return;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
public synchronized int getNextAgentId(long cidrSize) {
|
||||
return random.nextInt((int) cidrSize);
|
||||
}
|
||||
|
||||
@Override
|
||||
public MockHost getHost(String guid) {
|
||||
return _mockHostDao.findByGuid(guid);
|
||||
}
|
||||
@Override
|
||||
@DB
|
||||
public Map<AgentResourceBase, Map<String, String>> createServerResources(Map<String, Object> params) {
|
||||
|
||||
@Override
|
||||
public GetHostStatsAnswer getHostStatistic(GetHostStatsCommand cmd) {
|
||||
String hostGuid = cmd.getHostGuid();
|
||||
MockHost host = _mockHostDao.findByGuid(hostGuid);
|
||||
if (host == null) {
|
||||
return null;
|
||||
}
|
||||
List<MockVMVO> vms = _mockVmDao.findByHostId(host.getId());
|
||||
double usedMem = 0.0;
|
||||
double usedCpu = 0.0;
|
||||
for (MockVMVO vm : vms) {
|
||||
usedMem += vm.getMemory();
|
||||
usedCpu += vm.getCpu();
|
||||
}
|
||||
|
||||
HostStatsEntry hostStats = new HostStatsEntry();
|
||||
hostStats.setTotalMemoryKBs(host.getMemorySize());
|
||||
hostStats.setFreeMemoryKBs(host.getMemorySize() - usedMem);
|
||||
hostStats.setNetworkReadKBs(32768);
|
||||
hostStats.setNetworkWriteKBs(16384);
|
||||
hostStats.setCpuUtilization(usedCpu/(host.getCpuCount() * host.getCpuSpeed()));
|
||||
hostStats.setEntityType("simulator-host");
|
||||
hostStats.setHostId(cmd.getHostId());
|
||||
return new GetHostStatsAnswer(cmd, hostStats);
|
||||
}
|
||||
Map<String, String> args = new HashMap<String, String>();
|
||||
Map<AgentResourceBase, Map<String, String>> newResources = new HashMap<AgentResourceBase, Map<String, String>>();
|
||||
AgentResourceBase agentResource;
|
||||
long cpuCore = Long.parseLong((String) params.get("cpucore"));
|
||||
long cpuSpeed = Long.parseLong((String) params.get("cpuspeed"));
|
||||
long memory = Long.parseLong((String) params.get("memory"));
|
||||
long localStorageSize = Long.parseLong((String) params.get("localstorage"));
|
||||
synchronized (this) {
|
||||
long dataCenterId = Long.parseLong((String) params.get("zone"));
|
||||
long podId = Long.parseLong((String) params.get("pod"));
|
||||
long clusterId = Long.parseLong((String) params.get("cluster"));
|
||||
long cidrSize = getPodCidr(podId, dataCenterId).second();
|
||||
|
||||
int agentId = getNextAgentId(cidrSize);
|
||||
String ipAddress = getIpAddress(agentId, dataCenterId, podId);
|
||||
String macAddress = getMacAddress(dataCenterId, podId, clusterId, agentId);
|
||||
MockHostVO mockHost = new MockHostVO();
|
||||
mockHost.setDataCenterId(dataCenterId);
|
||||
mockHost.setPodId(podId);
|
||||
mockHost.setClusterId(clusterId);
|
||||
mockHost.setCapabilities("hvm");
|
||||
mockHost.setCpuCount(cpuCore);
|
||||
mockHost.setCpuSpeed(cpuSpeed);
|
||||
mockHost.setMemorySize(memory);
|
||||
String guid = UUID.randomUUID().toString();
|
||||
mockHost.setGuid(guid);
|
||||
mockHost.setName("SimulatedAgent." + guid);
|
||||
mockHost.setPrivateIpAddress(ipAddress);
|
||||
mockHost.setPublicIpAddress(ipAddress);
|
||||
mockHost.setStorageIpAddress(ipAddress);
|
||||
mockHost.setPrivateMacAddress(macAddress);
|
||||
mockHost.setPublicMacAddress(macAddress);
|
||||
mockHost.setStorageMacAddress(macAddress);
|
||||
mockHost.setVersion(this.getClass().getPackage().getImplementationVersion());
|
||||
mockHost.setResource("com.cloud.agent.AgentRoutingResource");
|
||||
|
||||
@Override
|
||||
public Answer checkHealth(CheckHealthCommand cmd) {
|
||||
return new Answer(cmd);
|
||||
}
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
mockHost = _mockHostDao.persist(mockHost);
|
||||
txn.commit();
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
s_logger.error("Error while configuring mock agent " + ex.getMessage());
|
||||
throw new CloudRuntimeException("Error configuring agent", ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
|
||||
_storageMgr.getLocalStorage(guid, localStorageSize);
|
||||
|
||||
@Override
|
||||
public Answer pingTest(PingTestCommand cmd) {
|
||||
return new Answer(cmd);
|
||||
}
|
||||
agentResource = new AgentRoutingResource();
|
||||
if (agentResource != null) {
|
||||
try {
|
||||
params.put("guid", mockHost.getGuid());
|
||||
agentResource.start();
|
||||
agentResource.configure(mockHost.getName(), params);
|
||||
|
||||
newResources.put(agentResource, args);
|
||||
} catch (ConfigurationException e) {
|
||||
s_logger.error("error while configuring server resource" + e.getMessage());
|
||||
}
|
||||
}
|
||||
}
|
||||
return newResources;
|
||||
}
|
||||
|
||||
@Override
|
||||
public PrepareForMigrationAnswer prepareForMigrate(PrepareForMigrationCommand cmd) {
|
||||
VirtualMachineTO vm = cmd.getVirtualMachine();
|
||||
if (s_logger.isDebugEnabled()) {
|
||||
s_logger.debug("Preparing host for migrating " + vm);
|
||||
}
|
||||
return new PrepareForMigrationAnswer(cmd);
|
||||
}
|
||||
@Override
|
||||
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
|
||||
try {
|
||||
random = SecureRandom.getInstance("SHA1PRNG");
|
||||
_executor = new ThreadPoolExecutor(1, 5, 1, TimeUnit.DAYS, new LinkedBlockingQueue<Runnable>(),
|
||||
new NamedThreadFactory("Simulator-Agent-Mgr"));
|
||||
// ComponentLocator locator = ComponentLocator.getCurrentLocator();
|
||||
// _simulatorMgr = (SimulatorManager)
|
||||
// locator.getComponent(SimulatorManager.Name);
|
||||
} catch (NoSuchAlgorithmException e) {
|
||||
s_logger.debug("Failed to initialize random:" + e.toString());
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean handleSystemVMStart(long vmId, String privateIpAddress, String privateMacAddress,
|
||||
String privateNetMask, long dcId, long podId, String name, String vmType, String url) {
|
||||
_executor.execute(new SystemVMHandler(vmId, privateIpAddress, privateMacAddress, privateNetMask, dcId, podId,
|
||||
name, vmType, _simulatorMgr, url));
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean start() {
|
||||
return true;
|
||||
}
|
||||
@Override
|
||||
public boolean handleSystemVMStop(long vmId) {
|
||||
_executor.execute(new SystemVMHandler(vmId));
|
||||
return true;
|
||||
}
|
||||
|
||||
private class SystemVMHandler implements Runnable {
|
||||
private long vmId;
|
||||
private String privateIpAddress;
|
||||
private String privateMacAddress;
|
||||
private String privateNetMask;
|
||||
private long dcId;
|
||||
private long podId;
|
||||
private String guid;
|
||||
private String name;
|
||||
private String vmType;
|
||||
private SimulatorManager mgr;
|
||||
private String mode;
|
||||
private String url;
|
||||
|
||||
@Override
|
||||
public boolean stop() {
|
||||
return true;
|
||||
}
|
||||
public SystemVMHandler(long vmId, String privateIpAddress, String privateMacAddress, String privateNetMask,
|
||||
long dcId, long podId, String name, String vmType, SimulatorManager mgr, String url) {
|
||||
this.vmId = vmId;
|
||||
this.privateIpAddress = privateIpAddress;
|
||||
this.privateMacAddress = privateMacAddress;
|
||||
this.privateNetMask = privateNetMask;
|
||||
this.dcId = dcId;
|
||||
this.guid = "SystemVM-" + UUID.randomUUID().toString();
|
||||
this.name = name;
|
||||
this.vmType = vmType;
|
||||
this.mgr = mgr;
|
||||
this.mode = "Start";
|
||||
this.url = url;
|
||||
this.podId = podId;
|
||||
}
|
||||
|
||||
public SystemVMHandler(long vmId) {
|
||||
this.vmId = vmId;
|
||||
this.mode = "Stop";
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getName() {
|
||||
return this.getClass().getSimpleName();
|
||||
}
|
||||
@Override
|
||||
@DB
|
||||
public void run() {
|
||||
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
if (this.mode.equalsIgnoreCase("Stop")) {
|
||||
txn.start();
|
||||
MockHost host = _mockHostDao.findByVmId(this.vmId);
|
||||
if (host != null) {
|
||||
String guid = host.getGuid();
|
||||
if (guid != null) {
|
||||
AgentResourceBase res = _resources.get(guid);
|
||||
if (res != null) {
|
||||
res.stop();
|
||||
_resources.remove(guid);
|
||||
}
|
||||
}
|
||||
}
|
||||
txn.commit();
|
||||
return;
|
||||
}
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("Unable to get host " + guid + " due to " + ex.getMessage(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
|
||||
@Override
|
||||
public MaintainAnswer maintain(com.cloud.agent.api.MaintainCommand cmd) {
|
||||
return new MaintainAnswer(cmd);
|
||||
}
|
||||
String resource = null;
|
||||
if (vmType.equalsIgnoreCase("secstorage")) {
|
||||
resource = "com.cloud.agent.AgentStorageResource";
|
||||
}
|
||||
MockHostVO mockHost = new MockHostVO();
|
||||
mockHost.setDataCenterId(this.dcId);
|
||||
mockHost.setPodId(this.podId);
|
||||
mockHost.setCpuCount(DEFAULT_HOST_CPU_CORES);
|
||||
mockHost.setCpuSpeed(DEFAULT_HOST_SPEED_MHZ);
|
||||
mockHost.setMemorySize(DEFAULT_HOST_MEM_SIZE);
|
||||
mockHost.setGuid(this.guid);
|
||||
mockHost.setName(name);
|
||||
mockHost.setPrivateIpAddress(this.privateIpAddress);
|
||||
mockHost.setPublicIpAddress(this.privateIpAddress);
|
||||
mockHost.setStorageIpAddress(this.privateIpAddress);
|
||||
mockHost.setPrivateMacAddress(this.privateMacAddress);
|
||||
mockHost.setPublicMacAddress(this.privateMacAddress);
|
||||
mockHost.setStorageMacAddress(this.privateMacAddress);
|
||||
mockHost.setVersion(this.getClass().getPackage().getImplementationVersion());
|
||||
mockHost.setResource(resource);
|
||||
mockHost.setVmId(vmId);
|
||||
Transaction simtxn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
simtxn.start();
|
||||
mockHost = _mockHostDao.persist(mockHost);
|
||||
simtxn.commit();
|
||||
} catch (Exception ex) {
|
||||
simtxn.rollback();
|
||||
throw new CloudRuntimeException("Unable to persist host " + mockHost.getGuid() + " due to "
|
||||
+ ex.getMessage(), ex);
|
||||
} finally {
|
||||
simtxn.close();
|
||||
simtxn = Transaction.open(Transaction.CLOUD_DB);
|
||||
simtxn.close();
|
||||
}
|
||||
|
||||
if (vmType.equalsIgnoreCase("secstorage")) {
|
||||
AgentStorageResource storageResource = new AgentStorageResource();
|
||||
try {
|
||||
Map<String, Object> params = new HashMap<String, Object>();
|
||||
Map<String, String> details = new HashMap<String, String>();
|
||||
params.put("guid", this.guid);
|
||||
details.put("guid", this.guid);
|
||||
storageResource.configure("secondaryStorage", params);
|
||||
storageResource.start();
|
||||
// on the simulator the ssvm is as good as a direct
|
||||
// agent
|
||||
_resourceMgr.addHost(mockHost.getDataCenterId(), storageResource, Host.Type.SecondaryStorageVM,
|
||||
details);
|
||||
_resources.put(this.guid, storageResource);
|
||||
} catch (ConfigurationException e) {
|
||||
s_logger.debug("Failed to load secondary storage resource: " + e.toString());
|
||||
return;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public MockHost getHost(String guid) {
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
MockHost _host = _mockHostDao.findByGuid(guid);
|
||||
txn.commit();
|
||||
if (_host != null) {
|
||||
return _host;
|
||||
} else {
|
||||
s_logger.error("Host with guid " + guid + " was not found");
|
||||
return null;
|
||||
}
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("Unable to get host " + guid + " due to " + ex.getMessage(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public GetHostStatsAnswer getHostStatistic(GetHostStatsCommand cmd) {
|
||||
String hostGuid = cmd.getHostGuid();
|
||||
MockHost host = null;
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
host = _mockHostDao.findByGuid(hostGuid);
|
||||
txn.commit();
|
||||
if (host == null) {
|
||||
return null;
|
||||
}
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("Unable to get host " + hostGuid + " due to " + ex.getMessage(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
|
||||
Transaction vmtxn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
vmtxn.start();
|
||||
List<MockVMVO> vms = _mockVmDao.findByHostId(host.getId());
|
||||
vmtxn.commit();
|
||||
double usedMem = 0.0;
|
||||
double usedCpu = 0.0;
|
||||
for (MockVMVO vm : vms) {
|
||||
usedMem += vm.getMemory();
|
||||
usedCpu += vm.getCpu();
|
||||
}
|
||||
|
||||
HostStatsEntry hostStats = new HostStatsEntry();
|
||||
hostStats.setTotalMemoryKBs(host.getMemorySize());
|
||||
hostStats.setFreeMemoryKBs(host.getMemorySize() - usedMem);
|
||||
hostStats.setNetworkReadKBs(32768);
|
||||
hostStats.setNetworkWriteKBs(16384);
|
||||
hostStats.setCpuUtilization(usedCpu / (host.getCpuCount() * host.getCpuSpeed()));
|
||||
hostStats.setEntityType("simulator-host");
|
||||
hostStats.setHostId(cmd.getHostId());
|
||||
return new GetHostStatsAnswer(cmd, hostStats);
|
||||
} catch (Exception ex) {
|
||||
vmtxn.rollback();
|
||||
throw new CloudRuntimeException("Unable to get Vms on host " + host.getGuid() + " due to "
|
||||
+ ex.getMessage(), ex);
|
||||
} finally {
|
||||
vmtxn.close();
|
||||
vmtxn = Transaction.open(Transaction.CLOUD_DB);
|
||||
vmtxn.close();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer checkHealth(CheckHealthCommand cmd) {
|
||||
return new Answer(cmd);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer pingTest(PingTestCommand cmd) {
|
||||
return new Answer(cmd);
|
||||
}
|
||||
|
||||
@Override
|
||||
public PrepareForMigrationAnswer prepareForMigrate(PrepareForMigrationCommand cmd) {
|
||||
VirtualMachineTO vm = cmd.getVirtualMachine();
|
||||
if (s_logger.isDebugEnabled()) {
|
||||
s_logger.debug("Preparing host for migrating " + vm);
|
||||
}
|
||||
return new PrepareForMigrationAnswer(cmd);
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean start() {
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean stop() {
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getName() {
|
||||
return this.getClass().getSimpleName();
|
||||
}
|
||||
|
||||
@Override
|
||||
public MaintainAnswer maintain(com.cloud.agent.api.MaintainCommand cmd) {
|
||||
return new MaintainAnswer(cmd);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer checkNetworkCommand(CheckNetworkCommand cmd) {
|
||||
if (s_logger.isDebugEnabled()) {
|
||||
s_logger.debug("Checking if network name setup is done on the resource");
|
||||
}
|
||||
return new CheckNetworkAnswer(cmd, true , "Network Setup check by names is done");
|
||||
s_logger.debug("Checking if network name setup is done on the resource");
|
||||
}
|
||||
return new CheckNetworkAnswer(cmd, true, "Network Setup check by names is done");
|
||||
}
|
||||
}
|
||||
|
|
|
|||
File diff suppressed because it is too large
Load Diff
|
|
@ -75,6 +75,8 @@ import com.cloud.simulator.dao.MockVMDao;
|
|||
import com.cloud.utils.Pair;
|
||||
import com.cloud.utils.Ternary;
|
||||
import com.cloud.utils.component.Inject;
|
||||
import com.cloud.utils.db.Transaction;
|
||||
import com.cloud.utils.exception.CloudRuntimeException;
|
||||
import com.cloud.vm.VirtualMachine.State;
|
||||
|
||||
@Local(value = { MockVmManager.class })
|
||||
|
|
@ -85,7 +87,7 @@ public class MockVmManagerImpl implements MockVmManager {
|
|||
@Inject MockAgentManager _mockAgentMgr = null;
|
||||
@Inject MockHostDao _mockHostDao = null;
|
||||
@Inject MockSecurityRulesDao _mockSecurityDao = null;
|
||||
private Map<String, Map<String, Ternary<String,Long,Long>>> _securityRules = new ConcurrentHashMap<String, Map<String, Ternary<String, Long, Long>>>();
|
||||
private Map<String, Map<String, Ternary<String, Long, Long>>> _securityRules = new ConcurrentHashMap<String, Map<String, Ternary<String, Long, Long>>>();
|
||||
|
||||
public MockVmManagerImpl() {
|
||||
}
|
||||
|
|
@ -101,12 +103,27 @@ public class MockVmManagerImpl implements MockVmManager {
|
|||
int cpuHz, long ramSize,
|
||||
String bootArgs, String hostGuid) {
|
||||
|
||||
MockHost host = _mockHostDao.findByGuid(hostGuid);
|
||||
if (host == null) {
|
||||
return "can't find host";
|
||||
}
|
||||
|
||||
MockVm vm = _mockVmDao.findByVmName(vmName);
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
MockHost host = null;
|
||||
MockVm vm = null;
|
||||
try {
|
||||
txn.start();
|
||||
host = _mockHostDao.findByGuid(hostGuid);
|
||||
if (host == null) {
|
||||
return "can't find host";
|
||||
}
|
||||
|
||||
vm = _mockVmDao.findByVmName(vmName);
|
||||
txn.commit();
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("Unable to start VM " + vmName, ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
|
||||
if(vm == null) {
|
||||
int vncPort = 0;
|
||||
if(vncPort < 0)
|
||||
|
|
@ -127,11 +144,35 @@ public class MockVmManagerImpl implements MockVmManager {
|
|||
} else if (vmName.startsWith("i-")) {
|
||||
vm.setType("User");
|
||||
}
|
||||
vm = _mockVmDao.persist((MockVMVO)vm);
|
||||
txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
vm = _mockVmDao.persist((MockVMVO) vm);
|
||||
txn.commit();
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to save vm to db " + vm.getName(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
} else {
|
||||
if(vm.getState() == State.Stopped) {
|
||||
vm.setState(State.Running);
|
||||
_mockVmDao.update(vm.getId(), (MockVMVO)vm);
|
||||
txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
_mockVmDao.update(vm.getId(), (MockVMVO)vm);
|
||||
txn.commit();
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to update vm " + vm.getName(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -179,37 +220,73 @@ public class MockVmManagerImpl implements MockVmManager {
|
|||
}
|
||||
|
||||
public boolean rebootVM(String vmName) {
|
||||
MockVm vm = _mockVmDao.findByVmName(vmName);
|
||||
if(vm != null) {
|
||||
vm.setState(State.Running);
|
||||
_mockVmDao.update(vm.getId(), (MockVMVO)vm);
|
||||
}
|
||||
return true;
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
MockVm vm = _mockVmDao.findByVmName(vmName);
|
||||
if (vm != null) {
|
||||
vm.setState(State.Running);
|
||||
_mockVmDao.update(vm.getId(), (MockVMVO) vm);
|
||||
|
||||
}
|
||||
txn.commit();
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to reboot vm " + vmName, ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, MockVMVO> getVms(String hostGuid) {
|
||||
List<MockVMVO> vms = _mockVmDao.findByHostGuid(hostGuid);
|
||||
Map<String, MockVMVO> vmMap = new HashMap<String, MockVMVO>();
|
||||
for (MockVMVO vm : vms) {
|
||||
vmMap.put(vm.getName(), vm);
|
||||
public Map<String, MockVMVO> getVms(String hostGuid) {
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
List<MockVMVO> vms = _mockVmDao.findByHostGuid(hostGuid);
|
||||
Map<String, MockVMVO> vmMap = new HashMap<String, MockVMVO>();
|
||||
for (MockVMVO vm : vms) {
|
||||
vmMap.put(vm.getName(), vm);
|
||||
}
|
||||
txn.commit();
|
||||
return vmMap;
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to fetch vms from host " + hostGuid, ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
return vmMap;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, State> getVmStates(String hostGuid) {
|
||||
Map<String, State> states = new HashMap<String, State>();
|
||||
List<MockVMVO> vms = _mockVmDao.findByHostGuid(hostGuid);
|
||||
if (vms.isEmpty()) {
|
||||
return states;
|
||||
public Map<String, State> getVmStates(String hostGuid) {
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
Map<String, State> states = new HashMap<String, State>();
|
||||
List<MockVMVO> vms = _mockVmDao.findByHostGuid(hostGuid);
|
||||
if (vms.isEmpty()) {
|
||||
txn.commit();
|
||||
return states;
|
||||
}
|
||||
for (MockVm vm : vms) {
|
||||
states.put(vm.getName(), vm.getState());
|
||||
}
|
||||
txn.commit();
|
||||
return states;
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to fetch vms from host " + hostGuid, ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
|
||||
for(MockVm vm : vms) {
|
||||
states.put(vm.getName(), vm.getState());
|
||||
}
|
||||
|
||||
return states;
|
||||
}
|
||||
|
||||
@Override
|
||||
|
|
@ -243,14 +320,26 @@ public class MockVmManagerImpl implements MockVmManager {
|
|||
}
|
||||
|
||||
@Override
|
||||
public CheckVirtualMachineAnswer checkVmState(CheckVirtualMachineCommand cmd) {
|
||||
MockVMVO vm = _mockVmDao.findByVmName(cmd.getVmName());
|
||||
if (vm == null) {
|
||||
return new CheckVirtualMachineAnswer(cmd, "can't find vm:" + cmd.getVmName());
|
||||
}
|
||||
|
||||
return new CheckVirtualMachineAnswer(cmd, vm.getState(), vm.getVncPort());
|
||||
}
|
||||
public CheckVirtualMachineAnswer checkVmState(CheckVirtualMachineCommand cmd) {
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
MockVMVO vm = _mockVmDao.findByVmName(cmd.getVmName());
|
||||
if (vm == null) {
|
||||
return new CheckVirtualMachineAnswer(cmd, "can't find vm:" + cmd.getVmName());
|
||||
}
|
||||
|
||||
txn.commit();
|
||||
return new CheckVirtualMachineAnswer(cmd, vm.getState(), vm.getVncPort());
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to fetch vm state " + cmd.getVmName(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer startVM(StartCommand cmd, SimulatorInfo info) {
|
||||
|
|
@ -290,22 +379,34 @@ public class MockVmManagerImpl implements MockVmManager {
|
|||
}
|
||||
|
||||
@Override
|
||||
public MigrateAnswer Migrate(MigrateCommand cmd, SimulatorInfo info) {
|
||||
String vmName = cmd.getVmName();
|
||||
String destGuid = cmd.getHostGuid();
|
||||
MockVMVO vm = _mockVmDao.findByVmNameAndHost(vmName, info.getHostUuid());
|
||||
if (vm == null) {
|
||||
return new MigrateAnswer(cmd, false, "can;t find vm:" + vmName + " on host:" + info.getHostUuid(), null);
|
||||
}
|
||||
|
||||
MockHost destHost = _mockHostDao.findByGuid(destGuid);
|
||||
if (destHost == null) {
|
||||
return new MigrateAnswer(cmd, false, "can;t find host:" + info.getHostUuid(), null);
|
||||
}
|
||||
vm.setHostId(destHost.getId());
|
||||
_mockVmDao.update(vm.getId(), vm);
|
||||
return new MigrateAnswer(cmd, true,null, 0);
|
||||
}
|
||||
public MigrateAnswer Migrate(MigrateCommand cmd, SimulatorInfo info) {
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
String vmName = cmd.getVmName();
|
||||
String destGuid = cmd.getHostGuid();
|
||||
MockVMVO vm = _mockVmDao.findByVmNameAndHost(vmName, info.getHostUuid());
|
||||
if (vm == null) {
|
||||
return new MigrateAnswer(cmd, false, "can;t find vm:" + vmName + " on host:" + info.getHostUuid(), null);
|
||||
}
|
||||
|
||||
MockHost destHost = _mockHostDao.findByGuid(destGuid);
|
||||
if (destHost == null) {
|
||||
return new MigrateAnswer(cmd, false, "can;t find host:" + info.getHostUuid(), null);
|
||||
}
|
||||
vm.setHostId(destHost.getId());
|
||||
_mockVmDao.update(vm.getId(), vm);
|
||||
txn.commit();
|
||||
return new MigrateAnswer(cmd, true, null, 0);
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to migrate vm " + cmd.getVmName(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer IpAssoc(IpAssocCommand cmd) {
|
||||
|
|
@ -328,37 +429,77 @@ public class MockVmManagerImpl implements MockVmManager {
|
|||
}
|
||||
|
||||
@Override
|
||||
public Answer CleanupNetworkRules(CleanupNetworkRulesCmd cmd, SimulatorInfo info) {
|
||||
List<MockSecurityRulesVO> rules = _mockSecurityDao.findByHost(info.getHostUuid());
|
||||
for (MockSecurityRulesVO rule : rules) {
|
||||
MockVMVO vm = _mockVmDao.findByVmNameAndHost(rule.getVmName(), info.getHostUuid());
|
||||
if (vm == null) {
|
||||
_mockSecurityDao.remove(rule.getId());
|
||||
}
|
||||
}
|
||||
return new Answer(cmd);
|
||||
}
|
||||
public Answer CleanupNetworkRules(CleanupNetworkRulesCmd cmd, SimulatorInfo info) {
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
List<MockSecurityRulesVO> rules = _mockSecurityDao.findByHost(info.getHostUuid());
|
||||
for (MockSecurityRulesVO rule : rules) {
|
||||
MockVMVO vm = _mockVmDao.findByVmNameAndHost(rule.getVmName(), info.getHostUuid());
|
||||
if (vm == null) {
|
||||
_mockSecurityDao.remove(rule.getId());
|
||||
}
|
||||
}
|
||||
txn.commit();
|
||||
return new Answer(cmd);
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to clean up rules", ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer stopVM(StopCommand cmd) {
|
||||
String vmName = cmd.getVmName();
|
||||
MockVm vm = _mockVmDao.findByVmName(vmName);
|
||||
if(vm != null) {
|
||||
vm.setState(State.Stopped);
|
||||
_mockVmDao.update(vm.getId(), (MockVMVO)vm);
|
||||
}
|
||||
public Answer stopVM(StopCommand cmd) {
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
String vmName = cmd.getVmName();
|
||||
MockVm vm = _mockVmDao.findByVmName(vmName);
|
||||
if (vm != null) {
|
||||
vm.setState(State.Stopped);
|
||||
_mockVmDao.update(vm.getId(), (MockVMVO) vm);
|
||||
}
|
||||
|
||||
if (vmName.startsWith("s-")) {
|
||||
_mockAgentMgr.handleSystemVMStop(vm.getId());
|
||||
}
|
||||
|
||||
return new StopAnswer(cmd, null, new Integer(0), true);
|
||||
}
|
||||
if (vmName.startsWith("s-")) {
|
||||
_mockAgentMgr.handleSystemVMStop(vm.getId());
|
||||
}
|
||||
txn.commit();
|
||||
return new StopAnswer(cmd, null, new Integer(0), true);
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to stop vm " + cmd.getVmName(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer rebootVM(RebootCommand cmd) {
|
||||
return new RebootAnswer(cmd, "Rebooted "+cmd.getVmName(), false);
|
||||
}
|
||||
public Answer rebootVM(RebootCommand cmd) {
|
||||
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
|
||||
try {
|
||||
txn.start();
|
||||
MockVm vm = _mockVmDao.findByVmName(cmd.getVmName());
|
||||
if (vm != null) {
|
||||
vm.setState(State.Running);
|
||||
_mockVmDao.update(vm.getId(), (MockVMVO) vm);
|
||||
}
|
||||
txn.commit();
|
||||
return new RebootAnswer(cmd, "Rebooted " + cmd.getVmName(), true);
|
||||
} catch (Exception ex) {
|
||||
txn.rollback();
|
||||
throw new CloudRuntimeException("unable to stop vm " + cmd.getVmName(), ex);
|
||||
} finally {
|
||||
txn.close();
|
||||
txn = Transaction.open(Transaction.CLOUD_DB);
|
||||
txn.close();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public Answer getVncPort(GetVncPortCommand cmd) {
|
||||
|
|
|
|||
|
|
@@ -34,6 +34,7 @@ import com.cloud.agent.api.CheckHealthCommand;
import com.cloud.agent.api.CheckNetworkCommand;
import com.cloud.agent.api.CheckVirtualMachineCommand;
import com.cloud.agent.api.CleanupNetworkRulesCmd;
import com.cloud.agent.api.ClusterSyncAnswer;
import com.cloud.agent.api.ClusterSyncCommand;
import com.cloud.agent.api.Command;
import com.cloud.agent.api.ComputeChecksumCommand;

@@ -111,11 +112,11 @@ public class SimulatorManagerImpl implements SimulatorManager {
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
/*
try {
Connection conn = Transaction.getStandaloneConnectionWithException();
Connection conn = Transaction.getStandaloneSimulatorConnection();
conn.setAutoCommit(true);
_concierge = new ConnectionConcierge("SimulatorConnection", conn, true);
} catch (SQLException e) {
throw new CloudRuntimeException("Unable to get a db connection", e);
throw new CloudRuntimeException("Unable to get a db connection to simulator", e);
}
*/
return true;

@@ -295,17 +296,20 @@ public class SimulatorManagerImpl implements SimulatorManager {
return _mockVmMgr.getDomRVersion((GetDomRVersionCmd)cmd);
} else if (cmd instanceof ClusterSyncCommand) {
return new Answer(cmd);
//return new ClusterSyncAnswer(((ClusterSyncCommand) cmd).getClusterId(), this.getVmStates(hostGuid));
} else if (cmd instanceof CopyVolumeCommand) {
return _mockStorageMgr.CopyVolume((CopyVolumeCommand)cmd);
} else {
return Answer.createUnsupportedCommandAnswer(cmd);
}
} catch(Exception e) {
s_logger.debug("Failed execute cmd: " + e.toString());
s_logger.error("Failed execute cmd: " + e.toString());
txn.rollback();
return new Answer(cmd, false, e.toString());
} finally {
txn.transitToAutoManagedConnection(Transaction.CLOUD_DB);
txn.close();
txn = Transaction.open(Transaction.CLOUD_DB);
txn.close();
}
}

@@ -315,53 +319,50 @@ public class SimulatorManagerImpl implements SimulatorManager {
}

@Override
public boolean configureSimulator(Long zoneId, Long podId, Long clusterId, Long hostId, String command, String values) {
MockConfigurationVO config = _mockConfigDao.findByCommand(zoneId, podId, clusterId, hostId, command);
if (config == null) {
config = new MockConfigurationVO();
config.setClusterId(clusterId);
config.setDataCenterId(zoneId);
config.setPodId(podId);
config.setHostId(hostId);
config.setName(command);
config.setValues(values);
_mockConfigDao.persist(config);
} else {
config.setValues(values);
_mockConfigDao.update(config.getId(), config);
}
return true;
}

@Override
@DB
public Map<String, State> getVmStates(String hostGuid) {
Transaction txn = Transaction.currentTxn();
txn.transitToUserManagedConnection(_concierge.conn());
try {
return _mockVmMgr.getVmStates(hostGuid);
} finally {
txn.transitToAutoManagedConnection(Transaction.CLOUD_DB);
}
return _mockVmMgr.getVmStates(hostGuid);
}

@Override
@DB
public Map<String, MockVMVO> getVms(String hostGuid) {
Transaction txn = Transaction.currentTxn();
txn.transitToUserManagedConnection(_concierge.conn());
try {
return _mockVmMgr.getVms(hostGuid);
} finally {
txn.transitToAutoManagedConnection(Transaction.CLOUD_DB);
}
return _mockVmMgr.getVms(hostGuid);
}

@Override
public HashMap<String, Pair<Long, Long>> syncNetworkGroups(String hostGuid) {
SimulatorInfo info = new SimulatorInfo();
info.setHostUuid(hostGuid);
return _mockVmMgr.syncNetworkGroups(info);
return _mockVmMgr.syncNetworkGroups(info);
}

@Override
public boolean configureSimulator(Long zoneId, Long podId, Long clusterId, Long hostId, String command,
String values) {
Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
try {
txn.start();
MockConfigurationVO config = _mockConfigDao.findByCommand(zoneId, podId, clusterId, hostId, command);
if (config == null) {
config = new MockConfigurationVO();
config.setClusterId(clusterId);
config.setDataCenterId(zoneId);
config.setPodId(podId);
config.setHostId(hostId);
config.setName(command);
config.setValues(values);
_mockConfigDao.persist(config);
txn.commit();
} else {
config.setValues(values);
_mockConfigDao.update(config.getId(), config);
txn.commit();
}
} catch (Exception ex) {
txn.rollback();
throw new CloudRuntimeException("Unable to configure simulator because of " + ex.getMessage(), ex);
} finally {
txn.close();
}
return true;
}
}
@@ -0,0 +1,24 @@
package com.cloud.simulator;

import com.cloud.utils.SerialVersionUID;
import com.cloud.utils.exception.RuntimeCloudException;

/**
 * wrap exceptions that you know there's no point in dealing with.
 */
public class SimulatorRuntimeException extends RuntimeCloudException {

private static final long serialVersionUID = SerialVersionUID.CloudRuntimeException;

public SimulatorRuntimeException(String message) {
super(message);
}

public SimulatorRuntimeException(String message, Throwable th) {
super(message, th);
}

protected SimulatorRuntimeException() {
super();
}
}
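The new SimulatorRuntimeException is introduced here but not yet referenced elsewhere in this change set. As a rough illustration only (the class name, call site, and helper method below are hypothetical, and it assumes RuntimeCloudException is an unchecked exception), a caller in the simulator code would typically wrap a low-level failure like this:

// Hypothetical call site, not part of this commit.
import java.sql.SQLException;

import com.cloud.simulator.SimulatorRuntimeException;

public class SimulatorExceptionUsageSketch {
    public void markVmRunning(String vmName) {
        try {
            updateMockVmState(vmName); // stand-in for a real DAO call against the simulator DB
        } catch (SQLException e) {
            // nothing useful can be done here, so wrap and propagate
            throw new SimulatorRuntimeException("unable to update mock VM " + vmName, e);
        }
    }

    private void updateMockVmState(String vmName) throws SQLException {
        // placeholder body
    }
}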
@@ -67,7 +67,7 @@ export CLASSPATH="$SCP:$DCP:$ACP:$JCP:@AGENTSYSCONFDIR@:@AGENTLIBDIR@"
start() {
echo -n $"Starting $PROGNAME: "
if hostname --fqdn >/dev/null 2>&1 ; then
$JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" $CLASS
$JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" -errfile SYSLOG $CLASS
RETVAL=$?
echo
else

@@ -67,7 +67,7 @@ export CLASSPATH="$SCP:$DCP:$ACP:$JCP:@AGENTSYSCONFDIR@:@AGENTLIBDIR@"
start() {
echo -n $"Starting $PROGNAME: "
if hostname --fqdn >/dev/null 2>&1 ; then
$JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" $CLASS
$JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" -errfile SYSLOG $CLASS
RETVAL=$?
echo
else

@@ -97,7 +97,7 @@ start() {

wait_for_network

if jsvc -cp "$CLASSPATH" -pidfile "$PIDFILE" $CLASS
if jsvc -cp "$CLASSPATH" -pidfile "$PIDFILE" -errfile SYSLOG $CLASS
RETVAL=$?
then
rc=0

@@ -67,7 +67,7 @@ export CLASSPATH="$SCP:$DCP:$ACP:$JCP:@AGENTSYSCONFDIR@:@AGENTLIBDIR@"
start() {
echo -n $"Starting $PROGNAME: "
if hostname --fqdn >/dev/null 2>&1 ; then
$JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" $CLASS
$JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" -errfile SYSLOG $CLASS
RETVAL=$?
echo
else

@@ -97,7 +97,7 @@ start() {

wait_for_network

if jsvc -cp "$CLASSPATH" -pidfile "$PIDFILE" $CLASS
if jsvc -cp "$CLASSPATH" -pidfile "$PIDFILE" -errfile SYSLOG $CLASS
RETVAL=$?
then
rc=0

@@ -99,7 +99,7 @@ start() {

wait_for_network

if start_daemon -p $PIDFILE $DAEMON -cp "$CLASSPATH" -pidfile "$PIDFILE" $CLASS
if start_daemon -p $PIDFILE $DAEMON -cp "$CLASSPATH" -pidfile "$PIDFILE" -errfile SYSLOG $CLASS
RETVAL=$?
then
rc=0

@@ -170,4 +170,4 @@ case "$1" in
RETVAL=3
esac

exit $RETVAL
exit $RETVAL

@@ -93,7 +93,7 @@
<dependency>
<groupId>org.apache.rampart</groupId>
<artifactId>rahas</artifactId>
<version>1.5</version>
<version>${cs.rampart.version}</version>
<type>mar</type>
<exclusions>
<exclusion>

@@ -105,7 +105,7 @@
<dependency>
<groupId>org.apache.rampart</groupId>
<artifactId>rampart</artifactId>
<version>1.5</version>
<version>${cs.rampart.version}</version>
<type>mar</type>
<exclusions>
<exclusion>

@@ -117,19 +117,19 @@
<dependency>
<groupId>org.apache.rampart</groupId>
<artifactId>rampart-core</artifactId>
<version>1.5</version>
<version>${cs.rampart.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.apache.rampart</groupId>
<artifactId>rampart-policy</artifactId>
<version>1.5</version>
<version>${cs.rampart.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.apache.rampart</groupId>
<artifactId>rampart-trust</artifactId>
<version>1.5</version>
<version>${cs.rampart.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
@@ -69,3 +69,15 @@ db.usage.autoReconnect=true

# awsapi database settings
db.awsapi.name=cloudbridge

# Simulator database settings
db.simulator.username=@DBUSER@
db.simulator.password=@DBPW@
db.simulator.host=@DBHOST@
db.simulator.port=3306
db.simulator.name=simulator
db.simulator.maxActive=250
db.simulator.maxIdle=30
db.simulator.maxWait=10000
db.simulator.autoReconnect=true
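These db.simulator.* keys follow the same pattern as the db.cloud.* and db.usage.* settings and are consumed by the Transaction initializer shown further down in this change. As a minimal, self-contained sketch (not the CloudStack loader itself; the class name and hard-coded file path are illustrative), the values can be read with plain java.util.Properties:

// Illustrative reader for the simulator DB settings; assumes a db.properties
// file in the working directory containing the keys shown above.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class SimulatorDbPropsSketch {
    public static void main(String[] args) throws IOException {
        Properties dbProps = new Properties();
        try (FileInputStream in = new FileInputStream("db.properties")) {
            dbProps.load(in);
        }
        String host = dbProps.getProperty("db.simulator.host");
        int port = Integer.parseInt(dbProps.getProperty("db.simulator.port", "3306"));
        String name = dbProps.getProperty("db.simulator.name", "simulator");
        int maxActive = Integer.parseInt(dbProps.getProperty("db.simulator.maxActive", "250"));
        // the JDBC URL printed here mirrors the one assembled in Transaction's static block
        System.out.println("jdbc:mysql://" + host + ":" + port + "/" + name + " (maxActive=" + maxActive + ")");
    }
}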
@@ -218,6 +218,7 @@ Requires: /sbin/chkconfig
Requires: jna
Requires: ebtables
Requires: jsvc
Requires: jakarta-commons-daemon
Group: System Environment/Libraries

Requires: kvm

@@ -576,7 +577,10 @@ fi
%attr(0755,root,root) %{_bindir}/cloud-setup-bridge

%changelog
* Thu Aug 16 2012 Marcus Sorense <shadowsor@gmail.com> 4.0
* Fri Sep 14 2012 Marcus Sorensen <shadowsor@gmail.com> 4.0.1
- adding dependency jakarta-commons-daemon to fix "cannot find daemon loader"

* Thu Aug 16 2012 Marcus Sorensen <shadowsor@gmail.com> 4.0
- rearranged files sections to match currently built files

* Mon May 3 2010 Manuel Amador (Rudd-O) <manuel@vmops.com> 1.9.12
@@ -105,7 +105,7 @@ public class KVMGuestOsMapper {
s_mapper.put("Ubuntu 8.04 (64-bit)", "Other Linux");
s_mapper.put("Debian GNU/Linux 5(32-bit)", "Debian GNU/Linux 5");
s_mapper.put("Debian GNU/Linux 5(64-bit)", "Debian GNU/Linux 5");
s_mapper.put("Debian GNU/Linux 5.0(32-bit)", "Debian GNU/Linux 5");
s_mapper.put("Debian GNU/Linux 5.0 (32-bit)", "Debian GNU/Linux 5");
s_mapper.put("Debian GNU/Linux 4(32-bit)", "Debian GNU/Linux 4");
s_mapper.put("Debian GNU/Linux 4(64-bit)", "Debian GNU/Linux 4");
s_mapper.put("Debian GNU/Linux 6(64-bit)", "Debian GNU/Linux 6");

@@ -2587,7 +2587,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements
if (disk.getDeviceType() == DiskDef.deviceType.CDROM
&& disk.getDiskPath() != null) {
cleanupDisk(conn, disk);
} else if (disk.getDiskPath().contains(vmName + "-patchdisk")
} else if (disk.getDiskPath() != null
&& disk.getDiskPath().contains(vmName + "-patchdisk")
&& vmName.matches("^[rsv]-\\d+-VM$")) {
if (!_storagePoolMgr.deleteVbdByPath(disk.getDiskPath())) {
s_logger.warn("failed to delete patch disk " + disk.getDiskPath());

pom.xml
@@ -74,6 +74,7 @@
<cs.mail.version>1.4</cs.mail.version>
<cs.axis.version>1.4</cs.axis.version>
<cs.axis2.version>1.5.1</cs.axis2.version>
<cs.rampart.version>1.6.2</cs.rampart.version>
<cs.axiom.version>1.2.8</cs.axiom.version>
<cs.neethi.version>2.0.4</cs.neethi.version>
<cs.servlet.version>2.4</cs.servlet.version>

@@ -106,7 +106,7 @@ class Distribution:
self.distro = "Fedora"
elif os.path.exists("/etc/redhat-release"):
version = file("/etc/redhat-release").readline()
if version.find("Red Hat Enterprise Linux Server release 6") != -1 or version.find("Scientific Linux release 6") != -1 or version.find("CentOS Linux release 6") != -1 or version.find("CentOS release 6.2") != -1:
if version.find("Red Hat Enterprise Linux Server release 6") != -1 or version.find("Scientific Linux release 6") != -1 or version.find("CentOS Linux release 6") != -1 or version.find("CentOS release 6.2") or version.find("CentOS release 6.3") != -1:
self.distro = "RHEL6"
elif version.find("CentOS release") != -1:
self.distro = "CentOS"
@@ -33,7 +33,7 @@ failed() {
mflag=
fflag=
ext="vhd"
templateId=1
templateId=
hyper=
msKey=password
DISKSPACE=5120000 #free disk space required in kilobytes

@@ -143,21 +143,24 @@ else
fi
fi

if [ "$hyper" == "kvm" ]
if [ "$templateId" == "" ]
then
ext="qcow2"
templateId=(`mysql -h $dbHost --user=$dbUser --password=$dbPassword --skip-column-names -U cloud -e "select max(id) from cloud.vm_template where type = \"SYSTEM\" and hypervisor_type = \"KVM\" and removed is null"`)
elif [ "$hyper" == "xenserver" ]
then
ext="vhd"
templateId=(`mysql -h $dbHost --user=$dbUser --password=$dbPassword --skip-column-names -U cloud -e "select max(id) from cloud.vm_template where type = \"SYSTEM\" and hypervisor_type = \"XenServer\" and removed is null"`)
elif [ "$hyper" == "vmware" ]
then
ext="ova"
templateId=(`mysql -h $dbHost --user=$dbUser --password=$dbPassword --skip-column-names -U cloud -e "select max(id) from cloud.vm_template where type = \"SYSTEM\" and hypervisor_type = \"VMware\" and removed is null"`)
else
usage
failed 2
if [ "$hyper" == "kvm" ]
then
ext="qcow2"
templateId=(`mysql -h $dbHost --user=$dbUser --password=$dbPassword --skip-column-names -U cloud -e "select max(id) from cloud.vm_template where type = \"SYSTEM\" and hypervisor_type = \"KVM\" and removed is null"`)
elif [ "$hyper" == "xenserver" ]
then
ext="vhd"
templateId=(`mysql -h $dbHost --user=$dbUser --password=$dbPassword --skip-column-names -U cloud -e "select max(id) from cloud.vm_template where type = \"SYSTEM\" and hypervisor_type = \"XenServer\" and removed is null"`)
elif [ "$hyper" == "vmware" ]
then
ext="ova"
templateId=(`mysql -h $dbHost --user=$dbUser --password=$dbPassword --skip-column-names -U cloud -e "select max(id) from cloud.vm_template where type = \"SYSTEM\" and hypervisor_type = \"VMware\" and removed is null"`)
else
usage
failed 2
fi
fi

if [ ! $templateId ]
@@ -4846,11 +4846,19 @@ public class NetworkManagerImpl implements NetworkManager, NetworkService, Manag
}

private String getDomainNetworkDomain(long domainId, long zoneId) {
String networkDomain = _domainDao.findById(domainId).getNetworkDomain();
String networkDomain = null;
Long searchDomainId = domainId;
while(searchDomainId != null){
DomainVO domain = _domainDao.findById(searchDomainId);
if(domain.getNetworkDomain() != null){
networkDomain = domain.getNetworkDomain();
break;
}
searchDomainId = domain.getParent();
}
if (networkDomain == null) {
return getZoneNetworkDomain(zoneId);
}

return networkDomain;
}
@@ -31,6 +31,7 @@ import org.apache.log4j.Logger;
import com.cloud.utils.crypt.DBEncryptionUtil;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.script.Script;
import com.cloud.dc.DataCenter.NetworkType;

public class Upgrade302to40 extends Upgrade30xBase implements DbUpgrade {
final static Logger s_logger = Logger.getLogger(Upgrade302to40.class);

@@ -68,6 +69,8 @@ public class Upgrade302to40 extends Upgrade30xBase implements DbUpgrade {
addVpcProvider(conn);
updateRouterNetworkRef(conn);
fixForeignKeys(conn);
setupExternalNetworkDevices(conn);
fixZoneUsingExternalDevices(conn);
}

@Override
@@ -681,4 +684,350 @@ public class Upgrade302to40 extends Upgrade30xBase implements DbUpgrade {
throw new CloudRuntimeException("Unable to execute ssh_keypairs table update for adding domain_id foreign key", e);
}
}

// upgrades deployment with F5 and SRX devices, to 3.0's Network offerings & service providers paradigm
private void setupExternalNetworkDevices(Connection conn) {
PreparedStatement zoneSearchStmt = null, pNetworkStmt = null, f5DevicesStmt = null, srxDevicesStmt = null;
ResultSet zoneResults = null, pNetworksResults = null, f5DevicesResult = null, srxDevicesResult = null;

try {
zoneSearchStmt = conn.prepareStatement("SELECT id, networktype FROM `cloud`.`data_center`");
zoneResults = zoneSearchStmt.executeQuery();
while (zoneResults.next()) {
long zoneId = zoneResults.getLong(1);
String networkType = zoneResults.getString(2);

if (!NetworkType.Advanced.toString().equalsIgnoreCase(networkType)) {
continue;
}

pNetworkStmt = conn.prepareStatement("SELECT id FROM `cloud`.`physical_network` where data_center_id=?");
pNetworkStmt.setLong(1, zoneId);
pNetworksResults = pNetworkStmt.executeQuery();
while (pNetworksResults.next()) {
long physicalNetworkId = pNetworksResults.getLong(1);
PreparedStatement fetchF5NspStmt = conn.prepareStatement("SELECT id from `cloud`.`physical_network_service_providers` where physical_network_id=" + physicalNetworkId
+ " and provider_name = 'F5BigIp'");
ResultSet rsF5NSP = fetchF5NspStmt.executeQuery();
boolean hasF5Nsp = rsF5NSP.next();
fetchF5NspStmt.close();

if (!hasF5Nsp) {
f5DevicesStmt = conn.prepareStatement("SELECT id FROM host WHERE data_center_id=? AND type = 'ExternalLoadBalancer' AND removed IS NULL");
f5DevicesStmt.setLong(1, zoneId);
f5DevicesResult = f5DevicesStmt.executeQuery();

while (f5DevicesResult.next()) {
long f5HostId = f5DevicesResult.getLong(1);;
// add F5BigIP provider and provider instance to physical network
addF5ServiceProvider(conn, physicalNetworkId, zoneId);
addF5LoadBalancer(conn, f5HostId, physicalNetworkId);
}
}

PreparedStatement fetchSRXNspStmt = conn.prepareStatement("SELECT id from `cloud`.`physical_network_service_providers` where physical_network_id=" + physicalNetworkId
+ " and provider_name = 'JuniperSRX'");
ResultSet rsSRXNSP = fetchSRXNspStmt.executeQuery();
boolean hasSrxNsp = rsSRXNSP.next();
fetchSRXNspStmt.close();

if (!hasSrxNsp) {
srxDevicesStmt = conn.prepareStatement("SELECT id FROM host WHERE data_center_id=? AND type = 'ExternalFirewall' AND removed IS NULL");
srxDevicesStmt.setLong(1, zoneId);
srxDevicesResult = srxDevicesStmt.executeQuery();

while (srxDevicesResult.next()) {
long srxHostId = srxDevicesResult.getLong(1);
// add SRX provider and provider instance to physical network
addSrxServiceProvider(conn, physicalNetworkId, zoneId);
addSrxFirewall(conn, srxHostId, physicalNetworkId);
}
}
}
}

if (zoneResults != null) {
try {
zoneResults.close();
} catch (SQLException e) {
}
}
if (zoneSearchStmt != null) {
try {
zoneSearchStmt.close();
} catch (SQLException e) {
}
}
} catch (SQLException e) {
throw new CloudRuntimeException("Exception while adding PhysicalNetworks", e);
} finally {

}
}

private void addF5LoadBalancer(Connection conn, long hostId, long physicalNetworkId){
PreparedStatement pstmtUpdate = null;
try{
s_logger.debug("Adding F5 Big IP load balancer with host id " + hostId + " in to physical network" + physicalNetworkId);
String insertF5 = "INSERT INTO `cloud`.`external_load_balancer_devices` (physical_network_id, host_id, provider_name, " +
"device_name, capacity, is_dedicated, device_state, allocation_state, is_inline, is_managed, uuid) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
pstmtUpdate = conn.prepareStatement(insertF5);
pstmtUpdate.setLong(1, physicalNetworkId);
pstmtUpdate.setLong(2, hostId);
pstmtUpdate.setString(3, "F5BigIp");
pstmtUpdate.setString(4, "F5BigIpLoadBalancer");
pstmtUpdate.setLong(5, 0);
pstmtUpdate.setBoolean(6, false);
pstmtUpdate.setString(7, "Enabled");
pstmtUpdate.setString(8, "Shared");
pstmtUpdate.setBoolean(9, false);
pstmtUpdate.setBoolean(10, false);
pstmtUpdate.setString(11, UUID.randomUUID().toString());
pstmtUpdate.executeUpdate();
}catch (SQLException e) {
throw new CloudRuntimeException("Exception while adding F5 load balancer device" , e);
} finally {
if (pstmtUpdate != null) {
try {
pstmtUpdate.close();
} catch (SQLException e) {
}
}
}
}

private void addSrxFirewall(Connection conn, long hostId, long physicalNetworkId){
PreparedStatement pstmtUpdate = null;
try{
s_logger.debug("Adding SRX firewall device with host id " + hostId + " in to physical network" + physicalNetworkId);
String insertSrx = "INSERT INTO `cloud`.`external_firewall_devices` (physical_network_id, host_id, provider_name, " +
"device_name, capacity, is_dedicated, device_state, allocation_state, uuid) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?)";
pstmtUpdate = conn.prepareStatement(insertSrx);
pstmtUpdate.setLong(1, physicalNetworkId);
pstmtUpdate.setLong(2, hostId);
pstmtUpdate.setString(3, "JuniperSRX");
pstmtUpdate.setString(4, "JuniperSRXFirewall");
pstmtUpdate.setLong(5, 0);
pstmtUpdate.setBoolean(6, false);
pstmtUpdate.setString(7, "Enabled");
pstmtUpdate.setString(8, "Shared");
pstmtUpdate.setString(9, UUID.randomUUID().toString());
pstmtUpdate.executeUpdate();
}catch (SQLException e) {
throw new CloudRuntimeException("Exception while adding SRX firewall device ", e);
} finally {
if (pstmtUpdate != null) {
try {
pstmtUpdate.close();
} catch (SQLException e) {
}
}
}
}

private void addF5ServiceProvider(Connection conn, long physicalNetworkId, long zoneId){
PreparedStatement pstmtUpdate = null;
try{
// add physical network service provider - F5BigIp
s_logger.debug("Adding PhysicalNetworkServiceProvider F5BigIp" + " in to physical network" + physicalNetworkId);
String insertPNSP = "INSERT INTO `cloud`.`physical_network_service_providers` (`uuid`, `physical_network_id` , `provider_name`, `state` ," +
"`destination_physical_network_id`, `vpn_service_provided`, `dhcp_service_provided`, `dns_service_provided`, `gateway_service_provided`," +
"`firewall_service_provided`, `source_nat_service_provided`, `load_balance_service_provided`, `static_nat_service_provided`," +
"`port_forwarding_service_provided`, `user_data_service_provided`, `security_group_service_provided`) VALUES (?,?,?,?,0,0,0,0,0,0,0,1,0,0,0,0)";

pstmtUpdate = conn.prepareStatement(insertPNSP);
pstmtUpdate.setString(1, UUID.randomUUID().toString());
pstmtUpdate.setLong(2, physicalNetworkId);
pstmtUpdate.setString(3, "F5BigIp");
pstmtUpdate.setString(4, "Enabled");
pstmtUpdate.executeUpdate();
}catch (SQLException e) {
throw new CloudRuntimeException("Exception while adding PhysicalNetworkServiceProvider F5BigIp", e);
} finally {
if (pstmtUpdate != null) {
try {
pstmtUpdate.close();
} catch (SQLException e) {
}
}
}
}

private void addSrxServiceProvider(Connection conn, long physicalNetworkId, long zoneId){
PreparedStatement pstmtUpdate = null;
try{
// add physical network service provider - JuniperSRX
s_logger.debug("Adding PhysicalNetworkServiceProvider JuniperSRX");
String insertPNSP = "INSERT INTO `cloud`.`physical_network_service_providers` (`uuid`, `physical_network_id` , `provider_name`, `state` ," +
"`destination_physical_network_id`, `vpn_service_provided`, `dhcp_service_provided`, `dns_service_provided`, `gateway_service_provided`," +
"`firewall_service_provided`, `source_nat_service_provided`, `load_balance_service_provided`, `static_nat_service_provided`," +
"`port_forwarding_service_provided`, `user_data_service_provided`, `security_group_service_provided`) VALUES (?,?,?,?,0,0,0,0,1,1,1,0,1,1,0,0)";

pstmtUpdate = conn.prepareStatement(insertPNSP);
pstmtUpdate.setString(1, UUID.randomUUID().toString());
pstmtUpdate.setLong(2, physicalNetworkId);
pstmtUpdate.setString(3, "JuniperSRX");
pstmtUpdate.setString(4, "Enabled");
pstmtUpdate.executeUpdate();
}catch (SQLException e) {
throw new CloudRuntimeException("Exception while adding PhysicalNetworkServiceProvider JuniperSRX" , e);
} finally {
if (pstmtUpdate != null) {
try {
pstmtUpdate.close();
} catch (SQLException e) {
}
}
}
}

// 1) ensure that networks using external load balancer/firewall in 2.2.14 or prior releases deployments
// has entry in network_external_lb_device_map and network_external_firewall_device_map
//
// 2) Some keys of host details for F5 and SRX devices were stored in Camel Case in 2.x releases. From 3.0
// they are made in lowercase. On upgrade change the host details name to lower case
private void fixZoneUsingExternalDevices(Connection conn) {
//Get zones to upgrade
List<Long> zoneIds = new ArrayList<Long>();
PreparedStatement pstmt = null;
PreparedStatement pstmtUpdate = null;
ResultSet rs = null;
long networkOfferingId, networkId;
long f5DeviceId, f5HostId;
long srxDevivceId, srxHostId;

try {
pstmt = conn.prepareStatement("select id from `cloud`.`data_center` where lb_provider='F5BigIp' or firewall_provider='JuniperSRX' or gateway_provider='JuniperSRX'");
rs = pstmt.executeQuery();
while (rs.next()) {
zoneIds.add(rs.getLong(1));
}
} catch (SQLException e) {
throw new CloudRuntimeException("Unable to create network to LB & firewall device mapping for networks that use them", e);
}

if (zoneIds.size() == 0) {
return; // no zones using F5 and SRX devices so return
}

// find the default network offering created for external devices during upgrade from 2.2.14
try {
pstmt = conn.prepareStatement("select id from `cloud`.`network_offerings` where unique_name='Isolated with external providers' ");
rs = pstmt.executeQuery();
if (rs.first()) {
networkOfferingId = rs.getLong(1);
} else {
throw new CloudRuntimeException("Cannot upgrade as there is no 'Isolated with external providers' network offering created.");
}
} catch (SQLException e) {
throw new CloudRuntimeException("Unable to create network to LB & firewall device mapping for networks that use them", e);
}

for (Long zoneId : zoneIds) {
try {
// find the F5 device id in the zone
pstmt = conn.prepareStatement("SELECT id FROM host WHERE data_center_id=? AND type = 'ExternalLoadBalancer' AND removed IS NULL");
pstmt.setLong(1, zoneId);
rs = pstmt.executeQuery();
if (rs.first()) {
f5HostId = rs.getLong(1);
} else {
throw new CloudRuntimeException("Cannot upgrade as there is no F5 load balancer device found in data center " + zoneId);
}
pstmt = conn.prepareStatement("SELECT id FROM external_load_balancer_devices WHERE host_id=?");
pstmt.setLong(1, f5HostId);
rs = pstmt.executeQuery();
if (rs.first()) {
f5DeviceId = rs.getLong(1);
} else {
throw new CloudRuntimeException("Cannot upgrade as there is no F5 load balancer device with host ID " + f5HostId + " found in external_load_balancer_device");
}

// find the SRX device id in the zone
pstmt = conn.prepareStatement("SELECT id FROM host WHERE data_center_id=? AND type = 'ExternalFirewall' AND removed IS NULL");
pstmt.setLong(1, zoneId);
rs = pstmt.executeQuery();
if (rs.first()) {
srxHostId = rs.getLong(1);
} else {
throw new CloudRuntimeException("Cannot upgrade as there is no SRX firewall device found in data center " + zoneId);
}
pstmt = conn.prepareStatement("SELECT id FROM external_firewall_devices WHERE host_id=?");
pstmt.setLong(1, srxHostId);
rs = pstmt.executeQuery();
if (rs.first()) {
srxDevivceId = rs.getLong(1);
} else {
throw new CloudRuntimeException("Cannot upgrade as there is no SRX firewall device found with host ID " + srxHostId + " found in external_firewall_devices");
}

// check if any network uses F5 or SRX devices in the zone
pstmt = conn.prepareStatement("select id from `cloud`.`networks` where guest_type='Virtual' and data_center_id=? and network_offering_id=? and removed IS NULL");
pstmt.setLong(1, zoneId);
pstmt.setLong(2, networkOfferingId);
rs = pstmt.executeQuery();
while (rs.next()) {
// get the network Id
networkId = rs.getLong(1);

// add mapping for the network in network_external_lb_device_map
String insertLbMapping = "INSERT INTO `cloud`.`network_external_lb_device_map` (uuid, network_id, external_load_balancer_device_id, created) VALUES ( ?, ?, ?, now())";
pstmtUpdate = conn.prepareStatement(insertLbMapping);
pstmtUpdate.setString(1, UUID.randomUUID().toString());
pstmtUpdate.setLong(2, networkId);
pstmtUpdate.setLong(3, f5DeviceId);
pstmtUpdate.executeUpdate();
s_logger.debug("Successfully added entry in network_external_lb_device_map for network " + networkId + " and F5 device ID " + f5DeviceId);

// add mapping for the network in network_external_firewall_device_map
String insertFwMapping = "INSERT INTO `cloud`.`network_external_firewall_device_map` (uuid, network_id, external_firewall_device_id, created) VALUES ( ?, ?, ?, now())";
pstmtUpdate = conn.prepareStatement(insertFwMapping);
pstmtUpdate.setString(1, UUID.randomUUID().toString());
pstmtUpdate.setLong(2, networkId);
pstmtUpdate.setLong(3, srxDevivceId);
pstmtUpdate.executeUpdate();
s_logger.debug("Successfully added entry in network_external_firewall_device_map for network " + networkId + " and SRX device ID " + srxDevivceId);
}

// update host details for F5 and SRX devices
s_logger.debug("Updating the host details for F5 and SRX devices");
pstmt = conn.prepareStatement("SELECT host_id, name FROM `cloud`.`host_details` WHERE host_id=? OR host_id=?");
pstmt.setLong(1, f5HostId);
pstmt.setLong(2, srxHostId);
rs = pstmt.executeQuery();
while (rs.next()) {
long hostId = rs.getLong(1);
String camlCaseName = rs.getString(2);
if (!(camlCaseName.equalsIgnoreCase("numRetries") ||
camlCaseName.equalsIgnoreCase("publicZone") ||
camlCaseName.equalsIgnoreCase("privateZone") ||
camlCaseName.equalsIgnoreCase("publicInterface") ||
camlCaseName.equalsIgnoreCase("privateInterface") ||
camlCaseName.equalsIgnoreCase("usageInterface") )) {
continue;
}
String lowerCaseName = camlCaseName.toLowerCase();
pstmt = conn.prepareStatement("update `cloud`.`host_details` set name=? where host_id=? AND name=?");
pstmt.setString(1, lowerCaseName);
pstmt.setLong(2, hostId);
pstmt.setString(3, camlCaseName);
pstmt.executeUpdate();
}
s_logger.debug("Successfully updated host details for F5 and SRX devices");
} catch (SQLException e) {
throw new CloudRuntimeException("Unable to create a mapping for the networks in network_external_lb_device_map and network_external_firewall_device_map", e);
} finally {
try {
if (rs != null) {
rs.close();
}
if (pstmt != null) {
pstmt.close();
}
} catch (SQLException e) {
}
}
s_logger.info("Successfully upgraded networks using F5 and SRX devices to have an entry in the network_external_lb_device_map and network_external_firewall_device_map");
}
}
}
@@ -0,0 +1,29 @@
-- Licensed to the Apache Software Foundation (ASF) under one
-- or more contributor license agreements. See the NOTICE file
-- distributed with this work for additional information
-- regarding copyright ownership. The ASF licenses this file
-- to you under the Apache License, Version 2.0 (the
-- "License"); you may not use this file except in compliance
-- with the License. You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing,
-- software distributed under the License is distributed on an
-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-- KIND, either express or implied. See the License for the
-- specific language governing permissions and limitations
-- under the License.

DROP DATABASE IF EXISTS `simulator`;

CREATE DATABASE `simulator`;

GRANT ALL ON simulator.* to cloud@`localhost` identified by 'cloud';
GRANT ALL ON simulator.* to cloud@`%` identified by 'cloud';

GRANT process ON *.* TO cloud@`localhost`;
GRANT process ON *.* TO cloud@`%`;

commit;
@@ -15,14 +15,14 @@
-- specific language governing permissions and limitations
-- under the License.

DROP TABLE IF EXISTS `cloud`.`mockhost`;
DROP TABLE IF EXISTS `cloud`.`mocksecstorage`;
DROP TABLE IF EXISTS `cloud`.`mockstoragepool`;
DROP TABLE IF EXISTS `cloud`.`mockvm`;
DROP TABLE IF EXISTS `cloud`.`mockvolume`;
DROP TABLE IF EXISTS `cloud`.`mocksecurityrules`;
DROP TABLE IF EXISTS `simulator`.`mockhost`;
DROP TABLE IF EXISTS `simulator`.`mocksecstorage`;
DROP TABLE IF EXISTS `simulator`.`mockstoragepool`;
DROP TABLE IF EXISTS `simulator`.`mockvm`;
DROP TABLE IF EXISTS `simulator`.`mockvolume`;
DROP TABLE IF EXISTS `simulator`.`mocksecurityrules`;

CREATE TABLE `cloud`.`mockhost` (
CREATE TABLE `simulator`.`mockhost` (
`id` bigint unsigned NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
`private_ip_address` char(40),

@@ -48,7 +48,7 @@ CREATE TABLE `cloud`.`mockhost` (
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `cloud`.`mocksecstorage` (
CREATE TABLE `simulator`.`mocksecstorage` (
`id` bigint unsigned NOT NULL auto_increment,
`url` varchar(255),
`capacity` bigint unsigned,

@@ -56,7 +56,7 @@ CREATE TABLE `cloud`.`mocksecstorage` (
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `cloud`.`mockstoragepool` (
CREATE TABLE `simulator`.`mockstoragepool` (
`id` bigint unsigned NOT NULL auto_increment,
`guid` varchar(255),
`mount_point` varchar(255),

@@ -67,7 +67,7 @@ CREATE TABLE `cloud`.`mockstoragepool` (
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


CREATE TABLE `cloud`.`mockvm` (
CREATE TABLE `simulator`.`mockvm` (
`id` bigint unsigned NOT NULL auto_increment,
`name` varchar(255),
`host_id` bigint unsigned,

@@ -83,7 +83,7 @@ CREATE TABLE `cloud`.`mockvm` (
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


CREATE TABLE `cloud`.`mockvolume` (
CREATE TABLE `simulator`.`mockvolume` (
`id` bigint unsigned NOT NULL auto_increment,
`name` varchar(255),
`size` bigint unsigned,

@@ -97,7 +97,7 @@ CREATE TABLE `cloud`.`mockvolume` (
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


CREATE TABLE `cloud`.`mockconfiguration` (
CREATE TABLE `simulator`.`mockconfiguration` (
`id` bigint unsigned NOT NULL auto_increment,
`data_center_id` bigint unsigned,
`pod_id` bigint unsigned,

@@ -108,7 +108,7 @@ CREATE TABLE `cloud`.`mockconfiguration` (
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `cloud`.`mocksecurityrules` (
CREATE TABLE `simulator`.`mocksecurityrules` (
`id` bigint unsigned NOT NULL auto_increment,
`vmid` bigint unsigned,
`signature` varchar(255),
@@ -87,6 +87,10 @@ echo "Recreating Database cloud_usage."
mysql --user=root --password=$3 < create-database-premium.sql > /dev/null 2>/dev/null
handle_error create-database-premium.sql

echo "Recreating Database simulator."
mysql --user=root --password=$3 < create-database-simulator.sql > /dev/null 2>/dev/null
handle_error create-database-simulator.sql

mysql --user=cloud --password=cloud cloud < create-schema.sql
if [ $? -ne 0 ]; then
printf "Error: Cannot execute create-schema.sql\n"
@@ -1,145 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.


#!/usr/bin/env python
try:
    import unittest2 as unittest
except ImportError:
    import unittest

import random
import hashlib
from cloudstackTestCase import *
import remoteSSHClient

class SampleScenarios(cloudstackTestCase):
    '''
    '''
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test_1_createAccounts(self, numberOfAccounts=2):
        '''
        Create a bunch of user accounts
        '''
        mdf = hashlib.md5()
        mdf.update('password')
        mdf_pass = mdf.hexdigest()
        api = self.testClient.getApiClient()
        for i in range(1, numberOfAccounts + 1):
            acct = createAccount.createAccountCmd()
            acct.accounttype = 0
            acct.firstname = 'user' + str(i)
            acct.lastname = 'user' + str(i)
            acct.password = mdf_pass
            acct.username = 'user' + str(i)
            acct.email = 'user@example.com'
            acct.account = 'user' + str(i)
            acct.domainid = 1
            acctResponse = api.createAccount(acct)
            self.debug("successfully created account: %s, user: %s, id: %s"%(acctResponse.account, acctResponse.username, acctResponse.id))

    def test_2_createServiceOffering(self):
        apiClient = self.testClient.getApiClient()
        createSOcmd=createServiceOffering.createServiceOfferingCmd()
        createSOcmd.name='Sample SO'
        createSOcmd.displaytext='Sample SO'
        createSOcmd.storagetype='shared'
        createSOcmd.cpunumber=1
        createSOcmd.cpuspeed=100
        createSOcmd.memory=128
        createSOcmd.offerha='false'
        createSOresponse = apiClient.createServiceOffering(createSOcmd)
        return createSOresponse.id

    def deployCmd(self, account, service):
        deployVmCmd = deployVirtualMachine.deployVirtualMachineCmd()
        deployVmCmd.zoneid = 1
        deployVmCmd.account=account
        deployVmCmd.domainid=1
        deployVmCmd.templateid=2
        deployVmCmd.serviceofferingid=service
        return deployVmCmd

    def listVmsInAccountCmd(self, acct):
        api = self.testClient.getApiClient()
        listVmCmd = listVirtualMachines.listVirtualMachinesCmd()
        listVmCmd.account = acct
        listVmCmd.zoneid = 1
        listVmCmd.domainid = 1
        listVmResponse = api.listVirtualMachines(listVmCmd)
        return listVmResponse

    def destroyVmCmd(self, key):
        api = self.testClient.getApiClient()
        destroyVmCmd = destroyVirtualMachine.destroyVirtualMachineCmd()
        destroyVmCmd.id = key
        api.destroyVirtualMachine(destroyVmCmd)

    def test_3_stressDeploy(self):
        '''
        Deploy 5 Vms in each account
        '''
        service_id = self.test_2_createServiceOffering()
        api = self.testClient.getApiClient()
        for acct in range(1, 5):
            [api.deployVirtualMachine(self.deployCmd('user'+str(acct), service_id)) for x in range(0,5)]

    @unittest.skip("skipping destroys")
    def test_4_stressDestroy(self):
        '''
        Cleanup all Vms in every account
        '''
        api = self.testClient.getApiClient()
        for acct in range(1, 6):
            for vm in self.listVmsInAccountCmd('user'+str(acct)):
                if vm is not None:
                    self.destroyVmCmd(vm.id)

    @unittest.skip("skipping destroys")
    def test_5_combineStress(self):
        for i in range(0, 5):
            self.test_3_stressDeploy()
            self.test_4_stressDestroy()

    def deployN(self,nargs=300,batchsize=0):
        '''
        Deploy Nargs number of VMs concurrently in batches of size {batchsize}.
        When batchsize is 0 all Vms are deployed in one batch
        VMs will be deployed in 5:2:6 ratio
        '''
        cmds = []

        if batchsize == 0:
            self.testClient.submitCmdsAndWait(cmds)
        else:
            while len(z) > 0:
                try:
                    newbatch = [cmds.pop() for b in range(batchsize)] #pop batchsize items
                    self.testClient.submitCmdsAndWait(newbatch)
                except IndexError:
                    break
@@ -80,6 +80,7 @@ public class Transaction {
public static final short CLOUD_DB = 0;
public static final short USAGE_DB = 1;
public static final short AWSAPI_DB = 2;
public static final short SIMULATOR_DB = 3;
public static final short CONNECTED_DB = -1;

private static AtomicLong s_id = new AtomicLong();

@@ -224,6 +225,7 @@ public class Transaction {
return null;
}
}

public static Connection getStandaloneAwsapiConnection() {
try {
Connection conn = s_awsapiDS.getConnection();

@@ -235,7 +237,21 @@ public class Transaction {
s_logger.warn("Unexpected exception: ", e);
return null;
}
}
}

public static Connection getStandaloneSimulatorConnection() {
try {
Connection conn = s_simulatorDS.getConnection();
if (s_connLogger.isTraceEnabled()) {
s_connLogger.trace("Retrieving a standalone connection for simulator: dbconn" + System.identityHashCode(conn));
}
return conn;
} catch (SQLException e) {
s_logger.warn("Unexpected exception: ", e);
return null;
}
}

protected void attach(TransactionAttachment value) {
_stack.push(new StackElement(ATTACHMENT, value));
}
@@ -546,6 +562,14 @@ public class Transaction {
}
break;

case SIMULATOR_DB:
if(s_simulatorDS != null) {
_conn = s_simulatorDS.getConnection();
} else {
s_logger.warn("A static-initialized variable becomes null, process is dying?");
throw new CloudRuntimeException("Database is not initialized, process is dying?");
}
break;
default:

throw new CloudRuntimeException("No database selected for the transaction");

@@ -976,6 +1000,7 @@ public class Transaction {
private static DataSource s_ds;
private static DataSource s_usageDS;
private static DataSource s_awsapiDS;
private static DataSource s_simulatorDS;
static {
try {
final File dbPropsFile = PropertiesUtil.findConfigFile("db.properties");

@@ -1069,6 +1094,27 @@ public class Transaction {
new StackKeyedObjectPoolFactory(), null, false, false);
s_awsapiDS = new PoolingDataSource(awsapiPoolableConnectionFactory.getPool());

try{
// configure the simulator db
final int simulatorMaxActive = Integer.parseInt(dbProps.getProperty("db.simulator.maxActive"));
final int simulatorMaxIdle = Integer.parseInt(dbProps.getProperty("db.simulator.maxIdle"));
final long simulatorMaxWait = Long.parseLong(dbProps.getProperty("db.simulator.maxWait"));
final String simulatorUsername = dbProps.getProperty("db.simulator.username");
final String simulatorPassword = dbProps.getProperty("db.simulator.password");
final String simulatorHost = dbProps.getProperty("db.simulator.host");
final int simulatorPort = Integer.parseInt(dbProps.getProperty("db.simulator.port"));
final String simulatorDbName = dbProps.getProperty("db.simulator.name");
final boolean simulatorAutoReconnect = Boolean.parseBoolean(dbProps.getProperty("db.simulator.autoReconnect"));
final GenericObjectPool simulatorConnectionPool = new GenericObjectPool(null, simulatorMaxActive, GenericObjectPool.DEFAULT_WHEN_EXHAUSTED_ACTION,
simulatorMaxWait, simulatorMaxIdle);
final ConnectionFactory simulatorConnectionFactory = new DriverManagerConnectionFactory("jdbc:mysql://"+simulatorHost + ":" + simulatorPort + "/" + simulatorDbName +
"?autoReconnect="+simulatorAutoReconnect, simulatorUsername, simulatorPassword);
final PoolableConnectionFactory simulatorPoolableConnectionFactory = new PoolableConnectionFactory(simulatorConnectionFactory, simulatorConnectionPool,
new StackKeyedObjectPoolFactory(), null, false, false);
s_simulatorDS = new PoolingDataSource(simulatorPoolableConnectionFactory.getPool());
} catch (Exception e){
s_logger.debug("Simulator DB properties are not available. Not initializing simulator DS");
}
} catch (final Exception e) {
final GenericObjectPool connectionPool = new GenericObjectPool(null, 5);
final ConnectionFactory connectionFactory = new DriverManagerConnectionFactory("jdbc:mysql://localhost:3306/cloud", "cloud", "cloud");

@@ -1079,6 +1125,11 @@ public class Transaction {
final ConnectionFactory connectionFactoryUsage = new DriverManagerConnectionFactory("jdbc:mysql://localhost:3306/cloud_usage", "cloud", "cloud");
final PoolableConnectionFactory poolableConnectionFactoryUsage = new PoolableConnectionFactory(connectionFactoryUsage, connectionPoolUsage, null, null, false, true);
s_usageDS = new PoolingDataSource(poolableConnectionFactoryUsage.getPool());

final GenericObjectPool connectionPoolsimulator = new GenericObjectPool(null, 5);
final ConnectionFactory connectionFactorysimulator = new DriverManagerConnectionFactory("jdbc:mysql://localhost:3306/cloud_simulator", "cloud", "cloud");
final PoolableConnectionFactory poolableConnectionFactorysimulator = new PoolableConnectionFactory(connectionFactorysimulator, connectionPoolsimulator, null, null, false, true);
s_simulatorDS = new PoolingDataSource(poolableConnectionFactorysimulator.getPool());
s_logger.warn("Unable to load db configuration, using defaults with 5 connections. Please check your configuration", e);
}
}
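With the SIMULATOR_DB constant and data source in place, callers follow the same open/start/commit/rollback/close pattern that SimulatorManagerImpl uses earlier in this change. A minimal sketch of that pattern (the class name and the DAO placeholder are illustrative, and it assumes Transaction lives in com.cloud.utils.db as in the rest of the code base):

// Illustrative only; mirrors the transaction handling shown in SimulatorManagerImpl above.
import com.cloud.utils.db.Transaction;
import com.cloud.utils.exception.CloudRuntimeException;

public class SimulatorTxnSketch {
    public void updateSimulatorRecord() {
        Transaction txn = Transaction.open(Transaction.SIMULATOR_DB);
        try {
            txn.start();
            // ... DAO work against the simulator database goes here ...
            txn.commit();
        } catch (Exception e) {
            txn.rollback();
            throw new CloudRuntimeException("unable to update simulator record", e);
        } finally {
            txn.close();
            // switch the thread-local transaction back to the main cloud DB,
            // as the simulator code does after each command
            txn = Transaction.open(Transaction.CLOUD_DB);
            txn.close();
        }
    }
}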
@@ -164,7 +164,7 @@ def build_dependences ():

bld.install_files('${JAVADIR}',start_path.ant_glob(["CAStorSDK-*.jar", "javax.persistence-2.0.0.jar", "apache-log4j-extras-1.1.jar", "libvirt-0.4.9.jar", "axis2-1.5.1.jar", "jstl-1.2.jar", "commons-discovery-0.5.jar", "commons-codec-1.6.jar", "ejb-api-3.0.jar", "xmlrpc-client-3.1.3.jar", "commons-dbcp-1.4.jar", "commons-pool-1.6.jar", "gson-1.7.1.jar",
"netscaler-1.0.jar", "netscaler-sdx-1.0.jar", "backport-util-concurrent-3.1.jar", "ehcache-1.5.0.jar", "httpcore-4.0.jar", "log4j-1.2.16.jar", "trilead-ssh2-build213-svnkit-1.3-patch.jar", "cglib-nodep-2.2.2.jar", "xmlrpc-common-3.*.jar",
"xmlrpc-client-3.*.jar", "axis-1.4.jar", "wsdl4j-1.6.2.jar", "bcprov-jdk16-1.45.jar", "jsch-0.1.42.jar", "jasypt-1.9.0.jar", "commons-configuration-1.8.jar", "commons-lang-2.6.jar", "mail-1.4.jar", "activation-1.1.jar", "xapi-5.6.100-1-SNAPSHOT.jar"], excl = excludes), cwd=start_path)
"xmlrpc-client-3.*.jar", "wsdl4j-1.6.2.jar", "bcprov-jdk16-1.45.jar", "jsch-0.1.42.jar", "jasypt-1.9.0.jar", "commons-configuration-1.8.jar", "mail-1.4.jar", "activation-1.1.jar", "xapi-5.6.100-1-SNAPSHOT.jar"], excl = excludes), cwd=start_path)

#def build_console_proxy ():
# binary unsubstitutable files: