diff --git a/HACKING b/HACKING deleted file mode 100644 index b6a16c3ef5e..00000000000 --- a/HACKING +++ /dev/null @@ -1,652 +0,0 @@ ---------------------------------------------------------------------- -THE QUICK GUIDE TO CLOUDSTACK DEVELOPMENT ---------------------------------------------------------------------- - - -=== Overview of the development lifecycle === - -To hack on a CloudStack component, you will generally: - -1. Configure the source code: - ./waf configure --prefix=/home/youruser/cloudstack - (see below, "./waf configure") - -2. Build and install the CloudStack - ./waf install - (see below, "./waf install") - -3. Set the CloudStack component up - (see below, "Running the CloudStack components from source") - -4. Run the CloudStack component - (see below, "Running the CloudStack components from source") - -5. Modify the source code - -6. Build and install the CloudStack again - ./waf install --preserve-config - (see below, "./waf install") - -7. GOTO 4 - - -=== What is this waf thing in my development lifecycle? === - -waf is a self-contained, advanced build system written by Thomas Nagy, -in the spirit of SCons or the GNU autotools suite. - -* To run waf on Linux / Mac: ./waf [...commands...] -* To run waf on Windows: waf.bat [...commands...] - -./waf --help should be your first discovery point to find out both the -configure-time options and the different processes that you can run -using waf. - - -=== What do the different waf commands above do? === - -1. ./waf configure --prefix=/some/path - - You run this command *once*, in preparation to building, or every - time you need to change a configure-time variable. - - This runs configure() in wscript, which takes care of setting the - variables and options that waf will use for compilation and - installation, including the installation directory (PREFIX). - - For convenience reasons, if you forget to run configure, waf - will proceed with some default configuration options. 
By - default, PREFIX is /usr/local, but you can set it e.g. to - /home/youruser/cloudstack if you plan to do a non-root - install. Beware that you can later install the stack as a - regular user, but most components need to *run* as root. - - ./waf showconfig displays the values of the configure-time options - -2. ./waf - - You run this command to trigger compilation of the modified files. - - This runs the contents of wscript_build, which takes care of - discovering and describing what needs to be built, which - build products / sources need to be installed, and where. - -3. ./waf install - - You run this command when you want to install the CloudStack. - - If you are going to install for production, you should run this - process as root. If, conversely, you only want to install the - stack as your own user and in a directory to which you have write - permission, it's fine to run waf install as your own user. - - This runs the contents of wscript_build, with an option variable - Options.is_install = True. When this variable is set, waf will - install the files described in wscript_build. For convenience - reasons, when you run install, any files that need to be recompiled - will also be recompiled prior to installation. - - -------------------- - - WARNING: each time you do ./waf install, the configuration files - in the installation directory are *overwritten*. - - There are, however, two ways to get around this: - - a) ./waf install has an option --preserve-config. If you pass - this option when installing, configuration files are never - overwritten. - - This option is useful when you have modified source files and - you need to deploy them on a system that already has the - CloudStack installed and configured, but you do *not* want to - overwrite the existing configuration of the CloudStack. 
- - If, however, you have reconfigured and rebuilt the source - since the last time you did ./waf install, then you are - advised to replace the configuration files and set the - components up again, because some configuration files - in the source use identifiers that may have changed during - the last ./waf configure. So, if this is your case, check - out the next way: - - b) Every configuration file can be overridden in the source - without touching the original. - - - Look for said config file X (or X.in) in the source, then - - create an override/ folder in the folder that contains X, then - - place a file named X (or X.in) inside override/, then - - put the desired contents inside X (or X.in) - - Now, every time you run ./waf install, the file that will be - installed is path/to/override/X.in, instead of /path/to/X.in. - - This option is useful if you are developing the CloudStack - and constantly reinstalling it. It guarantees that every - time you install the CloudStack, the installation will have - the correct configuration and will be ready to run. - - -=== Running the CloudStack components from source (for debugging / coding) === - -It is not technically possible to run the CloudStack components from -the source. That, however, is fine -- each component can be run -independently from the install directory: - -- Management Server - - 1) Execute ./waf install as your current user (or as root if the - installation path is only writable by root). - - WARNING: if any CloudStack configuration files have been - already configured / altered, they will be *overwritten* by this - process. Append --preserve-config to ./waf install to prevent this - from happening. Or resort to the override method discussed - above (search for "override" in this document). 
- - 2) If you haven't done so yet, set up the management server database: - - - either run ./waf deploydb_kvm, or - - run $BINDIR/cloud-setup-databases - - 3) Execute ./waf run as your current user (or as root if the - installation path is only writable by root). Alternatively, - you can use ./waf debug and this will run with debugging enabled. - - -- Agent (Linux-only): - - 1) Execute ./waf install as your current user (or as root if the - installation path is only writable by root). - - WARNING: if any CloudStack configuration files have been - already configured / altered, they will be *overwritten* by this - process. Append --preserve-config to ./waf install to prevent this - from happening. Or resort to the override method discussed - above (search for "override" in this document). - - 2) If you haven't done so yet, set the Agent up: - - - run $BINDIR/cloud-setup-agent - - 3) Execute ./waf run_agent as root - - this will launch sudo and require your root password unless you have - set sudo up not to ask for it - - -- Console Proxy (Linux-only): - - 1) Execute ./waf install as your current user (or as root if the - installation path is only writable by root). - - WARNING: if any CloudStack configuration files have been - already configured / altered, they will be *overwritten* by this - process. Append --preserve-config to ./waf install to prevent this - from happening. Or resort to the override method discussed - above (search for "override" in this document). 
- - 2) If you haven't done so yet, set the Console Proxy up: - - - run $BINDIR/cloud-setup-console-proxy - - 3) Execute ./waf run_console_proxy - - this will launch sudo and require your root password unless you have - set sudo up not to ask for it - - ---------------------------------------------------------------------- -BUILD SYSTEM TIPS ---------------------------------------------------------------------- - - -=== Integrating compilation and execution of each component into Eclipse === - -To run the Management Server from Eclipse, set up an External Tool of the -Program variety. Put the path to the waf binary in the Location of the -window, and the source directory as Working Directory. Then specify -"install --preserve-config run" as arguments (without the quotes). You can -now use the Run button in Eclipse to execute the Management Server directly -from Eclipse. You can replace run with debug if you want to run the -Management Server with the Debugging Proxy turned on. - -To run the Agent or Console Proxy from Eclipse, set up an External Tool of -the Program variety just like in the Management Server case. In there, -however, specify "install --preserve-config run_agent" or -"install --preserve-config run_console_proxy" as arguments instead. -Remember that you need to set sudo up to not ask you for a password and not -require a TTY, otherwise sudo -- implicitly called by waf run_agent or -waf run_console_proxy -- will refuse to work. - - -=== Building targets selectively === - -You can find out the targets of the build system: - -./waf list_targets - -If you want to run a specific task generator, - -./waf build --targets=patchsubst - -should run just that one (and whatever targets are required to build that -one, of course). 
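As a sketch, the selective-build workflow above looks like this in practice (patchsubst is the example target named above; the guard only makes the snippet safe to run outside a configured CloudStack source tree):

```shell
# Sketch: rebuild a single waf target ("patchsubst" is the example above).
# Assumes a configured CloudStack source tree containing the waf wrapper.
if [ -x ./waf ]; then
    ./waf list_targets                 # enumerate available target names
    ./waf build --targets=patchsubst   # build just that target (and its deps)
else
    echo "run this from a configured CloudStack source tree"
fi
```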
- - -=== Common targets === - -* ./waf configure: you must always run configure once, and provide it with - the target installation paths for when you run install later - o --help: will show you all the configure options - o --no-dep-check: will skip dependency checks for java packages - needed to compile (saves 20 seconds when redoing the configure) - o --with-db-user, --with-db-pw, --with-db-host: inform the build - system of the MySQL configuration needed to set up the management - server upon install, and to do deploydb - -* ./waf build: will compile any source files (and, on some projects, will - also perform any variable substitutions on any .in files such as the - MANIFEST files). Build outputs will be in /artifacts/default. - -* ./waf install: will compile if not compiled yet, then execute an install - of the built targets. I had to write a significant amount of code - (that is, a few dozen lines) to make install work. - -* ./waf run: will run the management server in the foreground - -* ./waf debug: will run the management server in the foreground, and open - port 8787 to connect with the debugger (see the Run / debug options of - waf --help to change that port) - -* ./waf deploydb: deploys the database using the MySQL configuration supplied - with the configuration options when you did ./waf configure. RUN WAF BUILD - FIRST AT LEAST ONCE. - -* ./waf dist: create a source tarball. These tarballs will be distributed - independently on our Web site, and will form the source release of the - CloudStack. It is a self-contained release that can be ./waf built and - ./waf installed everywhere. 
- -* ./waf clean: remove known build products - -* ./waf distclean: remove the artifacts/ directory altogether - -* ./waf uninstall: uninstall all installed files - -* ./waf rpm: build RPM packages - o if the build fails because the system lacks dependencies from our - other modules, waf will attempt to install RPMs from the repos, - then try the build - o it will place the built packages in artifacts/rpmbuild/ - -* ./waf deb: build Debian packages - o if the build fails because the system lacks dependencies from our - other modules, waf will attempt to install DEBs from the repos, - then try the build - o it will place the built packages in artifacts/debbuild/ - -* ./waf uninstallrpms: removes all Cloud.com RPMs from a system (but not - logfiles or modified config files) - -* ./waf viewrpmdeps: displays RPM dependencies declared in the RPM specfile - -* ./waf installrpmdeps: runs Yum to install the packages required to build - the CloudStack - -* ./waf uninstalldebs: removes all Cloud.com DEBs from a system (AND logfiles - AND modified config files) -* ./waf viewdebdeps: displays DEB dependencies declared in the project - debian/control file - -* ./waf installdebdeps: runs aptitude to install the packages required to - build our software - - -=== Overriding certain source files === - -Earlier in this document we explored overriding configuration files. -Overrides are not limited to configuration files. - -If you want to provide your own server-setup.xml or SQL files in client/setup: - - * create a directory override inside the client/setup folder - * place your file that should override a file in client/setup there - -There's also override support in client/tomcatconf and agent/conf. - - -=== Environment substitutions === - -Any file named "something.in" has its tokens (@SOMETOKEN@) automatically -substituted for the corresponding build environment variable. 
The build -environment variables are generally constructed at configure time, are -controllable via command-line parameters to waf configure, and should -be available as a list of variables inside the file -artifacts/c4che/build.default.py. - - -=== The prerelease mechanism === - -The prerelease mechanism (--prerelease=BRANCHNAME) allows developers and -builders to build packages with pre-release Release tags. The Release tags -are constructed in such a way that both the build number and the branch name -are included, so developers can push these packages to repositories and upgrade -them using yum or aptitude without having to manually delete and reinstall -packages every time a new build is done. Any package built -with the prerelease mechanism gets a standard X.Y.Z version number -- and, -due to the way that the prerelease Release tags are constructed, always upgrades -any older prerelease package already present on any system. The prerelease -mechanism must never be used to create packages that are intended to be -released as stable software to the general public. - -Relevant documentation: - - http://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version - http://fedoraproject.org/wiki/PackageNamingGuidelines#Pre-Release_packages - -Everything comes together on the build server in the following way: - - -=== SCCS info === - -When building a source distribution (waf dist), or RPM/DEB distributions -(waf deb / waf rpm), waf will automatically detect the relevant source code -control information if the git command is present on the machine where waf -is run, and it will write the information to a file called sccs-info inside -the source tarball / install it into /usr/share/doc/cloud*/sccs-info when -installing the packages. 
- -If this source code control information cannot be determined, the old -sccs-info file is preserved across dist runs if it exists; if it did -not exist before, the fact that the source could not be traced back -to a repository is noted in the file. - - -=== Debugging the build system === - -Almost all targets have names. waf build -vvvvv --zones=task will give you -the task names that you can use in --targets. - - ---------------------------------------------------------------------- -UNDERSTANDING THE BUILD SYSTEM ---------------------------------------------------------------------- - - -=== Documentation for the build system === - -The first and foremost reference material: - -- http://freehackers.org/~tnagy/wafbook/index.html - -Examples - -- http://code.google.com/p/waf/wiki/CodeSnippets -- http://code.google.com/p/waf/w/list - -FAQ - -- http://code.google.com/p/waf/wiki/FAQ - - -=== Why waf === - -The CloudStack uses waf to build itself. waf is a relative newcomer -to the build system world; it borrows concepts from SCons and -other later-generation build systems: - -- waf is very flexible and rich; unlike other build systems, it covers - the entire life cycle, from compilation to installation to - uninstallation. It also supports dist (create source tarball), - distcheck (check that the source tarball compiles and installs), - autoconf-like checks for dependencies at compilation time, - and more. - -- waf is self-contained. A single file, distributed with the project, - enables everything to be built, with only a dependency on Python, - which is freely available and shipped with virtually all Linux - distributions. - -- waf also supports building projects written in multiple languages - (in the case of the CloudStack, we build from C, Java and Python). - -- since waf is written in Python, the entire library of the Python - language is available to use in the build process. - - -=== Hacking on the build system: what are these wscript files? === - -1.
wscript: contains most of the commands you can run from within waf -2. wscript_configure: contains the process that discovers the software - on the system and configures the build to fit that -3. wscript_build: contains a manifest of *what* is built and installed - -Refer to the waf book for general information on waf: - http://freehackers.org/~tnagy/wafbook/index.html - - -=== What happens when waf runs === - -When you run waf, this happens behind the scenes: - -- When you run waf for the first time, it unpacks itself to a hidden - directory .waf-1.X.Y.MD5SUM, including the main program and all - the Python libraries it provides and needs. - -- Immediately after unpacking itself, waf reads the wscript file - at the root of the source directory. After parsing this file and - loading the functions defined here, it reads wscript_build and - generates a function build() based on it. - -- After loading the build scripts as explained above, waf calls - the functions you specified in the command line. - -So, for example, ./waf configure build install will: - -* call configure() from wscript, -* call build() loaded from the contents of wscript_build, -* call build() once more but with Options.is_install = True. - -As part of build(), waf invokes ant to build the Java portion of our -stack. - - -=== How and why we use ant within waf === - -By now, you have probably noticed that we do, indeed, ship ant -build files in the CloudStack. During the build process, waf calls -ant directly to build the Java portions of our stack, and it uses -the resulting JAR files to perform the installation. - -The reason we do this rather than use the native waf capabilities -for building Java projects is simple: by using ant, we can leverage -the built-in support for ant in Eclipse and many other IDEs. Another -reason is that Java developers are familiar with ant, -so adding a new JAR file or modifying what gets built into the -existing JAR files is easier for Java developers. 
- -If you add a new ant target that uses the -compile-java macro to the ant build files, waf will automatically pick it up, along with its -depends= and JAR name attributes. In general, all you need to do is -add the produced JAR name to the packaging manifests (cloud.spec and -debian/{name-of-package}.install). - - ---------------------------------------------------------------------- -FOR ANT USERS ---------------------------------------------------------------------- - - -If you are using Ant directly instead of waf, these instructions apply to you: - -In this document, the example instructions are based on a local source repository rooted at c:\root. You are free to locate it anywhere you like. -3.1 Setup developer build type - - 1) Go to the c:\cloud\java\build directory - - 2) Copy the file build-cloud.properties.template to build-cloud.properties, then modify the parameters to match your local setup. The template properties file has content like this: - - debug=true - debuglevel=lines,vars,source - tomcat.home=$TOMCAT_HOME --> change to your local Tomcat root directory such as c:/apache-tomcat-6.0.18 - debug.jvmarg=-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n - deprecation=off - build.type=developer - target.compat.version=1.5 - source.compat.version=1.5 - branding.name=default - - 3) Make sure the following environment variables and Path entries are set: - -Set these environment variables: -CATALINA_HOME: -JAVA_HOME: -CLOUD_HOME: -MYSQL_HOME: - -Update the Path to include - -MYSQL_HOME\bin - - 4) Clone a full directory tree of C:\cloud\java\build\deploy\production to C:\cloud\java\build\deploy\developer - - You can use Windows Explorer to copy the directory tree over. Please note: during your daily development, whenever you see updates in C:\cloud\java\build\deploy\production, be sure to sync them into C:\cloud\java\build\deploy\developer. 
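On Linux, the environment setup from step 3 above can be sketched as follows. All paths here are illustrative examples, not required locations; substitute your own Tomcat, JDK, CloudStack and MySQL install directories:

```shell
# Illustrative Linux-shell equivalent of the Windows environment setup above.
# Every path is an example placeholder -- adjust to your local install.
export CATALINA_HOME=/opt/apache-tomcat-6.0.18
export JAVA_HOME=/usr/lib/jvm/java
export CLOUD_HOME=/usr/local/cloud
export MYSQL_HOME=/usr/local/mysql

# The text asks for MYSQL_HOME\bin on the Path; on Linux that means
# prepending $MYSQL_HOME/bin to PATH so the MySQL client tools are found.
export PATH="$MYSQL_HOME/bin:$PATH"
```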
3.2 Common build instructions - -After you have set up the build type, you are ready to build and run the Management Server locally. - -cd java -python waf configure build install - -More details are in the build system sections above. - -This will install the management server and its prerequisites to the appropriate place (your Tomcat instance on Windows, /usr/local on Linux). It will also install the agent to /usr/local/cloud/agent (this will change in the future). -4. Database and Server deployment - -After a successful management server build (the database deployment scripts use some of the artifacts from the build process), you can use the database deployment script to deploy and initialize the database. You can find the deployment scripts in C:/cloud/java/build/deploy/db. deploy-db.sh is used to create and populate your DB instance. Please take a look at the content of deploy-db.sh for more details. - -Before you run the scripts, you should edit C:/cloud/java/build/deploy/developer/db/server-setup-dev.xml to allocate Public and Private IP ranges for your development setup. Ensure that the ranges you pick are not allocated to others. - -Customized VM templates to be populated are in C:/cloud/java/build/deploy/developer/db/templates-dev.sql. Edit this file to customize the templates to your needs. - -Deploy the DB by running - -./deploy-db.sh ../developer/db/server-setup-dev.xml ../developer/db/templates-dev.xml -4.1. Management Server Deployment - -ant build-server - -Build the Management Server - -ant deploy-server - -Deploy the Management Server software to the Tomcat environment - -ant debug - -Start the Management Server in debug mode. The JVM debug options can be found in build-cloud.properties - -ant run - -Start the Management Server in normal mode. - -5.
Agent deployment - -After a successful build, you should be able to find build artifacts in the distribution directory; in this example, for the developer build type, the artifacts are located at c:\cloud\java\dist\developer. In particular, if you have run the - -ant package-agent - -build command, you should see the agent software packaged in a single file named agent.zip under c:\cloud\java\dist\developer, together with the agent deployment script deploy-agent.sh. -5.1 Agent Type - -Agent software can be deployed and configured to serve different roles at run time. In the current implementation, there are 3 types of agent configuration, called Computing Server, Routing Server and Storage Server. - - * When the agent software is configured to run as a Computing Server, it is responsible for hosting user VMs. The agent software should run in the Xen Dom0 system on the computing server machine. - - * When the agent software is configured to run as a Routing Server, it is responsible for hosting routing VMs for user virtual networks and console proxy system VMs. The routing server serves as the bridge to the outside network, so the machine the agent software runs on should have at least two network interfaces: one towards the outside network, and one participating in the internal VMOps management network. Like the computing server, the agent software on the routing server should also run in the Xen Dom0 system. - - * When the agent software is configured to run as a Storage Server, it is responsible for providing storage service for all VMs. The storage service is based on ZFS running on a Solaris system, so the agent software on the storage server runs under Solaris (actually a Solaris VM). Dom0 systems on the computing and routing servers can access the storage service through an iSCSI initiator. The storage volume will eventually be mounted on the Dom0 system and made available to DomU VMs through our agent software. 
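As a sketch, the role is selected through the agent's configuration file. The key names below are taken from the sample agent.properties shown later in this document; the values are illustrative:

```properties
# Role selection in agent.properties (illustrative values; see the full
# sample later in this document)
type=computing        # one of: computing, routing, storage
host=192.168.1.138    # management server address
port=8250
```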
- -5.2 Resource sharing - -All developers can share the same set of agent server machines for development. To make this possible, the concept of an instance appears in various places: - - * VM names. VM names are structural names; they contain an instance section that can distinguish VMs from different VMOps cloud instances. The VMOps cloud instance name is configured in the server configuration parameter AgentManager/instance.name - * iSCSI initiator mount point. For Computing Servers and Routing Servers, the mount point can distinguish the mounted DomU VM images from different agent deployments. The mount location can be specified in the agent.properties file with a name-value pair named mount.parent - * iSCSI target allocation point. For Storage Servers, this allocation point can distinguish the storage allocation from different storage agent deployments. The allocation point can be specified in the agent.properties file with a name-value pair named parent - -5.4 Deploy agent software - -Before running the deployment scripts, first copy the build artifacts agent.zip and deploy-agent.sh to your personal development directory on the agent server machines. By our current convention, your personal development directory is usually located at /root/your name. In the following example, the agent package and deployment script are copied to test0.lab.vmops.com and the deployment script file has been marked as executable. 
- - On the build machine, - - scp agent.zip root@test0:/root/your name - - scp deploy-agent.sh root@test0:/root/your name - - On the agent server machine - -chmod +x deploy-agent.sh -5.4.1 Deploy agent on computing server - -deploy-agent.sh -d /root//agent -h -t computing -m expert -5.4.2 Deploy agent on routing server - -deploy-agent.sh -d /root//agent -h -t routing -m expert -5.4.3 Deploy agent on storage server - -deploy-agent.sh -d /root//agent -h -t storage -m expert -5.5 Configure agent - -After you have deployed the agent software, you should configure the agent by editing the agent.properties file under the /root//agent/conf directory on each of the Routing, Computing and Storage servers. Add or edit the following properties. The rest are defaults that get populated by the agent at runtime. - workers=3 - host= - port=8250 - pod= - zone= - instance= - developer=true - -The following is a sample agent.properties file for a Routing Server - - workers=3 - id=1 - port=8250 - pod=RC - storage=comstar - zone=RC - type=routing - private.network.nic=xenbr0 - instance=RC - public.network.nic=xenbr1 - developer=true - host=192.168.1.138 -5.6 Running agent - -Edit /root//agent/conf/log4j-cloud.xml to update the location of the logs to somewhere under /root/ - -Once you have deployed and configured the agent software, you are ready to launch it. Under the agent root directory (in our example, /root//agent), there is a script file named run.sh; you can use it to launch the agent. - -Launch the agent as a detached background process - -nohup ./run.sh & - -Launch the agent in interactive mode - -./run.sh - -Launch the agent in debug mode; for example, the following command makes the JVM listen on TCP port 8787 - -./run.sh -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n - -If the agent is launched in debug mode, you may use the Eclipse IDE to debug it remotely. Please note: when you are sharing an agent server machine with others, choose a TCP port that is not in use by someone else. 
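Since debug ports are shared per machine, one way to check that your chosen port is free before launching run.sh is a sketch like this (assumes netstat is available; 8787 is the example port used throughout this document):

```shell
# Sketch: check whether the chosen JDWP debug port is already taken
# before launching run.sh in debug mode. 8787 is the example port above.
PORT=8787
if netstat -lnt 2>/dev/null | grep -q ":$PORT "; then
    echo "port $PORT is already in use -- pick another"
else
    echo "port $PORT looks free"
fi
```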
- -Please also note that run.sh also searches the /etc/cloud directory for agent.properties; make sure it uses the correct agent.properties file! -5.7 Stopping the Agents - -The PID of the agent process is in /var/run/agent..pid - -To stop the agent: - -kill - - \ No newline at end of file diff --git a/INSTALL b/INSTALL deleted file mode 100644 index bcf10e20b23..00000000000 --- a/INSTALL +++ /dev/null @@ -1,155 +0,0 @@ ---------------------------------------------------------------------- -TABLE OF CONTENTS ---------------------------------------------------------------------- - - -1. Really quick start: building and installing a production stack -2. Post-install: setting the CloudStack components up -3. Installation paths: where the stack is installed on your system -4. Uninstalling the CloudStack from your system - - ---------------------------------------------------------------------- -REALLY QUICK START: BUILDING AND INSTALLING A PRODUCTION STACK ---------------------------------------------------------------------- - - -You have two options. Choose one: - -a) Building distribution packages from the source and installing them -b) Building from the source and installing directly from there - - -=== I want to build and install distribution packages === - -This is the recommended way to run your CloudStack cloud. The -advantages are that dependencies are taken care of automatically -for you, and you can verify the integrity of the installed files -using your system's package manager. - -1. As root, install the build dependencies. - - a) Fedora / CentOS: ./waf installrpmdeps - - b) Ubuntu: ./waf installdebdeps - -2. As a non-root user, build the CloudStack packages. - - a) Fedora / CentOS: ./waf rpm - - b) Ubuntu: ./waf deb - -3. As root, install the CloudStack packages. - You can choose which components to install on your system. 
- - a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild - install as root: rpm -ivh artifacts/rpmbuild/RPMS/{x86_64,noarch,i386}/*.rpm - - b) Ubuntu: the installable DEBs are in artifacts/debbuild - install as root: dpkg -i artifacts/debbuild/*.deb - -4. Configure and start the components you intend to run. - Consult the Installation Guide to find out how to - configure each component, and "Installation paths" for information - on where programs, initscripts and config files are installed. - - -=== I want to build and install directly from the source === - -This is the recommended way to run your CloudStack cloud if you -intend to modify the source, if you intend to port the CloudStack to -another distribution, or if you intend to run the CloudStack on a -distribution for which packages are not built. - -1. As root, install the build dependencies. - See below for a list. - -2. As non-root, configure the build. - See below to discover configuration options. - - ./waf configure - -3. As non-root, build the CloudStack. - To learn more, see "Quick guide to developing, building and - installing from source" below. - - ./waf build - -4. As root, install the runtime dependencies. - See below for a list. - -5. As root, install the CloudStack - - ./waf install - -6. Configure and start the components you intend to run. - Consult the Installation Guide to find out how to - configure each component, and "Installation paths" for information - on where to find programs, initscripts and config files mentioned - in the Installation Guide (paths may vary). - - -=== Dependencies of the CloudStack === - -- Build dependencies: - - 1. FIXME DEPENDENCIES LIST THEM HERE - -- Runtime dependencies: - - 2.
FIXME DEPENDENCIES LIST THEM HERE - - ---------------------------------------------------------------------- -INSTALLATION PATHS: WHERE THE STACK IS INSTALLED ON YOUR SYSTEM ---------------------------------------------------------------------- - - -The CloudStack build system installs files on a variety of paths, each -one of which is selectable when building from source. - -- $PREFIX: - the default prefix where the entire stack is installed - defaults to /usr/local on source builds - defaults to /usr on package builds - -- $SYSCONFDIR/cloud: - - the prefix for CloudStack configuration files - defaults to $PREFIX/etc/cloud on source builds - defaults to /etc/cloud on package builds - -- $SYSCONFDIR/init.d: - the prefix for CloudStack initscripts - defaults to $PREFIX/etc/init.d on source builds - defaults to /etc/init.d on package builds - -- $BINDIR: - the CloudStack installs programs there - defaults to $PREFIX/bin on source builds - defaults to /usr/bin on package builds - -- $LIBEXECDIR: - the CloudStack installs service runners there - defaults to $PREFIX/libexec on source builds - defaults to /usr/libexec on package builds (/usr/bin on Ubuntu) - - ---------------------------------------------------------------------- -UNINSTALLING THE CLOUDSTACK FROM YOUR SYSTEM ---------------------------------------------------------------------- - - -- If you installed the CloudStack using packages, use your operating - system package manager to remove the CloudStack packages. - - a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild - as root: rpm -qa | grep ^cloud- | xargs rpm -e - - b) Ubuntu: the installable DEBs are in artifacts/debbuild - aptitude purge '~ncloud' - -- If you installed from a source tree: - - ./waf uninstall - diff --git a/README b/README deleted file mode 100644 index b0478ff475f..00000000000 --- a/README +++ /dev/null @@ -1,52 +0,0 @@ -Hello, and thanks for downloading the Cloud.com CloudStack™! 
The -Cloud.com CloudStack™ is Open Source Software that allows -organizations to build Infrastructure as a Service (Iaas) clouds. -Working with server, storage, and networking equipment of your -choice, the CloudStack provides a turn-key software stack that -dramatically simplifies the process of deploying and managing a -cloud. - - ---------------------------------------------------------------------- -HOW TO INSTALL THE CLOUDSTACK ---------------------------------------------------------------------- - - -Please refer to the document INSTALL distributed with the source. - - ---------------------------------------------------------------------- -HOW TO HACK ON THE CLOUDSTACK ---------------------------------------------------------------------- - - -Please refer to the document HACKING distributed with the source. - - ---------------------------------------------------------------------- -BE PART OF THE CLOUD.COM COMMUNITY! ---------------------------------------------------------------------- - - -We are more than happy to have you ask us questions, hack our source -code, and receive your contributions. - -* Our forums are available at http://cloud.com/community . -* If you would like to modify / extend / hack on the CloudStack source, - refer to the file HACKING for more information. -* If you find bugs, please log on to http://bugs.cloud.com/ and file - a report. -* If you have patches to send us get in touch with us at info@cloud.com - or file them as attachments in our bug tracker above. - - ---------------------------------------------------------------------- -Cloud.com's contact information is: - -20400 Stevens Creek Blvd -Suite 390 -Cupertino, CA 95014 -Tel: +1 (888) 384-0962 - -This software is OSI certified Open Source Software. OSI Certified is a -certification mark of the Open Source Initiative. 
diff --git a/README.html b/README.html index 2ece7a070e7..8212176103e 100644 --- a/README.html +++ b/README.html @@ -512,6 +512,13 @@ Also see [[AdvancedOptions]]
+
+
|''Type:''|file|
+|''URL:''|http://tiddlyvault.tiddlyspot.com/#%5B%5BDisableWikiLinksPlugin%20(TiddlyTools)%5D%5D|
+|''Workspace:''|(default)|
+
+This tiddler was automatically created to record the details of this server
+
---------------------------------------------------------------------
 FOR ANT USERS
@@ -702,21 +709,18 @@ Once this command is done, the packages will be built in the directory {{{artifa
 # As a non-root user, run the command {{{./waf deb}}} in the source directory.
 Once this command is done, the packages will be built in the directory {{{artifacts/debbuild}}}.
-
-
!Obtain the source for the CloudStack
+
+
You need to do the following steps on each machine that will run a CloudStack component.
+!Obtain the source for the CloudStack
 If you aren't reading this from a local copy of the source code, see [[Obtaining the source]].
-!Prepare your development environment
-See [[Preparing your development environment]].
-!Configure the build on the builder machine
+!Prepare your environment
+See [[Preparing your environment]].
+!Configure the build
 As non-root, run the command {{{./waf configure}}}.  See [[waf configure]] to discover configuration options for that command.
-!Build the CloudStack on the builder machine
+!Build the CloudStack
 As non-root, run the command {{{./waf build}}}.  See [[waf build]] for an explanation.
-!Install the CloudStack on the target systems
-On each machine where you intend to run a CloudStack component:
-# upload the entire source code tree after compilation, //ensuring that the source ends up in the same path as the machine in which you compiled it//,
-## {{{rsync}}} is [[usually very handy|Using rsync to quickly transport the source tree to another machine]] for this
-# in that newly uploaded directory of the target machine, run the command {{{./waf install}}} //as root//.
-Consult [[waf install]] for information on installation.
+!Install the CloudStack
+Run the command {{{./waf install}}} //as root//. Consult [[waf install]] for information on installation.
!Changing the [[configuration|waf configure]] process
@@ -737,11 +741,91 @@ See the files in the {{{debian/}}} folder.
The Cloud.com CloudStack is an open source software product that enables the deployment, management, and configuration of multi-tier and multi-tenant infrastructure cloud services by enterprises and service providers.
-
-
Not done yet!
+
+
Prior to building the CloudStack, you need to install the following software packages in your system.
+# Sun Java 1.6
+## You must install the Java Development Kit with {{{javac}}}, not just the Java Runtime Environment
+## The commands {{{java}}} and {{{javac}}} must be found in your {{{PATH}}}
+# Apache Tomcat
+## If you are using the official Apache binary distribution, set the environment variable {{{TOMCAT_HOME}}} to point to the Apache Tomcat directory
+# MySQL
+## At the very minimum, you need to have the client and libraries installed
+## If your development machine is also going to be the database server, you need to have the server installed and running as well
+# Python 2.6
+## Ensure that the {{{python}}} command is in your {{{PATH}}}
+## Do ''not'' install Cygwin Python!
+# The MySQLdb module for Python 2.6
+## If you use Windows, you can find a [[pre-built package here|http://soemin.googlecode.com/files/MySQL-python-1.2.3c1.win32-py2.6.exe]]
+# The Bourne-again shell (also known as bash)
+# GNU coreutils
+''Note for Windows users'': Some of the packages in the above list are only available on Windows through Cygwin.  If that is your case, install them using Cygwin and remember to include the Cygwin {{{bin/}}} directory in your PATH.  Under no circumstances install Cygwin Python!  Use the Python for Windows official installer instead.
+!Additional dependencies for Linux development environments
+# GCC (only needed on Linux)
+# glibc-devel / glibc-dev
+# The Java packages (usually available in your distribution):
+## commons-collections
+## commons-dbcp
+## commons-logging
+## commons-logging-api
+## commons-pool
+## commons-httpclient
+## ws-commons-util
+# useradd
+# userdel
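A quick way to sanity-check the {{{PATH}}} requirements in the list above is a small script along these lines (an illustrative sketch, not part of the CloudStack tree; the helper name is made up):

```python
import os

def missing_tools(tools, path=None):
    """Return the subset of `tools` not found as executables on the PATH."""
    if path is None:
        path = os.environ.get("PATH", "")
    dirs = [d for d in path.split(os.pathsep) if d]

    def found(tool):
        # A tool counts as found if some PATH directory holds an
        # executable regular file with that name.
        return any(
            os.path.isfile(os.path.join(d, tool))
            and os.access(os.path.join(d, tool), os.X_OK)
            for d in dirs
        )

    return [t for t in tools if not found(t)]

if __name__ == "__main__":
    # Tool list taken from the dependency lists above.
    absent = missing_tools(["java", "javac", "python", "bash"])
    if absent:
        print("Missing build dependencies: " + ", ".join(absent))
    else:
        print("All required commands found on the PATH.")
```

Running this before {{{./waf configure}}} catches the most common setup mistake (a JRE without {{{javac}}}, or a {{{python}}} not on the PATH) early.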
-
-
Not done yet!
+
+
The following software / programs must be correctly installed on the machines where you will run a CloudStack component.  This list is by no means complete yet, but it will be soon.
+
+''Note for Windows users'':  Some of the packages in the lists below are only available on Windows through Cygwin.  If that is your case, install them using Cygwin and remember to include the Cygwin {{{bin/}}} directory in your PATH.  Under no circumstances install Cygwin Python!  Use the Python for Windows official installer instead.
+!Run-time dependencies common to all components of the CloudStack
+# bash
+# coreutils
+# Sun Java 1.6
+## You must install the Java Development Kit with {{{javac}}}, not just the Java Runtime Environment
+## The commands {{{java}}} and {{{javac}}} must be found in your {{{PATH}}}
+# Python 2.6
+## Ensure that the {{{python}}} command is in your {{{PATH}}}
+## Do ''not'' install Cygwin Python!
+# The Java packages (usually available in your distribution):
+## commons-collections
+## commons-dbcp
+## commons-logging
+## commons-logging-api
+## commons-pool
+## commons-httpclient
+## ws-commons-util
+!Management Server-specific dependencies
+# Apache Tomcat
+## If you are using the official Apache binary distribution, set the environment variable {{{TOMCAT_HOME}}} to point to the Apache Tomcat directory
+# MySQL
+## At the very minimum, you need to have the client and libraries installed
+## If you will be running the Management Server in the same machine that will run the database server, you need to have the server installed and running as well
+# The MySQLdb module for Python 2.6
+## If you use Windows, you can find a [[pre-built package here|http://soemin.googlecode.com/files/MySQL-python-1.2.3c1.win32-py2.6.exe]] 
+# openssh-clients (provides the ssh-keygen command)
+# mkisofs (provides the genisoimage command)
+
+
+
To support incremental migration from one version to the next without redeploying the database, the CloudStack provides an incremental schema migration mechanism.
+!!!How does it work?
+When the database is deployed for the first time with [[waf deploydb]] or the command {{{cloud-setup-databases}}}, a row named {{{schema.level}}}, containing the current schema level, is written to the {{{configuration}}} table.  This schema level row comes from the file {{{setup/db/schema-level.sql}}} in the source (refer to the [[Installation paths]] topic to find out where this file is installed on a running system).
+
+This value is used by the database migrator {{{cloud-migrate-databases}}} (source {{{setup/bindir/cloud-migrate-databases.in}}}) to determine the starting schema level.  The database migrator has a series of classes -- each class represents a step in the migration process and is usually tied to the execution of a SQL file stored in {{{setup/db}}}.  To migrate the database, the database migrator:
+# walks the list of steps it knows about,
+# generates a list of steps sorted by the order they should be executed in,
+# executes each step in order
+# at the end of each step, records the new schema level to the database table {{{configuration}}}
+For more information, refer to the database migrator source -- it is documented.
+!!!What impact does this have on me as a developer?
+Whenever you need to evolve the schema of the database:
+# write a migration SQL script and store it in {{{setup/db}}},
+# include your schema changes in the appropriate SQL file {{{create-*.sql}}} too (as the database is expected to be at its latest evolved schema level right after deploying a fresh database)
+# write a class in {{{setup/bindir/cloud-migrate-databases.in}}}, describing the migration step; in detail:
+## the schema level your migration step expects the database to be in,
+## the schema level your migration step will leave your database in (presumably the latest schema level, which you will have to choose!),
+## and the name / description of the step
+# bump the schema level in {{{setup/db/schema-level.sql}}} to the latest schema level
+Otherwise, ''end-user migration will fail catastrophically''.
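The step-chaining behavior described above can be sketched as follows (class and function names are hypothetical, for illustration only; the real migrator lives in {{{setup/bindir/cloud-migrate-databases.in}}}):

```python
# Sketch of the migration-step ordering described above.  Names are
# illustrative; the actual classes are defined in the database migrator.

class MigrationStep(object):
    def __init__(self, from_level, to_level, description):
        self.from_level = from_level    # schema level the step expects
        self.to_level = to_level        # schema level the step leaves behind
        self.description = description  # name / description of the step

def plan_migration(steps, current_level, target_level):
    """Order the known steps into a chain from current_level to target_level."""
    by_from = dict((s.from_level, s) for s in steps)
    plan = []
    level = current_level
    while level != target_level:
        step = by_from.get(level)
        if step is None:
            raise ValueError("no migration step starts at schema level %r" % level)
        plan.append(step)
        # In the real migrator, the new level is recorded in the
        # configuration table after each executed step.
        level = step.to_level
    return plan
```

Because each step records its {{{to_level}}} as it completes, an interrupted migration can resume from the last recorded schema level; forgetting to bump {{{setup/db/schema-level.sql}}} breaks exactly this chain.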
[[Welcome]]
@@ -749,13 +833,115 @@ See the files in the {{{debian/}}} folder.
#[[Source layout guide]]
+
+
/***
+|Name|DisableWikiLinksPlugin|
+|Source|http://www.TiddlyTools.com/#DisableWikiLinksPlugin|
+|Version|1.6.0|
+|Author|Eric Shulman|
+|License|http://www.TiddlyTools.com/#LegalStatements|
+|~CoreVersion|2.1|
+|Type|plugin|
+|Description|selectively disable TiddlyWiki's automatic ~WikiWord linking behavior|
+This plugin allows you to disable TiddlyWiki's automatic ~WikiWord linking behavior, so that WikiWords embedded in tiddler content will be rendered as regular text, instead of being automatically converted to tiddler links.  To create a tiddler link when automatic linking is disabled, you must enclose the link text within {{{[[...]]}}}.
+!!!!!Usage
+<<<
+You can block automatic WikiWord linking behavior for any specific tiddler by ''tagging it with <<tag excludeWikiWords>>'' (see configuration below), or check a plugin option to disable automatic WikiWord links to non-existing tiddler titles while still linking WikiWords that correspond to existing tiddler titles or shadow tiddler titles.  You can also block specific selected WikiWords from being automatically linked by listing them in [[DisableWikiLinksList]] (see configuration below), separated by whitespace.  This tiddler is optional and, when present, causes the listed words to always be excluded, even if automatic linking of other WikiWords is being permitted.
+
+Note: WikiWords contained in default ''shadow'' tiddlers will be automatically linked unless you select an additional checkbox option that disables these automatic links as well, though this is not recommended, since it can make it more difficult to access some TiddlyWiki standard default content (such as AdvancedOptions or SideBarTabs).
+<<<
+!!!!!Configuration
+<<<
+<<option chkDisableWikiLinks>> Disable ALL automatic WikiWord tiddler links
+<<option chkAllowLinksFromShadowTiddlers>> ... except for WikiWords //contained in// shadow tiddlers
+<<option chkDisableNonExistingWikiLinks>> Disable automatic WikiWord links for non-existing tiddlers
+Disable automatic WikiWord links for words listed in: <<option txtDisableWikiLinksList>>
+Disable automatic WikiWord links for tiddlers tagged with: <<option txtDisableWikiLinksTag>>
+<<<
+!!!!!Revisions
+<<<
+2008.07.22 [1.6.0] hijack tiddler changed() method to filter disabled wiki words from internal links[] array (so they won't appear in the missing tiddlers list)
+2007.06.09 [1.5.0] added configurable txtDisableWikiLinksTag (default value: "excludeWikiWords") to allow selective disabling of automatic WikiWord links for any tiddler tagged with that value.
+2006.12.31 [1.4.0] in formatter, test for chkDisableNonExistingWikiLinks
+2006.12.09 [1.3.0] in formatter, test for excluded wiki words specified in DisableWikiLinksList
+2006.12.09 [1.2.2] fix logic in autoLinkWikiWords() (was allowing links TO shadow tiddlers, even when chkDisableWikiLinks is TRUE).  
+2006.12.09 [1.2.1] revised logic for handling links in shadow content
+2006.12.08 [1.2.0] added hijack of Tiddler.prototype.autoLinkWikiWords so regular (non-bracketed) WikiWords won't be added to the missing list
+2006.05.24 [1.1.0] added option to NOT bypass automatic wikiword links when displaying default shadow content (default is to auto-link shadow content)
+2006.02.05 [1.0.1] wrapped wikifier hijack in init function to eliminate globals and avoid FireFox 1.5.0.1 crash bug when referencing globals
+2005.12.09 [1.0.0] initial release
+<<<
+!!!!!Code
+***/
+//{{{
+version.extensions.DisableWikiLinksPlugin= {major: 1, minor: 6, revision: 0, date: new Date(2008,7,22)};
+
+if (config.options.chkDisableNonExistingWikiLinks==undefined) config.options.chkDisableNonExistingWikiLinks= false;
+if (config.options.chkDisableWikiLinks==undefined) config.options.chkDisableWikiLinks=false;
+if (config.options.txtDisableWikiLinksList==undefined) config.options.txtDisableWikiLinksList="DisableWikiLinksList";
+if (config.options.chkAllowLinksFromShadowTiddlers==undefined) config.options.chkAllowLinksFromShadowTiddlers=true;
+if (config.options.txtDisableWikiLinksTag==undefined) config.options.txtDisableWikiLinksTag="excludeWikiWords";
+
+// find the formatter for wikiLink and replace handler with 'pass-thru' rendering
+initDisableWikiLinksFormatter();
+function initDisableWikiLinksFormatter() {
+	for (var i=0; i<config.formatters.length && config.formatters[i].name!="wikiLink"; i++);
+	config.formatters[i].coreHandler=config.formatters[i].handler;
+	config.formatters[i].handler=function(w) {
+		// suppress any leading "~" (if present)
+		var skip=(w.matchText.substr(0,1)==config.textPrimitives.unWikiLink)?1:0;
+		var title=w.matchText.substr(skip);
+		var exists=store.tiddlerExists(title);
+		var inShadow=w.tiddler && store.isShadowTiddler(w.tiddler.title);
+		// check for excluded Tiddler
+		if (w.tiddler && w.tiddler.isTagged(config.options.txtDisableWikiLinksTag))
+			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
+		// check for specific excluded wiki words
+		var t=store.getTiddlerText(config.options.txtDisableWikiLinksList);
+		if (t && t.length && t.indexOf(w.matchText)!=-1)
+			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
+		// if not disabling links from shadows (default setting)
+		if (config.options.chkAllowLinksFromShadowTiddlers && inShadow)
+			return this.coreHandler(w);
+		// check for non-existing non-shadow tiddler
+		if (config.options.chkDisableNonExistingWikiLinks && !exists)
+			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
+		// if not enabled, just do standard WikiWord link formatting
+		if (!config.options.chkDisableWikiLinks)
+			return this.coreHandler(w);
+		// just return text without linking
+		w.outputText(w.output,w.matchStart+skip,w.nextMatch)
+	}
+}
+
+Tiddler.prototype.coreAutoLinkWikiWords = Tiddler.prototype.autoLinkWikiWords;
+Tiddler.prototype.autoLinkWikiWords = function()
+{
+	// if all automatic links are not disabled, just return results from core function
+	if (!config.options.chkDisableWikiLinks)
+		return this.coreAutoLinkWikiWords.apply(this,arguments);
+	return false;
+}
+
+Tiddler.prototype.disableWikiLinks_changed = Tiddler.prototype.changed;
+Tiddler.prototype.changed = function()
+{
+	this.disableWikiLinks_changed.apply(this,arguments);
+	// remove excluded wiki words from links array
+	var t=store.getTiddlerText(config.options.txtDisableWikiLinksList,"").readBracketedList();
+	if (t.length) for (var i=0; i<t.length; i++)
+		if (this.links.contains(t[i]))
+			this.links.splice(this.links.indexOf(t[i]),1);
+};
+//}}}
+
Not done yet!
-
+
Start here if you want to learn the essentials to extend, modify and enhance the CloudStack.  This assumes that you've already familiarized yourself with CloudStack concepts, installation and configuration using the [[Getting started|Welcome]] instructions.
 * [[Obtain the source|Obtaining the source]]
-* [[Prepare your environment|Preparing your development environment]]
+* [[Prepare your environment|Preparing your environment]]
 * [[Get acquainted with the development lifecycle|Your development lifecycle]]
 * [[Familiarize yourself with our development conventions|Development conventions]]
 Extra developer information:
@@ -764,6 +950,7 @@ Extra developer information:
 * [[How to integrate with Eclipse]]
 * [[Starting over]]
 * [[Making a source release|waf dist]]
+* [[How to write database migration scripts|Database migration infrastructure]]
 
@@ -785,13 +972,13 @@ Any ant target added to the ant project files will automatically be detected -- The reason we do this rather than use the native waf capabilities for building Java projects is simple: by using ant, we can leverage the built-in support for ant in [[Eclipse|How to integrate with Eclipse]] and many other """IDEs""". Another reason is that Java developers are familiar with ant, so adding a new JAR file or modifying what gets built into the existing JAR files is straightforward for them.
-
+
The CloudStack build system installs files on a variety of paths, each
 one of which is selectable when building from source.
 * {{{$PREFIX}}}:
 ** the default prefix where the entire stack is installed
-** defaults to /usr/local on source builds
-** defaults to /usr on package builds
+** defaults to {{{/usr/local}}} on source builds as root, {{{$HOME/cloudstack}}} on source builds as a regular user, {{{C:\CloudStack}}} on Windows builds
+** defaults to {{{/usr}}} on package builds
 * {{{$SYSCONFDIR/cloud}}}:
 ** the prefix for CloudStack configuration files
 ** defaults to $PREFIX/etc/cloud on source builds
@@ -901,16 +1088,17 @@ This will create a folder called {{{cloudstack-oss}}} in your current folder.
 !Browsing the source code online
 You can browse the CloudStack source code through [[our CGit Web interface|http://git.cloud.com/cloudstack-oss]].
-
-
!Install the build dependencies on the machine where you will compile the CloudStack
-!!Fedora / CentOS
-The command [[waf installrpmdeps]] issued from the source tree gets it done.
-!!Ubuntu
-The command [[waf installdebdeps]] issues from the source tree gets it done.
-!!Other distributions
-See [[CloudStack build dependencies]]
-!Install the run-time dependencies on the machines where you will run the CloudStack
-See [[CloudStack run-time dependencies]].
+
+
!Install the build dependencies
+* If you want to compile the CloudStack on Linux:
+** Fedora / CentOS: The command [[waf installrpmdeps]] issued from the source tree gets it done.
+** Ubuntu: The command [[waf installdebdeps]] issued from the source tree gets it done.
+** Other distributions: Manually install the packages listed in [[CloudStack build dependencies]].
+* If you want to compile the CloudStack on Windows or Mac:
+** Manually install the packages listed in [[CloudStack build dependencies]].
+** Note that you won't be able to deploy this compiled CloudStack onto Linux machines -- you will be limited to running the Management Server.
+!Install the run-time dependencies
+In addition to the build dependencies, a number of software packages need to be installed on the machine to be able to run certain components of the CloudStack.  These packages are not strictly required to //build// the stack, but they are required to run at least one part of it.  See the topic [[CloudStack run-time dependencies]] for the list of packages.
Every time you run {{{./waf install}}} to deploy changed code, waf will install configuration files once again.  This can be a nuisance if you are developing the stack.
@@ -1149,9 +1337,9 @@ Cloud.com's contact information is:
 !Legal information
 //Unless otherwise specified// by Cloud.com, Inc., or in the sources themselves, [[this software is OSI certified Open Source Software distributed under the GNU General Public License, version 3|License statement]].  OSI Certified is a certification mark of the Open Source Initiative.  The software powering this documentation is """BSD-licensed""" and obtained from [[TiddlyWiki.com|http://tiddlywiki.com/]].
-
-
This is the typical lifecycle that you would follow when hacking on a CloudStack component, assuming that your [[development environment has been set up|Preparing your development environment]]:
-# [[Configure|waf configure]] the source code<br>{{{./waf configure --prefix=/home/youruser/cloudstack}}}
+
+
This is the typical lifecycle that you would follow when hacking on a CloudStack component, assuming that your [[development environment has been set up|Preparing your environment]]:
+# [[Configure|waf configure]] the source code<br>{{{./waf configure}}}
 # [[Build|waf build]] and [[install|waf install]] the CloudStack
 ## {{{./waf install}}}
 ## [[How to perform these tasks from Eclipse|How to integrate with Eclipse]]
@@ -1229,7 +1417,7 @@ Makes an inventory of all build products in {{{artifacts/default}}}, and removes
 
 Contrast to [[waf distclean]].
-
+
{{{
 ./waf configure --prefix=/directory/that/you/have/write/permission/to
 }}}
@@ -1238,7 +1426,7 @@ This runs the file {{{wscript_configure}}}, which takes care of setting the  var
 !When / why should I run this?
 You run this command //once//, in preparation for building the stack, or every time you need to change a configure-time variable.  Once you find an acceptable set of configure-time variables, you should not need to run {{{configure}}} again.
 !What happens if I don't run it?
-For convenience reasons, if you forget to configure the source, waf will autoconfigure itself and select some sensible default configuration options.  By default, {{{PREFIX}}} is {{{/usr/local}}}, but you can set it e.g. to  {{{/home/youruser/cloudstack}}} if you plan to do a non-root install.  Be ware that you can later install the stack as a regular user, but most components need to //run// as root.
+For convenience reasons, if you forget to configure the source, waf will autoconfigure itself and select some sensible default configuration options.  By default, {{{PREFIX}}} is {{{/usr/local}}} if you configure as root, or {{{/home/youruser/cloudstack}}} if you configure as your regular user (do this if you plan to do a non-root install).  Beware that you can later install the stack as a regular user, but most components need to //run// as root.
 !What variables / options exist for configure?
 In general: refer to the output of {{{./waf configure --help}}}.
 
diff --git a/agent/src/com/cloud/agent/resource/computing/LibvirtComputingResource.java b/agent/src/com/cloud/agent/resource/computing/LibvirtComputingResource.java
index ad7c036fe31..f1d2d36e29b 100644
--- a/agent/src/com/cloud/agent/resource/computing/LibvirtComputingResource.java
+++ b/agent/src/com/cloud/agent/resource/computing/LibvirtComputingResource.java
@@ -1311,7 +1311,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
          try {
 			StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(secondaryStoragePoolURL));
 			String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
-			snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId; 
+			snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator +  dcId + File.separator + accountId + File.separator + volumeId; 
 			Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
 			command.add("-b", snapshotPath);
 			command.add("-n", snapshotName);
@@ -1367,7 +1367,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
     	try {
     		StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
 			String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
-			String snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
+			String snapshotDestPath = ssPmountPath + File.separator + "snapshots"  + File.separator + dcId + File.separator + accountId + File.separator + volumeId;
 			
 			final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
 			command.add("-d", snapshotDestPath);
@@ -1389,11 +1389,12 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
     	try {
     		StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
 			String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
-			String snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
+			String snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator +  dcId + File.separator + accountId + File.separator + volumeId;
 			
 			final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
 			command.add("-d", snapshotDestPath);
 			command.add("-n", cmd.getSnapshotName());
+			command.add("-f");
 			command.execute();
     	} catch (LibvirtException e) {
     		return new Answer(cmd, false, e.toString());
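The hunks above all apply the same reordering: the {{{snapshots}}} directory now precedes the data center id in the secondary-storage path, so all snapshots share one top-level directory. The new layout can be sketched like this (shown in Python for brevity; the actual code is the Java above, and the function name is made up):

```python
import os.path

def snapshot_dest_path(mount_path, dc_id, account_id, volume_id):
    # New layout introduced by the patch:
    #   <mount>/snapshots/<dcId>/<accountId>/<volumeId>
    # (previously: <mount>/<dcId>/snapshots/<accountId>/<volumeId>)
    return os.path.join(mount_path, "snapshots", str(dc_id),
                        str(account_id), str(volume_id))
```

Centralizing the path construction in one helper like this would also have kept the three Java call sites from drifting apart.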
@@ -1428,10 +1429,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
     		 secondaryPool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
     		 /*TODO: assuming all the storage pools mounted under _mountPoint, the mount point should be got from pool.dumpxml*/
     		 String templatePath = _mountPoint + File.separator + secondaryPool.getUUIDString() + File.separator + templateInstallFolder;	 
-    		 File f = new File(templatePath);
-    		 if (!f.exists()) {
-    			 f.mkdirs();
-    		 }
+    		 _storage.mkdirs(templatePath);
+    		 
     		 String tmplPath = templateInstallFolder + File.separator + tmplFileName;
     		 Script command = new Script(_createTmplPath, _timeout, s_logger);
     		 command.add("-t", templatePath);
@@ -1487,10 +1486,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
         	 secondaryStorage = getNfsSPbyURI(_conn, new URI(secondaryStorageURL));
         	 /*TODO: assuming all the storage pools mounted under _mountPoint, the mount point should be got from pool.dumpxml*/
         	 String tmpltPath = _mountPoint + File.separator + secondaryStorage.getUUIDString() + templateInstallFolder;
-        	 File mpfile = new File(tmpltPath);
-        	 if (!mpfile.exists()) {
-        		 mpfile.mkdirs();
-        	 }
+        	 _storage.mkdirs(tmpltPath);
 
         	 Script command = new Script(_createTmplPath, _timeout, s_logger);
         	 command.add("-f", cmd.getSnapshotPath());
@@ -1589,10 +1585,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
           
           if (sp == null) {
         	  try {
-        		  File tpFile = new File(targetPath);
-        		  if (!tpFile.exists()) {
-        			  tpFile.mkdir();
-        		  }
+        		  _storage.mkdir(targetPath);
         		  LibvirtStoragePoolDef spd = new LibvirtStoragePoolDef(poolType.NFS, uuid, uuid,
         				  sourceHost, sourcePath, targetPath);
         		  s_logger.debug(spd.toString());
@@ -1702,10 +1695,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
     	String targetPath = _mountPoint + File.separator + pool.getUuid();
     	LibvirtStoragePoolDef spd = new LibvirtStoragePoolDef(poolType.NFS, pool.getUuid(), pool.getUuid(),
     														  pool.getHostAddress(), pool.getPath(), targetPath);
-    	File tpFile = new File(targetPath);
-		  if (!tpFile.exists()) {
-			  tpFile.mkdirs();
-		  }
+    	_storage.mkdir(targetPath);
     	StoragePool sp = null;
     	try {
     		s_logger.debug(spd.toString());
diff --git a/api/src/com/cloud/storage/Volume.java b/api/src/com/cloud/storage/Volume.java
index f43f5a5be74..64c868412e3 100755
--- a/api/src/com/cloud/storage/Volume.java
+++ b/api/src/com/cloud/storage/Volume.java
@@ -17,6 +17,8 @@
  */
 package com.cloud.storage;
 
+import java.util.Date;
+
 import com.cloud.domain.PartOf;
 import com.cloud.template.BasedOn;
 import com.cloud.user.OwnedBy;
@@ -86,4 +88,8 @@ public interface Volume extends PartOf, OwnedBy, BasedOn {
 	void setSourceId(Long sourceId);
 
 	Long getSourceId();
+
+	Date getAttached();
+
+	void setAttached(Date attached);
 }
diff --git a/build/build-cloud.xml b/build/build-cloud.xml
index ebdefbb90b0..848af06df1e 100755
--- a/build/build-cloud.xml
+++ b/build/build-cloud.xml
@@ -107,9 +107,6 @@
   
   
 
-  
-  
-  
 
   
   
@@ -518,40 +515,19 @@
   
 
 
-  
-    
-    
-      
-        
-        
-        
-      
-      
+  
+    
+    
+      
         
         
         
+        
       
     
-    
-    
-  
-
-  
-    
-    
-      
-        
-        
-        
-      
-      
-        
-        
-        
-      
-    
-    
-    
+    
+    
+    
   
 
   
diff --git a/build/package.xml b/build/package.xml
index fce58ddcd5e..ec0cc82077b 100755
--- a/build/package.xml
+++ b/build/package.xml
@@ -23,7 +23,6 @@
   
   
   
-  
 
   
     
@@ -92,9 +91,9 @@
   
 
 
-  
+  
     
-      
+      
         
       
       
@@ -103,14 +102,15 @@
     
   
 
-  
+  
     
       
       
       
       
       
-      
+      
+      
     
   
 
@@ -135,7 +135,7 @@
     
   
 
-  
+  
   
 
   
diff --git a/client/tomcatconf/commands.properties.in b/client/tomcatconf/commands.properties.in
old mode 100644
new mode 100755
index 47f934cabd4..bd38848c3f3
--- a/client/tomcatconf/commands.properties.in
+++ b/client/tomcatconf/commands.properties.in
@@ -61,6 +61,7 @@ deleteTemplate=com.cloud.api.commands.DeleteTemplateCmd;15
 listTemplates=com.cloud.api.commands.ListTemplatesCmd;15
 updateTemplatePermissions=com.cloud.api.commands.UpdateTemplatePermissionsCmd;15
 listTemplatePermissions=com.cloud.api.commands.ListTemplatePermissionsCmd;15
+extractTemplate=com.cloud.api.commands.ExtractTemplateCmd;15
 
 #### iso commands
 attachIso=com.cloud.api.commands.AttachIsoCmd;15
diff --git a/client/tomcatconf/components.xml.in b/client/tomcatconf/components.xml.in
index f094434bfc8..d4fd563f2b3 100755
--- a/client/tomcatconf/components.xml.in
+++ b/client/tomcatconf/components.xml.in
@@ -172,6 +172,8 @@
         
         
         
+        
+        
         
         
         
diff --git a/cloud.spec b/cloud.spec
index 690c816876e..4b85ee7bde3 100644
--- a/cloud.spec
+++ b/cloud.spec
@@ -35,6 +35,7 @@ BuildRequires: jpackage-utils
 BuildRequires: gcc
 BuildRequires: glibc-devel
 BuildRequires: /usr/bin/mkisofs
+BuildRequires: MySQL-python
 
 %global _premium %(tar jtvmf %{SOURCE0} '*/cloudstack-proprietary/' --occurrence=1 2>/dev/null | wc -l)
 
@@ -182,12 +183,11 @@ Summary:   Cloud.com setup tools
 Obsoletes: vmops-setup < %{version}-%{release}
 Requires: java >= 1.6.0
 Requires: python
-Requires: mysql
+Requires: MySQL-python
 Requires: %{name}-utils = %{version}-%{release}
 Requires: %{name}-server = %{version}-%{release}
 Requires: %{name}-deps = %{version}-%{release}
 Requires: %{name}-python = %{version}-%{release}
-Requires: MySQL-python
 Group:     System Environment/Libraries
 %description setup
 The Cloud.com setup tools let you set up your Management Server and Usage Server.
@@ -373,7 +373,6 @@ if [ "$1" == "1" ] ; then
     /sbin/chkconfig --add %{name}-management > /dev/null 2>&1 || true
     /sbin/chkconfig --level 345 %{name}-management on > /dev/null 2>&1 || true
 fi
-test -f %{_sharedstatedir}/%{name}/management/.ssh/id_rsa || su - %{name} -c 'yes "" 2>/dev/null | ssh-keygen -t rsa -q -N ""' < /dev/null
 
 
 
@@ -457,30 +456,17 @@ fi
 %doc %{_docdir}/%{name}-%{version}/sccs-info
 %doc %{_docdir}/%{name}-%{version}/version-info
 %doc %{_docdir}/%{name}-%{version}/configure-info
-%doc README
-%doc INSTALL
-%doc HACKING
 %doc README.html
 %doc debian/copyright
 
 %files client-ui
 %defattr(0644,root,root,0755)
 %{_datadir}/%{name}/management/webapps/client/*
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files server
 %defattr(0644,root,root,0755)
 %{_javadir}/%{name}-server.jar
 %{_sysconfdir}/%{name}/server/*
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files agent-scripts
 %defattr(-,root,root,-)
@@ -498,20 +484,10 @@ fi
 %endif
 %{_libdir}/%{name}/agent/vms/systemvm.zip
 %{_libdir}/%{name}/agent/vms/systemvm.iso
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files daemonize
 %defattr(-,root,root,-)
 %attr(755,root,root) %{_bindir}/%{name}-daemonize
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files deps
 %defattr(0644,root,root,0755)
@@ -532,39 +508,20 @@ fi
 %{_javadir}/%{name}-xenserver-5.5.0-1.jar
 %{_javadir}/%{name}-xmlrpc-common-3.*.jar
 %{_javadir}/%{name}-xmlrpc-client-3.*.jar
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files core
 %defattr(0644,root,root,0755)
 %{_javadir}/%{name}-core.jar
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc debian/copyright
 
 %files vnet
 %defattr(0644,root,root,0755)
 %attr(0755,root,root) %{_sbindir}/%{name}-vnetd
 %attr(0755,root,root) %{_sbindir}/%{name}-vn
 %attr(0755,root,root) %{_initrddir}/%{name}-vnetd
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files python
 %defattr(0644,root,root,0755)
 %{_prefix}/lib*/python*/site-packages/%{name}*
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files setup
 %attr(0755,root,root) %{_bindir}/%{name}-setup-databases
@@ -582,11 +539,9 @@ fi
 %{_datadir}/%{name}/setup/index-212to213.sql
 %{_datadir}/%{name}/setup/postprocess-20to21.sql
 %{_datadir}/%{name}/setup/schema-20to21.sql
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
+%{_datadir}/%{name}/setup/schema-level.sql
+%{_datadir}/%{name}/setup/schema-21to22.sql
+%{_datadir}/%{name}/setup/data-21to22.sql
 
 %files client
 %defattr(0644,root,root,0755)
@@ -626,19 +581,10 @@ fi
 %dir %attr(770,root,%{name}) %{_localstatedir}/cache/%{name}/management/temp
 %dir %attr(770,root,%{name}) %{_localstatedir}/log/%{name}/management
 %dir %attr(770,root,%{name}) %{_localstatedir}/log/%{name}/agent
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files agent-libs
 %defattr(0644,root,root,0755)
 %{_javadir}/%{name}-agent.jar
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc debian/copyright
 
 %files agent
 %defattr(0644,root,root,0755)
@@ -654,11 +600,6 @@ fi
 %{_libdir}/%{name}/agent/images
 %attr(0755,root,root) %{_bindir}/%{name}-setup-agent
 %dir %attr(770,root,root) %{_localstatedir}/log/%{name}/agent
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files console-proxy
 %defattr(0644,root,root,0755)
@@ -671,11 +612,6 @@ fi
 %{_libdir}/%{name}/console-proxy/*
 %attr(0755,root,root) %{_bindir}/%{name}-setup-console-proxy
 %dir %attr(770,root,root) %{_localstatedir}/log/%{name}/console-proxy
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %if %{_premium}
 
@@ -686,20 +622,10 @@ fi
 %{_sharedstatedir}/%{name}/test/*
 %{_libdir}/%{name}/test/*
 %{_sysconfdir}/%{name}/test/*
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files premium-deps
 %defattr(0644,root,root,0755)
 %{_javadir}/%{name}-premium/*.jar
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files premium
 %defattr(0644,root,root,0755)
@@ -719,11 +645,6 @@ fi
 %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenheartbeat.sh
 %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenserver56/patch-premium
 %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xs_cleanup.sh
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %files usage
 %defattr(0644,root,root,0755)
@@ -734,11 +655,6 @@ fi
 %{_sysconfdir}/%{name}/usage/usage-components.xml
 %config(noreplace) %{_sysconfdir}/%{name}/usage/log4j-%{name}_usage.xml
 %config(noreplace) %attr(640,root,%{name}) %{_sysconfdir}/%{name}/usage/db.properties
-%doc README
-%doc INSTALL
-%doc HACKING
-%doc README.html
-%doc debian/copyright
 
 %endif
 
diff --git a/console-proxy/scripts/config_auth.sh b/console-proxy/scripts/config_auth.sh
index 893920d2be2..503c90f1d0a 100755
--- a/console-proxy/scripts/config_auth.sh
+++ b/console-proxy/scripts/config_auth.sh
@@ -2,7 +2,12 @@
 
 BASE_DIR="/var/www/html/copy/template/"
 HTACCESS="$BASE_DIR/.htaccess"
+
 PASSWDFILE="/etc/httpd/.htpasswd"
+if [ -d /etc/apache2 ]
+then
+  PASSWDFILE="/etc/apache2/.htpasswd"
+fi
 
 config_htaccess() {
   mkdir -p $BASE_DIR
diff --git a/console-proxy/scripts/config_ssl.sh b/console-proxy/scripts/config_ssl.sh
index a3be8d32dff..ef59852d69b 100755
--- a/console-proxy/scripts/config_ssl.sh
+++ b/console-proxy/scripts/config_ssl.sh
@@ -15,6 +15,17 @@ config_httpd_conf() {
   echo "" >> /etc/httpd/conf/httpd.conf
 }
 
+config_apache2_conf() {
+  local ip=$1
+  local srvr=$2
+  cp -f /etc/apache2/sites-available/default.orig /etc/apache2/sites-available/default
+  cp -f /etc/apache2/sites-available/default-ssl.orig /etc/apache2/sites-available/default-ssl
+  sed -i -e "s/VirtualHost.*:80$/VirtualHost $ip:80/" /etc/apache2/sites-available/default
+  sed -i  "s/_default_/$ip/" /etc/apache2/sites-available/default-ssl
+  sed -i  's/ssl-cert-snakeoil.key/realhostip.key/' /etc/apache2/sites-available/default-ssl
+  sed -i  's/ssl-cert-snakeoil.pem/realhostip.crt/' /etc/apache2/sites-available/default-ssl
+}
+
 copy_certs() {
   local certdir=$(dirname $0)/certs
   local mydir=$(dirname $0)
@@ -25,16 +36,37 @@ copy_certs() {
   return 1
 }
 
+copy_certs_apache2() {
+  local certdir=$(dirname $0)/certs
+  local mydir=$(dirname $0)
+  if [ -d $certdir ] && [ -f $certdir/realhostip.key ] &&  [ -f $certdir/realhostip.crt ] ; then
+      cp $certdir/realhostip.key /etc/ssl/private/   &&  cp $certdir/realhostip.crt /etc/ssl/certs/
+      return $?
+  fi
+  return 1
+}
+
 if [ $# -ne 2 ] ; then
 	echo $"Usage: `basename $0` ipaddr servername "
 	exit 0
 fi
 
-copy_certs
+if [ -d /etc/apache2 ]
+then
+  copy_certs_apache2
+else
+  copy_certs
+fi
+
 if [ $? -ne 0 ]
 then
   echo "Failed to copy certificates"
   exit 2
 fi
 
-config_httpd_conf $1 $2
+if [ -d /etc/apache2 ]
+then
+  config_apache2_conf $1 $2
+else
+  config_httpd_conf $1 $2
+fi
diff --git a/core/src/com/cloud/agent/AgentManager.java b/core/src/com/cloud/agent/AgentManager.java
index 52efd2946ac..49bda30e8b3 100755
--- a/core/src/com/cloud/agent/AgentManager.java
+++ b/core/src/com/cloud/agent/AgentManager.java
@@ -213,4 +213,6 @@ public interface AgentManager extends Manager {
     public boolean reconnect(final long hostId) throws AgentUnavailableException;
     
     public List discoverHosts(long dcId, Long podId, Long clusterId, URI url, String username, String password) throws DiscoveryException;
+
+	Answer easySend(Long hostId, Command cmd, int timeout);
 }
diff --git a/core/src/com/cloud/agent/api/storage/AbstractUploadCommand.java b/core/src/com/cloud/agent/api/storage/AbstractUploadCommand.java
new file mode 100644
index 00000000000..234ab6bbfcc
--- /dev/null
+++ b/core/src/com/cloud/agent/api/storage/AbstractUploadCommand.java
@@ -0,0 +1,52 @@
+package com.cloud.agent.api.storage;
+
+import com.cloud.storage.Storage.ImageFormat;
+
+public class AbstractUploadCommand extends StorageCommand {
+
+
+    private String url;
+    private ImageFormat format;
+    private long accountId;
+    private String name;
+    
+    protected AbstractUploadCommand() {
+    }
+    
+    protected AbstractUploadCommand(String name, String url, ImageFormat format, long accountId) {
+        this.url = url;
+        this.format = format;
+        this.accountId = accountId;
+        this.name = name;
+    }
+    
+    protected AbstractUploadCommand(AbstractUploadCommand that) {
+        this(that.name, that.url, that.format, that.accountId);
+    }
+    
+    public String getUrl() {
+        return url;
+    }
+    
+    public String getName() {
+        return name;
+    }
+    
+    public ImageFormat getFormat() {
+        return format;
+    }
+    
+    public long getAccountId() {
+        return accountId;
+    }
+    
+    @Override
+    public boolean executeInSequence() {
+        return true;
+    }
+
+	public void setUrl(String url) {
+		this.url = url;
+	}
+
+}
diff --git a/core/src/com/cloud/agent/api/storage/CreateCommand.java b/core/src/com/cloud/agent/api/storage/CreateCommand.java
index ab370027c8d..48e53748f1e 100644
--- a/core/src/com/cloud/agent/api/storage/CreateCommand.java
+++ b/core/src/com/cloud/agent/api/storage/CreateCommand.java
@@ -64,7 +64,7 @@ public class CreateCommand extends Command {
         this.pool = new StoragePoolTO(pool);
         this.templateUrl = null;
         this.size = size;
-        this.instanceName = vm.getInstanceName();
+        //this.instanceName = vm.getInstanceName();
     }
     
     @Override
diff --git a/core/src/com/cloud/agent/api/storage/UploadAnswer.java b/core/src/com/cloud/agent/api/storage/UploadAnswer.java
new file mode 100644
index 00000000000..6bcda28d484
--- /dev/null
+++ b/core/src/com/cloud/agent/api/storage/UploadAnswer.java
@@ -0,0 +1,103 @@
+package com.cloud.agent.api.storage;
+
+import java.io.File;
+
+import com.cloud.agent.api.Answer;
+import com.cloud.agent.api.Command;
+import com.cloud.storage.VMTemplateHostVO;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+
+public class UploadAnswer extends Answer {
+
+	
+	private String jobId;
+	private int uploadPct;
+	private String errorString;
+	private VMTemplateHostVO.Status uploadStatus;
+	private String uploadPath;
+	private String installPath;
+	public Long templateSize = 0L;
+	
+	public int getUploadPct() {
+		return uploadPct;
+	}
+	public String getErrorString() {
+		return errorString;
+	}
+	
+	public String getUploadStatusString() {
+		return uploadStatus.toString();
+	}
+	
+	public VMTemplateHostVO.Status getUploadStatus() {
+		return uploadStatus;
+	}
+	
+	public String getUploadPath() {
+		return uploadPath;
+	}
+	protected UploadAnswer() {
+		
+	}
+	
+	public String getJobId() {
+		return jobId;
+	}
+	public void setJobId(String jobId) {
+		this.jobId = jobId;
+	}
+	
+	public UploadAnswer(String jobId, int uploadPct, String errorString,
+			Status uploadStatus, String fileSystemPath, String installPath, long templateSize) {
+		super();
+		this.jobId = jobId;
+		this.uploadPct = uploadPct;
+		this.errorString = errorString;
+		this.uploadStatus = uploadStatus;
+		this.uploadPath = fileSystemPath;
+		this.installPath = fixPath(installPath);
+		this.templateSize = templateSize;
+	}
+
+   public UploadAnswer(String jobId, int uploadPct, Command command,
+            Status uploadStatus, String fileSystemPath, String installPath) {
+	    super(command);
+        this.jobId = jobId;
+        this.uploadPct = uploadPct;
+        this.uploadStatus = uploadStatus;
+        this.uploadPath = fileSystemPath;
+        this.installPath = installPath;
+    }
+		
+	private static String fixPath(String path){
+		if (path == null)
+			return path;
+		if (path.startsWith(File.separator)) {
+			path=path.substring(File.separator.length());
+		}
+		if (path.endsWith(File.separator)) {
+			path=path.substring(0, path.length()-File.separator.length());
+		}
+		return path;
+	}
+	
+	public void setUploadStatus(VMTemplateHostVO.Status uploadStatus) {
+		this.uploadStatus = uploadStatus;
+	}
+	
+	public String getInstallPath() {
+		return installPath;
+	}
+	public void setInstallPath(String installPath) {
+		this.installPath = fixPath(installPath);
+	}
+
+	public void setTemplateSize(long templateSize) {
+		this.templateSize = templateSize;
+	}
+	
+	public Long getTemplateSize() {
+		return templateSize;
+	}
+	
+}
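The fixPath helper added above strips at most one leading and one trailing File.separator from an install path before storing it. A self-contained sketch of the same normalization (the class name FixPathDemo is illustrative, not part of the patch; the expected values assume a platform where File.separator is "/"):

```java
import java.io.File;

public class FixPathDemo {
    // Mirrors UploadAnswer.fixPath: trim one leading and one trailing separator.
    static String fixPath(String path) {
        if (path == null) {
            return null;
        }
        if (path.startsWith(File.separator)) {
            path = path.substring(File.separator.length());
        }
        if (path.endsWith(File.separator)) {
            path = path.substring(0, path.length() - File.separator.length());
        }
        return path;
    }

    public static void main(String[] args) {
        System.out.println(fixPath("/template/tmpl/1/100/")); // template/tmpl/1/100
        System.out.println(fixPath("already/relative"));      // already/relative
    }
}
```

Note that only a single separator is trimmed on each end, so a doubled slash would survive; that matches the patch's behavior.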
diff --git a/core/src/com/cloud/agent/api/storage/UploadCommand.java b/core/src/com/cloud/agent/api/storage/UploadCommand.java
new file mode 100644
index 00000000000..f8175bc20d1
--- /dev/null
+++ b/core/src/com/cloud/agent/api/storage/UploadCommand.java
@@ -0,0 +1,115 @@
+package com.cloud.agent.api.storage;
+
+import com.cloud.storage.VMTemplateHostVO;
+import com.cloud.storage.VMTemplateVO;
+import com.cloud.agent.api.storage.AbstractUploadCommand;
+import com.cloud.agent.api.storage.DownloadCommand.PasswordAuth;
+
+
+public class UploadCommand extends AbstractUploadCommand {
+
+	private VMTemplateVO template;
+	private String url;
+	private String installPath;	
+	private boolean hvm;
+	private String description;
+	private String checksum;
+	private PasswordAuth auth;
+	private long templateSizeInBytes;
+	private long id;
+
+	public UploadCommand(VMTemplateVO template, String url, VMTemplateHostVO vmTemplateHost) {
+		
+		this.template = template;
+		this.url = url;
+		this.installPath = vmTemplateHost.getInstallPath();
+		this.checksum = template.getChecksum();
+		this.id = template.getId();
+		this.templateSizeInBytes = vmTemplateHost.getSize();
+		
+	}
+	
+	protected UploadCommand() {
+	}
+	
+	public UploadCommand(UploadCommand that) {
+		this.template = that.template;
+		this.url = that.url;
+		this.installPath = that.installPath;
+		this.checksum = that.getChecksum();
+		this.id = that.id;
+	}
+
+	public String getDescription() {
+		return description;
+	}
+
+
+	public VMTemplateVO getTemplate() {
+		return template;
+	}
+
+	public void setTemplate(VMTemplateVO template) {
+		this.template = template;
+	}
+
+	public String getUrl() {
+		return url;
+	}
+
+	public void setUrl(String url) {
+		this.url = url;
+	}
+
+	public boolean isHvm() {
+		return hvm;
+	}
+
+	public void setHvm(boolean hvm) {
+		this.hvm = hvm;
+	}
+
+	public PasswordAuth getAuth() {
+		return auth;
+	}
+
+	public void setAuth(PasswordAuth auth) {
+		this.auth = auth;
+	}
+
+	public Long getTemplateSizeInBytes() {
+		return templateSizeInBytes;
+	}
+
+	public void setTemplateSizeInBytes(Long templateSizeInBytes) {
+		this.templateSizeInBytes = templateSizeInBytes;
+	}
+
+	public long getId() {
+		return id;
+	}
+
+	public void setId(long id) {
+		this.id = id;
+	}
+
+	public void setInstallPath(String installPath) {
+		this.installPath = installPath;
+	}
+
+	public void setDescription(String description) {
+		this.description = description;
+	}
+
+	public void setChecksum(String checksum) {
+		this.checksum = checksum;
+	}
+
+	public String getInstallPath() {
+		return installPath;
+	}
+
+	public String getChecksum() {
+		return checksum;
+	}
+}
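One thing to watch in the new class above: UploadCommand declares its own private url field even though AbstractUploadCommand already carries one with getUrl/setUrl, so the subclass's overrides silently read and write a different field than the superclass's copy. A minimal sketch of that field-shadowing behavior (class and method names are illustrative, not from the patch):

```java
public class ShadowDemo {
    public static class Base {
        private String url;
        public void setUrl(String url) { this.url = url; }
        public String getUrl() { return url; }
        // Not overridden below, so this always reads Base's own field.
        public String baseCopy() { return this.url; }
    }

    public static class Child extends Base {
        private String url; // shadows Base.url
        @Override public void setUrl(String url) { this.url = url; }
        @Override public String getUrl() { return url; }
    }

    public static void main(String[] args) {
        Child c = new Child();
        c.setUrl("http://example.com/t.vhd");
        System.out.println(c.getUrl());   // http://example.com/t.vhd (Child.url)
        System.out.println(c.baseCopy()); // null -- Base.url was never written
    }
}
```

Methods dispatch dynamically but fields do not, which is why the two copies can drift apart; dropping the duplicate field in the subclass avoids the trap.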
diff --git a/core/src/com/cloud/agent/api/storage/UploadProgressCommand.java b/core/src/com/cloud/agent/api/storage/UploadProgressCommand.java
new file mode 100644
index 00000000000..9da9f2f1cb2
--- /dev/null
+++ b/core/src/com/cloud/agent/api/storage/UploadProgressCommand.java
@@ -0,0 +1,32 @@
+package com.cloud.agent.api.storage;
+
+public class UploadProgressCommand extends UploadCommand {
+
+	public static enum RequestType {GET_STATUS, ABORT, RESTART, PURGE, GET_OR_RESTART}
+	private String jobId;
+	private RequestType request;
+
+	protected UploadProgressCommand() {
+		super();
+	}
+	
+	public UploadProgressCommand(UploadCommand cmd, String jobId, RequestType req) {
+	    super(cmd);
+
+		this.jobId = jobId;
+		this.setRequest(req);
+	}
+
+	public String getJobId() {
+		return jobId;
+	}
+
+	public void setRequest(RequestType request) {
+		this.request = request;
+	}
+
+	public RequestType getRequest() {
+		return request;
+	}
+	
+}
\ No newline at end of file
diff --git a/core/src/com/cloud/event/EventTypes.java b/core/src/com/cloud/event/EventTypes.java
index e25f73dad39..4ad2523f831 100644
--- a/core/src/com/cloud/event/EventTypes.java
+++ b/core/src/com/cloud/event/EventTypes.java
@@ -78,7 +78,10 @@ public class EventTypes {
 	public static final String EVENT_TEMPLATE_COPY = "TEMPLATE.COPY";
 	public static final String EVENT_TEMPLATE_DOWNLOAD_START = "TEMPLATE.DOWNLOAD.START";
 	public static final String EVENT_TEMPLATE_DOWNLOAD_SUCCESS = "TEMPLATE.DOWNLOAD.SUCCESS";
-	public static final String EVENT_TEMPLATE_DOWNLOAD_FAILED = "TEMPLATE.DOWNLOAD.FAILED";
+	public static final String EVENT_TEMPLATE_DOWNLOAD_FAILED = "TEMPLATE.DOWNLOAD.FAILED";
+	public static final String EVENT_TEMPLATE_UPLOAD_FAILED = "TEMPLATE.UPLOAD.FAILED";
+	public static final String EVENT_TEMPLATE_UPLOAD_START = "TEMPLATE.UPLOAD.START";
+	public static final String EVENT_TEMPLATE_UPLOAD_SUCCESS = "TEMPLATE.UPLOAD.SUCCESS";
 	
 	// Volume Events
 	public static final String EVENT_VOLUME_CREATE = "VOLUME.CREATE";
diff --git a/core/src/com/cloud/hypervisor/xen/resource/CitrixHelper.java b/core/src/com/cloud/hypervisor/xen/resource/CitrixHelper.java
new file mode 100644
index 00000000000..23de8d0a0af
--- /dev/null
+++ b/core/src/com/cloud/hypervisor/xen/resource/CitrixHelper.java
@@ -0,0 +1,163 @@
+/**
+ *  Copyright (C) 2010 Cloud.com.  All rights reserved.
+ *
+ * This software is licensed under the GNU General Public License v3 or later. 
+ *
+ * It is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 3 of the License, or any later
+ * version.
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+package com.cloud.hypervisor.xen.resource;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+
+/**
+ * Reduce bloat inside CitrixResourceBase
+ *
+ */
+public class CitrixHelper {
+	private static final HashMap<String, String> _guestOsMap = new HashMap<String, String>(70);
+	private static final ArrayList<String> _guestOsList = new ArrayList<String>(70);
+
+
+    static {
+        _guestOsMap.put("CentOS 4.5 (32-bit)", "CentOS 4.5");
+        _guestOsMap.put("CentOS 4.6 (32-bit)", "CentOS 4.6");
+        _guestOsMap.put("CentOS 4.7 (32-bit)", "CentOS 4.7");
+        _guestOsMap.put("CentOS 4.8 (32-bit)", "CentOS 4.8");
+        _guestOsMap.put("CentOS 5.0 (32-bit)", "CentOS 5.0");
+        _guestOsMap.put("CentOS 5.0 (64-bit)", "CentOS 5.0 x64");
+        _guestOsMap.put("CentOS 5.1 (32-bit)", "CentOS 5.1");
+        _guestOsMap.put("CentOS 5.1 (64-bit)", "CentOS 5.1 x64");
+        _guestOsMap.put("CentOS 5.2 (32-bit)", "CentOS 5.2");
+        _guestOsMap.put("CentOS 5.2 (64-bit)", "CentOS 5.2 x64");
+        _guestOsMap.put("CentOS 5.3 (32-bit)", "CentOS 5.3");
+        _guestOsMap.put("CentOS 5.3 (64-bit)", "CentOS 5.3 x64");
+        _guestOsMap.put("CentOS 5.4 (32-bit)", "CentOS 5.4");
+        _guestOsMap.put("CentOS 5.4 (64-bit)", "CentOS 5.4 x64");
+        _guestOsMap.put("Debian Lenny 5.0 (32-bit)", "Debian Lenny 5.0 (32-bit)");
+        _guestOsMap.put("Oracle Enterprise Linux 5.0 (32-bit)", "Oracle Enterprise Linux 5.0");
+        _guestOsMap.put("Oracle Enterprise Linux 5.0 (64-bit)", "Oracle Enterprise Linux 5.0 x64");
+        _guestOsMap.put("Oracle Enterprise Linux 5.1 (32-bit)", "Oracle Enterprise Linux 5.1");
+        _guestOsMap.put("Oracle Enterprise Linux 5.1 (64-bit)", "Oracle Enterprise Linux 5.1 x64");
+        _guestOsMap.put("Oracle Enterprise Linux 5.2 (32-bit)", "Oracle Enterprise Linux 5.2");
+        _guestOsMap.put("Oracle Enterprise Linux 5.2 (64-bit)", "Oracle Enterprise Linux 5.2 x64");
+        _guestOsMap.put("Oracle Enterprise Linux 5.3 (32-bit)", "Oracle Enterprise Linux 5.3");
+        _guestOsMap.put("Oracle Enterprise Linux 5.3 (64-bit)", "Oracle Enterprise Linux 5.3 x64");
+        _guestOsMap.put("Oracle Enterprise Linux 5.4 (32-bit)", "Oracle Enterprise Linux 5.4");
+        _guestOsMap.put("Oracle Enterprise Linux 5.4 (64-bit)", "Oracle Enterprise Linux 5.4 x64");
+        _guestOsMap.put("Red Hat Enterprise Linux 4.5 (32-bit)", "Red Hat Enterprise Linux 4.5");
+        _guestOsMap.put("Red Hat Enterprise Linux 4.6 (32-bit)", "Red Hat Enterprise Linux 4.6");
+        _guestOsMap.put("Red Hat Enterprise Linux 4.7 (32-bit)", "Red Hat Enterprise Linux 4.7");
+        _guestOsMap.put("Red Hat Enterprise Linux 4.8 (32-bit)", "Red Hat Enterprise Linux 4.8");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.0 (32-bit)", "Red Hat Enterprise Linux 5.0");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.0 (64-bit)", "Red Hat Enterprise Linux 5.0 x64");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.1 (32-bit)", "Red Hat Enterprise Linux 5.1");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.1 (64-bit)", "Red Hat Enterprise Linux 5.1 x64");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.2 (32-bit)", "Red Hat Enterprise Linux 5.2");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.2 (64-bit)", "Red Hat Enterprise Linux 5.2 x64");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.3 (32-bit)", "Red Hat Enterprise Linux 5.3");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.3 (64-bit)", "Red Hat Enterprise Linux 5.3 x64");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.4 (32-bit)", "Red Hat Enterprise Linux 5.4");
+        _guestOsMap.put("Red Hat Enterprise Linux 5.4 (64-bit)", "Red Hat Enterprise Linux 5.4 x64");
+        _guestOsMap.put("SUSE Linux Enterprise Server 9 SP4 (32-bit)", "SUSE Linux Enterprise Server 9 SP4");
+        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP1 (32-bit)", "SUSE Linux Enterprise Server 10 SP1");
+        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP1 (64-bit)", "SUSE Linux Enterprise Server 10 SP1 x64");
+        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP2 (32-bit)", "SUSE Linux Enterprise Server 10 SP2");
+        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP2 (64-bit)", "SUSE Linux Enterprise Server 10 SP2 x64");
+        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP3 (64-bit)", "Other install media");
+        _guestOsMap.put("SUSE Linux Enterprise Server 11 (32-bit)", "SUSE Linux Enterprise Server 11");
+        _guestOsMap.put("SUSE Linux Enterprise Server 11 (64-bit)", "SUSE Linux Enterprise Server 11 x64");
+        _guestOsMap.put("Windows 7 (32-bit)", "Windows 7");
+        _guestOsMap.put("Windows 7 (64-bit)", "Windows 7 x64");
+        _guestOsMap.put("Windows Server 2003 (32-bit)", "Windows Server 2003");
+        _guestOsMap.put("Windows Server 2003 (64-bit)", "Windows Server 2003 x64");
+        _guestOsMap.put("Windows Server 2008 (32-bit)", "Windows Server 2008");
+        _guestOsMap.put("Windows Server 2008 (64-bit)", "Windows Server 2008 x64");
+        _guestOsMap.put("Windows Server 2008 R2 (64-bit)", "Windows Server 2008 R2 x64");
+        _guestOsMap.put("Windows 2000 SP4 (32-bit)", "Windows 2000 SP4");
+        _guestOsMap.put("Windows Vista (32-bit)", "Windows Vista");
+        _guestOsMap.put("Windows XP SP2 (32-bit)", "Windows XP SP2");
+        _guestOsMap.put("Windows XP SP3 (32-bit)", "Windows XP SP3");
+        _guestOsMap.put("Other install media", "Other install media");
+
+        //access by index
+        _guestOsList.add("CentOS 4.5");
+        _guestOsList.add("CentOS 4.6");
+        _guestOsList.add("CentOS 4.7");
+        _guestOsList.add("CentOS 4.8");
+        _guestOsList.add("CentOS 5.0");
+        _guestOsList.add("CentOS 5.0 x64");
+        _guestOsList.add("CentOS 5.1");
+        _guestOsList.add("CentOS 5.1 x64");
+        _guestOsList.add("CentOS 5.2");
+        _guestOsList.add("CentOS 5.2 x64");
+        _guestOsList.add("CentOS 5.3");
+        _guestOsList.add("CentOS 5.3 x64");
+        _guestOsList.add("CentOS 5.4");
+        _guestOsList.add("CentOS 5.4 x64");
+        _guestOsList.add("Debian Lenny 5.0 (32-bit)");
+        _guestOsList.add("Oracle Enterprise Linux 5.0");
+        _guestOsList.add("Oracle Enterprise Linux 5.0 x64");
+        _guestOsList.add("Oracle Enterprise Linux 5.1");
+        _guestOsList.add("Oracle Enterprise Linux 5.1 x64");
+        _guestOsList.add("Oracle Enterprise Linux 5.2");
+        _guestOsList.add("Oracle Enterprise Linux 5.2 x64");
+        _guestOsList.add("Oracle Enterprise Linux 5.3");
+        _guestOsList.add("Oracle Enterprise Linux 5.3 x64");
+        _guestOsList.add("Oracle Enterprise Linux 5.4");
+        _guestOsList.add("Oracle Enterprise Linux 5.4 x64");
+        _guestOsList.add("Red Hat Enterprise Linux 4.5");
+        _guestOsList.add("Red Hat Enterprise Linux 4.6");
+        _guestOsList.add("Red Hat Enterprise Linux 4.7");
+        _guestOsList.add("Red Hat Enterprise Linux 4.8");
+        _guestOsList.add("Red Hat Enterprise Linux 5.0");
+        _guestOsList.add("Red Hat Enterprise Linux 5.0 x64");
+        _guestOsList.add("Red Hat Enterprise Linux 5.1");
+        _guestOsList.add("Red Hat Enterprise Linux 5.1 x64");
+        _guestOsList.add("Red Hat Enterprise Linux 5.2");
+        _guestOsList.add("Red Hat Enterprise Linux 5.2 x64");
+        _guestOsList.add("Red Hat Enterprise Linux 5.3");
+        _guestOsList.add("Red Hat Enterprise Linux 5.3 x64");
+        _guestOsList.add("Red Hat Enterprise Linux 5.4");
+        _guestOsList.add("Red Hat Enterprise Linux 5.4 x64");
+        _guestOsList.add("SUSE Linux Enterprise Server 9 SP4");
+        _guestOsList.add("SUSE Linux Enterprise Server 10 SP1");
+        _guestOsList.add("SUSE Linux Enterprise Server 10 SP1 x64");
+        _guestOsList.add("SUSE Linux Enterprise Server 10 SP2");
+        _guestOsList.add("SUSE Linux Enterprise Server 10 SP2 x64");
+        _guestOsList.add("Other install media");
+        _guestOsList.add("SUSE Linux Enterprise Server 11");
+        _guestOsList.add("SUSE Linux Enterprise Server 11 x64");
+        _guestOsList.add("Windows 7");
+        _guestOsList.add("Windows 7 x64");
+        _guestOsList.add("Windows Server 2003");
+        _guestOsList.add("Windows Server 2003 x64");
+        _guestOsList.add("Windows Server 2008");
+        _guestOsList.add("Windows Server 2008 x64");
+        _guestOsList.add("Windows Server 2008 R2 x64");
+        _guestOsList.add("Windows 2000 SP4");
+        _guestOsList.add("Windows Vista");
+        _guestOsList.add("Windows XP SP2");
+        _guestOsList.add("Windows XP SP3");
+        _guestOsList.add("Other install media");
+    }
+    
+    public static String getGuestOsType(String stdType) {
+        return _guestOsMap.get(stdType);
+    }
+
+    public static String getGuestOsType(long guestOsId) {
+        return _guestOsList.get((int) (guestOsId-1));
+    }
+}
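The new CitrixHelper above keeps two parallel structures: a map from display name to XenServer guest OS type, and a list addressed by a 1-based guestOsId (hence the `guestOsId - 1`). A reduced sketch of the same dual lookup, with only a couple of entries (class name and the entry subset are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;

public class GuestOsLookupDemo {
    private static final HashMap<String, String> byName = new HashMap<String, String>();
    private static final ArrayList<String> byId = new ArrayList<String>();

    static {
        // Display name -> XenServer guest OS type (a subset of CitrixHelper's table).
        byName.put("CentOS 4.5 (32-bit)", "CentOS 4.5");
        byName.put("CentOS 5.0 (64-bit)", "CentOS 5.0 x64");
        // List order must match the 1-based database ids.
        byId.add("CentOS 4.5");
        byId.add("CentOS 5.0 x64");
    }

    public static String getGuestOsType(String stdType) {
        return byName.get(stdType); // null when the display name is unknown
    }

    public static String getGuestOsType(long guestOsId) {
        // Throws IndexOutOfBoundsException for an id outside the table.
        return byId.get((int) (guestOsId - 1));
    }

    public static void main(String[] args) {
        System.out.println(getGuestOsType("CentOS 5.0 (64-bit)")); // CentOS 5.0 x64
        System.out.println(getGuestOsType(1L));                    // CentOS 4.5
    }
}
```

The design choice here is positional coupling: the list duplicates the map's values so that a numeric database id can be resolved without a reverse map, at the cost of keeping the two structures in the same order by hand.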
diff --git a/core/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java b/core/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
index 069e31b26c9..80521c5910e 100644
--- a/core/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
+++ b/core/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
@@ -1,5 +1,5 @@
 /**
-: *  Copyright (C) 2010 Cloud.com, Inc.  All rights reserved.
+ *  Copyright (C) 2010 Cloud.com, Inc.  All rights reserved.
  * 
  * This software is licensed under the GNU General Public License v3 or later.
  * 
@@ -234,6 +234,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
     protected int _wait;
     protected IAgentControl _agentControl;
     protected boolean _isRemoteAgent = false;
+    
 
     protected final XenServerHost _host = new XenServerHost();
 
@@ -270,69 +271,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
         s_statesTable.put(Types.VmPowerState.UNKNOWN, State.Unknown);
         s_statesTable.put(Types.VmPowerState.UNRECOGNIZED, State.Unknown);
     }
-    private static HashMap _guestOsType = new HashMap(50);
-    static {
-        _guestOsType.put("CentOS 4.5 (32-bit)", "CentOS 4.5");
-        _guestOsType.put("CentOS 4.6 (32-bit)", "CentOS 4.6");
-        _guestOsType.put("CentOS 4.7 (32-bit)", "CentOS 4.7");
-        _guestOsType.put("CentOS 4.8 (32-bit)", "CentOS 4.8");
-        _guestOsType.put("CentOS 5.0 (32-bit)", "CentOS 5.0");
-        _guestOsType.put("CentOS 5.0 (64-bit)", "CentOS 5.0 x64");
-        _guestOsType.put("CentOS 5.1 (32-bit)", "CentOS 5.1");
-        _guestOsType.put("CentOS 5.1 (64-bit)", "CentOS 5.1 x64");
-        _guestOsType.put("CentOS 5.2 (32-bit)", "CentOS 5.2");
-        _guestOsType.put("CentOS 5.2 (64-bit)", "CentOS 5.2 x64");
-        _guestOsType.put("CentOS 5.3 (32-bit)", "CentOS 5.3");
-        _guestOsType.put("CentOS 5.3 (64-bit)", "CentOS 5.3 x64");
-        _guestOsType.put("CentOS 5.4 (32-bit)", "CentOS 5.4");
-        _guestOsType.put("CentOS 5.4 (64-bit)", "CentOS 5.4 x64");
-        _guestOsType.put("Debian Lenny 5.0 (32-bit)", "Debian Lenny 5.0");
-        _guestOsType.put("Oracle Enterprise Linux 5.0 (32-bit)", "Oracle Enterprise Linux 5.0");
-        _guestOsType.put("Oracle Enterprise Linux 5.0 (64-bit)", "Oracle Enterprise Linux 5.0 x64");
-        _guestOsType.put("Oracle Enterprise Linux 5.1 (32-bit)", "Oracle Enterprise Linux 5.1");
-        _guestOsType.put("Oracle Enterprise Linux 5.1 (64-bit)", "Oracle Enterprise Linux 5.1 x64");
-        _guestOsType.put("Oracle Enterprise Linux 5.2 (32-bit)", "Oracle Enterprise Linux 5.2");
-        _guestOsType.put("Oracle Enterprise Linux 5.2 (64-bit)", "Oracle Enterprise Linux 5.2 x64");
-        _guestOsType.put("Oracle Enterprise Linux 5.3 (32-bit)", "Oracle Enterprise Linux 5.3");
-        _guestOsType.put("Oracle Enterprise Linux 5.3 (64-bit)", "Oracle Enterprise Linux 5.3 x64");
-        _guestOsType.put("Oracle Enterprise Linux 5.4 (32-bit)", "Oracle Enterprise Linux 5.4");
-        _guestOsType.put("Oracle Enterprise Linux 5.4 (64-bit)", "Oracle Enterprise Linux 5.4 x64");
-        _guestOsType.put("Red Hat Enterprise Linux 4.5 (32-bit)", "Red Hat Enterprise Linux 4.5");
-        _guestOsType.put("Red Hat Enterprise Linux 4.6 (32-bit)", "Red Hat Enterprise Linux 4.6");
-        _guestOsType.put("Red Hat Enterprise Linux 4.7 (32-bit)", "Red Hat Enterprise Linux 4.7");
-        _guestOsType.put("Red Hat Enterprise Linux 4.8 (32-bit)", "Red Hat Enterprise Linux 4.8");
-        _guestOsType.put("Red Hat Enterprise Linux 5.0 (32-bit)", "Red Hat Enterprise Linux 5.0");
-        _guestOsType.put("Red Hat Enterprise Linux 5.0 (64-bit)", "Red Hat Enterprise Linux 5.0 x64");
-        _guestOsType.put("Red Hat Enterprise Linux 5.1 (32-bit)", "Red Hat Enterprise Linux 5.1");
-        _guestOsType.put("Red Hat Enterprise Linux 5.1 (64-bit)", "Red Hat Enterprise Linux 5.1 x64");
-        _guestOsType.put("Red Hat Enterprise Linux 5.2 (32-bit)", "Red Hat Enterprise Linux 5.2");
-        _guestOsType.put("Red Hat Enterprise Linux 5.2 (64-bit)", "Red Hat Enterprise Linux 5.2 x64");
-        _guestOsType.put("Red Hat Enterprise Linux 5.3 (32-bit)", "Red Hat Enterprise Linux 5.3");
-        _guestOsType.put("Red Hat Enterprise Linux 5.3 (64-bit)", "Red Hat Enterprise Linux 5.3 x64");
-        _guestOsType.put("Red Hat Enterprise Linux 5.4 (32-bit)", "Red Hat Enterprise Linux 5.4");
-        _guestOsType.put("Red Hat Enterprise Linux 5.4 (64-bit)", "Red Hat Enterprise Linux 5.4 x64");
-        _guestOsType.put("SUSE Linux Enterprise Server 9 SP4 (32-bit)", "SUSE Linux Enterprise Server 9 SP4");
-        _guestOsType.put("SUSE Linux Enterprise Server 10 SP1 (32-bit)", "SUSE Linux Enterprise Server 10 SP1");
-        _guestOsType.put("SUSE Linux Enterprise Server 10 SP1 (64-bit)", "SUSE Linux Enterprise Server 10 SP1 x64");
-        _guestOsType.put("SUSE Linux Enterprise Server 10 SP2 (32-bit)", "SUSE Linux Enterprise Server 10 SP2");
-        _guestOsType.put("SUSE Linux Enterprise Server 10 SP2 (64-bit)", "SUSE Linux Enterprise Server 10 SP2 x64");
-        _guestOsType.put("SUSE Linux Enterprise Server 10 SP3 (64-bit)", "Other install media");
-        _guestOsType.put("SUSE Linux Enterprise Server 11 (32-bit)", "SUSE Linux Enterprise Server 11");
-        _guestOsType.put("SUSE Linux Enterprise Server 11 (64-bit)", "SUSE Linux Enterprise Server 11 x64");
-        _guestOsType.put("Windows 7 (32-bit)", "Windows 7");
-        _guestOsType.put("Windows 7 (64-bit)", "Windows 7 x64");
-        _guestOsType.put("Windows Server 2003 (32-bit)", "Windows Server 2003");
-        _guestOsType.put("Windows Server 2003 (64-bit)", "Windows Server 2003 x64");
-        _guestOsType.put("Windows Server 2008 (32-bit)", "Windows Server 2008");
-        _guestOsType.put("Windows Server 2008 (64-bit)", "Windows Server 2008 x64");
-        _guestOsType.put("Windows Server 2008 R2 (64-bit)", "Windows Server 2008 R2 x64");
-        _guestOsType.put("Windows 2000 SP4 (32-bit)", "Windows 2000 SP4");
-        _guestOsType.put("Windows Vista (32-bit)", "Windows Vista");
-        _guestOsType.put("Windows XP SP2 (32-bit)", "Windows XP SP2");
-        _guestOsType.put("Windows XP SP3 (32-bit)", "Windows XP SP3");
-        _guestOsType.put("Other install media", "Other install media");
-      
-    }
+    
     
     protected boolean isRefNull(XenAPIObject object) {
         return (object == null || object.toWireString().equals("OpaqueRef:NULL"));
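The large guest-OS display-name to XenServer-template-name table removed above is referenced later in this patch through CitrixHelper.getGuestOsType(...). A minimal sketch of such a helper, assuming a simple static map (the class name comes from the patch; this body is illustrative, not the actual CloudStack source):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: the real CitrixHelper holds the full
// display-name -> XenServer-template-name table removed above.
public final class CitrixHelper {
    private static final Map<String, String> GUEST_OS_TYPE = new HashMap<String, String>();

    static {
        GUEST_OS_TYPE.put("CentOS 5.4 (32-bit)", "CentOS 5.4");
        GUEST_OS_TYPE.put("CentOS 5.4 (64-bit)", "CentOS 5.4 x64");
        GUEST_OS_TYPE.put("Other install media", "Other install media");
        // ... remaining entries elided
    }

    private CitrixHelper() {}

    // Looks up the XenServer template name for a guest OS display name.
    public static String getGuestOsType(String stdType) {
        return GUEST_OS_TYPE.get(stdType);
    }
}
```

The patch also calls an id-based overload, CitrixHelper.getGuestOsType(guestOsId); the sketch shows only the string-keyed lookup.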
@@ -1171,7 +1110,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             bootArgs += " pod=" + _pod;
             bootArgs += " localgw=" + _localGateway;
             String result = startSystemVM(vmName, storage.getVlanId(), network, cmd.getVolumes(), bootArgs, storage.getGuestMacAddress(), storage.getGuestIpAddress(), storage
-                    .getPrivateMacAddress(), storage.getPublicMacAddress(), cmd.getProxyCmdPort(), storage.getRamSize());
+                    .getPrivateMacAddress(), storage.getPublicMacAddress(), cmd.getProxyCmdPort(), storage.getRamSize(), storage.getGuestOSId());
             if (result == null) {
                 return new StartSecStorageVmAnswer(cmd);
             }
@@ -2078,6 +2017,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
 
             /* Does the template exist in primary storage pool? If yes, no copy */
             VDI vmtmpltvdi = null;
+            VDI snapshotvdi = null;
 
             Set<VDI> vdis = VDI.getByNameLabel(conn, "Template " + cmd.getName());
 
@@ -2110,19 +2050,21 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
                     return new DownloadAnswer(null, 0, msg, com.cloud.storage.VMTemplateStorageResourceAssoc.Status.DOWNLOAD_ERROR, "", "", 0);
                 }
                 vmtmpltvdi = cloudVDIcopy(tmpltvdi, poolsr);
-
-                vmtmpltvdi.setNameLabel(conn, "Template " + cmd.getName());
+                snapshotvdi = vmtmpltvdi.snapshot(conn, new HashMap<String, String>());
+                vmtmpltvdi.destroy(conn);
+                snapshotvdi.setNameLabel(conn, "Template " + cmd.getName());
                 // vmtmpltvdi.setNameDescription(conn, cmd.getDescription());
-                uuid = vmtmpltvdi.getUuid(conn);
+                uuid = snapshotvdi.getUuid(conn);
+                vmtmpltvdi = snapshotvdi;
 
             } else
                 uuid = vmtmpltvdi.getUuid(conn);
 
             // Determine the size of the template
-            long createdSize = vmtmpltvdi.getVirtualSize(conn);
+            long phySize = vmtmpltvdi.getPhysicalUtilisation(conn);
 
             DownloadAnswer answer = new DownloadAnswer(null, 100, cmd, com.cloud.storage.VMTemplateStorageResourceAssoc.Status.DOWNLOADED, uuid, uuid);
-            answer.setTemplateSize(createdSize);
+            answer.setTemplateSize(phySize);
 
             return answer;
 
@@ -2593,9 +2535,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
         return vm;
     }
 
-    protected String getGuestOsType(String stdType) {
-        return _guestOsType.get(stdType);
-    }
+
 
     public boolean joinPool(String address, String username, String password) {
         Connection conn = getConnection();
@@ -3139,7 +3079,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             String bootArgs = cmd.getBootArgs();
 
             String result = startSystemVM(vmName, router.getVlanId(), network, cmd.getVolumes(), bootArgs, router.getGuestMacAddress(), router.getPrivateIpAddress(), router
-                    .getPrivateMacAddress(), router.getPublicMacAddress(), 3922, router.getRamSize());
+                    .getPrivateMacAddress(), router.getPublicMacAddress(), 3922, router.getRamSize(), router.getGuestOSId());
             if (result == null) {
                 networkUsage(router.getPrivateIpAddress(), "create", null);
                 return new StartRouterAnswer(cmd);
@@ -3154,7 +3094,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
     }
 
     protected String startSystemVM(String vmName, String vlanId, Network nw0, List vols, String bootArgs, String guestMacAddr, String privateIp, String privateMacAddr,
-            String publicMacAddr, int cmdPort, long ramSize) {
+            String publicMacAddr, int cmdPort, long ramSize, long guestOsId) {
 
     	setupLinkLocalNetwork();
         VM vm = null;
@@ -3172,14 +3112,12 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
 
             Ternary mount = mounts.get(0);
 
-            Set templates = VM.getByNameLabel(conn, "CentOS 5.3");
+            Set<VM> templates = VM.getByNameLabel(conn, CitrixHelper.getGuestOsType(guestOsId));
             if (templates.size() == 0) {
-                templates = VM.getByNameLabel(conn, "CentOS 5.3 (64-bit)");
-                if (templates.size() == 0) {
-                    String msg = " can not find template CentOS 5.3 ";
-                    s_logger.warn(msg);
-                    return msg;
-                }
+                String msg = "Cannot find systemvm template " + CitrixHelper.getGuestOsType(guestOsId);
+                s_logger.warn(msg);
+                return msg;
             }
 
             VM template = templates.iterator().next();
@@ -3340,7 +3278,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             bootArgs += " localgw=" + _localGateway;
 
             String result = startSystemVM(vmName, proxy.getVlanId(), network, cmd.getVolumes(), bootArgs, proxy.getGuestMacAddress(), proxy.getGuestIpAddress(), proxy
-                    .getPrivateMacAddress(), proxy.getPublicMacAddress(), cmd.getProxyCmdPort(), proxy.getRamSize());
+                    .getPrivateMacAddress(), proxy.getPublicMacAddress(), cmd.getProxyCmdPort(), proxy.getRamSize(), proxy.getGuestOSId());
             if (result == null) {
                 return new StartConsoleProxyAnswer(cmd);
             }
@@ -3477,8 +3415,12 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             return false;
         return true;
     }
-
     protected String callHostPlugin(String plugin, String cmd, String... params) {
+        // the default timeout is 300 seconds
+        return callHostPluginWithTimeOut(plugin, cmd, 300, params);
+    }
+
+    protected String callHostPluginWithTimeOut(String plugin, String cmd, int timeout, String... params) {
         Map<String, String> args = new HashMap<String, String>();
         Session slaveSession = null;
         Connection slaveConn = null;
@@ -3490,7 +3432,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
                 // TODO Auto-generated catch block
                 e.printStackTrace();
             }
-            slaveConn = new Connection(slaveUrl, 1800);
+            slaveConn = new Connection(slaveUrl, timeout);
             slaveSession = Session.slaveLocalLoginWithPassword(slaveConn, _username, _password);
 
             if (s_logger.isDebugEnabled()) {
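The hunk above turns the hard-coded 1800-second XenAPI connection timeout into a parameter, with callHostPlugin kept as a thin wrapper that supplies a 300-second default (long-running snapshot and template operations later in the patch pass 110*60 seconds). Reduced to a self-contained sketch; the method names mirror the patch, but the bodies are illustrative stand-ins for the real XenAPI plumbing:

```java
// Sketch of the overload-with-default pattern used in the patch:
// the short-named method simply forwards to the configurable one.
public class PluginCaller {
    static final int DEFAULT_TIMEOUT_SECONDS = 300;

    String callHostPlugin(String plugin, String cmd, String... params) {
        return callHostPluginWithTimeOut(plugin, cmd, DEFAULT_TIMEOUT_SECONDS, params);
    }

    String callHostPluginWithTimeOut(String plugin, String cmd, int timeout, String... params) {
        // The real code opens a XenAPI slave connection with this timeout;
        // here we just report what would be used.
        return plugin + "/" + cmd + " timeout=" + timeout + "s";
    }
}
```

Callers that expect to run long (snapshot backup, private-template creation) use the explicit overload; everyone else keeps calling the short form unchanged.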
@@ -4451,9 +4393,8 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             SR.Record srr = sr.getRecord(conn);
             Set<PBD> pbds = sr.getPBDs(conn);
             if (pbds.size() == 0) {
-                String msg = "There is no PBDs for this SR: " + _host.uuid;
+                String msg = "There are no PBDs for this SR: " + srr.nameLabel + " on host: " + _host.uuid;
                 s_logger.warn(msg);
-                removeSR(sr);
                 return false;
             }
             Set<Host> hosts = null;
@@ -4507,15 +4448,11 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
 
     protected Answer execute(ModifyStoragePoolCommand cmd) {
         StoragePoolVO pool = cmd.getPool();
+        StoragePoolTO poolTO = new StoragePoolTO(pool);
         try {
             Connection conn = getConnection();
 
-            SR sr = getStorageRepository(conn, pool);
-            if (!checkSR(sr)) {
-                String msg = "ModifyStoragePoolCommand checkSR failed! host:" + _host.uuid + " pool: " + pool.getName() + pool.getHostAddress() + pool.getPath();
-                s_logger.warn(msg);
-                return new Answer(cmd, false, msg);
-            }
+            SR sr = getStorageRepository(conn, poolTO);
             long capacity = sr.getPhysicalSize(conn);
             long available = capacity - sr.getPhysicalUtilisation(conn);
             if (capacity == -1) {
@@ -4540,14 +4477,10 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
 
     protected Answer execute(DeleteStoragePoolCommand cmd) {
         StoragePoolVO pool = cmd.getPool();
+        StoragePoolTO poolTO = new StoragePoolTO(pool);
         try {
             Connection conn = getConnection();
-            SR sr = getStorageRepository(conn, pool);
-            if (!checkSR(sr)) {
-                String msg = "DeleteStoragePoolCommand checkSR failed! host:" + _host.uuid + " pool: " + pool.getName() + pool.getHostAddress() + pool.getPath();
-                s_logger.warn(msg);
-                return new Answer(cmd, false, msg);
-            }
+            SR sr = getStorageRepository(conn, poolTO);
             sr.setNameLabel(conn, pool.getUuid());
             sr.setNameDescription(conn, pool.getName());
 
@@ -4957,119 +4890,10 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             s_logger.warn(msg, e);
             throw new CloudRuntimeException(msg, e);
         }
-
     }
 
-    protected SR getIscsiSR(Connection conn, StoragePoolVO pool) {
-
-        synchronized (pool.getUuid().intern()) {
-            Map deviceConfig = new HashMap();
-            try {
-                String target = pool.getHostAddress().trim();
-                String path = pool.getPath().trim();
-                if (path.endsWith("/")) {
-                    path = path.substring(0, path.length() - 1);
-                }
-
-                String tmp[] = path.split("/");
-                if (tmp.length != 3) {
-                    String msg = "Wrong iscsi path " + pool.getPath() + " it should be /targetIQN/LUN";
-                    s_logger.warn(msg);
-                    throw new CloudRuntimeException(msg);
-                }
-                String targetiqn = tmp[1].trim();
-                String lunid = tmp[2].trim();
-                String scsiid = "";
-
-                Set srs = SR.getByNameLabel(conn, pool.getUuid());
-                for (SR sr : srs) {
-                    if (!SRType.LVMOISCSI.equals(sr.getType(conn)))
-                        continue;
-
-                    Set pbds = sr.getPBDs(conn);
-                    if (pbds.isEmpty())
-                        continue;
-
-                    PBD pbd = pbds.iterator().next();
-
-                    Map dc = pbd.getDeviceConfig(conn);
-
-                    if (dc == null)
-                        continue;
-
-                    if (dc.get("target") == null)
-                        continue;
-
-                    if (dc.get("targetIQN") == null)
-                        continue;
-
-                    if (dc.get("lunid") == null)
-                        continue;
-
-                    if (target.equals(dc.get("target")) && targetiqn.equals(dc.get("targetIQN")) && lunid.equals(dc.get("lunid"))) {
-                        return sr;
-                    }
-
-                }
-                deviceConfig.put("target", target);
-                deviceConfig.put("targetIQN", targetiqn);
-
-                Host host = Host.getByUuid(conn, _host.uuid);
-                SR sr = null;
-                try {
-                    sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.LVMOISCSI.toString(), "user", true, new HashMap());
-                } catch (XenAPIException e) {
-                    String errmsg = e.toString();
-                    if (errmsg.contains("SR_BACKEND_FAILURE_107")) {
-                        String lun[] = errmsg.split("<LUN>");
-                        boolean found = false;
-                        for (int i = 1; i < lun.length; i++) {
-                            int blunindex = lun[i].indexOf("<LUNid>") + 7;
-                            int elunindex = lun[i].indexOf("</LUNid>");
-                            String ilun = lun[i].substring(blunindex, elunindex);
-                            ilun = ilun.trim();
-                            if (ilun.equals(lunid)) {
-                                int bscsiindex = lun[i].indexOf("<SCSIid>") + 8;
-                                int escsiindex = lun[i].indexOf("</SCSIid>");
-                                scsiid = lun[i].substring(bscsiindex, escsiindex);
-                                scsiid = scsiid.trim();
-                                found = true;
-                                break;
-                            }
-                        }
-                        if (!found) {
-                            String msg = "can not find LUN " + lunid + " in " + errmsg;
-                            s_logger.warn(msg);
-                            throw new CloudRuntimeException(msg);
-                        }
-                    } else {
-                        String msg = "Unable to create Iscsi SR  " + deviceConfig + " due to  " + e.toString();
-                        s_logger.warn(msg, e);
-                        throw new CloudRuntimeException(msg, e);
-                    }
-                }
-                deviceConfig.put("SCSIid", scsiid);
-                sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.LVMOISCSI.toString(), "user", true, new HashMap());
-                if( !checkSR(sr) ) {
-                    throw new Exception("no attached PBD");
-                }           
-                sr.scan(conn);
-                return sr;
-
-            } catch (XenAPIException e) {
-                String msg = "Unable to create Iscsi SR  " + deviceConfig + " due to  " + e.toString();
-                s_logger.warn(msg, e);
-                throw new CloudRuntimeException(msg, e);
-            } catch (Exception e) {
-                String msg = "Unable to create Iscsi SR  " + deviceConfig + " due to  " + e.getMessage();
-                s_logger.warn(msg, e);
-                throw new CloudRuntimeException(msg, e);
-            }
-        }
-    }
-
-    protected SR getIscsiSR(Connection conn, StoragePoolTO pool) {
-
+    protected SR getIscsiSR(StoragePoolTO pool) {
+        Connection conn = getConnection();
         synchronized (pool.getUuid().intern()) {
             Map<String, String> deviceConfig = new HashMap<String, String>();
             try {
@@ -5118,6 +4942,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
                         if (checkSR(sr)) {
                             return sr;
                         }
+                        throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + " on host: " + _host.uuid);
                     }
 
                 }
@@ -5177,13 +5002,12 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
         }
     }
 
-    protected SR getNfsSR(StoragePoolVO pool) {
+    protected SR getNfsSR(StoragePoolTO pool) {
         Connection conn = getConnection();
 
         Map<String, String> deviceConfig = new HashMap<String, String>();
         try {
-
-            String server = pool.getHostAddress();
+            String server = pool.getHost();
             String serverpath = pool.getPath();
             serverpath = serverpath.replace("//", "/");
             Set<SR> srs = SR.getAll(conn);
@@ -5212,59 +5036,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
                     if (checkSR(sr)) {
                         return sr;
                     }
-                }
-
-            }
-
-            deviceConfig.put("server", server);
-            deviceConfig.put("serverpath", serverpath);
-            Host host = Host.getByUuid(conn, _host.uuid);
-            SR sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.NFS.toString(), "user", true, new HashMap());
-            sr.scan(conn);
-            return sr;
-
-        } catch (XenAPIException e) {
-            String msg = "Unable to create NFS SR  " + deviceConfig + " due to  " + e.toString();
-            s_logger.warn(msg, e);
-            throw new CloudRuntimeException(msg, e);
-        } catch (Exception e) {
-            String msg = "Unable to create NFS SR  " + deviceConfig + " due to  " + e.getMessage();
-            s_logger.warn(msg);
-            throw new CloudRuntimeException(msg, e);
-        }
-    }
-
-    protected SR getNfsSR(Connection conn, StoragePoolTO pool) {
-        Map deviceConfig = new HashMap();
-
-        String server = pool.getHost();
-        String serverpath = pool.getPath();
-        serverpath = serverpath.replace("//", "/");
-        try {
-            Set srs = SR.getAll(conn);
-            for (SR sr : srs) {
-                if (!SRType.NFS.equals(sr.getType(conn)))
-                    continue;
-
-                Set pbds = sr.getPBDs(conn);
-                if (pbds.isEmpty())
-                    continue;
-
-                PBD pbd = pbds.iterator().next();
-
-                Map dc = pbd.getDeviceConfig(conn);
-
-                if (dc == null)
-                    continue;
-
-                if (dc.get("server") == null)
-                    continue;
-
-                if (dc.get("serverpath") == null)
-                    continue;
-
-                if (server.equals(dc.get("server")) && serverpath.equals(dc.get("serverpath"))) {
-                    return sr;
+                    throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + " on host: " + _host.uuid);
                 }
 
             }
@@ -5351,6 +5123,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
     public CopyVolumeAnswer execute(final CopyVolumeCommand cmd) {
         String volumeUUID = cmd.getVolumePath();
         StoragePoolVO pool = cmd.getPool();
+        StoragePoolTO poolTO = new StoragePoolTO(pool);
         String secondaryStorageURL = cmd.getSecondaryStorageURL();
 
         URI uri = null;
@@ -5403,7 +5176,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
                 }
 
                 // Copy the volume to the primary storage pool
-                primaryStoragePool = getStorageRepository(conn, pool);
+                primaryStoragePool = getStorageRepository(conn, poolTO);
                 destVolume = cloudVDIcopy(srcVolume, primaryStoragePool);
             }
 
@@ -6277,40 +6050,6 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.getMessage(), e);
         }
 
-        if (srs.size() > 1) {
-            throw new CloudRuntimeException("More than one storage repository was found for pool with uuid: " + pool.getUuid());
-        }
-
-        if (srs.size() == 1) {
-            SR sr = srs.iterator().next();
-            if (s_logger.isDebugEnabled()) {
-                s_logger.debug("SR retrieved for " + pool.getId() + " is mapped to " + sr.toString());
-            }
-
-            if (checkSR(sr)) {
-                return sr;
-            }
-        }
-
-        if (pool.getType() == StoragePoolType.NetworkFilesystem)
-            return getNfsSR(conn, pool);
-        else if (pool.getType() == StoragePoolType.IscsiLUN)
-            return getIscsiSR(conn, pool);
-        else
-            throw new CloudRuntimeException("The pool type: " + pool.getType().name() + " is not supported.");
-
-    }
-
-    protected SR getStorageRepository(Connection conn, StoragePoolVO pool) {
-        Set srs;
-        try {
-            srs = SR.getByNameLabel(conn, pool.getUuid());
-        } catch (XenAPIException e) {
-            throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.toString(), e);
-        } catch (Exception e) {
-            throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.getMessage(), e);
-        }
-
         if (srs.size() > 1) {
             throw new CloudRuntimeException("More than one storage repository was found for pool with uuid: " + pool.getUuid());
         } else if (srs.size() == 1) {
@@ -6322,15 +6061,15 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             if (checkSR(sr)) {
                 return sr;
             }
-            throw new CloudRuntimeException("Check this SR failed");
+            throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + " on host: " + _host.uuid);
         } else {
 	
-	        if (pool.getPoolType() == StoragePoolType.NetworkFilesystem)
+	        if (pool.getType() == StoragePoolType.NetworkFilesystem)
 	            return getNfsSR(pool);
-	        else if (pool.getPoolType() == StoragePoolType.IscsiLUN)
-	            return getIscsiSR(conn, pool);
+	        else if (pool.getType() == StoragePoolType.IscsiLUN)
+	            return getIscsiSR(pool);
 	        else
-	            throw new CloudRuntimeException("The pool type: " + pool.getPoolType().name() + " is not supported.");
+	            throw new CloudRuntimeException("The pool type: " + pool.getType().name() + " is not supported.");
         }
 
     }
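After this patch, getStorageRepository is a single method: it looks the SR up by the pool UUID name label, fails on duplicates, verifies a found SR with checkSR, and otherwise creates one according to the pool type. That control flow as a standalone sketch (SRs reduced to strings here; the real method works with XenAPI SR objects and throws CloudRuntimeException):

```java
import java.util.List;

// Illustrative control-flow sketch of the consolidated SR lookup:
// an existing, healthy SR wins; otherwise create one by pool type.
public class SrLookup {
    String getStorageRepository(List<String> existing, String poolType) {
        if (existing.size() > 1) {
            throw new RuntimeException("More than one storage repository was found");
        } else if (existing.size() == 1) {
            return existing.get(0); // the real code calls checkSR(sr) here
        } else if ("NetworkFilesystem".equals(poolType)) {
            return "nfs-sr";   // real code: getNfsSR(pool)
        } else if ("IscsiLUN".equals(poolType)) {
            return "iscsi-sr"; // real code: getIscsiSR(pool)
        } else {
            throw new RuntimeException("The pool type: " + poolType + " is not supported.");
        }
    }
}
```

This also explains why the StoragePoolVO-typed duplicate above could be deleted: callers now build a StoragePoolTO and share the one code path.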
@@ -6405,7 +6144,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             checksum = "";
         }
 
-        String result = callHostPlugin("vmopsSnapshot", "post_create_private_template", "remoteTemplateMountPath", remoteTemplateMountPath, "templateDownloadFolder", templateDownloadFolder,
+        String result = callHostPluginWithTimeOut("vmopsSnapshot", "post_create_private_template", 110*60, "remoteTemplateMountPath", remoteTemplateMountPath, "templateDownloadFolder", templateDownloadFolder,
                 "templateInstallFolder", templateInstallFolder, "templateFilename", templateFilename, "templateName", templateName, "templateDescription", templateDescription,
                 "checksum", checksum, "virtualSize", String.valueOf(virtualSize), "templateId", String.valueOf(templateId));
 
@@ -6443,7 +6182,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
 
         // Each argument is put in a separate line for readability.
         // Using more lines does not harm the environment.
-        String results = callHostPlugin("vmopsSnapshot", "backupSnapshot", "primaryStorageSRUuid", primaryStorageSRUuid, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId",
+        String results = callHostPluginWithTimeOut("vmopsSnapshot", "backupSnapshot", 110*60, "primaryStorageSRUuid", primaryStorageSRUuid, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId",
                 volumeId.toString(), "secondaryStorageMountPath", secondaryStorageMountPath, "snapshotUuid", snapshotUuid, "prevSnapshotUuid", prevSnapshotUuid, "prevBackupUuid",
                 prevBackupUuid, "isFirstSnapshotOfRootVolume", isFirstSnapshotOfRootVolume.toString(), "isISCSI", isISCSI.toString());
 
@@ -6546,7 +6285,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
 
         String failureString = "Could not create volume from " + backedUpSnapshotUuid;
         templatePath = (templatePath == null) ? "" : templatePath;
-        String results = callHostPlugin("vmopsSnapshot", "createVolumeFromSnapshot", "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId", volumeId.toString(),
+        String results = callHostPluginWithTimeOut("vmopsSnapshot", "createVolumeFromSnapshot", 110*60, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId", volumeId.toString(),
                 "secondaryStorageMountPath", secondaryStorageMountPath, "backedUpSnapshotUuid", backedUpSnapshotUuid, "templatePath", templatePath, "templateDownloadFolder",
                 templateDownloadFolder, "isISCSI", isISCSI.toString());
 
@@ -6699,4 +6438,8 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
             return virtualSize;
         }
     }
+
+	protected String getGuestOsType(String stdType) {
+		return CitrixHelper.getGuestOsType(stdType);
+	}
 }
diff --git a/core/src/com/cloud/server/Criteria.java b/core/src/com/cloud/server/Criteria.java
index f2d7090341c..6d41b967be9 100644
--- a/core/src/com/cloud/server/Criteria.java
+++ b/core/src/com/cloud/server/Criteria.java
@@ -76,7 +76,8 @@ public class Criteria {
     public static final String TARGET_IQN = "targetiqn";
     public static final String SCOPE = "scope";
     public static final String NETWORKGROUP = "networkGroup";
-
+    public static final String GROUP = "group";
+    public static final String EMPTY_GROUP = "emptyGroup";
 
 	public Criteria(String orderBy, Boolean ascending, Long offset, Long limit) {
 		this.offset = offset;
diff --git a/core/src/com/cloud/server/ManagementServer.java b/core/src/com/cloud/server/ManagementServer.java
old mode 100644
new mode 100755
index ab380004dfe..a96fab84d37
--- a/core/src/com/cloud/server/ManagementServer.java
+++ b/core/src/com/cloud/server/ManagementServer.java
@@ -615,8 +615,8 @@ public interface ManagementServer {
      * @volumeId
      * @throws InvalidParameterValueException, InternalErrorException
      */
-    void detachVolumeFromVM(long volumeId, long startEventId) throws InternalErrorException;
-    long detachVolumeFromVMAsync(long volumeId) throws InvalidParameterValueException;
+    void detachVolumeFromVM(long volumeId, long startEventId, long deviceId, long instanceId) throws InternalErrorException;
+    long detachVolumeFromVMAsync(long volumeId, long deviceId, long instanceId) throws InvalidParameterValueException;
     
     /**
      * Attaches an ISO to the virtual CDROM device of the specified VM. Will fail if the VM already has an ISO mounted.
@@ -2186,7 +2186,17 @@ public interface ManagementServer {
 	boolean validateCustomVolumeSizeRange(long size) throws InvalidParameterValueException;
 	
 	boolean checkIfMaintenable(long hostId);
+	/**
+	 * Extracts the template to a particular location.
+	 * @param url - the URL to which the template should be extracted
+	 * @param templateId - the id of the template
+	 * @param zoneId - zone id of the template
+	 */
+	void extractTemplate(String url, Long templateId, Long zoneId) throws URISyntaxException;
 
     Map listCapabilities();
-	GuestOSCategoryVO getGuestOsCategory(Long guestOsId);
+	GuestOSVO getGuestOs(Long guestOsId);
+	VolumeVO findVolumeByInstanceAndDeviceId(long instanceId, long deviceId);
+	VolumeVO getRootVolume(Long instanceId);
 }
diff --git a/core/src/com/cloud/storage/DiskOfferingVO.java b/core/src/com/cloud/storage/DiskOfferingVO.java
index eaed5fa3d4b..48f817a2f9f 100644
--- a/core/src/com/cloud/storage/DiskOfferingVO.java
+++ b/core/src/com/cloud/storage/DiskOfferingVO.java
@@ -234,5 +234,9 @@ public class DiskOfferingVO implements DiskOffering {
         buf.delete(buf.length() - 1, buf.length());
         
         setTags(buf.toString());
-    }
+    }
+
+	public void setUseLocalStorage(boolean useLocalStorage) {
+		this.useLocalStorage = useLocalStorage;
+	}
 }
diff --git a/core/src/com/cloud/storage/StorageResource.java b/core/src/com/cloud/storage/StorageResource.java
index 7ad29f10c83..209e82d7325 100755
--- a/core/src/com/cloud/storage/StorageResource.java
+++ b/core/src/com/cloud/storage/StorageResource.java
@@ -53,12 +53,14 @@ import com.cloud.agent.api.storage.ShareAnswer;
 import com.cloud.agent.api.storage.ShareCommand;
 import com.cloud.agent.api.storage.UpgradeDiskAnswer;
 import com.cloud.agent.api.storage.UpgradeDiskCommand;
+import com.cloud.agent.api.storage.UploadCommand;
 import com.cloud.host.Host;
 import com.cloud.resource.ServerResource;
 import com.cloud.resource.ServerResourceBase;
 import com.cloud.storage.Storage.StoragePoolType;
 import com.cloud.storage.template.DownloadManager;
 import com.cloud.storage.template.TemplateInfo;
+import com.cloud.storage.template.UploadManager;
 import com.cloud.utils.NumbersUtil;
 import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.utils.script.OutputInterpreter;
@@ -112,6 +114,7 @@ public abstract class StorageResource extends ServerResourceBase implements Serv
 	protected String _zfsScriptsDir;
 
 	protected DownloadManager _downloadManager;
+	protected UploadManager _uploadManager;
 
 	protected Map _volumeHourlySnapshotRequests = new HashMap();
     protected Map _volumeDailySnapshotRequests = new HashMap();
@@ -127,6 +130,8 @@ public abstract class StorageResource extends ServerResourceBase implements Serv
         	return execute((PrimaryStorageDownloadCommand)cmd);
         } else if (cmd instanceof DownloadCommand) {
             return execute((DownloadCommand)cmd);
+        } else if (cmd instanceof UploadCommand) {
+            return execute((UploadCommand)cmd);
         } else if (cmd instanceof GetStorageStatsCommand) {
             return execute((GetStorageStatsCommand)cmd);
         } else if (cmd instanceof UpgradeDiskCommand) {
@@ -159,6 +164,11 @@ public abstract class StorageResource extends ServerResourceBase implements Serv
     protected Answer execute(final PrimaryStorageDownloadCommand cmd) {
     	return Answer.createUnsupportedCommandAnswer(cmd);
     }
+
+	private Answer execute(UploadCommand cmd) {
+		s_logger.debug("Received upload command: " + cmd);
+		return _uploadManager.handleUploadCommand(cmd);
+	}
     
     protected Answer execute(final DownloadCommand cmd) {
     	return _downloadManager.handleDownloadCommand(cmd);
diff --git a/core/src/com/cloud/storage/VMTemplateHostVO.java b/core/src/com/cloud/storage/VMTemplateHostVO.java
index c7d78fd7af0..4393fb8696c 100644
--- a/core/src/com/cloud/storage/VMTemplateHostVO.java
+++ b/core/src/com/cloud/storage/VMTemplateHostVO.java
@@ -59,7 +59,10 @@ public class VMTemplateHostVO implements VMTemplateStorageResourceAssoc {
 	private Date lastUpdated = null;
 	
 	@Column (name="download_pct")
-	private int downloadPercent;
+	private int downloadPercent;
+	
+	@Column (name="upload_pct")
+	private int uploadPercent;
 	
 	@Column (name="size")
 	private long size;
@@ -67,15 +70,25 @@ public class VMTemplateHostVO implements VMTemplateStorageResourceAssoc {
 	@Column (name="download_state")
 	@Enumerated(EnumType.STRING)
 	private Status downloadState;
+	
+	@Column (name="upload_state")
+	@Enumerated(EnumType.STRING)
+	private Status uploadState;
 	
 	@Column (name="local_path")
 	private String localDownloadPath;
 	
 	@Column (name="error_str")
 	private String errorString;
+	
+	@Column (name="upload_error_str")
+	private String upload_errorString;
 	
 	@Column (name="job_id")
 	private String jobId;
+	
+	@Column (name="upload_job_id")
+	private String uploadJobId;
 	
 	@Column (name="pool_id")
 	private Long poolId;
@@ -85,7 +98,10 @@ public class VMTemplateHostVO implements VMTemplateStorageResourceAssoc {
 	
 	@Column (name="url")
 	private String downloadUrl;
-	
+
+	@Column (name="upload_url")
+	private String uploadUrl;
+		
 	@Column(name="is_copy")
 	private boolean isCopy = false;
     
@@ -262,5 +278,45 @@ public class VMTemplateHostVO implements VMTemplateStorageResourceAssoc {
 
 	public boolean isCopy() {
 		return isCopy;
+	}
+
+	public int getUploadPercent() {
+		return uploadPercent;
+	}
+
+	public void setUploadPercent(int uploadPercent) {
+		this.uploadPercent = uploadPercent;
+	}
+
+	public Status getUploadState() {
+		return uploadState;
+	}
+
+	public void setUploadState(Status uploadState) {
+		this.uploadState = uploadState;
+	}
+
+	public String getUpload_errorString() {
+		return upload_errorString;
+	}
+
+	public void setUpload_errorString(String uploadErrorString) {
+		upload_errorString = uploadErrorString;
+	}
+
+	public String getUploadUrl() {
+		return uploadUrl;
+	}
+
+	public void setUploadUrl(String uploadUrl) {
+		this.uploadUrl = uploadUrl;
+	}
+
+	public String getUploadJobId() {
+		return uploadJobId;
+	}
+
+	public void setUploadJobId(String uploadJobId) {
+		this.uploadJobId = uploadJobId;
 	}
 }
diff --git a/core/src/com/cloud/storage/VMTemplateStorageResourceAssoc.java b/core/src/com/cloud/storage/VMTemplateStorageResourceAssoc.java
index 699f2f8297a..d42c7f10cb7 100644
--- a/core/src/com/cloud/storage/VMTemplateStorageResourceAssoc.java
+++ b/core/src/com/cloud/storage/VMTemplateStorageResourceAssoc.java
@@ -24,7 +24,7 @@ import java.util.Date;
  *
  */
 public interface VMTemplateStorageResourceAssoc {
-	public static enum Status  {UNKNOWN, DOWNLOAD_ERROR, NOT_DOWNLOADED, DOWNLOAD_IN_PROGRESS, DOWNLOADED, ABANDONED}
+	public static enum Status  {UNKNOWN, DOWNLOAD_ERROR, NOT_DOWNLOADED, DOWNLOAD_IN_PROGRESS, DOWNLOADED, ABANDONED, UPLOADED, NOT_UPLOADED, UPLOAD_ERROR, UPLOAD_IN_PROGRESS}
 
 	public String getInstallPath();
 
diff --git a/core/src/com/cloud/storage/VolumeVO.java b/core/src/com/cloud/storage/VolumeVO.java
index d3bcf9f4f17..0c145ffa70c 100755
--- a/core/src/com/cloud/storage/VolumeVO.java
+++ b/core/src/com/cloud/storage/VolumeVO.java
@@ -90,6 +90,10 @@ public class VolumeVO implements Volume {
     @Column(name="created")
     Date created;
     
+    @Column(name="attached")
+    @Temporal(value=TemporalType.TIMESTAMP)
+    Date attached;
+    
     @Column(name="data_center_id")
     long dataCenterId;
     
@@ -539,4 +543,15 @@ public class VolumeVO implements Volume {
 	public Long getSourceId(){
 		return this.sourceId;
 	}
+	
+	@Override
+	public Date getAttached(){
+		return this.attached; 
+	}
+	
+	@Override
+	public void setAttached(Date attached){
+		this.attached = attached;
+	}
+	
 }
diff --git a/core/src/com/cloud/storage/dao/StoragePoolDao.java b/core/src/com/cloud/storage/dao/StoragePoolDao.java
index 104f282d1f2..85b5b33b40e 100644
--- a/core/src/com/cloud/storage/dao/StoragePoolDao.java
+++ b/core/src/com/cloud/storage/dao/StoragePoolDao.java
@@ -101,5 +101,7 @@ public interface StoragePoolDao extends GenericDao {
 	List searchForStoragePoolDetails(long poolId, String value);
 	
     long countBy(long podId, Status... statuses);
+
+	List<StoragePoolVO> findIfDuplicatePoolsExistByUUID(String uuid);
     
 }
diff --git a/core/src/com/cloud/storage/dao/StoragePoolDaoImpl.java b/core/src/com/cloud/storage/dao/StoragePoolDaoImpl.java
index 700dfccccab..456019f6874 100644
--- a/core/src/com/cloud/storage/dao/StoragePoolDaoImpl.java
+++ b/core/src/com/cloud/storage/dao/StoragePoolDaoImpl.java
@@ -61,6 +61,7 @@ public class StoragePoolDaoImpl extends GenericDaoBase  imp
     protected final SearchBuilder DeleteLvmSearch;
     protected final GenericSearchBuilder MaintenanceCountSearch;
     
+    
     protected final StoragePoolDetailsDao _detailsDao;
 	
     private final String DetailsSqlPrefix = "SELECT storage_pool.* from storage_pool LEFT JOIN storage_pool_details ON storage_pool.id = storage_pool_details.pool_id WHERE storage_pool.data_center_id = ? and (storage_pool.pod_id = ? or storage_pool.pod_id is null) and (";
@@ -144,6 +145,13 @@ public class StoragePoolDaoImpl extends GenericDaoBase  imp
         return findOneBy(sc);
 	}
 
+	@Override
+	public List<StoragePoolVO> findIfDuplicatePoolsExistByUUID(String uuid) {
+		SearchCriteria sc = UUIDSearch.create();
+        sc.setParameters("uuid", uuid);
+        return listActiveBy(sc);
+	}
+
 
 	@Override
 	public List listByDataCenterId(long datacenterId) {
diff --git a/core/src/com/cloud/storage/dao/VMTemplateHostDao.java b/core/src/com/cloud/storage/dao/VMTemplateHostDao.java
old mode 100644
new mode 100755
index 5a2b26cd5c2..4294322f108
--- a/core/src/com/cloud/storage/dao/VMTemplateHostDao.java
+++ b/core/src/com/cloud/storage/dao/VMTemplateHostDao.java
@@ -18,9 +18,11 @@
 
 package com.cloud.storage.dao;
 
+import java.util.Date;
 import java.util.List;
 
 import com.cloud.storage.VMTemplateHostVO;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
 import com.cloud.utils.db.GenericDao;
 
 public interface VMTemplateHostDao extends GenericDao {
@@ -41,6 +43,9 @@ public interface VMTemplateHostDao extends GenericDao {
     List listByTemplatePool(long templateId, long poolId);
 
     void update(VMTemplateHostVO instance);
+    
+    void updateUploadStatus(long hostId, long templateId, int uploadPercent, Status uploadState,
+            String jobId, String uploadUrl);
 
     List listByTemplateStatus(long templateId, VMTemplateHostVO.Status downloadState);
 
@@ -53,4 +58,6 @@ public interface VMTemplateHostDao extends GenericDao {
     List listDestroyed(long hostId);
 
     boolean templateAvailable(long templateId, long hostId);
+
+	List<VMTemplateHostVO> listByTemplateUploadStatus(long templateId, Status uploadState);
 }
diff --git a/core/src/com/cloud/storage/dao/VMTemplateHostDaoImpl.java b/core/src/com/cloud/storage/dao/VMTemplateHostDaoImpl.java
old mode 100644
new mode 100755
index 48f3d939624..abb9242d940
--- a/core/src/com/cloud/storage/dao/VMTemplateHostDaoImpl.java
+++ b/core/src/com/cloud/storage/dao/VMTemplateHostDaoImpl.java
@@ -49,16 +49,26 @@ public class VMTemplateHostDaoImpl extends GenericDaoBase<VMTemplateHostVO, Long> implements VMTemplateHostDao {
 	protected final SearchBuilder<VMTemplateHostVO> PoolTemplateSearch;
 	protected final SearchBuilder<VMTemplateHostVO> HostTemplatePoolSearch;
 	protected final SearchBuilder<VMTemplateHostVO> TemplateStatusSearch;
-	protected final SearchBuilder<VMTemplateHostVO> TemplateStatesSearch;
+	protected final SearchBuilder<VMTemplateHostVO> TemplateStatesSearch;
+	protected final SearchBuilder<VMTemplateHostVO> TemplateUploadStatusSearch;
 	
 	protected static final String UPDATE_TEMPLATE_HOST_REF =
 		"UPDATE template_host_ref SET download_state = ?, download_pct= ?, last_updated = ? "
 	+   ", error_str = ?, local_path = ?, job_id = ? "
+	+   "WHERE host_id = ? and template_id = ?";
+	
+	protected static final String UPDATE_UPLOAD_INFO =
+		"UPDATE template_host_ref SET upload_state = ?, upload_pct= ?, last_updated = ? "
+	+   ", upload_error_str = ?, upload_job_id = ? "
 	+   "WHERE host_id = ? and template_id = ?";
 	
 	protected static final String DOWNLOADS_STATE_DC=
 		"SELECT * FROM template_host_ref t, host h where t.host_id = h.id and h.data_center_id=? "
 	+	" and t.template_id=? and t.download_state = ?" ;
+
+	protected static final String UPLOADS_STATE_DC=
+		"SELECT * FROM template_host_ref t, host h where t.host_id = h.id and h.data_center_id=? "
+	+	" and t.template_id=? and t.upload_state = ?" ;
 	
 	protected static final String DOWNLOADS_STATE_DC_POD=
 		"SELECT * FROM template_host_ref t, host h where t.host_id = h.id and h.data_center_id=? and h.pod_id=? "
@@ -67,7 +77,12 @@ public class VMTemplateHostDaoImpl extends GenericDaoBase<VMTemplateHostVO, Long> implements VMTemplateHostDao {
+	public List<VMTemplateHostVO> listByTemplateUploadStatus(long templateId, VMTemplateHostVO.Status uploadState) {
+		SearchCriteria sc = TemplateUploadStatusSearch.create();
+		sc.setParameters("template_id", templateId);
+		sc.setParameters("upload_state", uploadState.toString());
+		return listBy(sc);
+	}
 	
 	@Override
 	public List listByTemplateStatus(long templateId, VMTemplateHostVO.Status downloadState) {
diff --git a/core/src/com/cloud/storage/dao/VolumeDao.java b/core/src/com/cloud/storage/dao/VolumeDao.java
index 39979744aab..3bd43faf9bc 100755
--- a/core/src/com/cloud/storage/dao/VolumeDao.java
+++ b/core/src/com/cloud/storage/dao/VolumeDao.java
@@ -46,4 +46,5 @@ public interface VolumeDao extends GenericDao {
     List listRemovedButNotDestroyed();
     List findCreatedByInstance(long id);
     List findByPoolId(long poolId);
+	List<VolumeVO> findByInstanceAndDeviceId(long instanceId, long deviceId);
 }
diff --git a/core/src/com/cloud/storage/dao/VolumeDaoImpl.java b/core/src/com/cloud/storage/dao/VolumeDaoImpl.java
index d63b1d8ab04..faf319367f8 100755
--- a/core/src/com/cloud/storage/dao/VolumeDaoImpl.java
+++ b/core/src/com/cloud/storage/dao/VolumeDaoImpl.java
@@ -61,6 +61,7 @@ public class VolumeDaoImpl extends GenericDaoBase implements Vol
     protected final GenericSearchBuilder ActiveTemplateSearch;
     protected final SearchBuilder RemovedButNotDestroyedSearch;
     protected final SearchBuilder PoolIdSearch;
+    protected final SearchBuilder<VolumeVO> InstanceAndDeviceIdSearch;
     
     protected static final String SELECT_VM_SQL = "SELECT DISTINCT instance_id from volumes v where v.host_id = ? and v.mirror_state = ?";
     protected static final String SELECT_VM_ID_SQL = "SELECT DISTINCT instance_id from volumes v where v.host_id = ?";
@@ -117,6 +118,14 @@ public class VolumeDaoImpl extends GenericDaoBase implements Vol
         sc.setParameters("instanceId", id);
 	    return listActiveBy(sc);
 	}
+   
+    @Override
+    public List<VolumeVO> findByInstanceAndDeviceId(long instanceId, long deviceId) {
+    	SearchCriteria sc = InstanceAndDeviceIdSearch.create();
+    	sc.setParameters("instanceId", instanceId);
+    	sc.setParameters("deviceId", deviceId);
+    	return listActiveBy(sc);
+    }
     
     @Override
     public List findByPoolId(long poolId) {
@@ -234,6 +243,7 @@ public class VolumeDaoImpl extends GenericDaoBase implements Vol
     	volume.setInstanceId(vmId);
     	volume.setDeviceId(deviceId);
     	volume.setUpdated(new Date());
+    	volume.setAttached(new Date());
     	update(volumeId, volume);
     }
     
@@ -243,6 +253,7 @@ public class VolumeDaoImpl extends GenericDaoBase implements Vol
     	volume.setInstanceId(null);
         volume.setDeviceId(null);
     	volume.setUpdated(new Date());
+    	volume.setAttached(null);
     	update(volumeId, volume);
     }
     
@@ -302,6 +313,11 @@ public class VolumeDaoImpl extends GenericDaoBase implements Vol
         InstanceIdSearch.and("instanceId", InstanceIdSearch.entity().getInstanceId(), SearchCriteria.Op.EQ);
         InstanceIdSearch.done();
 
+        InstanceAndDeviceIdSearch = createSearchBuilder();
+        InstanceAndDeviceIdSearch.and("instanceId", InstanceAndDeviceIdSearch.entity().getInstanceId(), SearchCriteria.Op.EQ);
+        InstanceAndDeviceIdSearch.and("deviceId", InstanceAndDeviceIdSearch.entity().getDeviceId(), SearchCriteria.Op.EQ);
+        InstanceAndDeviceIdSearch.done();
+        
         PoolIdSearch = createSearchBuilder();
         PoolIdSearch.and("poolId", PoolIdSearch.entity().getPoolId(), SearchCriteria.Op.EQ);
         PoolIdSearch.done();
diff --git a/core/src/com/cloud/storage/resource/NfsSecondaryStorageResource.java b/core/src/com/cloud/storage/resource/NfsSecondaryStorageResource.java
old mode 100644
new mode 100755
index 51be90328c7..b99d867978f
--- a/core/src/com/cloud/storage/resource/NfsSecondaryStorageResource.java
+++ b/core/src/com/cloud/storage/resource/NfsSecondaryStorageResource.java
@@ -48,6 +48,7 @@ import com.cloud.agent.api.SecStorageFirewallCfgCommand.PortConfig;
 import com.cloud.agent.api.storage.DeleteTemplateCommand;
 import com.cloud.agent.api.storage.DownloadCommand;
 import com.cloud.agent.api.storage.DownloadProgressCommand;
+import com.cloud.agent.api.storage.UploadCommand;
 import com.cloud.host.Host;
 import com.cloud.host.Host.Type;
 import com.cloud.resource.ServerResource;
@@ -58,6 +59,8 @@ import com.cloud.storage.Storage.StoragePoolType;
 import com.cloud.storage.template.DownloadManager;
 import com.cloud.storage.template.DownloadManagerImpl;
 import com.cloud.storage.template.TemplateInfo;
+import com.cloud.storage.template.UploadManager;
+import com.cloud.storage.template.UploadManagerImpl;
 import com.cloud.utils.NumbersUtil;
 import com.cloud.utils.component.ComponentLocator;
 import com.cloud.utils.exception.CloudRuntimeException;
@@ -85,6 +88,7 @@ public class NfsSecondaryStorageResource extends ServerResourceBase implements S
     Random _rand = new Random(System.currentTimeMillis());
     
     DownloadManager _dlMgr;
+    UploadManager _upldMgr;
 	private String _configSslScr;
 	private String _configAuthScr;
 	private String _publicIp;
@@ -111,6 +115,8 @@ public class NfsSecondaryStorageResource extends ServerResourceBase implements S
             return _dlMgr.handleDownloadCommand((DownloadProgressCommand)cmd);
         } else if (cmd instanceof DownloadCommand) {
             return _dlMgr.handleDownloadCommand((DownloadCommand)cmd);
+        } else if (cmd instanceof UploadCommand) {
+            return _upldMgr.handleUploadCommand((UploadCommand)cmd);
         } else if (cmd instanceof GetStorageStatsCommand) {
         	return execute((GetStorageStatsCommand)cmd);
         } else if (cmd instanceof CheckHealthCommand) {
@@ -413,6 +419,8 @@ public class NfsSecondaryStorageResource extends ServerResourceBase implements S
             _params.put(StorageLayer.InstanceConfigKey, _storage);
             _dlMgr = new DownloadManagerImpl();
             _dlMgr.configure("DownloadManager", _params);
+            _upldMgr = new UploadManagerImpl();
+            _upldMgr.configure("UploadManager", _params);
         } catch (ConfigurationException e) {
             s_logger.warn("Caught problem while configuring DownloadManager", e);
             return false;
diff --git a/core/src/com/cloud/storage/template/FtpTemplateUploader.java b/core/src/com/cloud/storage/template/FtpTemplateUploader.java
new file mode 100644
index 00000000000..01c96e965bc
--- /dev/null
+++ b/core/src/com/cloud/storage/template/FtpTemplateUploader.java
@@ -0,0 +1,224 @@
+package com.cloud.storage.template;
+
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.net.URLConnection;
+import java.util.Date;
+
+import org.apache.log4j.Logger;
+
+
+public class FtpTemplateUploader implements TemplateUploader {
+	
+	public static final Logger s_logger = Logger.getLogger(FtpTemplateUploader.class.getName());
+	public TemplateUploader.Status status = TemplateUploader.Status.NOT_STARTED;
+	public String errorString = "";
+	public long totalBytes = 0;
+	public long templateSizeinBytes;
+	private String sourcePath;
+	private String ftpUrl;	
+	private UploadCompleteCallback completionCallback;
+	private boolean resume;
+    private BufferedInputStream inputStream = null;
+    private BufferedOutputStream outputStream = null;
+	private static final int CHUNK_SIZE = 1024*1024; //1M
+	
+	public FtpTemplateUploader(String sourcePath, String url, UploadCompleteCallback callback, long templateSizeinBytes){
+		
+		this.sourcePath = sourcePath;
+		this.ftpUrl = url;
+		this.completionCallback = callback;
+		this.templateSizeinBytes = templateSizeinBytes;
+		s_logger.debug("Creating FtpTemplateUploader to upload " + sourcePath + " to " + url);
+	}
+	
+	public long upload(UploadCompleteCallback callback) {
+
+		switch (status) {
+		case ABORTED:
+		case UNRECOVERABLE_ERROR:
+		case UPLOAD_FINISHED:
+			return 0;
+		default:
+			break;
+		}
+
+		Date start = new Date();
+		StringBuffer sb = new StringBuffer();
+		// Check for authentication, else assume anonymous FTP access.
+		/* if (user != null && password != null) {
+		    sb.append( user );
+		    sb.append( ':' );
+		    sb.append( password );
+		    sb.append( '@' );
+		} */
+		sb.append(ftpUrl);
+		/* The ";type=" suffix selects the FTP transfer mode:
+		 * a = ASCII mode, i = image (binary) mode, d = file/directory listing
+		 */
+		sb.append(";type=i");
+
+		try {
+			URL url = new URL(sb.toString());
+			URLConnection urlc = url.openConnection();
+
+			outputStream = new BufferedOutputStream(urlc.getOutputStream());
+			inputStream = new BufferedInputStream(new FileInputStream(new File(sourcePath)));
+
+			status = TemplateUploader.Status.IN_PROGRESS;
+
+			int bytes = 0;
+			byte[] block = new byte[CHUNK_SIZE];
+			boolean done = false;
+			while (!done && status != Status.ABORTED) {
+				if ((bytes = inputStream.read(block, 0, CHUNK_SIZE)) > -1) {
+					outputStream.write(block, 0, bytes);
+					totalBytes += bytes;
+				} else {
+					done = true;
+				}
+			}
+			status = TemplateUploader.Status.UPLOAD_FINISHED;
+			s_logger.debug("FTP upload of " + sourcePath + " finished with status " + status);
+			return totalBytes;
+		} catch (MalformedURLException e) {
+			status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
+			errorString = e.getMessage();
+			s_logger.error("Malformed FTP url " + ftpUrl + ": " + errorString);
+		} catch (IOException e) {
+			status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
+			errorString = e.getMessage();
+			s_logger.error("IOException during FTP upload: " + errorString);
+		} finally {
+			try {
+				if (inputStream != null) {
+					inputStream.close();
+				}
+				if (outputStream != null) {
+					outputStream.close();
+				}
+			} catch (IOException ioe) {
+				s_logger.error("Caught exception while closing the streams", ioe);
+			}
+			if (callback != null) {
+				callback.uploadComplete(status);
+			}
+		}
+
+		return 0;
+	}
+
+	@Override
+	public void run() {
+		try {
+			upload(completionCallback);
+		} catch (Throwable t) {
+			s_logger.warn("Caught exception during upload " + t.getMessage(), t);
+			errorString = "Failed to upload: " + t.getMessage();
+			status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
+		}
+		
+	}
+
+	@Override
+	public Status getStatus() {
+		return status;
+	}
+
+	@Override
+	public String getUploadError() {
+		return errorString;
+	}
+
+	@Override
+	public String getUploadLocalPath() {
+		return null;
+	}
+
+	@Override
+	public int getUploadPercent() {
+		if (templateSizeinBytes == 0) {
+			return 0;
+		}		
+		return (int)(100.0*totalBytes/templateSizeinBytes);
+	}
+
+	@Override
+	public long getUploadTime() {
+		// TODO Auto-generated method stub
+		return 0;
+	}
+
+	@Override
+	public long getUploadedBytes() {
+		return totalBytes;
+	}
+
+	@Override
+	public boolean isInited() {
+		return false;
+	}
+
+	@Override
+	public void setResume(boolean resume) {
+		this.resume = resume;
+		
+	}
+
+	@Override
+	public void setStatus(Status status) {
+		this.status = status;		
+	}
+
+	@Override
+	public void setUploadError(String string) {
+		errorString = string;		
+	}
+
+	@Override
+	public boolean stopUpload() {
+		switch (getStatus()) {
+		case IN_PROGRESS:
+			try {
+				if(outputStream != null) {
+					outputStream.close();
+				}
+				if (inputStream != null){				
+					inputStream.close();					
+				}
+			} catch (IOException e) {
+				s_logger.error("Caught exception while closing the streams", e);
+			}
+			status = TemplateUploader.Status.ABORTED;
+			return true;
+		case UNKNOWN:
+		case NOT_STARTED:
+		case RECOVERABLE_ERROR:
+		case UNRECOVERABLE_ERROR:
+		case ABORTED:
+			status = TemplateUploader.Status.ABORTED;
+			// fall through
+		case UPLOAD_FINISHED:
+			return true;
+
+		default:
+			return true;
+		}
+	}
+	
+
+}
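The percent arithmetic in `getUploadPercent()` above is worth pinning down: it returns 0 when the template size is unknown, to avoid a division by zero. A standalone sketch of the same calculation (the `UploadPercent` class name is illustrative, not part of this patch):

```java
public class UploadPercent {
    // Mirrors FtpTemplateUploader.getUploadPercent(): integer percent of
    // bytes sent, returning 0 when the total size is unknown (zero).
    public static int percent(long uploadedBytes, long totalSizeInBytes) {
        if (totalSizeInBytes == 0) {
            return 0; // size unknown: avoid division by zero
        }
        return (int) (100.0 * uploadedBytes / totalSizeInBytes);
    }

    public static void main(String[] args) {
        System.out.println(percent(512 * 1024, 1024 * 1024)); // 50
        System.out.println(percent(0, 0));                    // 0
    }
}
```

The double arithmetic matters: a pure integer `100 * uploadedBytes / totalSizeInBytes` could overflow for multi-gigabyte templates.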
diff --git a/core/src/com/cloud/storage/template/TemplateUploader.java b/core/src/com/cloud/storage/template/TemplateUploader.java
new file mode 100644
index 00000000000..feff934ff01
--- /dev/null
+++ b/core/src/com/cloud/storage/template/TemplateUploader.java
@@ -0,0 +1,77 @@
+package com.cloud.storage.template;
+
+public interface TemplateUploader extends Runnable {
+
+	/**
+	 * Callback used to notify completion of upload
+	 * @author nitin
+	 *
+	 */
+	public interface UploadCompleteCallback {
+		void uploadComplete( Status status);
+
+	}
+
+	public static enum Status  {UNKNOWN, NOT_STARTED, IN_PROGRESS, ABORTED, UNRECOVERABLE_ERROR, RECOVERABLE_ERROR, UPLOAD_FINISHED, POST_UPLOAD_FINISHED}
+
+	
+	/**
+	 * Initiate upload
+	 * @param callback completion callback to be called after upload is complete
+	 * @return bytes uploaded
+	 */
+	public long upload(UploadCompleteCallback callback);
+	
+	/**
+	 * Stop the upload if it is in progress
+	 * @return true if the upload is stopped or has already terminated
+	 */
+	public boolean stopUpload();
+	
+	/**
+	 * @return percent of file uploaded
+	 */
+	public int getUploadPercent();
+
+	/**
+	 * Get the status of the upload
+	 * @return status of upload
+	 */
+	public TemplateUploader.Status getStatus();
+
+
+	/**
+	 * Get time taken to upload so far
+	 * @return time in seconds taken to upload
+	 */
+	public long getUploadTime();
+
+	/**
+	 * Get bytes uploaded
+	 * @return bytes uploaded so far
+	 */
+	public long getUploadedBytes();
+
+	/**
+	 * Get the error if any
+	 * @return error string if any
+	 */
+	public String getUploadError();
+
+	/** Get local path of the uploaded file
+	 * @return local path of the file uploaded
+	 */
+	public String getUploadLocalPath();
+
+	public void setStatus(TemplateUploader.Status status);
+
+	public void setUploadError(String string);
+
+	public void setResume(boolean resume);
+	
+	public boolean isInited();	
+	
+
+}
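The `UploadCompleteCallback` above decouples the uploader thread from whoever tracks job state (as `UploadManagerImpl.Completion` does further down in this patch). A minimal sketch of the callback wiring, with a hypothetical uploader that completes immediately and a `String` status standing in for the `TemplateUploader.Status` enum:

```java
public class CallbackDemo {
    // Same shape as TemplateUploader.UploadCompleteCallback, simplified.
    interface UploadCompleteCallback {
        void uploadComplete(String status);
    }

    // Hypothetical uploader: does no I/O, just reports a terminal status,
    // the way FtpTemplateUploader's finally block notifies its callback.
    static void runFakeUpload(UploadCompleteCallback callback) {
        String status = "UPLOAD_FINISHED";
        if (callback != null) {
            callback.uploadComplete(status);
        }
    }

    public static void main(String[] args) {
        final String[] observed = new String[1];
        runFakeUpload(new UploadCompleteCallback() {
            @Override
            public void uploadComplete(String status) {
                observed[0] = status; // e.g. update a job table here
            }
        });
        System.out.println(observed[0]); // UPLOAD_FINISHED
    }
}
```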
diff --git a/core/src/com/cloud/storage/template/UploadManager.java b/core/src/com/cloud/storage/template/UploadManager.java
new file mode 100644
index 00000000000..b2728af111d
--- /dev/null
+++ b/core/src/com/cloud/storage/template/UploadManager.java
@@ -0,0 +1,68 @@
+package com.cloud.storage.template;
+
+import com.cloud.agent.api.storage.UploadAnswer;
+import com.cloud.agent.api.storage.UploadCommand;
+import com.cloud.storage.StorageResource;
+import com.cloud.storage.VMTemplateHostVO;
+import com.cloud.storage.Storage.ImageFormat;
+import com.cloud.utils.component.Manager;
+
+public interface UploadManager extends Manager {
+
+
+	/**
+	 * Get the status of an upload job
+	 * @param jobId job Id
+	 * @return status of the upload job
+	 */
+	public TemplateUploader.Status getUploadStatus(String jobId);
+	
+	/**
+	 * Get the template-host status of an upload job
+	 * @param jobId job Id
+	 * @return status of the upload job, as a VMTemplateHostVO.Status
+	 */
+	public VMTemplateHostVO.Status getUploadStatus2(String jobId);
+
+	/**
+	 * Get the upload percent of an upload job
+	 * @param jobId job Id
+	 * @return percent of the file uploaded
+	 */
+	public int getUploadPct(String jobId);
+
+	/**
+	 * Get the upload error if any
+	 * @param jobId job Id
+	 * @return upload error string, or null if the job is unknown
+	 */
+	public String getUploadError(String jobId);
+
+	/**
+	 * Get the local path for the upload
+	 * @param jobId job Id
+	 * @return local path of the uploaded file
+	 */
+	//public String getUploadLocalPath(String jobId);
+	
+	/** Handle upload commands from the management server
+	 * @param cmd cmd from server
+	 * @return answer representing status of upload.
+	 */
+	public UploadAnswer handleUploadCommand(UploadCommand cmd);		
+	
+	public String setRootDir(String rootDir, StorageResource storage);
+    
+    public String getPublicTemplateRepo();
+
+
+	String uploadPublicTemplate(long id, String url, String name,
+			ImageFormat format, Long accountId, String descr,
+			String cksum, String installPathPrefix, String user,
+			String password, long maxTemplateSizeInBytes);
+	
+}
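Taken together, the interface above defines a job-handle pattern: `uploadPublicTemplate` returns a `jobId` string that callers later poll with `getUploadPct` and `getUploadStatus`. A minimal sketch of that call pattern against a hypothetical in-memory stub (not the real `UploadManagerImpl`, which keeps its jobs in a `ConcurrentHashMap`):

```java
import java.util.HashMap;
import java.util.Map;

public class JobPollingDemo {
    // Hypothetical stand-in for the jobId -> job bookkeeping.
    static class StubUploadManager {
        private final Map<String, Integer> pctByJob = new HashMap<String, Integer>();

        String startUpload() {
            String jobId = java.util.UUID.randomUUID().toString();
            pctByJob.put(jobId, 0);
            return jobId;
        }

        void advance(String jobId, int pct) {
            pctByJob.put(jobId, pct);
        }

        // Mirrors UploadManager.getUploadPct(): unknown jobs report 0.
        int getUploadPct(String jobId) {
            Integer pct = pctByJob.get(jobId);
            return pct == null ? 0 : pct;
        }
    }

    public static void main(String[] args) {
        StubUploadManager mgr = new StubUploadManager();
        String jobId = mgr.startUpload();
        mgr.advance(jobId, 42);
        System.out.println(mgr.getUploadPct(jobId));     // 42
        System.out.println(mgr.getUploadPct("no-such")); // 0
    }
}
```

Returning a neutral default for unknown job IDs (rather than throwing) matches the defensive style of `getUploadPct`/`getUploadError` in the interface.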
diff --git a/core/src/com/cloud/storage/template/UploadManagerImpl.java b/core/src/com/cloud/storage/template/UploadManagerImpl.java
new file mode 100644
index 00000000000..a20a8b77431
--- /dev/null
+++ b/core/src/com/cloud/storage/template/UploadManagerImpl.java
@@ -0,0 +1,597 @@
+package com.cloud.storage.template;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import javax.naming.ConfigurationException;
+
+import org.apache.log4j.Logger;
+
+import com.cloud.agent.api.storage.UploadAnswer;
+import com.cloud.agent.api.storage.UploadCommand;
+import com.cloud.agent.api.storage.UploadProgressCommand;
+import com.cloud.storage.StorageLayer;
+import com.cloud.storage.StorageResource;
+import com.cloud.storage.VMTemplateHostVO;
+import com.cloud.storage.Storage.ImageFormat;
+import com.cloud.storage.template.TemplateUploader.UploadCompleteCallback;
+import com.cloud.storage.template.TemplateUploader.Status;
+import com.cloud.utils.NumbersUtil;
+import com.cloud.utils.UUID;
+import com.cloud.utils.component.Adapters;
+import com.cloud.utils.component.ComponentLocator;
+import com.cloud.utils.exception.CloudRuntimeException;
+import com.cloud.utils.script.Script;
+
+public class UploadManagerImpl implements UploadManager {
+
+
+   public class Completion implements UploadCompleteCallback {
+        private final String jobId;
+
+        public Completion(String jobId) {
+            this.jobId = jobId;
+        }
+
+        @Override
+        public void uploadComplete(Status status) {
+            setUploadStatus(jobId, status);
+        }
+    }
+   
+   private static class UploadJob {
+       private final TemplateUploader td;
+       private final String jobId;
+       private final String tmpltName;
+       private final ImageFormat format;
+       private String tmpltPath;
+       private String description;
+       private String checksum;
+       private Long accountId;
+       private String installPathPrefix;
+       private long templatesize;
+       private long id;
+
+       public UploadJob(TemplateUploader td, String jobId, long id, String tmpltName, ImageFormat format, boolean hvm, Long accountId, String descr, String cksum, String installPathPrefix) {
+           super();
+           this.td = td;
+           this.jobId = jobId;
+           this.tmpltName = tmpltName;
+           this.format = format;
+           this.accountId = accountId;
+           this.description = descr;
+           this.checksum = cksum;
+           this.installPathPrefix = installPathPrefix;
+           this.templatesize = 0;
+           this.id = id;
+       }
+
+       public TemplateUploader getTd() {
+           return td;
+       }
+
+       public String getDescription() {
+           return description;
+       }
+
+       public String getChecksum() {
+           return checksum;
+       }
+
+       public UploadJob(TemplateUploader td, String jobId, UploadCommand cmd) {
+           this.td = td;
+           this.jobId = jobId;
+           this.tmpltName = cmd.getName();
+           this.format = cmd.getFormat();           
+       }
+
+       public TemplateUploader getTemplateUploader() {
+           return td;
+       }
+
+       public String getJobId() {
+           return jobId;
+       }
+
+       public String getTmpltName() {
+           return tmpltName;
+       }
+
+       public ImageFormat getFormat() {
+           return format;
+       }
+
+       public Long getAccountId() {
+           return accountId;
+       }
+
+       public long getId() {
+           return id;
+       }
+
+       public void setTmpltPath(String tmpltPath) {
+           this.tmpltPath = tmpltPath;
+       }
+
+       public String getTmpltPath() {
+           return tmpltPath;
+       }
+
+       public String getInstallPathPrefix() {
+           return installPathPrefix;
+       }
+
+       public void cleanup() {
+       }
+
+       public void setTemplatesize(long templatesize) {
+           this.templatesize = templatesize;
+       }
+
+       public long getTemplatesize() {
+           return templatesize;
+       }
+   }
+	public static final Logger s_logger = Logger.getLogger(UploadManagerImpl.class);
+	private ExecutorService threadPool;
+	private final Map<String, UploadJob> jobs = new ConcurrentHashMap<String, UploadJob>();
+	private String parentDir;
+	private Adapters _processors;
+	private String publicTemplateRepo;
+	private StorageLayer _storage;
+	private int installTimeoutPerGig;
+	private boolean _sslCopy;
+	private String _name;
+	private boolean hvm;
+   
+	@Override
+	public String uploadPublicTemplate(long id, String url, String name,
+			ImageFormat format, Long accountId, String descr,
+			String cksum, String installPathPrefix, String userName,
+			String passwd, long templateSizeInBytes) {		
+		
+        UUID uuid = new UUID();
+        String jobId = uuid.toString();
+
+        String completePath = parentDir + File.separator + installPathPrefix;
+        s_logger.debug("Starting upload from " + completePath);
+        
+        URI uri;
+		try {
+		    uri = new URI(url);
+		} catch (URISyntaxException e) {
+		    s_logger.error("URI is incorrect: " + url);
+		    throw new CloudRuntimeException("URI is incorrect: " + url);
+		}
+		TemplateUploader tu;
+		if (uri.getScheme() != null) {
+		    if (uri.getScheme().equalsIgnoreCase("ftp")) {
+		        tu = new FtpTemplateUploader(completePath, url, new Completion(jobId), templateSizeInBytes);
+		    } else {
+		    	s_logger.error("Scheme is not supported " + url);
+		        throw new CloudRuntimeException("Scheme is not supported " + url);
+		    }
+		} else {
+		    s_logger.error("Unable to upload to URL: " + url);
+		    throw new CloudRuntimeException("Unable to upload to URL: " + url);
+		}
+		UploadJob uj = new UploadJob(tu, jobId, id, name, format, hvm, accountId, descr, cksum, installPathPrefix);
+		jobs.put(jobId, uj);
+		threadPool.execute(tu);
+
+		return jobId;
+				
+	}
+
+	@Override
+	public String getUploadError(String jobId) {
+        UploadJob uj = jobs.get(jobId);
+        if (uj != null) {
+            return uj.getTemplateUploader().getUploadError();
+        }
+        return null;
+	}
+
+	@Override
+	public int getUploadPct(String jobId) {
+		UploadJob uj = jobs.get(jobId);
+        if (uj != null) {
+            return uj.getTemplateUploader().getUploadPercent();
+        }
+        return 0;
+	}
+
+	@Override
+	public Status getUploadStatus(String jobId) {
+        UploadJob job = jobs.get(jobId);
+        if (job != null) {
+            TemplateUploader tu = job.getTemplateUploader();
+            if (tu != null) {
+                return tu.getStatus();
+            }
+        }
+        return Status.UNKNOWN;
+	}
+	
+    public static VMTemplateHostVO.Status convertStatus(Status tds) {
+        switch (tds) {
+        case ABORTED:
+            return VMTemplateHostVO.Status.NOT_UPLOADED;
+        case UPLOAD_FINISHED:
+            return VMTemplateHostVO.Status.UPLOAD_IN_PROGRESS; // post-upload install step still pending
+        case IN_PROGRESS:
+            return VMTemplateHostVO.Status.UPLOAD_IN_PROGRESS;
+        case NOT_STARTED:
+            return VMTemplateHostVO.Status.NOT_UPLOADED;
+        case RECOVERABLE_ERROR:
+            return VMTemplateHostVO.Status.NOT_UPLOADED;
+        case UNKNOWN:
+            return VMTemplateHostVO.Status.UNKNOWN;
+        case UNRECOVERABLE_ERROR:
+            return VMTemplateHostVO.Status.UPLOAD_ERROR;
+        case POST_UPLOAD_FINISHED:
+            return VMTemplateHostVO.Status.UPLOADED;
+        default:
+            return VMTemplateHostVO.Status.UNKNOWN;
+        }
+    }
+
+    @Override
+    public com.cloud.storage.VMTemplateHostVO.Status getUploadStatus2(String jobId) {
+        return convertStatus(getUploadStatus(jobId));
+    }
+	@Override
+	public String getPublicTemplateRepo() {
+		// TODO Auto-generated method stub
+		return null;
+	}
+
+    private UploadAnswer handleUploadProgressCmd(UploadProgressCommand cmd) {
+        String jobId = cmd.getJobId();
+        UploadAnswer answer;
+        UploadJob uj = null;
+        if (jobId != null)
+            uj = jobs.get(jobId);
+        if (uj == null) {
+            return new UploadAnswer(null, 0, "Cannot find job", com.cloud.storage.VMTemplateHostVO.Status.UNKNOWN, "", "", 0);
+        }
+        TemplateUploader td = uj.getTemplateUploader();
+        switch (cmd.getRequest()) {
+        case GET_STATUS:
+            break;
+        case ABORT:
+            td.stopUpload();
+            sleep();
+            break;
+        /*case RESTART:
+            td.stopUpload();
+            sleep();
+            threadPool.execute(td);
+            break;*/
+        case PURGE:
+            td.stopUpload();
+            answer = new UploadAnswer(jobId, getUploadPct(jobId), getUploadError(jobId), getUploadStatus2(jobId), getUploadLocalPath(jobId), getInstallPath(jobId), getUploadTemplateSize(jobId));
+            jobs.remove(jobId);
+            return answer;
+        default:
+            break; // TODO
+        }
+        return new UploadAnswer(jobId, getUploadPct(jobId), getUploadError(jobId), getUploadStatus2(jobId), getUploadLocalPath(jobId), getInstallPath(jobId),
+                getUploadTemplateSize(jobId));
+    }	
+	
+    @Override
+    public UploadAnswer handleUploadCommand(UploadCommand cmd) {
+    	s_logger.warn("Handling the upload " + cmd.getInstallPath() + " " + cmd.getId());
+        if (cmd instanceof UploadProgressCommand) {
+            return handleUploadProgressCmd((UploadProgressCommand) cmd);
+        }
+        /*
+        if (cmd.getUrl() == null) {
+            return new UploadAnswer(null, 0, "Template is corrupted on storage due to an invalid url , cannot Upload", com.cloud.storage.VMTemplateStorageResourceAssoc.Status.UPLOAD_ERROR, "", "", 0);
+        }
+
+        if (cmd.getName() == null) {
+            return new UploadAnswer(null, 0, "Invalid Name", com.cloud.storage.VMTemplateStorageResourceAssoc.Status.UPLOAD_ERROR, "", "", 0);
+        }*/
+
+       // String installPathPrefix = null;
+       // installPathPrefix = publicTemplateRepo;
+
+        String user = null;
+        String password = null;                
+        String jobId = uploadPublicTemplate(cmd.getId(), cmd.getUrl(), cmd.getName(), 
+        									cmd.getFormat(), cmd.getAccountId(), cmd.getDescription(),
+        									cmd.getChecksum(), cmd.getInstallPath(), user, password,
+        									cmd.getTemplateSizeInBytes());
+        sleep();
+        if (jobId == null) {
+            return new UploadAnswer(null, 0, "Internal Error", com.cloud.storage.VMTemplateStorageResourceAssoc.Status.UPLOAD_ERROR, "", "", 0);
+        }
+        return new UploadAnswer(jobId, getUploadPct(jobId), getUploadError(jobId), getUploadStatus2(jobId), getUploadLocalPath(jobId), getInstallPath(jobId),
+                getUploadTemplateSize(jobId));
+    }
+
+	private String getInstallPath(String jobId) {
+		// TODO Auto-generated method stub
+		return null;
+	}
+
+	private String getUploadLocalPath(String jobId) {
+		// TODO Auto-generated method stub
+		return null;
+	}
+
+	private long getUploadTemplateSize(String jobId){
+		UploadJob uj = jobs.get(jobId);
+        if (uj != null) {
+            return uj.getTemplatesize();
+        }
+        return 0;
+	}
+
+	@Override
+	public String setRootDir(String rootDir, StorageResource storage) {
+        this.publicTemplateRepo = rootDir + publicTemplateRepo;
+        return null;
+	}
+
+	@Override
+	public boolean configure(String name, Map params)
+			throws ConfigurationException {
+        _name = name;
+
+        String value = null;
+
+        _storage = (StorageLayer) params.get(StorageLayer.InstanceConfigKey);
+        if (_storage == null) {
+            value = (String) params.get(StorageLayer.ClassConfigKey);
+            if (value == null) {
+                throw new ConfigurationException("Unable to find the storage layer");
+            }
+
+            Class clazz;
+            try {
+                clazz = Class.forName(value);
+            } catch (ClassNotFoundException e) {
+                throw new ConfigurationException("Unable to find class " + value);
+            }
+            _storage = ComponentLocator.inject(clazz);
+        }
+        String useSsl = (String)params.get("sslcopy");
+        if (useSsl != null) {
+        	_sslCopy = Boolean.parseBoolean(useSsl);
+        	
+        }
+        configureFolders(name, params);
+        String inSystemVM = (String)params.get("secondary.storage.vm");
+        if (inSystemVM != null && "true".equalsIgnoreCase(inSystemVM)) {
+        	s_logger.info("UploadManager: starting additional services since we are inside system vm");
+        	startAdditionalServices();
+        	blockOutgoingOnPrivate();
+        }
+
+        value = (String) params.get("install.timeout.pergig");
+        this.installTimeoutPerGig = NumbersUtil.parseInt(value, 15 * 60) * 1000; // default 15 minutes per GB, in ms
+
+        value = (String) params.get("install.numthreads");
+        final int numInstallThreads = NumbersUtil.parseInt(value, 10);        
+
+        String scriptsDir = (String) params.get("template.scripts.dir");
+        if (scriptsDir == null) {
+            scriptsDir = "scripts/storage/secondary";
+        }
+
+        List processors = new ArrayList();
+        _processors = new Adapters("processors", processors);
+        Processor processor = new VhdProcessor();
+        
+        processor.configure("VHD Processor", params);
+        processors.add(processor);
+        
+        processor = new IsoProcessor();
+        processor.configure("ISO Processor", params);
+        processors.add(processor);
+        
+        processor = new QCOW2Processor();
+        processor.configure("QCOW2 Processor", params);
+        processors.add(processor);
+        // Add more processors here.
+        threadPool = Executors.newFixedThreadPool(numInstallThreads);
+        return true;
+	}
+	
+	protected void configureFolders(String name, Map params) throws ConfigurationException {
+        parentDir = (String) params.get("template.parent");
+        if (parentDir == null) {
+            throw new ConfigurationException("Unable to find the parent root for the templates");
+        }
+
+        String value = (String) params.get("public.templates.root.dir");
+        if (value == null) {
+            value = TemplateConstants.DEFAULT_TMPLT_ROOT_DIR;
+        }
+               
+		if (value.startsWith(File.separator)) {
+            publicTemplateRepo = value;
+        } else {
+            publicTemplateRepo = parentDir + File.separator + value;
+        }
+        
+        if (!publicTemplateRepo.endsWith(File.separator)) {
+            publicTemplateRepo += File.separator;
+        }
+        
+        publicTemplateRepo += TemplateConstants.DEFAULT_TMPLT_FIRST_LEVEL_DIR;
+        
+        if (!_storage.mkdirs(publicTemplateRepo)) {
+            throw new ConfigurationException("Unable to create public templates directory");
+        }
+    }
+
+	@Override
+	public String getName() {
+		return _name;
+	}
+
+	@Override
+	public boolean start() {
+		return true;
+	}
+
+	@Override
+	public boolean stop() {
+		return true;
+	}
+
+    /**
+     * Get notified of change of job status. Executed in context of uploader thread
+     * 
+     * @param jobId
+     *            the id of the job
+     * @param status
+     *            the status of the job
+     */
+    public void setUploadStatus(String jobId, Status status) {
+        UploadJob uj = jobs.get(jobId);
+        if (uj == null) {
+            s_logger.warn("setUploadStatus for jobId: " + jobId + ", status=" + status + " no job found");
+            return;
+        }
+        TemplateUploader tu = uj.getTemplateUploader();
+        s_logger.warn("Upload Completion for jobId: " + jobId + ", status=" + status);
+        s_logger.warn("UploadedBytes=" + tu.getUploadedBytes() + ", error=" + tu.getUploadError() + ", pct=" + tu.getUploadPercent());
+
+        switch (status) {
+        case ABORTED:
+        case NOT_STARTED:
+        case UNRECOVERABLE_ERROR:
+            // TODO
+            uj.cleanup();
+            break;
+        case UNKNOWN:
+            return;
+        case IN_PROGRESS:
+            s_logger.info("Resuming jobId: " + jobId + ", status=" + status);
+            tu.setResume(true);
+            threadPool.execute(tu);
+            break;
+        case RECOVERABLE_ERROR:
+            threadPool.execute(tu);
+            break;
+        case UPLOAD_FINISHED:
+            tu.setUploadError("Upload success, starting install ");
+            String result = postUpload(jobId);
+            if (result != null) {
+                s_logger.error("Failed post upload script: " + result);
+                tu.setStatus(Status.UNRECOVERABLE_ERROR);
+                tu.setUploadError("Failed post upload script: " + result);
+            } else {
+            	s_logger.warn("Upload completed successfully at " + new SimpleDateFormat().format(new Date()));
+                tu.setStatus(Status.POST_UPLOAD_FINISHED);
+                tu.setUploadError("Upload completed successfully at " + new SimpleDateFormat().format(new Date()));
+            }
+            uj.cleanup();
+            break;
+        default:
+            break;
+        }
+    }
+
+	private String postUpload(String jobId) {
+		return null;
+	}
+
+    private void sleep() {
+        try {
+            Thread.sleep(3000);
+        } catch (InterruptedException e) {
+            Thread.currentThread().interrupt(); // restore the interrupt status instead of swallowing it
+        }
+    }
+
+    private void blockOutgoingOnPrivate() {
+    	Script command = new Script("/bin/bash", s_logger);
+    	String intf = "eth1";
+    	command.add("-c");
+    	command.add("iptables -A OUTPUT -o " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "80" + " -j REJECT;" +
+    			"iptables -A OUTPUT -o " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j REJECT;");
+
+    	String result = command.execute();
+    	if (result != null) {
+    		s_logger.warn("Error in blocking outgoing to port 80/443 err=" + result );
+    		return;
+    	}		
+	}
+
+    private void startAdditionalServices() {
+    	
+    	Script command = new Script("/bin/bash", s_logger);
+		command.add("-c");
+    	command.add("service httpd stop ");
+    	String result = command.execute();
+    	if (result != null) {
+    		s_logger.warn("Error in stopping httpd service err=" + result );
+    	}
+    	String port = Integer.toString(TemplateConstants.DEFAULT_TMPLT_COPY_PORT);
+    	String intf = TemplateConstants.DEFAULT_TMPLT_COPY_INTF;
+    	
+    	command = new Script("/bin/bash", s_logger);
+		command.add("-c");
+    	command.add("iptables -D INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + port + " -j DROP;" +
+			        "iptables -D INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + port + " -j HTTP;" +
+			        "iptables -D INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j DROP;" +
+			        "iptables -D INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j HTTP;" +
+			        "iptables -F HTTP;" +
+			        "iptables -X HTTP;" +
+			        "iptables -N HTTP;" +
+    			    "iptables -I INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + port + " -j DROP;" +
+    			    "iptables -I INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j DROP;" +
+    			    "iptables -I INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + port + " -j HTTP;" +
+    	            "iptables -I INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j HTTP;");
+
+    	result = command.execute();
+    	if (result != null) {
+    		s_logger.warn("Error in opening up httpd port err=" + result );
+    		return;
+    	}
+    	
+    	command = new Script("/bin/bash", s_logger);
+		command.add("-c");
+    	command.add("service httpd start ");
+    	result = command.execute();
+    	if (result != null) {
+    		s_logger.warn("Error in starting httpd service err=" + result );
+    		return;
+    	}
+    	command = new Script("mkdir", s_logger);
+		command.add("-p");
+    	command.add("/var/www/html/copy/template");
+    	result = command.execute();
+    	if (result != null) {
+    		s_logger.warn("Error in creating directory =" + result );
+    		return;
+    	}
+    	
+    	command = new Script("/bin/bash", s_logger);
+		command.add("-c");
+    	command.add("ln -sf " + publicTemplateRepo + " /var/www/html/copy/template");
+    	result = command.execute();
+    	if (result != null) {
+    		s_logger.warn("Error in linking err=" + result);
+    		return;
+    	}
+	}
+}
diff --git a/core/src/com/cloud/vm/dao/UserVmDaoImpl.java b/core/src/com/cloud/vm/dao/UserVmDaoImpl.java
index 6ab439486af..4e909b7b9f3 100755
--- a/core/src/com/cloud/vm/dao/UserVmDaoImpl.java
+++ b/core/src/com/cloud/vm/dao/UserVmDaoImpl.java
@@ -116,6 +116,8 @@ public class UserVmDaoImpl extends GenericDaoBase implements Use
         DestroySearch.and("updateTime", DestroySearch.entity().getUpdateTime(), SearchCriteria.Op.LT);
         DestroySearch.done();
 
+        
+        
         _updateTimeAttr = _allAttributes.get("updateTime");
         assert _updateTimeAttr != null : "Couldn't get this updateTime attribute";
     }
diff --git a/debian/cloud-agent-scripts.install b/debian/cloud-agent-scripts.install
index 5c448a8c15d..eb0c1589ee0 100644
--- a/debian/cloud-agent-scripts.install
+++ b/debian/cloud-agent-scripts.install
@@ -12,8 +12,6 @@
 /usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/id_rsa.cloud
 /usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/make_migratable.sh
 /usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/network_info.sh
-/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/networkUsage.sh
-/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/prepsystemvm.sh
 /usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/setup_iscsi.sh
 /usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/setupxenserver.sh
 /usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/vmops
diff --git a/debian/cloud-client.postinst b/debian/cloud-client.postinst
index ce3ebc3da6d..af731f19be7 100644
--- a/debian/cloud-client.postinst
+++ b/debian/cloud-client.postinst
@@ -17,8 +17,6 @@ case "$1" in
 		chgrp cloud $i
 	done
 
-	test -f /var/lib/cloud/management/.ssh/id_rsa || su - cloud -c 'yes "" | ssh-keygen -t rsa -q -N ""' < /dev/null
-
 	for i in /etc/cloud/management/db.properties
 	do
 		chmod 0640 $i
diff --git a/debian/cloud-setup.install b/debian/cloud-setup.install
index 7d35dbe9929..48969370521 100644
--- a/debian/cloud-setup.install
+++ b/debian/cloud-setup.install
@@ -12,3 +12,6 @@
 /usr/share/cloud/setup/index-212to213.sql
 /usr/share/cloud/setup/postprocess-20to21.sql
 /usr/share/cloud/setup/schema-20to21.sql
+/usr/share/cloud/setup/schema-level.sql
+/usr/share/cloud/setup/schema-21to22.sql
+/usr/share/cloud/setup/data-21to22.sql
diff --git a/debian/control b/debian/control
index 0f1f3e5093a..91a8186c662 100644
--- a/debian/control
+++ b/debian/control
@@ -2,7 +2,7 @@ Source: cloud
 Section: libs
 Priority: extra
 Maintainer: Manuel Amador (Rudd-O) 
-Build-Depends: debhelper (>= 7), openjdk-6-jdk, tomcat6, libws-commons-util-java, libcommons-dbcp-java, libcommons-collections-java, libcommons-httpclient-java, libservlet2.5-java, genisoimage
+Build-Depends: debhelper (>= 7), openjdk-6-jdk, tomcat6, libws-commons-util-java, libcommons-dbcp-java, libcommons-collections-java, libcommons-httpclient-java, libservlet2.5-java, genisoimage, python-mysqldb
 Standards-Version: 3.8.1
 Homepage: http://techcenter.cloud.com/software/cloudstack
 
@@ -128,7 +128,7 @@ Provides: vmops-setup
 Conflicts: vmops-setup
 Replaces: vmops-setup
 Architecture: any
-Depends: openjdk-6-jre, python, cloud-utils (= ${source:Version}), mysql-client, cloud-deps (= ${source:Version}), cloud-server (= ${source:Version}), cloud-python (= ${source:Version}), python-mysqldb
+Depends: openjdk-6-jre, python, cloud-utils (= ${source:Version}), cloud-deps (= ${source:Version}), cloud-server (= ${source:Version}), cloud-python (= ${source:Version}), python-mysqldb
 Description: Cloud.com client
  The Cloud.com setup tools let you set up your Management Server and Usage Server.
 
diff --git a/debian/rules b/debian/rules
index c99b62b85a7..4f0fa109a82 100755
--- a/debian/rules
+++ b/debian/rules
@@ -91,7 +91,7 @@ binary-common:
 	dh_testdir
 	dh_testroot
 	dh_installchangelogs 
-	dh_installdocs -A README INSTALL HACKING README.html
+	dh_installdocs -A README.html
 #	dh_installexamples
 #	dh_installmenu
 #	dh_installdebconf
diff --git a/patches/kvm/etc/init.d/seteth1 b/patches/kvm/etc/init.d/seteth1
deleted file mode 100755
index 0dd61a77f9b..00000000000
--- a/patches/kvm/etc/init.d/seteth1
+++ /dev/null
@@ -1,223 +0,0 @@
-
-
-
-#! /bin/bash
-# chkconfig: 35 09 90
-# description: pre-boot configuration using boot line parameters 
-#   This file exists in /etc/init.d/ 
-
-replace_in_file() {
-  local filename=$1
-  local keyname=$2
-  local value=$3
-  sed -i /$keyname=/d $filename
-  echo "$keyname=$value" >> $filename
-  return $?
-}
-
-setup_interface() {
-  local intfnum=$1
-  local ip=$2
-  local mask=$3
-  
-  cfg=/etc/sysconfig/network-scripts/ifcfg-eth${intfnum} 
-  replace_in_file ${cfg} IPADDR ${ip}
-  replace_in_file ${cfg} NETMASK ${mask}
-  replace_in_file ${cfg} BOOTPROTO STATIC
-  if [ "$ip" == "0.0.0.0" ]
-  then
-    replace_in_file ${cfg} ONBOOT No
-  else
-    replace_in_file ${cfg} ONBOOT Yes
-  fi
-}
-
-setup_common() {
-  setup_interface "0" $ETH0_IP $ETH0_MASK
-  setup_interface "1" $ETH1_IP $ETH1_MASK
-  setup_interface "2" $ETH2_IP $ETH2_MASK
-  
-  replace_in_file /etc/sysconfig/network GATEWAY $GW
-  replace_in_file /etc/sysconfig/network HOSTNAME $NAME
-  echo "NOZEROCONF=yes" >> /etc/sysconfig/network
-  hostname $NAME
-  
-  #Nameserver
-  if [ -n "$NS1" ]
-  then
-    echo "nameserver $NS1" > /etc/dnsmasq-resolv.conf
-    echo "nameserver $NS1" > /etc/resolv.conf
-  fi
-  
-  if [ -n "$NS2" ]
-  then
-    echo "nameserver $NS2" >> /etc/dnsmasq-resolv.conf
-    echo "nameserver $NS2" >> /etc/resolv.conf
-  fi
-  if [[ -n "$MGMTNET"  && -n "$LOCAL_GW" ]]
-  then
-    echo "$MGMTNET via $LOCAL_GW dev eth1" > /etc/sysconfig/network-scripts/route-eth1
-  fi
-}
-
-setup_router() {
-  setup_common
-  [ -z $DHCP_RANGE ] && DHCP_RANGE=$ETH0_IP
-  if [ -n "$DOMAIN" ]
-  then
-    #send domain name to dhcp clients
-    sed -i s/[#]*dhcp-option=15.*$/dhcp-option=15,\"$DOMAIN\"/ /etc/dnsmasq.conf
-    #DNS server will append $DOMAIN to local queries
-    sed -r -i s/^[#]?domain=.*$/domain=$DOMAIN/ /etc/dnsmasq.conf
-    #answer all local domain queries
-    sed  -i -e "s/^[#]*local=.*$/local=\/$DOMAIN\//" /etc/dnsmasq.conf
-  fi
-  sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
-  sed -i -e "s/^[#]*listen-address=.*$/listen-address=$ETH0_IP/" /etc/dnsmasq.conf
-  sed -i  /gateway/d /etc/hosts
-  echo "$ETH0_IP $NAME" >> /etc/hosts
-  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
-  [ -f /etc/httpd/conf.d/ssl.conf ] && mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
-  [ -f /etc/ssh/sshd_config ] && sed -i -e "s/^[#]*ListenAddress.*$/ListenAddress $ETH1_IP/" /etc/ssh/sshd_config
-}
-
-setup_dhcpsrvr() {
-  setup_common
-  [ -z $DHCP_RANGE ] && DHCP_RANGE=$ETH0_IP
-  if [ -n "$DOMAIN" ]
-  then
-    #send domain name to dhcp clients
-    sed -i s/[#]*dhcp-option=15.*$/dhcp-option=15,\"$DOMAIN\"/ /etc/dnsmasq.conf
-    #DNS server will append $DOMAIN to local queries
-    sed -r -i s/^[#]?domain=.*$/domain=$DOMAIN/ /etc/dnsmasq.conf
-    #answer all local domain queries
-    sed  -i -e "s/^[#]*local=.*$/local=\/$DOMAIN\//" /etc/dnsmasq.conf
-  else
-    #delete domain option
-    sed -i /^dhcp-option=15.*$/d /etc/dnsmasq.conf
-    sed -i /^domain=.*$/d /etc/dnsmasq.conf
-    sed  -i -e "/^local=.*$/d" /etc/dnsmasq.conf
-  fi
-  sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
-  sed -i -e "s/^[#]*dhcp-option=option:router.*$/dhcp-option=option:router,$GW/" /etc/dnsmasq.conf
-  echo "dhcp-option=6,$NS1,$NS2" >> /etc/dnsmasq.conf
-  sed -i  /gateway/d /etc/hosts
-  echo "$ETH0_IP $NAME" >> /etc/hosts
-  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
-  [ -f /etc/httpd/conf.d/ssl.conf ] && mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
-}
-
-setup_secstorage() {
-  setup_common
-  sed -i  /gateway/d /etc/hosts
-  public_ip=$ETH2_IP
-  [ "$ETH2_IP" == "0.0.0.0" ] && public_ip=$ETH1_IP
-  echo "$public_ip $NAME" >> /etc/hosts
-  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*:80$/Listen $public_ip:80/" /etc/httpd/conf/httpd.conf
-  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*:443$/Listen $public_ip:443/" /etc/httpd/conf/httpd.conf
-}
-
-setup_console_proxy() {
-  setup_common
-  public_ip=$ETH2_IP
-  [ "$ETH2_IP" == "0.0.0.0" ] && public_ip=$ETH1_IP
-  sed -i  /gateway/d /etc/hosts
-  echo "$public_ip $NAME" >> /etc/hosts
-}
-
-if [ -f /mnt/cmdline ]
-then
-    CMDLINE=$(cat /mnt/cmdline)
-else
-    CMDLINE=$(cat /proc/cmdline)
-fi
-
-TYPE="router"
-
-for i in $CMDLINE
-  do
-    # search for foo=bar pattern and cut out foo
-    KEY=$(echo $i | cut -d= -f1)
-    VALUE=$(echo $i | cut -d= -f2)
-    case $KEY in 
-      eth0ip)
-          ETH0_IP=$VALUE
-          ;;
-      eth1ip)
-          ETH1_IP=$VALUE
-          ;;
-      eth2ip)
-          ETH2_IP=$VALUE
-          ;;
-      gateway)
-          GW=$VALUE
-          ;;
-      eth0mask)
-          ETH0_MASK=$VALUE
-          ;;
-      eth1mask)
-          ETH1_MASK=$VALUE
-          ;;
-      eth2mask)
-          ETH2_MASK=$VALUE
-          ;;
-      dns1)
-          NS1=$VALUE
-          ;;
-      dns2)
-          NS2=$VALUE
-          ;;
-      domain)
-          DOMAIN=$VALUE
-          ;;
-      mgmtcidr)
-          MGMTNET=$VALUE
-          ;;
-      localgw)
-          LOCAL_GW=$VALUE
-          ;;
-      template)
-        TEMPLATE=$VALUE
-      	;;
-      name)
-	NAME=$VALUE
-	;;
-      dhcprange)
-        DHCP_RANGE=$(echo $VALUE | tr ':' ',')
-      	;;
-      type)
-        TYPE=$VALUE	
-	;;
-    esac
-done
-
-case $TYPE in 
-   router)
-       [ "$NAME" == "" ] && NAME=router
-       setup_router
-	;;
-   dhcpsrvr)
-       [ "$NAME" == "" ] && NAME=dhcpsrvr
-       setup_dhcpsrvr
-	;;
-   secstorage)
-       [ "$NAME" == "" ] && NAME=secstorage
-       setup_secstorage;
-	;;
-   consoleproxy)
-       [ "$NAME" == "" ] && NAME=consoleproxy
-       setup_console_proxy;
-	;;
-esac
-
-if [ ! -d /root/.ssh ]
-then
-   mkdir /root/.ssh
-   chmod 700 /root/.ssh
-fi
-if [ -f /mnt/id_rsa.pub ]
-then
-   cat /mnt/id_rsa.pub > /root/.ssh/authorized_keys
-   chmod 600 /root/.ssh/authorized_keys
-fi
-
diff --git a/patches/kvm/etc/sysconfig/iptables b/patches/kvm/etc/sysconfig/iptables
deleted file mode 100755
index 5048fb6d670..00000000000
--- a/patches/kvm/etc/sysconfig/iptables
+++ /dev/null
@@ -1,33 +0,0 @@
-# Generated by iptables-save v1.3.8 on Thu Oct  1 18:16:05 2009
-# @VERSION@
-*nat
-:PREROUTING ACCEPT [499:70846]
-:POSTROUTING ACCEPT [1:85]
-:OUTPUT ACCEPT [1:85]
-COMMIT
-# Completed on Thu Oct  1 18:16:06 2009
-# Generated by iptables-save v1.3.8 on Thu Oct  1 18:16:06 2009
-*filter
-#:INPUT DROP [288:42467]
-:FORWARD DROP [0:0]
-:OUTPUT ACCEPT [65:9665]
--A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT 
--A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT 
--A INPUT -p icmp -j ACCEPT 
--A INPUT -i eth0 -p udp -m udp --dport 67 -j ACCEPT 
--A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT 
--A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT 
--A INPUT -i eth1 -p tcp -m tcp --dport 3922 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT 
--A INPUT -i eth0 -p tcp -m tcp --dport 8080 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
--A INPUT -p tcp -m tcp --dport 8001 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
--A INPUT -p tcp -m tcp --dport 443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
--A INPUT -p tcp -m tcp --dport 80 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
--A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 8001 -j ACCEPT
--A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
--A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
--A FORWARD -i eth0 -o eth1 -j ACCEPT 
--A FORWARD -i eth0 -o eth2 -j ACCEPT 
--A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT 
--A FORWARD -i eth2 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT 
-COMMIT
-# Completed on Thu Oct  1 18:16:06 2009
diff --git a/patches/kvm/etc/sysconfig/iptables-config b/patches/kvm/etc/sysconfig/iptables-config
deleted file mode 100644
index c8a02b4a306..00000000000
--- a/patches/kvm/etc/sysconfig/iptables-config
+++ /dev/null
@@ -1,48 +0,0 @@
-# Load additional iptables modules (nat helpers)
-#   Default: -none-
-# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
-# are loaded after the firewall rules are applied. Options for the helpers are
-# stored in /etc/modprobe.conf.
-IPTABLES_MODULES="ip_conntrack_ftp nf_nat_ftp"
-
-# Unload modules on restart and stop
-#   Value: yes|no,  default: yes
-# This option has to be 'yes' to get to a sane state for a firewall
-# restart or stop. Only set to 'no' if there are problems unloading netfilter
-# modules.
-IPTABLES_MODULES_UNLOAD="yes"
-
-# Save current firewall rules on stop.
-#   Value: yes|no,  default: no
-# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets stopped
-# (e.g. on system shutdown).
-IPTABLES_SAVE_ON_STOP="no"
-
-# Save current firewall rules on restart.
-#   Value: yes|no,  default: no
-# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets
-# restarted.
-IPTABLES_SAVE_ON_RESTART="no"
-
-# Save (and restore) rule and chain counter.
-#   Value: yes|no,  default: no
-# Save counters for rules and chains to /etc/sysconfig/iptables if
-# 'service iptables save' is called or on stop or restart if SAVE_ON_STOP or
-# SAVE_ON_RESTART is enabled.
-IPTABLES_SAVE_COUNTER="no"
-
-# Numeric status output
-#   Value: yes|no,  default: yes
-# Print IP addresses and port numbers in numeric format in the status output.
-IPTABLES_STATUS_NUMERIC="yes"
-
-# Verbose status output
-#   Value: yes|no,  default: yes
-# Print info about the number of packets and bytes plus the "input-" and
-# "outputdevice" in the status output.
-IPTABLES_STATUS_VERBOSE="no"
-
-# Status output with numbered lines
-#   Value: yes|no,  default: yes
-# Print a counter/number for every rule in the status output.
-IPTABLES_STATUS_LINENUMBERS="yes"
diff --git a/patches/kvm/etc/dnsmasq.conf b/patches/systemvm/etc/dnsmasq.conf
similarity index 99%
rename from patches/kvm/etc/dnsmasq.conf
rename to patches/systemvm/etc/dnsmasq.conf
index 238efc08d9b..234bcdaed5d 100755
--- a/patches/kvm/etc/dnsmasq.conf
+++ b/patches/systemvm/etc/dnsmasq.conf
@@ -74,13 +74,15 @@ resolv-file=/etc/dnsmasq-resolv.conf
 interface=eth0
 # Or you can specify which interface _not_ to listen on
 except-interface=eth1
+except-interface=eth2
 # Or which to listen on by address (remember to include 127.0.0.1 if
 # you use this.)
 #listen-address=
 # If you want dnsmasq to provide only DNS service on an interface,
 # configure it as shown above, and then use the following line to
 # disable DHCP on it.
-#no-dhcp-interface=eth1
+no-dhcp-interface=eth1
+no-dhcp-interface=eth2
 
 # On systems which support it, dnsmasq binds the wildcard address,
 # even when it is listening on only some interfaces. It then discards
@@ -109,7 +111,7 @@ expand-hosts
 # 2) Sets the "domain" DHCP option thereby potentially setting the
 #    domain of all systems configured by DHCP
 # 3) Provides the domain part for "expand-hosts"
-domain=foo.com
+#domain=foo.com
 
 # Uncomment this to enable the integrated DHCP server, you need
 # to supply the range of addresses available for lease and optionally
@@ -248,7 +250,7 @@ dhcp-hostsfile=/etc/dhcphosts.txt
 #dhcp-option=27,1
 
 # Set the domain
-dhcp-option=15,"foo.com"
+#dhcp-option=15,"foo.com"
 
 # Send the etherboot magic flag and then etherboot options (a string).
 #dhcp-option=128,e4:45:74:68:00:00
diff --git a/patches/kvm/etc/haproxy/haproxy.cfg b/patches/systemvm/etc/haproxy/haproxy.cfg
similarity index 100%
rename from patches/kvm/etc/haproxy/haproxy.cfg
rename to patches/systemvm/etc/haproxy/haproxy.cfg
diff --git a/patches/kvm/etc/hosts b/patches/systemvm/etc/hosts
similarity index 100%
rename from patches/kvm/etc/hosts
rename to patches/systemvm/etc/hosts
diff --git a/patches/kvm/etc/init.d/domr_webserver b/patches/systemvm/etc/init.d/domr_webserver
similarity index 100%
rename from patches/kvm/etc/init.d/domr_webserver
rename to patches/systemvm/etc/init.d/domr_webserver
diff --git a/patches/xenserver/etc/init.d/postinit b/patches/systemvm/etc/init.d/postinit
similarity index 95%
rename from patches/xenserver/etc/init.d/postinit
rename to patches/systemvm/etc/init.d/postinit
index ae17565c50b..681d5264fd9 100755
--- a/patches/xenserver/etc/init.d/postinit
+++ b/patches/systemvm/etc/init.d/postinit
@@ -26,7 +26,14 @@ setup_console_proxy() {
   echo "$public_ip $NAME" >> /etc/hosts
 }
 
-CMDLINE=$(cat /proc/cmdline)
+
+if [ -f /mnt/cmdline ]
+then
+    CMDLINE=$(cat /mnt/cmdline)
+else
+    CMDLINE=$(cat /proc/cmdline)
+fi
+
 TYPE="router"
 BOOTPROTO="static"
 
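The postinit change above makes the script prefer a command line delivered at /mnt/cmdline (e.g. via an attached patch disk) and fall back to the kernel's /proc/cmdline. The same precedence, sketched in Python — the function name and parameterised paths exist only for illustration and testing:

```python
import os

def read_cmdline(mounted="/mnt/cmdline", kernel="/proc/cmdline"):
    # Same precedence as the init-script change: a cmdline file delivered
    # on the patch disk wins; otherwise fall back to the kernel cmdline.
    # Default paths are taken from the patch; parameters exist for testing.
    path = mounted if os.path.isfile(mounted) else kernel
    with open(path) as f:
        return f.read().strip()
```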
diff --git a/patches/xenserver/etc/init.d/seteth1 b/patches/systemvm/etc/init.d/seteth1
similarity index 92%
rename from patches/xenserver/etc/init.d/seteth1
rename to patches/systemvm/etc/init.d/seteth1
index 01ae5724950..32a0ad704f4 100755
--- a/patches/xenserver/etc/init.d/seteth1
+++ b/patches/systemvm/etc/init.d/seteth1
@@ -118,11 +118,12 @@ setup_dhcpsrvr() {
   sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
   sed -i -e "s/^[#]*dhcp-option=option:router.*$/dhcp-option=option:router,$GW/" /etc/dnsmasq.conf
   #for now set up ourself as the dns server as well
-  #echo "dhcp-option=6,$NS1,$NS2" >> /etc/dnsmasq.conf
+  sed -i s/[#]*dhcp-option=6.*$/dhcp-option=6,\"$NS1\",\"$NS2\"/ /etc/dnsmasq.conf
   sed -i  /gateway/d /etc/hosts
   echo "$ETH0_IP $NAME" >> /etc/hosts
   [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
   [ -f /etc/httpd/conf.d/ssl.conf ] && mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
+  [ -f /etc/ssh/sshd_config ] && sed -i -e "s/^[#]*ListenAddress.*$/ListenAddress $ETH1_IP/" /etc/ssh/sshd_config
 }
 
 setup_secstorage() {
@@ -143,7 +144,25 @@ setup_console_proxy() {
   echo "$public_ip $NAME" >> /etc/hosts
 }
 
-CMDLINE=$(cat /proc/cmdline)
+if [ -f /mnt/cmdline ]
+then
+  CMDLINE=$(cat /mnt/cmdline)
+else
+  CMDLINE=$(cat /proc/cmdline)
+fi
+
+ 
+if [ ! -d /root/.ssh ]
+then
+  mkdir /root/.ssh
+  chmod 700 /root/.ssh
+fi
+if [ -f /mnt/id_rsa.pub ]
+then
+  cat /mnt/id_rsa.pub > /root/.ssh/authorized_keys
+  chmod 600 /root/.ssh/authorized_keys
+fi
+
 TYPE="router"
 BOOTPROTO="static"
 
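The seteth1 hunk above also injects a public key from /mnt/id_rsa.pub into /root/.ssh/authorized_keys, creating the directory with mode 0700 and the key file with mode 0600. A minimal Python sketch of that key-installation step, with the paths parameterised as assumptions for testing:

```python
import os

def install_authorized_key(pubkey_path, ssh_dir="/root/.ssh"):
    # Sketch of the key-injection block: create the .ssh directory with
    # mode 0700 if missing, then overwrite authorized_keys (mode 0600)
    # with the public key found on the patch disk.  Returns False when
    # no key was delivered, matching the script's [ -f ... ] guard.
    if not os.path.isfile(pubkey_path):
        return False
    if not os.path.isdir(ssh_dir):
        os.makedirs(ssh_dir)
        os.chmod(ssh_dir, 0o700)
    dest = os.path.join(ssh_dir, "authorized_keys")
    with open(pubkey_path) as src:
        key = src.read()
    with open(dest, "w") as dst:
        dst.write(key)
    os.chmod(dest, 0o600)
    return True
```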
diff --git a/patches/kvm/etc/init.d/vmops b/patches/systemvm/etc/init.d/vmops
similarity index 100%
rename from patches/kvm/etc/init.d/vmops
rename to patches/systemvm/etc/init.d/vmops
diff --git a/patches/kvm/etc/rc.local b/patches/systemvm/etc/rc.local
similarity index 100%
rename from patches/kvm/etc/rc.local
rename to patches/systemvm/etc/rc.local
diff --git a/patches/kvm/etc/ssh/sshd_config b/patches/systemvm/etc/ssh/sshd_config
similarity index 100%
rename from patches/kvm/etc/ssh/sshd_config
rename to patches/systemvm/etc/ssh/sshd_config
diff --git a/patches/xenserver/etc/sysconfig/iptables-consoleproxy b/patches/systemvm/etc/sysconfig/iptables-consoleproxy
similarity index 100%
rename from patches/xenserver/etc/sysconfig/iptables-consoleproxy
rename to patches/systemvm/etc/sysconfig/iptables-consoleproxy
diff --git a/patches/xenserver/etc/sysconfig/iptables-domr b/patches/systemvm/etc/sysconfig/iptables-domr
similarity index 100%
rename from patches/xenserver/etc/sysconfig/iptables-domr
rename to patches/systemvm/etc/sysconfig/iptables-domr
diff --git a/patches/xenserver/etc/sysconfig/iptables-secstorage b/patches/systemvm/etc/sysconfig/iptables-secstorage
similarity index 100%
rename from patches/xenserver/etc/sysconfig/iptables-secstorage
rename to patches/systemvm/etc/sysconfig/iptables-secstorage
diff --git a/patches/xenserver/etc/sysctl.conf b/patches/systemvm/etc/sysctl.conf
similarity index 100%
rename from patches/xenserver/etc/sysctl.conf
rename to patches/systemvm/etc/sysctl.conf
diff --git a/patches/xenserver/root/.ssh/authorized_keys b/patches/systemvm/root/.ssh/authorized_keys
similarity index 100%
rename from patches/xenserver/root/.ssh/authorized_keys
rename to patches/systemvm/root/.ssh/authorized_keys
diff --git a/patches/xenserver/root/clearUsageRules.sh b/patches/systemvm/root/clearUsageRules.sh
similarity index 100%
rename from patches/xenserver/root/clearUsageRules.sh
rename to patches/systemvm/root/clearUsageRules.sh
diff --git a/patches/xenserver/root/edithosts.sh b/patches/systemvm/root/edithosts.sh
similarity index 100%
rename from patches/xenserver/root/edithosts.sh
rename to patches/systemvm/root/edithosts.sh
diff --git a/patches/xenserver/root/firewall.sh b/patches/systemvm/root/firewall.sh
similarity index 100%
rename from patches/xenserver/root/firewall.sh
rename to patches/systemvm/root/firewall.sh
diff --git a/patches/xenserver/root/loadbalancer.sh b/patches/systemvm/root/loadbalancer.sh
similarity index 100%
rename from patches/xenserver/root/loadbalancer.sh
rename to patches/systemvm/root/loadbalancer.sh
diff --git a/patches/xenserver/root/patchsystemvm.sh b/patches/systemvm/root/patchsystemvm.sh
similarity index 100%
rename from patches/xenserver/root/patchsystemvm.sh
rename to patches/systemvm/root/patchsystemvm.sh
diff --git a/patches/kvm/root/reconfigLB.sh b/patches/systemvm/root/reconfigLB.sh
similarity index 100%
rename from patches/kvm/root/reconfigLB.sh
rename to patches/systemvm/root/reconfigLB.sh
diff --git a/patches/kvm/root/run_domr_webserver b/patches/systemvm/root/run_domr_webserver
similarity index 100%
rename from patches/kvm/root/run_domr_webserver
rename to patches/systemvm/root/run_domr_webserver
diff --git a/patches/kvm/root/send_password_to_domu.sh b/patches/systemvm/root/send_password_to_domu.sh
similarity index 100%
rename from patches/kvm/root/send_password_to_domu.sh
rename to patches/systemvm/root/send_password_to_domu.sh
diff --git a/patches/shared/var/www/html/latest/.htaccess b/patches/systemvm/var/www/html/latest/.htaccess
similarity index 100%
rename from patches/shared/var/www/html/latest/.htaccess
rename to patches/systemvm/var/www/html/latest/.htaccess
diff --git a/patches/shared/var/www/html/metadata/.htaccess b/patches/systemvm/var/www/html/metadata/.htaccess
similarity index 100%
rename from patches/shared/var/www/html/metadata/.htaccess
rename to patches/systemvm/var/www/html/metadata/.htaccess
diff --git a/patches/shared/var/www/html/userdata/.htaccess b/patches/systemvm/var/www/html/userdata/.htaccess
similarity index 100%
rename from patches/shared/var/www/html/userdata/.htaccess
rename to patches/systemvm/var/www/html/userdata/.htaccess
diff --git a/patches/wscript_build b/patches/wscript_build
index 4351d4e605d..a28272fb8e4 100644
--- a/patches/wscript_build
+++ b/patches/wscript_build
@@ -4,26 +4,15 @@ bld.substitute("*/**",name="patchsubst")
 
 for virttech in Utils.to_list(bld.path.ant_glob("*",dir=True)):
 	if virttech in ["shared","wscript_build"]: continue
-	patchfiles = bld.path.ant_glob('%s/** shared/**'%virttech,src=True,bld=True,dir=False,flat=True)
+	patchfiles = bld.path.ant_glob('shared/** %s/**'%virttech,src=False,bld=True,dir=False,flat=True)
 	tgen = bld(
 		features  = 'tar',#Utils.tar_up,
 		source = patchfiles,
 		target = '%s-patch.tgz'%virttech,
 		name   = '%s-patch_tgz'%virttech,
-		root = "patches/%s"%virttech,
+		root = os.path.join("patches",virttech),
 		rename = lambda x: re.sub(".subst$","",x),
-		after = 'patchsubst',
 	)
-	bld.process_after(tgen)
 	if virttech != "xenserver":
 		# xenserver uses the patch.tgz file later to make an ISO, so we do not need to install it
 		bld.install_as("${AGENTLIBDIR}/scripts/vm/hypervisor/%s/patch.tgz"%virttech, "%s-patch.tgz"%virttech)
-
-tgen = bld(
-	rule = 'cp ${SRC} ${TGT}',
-	source = 'xenserver-patch.tgz',
-	target = 'patch.tgz',
-	after = 'xenserver-patch_tgz',
-	name = 'patch_tgz'
-)
-bld.process_after(tgen)
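The wscript_build change above globs shared/** plus the per-hypervisor tree and hands the files to a 'tar' task that archives them relative to `root`, with a `rename` callback stripping the `.subst` suffix left by the substitution step. A rough standalone stand-in for that packing step — this mimics the behavior but is not waf's API, and the names here are illustrative:

```python
import os, re, tarfile

def pack_patch(files, tgz_path, root):
    # Archive each file under its path relative to `root`, dropping the
    # .subst suffix, analogous to the tar task's root/rename arguments.
    with tarfile.open(tgz_path, "w:gz") as tar:
        for f in files:
            arcname = re.sub(r"\.subst$", "", os.path.relpath(f, root))
            tar.add(f, arcname=arcname)
```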
diff --git a/patches/xenserver/etc/sysconfig/iptables-config b/patches/xenserver/etc/sysconfig/iptables-config
deleted file mode 100644
index c8a02b4a306..00000000000
--- a/patches/xenserver/etc/sysconfig/iptables-config
+++ /dev/null
@@ -1,48 +0,0 @@
-# Load additional iptables modules (nat helpers)
-#   Default: -none-
-# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
-# are loaded after the firewall rules are applied. Options for the helpers are
-# stored in /etc/modprobe.conf.
-IPTABLES_MODULES="ip_conntrack_ftp nf_nat_ftp"
-
-# Unload modules on restart and stop
-#   Value: yes|no,  default: yes
-# This option has to be 'yes' to get to a sane state for a firewall
-# restart or stop. Only set to 'no' if there are problems unloading netfilter
-# modules.
-IPTABLES_MODULES_UNLOAD="yes"
-
-# Save current firewall rules on stop.
-#   Value: yes|no,  default: no
-# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets stopped
-# (e.g. on system shutdown).
-IPTABLES_SAVE_ON_STOP="no"
-
-# Save current firewall rules on restart.
-#   Value: yes|no,  default: no
-# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets
-# restarted.
-IPTABLES_SAVE_ON_RESTART="no"
-
-# Save (and restore) rule and chain counter.
-#   Value: yes|no,  default: no
-# Save counters for rules and chains to /etc/sysconfig/iptables if
-# 'service iptables save' is called or on stop or restart if SAVE_ON_STOP or
-# SAVE_ON_RESTART is enabled.
-IPTABLES_SAVE_COUNTER="no"
-
-# Numeric status output
-#   Value: yes|no,  default: yes
-# Print IP addresses and port numbers in numeric format in the status output.
-IPTABLES_STATUS_NUMERIC="yes"
-
-# Verbose status output
-#   Value: yes|no,  default: yes
-# Print info about the number of packets and bytes plus the "input-" and
-# "outputdevice" in the status output.
-IPTABLES_STATUS_VERBOSE="no"
-
-# Status output with numbered lines
-#   Value: yes|no,  default: yes
-# Print a counter/number for every rule in the status output.
-IPTABLES_STATUS_LINENUMBERS="yes"
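The deleted iptables-config above is a shell-style KEY="value" file. For reference, a minimal reader for that format — a sketch only, since real sysconfig files are sourced by the shell rather than parsed:

```python
import re

def parse_sysconfig(text):
    # Minimal reader for simple, optionally double-quoted KEY="value"
    # assignments; comments and blank lines are skipped.
    conf = {}
    for line in text.splitlines():
        m = re.match(r'^\s*([A-Za-z_][A-Za-z0-9_]*)="?([^"]*)"?\s*$', line)
        if m:
            conf[m.group(1)] = m.group(2)
    return conf
```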
diff --git a/python/lib/cloud_utils.py b/python/lib/cloud_utils.py
index 1434372d548..3c4d5598d62 100644
--- a/python/lib/cloud_utils.py
+++ b/python/lib/cloud_utils.py
@@ -1,1159 +1,1160 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""Cloud.com Python utility library"""
-
-import sys, os, subprocess, errno, re, time, glob
-import urllib2
-import xml.dom.minidom
-import logging
-import socket
-
-# exit() error constants
-E_GENERIC= 1
-E_NOKVM = 2
-E_NODEFROUTE = 3
-E_DHCP = 4
-E_NOPERSISTENTNET = 5
-E_NETRECONFIGFAILED = 6
-E_VIRTRECONFIGFAILED = 7
-E_FWRECONFIGFAILED = 8
-E_AGENTRECONFIGFAILED = 9
-E_AGENTFAILEDTOSTART = 10
-E_NOFQDN = 11
-E_SELINUXENABLED = 12
-E_USAGE = os.EX_USAGE
-
-E_NEEDSMANUALINTERVENTION = 13
-E_INTERRUPTED = 14
-E_SETUPFAILED = 15
-E_UNHANDLEDEXCEPTION = 16
-E_MISSINGDEP = 17
-
-Unknown = 0
-Fedora = 1
-CentOS = 2
-Ubuntu = 3
-
-IPV4 = 4
-IPV6 = 6
-
-#=================== DISTRIBUTION DETECTION =================
-
-if os.path.exists("/etc/fedora-release"): distro = Fedora
-elif os.path.exists("/etc/centos-release"): distro = CentOS
-elif os.path.exists("/etc/redhat-release") and not os.path.exists("/etc/fedora-release"): distro = CentOS
-elif os.path.exists("/etc/legal") and "Ubuntu" in file("/etc/legal").read(-1): distro = Ubuntu
-else: distro = Unknown
-
-logFileName=None
-# ==================  LIBRARY UTILITY CODE=============
-def setLogFile(logFile):
-	global logFileName
-	logFileName=logFile
-def read_properties(propfile):
-	if not hasattr(propfile,"read"): propfile = file(propfile)
-	properties = propfile.read().splitlines()
-	properties = [ s.strip() for s in properties ]
-	properties = [ s for s in properties if
-			s and
-			not s.startswith("#") and
-			not s.startswith(";") ]
-	#[ logging.debug("Valid config file line: %s",s) for s in properties ]
-	proppairs = [ s.split("=",1) for s in properties ]
-	return dict(proppairs)
-
-def stderr(msgfmt,*args):
-	"""Print a message to stderr, optionally interpolating the arguments into it"""
-	msgfmt += "\n"
-	if logFileName != None:
-		sys.stderr = open(logFileName, 'a+')
-	if args: sys.stderr.write(msgfmt%args)
-	else: sys.stderr.write(msgfmt)
-
-def exit(errno=E_GENERIC,message=None,*args):
-	"""Exit with an error status code, printing a message to stderr if specified"""
-	if message: stderr(message,*args)
-	sys.exit(errno)
-
-def resolve(host,port):
-	return [ (x[4][0],len(x[4])+2) for x in socket.getaddrinfo(host,port,socket.AF_UNSPEC,socket.SOCK_STREAM, 0, socket.AI_PASSIVE) ]
-	
-def resolves_to_ipv6(host,port):
-	return resolve(host,port)[0][1] == IPV6
-
-###add this to Python 2.4, patching the subprocess module at runtime
-if hasattr(subprocess,"check_call"):
-	from subprocess import CalledProcessError, check_call
-else:
-	class CalledProcessError(Exception):
-		def __init__(self, returncode, cmd):
-			self.returncode = returncode ; self.cmd = cmd
-		def __str__(self): return "Command '%s' returned non-zero exit status %d" % (self.cmd, self.returncode)
-	subprocess.CalledProcessError = CalledProcessError
-	
-	def check_call(*popenargs, **kwargs):
-		retcode = subprocess.call(*popenargs, **kwargs)
-		cmd = kwargs.get("args")
-		if cmd is None: cmd = popenargs[0]
-		if retcode: raise subprocess.CalledProcessError(retcode, cmd)
-		return retcode
-	subprocess.check_call = check_call
-
-# python 2.4 does not have this
-try:
-	any = any
-	all = all
-except NameError:
-	def any(sequence):
-		for i in sequence:
-			if i: return True
-		return False
-	def all(sequence):
-		for i in sequence:
-			if not i: return False
-		return True
-
-class Command:
-	"""This class simulates a shell command"""
-	def __init__(self,name,parent=None):
-		self.__name = name
-		self.__parent = parent
-	def __getattr__(self,name):
-		if name == "_print": name = "print"
-		return Command(name,self)
-	def __call__(self,*args,**kwargs):
-		cmd = self.__get_recursive_name() + list(args)
-		#print "	",cmd
-		kwargs = dict(kwargs)
-		if "stdout" not in kwargs: kwargs["stdout"] = subprocess.PIPE
-		if "stderr" not in kwargs: kwargs["stderr"] = subprocess.PIPE
-		popen = subprocess.Popen(cmd,**kwargs)
-		m = popen.communicate()
-		ret = popen.wait()
-		if ret:
-			e = CalledProcessError(ret,cmd)
-			e.stdout,e.stderr = m
-			raise e
-		class CommandOutput:
-			def __init__(self,stdout,stderr):
-				self.stdout = stdout
-				self.stderr = stderr
-		return CommandOutput(*m)
-	def __lt__(self,other):
-		cmd = self.__get_recursive_name()
-		#print "	",cmd,"<",other
-		popen = subprocess.Popen(cmd,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
-		m = popen.communicate(other)
-		ret = popen.wait()
-		if ret:
-			e = CalledProcessError(ret,cmd)
-			e.stdout,e.stderr = m
-			raise e
-		class CommandOutput:
-			def __init__(self,stdout,stderr):
-				self.stdout = stdout
-				self.stderr = stderr
-		return CommandOutput(*m)
-		
-	def __get_recursive_name(self,sep=None):
-		m = self
-		l = []
-		while m is not None:
-			l.append(m.__name)
-			m = m.__parent
-		l.reverse()
-		if sep: return sep.join(l)
-		else: return l
-	def __str__(self):
-		return ''%self.__get_recursive_name(sep=" ")
-		
-	def __repr__(self): return self.__str__()
-
-kvmok = Command("kvm-ok")
-getenforce = Command("/usr/sbin/getenforce")
-ip = Command("ip")
-service = Command("service")
-chkconfig = Command("chkconfig")
-updatercd = Command("update-rc.d")
-ufw = Command("ufw")
-iptables = Command("iptables")
-iptablessave = Command("iptables-save")
-augtool = Command("augtool")
-ifconfig = Command("ifconfig")
-ifdown = Command("ifdown")
-ifup = Command("ifup")
-brctl = Command("brctl")
-uuidgen = Command("uuidgen")
-
-
-def is_service_running(servicename):
-	try:
-		o = service(servicename,"status")
-		if distro is Ubuntu:
-			# status in ubuntu does not signal service status via return code
-			if "start/running" in o.stdout: return True
-			return False
-		else:
-			# retcode 0, service running
-			return True
-	except CalledProcessError,e:
-		# retcode nonzero, service not running
-		return False
-
-
-def stop_service(servicename,force=False):
-	# This function is idempotent.  N number of calls have the same result as N+1 number of calls.
-	if is_service_running(servicename) or force: service(servicename,"stop",stdout=None,stderr=None)
-
-
-def disable_service(servicename):
-	# Stops AND disables the service
-	stop_service(servicename)
-	if distro is Ubuntu:
-		updatercd("-f",servicename,"remove",stdout=None,stderr=None)
-	else:
-		chkconfig("--del",servicename,stdout=None,stderr=None)
-
-
-def start_service(servicename,force=False):
-	# This function is idempotent unless force is True.  N number of calls have the same result as N+1 number of calls.
-	if not is_service_running(servicename) or force: service(servicename,"start",stdout=None,stderr=None)
-
-
-def enable_service(servicename,forcestart=False):
-	# Stops AND disables the service
-	if distro is Ubuntu:
-		updatercd("-f",servicename,"remove",stdout=None,stderr=None)
-		updatercd("-f",servicename,"start","2","3","4","5",".",stdout=None,stderr=None)
-	else:
-		chkconfig("--add",servicename,stdout=None,stderr=None)
-		chkconfig("--level","345",servicename,"on",stdout=None,stderr=None)
-	start_service(servicename,force=forcestart)
-
-
-def replace_line(f,startswith,stanza,always_add=False):
-	lines = [ s.strip() for s in file(f).readlines() ]
-	newlines = []
-	replaced = False
-	for line in lines:
-		if line.startswith(startswith):
-			newlines.append(stanza)
-			replaced = True
-		else: newlines.append(line)
-	if not replaced and always_add: newlines.append(stanza)
-	newlines = [ s + '\n' for s in newlines ]
-	file(f,"w").writelines(newlines)
-
-def replace_or_add_line(f,startswith,stanza):
-	return replace_line(f,startswith,stanza,always_add=True)
-	
-# ==================================== CHECK FUNCTIONS ==========================
-
-# If they return without exception, it's okay.  If they raise a CheckFailed exception, that means a condition
-# (generallly one that needs administrator intervention) was detected.
-
-class CheckFailed(Exception): pass
-
-#check function
-def check_hostname():
-	"""If the hostname is a non-fqdn, fail with CalledProcessError.  Else return 0."""
-	try: check_call(["hostname",'--fqdn'])
-	except CalledProcessError:
-		raise CheckFailed("This machine does not have an FQDN (fully-qualified domain name) for a hostname")
-
-#check function
-def check_kvm():
-	if distro in (Fedora,CentOS):
-		if os.path.exists("/dev/kvm"): return True
-		raise CheckFailed("KVM is not correctly installed on this system, or support for it is not enabled in the BIOS")
-	else:
-		try:
-			kvmok()
-			return True
-		except CalledProcessError:
-			raise CheckFailed("KVM is not correctly installed on this system, or support for it is not enabled in the BIOS")
-		except OSError,e:
-			if e.errno is errno.ENOENT: raise CheckFailed("KVM is not correctly installed on this system, or support for it is not enabled in the BIOS")
-			raise
-		return True
-	raise AssertionError, "check_kvm() should have never reached this part"
-
-def check_cgroups():
-	return glob.glob("/*/cpu.shares")
-
-#check function
-def check_selinux():
-	if distro not in [Fedora,CentOS]: return # no selinux outside of those
-	enforcing = False
-	try:
-		output = getenforce().stdout.strip()
-		if "nforcing" in output:
-			enforcing = True
-		if any ( [ s.startswith("SELINUX=enforcing") for s in file("/etc/selinux/config").readlines() ] ):
-			enforcing = True
-	except (IOError,OSError),e:
-		if e.errno == 2: pass
-		else: raise CheckFailed("An unknown error (%s) took place while checking for SELinux"%str(e))
-	if enforcing:
-		raise CheckFailed("SELinux is set to enforcing, please set it to permissive in /etc/selinux/config, then reboot the machine or type setenforce Permissive, after which you can run this program again.")
-
-
-def preflight_checks(do_check_kvm=True):
-	if distro is Ubuntu:
-		preflight_checks = [
-			(check_hostname,"Checking hostname"),
-		]
-	else:
-		preflight_checks = [
-			(check_hostname,"Checking hostname"),
-			(check_selinux,"Checking if SELinux is disabled"),
-		]
-	#preflight_checks.append( (check_cgroups,"Checking if the control groups /cgroup filesystem is mounted") )
-	if do_check_kvm: preflight_checks.append( (check_kvm,"Checking for KVM") )
-	return preflight_checks
-
-
-# ========================== CONFIGURATION TASKS ================================
-
-# A Task is a function that runs within the context of its run() function that runs the function execute(), which does several things, reporting back to the caller as it goes with the use of yield
-# the done() method ought to return true if the task has run in the past
-# the execute() method must implement the configuration act itself
-# run() wraps the output of execute() within a Starting taskname and a Completed taskname message
-# tasks have a name
-
-class TaskFailed(Exception): pass
-	#def __init__(self,code,msg):
-		#Exception.__init__(self,msg)
-		#self.code = code
-
-class ConfigTask:
-	name = "generic config task"
-	autoMode=False
-	def __init__(self): pass
-	def done(self):
-		"""Returns true if the config task has already been done in the past, false if it hasn't"""
-		return False
-	def execute(self):
-		"""Executes the configuration task.  Must not be run if test() returned true.
-		Must yield strings that describe the steps in the task.
-		Raises TaskFailed if the task failed at some step.
-		"""
-	def run (self):
-		stderr("Starting %s"%self.name)
-		it = self.execute()
-		if not it:
-			pass # not a yielding iterable
-		else:
-			for msg in it: stderr(msg)
-		stderr("Completed %s"%self.name)
-	def setAutoMode(self, autoMode):
-		self.autoMode = autoMode
-	def  isAutoMode(self):
-		return self.autoMode
-
-
-# ============== these are some configuration tasks ==================
-
-class SetupNetworking(ConfigTask):
-	name = "network setup"
-	def __init__(self,brname):
-		ConfigTask.__init__(self)
-		self.brname = brname
-		self.runtime_state_changed = False
-		self.was_nm_service_running = None
-		self.was_net_service_running = None
-		if distro in (Fedora, CentOS):
-			self.nmservice = 'NetworkManager'
-			self.netservice = 'network'
-		else:
-			self.nmservice = 'network-manager'
-			self.netservice = 'networking'
-		
-		
-	def done(self):
-		try:
-			if distro in (Fedora,CentOS):
-				alreadysetup = augtool._print("/files/etc/sysconfig/network-scripts/ifcfg-%s"%self.brname).stdout.strip()
-			else:
-				alreadysetup = augtool.match("/files/etc/network/interfaces/iface",self.brname).stdout.strip()
-			return alreadysetup
-		except OSError,e:
-			if e.errno is 2: raise TaskFailed("augtool has not been properly installed on this system")
-			raise
-
-	def restore_state(self):
-		if not self.runtime_state_changed: return
-		
-		try:
-			o = ifconfig(self.brname)
-			bridge_exists = True
-		except CalledProcessError,e:
-			print e.stdout + e.stderr
-			bridge_exists = False
-			
-		if bridge_exists:
-			ifconfig(self.brname,"0.0.0.0")
-			if hasattr(self,"old_net_device"):
-				ifdown(self.old_net_device)
-				ifup(self.old_net_device)
-			try: ifdown(self.brname)
-			except CalledProcessError: pass
-			try: ifconfig(self.brname,"down")
-			except CalledProcessError: pass
-			try: brctl("delbr",self.brname)
-			except CalledProcessError: pass
-			try: ifdown("--force",self.brname)
-			except CalledProcessError: pass
-		
-		
-		if self.was_net_service_running is None:
-			# we do nothing
-			pass
-		elif self.was_net_service_running == False:
-			stop_service(self.netservice,force=True)
-			time.sleep(1)
-		else:
-			# we altered service configuration
-			stop_service(self.netservice,force=True)
-			time.sleep(1)
-			try: start_service(self.netservice,force=True)
-			except CalledProcessError,e:
-				if e.returncode == 1: pass
-				else: raise
-			time.sleep(1)
-		
-		if self.was_nm_service_running is None:
-			 # we do nothing
-			 pass
-		elif self.was_nm_service_running == False:
-			stop_service(self.nmservice,force=True)
-			time.sleep(1)
-		else:
-			# we altered service configuration
-			stop_service(self.nmservice,force=True)
-			time.sleep(1)
-			start_service(self.nmservice,force=True)
-			time.sleep(1)
-		
-		self.runtime_state_changed = False
-
-	def execute(self):
-		yield "Determining default route"
-		routes = ip.route().stdout.splitlines()
-		defaultroute = [ x for x in routes if x.startswith("default") ]
-		if not defaultroute: raise TaskFailed("Your network configuration does not have a default route")
-		
-		dev = defaultroute[0].split()[4]
-		yield "Default route assigned to device %s"%dev
-		
-		self.old_net_device = dev
-		
-		if distro in (Fedora, CentOS):
-			inconfigfile = "/".join(augtool.match("/files/etc/sysconfig/network-scripts/*/DEVICE",dev).stdout.strip().split("/")[:-1])
-			if not inconfigfile: raise TaskFailed("Device %s has not been set up in /etc/sysconfig/network-scripts"%dev)
-			pathtoconfigfile = inconfigfile[6:]
-
-		if distro in (Fedora, CentOS):
-			automatic = augtool.match("%s/ONBOOT"%inconfigfile,"yes").stdout.strip()
-		else:
-			automatic = augtool.match("/files/etc/network/interfaces/auto/*/",dev).stdout.strip()
-		if not automatic:
-			if distro is Fedora: raise TaskFailed("Device %s has not been set up in %s as automatic on boot"%dev,pathtoconfigfile)
-			else: raise TaskFailed("Device %s has not been set up in /etc/network/interfaces as automatic on boot"%dev)
-			
-		if distro not in (Fedora , CentOS):
-			inconfigfile = augtool.match("/files/etc/network/interfaces/iface",dev).stdout.strip()
-			if not inconfigfile: raise TaskFailed("Device %s has not been set up in /etc/network/interfaces"%dev)
-
-		if distro in (Fedora, CentOS):
-			isstatic = augtool.match(inconfigfile + "/BOOTPROTO","none").stdout.strip()
-			if not isstatic: isstatic = augtool.match(inconfigfile + "/BOOTPROTO","static").stdout.strip()
-		else:
-			isstatic = augtool.match(inconfigfile + "/method","static").stdout.strip()
-		if not isstatic:
-			if distro in (Fedora, CentOS): raise TaskFailed("Device %s has not been set up as a static device in %s"%(dev,pathtoconfigfile))
-			else: raise TaskFailed("Device %s has not been set up as a static device in /etc/network/interfaces"%dev)
-
-		if is_service_running(self.nmservice):
-			self.was_nm_service_running = True
-			yield "Stopping NetworkManager to avoid automatic network reconfiguration"
-			disable_service(self.nmservice)
-		else:
-			self.was_nm_service_running = False
-			
-		if is_service_running(self.netservice):
-			self.was_net_service_running = True
-		else:
-			self.was_net_service_running = False
-			
-		yield "Creating Cloud bridging device and making device %s member of this bridge"%dev
-
-		if distro in (Fedora, CentOS):
-			ifcfgtext = file(pathtoconfigfile).read()
-			newf = "/etc/sysconfig/network-scripts/ifcfg-%s"%self.brname
-			#def restore():
-				#try: os.unlink(newf)
-				#except OSError,e:
-					#if errno == 2: pass
-					#raise
-				#try: file(pathtoconfigfile,"w").write(ifcfgtext)
-				#except OSError,e: raise
-
-			f = file(newf,"w") ; f.write(ifcfgtext) ; f.flush() ; f.close()
-			innewconfigfile = "/files" + newf
-
-			script = """set %s/DEVICE %s
-set %s/NAME %s
-set %s/BRIDGE_PORTS %s
-set %s/TYPE Bridge
-rm %s/HWADDR
-rm %s/UUID
-rm %s/HWADDR
-rm %s/IPADDR
-rm %s/DEFROUTE
-rm %s/NETMASK
-rm %s/GATEWAY
-rm %s/BROADCAST
-rm %s/NETWORK
-set %s/BRIDGE %s
-save"""%(innewconfigfile,self.brname,innewconfigfile,self.brname,innewconfigfile,dev,
-			innewconfigfile,innewconfigfile,innewconfigfile,innewconfigfile,
-			inconfigfile,inconfigfile,inconfigfile,inconfigfile,inconfigfile,inconfigfile,
-			inconfigfile,self.brname)
-			
-			yield "Executing the following reconfiguration script:\n%s"%script
-			
-			try:
-				returned = augtool < script
-				if "Saved 2 file" not in returned.stdout:
-					print returned.stdout + returned.stderr
-					#restore()
-					raise TaskFailed("Network reconfiguration failed.")
-				else:
-					yield "Network reconfiguration complete"
-			except CalledProcessError,e:
-				#restore()
-				print e.stdout + e.stderr
-				raise TaskFailed("Network reconfiguration failed")
-		else: # Not fedora
-			backup = file("/etc/network/interfaces").read(-1)
-			#restore = lambda: file("/etc/network/interfaces","w").write(backup)
-
-			script = """set %s %s
-set %s %s
-set %s/bridge_ports %s
-save"""%(automatic,self.brname,inconfigfile,self.brname,inconfigfile,dev)
-			
-			yield "Executing the following reconfiguration script:\n%s"%script
-			
-			try:
-				returned = augtool < script
-				if "Saved 1 file" not in returned.stdout:
-					#restore()
-					raise TaskFailed("Network reconfiguration failed.")
-				else:
-					yield "Network reconfiguration complete"
-			except CalledProcessError,e:
-				#restore()
-				print e.stdout + e.stderr
-				raise TaskFailed("Network reconfiguration failed")
-		
-		yield "We are going to restart network services now, to make the network changes take effect.  Hit ENTER when you are ready."
-		if self.isAutoMode(): pass
-        	else:
-		    raw_input()
-		
-		# if we reach here, then if something goes wrong we should attempt to revert the runinng state
-		# if not, then no point
-		self.runtime_state_changed = True
-		
-		yield "Enabling and restarting non-NetworkManager networking"
-		if distro is Ubuntu: ifup(self.brname,stdout=None,stderr=None)
-		stop_service(self.netservice)
-		try: enable_service(self.netservice,forcestart=True)
-		except CalledProcessError,e:
-			if e.returncode == 1: pass
-			else: raise
-		
-		yield "Verifying that the bridge is up"
-		try:
-			o = ifconfig(self.brname)
-		except CalledProcessError,e:
-			print e.stdout + e.stderr
-			raise TaskFailed("The bridge could not be set up properly")
-		
-		yield "Networking restart done"
-
-
-class SetupCgConfig(ConfigTask):
-	name = "control groups configuration"
-	
-	def done(self):
-		
-		try:
-			return "group virt" in file("/etc/cgconfig.conf","r").read(-1)
-		except IOError,e:
-			if e.errno is 2: raise TaskFailed("cgconfig has not been properly installed on this system")
-			raise
-		
-	def execute(self):
-		cgconfig = file("/etc/cgconfig.conf","r").read(-1)
-		cgconfig = cgconfig + """
-group virt {
-	cpu {
-		cpu.shares = 9216;
-	}
-}
-"""
-		file("/etc/cgconfig.conf","w").write(cgconfig)
-		
-		stop_service("cgconfig")
-		enable_service("cgconfig",forcestart=True)
-
-
-class SetupCgRules(ConfigTask):
-	name = "control group rules setup"
-	cfgline = "root:/usr/sbin/libvirtd	cpu	virt/"
-	
-	def done(self):
-		try:
-			return self.cfgline in file("/etc/cgrules.conf","r").read(-1)
-		except IOError,e:
-			if e.errno is 2: raise TaskFailed("cgrulesd has not been properly installed on this system")
-			raise
-	
-	def execute(self):
-		cgrules = file("/etc/cgrules.conf","r").read(-1)
-		cgrules = cgrules + "\n" + self.cfgline + "\n"
-		file("/etc/cgrules.conf","w").write(cgrules)
-		
-		stop_service("cgred")
-		enable_service("cgred")
-
-
-class SetupCgroupControllers(ConfigTask):
-	name = "qemu cgroup controllers setup"
-	cfgline = "cgroup_controllers = [ \"cpu\" ]"
-	filename = "/etc/libvirt/qemu.conf"
-	
-	def done(self):
-		try:
-			return self.cfgline in file(self.filename,"r").read(-1)
-		except IOError,e:
-			if e.errno is 2: raise TaskFailed("qemu has not been properly installed on this system")
-			raise
-	
-	def execute(self):
-		libvirtqemu = file(self.filename,"r").read(-1)
-		libvirtqemu = libvirtqemu + "\n" + self.cfgline + "\n"
-		file("/etc/libvirt/qemu.conf","w").write(libvirtqemu)
-
-
-class SetupSecurityDriver(ConfigTask):
-	name = "security driver setup"
-	cfgline = "security_driver = \"none\""
-	filename = "/etc/libvirt/qemu.conf"
-	
-	def done(self):
-		try:
-			return self.cfgline in file(self.filename,"r").read(-1)
-		except IOError,e:
-			if e.errno is 2: raise TaskFailed("qemu has not been properly installed on this system")
-			raise
-	
-	def execute(self):
-		libvirtqemu = file(self.filename,"r").read(-1)
-		libvirtqemu = libvirtqemu + "\n" + self.cfgline + "\n"
-		file("/etc/libvirt/qemu.conf","w").write(libvirtqemu)
-
-
-class SetupLibvirt(ConfigTask):
-	name = "libvirt setup"
-	cfgline = "export CGROUP_DAEMON='cpu:/virt'"
-	def done(self):
-		try:
-			if distro in (Fedora,CentOS): 	 libvirtfile = "/etc/sysconfig/libvirtd"
-			elif distro is Ubuntu:	 libvirtfile = "/etc/default/libvirt-bin"
-			else: raise AssertionError, "We should not reach this"
-			return self.cfgline in file(libvirtfile,"r").read(-1)
-		except IOError,e:
-			if e.errno is 2: raise TaskFailed("libvirt has not been properly installed on this system")
-			raise
-	
-	def execute(self):
-		if distro in (Fedora,CentOS): 	 libvirtfile = "/etc/sysconfig/libvirtd"
-		elif distro is Ubuntu:	 libvirtfile = "/etc/default/libvirt-bin"
-		else: raise AssertionError, "We should not reach this"
-		libvirtbin = file(libvirtfile,"r").read(-1)
-		libvirtbin = libvirtbin + "\n" + self.cfgline + "\n"
-		file(libvirtfile,"w").write(libvirtbin)
-		
-		if distro in (CentOS, Fedora):	svc = "libvirtd"
-		else:					svc = "libvirt-bin"
-		stop_service(svc)
-		enable_service(svc)
-
-class SetupLiveMigration(ConfigTask):
-	name = "live migration setup"
-	stanzas = (
-			"listen_tcp=1",
-			'tcp_port="16509"',
-			'auth_tcp="none"',
-			"listen_tls=0",
-	)
-	
-	def done(self):
-		try:
-			lines = [ s.strip() for s in file("/etc/libvirt/libvirtd.conf").readlines() ]
-			if all( [ stanza in lines for stanza in self.stanzas ] ): return True
-		except IOError,e:
-			if e.errno is 2: raise TaskFailed("libvirt has not been properly installed on this system")
-			raise
-	
-	def execute(self):
-		
-		for stanza in self.stanzas:
-			startswith = stanza.split("=")[0] + '='
-			replace_or_add_line("/etc/libvirt/libvirtd.conf",startswith,stanza)
-
-		if distro is Fedora:
-			replace_or_add_line("/etc/sysconfig/libvirtd","LIBVIRTD_ARGS=","LIBVIRTD_ARGS=-l")
-		
-		elif distro is Ubuntu:
-			if os.path.exists("/etc/init/libvirt-bin.conf"):
-				replace_line("/etc/init/libvirt-bin.conf", "exec /usr/sbin/libvirtd","exec /usr/sbin/libvirtd -d -l")
-			else:
-				replace_or_add_line("/etc/default/libvirt-bin","libvirtd_opts=","libvirtd_opts='-l -d'")
-			
-		else:
-			raise AssertionError("Unsupported distribution")
-		
-		if distro in (CentOS, Fedora):	svc = "libvirtd"
-		else:						svc = "libvirt-bin"
-		stop_service(svc)
-		enable_service(svc)
-
-
-class SetupRequiredServices(ConfigTask):
-	name = "required services setup"
-	
-	def done(self):
-		if distro is Fedora:  nfsrelated = "rpcbind nfslock"
-		elif distro is CentOS: nfsrelated = "portmap nfslock"
-		else: return True
-		return all( [ is_service_running(svc) for svc in nfsrelated.split() ] )
-		
-	def execute(self):
-
-		if distro is Fedora:  nfsrelated = "rpcbind nfslock"
-		elif distro is CentOS: nfsrelated = "portmap nfslock"
-		else: raise AssertionError("Unsupported distribution")
-
-		for svc in nfsrelated.split(): enable_service(svc)
-
-
-class SetupFirewall(ConfigTask):
-	name = "firewall setup"
-	
-	def done(self):
-		
-		if distro in (Fedora, CentOS):
-			if not os.path.exists("/etc/sysconfig/iptables"): return True
-			if ":on" not in chkconfig("--list","iptables").stdout: return True
-		else:
-			if "Status: active" not in ufw.status().stdout: return True
-			if not os.path.exists("/etc/ufw/before.rules"): return True
-		rule = "-p tcp -m tcp --dport 16509 -j ACCEPT"
-		if rule in iptablessave().stdout: return True
-		return False
-	
-	def execute(self):
-		ports = "22 1798 16509".split()
-		if distro in (Fedora , CentOS):
-			for p in ports: iptables("-I","INPUT","1","-p","tcp","--dport",p,'-j','ACCEPT')
-			o = service.iptables.save() ; print o.stdout + o.stderr
-		else:
-			for p in ports: ufw.allow(p)
-
-
-class SetupFirewall2(ConfigTask):
-	# this closes bug 4371
-	name = "additional firewall setup"
-	def __init__(self,brname):
-		ConfigTask.__init__(self)
-		self.brname = brname
-	
-	def done(self):
-		
-		if distro in (Fedora, CentOS):
-			if not os.path.exists("/etc/sysconfig/iptables"): return True
-			if ":on" not in chkconfig("--list","iptables").stdout: return True
-			rule = "FORWARD -i %s -o %s -j ACCEPT"%(self.brname,self.brname)
-			if rule in iptablessave().stdout: return True
-			return False
-		else:
-			if "Status: active" not in ufw.status().stdout: return True
-			if not os.path.exists("/etc/ufw/before.rules"): return True
-			rule = "-A ufw-before-forward -i %s -o %s -j ACCEPT"%(self.brname,self.brname)
-			if rule in file("/etc/ufw/before.rules").read(-1): return True
-			return False
-		
-	def execute(self):
-		
-		yield "Permitting traffic in the bridge interface, migration port and for VNC ports"
-		
-		if distro in (Fedora , CentOS):
-			
-			for rule in (
-				"-I FORWARD -i %s -o %s -j ACCEPT"%(self.brname,self.brname),
-				"-I INPUT 1 -p tcp --dport 5900:6100 -j ACCEPT",
-				"-I INPUT 1 -p tcp --dport 49152:49216 -j ACCEPT",
-				):
-				args = rule.split()
-				o = iptables(*args)
-			service.iptables.save(stdout=None,stderr=None)
-			
-		else:
-			
-			rule = "-A ufw-before-forward -i %s -o %s -j ACCEPT"%(self.brname,self.brname)
-			text = file("/etc/ufw/before.rules").readlines()
-			newtext = []
-			for line in text:
-				if line.startswith("COMMIT"):
-					newtext.append(rule + "\n")
-				newtext.append(line)
-			file("/etc/ufw/before.rules","w").writelines(newtext)
-			ufw.allow.proto.tcp("from","any","to","any","port","5900:6100")
-			ufw.allow.proto.tcp("from","any","to","any","port","49152:49216")
-
-			stop_service("ufw")
-			start_service("ufw")
-
-
-# Tasks according to distribution -- at some point we will split them in separate modules
-
-def config_tasks(brname):
-	if distro is CentOS:
-		config_tasks = (
-			SetupNetworking(brname),
-			SetupLibvirt(),
-			SetupRequiredServices(),
-			SetupFirewall(),
-			SetupFirewall2(brname),
-		)
-	elif distro in (Ubuntu,Fedora):
-		config_tasks = (
-			SetupNetworking(brname),
-			SetupCgConfig(),
-			SetupCgRules(),
-			SetupCgroupControllers(),
-			SetupSecurityDriver(),
-			SetupLibvirt(),
-			SetupLiveMigration(),
-			SetupRequiredServices(),
-			SetupFirewall(),
-			SetupFirewall2(brname),
-		)
-	else:
-		raise AssertionError("Unknown distribution")
-	return config_tasks
-
-
-def backup_etc(targetdir):
-	if not targetdir.endswith("/"): targetdir += "/"
-	check_call( ["mkdir","-p",targetdir] )
-	rsynccall = ["rsync","-ax","--delete"] + ["/etc/",targetdir]
-	check_call( rsynccall )
-def restore_etc(targetdir):
-	if not targetdir.endswith("/"): targetdir += "/"
-	rsynccall = ["rsync","-ax","--delete"] + [targetdir,"/etc/"]
-	check_call( rsynccall )
-def remove_backup(targetdir):
-	check_call( ["rm","-rf",targetdir] )
-
-def list_zonespods(host):
-	text = urllib2.urlopen('http://%s:8096/client/api?command=listPods'%host).read(-1)
-	dom = xml.dom.minidom.parseString(text) 
-	x = [ (zonename,podname)
-		for pod in dom.childNodes[0].childNodes  
-		for podname in [ x.childNodes[0].wholeText for x in pod.childNodes if x.tagName == "name" ] 
-		for zonename in  [ x.childNodes[0].wholeText for x in pod.childNodes if x.tagName == "zonename" ]
-		]
-	return x
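The DOM walk in list_zonespods() can be exercised offline against a canned response. The XML shape used here (a root element wrapping `<pod>` entries with `<name>` and `<zonename>` children) is an assumption inferred from the parsing code above, not taken from API documentation:

```python
import xml.dom.minidom

# Hypothetical sample of the listPods XML this code expects to parse.
sample = ("<listpodsresponse>"
          "<pod><name>pod1</name><zonename>z1</zonename></pod>"
          "<pod><name>pod2</name><zonename>z1</zonename></pod>"
          "</listpodsresponse>")

def list_zonespods_from_text(text):
    """Same walk as list_zonespods(), but over an XML string and with
    non-element nodes filtered out for safety."""
    dom = xml.dom.minidom.parseString(text)
    return [(zonename, podname)
            for pod in dom.childNodes[0].childNodes
            if pod.nodeType == pod.ELEMENT_NODE
            for podname in [n.childNodes[0].wholeText
                            for n in pod.childNodes if n.nodeName == "name"]
            for zonename in [n.childNodes[0].wholeText
                             for n in pod.childNodes if n.nodeName == "zonename"]]
```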
-	
-def prompt_for_hostpods(zonespods):
-	"""Ask user to select one from those zonespods
-	Returns (zone,pod) or None if the user made the default selection."""
-	while True:
-		stderr("Type the number of the zone and pod combination this host belongs to (hit ENTER to skip this step)")
-		print "  N) ZONE, POD" 
-		print "================"
-		for n,(z,p) in enumerate(zonespods):
-			print "%3d) %s, %s"%(n,z,p)
-		print "================"
-		zoneandpod = raw_input().strip()
-		
-		if not zoneandpod:
-			# we go with default, do not touch anything, just break
-			return None
-		
-		try:
-			# if the input does not parse as an int, report the error and re-ask
-			zoneandpod = int(zoneandpod)
-			if zoneandpod >= len(zonespods) or zoneandpod < 0: raise ValueError, "%s out of bounds"%zoneandpod
-		except ValueError,e:
-			stderr(str(e))
-			continue # re-ask
-		
-		# the integer is a valid zone/pod index in the array
-		return zonespods[zoneandpod]
-	
-# this configures the agent
-
-def setup_agent_config(configfile, host, zone, pod, guid):
-	stderr("Examining Agent configuration")
-	fn = configfile
-	text = file(fn).read(-1)
-	lines = [ s.strip() for s in text.splitlines() ]
-	confopts = dict([ m.split("=",1) for m in lines if "=" in m and not m.startswith("#") ])
-	confposes = dict([ (m.split("=",1)[0],n) for n,m in enumerate(lines) if "=" in m and not m.startswith("#") ])
-	
-	if guid != None:
-		confopts['guid'] = guid
-	else:
-		if not "guid" in confopts:
-			stderr("Generating GUID for this Agent")
-			confopts['guid'] = uuidgen().stdout.strip()
-	
-	if host == None:
-		try: host = confopts["host"]
-		except KeyError: host = "localhost"
-		stderr("Please enter the host name of the management server that this agent will connect to: (just hit ENTER to go with %s)",host)
-		newhost = raw_input().strip()
-		if newhost: host = newhost
-
-	confopts["host"] = host
-	
-	stderr("Querying %s for zones and pods",host)
-	
-	try:
-		if zone == None or pod == None:
-			x = list_zonespods(confopts['host'])
-			zoneandpod = prompt_for_hostpods(x)
-			if zoneandpod:
-				confopts["zone"],confopts["pod"] = zoneandpod
-				stderr("You selected zone %s pod %s",confopts["zone"],confopts["pod"])
-			else:
-				stderr("Skipped -- using the previous zone %s pod %s",confopts["zone"],confopts["pod"])
-		else:
-			confopts["zone"] = zone
-			confopts["pod"] = pod
-	except (urllib2.URLError,urllib2.HTTPError),e:
-		stderr("Query failed: %s.  Defaulting to zone %s pod %s",str(e),confopts["zone"],confopts["pod"])
-
-	for opt,val in confopts.items():
-		line = "=".join([opt,val])
-		if opt not in confposes: lines.append(line)
-		else: lines[confposes[opt]] = line
-	
-	text = "\n".join(lines)
-	file(fn,"w").write(text)
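The confopts/confposes pattern above (parse key=value lines, update the values, rewrite the file in place while preserving comments and line order) can be sketched as a standalone helper; `update_properties` is a hypothetical name for illustration, not part of the original code:

```python
def update_properties(text, updates):
    """Update key=value lines in place, preserving comments and order;
    keys not already present are appended at the end."""
    lines = [s.strip() for s in text.splitlines()]
    # value of each key, and the line number it lives on
    conf = dict(m.split("=", 1) for m in lines
                if "=" in m and not m.startswith("#"))
    pos = dict((m.split("=", 1)[0], n) for n, m in enumerate(lines)
               if "=" in m and not m.startswith("#"))
    conf.update(updates)
    for opt, val in conf.items():
        line = "=".join([opt, val])
        if opt in pos: lines[pos[opt]] = line
        else: lines.append(line)
    return "\n".join(lines)
```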
-
-def setup_consoleproxy_config(configfile, host, zone, pod):
-	stderr("Examining Console Proxy configuration")
-	fn = configfile
-	text = file(fn).read(-1)
-	lines = [ s.strip() for s in text.splitlines() ]
-	confopts = dict([ m.split("=",1) for m in lines if "=" in m and not m.startswith("#") ])
-	confposes = dict([ (m.split("=",1)[0],n) for n,m in enumerate(lines) if "=" in m and not m.startswith("#") ])
-
-	if not "guid" in confopts:
-		stderr("Generating GUID for this Console Proxy")
-		confopts['guid'] = uuidgen().stdout.strip()
-
-        if host == None:
-		try: host = confopts["host"]
-		except KeyError: host = "localhost"
-		stderr("Please enter the host name of the management server that this console-proxy will connect to: (just hit ENTER to go with %s)",host)
-		newhost = raw_input().strip()
-		if newhost: host = newhost
-	confopts["host"] = host
-
-	stderr("Querying %s for zones and pods",host)
-	
-	try:
-                if zone == None or pod == None:
-			x = list_zonespods(confopts['host'])
-			zoneandpod = prompt_for_hostpods(x)
-			if zoneandpod:
-				confopts["zone"],confopts["pod"] = zoneandpod
-				stderr("You selected zone %s pod %s",confopts["zone"],confopts["pod"])
-			else:
-				stderr("Skipped -- using the previous zone %s pod %s",confopts["zone"],confopts["pod"])
-		else:
-			confopts["zone"] = zone
-			confopts["pod"] = pod
-	except (urllib2.URLError,urllib2.HTTPError),e:
-		stderr("Query failed: %s.  Defaulting to zone %s pod %s",str(e),confopts["zone"],confopts["pod"])
-
-	for opt,val in confopts.items():
-		line = "=".join([opt,val])
-		if opt not in confposes: lines.append(line)
-		else: lines[confposes[opt]] = line
-	
-	text = "\n".join(lines)
-	file(fn,"w").write(text)
-
-# =========================== DATABASE MIGRATION SUPPORT CODE ===================
-
-# Migrator, Migratee and Evolvers -- this is the generic infrastructure.
-# To actually implement Cloud.com-specific code, search "Cloud.com-specific evolvers and context"
-
-
-class MigratorException(Exception): pass
-class NoMigrationPath(MigratorException): pass
-class NoMigrator(MigratorException): pass
-
-INITIAL_LEVEL = '-'
-
-class Migrator:
-	"""Migrator class.
-	
-	The migrator gets a list of Python objects, and discovers MigrationSteps in it. It then sorts the steps into a chain, based on the attributes from_level and to_level in each one of the steps.
-	
-	When the migrator's run(context) is called, the chain of steps is applied sequentially on the context supplied to run(), in the order of the chain of steps found at discovery time.  See the documentation for the MigrationStep class for information on how that happens.
-	"""
-	
-	def __init__(self,evolver_source):
-		self.discover_evolvers(evolver_source)
-		self.sort_evolvers()
-		
-	def discover_evolvers(self,source):
-		self.evolvers = []
-		for val in source:
-			if hasattr(val,"from_level") and hasattr(val,"to_level") and val.to_level:
-				self.evolvers.append(val)
-	
-	def sort_evolvers(self):
-		new = []
-		while self.evolvers:
-			if not new:
-				try: idx= [ i for i,s in enumerate(self.evolvers)
-					if s.from_level == INITIAL_LEVEL ][0] # initial evolver
-				except IndexError,e:
-					raise IndexError, "no initial evolver (from_level == INITIAL_LEVEL) could be found"
-			else:
-				try: idx= [ i for i,s in enumerate(self.evolvers)
-					if new[-1].to_level == s.from_level ][0]
-				except IndexError,e:
-					raise IndexError, "no evolver could be found to evolve from level %s"%new[-1].to_level
-			new.append(self.evolvers.pop(idx))
-		self.evolvers = new
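The chain-sorting logic of sort_evolvers() can be sketched on plain (from_level, to_level) pairs; `sort_steps` is an illustrative stand-in, not the real API:

```python
INITIAL = '-'  # same sentinel role as INITIAL_LEVEL above

def sort_steps(steps):
    """Order (from_level, to_level) pairs into one chain, starting at
    the step whose from_level is the initial marker."""
    chain = []
    remaining = list(steps)
    while remaining:
        want = INITIAL if not chain else chain[-1][1]
        matches = [s for s in remaining if s[0] == want]
        if not matches:
            raise ValueError("no step evolves from level %r" % want)
        chain.append(matches[0])
        remaining.remove(matches[0])
    return chain
```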
-	
-	def get_evolver_chain(self):
-		return [ (s.from_level, s.to_level, s) for s in self.evolvers ]
-		
-	def get_evolver_by_starting_level(self,level):
-		try: return [ s for s in self.evolvers if s.from_level == level][0]
-		except IndexError: raise NoMigrator, "No evolver knows how to evolve the database from schema level %r"%level
-	
-	def get_evolver_by_ending_level(self,level):
-		try: return [ s for s in self.evolvers if s.to_level == level][0]
-		except IndexError: raise NoMigrator, "No evolver knows how to evolve the database to schema level %r"%level
-	
-	def run(self, context, dryrun = False, starting_level = None, ending_level = None):
-		"""Runs each one of the steps in sequence, passing the migration context to each. At the end of the process, context.commit() is called to save the changes, or context.rollback() is called if dryrun = True.
-		
-		If starting_level is not specified, then the context.get_schema_level() is used to find out at what level the context is at.  Then starting_level is set to that.
-		
-		If ending_level is not specified, then the evolvers will run till the end of the chain."""
-		
-		assert dryrun is False # NOT IMPLEMENTED; likely to be done by asking the context itself to remember and restore its state
-		
-		starting_level = starting_level or context.get_schema_level() or self.evolvers[0].from_level
-		ending_level = ending_level or self.evolvers[-1].to_level
-		
-		evolution_path = self.evolvers
-		idx = evolution_path.index(self.get_evolver_by_starting_level(starting_level))
-		evolution_path = evolution_path[idx:]
-		try: idx = evolution_path.index(self.get_evolver_by_ending_level(ending_level))
-		except ValueError:
-			raise NoMigrationPath, "No evolution path from schema level %r to schema level %r" % \
-				(starting_level,ending_level)
-		evolution_path = evolution_path[:idx+1]
-		
-		logging.info("Starting migration on %s"%context)
-		
-		for ec in evolution_path:
-			assert ec.from_level == context.get_schema_level()
-			evolver = ec(context=context)
-			logging.info("%s (from level %s to level %s)",
-				evolver,
-				evolver.from_level,
-				evolver.to_level)
-			#try:
-			evolver.run()
-			#except:
-				#context.rollback()
-				#raise
-			context.set_schema_level(evolver.to_level)
-			#context.commit()
-			logging.info("%s is now at level %s",context,context.get_schema_level())
-		
-		#if dryrun: # implement me with backup and restore
-			#logging.info("Rolling back changes on %s",context)
-			#context.rollback()
-		#else:
-			#logging.info("Committing changes on %s",context)
-			#context.commit()
-		
-		logging.info("Migration finished")
-		
-
-class MigrationStep:
-	"""Base MigrationStep class, aka evolver.
-	
-	You develop your own steps, and then pass a list of those steps to the
-	Migrator instance that will run them in order.
-	
-	When the migrator runs, it will take the list of steps you gave it,
-	and, for each step:
-	
-	a) instantiate it, passing the context you gave to the migrator
-	   into the step's __init__().
-	b) run() the method in the migration step.
-	
-	As you can see, the default MigrationStep constructor makes the passed
-	context available as self.context in the methods of your step.
-	
-	Each step has two member vars that determine in which order they
-	are run, and if they need to run:
-	
-	- from_level = the schema level that the database should be at,
-		       before running the evolver
-		       The value INITIAL_LEVEL ('-') has special meaning
-		       here: it marks the first evolver to run when the
-		       database does not have a schema level yet.
-	- to_level =   the schema level number that the database will be at
-		       after the evolver has run
-	"""
-	
-	# Implement these attributes in your steps
-	from_level = None
-	to_level = None
-	
-	def __init__(self,context):
-		self.context = context
-		
-	def run(self):
-		raise NotImplementedError
-
-
-class MigrationContext:
-	def __init__(self): pass
-	def commit(self):raise NotImplementedError
-	def rollback(self):raise NotImplementedError
-	def get_schema_level(self):raise NotImplementedError
-	def set_schema_level(self,l):raise NotImplementedError
-
-
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""Cloud.com Python utility library"""
+
+import sys, os, subprocess, errno, re, time, glob
+import urllib2
+import xml.dom.minidom
+import logging
+import socket
+
+# exit() error constants
+E_GENERIC= 1
+E_NOKVM = 2
+E_NODEFROUTE = 3
+E_DHCP = 4
+E_NOPERSISTENTNET = 5
+E_NETRECONFIGFAILED = 6
+E_VIRTRECONFIGFAILED = 7
+E_FWRECONFIGFAILED = 8
+E_AGENTRECONFIGFAILED = 9
+E_AGENTFAILEDTOSTART = 10
+E_NOFQDN = 11
+E_SELINUXENABLED = 12
+try: E_USAGE = os.EX_USAGE
+except AttributeError: E_USAGE = 64
+
+E_NEEDSMANUALINTERVENTION = 13
+E_INTERRUPTED = 14
+E_SETUPFAILED = 15
+E_UNHANDLEDEXCEPTION = 16
+E_MISSINGDEP = 17
+
+Unknown = 0
+Fedora = 1
+CentOS = 2
+Ubuntu = 3
+
+IPV4 = 4
+IPV6 = 6
+
+#=================== DISTRIBUTION DETECTION =================
+
+if os.path.exists("/etc/fedora-release"): distro = Fedora
+elif os.path.exists("/etc/centos-release"): distro = CentOS
+elif os.path.exists("/etc/redhat-release") and not os.path.exists("/etc/fedora-release"): distro = CentOS
+elif os.path.exists("/etc/legal") and "Ubuntu" in file("/etc/legal").read(-1): distro = Ubuntu
+else: distro = Unknown
+
+logFileName=None
+# ==================  LIBRARY UTILITY CODE=============
+def setLogFile(logFile):
+	global logFileName
+	logFileName=logFile
+def read_properties(propfile):
+	if not hasattr(propfile,"read"): propfile = file(propfile)
+	properties = propfile.read().splitlines()
+	properties = [ s.strip() for s in properties ]
+	properties = [ s for s in properties if
+			s and
+			not s.startswith("#") and
+			not s.startswith(";") ]
+	#[ logging.debug("Valid config file line: %s",s) for s in properties ]
+	proppairs = [ s.split("=",1) for s in properties ]
+	return dict(proppairs)
+
+def stderr(msgfmt,*args):
+	"""Print a message to stderr, optionally interpolating the arguments into it"""
+	msgfmt += "\n"
+	if logFileName != None:
+		sys.stderr = open(logFileName, 'a+')
+	if args: sys.stderr.write(msgfmt%args)
+	else: sys.stderr.write(msgfmt)
+
+def exit(errno=E_GENERIC,message=None,*args):
+	"""Exit with an error status code, printing a message to stderr if specified"""
+	if message: stderr(message,*args)
+	sys.exit(errno)
+
+def resolve(host,port):
+	return [ (x[4][0],len(x[4])+2) for x in socket.getaddrinfo(host,port,socket.AF_UNSPEC,socket.SOCK_STREAM, 0, socket.AI_PASSIVE) ]
+	
+def resolves_to_ipv6(host,port):
+	return resolve(host,port)[0][1] == IPV6
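The `len(x[4])+2` trick above works because getaddrinfo() returns a 2-tuple sockaddr for IPv4 and a 4-tuple for IPv6, so adding 2 yields the IP version number. A minimal standalone sketch:

```python
import socket

def addr_family(host, port):
    """Return 4 or 6 for the first resolution of host:port, using the
    sockaddr tuple length: 2 for IPv4, 4 for IPv6."""
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
                               socket.SOCK_STREAM, 0, socket.AI_PASSIVE)
    sockaddr = infos[0][4]
    return len(sockaddr) + 2
```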
+
+### Add check_call to Python 2.4 by patching the subprocess module at runtime
+if hasattr(subprocess,"check_call"):
+	from subprocess import CalledProcessError, check_call
+else:
+	class CalledProcessError(Exception):
+		def __init__(self, returncode, cmd):
+			self.returncode = returncode ; self.cmd = cmd
+		def __str__(self): return "Command '%s' returned non-zero exit status %d" % (self.cmd, self.returncode)
+	subprocess.CalledProcessError = CalledProcessError
+	
+	def check_call(*popenargs, **kwargs):
+		retcode = subprocess.call(*popenargs, **kwargs)
+		cmd = kwargs.get("args")
+		if cmd is None: cmd = popenargs[0]
+		if retcode: raise subprocess.CalledProcessError(retcode, cmd)
+		return retcode
+	subprocess.check_call = check_call
+
+# Python 2.4 does not have the any() / all() built-ins
+try:
+	any = any
+	all = all
+except NameError:
+	def any(sequence):
+		for i in sequence:
+			if i: return True
+		return False
+	def all(sequence):
+		for i in sequence:
+			if not i: return False
+		return True
+
+class Command:
+	"""This class simulates a shell command"""
+	def __init__(self,name,parent=None):
+		self.__name = name
+		self.__parent = parent
+	def __getattr__(self,name):
+		if name == "_print": name = "print"
+		return Command(name,self)
+	def __call__(self,*args,**kwargs):
+		cmd = self.__get_recursive_name() + list(args)
+		#print "	",cmd
+		kwargs = dict(kwargs)
+		if "stdout" not in kwargs: kwargs["stdout"] = subprocess.PIPE
+		if "stderr" not in kwargs: kwargs["stderr"] = subprocess.PIPE
+		popen = subprocess.Popen(cmd,**kwargs)
+		m = popen.communicate()
+		ret = popen.wait()
+		if ret:
+			e = CalledProcessError(ret,cmd)
+			e.stdout,e.stderr = m
+			raise e
+		class CommandOutput:
+			def __init__(self,stdout,stderr):
+				self.stdout = stdout
+				self.stderr = stderr
+		return CommandOutput(*m)
+	def __lt__(self,other):
+		cmd = self.__get_recursive_name()
+		#print "	",cmd,"<",other
+		popen = subprocess.Popen(cmd,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
+		m = popen.communicate(other)
+		ret = popen.wait()
+		if ret:
+			e = CalledProcessError(ret,cmd)
+			e.stdout,e.stderr = m
+			raise e
+		class CommandOutput:
+			def __init__(self,stdout,stderr):
+				self.stdout = stdout
+				self.stderr = stderr
+		return CommandOutput(*m)
+		
+	def __get_recursive_name(self,sep=None):
+		m = self
+		l = []
+		while m is not None:
+			l.append(m.__name)
+			m = m.__parent
+		l.reverse()
+		if sep: return sep.join(l)
+		else: return l
+	def __str__(self):
+		return self.__get_recursive_name(sep=" ")
+		
+	def __repr__(self): return self.__str__()
+
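The attribute-chaining idea behind Command can be shown without spawning processes. This sketch only builds the argv list; unlike the real class above, it does not run anything through subprocess:

```python
class Argv:
    """Sketch of the Command idea: attribute access and calls build an
    argv list instead of executing a process."""
    def __init__(self, name, parent=None):
        self._name = name
        self._parent = parent
    def __getattr__(self, name):
        if name == "_print":
            name = "print"  # same escape hatch as Command uses
        elif name.startswith("_"):
            raise AttributeError(name)
        return Argv(name, self)
    def __call__(self, *args):
        # walk up the parent chain to recover the full command name
        parts, node = [], self
        while node is not None:
            parts.append(node._name)
            node = node._parent
        parts.reverse()
        return parts + list(args)
```

Calling `Argv("ufw").allow.proto.tcp("from", "any")` yields the argv list that the real Command class would hand to subprocess.Popen.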
+kvmok = Command("kvm-ok")
+getenforce = Command("/usr/sbin/getenforce")
+ip = Command("ip")
+service = Command("service")
+chkconfig = Command("chkconfig")
+updatercd = Command("update-rc.d")
+ufw = Command("ufw")
+iptables = Command("iptables")
+iptablessave = Command("iptables-save")
+augtool = Command("augtool")
+ifconfig = Command("ifconfig")
+ifdown = Command("ifdown")
+ifup = Command("ifup")
+brctl = Command("brctl")
+uuidgen = Command("uuidgen")
+
+
+def is_service_running(servicename):
+	try:
+		o = service(servicename,"status")
+		if distro is Ubuntu:
+			# status in ubuntu does not signal service status via return code
+			if "start/running" in o.stdout: return True
+			return False
+		else:
+			# retcode 0, service running
+			return True
+	except CalledProcessError,e:
+		# retcode nonzero, service not running
+		return False
+
+
+def stop_service(servicename,force=False):
+	# This function is idempotent: N calls have the same result as N+1 calls.
+	if is_service_running(servicename) or force: service(servicename,"stop",stdout=None,stderr=None)
+
+
+def disable_service(servicename):
+	# Stops AND disables the service
+	stop_service(servicename)
+	if distro is Ubuntu:
+		updatercd("-f",servicename,"remove",stdout=None,stderr=None)
+	else:
+		chkconfig("--del",servicename,stdout=None,stderr=None)
+
+
+def start_service(servicename,force=False):
+	# This function is idempotent unless force is True: N calls have the same result as N+1 calls.
+	if not is_service_running(servicename) or force: service(servicename,"start",stdout=None,stderr=None)
+
+
+def enable_service(servicename,forcestart=False):
+	# Enables AND starts the service
+	if distro is Ubuntu:
+		updatercd("-f",servicename,"remove",stdout=None,stderr=None)
+		updatercd("-f",servicename,"start","2","3","4","5",".",stdout=None,stderr=None)
+	else:
+		chkconfig("--add",servicename,stdout=None,stderr=None)
+		chkconfig("--level","345",servicename,"on",stdout=None,stderr=None)
+	start_service(servicename,force=forcestart)
+
+
+def replace_line(f,startswith,stanza,always_add=False):
+	lines = [ s.strip() for s in file(f).readlines() ]
+	newlines = []
+	replaced = False
+	for line in lines:
+		if line.startswith(startswith):
+			newlines.append(stanza)
+			replaced = True
+		else: newlines.append(line)
+	if not replaced and always_add: newlines.append(stanza)
+	newlines = [ s + '\n' for s in newlines ]
+	file(f,"w").writelines(newlines)
+
+def replace_or_add_line(f,startswith,stanza):
+	return replace_line(f,startswith,stanza,always_add=True)
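A string-based sketch of the replace_or_add_line() semantics (every line beginning with the prefix is replaced, and the stanza is appended when nothing matched); `replace_or_add` is a hypothetical name for illustration:

```python
def replace_or_add(text, startswith, stanza):
    """String version of replace_or_add_line above: replace every line
    beginning with `startswith`, or append `stanza` if none matched."""
    lines = [s.strip() for s in text.splitlines()]
    newlines, replaced = [], False
    for line in lines:
        if line.startswith(startswith):
            newlines.append(stanza)
            replaced = True
        else:
            newlines.append(line)
    if not replaced:
        newlines.append(stanza)
    return "\n".join(newlines) + "\n"
```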
+	
+# ==================================== CHECK FUNCTIONS ==========================
+
+# If they return without exception, it's okay.  If they raise a CheckFailed exception, that means a condition
+# (generally one that needs administrator intervention) was detected.
+
+class CheckFailed(Exception): pass
+
+#check function
+def check_hostname():
+	"""Raise CheckFailed if this machine's hostname is not a fully-qualified domain name."""
+	try: check_call(["hostname",'--fqdn'])
+	except CalledProcessError:
+		raise CheckFailed("This machine does not have an FQDN (fully-qualified domain name) for a hostname")
+
+#check function
+def check_kvm():
+	if distro in (Fedora,CentOS):
+		if os.path.exists("/dev/kvm"): return True
+		raise CheckFailed("KVM is not correctly installed on this system, or support for it is not enabled in the BIOS")
+	else:
+		try:
+			kvmok()
+			return True
+		except CalledProcessError:
+			raise CheckFailed("KVM is not correctly installed on this system, or support for it is not enabled in the BIOS")
+		except OSError,e:
+			if e.errno == errno.ENOENT: raise CheckFailed("KVM is not correctly installed on this system, or support for it is not enabled in the BIOS")
+			raise
+		return True
+	raise AssertionError, "check_kvm() should have never reached this part"
+
+def check_cgroups():
+	return glob.glob("/*/cpu.shares")
+
+#check function
+def check_selinux():
+	if distro not in [Fedora,CentOS]: return # no selinux outside of those
+	enforcing = False
+	try:
+		output = getenforce().stdout.strip()
+		if "nforcing" in output:
+			enforcing = True
+		if any ( [ s.startswith("SELINUX=enforcing") for s in file("/etc/selinux/config").readlines() ] ):
+			enforcing = True
+	except (IOError,OSError),e:
+		if e.errno == 2: pass
+		else: raise CheckFailed("An unknown error (%s) took place while checking for SELinux"%str(e))
+	if enforcing:
+		raise CheckFailed("SELinux is set to enforcing, please set it to permissive in /etc/selinux/config, then reboot the machine or type setenforce Permissive, after which you can run this program again.")
+
+
+def preflight_checks(do_check_kvm=True):
+	if distro is Ubuntu:
+		preflight_checks = [
+			(check_hostname,"Checking hostname"),
+		]
+	else:
+		preflight_checks = [
+			(check_hostname,"Checking hostname"),
+			(check_selinux,"Checking if SELinux is disabled"),
+		]
+	#preflight_checks.append( (check_cgroups,"Checking if the control groups /cgroup filesystem is mounted") )
+	if do_check_kvm: preflight_checks.append( (check_kvm,"Checking for KVM") )
+	return preflight_checks
+
+
+# ========================== CONFIGURATION TASKS ================================
+
+# A ConfigTask represents one configuration act.  Its run() method drives execute(), which does the actual work and reports progress back to the caller by yielding messages
+# the done() method ought to return true if the task has run in the past
+# the execute() method must implement the configuration act itself
+# run() wraps the output of execute() within a Starting taskname and a Completed taskname message
+# tasks have a name
+
+class TaskFailed(Exception): pass
+	#def __init__(self,code,msg):
+		#Exception.__init__(self,msg)
+		#self.code = code
+
+class ConfigTask:
+	name = "generic config task"
+	autoMode=False
+	def __init__(self): pass
+	def done(self):
+		"""Returns true if the config task has already been done in the past, false if it hasn't"""
+		return False
+	def execute(self):
+		"""Executes the configuration task.  Must not be run if done() returned true.
+		Must yield strings that describe the steps in the task.
+		Raises TaskFailed if the task failed at some step.
+		"""
+	def run(self):
+		stderr("Starting %s"%self.name)
+		it = self.execute()
+		if not it:
+			pass # not a yielding iterable
+		else:
+			for msg in it: stderr(msg)
+		stderr("Completed %s"%self.name)
+	def setAutoMode(self, autoMode):
+		self.autoMode = autoMode
+	def isAutoMode(self):
+		return self.autoMode
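The generator protocol that run() expects from execute() can be sketched with a toy task that records messages instead of printing them; `DemoTask` is illustrative only:

```python
class DemoTask:
    """Sketch of the ConfigTask protocol: execute() yields progress
    messages, run() relays them between start/finish notices."""
    name = "demo task"
    def __init__(self):
        self.log = []
    def execute(self):
        # each yield reports one step of the configuration act
        yield "step one"
        yield "step two"
    def run(self):
        self.log.append("Starting %s" % self.name)
        it = self.execute()
        if it:  # execute() may also be a plain non-yielding function
            for msg in it:
                self.log.append(msg)
        self.log.append("Completed %s" % self.name)
```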
+
+
+# ============== these are some configuration tasks ==================
+
+class SetupNetworking(ConfigTask):
+	name = "network setup"
+	def __init__(self,brname):
+		ConfigTask.__init__(self)
+		self.brname = brname
+		self.runtime_state_changed = False
+		self.was_nm_service_running = None
+		self.was_net_service_running = None
+		if distro in (Fedora, CentOS):
+			self.nmservice = 'NetworkManager'
+			self.netservice = 'network'
+		else:
+			self.nmservice = 'network-manager'
+			self.netservice = 'networking'
+		
+		
+	def done(self):
+		try:
+			if distro in (Fedora,CentOS):
+				alreadysetup = augtool._print("/files/etc/sysconfig/network-scripts/ifcfg-%s"%self.brname).stdout.strip()
+			else:
+				alreadysetup = augtool.match("/files/etc/network/interfaces/iface",self.brname).stdout.strip()
+			return alreadysetup
+		except OSError,e:
+			if e.errno == errno.ENOENT: raise TaskFailed("augtool has not been properly installed on this system")
+			raise
+
+	def restore_state(self):
+		if not self.runtime_state_changed: return
+		
+		try:
+			o = ifconfig(self.brname)
+			bridge_exists = True
+		except CalledProcessError,e:
+			print e.stdout + e.stderr
+			bridge_exists = False
+			
+		if bridge_exists:
+			ifconfig(self.brname,"0.0.0.0")
+			if hasattr(self,"old_net_device"):
+				ifdown(self.old_net_device)
+				ifup(self.old_net_device)
+			try: ifdown(self.brname)
+			except CalledProcessError: pass
+			try: ifconfig(self.brname,"down")
+			except CalledProcessError: pass
+			try: brctl("delbr",self.brname)
+			except CalledProcessError: pass
+			try: ifdown("--force",self.brname)
+			except CalledProcessError: pass
+		
+		
+		if self.was_net_service_running is None:
+			# we do nothing
+			pass
+		elif self.was_net_service_running == False:
+			stop_service(self.netservice,force=True)
+			time.sleep(1)
+		else:
+			# we altered service configuration
+			stop_service(self.netservice,force=True)
+			time.sleep(1)
+			try: start_service(self.netservice,force=True)
+			except CalledProcessError,e:
+				if e.returncode == 1: pass
+				else: raise
+			time.sleep(1)
+		
+		if self.was_nm_service_running is None:
+			 # we do nothing
+			 pass
+		elif self.was_nm_service_running == False:
+			stop_service(self.nmservice,force=True)
+			time.sleep(1)
+		else:
+			# we altered service configuration
+			stop_service(self.nmservice,force=True)
+			time.sleep(1)
+			start_service(self.nmservice,force=True)
+			time.sleep(1)
+		
+		self.runtime_state_changed = False
+
+	def execute(self):
+		yield "Determining default route"
+		routes = ip.route().stdout.splitlines()
+		defaultroute = [ x for x in routes if x.startswith("default") ]
+		if not defaultroute: raise TaskFailed("Your network configuration does not have a default route")
+		
+		dev = defaultroute[0].split()[4]
+		yield "Default route assigned to device %s"%dev
+		
+		self.old_net_device = dev
+		
+		if distro in (Fedora, CentOS):
+			inconfigfile = "/".join(augtool.match("/files/etc/sysconfig/network-scripts/*/DEVICE",dev).stdout.strip().split("/")[:-1])
+			if not inconfigfile: raise TaskFailed("Device %s has not been set up in /etc/sysconfig/network-scripts"%dev)
+			pathtoconfigfile = inconfigfile[6:]
+
+		if distro in (Fedora, CentOS):
+			automatic = augtool.match("%s/ONBOOT"%inconfigfile,"yes").stdout.strip()
+		else:
+			automatic = augtool.match("/files/etc/network/interfaces/auto/*/",dev).stdout.strip()
+		if not automatic:
+			if distro in (Fedora, CentOS): raise TaskFailed("Device %s has not been set up in %s as automatic on boot"%(dev,pathtoconfigfile))
+			else: raise TaskFailed("Device %s has not been set up in /etc/network/interfaces as automatic on boot"%dev)
+			
+		if distro not in (Fedora , CentOS):
+			inconfigfile = augtool.match("/files/etc/network/interfaces/iface",dev).stdout.strip()
+			if not inconfigfile: raise TaskFailed("Device %s has not been set up in /etc/network/interfaces"%dev)
+
+		if distro in (Fedora, CentOS):
+			isstatic = augtool.match(inconfigfile + "/BOOTPROTO","none").stdout.strip()
+			if not isstatic: isstatic = augtool.match(inconfigfile + "/BOOTPROTO","static").stdout.strip()
+		else:
+			isstatic = augtool.match(inconfigfile + "/method","static").stdout.strip()
+		if not isstatic:
+			if distro in (Fedora, CentOS): raise TaskFailed("Device %s has not been set up as a static device in %s"%(dev,pathtoconfigfile))
+			else: raise TaskFailed("Device %s has not been set up as a static device in /etc/network/interfaces"%dev)
+
+		if is_service_running(self.nmservice):
+			self.was_nm_service_running = True
+			yield "Stopping NetworkManager to avoid automatic network reconfiguration"
+			disable_service(self.nmservice)
+		else:
+			self.was_nm_service_running = False
+			
+		if is_service_running(self.netservice):
+			self.was_net_service_running = True
+		else:
+			self.was_net_service_running = False
+			
+		yield "Creating Cloud bridging device and making device %s member of this bridge"%dev
+
+		if distro in (Fedora, CentOS):
+			ifcfgtext = file(pathtoconfigfile).read()
+			newf = "/etc/sysconfig/network-scripts/ifcfg-%s"%self.brname
+			#def restore():
+				#try: os.unlink(newf)
+				#except OSError,e:
+					#if errno == 2: pass
+					#raise
+				#try: file(pathtoconfigfile,"w").write(ifcfgtext)
+				#except OSError,e: raise
+
+			f = file(newf,"w") ; f.write(ifcfgtext) ; f.flush() ; f.close()
+			innewconfigfile = "/files" + newf
+
+			script = """set %s/DEVICE %s
+set %s/NAME %s
+set %s/BRIDGE_PORTS %s
+set %s/TYPE Bridge
+rm %s/HWADDR
+rm %s/UUID
+rm %s/HWADDR
+rm %s/IPADDR
+rm %s/DEFROUTE
+rm %s/NETMASK
+rm %s/GATEWAY
+rm %s/BROADCAST
+rm %s/NETWORK
+set %s/BRIDGE %s
+save"""%(innewconfigfile,self.brname,innewconfigfile,self.brname,innewconfigfile,dev,
+			innewconfigfile,innewconfigfile,innewconfigfile,innewconfigfile,
+			inconfigfile,inconfigfile,inconfigfile,inconfigfile,inconfigfile,inconfigfile,
+			inconfigfile,self.brname)
+			
+			yield "Executing the following reconfiguration script:\n%s"%script
+			
+			try:
+				returned = augtool < script
+				if "Saved 2 file" not in returned.stdout:
+					print returned.stdout + returned.stderr
+					#restore()
+					raise TaskFailed("Network reconfiguration failed.")
+				else:
+					yield "Network reconfiguration complete"
+			except CalledProcessError,e:
+				#restore()
+				print e.stdout + e.stderr
+				raise TaskFailed("Network reconfiguration failed")
+		else: # Not fedora
+			backup = file("/etc/network/interfaces").read(-1)
+			#restore = lambda: file("/etc/network/interfaces","w").write(backup)
+
+			script = """set %s %s
+set %s %s
+set %s/bridge_ports %s
+save"""%(automatic,self.brname,inconfigfile,self.brname,inconfigfile,dev)
+			
+			yield "Executing the following reconfiguration script:\n%s"%script
+			
+			try:
+				returned = augtool < script
+				if "Saved 1 file" not in returned.stdout:
+					#restore()
+					raise TaskFailed("Network reconfiguration failed.")
+				else:
+					yield "Network reconfiguration complete"
+			except CalledProcessError,e:
+				#restore()
+				print e.stdout + e.stderr
+				raise TaskFailed("Network reconfiguration failed")
+		
+		yield "We are going to restart network services now, to make the network changes take effect.  Hit ENTER when you are ready."
+		if not self.isAutoMode():
+			raw_input()
+		
+		# if we reach here, then if something goes wrong we should attempt to revert the running state
+		# if not, there is no point in reverting
+		self.runtime_state_changed = True
+		
+		yield "Enabling and restarting non-NetworkManager networking"
+		if distro is Ubuntu: ifup(self.brname,stdout=None,stderr=None)
+		stop_service(self.netservice)
+		try: enable_service(self.netservice,forcestart=True)
+		except CalledProcessError,e:
+			if e.returncode == 1: pass
+			else: raise
+		
+		yield "Verifying that the bridge is up"
+		try:
+			o = ifconfig(self.brname)
+		except CalledProcessError,e:
+			print e.stdout + e.stderr
+			raise TaskFailed("The bridge could not be set up properly")
+		
+		yield "Networking restart done"
+
+
+class SetupCgConfig(ConfigTask):
+	name = "control groups configuration"
+	
+	def done(self):
+		
+		try:
+			return "group virt" in file("/etc/cgconfig.conf","r").read(-1)
+		except IOError,e:
+			if e.errno == 2: raise TaskFailed("cgconfig has not been properly installed on this system")
+			raise
+		
+	def execute(self):
+		cgconfig = file("/etc/cgconfig.conf","r").read(-1)
+		cgconfig = cgconfig + """
+group virt {
+	cpu {
+		cpu.shares = 9216;
+	}
+}
+"""
+		file("/etc/cgconfig.conf","w").write(cgconfig)
+		
+		stop_service("cgconfig")
+		enable_service("cgconfig",forcestart=True)
+
+
+class SetupCgRules(ConfigTask):
+	name = "control group rules setup"
+	cfgline = "root:/usr/sbin/libvirtd	cpu	virt/"
+	
+	def done(self):
+		try:
+			return self.cfgline in file("/etc/cgrules.conf","r").read(-1)
+		except IOError,e:
+			if e.errno == 2: raise TaskFailed("cgrulesd has not been properly installed on this system")
+			raise
+	
+	def execute(self):
+		cgrules = file("/etc/cgrules.conf","r").read(-1)
+		cgrules = cgrules + "\n" + self.cfgline + "\n"
+		file("/etc/cgrules.conf","w").write(cgrules)
+		
+		stop_service("cgred")
+		enable_service("cgred")
+
+
+class SetupCgroupControllers(ConfigTask):
+	name = "qemu cgroup controllers setup"
+	cfgline = "cgroup_controllers = [ \"cpu\" ]"
+	filename = "/etc/libvirt/qemu.conf"
+	
+	def done(self):
+		try:
+			return self.cfgline in file(self.filename,"r").read(-1)
+		except IOError,e:
+			if e.errno == 2: raise TaskFailed("qemu has not been properly installed on this system")
+			raise
+	
+	def execute(self):
+		libvirtqemu = file(self.filename,"r").read(-1)
+		libvirtqemu = libvirtqemu + "\n" + self.cfgline + "\n"
+		file("/etc/libvirt/qemu.conf","w").write(libvirtqemu)
+
+
+class SetupSecurityDriver(ConfigTask):
+	name = "security driver setup"
+	cfgline = "security_driver = \"none\""
+	filename = "/etc/libvirt/qemu.conf"
+	
+	def done(self):
+		try:
+			return self.cfgline in file(self.filename,"r").read(-1)
+		except IOError,e:
+			if e.errno == 2: raise TaskFailed("qemu has not been properly installed on this system")
+			raise
+	
+	def execute(self):
+		libvirtqemu = file(self.filename,"r").read(-1)
+		libvirtqemu = libvirtqemu + "\n" + self.cfgline + "\n"
+		file("/etc/libvirt/qemu.conf","w").write(libvirtqemu)
+
+
+class SetupLibvirt(ConfigTask):
+	name = "libvirt setup"
+	cfgline = "export CGROUP_DAEMON='cpu:/virt'"
+	def done(self):
+		try:
+			if distro in (Fedora,CentOS): 	 libvirtfile = "/etc/sysconfig/libvirtd"
+			elif distro is Ubuntu:	 libvirtfile = "/etc/default/libvirt-bin"
+			else: raise AssertionError, "We should not reach this"
+			return self.cfgline in file(libvirtfile,"r").read(-1)
+		except IOError,e:
+			if e.errno == 2: raise TaskFailed("libvirt has not been properly installed on this system")
+			raise
+	
+	def execute(self):
+		if distro in (Fedora,CentOS): 	 libvirtfile = "/etc/sysconfig/libvirtd"
+		elif distro is Ubuntu:	 libvirtfile = "/etc/default/libvirt-bin"
+		else: raise AssertionError, "We should not reach this"
+		libvirtbin = file(libvirtfile,"r").read(-1)
+		libvirtbin = libvirtbin + "\n" + self.cfgline + "\n"
+		file(libvirtfile,"w").write(libvirtbin)
+		
+		if distro in (CentOS, Fedora):	svc = "libvirtd"
+		else:					svc = "libvirt-bin"
+		stop_service(svc)
+		enable_service(svc)
+
+class SetupLiveMigration(ConfigTask):
+	name = "live migration setup"
+	stanzas = (
+			"listen_tcp=1",
+			'tcp_port="16509"',
+			'auth_tcp="none"',
+			"listen_tls=0",
+	)
+	
+	def done(self):
+		try:
+			lines = [ s.strip() for s in file("/etc/libvirt/libvirtd.conf").readlines() ]
+			if all( [ stanza in lines for stanza in self.stanzas ] ): return True
+		except IOError,e:
+			if e.errno == 2: raise TaskFailed("libvirt has not been properly installed on this system")
+			raise
+	
+	def execute(self):
+		
+		for stanza in self.stanzas:
+			startswith = stanza.split("=")[0] + '='
+			replace_or_add_line("/etc/libvirt/libvirtd.conf",startswith,stanza)
+
+		if distro is Fedora:
+			replace_or_add_line("/etc/sysconfig/libvirtd","LIBVIRTD_ARGS=","LIBVIRTD_ARGS=-l")
+		
+		elif distro is Ubuntu:
+			if os.path.exists("/etc/init/libvirt-bin.conf"):
+				replace_line("/etc/init/libvirt-bin.conf", "exec /usr/sbin/libvirtd","exec /usr/sbin/libvirtd -d -l")
+			else:
+				replace_or_add_line("/etc/default/libvirt-bin","libvirtd_opts=","libvirtd_opts='-l -d'")
+			
+		else:
+			raise AssertionError("Unsupported distribution")
+		
+		if distro in (CentOS, Fedora):	svc = "libvirtd"
+		else:						svc = "libvirt-bin"
+		stop_service(svc)
+		enable_service(svc)
+
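SetupLiveMigration leans on a `replace_or_add_line` helper defined elsewhere in this patch. A minimal, pure-text sketch of that idempotent edit pattern (the function name and string-based signature here are illustrative assumptions, not the helper's real file-based API; written so it also runs under current Python):

```python
def replace_or_add(text, prefix, full_line):
    # Replace the first line starting with `prefix`, or append the
    # line if no such line exists -- safe to run repeatedly.
    lines = text.splitlines()
    for n, line in enumerate(lines):
        if line.startswith(prefix):
            lines[n] = full_line
            break
    else:
        lines.append(full_line)
    return "\n".join(lines)

cfg = 'listen_tls=1\ntcp_port="16509"'
cfg = replace_or_add(cfg, "listen_tls=", "listen_tls=0")
cfg = replace_or_add(cfg, "auth_tcp=", 'auth_tcp="none"')
print(cfg)
```

Running the same calls again leaves `cfg` unchanged, which is why the task can be re-executed without duplicating stanzas in libvirtd.conf.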
+
+class SetupRequiredServices(ConfigTask):
+	name = "required services setup"
+	
+	def done(self):
+		if distro is Fedora:  nfsrelated = "rpcbind nfslock"
+		elif distro is CentOS: nfsrelated = "portmap nfslock"
+		else: return True
+		return all( [ is_service_running(svc) for svc in nfsrelated.split() ] )
+		
+	def execute(self):
+
+		if distro is Fedora:  nfsrelated = "rpcbind nfslock"
+		elif distro is CentOS: nfsrelated = "portmap nfslock"
+		else: raise AssertionError("Unsupported distribution")
+
+		for svc in nfsrelated.split(): enable_service(svc)
+
+
+class SetupFirewall(ConfigTask):
+	name = "firewall setup"
+	
+	def done(self):
+		
+		if distro in (Fedora, CentOS):
+			if not os.path.exists("/etc/sysconfig/iptables"): return True
+			if ":on" not in chkconfig("--list","iptables").stdout: return True
+		else:
+			if "Status: active" not in ufw.status().stdout: return True
+			if not os.path.exists("/etc/ufw/before.rules"): return True
+		rule = "-p tcp -m tcp --dport 16509 -j ACCEPT"
+		if rule in iptablessave().stdout: return True
+		return False
+	
+	def execute(self):
+		ports = "22 1798 16509".split()
+		if distro in (Fedora , CentOS):
+			for p in ports: iptables("-I","INPUT","1","-p","tcp","--dport",p,'-j','ACCEPT')
+			o = service.iptables.save() ; print o.stdout + o.stderr
+		else:
+			for p in ports: ufw.allow(p)
+
+
+class SetupFirewall2(ConfigTask):
+	# this closes bug 4371
+	name = "additional firewall setup"
+	def __init__(self,brname):
+		ConfigTask.__init__(self)
+		self.brname = brname
+	
+	def done(self):
+		
+		if distro in (Fedora, CentOS):
+			if not os.path.exists("/etc/sysconfig/iptables"): return True
+			if ":on" not in chkconfig("--list","iptables").stdout: return True
+			rule = "FORWARD -i %s -o %s -j ACCEPT"%(self.brname,self.brname)
+			if rule in iptablessave().stdout: return True
+			return False
+		else:
+			if "Status: active" not in ufw.status().stdout: return True
+			if not os.path.exists("/etc/ufw/before.rules"): return True
+			rule = "-A ufw-before-forward -i %s -o %s -j ACCEPT"%(self.brname,self.brname)
+			if rule in file("/etc/ufw/before.rules").read(-1): return True
+			return False
+		
+	def execute(self):
+		
+		yield "Permitting traffic in the bridge interface, migration port and for VNC ports"
+		
+		if distro in (Fedora , CentOS):
+			
+			for rule in (
+				"-I FORWARD -i %s -o %s -j ACCEPT"%(self.brname,self.brname),
+				"-I INPUT 1 -p tcp --dport 5900:6100 -j ACCEPT",
+				"-I INPUT 1 -p tcp --dport 49152:49216 -j ACCEPT",
+				):
+				args = rule.split()
+				o = iptables(*args)
+			service.iptables.save(stdout=None,stderr=None)
+			
+		else:
+			
+			rule = "-A ufw-before-forward -i %s -o %s -j ACCEPT"%(self.brname,self.brname)
+			text = file("/etc/ufw/before.rules").readlines()
+			newtext = []
+			for line in text:
+				if line.startswith("COMMIT"):
+					newtext.append(rule + "\n")
+				newtext.append(line)
+			file("/etc/ufw/before.rules","w").writelines(newtext)
+			ufw.allow.proto.tcp("from","any","to","any","port","5900:6100")
+			ufw.allow.proto.tcp("from","any","to","any","port","49152:49216")
+
+			stop_service("ufw")
+			start_service("ufw")
+
+
+# Tasks according to distribution -- at some point we will split them in separate modules
+
+def config_tasks(brname):
+	if distro is CentOS:
+		config_tasks = (
+			SetupNetworking(brname),
+			SetupLibvirt(),
+			SetupRequiredServices(),
+			SetupFirewall(),
+			SetupFirewall2(brname),
+		)
+	elif distro in (Ubuntu,Fedora):
+		config_tasks = (
+			SetupNetworking(brname),
+			SetupCgConfig(),
+			SetupCgRules(),
+			SetupCgroupControllers(),
+			SetupSecurityDriver(),
+			SetupLibvirt(),
+			SetupLiveMigration(),
+			SetupRequiredServices(),
+			SetupFirewall(),
+			SetupFirewall2(brname),
+		)
+	else:
+		raise AssertionError("Unknown distribution")
+	return config_tasks
+
+
+def backup_etc(targetdir):
+	if not targetdir.endswith("/"): targetdir += "/"
+	check_call( ["mkdir","-p",targetdir] )
+	rsynccall = ["rsync","-ax","--delete"] + ["/etc/",targetdir]
+	check_call( rsynccall )
+def restore_etc(targetdir):
+	if not targetdir.endswith("/"): targetdir += "/"
+	rsynccall = ["rsync","-ax","--delete"] + [targetdir,"/etc/"]
+	check_call( rsynccall )
+def remove_backup(targetdir):
+	check_call( ["rm","-rf",targetdir] )
+
+def list_zonespods(host):
+	text = urllib2.urlopen('http://%s:8096/client/api?command=listPods'%host).read(-1)
+	dom = xml.dom.minidom.parseString(text) 
+	x = [ (zonename,podname)
+		for pod in dom.childNodes[0].childNodes  
+		for podname in [ x.childNodes[0].wholeText for x in pod.childNodes if x.tagName == "name" ] 
+		for zonename in  [ x.childNodes[0].wholeText for x in pod.childNodes if x.tagName == "zonename" ]
+		]
+	return x
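list_zonespods walks the minidom tree through raw childNodes indices, which is fragile against whitespace text nodes. A hedged sketch of the same zone/pod extraction using getElementsByTagName instead (the sample XML below is invented for illustration, not captured from a real listPods response; written so it also runs under current Python):

```python
import xml.dom.minidom

SAMPLE = """<listpodsresponse>
  <pod><id>1</id><name>pod1</name><zonename>zone1</zonename></pod>
  <pod><id>2</id><name>pod2</name><zonename>zone1</zonename></pod>
</listpodsresponse>"""

def parse_zonespods(text):
    # Pair each pod's <zonename> with its <name>, as list_zonespods does,
    # but look elements up by tag name instead of by child index.
    dom = xml.dom.minidom.parseString(text)
    pairs = []
    for pod in dom.documentElement.getElementsByTagName("pod"):
        name = pod.getElementsByTagName("name")[0].firstChild.data
        zone = pod.getElementsByTagName("zonename")[0].firstChild.data
        pairs.append((zone, name))
    return pairs

print(parse_zonespods(SAMPLE))  # [('zone1', 'pod1'), ('zone1', 'pod2')]
```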
+	
+def prompt_for_hostpods(zonespods):
+	"""Ask user to select one from those zonespods
+	Returns (zone,pod) or None if the user made the default selection."""
+	while True:
+		stderr("Type the number of the zone and pod combination this host belongs to (hit ENTER to skip this step)")
+		print "  N) ZONE, POD" 
+		print "================"
+		for n,(z,p) in enumerate(zonespods):
+			print "%3d) %s, %s"%(n,z,p)
+		print "================"
+		zoneandpod = raw_input().strip()
+		
+		if not zoneandpod:
+			# we go with default, do not touch anything, just break
+			return None
+		
+		try:
+			# if parsing fails as an int, reject the input and re-ask
+			zoneandpod = int(zoneandpod)
+			if zoneandpod >= len(zonespods) or zoneandpod < 0: raise ValueError, "%s out of bounds"%zoneandpod
+		except ValueError,e:
+			stderr(str(e))
+			continue # re-ask
+		
+		# the int represents a valid zone and pod index in the array
+		return zonespods[zoneandpod]
+	
+# this configures the agent
+
+def setup_agent_config(configfile, host, zone, pod, guid):
+	stderr("Examining Agent configuration")
+	fn = configfile
+	text = file(fn).read(-1)
+	lines = [ s.strip() for s in text.splitlines() ]
+	confopts = dict([ m.split("=",1) for m in lines if "=" in m and not m.startswith("#") ])
+	confposes = dict([ (m.split("=",1)[0],n) for n,m in enumerate(lines) if "=" in m and not m.startswith("#") ])
+	
+	if guid != None:
+		confopts['guid'] = guid
+	else:
+		if not "guid" in confopts:
+			stderr("Generating GUID for this Agent")
+			confopts['guid'] = uuidgen().stdout.strip()
+	
+	if host == None:
+		try: host = confopts["host"]
+		except KeyError: host = "localhost"
+		stderr("Please enter the host name of the management server that this agent will connect to: (just hit ENTER to go with %s)",host)
+		newhost = raw_input().strip()
+		if newhost: host = newhost
+
+	confopts["host"] = host
+	
+	stderr("Querying %s for zones and pods",host)
+	
+	try:
+		if zone == None or pod == None:
+			x = list_zonespods(confopts['host'])
+			zoneandpod = prompt_for_hostpods(x)
+			if zoneandpod:
+				confopts["zone"],confopts["pod"] = zoneandpod
+				stderr("You selected zone %s pod %s",confopts["zone"],confopts["pod"])
+			else:
+				stderr("Skipped -- using the previous zone %s pod %s",confopts["zone"],confopts["pod"])
+		else:
+			confopts["zone"] = zone
+			confopts["pod"] = pod
+	except (urllib2.URLError,urllib2.HTTPError),e:
+		stderr("Query failed: %s.  Defaulting to zone %s pod %s",str(e),confopts["zone"],confopts["pod"])
+
+	for opt,val in confopts.items():
+		line = "=".join([opt,val])
+		if opt not in confposes: lines.append(line)
+		else: lines[confposes[opt]] = line
+	
+	text = "\n".join(lines)
+	file(fn,"w").write(text)
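Both setup_agent_config and setup_consoleproxy_config use the same technique: parse the key=value lines into a value dict plus a line-position dict, update or append entries, then rewrite the lines in place so comments and ordering survive. A standalone sketch of that pattern (the function name is ours, not from this patch; written so it also runs under current Python):

```python
def upsert_properties(text, updates):
    # Rewrite key=value lines in place, preserving comments and line
    # order; keys not present yet are appended at the end.
    lines = [s.strip() for s in text.splitlines()]
    positions = dict((l.split("=", 1)[0], n) for n, l in enumerate(lines)
                     if "=" in l and not l.startswith("#"))
    for key, val in updates.items():
        line = "%s=%s" % (key, val)
        if key in positions:
            lines[positions[key]] = line
        else:
            lines.append(line)
    return "\n".join(lines)

conf = "# agent settings\nhost=localhost\nport=8250"
print(upsert_properties(conf, {"host": "mgmt01", "zone": "z1"}))
```

The comment line survives untouched, `host` is replaced where it already sits, and the new `zone` key is appended at the bottom.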
+
+def setup_consoleproxy_config(configfile, host, zone, pod):
+	stderr("Examining Console Proxy configuration")
+	fn = configfile
+	text = file(fn).read(-1)
+	lines = [ s.strip() for s in text.splitlines() ]
+	confopts = dict([ m.split("=",1) for m in lines if "=" in m and not m.startswith("#") ])
+	confposes = dict([ (m.split("=",1)[0],n) for n,m in enumerate(lines) if "=" in m and not m.startswith("#") ])
+
+	if not "guid" in confopts:
+		stderr("Generating GUID for this Console Proxy")
+		confopts['guid'] = uuidgen().stdout.strip()
+
+	if host == None:
+		try: host = confopts["host"]
+		except KeyError: host = "localhost"
+		stderr("Please enter the host name of the management server that this console-proxy will connect to: (just hit ENTER to go with %s)",host)
+		newhost = raw_input().strip()
+		if newhost: host = newhost
+	confopts["host"] = host
+
+	stderr("Querying %s for zones and pods",host)
+	
+	try:
+		if zone == None or pod == None:
+			x = list_zonespods(confopts['host'])
+			zoneandpod = prompt_for_hostpods(x)
+			if zoneandpod:
+				confopts["zone"],confopts["pod"] = zoneandpod
+				stderr("You selected zone %s pod %s",confopts["zone"],confopts["pod"])
+			else:
+				stderr("Skipped -- using the previous zone %s pod %s",confopts["zone"],confopts["pod"])
+		else:
+			confopts["zone"] = zone
+			confopts["pod"] = pod
+	except (urllib2.URLError,urllib2.HTTPError),e:
+		stderr("Query failed: %s.  Defaulting to zone %s pod %s",str(e),confopts["zone"],confopts["pod"])
+
+	for opt,val in confopts.items():
+		line = "=".join([opt,val])
+		if opt not in confposes: lines.append(line)
+		else: lines[confposes[opt]] = line
+	
+	text = "\n".join(lines)
+	file(fn,"w").write(text)
+
+# =========================== DATABASE MIGRATION SUPPORT CODE ===================
+
+# Migrator, Migratee and Evolvers -- this is the generic infrastructure.
+# To actually implement Cloud.com-specific code, search "Cloud.com-specific evolvers and context"
+
+
+class MigratorException(Exception): pass
+class NoMigrationPath(MigratorException): pass
+class NoMigrator(MigratorException): pass
+
+INITIAL_LEVEL = '-'
+
+class Migrator:
+	"""Migrator class.
+	
+	The migrator gets a list of Python objects, and discovers MigrationSteps in it. It then sorts the steps into a chain, based on the attributes from_level and to_level in each one of the steps.
+	
+	When the migrator's run(context) is called, the chain of steps is applied sequentially on the context supplied to run(), in the order of the chain of steps found at discovery time.  See the documentation for the MigrationStep class for information on how that happens.
+	"""
+	
+	def __init__(self,evolver_source):
+		self.discover_evolvers(evolver_source)
+		self.sort_evolvers()
+		
+	def discover_evolvers(self,source):
+		self.evolvers = []
+		for val in source:
+			if hasattr(val,"from_level") and hasattr(val,"to_level") and val.to_level:
+				self.evolvers.append(val)
+	
+	def sort_evolvers(self):
+		new = []
+		while self.evolvers:
+			if not new:
+				try: idx= [ i for i,s in enumerate(self.evolvers)
+					if s.from_level == INITIAL_LEVEL ][0] # initial evolver
+				except IndexError,e:
+					raise IndexError, "no initial evolver (from_level == INITIAL_LEVEL) could be found"
+			else:
+				try: idx= [ i for i,s in enumerate(self.evolvers)
+					if new[-1].to_level == s.from_level ][0]
+				except IndexError,e:
+					raise IndexError, "no evolver could be found to evolve from level %s"%new[-1].to_level
+			new.append(self.evolvers.pop(idx))
+		self.evolvers = new
+	
+	def get_evolver_chain(self):
+		return [ (s.from_level, s.to_level, s) for s in self.evolvers ]
+		
+	def get_evolver_by_starting_level(self,level):
+		try: return [ s for s in self.evolvers if s.from_level == level][0]
+		except IndexError: raise NoMigrator, "No evolver knows how to evolve the database from schema level %r"%level
+	
+	def get_evolver_by_ending_level(self,level):
+		try: return [ s for s in self.evolvers if s.to_level == level][0]
+		except IndexError: raise NoMigrator, "No evolver knows how to evolve the database to schema level %r"%level
+	
+	def run(self, context, dryrun = False, starting_level = None, ending_level = None):
+		"""Runs each one of the steps in sequence, passing the migration context to each. At the end of the process, context.commit() is called to save the changes, or context.rollback() is called if dryrun = True.
+		
+		If starting_level is not specified, then the context.get_schema_level() is used to find out at what level the context is at.  Then starting_level is set to that.
+		
+		If ending_level is not specified, then the evolvers will run till the end of the chain."""
+		
+		assert dryrun is False # NOT IMPLEMENTED; likely to be implemented by asking the context itself to remember its state
+		
+		starting_level = starting_level or context.get_schema_level() or self.evolvers[0].from_level
+		ending_level = ending_level or self.evolvers[-1].to_level
+		
+		evolution_path = self.evolvers
+		idx = evolution_path.index(self.get_evolver_by_starting_level(starting_level))
+		evolution_path = evolution_path[idx:]
+		try: idx = evolution_path.index(self.get_evolver_by_ending_level(ending_level))
+		except ValueError:
+			raise NoMigrationPath, "No evolution path from schema level %r to schema level %r" % \
+				(starting_level,ending_level)
+		evolution_path = evolution_path[:idx+1]
+		
+		logging.info("Starting migration on %s"%context)
+		
+		for ec in evolution_path:
+			assert ec.from_level == context.get_schema_level()
+			evolver = ec(context=context)
+			logging.info("%s (from level %s to level %s)",
+				evolver,
+				evolver.from_level,
+				evolver.to_level)
+			#try:
+			evolver.run()
+			#except:
+				#context.rollback()
+				#raise
+			context.set_schema_level(evolver.to_level)
+			#context.commit()
+			logging.info("%s is now at level %s",context,context.get_schema_level())
+		
+		#if dryrun: # implement me with backup and restore
+			#logging.info("Rolling back changes on %s",context)
+			#context.rollback()
+		#else:
+			#logging.info("Committing changes on %s",context)
+			#context.commit()
+		
+		logging.info("Migration finished")
+		
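Migrator.sort_evolvers above is a simple greedy chain sort: start from INITIAL_LEVEL, then repeatedly pick the step whose from_level equals the previous step's to_level. The same idea on bare (from_level, to_level) tuples (a sketch, written so it also runs under current Python):

```python
INITIAL_LEVEL = "-"

def sort_chain(steps):
    # Order (from_level, to_level) pairs into one linear chain, raising
    # if the chain has a gap -- mirroring Migrator.sort_evolvers.
    remaining = list(steps)
    chain = []
    while remaining:
        wanted = INITIAL_LEVEL if not chain else chain[-1][1]
        matches = [s for s in remaining if s[0] == wanted]
        if not matches:
            raise ValueError("no step evolves from level %r" % wanted)
        chain.append(matches[0])
        remaining.remove(matches[0])
    return chain

print(sort_chain([("2", "3"), ("-", "1"), ("1", "2")]))
# [('-', '1'), ('1', '2'), ('2', '3')]
```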
+
+class MigrationStep:
+	"""Base MigrationStep class, aka evolver.
+	
+	You develop your own steps, and then pass a list of those steps to the
+	Migrator instance that will run them in order.
+	
+	When the migrator runs, it will take the list of steps you gave it,
+	and, for each step:
+	
+	a) instantiate it, passing the context you gave to the migrator
+	   into the step's __init__().
+	b) run() the method in the migration step.
+	
+	As you can see, the default MigrationStep constructor makes the passed
+	context available as self.context in the methods of your step.
+	
+	Each step has two member vars that determine in which order they
+	are run, and if they need to run:
+	
+	- from_level = the schema level that the database should be at,
+		       before running the evolver
+		       The value INITIAL_LEVEL ('-') has special meaning
+		       here: it marks the first evolver to run when the
+		       database does not have a schema level yet.
+	- to_level =   the schema level number that the database will be at
+		       after the evolver has run
+	"""
+	
+	# Implement these attributes in your steps
+	from_level = None
+	to_level = None
+	
+	def __init__(self,context):
+		self.context = context
+		
+	def run(self):
+		raise NotImplementedError
+
+
+class MigrationContext:
+	def __init__(self): pass
+	def commit(self):raise NotImplementedError
+	def rollback(self):raise NotImplementedError
+	def get_schema_level(self):raise NotImplementedError
+	def set_schema_level(self,l):raise NotImplementedError
+
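Putting the pieces together, a toy end-to-end run of the infrastructure above, with an in-memory context and two invented steps (the names are illustrative; the real evolvers are the Cloud.com-specific ones referenced earlier; written so it also runs under current Python):

```python
INITIAL_LEVEL = "-"

class DictContext:
    """Keeps the 'database' and its schema level in a plain dict."""
    def __init__(self):
        self.data = {"schema_level": INITIAL_LEVEL}
    def get_schema_level(self):
        return self.data["schema_level"]
    def set_schema_level(self, level):
        self.data["schema_level"] = level

class AddUsersTable:
    from_level, to_level = INITIAL_LEVEL, "1"
    def __init__(self, context): self.context = context
    def run(self): self.context.data["users"] = []

class AddPodsTable:
    from_level, to_level = "1", "2"
    def __init__(self, context): self.context = context
    def run(self): self.context.data["pods"] = []

# The core of Migrator.run: check the level, apply the step, bump the level.
ctx = DictContext()
for step_cls in (AddUsersTable, AddPodsTable):
    assert step_cls.from_level == ctx.get_schema_level()
    step_cls(ctx).run()
    ctx.set_schema_level(step_cls.to_level)

print(ctx.get_schema_level(), sorted(ctx.data))
# 2 ['pods', 'schema_level', 'users']
```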
+
diff --git a/python/wscript_build b/python/wscript_build
index 4b78e04b404..d3a80e70d26 100644
--- a/python/wscript_build
+++ b/python/wscript_build
@@ -1,3 +1,2 @@
-if bld.env.DISTRO not in ['Windows','Mac']:
-	obj = bld(features = 'py',name='pythonmodules')
-	obj.find_sources_in_dirs('lib', exts=['.py'])
+obj = bld(features = 'py',name='pythonmodules')
+obj.find_sources_in_dirs('lib', exts=['.py'])
diff --git a/scripts/network/domr/call_firewall.sh b/scripts/network/domr/call_firewall.sh
index 8878ec833ba..287efa21f5d 100755
--- a/scripts/network/domr/call_firewall.sh
+++ b/scripts/network/domr/call_firewall.sh
@@ -85,7 +85,7 @@ do
   esac
 done
 
-CERT="$(dirname $0)/id_rsa"
+cert="/root/.ssh/id_rsa.cloud"
 
 # Check if DomR is up and running. If not, exit with error code 1.
 check_gw "$domRIp"
@@ -114,7 +114,7 @@ then
   exit 2
 fi
 
-ssh -p 3922 -q -o StrictHostKeyChecking=no -i $CERT root@$domRIp "/root/firewall.sh $*"
+ssh -p 3922 -q -o StrictHostKeyChecking=no -i $cert root@$domRIp "/root/firewall.sh $*"
 exit $?
 
 
diff --git a/scripts/network/domr/call_loadbalancer.sh b/scripts/network/domr/call_loadbalancer.sh
index fd9a8c02700..fed3abe3d80 100755
--- a/scripts/network/domr/call_loadbalancer.sh
+++ b/scripts/network/domr/call_loadbalancer.sh
@@ -26,7 +26,7 @@ copy_haproxy() {
   local domRIp=$1
   local cfg=$2
 
-  scp -P 3922 -q -o StrictHostKeyChecking=no -i $CERT $cfg root@$domRIp:/etc/haproxy/haproxy.cfg.new
+  scp -P 3922 -q -o StrictHostKeyChecking=no -i $cert $cfg root@$domRIp:/etc/haproxy/haproxy.cfg.new
   return $?
 }
 
@@ -56,7 +56,7 @@ do
   esac
 done
 
-CERT="$(dirname $0)/id_rsa"
+cert="/root/.ssh/id_rsa.cloud"
 
 if [ "$iflag$fflag" != "11" ]
 then
@@ -79,5 +79,5 @@ then
   exit 1
 fi
 	
-ssh -p 3922 -q -o StrictHostKeyChecking=no -i $CERT root@$domRIp "/root/loadbalancer.sh $*"
+ssh -p 3922 -q -o StrictHostKeyChecking=no -i $cert root@$domRIp "/root/loadbalancer.sh $*"
 exit $?	
diff --git a/scripts/network/domr/ipassoc.sh b/scripts/network/domr/ipassoc.sh
index 91b0f1d3698..14d932c5308 100755
--- a/scripts/network/domr/ipassoc.sh
+++ b/scripts/network/domr/ipassoc.sh
@@ -57,7 +57,7 @@ add_nat_entry() {
    ssh -p 3922 -o StrictHostKeyChecking=no -i $cert root@$dRIp "\
       ip addr add dev $correctVif $pubIp
       iptables -t nat -I POSTROUTING   -j SNAT -o $correctVif --to-source $pubIp ;
-      /sbin/arping -c 3 -I $correctVif -A -U -s $pubIp $pubIp;
+      arping -c 3 -I $correctVif -A -U -s $pubIp $pubIp;
      "
   if [ $? -gt 0  -a $? -ne 2 ]
   then
@@ -91,7 +91,7 @@ add_an_ip () {
    ssh -p 3922 -o StrictHostKeyChecking=no -i $cert root@$dRIp "\
    	  ifconfig $correctVif up;
       ip addr add dev $correctVif $pubIp ;
-      /sbin/arping -c 3 -I $correctVif -A -U -s $pubIp $pubIp;
+      arping -c 3 -I $correctVif -A -U -s $pubIp $pubIp;
      "
    return $?
 }
diff --git a/scripts/vm/hypervisor/xenserver/networkUsage.sh b/scripts/network/domr/networkUsage.sh
similarity index 98%
rename from scripts/vm/hypervisor/xenserver/networkUsage.sh
rename to scripts/network/domr/networkUsage.sh
index 129aa76f05e..66b2e6a44ef 100755
--- a/scripts/vm/hypervisor/xenserver/networkUsage.sh
+++ b/scripts/network/domr/networkUsage.sh
@@ -18,7 +18,7 @@ check_gw() {
   return $?;
 }
 
-cert="$(dirname $0)/id_rsa"
+cert="/root/.ssh/id_rsa.cloud"
 
 create_usage_rules () {
   local dRIp=$1
diff --git a/scripts/network/domr/vm_data.sh b/scripts/network/domr/vm_data.sh
index 78fc7f92c50..32b27bbdad9 100755
--- a/scripts/network/domr/vm_data.sh
+++ b/scripts/network/domr/vm_data.sh
@@ -10,7 +10,7 @@ usage() {
 }
 
 set -x
-CERT="/root/.ssh/id_rsa.cloud"
+cert="/root/.ssh/id_rsa.cloud"
 PORT=3922
 
 create_htaccess() {
@@ -24,7 +24,7 @@ create_htaccess() {
   entry="RewriteRule ^$file$  ../$folder/%{REMOTE_ADDR}/$file [L,NC,QSA]"
   htaccessFolder="/var/www/html/latest"
   htaccessFile=$htaccessFolder/.htaccess
-  ssh -p $PORT -o StrictHostKeyChecking=no -i $CERT root@$domrIp "mkdir -p $htaccessFolder; touch $htaccessFile; grep -F \"$entry\" $htaccessFile; if [ \$? -gt 0 ]; then echo -e \"$entry\" >> $htaccessFile; fi" >/dev/null
+  ssh -p $PORT -o StrictHostKeyChecking=no -i $cert root@$domrIp "mkdir -p $htaccessFolder; touch $htaccessFile; grep -F \"$entry\" $htaccessFile; if [ \$? -gt 0 ]; then echo -e \"$entry\" >> $htaccessFile; fi" >/dev/null
   result=$?
   
   if [ $result -eq 0 ]
@@ -32,7 +32,7 @@ create_htaccess() {
     entry="Options -Indexes\\nOrder Deny,Allow\\nDeny from all\\nAllow from $vmIp"
     htaccessFolder="/var/www/html/$folder/$vmIp"
     htaccessFile=$htaccessFolder/.htaccess
-    ssh -p $PORT -o StrictHostKeyChecking=no -i $CERT root@$domrIp "mkdir -p $htaccessFolder; echo -e \"$entry\" > $htaccessFile" >/dev/null
+    ssh -p $PORT -o StrictHostKeyChecking=no -i $cert root@$domrIp "mkdir -p $htaccessFolder; echo -e \"$entry\" > $htaccessFile" >/dev/null
     result=$?
   fi
   
@@ -47,7 +47,7 @@ copy_vm_data_file() {
   local dataFile=$5        
   
   chmod +r $dataFile
-  scp -P $PORT -o StrictHostKeyChecking=no -i $CERT $dataFile root@$domrIp:/var/www/html/$folder/$vmIp/$file >/dev/null
+  scp -P $PORT -o StrictHostKeyChecking=no -i $cert $dataFile root@$domrIp:/var/www/html/$folder/$vmIp/$file >/dev/null
   return $?
 }
 
@@ -58,7 +58,7 @@ delete_vm_data_file() {
   local file=$4
   
   vmDataFilePath="/var/www/html/$folder/$vmIp/$file"
-  ssh -p $PORT -o StrictHostKeyChecking=no -i $CERT root@$domrIp "if [ -f $vmDataFilePath ]; then rm -rf $vmDataFilePath; fi" >/dev/null
+  ssh -p $PORT -o StrictHostKeyChecking=no -i $cert root@$domrIp "if [ -f $vmDataFilePath ]; then rm -rf $vmDataFilePath; fi" >/dev/null
   return $?
 }
 
diff --git a/scripts/storage/qcow2/createtmplt.sh b/scripts/storage/qcow2/createtmplt.sh
index ef75c6270e9..f9c72b7219b 100755
--- a/scripts/storage/qcow2/createtmplt.sh
+++ b/scripts/storage/qcow2/createtmplt.sh
@@ -78,6 +78,7 @@ create_from_file() {
   then
     rm -f $tmpltimg
   fi
+  chmod a+r /$tmpltfs/$tmpltname
 }
 
 create_from_snapshot() {
@@ -92,6 +93,8 @@ create_from_snapshot() {
      printf "Failed to create template /$tmplfs/$tmpltname from snapshot $snapshotName on disk $tmpltImg "
      exit 2
   fi
+
+  chmod a+r /$tmpltfs/$tmpltname
 }
 
 tflag=
@@ -165,6 +168,7 @@ else
 fi
 
 touch /$tmpltfs/template.properties
+chmod a+r /$tmpltfs/template.properties
 echo -n "" > /$tmpltfs/template.properties
 
 today=$(date '+%m_%d_%Y')
diff --git a/scripts/storage/qcow2/managesnapshot.sh b/scripts/storage/qcow2/managesnapshot.sh
index d9b339267a0..80db8febec1 100755
--- a/scripts/storage/qcow2/managesnapshot.sh
+++ b/scripts/storage/qcow2/managesnapshot.sh
@@ -43,8 +43,24 @@ create_snapshot() {
 destroy_snapshot() {
   local disk=$1
   local snapshotname=$2
+  local deleteDir=$3
   local failed=0
 
+  if [ -d $disk ]
+  then
+     if [ -f $disk/$snapshotname ]
+     then
+	rm -rf $disk/$snapshotname >& /dev/null
+     fi
+
+     if [ "$deleteDir" == "1" ]
+     then
+	rm -rf $disk >& /dev/null
+     fi
+
+     return $failed
+  fi
+
   if [ ! -f $disk ]
   then
      failed=1
@@ -119,8 +135,9 @@ nflag=
 pathval=
 snapshot=
 tmplName=
+deleteDir=
 
-while getopts 'c:d:r:n:b:p:t:' OPTION
+while getopts 'c:d:r:n:b:p:t:f' OPTION
 do
   case $OPTION in
   c)	cflag=1
@@ -142,6 +159,8 @@ do
         ;;
   t)    tmplName="$OPTARG"
 	;;
+  f)    deleteDir=1
+	;;
   ?)	usage
 	;;
   esac
@@ -154,7 +173,7 @@ then
   exit $?
 elif [ "$dflag" == "1" ]
 then
-  destroy_snapshot $pathval $snapshot
+  destroy_snapshot $pathval $snapshot $deleteDir
   exit $?
 elif [ "$bflag" == "1" ]
 then
diff --git a/scripts/storage/secondary/createtmplt.sh b/scripts/storage/secondary/createtmplt.sh
index 9dfd5be5cd1..a2d296332df 100755
--- a/scripts/storage/secondary/createtmplt.sh
+++ b/scripts/storage/secondary/createtmplt.sh
@@ -3,7 +3,7 @@
 # createtmplt.sh -- install a template
 
 usage() {
-  printf "Usage: %s: -t  -n  -f  -s  -c  -d  -h  [-u]\n" $(basename $0) >&2
+  printf "Usage: %s: -t  -n  -f  -c  -d  -h  [-u]\n" $(basename $0) >&2
 }
 
 
@@ -67,7 +67,7 @@ uncompress() {
     return 1 
   fi
  
-  rm $1
+  rm -f $1
   printf $tmpfile
 
   return 0
@@ -77,16 +77,10 @@ create_from_file() {
   local tmpltfs=$1
   local tmpltimg=$2
   local tmpltname=$3
-  local volsize=$4
-  local cleanup=$5
 
   #copy the file to the disk
   mv $tmpltimg /$tmpltfs/$tmpltname
 
-#  if [ "$cleanup" == "true" ]
-#  then
-#    rm -f $tmpltimg
-#  fi
 }
 
 tflag=
@@ -112,7 +106,6 @@ do
 		tmpltimg="$OPTARG"
 		;;
   s)	sflag=1
-		volsize="$OPTARG"
 		;;
   c)	cflag=1
 		cksum="$OPTARG"
@@ -161,33 +154,18 @@ rollback_if_needed $tmpltfs $? "failed to uncompress $tmpltimg\n"
 tmpltimg2=$(untar $tmpltimg2)
 rollback_if_needed $tmpltfs $? "tar archives not supported\n"
 
-if [ ${tmpltname%.vhd} = ${tmpltname} ]
+if [ ${tmpltname%.vhd} != ${tmpltname} ]
 then
-  vhd-util check -n ${tmpltimg2} > /dev/null
-  rollback_if_needed $tmpltfs $? "vhd tool check $tmpltimg2 failed\n"
-fi
-
-# need the 'G' suffix on volume size
-if [ ${volsize:(-1)} != G ]
-then
-  volsize=${volsize}G
-fi
-
-#determine source file size -- it needs to be less than or equal to volsize
-imgsize=$(ls -lh $tmpltimg2| awk -F" " '{print $5}')
-if [ ${imgsize:(-1)} == G ] 
-then
-  imgsize=${imgsize%G} #strip out the G 
-  imgsize=${imgsize%.*} #...and any decimal part
-  let imgsize=imgsize+1 # add 1 to compensate for decimal part
-  volsizetmp=${volsize%G}
-  if [ $volsizetmp -lt $imgsize ]
-  then
-    volsize=${imgsize}G  
+  if which vhd-util >/dev/null 2>&1
+  then
+    vhd-util check -n ${tmpltimg2} > /dev/null
+    rollback_if_needed $tmpltfs $? "vhd tool check $tmpltimg2 failed\n"
   fi
 fi
 
-create_from_file $tmpltfs $tmpltimg2 $tmpltname $volsize $cleanup
+imgsize=$(ls -l $tmpltimg2 | awk '{print $5}')
+
+create_from_file $tmpltfs $tmpltimg2 $tmpltname
 
 touch /$tmpltfs/template.properties
 rollback_if_needed $tmpltfs $? "Failed to create template.properties file"
@@ -195,13 +173,10 @@ echo -n "" > /$tmpltfs/template.properties
 
 today=$(date '+%m_%d_%Y')
 echo "filename=$tmpltname" > /$tmpltfs/template.properties
-echo "snapshot.name=$today" >> /$tmpltfs/template.properties
 echo "description=$descr" >> /$tmpltfs/template.properties
-echo "name=$tmpltname" >> /$tmpltfs/template.properties
 echo "checksum=$cksum" >> /$tmpltfs/template.properties
 echo "hvm=$hvm" >> /$tmpltfs/template.properties
-echo "volume.size=$volsize" >> /$tmpltfs/template.properties
-
+echo "size=$imgsize" >> /$tmpltfs/template.properties
 
 if [ "$cleanup" == "true" ]
 then
diff --git a/scripts/storage/secondary/installrtng.sh b/scripts/storage/secondary/installrtng.sh
index e2e423d0745..7c9e7d0d257 100755
--- a/scripts/storage/secondary/installrtng.sh
+++ b/scripts/storage/secondary/installrtng.sh
@@ -116,6 +116,9 @@ then
   echo "Failed to install routing template $tmpltimg to $destdir"
 fi
 
+tmpltfile=$destdir/$tmpfile
+tmpltsize=$(ls -l $tmpltfile | awk '{print $5}')
+
 echo "vhd=true" >> $destdir/template.properties
 echo "id=1" >> $destdir/template.properties
 echo "public=true" >> $destdir/template.properties
@@ -123,6 +126,6 @@ echo "vhd.filename=$localfile" >> $destdir/template.properties
 echo "uniquename=routing" >> $destdir/template.properties
 echo "vhd.virtualsize=2147483648" >> $destdir/template.properties
 echo "virtualsize=2147483648" >> $destdir/template.properties
-echo "vhd.size=2101252608" >> $destdir/template.properties
+echo "vhd.size=$tmpltsize" >> $destdir/template.properties
 
 echo "Successfully installed routing template $tmpltimg to $destdir"
diff --git a/scripts/vm/hypervisor/xenserver/prepsystemvm.sh b/scripts/vm/hypervisor/xenserver/prepsystemvm.sh
deleted file mode 100755
index 4a5b7f0e695..00000000000
--- a/scripts/vm/hypervisor/xenserver/prepsystemvm.sh
+++ /dev/null
@@ -1,232 +0,0 @@
-#/bin/bash
-# $Id: prepsystemvm.sh 10800 2010-07-16 13:48:39Z edison $ $HeadURL: svn://svn.lab.vmops.com/repos/vmdev/java/scripts/vm/hypervisor/xenserver/prepsystemvm.sh $
-
-#set -x
-
-mntpath() {
-  local vmname=$1
-  echo "/mnt/$vmname"
-}
-
-mount_local() {
-   local vmname=$1
-   local disk=$2
-   local path=$(mntpath $vmname)
-
-   mkdir -p ${path}
-   mount $disk ${path} 
-
-   return $?
-}
-
-umount_local() {
-   local vmname=$1
-   local path=$(mntpath $vmname)
-
-   umount  $path
-   local ret=$?
-   
-   rm -rf $path
-   return $ret
-}
-
-
-patch_scripts() {
-   local vmname=$1
-   local patchfile=$2
-   local path=$(mntpath $vmname)
-
-   local oldmd5=
-   local md5file=${path}/md5sum
-   [ -f ${md5file} ] && oldmd5=$(cat ${md5file})
-   local newmd5=$(md5sum $patchfile | awk '{print $1}')
-
-   if [ "$oldmd5" != "$newmd5" ]
-   then
-     tar xzf $patchfile -C ${path}
-     echo ${newmd5} > ${md5file}
-   fi
-
-   return 0
-}
-
-#
-# To use existing console proxy .zip-based package file
-#
-patch_console_proxy() {
-   local vmname=$1
-   local patchfile=$2
-   local path=$(mntpath $vmname)
-   local oldmd5=
-   local md5file=${path}/usr/local/cloud/systemvm/md5sum
-
-   [ -f ${md5file} ] && oldmd5=$(cat ${md5file})
-   local newmd5=$(md5sum $patchfile | awk '{print $1}')
-
-   if [ "$oldmd5" != "$newmd5" ]
-   then
-     echo "All" | unzip $patchfile -d ${path}/usr/local/cloud/systemvm >/dev/null 2>&1
-     chmod 555 ${path}/usr/local/cloud/systemvm/run.sh
-     find ${path}/usr/local/cloud/systemvm/ -name \*.sh | xargs chmod 555
-     echo ${newmd5} > ${md5file}
-   fi
-
-   return 0
-}
-
-consoleproxy_svcs() {
-   local vmname=$1
-   local path=$(mntpath $vmname)
-
-   chroot ${path} /sbin/chkconfig cloud on
-   chroot ${path} /sbin/chkconfig postinit on
-   chroot ${path} /sbin/chkconfig domr_webserver off
-   chroot ${path} /sbin/chkconfig haproxy off ;
-   chroot ${path} /sbin/chkconfig dnsmasq off
-   chroot ${path} /sbin/chkconfig sshd on
-   chroot ${path} /sbin/chkconfig httpd off
-   chroot ${path} /sbin/chkconfig nfs off
-   chroot ${path} /sbin/chkconfig nfslock off
-   chroot ${path} /sbin/chkconfig rpcbind off
-   chroot ${path} /sbin/chkconfig rpcidmap off
-
-   cp ${path}/etc/sysconfig/iptables-consoleproxy ${path}/etc/sysconfig/iptables
-}
-
-secstorage_svcs() {
-   local vmname=$1
-   local path=$(mntpath $vmname)
-
-   chroot ${path} /sbin/chkconfig cloud on
-   chroot ${path} /sbin/chkconfig postinit on
-   chroot ${path} /sbin/chkconfig domr_webserver off
-   chroot ${path} /sbin/chkconfig haproxy off ;
-   chroot ${path} /sbin/chkconfig dnsmasq off
-   chroot ${path} /sbin/chkconfig sshd on
-   chroot ${path} /sbin/chkconfig httpd off
-    
-
-   cp ${path}/etc/sysconfig/iptables-secstorage ${path}/etc/sysconfig/iptables
-   mkdir -p ${path}/var/log/cloud
-}
-
-routing_svcs() {
-   local vmname=$1
-   local path=$(mntpath $vmname)
-
-   chroot ${path} /sbin/chkconfig cloud off
-   chroot ${path} /sbin/chkconfig domr_webserver on ; 
-   chroot ${path} /sbin/chkconfig haproxy on ; 
-   chroot ${path} /sbin/chkconfig dnsmasq on
-   chroot ${path} /sbin/chkconfig sshd on
-   chroot ${path} /sbin/chkconfig nfs off
-   chroot ${path} /sbin/chkconfig nfslock off
-   chroot ${path} /sbin/chkconfig rpcbind off
-   chroot ${path} /sbin/chkconfig rpcidmap off
-   cp ${path}/etc/sysconfig/iptables-domr ${path}/etc/sysconfig/iptables
-}
-
-lflag=
-dflag=
-
-while getopts 't:l:d:' OPTION
-do
-  case $OPTION in
-  l)	lflag=1
-	vmname="$OPTARG"
-        ;;
-  t)    tflag=1
-        vmtype="$OPTARG"
-        ;;
-  d)    dflag=1
-        rootdisk="$OPTARG"
-        ;;
-  *)    ;;
-  esac
-done
-
-if [ "$lflag$tflag$dflag" != "111" ]
-then
-  printf "Error: Not enough parameter\n" >&2
-  exit 1
-fi
-
-
-mount_local $vmname $rootdisk
-
-if [ $? -gt 0 ]
-then
-  printf "Failed to mount disk $rootdisk for $vmname\n" >&2
-  exit 1
-fi
-
-if [ -f $(dirname $0)/patch.tgz ]
-then
-  patch_scripts $vmname $(dirname $0)/patch.tgz
-  if [ $? -gt 0 ]
-  then
-    printf "Failed to apply patch patch.zip to $vmname\n" >&2
-    umount_local $vmname
-    exit 4
-  fi
-fi
-
-cpfile=$(dirname $0)/systemvm-premium.zip
-if [ "$vmtype" == "consoleproxy" ] || [ "$vmtype" == "secstorage" ]  && [ -f $cpfile ]
-then
-  patch_console_proxy $vmname $cpfile
-  if [ $? -gt 0 ]
-  then
-    printf "Failed to apply patch $patch $cpfile to $vmname\n" >&2
-    umount_local $vmname
-    exit 5
-  fi
-fi
-
-# domr is 64 bit, need to copy 32bit chkconfig to domr
-# this is workaroud, will use 32 bit domr
-dompath=$(mntpath $vmname)
-cp /sbin/chkconfig $dompath/sbin
-# copy public key to system vm
-cp $(dirname $0)/id_rsa.pub  $dompath/root/.ssh/authorized_keys
-#empty known hosts
-echo "" > $dompath/root/.ssh/known_hosts
-
-if [ "$vmtype" == "router" ]
-then
-  routing_svcs $vmname
-  if [ $? -gt 0 ]
-  then
-    printf "Failed to execute routing_svcs\n" >&2
-    umount_local $vmname
-    exit 6
-  fi
-fi
-
-
-if [ "$vmtype" == "consoleproxy" ]
-then
-  consoleproxy_svcs $vmname
-  if [ $? -gt 0 ]
-  then
-    printf "Failed to execute consoleproxy_svcs\n" >&2
-    umount_local $vmname
-    exit 7
-  fi
-fi
-
-if [ "$vmtype" == "secstorage" ]
-then
-  secstorage_svcs $vmname
-  if [ $? -gt 0 ]
-  then
-    printf "Failed to execute secstorage_svcs\n" >&2
-    umount_local $vmname
-    exit 8
-  fi
-fi
-
-
-umount_local $vmname
-
-exit $?
diff --git a/scripts/vm/hypervisor/xenserver/xenserver56/patch b/scripts/vm/hypervisor/xenserver/xenserver56/patch
index c7b20d84600..29cbcd95d05 100644
--- a/scripts/vm/hypervisor/xenserver/xenserver56/patch
+++ b/scripts/vm/hypervisor/xenserver/xenserver56/patch
@@ -21,13 +21,11 @@ vmopsSnapshot=..,0755,/etc/xapi.d/plugins
 xs_cleanup.sh=..,0755,/opt/xensource/bin
 systemvm.iso=../../../../../vms,0644,/opt/xensource/packages/iso
 hostvmstats.py=..,0755,/opt/xensource/sm
-id_rsa.cloud=..,0600,/opt/xensource/bin
 id_rsa.cloud=..,0600,/root/.ssh
 network_info.sh=..,0755,/opt/xensource/bin
 prepsystemvm.sh=..,0755,/opt/xensource/bin
 setupxenserver.sh=..,0755,/opt/xensource/bin
 make_migratable.sh=..,0755,/opt/xensource/bin
-networkUsage.sh=..,0755,/opt/xensource/bin
 setup_iscsi.sh=..,0755,/opt/xensource/bin
 version=..,0755,/opt/xensource/bin
 pingtest.sh=../../..,0755,/opt/xensource/bin
@@ -35,5 +33,6 @@ dhcp_entry.sh=../../../../network/domr/,0755,/opt/xensource/bin
 ipassoc.sh=../../../../network/domr/,0755,/opt/xensource/bin
 vm_data.sh=../../../../network/domr/,0755,/opt/xensource/bin
 save_password_to_domr.sh=../../../../network/domr/,0755,/opt/xensource/bin
+networkUsage.sh=../../../../network/domr/,0755,/opt/xensource/bin
 call_firewall.sh=../../../../network/domr/,0755,/opt/xensource/bin
 call_loadbalancer.sh=../../../../network/domr/,0755,/opt/xensource/bin
diff --git a/server/src/com/cloud/agent/manager/AgentManagerImpl.java b/server/src/com/cloud/agent/manager/AgentManagerImpl.java
index a6241c6fe3c..b7834abf292 100755
--- a/server/src/com/cloud/agent/manager/AgentManagerImpl.java
+++ b/server/src/com/cloud/agent/manager/AgentManagerImpl.java
@@ -551,6 +551,7 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
             host.setGuid(null);
             host.setClusterId(null);
             _hostDao.update(host.getId(), host);
+            
             _hostDao.remove(hostId);
             
             //delete the associated primary storage from db
@@ -614,6 +615,8 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
             templateHostSC.addAnd("hostId", SearchCriteria.Op.EQ, secStorageHost.getId());
             _vmTemplateHostDao.remove(templateHostSC);
             
+            /*Disconnected agent needs special handling here*/
+    		secStorageHost.setGuid(null);
     		txn.commit();
     		return true;
     	}catch (Throwable t) {
@@ -1142,11 +1145,16 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
             }
         }
     }
+    
+    @Override
+    public Answer easySend(final Long hostId, final Command cmd) {
+    	return easySend(hostId, cmd, _wait);
+    }
 
     @Override
-    public Answer easySend(final Long hostId, final Command cmd) {
+    public Answer easySend(final Long hostId, final Command cmd, int timeout) {
         try {
-            final Answer answer = send(hostId, cmd, _wait);
+            final Answer answer = send(hostId, cmd, timeout);
             if (answer == null) {
                 s_logger.warn("send returns null answer");
                 return null;
@@ -1764,6 +1772,7 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
 
     }
     
+    @Override
     public Host findHost(VmCharacteristics vm, Set avoids) {
         return null;
     }
diff --git a/server/src/com/cloud/agent/manager/allocator/impl/UserConcentratedAllocator.java b/server/src/com/cloud/agent/manager/allocator/impl/UserConcentratedAllocator.java
index 1ee5183138e..aa89678d2dc 100755
--- a/server/src/com/cloud/agent/manager/allocator/impl/UserConcentratedAllocator.java
+++ b/server/src/com/cloud/agent/manager/allocator/impl/UserConcentratedAllocator.java
@@ -51,7 +51,6 @@ import com.cloud.utils.DateUtil;
 import com.cloud.utils.NumbersUtil;
 import com.cloud.utils.Pair;
 import com.cloud.utils.component.Inject;
-import com.cloud.utils.db.GlobalLock;
 import com.cloud.utils.db.SearchCriteria;
 import com.cloud.vm.State;
 import com.cloud.vm.UserVmVO;
@@ -78,7 +77,6 @@ public class UserConcentratedAllocator implements PodAllocator {
     @Inject VMInstanceDao _vmInstanceDao;
     
     Random _rand = new Random(System.currentTimeMillis());
-    private final GlobalLock m_capacityCheckLock = GlobalLock.getInternLock("capacity.check");
     private int _hoursToSkipStoppedVMs = 24;
     
     private int _secStorageVmRamSize = 1024;
@@ -145,7 +143,7 @@ public class UserConcentratedAllocator implements PodAllocator {
         }
         
         if (availablePods.size() == 0) {
-            s_logger.debug("There are no pods with enough memory/CPU capacity in zone" + zone.getName());
+            s_logger.debug("There are no pods with enough memory/CPU capacity in zone " + zone.getName());
             return null;
         } else {
         	// Return a random pod
@@ -158,30 +156,14 @@ public class UserConcentratedAllocator implements PodAllocator {
 
     private boolean dataCenterAndPodHasEnoughCapacity(long dataCenterId, long podId, long capacityNeeded, short capacityType, long[] hostCandidate) {
         List capacities = null;
-        if (m_capacityCheckLock.lock(120)) { // 2 minutes
-            try {
-                SearchCriteria sc = _capacityDao.createSearchCriteria();
-                sc.addAnd("capacityType", SearchCriteria.Op.EQ, capacityType);
-                sc.addAnd("dataCenterId", SearchCriteria.Op.EQ, dataCenterId);
-                sc.addAnd("podId", SearchCriteria.Op.EQ, podId);
-                capacities = _capacityDao.search(sc, null);
-            } finally {
-                m_capacityCheckLock.unlock();
-            }
-        } else {
-            s_logger.error("Unable to acquire synchronization lock for pod allocation");
-            
-            // we now try to enforce reservation-style allocation, waiting time has been adjusted
-            // to 2 minutes
-            return false;
-
-/*
-            // If we can't lock the table, just return that there is enough capacity and allow instance creation to fail on the agent
-            // if there is not enough capacity.  All that does is skip the optimization of checking for capacity before sending the
-            // command to the agent.
-            return true;
-*/
-        }
+        
+        SearchCriteria sc = _capacityDao.createSearchCriteria();
+        sc.addAnd("capacityType", SearchCriteria.Op.EQ, capacityType);
+        sc.addAnd("dataCenterId", SearchCriteria.Op.EQ, dataCenterId);
+        sc.addAnd("podId", SearchCriteria.Op.EQ, podId);
+        s_logger.trace("Executing search");
+        capacities = _capacityDao.search(sc, null);
+        s_logger.trace("Done with a search");
 
         boolean enoughCapacity = false;
         if (capacities != null) {
diff --git a/server/src/com/cloud/alert/AlertManagerImpl.java b/server/src/com/cloud/alert/AlertManagerImpl.java
index 033b1ba3282..9250bb86557 100644
--- a/server/src/com/cloud/alert/AlertManagerImpl.java
+++ b/server/src/com/cloud/alert/AlertManagerImpl.java
@@ -65,8 +65,9 @@ import com.cloud.storage.dao.VolumeDao;
 import com.cloud.utils.NumbersUtil;
 import com.cloud.utils.Pair;
 import com.cloud.utils.component.ComponentLocator;
-import com.cloud.utils.db.GlobalLock;
+import com.cloud.utils.db.DB;
 import com.cloud.utils.db.SearchCriteria;
+import com.cloud.utils.db.Transaction;
 import com.cloud.vm.ConsoleProxyVO;
 import com.cloud.vm.DomainRouterVO;
 import com.cloud.vm.SecondaryStorageVmVO;
@@ -118,8 +119,6 @@ public class AlertManagerImpl implements AlertManager {
     private double _publicIPCapacityThreshold = 0.75;
     private double _privateIPCapacityThreshold = 0.75;
 
-    private final GlobalLock m_capacityCheckLock = GlobalLock.getInternLock("capacity.check");
-
     @Override
     public boolean configure(String name, Map params) throws ConfigurationException {
         _name = name;
@@ -319,7 +318,7 @@ public class AlertManagerImpl implements AlertManager {
         }
     }
 
-    @Override
+    @Override @DB
     public void recalculateCapacity() {
         // FIXME: the right way to do this is to register a listener (see RouterStatsListener, VMSyncListener)
         //        for the vm sync state.  The listener model has connects/disconnects to keep things in sync much better
@@ -435,25 +434,23 @@ public class AlertManagerImpl implements AlertManager {
             newCapacities.add(newPrivateIPCapacity);
         }
 
-        if (m_capacityCheckLock.lock(5)) { // 5 second timeout
-            try {
-                // delete the old records
-                _capacityDao.clearNonStorageCapacities();
+        Transaction txn = Transaction.currentTxn();
+        try {
+        	txn.start();
+        	// delete the old records
+            _capacityDao.clearNonStorageCapacities();
 
-                for (CapacityVO newCapacity : newCapacities) {
-                    _capacityDao.persist(newCapacity);
-                }
-            } finally {
-                m_capacityCheckLock.unlock();
-            }
-
-            if (s_logger.isTraceEnabled()) {
-                s_logger.trace("done recalculating system capacity");
-            }
-        } else {
-            if (s_logger.isTraceEnabled()) {
-                s_logger.trace("Skipping capacity check, unable to lock the capacity table for recalculation.");
-            }
+            for (CapacityVO newCapacity : newCapacities) {
+            	s_logger.trace("Executing capacity update");
+                _capacityDao.persist(newCapacity);
+                s_logger.trace("Done with capacity update");
+            }
+            txn.commit();
+        } catch (Exception ex) {
+        	txn.rollback();
+        	s_logger.error("Unable to complete capacity update transaction", ex);
+        } finally {
+        	txn.close();
         }
     }
 
diff --git a/server/src/com/cloud/api/BaseCmd.java b/server/src/com/cloud/api/BaseCmd.java
index 4c630bd734c..c84a46c862e 100644
--- a/server/src/com/cloud/api/BaseCmd.java
+++ b/server/src/com/cloud/api/BaseCmd.java
@@ -153,6 +153,7 @@ public abstract class BaseCmd {
         CPU_ALLOCATED("cpuallocated", BaseCmd.TYPE_LONG, "cpuallocated"),
         CPU_USED("cpuused", BaseCmd.TYPE_LONG, "cpuused"),
         CREATED("created", BaseCmd.TYPE_DATE, "created"),
+        ATTACHED("attached", BaseCmd.TYPE_DATE, "attached"),
         CROSS_ZONES("crossZones", BaseCmd.TYPE_BOOLEAN, "crosszones"),
         DAILY_MAX("dailymax", BaseCmd.TYPE_INT, "dailyMax"),
         DATA_DISK_OFFERING_ID("datadiskofferingid", BaseCmd.TYPE_LONG, "dataDiskOfferingId"),
@@ -198,6 +199,7 @@ public abstract class BaseCmd {
         GROUP("group", BaseCmd.TYPE_STRING, "group"),
         GROUP_ID("group", BaseCmd.TYPE_LONG, "groupId"),
         GROUP_IDS("groupids", BaseCmd.TYPE_STRING, "groupIds"),
+        GUEST_OS_ID("guestosid", BaseCmd.TYPE_LONG, "guestOsId"),
         HA_ENABLE("haenable", BaseCmd.TYPE_BOOLEAN, "haEnable"),
         HAS_CHILD("haschild", BaseCmd.TYPE_BOOLEAN, "haschild"),
         HOST_ID("hostid", BaseCmd.TYPE_LONG, "hostId"),
@@ -308,6 +310,8 @@ public abstract class BaseCmd {
         RESOURCE_TYPE("resourcetype", BaseCmd.TYPE_INT, "resourcetype"),
         RESPONSE_TYPE("response",BaseCmd.TYPE_STRING,"response"),
         ROOT_DISK_OFFERING_ID("rootdiskofferingid", BaseCmd.TYPE_LONG, "rootDiskOfferingId"),
+        ROOT_DEVICE_ID("rootdeviceid", BaseCmd.TYPE_LONG, "rootDeviceId"),
+        ROOT_DEVICE_TYPE("rootdevicetype", BaseCmd.TYPE_STRING, "rootDeviceType"),
         RULE_ID("ruleid", BaseCmd.TYPE_LONG, "ruleId"),
         RUNNING_VMS("runningvms", BaseCmd.TYPE_LONG, "runningvms"),
         SCHEDULE("schedule", BaseCmd.TYPE_STRING, "schedule"),
diff --git a/server/src/com/cloud/api/commands/DetachVolumeCmd.java b/server/src/com/cloud/api/commands/DetachVolumeCmd.java
index d8f69b965d4..a5ddaeec23e 100644
--- a/server/src/com/cloud/api/commands/DetachVolumeCmd.java
+++ b/server/src/com/cloud/api/commands/DetachVolumeCmd.java
@@ -37,7 +37,9 @@ public class DetachVolumeCmd extends BaseCmd {
 
     static {
     	s_properties.add(new Pair(BaseCmd.Properties.ACCOUNT_OBJ, Boolean.FALSE));
-        s_properties.add(new Pair(BaseCmd.Properties.ID, Boolean.TRUE));
+        s_properties.add(new Pair(BaseCmd.Properties.ID, Boolean.FALSE));
+        s_properties.add(new Pair(BaseCmd.Properties.DEVICE_ID, Boolean.FALSE));
+        s_properties.add(new Pair(BaseCmd.Properties.VIRTUAL_MACHINE_ID, Boolean.FALSE));
     }
 
     public String getName() {
@@ -56,6 +58,23 @@ public class DetachVolumeCmd extends BaseCmd {
     public List> execute(Map params) {
     	Account account = (Account) params.get(BaseCmd.Properties.ACCOUNT_OBJ.getName());
     	Long volumeId = (Long) params.get(BaseCmd.Properties.ID.getName());
+    	Long deviceId = (Long) params.get(BaseCmd.Properties.DEVICE_ID.getName());
+    	Long instanceId = (Long) params.get(BaseCmd.Properties.VIRTUAL_MACHINE_ID.getName());
+    	VolumeVO volume = null;
+    	
+    	if((volumeId == null && (deviceId == null || instanceId == null)) || (volumeId != null && (deviceId != null || instanceId != null)))
+    	{
+    		throw new ServerApiException(BaseCmd.PARAM_ERROR, "Please provide either a volume id, or a (device id, instance id) pair");
+    	}
+
+    	if(volumeId!=null)
+    	{
+    		deviceId = instanceId = Long.valueOf("0");
+    	}
+    	else
+    	{
+    		volumeId = Long.valueOf("0");
+    	}
     	
     	boolean isAdmin;
     	if (account == null) {
@@ -67,9 +86,18 @@ public class DetachVolumeCmd extends BaseCmd {
     	}
 
     	// Check that the volume ID is valid
-    	VolumeVO volume = getManagementServer().findVolumeById(volumeId);
-    	if (volume == null)
-    		throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to find volume with ID: " + volumeId);
+    	if(volumeId != 0)
+    	{
+    		volume = getManagementServer().findVolumeById(volumeId);
+    		if (volume == null)
+    			throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to find volume with ID: " + volumeId);
+    	}
+    	else
+    	{
+    		volume = getManagementServer().findVolumeByInstanceAndDeviceId(instanceId, deviceId);
+    		if (volume == null)
+    			throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to find volume attached to VM " + instanceId + " at device " + deviceId);
+    	}
 
     	// If the account is not an admin, check that the volume is owned by the account that was passed in
     	if (!isAdmin) {
@@ -82,7 +110,7 @@ public class DetachVolumeCmd extends BaseCmd {
     	}
 
     	try {
-    		long jobId = getManagementServer().detachVolumeFromVMAsync(volumeId);
+    		long jobId = getManagementServer().detachVolumeFromVMAsync(volumeId,deviceId,instanceId);
 
     		if (jobId == 0) {
             	s_logger.warn("Unable to schedule async-job for DetachVolume comamnd");
diff --git a/server/src/com/cloud/api/commands/ExtractTemplateCmd.java b/server/src/com/cloud/api/commands/ExtractTemplateCmd.java
new file mode 100644
index 00000000000..a73c291bbcd
--- /dev/null
+++ b/server/src/com/cloud/api/commands/ExtractTemplateCmd.java
@@ -0,0 +1,90 @@
+package com.cloud.api.commands;
+
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.log4j.Logger;
+
+import com.cloud.api.BaseCmd;
+import com.cloud.api.ServerApiException;
+import com.cloud.dc.DataCenterVO;
+import com.cloud.server.ManagementServer;
+import com.cloud.storage.VMTemplateVO;
+import com.cloud.user.Account;
+import com.cloud.utils.Pair;
+
+public class ExtractTemplateCmd extends BaseCmd {
+
+	public static final Logger s_logger = Logger.getLogger(ExtractTemplateCmd.class.getName());
+
+    private static final String s_name = "extracttemplateresponse";
+    private static final List> s_properties = new ArrayList>();
+
+    static {        
+        s_properties.add(new Pair(BaseCmd.Properties.URL, Boolean.TRUE));
+        s_properties.add(new Pair(BaseCmd.Properties.ID, Boolean.TRUE));
+        s_properties.add(new Pair(BaseCmd.Properties.ZONE_ID, Boolean.TRUE));
+        s_properties.add(new Pair(BaseCmd.Properties.ACCOUNT_OBJ, Boolean.FALSE));
+    }
+    
+	@Override
+	public List> execute(Map params) {
+		String url		   = (String) params.get(BaseCmd.Properties.URL.getName());
+		Long templateId    = (Long) params.get(BaseCmd.Properties.ID.getName());
+		Long zoneId		   = (Long) params.get(BaseCmd.Properties.ZONE_ID.getName());
+		Account account = (Account) params.get(BaseCmd.Properties.ACCOUNT_OBJ.getName());				
+		
+		ManagementServer managementServer = getManagementServer();
+        VMTemplateVO template = managementServer.findTemplateById(templateId.longValue());
+        if (template == null) {
+            throw new ServerApiException(BaseCmd.INTERNAL_ERROR, "Unable to find template with id " + templateId);
+        }
+        if (template.getName().startsWith("xs-tools") ){
+        	throw new ServerApiException(BaseCmd.INTERNAL_ERROR, "Unable to extract the template " + template.getName() + "; it is not supported yet");
+        }
+		
+        if(url.toLowerCase().contains("file://")){
+        	throw new ServerApiException(BaseCmd.PARAM_ERROR, "file:// type urls are currently unsupported");
+        }
+                
+    	if (account != null) {    		    	
+    		if(!isAdmin(account.getType())){
+    			if (template.getAccountId() != account.getId()){
+    				throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to find template with ID: " + templateId + " for account: " + account.getAccountName());
+    			}
+    		}else if(!managementServer.isChildDomain(account.getDomainId(), managementServer.findDomainIdByAccountId(template.getAccountId())) ) {
+    			throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to extract template " + templateId + " to " + url + ", permission denied.");
+    		}
+    	}
+    	
+        try {
+			managementServer.extractTemplate(url, templateId, zoneId);
+		} catch (Exception e) {			
+			s_logger.error(e.getMessage(), e);
+            throw new ServerApiException(BaseCmd.INTERNAL_ERROR, "Internal error extracting the template: " + e.getMessage());
+		}
+		DataCenterVO zone = managementServer.getDataCenterBy(zoneId);		
+		List> response = new ArrayList>();
+		response.add(new Pair(BaseCmd.Properties.TEMPLATE_ID.getName(), templateId));
+		response.add(new Pair(BaseCmd.Properties.NAME.getName(), template.getName()));
+		response.add(new Pair(BaseCmd.Properties.DISPLAY_TEXT.getName(), template.getDisplayText()));
+		response.add(new Pair(BaseCmd.Properties.URL.getName(), url));
+		response.add(new Pair(BaseCmd.Properties.ZONE_ID.getName(), zoneId));
+		response.add(new Pair(BaseCmd.Properties.ZONE_NAME.getName(), zone.getName()));
+		response.add(new Pair(BaseCmd.Properties.TEMPLATE_STATUS.getName(), "Processing"));		
+		return response;
+	}
+
+	@Override
+	public String getName() {
+		return s_name;
+	}
+
+	@Override
+	public List> getProperties() {
+		return s_properties;
+	}
+
+}
diff --git a/server/src/com/cloud/api/commands/ListVMsCmd.java b/server/src/com/cloud/api/commands/ListVMsCmd.java
index 0a61217a95c..68ffa17b314 100644
--- a/server/src/com/cloud/api/commands/ListVMsCmd.java
+++ b/server/src/com/cloud/api/commands/ListVMsCmd.java
@@ -35,7 +35,10 @@ import com.cloud.server.Criteria;
 import com.cloud.service.ServiceOfferingVO;
 import com.cloud.storage.GuestOSCategoryVO;
 import com.cloud.storage.GuestOSVO;
+import com.cloud.storage.StoragePool;
+import com.cloud.storage.StoragePoolVO;
 import com.cloud.storage.VMTemplateVO;
+import com.cloud.storage.VolumeVO;
 import com.cloud.user.Account;
 import com.cloud.uservm.UserVm;
 import com.cloud.utils.Pair;
@@ -53,6 +56,7 @@ public class ListVMsCmd extends BaseCmd {
         s_properties.add(new Pair(BaseCmd.Properties.STATE, Boolean.FALSE));
         s_properties.add(new Pair(BaseCmd.Properties.ZONE_ID, Boolean.FALSE));
         s_properties.add(new Pair(BaseCmd.Properties.POD_ID, Boolean.FALSE));
+        s_properties.add(new Pair(BaseCmd.Properties.GROUP, Boolean.FALSE));
         s_properties.add(new Pair(BaseCmd.Properties.HOST_ID, Boolean.FALSE));
         s_properties.add(new Pair(BaseCmd.Properties.KEYWORD, Boolean.FALSE));
         s_properties.add(new Pair(BaseCmd.Properties.ACCOUNT, Boolean.FALSE));
@@ -82,6 +86,7 @@ public class ListVMsCmd extends BaseCmd {
         Long zoneId = (Long)params.get(BaseCmd.Properties.ZONE_ID.getName());
         Long podId = (Long)params.get(BaseCmd.Properties.POD_ID.getName());
         Long hostId = (Long)params.get(BaseCmd.Properties.HOST_ID.getName());
+        String group = (String)params.get(BaseCmd.Properties.GROUP.getName());
         String keyword = (String)params.get(BaseCmd.Properties.KEYWORD.getName());
         Integer page = (Integer)params.get(BaseCmd.Properties.PAGE.getName());
         Integer pageSize = (Integer)params.get(BaseCmd.Properties.PAGESIZE.getName());
@@ -140,6 +145,14 @@ public class ListVMsCmd extends BaseCmd {
             if(zoneId != null)
             	c.addCriteria(Criteria.DATACENTERID, zoneId);
 
+            if(group != null)
+            {
+            	if(group.equals(""))
+            		c.addCriteria(Criteria.EMPTY_GROUP, group);
+            	else
+            		c.addCriteria(Criteria.GROUP, group);
+            }
+
             // ignore these search requests if it's not an admin
             if (isAdmin == true) {
     	        c.addCriteria(Criteria.DOMAINID, domainId);
@@ -169,6 +182,14 @@ public class ListVMsCmd extends BaseCmd {
         }
 
         for (UserVm vmInstance : virtualMachines) {
+    
+        	//if the account is deleted, do not return the user vm 
+        	Account currentVmAccount = getManagementServer().getAccount(vmInstance.getAccountId());
+        	if(currentVmAccount.getRemoved()!=null)
+        	{
+        		continue; //not returning this vm
+        	}
+        	
             List> vmData = new ArrayList>();
             AsyncJobVO asyncJob = getManagementServer().findInstancePendingAsyncJob("vm_instance", vmInstance.getId());
             if(asyncJob != null) {
@@ -260,14 +281,22 @@ public class ListVMsCmd extends BaseCmd {
                 long networkKbWrite = (long)vmStats.getNetworkWriteKBs();
                 vmData.add(new Pair(BaseCmd.Properties.NETWORK_KB_WRITE.getName(), networkKbWrite));
             }
+            vmData.add(new Pair(BaseCmd.Properties.GUEST_OS_ID.getName(), vmInstance.getGuestOSId()));
             
-            GuestOSCategoryVO guestOsCategory = getManagementServer().getGuestOsCategory(vmInstance.getGuestOSId());
-            if(guestOsCategory!=null)
-            	vmData.add(new Pair(BaseCmd.Properties.OS_TYPE_ID.getName(),guestOsCategory.getId()));
+            GuestOSVO guestOs = getManagementServer().getGuestOs(vmInstance.getGuestOSId());
+            if(guestOs!=null)
+            	vmData.add(new Pair(BaseCmd.Properties.OS_TYPE_ID.getName(),guestOs.getCategoryId()));
 
             //network groups
             vmData.add(new Pair(BaseCmd.Properties.NETWORK_GROUP_LIST.getName(), getManagementServer().getNetworkGroupsNamesForVm(vmInstance.getId())));
             
+            //root device related
+            VolumeVO rootVolume = getManagementServer().findRootVolume(vmInstance.getId());
+            vmData.add(new Pair(BaseCmd.Properties.ROOT_DEVICE_ID.getName(), rootVolume.getDeviceId()));
+            
+            StoragePoolVO storagePool = getManagementServer().findPoolById(rootVolume.getPoolId());
+            vmData.add(new Pair(BaseCmd.Properties.ROOT_DEVICE_TYPE.getName(), storagePool.getPoolType().toString()));
+            
             vmTag[i++] = vmData;
         }
         List<Pair<String, Object>> returnTags = new ArrayList<Pair<String, Object>>();
diff --git a/server/src/com/cloud/api/commands/ListVolumesCmd.java b/server/src/com/cloud/api/commands/ListVolumesCmd.java
index 85d93f39223..006f1565ca2 100755
--- a/server/src/com/cloud/api/commands/ListVolumesCmd.java
+++ b/server/src/com/cloud/api/commands/ListVolumesCmd.java
@@ -143,7 +143,7 @@ public class ListVolumesCmd extends BaseCmd{
 
         List<VolumeVO> volumes = getManagementServer().searchForVolumes(c);
 
-        if (volumes == null || volumes.size()==0) {
+        if (volumes == null) {
             throw new ServerApiException(BaseCmd.INTERNAL_ERROR, "unable to find volumes");
         }
 
@@ -194,6 +194,7 @@ public class ListVolumesCmd extends BaseCmd{
             volumeData.add(new Pair(BaseCmd.Properties.SIZE.getName(), virtualSizeInBytes));
 
             volumeData.add(new Pair(BaseCmd.Properties.CREATED.getName(), getDateString(volume.getCreated())));
+            volumeData.add(new Pair(BaseCmd.Properties.ATTACHED.getName(), getDateString(volume.getAttached())));
             volumeData.add(new Pair(BaseCmd.Properties.STATE.getName(),volume.getStatus()));
             
             Account accountTemp = getManagementServer().findAccountById(volume.getAccountId());
diff --git a/server/src/com/cloud/api/commands/PreparePrimaryStorageForMaintenanceCmd.java b/server/src/com/cloud/api/commands/PreparePrimaryStorageForMaintenanceCmd.java
index cfd3d013553..52e3455064a 100644
--- a/server/src/com/cloud/api/commands/PreparePrimaryStorageForMaintenanceCmd.java
+++ b/server/src/com/cloud/api/commands/PreparePrimaryStorageForMaintenanceCmd.java
@@ -27,7 +27,6 @@ import org.apache.log4j.Logger;
 import com.cloud.api.BaseCmd;
 import com.cloud.api.ServerApiException;
 import com.cloud.exception.InvalidParameterValueException;
-import com.cloud.host.HostVO;
 import com.cloud.host.Status;
 import com.cloud.storage.StoragePoolVO;
 import com.cloud.user.Account;
diff --git a/server/src/com/cloud/async/executor/VolumeOperationExecutor.java b/server/src/com/cloud/async/executor/VolumeOperationExecutor.java
index 9f32126c568..750bf2f6f9e 100644
--- a/server/src/com/cloud/async/executor/VolumeOperationExecutor.java
+++ b/server/src/com/cloud/async/executor/VolumeOperationExecutor.java
@@ -86,7 +86,7 @@ public class VolumeOperationExecutor extends BaseAsyncJobExecutor {
                     eventType = EventTypes.EVENT_VOLUME_DETACH;
                     failureDescription = "Failed to detach volume";
 
-    				asyncMgr.getExecutorContext().getManagementServer().detachVolumeFromVM(param.getVolumeId(), param.getEventId());
+    				asyncMgr.getExecutorContext().getManagementServer().detachVolumeFromVM(param.getVolumeId(), param.getEventId(),param.getDeviceId(),param.getVmId());
     				success = true;
     				asyncMgr.completeAsyncJob(getJob().getId(), AsyncJobResult.STATUS_SUCCEEDED, 0, null);
     			} else {
diff --git a/server/src/com/cloud/async/executor/VolumeOperationParam.java b/server/src/com/cloud/async/executor/VolumeOperationParam.java
index 4ee39e087a0..b59ddd51f2f 100644
--- a/server/src/com/cloud/async/executor/VolumeOperationParam.java
+++ b/server/src/com/cloud/async/executor/VolumeOperationParam.java
@@ -41,7 +41,7 @@ public class VolumeOperationParam {
 	private long volumeId;
 	private long eventId;
 	private Long deviceId;
-
+	
 	public VolumeOperationParam() {
 	}
 	
diff --git a/server/src/com/cloud/configuration/Config.java b/server/src/com/cloud/configuration/Config.java
index ae4dd16ca23..e2f22c3989d 100644
--- a/server/src/com/cloud/configuration/Config.java
+++ b/server/src/com/cloud/configuration/Config.java
@@ -168,7 +168,7 @@ public enum Config {
     
 	// Premium
 	
-	UsageAggregationTimezone("Premium", ManagementServer.class, String.class, "usage.aggregation.timezone", "GMT", "The timezone to use when aggregating user statistics", null),
+	UsageExecutionTimezone("Premium", ManagementServer.class, String.class, "usage.execution.timezone", null, "The timezone to use for usage job execution time", null),
 	UsageStatsJobAggregationRange("Premium", ManagementServer.class, Integer.class, "usage.stats.job.aggregation.range", "1440", "The range of time for aggregating the user statistics specified in minutes (e.g. 1440 for daily, 60 for hourly.", null),
 	UsageStatsJobExecTime("Premium", ManagementServer.class, String.class, "usage.stats.job.exec.time", "00:15", "The time at which the usage statistics aggregation job will run as an HH24:MM time, e.g. 00:30 to run at 12:30am.", null),
     EnableUsageServer("Premium", ManagementServer.class, Boolean.class, "enable.usage.server", "true", "Flag for enabling usage", null),
diff --git a/server/src/com/cloud/hypervisor/kvm/discoverer/KvmServerDiscoverer.java b/server/src/com/cloud/hypervisor/kvm/discoverer/KvmServerDiscoverer.java
index b1d8bb088cf..a17536db1a0 100644
--- a/server/src/com/cloud/hypervisor/kvm/discoverer/KvmServerDiscoverer.java
+++ b/server/src/com/cloud/hypervisor/kvm/discoverer/KvmServerDiscoverer.java
@@ -24,6 +24,7 @@ import com.cloud.configuration.dao.ConfigurationDao;
 import com.cloud.exception.DiscoveryException;
 import com.cloud.host.HostVO;
 import com.cloud.host.Status;
+import com.cloud.host.Status.Event;
 import com.cloud.host.dao.HostDao;
 import com.cloud.hypervisor.kvm.resource.KvmDummyResourceBase;
 import com.cloud.hypervisor.xen.resource.CitrixResourceBase;
@@ -44,7 +45,7 @@ public class KvmServerDiscoverer extends DiscovererBase implements Discoverer,
 	 private String _setupAgentPath;
 	 private ConfigurationDao _configDao;
 	 private String _hostIp;
-	 private int _waitTime = 10;
+	 private int _waitTime = 3; /*wait for 3 minutes*/
 	 @Inject HostDao _hostDao = null;
 	 
 	@Override
@@ -244,6 +245,7 @@ public class KvmServerDiscoverer extends DiscovererBase implements Discoverer,
 		for (int i = 0 ; i < _waitTime; i++) {
 			
 			if (host.getStatus() != Status.Up) {
+				s_logger.debug("Waiting for host to come back, try: " + i);
 				try {
 					Thread.sleep(60000);
 				} catch (InterruptedException e) {
@@ -253,9 +255,11 @@ public class KvmServerDiscoverer extends DiscovererBase implements Discoverer,
 				return;
 			}
 		}
-
+		
+		
+		_hostDao.updateStatus(host, Event.AgentDisconnected, msId);
 		/*Timeout, throw warning msg to user*/
-		throw new DiscoveryException("Agent " + host.getId() + ":" + host.getPublicIpAddress() + " does not come back, It may connect to server later, if not, please check the agent log");
+		throw new DiscoveryException("Host " + host.getId() + ":" + host.getPrivateIpAddress() + " did not come back up. It may connect to the server later; if not, please check the agent log on this host");
 	}
 	
 	@Override
diff --git a/server/src/com/cloud/network/NetworkManagerImpl.java b/server/src/com/cloud/network/NetworkManagerImpl.java
index 7d109c60897..28b5a04bccf 100644
--- a/server/src/com/cloud/network/NetworkManagerImpl.java
+++ b/server/src/com/cloud/network/NetworkManagerImpl.java
@@ -642,6 +642,9 @@ public class NetworkManagerImpl implements NetworkManager, VirtualMachineManager
             }
             
             if (!found) {
+                event.setDescription("Failed to create Domain Router: " + name);
+                event.setLevel(EventVO.LEVEL_ERROR);
+                _eventDao.persist(event);
                 throw new ExecutionException("Unable to create DomainRouter");
             }
             _routerDao.updateIf(router, Event.OperationSucceeded, null);
@@ -1793,8 +1796,6 @@ public class NetworkManagerImpl implements NetworkManager, VirtualMachineManager
         
         final Map<String, String> configs = _configDao.getConfiguration("AgentManager", params);
 
-        _routerTemplateId = NumbersUtil.parseInt(configs.get("router.template.id"), 1);
-
         _routerRamSize = NumbersUtil.parseInt(configs.get("router.ram.size"), 128);
 
 //        String value = configs.get("guest.ip.network");
@@ -1836,11 +1837,11 @@ public class NetworkManagerImpl implements NetworkManager, VirtualMachineManager
         _offering = new ServiceOfferingVO("Fake Offering For DomR", 1, _routerRamSize, 0, 0, 0, false, null, NetworkOffering.GuestIpType.Virtualized, useLocalStorage, true, null);
         _offering.setUniqueName("Cloud.Com-SoftwareRouter");
         _offering = _serviceOfferingDao.persistSystemServiceOffering(_offering);
-        _template = _templateDao.findById(_routerTemplateId);
+        _template = _templateDao.findRoutingTemplate();
         if (_template == null) {
         	s_logger.error("Unable to find system vm template.");
-        	
-            // throw new ConfigurationException("Unable to find the template for the router: " + _routerTemplateId);
+        } else {
+        	_routerTemplateId = _template.getId();
         }
         
         NetworkOfferingVO publicNetworkOffering = new NetworkOfferingVO(NetworkOfferingVO.SystemVmPublicNetwork, TrafficType.Public, null);
diff --git a/server/src/com/cloud/network/security/NetworkGroupManagerImpl.java b/server/src/com/cloud/network/security/NetworkGroupManagerImpl.java
index afc5bab16a3..82e4b0ab531 100644
--- a/server/src/com/cloud/network/security/NetworkGroupManagerImpl.java
+++ b/server/src/com/cloud/network/security/NetworkGroupManagerImpl.java
@@ -684,6 +684,9 @@ public class NetworkGroupManagerImpl implements NetworkGroupManager {
 	@Override
 	@DB
	public boolean addInstanceToGroups(final Long userVmId, final List<NetworkGroupVO> groups) {
+		if (!_enabled) {
+			return true;
+		}
 		if (groups != null) {
			final Set<NetworkGroupVO> uniqueGroups = new TreeSet<NetworkGroupVO>(new NetworkGroupVOComparator());
 			uniqueGroups.addAll(groups);
@@ -724,6 +727,9 @@ public class NetworkGroupManagerImpl implements NetworkGroupManager {
 	@Override
 	@DB
 	public void removeInstanceFromGroups(Long userVmId) {
+		if (!_enabled) {
+			return;
+		}
 		final Transaction txn = Transaction.currentTxn();
 		txn.start();
 		UserVm userVm = _userVMDao.acquire(userVmId); //ensures that duplicate entries are not created in addInstance
diff --git a/server/src/com/cloud/server/ConfigurationServerImpl.java b/server/src/com/cloud/server/ConfigurationServerImpl.java
index 8a16cc25108..851e3068772 100644
--- a/server/src/com/cloud/server/ConfigurationServerImpl.java
+++ b/server/src/com/cloud/server/ConfigurationServerImpl.java
@@ -238,7 +238,7 @@ public class ConfigurationServerImpl implements ConfigurationServer {
 		
 		String[] defaultRouteList = defaultRoute.split("\\s+");
 		
-		if (defaultRouteList.length != 7) {
+		if (defaultRouteList.length < 5) {
 			return null;
 		}
 		
@@ -420,10 +420,12 @@ public class ConfigurationServerImpl implements ConfigurationServer {
 
             String homeDir = Script.runSimpleBashScript("echo ~");
             if (homeDir == "~") {
-                s_logger.warn("No home directory was detected.  Trouble with SSH keys ahead.");
-                return;
+                s_logger.error("No home directory was detected.  Set the HOME environment variable to point to your user profile or home directory.");
+                throw new RuntimeException("No home directory was detected.  Set the HOME environment variable to point to your user profile or home directory.");
             }
 
+            String keygenOutput = Script.runSimpleBashScript("if [ -f ~/.ssh/id_rsa ] ; then true ; else yes '' | ssh-keygen -t rsa -q -O no-pty ; fi");
+
             File privkeyfile = new File(homeDir + "/.ssh/id_rsa");
             File pubkeyfile  = new File(homeDir + "/.ssh/id_rsa.pub");
             byte[] arr1 = new byte[4094]; // configuration table column value size
@@ -431,8 +433,8 @@ public class ConfigurationServerImpl implements ConfigurationServer {
                 new DataInputStream(new FileInputStream(privkeyfile)).readFully(arr1);
             } catch (EOFException e) {
             } catch (Exception e) {
-                s_logger.warn("Cannot read the private key file",e);
-                return;
+                s_logger.error("Cannot read the private key file",e);
+                throw new RuntimeException("Cannot read the private key file");
             }
             String privateKey = new String(arr1).trim();
             byte[] arr2 = new byte[4094]; // configuration table column value size
@@ -441,7 +443,7 @@ public class ConfigurationServerImpl implements ConfigurationServer {
             } catch (EOFException e) {			    
             } catch (Exception e) {
                 s_logger.warn("Cannot read the public key file",e);
-                return;
+                throw new RuntimeException("Cannot read the public key file");
             }
             String publicKey  = new String(arr2).trim();
 
@@ -458,7 +460,8 @@ public class ConfigurationServerImpl implements ConfigurationServer {
                     s_logger.debug("Private key inserted into database");
                 }
             } catch (SQLException ex) {
-                s_logger.warn("SQL of the private key failed",ex);
+                s_logger.error("SQL of the private key failed",ex);
+                throw new RuntimeException("SQL of the private key failed");
             }
 
             try {
@@ -468,7 +471,8 @@ public class ConfigurationServerImpl implements ConfigurationServer {
                     s_logger.debug("Public key inserted into database");
                 }
             } catch (SQLException ex) {
-                s_logger.warn("SQL of the public key failed",ex);
+                s_logger.error("SQL of the public key failed",ex);
+                throw new RuntimeException("SQL of the public key failed");
             }
         }
     }
diff --git a/server/src/com/cloud/server/ManagementServerImpl.java b/server/src/com/cloud/server/ManagementServerImpl.java
old mode 100644
new mode 100755
index 3c65b3033e3..0426877579e
--- a/server/src/com/cloud/server/ManagementServerImpl.java
+++ b/server/src/com/cloud/server/ManagementServerImpl.java
@@ -101,15 +101,15 @@ import com.cloud.async.executor.SecurityGroupParam;
 import com.cloud.async.executor.UpdateLoadBalancerParam;
 import com.cloud.async.executor.UpgradeVMParam;
 import com.cloud.async.executor.VMOperationParam;
-import com.cloud.async.executor.VMOperationParam.VmOp;
 import com.cloud.async.executor.VolumeOperationParam;
+import com.cloud.async.executor.VMOperationParam.VmOp;
 import com.cloud.async.executor.VolumeOperationParam.VolumeOp;
 import com.cloud.capacity.CapacityVO;
 import com.cloud.capacity.dao.CapacityDao;
 import com.cloud.configuration.ConfigurationManager;
 import com.cloud.configuration.ConfigurationVO;
-import com.cloud.configuration.ResourceCount.ResourceType;
 import com.cloud.configuration.ResourceLimitVO;
+import com.cloud.configuration.ResourceCount.ResourceType;
 import com.cloud.configuration.dao.ConfigurationDao;
 import com.cloud.configuration.dao.ResourceLimitDao;
 import com.cloud.consoleproxy.ConsoleProxyManager;
@@ -119,8 +119,8 @@ import com.cloud.dc.DataCenterIpAddressVO;
 import com.cloud.dc.DataCenterVO;
 import com.cloud.dc.HostPodVO;
 import com.cloud.dc.PodVlanMapVO;
-import com.cloud.dc.Vlan.VlanType;
 import com.cloud.dc.VlanVO;
+import com.cloud.dc.Vlan.VlanType;
 import com.cloud.dc.dao.AccountVlanMapDao;
 import com.cloud.dc.dao.ClusterDao;
 import com.cloud.dc.dao.DataCenterDao;
@@ -175,7 +175,6 @@ import com.cloud.network.security.NetworkGroupRulesVO;
 import com.cloud.network.security.NetworkGroupVO;
 import com.cloud.network.security.dao.NetworkGroupDao;
 import com.cloud.offering.NetworkOffering;
-import com.cloud.offering.NetworkOffering.GuestIpType;
 import com.cloud.offering.ServiceOffering;
 import com.cloud.serializer.GsonHelper;
 import com.cloud.server.auth.UserAuthenticator;
@@ -188,13 +187,10 @@ import com.cloud.storage.GuestOSCategoryVO;
 import com.cloud.storage.GuestOSVO;
 import com.cloud.storage.LaunchPermissionVO;
 import com.cloud.storage.Snapshot;
-import com.cloud.storage.Snapshot.SnapshotType;
 import com.cloud.storage.SnapshotPolicyVO;
 import com.cloud.storage.SnapshotScheduleVO;
 import com.cloud.storage.SnapshotVO;
 import com.cloud.storage.Storage;
-import com.cloud.storage.Storage.FileSystem;
-import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.storage.StorageManager;
 import com.cloud.storage.StoragePoolHostVO;
 import com.cloud.storage.StoragePoolVO;
@@ -202,9 +198,13 @@ import com.cloud.storage.StorageStats;
 import com.cloud.storage.VMTemplateHostVO;
 import com.cloud.storage.VMTemplateStorageResourceAssoc;
 import com.cloud.storage.VMTemplateVO;
-import com.cloud.storage.Volume.VolumeType;
+import com.cloud.storage.Volume;
 import com.cloud.storage.VolumeStats;
 import com.cloud.storage.VolumeVO;
+import com.cloud.storage.Snapshot.SnapshotType;
+import com.cloud.storage.Storage.FileSystem;
+import com.cloud.storage.Storage.ImageFormat;
+import com.cloud.storage.Volume.VolumeType;
 import com.cloud.storage.dao.DiskOfferingDao;
 import com.cloud.storage.dao.DiskTemplateDao;
 import com.cloud.storage.dao.GuestOSCategoryDao;
@@ -215,9 +215,9 @@ import com.cloud.storage.dao.SnapshotPolicyDao;
 import com.cloud.storage.dao.StoragePoolDao;
 import com.cloud.storage.dao.StoragePoolHostDao;
 import com.cloud.storage.dao.VMTemplateDao;
-import com.cloud.storage.dao.VMTemplateDao.TemplateFilter;
 import com.cloud.storage.dao.VMTemplateHostDao;
 import com.cloud.storage.dao.VolumeDao;
+import com.cloud.storage.dao.VMTemplateDao.TemplateFilter;
 import com.cloud.storage.preallocatedlun.PreallocatedLunVO;
 import com.cloud.storage.preallocatedlun.dao.PreallocatedLunDao;
 import com.cloud.storage.secondary.SecondaryStorageVmManager;
@@ -239,12 +239,12 @@ import com.cloud.user.dao.UserDao;
 import com.cloud.user.dao.UserStatisticsDao;
 import com.cloud.uservm.UserVm;
 import com.cloud.utils.DateUtil;
-import com.cloud.utils.DateUtil.IntervalType;
 import com.cloud.utils.EnumUtils;
 import com.cloud.utils.NumbersUtil;
 import com.cloud.utils.Pair;
 import com.cloud.utils.PasswordGenerator;
 import com.cloud.utils.StringUtils;
+import com.cloud.utils.DateUtil.IntervalType;
 import com.cloud.utils.component.Adapters;
 import com.cloud.utils.component.ComponentLocator;
 import com.cloud.utils.concurrency.NamedThreadFactory;
@@ -921,6 +921,9 @@ public class ManagementServerImpl implements ManagementServer {
             // Mark the account's volumes as destroyed
             List<VolumeVO> volumes = _volumeDao.findDetachedByAccount(accountId);
             for (VolumeVO volume : volumes) {
+            	if(volume.getPoolId()==null){
+            		accountCleanupNeeded = true;
+            	}
             	_storageMgr.destroyVolume(volume);
             }
 
@@ -1983,13 +1986,17 @@ public class ManagementServerImpl implements ManagementServer {
     }
 
     @Override
-    public void detachVolumeFromVM(long volumeId, long startEventId) throws InternalErrorException {
-        _vmMgr.detachVolumeFromVM(volumeId, startEventId);
+    public void detachVolumeFromVM(long volumeId, long startEventId, long deviceId, long instanceId) throws InternalErrorException {
+        _vmMgr.detachVolumeFromVM(volumeId, startEventId, deviceId, instanceId);
     }
 
     @Override
-    public long detachVolumeFromVMAsync(long volumeId) throws InvalidParameterValueException {
-        VolumeVO volume = _volumeDao.findById(volumeId);
+    public long detachVolumeFromVMAsync(long volumeId, long deviceId, long instanceId) throws InvalidParameterValueException {
+    	VolumeVO volume = null;
+    	if(volumeId!=0)
+    		volume = _volumeDao.findById(volumeId);
+    	else
+    		volume = _volumeDao.findByInstanceAndDeviceId(instanceId, deviceId).get(0);
 
         // Check that the volume is a data volume
         if (volume.getVolumeType() != VolumeType.DATADISK) {
@@ -2020,6 +2027,8 @@ public class ManagementServerImpl implements ManagementServer {
         param.setAccountId(volume.getAccountId());
         param.setOp(VolumeOp.Detach);
         param.setVolumeId(volumeId);
+        param.setDeviceId(deviceId);
+        param.setVmId(instanceId);
         param.setEventId(eventId);
 
         Gson gson = GsonHelper.getBuilder().create();
@@ -4694,6 +4703,42 @@ public class ManagementServerImpl implements ManagementServer {
         return _vlanDao.findById(vlanDbId);
     }
 
+    @Override
+    public void extractTemplate(String url, Long templateId, Long zoneId) throws URISyntaxException {
+    
+        URI uri = new URI(url);
+        if ( (uri.getScheme() == null) || (!uri.getScheme().equalsIgnoreCase("ftp") )) {
+           throw new IllegalArgumentException("Unsupported scheme for url: " + url);
+        }
+        String host = uri.getHost();
+        
+        try {
+        	InetAddress hostAddr = InetAddress.getByName(host);
+        	if (hostAddr.isAnyLocalAddress() || hostAddr.isLinkLocalAddress() || hostAddr.isLoopbackAddress() || hostAddr.isMulticastAddress() ) {
+        		throw new IllegalArgumentException("Illegal host specified in url");
+        	}
+        	if (hostAddr instanceof Inet6Address) {
+        		throw new IllegalArgumentException("IPV6 addresses not supported (" + hostAddr.getHostAddress() + ")");
+        	}
+        } catch (UnknownHostException uhe) {
+        	throw new IllegalArgumentException("Unable to resolve " + host);
+        }
+        
+    	if (_dcDao.findById(zoneId) == null) {
+    		throw new IllegalArgumentException("Please specify a valid zone.");
+    	}
+        
+        VMTemplateVO template = findTemplateById(templateId);
+        
+        VMTemplateHostVO tmpltHostRef = findTemplateHostRef(templateId, zoneId);
+        if (tmpltHostRef != null && tmpltHostRef.getDownloadState() != com.cloud.storage.VMTemplateStorageResourceAssoc.Status.DOWNLOADED){
+        	throw new IllegalArgumentException("The template hasn't been downloaded yet");
+        }
+        
+        _tmpltMgr.extract(template, url, tmpltHostRef, zoneId);
+        
+    }
+    
     @Override
     public Long createTemplate(long userId, Long zoneId, String name, String displayText, boolean isPublic, boolean featured, String format, String diskType, String url, String chksum, boolean requiresHvm, int bits, boolean enablePassword, long guestOSId, boolean bootable) throws InvalidParameterValueException,IllegalArgumentException, ResourceAllocationException {
         try
@@ -4989,7 +5034,6 @@ public class ManagementServerImpl implements ManagementServer {
     public List<UserVmVO> searchForUserVMs(Criteria c) {
         Filter searchFilter = new Filter(UserVmVO.class, c.getOrderBy(), c.getAscending(), c.getOffset(), c.getLimit());
         SearchBuilder<UserVmVO> sb = _userVmDao.createSearchBuilder();
-
         // some criteria matter for generating the join condition
         Object[] accountIds = (Object[]) c.getCriteria(Criteria.ACCOUNTID);
         Object domainId = c.getCriteria(Criteria.DOMAINID);
@@ -5006,7 +5050,8 @@ public class ManagementServerImpl implements ManagementServer {
         Object keyword = c.getCriteria(Criteria.KEYWORD);
         Object isAdmin = c.getCriteria(Criteria.ISADMIN);
         Object ipAddress = c.getCriteria(Criteria.IPADDRESS);
-
+        Object vmGroup = c.getCriteria(Criteria.GROUP);
+        Object emptyGroup = c.getCriteria(Criteria.EMPTY_GROUP);
         sb.and("displayName", sb.entity().getDisplayName(), SearchCriteria.Op.LIKE);
         sb.and("id", sb.entity().getId(), SearchCriteria.Op.EQ);
         sb.and("accountIdEQ", sb.entity().getAccountId(), SearchCriteria.Op.EQ);
@@ -5020,7 +5065,8 @@ public class ManagementServerImpl implements ManagementServer {
         sb.and("hostIdEQ", sb.entity().getHostId(), SearchCriteria.Op.EQ);
         sb.and("hostIdIN", sb.entity().getHostId(), SearchCriteria.Op.IN);
         sb.and("guestIP", sb.entity().getGuestIpAddress(), SearchCriteria.Op.EQ);
-
+        sb.and("groupEQ", sb.entity().getGroup(),SearchCriteria.Op.EQ);
+        
         if ((accountIds == null) && (domainId != null)) {
             // if accountId isn't specified, we can do a domain match for the admin case
             SearchBuilder<DomainVO> domainSearch = _domainDao.createSearchBuilder();
@@ -5109,7 +5155,23 @@ public class ManagementServerImpl implements ManagementServer {
         if (ipAddress != null) {
             sc.setParameters("guestIP", ipAddress);
         }
+        
+        if(vmGroup!=null)
+        	sc.setParameters("groupEQ", vmGroup);
+        
+        if (emptyGroup!= null) 
+        {
+        	SearchBuilder<UserVmVO> emptyGroupSearch = _userVmDao.createSearchBuilder();
+        	emptyGroupSearch.and("group", emptyGroupSearch.entity().getGroup(), SearchCriteria.Op.EQ);
+        	emptyGroupSearch.or("null", emptyGroupSearch.entity().getGroup(), SearchCriteria.Op.NULL);
 
+        	SearchCriteria sc1 = emptyGroupSearch.create();
+        	sc1.setParameters("group", "");
+        	
+        	sc.addAnd("group", SearchCriteria.Op.SC, sc1);
+        }
+        
         return _userVmDao.search(sc, searchFilter);
     }
 
@@ -5538,6 +5600,20 @@ public class ManagementServerImpl implements ManagementServer {
              return null;
          }
     }
+    
+    @Override
+    public VolumeVO findVolumeByInstanceAndDeviceId(long instanceId, long deviceId) 
+    {
+         List<VolumeVO> volumes = _volumeDao.findByInstanceAndDeviceId(instanceId, deviceId);
+         if (volumes.isEmpty()) {
+             return null;
+         }
+         VolumeVO volume = volumes.get(0);
+         if (!volume.getDestroyed() && volume.getRemoved() == null) {
+             return volume;
+         }
+         return null;
+    }
 
 
     @Override
@@ -8633,9 +8709,15 @@ public class ManagementServerImpl implements ManagementServer {
     }
 
     @Override
-    public GuestOSCategoryVO getGuestOsCategory(Long guestOsId)
+    public GuestOSVO getGuestOs(Long guestOsId)
     {
-    	return _guestOSCategoryDao.findById(guestOsId);
+    	return _guestOSDao.findById(guestOsId);
+    }
+    
+    @Override
+    public VolumeVO getRootVolume(Long instanceId)
+    {
+    	return _volumeDao.findByInstanceAndType(instanceId, Volume.VolumeType.ROOT).get(0);
     }
 }
 
diff --git a/server/src/com/cloud/server/StatsCollector.java b/server/src/com/cloud/server/StatsCollector.java
index 58cfe27cdf8..668d4674171 100644
--- a/server/src/com/cloud/server/StatsCollector.java
+++ b/server/src/com/cloud/server/StatsCollector.java
@@ -61,6 +61,7 @@ import com.cloud.utils.component.ComponentLocator;
 import com.cloud.utils.concurrency.NamedThreadFactory;
 import com.cloud.utils.db.GlobalLock;
 import com.cloud.utils.db.SearchCriteria;
+import com.cloud.utils.db.Transaction;
 import com.cloud.vm.UserVmManager;
 import com.cloud.vm.UserVmVO;
 import com.cloud.vm.VmStats;
@@ -99,7 +100,7 @@ public class StatsCollector {
 	long storageStatsInterval = -1L;
 	long volumeStatsInterval = -1L;
 
-	private final GlobalLock m_capacityCheckLock = GlobalLock.getInternLock("capacity.check");
+	//private final GlobalLock m_capacityCheckLock = GlobalLock.getInternLock("capacity.check");
 
     public static StatsCollector getInstance() {
         return s_instance;
@@ -335,32 +336,25 @@ public class StatsCollector {
 //                    _capacityDao.persist(capacity);
                 }
 
-                if (m_capacityCheckLock.lock(5)) { // 5 second timeout
-		            if (s_logger.isTraceEnabled()) {
-		                s_logger.trace("recalculating system storage capacity");
-		            }
-		            try {
-		                // now update the capacity table with the new stats
-		                // FIXME: the right way to do this is to register a listener (see RouterStatsListener)
-		                //        for the host stats, send the WatchCommand at a regular interval
-		                //        to collect the stats from an agent and update the database as needed.  The
-		                //        listener model has connects/disconnects to keep things in sync much better
-		                //        than this model right now
-		                _capacityDao.clearStorageCapacities();
+                Transaction txn = Transaction.open(Transaction.CLOUD_DB);
+                try {
+                	if (s_logger.isTraceEnabled()) {
+		                s_logger.trace("recalculating system storage capacity");
+		            }
+                	txn.start();
+                	 _capacityDao.clearStorageCapacities();
 
-		                for (CapacityVO newCapacity : newCapacities) {
-		                    _capacityDao.persist(newCapacity);
-		                }
-		            } finally {
-                        m_capacityCheckLock.unlock();
-		            }
-                    if (s_logger.isTraceEnabled()) {
-                        s_logger.trace("done recalculating system storage capacity");
-                    }
-                } else {
-                    if (s_logger.isTraceEnabled()) {
-                        s_logger.trace("not recalculating system storage capacity, unable to lock capacity table");
-                    }
+	                for (CapacityVO newCapacity : newCapacities) {
+	                	s_logger.trace("Executing capacity update");
+	                    _capacityDao.persist(newCapacity);
+	                    s_logger.trace("Done with capacity update");
+	                }
+		            txn.commit();
+                } catch (Exception ex) {
+                	txn.rollback();
+                	s_logger.error("Unable to update storage capacity stats", ex);
+                }finally {
+                	txn.close();
                 }
 			} catch (Throwable t) {
 				s_logger.error("Error trying to retrieve storage stats", t);
diff --git a/server/src/com/cloud/storage/LocalStoragePoolListener.java b/server/src/com/cloud/storage/LocalStoragePoolListener.java
index 2c82edfe064..b113ec1a909 100644
--- a/server/src/com/cloud/storage/LocalStoragePoolListener.java
+++ b/server/src/com/cloud/storage/LocalStoragePoolListener.java
@@ -91,7 +91,9 @@ public class LocalStoragePoolListener implements Listener {
                 pool = new StoragePoolVO(poolId, name, pInfo.getUuid(), pInfo.getPoolType(), host.getDataCenterId(),
                                          host.getPodId(), pInfo.getAvailableBytes(), pInfo.getCapacityBytes(), pInfo.getHost(), 0,
                                          pInfo.getHostPath());
                 pool.setClusterId(host.getClusterId());
+                pool.setStatus(Status.Up);
                 _storagePoolDao.persist(pool, pInfo.getDetails());
                 StoragePoolHostVO poolHost = new StoragePoolHostVO(pool.getId(), host.getId(), pInfo.getLocalPath());
                 _storagePoolHostDao.persist(poolHost);
diff --git a/server/src/com/cloud/storage/StorageManagerImpl.java b/server/src/com/cloud/storage/StorageManagerImpl.java
index 2dfe42aba89..c0db46eb496 100644
--- a/server/src/com/cloud/storage/StorageManagerImpl.java
+++ b/server/src/com/cloud/storage/StorageManagerImpl.java
@@ -123,6 +123,7 @@ import com.cloud.user.AccountManager;
 import com.cloud.user.AccountVO;
 import com.cloud.user.User;
 import com.cloud.user.dao.AccountDao;
+import com.cloud.user.dao.UserDao;
 import com.cloud.uservm.UserVm;
 import com.cloud.utils.NumbersUtil;
 import com.cloud.utils.Pair;
@@ -188,6 +189,7 @@ public class StorageManagerImpl implements StorageManager {
     @Inject protected VMTemplateDao _templateDao;
     @Inject protected VMTemplateHostDao _templateHostDao;
     @Inject protected ServiceOfferingDao _offeringDao;
+    @Inject protected UserDao _userDao;
     
     protected SearchBuilder HostTemplateStatesSearch;
     protected SearchBuilder PoolsUsedByVmSearch;
@@ -298,6 +300,21 @@ public class StorageManagerImpl implements StorageManager {
                 return vols;
             }
             
+            //if we have a system vm
+            //get the storage pool
+            //if pool is in maintenance
+            //add to recreate vols, and continue
+            if(vm.getType().equals(VirtualMachine.Type.ConsoleProxy) || vm.getType().equals(VirtualMachine.Type.DomainRouter) || vm.getType().equals(VirtualMachine.Type.SecondaryStorageVm))
+            {
+            	StoragePoolVO sp = _storagePoolDao.findById(vol.getPoolId());
+            	
+            	if(sp.getStatus().equals(Status.PrepareForMaintenance))
+            	{
+            		recreateVols.add(vol);
+            		continue;
+            	}
+            }
+            
             StoragePoolHostVO ph = _storagePoolHostDao.findByPoolHost(vol.getPoolId(), host.getId());
             if (ph == null) {
                 if (s_logger.isDebugEnabled()) {
@@ -893,7 +910,7 @@ public class StorageManagerImpl implements StorageManager {
             
             if (dataVol != null) {
                 StoragePoolVO pool = _storagePoolDao.findById(rootCreated.getPoolId());
-                dataCreated = createVolume(dataVol, vm, template, dc, pod, pool.getClusterId(), offering, diskOffering, avoids,size);
+                dataCreated = createVolume(dataVol, vm, null, dc, pod, pool.getClusterId(), offering, diskOffering, avoids,size);
                 if (dataCreated == null) {
                     throw new CloudRuntimeException("Unable to create " + dataVol);
                 }
@@ -922,6 +939,17 @@ public class StorageManagerImpl implements StorageManager {
         }
         
         for (VolumeVO v : volumes) {
+        	
+        	//when the user vm is created, the volume is attached upon creation
+        	//set the attached datetime
+        	try{
+        		v.setAttached(new Date());
+        		_volsDao.update(v.getId(), v);
+        	}catch(Exception e)
+        	{
+        		s_logger.warn("Error updating the attached value for volume " + v.getId(), e);
+        	}
+        	
         	long volumeId = v.getId();
         	// Create an event
         	long sizeMB = v.getSize() / (1024 * 1024);
@@ -1359,6 +1387,14 @@ public class StorageManagerImpl implements StorageManager {
         }
         long poolId = _storagePoolDao.getNextInSequence(Long.class, "id");
         String uuid = UUID.nameUUIDFromBytes(new String(storageHost + hostPath).getBytes()).toString();
+        
+        List spHandles = _storagePoolDao.findIfDuplicatePoolsExistByUUID(uuid);
+        if(spHandles!=null && spHandles.size()>0)
+        {
+        	s_logger.debug("Another active pool with the same uuid already exists");
+        	throw new ResourceInUseException("Another active pool with the same uuid already exists");
+        }
+        
         s_logger.debug("In createPool Setting poolId - " +poolId+ " uuid - " +uuid+ " zoneId - " +zoneId+ " podId - " +podId+ " poolName - " +poolName);
         pool.setId(poolId);
         pool.setUuid(uuid);
@@ -1794,7 +1830,8 @@ public class StorageManagerImpl implements StorageManager {
                     }
                 }
                 s_logger.debug("Trying to execute Command: " + cmd + " on host: " + hostId + " try: " + tryCount);
-                answer = _agentMgr.send(hostId, cmd);
+                // set 120 min timeout for storage related command
+                answer = _agentMgr.send(hostId, cmd, 120*60*1000);
                 
                 if (answer != null && answer.getResult()) {
                     return answer;
@@ -1954,10 +1991,8 @@ public class StorageManagerImpl implements StorageManager {
     @DB
     public boolean preparePrimaryStorageForMaintenance(long primaryStorageId, long userId) 
     {
-        boolean destroyVolumes = false;
         long count = 1;
-        long consoleProxyId = 0;
-        long ssvmId = 0;
+        boolean restart = true;
         try 
         {
         	//1. Get the primary storage record
@@ -1970,34 +2005,31 @@ public class StorageManagerImpl implements StorageManager {
         	}	
         	
         	//check to see if other ps exist
-        	//if they do, then we can migrate over the system vms to them, destroy volumes for sys vms
-        	//if they dont, then do NOT destroy the volumes on this one
+        	//if they do, then we can migrate over the system vms to them
+        	//if they dont, then just stop all vms on this one
         	count = _storagePoolDao.countBy(primaryStorage.getId(), Status.Up);
-        	if(count>1)
-        	{
-        		destroyVolumes = true;
-        	}
         	
+        	if(count == 1)
+        		restart = false;
+        		
         	//2. Get a list of all the volumes within this storage pool
         	List allVolumes = _volsDao.findByPoolId(primaryStorageId);
-        	List markedVolumes = new ArrayList();
         	
         	//3. Each volume has an instance associated with it, stop the instance if running
         	for(VolumeVO volume : allVolumes)
         	{
         		VMInstanceVO vmInstance = _vmInstanceDao.findById(volume.getInstanceId());
         		
+        		if(vmInstance == null)
+        			continue;
+        		
         		//shut down the running vms
         		if(vmInstance.getState().equals(State.Running) || vmInstance.getState().equals(State.Stopped) || vmInstance.getState().equals(State.Stopping) || vmInstance.getState().equals(State.Starting))
         		{
         			
         			//if the instance is of type consoleproxy, call the console proxy
         			if(vmInstance.getType().equals(VirtualMachine.Type.ConsoleProxy))
-        			{
-        				//add this volume to be removed if flag=true
-        				if(destroyVolumes)
-        					markedVolumes.add(volume);
-        				
+        			{        				
         				//make sure it is not restarted again, update config to set flag to false
         				_configMgr.updateConfiguration(userId, "consoleproxy.restart", "false");
         				
@@ -2012,14 +2044,21 @@ public class StorageManagerImpl implements StorageManager {
                     		_storagePoolDao.persist(primaryStorage);
                     		return false;
             			}
-        				else
+        				else if(restart)
         				{
-        					if(destroyVolumes)
-        					{
-        						//proxy vm is stopped, and we have another ps available 
-        						//get the id for restart
-        						consoleProxyId = vmInstance.getId();        						
-        					}
+    						//create a dummy event
+    						long eventId1 = saveScheduledEvent(User.UID_SYSTEM, Account.ACCOUNT_ID_SYSTEM, EventTypes.EVENT_PROXY_START, "starting console proxy with Id: "+vmInstance.getId());
+    						
+    						//Restore config val for consoleproxy.restart to true
+    						_configMgr.updateConfiguration(userId, "consoleproxy.restart", "true");
+    						
+    						if(_consoleProxyMgr.startProxy(vmInstance.getId(), eventId1)==null)
+    						{
+    							s_logger.warn("There was an error starting the console proxy id: "+vmInstance.getId()+" on another storage pool, cannot enable primary storage maintenance");
+    			            	primaryStorage.setStatus(Status.ErrorInMaintenance);
+    			        		_storagePoolDao.persist(primaryStorage);
+    							return false;				
+    						}	  						
         				}
         			}
         			
@@ -2039,10 +2078,6 @@ public class StorageManagerImpl implements StorageManager {
         			//if the instance is of type secondary storage vm, call the secondary storage vm manager
         			if(vmInstance.getType().equals(VirtualMachine.Type.SecondaryStorageVm))
         			{           				
-        				//add this volume to be removed if flag=true
-        				if(destroyVolumes)
-        					markedVolumes.add(volume);
-        				
         				//create a dummy event
         				long eventId1 = saveScheduledEvent(User.UID_SYSTEM, Account.ACCOUNT_ID_SYSTEM, EventTypes.EVENT_SSVM_STOP, "stopping ssvm with Id: "+vmInstance.getId());
 
@@ -2053,25 +2088,23 @@ public class StorageManagerImpl implements StorageManager {
         	        		_storagePoolDao.persist(primaryStorage);
         					return false;
         				}
-        				else
+        				else if(restart)
         				{
-        					if(destroyVolumes)
-        					{
-        						//ss vm is stopped, and we have another ps available 				
-        						//get the id for restart
-        						ssvmId = vmInstance.getId();
-        					}
+    						//create a dummy event and restart the ssvm immediately
+    						long eventId = saveScheduledEvent(User.UID_SYSTEM, Account.ACCOUNT_ID_SYSTEM, EventTypes.EVENT_SSVM_START, "starting ssvm with Id: "+vmInstance.getId());
+    						if(_secStorageMgr.startSecStorageVm(vmInstance.getId(), eventId)==null)
+    						{
+    							s_logger.warn("There was an error starting the ssvm id: "+vmInstance.getId()+" on another storage pool, cannot enable primary storage maintenance");
+    			            	primaryStorage.setStatus(Status.ErrorInMaintenance);
+    			        		_storagePoolDao.persist(primaryStorage);
+    							return false;
+    						}
         				}
-
         			}
 
            			//if the instance is of type domain router vm, call the network manager
         			if(vmInstance.getType().equals(VirtualMachine.Type.DomainRouter))
         			{   
-        				//add this volume to be removed if flag=true
-        				if(destroyVolumes)
-        					markedVolumes.add(volume);
-        				
         				//create a dummy event
         				long eventId2 = saveScheduledEvent(User.UID_SYSTEM, Account.ACCOUNT_ID_SYSTEM, EventTypes.EVENT_ROUTER_STOP, "stopping domain router with Id: "+vmInstance.getId());
 
@@ -2082,45 +2115,23 @@ public class StorageManagerImpl implements StorageManager {
         	        		_storagePoolDao.persist(primaryStorage);
         					return false;
         				}
+           				else if(restart)
+        				{
+    						//create a dummy event and restart the domr immediately
+    						long eventId = saveScheduledEvent(User.UID_SYSTEM, Account.ACCOUNT_ID_SYSTEM, EventTypes.EVENT_ROUTER_START, "starting domr with Id: "+vmInstance.getId());
+    						if(_networkMgr.startRouter(vmInstance.getId(), eventId)==null)
+    						{
+    							s_logger.warn("There was an error starting the domr id: "+vmInstance.getId()+" on another storage pool, cannot enable primary storage maintenance");
+    			            	primaryStorage.setStatus(Status.ErrorInMaintenance);
+    			        		_storagePoolDao.persist(primaryStorage);
+    							return false;
+    						}
+        				}
         			}
-
         		}	
         	}
         	
-        	//4. Mark the volumes as removed
-        	for(VolumeVO vol : markedVolumes)
-        	{
-        		_volsDao.remove(vol.getId());
-        	}
-        	
-        	//5. Restart all the system vms conditionally
-        	if(destroyVolumes) //this means we have another ps. Ok to restart
-        	{
-				//create a dummy event
-				long eventId = saveScheduledEvent(User.UID_SYSTEM, Account.ACCOUNT_ID_SYSTEM, EventTypes.EVENT_SSVM_START, "starting ssvm with Id: "+ssvmId);
-				if(_secStorageMgr.startSecStorageVm(ssvmId, eventId)==null)
-				{
-					s_logger.warn("There was an error starting the ssvm id: "+ssvmId+" on another storage pool, cannot enable primary storage maintenance");
-	            	primaryStorage.setStatus(Status.ErrorInMaintenance);
-	        		_storagePoolDao.persist(primaryStorage);
-					return false;
-				}
-				
-				//create a dummy event
-				long eventId1 = saveScheduledEvent(User.UID_SYSTEM, Account.ACCOUNT_ID_SYSTEM, EventTypes.EVENT_PROXY_START, "starting console proxy with Id: "+consoleProxyId);
-				
-				//Restore config val for consoleproxy.restart to true
-				_configMgr.updateConfiguration(userId, "consoleproxy.restart", "true");
-				
-				if(_consoleProxyMgr.startProxy(consoleProxyId, eventId1)==null)
-				{
-					s_logger.warn("There was an error starting the console proxy id: "+consoleProxyId+" on another storage pool, cannot enable primary storage maintenance");
-	            	primaryStorage.setStatus(Status.ErrorInMaintenance);
-	        		_storagePoolDao.persist(primaryStorage);
-					return false;				}
-        	}
-        	
-        	//6. Update the status
+        	//5. Update the status
         	primaryStorage.setStatus(Status.Maintenance);
         	_storagePoolDao.persist(primaryStorage);
         	
diff --git a/server/src/com/cloud/storage/download/DownloadMonitorImpl.java b/server/src/com/cloud/storage/download/DownloadMonitorImpl.java
index 31739bfec3e..582e0f58f9e 100644
--- a/server/src/com/cloud/storage/download/DownloadMonitorImpl.java
+++ b/server/src/com/cloud/storage/download/DownloadMonitorImpl.java
@@ -450,12 +450,13 @@ public class DownloadMonitorImpl implements  DownloadMonitor {
                     tmpltHost.setDownloadPercent(100);
                     tmpltHost.setDownloadState(Status.DOWNLOADED);
                     tmpltHost.setInstallPath(templateInfo.get(uniqueName).getInstallPath());
+                    tmpltHost.setSize(templateInfo.get(uniqueName).getSize());
                     tmpltHost.setLastUpdated(new Date());
 					_vmTemplateHostDao.update(tmpltHost.getId(), tmpltHost);
 				} else {
-					VMTemplateHostVO templtHost = new VMTemplateHostVO(sserverId, tmplt.getId(), new Date(), 100, Status.DOWNLOADED, null, null, null, templateInfo.get(uniqueName).getInstallPath(), tmplt.getUrl());
-					templtHost.setSize(templateInfo.get(uniqueName).getSize());
-					_vmTemplateHostDao.persist(templtHost);
+				    tmpltHost = new VMTemplateHostVO(sserverId, tmplt.getId(), new Date(), 100, Status.DOWNLOADED, null, null, null, templateInfo.get(uniqueName).getInstallPath(), tmplt.getUrl());
+					tmpltHost.setSize(templateInfo.get(uniqueName).getSize());
+					_vmTemplateHostDao.persist(tmpltHost);
 				}
 				templateInfo.remove(uniqueName);
 				continue;
diff --git a/server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java b/server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java
index c6f82d2f567..d2cd55e2c68 100644
--- a/server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java
+++ b/server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java
@@ -1006,6 +1006,9 @@ public class SnapshotManagerImpl implements SnapshotManager {
         // i.e Call them before the VMs for those volumes are destroyed.
         boolean success = true;
         for (VolumeVO volume : volumes) {
+        	if(volume.getPoolId()==null){
+        		continue;
+        	}
         	Long volumeId = volume.getId();
         	Long dcId = volume.getDataCenterId();
         	String secondaryStoragePoolURL = _storageMgr.getSecondaryStorageURL(dcId);
diff --git a/server/src/com/cloud/storage/upload/NotUploadedState.java b/server/src/com/cloud/storage/upload/NotUploadedState.java
new file mode 100644
index 00000000000..76f3d8fab72
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/NotUploadedState.java
@@ -0,0 +1,23 @@
+package com.cloud.storage.upload;
+
+import com.cloud.agent.api.storage.UploadProgressCommand.RequestType;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+
+public class NotUploadedState extends UploadActiveState {
+	
+	public NotUploadedState(UploadListener uploadListener) {
+		super(uploadListener);
+	}	
+
+	@Override
+	public String getName() {
+		return Status.NOT_UPLOADED.toString();
+	}
+
+	@Override
+	public void onEntry(String prevState, UploadEvent event, Object evtObj) {
+		super.onEntry(prevState, event, evtObj);
+		getUploadListener().scheduleStatusCheck(RequestType.GET_STATUS);
+	}
+	
+}
diff --git a/server/src/com/cloud/storage/upload/UploadAbandonedState.java b/server/src/com/cloud/storage/upload/UploadAbandonedState.java
new file mode 100644
index 00000000000..3b51bd0a14d
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadAbandonedState.java
@@ -0,0 +1,27 @@
+package com.cloud.storage.upload;
+
+import com.cloud.agent.api.storage.UploadProgressCommand.RequestType;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+
+public class UploadAbandonedState extends UploadInactiveState {
+
+	public UploadAbandonedState(UploadListener dl) {
+		super(dl);
+	}
+
+	@Override
+	public String getName() {
+		return Status.ABANDONED.toString();
+	}
+
+	@Override
+	public void onEntry(String prevState, UploadEvent event, Object evtObj) {
+		super.onEntry(prevState, event, evtObj);
+		if (!prevState.equalsIgnoreCase(getName())){
+			getUploadListener().updateDatabase(Status.ABANDONED, "Upload canceled");
+			getUploadListener().cancelStatusTask();
+			getUploadListener().cancelTimeoutTask();
+			getUploadListener().sendCommand(RequestType.ABORT);
+		}
+	}
+}
diff --git a/server/src/com/cloud/storage/upload/UploadActiveState.java b/server/src/com/cloud/storage/upload/UploadActiveState.java
new file mode 100644
index 00000000000..a249906a9ec
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadActiveState.java
@@ -0,0 +1,92 @@
+package com.cloud.storage.upload;
+
+import org.apache.log4j.Level;
+
+import com.cloud.agent.api.storage.UploadAnswer;
+import com.cloud.agent.api.storage.UploadProgressCommand.RequestType;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+
+public abstract class UploadActiveState extends UploadState {
+	
+	public UploadActiveState(UploadListener ul) {
+		super(ul);
+	}
+	
+	@Override
+	public  String handleAbort(){
+		return Status.ABANDONED.toString();
+	}
+	
+	@Override
+	public  String handleDisconnect(){
+
+		return Status.UPLOAD_ERROR.toString();
+	}
+
+
+	@Override
+	public String handleAnswer(UploadAnswer answer) {
+		if (s_logger.isDebugEnabled()) {
+			s_logger.debug("handleAnswer, answer status=" + answer.getUploadStatus() + ", curr state=" + getName());
+		}
+		switch (answer.getUploadStatus()) {
+		case UPLOAD_IN_PROGRESS:
+			getUploadListener().scheduleStatusCheck(RequestType.GET_STATUS);
+			return Status.UPLOAD_IN_PROGRESS.toString();
+		case UPLOADED:
+			getUploadListener().scheduleImmediateStatusCheck(RequestType.PURGE);
+			getUploadListener().cancelTimeoutTask();
+			return Status.UPLOADED.toString();
+		case NOT_UPLOADED:
+			getUploadListener().scheduleStatusCheck(RequestType.GET_STATUS);
+			return Status.NOT_UPLOADED.toString();
+		case UPLOAD_ERROR:
+			getUploadListener().cancelStatusTask();
+			getUploadListener().cancelTimeoutTask();
+			return Status.UPLOAD_ERROR.toString();
+		case UNKNOWN:
+			getUploadListener().cancelStatusTask();
+			getUploadListener().cancelTimeoutTask();
+			return Status.UPLOAD_ERROR.toString();
+		default:
+			return null;
+		}
+	}
+
+	@Override
+	public String handleTimeout(long updateMs) {
+		if (s_logger.isTraceEnabled()) {
+			getUploadListener().log("handleTimeout, updateMs=" + updateMs + ", curr state= " + getName(), Level.TRACE);
+		}
+		String newState = this.getName();
+		if (updateMs > 5*UploadListener.STATUS_POLL_INTERVAL){
+			newState=Status.UPLOAD_ERROR.toString();
+			getUploadListener().log("timeout: transitioning to upload error state, currstate=" + getName(), Level.DEBUG );
+		} else if (updateMs > 3*UploadListener.STATUS_POLL_INTERVAL) {
+			getUploadListener().cancelStatusTask();
+			getUploadListener().scheduleImmediateStatusCheck(RequestType.GET_STATUS);
+			getUploadListener().scheduleTimeoutTask(3*UploadListener.STATUS_POLL_INTERVAL);
+			getUploadListener().log(getName() + " first timeout: checking again ", Level.DEBUG );
+		} else {
+			getUploadListener().scheduleTimeoutTask(3*UploadListener.STATUS_POLL_INTERVAL);
+		}
+		return newState;
+	}
+	
+	@Override
+	public  void onEntry(String prevState, UploadEvent event, Object evtObj) {
+		if (s_logger.isTraceEnabled()) {
+			getUploadListener().log("onEntry, prev state= " + prevState + ", curr state=" + getName() + ", event=" + event, Level.TRACE);
+		}
+		
+		if (event == UploadEvent.UPLOAD_ANSWER) {
+			getUploadListener().updateDatabase((UploadAnswer)evtObj);
+			getUploadListener().setLastUpdated();
+		}
+		
+	}
+	
+	@Override
+	public  void onExit() {
+	}
+}
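The `handleTimeout` logic above escalates in multiples of `STATUS_POLL_INTERVAL`: under three intervals it only reschedules the timeout task, between three and five it forces an immediate status check, and past five it gives up and transitions to the error state. A sketch of just the threshold decision (class and method names are illustrative, not CloudStack's):

```java
// Sketch of the timeout-escalation thresholds used by handleTimeout above.
// Class and method names are illustrative, not CloudStack's.
public class UploadTimeoutSketch {
    // Matches UploadListener.STATUS_POLL_INTERVAL (10 seconds).
    static final long POLL_INTERVAL_MS = 10_000L;

    // Decide the next state from the time since the last agent update.
    static String nextStateOnTimeout(long msSinceLastUpdate, String currentState) {
        if (msSinceLastUpdate > 5 * POLL_INTERVAL_MS) {
            // No news for more than five poll intervals: give up.
            return "UPLOAD_ERROR";
        } else if (msSinceLastUpdate > 3 * POLL_INTERVAL_MS) {
            // First timeout: the listener re-polls the agent immediately
            // and schedules one more timeout check; state is unchanged.
            return currentState;
        } else {
            // Still fresh: just reschedule the timeout task.
            return currentState;
        }
    }

    public static void main(String[] args) {
        System.out.println(nextStateOnTimeout(60_000L, "UPLOAD_IN_PROGRESS")); // UPLOAD_ERROR
        System.out.println(nextStateOnTimeout(35_000L, "UPLOAD_IN_PROGRESS")); // UPLOAD_IN_PROGRESS
    }
}
```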
diff --git a/server/src/com/cloud/storage/upload/UploadCompleteState.java b/server/src/com/cloud/storage/upload/UploadCompleteState.java
new file mode 100644
index 00000000000..1d433b3fce2
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadCompleteState.java
@@ -0,0 +1,30 @@
+package com.cloud.storage.upload;
+
+import com.cloud.agent.api.storage.UploadProgressCommand.RequestType;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+
+public class UploadCompleteState extends UploadInactiveState {
+
+	public UploadCompleteState(UploadListener ul) {
+		super(ul);
+	}
+
+	@Override
+	public String getName() {
+		return Status.UPLOADED.toString();
+
+	}
+
+
+	@Override
+	public void onEntry(String prevState, UploadEvent event, Object evtObj) {
+		super.onEntry(prevState, event, evtObj);
+		if (! prevState.equals(getName())) {
+			if (event == UploadEvent.UPLOAD_ANSWER){
+				getUploadListener().scheduleImmediateStatusCheck(RequestType.PURGE);
+			}
+			getUploadListener().setUploadInactive(Status.UPLOADED);
+		}
+		
+	}
+}
diff --git a/server/src/com/cloud/storage/upload/UploadErrorState.java b/server/src/com/cloud/storage/upload/UploadErrorState.java
new file mode 100644
index 00000000000..185c0633b28
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadErrorState.java
@@ -0,0 +1,73 @@
+package com.cloud.storage.upload;
+
+import org.apache.log4j.Level;
+
+import com.cloud.agent.api.storage.UploadAnswer;
+import com.cloud.agent.api.storage.UploadProgressCommand.RequestType;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+
+public class UploadErrorState extends UploadInactiveState {
+
+	public UploadErrorState(UploadListener ul) {
+		super(ul);
+	}
+
+	@Override
+	public String handleAnswer(UploadAnswer answer) {
+		switch (answer.getUploadStatus()) {
+		case UPLOAD_IN_PROGRESS:
+			getUploadListener().scheduleStatusCheck(RequestType.GET_STATUS);
+			return Status.UPLOAD_IN_PROGRESS.toString();
+		case UPLOADED:
+			getUploadListener().scheduleImmediateStatusCheck(RequestType.PURGE);
+			getUploadListener().cancelTimeoutTask();
+			return Status.UPLOADED.toString();
+		case NOT_UPLOADED:
+			getUploadListener().scheduleStatusCheck(RequestType.GET_STATUS);
+			return Status.NOT_UPLOADED.toString();
+		case UPLOAD_ERROR:
+			getUploadListener().cancelStatusTask();
+			getUploadListener().cancelTimeoutTask();
+			return Status.UPLOAD_ERROR.toString();
+		case UNKNOWN:
+			getUploadListener().cancelStatusTask();
+			getUploadListener().cancelTimeoutTask();
+			return Status.UPLOAD_ERROR.toString();
+		default:
+			return null;
+		}
+	}
+
+
+
+	@Override
+	public String handleAbort() {
+		return Status.ABANDONED.toString();
+	}
+
+
+	@Override
+	public String getName() {
+		return Status.UPLOAD_ERROR.toString();
+	}
+
+
+	@Override
+	public void onEntry(String prevState, UploadEvent event, Object evtObj) {
+		super.onEntry(prevState, event, evtObj);
+		if (event==UploadEvent.DISCONNECT){
+			getUploadListener().logDisconnect();
+			getUploadListener().cancelStatusTask();
+			getUploadListener().cancelTimeoutTask();
+			getUploadListener().updateDatabase(Status.UPLOAD_ERROR, "Storage agent or storage VM disconnected");  
+			getUploadListener().log("Entering upload error state because the storage host disconnected", Level.WARN);
+		} else if (event==UploadEvent.TIMEOUT_CHECK){
+			getUploadListener().updateDatabase(Status.UPLOAD_ERROR, "Timeout waiting for response from storage host");
+			getUploadListener().log("Entering upload error state: timeout waiting for response from storage host", Level.WARN);
+		}
+		getUploadListener().setUploadInactive(Status.UPLOAD_ERROR);
+	}
+
+
+
+}
diff --git a/server/src/com/cloud/storage/upload/UploadInProgressState.java b/server/src/com/cloud/storage/upload/UploadInProgressState.java
new file mode 100644
index 00000000000..e72f3db42cf
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadInProgressState.java
@@ -0,0 +1,23 @@
+package com.cloud.storage.upload;
+
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+
+public class UploadInProgressState extends UploadActiveState {
+
+	public UploadInProgressState(UploadListener dl) {
+		super(dl);
+	}
+
+	@Override
+	public String getName() {
+		return Status.UPLOAD_IN_PROGRESS.toString();
+	}
+
+	@Override
+	public void onEntry(String prevState, UploadEvent event, Object evtObj) {
+		super.onEntry(prevState, event, evtObj);
+		if (!prevState.equals(getName()))
+			getUploadListener().logUploadStart();
+	}
+
+}
diff --git a/server/src/com/cloud/storage/upload/UploadInactiveState.java b/server/src/com/cloud/storage/upload/UploadInactiveState.java
new file mode 100644
index 00000000000..f70f3d8d0e3
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadInactiveState.java
@@ -0,0 +1,34 @@
+package com.cloud.storage.upload;
+
+import com.cloud.agent.api.storage.UploadAnswer;
+
+public abstract class UploadInactiveState extends UploadState {
+
+	public UploadInactiveState(UploadListener ul) {
+		super(ul);
+	}
+
+	@Override
+	public String handleAnswer(UploadAnswer answer) {
+		// ignore and stay put
+		return getName();
+	}
+
+	@Override
+	public String handleAbort() {
+		// ignore and stay put
+		return getName();
+	}
+
+	@Override
+	public String handleDisconnect() {
+		//ignore and stay put
+		return getName();
+	}
+
+	@Override
+	public String handleTimeout(long updateMs) {
+		// ignore and stay put
+		return getName();
+	}
+}
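Taken together, the new upload states follow a simple split: subclasses of `UploadActiveState` map each agent answer to a next state, while subclasses of `UploadInactiveState` ignore every event and return their own name ("ignore and stay put"). A minimal sketch of that pattern (illustrative names, not the real classes):

```java
// Sketch of the active/inactive split in the new upload state machine:
// active states map an agent answer to a next state; inactive states
// ignore every event and stay put. Names are illustrative.
public class UploadStateSketch {
    interface State {
        String name();
        String handleAnswer(String answerStatus);
    }

    // Active states transition based on the answer from the storage agent.
    static State active(final String name) {
        return new State() {
            public String name() { return name; }
            public String handleAnswer(String answerStatus) {
                switch (answerStatus) {
                    case "UPLOADED":     return "UPLOADED";
                    case "UPLOAD_ERROR": return "UPLOAD_ERROR";
                    case "UNKNOWN":      return "UPLOAD_ERROR";
                    default:             return name; // keep polling
                }
            }
        };
    }

    // Inactive (terminal) states ignore late answers and return their own name.
    static State inactive(final String name) {
        return new State() {
            public String name() { return name; }
            public String handleAnswer(String answerStatus) { return name; }
        };
    }

    public static void main(String[] args) {
        State inProgress = active("UPLOAD_IN_PROGRESS");
        State uploaded = inactive("UPLOADED");
        System.out.println(inProgress.handleAnswer("UPLOADED"));   // UPLOADED
        System.out.println(uploaded.handleAnswer("UPLOAD_ERROR")); // UPLOADED
    }
}
```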
diff --git a/server/src/com/cloud/storage/upload/UploadListener.java b/server/src/com/cloud/storage/upload/UploadListener.java
new file mode 100644
index 00000000000..73033de96b4
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadListener.java
@@ -0,0 +1,369 @@
+package com.cloud.storage.upload;
+
+
+import java.util.Date;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Timer;
+import java.util.TimerTask;
+
+import org.apache.log4j.Level;
+import org.apache.log4j.Logger;
+
+import com.cloud.agent.Listener;
+import com.cloud.agent.api.AgentControlAnswer;
+import com.cloud.agent.api.AgentControlCommand;
+import com.cloud.agent.api.Answer;
+import com.cloud.agent.api.Command;
+import com.cloud.agent.api.StartupCommand;
+import com.cloud.agent.api.StartupStorageCommand;
+import com.cloud.agent.api.storage.DownloadCommand;
+import com.cloud.agent.api.storage.DownloadProgressCommand;
+import com.cloud.agent.api.storage.UploadAnswer;
+import com.cloud.agent.api.storage.UploadCommand;
+import com.cloud.agent.api.storage.UploadProgressCommand;
+import com.cloud.agent.api.storage.UploadProgressCommand.RequestType;
+import com.cloud.event.EventTypes;
+import com.cloud.event.EventVO;
+import com.cloud.host.HostVO;
+import com.cloud.storage.Storage;
+import com.cloud.storage.VMTemplateHostVO;
+import com.cloud.storage.VMTemplateVO;
+import com.cloud.storage.dao.VMTemplateHostDao;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+import com.cloud.storage.download.DownloadState.DownloadEvent;
+import com.cloud.storage.upload.UploadMonitorImpl;
+import com.cloud.storage.upload.UploadState.UploadEvent;
+import com.cloud.utils.exception.CloudRuntimeException;
+
+public class UploadListener implements Listener {
+	
+
+	private static final class StatusTask extends TimerTask {
+		private final UploadListener ul;
+		private final RequestType reqType;
+		
+		public StatusTask( UploadListener ul,  RequestType req) {
+			this.reqType = req;
+			this.ul = ul;
+		}
+
+		@Override
+		public void run() {
+		  ul.sendCommand(reqType);
+
+		}
+	}
+	
+	private static final class TimeoutTask extends TimerTask {
+		private final UploadListener ul;
+		
+		public TimeoutTask( UploadListener ul) {
+			this.ul = ul;
+		}
+
+		@Override
+		public void run() {
+		  ul.checkProgress();
+		}
+	}
+
+	public static final Logger s_logger = Logger.getLogger(UploadListener.class.getName());
+	public static final int SMALL_DELAY = 100;
+	public static final long STATUS_POLL_INTERVAL = 10000L;
+	
+	public static final String UPLOADED=Status.UPLOADED.toString();
+	public static final String NOT_UPLOADED=Status.NOT_UPLOADED.toString();
+	public static final String UPLOAD_ERROR=Status.UPLOAD_ERROR.toString();
+	public static final String UPLOAD_IN_PROGRESS=Status.UPLOAD_IN_PROGRESS.toString();
+	public static final String UPLOAD_ABANDONED=Status.ABANDONED.toString();
+
+
+	private HostVO sserver;
+	private VMTemplateVO template;
+	
+	private boolean uploadActive = true;
+
+	private VMTemplateHostDao vmTemplateHostDao;
+
+	private final UploadMonitorImpl uploadMonitor;
+	
+	private UploadState currState;
+	
+	private UploadCommand cmd;
+
+	private Timer timer;
+
+	private StatusTask statusTask;
+	private TimeoutTask timeoutTask;
+	private Date lastUpdated = new Date();
+	private String jobId;
+	
+	private final Map stateMap = new HashMap();
+	private Long templateHostId;
+	
+	public UploadListener(HostVO host, VMTemplateVO template, Timer _timer, VMTemplateHostDao dao, Long templHostId, UploadMonitorImpl uploadMonitor, UploadCommand cmd) {
+		this.sserver = host;
+		this.template = template;
+		this.vmTemplateHostDao = dao;
+		this.uploadMonitor = uploadMonitor;
+		this.cmd = cmd;
+		this.templateHostId = templHostId;
+		initStateMachine();
+		this.currState = getState(Status.NOT_UPLOADED.toString());
+		this.timer = _timer;
+		this.timeoutTask = new TimeoutTask(this);
+		this.timer.schedule(timeoutTask, 3*STATUS_POLL_INTERVAL);
+		updateDatabase(Status.NOT_UPLOADED, cmd.getUrl(),"");
+	}
+	
+	public UploadListener(UploadMonitorImpl monitor) {
+	    uploadMonitor = monitor;
+	}	
+	
+	public void checkProgress() {
+		transition(UploadEvent.TIMEOUT_CHECK, null);
+	}
+
+	@Override
+	public int getTimeout() {
+		return -1;
+	}
+
+	@Override
+	public boolean isRecurring() {
+		return false;
+	}
+
+	public void setCommand(UploadCommand _cmd) {
+		this.cmd = _cmd;
+	}
+	
+	public void setJobId(String _jobId) {
+		this.jobId = _jobId;
+	}
+	
+	public String getJobId() {
+		return jobId;
+	}
+	
+	@Override
+	public boolean processAnswer(long agentId, long seq, Answer[] answers) {
+		boolean processed = false;
+    	if(answers != null && answers.length > 0) {
+    		if(answers[0] instanceof UploadAnswer) {
+    			final UploadAnswer answer = (UploadAnswer)answers[0];
+    			if (getJobId() == null) {
+    				setJobId(answer.getJobId());
+    			} else if (!getJobId().equalsIgnoreCase(answer.getJobId())){
+    				return false;//TODO
+    			}
+    			transition(UploadEvent.UPLOAD_ANSWER, answer);
+    			processed = true;
+    		}
+    	}
+        return processed;
+	}
+	
+
+	@Override
+	public boolean processCommand(long agentId, long seq, Command[] commands) {
+		return false;
+	}
+
+	@Override
+	public boolean processConnect(HostVO agent, StartupCommand cmd) {		
+		if (!(cmd instanceof StartupStorageCommand)) {
+	        return true;
+	    }
+	   /* if (cmd.getGuid().startsWith("iso:")) {
+	        //FIXME: do not download template for ISO secondary
+	        return true;
+	    }*/
+	    
+	    long agentId = agent.getId();
+	    
+	    StartupStorageCommand storage = (StartupStorageCommand)cmd;
+	    if (storage.getResourceType() == Storage.StorageResourceType.STORAGE_HOST ||
+	    storage.getResourceType() == Storage.StorageResourceType.SECONDARY_STORAGE )
+	    {
+	    	uploadMonitor.handleUploadTemplateSync(agentId, storage.getTemplateInfo());
+	    } else {
+	    	//downloadMonitor.handlePoolTemplateSync(storage.getPoolInfo(), storage.getTemplateInfo());
+	    	//no need to do anything. The storagepoolmonitor will initiate template sync.
+	    }
+		return true;
+	}
+
+	@Override
+	public AgentControlAnswer processControlCommand(long agentId,
+			AgentControlCommand cmd) {
+		return null;
+	}
+	
+	public void setUploadInactive(Status reason) {
+		uploadActive=false;
+		uploadMonitor.handleUploadEvent(sserver, template, reason);
+	}
+	
+	public void logUploadStart() {
+		uploadMonitor.logEvent(template.getAccountId(), EventTypes.EVENT_TEMPLATE_UPLOAD_START, "Storage server " + sserver.getName() + " started upload of template " + template.getName(), EventVO.LEVEL_INFO);
+	}
+	
+	public void cancelTimeoutTask() {
+		if (timeoutTask != null) timeoutTask.cancel();
+	}
+	
+	public void cancelStatusTask() {
+		if (statusTask != null) statusTask.cancel();
+	}
+
+	@Override
+	public boolean processDisconnect(long agentId, com.cloud.host.Status state) {	
+		setDisconnected();
+		return true;
+	}
+
+	@Override
+	public boolean processTimeout(long agentId, long seq) {		
+		return true;
+	}
+	
+	private void initStateMachine() {
+		stateMap.put(Status.NOT_UPLOADED.toString(), new NotUploadedState(this));
+		stateMap.put(Status.UPLOADED.toString(), new UploadCompleteState(this));
+		stateMap.put(Status.UPLOAD_ERROR.toString(), new UploadErrorState(this));
+		stateMap.put(Status.UPLOAD_IN_PROGRESS.toString(), new UploadInProgressState(this));
+		stateMap.put(Status.ABANDONED.toString(), new UploadAbandonedState(this));
+	}
+	
+	private UploadState getState(String stateName) {
+		return stateMap.get(stateName);
+	}
+
+	private synchronized void transition(UploadEvent event, Object evtObj) {
+	    if (currState == null) {
+	        return;
+	    }
+		String prevName = currState.getName();
+		String nextState = currState.handleEvent(event, evtObj);
+		if (nextState != null) {
+			currState = getState(nextState);
+			if (currState != null) {
+				currState.onEntry(prevName, event, evtObj);
+			} else {
+				throw new CloudRuntimeException("Invalid next state: currState="+prevName+", evt="+event + ", next=" + nextState);
+			}
+		} else {
+			throw new CloudRuntimeException("Unhandled event transition: currState="+prevName+", evt="+event);
+		}
+	}
+	
+	public Date getLastUpdated() {
+		return lastUpdated;
+	}
+	
+	public void setLastUpdated() {
+		lastUpdated  = new Date();
+	}
+	
+	public void log(String message, Level level) {
+		s_logger.log(level, message + ", template=" + template.getName() + " at host " + sserver.getName());
+	}
+
+	public void setDisconnected() {
+		transition(UploadEvent.DISCONNECT, null);
+	}
+	
+	public void scheduleStatusCheck(RequestType reqType) {
+		if (statusTask != null) statusTask.cancel();
+
+		statusTask = new StatusTask(this, reqType);
+		timer.schedule(statusTask, STATUS_POLL_INTERVAL);
+	}
+
+	public void scheduleTimeoutTask(long delay) {
+		if (timeoutTask != null) timeoutTask.cancel();
+
+		timeoutTask = new TimeoutTask(this);
+		timer.schedule(timeoutTask, delay);
+		if (s_logger.isDebugEnabled()) {
+			log("Scheduling timeout at " + delay + " ms", Level.DEBUG);
+		}
+	}
+	
+	public void updateDatabase(Status state, String uploadErrorString) {
+		
+		VMTemplateHostVO vo = vmTemplateHostDao.createForUpdate();
+		vo.setUploadState(state);
+		vo.setLastUpdated(new Date());
+		vo.setUpload_errorString(uploadErrorString);
+		vmTemplateHostDao.update(getTemplateHostId(), vo);
+	}
+	
+	public void updateDatabase(Status state, String uploadUrl, String uploadErrorString) {
+		
+		VMTemplateHostVO vo = vmTemplateHostDao.createForUpdate();
+		vo.setUploadState(state);
+		vo.setLastUpdated(new Date());
+		vo.setUploadUrl(uploadUrl);
+		vo.setUploadJobId(null);
+		vo.setUploadPercent(0);
+		vo.setUpload_errorString(uploadErrorString);
+		
+		vmTemplateHostDao.update(getTemplateHostId(), vo);
+	}
+	
+	private Long getTemplateHostId() {
+		if (templateHostId == null){
+			VMTemplateHostVO templHost = vmTemplateHostDao.findByHostTemplate(sserver.getId(), template.getId());
+			templateHostId = templHost.getId();
+		}
+		return templateHostId;
+	}
+
+	public synchronized void updateDatabase(UploadAnswer answer) {		
+		
+        VMTemplateHostVO updateBuilder = vmTemplateHostDao.createForUpdate();
+		updateBuilder.setUploadPercent(answer.getUploadPct());
+		updateBuilder.setUploadState(answer.getUploadStatus());
+		updateBuilder.setLastUpdated(new Date());
+		updateBuilder.setUpload_errorString(answer.getErrorString());
+		updateBuilder.setUploadJobId(answer.getJobId());
+		
+		vmTemplateHostDao.update(getTemplateHostId(), updateBuilder);
+	}
+
+	public void sendCommand(RequestType reqType) {
+		if (getJobId() != null) {
+			if (s_logger.isTraceEnabled()) {
+				log("Sending progress command ", Level.TRACE);
+			}
+			long sent = uploadMonitor.send(sserver.getId(), new UploadProgressCommand(getCommand(), getJobId(), reqType), this);
+			if (sent == -1) {
+				setDisconnected();
+			}
+		}
+		
+	}
+	
+	private UploadCommand getCommand() {
+		return cmd;
+	}
+
+	public void logDisconnect() {
+		s_logger.warn("Unable to monitor upload progress of " + template.getName() + " at host " + sserver.getName());
+		uploadMonitor.logEvent(template.getAccountId(), EventTypes.EVENT_TEMPLATE_UPLOAD_FAILED, "Storage server " + sserver.getName() + " disconnected during upload of template " + template.getName(), EventVO.LEVEL_WARN);
+	}
+	
+	public void scheduleImmediateStatusCheck(RequestType request) {
+		if (statusTask != null) statusTask.cancel();
+		statusTask = new StatusTask(this, request);
+		timer.schedule(statusTask, SMALL_DELAY);
+	}
+
+	public void setCurrState(Status uploadState) {
+		this.currState = getState(uploadState.toString());
+	}
+	
+}
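UploadListener above drives a table-driven state machine: state objects are registered in a map keyed by status name, and each handled event returns the name of the next state, which `transition()` then looks up. A minimal, self-contained sketch of the same pattern (class, state, and event names here are illustrative, not CloudStack's API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a string-keyed transition table: outer key is the current
// state, inner key is the event, value is the next state's name.
class UploadStateMachineSketch {
    private static final Map<String, Map<String, String>> TRANSITIONS = new HashMap<>();

    static {
        Map<String, String> notUploaded = new HashMap<>();
        notUploaded.put("UPLOAD_ANSWER", "UPLOAD_IN_PROGRESS");
        notUploaded.put("DISCONNECT", "ABANDONED");
        TRANSITIONS.put("NOT_UPLOADED", notUploaded);

        Map<String, String> inProgress = new HashMap<>();
        inProgress.put("UPLOAD_ANSWER", "UPLOADED");
        inProgress.put("DISCONNECT", "UPLOAD_ERROR");
        TRANSITIONS.put("UPLOAD_IN_PROGRESS", inProgress);
    }

    // Returns the next state's name, or null for an unhandled event
    // (UploadListener.transition() throws a CloudRuntimeException instead).
    static String next(String state, String event) {
        Map<String, String> row = TRANSITIONS.get(state);
        return row == null ? null : row.get(event);
    }
}
```

The real listener additionally calls `onEntry()` on the new state, which is where database updates and timer scheduling happen.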
diff --git a/server/src/com/cloud/storage/upload/UploadMonitor.java b/server/src/com/cloud/storage/upload/UploadMonitor.java
new file mode 100755
index 00000000000..9967900c172
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadMonitor.java
@@ -0,0 +1,43 @@
+/**
+ *  Copyright (C) 2010 Cloud.com, Inc.  All rights reserved.
+ * 
+ * This software is licensed under the GNU General Public License v3 or later.
+ * 
+ * It is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 3 of the License, or any later version.
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ * 
+ */
+
+package com.cloud.storage.upload;
+
+import java.util.Map;
+
+import com.cloud.storage.VMTemplateHostVO;
+import com.cloud.storage.VMTemplateVO;
+import com.cloud.storage.template.TemplateInfo;
+import com.cloud.utils.component.Manager;
+
+/**
+ * Monitor upload progress of all templates.
+ * @author nitin
+ *
+ */
+public interface UploadMonitor extends Manager {
+	
+	void cancelAllUploads(Long templateId);
+
+	void extractTemplate(VMTemplateVO template, String url,
+			VMTemplateHostVO tmpltHostRef, Long dataCenterId);
+
+	void handleUploadTemplateSync(long sserverId,
+			Map<String, TemplateInfo> templateInfo);
+
+}
\ No newline at end of file
diff --git a/server/src/com/cloud/storage/upload/UploadMonitorImpl.java b/server/src/com/cloud/storage/upload/UploadMonitorImpl.java
new file mode 100644
index 00000000000..9f1f77e0f70
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadMonitorImpl.java
@@ -0,0 +1,314 @@
+package com.cloud.storage.upload;
+
+import java.util.Date;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.Timer;
+import java.util.concurrent.ConcurrentHashMap;
+
+import javax.ejb.Local;
+import javax.naming.ConfigurationException;
+
+import org.apache.log4j.Logger;
+
+import com.cloud.agent.AgentManager;
+import com.cloud.agent.Listener;
+import com.cloud.agent.api.Command;
+import com.cloud.agent.api.storage.UploadCommand;
+import com.cloud.agent.api.storage.UploadProgressCommand;
+import com.cloud.agent.api.storage.UploadProgressCommand.RequestType;
+import com.cloud.configuration.dao.ConfigurationDao;
+import com.cloud.dc.dao.DataCenterDao;
+import com.cloud.event.EventTypes;
+import com.cloud.event.EventVO;
+import com.cloud.event.dao.EventDao;
+import com.cloud.host.Host;
+import com.cloud.host.HostVO;
+import com.cloud.host.dao.HostDao;
+import com.cloud.storage.StoragePoolHostVO;
+import com.cloud.storage.VMTemplateHostVO;
+import com.cloud.storage.VMTemplateStoragePoolVO;
+import com.cloud.storage.VMTemplateStorageResourceAssoc;
+import com.cloud.storage.VMTemplateVO;
+import com.cloud.storage.Storage.ImageFormat;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+import com.cloud.storage.dao.StoragePoolHostDao;
+import com.cloud.storage.dao.VMTemplateDao;
+import com.cloud.storage.dao.VMTemplateHostDao;
+import com.cloud.storage.dao.VMTemplatePoolDao;
+import com.cloud.storage.template.TemplateInfo;
+import com.cloud.utils.component.Inject;
+import com.cloud.utils.exception.CloudRuntimeException;
+import com.cloud.vm.dao.SecondaryStorageVmDao;
+
+/**
+ * @author nitin
+ *
+ */
+@Local(value={UploadMonitor.class})
+public class UploadMonitorImpl implements UploadMonitor {
+
+	static final Logger s_logger = Logger.getLogger(UploadMonitorImpl.class);
+	
+	private String _hyperVisorType;
+    @Inject 
+    VMTemplateHostDao _vmTemplateHostDao;
+    @Inject
+	VMTemplatePoolDao _vmTemplatePoolDao;
+    @Inject
+    StoragePoolHostDao _poolHostDao;
+    @Inject
+    SecondaryStorageVmDao _secStorageVmDao;
+
+    
+    @Inject
+    HostDao _serverDao = null;
+    @Inject
+    private final DataCenterDao _dcDao = null;
+    @Inject
+    VMTemplateDao _templateDao =  null;
+    @Inject
+	private final EventDao _eventDao = null;
+    @Inject
+	private AgentManager _agentMgr;
+    @Inject
+    ConfigurationDao _configDao;
+
+	private String _name;
+	private Boolean _sslCopy = Boolean.FALSE;
+	private String _copyAuthPasswd;
+
+
+	Timer _timer;
+
+	final Map<VMTemplateHostVO, UploadListener> _listenerMap = new ConcurrentHashMap<VMTemplateHostVO, UploadListener>();
+
+	
+	@Override
+	public void cancelAllUploads(Long templateId) {
+		// TODO Auto-generated method stub
+
+	}
+	public boolean isTemplateUploadInProgress(Long templateId) {
+		List<VMTemplateHostVO> uploadsInProgress =
+			_vmTemplateHostDao.listByTemplateStatus(templateId, VMTemplateHostVO.Status.UPLOAD_IN_PROGRESS);
+		return (uploadsInProgress.size() != 0);
+	}
+
+	@Override
+	public void extractTemplate(VMTemplateVO template, String url,
+			VMTemplateHostVO vmTemplateHost, Long dataCenterId) {
+
+		if (isTemplateUploadInProgress(template.getId()) ){
+			return;
+		}		
+		
+		List<HostVO> storageServers = _serverDao.listByTypeDataCenter(Host.Type.SecondaryStorage, dataCenterId);
+		HostVO sserver = storageServers.get(0);			
+		
+		_vmTemplateHostDao.updateUploadStatus(sserver.getId(), template.getId(), 0, VMTemplateStorageResourceAssoc.Status.NOT_UPLOADED, "jobid0000", url);                
+        		
+		if(vmTemplateHost != null) {
+		    start();
+			UploadCommand ucmd = new UploadCommand(template, url, vmTemplateHost);	
+			UploadListener ul = new UploadListener(sserver, template, _timer, _vmTemplateHostDao, vmTemplateHost.getId(), this, ucmd);
+			_listenerMap.put(vmTemplateHost, ul);
+
+			long result = send(sserver.getId(), ucmd, ul);	
+			if (result == -1) {
+				s_logger.warn("Unable to start upload of template " + template.getUniqueName() + " from " + sserver.getName() + " to " +url);
+				ul.setDisconnected();
+				ul.scheduleStatusCheck(RequestType.GET_OR_RESTART);
+			}
+		}
+		
+	}
+
+
+	public long send(Long hostId, Command cmd, Listener listener) {
+		return _agentMgr.gatherStats(hostId, cmd, listener);
+	}
+
+	@Override
+	public boolean configure(String name, Map<String, Object> params)
+			throws ConfigurationException {
+		_name = name;
+        final Map<String, String> configs = _configDao.getConfiguration("ManagementServer", params);
+        _sslCopy = Boolean.parseBoolean(configs.get("secstorage.encrypt.copy"));
+        
+        String cert = configs.get("secstorage.secure.copy.cert");
+        if ("realhostip.com".equalsIgnoreCase(cert)) {
+        	s_logger.warn("Only realhostip.com ssl cert is supported, ignoring self-signed and other certs");
+        }
+        
+        _hyperVisorType = _configDao.getValue("hypervisor.type");
+        
+        _copyAuthPasswd = configs.get("secstorage.copy.password");
+        
+        _agentMgr.registerForHostEvents(new UploadListener(this), true, false, false);
+		return true;
+	}
+
+	@Override
+	public String getName() {
+		// TODO Auto-generated method stub
+		return _name;
+	}
+
+	@Override
+	public boolean start() {
+		_timer = new Timer();
+		return true;
+	}
+
+	@Override
+	public boolean stop() {		
+		return true;
+	}
+	
+	public void handleUploadEvent(HostVO host, VMTemplateVO template, Status upldStatus) {
+		
+		if ((upldStatus == VMTemplateStorageResourceAssoc.Status.UPLOADED) || (upldStatus == Status.ABANDONED)) {
+			VMTemplateHostVO vmTemplateHost = new VMTemplateHostVO(host.getId(), template.getId());
+			UploadListener oldListener = _listenerMap.get(vmTemplateHost);
+			if (oldListener != null) {
+				_listenerMap.remove(vmTemplateHost);
+			}
+		}
+		if (upldStatus == VMTemplateStorageResourceAssoc.Status.UPLOADED) {
+			logEvent(template.getAccountId(), EventTypes.EVENT_TEMPLATE_UPLOAD_SUCCESS, template.getName() + " successfully uploaded from storage server " + host.getName(), EventVO.LEVEL_INFO);
+		}
+		if (upldStatus == Status.UPLOAD_ERROR) {
+			logEvent(template.getAccountId(), EventTypes.EVENT_TEMPLATE_UPLOAD_FAILED, template.getName() + " failed to upload from storage server " + host.getName(), EventVO.LEVEL_ERROR);
+		}
+		if (upldStatus == Status.ABANDONED) {
+			logEvent(template.getAccountId(), EventTypes.EVENT_TEMPLATE_UPLOAD_FAILED, template.getName() + ": aborted upload from storage server " + host.getName(), EventVO.LEVEL_WARN);
+		}
+		
+		/*VMTemplateHostVO vmTemplateHost = _vmTemplateHostDao.findByHostTemplate(host.getId(), template.getId());
+		
+        if (upldStatus == Status.UPLOADED) {
+            long size = -1;
+            if(vmTemplateHost!=null){
+            	size = vmTemplateHost.getSize();
+            }
+            else{
+            	s_logger.warn("Failed to get size for template" + template.getName());
+            }
+			String eventParams = "id=" + template.getId() + "\ndcId="+host.getDataCenterId()+"\nsize="+size;
+            EventVO event = new EventVO();
+            event.setUserId(1L);
+            event.setAccountId(template.getAccountId());
+            if((template.getFormat()).equals(ImageFormat.ISO)){
+            	event.setType(EventTypes.EVENT_ISO_CREATE);
+            	event.setDescription("Successfully uploaded ISO " + template.getName());
+            }
+            else{
+            	event.setType(EventTypes.EVENT_TEMPLATE_);
+            	event.setDescription("Successfully uploaded template " + template.getName());
+            }
+            event.setParameters(eventParams);
+            event.setLevel(EventVO.LEVEL_INFO);
+            _eventDao.persist(event);
+        } 
+        
+		if (vmTemplateHost != null) {
+			Long poolId = vmTemplateHost.getPoolId();
+			if (poolId != null) {
+				VMTemplateStoragePoolVO vmTemplatePool = _vmTemplatePoolDao.findByPoolTemplate(poolId, template.getId());
+				StoragePoolHostVO poolHost = _poolHostDao.findByPoolHost(poolId, host.getId());
+				if (vmTemplatePool != null && poolHost != null) {
+					vmTemplatePool.setDownloadPercent(vmTemplateHost.getUploadPercent());
+					vmTemplatePool.setDownloadState(vmTemplateHost.getUploadState());
+					vmTemplatePool.setErrorString(vmTemplateHost.getUpload_errorString());
+					String localPath = poolHost.getLocalPath();
+					String installPath = vmTemplateHost.getInstallPath();
+					if (installPath != null) {
+						if (!installPath.startsWith("/")) {
+							installPath = "/" + installPath;
+						}
+						if (!(localPath == null) && !installPath.startsWith(localPath)) {
+							localPath = localPath.replaceAll("/\\p{Alnum}+/*$", ""); //remove instance if necessary
+						}
+						if (!(localPath == null) && installPath.startsWith(localPath)) {
+							installPath = installPath.substring(localPath.length());
+						}
+					}
+					vmTemplatePool.setInstallPath(installPath);
+					vmTemplatePool.setLastUpdated(vmTemplateHost.getLastUpdated());
+					vmTemplatePool.setJobId(vmTemplateHost.getJobId());
+					vmTemplatePool.setLocalDownloadPath(vmTemplateHost.getLocalDownloadPath());
+					_vmTemplatePoolDao.update(vmTemplatePool.getId(),vmTemplatePool);
+				}
+			}
+		}*/
+
+	}
+	
+	public void logEvent(long accountId, String evtType, String description, String level) {
+		EventVO event = new EventVO();
+		event.setUserId(1);
+		event.setAccountId(accountId);
+		event.setType(evtType);
+		event.setDescription(description);
+		event.setLevel(level);
+		_eventDao.persist(event);
+		
+	}
+
+	@Override
+	public void handleUploadTemplateSync(long sserverId, Map<String, TemplateInfo> templateInfo) {
+		HostVO storageHost = _serverDao.findById(sserverId);
+		if (storageHost == null) {
+			s_logger.warn("Agent id " + sserverId + " does not correspond to a row in the hosts table");
+			return;
+		}		
+		
+		List<VMTemplateVO> allTemplates = _templateDao.listAllInZone(storageHost.getDataCenterId());
+		VMTemplateVO rtngTmplt = _templateDao.findRoutingTemplate();
+		VMTemplateVO defaultBuiltin = _templateDao.findDefaultBuiltinTemplate();
+
+		if (rtngTmplt != null && !allTemplates.contains(rtngTmplt))
+			allTemplates.add(rtngTmplt);
+
+		if (defaultBuiltin != null && !allTemplates.contains(defaultBuiltin)) {
+			allTemplates.add(defaultBuiltin);
+		}			
+		        
+        
+		for (VMTemplateVO tmplt: allTemplates) {
+			String uniqueName = tmplt.getUniqueName();
+			VMTemplateHostVO tmpltHost = _vmTemplateHostDao.findByHostTemplate(sserverId, tmplt.getId());
+			if (templateInfo.containsKey(uniqueName)) {		
+				if (tmpltHost != null) {
+					s_logger.info("Template Sync found " + uniqueName + " already in the template host table");
+                    if (tmpltHost.getUploadState() != Status.UPLOADED) {
+                    	tmpltHost.setUpload_errorString("");
+                    }
+                    tmpltHost.setUploadPercent(100);
+                    tmpltHost.setUploadState(Status.UPLOADED);                    
+                    tmpltHost.setLastUpdated(new Date());
+					_vmTemplateHostDao.update(tmpltHost.getId(), tmpltHost);
+				} else {
+					VMTemplateHostVO templtHost = new VMTemplateHostVO(sserverId, tmplt.getId(), new Date(), 100, Status.UPLOADED, null, null, null, templateInfo.get(uniqueName).getInstallPath(), tmplt.getUrl());
+					templtHost.setSize(templateInfo.get(uniqueName).getSize());
+					_vmTemplateHostDao.persist(templtHost);
+				}
+				templateInfo.remove(uniqueName);
+				continue;
+			}
+			/*if (tmpltHost != null && tmpltHost.getUploadState() != Status.UPLOADED) {
+				s_logger.info("Template Sync did not find " + uniqueName + " ready on server " + sserverId + ", will request upload to start/resume shortly");
+
+			} else if (tmpltHost == null) {
+				s_logger.info("Template Sync did not find " + uniqueName + " on the server " + sserverId + ", will request upload shortly");
+				VMTemplateHostVO templtHost = new VMTemplateHostVO(sserverId, tmplt.getId(), new Date(), 0, Status.NOT_UPLOADED, null, null, null, null, tmplt.getUrl());
+				_vmTemplateHostDao.persist(templtHost);
+			}*/
+
+		}				
+	}	
+}
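handleUploadEvent above removes the listener by constructing a fresh `VMTemplateHostVO(hostId, templateId)` and using it as the `_listenerMap` key, which only works if that VO implements value-based `equals`/`hashCode` over those fields. A small sketch of that pattern under the same assumption (`TemplateHostKey` and `ListenerMapSketch` are hypothetical names, not CloudStack classes):

```java
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative key class: without equals/hashCode over (hostId, templateId),
// a freshly constructed key would never match the instance stored at put() time.
class TemplateHostKey {
    final long hostId;
    final long templateId;

    TemplateHostKey(long hostId, long templateId) {
        this.hostId = hostId;
        this.templateId = templateId;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof TemplateHostKey)) return false;
        TemplateHostKey k = (TemplateHostKey) o;
        return hostId == k.hostId && templateId == k.templateId;
    }

    @Override
    public int hashCode() {
        return Objects.hash(hostId, templateId);
    }
}

class ListenerMapSketch {
    static final ConcurrentMap<TemplateHostKey, String> LISTENERS = new ConcurrentHashMap<>();

    // Same pattern as handleUploadEvent: build a new key and remove by value equality.
    static String removeListener(long hostId, long templateId) {
        return LISTENERS.remove(new TemplateHostKey(hostId, templateId));
    }
}
```

If the real VO only inherited `Object` identity semantics, the `get`/`remove` in handleUploadEvent would silently miss the stored listener and leak map entries.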
diff --git a/server/src/com/cloud/storage/upload/UploadState.java b/server/src/com/cloud/storage/upload/UploadState.java
new file mode 100644
index 00000000000..6bccbf06e2c
--- /dev/null
+++ b/server/src/com/cloud/storage/upload/UploadState.java
@@ -0,0 +1,70 @@
+package com.cloud.storage.upload;
+
+import java.util.Date;
+
+import org.apache.log4j.Level;
+import org.apache.log4j.Logger;
+
+import com.cloud.agent.api.storage.UploadAnswer;
+import com.cloud.storage.upload.UploadState.UploadEvent;
+
+public abstract class UploadState {
+
+	public static enum UploadEvent {UPLOAD_ANSWER, ABANDON_UPLOAD, TIMEOUT_CHECK, DISCONNECT};
+	protected static final Logger s_logger = Logger.getLogger(UploadListener.class.getName());
+
+	private UploadListener ul;
+	
+	public UploadState(UploadListener ul) {
+		this.ul = ul;
+	}
+	
+	protected UploadListener getUploadListener() {
+		return ul;
+	}
+	
+	public String handleEvent(UploadEvent event, Object eventObj){
+		if (s_logger.isTraceEnabled()) {
+			getUploadListener().log("handleEvent, event type=" + event + ", curr state=" + getName(), Level.TRACE);
+		}
+		switch (event) {
+		case UPLOAD_ANSWER:
+			UploadAnswer answer = (UploadAnswer) eventObj;
+			return handleAnswer(answer);
+		case ABANDON_UPLOAD:
+			return handleAbort();
+		case TIMEOUT_CHECK:
+			Date now = new Date();
+			long update = now.getTime() - ul.getLastUpdated().getTime();
+			return handleTimeout(update);
+		case DISCONNECT:
+			return handleDisconnect();
+		}
+		return null;
+	}
+	
+	public void onEntry(String prevState, UploadEvent event, Object evtObj) {
+		if (s_logger.isTraceEnabled()) {
+			getUploadListener().log("onEntry, event type=" + event + ", curr state=" + getName(), Level.TRACE);
+		}
+		if (event == UploadEvent.UPLOAD_ANSWER) {
+			getUploadListener().updateDatabase((UploadAnswer)evtObj);
+		}
+	}
+	
+	public void onExit() {
+		
+	}
+	
+	public abstract String handleTimeout(long updateMs);
+	
+	public abstract String handleAbort();
+	
+	public abstract String handleDisconnect();
+
+	public abstract String handleAnswer(UploadAnswer answer);
+	
+	public abstract String getName();
+
+
+}
diff --git a/server/src/com/cloud/template/TemplateManager.java b/server/src/com/cloud/template/TemplateManager.java
index e6dcd0b9393..a4f67f3a0da 100644
--- a/server/src/com/cloud/template/TemplateManager.java
+++ b/server/src/com/cloud/template/TemplateManager.java
@@ -127,5 +127,7 @@ public interface TemplateManager extends Manager {
     void evictTemplateFromStoragePool(VMTemplateStoragePoolVO templatePoolVO);
     
     boolean templateIsDeleteable(VMTemplateHostVO templateHostRef);
+
+	void extract(VMTemplateVO template, String url, VMTemplateHostVO tmpltHostRef, Long zoneId);
     
 }
diff --git a/server/src/com/cloud/template/TemplateManagerImpl.java b/server/src/com/cloud/template/TemplateManagerImpl.java
old mode 100644
new mode 100755
index 71fe9be0052..3893caa0b63
--- a/server/src/com/cloud/template/TemplateManagerImpl.java
+++ b/server/src/com/cloud/template/TemplateManagerImpl.java
@@ -66,6 +66,7 @@ import com.cloud.storage.dao.VMTemplatePoolDao;
 import com.cloud.storage.dao.VMTemplateZoneDao;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.storage.download.DownloadMonitor;
+import com.cloud.storage.upload.UploadMonitor;
 import com.cloud.user.Account;
 import com.cloud.user.AccountManager;
 import com.cloud.user.AccountVO;
@@ -98,6 +99,7 @@ public class TemplateManagerImpl implements TemplateManager {
     @Inject StoragePoolHostDao _poolHostDao;
     @Inject EventDao _eventDao;
     @Inject DownloadMonitor _downloadMonitor;
+    @Inject UploadMonitor _uploadMonitor;
     @Inject UserAccountDao _userAccountDao;
     @Inject AccountDao _accountDao;
     @Inject UserDao _userDao;
@@ -109,7 +111,7 @@ public class TemplateManagerImpl implements TemplateManager {
     @Inject SnapshotDao _snapshotDao;
     long _routerTemplateId = -1;
     @Inject StorageManager _storageMgr;
-    protected SearchBuilder<VMTemplateHostVO> HostTemplateStatesSearch;
+    protected SearchBuilder<VMTemplateHostVO> HostTemplateStatesSearch;	
     
 
     @Override
@@ -145,6 +147,11 @@ public class TemplateManagerImpl implements TemplateManager {
         return id;
     }
     
+    @Override
+    public void extract(VMTemplateVO template, String url, VMTemplateHostVO tmpltHostRef, Long zoneId){
+    	_uploadMonitor.extractTemplate(template, url, tmpltHostRef, zoneId);
+    }
+    
     @Override @DB
     public VMTemplateStoragePoolVO prepareTemplateForCreate(VMTemplateVO template, StoragePoolVO pool) {
     	template = _tmpltDao.findById(template.getId(), true);
@@ -236,7 +243,8 @@ public class TemplateManagerImpl implements TemplateManager {
                     s_logger.debug("Downloading " + templateId + " via " + vo.getHostId());
                 }
             	dcmd.setLocalPath(vo.getLocalPath());
-                DownloadAnswer answer = (DownloadAnswer)_agentMgr.easySend(vo.getHostId(), dcmd);
+            	// set 120 min timeout for this command
+                DownloadAnswer answer = (DownloadAnswer)_agentMgr.easySend(vo.getHostId(), dcmd, 120*60*1000);
                 if (answer != null) {
             		templateStoragePoolRef.setDownloadPercent(templateStoragePoolRef.getDownloadPercent());
             		templateStoragePoolRef.setDownloadState(answer.getDownloadStatus());
diff --git a/server/src/com/cloud/test/DatabaseConfig.java b/server/src/com/cloud/test/DatabaseConfig.java
index 7beda280aa1..1f2e6b0db21 100644
--- a/server/src/com/cloud/test/DatabaseConfig.java
+++ b/server/src/com/cloud/test/DatabaseConfig.java
@@ -771,6 +771,11 @@ public class DatabaseConfig {
         int diskSpace = Integer.parseInt(_currentObjectParams.get("diskSpace"));
 //        boolean mirroring = Boolean.parseBoolean(_currentObjectParams.get("mirrored"));
         String tags = _currentObjectParams.get("tags");
+        String useLocal = _currentObjectParams.get("useLocal");
+        boolean local = false;
+        if (useLocal != null) {
+        	local = Boolean.parseBoolean(useLocal);
+        }
         
         if (tags != null && tags.length() > 0) {
             String[] tokens = tags.split(",");
@@ -782,6 +787,7 @@ public class DatabaseConfig {
             tags = newTags.toString();
         }
         DiskOfferingVO diskOffering = new DiskOfferingVO(domainId, name, displayText, diskSpace, tags);
+        diskOffering.setUseLocalStorage(local);
         DiskOfferingDaoImpl offering = ComponentLocator.inject(DiskOfferingDaoImpl.class);
         try {
             offering.persist(diskOffering);
diff --git a/server/src/com/cloud/vm/UserVmManager.java b/server/src/com/cloud/vm/UserVmManager.java
index 4c2a3e98e3d..f4f09bc81eb 100644
--- a/server/src/com/cloud/vm/UserVmManager.java
+++ b/server/src/com/cloud/vm/UserVmManager.java
@@ -117,7 +117,7 @@ public interface UserVmManager extends Manager, VirtualMachineManager
      * @param volumeId
      * @throws InternalErrorException
      */
-    void detachVolumeFromVM(long volumeId, long startEventId) throws InternalErrorException;
+    void detachVolumeFromVM(long volumeId, long startEventId, long deviceId, long instanceId) throws InternalErrorException;
     
     /**
      * Attaches an ISO to the virtual CDROM device of the specified VM. Will eject any existing virtual CDROM if isoPath is null.
diff --git a/server/src/com/cloud/vm/UserVmManagerImpl.java b/server/src/com/cloud/vm/UserVmManagerImpl.java
index 721f73e3eb4..6e724f2a5b3 100755
--- a/server/src/com/cloud/vm/UserVmManagerImpl.java
+++ b/server/src/com/cloud/vm/UserVmManagerImpl.java
@@ -82,8 +82,8 @@ import com.cloud.configuration.dao.ConfigurationDao;
 import com.cloud.configuration.dao.ResourceLimitDao;
 import com.cloud.dc.DataCenterVO;
 import com.cloud.dc.HostPodVO;
-import com.cloud.dc.Vlan.VlanType;
 import com.cloud.dc.VlanVO;
+import com.cloud.dc.Vlan.VlanType;
 import com.cloud.dc.dao.DataCenterDao;
 import com.cloud.dc.dao.HostPodDao;
 import com.cloud.dc.dao.VlanDao;
@@ -121,7 +121,6 @@ import com.cloud.network.dao.SecurityGroupVMMapDao;
 import com.cloud.network.security.NetworkGroupManager;
 import com.cloud.network.security.NetworkGroupVO;
 import com.cloud.offering.NetworkOffering;
-import com.cloud.offering.NetworkOffering.GuestIpType;
 import com.cloud.offering.ServiceOffering;
 import com.cloud.offerings.NetworkOfferingVO;
 import com.cloud.service.ServiceOfferingVO;
@@ -129,18 +128,18 @@ import com.cloud.service.dao.ServiceOfferingDao;
 import com.cloud.storage.DiskOfferingVO;
 import com.cloud.storage.GuestOSVO;
 import com.cloud.storage.Snapshot;
-import com.cloud.storage.Snapshot.SnapshotType;
 import com.cloud.storage.SnapshotVO;
 import com.cloud.storage.Storage;
-import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.storage.StorageManager;
 import com.cloud.storage.StoragePoolVO;
 import com.cloud.storage.VMTemplateHostVO;
-import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
 import com.cloud.storage.VMTemplateVO;
 import com.cloud.storage.Volume;
-import com.cloud.storage.Volume.VolumeType;
 import com.cloud.storage.VolumeVO;
+import com.cloud.storage.Snapshot.SnapshotType;
+import com.cloud.storage.Storage.ImageFormat;
+import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
+import com.cloud.storage.Volume.VolumeType;
 import com.cloud.storage.dao.DiskOfferingDao;
 import com.cloud.storage.dao.DiskTemplateDao;
 import com.cloud.storage.dao.GuestOSCategoryDao;
@@ -419,10 +418,28 @@ public class UserVmManagerImpl implements UserVmManager {
     }
     
     @Override
-    public void detachVolumeFromVM(long volumeId, long startEventId) throws InternalErrorException {
-    	VolumeVO volume = _volsDao.findById(volumeId);
+    public void detachVolumeFromVM(long volumeId, long startEventId, long deviceId, long instanceId) throws InternalErrorException {
+    	VolumeVO volume = null;
     	
-    	Long vmId = volume.getInstanceId();
+    	if (volumeId != 0)
+    	{
+    		volume = _volsDao.findById(volumeId);
+    	}
+    	else
+    	{
+    		volume = _volsDao.findByInstanceAndDeviceId(instanceId, deviceId).get(0);
+    	}
+    	
+    	Long vmId = null;
+    	
+    	if (instanceId == 0)
+    	{
+    		vmId = volume.getInstanceId();
+    	}
+    	else
+    	{
+    		vmId = instanceId;
+    	}
     	
     	if (vmId == null) {
     		return;
@@ -455,7 +472,7 @@ public class UserVmManagerImpl implements UserVmManager {
     	Answer answer = null;
     	
     	if (sendCommand) {
-			AttachVolumeCommand cmd = new AttachVolumeCommand(false, vm.getInstanceName(), volume.getPoolType(), volume.getFolder(), volume.getPath(), volume.getName(), volume.getDeviceId());
+			AttachVolumeCommand cmd = new AttachVolumeCommand(false, vm.getInstanceName(), volume.getPoolType(), volume.getFolder(), volume.getPath(), volume.getName(), deviceId != 0 ? deviceId : volume.getDeviceId());
 			
 			try {
     			answer = _agentMgr.send(vm.getHostId(), cmd);
@@ -1494,6 +1511,10 @@ public class UserVmManagerImpl implements UserVmManager {
                 podsToAvoid.add(pod.first().getId());
             }
 
+            if(pod == null){
+                throw new ResourceAllocationException("Create VM " + ((vm == null) ? vmId : vm.toString()) + " failed. There are no pods with enough CPU/memory");
+            }
+            
             if ((vm == null) || (poolid == 0)) {
                 throw new ResourceAllocationException("Create VM " + ((vm == null) ? vmId : vm.toString()) + " failed due to no Storage Pool is available");
             }
@@ -2725,6 +2746,8 @@ public class UserVmManagerImpl implements UserVmManager {
 	        	else
 	        	{
 	        		s_logger.debug("failed to create VM instance : " + name);
+	        		throw new InternalErrorException("Unable to find a suitable storage pool for creating this directly attached VM");
+	        		
 	        	}
 	            return null;
 	        }
@@ -2757,7 +2780,7 @@ public class UserVmManagerImpl implements UserVmManager {
             _accountMgr.decrementResourceCount(account.getId(), ResourceType.volume, numVolumes);
 
 	        s_logger.error("Unable to create vm", th);
-	        throw new CloudRuntimeException("Unable to create vm", th);
+	        throw new CloudRuntimeException("Unable to create vm: "+th.getMessage(), th);
 	    }
 	}
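The new detachVolumeFromVM signature above treats a zero volumeId or instanceId as "not supplied" and falls back to the other lookup. A minimal Python sketch of that resolution rule, with hypothetical lookup callables standing in for the DAOs (purely illustrative, not the repository's code):

```python
def resolve_volume_and_vm(volume_id, instance_id, device_id,
                          find_by_id, find_by_instance_and_device):
    # Zero means "not supplied", mirroring the Java hunk's convention.
    if volume_id != 0:
        volume = find_by_id(volume_id)
    else:
        volume = find_by_instance_and_device(instance_id, device_id)[0]
    # The VM id comes from the volume record unless the caller passed one.
    vm_id = volume["instance_id"] if instance_id == 0 else instance_id
    return volume, vm_id
```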
     
diff --git a/server/test/async-job-component.xml b/server/test/async-job-component.xml
index 6694f44a7fb..03f23bd17a9 100644
--- a/server/test/async-job-component.xml
+++ b/server/test/async-job-component.xml
@@ -150,6 +150,8 @@
         
         
         
+        
+        
         
         
         
diff --git a/setup/bindir/cloud-migrate-databases.in b/setup/bindir/cloud-migrate-databases.in
index dab1515d50c..6adffa75d3d 100644
--- a/setup/bindir/cloud-migrate-databases.in
+++ b/setup/bindir/cloud-migrate-databases.in
@@ -150,6 +150,23 @@ class From21datamigratedTo21postprocessed(cloud_utils.MigrationStep):
 	to_level = "2.1"
 	def run(self): self.context.run_sql_resource("postprocess-20to21.sql")
 
+class From21To213(cloud_utils.MigrationStep):
+	def __str__(self): return "Dropping obsolete indexes"
+	from_level = "2.1"
+	to_level = "2.1.3"
+	def run(self): self.context.run_sql_resource("index-212to213.sql")
+
+class From213To22data(cloud_utils.MigrationStep):
+	def __str__(self): return "Migrating data"
+	from_level = "2.1.3"
+	to_level = "2.2-01"
+	def run(self): self.context.run_sql_resource("data-21to22.sql")
+
+class From22dataTo22(cloud_utils.MigrationStep):
+	def __str__(self): return "Migrating indexes"
+	from_level = "2.2-01"
+	to_level = "2.2"
+	def run(self): self.context.run_sql_resource("index-21to22.sql")
 
 # command line harness functions
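The three MigrationStep subclasses added above extend the upgrade chain 2.1 → 2.1.3 → 2.2-01 → 2.2. How a migrator strings such from_level/to_level steps together can be sketched with a hypothetical minimal harness (an illustration of the chaining idea, not cloud_utils itself):

```python
class Step(object):
    def __init__(self, from_level, to_level, action):
        self.from_level, self.to_level, self.action = from_level, to_level, action

def plan(steps, current, target):
    """Walk from_level -> to_level links until the target schema level is reached."""
    by_from = dict((s.from_level, s) for s in steps)
    path = []
    while current != target:
        step = by_from[current]  # KeyError here means there is no upgrade path
        path.append(step)
        current = step.to_level
    return path

steps = [Step("2.1", "2.1.3", "index-212to213.sql"),
         Step("2.1.3", "2.2-01", "data-21to22.sql"),
         Step("2.2-01", "2.2", "index-21to22.sql")]
```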
 
diff --git a/setup/bindir/cloud-setup-databases.in b/setup/bindir/cloud-setup-databases.in
index 6c34575fcc0..9543cb19571 100755
--- a/setup/bindir/cloud-setup-databases.in
+++ b/setup/bindir/cloud-setup-databases.in
@@ -9,6 +9,11 @@ from random import choice
 import string
 from optparse import OptionParser
 import commands
+import MySQLdb
+
+# squelch mysqldb spurious warnings
+import warnings
+warnings.simplefilter('ignore')
 
 # ---- This snippet of code adds the sources path and the waf configured PYTHONDIR to the Python path ----
 # ---- We do this so cloud_utils can be looked up in the following order:
@@ -128,22 +133,34 @@ def get_creds(parser,options,args):
 	host,port = parse_hostport(hostinfo)
 	return (user,password,host,port)
 
-def run_mysql(text,user,password,host,port,extraargs=None):
-  cmd = ["mysql",
-    "--user=%s"%user,
-    "--host=%s"%host,
-  ]
-  if password: 
-    cmd.append("--password=%s"%password)
-  if password: 
-    cmd.append("--port=%s"%port)
-  if extraargs:
-    cmd.extend(extraargs)
-    
-  p = subprocess.Popen(cmd,stdin=subprocess.PIPE)
-  p.communicate(text)
-  ret = p.wait()
-  if ret != 0: raise CalledProcessError(ret,cmd)
+def run_mysql(text,user,password,host,port,debug=False):
+  kwargs = {}
+  kwargs['host'] = host
+  kwargs['user'] = user
+  if password: kwargs['passwd'] = password
+  if port: kwargs['port'] = int(port)
+
+  # Split the script on DELIMITER directives so stored-procedure
+  # bodies are executed as single statements.
+  import re
+  exp = re.compile("DELIMITER (.*)$",re.M)
+  pairs = [";"]+[x.strip() for x in exp.split(text)]
+  delims = []
+  chunks = []
+  while pairs:
+      delims.append( pairs[0] )
+      chunks.append( pairs[1] )
+      pairs = pairs[2:]
+
+  try:
+      conn = MySQLdb.connect(**kwargs)
+      cur = conn.cursor()
+      for delim,chunk in zip(delims,chunks):
+          for stmt in chunk.split(delim):
+              stmt = stmt.strip()
+              if not stmt: continue
+              if debug: print stmt
+              cur.execute(stmt)
+      cur.close()
+      conn.commit()
+      conn.close()
+  except MySQLdb.Error:
+      # callers still catch CalledProcessError, as with the old mysql CLI path
+      raise CalledProcessError(1,"mysql")
 
 def ifaces():
     status,lines = commands.getstatusoutput('LANG=C /sbin/ip address show')
@@ -154,7 +171,8 @@ def ifaces():
 
 def ip(iface):
     status,lines = commands.getstatusoutput('LANG=C /sbin/ip address show %s'%iface)
-    assert status == 0
+    if status != 0: return None
+    # was "assert status == 0", which broke on interfaces without an IP address
     lines = [ l for l in lines.splitlines() if l.startswith('    inet ') ]
     if not lines: return None
     toks = lines[0].split()
@@ -239,8 +257,8 @@ if options.serversetup and not os.path.isfile(options.serversetup):
 	e("%s is not a valid file"%options.serversetup)
 
 
-dbfilepath = "@SETUPDATADIR@"
-dbppaths = [ os.path.join("@MSCONF@","db.properties") ] # , os.path.join("@USAGESYSCONFDIR@","db.properties") ]
+dbfilepath = r"@SETUPDATADIR@"
+dbppaths = [ os.path.join(r"@MSCONF@","db.properties") ] # , os.path.join("@USAGESYSCONFDIR@","db.properties") ]
 dbppaths = [ x for x in dbppaths if os.path.exists(x) ]
 if not dbppaths:
 	print "No services to set up installed on this system.  Refusing to continue."
@@ -249,28 +267,29 @@ if not dbppaths:
 #run sanity checks
 # checkutc()
 checkdbserverhostname(host)
-checkhostname()
+if sys.platform != "win32": checkhostname()
 try: checkselinux()
 except OSError,e:
 	if e.errno == 2: pass
 	else: raise
-checknetwork()
+if sys.platform != 'win32': checknetwork()
 
 
 #initialize variables
-ipaddr = firstip(ifaces())
+if sys.platform != 'win32': ipaddr = firstip(ifaces())
+else: ipaddr = None
 if not ipaddr: ipaddr='127.0.0.1'
 
 
 if rootuser:
 	print "Testing specified deployment credentials on server %s:%s"%(host,port)
-	try: run_mysql("SELECT * from mysql.user limit 0",rootuser,rootpassword,host,port)
+	try: run_mysql("SELECT * from mysql.user limit 0",rootuser,rootpassword,host,port,debug=options.debug)
 	except CalledProcessError:
 		print "The deployment credentials you specified are not valid.  Refusing to continue."
 		sys.exit(19)
 else:
 	print "Testing specified connection credentials on server %s:%s"%(host,port)
-	try: run_mysql("SELECT * from cloud.user limit 0",user,password,host,port)
+	try: run_mysql("SELECT * from cloud.user limit 0",user,password,host,port,debug=options.debug)
 	except CalledProcessError:
 		print "The connection credentials you specified are not valid.  Refusing to continue."
 		sys.exit(19)
@@ -287,7 +306,9 @@ if rootuser:
 
 	replacements = (
 		("CREATE USER cloud identified by 'cloud';",
-			""),	
+			"CREATE USER %s@`localhost` identified by '%s'; CREATE USER %s@`%%` identified by '%s';"%(
+					(user,password,user,password)
+				)),
 		("cloud identified by 'cloud';",
 			"%s identified by '%s';"%(user,password)),
 		("cloud@`localhost` identified by 'cloud'",
@@ -315,22 +336,27 @@ if rootuser:
 		if not os.path.exists(p): continue
 		text = file(p).read()
 		for t,r in replacements: text = text.replace(t,r)
-		print "Applying file %s to the database on server %s:%s"%(p,host,port)
-		try: run_mysql(text,rootuser,rootpassword,host,port)
+		print "Applying file %s to the database on server %s:%s"%(p,host,port)
+		try: run_mysql(text,rootuser,rootpassword,host,port,debug=options.debug)
 		except CalledProcessError: sys.exit(20)
 		
 	if options.serversetup:
-		systemjars = "@SYSTEMJARS@".split()
-		pipe = subprocess.Popen(["build-classpath"]+systemjars,stdout=subprocess.PIPE)
-		systemcp,throwaway = pipe.communicate()
-		systemcp = systemcp.strip()
-		if pipe.wait(): # this means that build-classpath failed miserably
-			systemcp = "@SYSTEMCLASSPATH@"
-		pcp = os.path.pathsep.join( glob.glob( os.path.join ( "@PREMIUMJAVADIR@" , "*" ) ) )
-		mscp = "@MSCLASSPATH@"
-		depscp = "@DEPSCLASSPATH@"
 		conf = os.path.dirname(dbppaths[0])
-		classpath = os.path.pathsep.join([pcp,systemcp,depscp,mscp,conf])
+		pcp = os.path.pathsep.join( glob.glob( os.path.join ( r"@PREMIUMJAVADIR@" , "*" ) ) )
+		if sys.platform == 'win32':
+			mscp = r"@MSCLASSPATH@"
+			depscp = r"@DEPSCLASSPATH@"
+			classpath = os.path.pathsep.join([pcp,depscp,mscp,conf])
+		else:
+			systemjars = r"@SYSTEMJARS@".split()
+			pipe = subprocess.Popen(["build-classpath"]+systemjars,stdout=subprocess.PIPE)
+			systemcp,throwaway = pipe.communicate()
+			systemcp = systemcp.strip()
+			if pipe.wait(): # this means that build-classpath failed miserably
+				systemcp = r"@SYSTEMCLASSPATH@"
+			mscp = r"@MSCLASSPATH@"
+			depscp = r"@DEPSCLASSPATH@"
+			classpath = os.path.pathsep.join([pcp,systemcp,depscp,mscp,conf])
 		print "Performing unattended automated setup using file %s"%options.serversetup
 		cmd = ["java","-cp",classpath,"com.cloud.test.DatabaseConfig",options.serversetup]
 		if options.debug: print "Running command: %s"%" ".join(cmd)
@@ -343,12 +369,19 @@ if rootuser:
 			p = os.path.join(dbfilepath,"%s.sql"%f)
 			text = file(p).read()
 			print "Applying file %s to the database on server %s:%s"%(p,host,port)
-			try: run_mysql(text,rootuser,rootpassword,host,port)
+			try: run_mysql(text,rootuser,rootpassword,host,port,debug=options.debug)
 			except CalledProcessError: sys.exit(22)
 
 	for f in ["templates.%s"%virttech,"create-index-fk"]:
 		p = os.path.join(dbfilepath,"%s.sql"%f)
 		text = file(p).read()
 		print "Applying file %s to the database on server %s:%s"%(p,host,port)
-		try: run_mysql(text,rootuser,rootpassword,host,port)
+		try: run_mysql(text,rootuser,rootpassword,host,port,debug=options.debug)
+		except CalledProcessError: sys.exit(22)
+
+	p = os.path.join(dbfilepath,"schema-level.sql")
+	if os.path.isfile(p):
+		text = file(p).read()
+		print "Applying file %s to the database on server %s:%s"%(p,host,port)
+		try: run_mysql(text,rootuser,rootpassword,host,port,debug=options.debug)
+		except CalledProcessError: sys.exit(22)
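The run_mysql rewrite above splits a SQL script on DELIMITER directives before feeding each statement to MySQLdb, so stored-procedure bodies survive intact. The pairing logic can be sketched standalone (a hedged reconstruction for illustration, not the installed script):

```python
import re

def split_sql(text):
    """Split a MySQL script into statements, honoring DELIMITER lines."""
    exp = re.compile(r"DELIMITER (.*)$", re.M)
    # split() yields [chunk, delim, chunk, delim, ...]; prepend the default ';'
    # so every chunk is paired with the delimiter in force when it appears.
    pairs = [";"] + [x.strip() for x in exp.split(text)]
    stmts = []
    for i in range(0, len(pairs) - 1, 2):
        delim, chunk = pairs[i], pairs[i + 1]
        for stmt in chunk.split(delim):
            stmt = stmt.strip()
            if stmt:
                stmts.append(stmt)
    return stmts
```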
diff --git a/setup/db/create-database.sql b/setup/db/create-database.sql
index 3a2ad436170..704568edc60 100644
--- a/setup/db/create-database.sql
+++ b/setup/db/create-database.sql
@@ -24,7 +24,7 @@ BEGIN
   IF foo > 0 THEN 
          DROP USER 'cloud'@'%' ;
   END IF;
-END ;$$
+END $$
 DELIMITER ;
 
 CALL `mysql`.`cloud_drop_user_if_exists`() ;
diff --git a/setup/db/create-schema.sql b/setup/db/create-schema.sql
old mode 100644
new mode 100755
index 3ed8535790f..d813bc43261
--- a/setup/db/create-schema.sql
+++ b/setup/db/create-schema.sql
@@ -261,6 +261,7 @@ CREATE TABLE `cloud`.`volumes` (
   `recreatable` tinyint(1) unsigned NOT NULL DEFAULT 0 COMMENT 'Is this volume recreatable?',
   `destroyed` tinyint(1) COMMENT 'indicates whether the volume was destroyed by the user or not',
   `created` datetime COMMENT 'Date Created',
+  `attached` datetime COMMENT 'Date Attached',
   `updated` datetime COMMENT 'Date updated for attach/detach',
   `removed` datetime COMMENT 'Date removed.  not null if removed',
   `status` varchar(32) COMMENT 'Async API volume creation status',
@@ -633,13 +634,18 @@ CREATE TABLE  `cloud`.`template_host_ref` (
   `created` DATETIME NOT NULL,
   `last_updated` DATETIME,
   `job_id` varchar(255),
+  `upload_job_id` varchar(255),
   `download_pct` int(10) unsigned,
+  `upload_pct` int(10) unsigned,
   `size` bigint unsigned,
   `download_state` varchar(255),
+  `upload_state` varchar(255),
   `error_str` varchar(255),
+  `upload_error_str` varchar(255),
   `local_path` varchar(255),
   `install_path` varchar(255),
   `url` varchar(255),
+  `upload_url` varchar(255),
   `destroyed` tinyint(1) COMMENT 'indicates whether the template_host entry was destroyed by the user or not',
   `is_copy` tinyint(1) NOT NULL DEFAULT 0 COMMENT 'indicates whether this was copied ',
   PRIMARY KEY  (`id`)
@@ -911,7 +917,7 @@ CREATE TABLE `cloud`.`load_balancer` (
 CREATE TABLE  `cloud`.`storage_pool` (
   `id` bigint unsigned UNIQUE NOT NULL,
   `name` varchar(255) COMMENT 'should be NOT NULL',
-  `uuid` varchar(255) UNIQUE NOT NULL,
+  `uuid` varchar(255) NOT NULL,
   `pool_type` varchar(32) NOT NULL,
   `port` int unsigned NOT NULL,
   `data_center_id` bigint unsigned NOT NULL,
diff --git a/setup/db/data-21to22.sql b/setup/db/data-21to22.sql
new file mode 100644
index 00000000000..791436ff3c5
--- /dev/null
+++ b/setup/db/data-21to22.sql
@@ -0,0 +1,12 @@
+--data upgrade from 21 to 22
+use cloud;
+
+START TRANSACTION;
+
+DELETE FROM configuration where name='upgrade.url';
+DELETE FROM configuration where name='router.template.id';
+UPDATE vm_template set unique_name='routing_old'  where id=1;
+INSERT INTO vm_template (id, unique_name, name, public, created, type, hvm, bits, account_id, url, checksum, enable_password, display_text, format, guest_os_id, featured, cross_zones)
+    VALUES (10, 'routing', 'SystemVM Template', 0, now(), 'ext3', 0, 64, 1, 'http://download.cloud.com/releases/2.2/systemvm.vhd.bz2', 'bcc7f290f4c27ab4d0fe95d1012829ea', 0, 'SystemVM Template', 'VHD', 15, 0, 1);
+
+COMMIT;
diff --git a/setup/db/migration/data-21to22.sql b/setup/db/migration/data-21to22.sql
deleted file mode 100644
index 0af83cc2afd..00000000000
--- a/setup/db/migration/data-21to22.sql
+++ /dev/null
@@ -1,8 +0,0 @@
---data upgrade from 21 to 22
-use cloud;
-
-START TRANSACTION;
-
-DELETE FROM configuration where name='upgrade.url';
-
-COMMIT;
diff --git a/setup/db/migration/schema-21to22.sql b/setup/db/schema-21to22.sql
similarity index 88%
rename from setup/db/migration/schema-21to22.sql
rename to setup/db/schema-21to22.sql
index dff1e91ae52..ae3b3db4446 100644
--- a/setup/db/migration/schema-21to22.sql
+++ b/setup/db/schema-21to22.sql
@@ -9,3 +9,4 @@ ALTER TABLE `cloud`.`resource_count` MODIFY COLUMN `account_id` bigint unsigned;
 ALTER TABLE `cloud`.`storage_pool` add COLUMN STATUS varchar(32) not null; -- new status column for maintenance mode support for primary storage
 ALTER TABLE `cloud`.`volumes` ADD COLUMN `source_id` bigint unsigned;  -- id for the source
 ALTER TABLE `cloud`.`volumes` ADD COLUMN `source_type` varchar(32); --source from which the volume is created i.e. snapshot, diskoffering, template, blank
+ALTER TABLE `cloud`.`volumes` ADD COLUMN `attached` datetime; -- date and time the volume was attached
diff --git a/setup/db/schema-level.sql b/setup/db/schema-level.sql
new file mode 100644
index 00000000000..e3b0eea48fb
--- /dev/null
+++ b/setup/db/schema-level.sql
@@ -0,0 +1 @@
+INSERT INTO `cloud`.`configuration` (category, instance, component, name, value, description) VALUES ('Hidden', 'DEFAULT', 'database', 'schema.level', '2.2', 'The schema level of this database');
diff --git a/setup/db/server-setup.xml b/setup/db/server-setup.xml
old mode 100644
new mode 100755
diff --git a/setup/db/templates.xenserver.sql b/setup/db/templates.xenserver.sql
index 0432c97716f..02651fd99a8 100644
--- a/setup/db/templates.xenserver.sql
+++ b/setup/db/templates.xenserver.sql
@@ -1,5 +1,5 @@
 INSERT INTO `cloud`.`vm_template` (id, unique_name, name, public, created, type, hvm, bits, account_id, url, checksum, enable_password, display_text, format, guest_os_id, featured, cross_zones)
-    VALUES (1, 'routing', 'SystemVM Template', 0, now(), 'ext3', 0, 64, 1, 'http://download.cloud.com/releases/2.0.0RC5/systemvm.vhd.bz2', '31cd7ce94fe68c973d5dc37c3349d02e', 0, 'SystemVM Template', 'VHD', 12, 0, 1);
+    VALUES (1, 'routing', 'SystemVM Template', 0, now(), 'ext3', 0, 64, 1, 'http://download.cloud.com/releases/2.2/systemvm.vhd.bz2', 'bcc7f290f4c27ab4d0fe95d1012829ea', 0, 'SystemVM Template', 'VHD', 15, 0, 1);
 INSERT INTO `cloud`.`vm_template` (id, unique_name, name, public, created, type, hvm, bits, account_id, url, checksum, enable_password, display_text,  format, guest_os_id, featured, cross_zones)
     VALUES (2, 'centos53-x86_64', 'CentOS 5.3(x86_64) no GUI', 1, now(), 'ext3', 0, 64, 1, 'http://download.cloud.com/templates/builtin/f59f18fb-ae94-4f97-afd2-f84755767aca.vhd.bz2', 'b63d854a9560c013142567bbae8d98cf', 0, 'CentOS 5.3(x86_64) no GUI', 'VHD', 12, 1, 1);
 
diff --git a/tools/systemvm/debian/README b/tools/systemvm/debian/README
new file mode 100644
index 00000000000..730ae00f22f
--- /dev/null
+++ b/tools/systemvm/debian/README
@@ -0,0 +1,31 @@
+1. The buildsystemvm.sh script builds a 32-bit system VM disk based on the Debian Squeeze distro. Thanks to the pvops support in the kernel, this system VM can boot on any hypervisor. The build is fully automated except for one step (see 4 below).
+2. The files under config/ are the specific tweaks to the default Debian configuration that are required for CloudStack operation.
+3. The variables at the top of the buildsystemvm.sh script can be customized:
+   
+	IMAGENAME=systemvm # don't touch this
+	LOCATION=/var/lib/images/systemvm # where the image file is placed
+	MOUNTPOINT=/mnt/$IMAGENAME/ # where the image is mounted on your host while the VM image is built
+	IMAGELOC=$LOCATION/$IMAGENAME.img
+	PASSWORD=password # root password for the VM
+	APT_PROXY= # you can point this at an APT cacher such as apt-cacher-ng
+	HOSTNAME=systemvm # don't touch this
+	SIZE=2000 # don't touch this for now
+	DEBIAN_MIRROR=ftp.us.debian.org/debian 
+	MINIMIZE=true # if true, most docs, fonts, locales and the apt cache are wiped out
+
+4. The system VM includes the (non-free) Sun JRE. You can use the standard Debian jre-headless package instead, but it pulls in X and bloats the image. Installing the Sun JRE package requires one manual step: accepting the license. The packages() function is where you can swap in the standard JRE.
+5. You need to be 'root' to run the buildsystemvm.sh script.
+
+6. The image is a raw image. Citrix XenServer, however, requires the image to be in the VHD format. To convert it to VHD, follow these steps: 
+   a. The Xen repository has a tool called vhd-util that compiles and runs on any Linux system (http://xenbits.xensource.com/xen-4.0-testing.hg?file/8e8dd38374e9/tools/blktap2/vhd/ or full Xen source at http://www.xen.org/products/xen_source.html).
+   b. Apply this patch: http://lists.xensource.com/archives/cgi-bin/mesg.cgi?a=xen-devel&i=006101cb22f6%242004dd40%24600e97c0%24%40zhuo%40cloudex.cn.
+   c. Build the vhd-util tool:
+     cd tools/blktap2
+     make
+     sudo make install
+   d. Use the vhd-util tool to convert from raw to VHD:
+    cp  
+    vhd-util convert -s 0 -t 1 -i  -o 
+    vhd-util convert -s 1 -t 2 -i  -o 
+7. CloudStack requires the system VM disk to be in QCOW2 format to support the KVM hypervisor. To convert the raw disk image to QCOW2, you need the qemu-img tool on the host system: 
+    qemu-img  convert -f raw -O qcow2 systemvm.img systemvm.qcow2
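Steps 6 and 7 above describe a two-pass raw-to-VHD conversion and a raw-to-QCOW2 conversion. A small Python sketch that only assembles the command lines, without running them (the file names and the intermediate-file naming are hypothetical placeholders; the README's own placeholders are intentionally left blank there):

```python
def conversion_commands(raw_image, vhd_image, qcow2_image):
    """Build (not run) the conversion command lines described in the README."""
    # intermediate file name for the two-pass vhd-util conversion is a guess
    tmp = raw_image + ".tmp"
    return [
        # step 6d: two vhd-util passes, raw -> intermediate -> VHD
        ["vhd-util", "convert", "-s", "0", "-t", "1", "-i", raw_image, "-o", tmp],
        ["vhd-util", "convert", "-s", "1", "-t", "2", "-i", tmp, "-o", vhd_image],
        # step 7: qemu-img conversion for the KVM hypervisor
        ["qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw_image, qcow2_image],
    ]
```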
diff --git a/tools/systemvm/debian/buildsystemvm.sh b/tools/systemvm/debian/buildsystemvm.sh
index 409883d0214..41fa315f711 100755
--- a/tools/systemvm/debian/buildsystemvm.sh
+++ b/tools/systemvm/debian/buildsystemvm.sh
@@ -1,26 +1,34 @@
 #!/bin/bash
 
+set -x
+
 IMAGENAME=systemvm
-LOCATION=/var/lib/images/systemvm2
+LOCATION=/var/lib/images/systemvm
 PASSWORD=password
 APT_PROXY=
 HOSTNAME=systemvm
 SIZE=2000
 DEBIAN_MIRROR=ftp.us.debian.org/debian
 MINIMIZE=true
+MOUNTPOINT=/mnt/$IMAGENAME/
+IMAGELOC=$LOCATION/$IMAGENAME.img
+scriptdir=$(dirname $PWD/$0)
 
 baseimage() {
   mkdir -p $LOCATION
+  #dd if=/dev/zero of=$IMAGELOC bs=1M  count=$SIZE
   dd if=/dev/zero of=$IMAGELOC bs=1M seek=$((SIZE - 1)) count=1
   loopdev=$(losetup -f)
   losetup $loopdev $IMAGELOC
   parted $loopdev -s 'mklabel msdos'
   parted $loopdev -s 'mkpart primary ext3 512B 2097151000B'
+  sleep 2 
   losetup -d $loopdev
   loopdev=$(losetup --show -o 512 -f $IMAGELOC )
   mkfs.ext3  -L ROOT $loopdev
   mkdir -p $MOUNTPOINT
   tune2fs -c 100 -i 0 $loopdev
+  sleep 2 
   losetup -d $loopdev
   
   mount -o loop,offset=512 $IMAGELOC  $MOUNTPOINT
@@ -105,12 +113,11 @@ ff02::3 ip6-allhosts
 EOF
 
   cat >> etc/network/interfaces << EOF
-auto lo
+auto lo eth0
 iface lo inet loopback
 
 # The primary network interface
-allow-hotplug eth0
-iface eth0 inet dhcp
+iface eth0 inet static
 
 EOF
 }
@@ -147,6 +154,8 @@ EOF
 
 
 fixgrub() {
+  kern=$(basename $(ls  boot/vmlinuz-*))
+  ver=${kern#vmlinuz-}
   cat > boot/grub/menu.lst << EOF
 default 0
 timeout 2
@@ -156,10 +165,10 @@ color cyan/blue white/blue
 # kopt=root=LABEL=ROOT ro
 
 ## ## End Default Options ##
-title		Debian GNU/Linux, kernel 2.6.32-5-686-bigmem 
+title		Debian GNU/Linux, kernel $ver
 root		(hd0,0)
-kernel		/boot/vmlinuz-2.6.32-5-686-bigmem root=LABEL=ROOT ro console=tty0 xencons=ttyS0,115200 console=hvc0 quiet
-initrd		/boot/initrd.img-2.6.32-5-686-bigmem
+kernel		/boot/$kern root=LABEL=ROOT ro console=tty0 xencons=ttyS0,115200 console=hvc0 quiet
+initrd		/boot/initrd.img-$ver
 
 ### END DEBIAN AUTOMAGIC KERNELS LIST
 EOF
@@ -182,6 +191,7 @@ EOF
 }
 
 fixacpid() {
+  mkdir -p etc/acpi/events
   cat >> etc/acpi/events/power << EOF
 event=button/power.*
 action=/usr/local/sbin/power.sh "%e"
@@ -193,15 +203,140 @@ EOF
   chmod a+x usr/local/sbin/power.sh
 }
 
+fixiptables() {
+cat >> etc/modules << EOF
+nf_conntrack
+nf_conntrack_ipv4
+EOF
+cat > etc/init.d/iptables-persistent << EOF
+#!/bin/sh
+### BEGIN INIT INFO
+# Provides:          iptables
+# Required-Start:    mountkernfs \$local_fs
+# Required-Stop:     \$local_fs
+# Should-Start:      cloud-early-config
+# Default-Start:     S
+# Default-Stop:     
+# Short-Description: Set up iptables rules
+### END INIT INFO
+
+PATH="/sbin:/bin:/usr/sbin:/usr/bin"
+
+# Include config file for iptables-persistent
+. /etc/iptables/iptables.conf
+
+case "\$1" in
+start)
+    if [ -e /var/run/iptables ]; then
+        echo "iptables is already started!"
+        exit 1
+    else
+        touch /var/run/iptables
+    fi
+
+    if [ \$ENABLE_ROUTING -ne 0 ]; then
+        # Enable Routing
+        echo 1 > /proc/sys/net/ipv4/ip_forward
+    fi
+
+    # Load Modules
+    modprobe -a \$MODULES
+
+    # Load saved rules
+    if [ -f /etc/iptables/rules ]; then
+        iptables-restore /etc/iptables/rules
+    fi
+    ;;
+stop|force-stop)
+    # Restore Default Policies
+    iptables -P INPUT ACCEPT
+    iptables -P FORWARD ACCEPT
+    iptables -P OUTPUT ACCEPT
+
+    # Flush rules on default tables
+    iptables -F
+    iptables -t nat -F
+    iptables -t mangle -F
+
+    # Unload previously loaded modules
+    modprobe -r \$MODULES
+
+    # Disable Routing if enabled
+    if [ \$ENABLE_ROUTING -ne 0 ]; then
+        # Disable Routing
+        echo 0 > /proc/sys/net/ipv4/ip_forward
+    fi
+
+    rm -f /var/run/iptables
+    ;;
+restart|force-reload)
+    \$0 stop
+    \$0 start
+    ;;
+status)
+    echo "Filter Rules:"
+    echo "--------------"
+    iptables -L -v
+    echo ""
+    echo "NAT Rules:"
+    echo "-------------"
+    iptables -t nat -L -v
+    echo ""
+    echo "Mangle Rules:"
+    echo "----------------"
+    iptables -t mangle -L -v
+    ;;
+*)
+    echo "Usage: \$0 {start|stop|force-stop|restart|force-reload|status}" >&2
+    exit 1
+    ;;
+esac
+
+exit 0
+EOF
+  chmod a+x etc/init.d/iptables-persistent
+
+
+  touch etc/iptables/iptables.conf 
+  cat > etc/iptables/iptables.conf << EOF
+# A basic config file for the /etc/init.d/iptable-persistent script
+#
+
+# Should new manually added rules from command line be saved on reboot? Assign to a value different that 0 if you want this enabled.
+SAVE_NEW_RULES=0
+
+# Modules to load:
+MODULES="nf_nat_ftp nf_conntrack_ftp"
+
+# Enable Routing?
+ENABLE_ROUTING=1
+EOF
+  chmod 0644 etc/iptables/iptables.conf
+
+}
+
 packages() {
   DEBIAN_FRONTEND=noninteractive
   DEBIAN_PRIORITY=critical
   DEBCONF_DB_OVERRIDE=’File{/root/config.dat}’
   export DEBIAN_FRONTEND DEBIAN_PRIORITY DEBCONF_DB_OVERRIDE
 
-  chroot .  apt-get --no-install-recommends -q -y --force-yes install rsyslog chkconfig insserv net-tools ifupdown vim-tiny netbase iptables openssh-server grub e2fsprogs dhcp3-client dnsmasq tcpdump socat wget apache2 python2.5 bzip2 sed gawk diff grep gzip less tar telnet xl2tpd traceroute openswan psmisc 
+  chroot .  apt-get --no-install-recommends -q -y --force-yes install rsyslog chkconfig insserv net-tools ifupdown vim-tiny netbase iptables openssh-server grub e2fsprogs dhcp3-client dnsmasq tcpdump socat wget apache2 ssl-cert python bzip2 sed gawk diff grep gzip less tar telnet xl2tpd traceroute openswan psmisc inetutils-ping iputils-arping httping dnsutils zip unzip ethtool uuid
 
-  chroot . apt-get --no-install-recommends -q -y --force-yes -t backports install haproxy nfs-common
+  chroot . apt-get --no-install-recommends -q -y --force-yes install haproxy nfs-common
 
   echo "***** getting additional modules *********"
   chroot .  apt-get --no-install-recommends -q -y --force-yes  install iproute acpid iptables-persistent
@@ -218,8 +353,34 @@ password() {
   chroot . echo "root:$PASSWORD" | chroot . chpasswd
 }
 
+apache2() {
+   chroot . a2enmod ssl rewrite auth_basic auth_digest
+   chroot . a2ensite default-ssl
+   cp etc/apache2/sites-available/default etc/apache2/sites-available/default.orig
+   cp etc/apache2/sites-available/default-ssl etc/apache2/sites-available/default-ssl.orig
+}
+
+services() {
+  mkdir -p ./var/www/html
+  mkdir -p ./opt/cloud/bin
+  mkdir -p ./var/cache/cloud
+  mkdir -p ./usr/share/cloud
+  mkdir -p ./usr/local/cloud
+  mkdir -p ./root/.ssh
+  
+  /bin/cp -r ${scriptdir}/config/* ./
+  chroot . chkconfig xl2tpd off
+  chroot . chkconfig --add cloud-early-config
+  chroot . chkconfig cloud-early-config on
+  chroot . chkconfig --add cloud-passwd-srvr 
+  chroot . chkconfig cloud-passwd-srvr off
+  chroot . chkconfig --add cloud
+  chroot . chkconfig cloud off
+}
+
 cleanup() {
   rm -f usr/sbin/policy-rc.d
+  rm -f root/config.dat
   rm -f etc/apt/apt.conf.d/01proxy 
 
   if [ "$MINIMIZE" == "true" ]
@@ -229,17 +390,22 @@ cleanup() {
     rm -rf usr/share/locale/[a-d]*
     rm -rf usr/share/locale/[f-z]*
     rm -rf usr/share/doc/*
+    # zero out the free space (leaving ~200MB headroom) so the image compresses well
+    size=$(df $MOUNTPOINT | awk '{print $4}' | grep -v Available)
+    dd if=/dev/zero of=$MOUNTPOINT/zeros.img bs=1M count=$(((size - 200000) / 1000))
+    rm -f $MOUNTPOINT/zeros.img
   fi
 }
 
+signature() {
+  (cd ${scriptdir}/config;  tar czf ${MOUNTPOINT}/usr/share/cloud/cloud-scripts.tgz *)
+  md5sum ${MOUNTPOINT}/usr/share/cloud/cloud-scripts.tgz |awk '{print $1}'  > ${MOUNTPOINT}/var/cache/cloud/cloud-scripts-signature
+}
+
 mkdir -p $IMAGENAME
 mkdir -p $LOCATION
-MOUNTPOINT=/mnt/$IMAGENAME/
-IMAGELOC=$LOCATION/$IMAGENAME.img
-scriptdir=$(dirname $PWD/$0)
 
 rm -f $IMAGELOC
-
+begin=$(date +%s)
 echo "*************INSTALLING BASEIMAGE********************"
 baseimage
 
@@ -278,26 +444,35 @@ echo "*************CONFIGURING ACPID********************"
 fixacpid
 echo "*************DONE CONFIGURING ACPID********************"
 
-#cp etc/inittab etc/inittab.hvm
-#cp $scriptdir/inittab.xen etc/inittab.xen
-#cp $scriptdir/inittab.xen etc/inittab
-#cp $scriptdir/fstab.xen etc/fstab.xen
-#cp $scriptdir/fstab.xen etc/fstab
-#cp $scriptdir/fstab etc/fstab
-
 echo "*************INSTALLING PACKAGES********************"
 packages
 echo "*************DONE INSTALLING PACKAGES********************"
 
+echo "*************CONFIGURING IPTABLES********************"
+fixiptables
+echo "*************DONE CONFIGURING IPTABLES********************"
+
 echo "*************CONFIGURING PASSWORD********************"
 password
 
+echo "*************CONFIGURING SERVICES********************"
+services
+
+echo "*************CONFIGURING APACHE********************"
+apache2
+
 echo "*************CLEANING UP********************"
 cleanup 
 
+echo "*************GENERATING SIGNATURE********************"
+signature
+
 cd $scriptdir
 
 umount $MOUNTPOINT/proc
 umount $MOUNTPOINT/dev
 umount $MOUNTPOINT
+fin=$(date +%s)
+t=$((fin-begin))
+echo "Finished building image $IMAGELOC in $t seconds"
 
diff --git a/tools/systemvm/debian/config.dat b/tools/systemvm/debian/config.dat
new file mode 100644
index 00000000000..b16638f742e
--- /dev/null
+++ b/tools/systemvm/debian/config.dat
@@ -0,0 +1,398 @@
+Name: adduser/homedir-permission
+Template: adduser/homedir-permission
+Value: true
+Owners: adduser
+
+Name: ca-certificates/enable_crts
+Template: ca-certificates/enable_crts
+Value: brasil.gov.br/brasil.gov.br.crt, cacert.org/cacert.org.crt, cacert.org/class3.crt, cacert.org/root.crt, debconf.org/ca.crt, gouv.fr/cert_igca_dsa.crt, gouv.fr/cert_igca_rsa.crt, mozilla/ABAecom_=sub.__Am._Bankers_Assn.=_Root_CA.crt, mozilla/AddTrust_External_Root.crt, mozilla/AddTrust_Low-Value_Services_Root.crt, mozilla/AddTrust_Public_Services_Root.crt, mozilla/AddTrust_Qualified_Certificates_Root.crt, mozilla/America_Online_Root_Certification_Authority_1.crt, mozilla/America_Online_Root_Certification_Authority_2.crt, mozilla/AOL_Time_Warner_Root_Certification_Authority_1.crt, mozilla/AOL_Time_Warner_Root_Certification_Authority_2.crt, mozilla/Baltimore_CyberTrust_Root.crt, mozilla/beTRUSTed_Root_CA-Baltimore_Implementation.crt, mozilla/beTRUSTed_Root_CA.crt, mozilla/beTRUSTed_Root_CA_-_Entrust_Implementation.crt, mozilla/beTRUSTed_Root_CA_-_RSA_Implementation.crt, mozilla/Camerfirma_Chambers_of_Commerce_Root.crt, mozilla/Camerfirma_Global_Chambersign_Root.crt, mozilla/Certplus_Class_2_Primary_CA.crt, mozilla/Certum_Root_CA.crt, mozilla/Comodo_AAA_Services_root.crt, mozilla/COMODO_Certification_Authority.crt, mozilla/Comodo_Secure_Services_root.crt, mozilla/Comodo_Trusted_Services_root.crt, mozilla/DigiCert_Assured_ID_Root_CA.crt, mozilla/DigiCert_Global_Root_CA.crt, mozilla/DigiCert_High_Assurance_EV_Root_CA.crt, mozilla/Digital_Signature_Trust_Co._Global_CA_1.crt, mozilla/Digital_Signature_Trust_Co._Global_CA_2.crt, mozilla/Digital_Signature_Trust_Co._Global_CA_3.crt, mozilla/Digital_Signature_Trust_Co._Global_CA_4.crt, mozilla/DST_ACES_CA_X6.crt, mozilla/DST_Root_CA_X3.crt, mozilla/Entrust.net_Global_Secure_Personal_CA.crt, mozilla/Entrust.net_Global_Secure_Server_CA.crt, mozilla/Entrust.net_Premium_2048_Secure_Server_CA.crt, mozilla/Entrust.net_Secure_Personal_CA.crt, mozilla/Entrust.net_Secure_Server_CA.crt, mozilla/Entrust_Root_Certification_Authority.crt, mozilla/Equifax_Secure_CA.crt, mozilla/Equifax_Secure_eBusiness_CA_1.crt, mozilla/Equifax_Secure_eBusiness_CA_2.crt, mozilla/Equifax_Secure_Global_eBusiness_CA.crt, mozilla/Firmaprofesional_Root_CA.crt, mozilla/GeoTrust_Global_CA_2.crt, mozilla/GeoTrust_Global_CA.crt, mozilla/GeoTrust_Primary_Certification_Authority.crt, mozilla/GeoTrust_Universal_CA_2.crt, mozilla/GeoTrust_Universal_CA.crt, mozilla/GlobalSign_Root_CA.crt, mozilla/GlobalSign_Root_CA_-_R2.crt, mozilla/Go_Daddy_Class_2_CA.crt, mozilla/GTE_CyberTrust_Global_Root.crt, mozilla/GTE_CyberTrust_Root_CA.crt, mozilla/IPS_Chained_CAs_root.crt, mozilla/IPS_CLASE1_root.crt, mozilla/IPS_CLASE3_root.crt, mozilla/IPS_CLASEA1_root.crt, mozilla/IPS_CLASEA3_root.crt, mozilla/IPS_Servidores_root.crt, mozilla/IPS_Timestamping_root.crt, mozilla/NetLock_Business_=Class_B=_Root.crt, mozilla/NetLock_Express_=Class_C=_Root.crt, mozilla/NetLock_Notary_=Class_A=_Root.crt, mozilla/NetLock_Qualified_=Class_QA=_Root.crt, mozilla/QuoVadis_Root_CA_2.crt, mozilla/QuoVadis_Root_CA_3.crt, mozilla/QuoVadis_Root_CA.crt, mozilla/RSA_Root_Certificate_1.crt, mozilla/RSA_Security_1024_v3.crt, mozilla/RSA_Security_2048_v3.crt, mozilla/Secure_Global_CA.crt, mozilla/SecureTrust_CA.crt, mozilla/Security_Communication_Root_CA.crt, mozilla/Sonera_Class_1_Root_CA.crt, mozilla/Sonera_Class_2_Root_CA.crt, mozilla/Staat_der_Nederlanden_Root_CA.crt, mozilla/Starfield_Class_2_CA.crt, mozilla/StartCom_Certification_Authority.crt, mozilla/StartCom_Ltd..crt, mozilla/Swisscom_Root_CA_1.crt, mozilla/SwissSign_Gold_CA_-_G2.crt, mozilla/SwissSign_Platinum_CA_-_G2.crt, mozilla/SwissSign_Silver_CA_-_G2.crt, mozilla/Taiwan_GRCA.crt, mozilla/TC_TrustCenter__Germany__Class_2_CA.crt, mozilla/TC_TrustCenter__Germany__Class_3_CA.crt, mozilla/TDC_Internet_Root_CA.crt, mozilla/TDC_OCES_Root_CA.crt, mozilla/Thawte_Personal_Basic_CA.crt, mozilla/Thawte_Personal_Freemail_CA.crt, mozilla/Thawte_Personal_Premium_CA.crt, mozilla/Thawte_Premium_Server_CA.crt, mozilla/thawte_Primary_Root_CA.crt, mozilla/Thawte_Server_CA.crt, mozilla/Thawte_Time_Stamping_CA.crt, mozilla/TURKTRUST_Certificate_Services_Provider_Root_1.crt, mozilla/TURKTRUST_Certificate_Services_Provider_Root_2.crt, mozilla/UTN_DATACorp_SGC_Root_CA.crt, mozilla/UTN_USERFirst_Email_Root_CA.crt, mozilla/UTN_USERFirst_Hardware_Root_CA.crt, mozilla/UTN-USER_First-Network_Applications.crt, mozilla/UTN_USERFirst_Object_Root_CA.crt, mozilla/ValiCert_Class_1_VA.crt, mozilla/ValiCert_Class_2_VA.crt, mozilla/Verisign_Class_1_Public_Primary_Certification_Authority.crt, mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.crt, mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.crt, mozilla/Verisign_Class_2_Public_Primary_Certification_Authority.crt, mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.crt, mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.crt, mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt, mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.crt, mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.crt, mozilla/VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.crt, mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G2.crt, mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.crt, mozilla/Verisign_RSA_Secure_Server_CA.crt, mozilla/Verisign_Time_Stamping_Authority_CA.crt, mozilla/Visa_eCommerce_Root.crt, mozilla/Visa_International_Global_Root_2.crt, mozilla/Wells_Fargo_Root_CA.crt, mozilla/XRamp_Global_CA_Root.crt, quovadis.bm/QuoVadis_Root_Certification_Authority.crt, signet.pl/signet_ca1_pem.crt, signet.pl/signet_ca2_pem.crt, signet.pl/signet_ca3_pem.crt, signet.pl/signet_ocspklasa2_pem.crt, signet.pl/signet_ocspklasa3_pem.crt, signet.pl/signet_pca2_pem.crt, signet.pl/signet_pca3_pem.crt, signet.pl/signet_rootca_pem.crt, signet.pl/signet_tsa1_pem.crt, spi-inc.org/spi-ca-2003.crt, spi-inc.org/spi-cacert-2008.crt, telesec.de/deutsche-telekom-root-ca-2.crt
+Owners: ca-certificates
+Variables:
+ enable_crts = brasil.gov.br/brasil.gov.br.crt, cacert.org/cacert.org.crt, cacert.org/class3.crt, cacert.org/root.crt, debconf.org/ca.crt, gouv.fr/cert_igca_dsa.crt, gouv.fr/cert_igca_rsa.crt, mozilla/ABAecom_=sub.__Am._Bankers_Assn.=_Root_CA.crt, mozilla/AddTrust_External_Root.crt, mozilla/AddTrust_Low-Value_Services_Root.crt, mozilla/AddTrust_Public_Services_Root.crt, mozilla/AddTrust_Qualified_Certificates_Root.crt, mozilla/America_Online_Root_Certification_Authority_1.crt, mozilla/America_Online_Root_Certification_Authority_2.crt, mozilla/AOL_Time_Warner_Root_Certification_Authority_1.crt, mozilla/AOL_Time_Warner_Root_Certification_Authority_2.crt, mozilla/Baltimore_CyberTrust_Root.crt, mozilla/beTRUSTed_Root_CA-Baltimore_Implementation.crt, mozilla/beTRUSTed_Root_CA.crt, mozilla/beTRUSTed_Root_CA_-_Entrust_Implementation.crt, mozilla/beTRUSTed_Root_CA_-_RSA_Implementation.crt, mozilla/Camerfirma_Chambers_of_Commerce_Root.crt, mozilla/Camerfirma_Global_Chambersign_Root.crt, mozilla/Certplus_Class_2_Primary_CA.crt, mozilla/Certum_Root_CA.crt, mozilla/Comodo_AAA_Services_root.crt, mozilla/COMODO_Certification_Authority.crt, mozilla/Comodo_Secure_Services_root.crt, mozilla/Comodo_Trusted_Services_root.crt, mozilla/DigiCert_Assured_ID_Root_CA.crt, mozilla/DigiCert_Global_Root_CA.crt, mozilla/DigiCert_High_Assurance_EV_Root_CA.crt, mozilla/Digital_Signature_Trust_Co._Global_CA_1.crt, mozilla/Digital_Signature_Trust_Co._Global_CA_2.crt, mozilla/Digital_Signature_Trust_Co._Global_CA_3.crt, mozilla/Digital_Signature_Trust_Co._Global_CA_4.crt, mozilla/DST_ACES_CA_X6.crt, mozilla/DST_Root_CA_X3.crt, mozilla/Entrust.net_Global_Secure_Personal_CA.crt, mozilla/Entrust.net_Global_Secure_Server_CA.crt, mozilla/Entrust.net_Premium_2048_Secure_Server_CA.crt, mozilla/Entrust.net_Secure_Personal_CA.crt, mozilla/Entrust.net_Secure_Server_CA.crt, mozilla/Entrust_Root_Certification_Authority.crt, mozilla/Equifax_Secure_CA.crt, mozilla/Equifax_Secure_eBusiness_CA_1.crt, 
mozilla/Equifax_Secure_eBusiness_CA_2.crt, mozilla/Equifax_Secure_Global_eBusiness_CA.crt, mozilla/Firmaprofesional_Root_CA.crt, mozilla/GeoTrust_Global_CA_2.crt, mozilla/GeoTrust_Global_CA.crt, mozilla/GeoTrust_Primary_Certification_Authority.crt, mozilla/GeoTrust_Universal_CA_2.crt, mozilla/GeoTrust_Universal_CA.crt, mozilla/GlobalSign_Root_CA.crt, mozilla/GlobalSign_Root_CA_-_R2.crt, mozilla/Go_Daddy_Class_2_CA.crt, mozilla/GTE_CyberTrust_Global_Root.crt, mozilla/GTE_CyberTrust_Root_CA.crt, mozilla/IPS_Chained_CAs_root.crt, mozilla/IPS_CLASE1_root.crt, mozilla/IPS_CLASE3_root.crt, mozilla/IPS_CLASEA1_root.crt, mozilla/IPS_CLASEA3_root.crt, mozilla/IPS_Servidores_root.crt, mozilla/IPS_Timestamping_root.crt, mozilla/NetLock_Business_=Class_B=_Root.crt, mozilla/NetLock_Express_=Class_C=_Root.crt, mozilla/NetLock_Notary_=Class_A=_Root.crt, mozilla/NetLock_Qualified_=Class_QA=_Root.crt, mozilla/QuoVadis_Root_CA_2.crt, mozilla/QuoVadis_Root_CA_3.crt, mozilla/QuoVadis_Root_CA.crt, mozilla/RSA_Root_Certificate_1.crt, mozilla/RSA_Security_1024_v3.crt, mozilla/RSA_Security_2048_v3.crt, mozilla/Secure_Global_CA.crt, mozilla/SecureTrust_CA.crt, mozilla/Security_Communication_Root_CA.crt, mozilla/Sonera_Class_1_Root_CA.crt, mozilla/Sonera_Class_2_Root_CA.crt, mozilla/Staat_der_Nederlanden_Root_CA.crt, mozilla/Starfield_Class_2_CA.crt, mozilla/StartCom_Certification_Authority.crt, mozilla/StartCom_Ltd..crt, mozilla/Swisscom_Root_CA_1.crt, mozilla/SwissSign_Gold_CA_-_G2.crt, mozilla/SwissSign_Platinum_CA_-_G2.crt, mozilla/SwissSign_Silver_CA_-_G2.crt, mozilla/Taiwan_GRCA.crt, mozilla/TC_TrustCenter__Germany__Class_2_CA.crt, mozilla/TC_TrustCenter__Germany__Class_3_CA.crt, mozilla/TDC_Internet_Root_CA.crt, mozilla/TDC_OCES_Root_CA.crt, mozilla/Thawte_Personal_Basic_CA.crt, mozilla/Thawte_Personal_Freemail_CA.crt, mozilla/Thawte_Personal_Premium_CA.crt, mozilla/Thawte_Premium_Server_CA.crt, mozilla/thawte_Primary_Root_CA.crt, mozilla/Thawte_Server_CA.crt, 
mozilla/Thawte_Time_Stamping_CA.crt, mozilla/TURKTRUST_Certificate_Services_Provider_Root_1.crt, mozilla/TURKTRUST_Certificate_Services_Provider_Root_2.crt, mozilla/UTN_DATACorp_SGC_Root_CA.crt, mozilla/UTN_USERFirst_Email_Root_CA.crt, mozilla/UTN_USERFirst_Hardware_Root_CA.crt, mozilla/UTN-USER_First-Network_Applications.crt, mozilla/UTN_USERFirst_Object_Root_CA.crt, mozilla/ValiCert_Class_1_VA.crt, mozilla/ValiCert_Class_2_VA.crt, mozilla/Verisign_Class_1_Public_Primary_Certification_Authority.crt, mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.crt, mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.crt, mozilla/Verisign_Class_2_Public_Primary_Certification_Authority.crt, mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.crt, mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.crt, mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt, mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.crt, mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.crt, mozilla/VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.crt, mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G2.crt, mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.crt, mozilla/Verisign_RSA_Secure_Server_CA.crt, mozilla/Verisign_Time_Stamping_Authority_CA.crt, mozilla/Visa_eCommerce_Root.crt, mozilla/Visa_International_Global_Root_2.crt, mozilla/Wells_Fargo_Root_CA.crt, mozilla/XRamp_Global_CA_Root.crt, quovadis.bm/QuoVadis_Root_Certification_Authority.crt, signet.pl/signet_ca1_pem.crt, signet.pl/signet_ca2_pem.crt, signet.pl/signet_ca3_pem.crt, signet.pl/signet_ocspklasa2_pem.crt, signet.pl/signet_ocspklasa3_pem.crt, signet.pl/signet_pca2_pem.crt, signet.pl/signet_pca3_pem.crt, signet.pl/signet_rootca_pem.crt, signet.pl/signet_tsa1_pem.crt, spi-inc.org/spi-ca-2003.crt, spi-inc.org/spi-cacert-2008.crt, telesec.de/deutsche-telekom-root-ca-2.crt
+
+Name: ca-certificates/new_crts
+Template: ca-certificates/new_crts
+Owners: ca-certificates
+Variables:
+ new_crts = 
+
+Name: ca-certificates/trust_new_crts
+Template: ca-certificates/trust_new_crts
+Value: yes
+Owners: ca-certificates
+
+Name: debconf-apt-progress/info
+Template: debconf-apt-progress/info
+Owners: debconf
+
+Name: debconf-apt-progress/media-change
+Template: debconf-apt-progress/media-change
+Owners: debconf
+
+Name: debconf-apt-progress/preparing
+Template: debconf-apt-progress/preparing
+Owners: debconf
+
+Name: debconf-apt-progress/title
+Template: debconf-apt-progress/title
+Owners: debconf
+
+Name: debconf/frontend
+Template: debconf/frontend
+Value: noninteractive
+Owners: debconf
+
+Name: debconf/priority
+Template: debconf/priority
+Value: high
+Owners: debconf
+
+Name: dhcp3-client/dhclient-needs-restarting
+Template: dhcp3-client/dhclient-needs-restarting
+Owners: dhcp3-client
+
+Name: dhcp3-client/dhclient-script_moved
+Template: dhcp3-client/dhclient-script_moved
+Owners: dhcp3-client
+
+Name: glibc/restart-failed
+Template: glibc/restart-failed
+Owners: libc6
+
+Name: glibc/restart-services
+Template: glibc/restart-services
+Owners: libc6
+
+Name: glibc/upgrade
+Template: glibc/upgrade
+Owners: libc6
+
+Name: libpam-modules/disable-screensaver
+Template: libpam-modules/disable-screensaver
+Owners: libpam-modules
+
+Name: libpam0g/restart-failed
+Template: libpam0g/restart-failed
+Owners: libpam0g
+
+Name: libpam0g/restart-services
+Template: libpam0g/restart-services
+Owners: libpam0g
+
+Name: libpam0g/xdm-needs-restart
+Template: libpam0g/xdm-needs-restart
+Owners: libpam0g
+
+Name: libssl0.9.8/restart-failed
+Template: libssl0.9.8/restart-failed
+Owners: libssl0.9.8
+
+Name: libssl0.9.8/restart-services
+Template: libssl0.9.8/restart-services
+Owners: libssl0.9.8
+
+Name: linux-base/disk-id-convert-auto
+Template: linux-base/disk-id-convert-auto
+Owners: linux-base
+
+Name: linux-base/disk-id-convert-plan
+Template: linux-base/disk-id-convert-plan
+Owners: linux-base
+
+Name: linux-base/disk-id-convert-plan-no-relabel
+Template: linux-base/disk-id-convert-plan-no-relabel
+Owners: linux-base
+
+Name: linux-base/disk-id-manual
+Template: linux-base/disk-id-manual
+Owners: linux-base
+
+Name: linux-base/disk-id-manual-boot-loader
+Template: linux-base/disk-id-manual-boot-loader
+Owners: linux-base
+
+Name: linux-image-2.6.32-bpo.5-686/postinst/bootloader-error-2.6.32-bpo.5-686
+Template: linux-image-2.6.32-bpo.5-686/postinst/bootloader-error-2.6.32-bpo.5-686
+Owners: linux-image-2.6.32-bpo.5-686
+
+Name: linux-image-2.6.32-bpo.5-686/postinst/bootloader-test-error-2.6.32-bpo.5-686
+Template: linux-image-2.6.32-bpo.5-686/postinst/bootloader-test-error-2.6.32-bpo.5-686
+Owners: linux-image-2.6.32-bpo.5-686
+
+Name: linux-image-2.6.32-bpo.5-686/postinst/depmod-error-initrd-2.6.32-bpo.5-686
+Template: linux-image-2.6.32-bpo.5-686/postinst/depmod-error-initrd-2.6.32-bpo.5-686
+Owners: linux-image-2.6.32-bpo.5-686
+
+Name: linux-image-2.6.32-bpo.5-686/postinst/missing-firmware-2.6.32-bpo.5-686
+Template: linux-image-2.6.32-bpo.5-686/postinst/missing-firmware-2.6.32-bpo.5-686
+Owners: linux-image-2.6.32-bpo.5-686
+
+Name: linux-image-2.6.32-bpo.5-686/prerm/removing-running-kernel-2.6.32-bpo.5-686
+Template: linux-image-2.6.32-bpo.5-686/prerm/removing-running-kernel-2.6.32-bpo.5-686
+Owners: linux-image-2.6.32-bpo.5-686
+
+Name: linux-image-2.6.32-bpo.5-686/prerm/would-invalidate-boot-loader-2.6.32-bpo.5-686
+Template: linux-image-2.6.32-bpo.5-686/prerm/would-invalidate-boot-loader-2.6.32-bpo.5-686
+Owners: linux-image-2.6.32-bpo.5-686
+
+Name: linux-image-2.6.32-bpo.5-xen-686/postinst/bootloader-error-2.6.32-bpo.5-xen-686
+Template: linux-image-2.6.32-bpo.5-xen-686/postinst/bootloader-error-2.6.32-bpo.5-xen-686
+Owners: linux-image-2.6.32-bpo.5-xen-686
+
+Name: linux-image-2.6.32-bpo.5-xen-686/postinst/bootloader-test-error-2.6.32-bpo.5-xen-686
+Template: linux-image-2.6.32-bpo.5-xen-686/postinst/bootloader-test-error-2.6.32-bpo.5-xen-686
+Owners: linux-image-2.6.32-bpo.5-xen-686
+
+Name: linux-image-2.6.32-bpo.5-xen-686/postinst/depmod-error-initrd-2.6.32-bpo.5-xen-686
+Template: linux-image-2.6.32-bpo.5-xen-686/postinst/depmod-error-initrd-2.6.32-bpo.5-xen-686
+Owners: linux-image-2.6.32-bpo.5-xen-686
+
+Name: linux-image-2.6.32-bpo.5-xen-686/postinst/missing-firmware-2.6.32-bpo.5-xen-686
+Template: linux-image-2.6.32-bpo.5-xen-686/postinst/missing-firmware-2.6.32-bpo.5-xen-686
+Owners: linux-image-2.6.32-bpo.5-xen-686
+
+Name: linux-image-2.6.32-bpo.5-xen-686/prerm/removing-running-kernel-2.6.32-bpo.5-xen-686
+Template: linux-image-2.6.32-bpo.5-xen-686/prerm/removing-running-kernel-2.6.32-bpo.5-xen-686
+Owners: linux-image-2.6.32-bpo.5-xen-686
+
+Name: linux-image-2.6.32-bpo.5-xen-686/prerm/would-invalidate-boot-loader-2.6.32-bpo.5-xen-686
+Template: linux-image-2.6.32-bpo.5-xen-686/prerm/would-invalidate-boot-loader-2.6.32-bpo.5-xen-686
+Owners: linux-image-2.6.32-bpo.5-xen-686
+
+Name: locales/default_environment_locale
+Template: locales/default_environment_locale
+Value: en_US.UTF-8
+Owners: locales
+Variables:
+ locales = en_US.UTF-8
+
+Name: locales/locales_to_be_generated
+Template: locales/locales_to_be_generated
+Value: en_US.UTF-8 UTF-8
+Owners: locales
+Variables:
+ locales = aa_DJ ISO-8859-1, aa_DJ.UTF-8 UTF-8, aa_ER UTF-8, aa_ER@saaho UTF-8, aa_ET UTF-8, af_ZA ISO-8859-1, af_ZA.UTF-8 UTF-8, am_ET UTF-8, an_ES ISO-8859-15, an_ES.UTF-8 UTF-8, ar_AE ISO-8859-6, ar_AE.UTF-8 UTF-8, ar_BH ISO-8859-6, ar_BH.UTF-8 UTF-8, ar_DZ ISO-8859-6, ar_DZ.UTF-8 UTF-8, ar_EG ISO-8859-6, ar_EG.UTF-8 UTF-8, ar_IN UTF-8, ar_IQ ISO-8859-6, ar_IQ.UTF-8 UTF-8, ar_JO ISO-8859-6, ar_JO.UTF-8 UTF-8, ar_KW ISO-8859-6, ar_KW.UTF-8 UTF-8, ar_LB ISO-8859-6, ar_LB.UTF-8 UTF-8, ar_LY ISO-8859-6, ar_LY.UTF-8 UTF-8, ar_MA ISO-8859-6, ar_MA.UTF-8 UTF-8, ar_OM ISO-8859-6, ar_OM.UTF-8 UTF-8, ar_QA ISO-8859-6, ar_QA.UTF-8 UTF-8, ar_SA ISO-8859-6, ar_SA.UTF-8 UTF-8, ar_SD ISO-8859-6, ar_SD.UTF-8 UTF-8, ar_SY ISO-8859-6, ar_SY.UTF-8 UTF-8, ar_TN ISO-8859-6, ar_TN.UTF-8 UTF-8, ar_YE ISO-8859-6, ar_YE.UTF-8 UTF-8, as_IN.UTF-8 UTF-8, ast_ES ISO-8859-15, ast_ES.UTF-8 UTF-8, az_AZ.UTF-8 UTF-8, be_BY CP1251, be_BY.UTF-8 UTF-8, be_BY@latin UTF-8, ber_DZ UTF-8, ber_MA UTF-8, bg_BG CP1251, bg_BG.UTF-8 UTF-8, bn_BD UTF-8, bn_IN UTF-8, br_FR ISO-8859-1, br_FR.UTF-8 UTF-8, br_FR@euro ISO-8859-15, bs_BA ISO-8859-2, bs_BA.UTF-8 UTF-8, byn_ER UTF-8, ca_AD ISO-8859-15, ca_AD.UTF-8 UTF-8, ca_ES ISO-8859-1, ca_ES.UTF-8 UTF-8, ca_ES.UTF-8@valencia UTF-8, ca_ES@euro ISO-8859-15, ca_ES@valencia ISO-8859-15, ca_FR ISO-8859-15, ca_FR.UTF-8 UTF-8, ca_IT ISO-8859-15, ca_IT.UTF-8 UTF-8, crh_UA UTF-8, cs_CZ ISO-8859-2, cs_CZ.UTF-8 UTF-8, csb_PL UTF-8, cy_GB ISO-8859-14, cy_GB.UTF-8 UTF-8, da_DK ISO-8859-1, da_DK.ISO-8859-15 ISO-8859-15, da_DK.UTF-8 UTF-8, de_AT ISO-8859-1, de_AT.UTF-8 UTF-8, de_AT@euro ISO-8859-15, de_BE ISO-8859-1, de_BE.UTF-8 UTF-8, de_BE@euro ISO-8859-15, de_CH ISO-8859-1, de_CH.UTF-8 UTF-8, de_DE ISO-8859-1, de_DE.UTF-8 UTF-8, de_DE@euro ISO-8859-15, de_LI.UTF-8 UTF-8, de_LU ISO-8859-1, de_LU.UTF-8 UTF-8, de_LU@euro ISO-8859-15, dz_BT UTF-8, el_CY ISO-8859-7, el_CY.UTF-8 UTF-8, el_GR ISO-8859-7, el_GR.UTF-8 UTF-8, en_AU ISO-8859-1, en_AU.UTF-8 UTF-8, en_BW ISO-8859-1, 
en_BW.UTF-8 UTF-8, en_CA ISO-8859-1, en_CA.UTF-8 UTF-8, en_DK ISO-8859-1, en_DK.ISO-8859-15 ISO-8859-15, en_DK.UTF-8 UTF-8, en_GB ISO-8859-1, en_GB.ISO-8859-15 ISO-8859-15, en_GB.UTF-8 UTF-8, en_HK ISO-8859-1, en_HK.UTF-8 UTF-8, en_IE ISO-8859-1, en_IE.UTF-8 UTF-8, en_IE@euro ISO-8859-15, en_IN UTF-8, en_NG UTF-8, en_NZ ISO-8859-1, en_NZ.UTF-8 UTF-8, en_PH ISO-8859-1, en_PH.UTF-8 UTF-8, en_SG ISO-8859-1, en_SG.UTF-8 UTF-8, en_US ISO-8859-1, en_US.ISO-8859-15 ISO-8859-15, en_US.UTF-8 UTF-8, en_ZA ISO-8859-1, en_ZA.UTF-8 UTF-8, en_ZW ISO-8859-1, en_ZW.UTF-8 UTF-8, eo ISO-8859-3, eo.UTF-8 UTF-8, es_AR ISO-8859-1, es_AR.UTF-8 UTF-8, es_BO ISO-8859-1, es_BO.UTF-8 UTF-8, es_CL ISO-8859-1, es_CL.UTF-8 UTF-8, es_CO ISO-8859-1, es_CO.UTF-8 UTF-8, es_CR ISO-8859-1, es_CR.UTF-8 UTF-8, es_DO ISO-8859-1, es_DO.UTF-8 UTF-8, es_EC ISO-8859-1, es_EC.UTF-8 UTF-8, es_ES ISO-8859-1, es_ES.UTF-8 UTF-8, es_ES@euro ISO-8859-15, es_GT ISO-8859-1, es_GT.UTF-8 UTF-8, es_HN ISO-8859-1, es_HN.UTF-8 UTF-8, es_MX ISO-8859-1, es_MX.UTF-8 UTF-8, es_NI ISO-8859-1, es_NI.UTF-8 UTF-8, es_PA ISO-8859-1, es_PA.UTF-8 UTF-8, es_PE ISO-8859-1, es_PE.UTF-8 UTF-8, es_PR ISO-8859-1, es_PR.UTF-8 UTF-8, es_PY ISO-8859-1, es_PY.UTF-8 UTF-8, es_SV ISO-8859-1, es_SV.UTF-8 UTF-8, es_US ISO-8859-1, es_US.UTF-8 UTF-8, es_UY ISO-8859-1, es_UY.UTF-8 UTF-8, es_VE ISO-8859-1, es_VE.UTF-8 UTF-8, et_EE ISO-8859-1, et_EE.ISO-8859-15 ISO-8859-15, et_EE.UTF-8 UTF-8, eu_ES ISO-8859-1, eu_ES.UTF-8 UTF-8, eu_ES@euro ISO-8859-15, eu_FR ISO-8859-1, eu_FR.UTF-8 UTF-8, eu_FR@euro ISO-8859-15, fa_IR UTF-8, fi_FI ISO-8859-1, fi_FI.UTF-8 UTF-8, fi_FI@euro ISO-8859-15, fil_PH UTF-8, fo_FO ISO-8859-1, fo_FO.UTF-8 UTF-8, fr_BE ISO-8859-1, fr_BE.UTF-8 UTF-8, fr_BE@euro ISO-8859-15, fr_CA ISO-8859-1, fr_CA.UTF-8 UTF-8, fr_CH ISO-8859-1, fr_CH.UTF-8 UTF-8, fr_FR ISO-8859-1, fr_FR.UTF-8 UTF-8, fr_FR@euro ISO-8859-15, fr_LU ISO-8859-1, fr_LU.UTF-8 UTF-8, fr_LU@euro ISO-8859-15, fur_IT UTF-8, fy_DE UTF-8, fy_NL UTF-8, ga_IE ISO-8859-1, 
ga_IE.UTF-8 UTF-8, ga_IE@euro ISO-8859-15, gd_GB ISO-8859-15, gd_GB.UTF-8 UTF-8, gez_ER UTF-8, gez_ER@abegede UTF-8, gez_ET UTF-8, gez_ET@abegede UTF-8, gl_ES ISO-8859-1, gl_ES.UTF-8 UTF-8, gl_ES@euro ISO-8859-15, gu_IN UTF-8, gv_GB ISO-8859-1, gv_GB.UTF-8 UTF-8, ha_NG UTF-8, he_IL ISO-8859-8, he_IL.UTF-8 UTF-8, hi_IN UTF-8, hr_HR ISO-8859-2, hr_HR.UTF-8 UTF-8, hsb_DE ISO-8859-2, hsb_DE.UTF-8 UTF-8, hu_HU ISO-8859-2, hu_HU.UTF-8 UTF-8, hy_AM UTF-8, hy_AM.ARMSCII-8 ARMSCII-8, ia UTF-8, id_ID ISO-8859-1, id_ID.UTF-8 UTF-8, ig_NG UTF-8, ik_CA UTF-8, is_IS ISO-8859-1, is_IS.UTF-8 UTF-8, it_CH ISO-8859-1, it_CH.UTF-8 UTF-8, it_IT ISO-8859-1, it_IT.UTF-8 UTF-8, it_IT@euro ISO-8859-15, iu_CA UTF-8, iw_IL ISO-8859-8, iw_IL.UTF-8 UTF-8, ja_JP.EUC-JP EUC-JP, ja_JP.UTF-8 UTF-8, ka_GE GEORGIAN-PS, ka_GE.UTF-8 UTF-8, kk_KZ PT154, kk_KZ.UTF-8 UTF-8, kl_GL ISO-8859-1, kl_GL.UTF-8 UTF-8, km_KH UTF-8, kn_IN UTF-8, ko_KR.EUC-KR EUC-KR, ko_KR.UTF-8 UTF-8, ks_IN UTF-8, ku_TR ISO-8859-9, ku_TR.UTF-8 UTF-8, kw_GB ISO-8859-1, kw_GB.UTF-8 UTF-8, ky_KG UTF-8, lg_UG ISO-8859-10, lg_UG.UTF-8 UTF-8, li_BE UTF-8, li_NL UTF-8, lo_LA UTF-8, lt_LT ISO-8859-13, lt_LT.UTF-8 UTF-8, lv_LV ISO-8859-13, lv_LV.UTF-8 UTF-8, mai_IN UTF-8, mg_MG ISO-8859-15, mg_MG.UTF-8 UTF-8, mi_NZ ISO-8859-13, mi_NZ.UTF-8 UTF-8, mk_MK ISO-8859-5, mk_MK.UTF-8 UTF-8, ml_IN UTF-8, mn_MN UTF-8, mr_IN UTF-8, ms_MY ISO-8859-1, ms_MY.UTF-8 UTF-8, mt_MT ISO-8859-3, mt_MT.UTF-8 UTF-8, nb_NO ISO-8859-1, nb_NO.UTF-8 UTF-8, nds_DE UTF-8, nds_NL UTF-8, ne_NP UTF-8, nl_BE ISO-8859-1, nl_BE.UTF-8 UTF-8, nl_BE@euro ISO-8859-15, nl_NL ISO-8859-1, nl_NL.UTF-8 UTF-8, nl_NL@euro ISO-8859-15, nn_NO ISO-8859-1, nn_NO.UTF-8 UTF-8, nr_ZA UTF-8, nso_ZA UTF-8, oc_FR ISO-8859-1, oc_FR.UTF-8 UTF-8, om_ET UTF-8, om_KE ISO-8859-1, om_KE.UTF-8 UTF-8, or_IN UTF-8, pa_IN UTF-8, pa_PK UTF-8, pap_AN UTF-8, pl_PL ISO-8859-2, pl_PL.UTF-8 UTF-8, pt_BR ISO-8859-1, pt_BR.UTF-8 UTF-8, pt_PT ISO-8859-1, pt_PT.UTF-8 UTF-8, pt_PT@euro ISO-8859-15, ro_RO 
ISO-8859-2, ro_RO.UTF-8 UTF-8, ru_RU ISO-8859-5, ru_RU.CP1251 CP1251, ru_RU.KOI8-R KOI8-R, ru_RU.UTF-8 UTF-8, ru_UA KOI8-U, ru_UA.UTF-8 UTF-8, rw_RW UTF-8, sa_IN UTF-8, sc_IT UTF-8, se_NO UTF-8, si_LK UTF-8, sid_ET UTF-8, sk_SK ISO-8859-2, sk_SK.UTF-8 UTF-8, sl_SI ISO-8859-2, sl_SI.UTF-8 UTF-8, so_DJ ISO-8859-1, so_DJ.UTF-8 UTF-8, so_ET UTF-8, so_KE ISO-8859-1, so_KE.UTF-8 UTF-8, so_SO ISO-8859-1, so_SO.UTF-8 UTF-8, sq_AL ISO-8859-1, sq_AL.UTF-8 UTF-8, sr_ME UTF-8, sr_RS UTF-8, sr_RS@latin UTF-8, ss_ZA UTF-8, st_ZA ISO-8859-1, st_ZA.UTF-8 UTF-8, sv_FI ISO-8859-1, sv_FI.UTF-8 UTF-8, sv_FI@euro ISO-8859-15, sv_SE ISO-8859-1, sv_SE.ISO-8859-15 ISO-8859-15, sv_SE.UTF-8 UTF-8, ta_IN UTF-8, te_IN UTF-8, tg_TJ KOI8-T, tg_TJ.UTF-8 UTF-8, th_TH TIS-620, th_TH.UTF-8 UTF-8, ti_ER UTF-8, ti_ET UTF-8, tig_ER UTF-8, tk_TM UTF-8, tl_PH ISO-8859-1, tl_PH.UTF-8 UTF-8, tn_ZA UTF-8, tr_CY ISO-8859-9, tr_CY.UTF-8 UTF-8, tr_TR ISO-8859-9, tr_TR.UTF-8 UTF-8, ts_ZA UTF-8, tt_RU.UTF-8 UTF-8, tt_RU@iqtelif.UTF-8 UTF-8, ug_CN UTF-8, uk_UA KOI8-U, uk_UA.UTF-8 UTF-8, ur_PK UTF-8, uz_UZ ISO-8859-1, uz_UZ.UTF-8 UTF-8, uz_UZ@cyrillic UTF-8, ve_ZA UTF-8, vi_VN UTF-8, vi_VN.TCVN TCVN5712-1, wa_BE ISO-8859-1, wa_BE.UTF-8 UTF-8, wa_BE@euro ISO-8859-15, wo_SN UTF-8, xh_ZA ISO-8859-1, xh_ZA.UTF-8 UTF-8, yi_US CP1255, yi_US.UTF-8 UTF-8, yo_NG UTF-8, zh_CN GB2312, zh_CN.GB18030 GB18030, zh_CN.GBK GBK, zh_CN.UTF-8 UTF-8, zh_HK BIG5-HKSCS, zh_HK.UTF-8 UTF-8, zh_SG GB2312, zh_SG.GBK GBK, zh_SG.UTF-8 UTF-8, zh_TW BIG5, zh_TW.EUC-TW EUC-TW, zh_TW.UTF-8 UTF-8, zu_ZA ISO-8859-1, zu_ZA.UTF-8 UTF-8
+
+Name: openswan/create_rsa_key
+Template: openswan/create_rsa_key
+Value: true
+Owners: openswan
+Flags: seen
+
+Name: openswan/enable-oe
+Template: openswan/enable-oe
+Value: false
+Owners: openswan
+Flags: seen
+
+Name: openswan/existing_x509_certificate
+Template: openswan/existing_x509_certificate
+Value: false
+Owners: openswan
+Flags: seen
+
+Name: openswan/existing_x509_certificate_filename
+Template: openswan/existing_x509_certificate_filename
+Owners: openswan
+
+Name: openswan/existing_x509_key_filename
+Template: openswan/existing_x509_key_filename
+Owners: openswan
+
+Name: openswan/restart
+Template: openswan/restart
+Value: true
+Owners: openswan
+
+Name: openswan/rsa_key_length
+Template: openswan/rsa_key_length
+Value: 2048
+Owners: openswan
+
+Name: openswan/rsa_key_type
+Template: openswan/rsa_key_type
+Value: x509
+Owners: openswan
+Flags: seen
+
+Name: openswan/start_level
+Template: openswan/start_level
+Value: earliest
+Owners: openswan
+
+Name: openswan/x509_common_name
+Template: openswan/x509_common_name
+Value: 
+Owners: openswan
+
+Name: openswan/x509_country_code
+Template: openswan/x509_country_code
+Value: AT
+Owners: openswan
+
+Name: openswan/x509_email_address
+Template: openswan/x509_email_address
+Value: 
+Owners: openswan
+
+Name: openswan/x509_locality_name
+Template: openswan/x509_locality_name
+Value: 
+Owners: openswan
+
+Name: openswan/x509_organization_name
+Template: openswan/x509_organization_name
+Value: 
+Owners: openswan
+
+Name: openswan/x509_organizational_unit
+Template: openswan/x509_organizational_unit
+Value: 
+Owners: openswan
+
+Name: openswan/x509_self_signed
+Template: openswan/x509_self_signed
+Value: true
+Owners: openswan
+Flags: seen
+
+Name: openswan/x509_state_name
+Template: openswan/x509_state_name
+Value: 
+Owners: openswan
+
+Name: portmap/loopback
+Template: portmap/loopback
+Value: false
+Owners: portmap
+
+Name: shared/accepted-sun-dlj-v1-1
+Template: shared/accepted-sun-dlj-v1-1
+Value: true
+Owners: sun-java6-bin, sun-java6-jre
+Flags: seen
+
+Name: shared/error-sun-dlj-v1-1
+Template: shared/error-sun-dlj-v1-1
+Owners: sun-java6-bin, sun-java6-jre
+
+Name: shared/kernel-image/really-run-bootloader
+Template: shared/kernel-image/really-run-bootloader
+Owners: linux-image-2.6.32-bpo.5-686, linux-image-2.6.32-bpo.5-xen-686
+
+Name: shared/present-sun-dlj-v1-1
+Template: shared/present-sun-dlj-v1-1
+Value: true
+Owners: sun-java6-bin, sun-java6-jre
+Flags: seen
+
+Name: ssh/disable_cr_auth
+Template: ssh/disable_cr_auth
+Owners: openssh-server
+
+Name: ssh/encrypted_host_key_but_no_keygen
+Template: ssh/encrypted_host_key_but_no_keygen
+Owners: openssh-server
+
+Name: ssh/new_config
+Template: ssh/new_config
+Owners: openssh-server
+
+Name: ssh/use_old_init_script
+Template: ssh/use_old_init_script
+Value: true
+Owners: openssh-server
+Flags: seen
+
+Name: ssh/vulnerable_host_keys
+Template: ssh/vulnerable_host_keys
+Owners: openssh-server
+
+Name: sun-java6-jre/jcepolicy
+Template: sun-java6-jre/jcepolicy
+Owners: sun-java6-jre
+
+Name: sun-java6-jre/stopthread
+Template: sun-java6-jre/stopthread
+Owners: sun-java6-jre
+
+Name: tzdata/Areas
+Template: tzdata/Areas
+Value: Etc
+Owners: tzdata
+Flags: seen
+
+Name: tzdata/Zones/Africa
+Template: tzdata/Zones/Africa
+Owners: tzdata
+
+Name: tzdata/Zones/America
+Template: tzdata/Zones/America
+Owners: tzdata
+
+Name: tzdata/Zones/Antarctica
+Template: tzdata/Zones/Antarctica
+Owners: tzdata
+
+Name: tzdata/Zones/Arctic
+Template: tzdata/Zones/Arctic
+Owners: tzdata
+
+Name: tzdata/Zones/Asia
+Template: tzdata/Zones/Asia
+Owners: tzdata
+
+Name: tzdata/Zones/Atlantic
+Template: tzdata/Zones/Atlantic
+Owners: tzdata
+
+Name: tzdata/Zones/Australia
+Template: tzdata/Zones/Australia
+Owners: tzdata
+
+Name: tzdata/Zones/Etc
+Template: tzdata/Zones/Etc
+Value: UTC
+Owners: tzdata
+Flags: seen
+
+Name: tzdata/Zones/Europe
+Template: tzdata/Zones/Europe
+Owners: tzdata
+
+Name: tzdata/Zones/Indian
+Template: tzdata/Zones/Indian
+Owners: tzdata
+
+Name: tzdata/Zones/Pacific
+Template: tzdata/Zones/Pacific
+Owners: tzdata
+
+Name: tzdata/Zones/SystemV
+Template: tzdata/Zones/SystemV
+Owners: tzdata
+
+Name: ucf/changeprompt
+Template: ucf/changeprompt
+Owners: ucf
+
+Name: ucf/changeprompt_threeway
+Template: ucf/changeprompt_threeway
+Owners: ucf
+
+Name: ucf/show_diff
+Template: ucf/show_diff
+Owners: ucf
+
+Name: ucf/title
+Template: ucf/title
+Owners: ucf
+
+Name: udev/new_kernel_needed
+Template: udev/new_kernel_needed
+Owners: udev
+
+Name: udev/reboot_needed
+Template: udev/reboot_needed
+Owners: udev
+
diff --git a/tools/systemvm/debian/config/etc/apache2/httpd.conf b/tools/systemvm/debian/config/etc/apache2/httpd.conf
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tools/systemvm/debian/config/etc/apache2/ports.conf b/tools/systemvm/debian/config/etc/apache2/ports.conf
new file mode 100644
index 00000000000..369cb295e00
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/apache2/ports.conf
@@ -0,0 +1,23 @@
+# If you just change the port or add more ports here, you will likely also
+# have to change the VirtualHost statement in
+# /etc/apache2/sites-enabled/000-default
+# This is also true if you have upgraded from before 2.2.9-3 (i.e. from
+# Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
+# README.Debian.gz
+
+NameVirtualHost 10.1.1.1:80
+Listen 10.1.1.1:80
+
+<IfModule mod_ssl.c>
+    # If you add NameVirtualHost *:443 here, you will also have to change
+    # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
+    # to <VirtualHost *:443>
+    # Server Name Indication for SSL named virtual hosts is currently not
+    # supported by MSIE on Windows XP.
+    Listen 10.1.1.1:443
+</IfModule>
+
+<IfModule mod_gnutls.c>
+    Listen 10.1.1.1:443
+</IfModule>
+
diff --git a/tools/systemvm/debian/config/etc/apache2/sites-available/default b/tools/systemvm/debian/config/etc/apache2/sites-available/default
new file mode 100644
index 00000000000..ae009b71ca2
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/apache2/sites-available/default
@@ -0,0 +1,41 @@
+<VirtualHost *:80>
+	ServerAdmin webmaster@localhost
+
+	DocumentRoot /var/www/html
+	<Directory />
+		Options FollowSymLinks
+		AllowOverride None
+	</Directory>
+	<Directory /var/www/html>
+		Options Indexes FollowSymLinks MultiViews
+		AllowOverride All
+		Order allow,deny
+		allow from all
+	</Directory>
+
+	ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
+	<Directory "/usr/lib/cgi-bin">
+		AllowOverride None
+		Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
+		Order allow,deny
+		Allow from all
+	</Directory>
+
+	ErrorLog ${APACHE_LOG_DIR}/error.log
+
+	# Possible values include: debug, info, notice, warn, error, crit,
+	# alert, emerg.
+	LogLevel warn
+
+	CustomLog ${APACHE_LOG_DIR}/access.log combined
+
+    Alias /doc/ "/usr/share/doc/"
+    <Directory "/usr/share/doc/">
+        Options Indexes MultiViews FollowSymLinks
+        AllowOverride None
+        Order deny,allow
+        Deny from all
+        Allow from 127.0.0.0/255.0.0.0 ::1/128
+    </Directory>
+
+</VirtualHost>
diff --git a/tools/systemvm/debian/config/etc/apache2/sites-available/default-ssl b/tools/systemvm/debian/config/etc/apache2/sites-available/default-ssl
new file mode 100644
index 00000000000..0eea44d0103
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/apache2/sites-available/default-ssl
@@ -0,0 +1,172 @@
+
+
+	ServerAdmin webmaster@localhost
+
+	DocumentRoot /var/www/html
+	<Directory />
+		Options FollowSymLinks
+		AllowOverride None
+	</Directory>
+	<Directory /var/www/html>
+		Options Indexes FollowSymLinks MultiViews
+		AllowOverride all
+		Order allow,deny
+		allow from all
+	</Directory>
+
+	ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
+	<Directory "/usr/lib/cgi-bin">
+		AllowOverride None
+		Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
+		Order allow,deny
+		Allow from all
+	</Directory>
+
+	ErrorLog ${APACHE_LOG_DIR}/error.log
+
+	# Possible values include: debug, info, notice, warn, error, crit,
+	# alert, emerg.
+	LogLevel warn
+
+	CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
+
+	Alias /doc/ "/usr/share/doc/"
+	<Directory "/usr/share/doc/">
+		Options Indexes MultiViews FollowSymLinks
+		AllowOverride None
+		Order deny,allow
+		Deny from all
+		Allow from 127.0.0.0/255.0.0.0 ::1/128
+	</Directory>
+
+	#   SSL Engine Switch:
+	#   Enable/Disable SSL for this virtual host.
+	SSLEngine on
+
+	#   A self-signed (snakeoil) certificate can be created by installing
+	#   the ssl-cert package. See
+	#   /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
+	#   If both key and certificate are stored in the same file, only the
+	#   SSLCertificateFile directive is needed.
+	SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
+	SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
+
+	#   Server Certificate Chain:
+	#   Point SSLCertificateChainFile at a file containing the
+	#   concatenation of PEM encoded CA certificates which form the
+	#   certificate chain for the server certificate. Alternatively
+	#   the referenced file can be the same as SSLCertificateFile
+	#   when the CA certificates are directly appended to the server
+	#   certificate for convinience.
+	#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
+
+	#   Certificate Authority (CA):
+	#   Set the CA certificate verification path where to find CA
+	#   certificates for client authentication or alternatively one
+	#   huge file containing all of them (file must be PEM encoded)
+	#   Note: Inside SSLCACertificatePath you need hash symlinks
+	#         to point to the certificate files. Use the provided
+	#         Makefile to update the hash symlinks after changes.
+	#SSLCACertificatePath /etc/ssl/certs/
+	#SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
+
+	#   Certificate Revocation Lists (CRL):
+	#   Set the CA revocation path where to find CA CRLs for client
+	#   authentication or alternatively one huge file containing all
+	#   of them (file must be PEM encoded)
+	#   Note: Inside SSLCARevocationPath you need hash symlinks
+	#         to point to the certificate files. Use the provided
+	#         Makefile to update the hash symlinks after changes.
+	#SSLCARevocationPath /etc/apache2/ssl.crl/
+	#SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
+
+	#   Client Authentication (Type):
+	#   Client certificate verification type and depth.  Types are
+	#   none, optional, require and optional_no_ca.  Depth is a
+	#   number which specifies how deeply to verify the certificate
+	#   issuer chain before deciding the certificate is not valid.
+	#SSLVerifyClient require
+	#SSLVerifyDepth  10
+
+	#   Access Control:
+	#   With SSLRequire you can do per-directory access control based
+	#   on arbitrary complex boolean expressions containing server
+	#   variable checks and other lookup directives.  The syntax is a
+	#   mixture between C and Perl.  See the mod_ssl documentation
+	#   for more details.
+	#
+	#SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
+	#            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
+	#            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
+	#            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
+	#            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20       ) \
+	#           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
+	#
+
+	#   SSL Engine Options:
+	#   Set various options for the SSL engine.
+	#   o FakeBasicAuth:
+	#     Translate the client X.509 into a Basic Authorisation.  This means that
+	#     the standard Auth/DBMAuth methods can be used for access control.  The
+	#     user name is the `one line' version of the client's X.509 certificate.
+	#     Note that no password is obtained from the user. Every entry in the user
+	#     file needs this password: `xxj31ZMTZzkVA'.
+	#   o ExportCertData:
+	#     This exports two additional environment variables: SSL_CLIENT_CERT and
+	#     SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
+	#     server (always existing) and the client (only existing when client
+	#     authentication is used). This can be used to import the certificates
+	#     into CGI scripts.
+	#   o StdEnvVars:
+	#     This exports the standard SSL/TLS related `SSL_*' environment variables.
+	#     Per default this exportation is switched off for performance reasons,
+	#     because the extraction step is an expensive operation and is usually
+	#     useless for serving static content. So one usually enables the
+	#     exportation for CGI and SSI requests only.
+	#   o StrictRequire:
+	#     This denies access when "SSLRequireSSL" or "SSLRequire" applied even
+	#     under a "Satisfy any" situation, i.e. when it applies access is denied
+	#     and no other module can change it.
+	#   o OptRenegotiate:
+	#     This enables optimized SSL connection renegotiation handling when SSL
+	#     directives are used in per-directory context.
+	#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
+	<FilesMatch "\.(cgi|shtml|phtml|php)$">
+		SSLOptions +StdEnvVars
+	</FilesMatch>
+	<Directory /usr/lib/cgi-bin>
+		SSLOptions +StdEnvVars
+	</Directory>
+
+	#   SSL Protocol Adjustments:
+	#   The safe and default but still SSL/TLS standard compliant shutdown
+	#   approach is that mod_ssl sends the close notify alert but doesn't wait for
+	#   the close notify alert from client. When you need a different shutdown
+	#   approach you can use one of the following variables:
+	#   o ssl-unclean-shutdown:
+	#     This forces an unclean shutdown when the connection is closed, i.e. no
+	#     SSL close notify alert is sent or allowed to be received.  This violates
+	#     the SSL/TLS standard but is needed for some brain-dead browsers. Use
+	#     this when you receive I/O errors because of the standard approach where
+	#     mod_ssl sends the close notify alert.
+	#   o ssl-accurate-shutdown:
+	#     This forces an accurate shutdown when the connection is closed, i.e. a
+	#     SSL close notify alert is sent and mod_ssl waits for the close notify
+	#     alert of the client. This is 100% SSL/TLS standard compliant, but in
+	#     practice often causes hanging connections with brain-dead browsers. Use
+	#     this only for browsers where you know that their SSL implementation
+	#     works correctly.
+	#   Notice: Most problems of broken clients are also related to the HTTP
+	#   keep-alive facility, so you usually additionally want to disable
+	#   keep-alive for those clients, too. Use variable "nokeepalive" for this.
+	#   Similarly, one has to force some clients to use HTTP/1.0 to workaround
+	#   their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
+	#   "force-response-1.0" for this.
+	BrowserMatch "MSIE [2-6]" \
+		nokeepalive ssl-unclean-shutdown \
+		downgrade-1.0 force-response-1.0
+	# MSIE 7 and newer should be able to use keepalive
+	BrowserMatch "MSIE [7-9]" ssl-unclean-shutdown
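The BrowserMatch directives above apply regular expressions to the User-Agent request header (the original `[17-9]` pattern was a character class matching 1 and 7-9, defeating the "MSIE 7 and newer" intent). A pattern can be sanity-checked outside Apache with `grep -E`; the User-Agent strings below are invented examples:

```shell
# MSIE 2-6 should get the legacy unclean-shutdown workaround;
# MSIE 7+ should not match and keeps keepalive.
old_ua='Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)'
new_ua='Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)'

printf '%s\n' "$old_ua" | grep -Eq 'MSIE [2-6]' && echo 'old UA: workaround applies'
printf '%s\n' "$new_ua" | grep -Eq 'MSIE [2-6]' || echo 'new UA: keepalive allowed'
```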
+
+
+
diff --git a/tools/systemvm/debian/config/etc/default/cloud b/tools/systemvm/debian/config/etc/default/cloud
new file mode 100644
index 00000000000..6da9d9466df
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/default/cloud
@@ -0,0 +1,2 @@
+#set ENABLED to 1 if you want the init script to start the password server
+ENABLED=0
diff --git a/tools/systemvm/debian/config/etc/default/cloud-passwd-srvr b/tools/systemvm/debian/config/etc/default/cloud-passwd-srvr
new file mode 100644
index 00000000000..6da9d9466df
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/default/cloud-passwd-srvr
@@ -0,0 +1,2 @@
+#set ENABLED to 1 if you want the init script to start the password server
+ENABLED=0
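Both /etc/default files are sourced as shell fragments, so `ENABLED` becomes an ordinary variable in the init script. A minimal sketch of how an init script might consume the flag (the file path is the one added by this patch; the surrounding logic is hypothetical):

```shell
# Returns success only when the given defaults file sets ENABLED=1.
passwd_srvr_enabled() {
    ENABLED=0                  # default matches the shipped file
    [ -r "$1" ] && . "$1"      # sourced as plain shell assignments
    [ "$ENABLED" = "1" ]
}

if passwd_srvr_enabled /etc/default/cloud-passwd-srvr; then
    echo "starting password server"
else
    echo "password server disabled (set ENABLED=1 to enable)"
fi
```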
diff --git a/tools/systemvm/debian/config/etc/dnsmasq.conf b/tools/systemvm/debian/config/etc/dnsmasq.conf
new file mode 100644
index 00000000000..b908c2e4bee
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/dnsmasq.conf
@@ -0,0 +1,463 @@
+# Configuration file for dnsmasq.
+#
+# Format is one option per line, legal options are the same
+# as the long options legal on the command line. See
+# "/usr/sbin/dnsmasq --help" or "man 8 dnsmasq" for details.
+
+# The following two options make you a better netizen, since they
+# tell dnsmasq to filter out queries which the public DNS cannot
+# answer, and which load the servers (especially the root servers)
+# unnecessarily. If you have a dial-on-demand link they also stop
+# these requests from bringing up the link unnecessarily.
+
+# Never forward plain names (without a dot or domain part)
+domain-needed
+# Never forward addresses in the non-routed address spaces.
+bogus-priv
+
+
+# Uncomment this to filter useless windows-originated DNS requests
+# which can trigger dial-on-demand links needlessly.
+# Note that (amongst other things) this blocks all SRV requests,
+# so don't use it if you use eg Kerberos.
+# This option only affects forwarding; SRV records originating from
+# dnsmasq (via srv-host= lines) are not suppressed by it.
+#filterwin2k
+
+# Change this line if you want dns to get its upstream servers from
+# somewhere other than /etc/resolv.conf
+resolv-file=/etc/dnsmasq-resolv.conf
+
+# By  default,  dnsmasq  will  send queries to any of the upstream
+# servers it knows about and tries to favour servers that are known
+# to  be  up.  Uncommenting this forces dnsmasq to try each query
+# with  each  server  strictly  in  the  order  they   appear   in
+# /etc/resolv.conf
+#strict-order
+
+# If you don't want dnsmasq to read /etc/resolv.conf or any other
+# file, getting its servers from this file instead (see below), then
+# uncomment this.
+#no-resolv
+
+# If you don't want dnsmasq to poll /etc/resolv.conf or other resolv
+# files for changes and re-read them then uncomment this.
+#no-poll
+
+# Add other name servers here, with domain specs if they are for
+# non-public domains.
+#server=/localnet/192.168.0.1
+
+# Example of routing PTR queries to nameservers: this will send all 
+# address->name queries for 192.168.3/24 to nameserver 10.1.2.3
+#server=/3.168.192.in-addr.arpa/10.1.2.3
+
+# Add local-only domains here, queries in these domains are answered
+# from /etc/hosts or DHCP only.
+local=/2.vmops-test.vmops.com/
+
+# Add domains which you want to force to an IP address here.
+# The example below send any host in doubleclick.net to a local
+# webserver.
+#address=/doubleclick.net/127.0.0.1
+
+# If you want dnsmasq to change uid and gid to something other
+# than the default, edit the following lines.
+#user=
+#group=
+
+# If you want dnsmasq to listen for DHCP and DNS requests only on
+# specified interfaces (and the loopback) give the name of the
+# interface (eg eth0) here.
+# Repeat the line for more than one interface.
+interface=eth0
+# Or you can specify which interface _not_ to listen on
+except-interface=eth1
+except-interface=eth2
+except-interface=lo
+# Or which to listen on by address (remember to include 127.0.0.1 if
+# you use this.)
+#listen-address=
+# If you want dnsmasq to provide only DNS service on an interface,
+# configure it as shown above, and then use the following line to
+# disable DHCP on it.
+no-dhcp-interface=eth1
+no-dhcp-interface=eth2
+
+# On systems which support it, dnsmasq binds the wildcard address,
+# even when it is listening on only some interfaces. It then discards
+# requests that it shouldn't reply to. This has the advantage of
+# working even when interfaces come and go and change address. If you
+# want dnsmasq to really bind only the interfaces it is listening on,
+# uncomment this option. About the only time you may need this is when
+# running another nameserver on the same machine.
+bind-interfaces
+
+# If you don't want dnsmasq to read /etc/hosts, uncomment the
+# following line.
+#no-hosts
+# or if you want it to read another file, as well as /etc/hosts, use
+# this.
+#addn-hosts=/etc/banner_add_hosts
+
+# Set this (and domain: see below) if you want to have a domain
+# automatically added to simple names in a hosts-file.
+expand-hosts
+
+# Set the domain for dnsmasq. This is optional, but if it is set, it
+# does the following things.
+# 1) Allows DHCP hosts to have fully qualified domain names, as long
+#     as the domain part matches this setting.
+# 2) Sets the "domain" DHCP option thereby potentially setting the
+#    domain of all systems configured by DHCP
+# 3) Provides the domain part for "expand-hosts"
+domain=2.vmops-test.vmops.com
+
+# Uncomment this to enable the integrated DHCP server, you need
+# to supply the range of addresses available for lease and optionally
+# a lease time. If you have more than one network, you will need to
+# repeat this for each network on which you want to supply DHCP
+# service.
+dhcp-range=10.1.1.1,static
+#dhcp-range=10.0.0.1,10.255.255.255
+dhcp-hostsfile=/etc/dhcphosts.txt
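Because the dhcp-range above uses the `static` keyword, dnsmasq hands out leases only to hosts listed in the dhcp-hostsfile. Each line of that file takes the same comma-separated fields as a dhcp-host option. A sketch with invented MAC addresses, IPs and names:

```
00:16:3e:aa:bb:cc,10.1.1.101,vm-one,infinite
00:16:3e:dd:ee:ff,10.1.1.102,vm-two,infinite
```

dnsmasq rereads dhcp-hostsfile on SIGHUP, so entries can be added without restarting the daemon.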
+
+# This is an example of a DHCP range where the netmask is given. This
+# is needed for networks where we reach the dnsmasq DHCP server via a relay
+# agent. If you don't know what a DHCP relay agent is, you probably
+# don't need to worry about this.
+#dhcp-range=192.168.0.50,192.168.0.150,255.255.255.0,12h
+
+# This is an example of a DHCP range with a network-id, so that
+# some DHCP options may be set only for this network.
+#dhcp-range=red,192.168.0.50,192.168.0.150
+
+# Supply parameters for specified hosts using DHCP. There are lots
+# of valid alternatives, so we will give examples of each. Note that
+# IP addresses DO NOT have to be in the range given above, they just
+# need to be on the same network. The order of the parameters in these
+# do not matter; it's permissible to give name, address and MAC in any order.
+
+# Always allocate the host with ethernet address 11:22:33:44:55:66
+# The IP address 192.168.0.60
+#dhcp-host=11:22:33:44:55:66,192.168.0.60
+
+# Always set the name of the host with hardware address
+# 11:22:33:44:55:66 to be "fred"
+#dhcp-host=11:22:33:44:55:66,fred
+
+# Always give the host with ethernet address 11:22:33:44:55:66
+# the name fred and IP address 192.168.0.60 and lease time 45 minutes
+#dhcp-host=11:22:33:44:55:66,fred,192.168.0.60,45m
+
+# Give the machine which says it's name is "bert" IP address
+# 192.168.0.70 and an infinite lease
+#dhcp-host=bert,192.168.0.70,infinite
+
+# Always give the host with client identifier 01:02:02:04
+# the IP address 192.168.0.60
+#dhcp-host=id:01:02:02:04,192.168.0.60
+
+# Always give the host with client identifier "marjorie"
+# the IP address 192.168.0.60
+#dhcp-host=id:marjorie,192.168.0.60
+
+# Enable the address given for "judge" in /etc/hosts
+# to be given to a machine presenting the name "judge" when
+# it asks for a DHCP lease.
+#dhcp-host=judge
+
+# Never offer DHCP service to a machine whose ethernet
+# address is 11:22:33:44:55:66
+#dhcp-host=11:22:33:44:55:66,ignore
+
+# Ignore any client-id presented by the machine with ethernet
+# address 11:22:33:44:55:66. This is useful to prevent a machine
+# being treated differently when running under different OS's or
+# between PXE boot and OS boot.
+#dhcp-host=11:22:33:44:55:66,id:*
+
+# Send extra options which are tagged as "red" to
+# the machine with ethernet address 11:22:33:44:55:66
+#dhcp-host=11:22:33:44:55:66,net:red
+
+# Send extra options which are tagged as "red" to
+# any machine with ethernet address starting 11:22:33:
+#dhcp-host=11:22:33:*:*:*,net:red
+
+# Ignore any clients which are specified in dhcp-host lines
+# or /etc/ethers. Equivalent to ISC "deny unknown-clients".
+# This relies on the special "known" tag which is set when 
+# a host is matched.
+#dhcp-ignore=#known
+
+# Send extra options which are tagged as "red" to any machine whose
+# DHCP vendorclass string includes the substring "Linux"
+#dhcp-vendorclass=red,Linux
+
+# Send extra options which are tagged as "red" to any machine one
+# of whose DHCP userclass strings includes the substring "accounts"
+#dhcp-userclass=red,accounts
+
+# Send extra options which are tagged as "red" to any machine whose
+# MAC address matches the pattern.
+#dhcp-mac=red,00:60:8C:*:*:*
+
+# If this line is uncommented, dnsmasq will read /etc/ethers and act
+# on the ethernet-address/IP pairs found there just as if they had
+# been given as --dhcp-host options. Useful if you keep
+# MAC-address/host mappings there for other purposes.
+#read-ethers
+
+# Send options to hosts which ask for a DHCP lease.
+# See RFC 2132 for details of available options.
+# Common options can be given to dnsmasq by name: 
+# run "dnsmasq --help dhcp" to get a list.
+# Note that all the common settings, such as netmask and
+# broadcast address, DNS server and default route, are given
+# sane defaults by dnsmasq. You very likely will not need 
+# any dhcp-options. If you use Windows clients and Samba, there
+# are some options which are recommended, they are detailed at the
+# end of this section.
+
+# Override the default route supplied by dnsmasq, which assumes the
+# router is the same machine as the one running dnsmasq.
+#dhcp-option=3,1.2.3.4
+
+# Do the same thing, but using the option name
+#dhcp-option=option:router,1.2.3.4
+
+# Override the default route supplied by dnsmasq and send no default
+# route at all. Note that this only works for the options sent by
+# default (1, 3, 6, 12, 28); the same line will send a zero-length option
+# for all other option numbers.
+#dhcp-option=3
+
+# Set the NTP time server addresses to 192.168.0.4 and 10.10.0.5
+#dhcp-option=option:ntp-server,192.168.0.4,10.10.0.5
+
+# Set the NTP time server address to be the same machine as
+# is running dnsmasq
+#dhcp-option=42,0.0.0.0
+
+# Set the NIS domain name to "welly"
+#dhcp-option=40,welly
+
+# Set the default time-to-live to 50
+#dhcp-option=23,50
+
+# Set the "all subnets are local" flag
+#dhcp-option=27,1
+
+# Set the domain
+dhcp-option=15,"2.vmops-test.vmops.com"
+
+# Send the etherboot magic flag and then etherboot options (a string).
+#dhcp-option=128,e4:45:74:68:00:00
+#dhcp-option=129,NIC=eepro100
+
+# Specify an option which will only be sent to the "red" network
+# (see dhcp-range for the declaration of the "red" network)
+# Note that the net: part must precede the option: part.
+#dhcp-option = net:red, option:ntp-server, 192.168.1.1
+
+# The following DHCP options set up dnsmasq in the same way as is specified
+# for the ISC dhcpcd in
+# http://www.samba.org/samba/ftp/docs/textdocs/DHCP-Server-Configuration.txt
+# adapted for a typical dnsmasq installation where the host running
+# dnsmasq is also the host running samba.
+# you may want to uncomment them if you use Windows clients and Samba.
+#dhcp-option=19,0           # option ip-forwarding off
+#dhcp-option=44,0.0.0.0     # set netbios-over-TCP/IP nameserver(s) aka WINS server(s)
+#dhcp-option=45,0.0.0.0     # netbios datagram distribution server
+#dhcp-option=46,8           # netbios node type
+#dhcp-option=47             # empty netbios scope.
+
+# Send RFC-3397 DNS domain search DHCP option. WARNING: Your DHCP client
+# probably doesn't support this......
+#dhcp-option=option:domain-search,eng.apple.com,marketing.apple.com
+
+# Send RFC-3442 classless static routes (note the netmask encoding)
+#dhcp-option=121,192.168.1.0/24,1.2.3.4,10.0.0.0/8,5.6.7.8
+
+# Send vendor-class specific options encapsulated in DHCP option 43. 
+# The meaning of the options is defined by the vendor-class so
+# options are sent only when the client supplied vendor class
+# matches the class given here. (A substring match is OK, so "MSFT" 
+# matches "MSFT" and "MSFT 5.0"). This example sets the
+# mtftp address to 0.0.0.0 for PXEClients.
+#dhcp-option=vendor:PXEClient,1,0.0.0.0
+
+# Send microsoft-specific option to tell windows to release the DHCP lease
+# when it shuts down. Note the "i" flag, to tell dnsmasq to send the
+# value as a four-byte integer - that's what microsoft wants. See
+# http://technet2.microsoft.com/WindowsServer/en/library/a70f1bb7-d2d4-49f0-96d6-4b7414ecfaae1033.mspx?mfr=true
+dhcp-option=vendor:MSFT,2,1i
+
+# Send the Encapsulated-vendor-class ID needed by some configurations of
+# Etherboot to allow it to recognise the DHCP server.
+#dhcp-option=vendor:Etherboot,60,"Etherboot"
+
+# Send options to PXELinux. Note that we need to send the options even
+# though they don't appear in the parameter request list, so we need
+# to use dhcp-option-force here. 
+# See http://syslinux.zytor.com/pxe.php#special for details.
+# Magic number - needed before anything else is recognised
+#dhcp-option-force=208,f1:00:74:7e
+# Configuration file name
+#dhcp-option-force=209,configs/common
+# Path prefix
+#dhcp-option-force=210,/tftpboot/pxelinux/files/
+# Reboot time. (Note 'i' to send 32-bit value)
+#dhcp-option-force=211,30i
+
+# Set the boot filename for BOOTP. You will only need 
+# this if you want to boot machines over the network and you will need
+# a TFTP server; either dnsmasq's built in TFTP server or an
+# external one. (See below for how to enable the TFTP server.)
+#dhcp-boot=pxelinux.0
+
+# Enable dnsmasq's built-in TFTP server
+#enable-tftp
+
+# Set the root directory for files available via TFTP.
+#tftp-root=/var/ftpd
+
+# Make the TFTP server more secure: with this set, only files owned by
+# the user dnsmasq is running as will be sent over the net.
+#tftp-secure
+
+# Set the boot file name only when the "red" tag is set.
+#dhcp-boot=net:red,pxelinux.red-net
+
+# An example of dhcp-boot with an external server: the name and IP
+# address of the server are given after the filename.
+#dhcp-boot=/var/ftpd/pxelinux.0,boothost,192.168.0.3
+
+# Set the limit on DHCP leases, the default is 150
+#dhcp-lease-max=150
+
+# The DHCP server needs somewhere on disk to keep its lease database.
+# This defaults to a sane location, but if you want to change it, use
+# the line below.
+#dhcp-leasefile=/var/lib/misc/dnsmasq.leases
+leasefile-ro
+
+# Set the DHCP server to authoritative mode. In this mode it will barge in
+# and take over the lease for any client which broadcasts on the network,
+# whether it has a record of the lease or not. This avoids long timeouts
+# when a machine wakes up on a new network. DO NOT enable this if there's
+# the slightest chance that you might end up accidentally configuring a
+# DHCP server for your campus/company. The ISC server uses
+# the same option, and this URL provides more information:
+# http://www.isc.org/index.pl?/sw/dhcp/authoritative.php
+#dhcp-authoritative
+
+# Run an executable when a DHCP lease is created or destroyed.
+# The arguments sent to the script are "add" or "del", 
+# then the MAC address, the IP address and finally the hostname
+# if there is one. 
+#dhcp-script=/bin/echo
+
+# Set the cachesize here.
+#cache-size=150
+
+# If you want to disable negative caching, uncomment this.
+#no-negcache
+
+# Normally, responses which come from /etc/hosts and the DHCP lease
+# file have Time-To-Live set as zero, which conventionally means
+# do not cache further. If you are happy to trade lower load on the
+# server for potentially stale data, you can set a time-to-live (in
+# seconds) here.
+#local-ttl=
+
+# If you want dnsmasq to detect attempts by Verisign to send queries
+# to unregistered .com and .net hosts to its sitefinder service and
+# have dnsmasq instead return the correct NXDOMAIN response, uncomment
+# this line. You can add similar lines to do the same for other
+# registries which have implemented wildcard A records.
+#bogus-nxdomain=64.94.110.11
+
+# If you want to fix up DNS results from upstream servers, use the
+# alias option. This only works for IPv4.
+# This alias makes a result of 1.2.3.4 appear as 5.6.7.8
+#alias=1.2.3.4,5.6.7.8
+# and this maps 1.2.3.x to 5.6.7.x
+#alias=1.2.3.0,5.6.7.0,255.255.255.0
+
+
+# Change these lines if you want dnsmasq to serve MX records.
+
+# Return an MX record named "maildomain.com" with target
+# servermachine.com and preference 50
+#mx-host=maildomain.com,servermachine.com,50
+
+# Set the default target for MX records created using the localmx option.
+#mx-target=servermachine.com
+
+# Return an MX record pointing to the mx-target for all local
+# machines.
+#localmx
+
+# Return an MX record pointing to itself for all local machines.
+#selfmx
+
+# Change the following lines if you want dnsmasq to serve SRV
+# records.  These are useful if you want to serve ldap requests for
+# Active Directory and other windows-originated DNS requests.
+# See RFC 2782.
+# You may add multiple srv-host lines.
+# The fields are <name>,<target>,<port>,<priority>,<weight>
+# If the domain part is missing from the name (so that it just has the
+# service and protocol sections) then the domain given by the domain=
+# config option is used. (Note that expand-hosts does not need to be
+# set for this to work.)
+
+# A SRV record sending LDAP for the example.com domain to
+# ldapserver.example.com port 389
+#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389
+
+# A SRV record sending LDAP for the example.com domain to
+# ldapserver.example.com port 389 (using domain=)
+###domain=example.com
+#srv-host=_ldap._tcp,ldapserver.example.com,389
+
+# Two SRV records for LDAP, each with different priorities
+#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,1
+#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,2
+
+# A SRV record indicating that there is no LDAP server for the domain
+# example.com
+#srv-host=_ldap._tcp.example.com
+
+# The following line shows how to make dnsmasq serve an arbitrary PTR
+# record. This is useful for DNS-SD. (Note that the
+# domain-name expansion done for SRV records does _not_
+# occur for PTR records.)
+#ptr-record=_http._tcp.dns-sd-services,"New Employee Page._http._tcp.dns-sd-services"
+
+# Change the following lines to enable dnsmasq to serve TXT records.
+# These are used for things like SPF and zeroconf. (Note that the
+# domain-name expansion done for SRV records does _not_
+# occur for TXT records.)
+
+#Example SPF.
+#txt-record=example.com,"v=spf1 a -all"
+
+#Example zeroconf
+#txt-record=_http._tcp.example.com,name=value,paper=A4
+
+
+# For debugging purposes, log each DNS query as it passes through
+# dnsmasq.
+log-queries
+
+# Log lots of extra information about DHCP transactions.
+#log-dhcp
+
+log-facility=/var/log/dnsmasq.log
+
+# Include another set of configuration options.
+#conf-file=/etc/dnsmasq.more.conf
+conf-dir=/etc/dnsmasq.d
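Most of this file is commented-out documentation. A quick way to list only the directives a dnsmasq.conf actually enables is to strip comments and blank lines; demonstrated here on a tiny inline sample (on the system VM itself, point it at /etc/dnsmasq.conf):

```shell
# Write a small sample config, then print only its active directives.
cat > /tmp/dnsmasq.sample <<'EOF'
# a comment
domain-needed

bogus-priv
EOF
grep -Ev '^[[:space:]]*(#|$)' /tmp/dnsmasq.sample
```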
diff --git a/tools/systemvm/debian/config/etc/haproxy/haproxy.cfg b/tools/systemvm/debian/config/etc/haproxy/haproxy.cfg
new file mode 100644
index 00000000000..94737ac328e
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/haproxy/haproxy.cfg
@@ -0,0 +1,26 @@
+global
+	log 127.0.0.1:3914   local0 info
+	chroot /var/lib/haproxy
+	user haproxy
+	group haproxy
+	daemon
+	 
+defaults
+	log     global
+	mode    tcp
+	option  dontlognull
+	retries 3
+	option redispatch
+	option forwardfor
+	stats enable
+	stats uri     /admin?stats
+	stats realm   Haproxy\ Statistics
+	stats auth    admin1:AdMiN123
+	option forceclose
+	timeout connect      5000
+	timeout client      50000
+	timeout server      50000
+	 
+	 
+listen cloud-default 0.0.0.0:35999
+	option transparent
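The cloud-default listener above is only a placeholder. A load-balancing rule configured at runtime would be expressed as a further listen block in the same style; the rule name, virtual IP and backend servers below are invented for illustration:

```
listen lb_rule_http 10.1.1.2:80
	balance roundrobin
	server vm_10_1_1_101 10.1.1.101:80 check
	server vm_10_1_1_102 10.1.1.102:80 check
```

Before reloading, `haproxy -c -f /etc/haproxy/haproxy.cfg` checks the merged configuration for syntax errors.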
diff --git a/tools/systemvm/debian/config/etc/httpd/conf/httpd.conf b/tools/systemvm/debian/config/etc/httpd/conf/httpd.conf
new file mode 100644
index 00000000000..e11384ef772
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/httpd/conf/httpd.conf
@@ -0,0 +1,990 @@
+#
+# This is the main Apache server configuration file.  It contains the
+# configuration directives that give the server its instructions.
+# See <URL:http://httpd.apache.org/docs/2.2/> for detailed information.
+# In particular, see
+# <URL:http://httpd.apache.org/docs/2.2/mod/directives.html>
+# for a discussion of each configuration directive.
+#
+#
+# Do NOT simply read the instructions in here without understanding
+# what they do.  They're here only as hints or reminders.  If you are unsure
+# consult the online docs. You have been warned.  
+#
+# The configuration directives are grouped into three basic sections:
+#  1. Directives that control the operation of the Apache server process as a
+#     whole (the 'global environment').
+#  2. Directives that define the parameters of the 'main' or 'default' server,
+#     which responds to requests that aren't handled by a virtual host.
+#     These directives also provide default values for the settings
+#     of all virtual hosts.
+#  3. Settings for virtual hosts, which allow Web requests to be sent to
+#     different IP addresses or hostnames and have them handled by the
+#     same Apache server process.
+#
+# Configuration and logfile names: If the filenames you specify for many
+# of the server's control files begin with "/" (or "drive:/" for Win32), the
+# server will use that explicit path.  If the filenames do *not* begin
+# with "/", the value of ServerRoot is prepended -- so "logs/foo.log"
+# with ServerRoot set to "/etc/httpd" will be interpreted by the
+# server as "/etc/httpd/logs/foo.log".
+#
+
+### Section 1: Global Environment
+#
+# The directives in this section affect the overall operation of Apache,
+# such as the number of concurrent requests it can handle or where it
+# can find its configuration files.
+#
+
+#
+# Don't give away too much information about all the subcomponents
+# we are running.  Comment out this line if you don't mind remote sites
+# finding out what major optional modules you are running
+ServerTokens OS
+
+#
+# ServerRoot: The top of the directory tree under which the server's
+# configuration, error, and log files are kept.
+#
+# NOTE!  If you intend to place this on an NFS (or otherwise network)
+# mounted filesystem then please read the LockFile documentation
+# (available at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>);
+# you will save yourself a lot of trouble.
+#
+# Do NOT add a slash at the end of the directory path.
+#
+ServerRoot "/etc/httpd"
+
+#
+# PidFile: The file in which the server should record its process
+# identification number when it starts.
+#
+PidFile run/httpd.pid
+
+#
+# Timeout: The number of seconds before receives and sends time out.
+#
+Timeout 120
+
+#
+# KeepAlive: Whether or not to allow persistent connections (more than
+# one request per connection). Set to "Off" to deactivate.
+#
+KeepAlive Off
+
+#
+# MaxKeepAliveRequests: The maximum number of requests to allow
+# during a persistent connection. Set to 0 to allow an unlimited amount.
+# We recommend you leave this number high, for maximum performance.
+#
+MaxKeepAliveRequests 100
+
+#
+# KeepAliveTimeout: Number of seconds to wait for the next request from the
+# same client on the same connection.
+#
+KeepAliveTimeout 15
+
+##
+## Server-Pool Size Regulation (MPM specific)
+## 
+
+# prefork MPM
+# StartServers: number of server processes to start
+# MinSpareServers: minimum number of server processes which are kept spare
+# MaxSpareServers: maximum number of server processes which are kept spare
+# ServerLimit: maximum value for MaxClients for the lifetime of the server
+# MaxClients: maximum number of server processes allowed to start
+# MaxRequestsPerChild: maximum number of requests a server process serves
+
+<IfModule prefork.c>
+StartServers       8
+MinSpareServers    5
+MaxSpareServers   20
+ServerLimit      256
+MaxClients       256
+MaxRequestsPerChild  4000
+</IfModule>
+
+
+# worker MPM
+# StartServers: initial number of server processes to start
+# MaxClients: maximum number of simultaneous client connections
+# MinSpareThreads: minimum number of worker threads which are kept spare
+# MaxSpareThreads: maximum number of worker threads which are kept spare
+# ThreadsPerChild: constant number of worker threads in each server process
+# MaxRequestsPerChild: maximum number of requests a server process serves
+
+<IfModule worker.c>
+StartServers         2
+MaxClients         150
+MinSpareThreads     25
+MaxSpareThreads     75
+ThreadsPerChild     25
+MaxRequestsPerChild  0
+</IfModule>
+
+
+#
+# Listen: Allows you to bind Apache to specific IP addresses and/or
+# ports, in addition to the default. See also the <VirtualHost>
+# directive.
+#
+# Change this to Listen on specific IP addresses as shown below to 
+# prevent Apache from glomming onto all bound IP addresses (0.0.0.0)
+#
+#Listen 12.34.56.78:80
+Listen 10.1.1.1:80
+
+#
+# Dynamic Shared Object (DSO) Support
+#
+# To be able to use the functionality of a module which was built as a DSO you
+# have to place corresponding `LoadModule' lines at this location so the
+# directives contained in it are actually available _before_ they are used.
+# Statically compiled modules (those listed by `httpd -l') do not need
+# to be loaded here.
+#
+# Example:
+# LoadModule foo_module modules/mod_foo.so
+#
+LoadModule auth_basic_module modules/mod_auth_basic.so
+LoadModule auth_digest_module modules/mod_auth_digest.so
+LoadModule authn_file_module modules/mod_authn_file.so
+LoadModule authn_alias_module modules/mod_authn_alias.so
+LoadModule authn_anon_module modules/mod_authn_anon.so
+LoadModule authn_dbm_module modules/mod_authn_dbm.so
+LoadModule authn_default_module modules/mod_authn_default.so
+LoadModule authz_host_module modules/mod_authz_host.so
+LoadModule authz_user_module modules/mod_authz_user.so
+LoadModule authz_owner_module modules/mod_authz_owner.so
+LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
+LoadModule authz_dbm_module modules/mod_authz_dbm.so
+LoadModule authz_default_module modules/mod_authz_default.so
+LoadModule ldap_module modules/mod_ldap.so
+LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
+LoadModule include_module modules/mod_include.so
+LoadModule log_config_module modules/mod_log_config.so
+LoadModule logio_module modules/mod_logio.so
+LoadModule env_module modules/mod_env.so
+LoadModule ext_filter_module modules/mod_ext_filter.so
+LoadModule mime_magic_module modules/mod_mime_magic.so
+LoadModule expires_module modules/mod_expires.so
+LoadModule deflate_module modules/mod_deflate.so
+LoadModule headers_module modules/mod_headers.so
+LoadModule usertrack_module modules/mod_usertrack.so
+LoadModule setenvif_module modules/mod_setenvif.so
+LoadModule mime_module modules/mod_mime.so
+LoadModule dav_module modules/mod_dav.so
+LoadModule status_module modules/mod_status.so
+LoadModule autoindex_module modules/mod_autoindex.so
+LoadModule info_module modules/mod_info.so
+LoadModule dav_fs_module modules/mod_dav_fs.so
+LoadModule vhost_alias_module modules/mod_vhost_alias.so
+LoadModule negotiation_module modules/mod_negotiation.so
+LoadModule dir_module modules/mod_dir.so
+LoadModule actions_module modules/mod_actions.so
+LoadModule speling_module modules/mod_speling.so
+LoadModule userdir_module modules/mod_userdir.so
+LoadModule alias_module modules/mod_alias.so
+LoadModule rewrite_module modules/mod_rewrite.so
+LoadModule proxy_module modules/mod_proxy.so
+LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
+LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
+LoadModule proxy_http_module modules/mod_proxy_http.so
+LoadModule proxy_connect_module modules/mod_proxy_connect.so
+LoadModule cache_module modules/mod_cache.so
+LoadModule suexec_module modules/mod_suexec.so
+LoadModule disk_cache_module modules/mod_disk_cache.so
+LoadModule file_cache_module modules/mod_file_cache.so
+LoadModule mem_cache_module modules/mod_mem_cache.so
+LoadModule cgi_module modules/mod_cgi.so
+
+#
+# The following modules are not loaded by default:
+#
+#LoadModule cern_meta_module modules/mod_cern_meta.so
+#LoadModule asis_module modules/mod_asis.so
+
+#
+# Load config files from the config directory "/etc/httpd/conf.d".
+#
+Include conf.d/*.conf
+
+#
+# ExtendedStatus controls whether Apache will generate "full" status
+# information (ExtendedStatus On) or just basic information (ExtendedStatus
+# Off) when the "server-status" handler is called. The default is Off.
+#
+#ExtendedStatus On
+
+#
+# If you wish httpd to run as a different user or group, you must run
+# httpd as root initially and it will switch.  
+#
+# User/Group: The name (or #number) of the user/group to run httpd as.
+#  . On SCO (ODT 3) use "User nouser" and "Group nogroup".
+#  . On HPUX you may not be able to use shared memory as nobody, and the
+#    suggested workaround is to create a user www and use that user.
+#  NOTE that some kernels refuse to setgid(Group) or semctl(IPC_SET)
+#  when the value of (unsigned)Group is above 60000; 
+#  don't use Group #-1 on these systems!
+#
+User apache
+Group apache
+
+### Section 2: 'Main' server configuration
+#
+# The directives in this section set up the values used by the 'main'
+# server, which responds to any requests that aren't handled by a
+# <VirtualHost> definition.  These values also provide defaults for
+# any <VirtualHost> containers you may define later in the file.
+#
+# All of these directives may appear inside <VirtualHost> containers,
+# in which case these default settings will be overridden for the
+# virtual host being defined.
+#
+
+#
+# ServerAdmin: Your address, where problems with the server should be
+# e-mailed.  This address appears on some server-generated pages, such
+# as error documents.  e.g. admin@your-domain.com
+#
+ServerAdmin root@localhost
+
+#
+# ServerName gives the name and port that the server uses to identify itself.
+# This can often be determined automatically, but we recommend you specify
+# it explicitly to prevent problems during startup.
+#
+# If this is not set to valid DNS name for your host, server-generated
+# redirections will not work.  See also the UseCanonicalName directive.
+#
+# If your host doesn't have a registered DNS name, enter its IP address here.
+# You will have to access it by its address anyway, and this will make 
+# redirections work in a sensible way.
+#
+#ServerName www.example.com:80
+
+#
+# UseCanonicalName: Determines how Apache constructs self-referencing 
+# URLs and the SERVER_NAME and SERVER_PORT variables.
+# When set "Off", Apache will use the Hostname and Port supplied
+# by the client.  When set "On", Apache will use the value of the
+# ServerName directive.
+#
+UseCanonicalName Off
+
+#
+# DocumentRoot: The directory out of which you will serve your
+# documents. By default, all requests are taken from this directory, but
+# symbolic links and aliases may be used to point to other locations.
+#
+DocumentRoot "/var/www/html"
+
+#
+# Each directory to which Apache has access can be configured with respect
+# to which services and features are allowed and/or disabled in that
+# directory (and its subdirectories). 
+#
+# First, we configure the "default" to be a very restrictive set of 
+# features.  
+#
+<Directory />
+    Options FollowSymLinks
+    AllowOverride None
+</Directory>
+
+#
+# Note that from this point forward you must specifically allow
+# particular features to be enabled - so if something's not working as
+# you might expect, make sure that you have specifically enabled it
+# below.
+#
+
+#
+# This should be changed to whatever you set DocumentRoot to.
+#
+<Directory "/var/www/html">
+
+#
+# Possible values for the Options directive are "None", "All",
+# or any combination of:
+#   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
+#
+# Note that "MultiViews" must be named *explicitly* --- "Options All"
+# doesn't give it to you.
+#
+# The Options directive is both complicated and important.  Please see
+# http://httpd.apache.org/docs/2.2/mod/core.html#options
+# for more information.
+#
+    Options Indexes FollowSymLinks
+
+#
+# AllowOverride controls what directives may be placed in .htaccess files.
+# It can be "All", "None", or any combination of the keywords:
+#   Options FileInfo AuthConfig Limit
+#
+    AllowOverride All
+
+#
+# Controls who can get stuff from this server.
+#
+    Order allow,deny
+    Allow from all
+
+</Directory>
+
+#
+# UserDir: The name of the directory that is appended onto a user's home
+# directory if a ~user request is received.
+#
+# The path to the end user account 'public_html' directory must be
+# accessible to the webserver userid.  This usually means that ~userid
+# must have permissions of 711, ~userid/public_html must have permissions
+# of 755, and documents contained therein must be world-readable.
+# Otherwise, the client will only receive a "403 Forbidden" message.
+#
+# See also: http://httpd.apache.org/docs/misc/FAQ.html#forbidden
+#
+<IfModule mod_userdir.c>
+    #
+    # UserDir is disabled by default since it can confirm the presence
+    # of a username on the system (depending on home directory
+    # permissions).
+    #
+    UserDir disable
+
+    #
+    # To enable requests to /~user/ to serve the user's public_html
+    # directory, remove the "UserDir disable" line above, and uncomment
+    # the following line instead:
+    #
+    #UserDir public_html
+
+</IfModule>
+
+#
+# Control access to UserDir directories.  The following is an example
+# for a site where these directories are restricted to read-only.
+#
+#<Directory /home/*/public_html>
+#    AllowOverride FileInfo AuthConfig Limit
+#    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
+#    <Limit GET POST OPTIONS>
+#        Order allow,deny
+#        Allow from all
+#    </Limit>
+#    <LimitExcept GET POST OPTIONS>
+#        Order deny,allow
+#        Deny from all
+#    </LimitExcept>
+#</Directory>
+
+#
+# DirectoryIndex: sets the file that Apache will serve if a directory
+# is requested.
+#
+# The index.html.var file (a type-map) is used to deliver content-
+# negotiated documents.  The MultiViews Option can be used for the 
+# same purpose, but it is much slower.
+#
+DirectoryIndex index.html index.html.var
+
+#
+# AccessFileName: The name of the file to look for in each directory
+# for additional configuration directives.  See also the AllowOverride
+# directive.
+#
+AccessFileName .htaccess
+
+#
+# The following lines prevent .htaccess and .htpasswd files from being 
+# viewed by Web clients. 
+#
+<Files ~ "^\.ht">
+    Order allow,deny
+    Deny from all
+</Files>
+
+#
+# TypesConfig describes where the mime.types file (or equivalent) is
+# to be found.
+#
+TypesConfig /etc/mime.types
+
+#
+# DefaultType is the default MIME type the server will use for a document
+# if it cannot otherwise determine one, such as from filename extensions.
+# If your server contains mostly text or HTML documents, "text/plain" is
+# a good value.  If most of your content is binary, such as applications
+# or images, you may want to use "application/octet-stream" instead to
+# keep browsers from trying to display binary files as though they are
+# text.
+#
+DefaultType text/plain
+
+#
+# The mod_mime_magic module allows the server to use various hints from the
+# contents of the file itself to determine its type.  The MIMEMagicFile
+# directive tells the module where the hint definitions are located.
+#
+<IfModule mod_mime_magic.c>
+#   MIMEMagicFile /usr/share/magic.mime
+    MIMEMagicFile conf/magic
+</IfModule>
+
+#
+# HostnameLookups: Log the names of clients or just their IP addresses
+# e.g., www.apache.org (on) or 204.62.129.132 (off).
+# The default is off because it'd be overall better for the net if people
+# had to knowingly turn this feature on, since enabling it means that
+# each client request will result in AT LEAST one lookup request to the
+# nameserver.
+#
+HostnameLookups Off
+
+#
+# EnableMMAP: Control whether memory-mapping is used to deliver
+# files (assuming that the underlying OS supports it).
+# The default is on; turn this off if you serve from NFS-mounted 
+# filesystems.  On some systems, turning it off (regardless of
+# filesystem) can improve performance; for details, please see
+# http://httpd.apache.org/docs/2.2/mod/core.html#enablemmap
+#
+#EnableMMAP off
+
+#
+# EnableSendfile: Control whether the sendfile kernel support is 
+# used to deliver files (assuming that the OS supports it). 
+# The default is on; turn this off if you serve from NFS-mounted 
+# filesystems.  Please see
+# http://httpd.apache.org/docs/2.2/mod/core.html#enablesendfile
+#
+#EnableSendfile off
+
+#
+# ErrorLog: The location of the error log file.
+# If you do not specify an ErrorLog directive within a <VirtualHost>
+# container, error messages relating to that virtual host will be
+# logged here.  If you *do* define an error logfile for a <VirtualHost>
+# container, that host's errors will be logged there and not here.
+#
+ErrorLog logs/error_log
+
+#
+# LogLevel: Control the number of messages logged to the error_log.
+# Possible values include: debug, info, notice, warn, error, crit,
+# alert, emerg.
+#
+LogLevel warn
+
+#
+# The following directives define some format nicknames for use with
+# a CustomLog directive (see below).
+#
+LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
+LogFormat "%h %l %u %t \"%r\" %>s %b" common
+LogFormat "%{Referer}i -> %U" referer
+LogFormat "%{User-agent}i" agent
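+#
+# For reference, a request logged with the "common" nickname above expands
+# field-by-field (host, ident, user, time, request line, status, bytes) to
+# an entry like this illustrative one:
+#   192.0.2.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326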
+
+# "combinedio" includes actual counts of actual bytes received (%I) and sent (%O); this
+# requires the mod_logio module to be loaded.
+#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
+
+#
+# The location and format of the access logfile (Common Logfile Format).
+# If you do not define any access logfiles within a <VirtualHost>
+# container, they will be logged here.  Contrariwise, if you *do*
+# define per-<VirtualHost> access logfiles, transactions will be
+# logged therein and *not* in this file.
+#
+#CustomLog logs/access_log common
+
+#
+# If you would like to have separate agent and referer logfiles, uncomment
+# the following directives.
+#
+#CustomLog logs/referer_log referer
+#CustomLog logs/agent_log agent
+
+#
+# For a single logfile with access, agent, and referer information
+# (Combined Logfile Format), use the following directive:
+#
+CustomLog logs/access_log combined
+
+#
+# Optionally add a line containing the server version and virtual host
+# name to server-generated pages (internal error documents, FTP directory
+# listings, mod_status and mod_info output etc., but not CGI generated
+# documents or custom error documents).
+# Set to "EMail" to also include a mailto: link to the ServerAdmin.
+# Set to one of:  On | Off | EMail
+#
+ServerSignature On
+
+#
+# Aliases: Add here as many aliases as you need (with no limit). The format is 
+# Alias fakename realname
+#
+# Note that if you include a trailing / on fakename then the server will
+# require it to be present in the URL.  So "/icons" isn't aliased in this
+# example, only "/icons/".  If the fakename is slash-terminated, then the 
+# realname must also be slash terminated, and if the fakename omits the 
+# trailing slash, the realname must also omit it.
+#
+# We include the /icons/ alias for FancyIndexed directory listings.  If you
+# do not use FancyIndexing, you may comment this out.
+#
+Alias /icons/ "/var/www/icons/"
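+#
+# For example, given the alias above, a request for /icons/apache_pb.gif is
+# served from /var/www/icons/apache_pb.gif, while a request for /icons
+# (no trailing slash) is not aliased at all (apache_pb.gif is just an
+# example name).
+#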
+
+<Directory "/var/www/icons">
+    Options Indexes MultiViews FollowSymLinks
+    AllowOverride None
+    Order allow,deny
+    Allow from all
+</Directory>
+
+#
+# WebDAV module configuration section.
+# 
+<IfModule mod_dav_fs.c>
+    # Location of the WebDAV lock database.
+    DAVLockDB /var/lib/dav/lockdb
+</IfModule>
+
+#
+# ScriptAlias: This controls which directories contain server scripts.
+# ScriptAliases are essentially the same as Aliases, except that
+# documents in the realname directory are treated as applications and
+# run by the server when requested rather than as documents sent to the client.
+# The same rules about trailing "/" apply to ScriptAlias directives as to
+# Alias.
+#
+ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
+
+#
+# "/var/www/cgi-bin" should be changed to whatever your ScriptAliased
+# CGI directory is, if you have that configured.
+#
+<Directory "/var/www/cgi-bin">
+    AllowOverride None
+    Options None
+    Order allow,deny
+    Allow from all
+</Directory>
+
+#
+# Redirect allows you to tell clients about documents which used to exist in
+# your server's namespace, but do not anymore. This allows you to tell the
+# clients where to look for the relocated document.
+# Example:
+# Redirect permanent /foo http://www.example.com/bar
+
+#
+# Directives controlling the display of server-generated directory listings.
+#
+
+#
+# IndexOptions: Controls the appearance of server-generated directory
+# listings.
+#
+IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable
+
+#
+# AddIcon* directives tell the server which icon to show for different
+# files or filename extensions.  These are only displayed for
+# FancyIndexed directories.
+#
+AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip
+
+AddIconByType (TXT,/icons/text.gif) text/*
+AddIconByType (IMG,/icons/image2.gif) image/*
+AddIconByType (SND,/icons/sound2.gif) audio/*
+AddIconByType (VID,/icons/movie.gif) video/*
+
+AddIcon /icons/binary.gif .bin .exe
+AddIcon /icons/binhex.gif .hqx
+AddIcon /icons/tar.gif .tar
+AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
+AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
+AddIcon /icons/a.gif .ps .ai .eps
+AddIcon /icons/layout.gif .html .shtml .htm .pdf
+AddIcon /icons/text.gif .txt
+AddIcon /icons/c.gif .c
+AddIcon /icons/p.gif .pl .py
+AddIcon /icons/f.gif .for
+AddIcon /icons/dvi.gif .dvi
+AddIcon /icons/uuencoded.gif .uu
+AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl
+AddIcon /icons/tex.gif .tex
+AddIcon /icons/bomb.gif core
+
+AddIcon /icons/back.gif ..
+AddIcon /icons/hand.right.gif README
+AddIcon /icons/folder.gif ^^DIRECTORY^^
+AddIcon /icons/blank.gif ^^BLANKICON^^
+
+#
+# DefaultIcon is which icon to show for files which do not have an icon
+# explicitly set.
+#
+DefaultIcon /icons/unknown.gif
+
+#
+# AddDescription allows you to place a short description after a file in
+# server-generated indexes.  These are only displayed for FancyIndexed
+# directories.
+# Format: AddDescription "description" filename
+#
+#AddDescription "GZIP compressed document" .gz
+#AddDescription "tar archive" .tar
+#AddDescription "GZIP compressed tar archive" .tgz
+
+#
+# ReadmeName is the name of the README file the server will look for by
+# default, and append to directory listings.
+#
+# HeaderName is the name of a file which should be prepended to
+# directory indexes. 
+ReadmeName README.html
+HeaderName HEADER.html
+
+#
+# IndexIgnore is a set of filenames which directory indexing should ignore
+# and not include in the listing.  Shell-style wildcarding is permitted.
+#
+IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t
+
+#
+# DefaultLanguage and AddLanguage allows you to specify the language of 
+# a document. You can then use content negotiation to give a browser a 
+# file in a language the user can understand.
+#
+# Specify a default language. This means that all data
+# going out without a specific language tag (see below) will 
+# be marked with this one. You probably do NOT want to set
+# this unless you are sure it is correct for all cases.
+#
+# * It is generally better to not mark a page as 
+# * being a certain language than marking it with the wrong
+# * language!
+#
+# DefaultLanguage nl
+#
+# Note 1: The suffix does not have to be the same as the language
+# keyword --- those with documents in Polish (whose net-standard
+# language code is pl) may wish to use "AddLanguage pl .po" to
+# avoid the ambiguity with the common suffix for perl scripts.
+#
+# Note 2: The example entries below illustrate that in some cases 
+# the two character 'Language' abbreviation is not identical to 
+# the two character 'Country' code for its country,
+# E.g. 'Danmark/dk' versus 'Danish/da'.
+#
+# Note 3: In the case of 'ltz' we violate the RFC by using a three char
+# specifier. There is 'work in progress' to fix this and get
+# the reference data for rfc1766 cleaned up.
+#
+# Catalan (ca) - Croatian (hr) - Czech (cs) - Danish (da) - Dutch (nl)
+# English (en) - Esperanto (eo) - Estonian (et) - French (fr) - German (de)
+# Greek-Modern (el) - Hebrew (he) - Italian (it) - Japanese (ja)
+# Korean (ko) - Luxembourgeois* (ltz) - Norwegian Nynorsk (nn)
+# Norwegian (no) - Polish (pl) - Portuguese (pt)
+# Brazilian Portuguese (pt-BR) - Russian (ru) - Swedish (sv)
+# Simplified Chinese (zh-CN) - Spanish (es) - Traditional Chinese (zh-TW)
+#
+AddLanguage ca .ca
+AddLanguage cs .cz .cs
+AddLanguage da .dk
+AddLanguage de .de
+AddLanguage el .el
+AddLanguage en .en
+AddLanguage eo .eo
+AddLanguage es .es
+AddLanguage et .et
+AddLanguage fr .fr
+AddLanguage he .he
+AddLanguage hr .hr
+AddLanguage it .it
+AddLanguage ja .ja
+AddLanguage ko .ko
+AddLanguage ltz .ltz
+AddLanguage nl .nl
+AddLanguage nn .nn
+AddLanguage no .no
+AddLanguage pl .po
+AddLanguage pt .pt
+AddLanguage pt-BR .pt-br
+AddLanguage ru .ru
+AddLanguage sv .sv
+AddLanguage zh-CN .zh-cn
+AddLanguage zh-TW .zh-tw
+
+#
+# LanguagePriority allows you to give precedence to some languages
+# in case of a tie during content negotiation.
+#
+# Just list the languages in decreasing order of preference. We have
+# more or less alphabetized them here. You probably want to change this.
+#
+LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW
+
+#
+# ForceLanguagePriority allows you to serve a result page rather than
+# MULTIPLE CHOICES (Prefer) [in case of a tie] or NOT ACCEPTABLE (Fallback)
+# [in case no accepted languages matched the available variants]
+#
+ForceLanguagePriority Prefer Fallback
+
+#
+# Specify a default charset for all content served; this enables
+# interpretation of all content as UTF-8 by default.  To use the 
+# default browser choice (ISO-8859-1), or to allow the META tags
+# in HTML content to override this choice, comment out this
+# directive:
+#
+AddDefaultCharset UTF-8
+
+#
+# AddType allows you to add to or override the MIME configuration
+# file mime.types for specific file types.
+#
+#AddType application/x-tar .tgz
+
+#
+# AddEncoding allows you to have certain browsers uncompress
+# information on the fly. Note: Not all browsers support this.
+# Despite the name similarity, the following Add* directives have nothing
+# to do with the FancyIndexing customization directives above.
+#
+#AddEncoding x-compress .Z
+#AddEncoding x-gzip .gz .tgz
+
+# If the AddEncoding directives above are commented-out, then you
+# probably should define those extensions to indicate media types:
+#
+AddType application/x-compress .Z
+AddType application/x-gzip .gz .tgz
+
+#
+# AddHandler allows you to map certain file extensions to "handlers":
+# actions unrelated to filetype. These can be either built into the server
+# or added with the Action directive (see below)
+#
+# To use CGI scripts outside of ScriptAliased directories:
+# (You will also need to add "ExecCGI" to the "Options" directive.)
+#
+#AddHandler cgi-script .cgi
+
+#
+# For files that include their own HTTP headers:
+#
+#AddHandler send-as-is asis
+
+#
+# For type maps (negotiated resources):
+# (This is enabled by default to allow the Apache "It Worked" page
+#  to be distributed in multiple languages.)
+#
+AddHandler type-map var
+
+#
+# Filters allow you to process content before it is sent to the client.
+#
+# To parse .shtml files for server-side includes (SSI):
+# (You will also need to add "Includes" to the "Options" directive.)
+#
+AddType text/html .shtml
+AddOutputFilter INCLUDES .shtml
+
+#
+# Action lets you define media types that will execute a script whenever
+# a matching file is called. This eliminates the need for repeated URL
+# pathnames for oft-used CGI file processors.
+# Format: Action media/type /cgi-script/location
+# Format: Action handler-name /cgi-script/location
+#
+
+#
+# Customizable error responses come in three flavors:
+# 1) plain text 2) local redirects 3) external redirects
+#
+# Some examples:
+#ErrorDocument 500 "The server made a boo boo."
+#ErrorDocument 404 /missing.html
+#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
+#ErrorDocument 402 http://www.example.com/subscription_info.html
+#
+
+#
+# Putting this all together, we can internationalize error responses.
+#
+# We use Alias to redirect any /error/HTTP_.html.var response to
+# our collection of by-error message multi-language collections.  We use 
+# includes to substitute the appropriate text.
+#
+# You can modify the messages' appearance without changing any of the
+# default HTTP_.html.var files by adding the line:
+#
+#   Alias /error/include/ "/your/include/path/"
+#
+# which allows you to create your own set of files by starting with the
+# /var/www/error/include/ files and
+# copying them to /your/include/path/, even on a per-VirtualHost basis.
+#
+
+Alias /error/ "/var/www/error/"
+
+<IfModule mod_negotiation.c>
+<IfModule mod_include.c>
+    <Directory "/var/www/error">
+        AllowOverride None
+        Options IncludesNoExec
+        AddOutputFilter Includes html
+        AddHandler type-map var
+        Order allow,deny
+        Allow from all
+        LanguagePriority en es de fr
+        ForceLanguagePriority Prefer Fallback
+    </Directory>
+
+#    ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var
+#    ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var
+#    ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var
+#    ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
+#    ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var
+#    ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var
+#    ErrorDocument 410 /error/HTTP_GONE.html.var
+#    ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var
+#    ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var
+#    ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var
+#    ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var
+#    ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var
+#    ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var
+#    ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var
+#    ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var
+#    ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var
+#    ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var
+
+</IfModule>
+</IfModule>
+
+#
+# The following directives modify normal HTTP response behavior to
+# handle known problems with browser implementations.
+#
+BrowserMatch "Mozilla/2" nokeepalive
+BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0
+BrowserMatch "RealPlayer 4\.0" force-response-1.0
+BrowserMatch "Java/1\.0" force-response-1.0
+BrowserMatch "JDK/1\.0" force-response-1.0
+
+#
+# The following directive disables redirects on non-GET requests for
+# a directory that does not include the trailing slash.  This fixes a 
+# problem with Microsoft WebFolders which does not appropriately handle 
+# redirects for folders with DAV methods.
+# Same deal with Apple's DAV filesystem and Gnome VFS support for DAV.
+#
+BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
+BrowserMatch "MS FrontPage" redirect-carefully
+BrowserMatch "^WebDrive" redirect-carefully
+BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully
+BrowserMatch "^gnome-vfs/1.0" redirect-carefully
+BrowserMatch "^XML Spy" redirect-carefully
+BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully
+
+#
+# Allow server status reports generated by mod_status,
+# with the URL of http://servername/server-status
+# Change the ".example.com" to match your domain to enable.
+#
+#<Location /server-status>
+#    SetHandler server-status
+#    Order deny,allow
+#    Deny from all
+#    Allow from .example.com
+#</Location>
+
+#
+# Allow remote server configuration reports, with the URL of
+#  http://servername/server-info (requires that mod_info.c be loaded).
+# Change the ".example.com" to match your domain to enable.
+#
+#<Location /server-info>
+#    SetHandler server-info
+#    Order deny,allow
+#    Deny from all
+#    Allow from .example.com
+#</Location>
+
+#
+# Proxy Server directives. Uncomment the following lines to
+# enable the proxy server:
+#
+#<IfModule mod_proxy.c>
+#ProxyRequests On
+#
+#<Proxy *>
+#    Order deny,allow
+#    Deny from all
+#    Allow from .example.com
+#</Proxy>
+
+#
+# Enable/disable the handling of HTTP/1.1 "Via:" headers.
+# ("Full" adds the server version; "Block" removes all outgoing Via: headers)
+# Set to one of: Off | On | Full | Block
+#
+#ProxyVia On
+
+#
+# To enable a cache of proxied content, uncomment the following lines.
+# See http://httpd.apache.org/docs/2.2/mod/mod_cache.html for more details.
+#
+#<IfModule mod_disk_cache.c>
+#   CacheEnable disk /
+#   CacheRoot "/var/cache/mod_proxy"
+#</IfModule>
+#
+#</IfModule>
+
+#
+# End of proxy directives.
+
+### Section 3: Virtual Hosts
+#
+# VirtualHost: If you want to maintain multiple domains/hostnames on your
+# machine you can setup VirtualHost containers for them. Most configurations
+# use only name-based virtual hosts so the server doesn't need to worry about
+# IP addresses. This is indicated by the asterisks in the directives below.
+#
+# Please see the documentation at 
+# <URL:http://httpd.apache.org/docs/2.2/vhosts/>
+# for further details before you try to setup virtual hosts.
+#
+# You may use the command line option '-S' to verify your virtual host
+# configuration.
+
+#
+# Use name-based virtual hosting.
+#
+#NameVirtualHost *:80
+#
+# NOTE: NameVirtualHost cannot be used without a port specifier 
+# (e.g. :80) if mod_ssl is being used, due to the nature of the
+# SSL protocol.
+#
+
+#
+# VirtualHost example:
+# Almost any Apache directive may go into a VirtualHost container.
+# The first VirtualHost section is used for requests without a known
+# server name.
+#
+#<VirtualHost *:80>
+#    ServerAdmin webmaster@dummy-host.example.com
+#    DocumentRoot /www/docs/dummy-host.example.com
+#    ServerName dummy-host.example.com
+#    ErrorLog logs/dummy-host.example.com-error_log
+#    CustomLog logs/dummy-host.example.com-access_log common
+#</VirtualHost>
diff --git a/tools/systemvm/debian/config/etc/init.d/cloud b/tools/systemvm/debian/config/etc/init.d/cloud
new file mode 100755
index 00000000000..c437f77350f
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/init.d/cloud
@@ -0,0 +1,135 @@
+#!/bin/bash 
+### BEGIN INIT INFO
+# Provides:          cloud
+# Required-Start:    mountkernfs $local_fs cloud-early-config
+# Required-Stop:     $local_fs
+# Should-Start:      
+# Should-Stop:       
+# Default-Start:     2 3 4 5
+# Default-Stop:      0 1 6
+# Short-Description: 	Start up the cloud.com service
+### END INIT INFO
+
+#set -x
+
+ENABLED=0
+[ -e /etc/default/cloud ] && . /etc/default/cloud
+
+if [ -f /mnt/cmdline ]
+then
+    CMDLINE=$(cat /mnt/cmdline)
+else
+    CMDLINE=$(cat /proc/cmdline)
+fi
+
+TYPE="router"
+for i in $CMDLINE
+  do
+    # search for foo=bar pattern and cut out foo
+    FIRSTPATTERN=$(echo $i | cut -d= -f1)
+    case $FIRSTPATTERN in 
+      type)
+          TYPE=$(echo $i | cut -d= -f2)
+      ;;
+    esac
+done
+
+# Source function library.
+if [ -f /etc/init.d/functions ]
+then
+  . /etc/init.d/functions
+fi
+
+if [ -f /lib/lsb/init-functions ]
+then
+  . /lib/lsb/init-functions
+fi
+
+_success() {
+  if [ -f /etc/init.d/functions ]
+  then
+    success
+  else
+    echo "Success"
+  fi
+}
+
+_failure() {
+  if [ -f /etc/init.d/functions ]
+  then
+    failure
+  else
+    echo "Failed"
+  fi
+}
+RETVAL=$?
+CLOUD_COM_HOME="/usr/local/cloud"
+
+# mkdir -p /var/log/vmops
+
+# Print the PIDs of java processes whose working directory lies under
+# $CLOUD_COM_HOME (pwdx prints "PID: /cwd"; awk -F: keeps just the PID).
+get_pids() {
+  local i
+  for i in $(ps -ef | grep java | grep -v grep | awk '{print $2}')
+  do
+    echo $(pwdx $i) | grep "$CLOUD_COM_HOME" | awk -F: '{print $1}'
+  done
+}
+
+start() {
+   local pid=$(get_pids)
+   echo -n "Starting cloud.com service (type=$TYPE) "
+   if [ -f $CLOUD_COM_HOME/systemvm/run.sh ];
+   then
+     if [ "$pid" == "" ]
+     then
+       (cd $CLOUD_COM_HOME/systemvm; nohup ./run.sh > /var/log/cloud/cloud.out 2>&1 & )
+       pid=$(get_pids)
+       echo $pid > /var/run/cloud.pid 
+     fi
+     _success
+   else
+     _failure
+   fi
+   echo
+}
+
+stop() {
+  local pid
+  echo -n  "Stopping cloud.com service (type=$TYPE): "
+  for pid in $(get_pids)
+  do
+    kill $pid
+  done
+  _success
+  echo
+}
+
+status() {
+  local pids=$(get_pids)
+  if [ "$pids" == "" ]
+  then
+    echo "cloud.com service is not running"
+    return 1
+  fi
+  echo "cloud.com service (type=$TYPE) is running: process id: $pids"
+  return 0
+}
+
+[ "$ENABLED" != 0 ] || exit 0 
+
+case "$1" in
+  start)
+        start
+        ;;
+  stop)
+        stop
+        ;;
+  status)
+        status
+        RETVAL=$?
+        ;;
+  restart)
+        stop
+        start
+        ;;
+  *)
+        echo "Usage: $0 {start|stop|status|restart}"
+        exit 1
+        ;;
+esac
+
+exit $RETVAL
diff --git a/tools/systemvm/debian/config/etc/init.d/cloud-early-config b/tools/systemvm/debian/config/etc/init.d/cloud-early-config
new file mode 100755
index 00000000000..11efc5c1afc
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/init.d/cloud-early-config
@@ -0,0 +1,391 @@
+#!/bin/bash 
+### BEGIN INIT INFO
+# Provides:          cloud-early-config
+# Required-Start:    mountkernfs $local_fs
+# Required-Stop:     $local_fs
+# Should-Start:      
+# Should-Stop:       
+# Default-Start:     S
+# Default-Stop:      0 6
+# Short-Description: configure according to cmdline
+### END INIT INFO
+
+PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
+
+[ -x /sbin/ifup ] || exit 0
+
+. /lib/lsb/init-functions
+
+init_interfaces() {
+  cat > /etc/network/interfaces << EOF
+auto lo $1 $2 $3
+iface lo inet loopback
+
+EOF
+}
+
+patch() {
+  local PATCH_MOUNT=/media/cdrom
+  local patchfile=$PATCH_MOUNT/cloud-scripts.tgz
+  local md5file=/var/cache/cloud/cloud-scripts-signature
+  local privkey=$PATCH_MOUNT/authorized_keys
+  local shouldpatch=false
+  mkdir -p $PATCH_MOUNT
+  if [ -e /dev/xvdd ]; then
+    mount -o ro /dev/xvdd $PATCH_MOUNT
+    [ -f $privkey ] && cp -f $privkey /root/.ssh/ && chmod go-rwx /root/.ssh/authorized_keys
+    local oldmd5=
+    [ -f ${md5file} ] && oldmd5=$(cat ${md5file})
+    local newmd5=
+    [ -f ${patchfile} ] && newmd5=$(md5sum ${patchfile} | awk '{print $1}')
+
+    if [ "$oldmd5" != "$newmd5" ] && [ -f ${patchfile} ] && [ "$newmd5" != "" ]
+    then
+      shouldpatch=true
+      logger -t cloud "Patching scripts"
+      tar xzf $patchfile -C /
+      echo ${newmd5} > ${md5file}
+    fi
+    cat /proc/cmdline > /var/cache/cloud/cmdline
+    logger -t cloud "Patching cloud service"
+    /opt/cloud/bin/patchsystemvm.sh $PATCH_MOUNT 
+    umount $PATCH_MOUNT
+    if [ "$shouldpatch" == "true" ] 
+    then
+      logger -t cloud "Rebooting system since we patched init scripts"
+      sleep 2
+      reboot
+    fi
+  fi
+  if [ -f /mnt/cmdline ]; then
+    cat /mnt/cmdline > /var/cache/cloud/cmdline
+  fi
+  return 0
+}
+
+setup_interface() {
+  local intfnum=$1
+  local ip=$2
+  local mask=$3
+  local gw=$4
+  local intf=eth${intfnum} 
+  local bootproto="static"
+
+
+  if [ "$BOOTPROTO" == "dhcp" ]
+  then
+    if [ "$intfnum" != "0" ]
+    then
+       bootproto="dhcp"
+    fi
+  fi
+
+  if [ "$ip" != "0.0.0.0" -a "$ip" != "" ]
+  then
+     echo "iface $intf inet $bootproto" >> /etc/network/interfaces
+     if [ "$bootproto" == "static" ]
+     then
+       echo "  address $ip " >> /etc/network/interfaces
+       echo "  netmask $mask" >> /etc/network/interfaces
+     fi
+  fi
+
+  ifdown $intf
+  ifup $intf
+}
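+
+# Illustration (hypothetical values): setup_interface "0" 10.1.1.2 255.255.255.0 10.1.1.1
+# appends a stanza like the following to /etc/network/interfaces (static case):
+#   iface eth0 inet static
+#     address 10.1.1.2
+#     netmask 255.255.255.0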
+
+enable_fwding() {
+  logger -t cloud "enable_fwding = $1"
+  echo "$1" > /proc/sys/net/ipv4/ip_forward
+}
+
+enable_svc() {
+  local svc=$1
+  local enabled=$2
+
+  logger -t cloud "Enable service ${svc} = $enabled"
+  local cfg=/etc/default/${svc}
+  sed  -i "s/ENABLED=.*$/ENABLED=$enabled/" $cfg 
+}
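+
+# For example, enable_svc haproxy 1 rewrites the ENABLED= line in
+# /etc/default/haproxy to ENABLED=1; the init scripts shipped here check
+# that flag and exit early when it is 0.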
+
+disable_hvc() {
+  [ ! -d /proc/xen ] && sed -i 's/^vc/#vc/' /etc/inittab && telinit q
+  [  -d /proc/xen ] && sed -i 's/^#vc/vc/' /etc/inittab && telinit q
+}
+
+setup_common() {
+  init_interfaces $1 $2 $3
+  setup_interface "0" $ETH0_IP $ETH0_MASK $GW
+  setup_interface "1" $ETH1_IP $ETH1_MASK $GW
+  setup_interface "2" $ETH2_IP $ETH2_MASK $GW
+  
+  echo $NAME > /etc/hostname
+  echo 'AVAHI_DAEMON_DETECT_LOCAL=0' > /etc/default/avahi-daemon
+  hostname $NAME
+  
+  #Nameserver
+  if [ -n "$NS1" ]
+  then
+    echo "nameserver $NS1" > /etc/dnsmasq-resolv.conf
+    echo "nameserver $NS1" > /etc/resolv.conf
+  fi
+  
+  if [ -n "$NS2" ]
+  then
+    echo "nameserver $NS2" >> /etc/dnsmasq-resolv.conf
+    echo "nameserver $NS2" >> /etc/resolv.conf
+  fi
+  if [ -n "$MGMTNET"  -a -n "$LOCAL_GW" ]
+  then
+    ip route add $MGMTNET via $LOCAL_GW dev eth1
+  fi
+
+  ip route  delete default 
+  ip route add default via $GW
+}
+
+setup_dnsmasq() {
+  [ -z $DHCP_RANGE ] && DHCP_RANGE=$ETH0_IP
+  [ -z $DOMAIN ] && DOMAIN="cloudnine.internal"
+  if [ -n "$DOMAIN" ]
+  then
+    #send domain name to dhcp clients
+    sed -i s/[#]*dhcp-option=15.*$/dhcp-option=15,\"$DOMAIN\"/ /etc/dnsmasq.conf
+    #DNS server will append $DOMAIN to local queries
+    sed -r -i s/^[#]?domain=.*$/domain=$DOMAIN/ /etc/dnsmasq.conf
+    #answer all local domain queries
+    sed  -i -e "s/^[#]*local=.*$/local=\/$DOMAIN\//" /etc/dnsmasq.conf
+  fi
+  sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
+  sed -i -e "s/^[#]*listen-address=.*$/listen-address=$ETH0_IP/" /etc/dnsmasq.conf
+
+}
+
+setup_sshd(){
+  local ip=$1
+  [ -f /etc/ssh/sshd_config ] && sed -i -e "s/^[#]*ListenAddress.*$/ListenAddress $ip/" /etc/ssh/sshd_config
+}
+
+setup_apache2() {
+  [ -f /etc/apache2/sites-available/default ] && sed -i -e "s///" /etc/apache2/sites-available/default
+  [ -f /etc/apache2/sites-available/default-ssl ] && sed -i -e "s///" /etc/apache2/sites-available/default-ssl
+  [ -f /etc/apache2/ports.conf ] && sed -i -e "s/Listen .*:80/Listen $ETH0_IP:80/g" /etc/apache2/ports.conf
+  [ -f /etc/apache2/ports.conf ] && sed -i -e "s/Listen .*:443/Listen $ETH0_IP:443/g" /etc/apache2/ports.conf
+  [ -f /etc/apache2/ports.conf ] && sed -i -e "s/NameVirtualHost .*:80/NameVirtualHost $ETH0_IP:80/g" /etc/apache2/ports.conf
+}
+
+setup_router() {
+  setup_common eth0 eth1 eth2
+  setup_dnsmasq
+  setup_apache2
+
+  sed -i  /gateway/d /etc/hosts
+  echo "$ETH0_IP $NAME" >> /etc/hosts
+
+  setup_sshd $ETH1_IP
+
+  enable_svc dnsmasq 1
+  enable_svc haproxy 1
+  enable_svc cloud-passwd-srvr 1
+  enable_svc cloud 0
+  enable_fwding 1
+  cp /etc/iptables/iptables-router /etc/iptables/rules
+}
+
+setup_dhcpsrvr() {
+  setup_common eth0 eth1
+  setup_dnsmasq
+  setup_apache2
+
+  sed -i  /gateway/d /etc/hosts
+  echo "$ETH0_IP $NAME" >> /etc/hosts
+
+  setup_sshd $ETH1_IP
+
+  enable_svc dnsmasq 1
+  enable_svc haproxy 0
+  enable_svc cloud-passwd-srvr 1
+  enable_svc cloud 0
+  enable_fwding 0
+  cp /etc/iptables/iptables-router /etc/iptables/rules
+}
+
+setup_secstorage() {
+  setup_common eth0 eth1 eth2
+  sed -i  /gateway/d /etc/hosts
+  public_ip=$ETH2_IP
+  [ "$ETH2_IP" == "0.0.0.0" ] && public_ip=$ETH1_IP
+  echo "$public_ip $NAME" >> /etc/hosts
+
+  cp /etc/iptables/iptables-secstorage /etc/iptables/rules
+  setup_sshd $ETH0_IP
+
+  enable_fwding 0
+  enable_svc haproxy 0
+  enable_svc dnsmasq 0
+  enable_svc cloud-passwd-srvr 0
+  enable_svc cloud 1
+}
+
+
+setup_console_proxy() {
+  setup_common eth0 eth1 eth2
+  public_ip=$ETH2_IP
+  [ "$ETH2_IP" == "0.0.0.0" ] && public_ip=$ETH1_IP
+  sed -i  /gateway/d /etc/hosts
+  echo "$public_ip $NAME" >> /etc/hosts
+  cp /etc/iptables/iptables-consoleproxy /etc/iptables/rules
+  setup_sshd $ETH0_IP
+
+  enable_fwding 0
+  enable_svc haproxy 0
+  enable_svc dnsmasq 0
+  enable_svc cloud-passwd-srvr 0
+  enable_svc cloud 1
+}
+
+setup_default() {
+  cat > /etc/network/interfaces << EOF
+auto lo eth0
+iface lo inet loopback
+
+iface eth0 inet dhcp
+
+EOF
+}
+
+start() {
+  patch
+  case $TYPE in 
+     router)
+         [ "$NAME" == "" ] && NAME=router
+         setup_router
+	  ;;
+     dhcpsrvr)
+         [ "$NAME" == "" ] && NAME=dhcpsrvr
+         setup_dhcpsrvr
+	  ;;
+     secstorage)
+         [ "$NAME" == "" ] && NAME=secstorage
+         setup_secstorage;
+	  ;;
+     consoleproxy)
+         [ "$NAME" == "" ] && NAME=consoleproxy
+         setup_console_proxy;
+	  ;;
+     unknown)
+         [ "$NAME" == "" ] && NAME=systemvm
+         setup_default;
+          ;;
+  esac
+  return 0
+}
+
+disable_hvc
+if [ -f /mnt/cmdline ]
+then
+    CMDLINE=$(cat /mnt/cmdline)
+else
+    CMDLINE=$(cat /proc/cmdline)
+fi
+
+
+TYPE="unknown"
+BOOTPROTO="static"
+
+for i in $CMDLINE
+  do
+    # search for foo=bar pattern and cut out foo
+    KEY=$(echo $i | cut -d= -f1)
+    VALUE=$(echo $i | cut -d= -f2)
+    case $KEY in 
+      eth0ip)
+          ETH0_IP=$VALUE
+          ;;
+      eth1ip)
+          ETH1_IP=$VALUE
+          ;;
+      eth2ip)
+          ETH2_IP=$VALUE
+          ;;
+      gateway)
+          GW=$VALUE
+          ;;
+      eth0mask)
+          ETH0_MASK=$VALUE
+          ;;
+      eth1mask)
+          ETH1_MASK=$VALUE
+          ;;
+      eth2mask)
+          ETH2_MASK=$VALUE
+          ;;
+      dns1)
+          NS1=$VALUE
+          ;;
+      dns2)
+          NS2=$VALUE
+          ;;
+      domain)
+          DOMAIN=$VALUE
+          ;;
+      mgmtcidr)
+          MGMTNET=$VALUE
+          ;;
+      localgw)
+          LOCAL_GW=$VALUE
+          ;;
+      template)
+        TEMPLATE=$VALUE
+      	;;
+      name)
+	NAME=$VALUE
+	;;
+      dhcprange)
+        DHCP_RANGE=$(echo $VALUE | tr ':' ',')
+      	;;
+      bootproto)
+        BOOTPROTO=$VALUE 
+      	;;
+      type)
+        TYPE=$VALUE	
+	;;
+    esac
+done
+
+
+case "$1" in
+start)
+
+	log_action_begin_msg "Executing cloud-early-config"
+        logger -t cloud "Executing cloud-early-config"
+	if start; then
+	    log_action_end_msg $?
+	else
+	    log_action_end_msg $?
+	fi
+	;;
+
+stop)
+	log_action_begin_msg "Stopping cloud-early-config (noop)"
+	log_action_end_msg 0
+	;;
+
+force-reload|restart)
+
+	log_warning_msg "Restarting $0 is deprecated because it may not re-enable some interfaces"
+	log_action_begin_msg "Executing cloud-early-config"
+	if start; then
+	    log_action_end_msg $?
+	else
+	    log_action_end_msg $?
+	fi
+	;;
+
+*)
+	echo "Usage: /etc/init.d/cloud-early-config {start|stop}"
+	exit 1
+	;;
+esac
+
+exit 0
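The cloud-early-config script above drives all of its setup off `key=value` tokens on the kernel command line (or `/mnt/cmdline`). A minimal standalone sketch of that parsing loop, with a made-up command line for illustration (the real script uses `cut -d= -f1`/`-f2`; parameter expansion below does the same split without subshells):

```shell
#!/bin/bash
# Hypothetical sketch of the cmdline parsing used by cloud-early-config;
# the CMDLINE value below is invented for illustration.
CMDLINE="eth0ip=10.1.1.1 gateway=10.1.1.254 type=router"

TYPE="unknown"
for i in $CMDLINE; do
  KEY=${i%%=*}     # token before the first '='
  VALUE=${i#*=}    # token after the first '='
  case $KEY in
    eth0ip)  ETH0_IP=$VALUE ;;
    gateway) GW=$VALUE ;;
    type)    TYPE=$VALUE ;;
  esac
done
echo "$TYPE $ETH0_IP $GW"
```

The same pattern is repeated in postinit and patchsystemvm.sh, each picking out only the keys it cares about.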
diff --git a/tools/systemvm/debian/config/etc/init.d/cloud-passwd-srvr b/tools/systemvm/debian/config/etc/init.d/cloud-passwd-srvr
new file mode 100755
index 00000000000..f990e232a41
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/init.d/cloud-passwd-srvr
@@ -0,0 +1,61 @@
+#!/bin/bash 
+### BEGIN INIT INFO
+# Provides:          cloud-passwd-srvr
+# Required-Start:    mountkernfs $local_fs cloud-early-config
+# Required-Stop:     $local_fs
+# Should-Start:      
+# Should-Stop:       
+# Default-Start:     S
+# Default-Stop:      0 6
+# Short-Description: Web server that sends passwords to User VMs
+### END INIT INFO
+
+
+ENABLED=0
+[ -e /etc/default/cloud-passwd-srvr ] && . /etc/default/cloud-passwd-srvr
+
+start() {
+  [ "$ENABLED" != 0 ]  || exit 0 
+  nohup bash /opt/cloud/bin/passwd_server&
+}
+
+getpid() {
+  pid=$(ps -ef | grep passwd_server | grep -v grep | awk '{print $2}')
+  echo $pid
+}
+
+stop_socat() {
+  spid=$(pidof socat)
+  [ "$spid" != "" ] && kill -9 $spid && echo "Killed socat (pid=$spid)" 
+  return 0
+}
+
+stop () {
+  stop_socat
+  pid=$(getpid)
+  [ "$pid" != "" ] && kill -9 $pid && echo "Stopped password server (pid=$pid)" && stop_socat && return 0
+  echo "Password server is not running" && return 0
+}
+
+status () {
+  pid=$(getpid)
+  [ "$pid" != "" ] && echo "Password server is running (pid=$pid)" && return 0
+  echo "Password server is not running" && return 0
+}
+
+case "$1" in
+   start) start
+	  ;;
+    stop) stop
+ 	  ;;
+    status) status
+ 	  ;;
+ restart) stop
+          start
+ 	  ;;
+       *) echo "Usage: $0 {start|stop|status|restart}"
+	  exit 1
+	  ;;
+esac
+
+exit 0
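The `getpid` helper above locates the password server by piping `ps -ef` through two greps and awk. The same match can be sketched with `pgrep -f` (an assumption that procps is available on the appliance; the init script itself sticks to ps|grep), demonstrated here against a throwaway background process:

```shell
#!/bin/bash
# Sketch only: 'sleep 123' stands in for the passwd_server process.
sleep 123 &
child=$!
pid=$(pgrep -f "sleep 123" | head -n1)   # matches the full command line
kill "$child"
echo "found=$pid child=$child"
```

Unlike the ps|grep pipeline, `pgrep` excludes itself from the match, so the `grep -v grep` step is unnecessary.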
diff --git a/tools/systemvm/debian/config/etc/init.d/postinit b/tools/systemvm/debian/config/etc/init.d/postinit
new file mode 100755
index 00000000000..f9502408978
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/init.d/postinit
@@ -0,0 +1,143 @@
+#!/bin/bash -e
+### BEGIN INIT INFO
+# Provides:          postinit
+# Required-Start:    mountkernfs $local_fs cloud-early-config
+# Required-Stop:     $local_fs
+# Should-Start:      
+# Should-Stop:       
+# Default-Start:     2 3 4 5
+# Default-Stop:      0 1 6
+# Short-Description: 	post-init
+### END INIT INFO
+
+replace_in_file() {
+  local filename=$1
+  local keyname=$2
+  local value=$3
+  sed -i /$keyname=/d $filename
+  echo "$keyname=$value" >> $filename
+  return $?
+}
+
+setup_secstorage() {
+  public_ip=$ETH2_IP
+  sed -i /$NAME/d /etc/hosts
+  echo "$public_ip $NAME" >> /etc/hosts
+  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*:80$/Listen $public_ip:80/" /etc/httpd/conf/httpd.conf
+  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*:443$/Listen $public_ip:443/" /etc/httpd/conf/httpd.conf
+}
+
+setup_console_proxy() {
+  public_ip=$ETH2_IP
+  sed -i /$NAME/d /etc/hosts
+  echo "$public_ip $NAME" >> /etc/hosts
+}
+
+start() {
+  case $TYPE in 
+     secstorage)
+         [ "$NAME" == "" ] && NAME=secstorage
+         setup_secstorage;
+	  ;;
+     consoleproxy)
+         [ "$NAME" == "" ] && NAME=consoleproxy
+         setup_console_proxy;
+	  ;;
+  esac
+}
+
+stop() {
+   echo ""
+}
+
+status() {
+   echo ""
+}
+
+CMDLINE=$(cat /proc/cmdline)
+TYPE="router"
+BOOTPROTO="static"
+
+for i in $CMDLINE
+  do
+    # search for foo=bar pattern and cut out foo
+    KEY=$(echo $i | cut -d= -f1)
+    VALUE=$(echo $i | cut -d= -f2)
+    case $KEY in 
+      eth0ip)
+          ETH0_IP=$VALUE
+          ;;
+      eth1ip)
+          ETH1_IP=$VALUE
+          ;;
+      eth2ip)
+          ETH2_IP=$VALUE
+          ;;
+      gateway)
+          GW=$VALUE
+          ;;
+      eth0mask)
+          ETH0_MASK=$VALUE
+          ;;
+      eth1mask)
+          ETH1_MASK=$VALUE
+          ;;
+      eth2mask)
+          ETH2_MASK=$VALUE
+          ;;
+      dns1)
+          NS1=$VALUE
+          ;;
+      dns2)
+          NS2=$VALUE
+          ;;
+      domain)
+          DOMAIN=$VALUE
+          ;;
+      mgmtcidr)
+          MGMTNET=$VALUE
+          ;;
+      localgw)
+          LOCAL_GW=$VALUE
+          ;;
+      template)
+        TEMPLATE=$VALUE
+      	;;
+      name)
+	NAME=$VALUE
+	;;
+      dhcprange)
+        DHCP_RANGE=$(echo $VALUE | tr ':' ',')
+      	;;
+      bootproto)
+        BOOTPROTO=$VALUE 
+      	;;
+      type)
+        TYPE=$VALUE	
+	;;
+    esac
+done
+
+if [ "$BOOTPROTO" == "static" ]
+then
+    exit 0
+fi
+
+ETH1_IP=$(ifconfig eth1|grep 'inet addr:'|cut -d : -f 2|cut -d \  -f 1)
+ETH2_IP=$(ifconfig eth2|grep 'inet addr:'|cut -d : -f 2|cut -d \  -f 1)
+
+
+case "$1" in
+   start) start
+	  ;;
+    stop) stop
+ 	  ;;
+    status) status
+ 	  ;;
+ restart) stop
+          start
+ 	  ;;
+       *) echo "Usage: $0 {start|stop|status|restart}"
+	  exit 1
+	  ;;
+esac
diff --git a/patches/kvm/etc/sysconfig/iptables-domp b/tools/systemvm/debian/config/etc/iptables/iptables-consoleproxy
old mode 100755
new mode 100644
similarity index 50%
rename from patches/kvm/etc/sysconfig/iptables-domp
rename to tools/systemvm/debian/config/etc/iptables/iptables-consoleproxy
index 0a29cd3454f..92a26f7b558
--- a/patches/kvm/etc/sysconfig/iptables-domp
+++ b/tools/systemvm/debian/config/etc/iptables/iptables-consoleproxy
@@ -1,18 +1,20 @@
-# @VERSION@
+# Generated by iptables-save v1.3.8 on Thu Oct  1 18:16:05 2009
 *nat
-:PREROUTING ACCEPT [499:70846]
-:POSTROUTING ACCEPT [1:85]
-:OUTPUT ACCEPT [1:85]
+:PREROUTING ACCEPT [0:0]
+:POSTROUTING ACCEPT [0:0]
+:OUTPUT ACCEPT [0:0]
 COMMIT
 *filter
-:INPUT DROP [288:42467]
+:INPUT DROP [0:0]
 :FORWARD DROP [0:0]
-:OUTPUT ACCEPT [65:9665]
+:OUTPUT ACCEPT [0:0]
 -A INPUT -i lo  -j ACCEPT 
+-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT 
 -A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT 
 -A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT 
 -A INPUT -p icmp -j ACCEPT 
--A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 8001 -j ACCEPT
+-A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 3922 -j ACCEPT
+-A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 8001 -j ACCEPT
 -A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
 -A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
 COMMIT
diff --git a/patches/kvm/etc/sysconfig/iptables-domr b/tools/systemvm/debian/config/etc/iptables/iptables-router
old mode 100755
new mode 100644
similarity index 80%
rename from patches/kvm/etc/sysconfig/iptables-domr
rename to tools/systemvm/debian/config/etc/iptables/iptables-router
index c9f5010ed60..3bc7b50f74a
--- a/patches/kvm/etc/sysconfig/iptables-domr
+++ b/tools/systemvm/debian/config/etc/iptables/iptables-router
@@ -1,13 +1,12 @@
-# @VERSION@
 *nat
-:PREROUTING ACCEPT [499:70846]
-:POSTROUTING ACCEPT [1:85]
-:OUTPUT ACCEPT [1:85]
+:PREROUTING ACCEPT [0:0]
+:POSTROUTING ACCEPT [0:0]
+:OUTPUT ACCEPT [0:0]
 COMMIT
 *filter
-:INPUT DROP [288:42467]
+:INPUT DROP [0:0]
 :FORWARD DROP [0:0]
-:OUTPUT ACCEPT [65:9665]
+:OUTPUT ACCEPT [0:0]
 -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
 -A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
 -A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
@@ -17,7 +16,9 @@ COMMIT
 -A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
 -A INPUT -i eth1 -p tcp -m state --state NEW --dport 3922 -j ACCEPT
 -A INPUT -i eth0 -p tcp -m state --state NEW --dport 8080 -j ACCEPT
+-A INPUT -i eth0 -p tcp -m state --state NEW --dport 80 -j ACCEPT
 -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
 -A FORWARD -i eth0 -o eth2 -j ACCEPT
 -A FORWARD -i eth2 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
 COMMIT
+
diff --git a/tools/systemvm/debian/config/etc/iptables/iptables-secstorage b/tools/systemvm/debian/config/etc/iptables/iptables-secstorage
new file mode 100644
index 00000000000..ef733c431a0
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/iptables/iptables-secstorage
@@ -0,0 +1,20 @@
+# Generated by iptables-save v1.3.8 on Thu Oct  1 18:16:05 2009
+*nat
+:PREROUTING ACCEPT [0:0]
+:POSTROUTING ACCEPT [0:0]
+:OUTPUT ACCEPT [0:0]
+COMMIT
+*filter
+:INPUT DROP [0:0]
+:FORWARD DROP [0:0]
+:OUTPUT ACCEPT [0:0]
+:HTTP - [0:0]
+-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT 
+-A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT 
+-A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT 
+-A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 80 -j HTTP 
+-A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 80 -j DROP 
+-A INPUT -i lo  -j ACCEPT 
+-A INPUT -p icmp -j ACCEPT 
+-A INPUT -i eth0 -p tcp -m state --state NEW --dport 3922 -j ACCEPT
+COMMIT
diff --git a/tools/systemvm/debian/config/etc/iptables/rules b/tools/systemvm/debian/config/etc/iptables/rules
new file mode 100644
index 00000000000..3bc7b50f74a
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/iptables/rules
@@ -0,0 +1,24 @@
+*nat
+:PREROUTING ACCEPT [0:0]
+:POSTROUTING ACCEPT [0:0]
+:OUTPUT ACCEPT [0:0]
+COMMIT
+*filter
+:INPUT DROP [0:0]
+:FORWARD DROP [0:0]
+:OUTPUT ACCEPT [0:0]
+-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
+-A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
+-A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
+-A INPUT -p icmp -j ACCEPT
+-A INPUT -i lo -j ACCEPT
+-A INPUT -i eth0 -p udp -m udp --dport 67 -j ACCEPT
+-A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
+-A INPUT -i eth1 -p tcp -m state --state NEW --dport 3922 -j ACCEPT
+-A INPUT -i eth0 -p tcp -m state --state NEW --dport 8080 -j ACCEPT
+-A INPUT -i eth0 -p tcp -m state --state NEW --dport 80 -j ACCEPT
+-A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
+-A FORWARD -i eth0 -o eth2 -j ACCEPT
+-A FORWARD -i eth2 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
+COMMIT
+
diff --git a/tools/systemvm/debian/config/etc/rc.local b/tools/systemvm/debian/config/etc/rc.local
new file mode 100755
index 00000000000..cb434a23526
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/rc.local
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+[ ! -f /var/cache/cloud/enabled_svcs ] && touch /var/cache/cloud/enabled_svcs
+for svc in $(cat /var/cache/cloud/enabled_svcs) 
+do
+   logger -t cloud "Starting $svc"
+   service $svc start
+done
+
+[ ! -f /var/cache/cloud/disabled_svcs ] && touch /var/cache/cloud/disabled_svcs
+for svc in $(cat /var/cache/cloud/disabled_svcs) 
+do
+   logger -t cloud "Stopping $svc"
+   service $svc stop
+done
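This rc.local consumes the `/var/cache/cloud/enabled_svcs` and `/var/cache/cloud/disabled_svcs` lists written by patchsystemvm.sh: one space-separated list of init script names per file. A sketch of that contract, substituting a temp directory for the real cache path and a recording variable for `service $svc start`:

```shell
#!/bin/bash
# Illustrative only: $CACHE stands in for /var/cache/cloud.
CACHE=$(mktemp -d)
echo "cloud-passwd-srvr ssh dnsmasq haproxy apache2" > "$CACHE/enabled_svcs"

started=""
for svc in $(cat "$CACHE/enabled_svcs"); do
  started="$started $svc"     # real script runs: service $svc start
done
echo "would start:$started"
rm -rf "$CACHE"
```

Because the lists are plain word-split strings, service names must not contain whitespace, which holds for every name the patch scripts write.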
diff --git a/tools/systemvm/debian/config/etc/ssh/sshd_config b/tools/systemvm/debian/config/etc/ssh/sshd_config
new file mode 100644
index 00000000000..2bcd6e5e580
--- /dev/null
+++ b/tools/systemvm/debian/config/etc/ssh/sshd_config
@@ -0,0 +1,128 @@
+#	$OpenBSD: sshd_config,v 1.75 2007/03/19 01:01:29 djm Exp $
+
+# This is the sshd server system-wide configuration file.  See
+# sshd_config(5) for more information.
+
+# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin
+
+# The strategy used for options in the default sshd_config shipped with
+# OpenSSH is to specify options with their default value where
+# possible, but leave them commented.  Uncommented options change a
+# default value.
+
+Port 3922
+#AddressFamily any
+#ListenAddress 0.0.0.0
+#ListenAddress ::
+
+# Disable legacy (protocol version 1) support in the server for new
+# installations. In future the default will change to require explicit
+# activation of protocol 1
+Protocol 2
+
+# HostKey for protocol version 1
+#HostKey /etc/ssh/ssh_host_key
+# HostKeys for protocol version 2
+#HostKey /etc/ssh/ssh_host_rsa_key
+#HostKey /etc/ssh/ssh_host_dsa_key
+
+# Lifetime and size of ephemeral version 1 server key
+#KeyRegenerationInterval 1h
+#ServerKeyBits 768
+
+# Logging
+# obsoletes QuietMode and FascistLogging
+#SyslogFacility AUTH
+SyslogFacility AUTHPRIV
+#LogLevel INFO
+
+# Authentication:
+
+#LoginGraceTime 2m
+PermitRootLogin yes
+#StrictModes yes
+#MaxAuthTries 6
+
+#RSAAuthentication yes
+#PubkeyAuthentication yes
+#AuthorizedKeysFile	.ssh/authorized_keys
+
+# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
+#RhostsRSAAuthentication no
+# similar for protocol version 2
+#HostbasedAuthentication no
+# Change to yes if you don't trust ~/.ssh/known_hosts for
+# RhostsRSAAuthentication and HostbasedAuthentication
+#IgnoreUserKnownHosts no
+# Don't read the user's ~/.rhosts and ~/.shosts files
+#IgnoreRhosts yes
+
+# To disable tunneled clear text passwords, change to no here!
+#PasswordAuthentication yes
+#PermitEmptyPasswords no
+PasswordAuthentication no
+
+# Change to no to disable s/key passwords
+#ChallengeResponseAuthentication yes
+ChallengeResponseAuthentication no
+
+# Kerberos options
+#KerberosAuthentication no
+#KerberosOrLocalPasswd yes
+#KerberosTicketCleanup yes
+#KerberosGetAFSToken no
+
+# GSSAPI options
+#GSSAPIAuthentication no
+GSSAPIAuthentication no
+#GSSAPICleanupCredentials yes
+GSSAPICleanupCredentials yes
+
+# Set this to 'yes' to enable PAM authentication, account processing, 
+# and session processing. If this is enabled, PAM authentication will 
+# be allowed through the ChallengeResponseAuthentication and
+# PasswordAuthentication.  Depending on your PAM configuration,
+# PAM authentication via ChallengeResponseAuthentication may bypass
+# the setting of "PermitRootLogin without-password".
+# If you just want the PAM account and session checks to run without
+# PAM authentication, then enable this but set PasswordAuthentication
+# and ChallengeResponseAuthentication to 'no'.
+#UsePAM no
+UsePAM yes
+
+# Accept locale-related environment variables
+AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES 
+AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT 
+AcceptEnv LC_IDENTIFICATION LC_ALL
+#AllowTcpForwarding yes
+#GatewayPorts no
+#X11Forwarding no
+#X11Forwarding yes
+#X11DisplayOffset 10
+#X11UseLocalhost yes
+#PrintMotd yes
+#PrintLastLog yes
+#TCPKeepAlive yes
+#UseLogin no
+#UsePrivilegeSeparation yes
+#PermitUserEnvironment no
+#Compression delayed
+#ClientAliveInterval 0
+#ClientAliveCountMax 3
+#ShowPatchLevel no
+UseDNS no
+#PidFile /var/run/sshd.pid
+#MaxStartups 10
+#PermitTunnel no
+
+# no default banner path
+#Banner /some/path
+
+# override default of no subsystems
+Subsystem	sftp	/usr/libexec/openssh/sftp-server
+
+# Example of overriding settings on a per-user basis
+#Match User anoncvs
+#	X11Forwarding no
+#	AllowTcpForwarding no
+#	ForceCommand cvs server
diff --git a/patches/kvm/etc/sysctl.conf b/tools/systemvm/debian/config/etc/sysctl.conf
old mode 100755
new mode 100644
similarity index 79%
rename from patches/kvm/etc/sysctl.conf
rename to tools/systemvm/debian/config/etc/sysctl.conf
index 69704598684..d5fe5d43e8e
--- a/patches/kvm/etc/sysctl.conf
+++ b/tools/systemvm/debian/config/etc/sysctl.conf
@@ -13,6 +13,13 @@ net.ipv4.conf.default.rp_filter = 1
 # Do not accept source routing
 net.ipv4.conf.default.accept_source_route = 0
 
+# Respect local interface in ARP interactions
+net.ipv4.conf.default.arp_announce = 2
+net.ipv4.conf.default.arp_ignore = 2
+net.ipv4.conf.all.arp_announce = 2
+net.ipv4.conf.all.arp_ignore = 2
+
+
 # Controls the System Request debugging functionality of the kernel
 kernel.sysrq = 0
 
@@ -23,5 +30,4 @@ kernel.core_uses_pid = 1
 # Controls the use of TCP syncookies
 net.ipv4.tcp_syncookies = 1
 
-# VMOps Rudd-O increase conntrack limits, fix http://bugzilla.lab.vmops.com/show_bug.cgi?id=2008
 net.ipv4.netfilter.ip_conntrack_max=65536
diff --git a/tools/systemvm/debian/config/opt/cloud/bin/passwd_server b/tools/systemvm/debian/config/opt/cloud/bin/passwd_server
new file mode 100755
index 00000000000..ee9e531d72e
--- /dev/null
+++ b/tools/systemvm/debian/config/opt/cloud/bin/passwd_server
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+. /etc/default/cloud-passwd-srvr
+guestIp=$(ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
+
+while [ "$ENABLED" == "1" ]
+do
+	socat TCP4-LISTEN:8080,reuseaddr,crnl,bind=$guestIp SYSTEM:"/opt/cloud/bin/serve_password.sh \"\$SOCAT_PEERADDR\""
+
+	rc=$?
+	if [ $rc -ne 0 ]
+	then
+		logger -t cloud "Password server failed with error code $rc. Restarting socat..."
+		sleep 3
+	fi
+        . /etc/default/cloud-passwd-srvr
+
+done
diff --git a/tools/systemvm/debian/config/opt/cloud/bin/patchsystemvm.sh b/tools/systemvm/debian/config/opt/cloud/bin/patchsystemvm.sh
new file mode 100755
index 00000000000..51f0bf1fbe6
--- /dev/null
+++ b/tools/systemvm/debian/config/opt/cloud/bin/patchsystemvm.sh
@@ -0,0 +1,121 @@
+#!/bin/bash
+# $Id: patchsystemvm.sh 10800 2010-07-16 13:48:39Z edison $ $HeadURL: svn://svn.lab.vmops.com/repos/branches/2.1.x/java/scripts/vm/hypervisor/xenserver/prepsystemvm.sh $
+
+#set -x
+logfile="/var/log/patchsystemvm.log"
+#
+# To use existing console proxy .zip-based package file
+#
+patch_console_proxy() {
+   local patchfile=$1
+   rm /usr/local/cloud/systemvm -rf
+   mkdir -p /usr/local/cloud/systemvm
+   echo "All" | unzip $patchfile -d /usr/local/cloud/systemvm >$logfile 2>&1
+   find /usr/local/cloud/systemvm/ -name \*.sh | xargs chmod 555
+   return 0
+}
+
+consoleproxy_svcs() {
+   chkconfig cloud on
+   chkconfig postinit on
+   chkconfig cloud-passwd-srvr off
+   chkconfig haproxy off ;
+   chkconfig dnsmasq off
+   chkconfig ssh on
+   chkconfig apache2 off
+   chkconfig nfs-common off
+   chkconfig portmap off
+   echo "cloud postinit ssh" > /var/cache/cloud/enabled_svcs
+   echo "cloud-passwd-srvr haproxy dnsmasq apache2 nfs-common portmap" > /var/cache/cloud/disabled_svcs
+   mkdir -p /var/log/cloud
+}
+
+secstorage_svcs() {
+   chkconfig cloud on
+   chkconfig postinit on
+   chkconfig cloud-passwd-srvr off
+   chkconfig haproxy off ;
+   chkconfig dnsmasq off
+   chkconfig ssh on
+   chkconfig apache2 off
+   echo "cloud postinit ssh nfs-common portmap" > /var/cache/cloud/enabled_svcs
+   echo "cloud-passwd-srvr haproxy dnsmasq" > /var/cache/cloud/disabled_svcs
+   mkdir -p /var/log/cloud
+}
+
+routing_svcs() {
+   chkconfig cloud off
+   chkconfig cloud-passwd-srvr on ; 
+   chkconfig haproxy on ; 
+   chkconfig dnsmasq on
+   chkconfig ssh on
+   chkconfig nfs-common off
+   chkconfig portmap off
+   echo "cloud-passwd-srvr ssh dnsmasq haproxy apache2" > /var/cache/cloud/enabled_svcs
+   echo "cloud nfs-common portmap" > /var/cache/cloud/disabled_svcs
+}
+
+CMDLINE=$(cat /var/cache/cloud/cmdline)
+TYPE="router"
+PATCH_MOUNT=$1
+
+for i in $CMDLINE
+  do
+    # search for foo=bar pattern and cut out foo
+    KEY=$(echo $i | cut -d= -f1)
+    VALUE=$(echo $i | cut -d= -f2)
+    case $KEY in
+      type)
+        TYPE=$VALUE
+        ;;
+      *)
+        ;;
+    esac
+done
+
+if [ "$TYPE" == "consoleproxy" ] || [ "$TYPE" == "secstorage" ]  && [ -f ${PATCH_MOUNT}/systemvm.zip ]
+then
+  patch_console_proxy ${PATCH_MOUNT}/systemvm.zip
+  if [ $? -gt 0 ]
+  then
+    printf "Failed to apply patch systemvm\n" >$logfile
+    exit 5
+  fi
+fi
+
+
+#empty known hosts
+echo "" > /root/.ssh/known_hosts
+
+if [ "$TYPE" == "router" ]
+then
+  routing_svcs
+  if [ $? -gt 0 ]
+  then
+    printf "Failed to execute routing_svcs\n" >$logfile
+    exit 6
+  fi
+fi
+
+
+if [ "$TYPE" == "consoleproxy" ]
+then
+  consoleproxy_svcs
+  if [ $? -gt 0 ]
+  then
+    printf "Failed to execute consoleproxy_svcs\n" >$logfile
+    exit 7
+  fi
+fi
+
+if [ "$TYPE" == "secstorage" ]
+then
+  secstorage_svcs
+  if [ $? -gt 0 ]
+  then
+    printf "Failed to execute secstorage_svcs\n" >$logfile
+    exit 8
+  fi
+fi
+
+exit $?
diff --git a/tools/systemvm/debian/config/opt/cloud/bin/serve_password.sh b/tools/systemvm/debian/config/opt/cloud/bin/serve_password.sh
new file mode 100755
index 00000000000..398a5591266
--- /dev/null
+++ b/tools/systemvm/debian/config/opt/cloud/bin/serve_password.sh
@@ -0,0 +1,75 @@
+#!/bin/bash
+
+# set -x 
+
+PASSWD_FILE=/var/cache/cloud/passwords
+
+#replace a line in a file of the form key=value
+#   $1 filename
+#   $2 keyname
+#   $3 value
+replace_in_file() {
+  local filename=$1
+  local keyname=$2
+  local value=$3
+  sed -i /$keyname=/d $filename
+  echo "$keyname=$value" >> $filename
+  return $?
+}
+
+#get a value from a file in the form key=value
+#   $1 filename
+#   $2 keyname
+get_value() {
+  local filename=$1
+  local keyname=$2
+  grep -i $keyname= $filename | cut -d= -f2
+}
+
+ip=$1
+
+logger -t cloud "serve_password called to service a request for $ip."
+
+while read input
+do
+	if [ "$input" == "" ]
+	then
+		break
+	fi
+
+	request=$(echo $input | grep "VM Request:" | cut -d: -f2 | sed 's/^[ \t]*//')
+
+	if [ "$request" != "" ]
+	then
+		break
+	fi
+done
+
+# echo -e \"\\\"HTTP/1.0 200 OK\\\nDocumentType: text/plain\\\n\\\n\\\"\"; 
+
+if [ "$request" == "send_my_password" ]
+then
+	password=$(get_value $PASSWD_FILE $ip)
+	if [ "$password" == "" ]
+	then
+		logger -t cloud "serve_password sent bad_request to $ip."
+		echo "bad_request"
+	else
+		logger -t cloud "serve_password sent a password to $ip."
+		echo $password
+	fi
+else
+	if [ "$request" == "saved_password" ]
+	then
+		replace_in_file $PASSWD_FILE $ip "saved_password"
+		logger -t cloud "serve_password sent saved_password to $ip."
+		echo "saved_password"
+	else
+		logger -t cloud "serve_password sent bad_request to $ip."
+		echo "bad_request"
+	fi
+fi
+
+# echo -e \"\\\"\\\n\\\"\"
+
+exit 0
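The password file that serve_password.sh reads and rewrites is a flat `ip=password` store managed by `replace_in_file` and `get_value` (the same helpers also appear in postinit). A self-contained sketch of those two helpers against a temporary file, with an illustrative IP:

```shell
#!/bin/bash
# Sketch of the key=value helpers from serve_password.sh, run against a
# temp file instead of /var/cache/cloud/passwords.
PASSWD_FILE=$(mktemp)

replace_in_file() {   # $1 file, $2 key, $3 value
  sed -i "/$2=/d" "$1"
  echo "$2=$3" >> "$1"
}

get_value() {         # $1 file, $2 key
  grep -i "$2=" "$1" | cut -d= -f2
}

replace_in_file "$PASSWD_FILE" 10.1.1.5 secret
first=$(get_value "$PASSWD_FILE" 10.1.1.5)
replace_in_file "$PASSWD_FILE" 10.1.1.5 saved_password
second=$(get_value "$PASSWD_FILE" 10.1.1.5)
echo "$first -> $second"
rm -f "$PASSWD_FILE"
```

Note that the key is used as an unescaped sed/grep regex, so the dots in an IP match any character; for this file's contents that looseness is harmless.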
diff --git a/tools/systemvm/debian/config/root/.ssh/authorized_keys b/tools/systemvm/debian/config/root/.ssh/authorized_keys
new file mode 100644
index 00000000000..f738fe6cad7
--- /dev/null
+++ b/tools/systemvm/debian/config/root/.ssh/authorized_keys
@@ -0,0 +1 @@
+ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1j2QZsaDk67SJT4dhzUDZohcuTG4AwBV/t1zn1yPkVQG7th6DkoEUck+c6qeNdSByk8ZVvf0M+24sL9RhpGTF1h/EmLp/fnfEohQ+ZxAgHI1U9AY67A9iqkw9JHnRShukUTXuJOiZte/VvTVJQlJyVNWNyAE/g9t/5sgtuNExq37veWPzyUaibhPIvdPnw3y+azb3LKnHCve/C2j0yf/qvV3S7jqf83OLCml9LIa4F6PVO6crXdCv4DnZiV8Qw/nhCRqQyKm+FXvMBT8mQziRsNUEDB4Mvmu32R7MJK0gvUxXUJOql0LoQqf6xkR8LNnMewKRrGfzuizM4XRp3UdRQ== root@gateway
diff --git a/tools/systemvm/debian/config/root/clearUsageRules.sh b/tools/systemvm/debian/config/root/clearUsageRules.sh
new file mode 100755
index 00000000000..2517d42e2e0
--- /dev/null
+++ b/tools/systemvm/debian/config/root/clearUsageRules.sh
@@ -0,0 +1,22 @@
+#!/usr/bin/env bash
+# clearUsageRules.sh - remove iptable rules for removed public interfaces
+#
+#
+# @VERSION@
+
+# if removedVifs file doesn't exist, no rules to be cleared
+if [ -f /root/removedVifs ]
+then
+    var=`cat /root/removedVifs`
+    # loop through every vif to be cleared
+    for i in $var; do
+        # Make sure the vif no longer exists
+        if [ ! -d /sys/class/net/$i ]
+        then
+            # remove rules
+            iptables -D NETWORK_STATS -i eth0 -o $i > /dev/null;
+            iptables -D NETWORK_STATS -i $i -o eth0 > /dev/null;
+        fi
+    done
+    rm /root/removedVifs
+fi
diff --git a/patches/kvm/root/edithosts.sh b/tools/systemvm/debian/config/root/edithosts.sh
similarity index 76%
rename from patches/kvm/root/edithosts.sh
rename to tools/systemvm/debian/config/root/edithosts.sh
index 06c961cf8a5..0f2cca229bc 100755
--- a/patches/kvm/root/edithosts.sh
+++ b/tools/systemvm/debian/config/root/edithosts.sh
@@ -1,17 +1,15 @@
 #!/usr/bin/env bash
-# $Id: edithosts.sh 9947 2010-06-25 19:34:24Z manuel $ $HeadURL: svn://svn.lab.vmops.com/repos/vmdev/java/patches/kvm/root/edithosts.sh $
 # edithosts.sh -- edit the dhcphosts file on the routing domain
 # $1 : the mac address
 # $2 : the associated ip address
 # $3 : the hostname
-# @VERSION@
 
 wait_for_dnsmasq () {
-  local _pid=$(/sbin/pidof dnsmasq)
+  local _pid=$(pidof dnsmasq)
   for i in 0 1 2 3 4 5 6 7 8 9 10
   do
     sleep 1
-    _pid=$(/sbin/pidof dnsmasq)
+    _pid=$(pidof dnsmasq)
     [ "$_pid" != "" ] && break;
   done
   [ "$_pid" != "" ] && return 0;
@@ -19,6 +17,9 @@ wait_for_dnsmasq () {
   return 1
 }
 
+[ ! -f /etc/dhcphosts.txt ] && touch /etc/dhcphosts.txt
+[ ! -f /var/lib/misc/dnsmasq.leases ] && touch /var/lib/misc/dnsmasq.leases
+
 #delete any previous entries from the dhcp hosts file
 sed -i  /$1/d /etc/dhcphosts.txt 
 sed -i  /$2,/d /etc/dhcphosts.txt 
@@ -40,12 +41,13 @@ sed -i  /"$2 "/d /etc/hosts
 sed -i  /"$3"/d /etc/hosts
 echo "$2 $3" >> /etc/hosts
 
-# send SIGHUP to make dnsmasq re-read files
-pid=$(/sbin/pidof dnsmasq)
+# make dnsmasq re-read files
+pid=$(pidof dnsmasq)
 if [ "$pid" != "" ]
 then
-  kill -1 $(/sbin/pidof dnsmasq)
+  service dnsmasq restart
 else
   wait_for_dnsmasq
 fi
 
+exit $?
diff --git a/tools/systemvm/debian/config/root/firewall.sh b/tools/systemvm/debian/config/root/firewall.sh
new file mode 100755
index 00000000000..89cd0d4a95e
--- /dev/null
+++ b/tools/systemvm/debian/config/root/firewall.sh
@@ -0,0 +1,204 @@
+#!/usr/bin/env bash
+# $Id: firewall.sh 9947 2010-06-25 19:34:24Z manuel $ $HeadURL: svn://svn.lab.vmops.com/repos/vmdev/java/patches/xenserver/root/firewall.sh $
+# firewall.sh -- allow some ports / protocols to vm instances
+#
+#
+# @VERSION@
+
+usage() {
+  printf "Usage: %s: (-A|-D) -i <domR ip> -r <instance ip> -P protocol (-p port_range | -t icmp_type_code) -l <public ip> -d <dest port> [-w <old private ip> -x <old private port> -n <domR name> -N <netmask>]\n" $(basename $0) >&2
+}
+
+set -x
+
+get_dom0_ip () {
+ eval "$1=$(ifconfig eth0 | awk '/inet addr/ {split ($2,A,":"); print A[2]}')"
+ return 0
+}
+
+
+#Add the tcp firewall entries into iptables in the routing domain
+tcp_entry() {
+  local instIp=$1
+  local dport=$2
+  local pubIp=$3
+  local port=$4
+  local op=$5
+  
+  for vif in $VIF_LIST; do 
+    iptables -t nat $op PREROUTING --proto tcp -i $vif -d $pubIp --destination-port $port -j DNAT --to-destination $instIp:$dport >/dev/null;
+  done;
+    	
+  iptables -t nat $op OUTPUT  --proto tcp -d $pubIp --destination-port $port -j DNAT --to-destination $instIp:$dport >/dev/null;
+  iptables $op FORWARD -p tcp -s 0/0 -d $instIp -m state --state ESTABLISHED,RELATED -j ACCEPT > /dev/null;
+  iptables $op FORWARD -p tcp -s 0/0 -d $instIp --destination-port $dport --syn -j ACCEPT > /dev/null;
+  	
+  return $?
+}
+
+#Add the udp firewall entries into iptables in the routing domain
+udp_entry() {
+  local instIp=$1
+  local dport=$2
+  local pubIp=$3
+  local port=$4
+  local op=$5
+  
+  for vif in $VIF_LIST; do 
+    iptables -t nat $op PREROUTING --proto udp -i $vif -d $pubIp --destination-port $port -j DNAT --to-destination $instIp:$dport >/dev/null;
+  done;
+   	
+  iptables -t nat $op OUTPUT  --proto udp -d $pubIp --destination-port $port -j DNAT --to-destination $instIp:$dport >/dev/null;
+  iptables $op FORWARD -p udp -s 0/0 -d $instIp --destination-port $dport  -j ACCEPT > /dev/null;
+  		
+  return $?
+}
+
+#Add the icmp firewall entries into iptables in the routing domain
+icmp_entry() {
+  local instIp=$1
+  local icmptype=$2
+  local pubIp=$3
+  local op=$4
+  
+  for vif in $VIF_LIST; do 
+    iptables -t nat $op PREROUTING --proto icmp -i $vif -d $pubIp --icmp-type $icmptype -j DNAT --to-destination $instIp >/dev/null;
+  done;
+   	
+  iptables -t nat $op OUTPUT  --proto icmp -d $pubIp --icmp-type $icmptype -j DNAT --to-destination $instIp >/dev/null;
+  iptables $op FORWARD -p icmp -s 0/0 -d $instIp --icmp-type $icmptype  -j ACCEPT > /dev/null;
+  	
+  return $?
+}
+
+get_vif_list() {
+  local vif_list=""
+  for i in /sys/class/net/eth*; do 
+    vif=$(basename $i);
+    if [ "$vif" != "eth0" ] && [ "$vif" != "eth1" ]
+    then
+      vif_list="$vif_list $vif";
+    fi
+  done
+  
+  echo $vif_list
+}
+
+reverse_op() {
+	local op=$1
+	
+	if [ "$op" == "-A" ]
+	then
+		echo "-D"
+	else
+		echo "-A"
+	fi
+}
+
+rflag=
+iflag=
+Pflag=
+pflag=
+tflag=
+lflag=
+dflag=
+oflag=
+wflag=
+xflag=
+nflag=
+Nflag=
+op=""
+oldPrivateIP=""
+oldPrivatePort=""
+
+while getopts 'ADr:i:P:p:t:l:d:w:x:n:N:' OPTION
+do
+  case $OPTION in
+  A)	Aflag=1
+		op="-A"
+		;;
+  D)	Dflag=1
+		op="-D"
+		;;
+  i)	iflag=1
+		domRIp="$OPTARG"
+		;;
+  r)	rflag=1
+		instanceIp="$OPTARG"
+		;;
+  P)	Pflag=1
+		protocol="$OPTARG"
+		;;
+  p)	pflag=1
+		ports="$OPTARG"
+		;;
+  t)	tflag=1
+		icmptype="$OPTARG"
+		;;
+  l)	lflag=1
+		publicIp="$OPTARG"
+		;;
+  d)	dflag=1
+		dport="$OPTARG"
+		;;
+  w)	wflag=1
+  		oldPrivateIP="$OPTARG"
+  		;;
+  x)	xflag=1
+  		oldPrivatePort="$OPTARG"
+  		;;	
+  n)	nflag=1
+  		domRName="$OPTARG"
+  		;;
+  N)	Nflag=1
+  		netmask="$OPTARG"
+  		;;
+  ?)	usage
+		exit 2
+		;;
+  esac
+done
+
+reverseOp=$(reverse_op $op)
+
+VIF_LIST=$(get_vif_list)
+
+case $protocol  in
+  "tcp")	
+  		# If oldPrivateIP was passed in, this is an update. Delete the old rule from DomR. 
+  		if [ "$oldPrivateIP" != "" ]
+  		then
+  			tcp_entry $oldPrivateIP $oldPrivatePort $publicIp $ports "-D"
+  		fi
+  		
+  		# Add/delete the new rule
+		tcp_entry $instanceIp $dport $publicIp $ports $op 
+		exit $?
+		;;
+  "udp")  
+  		# If oldPrivateIP was passed in, this is an update. Delete the old rule from DomR. 
+  		if [ "$oldPrivateIP" != "" ]
+  		then
+  			udp_entry $oldPrivateIP $oldPrivatePort $publicIp $ports "-D"
+		fi
+  
+		# Add/delete the new rule
+		udp_entry $instanceIp $dport $publicIp $ports $op 
+		exit $?
+        ;;
+  "icmp")  
+  		# If oldPrivateIP was passed in, this is an update. Delete the old rule from DomR. 
+  		if [ "$oldPrivateIP" != "" ]
+  		then
+  			icmp_entry $oldPrivateIP $icmptype $publicIp "-D"
+  		fi
+  
+  		# Add/delete the new rule
+		icmp_entry $instanceIp $icmptype $publicIp $op 
+		exit $?
+        ;;
+      *)
+        printf "Invalid protocol-- must be tcp, udp or icmp\n" >&2
+        exit 5
+        ;;
+esac
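For illustration, the rule set tcp_entry installs for a single TCP port-forward can be sketched by composing the same iptables invocations as strings (all addresses, ports and the vif name below are made-up examples, and only one vif is shown):

```python
# Sketch of the four rules tcp_entry builds for one TCP port-forward:
# DNAT on PREROUTING (per vif) and OUTPUT, plus two FORWARD accepts.
def tcp_rules(vif, pub_ip, port, inst_ip, dport, op="-A"):
    return [
        # DNAT public ip:port to the instance for traffic entering the vif
        "iptables -t nat %s PREROUTING --proto tcp -i %s -d %s "
        "--destination-port %s -j DNAT --to-destination %s:%s"
        % (op, vif, pub_ip, port, inst_ip, dport),
        # same DNAT for locally generated traffic
        "iptables -t nat %s OUTPUT --proto tcp -d %s --destination-port %s "
        "-j DNAT --to-destination %s:%s" % (op, pub_ip, port, inst_ip, dport),
        # allow replies on established connections
        "iptables %s FORWARD -p tcp -s 0/0 -d %s -m state "
        "--state ESTABLISHED,RELATED -j ACCEPT" % (op, inst_ip),
        # allow new (SYN) connections to the forwarded port
        "iptables %s FORWARD -p tcp -s 0/0 -d %s --destination-port %s "
        "--syn -j ACCEPT" % (op, inst_ip, dport),
    ]

rules = tcp_rules("eth2", "192.168.1.100", "80", "10.1.1.5", "8080")
```

Passing `op="-D"` yields the matching delete commands, which is exactly how the update path (`-w`/`-x`) removes the old rule before adding the new one.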
diff --git a/tools/systemvm/debian/config/root/loadbalancer.sh b/tools/systemvm/debian/config/root/loadbalancer.sh
new file mode 100755
index 00000000000..f6c2c5d7e93
--- /dev/null
+++ b/tools/systemvm/debian/config/root/loadbalancer.sh
@@ -0,0 +1,167 @@
+#!/usr/bin/env bash
+# $Id: loadbalancer.sh 9947 2010-06-25 19:34:24Z manuel $ $HeadURL: svn://svn.lab.vmops.com/repos/vmdev/java/patches/xenserver/root/loadbalancer.sh $
+# loadbalancer.sh -- reconfigure loadbalancer rules
+#
+#
+# @VERSION@
+
+usage() {
+  printf "Usage: %s:  -i   -a  -d  -f  \n" $(basename $0) >&2
+}
+
+# set -x
+
+# check if gateway domain is up and running
+check_gw() {
+  ping -c 1 -n -q $1 > /dev/null
+  if [ $? -gt 0 ]
+  then
+    sleep 1
+    ping -c 1 -n -q $1 > /dev/null
+  fi
+  return $?;
+}
+
+# firewall entry to ensure that haproxy can receive on specified port
+fw_entry() {
+  local added=$1
+  local removed=$2
+  
+  if [ "$added" == "none" ]
+  then
+  	added=""
+  fi
+  
+  if [ "$removed" == "none" ]
+  then
+  	removed=""
+  fi
+  
+  local a=$(echo $added | cut -d, -f1- --output-delimiter=" ")
+  local r=$(echo $removed | cut -d, -f1- --output-delimiter=" ")
+  
+  for i in $a
+  do
+    local pubIp=$(echo $i | cut -d: -f1)
+    local dport=$(echo $i | cut -d: -f2)
+    
+    for vif in $VIF_LIST; do 
+      iptables -D INPUT -i $vif -p tcp -d $pubIp --dport $dport -j ACCEPT 2> /dev/null
+      iptables -A INPUT -i $vif -p tcp -d $pubIp --dport $dport -j ACCEPT
+      
+      if [ $? -gt 0 ]
+      then
+        return 1
+      fi
+    done      
+  done
+
+  for i in $r
+  do
+    local pubIp=$(echo $i | cut -d: -f1)
+    local dport=$(echo $i | cut -d: -f2)
+    
+    for vif in $VIF_LIST; do 
+      iptables -D INPUT -i $vif -p tcp -d $pubIp --dport $dport -j ACCEPT
+    done
+  done
+  
+  return 0
+}
+
+#Hot reconfigure HA Proxy in the routing domain
+reconfig_lb() {
+  /root/reconfigLB.sh
+  return $?
+}
+
+# Restore the HA Proxy to its previous state, and revert iptables rules on DomR
+restore_lb() {
+  # Copy the old version of haproxy.cfg into the file that reconfigLB.sh uses
+  cp /etc/haproxy/haproxy.cfg.old /etc/haproxy/haproxy.cfg.new
+   
+  if [ $? -eq 0 ]
+  then
+    # Run reconfigLB.sh again
+    /root/reconfigLB.sh
+  fi
+}
+
+get_vif_list() {
+  local vif_list=""
+  for i in /sys/class/net/eth*; do 
+    vif=$(basename $i);
+    if [ "$vif" != "eth0" ] && [ "$vif" != "eth1" ]
+    then
+      vif_list="$vif_list $vif";
+    fi
+  done
+  
+  echo $vif_list
+}
+
+mflag=
+iflag=
+aflag=
+dflag=
+fflag=
+
+while getopts 'i:a:d:f:' OPTION
+do
+  case $OPTION in
+  i)	iflag=1
+		domRIp="$OPTARG"
+		;;
+  a)	aflag=1
+		addedIps="$OPTARG"
+		;;
+  d)	dflag=1
+		removedIps="$OPTARG"
+		;;
+  f)	fflag=1
+		cfgfile="$OPTARG"
+		;;
+  ?)	usage
+		exit 2
+		;;
+  esac
+done
+
+VIF_LIST=$(get_vif_list)
+
+# hot reconfigure haproxy
+reconfig_lb $cfgfile
+
+if [ $? -gt 0 ]
+then
+  printf "Reconfiguring loadbalancer failed\n"
+  exit 1
+fi
+
+if [ "$addedIps" == "" ]
+then
+  addedIps="none"
+fi
+
+if [ "$removedIps" == "" ]
+then
+  removedIps="none"
+fi
+
+# iptables entry to ensure that haproxy receives traffic
+fw_entry $addedIps $removedIps
+  	
+if [ $? -gt 0 ]
+then
+  # Restore the LB
+  restore_lb
+
+  # Revert iptables rules on DomR, with addedIps and removedIps swapped 
+  fw_entry $removedIps $addedIps
+
+  exit 1
+fi
+ 
+exit 0
+  	
+
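fw_entry receives the added and removed rules as comma-separated `ip:port` pairs, with the literal `none` standing in for an empty list. A quick sketch of that parsing in Python, with made-up addresses:

```python
# Parse the "ip:port,ip:port" lists fw_entry receives ("none" means empty).
def parse_pairs(spec):
    if spec in ("", "none"):
        return []
    return [tuple(item.split(":", 1)) for item in spec.split(",")]

pairs = parse_pairs("192.168.1.10:80,192.168.1.10:443")
```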
diff --git a/tools/systemvm/debian/config/root/reconfigLB.sh b/tools/systemvm/debian/config/root/reconfigLB.sh
new file mode 100755
index 00000000000..0ce93a06d69
--- /dev/null
+++ b/tools/systemvm/debian/config/root/reconfigLB.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+
+# save previous state
+  mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old
+  mv /var/run/haproxy.pid /var/run/haproxy.pid.old
+
+  mv /etc/haproxy/haproxy.cfg.new /etc/haproxy/haproxy.cfg
+  kill -TTOU $(cat /var/run/haproxy.pid.old)
+  sleep 2
+  if haproxy -D -p /var/run/haproxy.pid -f /etc/haproxy/haproxy.cfg; then
+    echo "New haproxy instance successfully loaded, stopping previous one."
+    kill -KILL $(cat /var/run/haproxy.pid.old)
+    rm -f /var/run/haproxy.pid.old
+    exit 0
+  else
+    echo "New instance failed to start, resuming previous one."
+    kill -TTIN $(cat /var/run/haproxy.pid.old)
+    rm -f /var/run/haproxy.pid
+    mv /var/run/haproxy.pid.old /var/run/haproxy.pid
+    mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.new
+    mv /etc/haproxy/haproxy.cfg.old /etc/haproxy/haproxy.cfg
+    exit 1
+  fi
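reconfigLB.sh relies on haproxy's signal protocol for hot reloads: SIGTTOU tells the old instance to release its listening ports, a new instance is started against the new config, and then the old instance is either killed (success) or resumed with SIGTTIN while the configs are swapped back (failure). A sketch of that decision logic only, not of the actual process handling:

```python
# Model of reconfigLB.sh's reload protocol (labels only, no real signals).
def reload_plan(new_instance_started):
    steps = ["SIGTTOU old haproxy (release listen ports)"]
    if new_instance_started:
        steps += ["SIGKILL old haproxy", "remove old pidfile"]
    else:
        steps += ["SIGTTIN old haproxy (resume listening)",
                  "restore haproxy.cfg from haproxy.cfg.old"]
    return steps
```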
diff --git a/tools/systemvm/debian/config/var/lib/misc/dnsmasq.leases b/tools/systemvm/debian/config/var/lib/misc/dnsmasq.leases
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tools/systemvm/debian/config/var/www/html/latest/.htaccess b/tools/systemvm/debian/config/var/www/html/latest/.htaccess
new file mode 100644
index 00000000000..038a4c933cf
--- /dev/null
+++ b/tools/systemvm/debian/config/var/www/html/latest/.htaccess
@@ -0,0 +1,5 @@
+Options +FollowSymLinks  
+RewriteEngine On
+#RewriteBase /
+
+RewriteRule ^user-data$  ../userdata/%{REMOTE_ADDR}/user-data [L,NC,QSA]
diff --git a/tools/systemvm/debian/config/var/www/html/userdata/.htaccess b/tools/systemvm/debian/config/var/www/html/userdata/.htaccess
new file mode 100644
index 00000000000..5a928f6da25
--- /dev/null
+++ b/tools/systemvm/debian/config/var/www/html/userdata/.htaccess
@@ -0,0 +1 @@
+Options -Indexes
diff --git a/tools/systemvm/debian/systemvm.xml b/tools/systemvm/debian/systemvm.xml
new file mode 100644
index 00000000000..ce6ecaf6e49
--- /dev/null
+++ b/tools/systemvm/debian/systemvm.xml
@@ -0,0 +1,37 @@
+<domain type='kvm'>
+  <name>systemvm2</name>
+  <memory>1572864</memory>
+  <currentMemory>1572864</currentMemory>
+  <vcpu>1</vcpu>
+  <os>
+    <type>hvm</type>
+  </os>
+  <!-- features block lost in extraction -->
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>restart</on_crash>
+  <devices>
+    <emulator>/usr/bin/qemu-kvm</emulator>
+    <!-- disk, interface, console and graphics elements lost in extraction -->
+  </devices>
+</domain>
diff --git a/tools/waf/javadir.py b/tools/waf/javadir.py
new file mode 100644
index 00000000000..9c71a64c80c
--- /dev/null
+++ b/tools/waf/javadir.py
@@ -0,0 +1,22 @@
+#!/usr/bin/env python
+
+import Options, Utils
+import os
+
+def detect(conf):
+	conf.check_message_1('Detecting JAVADIR')
+	javadir = getattr(Options.options, 'JAVADIR', '')
+	if javadir:
+		conf.env.JAVADIR = javadir
+		conf.check_message_2("%s (forced through --javadir)"%conf.env.JAVADIR,"GREEN")
+	else:
+		conf.env.JAVADIR = os.path.join(conf.env.DATADIR,'java')
+		conf.check_message_2("%s (using default ${DATADIR}/java directory)"%conf.env.JAVADIR,"GREEN")
+
+def set_options(opt):
+	inst_dir = opt.get_option_group('--datadir') # get the group that contains bindir
+	if not inst_dir: raise Utils.WafError("DATADIR not set.  Did you load the gnu_dirs tool options with opt.tool_options('gnu_dirs') before running opt.tool_options('javadir')?")
+	inst_dir.add_option('--javadir', # add javadir to the group that contains bindir
+		help = 'Java class and jar files [Default: ${DATADIR}/java]',
+		default = '',
+		dest = 'JAVADIR')
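waf's option groups wrap Python's optparse, so the `--javadir` registration above follows the same pattern as this standalone sketch (the group title is an assumption, not taken from waf):

```python
from optparse import OptionParser, OptionGroup

parser = OptionParser()
group = OptionGroup(parser, "Installation directories")  # assumed title
group.add_option('--javadir',
                 help='Java class and jar files [Default: ${DATADIR}/java]',
                 default='', dest='JAVADIR')
parser.add_option_group(group)
opts, _ = parser.parse_args(['--javadir', '/opt/java'])
```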
diff --git a/tools/waf/mkisofs.py b/tools/waf/mkisofs.py
index eb611577208..440073d2de4 100644
--- a/tools/waf/mkisofs.py
+++ b/tools/waf/mkisofs.py
@@ -1,12 +1,14 @@
 import Utils
 from TaskGen import feature, before
+from Configure import ConfigurationError
 import Task
 import os
 
 # fixme: this seems to hang waf with 100% CPU
 
 def detect(conf):
-	conf.find_program("mkisofs",mandatory=True,var='MKISOFS')
+	conf.find_program("mkisofs",var='MKISOFS')
+	if not conf.env.MKISOFS: conf.find_program("genisoimage",mandatory=True,var='MKISOFS')
 
 def iso_up(task):
 	tgt = task.outputs[0].bldpath(task.env)
@@ -16,21 +18,22 @@ def iso_up(task):
 		if inp.id&3==Node.BUILD:
 			src = inp.bldpath(task.env)
 			srcname = src
-			srcname = "/".join(srcname.split("/")[1:]) # chop off default/
+			srcname = sep.join(srcname.split(sep)[1:]) # chop off default/
 		else:
 			src = inp.srcpath(task.env)
 			srcname = src
-			srcname = "/".join(srcname.split("/")[1:]) # chop off ../
-		inps.append(src)
+			srcname = sep.join(srcname.split(sep)[1:]) # chop off ../
+		if task.generator.rename: srcname = task.generator.rename(srcname)
+		inps.append(srcname+'='+src)
 	ret = Utils.exec_command(
 		[
 			task.generator.env.MKISOFS,
 			"-quiet",
 			"-r",
+                        "-graft-points",
 			"-o",tgt,
 		] + inps, shell=False)
 	if ret != 0: return ret
-	if task.chmod: os.chmod(tgt,task.chmod)
 
 def apply_iso(self):
 	Utils.def_attrs(self,fun=iso_up)
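The switch to `-graft-points` changes how inputs are passed to mkisofs: each file becomes a `name-in-iso=path-on-disk` pair, so files can be renamed or relocated inside the image. A sketch of the argument list iso_up now assembles (target and paths are made-up examples):

```python
# Build the mkisofs argument list the way iso_up does with -graft-points.
def mkisofs_cmd(target, graft_map):
    inps = ["%s=%s" % (name, src) for name, src in graft_map]
    return ["mkisofs", "-quiet", "-r", "-graft-points", "-o", target] + inps

cmd = mkisofs_cmd("systemvm.iso", [("cloud-scripts/init.sh", "default/init.sh")])
```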
diff --git a/tools/waf/tar.py b/tools/waf/tar.py
index c0293337b90..bf9a91b7521 100644
--- a/tools/waf/tar.py
+++ b/tools/waf/tar.py
@@ -1,8 +1,9 @@
 import Utils
+import Options
 import tarfile
 from TaskGen import feature, before
 import Task
-import os
+import os, sys
 
 # this is a clever little thing
 # given a list of nodes, build or source
@@ -14,9 +15,9 @@ import os
 def tar_up(task):
 	tgt = task.outputs[0].bldpath(task.env)
 	if os.path.exists(tgt): os.unlink(tgt)
-	if tgt.lower().endswith(".bz2"): z = tarfile.open(tgt,"w:bz2")
-	elif tgt.lower().endswith(".gz"): z = tarfile.open(tgt,"w:gz")
-	elif tgt.lower().endswith(".tgz"): z = tarfile.open(tgt,"w:gz")
+        if tgt.lower().endswith(".bz2"): z = tarfile.open(tgt,"w:bz2")
+        elif tgt.lower().endswith(".gz"): z = tarfile.open(tgt,"w:gz")
+        elif tgt.lower().endswith(".tgz"): z = tarfile.open(tgt,"w:gz")
 	else: z = tarfile.open(tgt,"w")
 	fileset = {}
 	for inp in task.inputs:
@@ -25,16 +26,16 @@ def tar_up(task):
 			srcname = Utils.relpath(src,os.path.join("..",".")) # file in source dir
 		else:
 			srcname = Utils.relpath(src,os.path.join(task.env.variant(),".")) # file in artifacts dir
+		srcname = srcname.split(os.path.sep,len(task.generator.root.split(os.path.sep)))[-1]
 		if task.generator.rename: srcname = task.generator.rename(srcname)
-		for dummy in task.generator.root.split("/"):
-			splittedname = srcname.split("/")
-			srcname = "/".join(splittedname[1:])
 		fileset[srcname] = src
 	for srcname,src in fileset.items():
 		ti = tarfile.TarInfo(srcname)
 		ti.mode = 0755
 		ti.size = os.path.getsize(src)
-		f = file(src)
+		openmode = 'r'
+		if Options.platform == 'win32': openmode = openmode + 'b'
+		f = file(src,openmode)
 		z.addfile(ti,fileobj=f)
 		f.close()
 	z.close()
@@ -53,16 +54,9 @@ def apply_tar(self):
 		node = self.path.find_resource(x)
 		if not node:raise Utils.WafError('cannot find input file %s for processing'%x)
 		ins.append(node)
-	if self.dict and not self.env['DICT_HASH']:
-		self.env=self.env.copy()
-		keys=list(self.dict.keys())
-		keys.sort()
-		lst=[self.dict[x]for x in keys]
-		self.env['DICT_HASH']=str(Utils.h_list(lst))
 	tsk=self.create_task('tar',ins,out)
 	tsk.fun=self.fun
 	tsk.dict=self.dict
-	tsk.dep_vars=['DICT_HASH']
 	tsk.install_path=self.install_path
 	tsk.chmod=self.chmod
 	if not tsk.env:
@@ -71,4 +65,4 @@ def apply_tar(self):
 
 Task.task_type_from_func('tar',func=tar_up)
 feature('tar')(apply_tar)
-before('apply_core')(apply_tar)
\ No newline at end of file
+before('apply_core')(apply_tar)
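tar_up writes each member through an explicit TarInfo rather than tarfile.add, which is why it must set the mode and size itself (and why sources are opened in binary mode on win32). A runnable miniature of the same pattern, using an in-memory archive:

```python
import io
import tarfile

# Add one file to a gzipped tar via TarInfo/addfile, as tar_up does.
data = b"#!/bin/sh\necho hello\n"
buf = io.BytesIO()
z = tarfile.open(fileobj=buf, mode="w:gz")
ti = tarfile.TarInfo("bin/hello.sh")
ti.mode = 0o755          # tar.py hardcodes 0755 for every member
ti.size = len(data)      # size must be set by hand when using addfile
z.addfile(ti, fileobj=io.BytesIO(data))
z.close()

buf.seek(0)
names = tarfile.open(fileobj=buf, mode="r:gz").getnames()
```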
diff --git a/tools/waf/tomcat.py b/tools/waf/tomcat.py
new file mode 100644
index 00000000000..e314c32beb0
--- /dev/null
+++ b/tools/waf/tomcat.py
@@ -0,0 +1,41 @@
+#!/usr/bin/env python
+
+import Options, Utils
+import os
+
+def detect(conf):
+	if not conf.env.DATADIR:
+		conf.fatal("DATADIR not found in the environment.  Did you run conf.check_tool('gnu_dirs') before running check_tool('tomcat')?")
+	conf.check_message_1('Detecting Tomcat')
+	conf.env.TOMCATHOME = ''
+	tomcathome = getattr(Options.options, 'TOMCATHOME', '')
+	if tomcathome:
+		conf.env.TOMCATHOME = tomcathome
+		method = "forced through --with-tomcat"
+	else:
+		if    "TOMCAT_HOME" in conf.environ and conf.environ['TOMCAT_HOME'].strip():
+			conf.env.TOMCATHOME = conf.environ["TOMCAT_HOME"]
+			method = 'got through environment variable %TOMCAT_HOME%'
+		elif  "CATALINA_HOME" in conf.environ and conf.environ['CATALINA_HOME'].strip():
+			conf.env.TOMCATHOME = conf.environ['CATALINA_HOME']
+			method = 'got through environment variable %CATALINA_HOME%'
+		elif os.path.isdir(os.path.join(conf.env.DATADIR,"tomcat6")):
+			conf.env.TOMCATHOME = os.path.join(conf.env.DATADIR,"tomcat6")
+			method = 'detected existence of Tomcat directory under $DATADIR'
+		elif os.path.isdir("/usr/share/tomcat6"):
+			conf.env.TOMCATHOME = "/usr/share/tomcat6"
+			method = 'detected existence of standard Linux system directory'
+	if not conf.env.TOMCATHOME:
+		conf.fatal("Could not detect Tomcat")
+	elif not os.path.isdir(conf.env.TOMCATHOME):
+		conf.fatal("Tomcat cannot be found at %s"%conf.env.TOMCATHOME)
+	else:
+		conf.check_message_2("%s (%s)"%(conf.env.TOMCATHOME,method),"GREEN")
+
+def set_options(opt):
+	inst_dir = opt.get_option_group('--datadir') # get the group that contains bindir
+	if not inst_dir: raise Utils.WafError("DATADIR not set.  Did you load the gnu_dirs tool options with opt.tool_options('gnu_dirs') before running opt.tool_options('tomcat')?")
+	inst_dir.add_option('--with-tomcat', # add the Tomcat path option to the group that contains bindir
+		help = 'Path to installed Tomcat 6 environment [Default: ${DATADIR}/tomcat6 (unless %%CATALINA_HOME%% is set)]',
+		default = '',
+		dest = 'TOMCATHOME')
\ No newline at end of file
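detect() above walks a fixed fallback chain: the `--with-tomcat` flag, then `$TOMCAT_HOME`, then `$CATALINA_HOME`, then `${DATADIR}/tomcat6`, then `/usr/share/tomcat6`. That order can be isolated as a pure function (the inputs below are made-up examples):

```python
import os

# Mirror tomcat.py's lookup order; blank env values are skipped, as in detect().
def detect_tomcat(environ, datadir, forced=""):
    if forced:
        return forced
    for var in ("TOMCAT_HOME", "CATALINA_HOME"):
        if environ.get(var, "").strip():
            return environ[var]
    for candidate in (os.path.join(datadir, "tomcat6"), "/usr/share/tomcat6"):
        if os.path.isdir(candidate):
            return candidate
    return None

home = detect_tomcat({"CATALINA_HOME": "/opt/tomcat6"}, "/usr/local/share")
```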
diff --git a/tools/waf/usermgmt.py b/tools/waf/usermgmt.py
new file mode 100644
index 00000000000..65fd889d330
--- /dev/null
+++ b/tools/waf/usermgmt.py
@@ -0,0 +1,124 @@
+import Utils, Build
+from TaskGen import feature, before
+from Configure import ConfigurationError
+import Options
+import Task
+import os
+
+def detect(conf):
+	if Options.platform == 'win32': raise Utils.WafError('the usermgmt tool only works on Linux')
+	if Options.platform == 'darwin': raise Utils.WafError('the usermgmt tool only works on Linux')
+	path_list = ["/usr/local/sbin","/usr/sbin","/sbin"] + os.environ.get('PATH','').split(os.pathsep)
+	conf.find_program("useradd",var='USERADD',mandatory=True,path_list=path_list)
+	conf.find_program("userdel",var='USERDEL',mandatory=True,path_list=path_list)
+
+def set_options(opt):
+	if Options.platform == 'win32': raise Utils.WafError('the usermgmt tool only works on Linux')
+	if Options.platform == 'darwin': raise Utils.WafError('the usermgmt tool only works on Linux')
+	og = opt.get_option_group('--force')
+	og.add_option('--nochown',
+		action = 'store_true',
+		help = 'do not create or remove user accounts or change file ownership on installed files',
+		default = False,
+		dest = 'NOUSERMGMT')
+
+def _subst_add_destdir(x,bld):
+	a = "${DESTDIR}" + x
+	a = a.replace("${DESTDIR}",Options.options.destdir)
+	a = Utils.subst_vars(a,bld.env)
+	if a.startswith("//"): a = a[1:]
+	return a
+Build.BuildContext.subst_add_destdir = staticmethod(_subst_add_destdir)
+
+def _setownership(ctx,path,owner,group,mode=None):
+	if Options.platform == 'win32': return
+	if Options.platform == 'darwin': return
+	if not hasattr(os,"getuid"): return
+	if os.getuid() != 0: return
+	if Options.options.NOUSERMGMT: return
+
+	import pwd
+	import grp
+	import stat
+	from os import chown as _chown, chmod as _chmod
+
+	def f(bld,path,owner,group,mode):
+		
+		try: uid = pwd.getpwnam(owner).pw_uid
+		except KeyError,e:
+			raise Utils.WafError("Before using setownership() you have to create the user with bld.createuser(username...)")
+		try: gid = grp.getgrnam(group).gr_gid
+		except KeyError,e:
+			raise Utils.WafError("Before using setownership() you have to create the user with bld.createuser(username...)")
+		
+		path = bld.subst_add_destdir(path,bld)
+		current_uid,current_gid = os.stat(path).st_uid,os.stat(path).st_gid
+		if current_uid != uid:
+			Utils.pprint("GREEN","* setting owner of %s to UID %s"%(path,uid))
+			_chown(path,uid,current_gid)
+			current_uid = uid
+		if current_gid != gid:
+			Utils.pprint("GREEN","* setting group of %s to GID %s"%(path,gid))
+			_chown(path,current_uid,gid)
+			current_gid = gid
+		if mode is not None:
+			current_mode = stat.S_IMODE(os.stat(path).st_mode)
+			if current_mode != mode:
+				Utils.pprint("GREEN","* adjusting permissions on %s to mode %o"%(path,mode))
+				_chmod(path,mode)
+				current_mode = mode
+	
+	if ctx.is_install > 0:
+		ctx.add_post_fun(lambda ctx: f(ctx,path,owner,group,mode))
+Build.BuildContext.setownership = _setownership
+
+def _createuser(ctx,user,homedir,shell):
+	if Options.platform == 'win32': return
+	if Options.platform == 'darwin': return
+	if not hasattr(os,"getuid"): return
+	if os.getuid() != 0: return
+	if Options.options.NOUSERMGMT: return
+	
+	def f(ctx,user,homedir,shell):
+		import pwd
+		try:
+			pwd.getpwnam(user).pw_uid
+			user_exists = True
+		except KeyError,e:
+			user_exists = False
+		if user_exists: return
+		
+		Utils.pprint("GREEN","* creating user %s"%user)
+		cmd = [
+		  ctx.env.USERADD,
+		  '-M',
+		  '-r',
+		  '-s',shell,
+		  '-d',homedir,
+		  user,
+		]
+		ret = Utils.exec_command(cmd)
+		if ret: raise Utils.WafError("Failed to run command %s"%cmd)
+	
+	def g(ctx,user,homedir,shell):
+		import pwd
+		try:
+			pwd.getpwnam(user).pw_uid
+			user_exists = True
+		except KeyError,e:
+			user_exists = False
+		if not user_exists: return
+		
+		Utils.pprint("GREEN","* removing user %s"%user)
+		cmd = [
+		  ctx.env.USERDEL,
+		  user,
+		]
+		ret = Utils.exec_command(cmd)
+		if ret: raise Utils.WafError("Failed to run command %s"%cmd)
+	
+	if ctx.is_install > 0:
+		ctx.add_pre_fun(lambda ctx: f(ctx,user,homedir,shell))
+	elif ctx.is_install < 0:
+		ctx.add_pre_fun(lambda ctx: g(ctx,user,homedir,shell))
+Build.BuildContext.createuser = _createuser
\ No newline at end of file
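setownership only touches installed files when the on-disk owner, group or mode actually differs from what was requested. That compare-then-act logic can be separated out and exercised without root, since it only stats the path:

```python
import os
import stat
import tempfile

# Pure decision logic of setownership: report which chown/chmod actions a
# path needs, without performing any of them (so it runs unprivileged).
def ownership_changes(path, uid, gid, mode=None):
    st = os.stat(path)
    actions = []
    if st.st_uid != uid:
        actions.append(("chown-user", uid))
    if st.st_gid != gid:
        actions.append(("chown-group", gid))
    if mode is not None and stat.S_IMODE(st.st_mode) != mode:
        actions.append(("chmod", mode))
    return actions

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
st = os.stat(path)
already_ok = ownership_changes(path, st.st_uid, st.st_gid, 0o600)
needs_chmod = ownership_changes(path, st.st_uid, st.st_gid, 0o755)
os.unlink(path)
```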
diff --git a/ui/index.jsp b/ui/index.jsp
index 8be1f714663..3fe04850c2a 100755
--- a/ui/index.jsp
+++ b/ui/index.jsp
@@ -25,7 +25,7 @@ long milliseconds = new Date().getTime();
 		- Default Cloud.com styling of the site.  This file contains the easiest portion of the site
         that can be styled to your companie's need such as logo, top navigation, and dialogs.		
 	-->
-	
+	
 	
 	
 	
diff --git a/ui/jsp/tab_storage.jsp b/ui/jsp/tab_storage.jsp
index 92f6b5ae5fb..aaab49adb19 100755
--- a/ui/jsp/tab_storage.jsp
+++ b/ui/jsp/tab_storage.jsp
@@ -615,7 +615,7 @@ long milliseconds = new Date().getTime();
                 
Name
-
+
Type
@@ -707,7 +707,7 @@ long milliseconds = new Date().getTime();
Name
-
+
Type
diff --git a/ui/new/css/main.css b/ui/new/css/main.css
index a4f239d1334..db4c9d171c5 100644
--- a/ui/new/css/main.css
+++ b/ui/new/css/main.css
@@ -15,8 +15,7 @@ html,body{
 	background:#00374e url(../images/login_bg.gif) repeat-x top left;
 	margin:0;
 	padding:0;
-	overflow-x:hidden;
-	overflow-y:auto;
+	overflow:auto;
 }
 Cloud.com CloudStack
@@ -294,7 +312,7 @@
-
+
Dashboard
@@ -327,22 +345,18 @@
-
-
-
-
+
+
Routers
Routers
-
-
-
-
+
+
Storage
- Storage + System
@@ -358,7 +372,7 @@
-
+
Host
@@ -378,28 +392,28 @@
-
+
storage
Primary Storage
-
+
storage
secondary Storage
-
+
storage
Volumes
-
+
storage
@@ -419,15 +433,15 @@
-
-
+
+
Network
IP Addresses
-
-
+
+
Network
Network Groups @@ -446,14 +460,14 @@
-
+
Templates
Template
-
+
Templates
@@ -473,7 +487,7 @@
-
+
Accounts
@@ -493,7 +507,7 @@
-
+
Domain
@@ -513,14 +527,14 @@
-
+
Events
Events
-
+
Events
@@ -540,28 +554,28 @@
-
+
Configuration
Global Settings
-
+
Configuration
Zones
-
+
Configuration
Service Offerings
-
+
Configuration
@@ -599,16 +613,18 @@ Group 1
-