public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/7] [RFC] KVM autotest refactor stage 1
@ 2011-03-09  9:21 Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 1/7] KVM test: Move test utilities to client/tools Lucas Meneghel Rodrigues
                   ` (7 more replies)
  0 siblings, 8 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09  9:21 UTC (permalink / raw)
  To: autotest; +Cc: kvm

In order to maximize code reuse among different virtualization
technologies, refactor the KVM test code in a way that allows
new virtualization test implementations, such as Xen testing.

What was done
• Create autotest_lib.client.virt and move the libraries there,
with some renaming, abstracting away the KVM specific functions
• Create a dispatcher that can instantiate the appropriate vm class,
controlled by a new parameter 'vm_type'
(can be kvm, xen, and in the future libvirt...)
• Make all the code use the new libraries
• Remove the 'old' libraries
• Make the KVM test loader try to find tests in a common
location first and, if a test can't be found there, look for it in the
kvm subtest dir. This way other virt tests can benefit from them
• Move the tests that have virt technology agnostic code to the common
location
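The 'vm_type' dispatcher described above could be sketched roughly as
follows (class and function names here are purely illustrative, not the
actual autotest_lib.client.virt API):

```python
# Hypothetical sketch of a vm_type dispatcher; the real class and
# module names in autotest_lib.client.virt may differ.

class BaseVM(object):
    """Virt-technology-agnostic VM interface."""
    def __init__(self, name, params):
        self.name = name
        self.params = params

    def create(self):
        raise NotImplementedError


class KVMVM(BaseVM):
    def create(self):
        return "qemu-kvm started for %s" % self.name


class XenVM(BaseVM):
    def create(self):
        return "xen domain started for %s" % self.name


# Registry mapping the 'vm_type' config parameter to a VM class.
_VM_CLASSES = {"kvm": KVMVM, "xen": XenVM}


def create_vm(name, params):
    """Instantiate the VM class matching params['vm_type']."""
    vm_type = params.get("vm_type", "kvm")
    try:
        vm_class = _VM_CLASSES[vm_type]
    except KeyError:
        raise ValueError("Unsupported vm_type: %s" % vm_type)
    return vm_class(name, params)
```

Adding a libvirt backend later would then only require registering one
more class in the dispatch table, without touching test code.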

Lucas Meneghel Rodrigues (7):
  KVM test: Move test utilities to client/tools
  KVM test: Create autotest_lib.client.virt namespace
  KVM test: tests_base.cfg: Introduce parameter 'vm_type'
  KVM test: Adapt the test code to use the new virt namespace
  KVM test: Removing the old libraries and programs
  KVM test: Try to load subtests on a shared tests location
  KVM test: Moving generic tests to common tests area
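The shared-location lookup with KVM-specific fallback (patch 6) could be
sketched like this (the directory layout and function name are
illustrative assumptions, not the actual loader code):

```python
import os


def find_subtest(test_name, common_dir, kvm_dir):
    """Look for a subtest module in the common virt tests dir first,
    falling back to the KVM-specific subtest dir if it is not there."""
    for directory in (common_dir, kvm_dir):
        candidate = os.path.join(directory, test_name + ".py")
        if os.path.isfile(candidate):
            return candidate
    raise IOError("Could not find subtest %s" % test_name)
```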

 client/common_lib/cartesian_config.py              |  698 ++++++++
 client/tests/kvm/cd_hash.py                        |   48 -
 client/tests/kvm/control                           |   18 +-
 client/tests/kvm/control.parallel                  |    8 +-
 client/tests/kvm/control.unittests                 |   14 +-
 client/tests/kvm/get_started.py                    |    5 +-
 client/tests/kvm/html_report.py                    | 1727 -------------------
 client/tests/kvm/installer.py                      |  797 ---------
 client/tests/kvm/kvm.py                            |   32 +-
 client/tests/kvm/kvm_config.py                     |  698 --------
 client/tests/kvm/kvm_monitor.py                    |  744 --------
 client/tests/kvm/kvm_preprocessing.py              |  467 -----
 client/tests/kvm/kvm_scheduler.py                  |  229 ---
 client/tests/kvm/kvm_subprocess.py                 | 1351 ---------------
 client/tests/kvm/kvm_test_utils.py                 |  753 ---------
 client/tests/kvm/kvm_utils.py                      | 1728 -------------------
 client/tests/kvm/kvm_vm.py                         | 1777 --------------------
 client/tests/kvm/migration_control.srv             |   12 +-
 client/tests/kvm/ppm_utils.py                      |  237 ---
 client/tests/kvm/rss_file_transfer.py              |  519 ------
 client/tests/kvm/scan_results.py                   |   97 --
 client/tests/kvm/stepeditor.py                     | 1401 ---------------
 client/tests/kvm/test_setup.py                     |  700 --------
 client/tests/kvm/tests/autotest.py                 |   25 -
 client/tests/kvm/tests/balloon_check.py            |    2 +-
 client/tests/kvm/tests/boot.py                     |   26 -
 client/tests/kvm/tests/boot_savevm.py              |    2 +-
 client/tests/kvm/tests/build.py                    |    6 +-
 client/tests/kvm/tests/clock_getres.py             |   37 -
 client/tests/kvm/tests/enospc.py                   |    2 +-
 client/tests/kvm/tests/ethtool.py                  |  235 ---
 client/tests/kvm/tests/file_transfer.py            |   83 -
 client/tests/kvm/tests/guest_s4.py                 |   76 -
 client/tests/kvm/tests/guest_test.py               |   80 -
 client/tests/kvm/tests/image_copy.py               |   45 -
 client/tests/kvm/tests/iofuzz.py                   |  136 --
 client/tests/kvm/tests/ioquit.py                   |   31 -
 client/tests/kvm/tests/iozone_windows.py           |   40 -
 client/tests/kvm/tests/jumbo.py                    |  127 --
 client/tests/kvm/tests/kdump.py                    |   75 -
 client/tests/kvm/tests/ksm_overcommit.py           |   37 +-
 client/tests/kvm/tests/linux_s3.py                 |   41 -
 client/tests/kvm/tests/mac_change.py               |   60 -
 client/tests/kvm/tests/migration.py                |    6 +-
 .../kvm/tests/migration_with_file_transfer.py      |    8 +-
 client/tests/kvm/tests/migration_with_reboot.py    |    4 +-
 client/tests/kvm/tests/module_probe.py             |    4 +-
 client/tests/kvm/tests/multicast.py                |   90 -
 client/tests/kvm/tests/netperf.py                  |   91 -
 client/tests/kvm/tests/nic_bonding.py              |    6 +-
 client/tests/kvm/tests/nic_hotplug.py              |   24 +-
 client/tests/kvm/tests/nic_promisc.py              |   39 -
 client/tests/kvm/tests/nicdriver_unload.py         |   56 -
 client/tests/kvm/tests/pci_hotplug.py              |   18 +-
 client/tests/kvm/tests/physical_resources_check.py |    2 +-
 client/tests/kvm/tests/ping.py                     |   73 -
 client/tests/kvm/tests/pxe.py                      |   30 -
 client/tests/kvm/tests/qemu_img.py                 |   22 +-
 client/tests/kvm/tests/qmp_basic.py                |    2 +-
 client/tests/kvm/tests/qmp_basic_rhel6.py          |    2 +-
 client/tests/kvm/tests/set_link.py                 |   14 +-
 client/tests/kvm/tests/shutdown.py                 |   43 -
 client/tests/kvm/tests/stepmaker.py                |   11 +-
 client/tests/kvm/tests/steps.py                    |    5 +-
 client/tests/kvm/tests/stress_boot.py              |   53 -
 client/tests/kvm/tests/timedrift.py                |   16 +-
 client/tests/kvm/tests/timedrift_with_migration.py |   10 +-
 client/tests/kvm/tests/timedrift_with_reboot.py    |   10 +-
 client/tests/kvm/tests/timedrift_with_stop.py      |   10 +-
 client/tests/kvm/tests/unattended_install.py       |    4 +-
 client/tests/kvm/tests/unittest.py                 |    6 +-
 client/tests/kvm/tests/virtio_console.py           |   22 +-
 client/tests/kvm/tests/vlan.py                     |  175 --
 client/tests/kvm/tests/vmstop.py                   |    6 +-
 client/tests/kvm/tests/whql_client_install.py      |  136 --
 client/tests/kvm/tests/whql_submission.py          |  275 ---
 client/tests/kvm/tests/yum_update.py               |   49 -
 client/tests/kvm/tests_base.cfg.sample             |    1 +
 client/tools/cd_hash.py                            |   48 +
 client/tools/html_report.py                        | 1727 +++++++++++++++++++
 client/tools/scan_results.py                       |   97 ++
 client/virt/aexpect.py                             | 1352 +++++++++++++++
 client/virt/kvm_installer.py                       |  797 +++++++++
 client/virt/kvm_monitor.py                         |  745 ++++++++
 client/virt/kvm_vm.py                              | 1500 +++++++++++++++++
 client/virt/ppm_utils.py                           |  237 +++
 client/virt/rss_client.py                          |  519 ++++++
 client/virt/tests/autotest.py                      |   25 +
 client/virt/tests/boot.py                          |   26 +
 client/virt/tests/clock_getres.py                  |   37 +
 client/virt/tests/ethtool.py                       |  235 +++
 client/virt/tests/file_transfer.py                 |   84 +
 client/virt/tests/guest_s4.py                      |   76 +
 client/virt/tests/guest_test.py                    |   80 +
 client/virt/tests/image_copy.py                    |   45 +
 client/virt/tests/iofuzz.py                        |  136 ++
 client/virt/tests/ioquit.py                        |   31 +
 client/virt/tests/iozone_windows.py                |   40 +
 client/virt/tests/jumbo.py                         |  127 ++
 client/virt/tests/kdump.py                         |   75 +
 client/virt/tests/linux_s3.py                      |   41 +
 client/virt/tests/mac_change.py                    |   60 +
 client/virt/tests/multicast.py                     |   90 +
 client/virt/tests/netperf.py                       |   90 +
 client/virt/tests/nic_promisc.py                   |   39 +
 client/virt/tests/nicdriver_unload.py              |   56 +
 client/virt/tests/ping.py                          |   73 +
 client/virt/tests/pxe.py                           |   29 +
 client/virt/tests/shutdown.py                      |   43 +
 client/virt/tests/stress_boot.py                   |   53 +
 client/virt/tests/vlan.py                          |  175 ++
 client/virt/tests/whql_client_install.py           |  136 ++
 client/virt/tests/whql_submission.py               |  275 +++
 client/virt/tests/yum_update.py                    |   49 +
 client/virt/virt_env_process.py                    |  438 +++++
 client/virt/virt_scheduler.py                      |  229 +++
 client/virt/virt_step_editor.py                    | 1401 +++++++++++++++
 client/virt/virt_test_setup.py                     |  700 ++++++++
 client/virt/virt_test_utils.py                     |  754 +++++++++
 client/virt/virt_utils.py                          | 1760 +++++++++++++++++++
 client/virt/virt_vm.py                             |  298 ++++
 121 files changed, 15706 insertions(+), 15671 deletions(-)
 create mode 100755 client/common_lib/cartesian_config.py
 delete mode 100755 client/tests/kvm/cd_hash.py
 delete mode 100755 client/tests/kvm/html_report.py
 delete mode 100644 client/tests/kvm/installer.py
 delete mode 100755 client/tests/kvm/kvm_config.py
 delete mode 100644 client/tests/kvm/kvm_monitor.py
 delete mode 100644 client/tests/kvm/kvm_preprocessing.py
 delete mode 100644 client/tests/kvm/kvm_scheduler.py
 delete mode 100755 client/tests/kvm/kvm_subprocess.py
 delete mode 100644 client/tests/kvm/kvm_test_utils.py
 delete mode 100644 client/tests/kvm/kvm_utils.py
 delete mode 100755 client/tests/kvm/kvm_vm.py
 delete mode 100644 client/tests/kvm/ppm_utils.py
 delete mode 100755 client/tests/kvm/rss_file_transfer.py
 delete mode 100755 client/tests/kvm/scan_results.py
 delete mode 100755 client/tests/kvm/stepeditor.py
 delete mode 100644 client/tests/kvm/test_setup.py
 delete mode 100644 client/tests/kvm/tests/autotest.py
 delete mode 100644 client/tests/kvm/tests/boot.py
 delete mode 100644 client/tests/kvm/tests/clock_getres.py
 delete mode 100644 client/tests/kvm/tests/ethtool.py
 delete mode 100644 client/tests/kvm/tests/file_transfer.py
 delete mode 100644 client/tests/kvm/tests/guest_s4.py
 delete mode 100644 client/tests/kvm/tests/guest_test.py
 delete mode 100644 client/tests/kvm/tests/image_copy.py
 delete mode 100644 client/tests/kvm/tests/iofuzz.py
 delete mode 100644 client/tests/kvm/tests/ioquit.py
 delete mode 100644 client/tests/kvm/tests/iozone_windows.py
 delete mode 100644 client/tests/kvm/tests/jumbo.py
 delete mode 100644 client/tests/kvm/tests/kdump.py
 delete mode 100644 client/tests/kvm/tests/linux_s3.py
 delete mode 100644 client/tests/kvm/tests/mac_change.py
 delete mode 100644 client/tests/kvm/tests/multicast.py
 delete mode 100644 client/tests/kvm/tests/netperf.py
 delete mode 100644 client/tests/kvm/tests/nic_promisc.py
 delete mode 100644 client/tests/kvm/tests/nicdriver_unload.py
 delete mode 100644 client/tests/kvm/tests/ping.py
 delete mode 100644 client/tests/kvm/tests/pxe.py
 delete mode 100644 client/tests/kvm/tests/shutdown.py
 delete mode 100644 client/tests/kvm/tests/stress_boot.py
 delete mode 100644 client/tests/kvm/tests/vlan.py
 delete mode 100644 client/tests/kvm/tests/whql_client_install.py
 delete mode 100644 client/tests/kvm/tests/whql_submission.py
 delete mode 100644 client/tests/kvm/tests/yum_update.py
 create mode 100644 client/tools/__init__.py
 create mode 100755 client/tools/cd_hash.py
 create mode 100755 client/tools/html_report.py
 create mode 100755 client/tools/scan_results.py
 create mode 100644 client/virt/__init__.py
 create mode 100755 client/virt/aexpect.py
 create mode 100644 client/virt/kvm_installer.py
 create mode 100644 client/virt/kvm_monitor.py
 create mode 100755 client/virt/kvm_vm.py
 create mode 100644 client/virt/ppm_utils.py
 create mode 100755 client/virt/rss_client.py
 create mode 100644 client/virt/tests/autotest.py
 create mode 100644 client/virt/tests/boot.py
 create mode 100644 client/virt/tests/clock_getres.py
 create mode 100644 client/virt/tests/ethtool.py
 create mode 100644 client/virt/tests/file_transfer.py
 create mode 100644 client/virt/tests/guest_s4.py
 create mode 100644 client/virt/tests/guest_test.py
 create mode 100644 client/virt/tests/image_copy.py
 create mode 100644 client/virt/tests/iofuzz.py
 create mode 100644 client/virt/tests/ioquit.py
 create mode 100644 client/virt/tests/iozone_windows.py
 create mode 100644 client/virt/tests/jumbo.py
 create mode 100644 client/virt/tests/kdump.py
 create mode 100644 client/virt/tests/linux_s3.py
 create mode 100644 client/virt/tests/mac_change.py
 create mode 100644 client/virt/tests/multicast.py
 create mode 100644 client/virt/tests/netperf.py
 create mode 100644 client/virt/tests/nic_promisc.py
 create mode 100644 client/virt/tests/nicdriver_unload.py
 create mode 100644 client/virt/tests/ping.py
 create mode 100644 client/virt/tests/pxe.py
 create mode 100644 client/virt/tests/shutdown.py
 create mode 100644 client/virt/tests/stress_boot.py
 create mode 100644 client/virt/tests/vlan.py
 create mode 100644 client/virt/tests/whql_client_install.py
 create mode 100644 client/virt/tests/whql_submission.py
 create mode 100644 client/virt/tests/yum_update.py
 create mode 100644 client/virt/virt_env_process.py
 create mode 100644 client/virt/virt_scheduler.py
 create mode 100755 client/virt/virt_step_editor.py
 create mode 100644 client/virt/virt_test_setup.py
 create mode 100644 client/virt/virt_test_utils.py
 create mode 100644 client/virt/virt_utils.py
 create mode 100644 client/virt/virt_vm.py

-- 
1.7.4

_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/7] KVM test: Move test utilities to client/tools
  2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
@ 2011-03-09  9:21 ` Lucas Meneghel Rodrigues
  2011-03-11  6:47   ` Amos Kong
  2011-03-09  9:21 ` [PATCH 2/7] KVM test: Create autotest_lib.client.virt namespace Lucas Meneghel Rodrigues
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09  9:21 UTC (permalink / raw)
  To: autotest; +Cc: kvm

The programs cd_hash, html_report and scan_results can be
used by other autotest users, so move them to the tools
directory inside the client directory.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
 client/tools/cd_hash.py      |   48 ++
 client/tools/html_report.py  | 1727 ++++++++++++++++++++++++++++++++++++++++++
 client/tools/scan_results.py |   97 +++
 3 files changed, 1872 insertions(+), 0 deletions(-)
 create mode 100644 client/tools/__init__.py
 create mode 100755 client/tools/cd_hash.py
 create mode 100755 client/tools/html_report.py
 create mode 100755 client/tools/scan_results.py

diff --git a/client/tools/__init__.py b/client/tools/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/client/tools/cd_hash.py b/client/tools/cd_hash.py
new file mode 100755
index 0000000..04f8cbe
--- /dev/null
+++ b/client/tools/cd_hash.py
@@ -0,0 +1,48 @@
+#!/usr/bin/python
+"""
+Program that calculates several hashes for a given CD image.
+
+@copyright: Red Hat 2008-2009
+"""
+
+import os, sys, optparse, logging
+import common
+import kvm_utils
+from autotest_lib.client.common_lib import logging_manager
+from autotest_lib.client.bin import utils
+
+
+if __name__ == "__main__":
+    parser = optparse.OptionParser("usage: %prog [options] [filenames]")
+    options, args = parser.parse_args()
+
+    logging_manager.configure_logging(kvm_utils.KvmLoggingConfig())
+
+    if args:
+        filenames = args
+    else:
+        parser.print_help()
+        sys.exit(1)
+
+    for filename in filenames:
+        filename = os.path.abspath(filename)
+
+        file_exists = os.path.isfile(filename)
+        can_read_file = os.access(filename, os.R_OK)
+        if not file_exists:
+            logging.critical("File %s does not exist!", filename)
+            continue
+        if not can_read_file:
+            logging.critical("File %s does not have read permissions!",
+                             filename)
+            continue
+
+        logging.info("Hash values for file %s", os.path.basename(filename))
+        logging.info("md5    (1m): %s", utils.hash_file(filename, 1024*1024,
+                                                        method="md5"))
+        logging.info("sha1   (1m): %s", utils.hash_file(filename, 1024*1024,
+                                                        method="sha1"))
+        logging.info("md5  (full): %s", utils.hash_file(filename, method="md5"))
+        logging.info("sha1 (full): %s", utils.hash_file(filename,
+                                                        method="sha1"))
+        logging.info("")
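For reference, the chunked hashing that cd_hash.py expects from
utils.hash_file can be approximated with only the standard library
(a sketch mirroring the observed call signature, not the actual
autotest implementation):

```python
import hashlib


def hash_file(filename, size=None, method="md5"):
    """Hash the first `size` bytes of a file (or the whole file when
    size is None) in 64 KB chunks, returning the hex digest."""
    h = hashlib.new(method)
    remaining = size
    with open(filename, "rb") as f:
        while remaining is None or remaining > 0:
            chunk = 65536 if remaining is None else min(65536, remaining)
            data = f.read(chunk)
            if not data:
                break
            h.update(data)
            if remaining is not None:
                remaining -= len(data)
    return h.hexdigest()
```

Reading in fixed-size chunks keeps memory flat even for multi-gigabyte
CD images, which is why the program hashes both a 1 MB prefix and the
full file.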
diff --git a/client/tools/html_report.py b/client/tools/html_report.py
new file mode 100755
index 0000000..8b4b109
--- /dev/null
+++ b/client/tools/html_report.py
@@ -0,0 +1,1727 @@
+#!/usr/bin/python
+"""
+Script used to parse the test results and generate an HTML report.
+
+@copyright: (c)2005-2007 Matt Kruse (javascripttoolbox.com)
+@copyright: Red Hat 2008-2009
+@author: Dror Russo (drusso@redhat.com)
+"""
+
+import os, sys, re, getopt, time, datetime, commands
+import common
+
+
+format_css = """
+html,body {
+    padding:0;
+    color:#222;
+    background:#FFFFFF;
+}
+
+body {
+    padding:0px;
+    font:76%/150% "Lucida Grande", "Lucida Sans Unicode", Lucida, Verdana, Geneva, Arial, Helvetica, sans-serif;
+}
+
+#page_title{
+    text-decoration:none;
+    font:bold 2em/2em Arial, Helvetica, sans-serif;
+    text-transform:none;
+    text-shadow: 2px 2px 2px #555;
+    text-align: left;
+    color:#555555;
+    border-bottom: 1px solid #555555;
+}
+
+#page_sub_title{
+        text-decoration:none;
+        font:bold 16px Arial, Helvetica, sans-serif;
+        text-transform:uppercase;
+        text-shadow: 2px 2px 2px #555;
+        text-align: left;
+        color:#555555;
+    margin-bottom:0;
+}
+
+#comment{
+        text-decoration:none;
+        font:bold 10px Arial, Helvetica, sans-serif;
+        text-transform:none;
+        text-align: left;
+        color:#999999;
+    margin-top:0;
+}
+
+
+#meta_headline{
+                text-decoration:none;
+                font-family: Verdana, Geneva, Arial, Helvetica, sans-serif ;
+                text-align: left;
+                color:black;
+                font-weight: bold;
+                font-size: 14px;
+        }
+
+
+table.meta_table
+{text-align: center;
+font-family: Verdana, Geneva, Arial, Helvetica, sans-serif ;
+width: 90%;
+background-color: #FFFFFF;
+border: 0px;
+border-top: 1px #003377 solid;
+border-bottom: 1px #003377 solid;
+border-right: 1px #003377 solid;
+border-left: 1px #003377 solid;
+border-collapse: collapse;
+border-spacing: 0px;}
+
+table.meta_table td
+{background-color: #FFFFFF;
+color: #000;
+padding: 4px;
+border-top: 1px #BBBBBB solid;
+border-bottom: 1px #BBBBBB solid;
+font-weight: normal;
+font-size: 13px;}
+
+
+table.stats
+{text-align: center;
+font-family: Verdana, Geneva, Arial, Helvetica, sans-serif ;
+width: 100%;
+background-color: #FFFFFF;
+border: 0px;
+border-top: 1px #003377 solid;
+border-bottom: 1px #003377 solid;
+border-right: 1px #003377 solid;
+border-left: 1px #003377 solid;
+border-collapse: collapse;
+border-spacing: 0px;}
+
+table.stats td{
+background-color: #FFFFFF;
+color: #000;
+padding: 4px;
+border-top: 1px #BBBBBB solid;
+border-bottom: 1px #BBBBBB solid;
+font-weight: normal;
+font-size: 11px;}
+
+table.stats th{
+background: #dcdcdc;
+color: #000;
+padding: 6px;
+font-size: 12px;
+border-bottom: 1px #003377 solid;
+font-weight: bold;}
+
+table.stats td.top{
+background-color: #dcdcdc;
+color: #000;
+padding: 6px;
+text-align: center;
+border: 0px;
+border-bottom: 1px #003377 solid;
+font-size: 10px;
+font-weight: bold;}
+
+table.stats th.table-sorted-asc{
+        background-image: url(ascending.gif);
+        background-position: top left  ;
+        background-repeat: no-repeat;
+}
+
+table.stats th.table-sorted-desc{
+        background-image: url(descending.gif);
+        background-position: top left;
+        background-repeat: no-repeat;
+}
+
+table.stats2
+{text-align: left;
+font-family: Verdana, Geneva, Arial, Helvetica, sans-serif ;
+width: 100%;
+background-color: #FFFFFF;
+border: 0px;
+}
+
+table.stats2 td{
+background-color: #FFFFFF;
+color: #000;
+padding: 0px;
+font-weight: bold;
+font-size: 13px;}
+
+
+
+/* Put this inside a @media qualifier so Netscape 4 ignores it */
+@media screen, print {
+        /* Turn off list bullets */
+        ul.mktree  li { list-style: none; }
+        /* Control how "spaced out" the tree is */
+        ul.mktree, ul.mktree ul , ul.mktree li { margin-left:10px; padding:0px; }
+        /* Provide space for our own "bullet" inside the LI */
+        ul.mktree  li           .bullet { padding-left: 15px; }
+        /* Show "bullets" in the links, depending on the class of the LI that the link's in */
+        ul.mktree  li.liOpen    .bullet { cursor: pointer; }
+        ul.mktree  li.liClosed  .bullet { cursor: pointer;  }
+        ul.mktree  li.liBullet  .bullet { cursor: default; }
+        /* Sublists are visible or not based on class of parent LI */
+        ul.mktree  li.liOpen    ul { display: block; }
+        ul.mktree  li.liClosed  ul { display: none; }
+
+        /* Format menu items differently depending on what level of the tree they are in */
+        /* Uncomment this if you want your fonts to decrease in size the deeper they are in the tree */
+/*
+        ul.mktree  li ul li { font-size: 90% }
+*/
+}
+"""
+
+
+table_js = """
+/**
+ * Copyright (c)2005-2007 Matt Kruse (javascripttoolbox.com)
+ *
+ * Dual licensed under the MIT and GPL licenses.
+ * This basically means you can use this code however you want for
+ * free, but don't claim to have written it yourself!
+ * Donations always accepted: http://www.JavascriptToolbox.com/donate/
+ *
+ * Please do not link to the .js files on javascripttoolbox.com from
+ * your site. Copy the files locally to your server instead.
+ *
+ */
+/**
+ * Table.js
+ * Functions for interactive Tables
+ *
+ * Copyright (c) 2007 Matt Kruse (javascripttoolbox.com)
+ * Dual licensed under the MIT and GPL licenses.
+ *
+ * @version 0.981
+ *
+ * @history 0.981 2007-03-19 Added Sort.numeric_comma, additional date parsing formats
+ * @history 0.980 2007-03-18 Release new BETA release pending some testing. Todo: Additional docs, examples, plus jQuery plugin.
+ * @history 0.959 2007-03-05 Added more "auto" functionality, couple bug fixes
+ * @history 0.958 2007-02-28 Added auto functionality based on class names
+ * @history 0.957 2007-02-21 Speed increases, more code cleanup, added Auto Sort functionality
+ * @history 0.956 2007-02-16 Cleaned up the code and added Auto Filter functionality.
+ * @history 0.950 2006-11-15 First BETA release.
+ *
+ * @todo Add more date format parsers
+ * @todo Add style classes to colgroup tags after sorting/filtering in case the user wants to highlight the whole column
+ * @todo Correct for colspans in data rows (this may slow it down)
+ * @todo Fix for IE losing form control values after sort?
+ */
+
+/**
+ * Sort Functions
+ */
+var Sort = (function(){
+        var sort = {};
+        // Default alpha-numeric sort
+        // --------------------------
+        sort.alphanumeric = function(a,b) {
+                return (a==b)?0:(a<b)?-1:1;
+        };
+        sort.alphanumeric_rev = function(a,b) {
+                return (a==b)?0:(a<b)?1:-1;
+        };
+        sort['default'] = sort.alphanumeric; // IE chokes on sort.default
+
+        // This conversion is generalized to work for either a decimal separator of , or .
+        sort.numeric_converter = function(separator) {
+                return function(val) {
+                        if (typeof(val)=="string") {
+                                val = parseFloat(val.replace(/^[^\d\.]*([\d., ]+).*/g,"$1").replace(new RegExp("[^\\\d"+separator+"]","g"),'').replace(/,/,'.')) || 0;
+                        }
+                        return val || 0;
+                };
+        };
+
+        // Numeric Reversed Sort
+        // ------------
+        sort.numeric_rev = function(a,b) {
+                if (sort.numeric.convert(a)>sort.numeric.convert(b)) {
+                        return (-1);
+                }
+                if (sort.numeric.convert(a)==sort.numeric.convert(b)) {
+                        return 0;
+                }
+                if (sort.numeric.convert(a)<sort.numeric.convert(b)) {
+                        return 1;
+                }
+        };
+
+
+        // Numeric Sort
+        // ------------
+        sort.numeric = function(a,b) {
+                return sort.numeric.convert(a)-sort.numeric.convert(b);
+        };
+        sort.numeric.convert = sort.numeric_converter(".");
+
+        // Numeric Sort - comma decimal separator
+        // --------------------------------------
+        sort.numeric_comma = function(a,b) {
+                return sort.numeric_comma.convert(a)-sort.numeric_comma.convert(b);
+        };
+        sort.numeric_comma.convert = sort.numeric_converter(",");
+
+        // Case-insensitive Sort
+        // ---------------------
+        sort.ignorecase = function(a,b) {
+                return sort.alphanumeric(sort.ignorecase.convert(a),sort.ignorecase.convert(b));
+        };
+        sort.ignorecase.convert = function(val) {
+                if (val==null) { return ""; }
+                return (""+val).toLowerCase();
+        };
+
+        // Currency Sort
+        // -------------
+        sort.currency = sort.numeric; // Just treat it as numeric!
+        sort.currency_comma = sort.numeric_comma;
+
+        // Date sort
+        // ---------
+        sort.date = function(a,b) {
+                return sort.numeric(sort.date.convert(a),sort.date.convert(b));
+        };
+        // Convert 2-digit years to 4
+        sort.date.fixYear=function(yr) {
+                yr = +yr;
+                if (yr<50) { yr += 2000; }
+                else if (yr<100) { yr += 1900; }
+                return yr;
+        };
+        sort.date.formats = [
+                // YY[YY]-MM-DD
+                { re:/(\d{2,4})-(\d{1,2})-(\d{1,2})/ , f:function(x){ return (new Date(sort.date.fixYear(x[1]),+x[2],+x[3])).getTime(); } }
+                // MM/DD/YY[YY] or MM-DD-YY[YY]
+                ,{ re:/(\d{1,2})[\/-](\d{1,2})[\/-](\d{2,4})/ , f:function(x){ return (new Date(sort.date.fixYear(x[3]),+x[1],+x[2])).getTime(); } }
+                // Any catch-all format that new Date() can handle. This is not reliable except for long formats, for example: 31 Jan 2000 01:23:45 GMT
+                ,{ re:/(.*\d{4}.*\d+:\d+\d+.*)/, f:function(x){ var d=new Date(x[1]); if(d){return d.getTime();} } }
+        ];
+        sort.date.convert = function(val) {
+                var m,v, f = sort.date.formats;
+                for (var i=0,L=f.length; i<L; i++) {
+                        if (m=val.match(f[i].re)) {
+                                v=f[i].f(m);
+                                if (typeof(v)!="undefined") { return v; }
+                        }
+                }
+                return 9999999999999; // So non-parsed dates will be last, not first
+        };
+
+        return sort;
+})();
+
+/**
+ * The main Table namespace
+ */
+var Table = (function(){
+
+        /**
+         * Determine if a reference is defined
+         */
+        function def(o) {return (typeof o!="undefined");};
+
+        /**
+         * Determine if an object or class string contains a given class.
+         */
+        function hasClass(o,name) {
+                return new RegExp("(^|\\\s)"+name+"(\\\s|$)").test(o.className);
+        };
+
+        /**
+         * Add a class to an object
+         */
+        function addClass(o,name) {
+                var c = o.className || "";
+                if (def(c) && !hasClass(o,name)) {
+                        o.className += (c?" ":"") + name;
+                }
+        };
+
+        /**
+         * Remove a class from an object
+         */
+        function removeClass(o,name) {
+                var c = o.className || "";
+                o.className = c.replace(new RegExp("(^|\\\s)"+name+"(\\\s|$)"),"$1");
+        };
+
+        /**
+         * For classes that match a given substring, return the rest
+         */
+        function classValue(o,prefix) {
+                var c = o.className;
+                if (c.match(new RegExp("(^|\\\s)"+prefix+"([^ ]+)"))) {
+                        return RegExp.$2;
+                }
+                return null;
+        };
+
+        /**
+         * Return true if an object is hidden.
+         * This uses the "russian doll" technique to unwrap itself to the most efficient
+         * function after the first pass. This avoids repeated feature detection that
+         * would always fall into the same block of code.
+         */
+         function isHidden(o) {
+                if (window.getComputedStyle) {
+                        var cs = window.getComputedStyle;
+                        return (isHidden = function(o) {
+                                return 'none'==cs(o,null).getPropertyValue('display');
+                        })(o);
+                }
+                else if (window.currentStyle) {
+                        return(isHidden = function(o) {
+                                return 'none'==o.currentStyle['display'];
+                        })(o);
+                }
+                return (isHidden = function(o) {
+                        return 'none'==o.style['display'];
+                })(o);
+        };
+
+        /**
+         * Get a parent element by tag name, or the original element if it is of the tag type
+         */
+        function getParent(o,a,b) {
+                if (o!=null && o.nodeName) {
+                        if (o.nodeName==a || (b && o.nodeName==b)) {
+                                return o;
+                        }
+                        while (o=o.parentNode) {
+                                if (o.nodeName && (o.nodeName==a || (b && o.nodeName==b))) {
+                                        return o;
+                                }
+                        }
+                }
+                return null;
+        };
+
+        /**
+         * Utility function to copy properties from one object to another
+         */
+        function copy(o1,o2) {
+                for (var i=2;i<arguments.length; i++) {
+                        var a = arguments[i];
+                        if (def(o1[a])) {
+                                o2[a] = o1[a];
+                        }
+                }
+        }
+
+        // The table object itself
+        var table = {
+                //Class names used in the code
+                AutoStripeClassName:"table-autostripe",
+                StripeClassNamePrefix:"table-stripeclass:",
+
+                AutoSortClassName:"table-autosort",
+                AutoSortColumnPrefix:"table-autosort:",
+                AutoSortTitle:"Click to sort",
+                SortedAscendingClassName:"table-sorted-asc",
+                SortedDescendingClassName:"table-sorted-desc",
+                SortableClassName:"table-sortable",
+                SortableColumnPrefix:"table-sortable:",
+                NoSortClassName:"table-nosort",
+
+                AutoFilterClassName:"table-autofilter",
+                FilteredClassName:"table-filtered",
+                FilterableClassName:"table-filterable",
+                FilteredRowcountPrefix:"table-filtered-rowcount:",
+                RowcountPrefix:"table-rowcount:",
+                FilterAllLabel:"Filter: All",
+
+                AutoPageSizePrefix:"table-autopage:",
+                AutoPageJumpPrefix:"table-page:",
+                PageNumberPrefix:"table-page-number:",
+                PageCountPrefix:"table-page-count:"
+        };
+
+        /**
+         * A place to store misc table information, rather than in the table objects themselves
+         */
+        table.tabledata = {};
+
+        /**
+         * Resolve a table given an element reference, and make sure it has a unique ID
+         */
+        table.uniqueId=1;
+        table.resolve = function(o,args) {
+                if (o!=null && o.nodeName && o.nodeName!="TABLE") {
+                        o = getParent(o,"TABLE");
+                }
+                if (o==null) { return null; }
+                if (!o.id) {
+                        var id;
+                        do { id = "TABLE_"+(table.uniqueId++); }
+                                while (document.getElementById(id)!=null);
+                        o.id = id;
+                }
+                this.tabledata[o.id] = this.tabledata[o.id] || {};
+                if (args) {
+                        copy(args,this.tabledata[o.id],"stripeclass","ignorehiddenrows","useinnertext","sorttype","col","desc","page","pagesize");
+                }
+                return o;
+        };
+
+
+        /**
+         * Run a function against each cell in a table header or footer, usually
+         * to add or remove css classes based on sorting, filtering, etc.
+         */
+        table.processTableCells = function(t, type, func, arg) {
+                t = this.resolve(t);
+                if (t==null) { return; }
+                if (type!="TFOOT") {
+                        this.processCells(t.tHead, func, arg);
+                }
+                if (type!="THEAD") {
+                        this.processCells(t.tFoot, func, arg);
+                }
+        };
+
+        /**
+         * Internal method used to process an arbitrary collection of cells.
+         * Referenced by processTableCells.
+         * It's done this way to avoid getElementsByTagName() which would also return nested table cells.
+         */
+        table.processCells = function(section,func,arg) {
+                if (section!=null) {
+                        if (section.rows && section.rows.length>0) {
+                                var rows = section.rows;
+                                for (var j=0,L2=rows.length; j<L2; j++) {
+                                        var row = rows[j];
+                                        if (row.cells && row.cells.length>0) {
+                                                var cells = row.cells;
+                                                for (var k=0,L3=cells.length; k<L3; k++) {
+                                                        var cellsK = cells[k];
+                                                        func.call(this,cellsK,arg);
+                                                }
+                                        }
+                                }
+                        }
+                }
+        };
+
+        /**
+         * Get the cellIndex value for a cell. This is only needed because of a Safari
+         * bug that causes cellIndex to exist but always be 0.
+         * Rather than feature-detecting each time it is called, the function will
+         * re-write itself the first time it is called.
+         */
+        table.getCellIndex = function(td) {
+                var tr = td.parentNode;
+                var cells = tr.cells;
+                if (cells && cells.length) {
+                        if (cells.length>1 && cells[cells.length-1].cellIndex>0) {
+                                // Define the new function, overwrite the one we're running now, and then run the new one
+                                (this.getCellIndex = function(td) {
+                                        return td.cellIndex;
+                                })(td);
+                        }
+                        // Safari will always go through this slower block every time. Oh well.
+                        for (var i=0,L=cells.length; i<L; i++) {
+                                if (tr.cells[i]==td) {
+                                        return i;
+                                }
+                        }
+                }
+                return 0;
+        };
+
+        /**
+         * A map of node names and how to convert them into their "value" for sorting, filtering, etc.
+         * These are put here so it is extensible.
+         */
+        table.nodeValue = {
+                'INPUT':function(node) {
+                        if (def(node.value) && node.type && ((node.type!="checkbox" && node.type!="radio") || node.checked)) {
+                                return node.value;
+                        }
+                        return "";
+                },
+                'SELECT':function(node) {
+                        if (node.selectedIndex>=0 && node.options) {
+                                // Sort select elements by the visible text
+                                return node.options[node.selectedIndex].text;
+                        }
+                        return "";
+                },
+                'IMG':function(node) {
+                        return node.name || "";
+                }
+        };
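+        // Extension example (hypothetical): handlers for additional node types can
+        // be added by keying on the uppercase node name, e.g.
+        //   table.nodeValue['TEXTAREA'] = function(node) { return node.value || ""; };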
+
+        /**
+         * Get the text value of a cell. Only use innerText if explicitly told to, because
+         * otherwise we want to be able to handle sorting on inputs and other types
+         */
+        table.getCellValue = function(td,useInnerText) {
+                if (useInnerText && def(td.innerText)) {
+                        return td.innerText;
+                }
+                if (!td.childNodes) {
+                        return "";
+                }
+                var childNodes=td.childNodes;
+                var ret = "";
+                for (var i=0,L=childNodes.length; i<L; i++) {
+                        var node = childNodes[i];
+                        var type = node.nodeType;
+                        // In order to get realistic sort results, we need to treat some elements in a special way.
+                        // These behaviors are defined in the nodeValue() object, keyed by node name
+                        if (type==1) {
+                                var nname = node.nodeName;
+                                if (this.nodeValue[nname]) {
+                                        ret += this.nodeValue[nname](node);
+                                }
+                                else {
+                                        ret += this.getCellValue(node);
+                                }
+                        }
+                        else if (type==3) {
+                                // Text nodes expose their content through nodeValue
+                                if (def(node.nodeValue)) {
+                                        ret += node.nodeValue;
+                                }
+                        }
+                }
+                return ret;
+        };
+
+        /**
+         * Consider colspan and rowspan values in table header cells to calculate the actual cellIndex
+         * of a given cell. This is necessary because if the first cell in row 0 has a rowspan of 2,
+         * then the first cell in row 1 will have a cellIndex of 0 rather than 1, even though it really
+         * starts in the second column rather than the first.
+         * See: http://www.javascripttoolbox.com/temp/table_cellindex.html
+         */
+        table.tableHeaderIndexes = {};
+        table.getActualCellIndex = function(tableCellObj) {
+                if (!def(tableCellObj.cellIndex)) { return null; }
+                var tableObj = getParent(tableCellObj,"TABLE");
+                var cellCoordinates = tableCellObj.parentNode.rowIndex+"-"+this.getCellIndex(tableCellObj);
+
+                // If it has already been computed, return the answer from the lookup table
+                if (def(this.tableHeaderIndexes[tableObj.id])) {
+                        return this.tableHeaderIndexes[tableObj.id][cellCoordinates];
+                }
+
+                var matrix = [];
+                this.tableHeaderIndexes[tableObj.id] = {};
+                var thead = getParent(tableCellObj,"THEAD");
+                var trs = thead.getElementsByTagName('TR');
+
+                // Loop thru every tr and every cell in the tr, building up a 2-d array "grid" that gets
+                // populated with an "x" for each space that a cell takes up. If the first cell is colspan
+                // 2, it will fill in values [0] and [1] in the first array, so that the second cell will
+                // find the first empty cell in the first row (which will be [2]) and know that this is
+                // where it sits, rather than its internal .cellIndex value of [1].
+                for (var i=0; i<trs.length; i++) {
+                        var cells = trs[i].cells;
+                        for (var j=0; j<cells.length; j++) {
+                                var c = cells[j];
+                                var rowIndex = c.parentNode.rowIndex;
+                                var cellId = rowIndex+"-"+this.getCellIndex(c);
+                                var rowSpan = c.rowSpan || 1;
+                                var colSpan = c.colSpan || 1;
+                                var firstAvailCol;
+                                if(!def(matrix[rowIndex])) {
+                                        matrix[rowIndex] = [];
+                                }
+                                var m = matrix[rowIndex];
+                                // Find the first available column in this row
+                                for (var k=0; k<m.length+1; k++) {
+                                        if (!def(m[k])) {
+                                                firstAvailCol = k;
+                                                break;
+                                        }
+                                }
+                                this.tableHeaderIndexes[tableObj.id][cellId] = firstAvailCol;
+                                for (var k=rowIndex; k<rowIndex+rowSpan; k++) {
+                                        if(!def(matrix[k])) {
+                                                matrix[k] = [];
+                                        }
+                                        var matrixrow = matrix[k];
+                                        for (var l=firstAvailCol; l<firstAvailCol+colSpan; l++) {
+                                                matrixrow[l] = "x";
+                                        }
+                                }
+                        }
+                }
+                // Store the map so future lookups are fast.
+                return this.tableHeaderIndexes[tableObj.id][cellCoordinates];
+        };
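+        // Worked example (hypothetical markup): given a THEAD of
+        //   <tr><th rowspan="2">A</th><th colspan="2">B</th></tr>
+        //   <tr><th>C</th><th>D</th></tr>
+        // A fills grid column 0 of both rows and B fills columns 1-2 of row 0,
+        // so the second-row cells C and D (cellIndex 0 and 1) resolve to actual
+        // column indexes 1 and 2.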
+
+        /**
+         * Sort all rows in each TBODY (tbodies are sorted independently of each other)
+         */
+        table.sort = function(o,args) {
+                var t, tdata, sortconvert=null;
+                // Allow for a simple passing of sort type as second parameter
+                if (typeof(args)=="function") {
+                        args={sorttype:args};
+                }
+                args = args || {};
+
+                // If no col is specified, deduce it from the object sent in
+                if (!def(args.col)) {
+                        args.col = this.getActualCellIndex(o) || 0;
+                }
+                // If no sort type is specified, default to the default sort
+                args.sorttype = args.sorttype || Sort['default'];
+
+                // Resolve the table
+                t = this.resolve(o,args);
+                tdata = this.tabledata[t.id];
+
+                // If we are sorting on the same column as last time, flip the sort direction
+                if (def(tdata.lastcol) && tdata.lastcol==tdata.col && def(tdata.lastdesc)) {
+                        tdata.desc = !tdata.lastdesc;
+                }
+                else {
+                        tdata.desc = !!args.desc;
+                }
+
+                // Store the last sorted column so clicking again will reverse the sort order
+                tdata.lastcol=tdata.col;
+                tdata.lastdesc=!!tdata.desc;
+
+                // If a sort conversion function exists, pre-convert cell values and then use a plain alphanumeric sort
+                var sorttype = tdata.sorttype;
+                if (typeof(sorttype.convert)=="function") {
+                        sortconvert=tdata.sorttype.convert;
+                        sorttype=Sort.alphanumeric;
+                }
+
+                // Loop through all THEADs and remove sorted class names, then re-add them for the col
+                // that is being sorted
+                this.processTableCells(t,"THEAD",
+                        function(cell) {
+                                if (hasClass(cell,this.SortableClassName)) {
+                                        removeClass(cell,this.SortedAscendingClassName);
+                                        removeClass(cell,this.SortedDescendingClassName);
+                                        // If the computed colIndex of the cell equals the sorted colIndex, flag it as sorted
+                                        if (tdata.col==table.getActualCellIndex(cell) && (classValue(cell,table.SortableClassName))) {
+                                                addClass(cell,tdata.desc?this.SortedAscendingClassName:this.SortedDescendingClassName);
+                                        }
+                                }
+                        }
+                );
+
+                // Sort each tbody independently
+                var bodies = t.tBodies;
+                if (bodies==null || bodies.length==0) { return; }
+
+                // Define a new sort function to be called to consider descending or not
+                var newSortFunc = (tdata.desc)?
+                        function(a,b){return sorttype(b[0],a[0]);}
+                        :function(a,b){return sorttype(a[0],b[0]);};
+
+                var useinnertext=!!tdata.useinnertext;
+                var col = tdata.col;
+
+                for (var i=0,L=bodies.length; i<L; i++) {
+                        var tb = bodies[i], tbrows = tb.rows, rows = [];
+
+                        // Allow tbodies to request that they not be sorted
+                        if(!hasClass(tb,table.NoSortClassName)) {
+                                // Create a separate array which will store the converted values and refs to the
+                                // actual rows. This is the array that will be sorted.
+                                var cRow, cRowIndex=0, rowCells;
+                                if (cRow=tbrows[cRowIndex]){
+                                        // Funky loop style because it's considerably faster in IE
+                                        do {
+                                                if (rowCells = cRow.cells) {
+                                                        var cellValue = (col<rowCells.length)?this.getCellValue(rowCells[col],useinnertext):null;
+                                                        if (sortconvert) cellValue = sortconvert(cellValue);
+                                                        rows[cRowIndex] = [cellValue,tbrows[cRowIndex]];
+                                                }
+                                        } while (cRow=tbrows[++cRowIndex])
+                                }
+
+                                // Do the actual sorting
+                                rows.sort(newSortFunc);
+
+                                // Move the rows to the correctly sorted order. Appending an existing DOM object just moves it!
+                                cRowIndex=0;
+                                if (cRow=rows[cRowIndex]){
+                                        do {
+                                                tb.appendChild(cRow[1]);
+                                        } while (cRow=rows[++cRowIndex])
+                                }
+                        }
+                }
+
+                // If paging is enabled on the table, then we need to re-page because the order of rows has changed!
+                if (tdata.pagesize) {
+                        this.page(t); // This will internally do the striping
+                }
+                else {
+                        // Re-stripe if a class name was supplied
+                        if (tdata.stripeclass) {
+                                this.stripe(t,tdata.stripeclass);
+                        }
+                }
+        };
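+        // Example usage (hypothetical; assumes this object is exposed as 'table'):
+        //   <th onclick="table.sort(this)">Name</th>
+        // or programmatically: table.sort(th, {col:0, sorttype:Sort.alphanumeric});
+        // sorting the same column twice flips the direction via lastcol/lastdesc.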
+
+        /**
+         * Apply a filter to rows in a table and hide those that do not match.
+         */
+        table.filter = function(o,filters,args) {
+                var cell;
+                args = args || {};
+
+                var t = this.resolve(o,args);
+                var tdata = this.tabledata[t.id];
+
+                // If new filters were passed in, apply them to the table's list of filters
+                if (!filters) {
+                        // If a null or blank value was sent in for 'filters' then that means reset the table to no filters
+                        tdata.filters = null;
+                }
+                else {
+                        // Allow for passing a select list in as the filter, since this is common design
+                        if (filters.nodeName=="SELECT" && filters.type=="select-one" && filters.selectedIndex>-1) {
+                                filters={ 'filter':filters.options[filters.selectedIndex].value };
+                        }
+                        // Also allow for a regular input
+                        if (filters.nodeName=="INPUT" && filters.type=="text") {
+                                filters={ 'filter':"/"+filters.value+"/" };
+                        }
+                        // Force filters to be an array
+                        if (typeof(filters)=="object" && !filters.length) {
+                                filters = [filters];
+                        }
+
+                        // Convert regular expression strings to RegExp objects and function strings to function objects
+                        for (var i=0,L=filters.length; i<L; i++) {
+                                var filter = filters[i];
+                                if (typeof(filter.filter)=="string") {
+                                        // If a filter string is like "/expr/" then turn it into a Regex
+                                        if (filter.filter.match(/^\/(.*)\/$/)) {
+                                                filter.filter = new RegExp(RegExp.$1);
+                                                filter.filter.regex=true;
+                                        }
+                                        // If filter string is like "function (x) { ... }" then turn it into a function
+                                        else if (filter.filter.match(/^function\s*\(([^\)]*)\)\s*\{(.*)}\s*$/)) {
+                                                filter.filter = Function(RegExp.$1,RegExp.$2);
+                                        }
+                                }
+                                // If some non-table object was passed in rather than a 'col' value, resolve it
+                                // and assign its column index to the filter if it doesn't have one. This way,
+                                // passing in a cell reference or a select object etc instead of a table object
+                                // will automatically set the correct column to filter.
+                                if (filter && !def(filter.col) && (cell=getParent(o,"TD","TH"))) {
+                                        filter.col = this.getCellIndex(cell);
+                                }
+
+                                // Apply the passed-in filters to the existing list of filters
+                                // for the table, removing those that have a filter of null or ""
+                                if (filter) {
+                                        if (!filter.filter && tdata.filters) {
+                                                delete tdata.filters[filter.col];
+                                        }
+                                        else {
+                                                tdata.filters = tdata.filters || {};
+                                                tdata.filters[filter.col] = filter.filter;
+                                        }
+                                }
+                        }
+                        // If no filters remain, empty out the filters object
+                        var keep = false;
+                        for (var j in tdata.filters) { keep = true; break; }
+                        if (!keep) {
+                                tdata.filters = null;
+                        }
+                }
+                // Everything's been setup, so now scrape the table rows
+                return table.scrape(o);
+        };
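+        // Example usage (hypothetical element references): 'filters' may be a
+        // SELECT or text INPUT element, or explicit {col,filter} objects where
+        // filter is a plain value, a "/regex/" string, or a function:
+        //   table.filter(myTable, [{col:2, filter:"/^foo/"}]);
+        //   table.filter(mySelect, mySelect); // use the select's value on its own column
+        //   table.filter(myTable, null);      // clear all filters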
+
+        /**
+         * "Page" a table by showing only a subset of the rows
+         */
+        table.page = function(t,page,args) {
+                args = args || {};
+                if (def(page)) { args.page = page; }
+                return table.scrape(t,args);
+        };
+
+        /**
+         * Jump forward or back any number of pages
+         */
+        table.pageJump = function(t,count,args) {
+                t = this.resolve(t,args);
+                return this.page(t,(table.tabledata[t.id].page||0)+count,args);
+        };
+
+        /**
+         * Go to the next page of a paged table
+         */
+        table.pageNext = function(t,args) {
+                return this.pageJump(t,1,args);
+        };
+
+        /**
+         * Go to the previous page of a paged table
+         */
+        table.pagePrevious = function(t,args) {
+                return this.pageJump(t,-1,args);
+        };
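+        // Example usage (hypothetical): table.page(myTable, 0, {pagesize:10})
+        // shows the first 10 unfiltered rows; table.pageNext(myTable) advances
+        // one page, re-scraping the table.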
+
+        /**
+         * Scrape a table to either hide or show each row based on filters and paging
+         */
+        table.scrape = function(o,args) {
+                var col,filter;
+                var page,pagesize,pagestart,pageend;
+                var unfilteredrows=[],unfilteredrowcount=0,totalrows=0;
+                var t,tdata,row,hideRow;
+                args = args || {};
+
+                // Resolve the table object
+                t = this.resolve(o,args);
+                tdata = this.tabledata[t.id];
+
+                // Setup for Paging
+                page = tdata.page;
+                if (def(page)) {
+                        // Don't let the page go before the beginning
+                        if (page<0) { tdata.page=page=0; }
+                        pagesize = tdata.pagesize || 25; // 25=arbitrary default
+                        pagestart = page*pagesize+1;
+                        pageend = pagestart + pagesize - 1;
+                }
+
+                // Scrape each row of each tbody
+                var bodies = t.tBodies;
+                if (bodies==null || bodies.length==0) { return; }
+                for (var i=0,L=bodies.length; i<L; i++) {
+                        var tb = bodies[i];
+                        for (var j=0,L2=tb.rows.length; j<L2; j++) {
+                                row = tb.rows[j];
+                                hideRow = false;
+
+                                // Test if filters will hide the row
+                                if (tdata.filters && row.cells) {
+                                        var cells = row.cells;
+                                        var cellsLength = cells.length;
+                                        // Test each filter
+                                        for (col in tdata.filters) {
+                                                if (!hideRow) {
+                                                        filter = tdata.filters[col];
+                                                        if (filter && col<cellsLength) {
+                                                                var val = this.getCellValue(cells[col]);
+                                                                if (filter.regex && val.search) {
+                                                                        hideRow=(val.search(filter)<0);
+                                                                }
+                                                                else if (typeof(filter)=="function") {
+                                                                        hideRow=!filter(val,cells[col]);
+                                                                }
+                                                                else {
+                                                                        hideRow = (val!=filter);
+                                                                }
+                                                        }
+                                                }
+                                        }
+                                }
+
+                                // Keep track of the total rows scanned and the total rows _not_ filtered out
+                                totalrows++;
+                                if (!hideRow) {
+                                        unfilteredrowcount++;
+                                        if (def(page)) {
+                                                // Temporarily keep an array of unfiltered rows in case the page we're on goes past
+                                                // the last page and we need to back up. Don't want to filter again!
+                                                unfilteredrows.push(row);
+                                                if (unfilteredrowcount<pagestart || unfilteredrowcount>pageend) {
+                                                        hideRow = true;
+                                                }
+                                        }
+                                }
+
+                                row.style.display = hideRow?"none":"";
+                        }
+                }
+
+                if (def(page)) {
+                        // Check to see if filtering has put us past the requested page index.
+                        // If it has, then go back to the last non-empty page and show it.
+                        if (pagestart>unfilteredrowcount) {
+                                // Handle counts that are exact multiples of pagesize
+                                pagestart = Math.max(0, unfilteredrowcount-((unfilteredrowcount%pagesize)||pagesize));
+                                tdata.page = page = pagestart/pagesize;
+                                for (var i=pagestart,L=unfilteredrows.length; i<L; i++) {
+                                        unfilteredrows[i].style.display="";
+                                }
+                        }
+                }
+
+                // Loop through all THEADs and add/remove filtered class names
+                this.processTableCells(t,"THEAD",
+                        function(c) {
+                                ((tdata.filters && def(tdata.filters[table.getCellIndex(c)]) && hasClass(c,table.FilterableClassName))?addClass:removeClass)(c,table.FilteredClassName);
+                        }
+                );
+
+                // Stripe the table if necessary
+                if (tdata.stripeclass) {
+                        this.stripe(t);
+                }
+
+                // Calculate some values to be returned for info and updating purposes
+                var pagecount = def(page) ? Math.max(1, Math.ceil(unfilteredrowcount/pagesize)) : 1;
+                if (def(page)) {
+                        // Update the page number/total containers if they exist
+                        if (tdata.container_number) {
+                                tdata.container_number.innerHTML = page+1;
+                        }
+                        if (tdata.container_count) {
+                                tdata.container_count.innerHTML = pagecount;
+                        }
+                }
+
+                // Update the row count containers if they exist
+                if (tdata.container_filtered_count) {
+                        tdata.container_filtered_count.innerHTML = unfilteredrowcount;
+                }
+                if (tdata.container_all_count) {
+                        tdata.container_all_count.innerHTML = totalrows;
+                }
+                return { 'data':tdata, 'unfilteredcount':unfilteredrowcount, 'total':totalrows, 'pagecount':pagecount, 'page':page, 'pagesize':pagesize };
+        };
+
+        /**
+         * Shade alternate rows, aka Stripe the table.
+         */
+        table.stripe = function(t,className,args) {
+                args = args || {};
+                args.stripeclass = className;
+
+                t = this.resolve(t,args);
+                var tdata = this.tabledata[t.id];
+
+                var bodies = t.tBodies;
+                if (bodies==null || bodies.length==0) {
+                        return;
+                }
+
+                className = tdata.stripeclass;
+                // Cache a shorter, quicker reference to either the remove or add class methods
+                var f=[removeClass,addClass];
+                for (var i=0,L=bodies.length; i<L; i++) {
+                        var tb = bodies[i], tbrows = tb.rows, cRowIndex=0, cRow, displayedCount=0;
+                        if (cRow=tbrows[cRowIndex]){
+                                // The ignorehiddenrows test is pulled out of the loop for a slight speed increase.
+                                // Makes a bigger difference in FF than in IE.
+                                // In this case, speed always wins over brevity!
+                                if (tdata.ignorehiddenrows) {
+                                        do {
+                                                f[displayedCount++%2](cRow,className);
+                                        } while (cRow=tbrows[++cRowIndex])
+                                }
+                                else {
+                                        do {
+                                                if (!isHidden(cRow)) {
+                                                        f[displayedCount++%2](cRow,className);
+                                                }
+                                        } while (cRow=tbrows[++cRowIndex])
+                                }
+                        }
+                }
+        };
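+        // Example usage (hypothetical): table.stripe(myTable, "odd-row") adds the
+        // "odd-row" class to every second visible row of each tbody and removes
+        // it from the others.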
+
+        /**
+         * Build up a list of unique values in a table column
+         */
+        table.getUniqueColValues = function(t,col) {
+                var values={}, bodies = this.resolve(t).tBodies;
+                for (var i=0,L=bodies.length; i<L; i++) {
+                        var tbody = bodies[i];
+                        for (var r=0,L2=tbody.rows.length; r<L2; r++) {
+                                values[this.getCellValue(tbody.rows[r].cells[col])] = true;
+                        }
+                }
+                var valArray = [];
+                for (var val in values) {
+                        valArray.push(val);
+                }
+                return valArray.sort();
+        };
+
+        /**
+         * Scan the document on load and add sorting, filtering, paging etc. automatically,
+         * based on the existence of class names on the table and cells.
+         */
+        table.auto = function(args) {
+                var cells = [], tables = document.getElementsByTagName("TABLE");
+                var val,tdata;
+                if (tables!=null) {
+                        for (var i=0,L=tables.length; i<L; i++) {
+                                var t = table.resolve(tables[i]);
+                                tdata = table.tabledata[t.id];
+                                if (val=classValue(t,table.StripeClassNamePrefix)) {
+                                        tdata.stripeclass=val;
+                                }
+                                // Do auto-filter if necessary
+                                if (hasClass(t,table.AutoFilterClassName)) {
+                                        table.autofilter(t);
+                                }
+                                // Do auto-page if necessary
+                                if (val = classValue(t,table.AutoPageSizePrefix)) {
+                                        table.autopage(t,{'pagesize':+val});
+                                }
+                                // Do auto-sort if necessary
+                                if ((val = classValue(t,table.AutoSortColumnPrefix)) || (hasClass(t,table.AutoSortClassName))) {
+                                        table.autosort(t,{'col':(val==null)?null:+val});
+                                }
+                                // Do auto-stripe if necessary
+                                if (tdata.stripeclass && hasClass(t,table.AutoStripeClassName)) {
+                                        table.stripe(t);
+                                }
+                        }
+                }
+        };
+
+        /**
+         * Add sorting functionality to a table header cell
+         */
+        table.autosort = function(t,args) {
+                t = this.resolve(t,args);
+                var tdata = this.tabledata[t.id];
+                this.processTableCells(t, "THEAD", function(c) {
+                        var type = classValue(c,table.SortableColumnPrefix);
+                        if (type!=null) {
+                                type = type || "default";
+                                c.title =c.title || table.AutoSortTitle;
+                                addClass(c,table.SortableClassName);
+                                c.onclick = Function("","Table.sort(this,{'sorttype':Sort['"+type+"']})");
+                                // If we are going to auto sort on a column, we need to keep track of what kind of sort it will be
+                                if (args.col!=null) {
+                                        if (args.col==table.getActualCellIndex(c)) {
+                                                tdata.sorttype=Sort[type];
+                                        }
+                                }
+                        }
+                } );
+                if (args.col!=null) {
+                        table.sort(t,args);
+                }
+        };
+
+        /**
+         * Add paging functionality to a table
+         */
+        table.autopage = function(t,args) {
+                t = this.resolve(t,args);
+                var tdata = this.tabledata[t.id], val;
+                if (tdata.pagesize) {
+                        this.processTableCells(t, "THEAD,TFOOT", function(c) {
+                                var type = classValue(c,table.AutoPageJumpPrefix);
+                                if (type=="next") { type = 1; }
+                                else if (type=="previous") { type = -1; }
+                                if (type!=null) {
+                                        c.onclick = Function("","Table.pageJump(this,"+type+")");
+                                }
+                        } );
+                        if (val = classValue(t,table.PageNumberPrefix)) {
+                                tdata.container_number = document.getElementById(val);
+                        }
+                        if (val = classValue(t,table.PageCountPrefix)) {
+                                tdata.container_count = document.getElementById(val);
+                        }
+                        return table.page(t,0,args);
+                }
+        };
+
+        /**
+         * A util function to cancel bubbling of clicks on filter dropdowns
+         */
+        table.cancelBubble = function(e) {
+                e = e || window.event;
+                if (typeof(e.stopPropagation)=="function") { e.stopPropagation(); }
+                if (def(e.cancelBubble)) { e.cancelBubble = true; }
+        };
+
+        /**
+         * Auto-filter a table
+         */
+        table.autofilter = function(t,args) {
+                args = args || {};
+                t = this.resolve(t,args);
+                var tdata = this.tabledata[t.id],val;
+                table.processTableCells(t, "THEAD", function(cell) {
+                        if (hasClass(cell,table.FilterableClassName)) {
+                                var cellIndex = table.getCellIndex(cell);
+                                var colValues = table.getUniqueColValues(t,cellIndex);
+                                if (colValues.length>0) {
+                                        if (typeof(args.insert)=="function") {
+                                                args.insert(cell,colValues);
+                                        }
+                                        else {
+                                                var sel = '<select onchange="Table.filter(this,this)" onclick="Table.cancelBubble(event)" class="'+table.AutoFilterClassName+'"><option value="">'+table.FilterAllLabel+'</option>';
+                                                for (var i=0; i<colValues.length; i++) {
+                                                        sel += '<option value="'+colValues[i]+'">'+colValues[i]+'</option>';
+                                                }
+                                                sel += '</select>';
+                                                cell.innerHTML += "<br>"+sel;
+                                        }
+                                }
+                        }
+                });
+                if (val = classValue(t,table.FilteredRowcountPrefix)) {
+                        tdata.container_filtered_count = document.getElementById(val);
+                }
+                if (val = classValue(t,table.RowcountPrefix)) {
+                        tdata.container_all_count = document.getElementById(val);
+                }
+        };
+
+        /**
+         * Attach the auto event so it happens on load.
+         * use jQuery's ready() function if available
+         */
+        if (typeof(jQuery)!="undefined") {
+                jQuery(table.auto);
+        }
+        else if (window.addEventListener) {
+                window.addEventListener( "load", table.auto, false );
+        }
+        else if (window.attachEvent) {
+                window.attachEvent( "onload", table.auto );
+        }
+
+        return table;
+})();
+"""
+
+
+maketree_js = """/**
+ * Copyright (c)2005-2007 Matt Kruse (javascripttoolbox.com)
+ *
+ * Dual licensed under the MIT and GPL licenses.
+ * This basically means you can use this code however you want for
+ * free, but don't claim to have written it yourself!
+ * Donations always accepted: http://www.JavascriptToolbox.com/donate/
+ *
+ * Please do not link to the .js files on javascripttoolbox.com from
+ * your site. Copy the files locally to your server instead.
+ *
+ */
+/*
+This code is inspired by and extended from Stuart Langridge's aqlist code:
+    http://www.kryogenix.org/code/browser/aqlists/
+    Stuart Langridge, November 2002
+    sil@kryogenix.org
+    Inspired by Aaron's labels.js (http://youngpup.net/demos/labels/)
+    and Dave Lindquist's menuDropDown.js (http://www.gazingus.org/dhtml/?id=109)
+*/
+
+// Automatically attach a listener to the window onload, to convert the trees
+addEvent(window,"load",convertTrees);
+
+// Utility function to add an event listener
+function addEvent(o,e,f){
+  if (o.addEventListener){ o.addEventListener(e,f,false); return true; }
+  else if (o.attachEvent){ return o.attachEvent("on"+e,f); }
+  else { return false; }
+}
+
+// utility function to set a global variable if it is not already set
+function setDefault(name,val) {
+  if (typeof(window[name])=="undefined" || window[name]==null) {
+    window[name]=val;
+  }
+}
+
+// Fully expands a tree with a given ID
+function expandTree(treeId) {
+  var ul = document.getElementById(treeId);
+  if (ul == null) { return false; }
+  expandCollapseList(ul,nodeOpenClass);
+}
+
+// Fully collapses a tree with a given ID
+function collapseTree(treeId) {
+  var ul = document.getElementById(treeId);
+  if (ul == null) { return false; }
+  expandCollapseList(ul,nodeClosedClass);
+}
+
+// Expands enough nodes to expose an LI with a given ID
+function expandToItem(treeId,itemId) {
+  var ul = document.getElementById(treeId);
+  if (ul == null) { return false; }
+  var ret = expandCollapseList(ul,nodeOpenClass,itemId);
+  if (ret) {
+    var o = document.getElementById(itemId);
+    if (o.scrollIntoView) {
+      o.scrollIntoView(false);
+    }
+  }
+}
+
+// Performs 3 functions:
+// a) Expand all nodes
+// b) Collapse all nodes
+// c) Expand all nodes to reach a certain ID
+function expandCollapseList(ul,cName,itemId) {
+  if (!ul.childNodes || ul.childNodes.length==0) { return false; }
+  // Iterate LIs
+  for (var itemi=0;itemi<ul.childNodes.length;itemi++) {
+    var item = ul.childNodes[itemi];
+    if (itemId!=null && item.id==itemId) { return true; }
+    if (item.nodeName == "LI") {
+      // Iterate things in this LI
+      var subLists = false;
+      for (var sitemi=0;sitemi<item.childNodes.length;sitemi++) {
+        var sitem = item.childNodes[sitemi];
+        if (sitem.nodeName=="UL") {
+          subLists = true;
+          var ret = expandCollapseList(sitem,cName,itemId);
+          if (itemId!=null && ret) {
+            item.className=cName;
+            return true;
+          }
+        }
+      }
+      if (subLists && itemId==null) {
+        item.className = cName;
+      }
+    }
+  }
+}
+
+// Search the document for UL elements with the correct CLASS name, then process them
+function convertTrees() {
+  setDefault("treeClass","mktree");
+  setDefault("nodeClosedClass","liClosed");
+  setDefault("nodeOpenClass","liOpen");
+  setDefault("nodeBulletClass","liBullet");
+  setDefault("nodeLinkClass","bullet");
+  setDefault("preProcessTrees",true);
+  if (preProcessTrees) {
+    if (!document.createElement) { return; } // Without createElement, we can't do anything
+    var uls = document.getElementsByTagName("ul");
+    if (uls==null) { return; }
+    var uls_length = uls.length;
+    for (var uli=0;uli<uls_length;uli++) {
+      var ul=uls[uli];
+      if (ul.nodeName=="UL" && ul.className==treeClass) {
+        processList(ul);
+      }
+    }
+  }
+}
+
+function treeNodeOnclick() {
+  this.parentNode.className = (this.parentNode.className==nodeOpenClass) ? nodeClosedClass : nodeOpenClass;
+  return false;
+}
+function retFalse() {
+  return false;
+}
+// Process a UL tag and all its children, to convert to a tree
+function processList(ul) {
+  if (!ul.childNodes || ul.childNodes.length==0) { return; }
+  // Iterate LIs
+  var childNodesLength = ul.childNodes.length;
+  for (var itemi=0;itemi<childNodesLength;itemi++) {
+    var item = ul.childNodes[itemi];
+    if (item.nodeName == "LI") {
+      // Iterate things in this LI
+      var subLists = false;
+      var itemChildNodesLength = item.childNodes.length;
+      for (var sitemi=0;sitemi<itemChildNodesLength;sitemi++) {
+        var sitem = item.childNodes[sitemi];
+        if (sitem.nodeName=="UL") {
+          subLists = true;
+          processList(sitem);
+        }
+      }
+      var s= document.createElement("SPAN");
+      var t= '\u00A0'; // &nbsp;
+      s.className = nodeLinkClass;
+      if (subLists) {
+        // This LI has UL's in it, so it's a +/- node
+        if (item.className==null || item.className=="") {
+          item.className = nodeClosedClass;
+        }
+        // If it's just text, make the text work as the link also
+        if (item.firstChild.nodeName=="#text") {
+          t = t+item.firstChild.nodeValue;
+          item.removeChild(item.firstChild);
+        }
+        s.onclick = treeNodeOnclick;
+      }
+      else {
+        // No sublists, so it's just a bullet node
+        item.className = nodeBulletClass;
+        s.onclick = retFalse;
+      }
+      s.appendChild(document.createTextNode(t));
+      item.insertBefore(s,item.firstChild);
+    }
+  }
+}
+"""
+
+
+#################################################################
+##  This script takes a KVM autotest results directory path as ##
+##  input and creates a single HTML-formatted result page.     ##
+#################################################################
+
+stimelist = []
+
+
+def make_html_file(metadata, results, tag, host, output_file_name, dirname):
+    html_prefix = """
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+<html>
+<head>
+<title>KVM Autotest Results</title>
+<style type="text/css">
+%s
+</style>
+<script type="text/javascript">
+%s
+%s
+function popup(tag,text) {
+var w = window.open('', tag, 'toolbar=no,location=no,directories=no,status=no,menubar=no,scrollbars=yes,resizable=yes, copyhistory=no,width=600,height=300,top=20,left=100');
+w.document.open("text/html", "replace");
+w.document.write(text);
+w.document.close();
+return true;
+}
+</script>
+</head>
+<body>
+""" % (format_css, table_js, maketree_js)
+
+
+    if output_file_name:
+        output = open(output_file_name, "w")
+    else:   # if no output file is defined, print the HTML to stdout
+        output = sys.stdout
+    # create html page
+    print >> output, html_prefix
+    print >> output, '<h2 id=\"page_title\">KVM Autotest Execution Report</h2>'
+
+    # format the date and time for printing
+    t = datetime.datetime.now()
+
+    epoch_sec = time.mktime(t.timetuple())
+    now = datetime.datetime.fromtimestamp(epoch_sec)
+
+    # basic statistics
+    total_executed = 0
+    total_failed = 0
+    total_passed = 0
+    for res in results:
+        total_executed += 1
+        if res['status'] == 'GOOD':
+            total_passed += 1
+        else:
+            total_failed += 1
+    stat_str = 'No test cases executed'
+    if total_executed > 0:
+        failed_perct = int(float(total_failed)/float(total_executed)*100)
+        stat_str = ('From %d tests executed, %d have passed (%d%% failures)' %
+                    (total_executed, total_passed, failed_perct))
+
+    kvm_ver_str = metadata['kvmver']
+
+    print >> output, '<table class="stats2">'
+    print >> output, '<tr><td>HOST</td><td>:</td><td>%s</td></tr>' % host
+    print >> output, '<tr><td>RESULTS DIR</td><td>:</td><td>%s</td></tr>'  % tag
+    print >> output, '<tr><td>DATE</td><td>:</td><td>%s</td></tr>' % now.ctime()
+    print >> output, '<tr><td>STATS</td><td>:</td><td>%s</td></tr>'% stat_str
+    print >> output, '<tr><td></td><td></td><td></td></tr>'
+    print >> output, '<tr><td>KVM VERSION</td><td>:</td><td>%s</td></tr>' % kvm_ver_str
+    print >> output, '</table>'
+
+
+    ## print test results
+    print >> output, '<br>'
+    print >> output, '<h2 id=\"page_sub_title\">Test Results</h2>'
+    print >> output, '<h2 id=\"comment\">click on table headers to sort in ascending/descending order</h2>'
+    result_table_prefix = """<table
+id="t1" class="stats table-autosort:4 table-autofilter table-stripeclass:alternate table-page-number:t1page table-page-count:t1pages table-filtered-rowcount:t1filtercount table-rowcount:t1allcount">
+<thead class="th table-sorted-asc table-sorted-desc">
+<tr>
+<th align="left" class="table-sortable:alphanumeric">Date/Time</th>
+<th align="left" class="filterable table-sortable:alphanumeric">Test Case<br><input name="tc_filter" size="10" onkeyup="Table.filter(this,this)" onclick="Table.cancelBubble(event)"></th>
+<th align="left" class="table-filterable table-sortable:alphanumeric">Status</th>
+<th align="left">Time (sec)</th>
+<th align="left">Info</th>
+<th align="left">Debug</th>
+</tr></thead>
+<tbody>
+"""
+    print >> output, result_table_prefix
+    for res in results:
+        print >> output, '<tr>'
+        print >> output, '<td align="left">%s</td>' % res['time']
+        print >> output, '<td align="left">%s</td>' % res['testcase']
+        if res['status'] == 'GOOD':
+            print >> output, '<td align=\"left\"><b><font color="#00CC00">PASS</font></b></td>'
+        elif res['status'] == 'FAIL':
+            print >> output, '<td align=\"left\"><b><font color="red">FAIL</font></b></td>'
+        elif res['status'] == 'ERROR':
+            print >> output, '<td align=\"left\"><b><font color="red">ERROR!</font></b></td>'
+        else:
+            print >> output, '<td align=\"left\">%s</td>' % res['status']
+        # print exec time (seconds)
+        print >> output, '<td align="left">%s</td>' % res['exec_time_sec']
+        # print the log only if the test failed
+        if res['log']:
+            # collapse whitespace runs in the log text (to prevent html errors)
+            rx1 = re.compile('(\s+)')
+            log_text = rx1.sub(' ', res['log'])
+
+            # allow only a-zA-Z0-9_ in html title name
+            # (due to bug in MS-explorer)
+            rx2 = re.compile('([^a-zA-Z_0-9])')
+            updated_tag = rx2.sub('_', res['title'])
+
+            html_body_text = '<html><head><title>%s</title></head><body>%s</body></html>' % (str(updated_tag), log_text)
+            print >> output, '<td align=\"left\"><A HREF=\"#\" onClick=\"popup(\'%s\',\'%s\')\">Info</A></td>' % (str(updated_tag), str(html_body_text))
+        else:
+            print >> output, '<td align=\"left\"></td>'
+        # print a link to the test's debug directory
+        print >> output, '<td align="left"><A HREF=\"%s\">Debug</A></td>' % os.path.join(dirname, res['title'], "debug")
+
+        print >> output, '</tr>'
+    print >> output, "</tbody></table>"
+
+
+    print >> output, '<h2 id=\"page_sub_title\">Host Info</h2>'
+    print >> output, '<h2 id=\"comment\">click on each item to expand/collapse</h2>'
+    ## Meta list comes here..
+    print >> output, '<p>'
+    print >> output, '<A href="#" class="button" onClick="expandTree(\'meta_tree\');return false;">Expand All</A>'
+    print >> output, '&nbsp;&nbsp;&nbsp;'
+    print >> output, '<A class="button" href="#" onClick="collapseTree(\'meta_tree\'); return false;">Collapse All</A>'
+    print >> output, '</p>'
+
+    print >> output, '<ul class="mktree" id="meta_tree">'
+    counter = 0
+    keys = metadata.keys()
+    keys.sort()
+    for key in keys:
+        val = metadata[key]
+        print >> output, '<li id=\"meta_headline\">%s' % key
+        print >> output, '<ul><table class="meta_table"><tr><td align="left">%s</td></tr></table></ul></li>' % val
+    print >> output, '</ul>'
+
+    print >> output, "</body></html>"
+    if output_file_name:
+        output.close()
+
+
+def parse_result(dirname, line):
+    parts = line.split()
+    if len(parts) < 4:
+        return None
+    global stimelist
+    if parts[0] == 'START':
+        pair = parts[3].split('=')
+        stime = int(pair[1])
+        stimelist.append(stime)
+
+    elif (parts[0] == 'END'):
+        result = {}
+        exec_time = ''
+        # fetch time stamp
+        if len(parts) > 7:
+            temp = parts[5].split('=')
+            exec_time = temp[1] + ' ' + parts[6] + ' ' + parts[7]
+        # assign default values
+        result['time'] = exec_time
+        result['testcase'] = 'na'
+        result['status'] = 'na'
+        result['log'] = None
+        result['exec_time_sec'] = 'na'
+        tag = parts[3]
+
+        # assign actual values
+        rx = re.compile('^(\w+)\.(.*)$')
+        m1 = rx.findall(parts[3])
+        result['testcase'] = m1[0][1]
+        result['title'] = str(tag)
+        result['status'] = parts[1]
+        if result['status'] != 'GOOD':
+            result['log'] = get_exec_log(dirname, tag)
+        if len(stimelist)>0:
+            pair = parts[4].split('=')
+            etime = int(pair[1])
+            stime = stimelist.pop()
+            total_exec_time_sec = etime - stime
+            result['exec_time_sec'] = total_exec_time_sec
+        return result
+    return None
+
+
+def get_exec_log(resdir, tag):
+    stdout_file = os.path.join(resdir, tag) + '/debug/stdout'
+    stderr_file = os.path.join(resdir, tag) + '/debug/stderr'
+    status_file = os.path.join(resdir, tag) + '/status'
+    dmesg_file = os.path.join(resdir, tag) + '/sysinfo/dmesg'
+    log = ''
+    log += '<br><b>STDERR:</b><br>'
+    log += get_info_file(stderr_file)
+    log += '<br><b>STDOUT:</b><br>'
+    log += get_info_file(stdout_file)
+    log += '<br><b>STATUS:</b><br>'
+    log += get_info_file(status_file)
+    log += '<br><b>DMESG:</b><br>'
+    log += get_info_file(dmesg_file)
+    return log
+
+
+def get_info_file(filename):
+    data = ''
+    errors = re.compile(r"\b(error|fail|failed)\b", re.IGNORECASE)
+    if os.path.isfile(filename):
+        f = open('%s' % filename, "r")
+        lines = f.readlines()
+        f.close()
+        rx = re.compile('(\'|\")')
+        for line in lines:
+            new_line = rx.sub('', line)
+            errors_found = errors.findall(new_line)
+            if len(errors_found) > 0:
+                data += '<font color=red>%s</font><br>' % str(new_line)
+            else:
+                data += '%s<br>' % str(new_line)
+        if not data:
+            data = 'No Information Found.<br>'
+    else:
+        data = 'File not found.<br>'
+    return data
+
+
+
+def usage():
+    print 'usage:',
+    print 'make_html_report.py -r <result_directory> [-f output_file] [-R]'
+    print '(e.g. make_html_report.py -r '\
+          '/usr/local/autotest/client/results/default -f /tmp/myreport.html)'
+    print 'add "-R" for an html report with relative-paths (relative '\
+          'to results directory)'
+    print ''
+    sys.exit(1)
+
+
+def get_keyval_value(result_dir, key):
+    """
+    Return the value of the first appearance of key in any keyval file in
+    result_dir. If no appropriate line is found, return 'Unknown'.
+    """
+    keyval_pattern = os.path.join(result_dir, "kvm.*", "keyval")
+    keyval_lines = commands.getoutput(r"grep -h '\b%s\b.*=' %s"
+                                      % (key, keyval_pattern))
+    if not keyval_lines:
+        return "Unknown"
+    keyval_line = keyval_lines.splitlines()[0]
+    if key in keyval_line and "=" in keyval_line:
+        return keyval_line.split("=")[1].strip()
+    else:
+        return "Unknown"
+
+
+def get_kvm_version(result_dir):
+    """
+    Return an HTML string describing the KVM version.
+
+        @param result_dir: An Autotest job result dir
+    """
+    kvm_version = get_keyval_value(result_dir, "kvm_version")
+    kvm_userspace_version = get_keyval_value(result_dir,
+                                             "kvm_userspace_version")
+    return "Kernel: %s<br>Userspace: %s" % (kvm_version, kvm_userspace_version)
+
+
+def main(argv):
+    dirname = None
+    output_file_name = None
+    relative_path = False
+    try:
+        opts, args = getopt.getopt(argv, "r:f:hR", ['help'])
+    except getopt.GetoptError:
+        usage()
+        sys.exit(2)
+    for opt, arg in opts:
+        if opt in ("-h", "--help"):
+            usage()
+            sys.exit()
+        elif opt == '-r':
+            dirname = arg
+        elif opt == '-f':
+            output_file_name = arg
+        elif opt == '-R':
+            relative_path = True
+        else:
+            usage()
+            sys.exit(1)
+
+    html_path = dirname
+    # don't use absolute path in html output if relative flag passed
+    if relative_path:
+        html_path = ''
+
+    if dirname:
+        if os.path.isdir(dirname): # TBD: replace it with a validation of
+                                   # autotest result dir
+            res_dir = os.path.abspath(dirname)
+            tag = res_dir
+            status_file_name = dirname + '/status'
+            sysinfo_dir = dirname + '/sysinfo'
+            host = get_info_file('%s/hostname' % sysinfo_dir)
+            rx = re.compile('^\s+(END|START).*$')
+            # create the results set dict
+            results_data = []
+            if os.path.exists(status_file_name):
+                f = open(status_file_name, "r")
+                lines = f.readlines()
+                f.close()
+                for line in lines:
+                    if rx.match(line):
+                        result_dict = parse_result(dirname, line)
+                        if result_dict:
+                            results_data.append(result_dict)
+            # create the meta info dict
+            metalist = {
+                        'uname': get_info_file('%s/uname' % sysinfo_dir),
+                        'cpuinfo':get_info_file('%s/cpuinfo' % sysinfo_dir),
+                        'meminfo':get_info_file('%s/meminfo' % sysinfo_dir),
+                        'df':get_info_file('%s/df' % sysinfo_dir),
+                        'modules':get_info_file('%s/modules' % sysinfo_dir),
+                        'gcc':get_info_file('%s/gcc_--version' % sysinfo_dir),
+                        'dmidecode':get_info_file('%s/dmidecode' % sysinfo_dir),
+                        'dmesg':get_info_file('%s/dmesg' % sysinfo_dir),
+                        'kvmver':get_kvm_version(dirname)
+            }
+
+            make_html_file(metalist, results_data, tag, host, output_file_name,
+                           html_path)
+            sys.exit(0)
+        else:
+            print 'Invalid result directory <%s>' % dirname
+            sys.exit(1)
+    else:
+        usage()
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/client/tools/scan_results.py b/client/tools/scan_results.py
new file mode 100755
index 0000000..562a05b
--- /dev/null
+++ b/client/tools/scan_results.py
@@ -0,0 +1,97 @@
+#!/usr/bin/python
+"""
+Program that parses the autotest results and returns a nicely printed final
+test result.
+
+@copyright: Red Hat 2008-2009
+"""
+
+def parse_results(text):
+    """
+    Parse text containing Autotest results.
+
+    @return: A list of result 4-tuples.
+    """
+    result_list = []
+    start_time_list = []
+    info_list = []
+
+    lines = text.splitlines()
+    for line in lines:
+        line = line.strip()
+        parts = line.split("\t")
+
+        # Found a START line -- get start time
+        if (line.startswith("START") and len(parts) >= 5 and
+            parts[3].startswith("timestamp")):
+            start_time = float(parts[3].split("=")[1])
+            start_time_list.append(start_time)
+            info_list.append("")
+
+        # Found an END line -- get end time, name and status
+        elif (line.startswith("END") and len(parts) >= 5 and
+              parts[3].startswith("timestamp")):
+            end_time = float(parts[3].split("=")[1])
+            start_time = start_time_list.pop()
+            info = info_list.pop()
+            test_name = parts[2]
+            test_status = parts[0].split()[1]
+            # Remove "kvm." prefix
+            if test_name.startswith("kvm."):
+                test_name = test_name[4:]
+            result_list.append((test_name, test_status,
+                                int(end_time - start_time), info))
+
+        # Found a FAIL/ERROR/GOOD line -- get failure/success info
+        elif (len(parts) >= 6 and parts[3].startswith("timestamp") and
+              parts[4].startswith("localtime")):
+            info_list[-1] = parts[5]
+
+    return result_list
+
+
+def print_result(result, name_width):
+    """
+    Nicely print a single Autotest result.
+
+    @param result: a 4-tuple
+    @param name_width: test name maximum width
+    """
+    if result:
+        format = "%%-%ds    %%-10s %%-8s %%s" % name_width
+        print format % result
+
+
+def main(resfiles):
+    result_lists = []
+    name_width = 40
+
+    for resfile in resfiles:
+        try:
+            text = open(resfile).read()
+        except IOError:
+            print "Bad result file: %s" % resfile
+            continue
+        results = parse_results(text)
+        result_lists.append((resfile, results))
+        name_width = max([name_width] + [len(r[0]) for r in results])
+
+    print_result(("Test", "Status", "Seconds", "Info"), name_width)
+    print_result(("----", "------", "-------", "----"), name_width)
+
+    for resfile, results in result_lists:
+        print "        (Result file: %s)" % resfile
+        for r in results:
+            print_result(r, name_width)
+
+
+if __name__ == "__main__":
+    import sys, glob
+
+    resfiles = glob.glob("../results/default/status*")
+    if len(sys.argv) > 1:
+        if sys.argv[1] == "-h" or sys.argv[1] == "--help":
+            print "Usage: %s [result files]" % sys.argv[0]
+            sys.exit(0)
+        resfiles = sys.argv[1:]
+    main(resfiles)
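[Editor's sketch, not part of the patch.] The tab-separated status-line format that parse_results() consumes is easy to miss when reading the diff. The snippet below restates the same parsing rules as a condensed standalone function and runs them against a fabricated sample log (the timestamps, test name, and info text are made up for illustration):

```python
# Condensed restatement of the parse_results() rules from the patch
# above, exercised against a fabricated Autotest status log.

def parse_status_text(text):
    results, starts, infos = [], [], []
    for raw in text.splitlines():
        line = raw.strip()
        parts = line.split("\t")
        # START line: remember the start timestamp
        if (line.startswith("START") and len(parts) >= 5 and
                parts[3].startswith("timestamp")):
            starts.append(float(parts[3].split("=")[1]))
            infos.append("")
        # END line: compute duration, emit a 4-tuple
        elif (line.startswith("END") and len(parts) >= 5 and
              parts[3].startswith("timestamp")):
            end_time = float(parts[3].split("=")[1])
            name = parts[2]
            if name.startswith("kvm."):
                name = name[4:]  # strip the "kvm." prefix
            results.append((name, parts[0].split()[1],
                            int(end_time - starts.pop()), infos.pop()))
        # Status line: capture the failure/success info text
        elif (len(parts) >= 6 and parts[3].startswith("timestamp") and
              parts[4].startswith("localtime")):
            infos[-1] = parts[5]
    return results

# Fabricated sample (tab-separated, as the Autotest harness writes it)
sample = (
    "START\t----\tkvm.boot\ttimestamp=1299662460\tlocaltime=Mar 09 09:21:00\n"
    "\tGOOD\tkvm.boot\tkvm.boot\ttimestamp=1299662520\t"
    "localtime=Mar 09 09:22:00\tcompleted successfully\n"
    "END GOOD\t----\tkvm.boot\ttimestamp=1299662520\t"
    "localtime=Mar 09 09:22:00\n"
)
# parse_status_text(sample) -> [("boot", "GOOD", 60, "completed successfully")]
```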
-- 
1.7.4

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/7] KVM test: Create autotest_lib.client.virt namespace
  2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 1/7] KVM test: Move test utilities to client/tools Lucas Meneghel Rodrigues
@ 2011-03-09  9:21 ` Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 3/7] KVM test: tests_base.cfg: Introduce parameter 'vm_type' Lucas Meneghel Rodrigues
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09  9:21 UTC (permalink / raw)
  To: autotest; +Cc: kvm

This patch moves all libraries from the KVM test to
autotest_lib.client.virt, and makes the needed adaptations
and abstracts the KVM implementation from the generic
infrastructure.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
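A note on the intended shape of the change: the cover letter describes a dispatcher that instantiates the appropriate VM class based on the new 'vm_type' parameter. A minimal sketch of that idea follows; the class names and registry here are illustrative stand-ins, not code from this series (the real KVM class lives in kvm_vm.py, with the generic base in virt_vm.py):

```python
# Minimal sketch of dispatching on the 'vm_type' parameter.
# KVMVM, XenVM and _VM_CLASSES are hypothetical stand-ins.

class KVMVM(object):
    vm_type = "kvm"

class XenVM(object):
    vm_type = "xen"

_VM_CLASSES = {"kvm": KVMVM, "xen": XenVM}

def create_vm(params):
    """Instantiate the VM class selected by params['vm_type']."""
    vm_type = params.get("vm_type", "kvm")
    if vm_type not in _VM_CLASSES:
        raise ValueError("Unsupported vm_type: %s" % vm_type)
    return _VM_CLASSES[vm_type]()
```

With this layout, adding a libvirt backend later is a one-line registry entry rather than a change at every call site.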
 client/common_lib/cartesian_config.py |  698 +++++++++++++
 client/virt/aexpect.py                | 1352 +++++++++++++++++++++++++
 client/virt/kvm_installer.py          |  797 +++++++++++++++
 client/virt/kvm_monitor.py            |  745 ++++++++++++++
 client/virt/kvm_vm.py                 | 1500 ++++++++++++++++++++++++++++
 client/virt/ppm_utils.py              |  237 +++++
 client/virt/rss_client.py             |  519 ++++++++++
 client/virt/virt_env_process.py       |  438 ++++++++
 client/virt/virt_scheduler.py         |  229 +++++
 client/virt/virt_step_editor.py       | 1401 ++++++++++++++++++++++++++
 client/virt/virt_test_setup.py        |  700 +++++++++++++
 client/virt/virt_test_utils.py        |  754 ++++++++++++++
 client/virt/virt_utils.py             | 1760 +++++++++++++++++++++++++++++++++
 client/virt/virt_vm.py                |  298 ++++++
 14 files changed, 11428 insertions(+), 0 deletions(-)
 create mode 100755 client/common_lib/cartesian_config.py
 create mode 100644 client/virt/__init__.py
 create mode 100755 client/virt/aexpect.py
 create mode 100644 client/virt/kvm_installer.py
 create mode 100644 client/virt/kvm_monitor.py
 create mode 100755 client/virt/kvm_vm.py
 create mode 100644 client/virt/ppm_utils.py
 create mode 100755 client/virt/rss_client.py
 create mode 100644 client/virt/virt_env_process.py
 create mode 100644 client/virt/virt_scheduler.py
 create mode 100755 client/virt/virt_step_editor.py
 create mode 100644 client/virt/virt_test_setup.py
 create mode 100644 client/virt/virt_test_utils.py
 create mode 100644 client/virt/virt_utils.py
 create mode 100644 client/virt/virt_vm.py

diff --git a/client/common_lib/cartesian_config.py b/client/common_lib/cartesian_config.py
new file mode 100755
index 0000000..daf45d1
--- /dev/null
+++ b/client/common_lib/cartesian_config.py
@@ -0,0 +1,698 @@
+#!/usr/bin/python
+"""
+Cartesian configuration format file parser
+
+@copyright: Red Hat 2008-2011
+"""
+
+import re, os, sys, optparse, collections
+
+
+# Filter syntax:
+# , means OR
+# .. means AND
+# . means IMMEDIATELY-FOLLOWED-BY
+
+# Example:
+# qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
+# means match all dicts whose names have:
+# (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
+# ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
+# (smp2 AND qcow2 AND migrate AND ide)
+
+# Note:
+# 'qcow2..Fedora.14' is equivalent to 'Fedora.14..qcow2'.
+# 'qcow2..Fedora.14' is not equivalent to 'qcow2..14.Fedora'.
+# 'ide, scsi' is equivalent to 'scsi, ide'.
+
+# Filters can be used in 3 ways:
+# only <filter>
+# no <filter>
+# <filter>:
+# The last one starts a conditional block.
+
+
+class ParserError:
+    def __init__(self, msg, line=None, filename=None, linenum=None):
+        self.msg = msg
+        self.line = line
+        self.filename = filename
+        self.linenum = linenum
+
+    def __str__(self):
+        if self.line:
+            return "%s: %r (%s:%s)" % (self.msg, self.line,
+                                       self.filename, self.linenum)
+        else:
+            return "%s (%s:%s)" % (self.msg, self.filename, self.linenum)
+
+
+num_failed_cases = 5
+
+
+class Node(object):
+    def __init__(self):
+        self.name = []
+        self.dep = []
+        self.content = []
+        self.children = []
+        self.labels = set()
+        self.append_to_shortname = False
+        self.failed_cases = collections.deque()
+
+
+def _match_adjacent(block, ctx, ctx_set):
+    # Return the number of leading elements of 'block' that can be matched
+    # as an adjacent (contiguous) run in 'ctx'; a return value equal to
+    # len(block) means the whole block matched adjacently.
+    if block[0] not in ctx_set:
+        return 0
+    if len(block) == 1:
+        return 1
+    if block[1] not in ctx_set:
+        return int(ctx[-1] == block[0])
+    k = 0
+    i = ctx.index(block[0])
+    while i < len(ctx):
+        if k > 0 and ctx[i] != block[k]:
+            i -= k - 1
+            k = 0
+        if ctx[i] == block[k]:
+            k += 1
+            if k >= len(block):
+                break
+            if block[k] not in ctx_set:
+                break
+        i += 1
+    return k
+
+
+def _might_match_adjacent(block, ctx, ctx_set, descendant_labels):
+    matched = _match_adjacent(block, ctx, ctx_set)
+    for elem in block[matched:]:
+        if elem not in descendant_labels:
+            return False
+    return True
+
+
+# Filter must inherit from object (otherwise type() won't work)
+class Filter(object):
+    def __init__(self, s):
+        self.filter = []
+        for char in s:
+            if not (char.isalnum() or char.isspace() or char in ".,_-"):
+                raise ParserError("Illegal characters in filter")
+        for word in s.replace(",", " ").split():
+            word = [block.split(".") for block in word.split("..")]
+            for block in word:
+                for elem in block:
+                    if not elem:
+                        raise ParserError("Syntax error")
+            self.filter += [word]
+
+
+    def match(self, ctx, ctx_set):
+        for word in self.filter:
+            for block in word:
+                if _match_adjacent(block, ctx, ctx_set) != len(block):
+                    break
+            else:
+                return True
+        return False
+
+
+    def might_match(self, ctx, ctx_set, descendant_labels):
+        for word in self.filter:
+            for block in word:
+                if not _might_match_adjacent(block, ctx, ctx_set,
+                                             descendant_labels):
+                    break
+            else:
+                return True
+        return False
+
+
+class NoOnlyFilter(Filter):
+    def __init__(self, line):
+        Filter.__init__(self, line.split(None, 1)[1])
+        self.line = line
+
+
+class OnlyFilter(NoOnlyFilter):
+    def is_irrelevant(self, ctx, ctx_set, descendant_labels):
+        return self.match(ctx, ctx_set)
+
+
+    def requires_action(self, ctx, ctx_set, descendant_labels):
+        return not self.might_match(ctx, ctx_set, descendant_labels)
+
+
+    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
+                   descendant_labels):
+        for word in self.filter:
+            for block in word:
+                if (_match_adjacent(block, ctx, ctx_set) >
+                    _match_adjacent(block, failed_ctx, failed_ctx_set)):
+                    return self.might_match(ctx, ctx_set, descendant_labels)
+        return False
+
+
+class NoFilter(NoOnlyFilter):
+    def is_irrelevant(self, ctx, ctx_set, descendant_labels):
+        return not self.might_match(ctx, ctx_set, descendant_labels)
+
+
+    def requires_action(self, ctx, ctx_set, descendant_labels):
+        return self.match(ctx, ctx_set)
+
+
+    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
+                   descendant_labels):
+        for word in self.filter:
+            for block in word:
+                if (_match_adjacent(block, ctx, ctx_set) <
+                    _match_adjacent(block, failed_ctx, failed_ctx_set)):
+                    return not self.match(ctx, ctx_set)
+        return False
+
+
+class Condition(NoFilter):
+    def __init__(self, line):
+        Filter.__init__(self, line.rstrip(":"))
+        self.line = line
+        self.content = []
+
+
+class NegativeCondition(OnlyFilter):
+    def __init__(self, line):
+        Filter.__init__(self, line.lstrip("!").rstrip(":"))
+        self.line = line
+        self.content = []
+
+
+class Parser(object):
+    """
+    Parse an input file or string that follows the KVM Test Config File format
+    and generate a list of dicts that will be later used as configuration
+    parameters by the KVM tests.
+
+    @see: http://www.linux-kvm.org/page/KVM-Autotest/Test_Config_File
+    """
+
+    def __init__(self, filename=None, debug=False):
+        """
+        Initialize the parser and optionally parse a file.
+
+        @param filename: Path of the file to parse.
+        @param debug: Whether to turn on debugging output.
+        """
+        self.node = Node()
+        self.debug = debug
+        if filename:
+            self.parse_file(filename)
+
+
+    def parse_file(self, filename):
+        """
+        Parse a file.
+
+        @param filename: Path of the configuration file.
+        """
+        self.node = self._parse(FileReader(filename), self.node)
+
+
+    def parse_string(self, s):
+        """
+        Parse a string.
+
+        @param s: String to parse.
+        """
+        self.node = self._parse(StrReader(s), self.node)
+
+
+    def get_dicts(self, node=None, ctx=[], content=[], shortname=[], dep=[]):
+        """
+        Generate dictionaries from the code parsed so far.  This should
+        be called after parsing something.
+
+        @return: A dict generator.
+        """
+        def process_content(content, failed_filters):
+            # 1. Check that the filters in content are OK with the current
+            #    context (ctx).
+            # 2. Move the parts of content that are still relevant into
+            #    new_content and unpack conditional blocks if appropriate.
+            #    For example, if an 'only' statement fully matches ctx, it
+            #    becomes irrelevant and is not appended to new_content.
+            #    If a conditional block fully matches, its contents are
+            #    unpacked into new_content.
+            # 3. Move failed filters into failed_filters, so that next time we
+            #    reach this node or one of its ancestors, we'll check those
+            #    filters first.
+            for t in content:
+                filename, linenum, obj = t
+                if type(obj) is Op:
+                    new_content.append(t)
+                    continue
+                # obj is an OnlyFilter/NoFilter/Condition/NegativeCondition
+                if obj.requires_action(ctx, ctx_set, labels):
+                    # This filter requires action now
+                    if type(obj) is OnlyFilter or type(obj) is NoFilter:
+                        self._debug("    filter did not pass: %r (%s:%s)",
+                                    obj.line, filename, linenum)
+                        failed_filters.append(t)
+                        return False
+                    else:
+                        self._debug("    conditional block matches: %r (%s:%s)",
+                                    obj.line, filename, linenum)
+                        # Check and unpack the content inside this Condition
+                        # object (note: the failed filters should go into
+                        # new_internal_filters because we don't expect them to
+                        # come from outside this node, even if the Condition
+                        # itself was external)
+                        if not process_content(obj.content,
+                                               new_internal_filters):
+                            failed_filters.append(t)
+                            return False
+                        continue
+                elif obj.is_irrelevant(ctx, ctx_set, labels):
+                    # This filter is no longer relevant and can be removed
+                    continue
+                else:
+                    # Keep the filter and check it again later
+                    new_content.append(t)
+            return True
+
+        def might_pass(failed_ctx,
+                       failed_ctx_set,
+                       failed_external_filters,
+                       failed_internal_filters):
+            for t in failed_external_filters:
+                if t not in content:
+                    return True
+                filename, linenum, filter = t
+                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
+                                     labels):
+                    return True
+            for t in failed_internal_filters:
+                filename, linenum, filter = t
+                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
+                                     labels):
+                    return True
+            return False
+
+        def add_failed_case():
+            node.failed_cases.appendleft((ctx, ctx_set,
+                                          new_external_filters,
+                                          new_internal_filters))
+            if len(node.failed_cases) > num_failed_cases:
+                node.failed_cases.pop()
+
+        node = node or self.node
+        # Update dep
+        for d in node.dep:
+            dep = dep + [".".join(ctx + [d])]
+        # Update ctx
+        ctx = ctx + node.name
+        ctx_set = set(ctx)
+        labels = node.labels
+        # Get the current name
+        name = ".".join(ctx)
+        if node.name:
+            self._debug("checking out %r", name)
+        # Check previously failed filters
+        for i, failed_case in enumerate(node.failed_cases):
+            if not might_pass(*failed_case):
+                self._debug("    this subtree has failed before")
+                del node.failed_cases[i]
+                node.failed_cases.appendleft(failed_case)
+                return
+        # Check content and unpack it into new_content
+        new_content = []
+        new_external_filters = []
+        new_internal_filters = []
+        if (not process_content(node.content, new_internal_filters) or
+            not process_content(content, new_external_filters)):
+            add_failed_case()
+            return
+        # Update shortname
+        if node.append_to_shortname:
+            shortname = shortname + node.name
+        # Recurse into children
+        count = 0
+        for n in node.children:
+            for d in self.get_dicts(n, ctx, new_content, shortname, dep):
+                count += 1
+                yield d
+        # Reached leaf?
+        if not node.children:
+            self._debug("    reached leaf, returning it")
+            d = {"name": name, "dep": dep, "shortname": ".".join(shortname)}
+            for filename, linenum, op in new_content:
+                op.apply_to_dict(d)
+            yield d
+        # If this node did not produce any dicts, remember the failed filters
+        # of its descendants
+        elif not count:
+            new_external_filters = []
+            new_internal_filters = []
+            for n in node.children:
+                (failed_ctx,
+                 failed_ctx_set,
+                 failed_external_filters,
+                 failed_internal_filters) = n.failed_cases[0]
+                for obj in failed_internal_filters:
+                    if obj not in new_internal_filters:
+                        new_internal_filters.append(obj)
+                for obj in failed_external_filters:
+                    if obj in content:
+                        if obj not in new_external_filters:
+                            new_external_filters.append(obj)
+                    else:
+                        if obj not in new_internal_filters:
+                            new_internal_filters.append(obj)
+            add_failed_case()
+
+
+    def _debug(self, s, *args):
+        if self.debug:
+            s = "DEBUG: %s" % s
+            print s % args
+
+
+    def _warn(self, s, *args):
+        s = "WARNING: %s" % s
+        print s % args
+
+
+    def _parse_variants(self, cr, node, prev_indent=-1):
+        """
+        Read and parse lines from a FileReader object until a line with an
+        indent level lower than or equal to prev_indent is encountered.
+
+        @param cr: A FileReader/StrReader object.
+        @param node: A node to operate on.
+        @param prev_indent: The indent level of the "parent" block.
+        @return: A node object.
+        """
+        node4 = Node()
+
+        while True:
+            line, indent, linenum = cr.get_next_line(prev_indent)
+            if not line:
+                break
+
+            name, dep = map(str.strip, line.lstrip("- ").split(":", 1))
+            for char in name:
+                if not (char.isalnum() or char in "@._-"):
+                    raise ParserError("Illegal characters in variant name",
+                                      line, cr.filename, linenum)
+            for char in dep:
+                if not (char.isalnum() or char.isspace() or char in ".,_-"):
+                    raise ParserError("Illegal characters in dependencies",
+                                      line, cr.filename, linenum)
+
+            node2 = Node()
+            node2.children = [node]
+            node2.labels = node.labels
+
+            node3 = self._parse(cr, node2, prev_indent=indent)
+            node3.name = name.lstrip("@").split(".")
+            node3.dep = dep.replace(",", " ").split()
+            node3.append_to_shortname = not name.startswith("@")
+
+            node4.children += [node3]
+            node4.labels.update(node3.labels)
+            node4.labels.update(node3.name)
+
+        return node4
+
+
+    def _parse(self, cr, node, prev_indent=-1):
+        """
+        Read and parse lines from a StrReader object until a line with an
+        indent level lower than or equal to prev_indent is encountered.
+
+        @param cr: A FileReader/StrReader object.
+        @param node: A Node or a Condition object to operate on.
+        @param prev_indent: The indent level of the "parent" block.
+        @return: A node object.
+        """
+        while True:
+            line, indent, linenum = cr.get_next_line(prev_indent)
+            if not line:
+                break
+
+            words = line.split(None, 1)
+
+            # Parse 'variants'
+            if line == "variants:":
+                # 'variants' is not allowed inside a conditional block
+                if (isinstance(node, Condition) or
+                    isinstance(node, NegativeCondition)):
+                    raise ParserError("'variants' is not allowed inside a "
+                                      "conditional block",
+                                      None, cr.filename, linenum)
+                node = self._parse_variants(cr, node, prev_indent=indent)
+                continue
+
+            # Parse 'include' statements
+            if words[0] == "include":
+                if len(words) < 2:
+                    raise ParserError("Syntax error: missing parameter",
+                                      line, cr.filename, linenum)
+                filename = os.path.expanduser(words[1])
+                if isinstance(cr, FileReader) and not os.path.isabs(filename):
+                    filename = os.path.join(os.path.dirname(cr.filename),
+                                            filename)
+                if not os.path.isfile(filename):
+                    self._warn("%r (%s:%s): file doesn't exist or is not a "
+                               "regular file", line, cr.filename, linenum)
+                    continue
+                node = self._parse(FileReader(filename), node)
+                continue
+
+            # Parse 'only' and 'no' filters
+            if words[0] in ("only", "no"):
+                if len(words) < 2:
+                    raise ParserError("Syntax error: missing parameter",
+                                      line, cr.filename, linenum)
+                try:
+                    if words[0] == "only":
+                        f = OnlyFilter(line)
+                    elif words[0] == "no":
+                        f = NoFilter(line)
+                except ParserError, e:
+                    e.line = line
+                    e.filename = cr.filename
+                    e.linenum = linenum
+                    raise
+                node.content += [(cr.filename, linenum, f)]
+                continue
+
+            # Look for operators
+            op_match = _ops_exp.search(line)
+
+            # Parse conditional blocks
+            if ":" in line:
+                index = line.index(":")
+                if not op_match or index < op_match.start():
+                    index += 1
+                    cr.set_next_line(line[index:], indent, linenum)
+                    line = line[:index]
+                    try:
+                        if line.startswith("!"):
+                            cond = NegativeCondition(line)
+                        else:
+                            cond = Condition(line)
+                    except ParserError, e:
+                        e.line = line
+                        e.filename = cr.filename
+                        e.linenum = linenum
+                        raise
+                    self._parse(cr, cond, prev_indent=indent)
+                    node.content += [(cr.filename, linenum, cond)]
+                    continue
+
+            # Parse regular operators
+            if not op_match:
+                raise ParserError("Syntax error", line, cr.filename, linenum)
+            node.content += [(cr.filename, linenum, Op(line, op_match))]
+
+        return node
+
+
+# Assignment operators
+
+_reserved_keys = set(("name", "shortname", "dep"))
+
+
+def _op_set(d, key, value):
+    if key not in _reserved_keys:
+        d[key] = value
+
+
+def _op_append(d, key, value):
+    if key not in _reserved_keys:
+        d[key] = d.get(key, "") + value
+
+
+def _op_prepend(d, key, value):
+    if key not in _reserved_keys:
+        d[key] = value + d.get(key, "")
+
+
+def _op_regex_set(d, exp, value):
+    exp = re.compile("%s$" % exp)
+    for key in d:
+        if key not in _reserved_keys and exp.match(key):
+            d[key] = value
+
+
+def _op_regex_append(d, exp, value):
+    exp = re.compile("%s$" % exp)
+    for key in d:
+        if key not in _reserved_keys and exp.match(key):
+            d[key] += value
+
+
+def _op_regex_prepend(d, exp, value):
+    exp = re.compile("%s$" % exp)
+    for key in d:
+        if key not in _reserved_keys and exp.match(key):
+            d[key] = value + d[key]
+
+
+def _op_regex_del(d, empty, exp):
+    exp = re.compile("%s$" % exp)
+    for key in d.keys():
+        if key not in _reserved_keys and exp.match(key):
+            del d[key]
+
+
+_ops = {"=": (r"\=", _op_set),
+        "+=": (r"\+\=", _op_append),
+        "<=": (r"\<\=", _op_prepend),
+        "?=": (r"\?\=", _op_regex_set),
+        "?+=": (r"\?\+\=", _op_regex_append),
+        "?<=": (r"\?\<\=", _op_regex_prepend),
+        "del": (r"^del\b", _op_regex_del)}
+
+_ops_exp = re.compile("|".join([op[0] for op in _ops.values()]))
+
+
+class Op(object):
+    def __init__(self, line, m):
+        self.func = _ops[m.group()][1]
+        self.key = line[:m.start()].strip()
+        value = line[m.end():].strip()
+        if value and (value[0] == value[-1] == '"' or
+                      value[0] == value[-1] == "'"):
+            value = value[1:-1]
+        self.value = value
+
+
+    def apply_to_dict(self, d):
+        self.func(d, self.key, self.value)
+
+
+# StrReader and FileReader
+
+class StrReader(object):
+    """
+    Preprocess an input string for easy reading.
+    """
+    def __init__(self, s):
+        """
+        Initialize the reader.
+
+        @param s: The string to parse.
+        """
+        self.filename = "<string>"
+        self._lines = []
+        self._line_index = 0
+        self._stored_line = None
+        for linenum, line in enumerate(s.splitlines()):
+            line = line.rstrip().expandtabs()
+            stripped_line = line.lstrip()
+            indent = len(line) - len(stripped_line)
+            if (not stripped_line
+                or stripped_line.startswith("#")
+                or stripped_line.startswith("//")):
+                continue
+            self._lines.append((stripped_line, indent, linenum + 1))
+
+
+    def get_next_line(self, prev_indent):
+        """
+        Get the next line in the current block.
+
+        @param prev_indent: The indentation level of the previous block.
+        @return: (line, indent, linenum), where indent is the line's
+            indentation level.  If no line is available, (None, -1, -1) is
+            returned.
+        """
+        if self._stored_line:
+            ret = self._stored_line
+            self._stored_line = None
+            return ret
+        if self._line_index >= len(self._lines):
+            return None, -1, -1
+        line, indent, linenum = self._lines[self._line_index]
+        if indent <= prev_indent:
+            return None, -1, -1
+        self._line_index += 1
+        return line, indent, linenum
+
+
+    def set_next_line(self, line, indent, linenum):
+        """
+        Make the next call to get_next_line() return the given line instead of
+        the real next line.
+        """
+        line = line.strip()
+        if line:
+            self._stored_line = line, indent, linenum
+
+
+class FileReader(StrReader):
+    """
+    Preprocess an input file for easy reading.
+    """
+    def __init__(self, filename):
+        """
+        Initialize the reader.
+
+        @param filename: The name of the input file.
+        """
+        StrReader.__init__(self, open(filename).read())
+        self.filename = filename
+
+
+if __name__ == "__main__":
+    parser = optparse.OptionParser('usage: %prog [options] filename '
+                                   '[extra code] ...\n\nExample:\n\n    '
+                                   '%prog tests.cfg "only my_set" "no qcow2"')
+    parser.add_option("-v", "--verbose", dest="debug", action="store_true",
+                      help="include debug messages in console output")
+    parser.add_option("-f", "--fullname", dest="fullname", action="store_true",
+                      help="show full dict names instead of short names")
+    parser.add_option("-c", "--contents", dest="contents", action="store_true",
+                      help="show dict contents")
+
+    options, args = parser.parse_args()
+    if not args:
+        parser.error("filename required")
+
+    c = Parser(args[0], debug=options.debug)
+    for s in args[1:]:
+        c.parse_string(s)
+
+    for i, d in enumerate(c.get_dicts()):
+        if options.fullname:
+            print "dict %4d:  %s" % (i + 1, d["name"])
+        else:
+            print "dict %4d:  %s" % (i + 1, d["shortname"])
+        if options.contents:
+            keys = d.keys()
+            keys.sort()
+            for key in keys:
+                print "    %s = %s" % (key, d[key])
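
The filter semantics documented at the top of cartesian_config.py
(`,` = OR, `..` = AND, `.` = IMMEDIATELY-FOLLOWED-BY) can be illustrated
with a small standalone matcher; this is a simplified sketch against a
dict name, not the module's own Filter/_match_adjacent code:

```python
def parse_filter(s):
    # 'a..b.c, d' -> [[['a'], ['b', 'c']], [['d']]]
    # i.e. an OR-list of AND-lists of adjacency blocks
    return [[block.split(".") for block in word.split("..")]
            for word in s.replace(",", " ").split()]

def block_matches(block, ctx):
    # IMMEDIATELY-FOLLOWED-BY: the block must appear as a
    # contiguous run of labels inside the dict name
    n = len(block)
    return any(ctx[i:i + n] == block for i in range(len(ctx) - n + 1))

def filter_matches(s, name):
    # The filter passes if any OR-word has all its blocks matching
    ctx = name.split(".")
    return any(all(block_matches(b, ctx) for b in word)
               for word in parse_filter(s))
```

This reproduces the notes above: 'qcow2..Fedora.14' matches
'Fedora.14.qcow2' (AND is order-independent), while 'qcow2..14.Fedora'
does not ('14.Fedora' is not a contiguous run in that name).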
diff --git a/client/virt/__init__.py b/client/virt/__init__.py
new file mode 100644
index 0000000..e69de29
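
The assignment operators defined in cartesian_config.py above
(`=`, `+=`, `<=`, and the regex variants such as `?=`) behave as in this
simplified standalone sketch (reserved keys are never touched):

```python
import re

_RESERVED = {"name", "shortname", "dep"}  # keys the operators skip

def apply_op(d, op, key, value):
    """Simplified sketch of the cartesian_config assignment operators."""
    if op == "=" and key not in _RESERVED:
        d[key] = value                       # plain set
    elif op == "+=" and key not in _RESERVED:
        d[key] = d.get(key, "") + value      # append to existing value
    elif op == "<=" and key not in _RESERVED:
        d[key] = value + d.get(key, "")      # prepend to existing value
    elif op == "?=":
        # here 'key' is a regex: set every existing key that matches it
        exp = re.compile("%s$" % key)
        for k in list(d):
            if k not in _RESERVED and exp.match(k):
                d[k] = value
```

So a config line like `mem += M` appends to the current value, and
`image_.* ?= qcow2` rewrites every matching existing key at once.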
diff --git a/client/virt/aexpect.py b/client/virt/aexpect.py
new file mode 100755
index 0000000..fe3f9ec
--- /dev/null
+++ b/client/virt/aexpect.py
@@ -0,0 +1,1352 @@
+#!/usr/bin/python
+"""
+A class and functions used for running and controlling child processes.
+
+@copyright: 2008-2009 Red Hat Inc.
+"""
+
+import os, sys, pty, select, termios, fcntl
+
+
+# The following helper functions are shared by the server and the client.
+
+def _lock(filename):
+    if not os.path.exists(filename):
+        open(filename, "w").close()
+    fd = os.open(filename, os.O_RDWR)
+    fcntl.lockf(fd, fcntl.LOCK_EX)
+    return fd
+
+
+def _unlock(fd):
+    fcntl.lockf(fd, fcntl.LOCK_UN)
+    os.close(fd)
+
+
+def _locked(filename):
+    try:
+        fd = os.open(filename, os.O_RDWR)
+    except:
+        return False
+    try:
+        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
+    except:
+        os.close(fd)
+        return True
+    fcntl.lockf(fd, fcntl.LOCK_UN)
+    os.close(fd)
+    return False
+
+
+def _wait(filename):
+    fd = _lock(filename)
+    _unlock(fd)
+
+
+def _get_filenames(base_dir, id):
+    return [os.path.join(base_dir, s + id) for s in
+            "shell-pid-", "status-", "output-", "inpipe-",
+            "lock-server-running-", "lock-client-starting-"]
+
+
+def _get_reader_filename(base_dir, id, reader):
+    return os.path.join(base_dir, "outpipe-%s-%s" % (reader, id))
+
+
+# The following is the server part of the module.
+
+if __name__ == "__main__":
+    id = sys.stdin.readline().strip()
+    echo = sys.stdin.readline().strip() == "True"
+    readers = sys.stdin.readline().strip().split(",")
+    command = sys.stdin.readline().strip() + " && echo %s > /dev/null" % id
+
+    # Define filenames to be used for communication
+    base_dir = "/tmp/kvm_spawn"
+    (shell_pid_filename,
+     status_filename,
+     output_filename,
+     inpipe_filename,
+     lock_server_running_filename,
+     lock_client_starting_filename) = _get_filenames(base_dir, id)
+
+    # Populate the reader filenames list
+    reader_filenames = [_get_reader_filename(base_dir, id, reader)
+                        for reader in readers]
+
+    # Set $TERM = dumb
+    os.putenv("TERM", "dumb")
+
+    (shell_pid, shell_fd) = pty.fork()
+    if shell_pid == 0:
+        # Child process: run the command in a subshell
+        os.execv("/bin/sh", ["/bin/sh", "-c", command])
+    else:
+        # Parent process
+        lock_server_running = _lock(lock_server_running_filename)
+
+        # Set terminal echo on/off and disable pre- and post-processing
+        attr = termios.tcgetattr(shell_fd)
+        attr[0] &= ~termios.INLCR
+        attr[0] &= ~termios.ICRNL
+        attr[0] &= ~termios.IGNCR
+        attr[1] &= ~termios.OPOST
+        if echo:
+            attr[3] |= termios.ECHO
+        else:
+            attr[3] &= ~termios.ECHO
+        termios.tcsetattr(shell_fd, termios.TCSANOW, attr)
+
+        # Open output file
+        output_file = open(output_filename, "w")
+        # Open input pipe
+        os.mkfifo(inpipe_filename)
+        inpipe_fd = os.open(inpipe_filename, os.O_RDWR)
+        # Open output pipes (readers)
+        reader_fds = []
+        for filename in reader_filenames:
+            os.mkfifo(filename)
+            reader_fds.append(os.open(filename, os.O_RDWR))
+
+        # Write shell PID to file
+        file = open(shell_pid_filename, "w")
+        file.write(str(shell_pid))
+        file.close()
+
+        # Print something to stdout so the client can start working
+        print "Server %s ready" % id
+        sys.stdout.flush()
+
+        # Initialize buffers
+        buffers = ["" for reader in readers]
+
+        # Read from child and write to files/pipes
+        while True:
+            check_termination = False
+            # Make a list of reader pipes whose buffers are not empty
+            fds = [fd for (i, fd) in enumerate(reader_fds) if buffers[i]]
+            # Wait until there's something to do
+            r, w, x = select.select([shell_fd, inpipe_fd], fds, [], 0.5)
+            # If a reader pipe is ready for writing --
+            for (i, fd) in enumerate(reader_fds):
+                if fd in w:
+                    bytes_written = os.write(fd, buffers[i])
+                    buffers[i] = buffers[i][bytes_written:]
+            # If there's data to read from the child process --
+            if shell_fd in r:
+                try:
+                    data = os.read(shell_fd, 16384)
+                except OSError:
+                    data = ""
+                if not data:
+                    check_termination = True
+                # Remove carriage returns from the data -- they often cause
+                # trouble and are normally not needed
+                data = data.replace("\r", "")
+                output_file.write(data)
+                output_file.flush()
+                for i in range(len(readers)):
+                    buffers[i] += data
+            # If os.read() raised an exception or there was nothing to read --
+            if check_termination or shell_fd not in r:
+                pid, status = os.waitpid(shell_pid, os.WNOHANG)
+                if pid:
+                    status = os.WEXITSTATUS(status)
+                    break
+            # If there's data to read from the client --
+            if inpipe_fd in r:
+                data = os.read(inpipe_fd, 1024)
+                os.write(shell_fd, data)
+
+        # Write the exit status to a file
+        file = open(status_filename, "w")
+        file.write(str(status))
+        file.close()
+
+        # Wait for the client to finish initializing
+        _wait(lock_client_starting_filename)
+
+        # Delete FIFOs
+        for filename in reader_filenames + [inpipe_filename]:
+            try:
+                os.unlink(filename)
+            except OSError:
+                pass
+
+        # Close all files and pipes
+        output_file.close()
+        os.close(inpipe_fd)
+        for fd in reader_fds:
+            os.close(fd)
+
+        _unlock(lock_server_running)
+        exit(0)
+
+
+# The following is the client part of the module.
+
+import subprocess, time, signal, re, threading, logging
+import common
+import virt_utils
+
+
+class ExpectError(Exception):
+    def __init__(self, patterns, output):
+        Exception.__init__(self, patterns, output)
+        self.patterns = patterns
+        self.output = output
+
+    def _pattern_str(self):
+        if len(self.patterns) == 1:
+            return "pattern %r" % self.patterns[0]
+        else:
+            return "patterns %r" % self.patterns
+
+    def __str__(self):
+        return ("Unknown error occurred while looking for %s    (output: %r)" %
+                (self._pattern_str(), self.output))
+
+
+class ExpectTimeoutError(ExpectError):
+    def __str__(self):
+        return ("Timeout expired while looking for %s    (output: %r)" %
+                (self._pattern_str(), self.output))
+
+
+class ExpectProcessTerminatedError(ExpectError):
+    def __init__(self, patterns, status, output):
+        ExpectError.__init__(self, patterns, output)
+        self.status = status
+
+    def __str__(self):
+        return ("Process terminated while looking for %s    "
+                "(status: %s,    output: %r)" % (self._pattern_str(),
+                                                 self.status, self.output))
+
+
+class ShellError(Exception):
+    def __init__(self, cmd, output):
+        Exception.__init__(self, cmd, output)
+        self.cmd = cmd
+        self.output = output
+
+    def __str__(self):
+        return ("Could not execute shell command %r    (output: %r)" %
+                (self.cmd, self.output))
+
+
+class ShellTimeoutError(ShellError):
+    def __str__(self):
+        return ("Timeout expired while waiting for shell command to "
+                "complete: %r    (output: %r)" % (self.cmd, self.output))
+
+
+class ShellProcessTerminatedError(ShellError):
+    # Raised when the shell process itself (e.g. ssh, netcat, telnet)
+    # terminates unexpectedly
+    def __init__(self, cmd, status, output):
+        ShellError.__init__(self, cmd, output)
+        self.status = status
+
+    def __str__(self):
+        return ("Shell process terminated while waiting for command to "
+                "complete: %r    (status: %s,    output: %r)" %
+                (self.cmd, self.status, self.output))
+
+
+class ShellCmdError(ShellError):
+    # Raised when a command executed in a shell terminates with a nonzero
+    # exit code (status)
+    def __init__(self, cmd, status, output):
+        ShellError.__init__(self, cmd, output)
+        self.status = status
+
+    def __str__(self):
+        return ("Shell command failed: %r    (status: %s,    output: %r)" %
+                (self.cmd, self.status, self.output))
+
+
+class ShellStatusError(ShellError):
+    # Raised when the command's exit status cannot be obtained
+    def __str__(self):
+        return ("Could not get exit status of command: %r    (output: %r)" %
+                (self.cmd, self.output))
+
+
+def run_bg(command, termination_func=None, output_func=None, output_prefix="",
+           timeout=1.0):
+    """
+    Run command as a subprocess.  Call output_func with each line of output
+    from the subprocess (prefixed by output_prefix).  Call termination_func
+    when the subprocess terminates.  Return when timeout expires or when the
+    subprocess exits -- whichever occurs first.
+
+    @brief: Run a subprocess in the background and collect its output and
+            exit status.
+
+    @param command: The shell command to execute
+    @param termination_func: A function to call when the process terminates
+            (should take an integer exit status parameter)
+    @param output_func: A function to call with each line of output from
+            the subprocess (should take a string parameter)
+    @param output_prefix: A string to pre-pend to each line of the output,
+            before passing it to output_func
+    @param timeout: Time duration (in seconds) to wait for the subprocess to
+            terminate before returning
+
+    @return: A Tail object.
+    """
+    process = Tail(command=command,
+                   termination_func=termination_func,
+                   output_func=output_func,
+                   output_prefix=output_prefix)
+
+    end_time = time.time() + timeout
+    while time.time() < end_time and process.is_alive():
+        time.sleep(0.1)
+
+    return process
+
+
+def run_fg(command, output_func=None, output_prefix="", timeout=1.0):
+    """
+    Run command as a subprocess.  Call output_func with each line of output
+    from the subprocess (prefixed by output_prefix).  Return when timeout expires or
+    when the subprocess exits -- whichever occurs first.  If timeout expires
+    and the subprocess is still running, kill it before returning.
+
+    @brief: Run a subprocess in the foreground and collect its output and
+            exit status.
+
+    @param command: The shell command to execute
+    @param output_func: A function to call with each line of output from
+            the subprocess (should take a string parameter)
+    @param output_prefix: A string to pre-pend to each line of the output,
+            before passing it to output_func
+    @param timeout: Time duration (in seconds) to wait for the subprocess to
+            terminate before killing it and returning
+
+    @return: A 2-tuple containing the exit status of the process and its
+            STDOUT/STDERR output.  If timeout expires before the process
+            terminates, the returned status is None.
+    """
+    process = run_bg(command, None, output_func, output_prefix, timeout)
+    output = process.get_output()
+    if process.is_alive():
+        status = None
+    else:
+        status = process.get_status()
+    process.close()
+    return (status, output)
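run_fg() is essentially a bounded wait: start the process, poll until the timeout, kill it if it is still alive, and report status None in that case. A hedged standalone equivalent built directly on subprocess (run_fg_sketch is illustrative, not part of the module):

```python
import subprocess
import time

def run_fg_sketch(command, timeout=1.0):
    """Run command, returning (status, output); status is None when the
    timeout expired and the process had to be killed."""
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    deadline = time.time() + timeout
    while time.time() < deadline and proc.poll() is None:
        time.sleep(0.05)
    if proc.poll() is None:
        proc.kill()        # timeout expired: kill, status stays None
        proc.wait()
        status = None
    else:
        status = proc.returncode
    output = proc.stdout.read().decode()
    return status, output

status, output = run_fg_sketch("echo hello")
print(status, output)  # -> 0 hello
```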
+
+
+class Spawn:
+    """
+    This class is used for spawning and controlling a child process.
+
+    A new instance of this class can either run a new server (a small Python
+    program that reads output from the child process and reports it to the
+    client and to a text file) or attach to an already running server.
+    When a server is started it runs the child process.
+    The server writes output from the child's STDOUT and STDERR to a text file.
+    The text file can be accessed at any time using get_output().
+    In addition, the server opens as many pipes as requested by the client and
+    writes the output to them.
+    The pipes are requested and accessed by classes derived from Spawn.
+    These pipes are referred to as "readers".
+    The server also receives input from the client and sends it to the child
+    process.
+    An instance of this class can be pickled.  Every derived class is
+    responsible for restoring its own state by properly defining
+    __getinitargs__().
+
+    The first named pipe is used by _tail(), a function that runs in the
+    background and reports new output from the child as it is produced.
+    The second named pipe is used by a set of functions that read and parse
+    output as requested by the user in an interactive manner, similar to
+    pexpect.
+    When unpickled it automatically resumes _tail() if needed.
+    """
+
+    def __init__(self, command=None, id=None, auto_close=False, echo=False,
+                 linesep="\n"):
+        """
+        Initialize the class and run command as a child process.
+
+        @param command: Command to run, or None if accessing an already running
+                server.
+        @param id: ID of an already running server, if accessing a running
+                server, or None if starting a new one.
+        @param auto_close: If True, close() the instance automatically when its
+                reference count drops to zero (default False).
+        @param echo: Boolean indicating whether echo should be initially
+                enabled for the pseudo terminal running the subprocess.  This
+                parameter has an effect only when starting a new server.
+        @param linesep: Line separator to be appended to strings sent to the
+                child process by sendline().
+        """
+        self.id = id or virt_utils.generate_random_string(8)
+
+        # Define filenames for communication with server
+        base_dir = "/tmp/kvm_spawn"
+        try:
+            os.makedirs(base_dir)
+        except OSError:
+            pass
+        (self.shell_pid_filename,
+         self.status_filename,
+         self.output_filename,
+         self.inpipe_filename,
+         self.lock_server_running_filename,
+         self.lock_client_starting_filename) = _get_filenames(base_dir,
+                                                              self.id)
+
+        # Remember some attributes
+        self.auto_close = auto_close
+        self.echo = echo
+        self.linesep = linesep
+
+        # Make sure the 'readers' and 'close_hooks' attributes exist
+        if not hasattr(self, "readers"):
+            self.readers = []
+        if not hasattr(self, "close_hooks"):
+            self.close_hooks = []
+
+        # Define the reader filenames
+        self.reader_filenames = dict(
+            (reader, _get_reader_filename(base_dir, self.id, reader))
+            for reader in self.readers)
+
+        # Let the server know a client intends to open some pipes;
+        # if the executed command terminates quickly, the server will wait for
+        # the client to release the lock before exiting
+        lock_client_starting = _lock(self.lock_client_starting_filename)
+
+        # Start the server (which runs the command)
+        if command:
+            sub = subprocess.Popen("%s %s" % (sys.executable, __file__),
+                                   shell=True,
+                                   stdin=subprocess.PIPE,
+                                   stdout=subprocess.PIPE,
+                                   stderr=subprocess.STDOUT)
+            # Send parameters to the server
+            sub.stdin.write("%s\n" % self.id)
+            sub.stdin.write("%s\n" % echo)
+            sub.stdin.write("%s\n" % ",".join(self.readers))
+            sub.stdin.write("%s\n" % command)
+            # Wait for the server to complete its initialization
+            while "Server %s ready" % self.id not in sub.stdout.readline():
+                pass
+
+        # Open the reading pipes
+        self.reader_fds = {}
+        try:
+            assert(_locked(self.lock_server_running_filename))
+            for reader, filename in self.reader_filenames.items():
+                self.reader_fds[reader] = os.open(filename, os.O_RDONLY)
+        except:
+            pass
+
+        # Allow the server to continue
+        _unlock(lock_client_starting)
+
+
+    # The following two functions are defined to make sure the state is set
+    # exclusively by the constructor call as specified in __getinitargs__().
+
+    def __getstate__(self):
+        pass
+
+
+    def __setstate__(self, state):
+        pass
+
+
+    def __getinitargs__(self):
+        # Save some information when pickling -- will be passed to the
+        # constructor upon unpickling
+        return (None, self.id, self.auto_close, self.echo, self.linesep)
+
+
+    def __del__(self):
+        if self.auto_close:
+            self.close()
+
+
+    def _add_reader(self, reader):
+        """
+        Add a reader whose file descriptor can be obtained with _get_fd().
+        Should be called before __init__().  Intended for use by derived
+        classes.
+
+        @param reader: The name of the reader.
+        """
+        if not hasattr(self, "readers"):
+            self.readers = []
+        self.readers.append(reader)
+
+
+    def _add_close_hook(self, hook):
+        """
+        Add a close hook function to be called when close() is called.
+        The function will be called after the process terminates but before
+        final cleanup.  Intended for use by derived classes.
+
+        @param hook: The hook function.
+        """
+        if not hasattr(self, "close_hooks"):
+            self.close_hooks = []
+        self.close_hooks.append(hook)
+
+
+    def _get_fd(self, reader):
+        """
+        Return an open file descriptor corresponding to the specified reader
+        pipe.  If no such reader exists, or the pipe could not be opened,
+        return None.  Intended for use by derived classes.
+
+        @param reader: The name of the reader.
+        """
+        return self.reader_fds.get(reader)
+
+
+    def get_id(self):
+        """
+        Return the instance's id attribute, which may be used to access the
+        process in the future.
+        """
+        return self.id
+
+
+    def get_pid(self):
+        """
+        Return the PID of the process.
+
+        Note: this may be the PID of the shell process running the user-given
+        command.
+        """
+        try:
+            file = open(self.shell_pid_filename, "r")
+            pid = int(file.read())
+            file.close()
+            return pid
+        except:
+            return None
+
+
+    def get_status(self):
+        """
+        Wait for the process to exit and return its exit status, or None
+        if the exit status is not available.
+        """
+        _wait(self.lock_server_running_filename)
+        try:
+            file = open(self.status_filename, "r")
+            status = int(file.read())
+            file.close()
+            return status
+        except:
+            return None
+
+
+    def get_output(self):
+        """
+        Return the STDOUT and STDERR output of the process so far.
+        """
+        try:
+            file = open(self.output_filename, "r")
+            output = file.read()
+            file.close()
+            return output
+        except:
+            return ""
+
+
+    def is_alive(self):
+        """
+        Return True if the process is running.
+        """
+        return _locked(self.lock_server_running_filename)
+
+
+    def close(self, sig=signal.SIGKILL):
+        """
+        Kill the child process if it's alive and remove temporary files.
+
+        @param sig: The signal to send the process when attempting to kill it.
+        """
+        # Kill it if it's alive
+        if self.is_alive():
+            virt_utils.kill_process_tree(self.get_pid(), sig)
+        # Wait for the server to exit
+        _wait(self.lock_server_running_filename)
+        # Call all cleanup routines
+        for hook in self.close_hooks:
+            hook(self)
+        # Close reader file descriptors
+        for fd in self.reader_fds.values():
+            try:
+                os.close(fd)
+            except:
+                pass
+        self.reader_fds = {}
+        # Remove all used files
+        for filename in (_get_filenames("/tmp/kvm_spawn", self.id) +
+                         self.reader_filenames.values()):
+            try:
+                os.unlink(filename)
+            except OSError:
+                pass
+
+
+    def set_linesep(self, linesep):
+        """
+        Set the line separator string (usually "\\n").
+
+        @param linesep: Line separator string.
+        """
+        self.linesep = linesep
+
+
+    def send(self, str=""):
+        """
+        Send a string to the child process.
+
+        @param str: String to send to the child process.
+        """
+        try:
+            fd = os.open(self.inpipe_filename, os.O_RDWR)
+            os.write(fd, str)
+            os.close(fd)
+        except:
+            pass
+
+
+    def sendline(self, str=""):
+        """
+        Send a string followed by a line separator to the child process.
+
+        @param str: String to send to the child process.
+        """
+        self.send(str + self.linesep)
+
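send() pushes input toward the server through a named pipe, opening it O_RDWR so that open() succeeds even when the reading end is momentarily absent (O_WRONLY would block until a reader appears). A self-contained sketch of that FIFO handshake (the paths are illustrative):

```python
import os
import tempfile

fifo_path = os.path.join(tempfile.mkdtemp(), "inpipe")
os.mkfifo(fifo_path)

# Writer side: O_RDWR lets open() succeed with no reader attached yet,
# and the data stays buffered in the pipe.
wfd = os.open(fifo_path, os.O_RDWR)
os.write(wfd, b"shutdown\n")

# Reader side (the server's role): pick up the buffered data.
rfd = os.open(fifo_path, os.O_RDONLY | os.O_NONBLOCK)
data = os.read(rfd, 1024)
print(data)  # -> b'shutdown\n'

os.close(wfd)
os.close(rfd)
os.unlink(fifo_path)
```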
+
+_thread_kill_requested = False
+
+def kill_tail_threads():
+    """
+    Kill all Tail threads.
+
+    After calling this function no new threads should be started.
+    """
+    global _thread_kill_requested
+    _thread_kill_requested = True
+    for t in threading.enumerate():
+        if hasattr(t, "name") and t.name.startswith("tail_thread"):
+            t.join(10)
+    _thread_kill_requested = False
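kill_tail_threads() cooperates with _tail() through a module-level flag: raise the flag, then join() every thread whose name carries the tail_thread prefix. A minimal standalone model of that shutdown handshake (stop_requested and worker are names invented for this sketch):

```python
import threading
import time

stop_requested = False  # plays the role of _thread_kill_requested

def worker():
    # Each _tail()-style loop polls the flag and returns when it is set
    while not stop_requested:
        time.sleep(0.01)

threads = [threading.Thread(target=worker, name="tail_thread_%d" % i)
           for i in range(3)]
for t in threads:
    t.start()

# Shutdown: set the flag, then join the threads selected by name prefix
stop_requested = True
for t in threading.enumerate():
    if t.name.startswith("tail_thread"):
        t.join(10)

print(all(not t.is_alive() for t in threads))  # -> True
```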
+
+
+class Tail(Spawn):
+    """
+    This class runs a child process in the background and sends its output in
+    real time, line-by-line, to a callback function.
+
+    See Spawn's docstring.
+
+    This class uses a single pipe reader to read data in real time from the
+    child process and report it to a given callback function.
+    When the child process exits, its exit status is reported to an additional
+    callback function.
+
+    When this class is unpickled, it automatically resumes reporting output.
+    """
+
+    def __init__(self, command=None, id=None, auto_close=False, echo=False,
+                 linesep="\n", termination_func=None, termination_params=(),
+                 output_func=None, output_params=(), output_prefix=""):
+        """
+        Initialize the class and run command as a child process.
+
+        @param command: Command to run, or None if accessing an already running
+                server.
+        @param id: ID of an already running server, if accessing a running
+                server, or None if starting a new one.
+        @param auto_close: If True, close() the instance automatically when its
+                reference count drops to zero (default False).
+        @param echo: Boolean indicating whether echo should be initially
+                enabled for the pseudo terminal running the subprocess.  This
+                parameter has an effect only when starting a new server.
+        @param linesep: Line separator to be appended to strings sent to the
+                child process by sendline().
+        @param termination_func: Function to call when the process exits.  The
+                function must accept a single exit status parameter.
+        @param termination_params: Parameters to send to termination_func
+                before the exit status.
+        @param output_func: Function to call whenever a line of output is
+                available from the STDOUT or STDERR streams of the process.
+                The function must accept a single string parameter.  The string
+                does not include the final newline.
+        @param output_params: Parameters to send to output_func before the
+                output line.
+        @param output_prefix: String to prepend to lines sent to output_func.
+        """
+        # Add a reader and a close hook
+        self._add_reader("tail")
+        self._add_close_hook(Tail._join_thread)
+
+        # Init the superclass
+        Spawn.__init__(self, command, id, auto_close, echo, linesep)
+
+        # Remember some attributes
+        self.termination_func = termination_func
+        self.termination_params = termination_params
+        self.output_func = output_func
+        self.output_params = output_params
+        self.output_prefix = output_prefix
+
+        # Start the thread in the background
+        self.tail_thread = None
+        if termination_func or output_func:
+            self._start_thread()
+
+
+    def __getinitargs__(self):
+        return Spawn.__getinitargs__(self) + (self.termination_func,
+                                              self.termination_params,
+                                              self.output_func,
+                                              self.output_params,
+                                              self.output_prefix)
+
+
+    def set_termination_func(self, termination_func):
+        """
+        Set the termination_func attribute. See __init__() for details.
+
+        @param termination_func: Function to call when the process terminates.
+                Must take a single parameter -- the exit status.
+        """
+        self.termination_func = termination_func
+        if termination_func and not self.tail_thread:
+            self._start_thread()
+
+
+    def set_termination_params(self, termination_params):
+        """
+        Set the termination_params attribute. See __init__() for details.
+
+        @param termination_params: Parameters to send to termination_func
+                before the exit status.
+        """
+        self.termination_params = termination_params
+
+
+    def set_output_func(self, output_func):
+        """
+        Set the output_func attribute. See __init__() for details.
+
+        @param output_func: Function to call for each line of STDOUT/STDERR
+                output from the process.  Must take a single string parameter.
+        """
+        self.output_func = output_func
+        if output_func and not self.tail_thread:
+            self._start_thread()
+
+
+    def set_output_params(self, output_params):
+        """
+        Set the output_params attribute. See __init__() for details.
+
+        @param output_params: Parameters to send to output_func before the
+                output line.
+        """
+        self.output_params = output_params
+
+
+    def set_output_prefix(self, output_prefix):
+        """
+        Set the output_prefix attribute. See __init__() for details.
+
+        @param output_prefix: String to pre-pend to each line sent to
+                output_func (see set_output_callback()).
+        """
+        self.output_prefix = output_prefix
+
+
+    def _tail(self):
+        def print_line(text):
+            # Pre-pend prefix and remove trailing whitespace
+            text = self.output_prefix + text.rstrip()
+            # Pass text to output_func
+            try:
+                params = self.output_params + (text,)
+                self.output_func(*params)
+            except TypeError:
+                pass
+
+        try:
+            fd = self._get_fd("tail")
+            buffer = ""
+            while True:
+                global _thread_kill_requested
+                if _thread_kill_requested:
+                    return
+                try:
+                    # See if there's any data to read from the pipe
+                    r, w, x = select.select([fd], [], [], 0.05)
+                except:
+                    break
+                if fd in r:
+                    # Some data is available; read it
+                    new_data = os.read(fd, 1024)
+                    if not new_data:
+                        break
+                    buffer += new_data
+                    # Send the output to output_func line by line
+                    # (except for the last line)
+                    if self.output_func:
+                        lines = buffer.split("\n")
+                        for line in lines[:-1]:
+                            print_line(line)
+                    # Leave only the last line
+                    last_newline_index = buffer.rfind("\n")
+                    buffer = buffer[last_newline_index+1:]
+                else:
+                    # No output is available right now; flush the buffer
+                    if buffer:
+                        print_line(buffer)
+                        buffer = ""
+            # The process terminated; print any remaining output
+            if buffer:
+                print_line(buffer)
+            # Get the exit status, print it and send it to termination_func
+            status = self.get_status()
+            if status is None:
+                return
+            print_line("(Process terminated with status %s)" % status)
+            try:
+                params = self.termination_params + (status,)
+                self.termination_func(*params)
+            except TypeError:
+                pass
+        finally:
+            self.tail_thread = None
+
+
+    def _start_thread(self):
+        self.tail_thread = threading.Thread(target=self._tail,
+                                            name="tail_thread_%s" % self.id)
+        self.tail_thread.start()
+
+
+    def _join_thread(self):
+        # Wait for the tail thread to exit
+        # (it's done this way because self.tail_thread may become None at any
+        # time)
+        t = self.tail_thread
+        if t:
+            t.join()
+
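_tail() turns an arbitrary byte stream into per-line callbacks by splitting the accumulated buffer on "\n" and carrying the trailing partial line over to the next read. The splitting step in isolation (feed_chunk is a name invented for this sketch):

```python
lines_seen = []
buffer = ""

def feed_chunk(chunk):
    """Emit every complete line in buffer+chunk; keep the partial tail."""
    global buffer
    buffer += chunk
    parts = buffer.split("\n")
    lines_seen.extend(parts[:-1])  # all complete lines
    buffer = parts[-1]             # last piece may still be incomplete

feed_chunk("first li")
feed_chunk("ne\nsecond line\nthi")
feed_chunk("rd\n")
print(lines_seen)  # -> ['first line', 'second line', 'third']
```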
+
+class Expect(Tail):
+    """
+    This class runs a child process in the background and provides expect-like
+    services.
+
+    It also provides all of Tail's functionality.
+    """
+
+    def __init__(self, command=None, id=None, auto_close=True, echo=False,
+                 linesep="\n", termination_func=None, termination_params=(),
+                 output_func=None, output_params=(), output_prefix=""):
+        """
+        Initialize the class and run command as a child process.
+
+        @param command: Command to run, or None if accessing an already running
+                server.
+        @param id: ID of an already running server, if accessing a running
+                server, or None if starting a new one.
+        @param auto_close: If True, close() the instance automatically when its
+                reference count drops to zero (default False).
+        @param echo: Boolean indicating whether echo should be initially
+                enabled for the pseudo terminal running the subprocess.  This
+                parameter has an effect only when starting a new server.
+        @param linesep: Line separator to be appended to strings sent to the
+                child process by sendline().
+        @param termination_func: Function to call when the process exits.  The
+                function must accept a single exit status parameter.
+        @param termination_params: Parameters to send to termination_func
+                before the exit status.
+        @param output_func: Function to call whenever a line of output is
+                available from the STDOUT or STDERR streams of the process.
+                The function must accept a single string parameter.  The string
+                does not include the final newline.
+        @param output_params: Parameters to send to output_func before the
+                output line.
+        @param output_prefix: String to prepend to lines sent to output_func.
+        """
+        # Add a reader
+        self._add_reader("expect")
+
+        # Init the superclass
+        Tail.__init__(self, command, id, auto_close, echo, linesep,
+                      termination_func, termination_params,
+                      output_func, output_params, output_prefix)
+
+
+    def __getinitargs__(self):
+        return Tail.__getinitargs__(self)
+
+
+    def read_nonblocking(self, timeout=None):
+        """
+        Read from child until there is nothing to read for timeout seconds.
+
+        @param timeout: Time (seconds) to wait before we give up reading from
+                the child process, or None to use the default value.
+        """
+        if timeout is None:
+            timeout = 0.1
+        fd = self._get_fd("expect")
+        data = ""
+        while True:
+            try:
+                r, w, x = select.select([fd], [], [], timeout)
+            except:
+                return data
+            if fd in r:
+                new_data = os.read(fd, 1024)
+                if not new_data:
+                    return data
+                data += new_data
+            else:
+                return data
+
+
+    def match_patterns(self, str, patterns):
+        """
+        Match str against a list of patterns.
+
+        Return the index of the first pattern that matches a substring of str.
+        None and empty strings in patterns are ignored.
+        If no match is found, return None.
+
+        @param str: String to search in.
+        @param patterns: List of strings (regular expression patterns).
+        """
+        for i in range(len(patterns)):
+            if not patterns[i]:
+                continue
+            if re.search(patterns[i], str):
+                return i
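match_patterns() returns the index of the first pattern whose regular expression matches anywhere in the string, skipping None and empty entries. The same contract in a standalone form (match_patterns_sketch is illustrative; the prompt patterns shown are assumptions, not taken from the module):

```python
import re

def match_patterns_sketch(text, patterns):
    """Index of the first pattern matching a substring of text, else None."""
    for i, pattern in enumerate(patterns):
        if not pattern:  # skip None and "" entries, as match_patterns does
            continue
        if re.search(pattern, text):
            return i
    return None

prompt_patterns = [None, r"login:\s*$", r"[\#\$]\s*$"]
print(match_patterns_sketch("debian login: ", prompt_patterns))  # -> 1
print(match_patterns_sketch("no match here", prompt_patterns))   # -> None
```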
+
+
+    def read_until_output_matches(self, patterns, filter=lambda x: x,
+                                  timeout=60, internal_timeout=None,
+                                  print_func=None):
+        """
+        Read using read_nonblocking until a match is found using match_patterns,
+        or until timeout expires. Before attempting to search for a match, the
+        data is filtered using the filter function provided.
+
+        @brief: Read from child using read_nonblocking until a pattern
+                matches.
+        @param patterns: List of strings (regular expression patterns)
+        @param filter: Function to apply to the data read from the child before
+                attempting to match it against the patterns (should take and
+                return a string)
+        @param timeout: The duration (in seconds) to wait until a match is
+                found
+        @param internal_timeout: The timeout to pass to read_nonblocking
+        @param print_func: A function to be used to print the data being read
+                (should take a string parameter)
+        @return: Tuple containing the match index and the data read so far
+        @raise ExpectTimeoutError: Raised if timeout expires
+        @raise ExpectProcessTerminatedError: Raised if the child process
+                terminates while waiting for output
+        @raise ExpectError: Raised if an unknown error occurs
+        """
+        fd = self._get_fd("expect")
+        o = ""
+        end_time = time.time() + timeout
+        while True:
+            try:
+                r, w, x = select.select([fd], [], [],
+                                        max(0, end_time - time.time()))
+            except (select.error, TypeError):
+                break
+            if not r:
+                raise ExpectTimeoutError(patterns, o)
+            # Read data from child
+            data = self.read_nonblocking(internal_timeout)
+            if not data:
+                break
+            # Print it if necessary
+            if print_func:
+                for line in data.splitlines():
+                    print_func(line)
+            # Look for patterns
+            o += data
+            match = self.match_patterns(filter(o), patterns)
+            if match is not None:
+                return match, o
+
+        # Check if the child has terminated
+        if virt_utils.wait_for(lambda: not self.is_alive(), 5, 0, 0.1):
+            raise ExpectProcessTerminatedError(patterns, self.get_status(), o)
+        else:
+            # This shouldn't happen
+            raise ExpectError(patterns, o)
+
+
+    def read_until_last_word_matches(self, patterns, timeout=60,
+                                     internal_timeout=None, print_func=None):
+        """
+        Read using read_nonblocking until the last word of the output matches
+        one of the patterns (using match_patterns), or until timeout expires.
+
+        @param patterns: A list of strings (regular expression patterns)
+        @param timeout: The duration (in seconds) to wait until a match is
+                found
+        @param internal_timeout: The timeout to pass to read_nonblocking
+        @param print_func: A function to be used to print the data being read
+                (should take a string parameter)
+        @return: A tuple containing the match index and the data read so far
+        @raise ExpectTimeoutError: Raised if timeout expires
+        @raise ExpectProcessTerminatedError: Raised if the child process
+                terminates while waiting for output
+        @raise ExpectError: Raised if an unknown error occurs
+        """
+        def get_last_word(str):
+            if str:
+                return str.split()[-1]
+            else:
+                return ""
+
+        return self.read_until_output_matches(patterns, get_last_word,
+                                              timeout, internal_timeout,
+                                              print_func)
+
+
+    def read_until_last_line_matches(self, patterns, timeout=60,
+                                     internal_timeout=None, print_func=None):
+        """
+        Read using read_nonblocking until the last non-empty line of the output
+        matches one of the patterns (using match_patterns), or until timeout
+        expires. Return a tuple containing the match index (or None if no match
+        was found) and the data read so far.
+
+        @brief: Read using read_nonblocking until the last non-empty line
+                matches a pattern.
+
+        @param patterns: A list of strings (regular expression patterns)
+        @param timeout: The duration (in seconds) to wait until a match is
+                found
+        @param internal_timeout: The timeout to pass to read_nonblocking
+        @param print_func: A function to be used to print the data being read
+                (should take a string parameter)
+        @return: A tuple containing the match index and the data read so far
+        @raise ExpectTimeoutError: Raised if timeout expires
+        @raise ExpectProcessTerminatedError: Raised if the child process
+                terminates while waiting for output
+        @raise ExpectError: Raised if an unknown error occurs
+        """
+        def get_last_nonempty_line(cont):
+            nonempty_lines = [l for l in cont.splitlines() if l.strip()]
+            if nonempty_lines:
+                return nonempty_lines[-1]
+            else:
+                return ""
+
+        return self.read_until_output_matches(patterns, get_last_nonempty_line,
+                                              timeout, internal_timeout,
+                                              print_func)
+
+
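For review purposes, the last-non-empty-line matching that drives the Expect helpers above can be sketched standalone. This is a simplified stand-in (the `match_patterns` here and the sample output string are illustrative, not the module's real implementation):

```python
import re

def get_last_nonempty_line(data):
    # Return the last line of 'data' containing non-whitespace, or ""
    # (mirrors the helper defined in read_until_last_line_matches).
    nonempty = [l for l in data.splitlines() if l.strip()]
    return nonempty[-1] if nonempty else ""

def match_patterns(line, patterns):
    # Simplified stand-in for the module's match_patterns(): return the
    # index of the first regex that matches 'line', or None.
    for i, pattern in enumerate(patterns):
        if re.search(pattern, line):
            return i
    return None

# Hypothetical shell output: command echo, result, blank line, prompt.
output = "ls -l\ntotal 0\n\n[root@vm ~]# \n"
idx = match_patterns(get_last_nonempty_line(output), [r"login:", r"[\#\$]\s*$"])
```

Matching only the last non-empty line (rather than the whole buffer) avoids false positives from prompt-like text embedded in command output.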
+class ShellSession(Expect):
+    """
+    This class runs a child process in the background.  It is suited for
+    processes that provide an interactive shell, such as SSH and Telnet.
+
+    It provides all services of Expect and Tail.  In addition, it
+    provides command running services, and a utility function to test the
+    process for responsiveness.
+    """
+
+    def __init__(self, command=None, id=None, auto_close=True, echo=False,
+                 linesep="\n", termination_func=None, termination_params=(),
+                 output_func=None, output_params=(), output_prefix="",
+                 prompt=r"[\#\$]\s*$", status_test_command="echo $?"):
+        """
+        Initialize the class and run command as a child process.
+
+        @param command: Command to run, or None if accessing an already running
+                server.
+        @param id: ID of an already running server, if accessing a running
+                server, or None if starting a new one.
+        @param auto_close: If True, close() the instance automatically when its
+                reference count drops to zero (default True).
+        @param echo: Boolean indicating whether echo should be initially
+                enabled for the pseudo terminal running the subprocess.  This
+                parameter has an effect only when starting a new server.
+        @param linesep: Line separator to be appended to strings sent to the
+                child process by sendline().
+        @param termination_func: Function to call when the process exits.  The
+                function must accept a single exit status parameter.
+        @param termination_params: Parameters to send to termination_func
+                before the exit status.
+        @param output_func: Function to call whenever a line of output is
+                available from the STDOUT or STDERR streams of the process.
+                The function must accept a single string parameter.  The string
+                does not include the final newline.
+        @param output_params: Parameters to send to output_func before the
+                output line.
+        @param output_prefix: String to prepend to lines sent to output_func.
+        @param prompt: Regular expression describing the shell's prompt line.
+        @param status_test_command: Command to be used for getting the last
+                exit status of commands run inside the shell (used by
+                cmd_status_output() and friends).
+        """
+        # Init the superclass
+        Expect.__init__(self, command, id, auto_close, echo, linesep,
+                        termination_func, termination_params,
+                        output_func, output_params, output_prefix)
+
+        # Remember some attributes
+        self.prompt = prompt
+        self.status_test_command = status_test_command
+
+
+    def __getinitargs__(self):
+        return Expect.__getinitargs__(self) + (self.prompt,
+                                               self.status_test_command)
+
+
+    def set_prompt(self, prompt):
+        """
+        Set the prompt attribute for later use by read_up_to_prompt.
+
+        @param prompt: String that describes the prompt contents.
+        """
+        self.prompt = prompt
+
+
+    def set_status_test_command(self, status_test_command):
+        """
+        Set the command to be sent in order to get the last exit status.
+
+        @param status_test_command: Command that will be sent to get the last
+                exit status.
+        """
+        self.status_test_command = status_test_command
+
+
+    def is_responsive(self, timeout=5.0):
+        """
+        Return True if the process responds to STDIN/terminal input.
+
+        Send a newline to the child process (e.g. SSH or Telnet) and read some
+        output using read_nonblocking().
+        If all is OK, some output should be available (e.g. the shell prompt).
+        In that case return True.  Otherwise return False.
+
+        @param timeout: Time duration to wait before the process is considered
+                unresponsive.
+        """
+        # Read all output that's waiting to be read, to make sure the output
+        # we read next is in response to the newline sent
+        self.read_nonblocking(timeout=0)
+        # Send a newline
+        self.sendline()
+        # Wait up to timeout seconds for some output from the child
+        end_time = time.time() + timeout
+        while time.time() < end_time:
+            time.sleep(0.5)
+            if self.read_nonblocking(timeout=0).strip():
+                return True
+        # No output -- report unresponsive
+        return False
+
+
+    def read_up_to_prompt(self, timeout=60, internal_timeout=None,
+                          print_func=None):
+        """
+        Read using read_nonblocking until the last non-empty line of the output
+        matches the prompt regular expression set by set_prompt, or until
+        timeout expires.
+
+        @brief: Read using read_nonblocking until the last non-empty line
+                matches the prompt.
+
+        @param timeout: The duration (in seconds) to wait until a match is
+                found
+        @param internal_timeout: The timeout to pass to read_nonblocking
+        @param print_func: A function to be used to print the data being
+                read (should take a string parameter)
+
+        @return: The data read so far
+        @raise ExpectTimeoutError: Raised if timeout expires
+        @raise ExpectProcessTerminatedError: Raised if the shell process
+                terminates while waiting for output
+        @raise ExpectError: Raised if an unknown error occurs
+        """
+        m, o = self.read_until_last_line_matches([self.prompt], timeout,
+                                                 internal_timeout, print_func)
+        return o
+
+
+    def cmd_output(self, cmd, timeout=60, internal_timeout=None,
+                   print_func=None):
+        """
+        Send a command and return its output.
+
+        @param cmd: Command to send (must not contain newline characters)
+        @param timeout: The duration (in seconds) to wait for the prompt to
+                return
+        @param internal_timeout: The timeout to pass to read_nonblocking
+        @param print_func: A function to be used to print the data being read
+                (should take a string parameter)
+
+        @return: The output of cmd
+        @raise ShellTimeoutError: Raised if timeout expires
+        @raise ShellProcessTerminatedError: Raised if the shell process
+                terminates while waiting for output
+        @raise ShellError: Raised if an unknown error occurs
+        """
+        def remove_command_echo(cont, cmd):
+            if cont and cont.splitlines()[0] == cmd:
+                cont = "".join(cont.splitlines(True)[1:])
+            return cont
+
+        def remove_last_nonempty_line(cont):
+            return "".join(cont.rstrip().splitlines(True)[:-1])
+
+        logging.debug("Sending command: %s", cmd)
+        self.read_nonblocking(timeout=0)
+        self.sendline(cmd)
+        try:
+            o = self.read_up_to_prompt(timeout, internal_timeout, print_func)
+        except ExpectError, e:
+            o = remove_command_echo(e.output, cmd)
+            if isinstance(e, ExpectTimeoutError):
+                raise ShellTimeoutError(cmd, o)
+            elif isinstance(e, ExpectProcessTerminatedError):
+                raise ShellProcessTerminatedError(cmd, e.status, o)
+            else:
+                raise ShellError(cmd, o)
+
+        # Remove the echoed command and the final shell prompt
+        return remove_last_nonempty_line(remove_command_echo(o, cmd))
+
+
+    def cmd_status_output(self, cmd, timeout=60, internal_timeout=None,
+                          print_func=None):
+        """
+        Send a command and return its exit status and output.
+
+        @param cmd: Command to send (must not contain newline characters)
+        @param timeout: The duration (in seconds) to wait for the prompt to
+                return
+        @param internal_timeout: The timeout to pass to read_nonblocking
+        @param print_func: A function to be used to print the data being read
+                (should take a string parameter)
+
+        @return: A tuple (status, output) where status is the exit status and
+                output is the output of cmd
+        @raise ShellTimeoutError: Raised if timeout expires
+        @raise ShellProcessTerminatedError: Raised if the shell process
+                terminates while waiting for output
+        @raise ShellStatusError: Raised if the exit status cannot be obtained
+        @raise ShellError: Raised if an unknown error occurs
+        """
+        o = self.cmd_output(cmd, timeout, internal_timeout, print_func)
+        try:
+            # Send the 'echo $?' (or equivalent) command to get the exit status
+            s = self.cmd_output(self.status_test_command, 10, internal_timeout)
+        except ShellError:
+            raise ShellStatusError(cmd, o)
+
+        # Get the first line consisting of digits only
+        digit_lines = [l for l in s.splitlines() if l.strip().isdigit()]
+        if digit_lines:
+            return int(digit_lines[0].strip()), o
+        else:
+            raise ShellStatusError(cmd, o)
+
+
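The exit-status scan used by cmd_status_output() above (find the first line of the `echo $?` output consisting only of digits) can be sketched standalone; the sample strings below are hypothetical shell transcripts:

```python
def parse_exit_status(status_output):
    # The shell may echo the status command itself and print a new
    # prompt; take the first line consisting only of digits as the
    # exit status, or None if no such line exists.
    digit_lines = [l for l in status_output.splitlines()
                   if l.strip().isdigit()]
    if digit_lines:
        return int(digit_lines[0].strip())
    return None

status = parse_exit_status("echo $?\r\n0\r\n[root@vm ~]# ")
```

Scanning for a digits-only line makes the parse robust against command echo and prompt noise surrounding the actual status value.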
+    def cmd_status(self, cmd, timeout=60, internal_timeout=None,
+                   print_func=None):
+        """
+        Send a command and return its exit status.
+
+        @param cmd: Command to send (must not contain newline characters)
+        @param timeout: The duration (in seconds) to wait for the prompt to
+                return
+        @param internal_timeout: The timeout to pass to read_nonblocking
+        @param print_func: A function to be used to print the data being read
+                (should take a string parameter)
+
+        @return: The exit status of cmd
+        @raise ShellTimeoutError: Raised if timeout expires
+        @raise ShellProcessTerminatedError: Raised if the shell process
+                terminates while waiting for output
+        @raise ShellStatusError: Raised if the exit status cannot be obtained
+        @raise ShellError: Raised if an unknown error occurs
+        """
+        s, o = self.cmd_status_output(cmd, timeout, internal_timeout,
+                                      print_func)
+        return s
+
+
+    def cmd(self, cmd, timeout=60, internal_timeout=None, print_func=None):
+        """
+        Send a command and return its output. If the command's exit status is
+        nonzero, raise an exception.
+
+        @param cmd: Command to send (must not contain newline characters)
+        @param timeout: The duration (in seconds) to wait for the prompt to
+                return
+        @param internal_timeout: The timeout to pass to read_nonblocking
+        @param print_func: A function to be used to print the data being read
+                (should take a string parameter)
+
+        @return: The output of cmd
+        @raise ShellTimeoutError: Raised if timeout expires
+        @raise ShellProcessTerminatedError: Raised if the shell process
+                terminates while waiting for output
+        @raise ShellStatusError: Raised if the exit status cannot be obtained
+        @raise ShellError: Raised if an unknown error occurs
+        @raise ShellCmdError: Raised if the exit status is nonzero
+        """
+        s, o = self.cmd_status_output(cmd, timeout, internal_timeout,
+                                      print_func)
+        if s != 0:
+            raise ShellCmdError(cmd, s, o)
+        return o
+
+
+    def get_command_output(self, cmd, timeout=60, internal_timeout=None,
+                           print_func=None):
+        """
+        Alias for cmd_output() for backward compatibility.
+        """
+        return self.cmd_output(cmd, timeout, internal_timeout, print_func)
+
+
+    def get_command_status_output(self, cmd, timeout=60, internal_timeout=None,
+                                  print_func=None):
+        """
+        Alias for cmd_status_output() for backward compatibility.
+        """
+        return self.cmd_status_output(cmd, timeout, internal_timeout,
+                                      print_func)
+
+
+    def get_command_status(self, cmd, timeout=60, internal_timeout=None,
+                           print_func=None):
+        """
+        Alias for cmd_status() for backward compatibility.
+        """
+        return self.cmd_status(cmd, timeout, internal_timeout, print_func)
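The output clean-up performed by cmd_output() above (strip the echoed command line, then strip the trailing prompt line) can be reproduced as a standalone sketch; the sample transcript is hypothetical:

```python
def remove_command_echo(data, cmd):
    # Drop the first line when it is exactly the echoed command.
    if data and data.splitlines()[0] == cmd:
        data = "".join(data.splitlines(True)[1:])
    return data

def remove_last_nonempty_line(data):
    # Drop the trailing prompt line (the last non-empty line).
    return "".join(data.rstrip().splitlines(True)[:-1])

raw = "uname -r\n2.6.38-rc7\n[root@vm ~]# "
clean = remove_last_nonempty_line(remove_command_echo(raw, "uname -r"))
```

After both passes only the command's actual output remains, which is what cmd_output() returns to callers.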
diff --git a/client/virt/kvm_installer.py b/client/virt/kvm_installer.py
new file mode 100644
index 0000000..ea48e95
--- /dev/null
+++ b/client/virt/kvm_installer.py
@@ -0,0 +1,797 @@
+import os, logging, datetime, glob
+import shutil
+from autotest_lib.client.bin import utils, os_dep
+from autotest_lib.client.common_lib import error
+import virt_utils
+
+
+def check_configure_options(script_path):
+    """
+    Return the list of available options (flags) of a given kvm configure build
+    script.
+
+    @param script_path: Path to the configure script
+    """
+    abspath = os.path.abspath(script_path)
+    help_raw = utils.system_output('%s --help' % abspath, ignore_status=True)
+    help_output = help_raw.split("\n")
+    option_list = []
+    for line in help_output:
+        cleaned_line = line.lstrip()
+        if cleaned_line.startswith("--"):
+            option = cleaned_line.split()[0]
+            option = option.split("=")[0]
+            option_list.append(option)
+
+    return option_list
+
+
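The flag-scraping logic of check_configure_options() can be exercised in isolation; the help listing below is a made-up example of configure output:

```python
def parse_configure_options(help_text):
    # Extract "--flag" option names from a configure --help listing,
    # discarding any "=VALUE" suffix (mirrors check_configure_options).
    options = []
    for line in help_text.split("\n"):
        line = line.lstrip()
        if line.startswith("--"):
            options.append(line.split()[0].split("=")[0])
    return options

help_text = """Usage: configure [options]
  --prefix=PREFIX          install in PREFIX
  --disable-strip          disable stripping binaries
  --help                   print this message
"""
opts = parse_configure_options(help_text)
```

This lets the installer probe which flags (e.g. --disable-strip) a given qemu/kvm tree supports before composing the configure command line.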
+def kill_qemu_processes():
+    """
+    Kill all qemu processes, and any processes holding /dev/kvm open.
+    """
+    logging.debug("Killing any qemu processes that might be left behind")
+    utils.system("pkill qemu", ignore_status=True)
+    # Let's double check to see if some other process is holding /dev/kvm
+    # (/dev/kvm is a char device, so use exists() rather than isfile())
+    if os.path.exists("/dev/kvm"):
+        utils.system("fuser -k /dev/kvm", ignore_status=True)
+
+
+def cpu_vendor():
+    vendor = "intel"
+    if os.system("grep vmx /proc/cpuinfo 1>/dev/null") != 0:
+        vendor = "amd"
+    logging.debug("Detected CPU vendor as '%s'", vendor)
+    return vendor
+
+
+def _unload_kvm_modules(mod_list):
+    logging.info("Unloading previously loaded KVM modules")
+    for module in reversed(mod_list):
+        utils.unload_module(module)
+
+
+def _load_kvm_modules(mod_list, module_dir=None, load_stock=False):
+    """
+    Just load the KVM modules, without killing Qemu or unloading previous
+    modules.
+
+    Load the modules named in mod_list from any subdirectory of module_dir;
+    the function walks module_dir until it finds them.
+
+    @param mod_list: List of module names to load (e.g. ['kvm', 'kvm-intel']).
+    @param module_dir: Directory where the KVM modules are located.
+    @param load_stock: Whether to load the system (stock) kernel modules.
+    """
+    if module_dir:
+        logging.info("Loading the built KVM modules...")
+        kvm_module_path = None
+        kvm_vendor_module_path = None
+        abort = False
+
+        list_modules = ['%s.ko' % (m) for m in mod_list]
+
+        list_module_paths = []
+        for folder, subdirs, files in os.walk(module_dir):
+            for module in list_modules:
+                if module in files:
+                    module_path = os.path.join(folder, module)
+                    list_module_paths.append(module_path)
+
+        # We might need to arrange the modules in the correct order
+        # to avoid module load problems
+        list_modules_load = []
+        for module in list_modules:
+            for module_path in list_module_paths:
+                if os.path.basename(module_path) == module:
+                    list_modules_load.append(module_path)
+
+        if len(list_module_paths) != len(list_modules):
+            logging.error("KVM modules not found. If you don't want to use "
+                          "the modules built by this test, make sure "
+                          "load_modules: 'no' is set in the test control "
+                          "file.")
+            raise error.TestError("The modules %s were requested to be loaded, "
+                                  "but the only modules found were %s" %
+                                  (list_modules, list_module_paths))
+
+        for module_path in list_modules_load:
+            try:
+                utils.system("insmod %s" % module_path)
+            except Exception, e:
+                raise error.TestFail("Failed to load KVM modules: %s" % e)
+
+    if load_stock:
+        logging.info("Loading current system KVM modules...")
+        for module in mod_list:
+            utils.system("modprobe %s" % module)
+
+
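The reordering step above (arrange the discovered .ko paths to follow the requested module order) can be sketched standalone; the paths below are hypothetical:

```python
import os

def order_module_paths(mod_list, found_paths):
    # Arrange discovered .ko paths in mod_list order, since dependent
    # modules (e.g. kvm-intel) must be loaded after their dependencies
    # (kvm), regardless of the order os.walk() found them in.
    wanted = ['%s.ko' % m for m in mod_list]
    ordered = []
    for name in wanted:
        for path in found_paths:
            if os.path.basename(path) == name:
                ordered.append(path)
    return ordered

paths = ['/tmp/kmod/x86/kvm-intel.ko', '/tmp/kmod/kvm.ko']
ordered = order_module_paths(['kvm', 'kvm-intel'], paths)
```

Insmod, unlike modprobe, resolves no dependencies, so getting this order right is what keeps the built-module load path working.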
+def create_symlinks(test_bindir, prefix=None, bin_list=None, unittest=None):
+    """
+    Create symbolic links for the appropriate qemu and qemu-img commands on
+    the kvm test bindir.
+
+    @param test_bindir: KVM test bindir
+    @param prefix: KVM prefix path
+    @param bin_list: List of qemu binaries to link
+    @param unittest: Path to configuration file unittests.cfg
+    """
+    qemu_path = os.path.join(test_bindir, "qemu")
+    qemu_img_path = os.path.join(test_bindir, "qemu-img")
+    qemu_unittest_path = os.path.join(test_bindir, "unittests")
+    if os.path.lexists(qemu_path):
+        os.unlink(qemu_path)
+    if os.path.lexists(qemu_img_path):
+        os.unlink(qemu_img_path)
+    if unittest and os.path.lexists(qemu_unittest_path):
+        os.unlink(qemu_unittest_path)
+
+    logging.debug("Linking qemu binaries")
+
+    if bin_list:
+        for bin in bin_list:
+            if os.path.basename(bin) == 'qemu-kvm':
+                os.symlink(bin, qemu_path)
+            elif os.path.basename(bin) == 'qemu-img':
+                os.symlink(bin, qemu_img_path)
+
+    elif prefix:
+        kvm_qemu = os.path.join(prefix, "bin", "qemu-system-x86_64")
+        if not os.path.isfile(kvm_qemu):
+            raise error.TestError('Invalid qemu path')
+        kvm_qemu_img = os.path.join(prefix, "bin", "qemu-img")
+        if not os.path.isfile(kvm_qemu_img):
+            raise error.TestError('Invalid qemu-img path')
+        os.symlink(kvm_qemu, qemu_path)
+        os.symlink(kvm_qemu_img, qemu_img_path)
+
+    if unittest:
+        logging.debug("Linking unittest dir")
+        os.symlink(unittest, qemu_unittest_path)
+
+
+def install_roms(rom_dir, prefix):
+    logging.debug("Path to roms specified. Copying roms to install prefix")
+    rom_dst_dir = os.path.join(prefix, 'share', 'qemu')
+    for rom_src in glob.glob('%s/*.bin' % rom_dir):
+        rom_dst = os.path.join(rom_dst_dir, os.path.basename(rom_src))
+        logging.debug("Copying rom file %s to %s", rom_src, rom_dst)
+        shutil.copy(rom_src, rom_dst)
+
+
+def save_build(build_dir, dest_dir):
+    logging.debug('Saving the result of the build on %s', dest_dir)
+    base_name = os.path.basename(build_dir)
+    tarball_name = base_name + '.tar.bz2'
+    os.chdir(os.path.dirname(build_dir))
+    utils.system('tar -cjf %s %s' % (tarball_name, base_name))
+    shutil.move(tarball_name, os.path.join(dest_dir, tarball_name))
+
+
+class KvmInstallException(Exception):
+    pass
+
+
+class FailedKvmInstall(KvmInstallException):
+    pass
+
+
+class KvmNotInstalled(KvmInstallException):
+    pass
+
+
+class BaseInstaller(object):
+    # default value for load_stock argument
+    load_stock_modules = True
+    def __init__(self, mode=None):
+        self.install_mode = mode
+        self._full_module_list = None
+
+    def set_install_params(self, test, params):
+        self.params = params
+
+        load_modules = params.get('load_modules', 'no')
+        if not load_modules or load_modules == 'yes':
+            self.should_load_modules = True
+        elif load_modules == 'no':
+            self.should_load_modules = False
+        default_extra_modules = str(None)
+        self.extra_modules = eval(params.get("extra_modules",
+                                             default_extra_modules))
+
+        self.cpu_vendor = cpu_vendor()
+
+        self.srcdir = test.srcdir
+        if not os.path.isdir(self.srcdir):
+            os.makedirs(self.srcdir)
+
+        self.test_bindir = test.bindir
+        self.results_dir = test.resultsdir
+
+        # KVM build prefix, for the modes that do need it
+        prefix = os.path.join(test.bindir, 'build')
+        self.prefix = os.path.abspath(prefix)
+
+        # Current host kernel directory
+        default_host_kernel_source = '/lib/modules/%s/build' % os.uname()[2]
+        self.host_kernel_srcdir = params.get('host_kernel_source',
+                                             default_host_kernel_source)
+
+        # Extra parameters that can be passed to the configure script
+        self.extra_configure_options = params.get('extra_configure_options',
+                                                  None)
+
+        # Do we want to save the result of the build on test.resultsdir?
+        self.save_results = True
+        save_results = params.get('save_results', 'no')
+        if save_results == 'no':
+            self.save_results = False
+
+        self._full_module_list = list(self._module_list())
+
+
+    def install_unittests(self):
+        userspace_srcdir = os.path.join(self.srcdir, "kvm_userspace")
+        test_repo = self.params.get("test_git_repo")
+        test_branch = self.params.get("test_branch", "master")
+        test_commit = self.params.get("test_commit", None)
+        test_lbranch = self.params.get("test_lbranch", "master")
+
+        if test_repo:
+            test_srcdir = os.path.join(self.srcdir, "kvm-unit-tests")
+            virt_utils.get_git_branch(test_repo, test_branch, test_srcdir,
+                                      test_commit, test_lbranch)
+            unittest_cfg = os.path.join(test_srcdir, 'x86',
+                                        'unittests.cfg')
+            self.test_srcdir = test_srcdir
+        else:
+            unittest_cfg = os.path.join(userspace_srcdir, 'kvm', 'test', 'x86',
+                                        'unittests.cfg')
+        self.unittest_cfg = None
+        if os.path.isfile(unittest_cfg):
+            self.unittest_cfg = unittest_cfg
+        else:
+            if test_repo:
+                logging.error("No unittest config file %s found, skipping "
+                              "unittest build", unittest_cfg)
+
+        self.unittest_prefix = None
+        if self.unittest_cfg:
+            logging.info("Building and installing unittests")
+            os.chdir(os.path.dirname(os.path.dirname(self.unittest_cfg)))
+            utils.system('./configure --prefix=%s' % self.prefix)
+            utils.system('make')
+            utils.system('make install')
+            self.unittest_prefix = os.path.join(self.prefix, 'share', 'qemu',
+                                                'tests')
+
+
+    def full_module_list(self):
+        """Return the module list used by the installer
+
+        Used by the module_probe test, to avoid using utils.unload_module().
+        """
+        if self._full_module_list is None:
+            raise KvmNotInstalled("KVM modules not installed yet "
+                                  "(installer: %s)" % type(self))
+        return self._full_module_list
+
+
+    def _module_list(self):
+        """Generate the list of modules that need to be loaded
+        """
+        yield 'kvm'
+        yield 'kvm-%s' % (self.cpu_vendor)
+        if self.extra_modules:
+            for module in self.extra_modules:
+                yield module
+
+
+    def _load_modules(self, mod_list):
+        """
+        Load the KVM modules
+
+        May be overridden by subclasses.
+        """
+        _load_kvm_modules(mod_list, load_stock=self.load_stock_modules)
+
+
+    def load_modules(self, mod_list=None):
+        if mod_list is None:
+            mod_list = self.full_module_list()
+        self._load_modules(mod_list)
+
+
+    def _unload_modules(self, mod_list=None):
+        """
+        Just unload the KVM modules, without trying to kill Qemu
+        """
+        if mod_list is None:
+            mod_list = self.full_module_list()
+        _unload_kvm_modules(mod_list)
+
+
+    def unload_modules(self, mod_list=None):
+        """
+        Kill Qemu and unload the KVM modules
+        """
+        kill_qemu_processes()
+        self._unload_modules(mod_list)
+
+
+    def reload_modules(self):
+        """
+        Reload the KVM modules after killing Qemu and unloading the current modules
+        """
+        self.unload_modules()
+        self.load_modules()
+
+
+    def reload_modules_if_needed(self):
+        if self.should_load_modules:
+            self.reload_modules()
+
+
+class YumInstaller(BaseInstaller):
+    """
+    Class that uses yum to install and remove packages.
+    """
+    load_stock_modules = True
+    def set_install_params(self, test, params):
+        super(YumInstaller, self).set_install_params(test, params)
+        # Checking if all required dependencies are available
+        os_dep.command("rpm")
+        os_dep.command("yum")
+
+        default_pkg_list = str(['qemu-kvm', 'qemu-kvm-tools'])
+        default_qemu_bin_paths = str(['/usr/bin/qemu-kvm', '/usr/bin/qemu-img'])
+        default_pkg_path_list = str(None)
+        self.pkg_list = eval(params.get("pkg_list", default_pkg_list))
+        self.pkg_path_list = eval(params.get("pkg_path_list",
+                                             default_pkg_path_list))
+        self.qemu_bin_paths = eval(params.get("qemu_bin_paths",
+                                              default_qemu_bin_paths))
+
+
+    def _clean_previous_installs(self):
+        kill_qemu_processes()
+        removable_packages = " ".join(self.pkg_list)
+        utils.system("yum remove -y %s" % removable_packages)
+
+
+    def _get_packages(self):
+        for pkg in self.pkg_path_list:
+            utils.get_file(pkg, os.path.join(self.srcdir,
+                                             os.path.basename(pkg)))
+
+
+    def _install_packages(self):
+        """
+        Install all downloaded packages.
+        """
+        os.chdir(self.srcdir)
+        utils.system("yum install --nogpgcheck -y *.rpm")
+
+
+    def install(self):
+        self.install_unittests()
+        self._clean_previous_installs()
+        self._get_packages()
+        self._install_packages()
+        create_symlinks(test_bindir=self.test_bindir,
+                        bin_list=self.qemu_bin_paths,
+                        unittest=self.unittest_prefix)
+        self.reload_modules_if_needed()
+        if self.save_results:
+            save_build(self.srcdir, self.results_dir)
+
+
+class KojiInstaller(YumInstaller):
+    """
+    Class that handles installing KVM from the fedora build service, koji.
+    It uses yum to install and remove packages.
+    """
+    load_stock_modules = True
+    def set_install_params(self, test, params):
+        """
+        Gets parameters and initializes the package downloader.
+
+        @param test: kvm test object
+        @param params: Dictionary with test arguments
+        """
+        super(KojiInstaller, self).set_install_params(test, params)
+        default_koji_cmd = '/usr/bin/koji'
+        default_src_pkg = 'qemu'
+        self.src_pkg = params.get("src_pkg", default_src_pkg)
+        self.tag = params.get("koji_tag", None)
+        self.build = params.get("koji_build", None)
+        self.koji_cmd = params.get("koji_cmd", default_koji_cmd)
+
+
+    def _get_packages(self):
+        """
+        Downloads the specific arch RPMs for the specific build name.
+        """
+        downloader = virt_utils.KojiDownloader(cmd=self.koji_cmd)
+        downloader.get(src_package=self.src_pkg, tag=self.tag,
+                            build=self.build, dst_dir=self.srcdir)
+
+
+    def install(self):
+        super(KojiInstaller, self)._clean_previous_installs()
+        self._get_packages()
+        super(KojiInstaller, self)._install_packages()
+        self.install_unittests()
+        create_symlinks(test_bindir=self.test_bindir,
+                        bin_list=self.qemu_bin_paths,
+                        unittest=self.unittest_prefix)
+        self.reload_modules_if_needed()
+        if self.save_results:
+            save_build(self.srcdir, self.results_dir)
+
+
+class SourceDirInstaller(BaseInstaller):
+    """
+    Class that handles building/installing KVM directly from a tarball or
+    a single source code dir.
+    """
+    def set_install_params(self, test, params):
+        """
+        Initializes class attributes, and retrieves KVM code.
+
+        @param test: kvm test object
+        @param params: Dictionary with test arguments
+        """
+        super(SourceDirInstaller, self).set_install_params(test, params)
+
+        self.mod_install_dir = os.path.join(self.prefix, 'modules')
+        self.installed_kmods = False  # it will be set to True in case we
+                                      # installed our own modules
+
+        srcdir = params.get("srcdir", None)
+        self.path_to_roms = params.get("path_to_rom_images", None)
+
+        if self.install_mode == 'localsrc':
+            if srcdir is None:
+                raise error.TestError("Install from source directory "
+                                      "specified, but no source directory "
+                                      "provided on the control file.")
+            else:
+                shutil.copytree(srcdir, self.srcdir)
+
+        if self.install_mode == 'release':
+            release_tag = params.get("release_tag")
+            release_dir = params.get("release_dir")
+            release_listing = params.get("release_listing")
+            logging.info("Installing KVM from release tarball")
+            if not release_tag:
+                release_tag = virt_utils.get_latest_kvm_release_tag(
+                                                                release_listing)
+            tarball = os.path.join(release_dir, 'kvm', release_tag,
+                                   "kvm-%s.tar.gz" % release_tag)
+            logging.info("Retrieving release kvm-%s" % release_tag)
+            tarball = utils.unmap_url("/", tarball, "/tmp")
+
+        elif self.install_mode == 'snapshot':
+            logging.info("Installing KVM from snapshot")
+            snapshot_dir = params.get("snapshot_dir")
+            if not snapshot_dir:
+                raise error.TestError("Snapshot dir not provided")
+            snapshot_date = params.get("snapshot_date")
+            if not snapshot_date:
+                # Take yesterday's snapshot
+                d = (datetime.date.today() -
+                     datetime.timedelta(1)).strftime("%Y%m%d")
+            else:
+                d = snapshot_date
+            tarball = os.path.join(snapshot_dir, "kvm-snapshot-%s.tar.gz" % d)
+            logging.info("Retrieving kvm-snapshot-%s" % d)
+            tarball = utils.unmap_url("/", tarball, "/tmp")
+
+        elif self.install_mode == 'localtar':
+            tarball = params.get("tarball")
+            if not tarball:
+                raise error.TestError("KVM tarball install specified, but no "
+                                      "tarball provided in the control file.")
+            logging.info("Installing KVM from a local tarball")
+            logging.info("Using tarball %s" % tarball)
+            tarball = utils.unmap_url("/", tarball, "/tmp")
+
+        if self.install_mode in ['release', 'snapshot', 'localtar']:
+            utils.extract_tarball_to_dir(tarball, self.srcdir)
+
+        if self.install_mode in ['release', 'snapshot', 'localtar', 'srcdir']:
+            self.repo_type = virt_utils.check_kvm_source_dir(self.srcdir)
+            configure_script = os.path.join(self.srcdir, 'configure')
+            self.configure_options = check_configure_options(configure_script)
+
+
+    def _build(self):
+        make_jobs = utils.count_cpus()
+        os.chdir(self.srcdir)
+        # For testing purposes, it's better to build qemu binaries with
+        # debugging symbols, so we can extract more meaningful stack traces.
+        cfg = "./configure --prefix=%s" % self.prefix
+        if "--disable-strip" in self.configure_options:
+            cfg += " --disable-strip"
+        steps = [cfg, "make clean", "make -j %s" % make_jobs]
+        logging.info("Building KVM")
+        for step in steps:
+            utils.system(step)
+
+
+    def _install_kmods_old_userspace(self, userspace_path):
+        """
+        Run the module install command.
+
+        This is for the "old userspace" code, which contained a 'kernel'
+        subdirectory with the kmod build code.
+
+        The code would be much simpler if we could specify the module install
+        path as a parameter to the top-level Makefile. As we can't do that and
+        the module install code doesn't use --prefix, we have to call
+        'make -C kernel install' directly, setting the module directory
+        parameters.
+
+        If the userspace tree doesn't have a 'kernel' subdirectory, the
+        module install step will be skipped.
+
+        @param userspace_path: the path to the kvm-userspace directory
+        """
+        kdir = os.path.join(userspace_path, 'kernel')
+        if os.path.isdir(kdir):
+            os.chdir(kdir)
+            # INSTALLDIR is the target dir for the modules
+            # ORIGMODDIR is the dir where the old modules will be removed. we
+            #            don't want to mess with the system modules, so set it
+            #            to a non-existing directory
+            utils.system('make install INSTALLDIR=%s '
+                         'ORIGMODDIR=/tmp/no-old-modules' % self.mod_install_dir)
+            self.installed_kmods = True
+
+
+    def _install_kmods(self, kmod_path):
+        """Run the module install command for the kmod-kvm repository
+
+        @param kmod_path: the path to the kmod-kvm.git working copy
+        """
+        os.chdir(kmod_path)
+        utils.system('make modules_install DESTDIR=%s' % (self.mod_install_dir))
+        self.installed_kmods = True
+
+
+    def _install(self):
+        os.chdir(self.srcdir)
+        logging.info("Installing KVM userspace")
+        if self.repo_type == 1:
+            utils.system("make -C qemu install")
+            self._install_kmods_old_userspace(self.srcdir)
+        elif self.repo_type == 2:
+            utils.system("make install")
+        if self.path_to_roms:
+            install_roms(self.path_to_roms, self.prefix)
+        self.install_unittests()
+        create_symlinks(test_bindir=self.test_bindir,
+                        prefix=self.prefix,
+                        unittest=self.unittest_prefix)
+
+
+    def _load_modules(self, mod_list):
+        # load the installed KVM modules in case we installed them
+        # ourselves. Otherwise, just load the system modules.
+        if self.installed_kmods:
+            logging.info("Loading installed KVM modules")
+            _load_kvm_modules(mod_list, module_dir=self.mod_install_dir)
+        else:
+            logging.info("Loading stock KVM modules")
+            _load_kvm_modules(mod_list, load_stock=True)
+
+
+    def install(self):
+        self._build()
+        self._install()
+        self.reload_modules_if_needed()
+        if self.save_results:
+            save_build(self.srcdir, self.results_dir)
+
+
+class GitInstaller(SourceDirInstaller):
+    def _pull_code(self):
+        """
+        Retrieves code from git repositories.
+        """
+        params = self.params
+
+        kernel_repo = params.get("git_repo")
+        user_repo = params.get("user_git_repo")
+        kmod_repo = params.get("kmod_repo")
+
+        kernel_branch = params.get("kernel_branch", "master")
+        user_branch = params.get("user_branch", "master")
+        kmod_branch = params.get("kmod_branch", "master")
+
+        kernel_lbranch = params.get("kernel_lbranch", "master")
+        user_lbranch = params.get("user_lbranch", "master")
+        kmod_lbranch = params.get("kmod_lbranch", "master")
+
+        kernel_commit = params.get("kernel_commit", None)
+        user_commit = params.get("user_commit", None)
+        kmod_commit = params.get("kmod_commit", None)
+
+        kernel_patches = eval(params.get("kernel_patches", "[]"))
+        user_patches = eval(params.get("user_patches", "[]"))
+        kmod_patches = eval(params.get("kmod_patches", "[]"))
+
+        if not user_repo:
+            message = "KVM user git repository path not specified"
+            logging.error(message)
+            raise error.TestError(message)
+
+        userspace_srcdir = os.path.join(self.srcdir, "kvm_userspace")
+        virt_utils.get_git_branch(user_repo, user_branch, userspace_srcdir,
+                                 user_commit, user_lbranch)
+        self.userspace_srcdir = userspace_srcdir
+
+        if user_patches:
+            os.chdir(self.userspace_srcdir)
+            for patch in user_patches:
+                utils.get_file(patch, os.path.join(self.userspace_srcdir,
+                                                   os.path.basename(patch)))
+                utils.system('patch -p1 < %s' % os.path.basename(patch))
+
+        if kernel_repo:
+            kernel_srcdir = os.path.join(self.srcdir, "kvm")
+            virt_utils.get_git_branch(kernel_repo, kernel_branch, kernel_srcdir,
+                                     kernel_commit, kernel_lbranch)
+            self.kernel_srcdir = kernel_srcdir
+            if kernel_patches:
+                os.chdir(self.kernel_srcdir)
+                for patch in kernel_patches:
+                    utils.get_file(patch, os.path.join(self.kernel_srcdir,
+                                                       os.path.basename(patch)))
+                    utils.system('patch -p1 < %s' % os.path.basename(patch))
+        else:
+            self.kernel_srcdir = None
+
+        if kmod_repo:
+            kmod_srcdir = os.path.join(self.srcdir, "kvm_kmod")
+            virt_utils.get_git_branch(kmod_repo, kmod_branch, kmod_srcdir,
+                                     kmod_commit, kmod_lbranch)
+            self.kmod_srcdir = kmod_srcdir
+            if kmod_patches:
+                os.chdir(self.kmod_srcdir)
+                for patch in kmod_patches:
+                    utils.get_file(patch, os.path.join(self.kmod_srcdir,
+                                                       os.path.basename(patch)))
+                    utils.system('patch -p1 < %s' % os.path.basename(patch))
+        else:
+            self.kmod_srcdir = None
+
+        configure_script = os.path.join(self.userspace_srcdir, 'configure')
+        self.configure_options = check_configure_options(configure_script)
+
+
+    def _build(self):
+        make_jobs = utils.count_cpus()
+        cfg = './configure'
+        if self.kmod_srcdir:
+            logging.info('Building KVM modules')
+            os.chdir(self.kmod_srcdir)
+            module_build_steps = [cfg,
+                                  'make clean',
+                                  'make sync LINUX=%s' % self.kernel_srcdir,
+                                  'make']
+        elif self.kernel_srcdir:
+            logging.info('Building KVM modules')
+            os.chdir(self.userspace_srcdir)
+            cfg += ' --kerneldir=%s' % self.host_kernel_srcdir
+            module_build_steps = [cfg,
+                            'make clean',
+                            'make -C kernel LINUX=%s sync' % self.kernel_srcdir]
+        else:
+            module_build_steps = []
+
+        for step in module_build_steps:
+            utils.run(step)
+
+        logging.info('Building KVM userspace code')
+        os.chdir(self.userspace_srcdir)
+        cfg += ' --prefix=%s' % self.prefix
+        if "--disable-strip" in self.configure_options:
+            cfg += ' --disable-strip'
+        if self.extra_configure_options:
+            cfg += ' %s' % self.extra_configure_options
+        utils.system(cfg)
+        utils.system('make clean')
+        utils.system('make -j %s' % make_jobs)
+
+
+    def _install(self):
+        if self.kernel_srcdir:
+            os.chdir(self.userspace_srcdir)
+            # the kernel module install with --prefix doesn't work, and DESTDIR
+            # wouldn't work for the userspace stuff, so we clear WANT_MODULE:
+            utils.system('make install WANT_MODULE=')
+            # and install the old-style-kmod modules manually:
+            self._install_kmods_old_userspace(self.userspace_srcdir)
+        elif self.kmod_srcdir:
+            # if we have a kmod repository, it is easier:
+            # 1) install userspace:
+            os.chdir(self.userspace_srcdir)
+            utils.system('make install')
+            # 2) install kmod:
+            self._install_kmods(self.kmod_srcdir)
+        else:
+            # if we don't have kmod sources, we just install
+            # userspace:
+            os.chdir(self.userspace_srcdir)
+            utils.system('make install')
+
+        if self.path_to_roms:
+            install_roms(self.path_to_roms, self.prefix)
+        self.install_unittests()
+        create_symlinks(test_bindir=self.test_bindir, prefix=self.prefix,
+                        bin_list=None,
+                        unittest=self.unittest_prefix)
+
+
+    def install(self):
+        self._pull_code()
+        self._build()
+        self._install()
+        self.reload_modules_if_needed()
+        if self.save_results:
+            save_build(self.srcdir, self.results_dir)
+
+
+class PreInstalledKvm(BaseInstaller):
+    # load_modules() will use the stock modules:
+    load_stock_modules = True
+    def install(self):
+        logging.info("Expecting KVM to be already installed. Doing nothing")
+
+
+class FailedInstaller:
+    """
+    Class returned instead of the installer if an installation fails.
+
+    Useful to make sure no installer object is used if KVM installation fails.
+    """
+    def __init__(self, msg="KVM install failed"):
+        self._msg = msg
+
+
+    def load_modules(self):
+        """Will refuse to load the KVM modules as install failed"""
+        raise FailedKvmInstall("KVM modules not available. Reason: %s" %
+                               self._msg)
+
+
+installer_classes = {
+    'localsrc': SourceDirInstaller,
+    'localtar': SourceDirInstaller,
+    'release': SourceDirInstaller,
+    'snapshot': SourceDirInstaller,
+    'git': GitInstaller,
+    'yum': YumInstaller,
+    'koji': KojiInstaller,
+    'preinstalled': PreInstalledKvm,
+}
+
+
+def _installer_class(install_mode):
+    c = installer_classes.get(install_mode)
+    if c is None:
+        raise error.TestError('Invalid or unsupported'
+                              ' install mode: %s' % install_mode)
+    return c
+
+
+def make_installer(params):
+    # priority:
+    # - 'install_mode' param
+    # - 'mode' param
+    mode = params.get("install_mode", params.get("mode"))
+    klass = _installer_class(mode)
+    return klass(mode)
diff --git a/client/virt/kvm_monitor.py b/client/virt/kvm_monitor.py
new file mode 100644
index 0000000..d76f5c2
--- /dev/null
+++ b/client/virt/kvm_monitor.py
@@ -0,0 +1,745 @@
+"""
+Interfaces to the QEMU monitor.
+
+@copyright: 2008-2010 Red Hat Inc.
+"""
+
+import socket, time, threading, logging, select
+import virt_utils
+
+try:
+    import json
+except ImportError:
+    logging.warning("Could not import json module. "
+                    "QMP monitor functionality disabled.")
+
+
+class MonitorError(Exception):
+    pass
+
+
+class MonitorConnectError(MonitorError):
+    pass
+
+
+class MonitorSocketError(MonitorError):
+    def __init__(self, msg, e):
+        Exception.__init__(self, msg, e)
+        self.msg = msg
+        self.e = e
+
+    def __str__(self):
+        return "%s    (%s)" % (self.msg, self.e)
+
+
+class MonitorLockError(MonitorError):
+    pass
+
+
+class MonitorProtocolError(MonitorError):
+    pass
+
+
+class MonitorNotSupportedError(MonitorError):
+    pass
+
+
+class QMPCmdError(MonitorError):
+    def __init__(self, cmd, qmp_args, data):
+        MonitorError.__init__(self, cmd, qmp_args, data)
+        self.cmd = cmd
+        self.qmp_args = qmp_args
+        self.data = data
+
+    def __str__(self):
+        return ("QMP command %r failed    (arguments: %r,    "
+                "error message: %r)" % (self.cmd, self.qmp_args, self.data))
+
+
+class Monitor:
+    """
+    Common code for monitor classes.
+    """
+
+    def __init__(self, name, filename):
+        """
+        Initialize the instance.
+
+        @param name: Monitor identifier (a string)
+        @param filename: Monitor socket filename
+        @raise MonitorConnectError: Raised if the connection fails
+        """
+        self.name = name
+        self.filename = filename
+        self._lock = threading.RLock()
+        self._socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+
+        try:
+            self._socket.connect(filename)
+        except socket.error:
+            raise MonitorConnectError("Could not connect to monitor socket")
+
+
+    def __del__(self):
+        # Automatically close the connection when the instance is garbage
+        # collected
+        try:
+            self._socket.shutdown(socket.SHUT_RDWR)
+        except socket.error:
+            pass
+        self._socket.close()
+
+
+    # The following two functions are defined to make sure the state is set
+    # exclusively by the constructor call as specified in __getinitargs__().
+
+    def __getstate__(self):
+        pass
+
+
+    def __setstate__(self, state):
+        pass
+
+
+    def __getinitargs__(self):
+        # Save some information when pickling -- will be passed to the
+        # constructor upon unpickling
+        return self.name, self.filename, True
+
+
+    def _acquire_lock(self, timeout=20):
+        end_time = time.time() + timeout
+        while time.time() < end_time:
+            if self._lock.acquire(False):
+                return True
+            time.sleep(0.05)
+        return False
+
+
+    def _data_available(self, timeout=0):
+        timeout = max(0, timeout)
+        return bool(select.select([self._socket], [], [], timeout)[0])
+
+
+    def _recvall(self):
+        s = ""
+        while self._data_available():
+            try:
+                data = self._socket.recv(1024)
+            except socket.error, e:
+                raise MonitorSocketError("Could not receive data from monitor",
+                                         e)
+            if not data:
+                break
+            s += data
+        return s
+
+
+    def is_responsive(self):
+        """
+        Return True iff the monitor is responsive.
+        """
+        try:
+            self.verify_responsive()
+            return True
+        except MonitorError:
+            return False
+
+
+class HumanMonitor(Monitor):
+    """
+    Wraps "human monitor" commands.
+    """
+
+    def __init__(self, name, filename, suppress_exceptions=False):
+        """
+        Connect to the monitor socket and find the (qemu) prompt.
+
+        @param name: Monitor identifier (a string)
+        @param filename: Monitor socket filename
+        @raise MonitorConnectError: Raised if the connection fails and
+                suppress_exceptions is False
+        @raise MonitorProtocolError: Raised if the initial (qemu) prompt isn't
+                found and suppress_exceptions is False
+        @note: Other exceptions may be raised.  See cmd()'s
+                docstring.
+        """
+        try:
+            Monitor.__init__(self, name, filename)
+
+            self.protocol = "human"
+
+            # Find the initial (qemu) prompt
+            s, o = self._read_up_to_qemu_prompt(20)
+            if not s:
+                raise MonitorProtocolError("Could not find (qemu) prompt "
+                                           "after connecting to monitor. "
+                                           "Output so far: %r" % o)
+
+            # Save the output of 'help' for future use
+            self._help_str = self.cmd("help")
+
+        except MonitorError, e:
+            if suppress_exceptions:
+                logging.warn(e)
+            else:
+                raise
+
+
+    # Private methods
+
+    def _read_up_to_qemu_prompt(self, timeout=20):
+        s = ""
+        end_time = time.time() + timeout
+        while self._data_available(end_time - time.time()):
+            data = self._recvall()
+            if not data:
+                break
+            s += data
+            try:
+                if s.splitlines()[-1].split()[-1] == "(qemu)":
+                    return True, "\n".join(s.splitlines()[:-1])
+            except IndexError:
+                continue
+        return False, "\n".join(s.splitlines())
+
+
+    def _send(self, cmd):
+        """
+        Send a command without waiting for output.
+
+        @param cmd: Command to send
+        @raise MonitorLockError: Raised if the lock cannot be acquired
+        @raise MonitorSocketError: Raised if a socket error occurs
+        """
+        if not self._acquire_lock(20):
+            raise MonitorLockError("Could not acquire exclusive lock to send "
+                                   "monitor command '%s'" % cmd)
+
+        try:
+            try:
+                self._socket.sendall(cmd + "\n")
+            except socket.error, e:
+                raise MonitorSocketError("Could not send monitor command %r" %
+                                         cmd, e)
+
+        finally:
+            self._lock.release()
+
+
+    # Public methods
+
+    def cmd(self, command, timeout=20):
+        """
+        Send command to the monitor.
+
+        @param command: Command to send to the monitor
+        @param timeout: Time duration to wait for the (qemu) prompt to return
+        @return: Output received from the monitor
+        @raise MonitorLockError: Raised if the lock cannot be acquired
+        @raise MonitorSocketError: Raised if a socket error occurs
+        @raise MonitorProtocolError: Raised if the (qemu) prompt cannot be
+                found after sending the command
+        """
+        if not self._acquire_lock(20):
+            raise MonitorLockError("Could not acquire exclusive lock to send "
+                                   "monitor command '%s'" % command)
+
+        try:
+            # Read any data that might be available
+            self._recvall()
+            # Send command
+            self._send(command)
+            # Read output
+            s, o = self._read_up_to_qemu_prompt(timeout)
+            # Remove command echo from output
+            o = "\n".join(o.splitlines()[1:])
+            # Report success/failure
+            if s:
+                return o
+            else:
+                msg = ("Could not find (qemu) prompt after command '%s'. "
+                       "Output so far: %r" % (command, o))
+                raise MonitorProtocolError(msg)
+
+        finally:
+            self._lock.release()
+
+
+    def verify_responsive(self):
+        """
+        Make sure the monitor is responsive by sending a command.
+        """
+        self.cmd("info status")
+
+
+    # Command wrappers
+    # Notes:
+    # - All of the following commands raise exceptions in a similar manner to
+    #   cmd().
+    # - A command wrapper should use self._help_str if it requires information
+    #   about the monitor's capabilities.
+
+    def quit(self):
+        """
+        Send "quit" without waiting for output.
+        """
+        self._send("quit")
+
+
+    def info(self, what):
+        """
+        Request info about something and return the output.
+        """
+        return self.cmd("info %s" % what)
+
+
+    def query(self, what):
+        """
+        Alias for info.
+        """
+        return self.info(what)
+
+
+    def screendump(self, filename):
+        """
+        Request a screendump.
+
+        @param filename: Location for the screendump
+        @return: The command's output
+        """
+        return self.cmd("screendump %s" % filename)
+
+
+    def migrate(self, uri, full_copy=False, incremental_copy=False, wait=False):
+        """
+        Migrate.
+
+        @param uri: destination URI
+        @param full_copy: If true, migrate with full disk copy
+        @param incremental_copy: If true, migrate with incremental disk copy
+        @param wait: If true, wait for completion
+        @return: The command's output
+        """
+        cmd = "migrate"
+        if not wait:
+            cmd += " -d"
+        if full_copy:
+            cmd += " -b"
+        if incremental_copy:
+            cmd += " -i"
+        cmd += " %s" % uri
+        return self.cmd(cmd)
+
+
+    def migrate_set_speed(self, value):
+        """
+        Set maximum speed (in bytes/sec) for migrations.
+
+        @param value: Speed in bytes/sec
+        @return: The command's output
+        """
+        return self.cmd("migrate_set_speed %s" % value)
+
+
+    def sendkey(self, keystr, hold_time=1):
+        """
+        Send key combination to VM.
+
+        @param keystr: Key combination string
+        @param hold_time: Hold time in ms (should normally stay 1 ms)
+        @return: The command's output
+        """
+        return self.cmd("sendkey %s %s" % (keystr, hold_time))
+
+
+    def mouse_move(self, dx, dy):
+        """
+        Move mouse.
+
+        @param dx: X amount
+        @param dy: Y amount
+        @return: The command's output
+        """
+        return self.cmd("mouse_move %d %d" % (dx, dy))
+
+
+    def mouse_button(self, state):
+        """
+        Set mouse button state.
+
+        @param state: Button state (1=L, 2=M, 4=R)
+        @return: The command's output
+        """
+        return self.cmd("mouse_button %d" % state)
+
+
+class QMPMonitor(Monitor):
+    """
+    Wraps QMP monitor commands.
+    """
+
+    def __init__(self, name, filename, suppress_exceptions=False):
+        """
+        Connect to the monitor socket, read the greeting message and issue the
+        qmp_capabilities command.  Also make sure the json module is available.
+
+        @param name: Monitor identifier (a string)
+        @param filename: Monitor socket filename
+        @raise MonitorConnectError: Raised if the connection fails and
+                suppress_exceptions is False
+        @raise MonitorProtocolError: Raised if no QMP greeting message is
+                received and suppress_exceptions is False
+        @raise MonitorNotSupportedError: Raised if json isn't available and
+                suppress_exceptions is False
+        @note: Other exceptions may be raised if the qmp_capabilities command
+                fails.  See cmd()'s docstring.
+        """
+        try:
+            Monitor.__init__(self, name, filename)
+
+            self.protocol = "qmp"
+            self._greeting = None
+            self._events = []
+
+            # Make sure json is available
+            try:
+                json
+            except NameError:
+                raise MonitorNotSupportedError("QMP requires the json module "
+                                               "(Python 2.6 and up)")
+
+            # Read greeting message
+            end_time = time.time() + 20
+            while time.time() < end_time:
+                for obj in self._read_objects():
+                    if "QMP" in obj:
+                        self._greeting = obj
+                        break
+                if self._greeting:
+                    break
+                time.sleep(0.1)
+            else:
+                raise MonitorProtocolError("No QMP greeting message received")
+
+            # Issue qmp_capabilities
+            self.cmd("qmp_capabilities")
+
+        except MonitorError, e:
+            if suppress_exceptions:
+                logging.warn(e)
+            else:
+                raise
+
+
+    # Private methods
+
+    def _build_cmd(self, cmd, args=None, id=None):
+        obj = {"execute": cmd}
+        if args is not None:
+            obj["arguments"] = args
+        if id is not None:
+            obj["id"] = id
+        return obj
+
+
+    def _read_objects(self, timeout=5):
+        """
+        Read lines from the monitor and try to decode them.
+        Stop when all available lines have been successfully decoded, or when
+        timeout expires.  If any decoded objects are asynchronous events, store
+        them in self._events.  Return all decoded objects.
+
+        @param timeout: Time to wait for all lines to decode successfully
+        @return: A list of objects
+        """
+        if not self._data_available():
+            return []
+        s = ""
+        end_time = time.time() + timeout
+        while self._data_available(end_time - time.time()):
+            s += self._recvall()
+            # Make sure all lines are decodable
+            for line in s.splitlines():
+                if line:
+                    try:
+                        json.loads(line)
+                    except ValueError:
+                        # Found an incomplete or broken line -- keep reading
+                        break
+            else:
+                # All lines are OK -- stop reading
+                break
+        # Decode all decodable lines
+        objs = []
+        for line in s.splitlines():
+            try:
+                objs += [json.loads(line)]
+            except ValueError:
+                pass
+        # Keep track of asynchronous events
+        self._events += [obj for obj in objs if "event" in obj]
+        return objs
+
+
+    def _send(self, data):
+        """
+        Send raw data without waiting for response.
+
+        @param data: Data to send
+        @raise MonitorSocketError: Raised if a socket error occurs
+        """
+        try:
+            self._socket.sendall(data)
+        except socket.error, e:
+            raise MonitorSocketError("Could not send data: %r" % data, e)
+
+
+    def _get_response(self, id=None, timeout=20):
+        """
+        Read a response from the QMP monitor.
+
+        @param id: If not None, look for a response with this id
+        @param timeout: Time duration to wait for response
+        @return: The response dict, or None if none was found
+        """
+        end_time = time.time() + timeout
+        while self._data_available(end_time - time.time()):
+            for obj in self._read_objects():
+                if isinstance(obj, dict):
+                    if id is not None and obj.get("id") != id:
+                        continue
+                    if "return" in obj or "error" in obj:
+                        return obj
+
+
+    # Public methods
+
+    def cmd(self, cmd, args=None, timeout=20):
+        """
+        Send a QMP monitor command and return the response.
+
+        Note: an id is automatically assigned to the command and the response
+        is checked for the presence of the same id.
+
+        @param cmd: Command to send
+        @param args: A dict containing command arguments, or None
+        @param timeout: Time duration to wait for response
+        @return: The response received
+        @raise MonitorLockError: Raised if the lock cannot be acquired
+        @raise MonitorSocketError: Raised if a socket error occurs
+        @raise MonitorProtocolError: Raised if no response is received
+        @raise QMPCmdError: Raised if the response is an error message
+                (the exception's args are (cmd, args, data) where data is the
+                error data)
+        """
+        if not self._acquire_lock(20):
+            raise MonitorLockError("Could not acquire exclusive lock to send "
+                                   "QMP command '%s'" % cmd)
+
+        try:
+            # Read any data that might be available
+            self._read_objects()
+            # Send command
+            id = virt_utils.generate_random_string(8)
+            self._send(json.dumps(self._build_cmd(cmd, args, id)) + "\n")
+            # Read response
+            r = self._get_response(id, timeout)
+            if r is None:
+                raise MonitorProtocolError("Received no response to QMP "
+                                           "command '%s', or received a "
+                                           "response with an incorrect id"
+                                           % cmd)
+            if "return" in r:
+                return r["return"]
+            if "error" in r:
+                raise QMPCmdError(cmd, args, r["error"])
+
+        finally:
+            self._lock.release()
+
+
+    def cmd_raw(self, data, timeout=20):
+        """
+        Send a raw string to the QMP monitor and return the response.
+        Unlike cmd(), return the raw response dict without performing any
+        checks on it.
+
+        @param data: The data to send
+        @param timeout: Time duration to wait for response
+        @return: The response received
+        @raise MonitorLockError: Raised if the lock cannot be acquired
+        @raise MonitorSocketError: Raised if a socket error occurs
+        @raise MonitorProtocolError: Raised if no response is received
+        """
+        if not self._acquire_lock(20):
+            raise MonitorLockError("Could not acquire exclusive lock to send "
+                                   "data: %r" % data)
+
+        try:
+            self._read_objects()
+            self._send(data)
+            r = self._get_response(None, timeout)
+            if r is None:
+                raise MonitorProtocolError("Received no response to data: %r" %
+                                           data)
+            return r
+
+        finally:
+            self._lock.release()
+
+
+    def cmd_obj(self, obj, timeout=20):
+        """
+        Transform a Python object to JSON, send the resulting string to the QMP
+        monitor, and return the response.
+        Unlike cmd(), return the raw response dict without performing any
+        checks on it.
+
+        @param obj: The object to send
+        @param timeout: Time duration to wait for response
+        @return: The response received
+        @raise MonitorLockError: Raised if the lock cannot be acquired
+        @raise MonitorSocketError: Raised if a socket error occurs
+        @raise MonitorProtocolError: Raised if no response is received
+        """
+        return self.cmd_raw(json.dumps(obj) + "\n", timeout)
+
+
+    def cmd_qmp(self, cmd, args=None, id=None, timeout=20):
+        """
+        Build a QMP command from the passed arguments, send it to the monitor
+        and return the response.
+        Unlike cmd(), return the raw response dict without performing any
+        checks on it.
+
+        @param cmd: Command to send
+        @param args: A dict containing command arguments, or None
+        @param id:  An id for the command, or None
+        @param timeout: Time duration to wait for response
+        @return: The response received
+        @raise MonitorLockError: Raised if the lock cannot be acquired
+        @raise MonitorSocketError: Raised if a socket error occurs
+        @raise MonitorProtocolError: Raised if no response is received
+        """
+        return self.cmd_obj(self._build_cmd(cmd, args, id), timeout)
+
+
+    def verify_responsive(self):
+        """
+        Make sure the monitor is responsive by sending a command.
+        """
+        self.cmd("query-status")
+
+
+    def get_events(self):
+        """
+        Return a list of the asynchronous events received since the last
+        clear_events() call.
+
+        @return: A list of events (the objects returned have an "event" key)
+        @raise MonitorLockError: Raised if the lock cannot be acquired
+        """
+        if not self._acquire_lock(20):
+            raise MonitorLockError("Could not acquire exclusive lock to read "
+                                   "QMP events")
+        try:
+            self._read_objects()
+            return self._events[:]
+        finally:
+            self._lock.release()
+
+
+    def get_event(self, name):
+        """
+        Look for an event with the given name in the list of events.
+
+        @param name: The name of the event to look for (e.g. 'RESET')
+        @return: An event object or None if none is found
+        """
+        for e in self.get_events():
+            if e.get("event") == name:
+                return e
+
+
+    def clear_events(self):
+        """
+        Clear the list of asynchronous events.
+
+        @raise MonitorLockError: Raised if the lock cannot be acquired
+        """
+        if not self._acquire_lock(20):
+            raise MonitorLockError("Could not acquire exclusive lock to clear "
+                                   "QMP event list")
+        self._events = []
+        self._lock.release()
+
+
+    def get_greeting(self):
+        """
+        Return QMP greeting message.
+        """
+        return self._greeting
+
+
+    # Command wrappers
+    # Note: all of the following functions raise exceptions in a similar manner
+    # to cmd().
+
+    def quit(self):
+        """
+        Send "quit" and return the response.
+        """
+        return self.cmd("quit")
+
+
+    def info(self, what):
+        """
+        Request info about something and return the response.
+        """
+        return self.cmd("query-%s" % what)
+
+
+    def query(self, what):
+        """
+        Alias for info.
+        """
+        return self.info(what)
+
+
+    def screendump(self, filename):
+        """
+        Request a screendump.
+
+        @param filename: Location for the screendump
+        @return: The response to the command
+        """
+        args = {"filename": filename}
+        return self.cmd("screendump", args)
+
+
+    def migrate(self, uri, full_copy=False, incremental_copy=False, wait=False):
+        """
+        Migrate.
+
+        @param uri: destination URI
+        @param full_copy: If true, migrate with full disk copy
+        @param incremental_copy: If true, migrate with incremental disk copy
+        @param wait: If true, wait for completion (currently ignored)
+        @return: The response to the command
+        """
+        args = {"uri": uri,
+                "blk": full_copy,
+                "inc": incremental_copy}
+        return self.cmd("migrate", args)
+
+
+    def migrate_set_speed(self, value):
+        """
+        Set maximum speed (in bytes/sec) for migrations.
+
+        @param value: Speed in bytes/sec
+        @return: The response to the command
+        """
+        args = {"value": value}
+        return self.cmd("migrate_set_speed", args)
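[Editor's note, not part of the patch: cmd_qmp() above builds a QMP command object and serializes it to JSON before writing it to the monitor socket. A minimal standalone sketch of that wire format follows; build_qmp_cmd is a hypothetical helper mirroring what _build_cmd is expected to produce, not code from this series.]

```python
import json

def build_qmp_cmd(cmd, args=None, id=None):
    # QMP commands are JSON objects with an "execute" key plus
    # optional "arguments" and "id" keys.
    obj = {"execute": cmd}
    if args is not None:
        obj["arguments"] = args
    if id is not None:
        obj["id"] = id
    return json.dumps(obj)

# e.g. the migrate_set_speed wrapper above would put on the wire:
print(build_qmp_cmd("migrate_set_speed", {"value": 1048576}))
```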
diff --git a/client/virt/kvm_vm.py b/client/virt/kvm_vm.py
new file mode 100755
index 0000000..82dae3e
--- /dev/null
+++ b/client/virt/kvm_vm.py
@@ -0,0 +1,1500 @@
+#!/usr/bin/python
+"""
+Utility classes and functions to handle Virtual Machine creation using qemu.
+
+@copyright: 2008-2009 Red Hat Inc.
+"""
+
+import time, os, logging, fcntl, re, commands, glob
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+import virt_utils, virt_vm, kvm_monitor, aexpect
+
+
+class VM:
+    """
+    This class handles all basic VM operations.
+    """
+
+    def __init__(self, name, params, root_dir, address_cache, state=None):
+        """
+        Initialize the object and set a few attributes.
+
+        @param name: The name of the object
+        @param params: A dict containing VM params
+                (see method make_qemu_command for a full description)
+        @param root_dir: Base directory for relative filenames
+        @param address_cache: A dict that maps MAC addresses to IP addresses
+        @param state: If provided, use this as self.__dict__
+        """
+        if state:
+            self.__dict__ = state
+        else:
+            self.process = None
+            self.serial_console = None
+            self.redirs = {}
+            self.vnc_port = 5900
+            self.monitors = []
+            self.pci_assignable = None
+            self.netdev_id = []
+            self.device_id = []
+            self.uuid = None
+
+            # Find a unique identifier for this VM
+            while True:
+                self.instance = (time.strftime("%Y%m%d-%H%M%S-") +
+                                 virt_utils.generate_random_string(4))
+                if not glob.glob("/tmp/*%s" % self.instance):
+                    break
+
+        self.name = name
+        self.params = params
+        self.root_dir = root_dir
+        self.address_cache = address_cache
+
+
+    def clone(self, name=None, params=None, root_dir=None, address_cache=None,
+              copy_state=False):
+        """
+        Return a clone of the VM object with optionally modified parameters.
+        The clone is initially not alive and needs to be started using create().
+        Any parameters not passed to this function are copied from the source
+        VM.
+
+        @param name: Optional new VM name
+        @param params: Optional new VM creation parameters
+        @param root_dir: Optional new base directory for relative filenames
+        @param address_cache: A dict that maps MAC addresses to IP addresses
+        @param copy_state: If True, copy the original VM's state to the clone.
+                Mainly useful for make_qemu_command().
+        """
+        if name is None:
+            name = self.name
+        if params is None:
+            params = self.params.copy()
+        if root_dir is None:
+            root_dir = self.root_dir
+        if address_cache is None:
+            address_cache = self.address_cache
+        if copy_state:
+            state = self.__dict__.copy()
+        else:
+            state = None
+        return VM(name, params, root_dir, address_cache, state)
+
+
+    def make_qemu_command(self, name=None, params=None, root_dir=None):
+        """
+        Generate a qemu command line. All parameters are optional. If a
+        parameter is not supplied, the corresponding value stored in the
+        class attributes is used.
+
+        @param name: The name of the object
+        @param params: A dict containing VM params
+        @param root_dir: Base directory for relative filenames
+
+        @note: The params dict should contain:
+               mem -- memory size in MBs
+               cdrom -- ISO filename to use with the qemu -cdrom parameter
+               extra_params -- a string to append to the qemu command
+               shell_port -- port of the remote shell daemon on the guest
+               (SSH, Telnet or the home-made Remote Shell Server)
+               shell_client -- client program to use for connecting to the
+               remote shell daemon on the guest (ssh, telnet or nc)
+               x11_display -- if specified, the DISPLAY environment variable
+               will be set to this value for the qemu process (useful for
+               SDL rendering)
+               images -- a list of image object names, separated by spaces
+               nics -- a list of NIC object names, separated by spaces
+
+               For each image in images:
+               drive_format -- string to pass as 'if' parameter for this
+               image (e.g. ide, scsi)
+               image_snapshot -- if yes, pass 'snapshot=on' to qemu for
+               this image
+               image_boot -- if yes, pass 'boot=on' to qemu for this image
+               In addition, all parameters required by get_image_filename.
+
+               For each NIC in nics:
+               nic_model -- string to pass as 'model' parameter for this
+               NIC (e.g. e1000)
+        """
+        # Helper function for command line option wrappers
+        def has_option(help, option):
+            return bool(re.search(r"^-%s(\s|$)" % option, help, re.MULTILINE))
+
+        # Wrappers for all supported qemu command line parameters.
+        # This is meant to allow support for multiple qemu versions.
+        # Each of these functions receives the output of 'qemu -help' as a
+        # parameter, and should add the requested command line option
+        # accordingly.
+
+        def add_name(help, name):
+            return " -name '%s'" % name
+
+        def add_human_monitor(help, filename):
+            return " -monitor unix:'%s',server,nowait" % filename
+
+        def add_qmp_monitor(help, filename):
+            return " -qmp unix:'%s',server,nowait" % filename
+
+        def add_serial(help, filename):
+            return " -serial unix:'%s',server,nowait" % filename
+
+        def add_mem(help, mem):
+            return " -m %s" % mem
+
+        def add_smp(help, smp):
+            return " -smp %s" % smp
+
+        def add_cdrom(help, filename, index=None):
+            if has_option(help, "drive"):
+                cmd = " -drive file='%s',media=cdrom" % filename
+                if index is not None: cmd += ",index=%s" % index
+                return cmd
+            else:
+                return " -cdrom '%s'" % filename
+
+        def add_drive(help, filename, index=None, format=None, cache=None,
+                      werror=None, serial=None, snapshot=False, boot=False):
+            cmd = " -drive file='%s'" % filename
+            if index is not None:
+                cmd += ",index=%s" % index
+            if format:
+                cmd += ",if=%s" % format
+            if cache:
+                cmd += ",cache=%s" % cache
+            if werror:
+                cmd += ",werror=%s" % werror
+            if serial:
+                cmd += ",serial='%s'" % serial
+            if snapshot:
+                cmd += ",snapshot=on"
+            if boot:
+                cmd += ",boot=on"
+            return cmd
+
+        def add_nic(help, vlan, model=None, mac=None, device_id=None,
+                    netdev_id=None, nic_extra_params=None):
+            if has_option(help, "netdev"):
+                netdev_vlan_str = ",netdev=%s" % netdev_id
+            else:
+                netdev_vlan_str = ",vlan=%d" % vlan
+            if has_option(help, "device"):
+                if not model:
+                    model = "rtl8139"
+                elif model == "virtio":
+                    model = "virtio-net-pci"
+                cmd = " -device %s" % model + netdev_vlan_str
+                if mac:
+                    cmd += ",mac='%s'" % mac
+                if nic_extra_params:
+                    cmd += ",%s" % nic_extra_params
+            else:
+                cmd = " -net nic" + netdev_vlan_str
+                if model:
+                    cmd += ",model=%s" % model
+                if mac:
+                    cmd += ",macaddr='%s'" % mac
+            if device_id:
+                cmd += ",id='%s'" % device_id
+            return cmd
+
+        def add_net(help, vlan, mode, ifname=None, script=None,
+                    downscript=None, tftp=None, bootfile=None, hostfwd=None,
+                    netdev_id=None, netdev_extra_params=None):
+            hostfwd = hostfwd or []
+            if has_option(help, "netdev"):
+                cmd = " -netdev %s,id=%s" % (mode, netdev_id)
+                if netdev_extra_params:
+                    cmd += ",%s" % netdev_extra_params
+            else:
+                cmd = " -net %s,vlan=%d" % (mode, vlan)
+            if mode == "tap":
+                if ifname: cmd += ",ifname='%s'" % ifname
+                if script: cmd += ",script='%s'" % script
+                cmd += ",downscript='%s'" % (downscript or "no")
+            elif mode == "user":
+                if tftp and "[,tftp=" in help:
+                    cmd += ",tftp='%s'" % tftp
+                if bootfile and "[,bootfile=" in help:
+                    cmd += ",bootfile='%s'" % bootfile
+                if "[,hostfwd=" in help:
+                    for host_port, guest_port in hostfwd:
+                        cmd += ",hostfwd=tcp::%s-:%s" % (host_port, guest_port)
+            return cmd
+
+        def add_floppy(help, filename):
+            return " -fda '%s'" % filename
+
+        def add_tftp(help, filename):
+            # If the new syntax is supported, don't add -tftp
+            if "[,tftp=" in help:
+                return ""
+            else:
+                return " -tftp '%s'" % filename
+
+        def add_bootp(help, filename):
+            # If the new syntax is supported, don't add -bootp
+            if "[,bootfile=" in help:
+                return ""
+            else:
+                return " -bootp '%s'" % filename
+
+        def add_tcp_redir(help, host_port, guest_port):
+            # If the new syntax is supported, don't add -redir
+            if "[,hostfwd=" in help:
+                return ""
+            else:
+                return " -redir tcp:%s::%s" % (host_port, guest_port)
+
+        def add_vnc(help, vnc_port):
+            return " -vnc :%d" % (vnc_port - 5900)
+
+        def add_sdl(help):
+            if has_option(help, "sdl"):
+                return " -sdl"
+            else:
+                return ""
+
+        def add_nographic(help):
+            return " -nographic"
+
+        def add_uuid(help, uuid):
+            return " -uuid '%s'" % uuid
+
+        def add_pcidevice(help, host):
+            return " -pcidevice host='%s'" % host
+
+        def add_kernel(help, filename):
+            return " -kernel '%s'" % filename
+
+        def add_initrd(help, filename):
+            return " -initrd '%s'" % filename
+
+        def add_kernel_cmdline(help, cmdline):
+            return " -append '%s'" % cmdline
+
+        def add_testdev(help, filename):
+            return (" -chardev file,id=testlog,path=%s"
+                    " -device testdev,chardev=testlog" % filename)
+
+        def add_no_hpet(help):
+            if has_option(help, "no-hpet"):
+                return " -no-hpet"
+            else:
+                return ""
+
+        # End of command line option wrappers
+
+        if name is None:
+            name = self.name
+        if params is None:
+            params = self.params
+        if root_dir is None:
+            root_dir = self.root_dir
+
+        # Clone this VM using the new params
+        vm = self.clone(name, params, root_dir, copy_state=True)
+
+        qemu_binary = virt_utils.get_path(root_dir, params.get("qemu_binary",
+                                                              "qemu"))
+        # Get the output of 'qemu -help' (log a message in case this call never
+        # returns or causes some other kind of trouble)
+        logging.debug("Getting output of 'qemu -help'")
+        help = commands.getoutput("%s -help" % qemu_binary)
+
+        # Start constructing the qemu command
+        qemu_cmd = ""
+        # Set the X11 display parameter if requested
+        if params.get("x11_display"):
+            qemu_cmd += "DISPLAY=%s " % params.get("x11_display")
+        # Add the qemu binary
+        qemu_cmd += qemu_binary
+        # Add the VM's name
+        qemu_cmd += add_name(help, name)
+        # Add monitors
+        for monitor_name in params.objects("monitors"):
+            monitor_params = params.object_params(monitor_name)
+            monitor_filename = vm.get_monitor_filename(monitor_name)
+            if monitor_params.get("monitor_type") == "qmp":
+                qemu_cmd += add_qmp_monitor(help, monitor_filename)
+            else:
+                qemu_cmd += add_human_monitor(help, monitor_filename)
+
+        # Add serial console redirection
+        qemu_cmd += add_serial(help, vm.get_serial_console_filename())
+
+        for image_name in params.objects("images"):
+            image_params = params.object_params(image_name)
+            if image_params.get("boot_drive") == "no":
+                continue
+            qemu_cmd += add_drive(help,
+                             virt_vm.get_image_filename(image_params, root_dir),
+                                  image_params.get("drive_index"),
+                                  image_params.get("drive_format"),
+                                  image_params.get("drive_cache"),
+                                  image_params.get("drive_werror"),
+                                  image_params.get("drive_serial"),
+                                  image_params.get("image_snapshot") == "yes",
+                                  image_params.get("image_boot") == "yes")
+
+        redirs = []
+        for redir_name in params.objects("redirs"):
+            redir_params = params.object_params(redir_name)
+            guest_port = int(redir_params.get("guest_port"))
+            host_port = vm.redirs.get(guest_port)
+            redirs += [(host_port, guest_port)]
+
+        vlan = 0
+        for nic_name in params.objects("nics"):
+            nic_params = params.object_params(nic_name)
+            try:
+                netdev_id = vm.netdev_id[vlan]
+                device_id = vm.device_id[vlan]
+            except IndexError:
+                netdev_id = None
+                device_id = None
+            # Handle the '-net nic' part
+            try:
+                mac = vm.get_mac_address(vlan)
+            except virt_vm.VMAddressError:
+                mac = None
+            qemu_cmd += add_nic(help, vlan, nic_params.get("nic_model"), mac,
+                                device_id, netdev_id, nic_params.get("nic_extra_params"))
+            # Handle the '-net tap' or '-net user' or '-netdev' part
+            script = nic_params.get("nic_script")
+            downscript = nic_params.get("nic_downscript")
+            tftp = nic_params.get("tftp")
+            if script:
+                script = virt_utils.get_path(root_dir, script)
+            if downscript:
+                downscript = virt_utils.get_path(root_dir, downscript)
+            if tftp:
+                tftp = virt_utils.get_path(root_dir, tftp)
+            qemu_cmd += add_net(help, vlan, nic_params.get("nic_mode", "user"),
+                                vm.get_ifname(vlan),
+                                script, downscript, tftp,
+                                nic_params.get("bootp"), redirs, netdev_id,
+                                nic_params.get("netdev_extra_params"))
+            # Proceed to next NIC
+            vlan += 1
+
+        mem = params.get("mem")
+        if mem:
+            qemu_cmd += add_mem(help, mem)
+
+        smp = params.get("smp")
+        if smp:
+            qemu_cmd += add_smp(help, smp)
+
+        for cdrom in params.objects("cdroms"):
+            cdrom_params = params.object_params(cdrom)
+            iso = cdrom_params.get("cdrom")
+            if iso:
+                qemu_cmd += add_cdrom(help, virt_utils.get_path(root_dir, iso),
+                                      cdrom_params.get("drive_index"))
+
+        # We may want to add a {floppy_opts} parameter for -fda
+        # {fat:floppy:}/path/. However, vvfat is not usually recommended.
+        floppy = params.get("floppy")
+        if floppy:
+            floppy = virt_utils.get_path(root_dir, floppy)
+            qemu_cmd += add_floppy(help, floppy)
+
+        tftp = params.get("tftp")
+        if tftp:
+            tftp = virt_utils.get_path(root_dir, tftp)
+            qemu_cmd += add_tftp(help, tftp)
+
+        bootp = params.get("bootp")
+        if bootp:
+            qemu_cmd += add_bootp(help, bootp)
+
+        kernel = params.get("kernel")
+        if kernel:
+            kernel = virt_utils.get_path(root_dir, kernel)
+            qemu_cmd += add_kernel(help, kernel)
+
+        kernel_cmdline = params.get("kernel_cmdline")
+        if kernel_cmdline:
+            qemu_cmd += add_kernel_cmdline(help, kernel_cmdline)
+
+        initrd = params.get("initrd")
+        if initrd:
+            initrd = virt_utils.get_path(root_dir, initrd)
+            qemu_cmd += add_initrd(help, initrd)
+
+        for host_port, guest_port in redirs:
+            qemu_cmd += add_tcp_redir(help, host_port, guest_port)
+
+        if params.get("display") == "vnc":
+            qemu_cmd += add_vnc(help, vm.vnc_port)
+        elif params.get("display") == "sdl":
+            qemu_cmd += add_sdl(help)
+        elif params.get("display") == "nographic":
+            qemu_cmd += add_nographic(help)
+
+        if params.get("uuid") == "random":
+            qemu_cmd += add_uuid(help, vm.uuid)
+        elif params.get("uuid"):
+            qemu_cmd += add_uuid(help, params.get("uuid"))
+
+        if params.get("testdev") == "yes":
+            qemu_cmd += add_testdev(help, vm.get_testlog_filename())
+
+        if params.get("disable_hpet") == "yes":
+            qemu_cmd += add_no_hpet(help)
+
+        # If the PCI assignment step went OK, add each one of the PCI assigned
+        # devices to the qemu command line.
+        if vm.pci_assignable:
+            for pci_id in vm.pa_pci_ids:
+                qemu_cmd += add_pcidevice(help, pci_id)
+
+        extra_params = params.get("extra_params")
+        if extra_params:
+            qemu_cmd += " %s" % extra_params
+
+        return qemu_cmd
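[Editor's note, not part of the patch: the option wrappers above key every optional flag off the text of 'qemu -help', so one test code base can drive multiple qemu versions. A standalone sketch of that capability check; sample_help is fabricated for illustration.]

```python
import re

def has_option(help_text, option):
    # Same detection pattern as in make_qemu_command(): an option is
    # considered supported if a help line starts with "-<option>"
    # followed by whitespace or end of line.
    return bool(re.search(r"^-%s(\s|$)" % option, help_text, re.MULTILINE))

sample_help = ("-drive [file=file][,if=type][,index=i]\n"
               "-no-hpet         disable HPET\n")
print(has_option(sample_help, "drive"))    # supported: use new -drive syntax
print(has_option(sample_help, "netdev"))   # absent: fall back to -net
```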
+
+
+    @error.context_aware
+    def create(self, name=None, params=None, root_dir=None, timeout=5.0,
+               migration_mode=None, mac_source=None):
+        """
+        Start the VM by running a qemu command.
+        All parameters are optional. If name, params or root_dir are not
+        supplied, the respective values stored as class attributes are used.
+
+        @param name: The name of the object
+        @param params: A dict containing VM params
+        @param root_dir: Base directory for relative filenames
+        @param migration_mode: If supplied, start VM for incoming migration
+                using this protocol (either 'tcp', 'unix' or 'exec')
+        @param mac_source: A VM object from which to copy MAC addresses. If not
+                specified, new addresses will be generated.
+
+        @raise VMCreateError: If qemu terminates unexpectedly
+        @raise VMKVMInitError: If KVM initialization fails
+        @raise VMHugePageError: If hugepage initialization fails
+        @raise VMImageMissingError: If a CD image is missing
+        @raise VMHashMismatchError: If a CD image hash doesn't match the
+                expected hash
+        @raise VMBadPATypeError: If an unsupported PCI assignment type is
+                requested
+        @raise VMPAError: If no PCI assignable devices could be assigned
+        """
+        error.context("creating '%s'" % self.name)
+        self.destroy(free_mac_addresses=False)
+
+        if name is not None:
+            self.name = name
+        if params is not None:
+            self.params = params
+        if root_dir is not None:
+            self.root_dir = root_dir
+        name = self.name
+        params = self.params
+        root_dir = self.root_dir
+
+        # Verify the md5sum of the ISO images
+        for cdrom in params.objects("cdroms"):
+            cdrom_params = params.object_params(cdrom)
+            iso = cdrom_params.get("cdrom")
+            if iso:
+                iso = virt_utils.get_path(root_dir, iso)
+                if not os.path.exists(iso):
+                    raise virt_vm.VMImageMissingError(iso)
+                compare = False
+                if cdrom_params.get("md5sum_1m"):
+                    logging.debug("Comparing expected MD5 sum with MD5 sum of "
+                                  "first MB of ISO file...")
+                    actual_hash = utils.hash_file(iso, 1048576, method="md5")
+                    expected_hash = cdrom_params.get("md5sum_1m")
+                    compare = True
+                elif cdrom_params.get("md5sum"):
+                    logging.debug("Comparing expected MD5 sum with MD5 sum of "
+                                  "ISO file...")
+                    actual_hash = utils.hash_file(iso, method="md5")
+                    expected_hash = cdrom_params.get("md5sum")
+                    compare = True
+                elif cdrom_params.get("sha1sum"):
+                    logging.debug("Comparing expected SHA1 sum with SHA1 sum "
+                                  "of ISO file...")
+                    actual_hash = utils.hash_file(iso, method="sha1")
+                    expected_hash = cdrom_params.get("sha1sum")
+                    compare = True
+                if compare:
+                    if actual_hash == expected_hash:
+                        logging.debug("Hashes match")
+                    else:
+                        raise virt_vm.VMHashMismatchError(actual_hash,
+                                                          expected_hash)
+
+        # Make sure the following code is not executed by more than one thread
+        # at the same time
+        lockfile = open("/tmp/kvm-autotest-vm-create.lock", "w+")
+        fcntl.lockf(lockfile, fcntl.LOCK_EX)
+
+        try:
+            # Handle port redirections
+            redir_names = params.objects("redirs")
+            host_ports = virt_utils.find_free_ports(5000, 6000, len(redir_names))
+            self.redirs = {}
+            for i in range(len(redir_names)):
+                redir_params = params.object_params(redir_names[i])
+                guest_port = int(redir_params.get("guest_port"))
+                self.redirs[guest_port] = host_ports[i]
+
+            # Generate netdev/device IDs for all NICs
+            self.netdev_id = []
+            self.device_id = []
+            for nic in params.objects("nics"):
+                self.netdev_id.append(virt_utils.generate_random_id())
+                self.device_id.append(virt_utils.generate_random_id())
+
+            # Find available VNC port, if needed
+            if params.get("display") == "vnc":
+                self.vnc_port = virt_utils.find_free_port(5900, 6100)
+
+            # Find random UUID if specified 'uuid = random' in config file
+            if params.get("uuid") == "random":
+                f = open("/proc/sys/kernel/random/uuid")
+                self.uuid = f.read().strip()
+                f.close()
+
+            # Generate or copy MAC addresses for all NICs
+            num_nics = len(params.objects("nics"))
+            for vlan in range(num_nics):
+                nic_name = params.objects("nics")[vlan]
+                nic_params = params.object_params(nic_name)
+                mac = (nic_params.get("nic_mac") or
+                       mac_source and mac_source.get_mac_address(vlan))
+                if mac:
+                    virt_utils.set_mac_address(self.instance, vlan, mac)
+                else:
+                    virt_utils.generate_mac_address(self.instance, vlan)
+
+            # Assign a PCI assignable device
+            self.pci_assignable = None
+            pa_type = params.get("pci_assignable")
+            if pa_type and pa_type != "no":
+                pa_devices_requested = params.get("devices_requested")
+
+                # Virtual Functions (VF) assignable devices
+                if pa_type == "vf":
+                    self.pci_assignable = virt_utils.PciAssignable(
+                        type=pa_type,
+                        driver=params.get("driver"),
+                        driver_option=params.get("driver_option"),
+                        devices_requested=pa_devices_requested)
+                # Physical NIC (PF) assignable devices
+                elif pa_type == "pf":
+                    self.pci_assignable = virt_utils.PciAssignable(
+                        type=pa_type,
+                        names=params.get("device_names"),
+                        devices_requested=pa_devices_requested)
+                # Working with both VF and PF
+                elif pa_type == "mixed":
+                    self.pci_assignable = virt_utils.PciAssignable(
+                        type=pa_type,
+                        driver=params.get("driver"),
+                        driver_option=params.get("driver_option"),
+                        names=params.get("device_names"),
+                        devices_requested=pa_devices_requested)
+                else:
+                    raise virt_vm.VMBadPATypeError(pa_type)
+
+                self.pa_pci_ids = self.pci_assignable.request_devs()
+
+                if self.pa_pci_ids:
+                    logging.debug("Successfully assigned devices: %s",
+                                  self.pa_pci_ids)
+                else:
+                    raise virt_vm.VMPAError(pa_type)
+
+            # Make qemu command
+            qemu_command = self.make_qemu_command()
+
+            # Add migration parameters if required
+            if migration_mode == "tcp":
+                self.migration_port = virt_utils.find_free_port(5200, 6000)
+                qemu_command += " -incoming tcp:0:%d" % self.migration_port
+            elif migration_mode == "unix":
+                self.migration_file = "/tmp/migration-unix-%s" % self.instance
+                qemu_command += " -incoming unix:%s" % self.migration_file
+            elif migration_mode == "exec":
+                self.migration_port = virt_utils.find_free_port(5200, 6000)
+                qemu_command += (' -incoming "exec:nc -l %s"' %
+                                 self.migration_port)
+
+            logging.info("Running qemu command:\n%s", qemu_command)
+            self.process = aexpect.run_bg(qemu_command, None,
+                                          logging.info, "(qemu) ")
+
+            # Make sure the process was started successfully
+            if not self.process.is_alive():
+                e = virt_vm.VMCreateError(qemu_command,
+                                          self.process.get_status(),
+                                          self.process.get_output())
+                self.destroy()
+                raise e
+
+            # Establish monitor connections
+            self.monitors = []
+            for monitor_name in params.objects("monitors"):
+                monitor_params = params.object_params(monitor_name)
+                # Wait for monitor connection to succeed
+                end_time = time.time() + timeout
+                while time.time() < end_time:
+                    try:
+                        if monitor_params.get("monitor_type") == "qmp":
+                            # Add a QMP monitor
+                            monitor = kvm_monitor.QMPMonitor(
+                                monitor_name,
+                                self.get_monitor_filename(monitor_name))
+                        else:
+                            # Add a "human" monitor
+                            monitor = kvm_monitor.HumanMonitor(
+                                monitor_name,
+                                self.get_monitor_filename(monitor_name))
+                        monitor.verify_responsive()
+                        break
+                    except kvm_monitor.MonitorError, e:
+                        logging.warn(e)
+                        time.sleep(1)
+                else:
+                    self.destroy()
+                    raise e
+                # Add this monitor to the list
+                self.monitors += [monitor]
+
+            # Get the output so far, to see if we have any problems with
+            # KVM modules or with hugepage setup.
+            output = self.process.get_output()
+
+            if re.search("Could not initialize KVM", output, re.IGNORECASE):
+                e = virt_vm.VMKVMInitError(qemu_command, self.process.get_output())
+                self.destroy()
+                raise e
+
+            if "alloc_mem_area" in output:
+                e = virt_vm.VMHugePageError(qemu_command, self.process.get_output())
+                self.destroy()
+                raise e
+
+            logging.debug("VM appears to be alive with PID %s", self.get_pid())
+
+            # Establish a session with the serial console -- requires a version
+            # of netcat that supports -U
+            self.serial_console = aexpect.ShellSession(
+                "nc -U %s" % self.get_serial_console_filename(),
+                auto_close=False,
+                output_func=virt_utils.log_line,
+                output_params=("serial-%s.log" % name,))
+
+        finally:
+            fcntl.lockf(lockfile, fcntl.LOCK_UN)
+            lockfile.close()
+
+
+    def destroy(self, gracefully=True, free_mac_addresses=True):
+        """
+        Destroy the VM.
+
+        If gracefully is True, first attempt to shutdown the VM with a shell
+        command.  Then, attempt to destroy the VM via the monitor with a 'quit'
+        command.  If that fails, send SIGKILL to the qemu process.
+
+        @param gracefully: If True, an attempt will be made to end the VM
+                using a shell command before trying to end the qemu process
+                with a 'quit' or a kill signal.
+        @param free_mac_addresses: If True, the MAC addresses used by the VM
+                will be freed.
+        """
+        try:
+            # Is it already dead?
+            if self.is_dead():
+                return
+
+            logging.debug("Destroying VM with PID %s...", self.get_pid())
+
+            if gracefully and self.params.get("shutdown_command"):
+                # Try to destroy with shell command
+                logging.debug("Trying to shutdown VM with shell command...")
+                try:
+                    session = self.login()
+                except (virt_utils.LoginError, virt_vm.VMError), e:
+                    logging.debug(e)
+                else:
+                    try:
+                        # Send the shutdown command
+                        session.sendline(self.params.get("shutdown_command"))
+                        logging.debug("Shutdown command sent; waiting for VM "
+                                      "to go down...")
+                        if virt_utils.wait_for(self.is_dead, 60, 1, 1):
+                            logging.debug("VM is down")
+                            return
+                    finally:
+                        session.close()
+
+            if self.monitor:
+                # Try to destroy with a monitor command
+                logging.debug("Trying to kill VM with monitor command...")
+                try:
+                    self.monitor.quit()
+                except kvm_monitor.MonitorError, e:
+                    logging.warn(e)
+                else:
+                    # Wait for the VM to be really dead
+                    if virt_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
+                        logging.debug("VM is down")
+                        return
+
+            # If the VM isn't dead yet...
+            logging.debug("Cannot quit normally; sending a kill to close the "
+                          "deal...")
+            virt_utils.kill_process_tree(self.process.get_pid(), 9)
+            # Wait for the VM to be really dead
+            if virt_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
+                logging.debug("VM is down")
+                return
+
+            logging.error("Process %s is a zombie!", self.process.get_pid())
+
+        finally:
+            self.monitors = []
+            if self.pci_assignable:
+                self.pci_assignable.release_devs()
+            if self.process:
+                self.process.close()
+            if self.serial_console:
+                self.serial_console.close()
+            for f in ([self.get_testlog_filename(),
+                       self.get_serial_console_filename()] +
+                      self.get_monitor_filenames()):
+                try:
+                    os.unlink(f)
+                except OSError:
+                    pass
+            if hasattr(self, "migration_file"):
+                try:
+                    os.unlink(self.migration_file)
+                except OSError:
+                    pass
+            if free_mac_addresses:
+                num_nics = len(self.params.objects("nics"))
+                for vlan in range(num_nics):
+                    self.free_mac_address(vlan)
+
+
+    @property
+    def monitor(self):
+        """
+        Return the main monitor object, selected by the parameter main_monitor.
+        If main_monitor isn't defined, return the first monitor.
+        If no monitors exist, or if main_monitor refers to a nonexistent
+        monitor, return None.
+        """
+        for m in self.monitors:
+            if m.name == self.params.get("main_monitor"):
+                return m
+        if self.monitors and not self.params.get("main_monitor"):
+            return self.monitors[0]
+
+
+    def verify_alive(self):
+        """
+        Make sure the VM is alive and that the main monitor is responsive.
+
+        @raise VMDeadError: If the VM is dead
+        @raise: Various monitor exceptions if the monitor is unresponsive
+        """
+        if self.is_dead():
+            raise virt_vm.VMDeadError(self.process.get_status(),
+                              self.process.get_output())
+        if self.monitors:
+            self.monitor.verify_responsive()
+
+
+    def is_alive(self):
+        """
+        Return True if the VM is alive and its monitor is responsive.
+        """
+        return not self.is_dead() and (not self.monitors or
+                                       self.monitor.is_responsive())
+
+
+    def is_dead(self):
+        """
+        Return True if the qemu process is dead.
+        """
+        return not self.process or not self.process.is_alive()
+
+
+    def get_params(self):
+        """
+        Return the VM's params dict. Most modified params take effect only
+        upon VM.create().
+        """
+        return self.params
+
+
+    def get_monitor_filename(self, monitor_name):
+        """
+        Return the filename corresponding to a given monitor name.
+        """
+        return "/tmp/monitor-%s-%s" % (monitor_name, self.instance)
+
+
+    def get_monitor_filenames(self):
+        """
+        Return a list of all monitor filenames (as specified in the VM's
+        params).
+        """
+        return [self.get_monitor_filename(m) for m in
+                self.params.objects("monitors")]
+
+
+    def get_serial_console_filename(self):
+        """
+        Return the serial console filename.
+        """
+        return "/tmp/serial-%s" % self.instance
+
+
+    def get_testlog_filename(self):
+        """
+        Return the testlog filename.
+        """
+        return "/tmp/testlog-%s" % self.instance
+
+
+    def get_address(self, index=0):
+        """
+        Return the address of a NIC of the guest, in host space.
+
+        If port redirection is used, return 'localhost' (the NIC has no IP
+        address of its own).  Otherwise return the NIC's IP address.
+
+        @param index: Index of the NIC whose address is requested.
+        @raise VMMACAddressMissingError: If no MAC address is defined for the
+                requested NIC
+        @raise VMIPAddressMissingError: If no IP address is found for the
+                NIC's MAC address
+        @raise VMAddressVerificationError: If the MAC-IP address mapping cannot
+                be verified (using arping)
+        """
+        nics = self.params.objects("nics")
+        nic_name = nics[index]
+        nic_params = self.params.object_params(nic_name)
+        if nic_params.get("nic_mode") == "tap":
+            mac = self.get_mac_address(index).lower()
+            # Get the IP address from the cache
+            ip = self.address_cache.get(mac)
+            if not ip:
+                raise virt_vm.VMIPAddressMissingError(mac)
+            # Make sure the IP address is assigned to this guest
+            macs = [self.get_mac_address(i) for i in range(len(nics))]
+            if not virt_utils.verify_ip_address_ownership(ip, macs):
+                raise virt_vm.VMAddressVerificationError(mac, ip)
+            return ip
+        else:
+            return "localhost"
+
+
+    def get_port(self, port, nic_index=0):
+        """
+        Return the port in host space corresponding to port in guest space.
+
+        @param port: Port number in guest space.
+        @param nic_index: Index of the NIC.
+        @return: If port redirection is used, the host port redirected
+                to guest port 'port'; otherwise 'port' itself.
+        @raise VMPortNotRedirectedError: If an unredirected port is requested
+                in user mode
+        """
+        nic_name = self.params.objects("nics")[nic_index]
+        nic_params = self.params.object_params(nic_name)
+        if nic_params.get("nic_mode") == "tap":
+            return port
+        else:
+            try:
+                return self.redirs[port]
+            except KeyError:
+                raise virt_vm.VMPortNotRedirectedError(port)
+
+
+    def get_peer(self, netid):
+        """
+        Return the peer of a netdev or network device.
+
+        @param netid: id of netdev or device
+        @return: The id of the peer device, otherwise None
+        """
+        network_info = self.monitor.info("network")
+        try:
+            return re.findall("%s:.*peer=(.*)" % netid, network_info)[0]
+        except IndexError:
+            return None
+
+
+    def get_ifname(self, nic_index=0):
+        """
+        Return the ifname of a tap device associated with a NIC.
+
+        @param nic_index: Index of the NIC
+        """
+        nics = self.params.objects("nics")
+        nic_name = nics[nic_index]
+        nic_params = self.params.object_params(nic_name)
+        if nic_params.get("nic_ifname"):
+            return nic_params.get("nic_ifname")
+        else:
+            return "t%d-%s" % (nic_index, self.instance[-11:])
+
+
+    def get_mac_address(self, nic_index=0):
+        """
+        Return the MAC address of a NIC.
+
+        @param nic_index: Index of the NIC
+        @raise VMMACAddressMissingError: If no MAC address is defined for the
+                requested NIC
+        """
+        nic_name = self.params.objects("nics")[nic_index]
+        nic_params = self.params.object_params(nic_name)
+        mac = (nic_params.get("nic_mac") or
+               virt_utils.get_mac_address(self.instance, nic_index))
+        if not mac:
+            raise virt_vm.VMMACAddressMissingError(nic_index)
+        return mac
+
+
+    def free_mac_address(self, nic_index=0):
+        """
+        Free a NIC's MAC address.
+
+        @param nic_index: Index of the NIC
+        """
+        virt_utils.free_mac_address(self.instance, nic_index)
+
+
+    def get_pid(self):
+        """
+        Return the VM's PID.  If the VM is dead return None.
+
+        @note: This works under the assumption that self.process.get_pid()
+        returns the PID of the parent shell process.
+        """
+        try:
+            children = commands.getoutput("ps --ppid=%d -o pid=" %
+                                          self.process.get_pid()).split()
+            return int(children[0])
+        except (TypeError, IndexError, ValueError):
+            return None
+
+
+    def get_shell_pid(self):
+        """
+        Return the PID of the parent shell process.
+
+        @note: This works under the assumption that self.process.get_pid()
+        returns the PID of the parent shell process.
+        """
+        return self.process.get_pid()
+
+
+    def get_shared_meminfo(self):
+        """
+        Return the VM's shared memory information.
+
+        @return: Shared memory used by VM (MB)
+        """
+        if self.is_dead():
+            logging.error("Could not get shared memory info from dead VM.")
+            return None
+
+        filename = "/proc/%d/statm" % self.get_pid()
+        shm = int(open(filename).read().split()[2])
+        # statm reports sizes in pages; translate to MB (assuming 4 KB pages)
+        return shm * 4.0 / 1024
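The pages-to-MB conversion above can be checked in isolation (the helper name is hypothetical; like the method, it assumes 4 KB pages):

```python
def statm_shared_mb(statm_contents, page_size_kb=4):
    """Convert the shared-pages field of /proc/<pid>/statm to MB."""
    # Field 3 of statm is the shared resident page count
    shared_pages = int(statm_contents.split()[2])
    return shared_pages * float(page_size_kb) / 1024
```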
+
+
+    @error.context_aware
+    def login(self, nic_index=0, timeout=10):
+        """
+        Log into the guest via SSH/Telnet/Netcat.
+        If timeout expires while waiting for output from the guest (e.g. a
+        password prompt or a shell prompt) -- fail.
+
+        @param nic_index: The index of the NIC to connect to.
+        @param timeout: Time (seconds) before giving up logging into the
+                guest.
+        @return: A ShellSession object.
+        """
+        error.context("logging into '%s'" % self.name)
+        username = self.params.get("username", "")
+        password = self.params.get("password", "")
+        prompt = self.params.get("shell_prompt", "[\#\$]")
+        linesep = eval("'%s'" % self.params.get("shell_linesep", r"\n"))
+        client = self.params.get("shell_client")
+        address = self.get_address(nic_index)
+        port = self.get_port(int(self.params.get("shell_port")))
+        log_filename = ("session-%s-%s.log" %
+                        (self.name, virt_utils.generate_random_string(4)))
+        session = virt_utils.remote_login(client, address, port, username,
+                                         password, prompt, linesep,
+                                         log_filename, timeout)
+        session.set_status_test_command(self.params.get("status_test_command",
+                                                        ""))
+        return session
+
+
+    def remote_login(self, nic_index=0, timeout=10):
+        """
+        Alias for login() for backward compatibility.
+        """
+        return self.login(nic_index, timeout)
+
+
+    def wait_for_login(self, nic_index=0, timeout=240, internal_timeout=10):
+        """
+        Make multiple attempts to log into the guest via SSH/Telnet/Netcat.
+
+        @param nic_index: The index of the NIC to connect to.
+        @param timeout: Time (seconds) to keep trying to log in.
+        @param internal_timeout: Timeout to pass to login().
+        @return: A ShellSession object.
+        """
+        logging.debug("Attempting to log into '%s' (timeout %ds)", self.name,
+                      timeout)
+        end_time = time.time() + timeout
+        while time.time() < end_time:
+            try:
+                return self.login(nic_index, internal_timeout)
+            except (virt_utils.LoginError, virt_vm.VMError), e:
+                logging.debug(e)
+            time.sleep(2)
+        # Timeout expired; try one more time but don't catch exceptions
+        return self.login(nic_index, internal_timeout)
+
+
+    @error.context_aware
+    def copy_files_to(self, host_path, guest_path, nic_index=0, verbose=False,
+                      timeout=600):
+        """
+        Transfer files to the remote host (guest).
+
+        @param host_path: Host path
+        @param guest_path: Guest path
+        @param nic_index: The index of the NIC to connect to.
+        @param verbose: If True, log some stats using logging.debug (RSS only)
+        @param timeout: Time (seconds) before giving up on doing the remote
+                copy.
+        """
+        error.context("sending file(s) to '%s'" % self.name)
+        username = self.params.get("username", "")
+        password = self.params.get("password", "")
+        client = self.params.get("file_transfer_client")
+        address = self.get_address(nic_index)
+        port = self.get_port(int(self.params.get("file_transfer_port")))
+        log_filename = ("transfer-%s-to-%s-%s.log" %
+                        (self.name, address,
+                        virt_utils.generate_random_string(4)))
+        virt_utils.copy_files_to(address, client, username, password, port,
+                                host_path, guest_path, log_filename, verbose,
+                                timeout)
+
+
+    @error.context_aware
+    def copy_files_from(self, guest_path, host_path, nic_index=0,
+                        verbose=False, timeout=600):
+        """
+        Transfer files from the guest.
+
+        @param guest_path: Guest path
+        @param host_path: Host path
+        @param nic_index: The index of the NIC to connect to.
+        @param verbose: If True, log some stats using logging.debug (RSS only)
+        @param timeout: Time (seconds) before giving up on doing the remote
+                copy.
+        """
+        error.context("receiving file(s) from '%s'" % self.name)
+        username = self.params.get("username", "")
+        password = self.params.get("password", "")
+        client = self.params.get("file_transfer_client")
+        address = self.get_address(nic_index)
+        port = self.get_port(int(self.params.get("file_transfer_port")))
+        log_filename = ("transfer-%s-from-%s-%s.log" %
+                        (self.name, address,
+                        virt_utils.generate_random_string(4)))
+        virt_utils.copy_files_from(address, client, username, password, port,
+                                  guest_path, host_path, log_filename,
+                                  verbose, timeout)
+
+
+    @error.context_aware
+    def serial_login(self, timeout=10):
+        """
+        Log into the guest via the serial console.
+        If timeout expires while waiting for output from the guest (e.g. a
+        password prompt or a shell prompt) -- fail.
+
+        @param timeout: Time (seconds) before giving up logging into the guest.
+        @return: The serial console ShellSession object.
+        """
+        error.context("logging into '%s' via serial console" % self.name)
+        username = self.params.get("username", "")
+        password = self.params.get("password", "")
+        prompt = self.params.get("shell_prompt", "[\#\$]")
+        linesep = eval("'%s'" % self.params.get("shell_linesep", r"\n"))
+        status_test_command = self.params.get("status_test_command", "")
+
+        self.serial_console.set_linesep(linesep)
+        self.serial_console.set_status_test_command(status_test_command)
+
+        # Try to get a login prompt
+        self.serial_console.sendline()
+
+        virt_utils._remote_login(self.serial_console, username, password,
+                                prompt, timeout)
+        return self.serial_console
+
+
+    def wait_for_serial_login(self, timeout=240, internal_timeout=10):
+        """
+        Make multiple attempts to log into the guest via serial console.
+
+        @param timeout: Time (seconds) to keep trying to log in.
+        @param internal_timeout: Timeout to pass to serial_login().
+        @return: A ShellSession object.
+        """
+        logging.debug("Attempting to log into '%s' via serial console "
+                      "(timeout %ds)", self.name, timeout)
+        end_time = time.time() + timeout
+        while time.time() < end_time:
+            try:
+                return self.serial_login(internal_timeout)
+            except virt_utils.LoginError, e:
+                logging.debug(e)
+            time.sleep(2)
+        # Timeout expired; try one more time but don't catch exceptions
+        return self.serial_login(internal_timeout)
+
+
+    @error.context_aware
+    def migrate(self, timeout=3600, protocol="tcp", cancel_delay=None,
+                offline=False, stable_check=False, clean=True,
+                save_path="/tmp", dest_host="localhost", remote_port=None):
+        """
+        Migrate the VM.
+
+        If the migration is local, the VM object's state is switched with that
+        of the destination VM.  Otherwise, the state is switched with that of
+        a dead VM (returned by self.clone()).
+
+        @param timeout: Time to wait for migration to complete.
+        @param protocol: Migration protocol ('tcp', 'unix' or 'exec').
+        @param cancel_delay: If provided, specifies a time duration after which
+                migration will be canceled.  Used for testing migrate_cancel.
+        @param offline: If True, pause the source VM before migration.
+        @param stable_check: If True, compare the VM's state after migration to
+                its state before migration and raise an exception if they
+                differ.
+        @param clean: If True, delete the saved state files (relevant only if
+                stable_check is also True).
+        @param save_path: The path for state files.
+        @param dest_host: Destination host (defaults to 'localhost').
+        @param remote_port: Port to use for remote migration.
+        """
+        error.base_context("migrating '%s'" % self.name)
+
+        def mig_finished():
+            o = self.monitor.info("migrate")
+            if isinstance(o, str):
+                return "status: active" not in o
+            else:
+                return o.get("status") != "active"
+
+        def mig_succeeded():
+            o = self.monitor.info("migrate")
+            if isinstance(o, str):
+                return "status: completed" in o
+            else:
+                return o.get("status") == "completed"
+
+        def mig_failed():
+            o = self.monitor.info("migrate")
+            if isinstance(o, str):
+                return "status: failed" in o
+            else:
+                return o.get("status") == "failed"
+
+        def mig_cancelled():
+            o = self.monitor.info("migrate")
+            if isinstance(o, str):
+                return ("Migration status: cancelled" in o or
+                        "Migration status: canceled" in o)
+            else:
+                return (o.get("status") == "cancelled" or
+                        o.get("status") == "canceled")
+
+        def wait_for_migration():
+            if not virt_utils.wait_for(mig_finished, timeout, 2, 2,
+                                      "Waiting for migration to complete"):
+                raise virt_vm.VMMigrateTimeoutError("Timeout expired while waiting "
+                                            "for migration to finish")
+
+        local = dest_host == "localhost"
+
+        clone = self.clone()
+        if local:
+            error.context("creating destination VM")
+            if stable_check:
+                # Pause the dest vm after creation
+                extra_params = clone.params.get("extra_params", "") + " -S"
+                clone.params["extra_params"] = extra_params
+            clone.create(migration_mode=protocol, mac_source=self)
+            error.context()
+
+        try:
+            if protocol == "tcp":
+                if local:
+                    uri = "tcp:localhost:%d" % clone.migration_port
+                else:
+                    uri = "tcp:%s:%d" % (dest_host, remote_port)
+            elif protocol == "unix":
+                uri = "unix:%s" % clone.migration_file
+            elif protocol == "exec":
+                uri = '"exec:nc localhost %s"' % clone.migration_port
+
+            if offline:
+                self.monitor.cmd("stop")
+
+            logging.info("Migrating to %s", uri)
+            self.monitor.migrate(uri)
+
+            if cancel_delay:
+                time.sleep(cancel_delay)
+                self.monitor.cmd("migrate_cancel")
+                if not virt_utils.wait_for(mig_cancelled, 60, 2, 2,
+                                          "Waiting for migration "
+                                          "cancellation"):
+                    raise virt_vm.VMMigrateCancelError("Cannot cancel migration")
+                return
+
+            wait_for_migration()
+
+            # Report migration status
+            if mig_succeeded():
+                logging.info("Migration completed successfully")
+            elif mig_failed():
+                raise virt_vm.VMMigrateFailedError("Migration failed")
+            else:
+                raise virt_vm.VMMigrateFailedError("Migration ended with "
+                                                   "unknown status")
+
+            # Switch self <-> clone
+            temp = self.clone(copy_state=True)
+            self.__dict__ = clone.__dict__
+            clone = temp
+
+            # From now on, clone is the source VM that will soon be destroyed
+            # and self is the destination VM that will remain alive.  If this
+            # is remote migration, self is a dead VM object.
+
+            error.context("after migration")
+            if local:
+                time.sleep(1)
+                self.verify_alive()
+
+            if local and stable_check:
+                try:
+                    save1 = os.path.join(save_path, "src-" + clone.instance)
+                    save2 = os.path.join(save_path, "dst-" + self.instance)
+                    clone.save_to_file(save1)
+                    self.save_to_file(save2)
+                    # Fail if we see deltas
+                    md5_save1 = utils.hash_file(save1)
+                    md5_save2 = utils.hash_file(save2)
+                    if md5_save1 != md5_save2:
+                        raise virt_vm.VMMigrateStateMismatchError(md5_save1,
+                                                                  md5_save2)
+                finally:
+                    if clean:
+                        if os.path.isfile(save1):
+                            os.remove(save1)
+                        if os.path.isfile(save2):
+                            os.remove(save2)
+
+        finally:
+            # If we're doing remote migration and it's completed successfully,
+            # self points to a dead VM object
+            if self.is_alive():
+                self.monitor.cmd("cont")
+            clone.destroy(gracefully=False)
+
+
+    @error.context_aware
+    def reboot(self, session=None, method="shell", nic_index=0, timeout=240):
+        """
+        Reboot the VM and wait for it to come back up by trying to log in until
+        timeout expires.
+
+        @param session: A shell session object or None.
+        @param method: Reboot method.  Can be "shell" (send a shell reboot
+                command) or "system_reset" (send a system_reset monitor command).
+        @param nic_index: Index of NIC to access in the VM, when logging in
+                after rebooting.
+        @param timeout: Time to wait for login to succeed (after rebooting).
+        @return: A new shell session object.
+        """
+        error.base_context("rebooting '%s'" % self.name, logging.info)
+        error.context("before reboot")
+        session = session or self.login()
+        error.context()
+
+        if method == "shell":
+            session.sendline(self.params.get("reboot_command"))
+        elif method == "system_reset":
+            # Clear the event list of all QMP monitors
+            qmp_monitors = [m for m in self.monitors if m.protocol == "qmp"]
+            for m in qmp_monitors:
+                m.clear_events()
+            # Send a system_reset monitor command
+            self.monitor.cmd("system_reset")
+            # Look for RESET QMP events
+            time.sleep(1)
+            for m in qmp_monitors:
+                if m.get_event("RESET"):
+                    logging.info("RESET QMP event received")
+                else:
+                    raise virt_vm.VMRebootError("RESET QMP event not received "
+                                                "after system_reset "
+                                                "(monitor '%s')" % m.name)
+        else:
+            raise virt_vm.VMRebootError("Unknown reboot method: %s" % method)
+
+        error.context("waiting for guest to go down", logging.info)
+        if not virt_utils.wait_for(lambda:
+                                  not session.is_responsive(timeout=30),
+                                  120, 0, 1):
+            raise virt_vm.VMRebootError("Guest refuses to go down")
+        session.close()
+
+        error.context("logging in after reboot", logging.info)
+        return self.wait_for_login(nic_index, timeout=timeout)
+
+
+    def send_key(self, keystr):
+        """
+        Send a key event to the VM.
+
+        @param keystr: A key event string (e.g. "ctrl-alt-delete")
+        """
+        # For compatibility with versions of QEMU that do not recognize all
+        # key names: replace the key name with its hex scancode, which QEMU
+        # will definitely accept
+        keymap = {"comma": "0x33",
+                  "dot":   "0x34",
+                  "slash": "0x35"}
+        for key, value in keymap.items():
+            keystr = keystr.replace(key, value)
+        self.monitor.sendkey(keystr)
+        time.sleep(0.2)
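The name-to-scancode substitution performed by send_key() can be distilled into a pure helper for testing (the helper name is hypothetical; the mapping is the one shown above):

```python
def normalize_keystr(keystr, keymap=None):
    """Substitute key names that older QEMU versions may reject with
    their hex scancodes."""
    keymap = keymap or {"comma": "0x33", "dot": "0x34", "slash": "0x35"}
    for name, hexval in keymap.items():
        keystr = keystr.replace(name, hexval)
    return keystr
```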
+
+
+    def send_string(self, str):
+        """
+        Send a string to the VM.
+
+        @param str: A string consisting of alphanumeric characters only;
+                capital letters are allowed.
+        """
+        for char in str:
+            if char.isupper():
+                self.send_key("shift-%s" % char.lower())
+            else:
+                self.send_key(char)
+
+
+    def screendump(self, filename):
+        try:
+            if self.monitor:
+                self.monitor.screendump(filename=filename)
+        except kvm_monitor.MonitorError, e:
+            logging.warn(e)
+
+
+    def get_uuid(self):
+        """
+        Return the UUID of the VM.
+
+        @return: The VM's UUID, or None if not specified in the config file
+        """
+        if self.params.get("uuid") == "random":
+            return self.uuid
+        else:
+            return self.params.get("uuid", None)
+
+
+    def get_cpu_count(self):
+        """
+        Get the CPU count of the VM.
+        """
+        session = self.login()
+        try:
+            return int(session.cmd(self.params.get("cpu_chk_cmd")))
+        finally:
+            session.close()
+
+
+    def get_memory_size(self, cmd=None):
+        """
+        Get bootup memory size of the VM.
+
+        @param cmd: Command used to check memory. If not provided,
+                self.params.get("mem_chk_cmd") will be used.
+        """
+        session = self.login()
+        try:
+            if not cmd:
+                cmd = self.params.get("mem_chk_cmd")
+            mem_str = session.cmd(cmd)
+            mem = re.findall("([0-9]+)", mem_str)
+            mem_size = 0
+            for m in mem:
+                mem_size += int(m)
+            if "GB" in mem_str:
+                mem_size *= 1024
+            elif "MB" in mem_str:
+                pass
+            else:
+                mem_size /= 1024
+            return int(mem_size)
+        finally:
+            session.close()
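The unit handling in get_memory_size() can be expressed as a pure helper (the name is hypothetical) that mirrors its logic: sum all integers in the command output, scale GB up, keep MB as-is, and assume KB otherwise:

```python
import re


def parse_mem_size_mb(mem_str):
    """Parse a guest memory-check command's output into MB."""
    total = sum(int(m) for m in re.findall(r"([0-9]+)", mem_str))
    if "GB" in mem_str:
        total *= 1024
    elif "MB" in mem_str:
        pass
    else:
        # No recognized unit: assume the value is in KB
        total //= 1024
    return int(total)
```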
+
+
+    def get_current_memory_size(self):
+        """
+        Get current memory size of the VM, rather than bootup memory.
+        """
+        cmd = self.params.get("mem_chk_cur_cmd")
+        return self.get_memory_size(cmd)
+
+
+    def save_to_file(self, path):
+        """
+        Save the state of the virtual machine to a file through migration
+        to an exec: destination.
+        """
+        # Make sure we only get one iteration
+        self.monitor.cmd("migrate_set_speed 1000g")
+        self.monitor.cmd("migrate_set_downtime 100000000")
+        self.monitor.migrate('"exec:cat>%s"' % path)
+        # Restore the speed and downtime of migration
+        self.monitor.cmd("migrate_set_speed %d" % (32<<20))
+        self.monitor.cmd("migrate_set_downtime 0.03")
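The unit normalization inside get_memory_size() above can be checked in isolation with a short sketch (parse_mem_size is a hypothetical stand-in for the parsing portion of that method, not part of the patch):

```python
import re

def parse_mem_size(mem_str):
    # Sum all integers found in the command output, then normalize to MB,
    # following the same branches as get_memory_size().
    mem_size = sum(int(m) for m in re.findall(r"([0-9]+)", mem_str))
    if "GB" in mem_str:
        mem_size *= 1024      # GB -> MB
    elif "MB" in mem_str:
        pass                  # already in MB
    else:
        mem_size //= 1024     # no recognized unit: assume kB, as the patch does
    return int(mem_size)

print(parse_mem_size("MemTotal: 2097152 kB"))  # -> 2048
```

Note that outputs containing several numbers (e.g. per-node totals) are summed before conversion, which is why the patch uses re.findall rather than a single match.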
diff --git a/client/virt/ppm_utils.py b/client/virt/ppm_utils.py
new file mode 100644
index 0000000..90ff46d
--- /dev/null
+++ b/client/virt/ppm_utils.py
@@ -0,0 +1,237 @@
+"""
+Utility functions to deal with PPM (QEMU screendump format) files.
+
+@copyright: Red Hat 2008-2009
+"""
+
+import os, struct, time, re
+from autotest_lib.client.bin import utils
+
+# Some directory/filename utils, for consistency
+
+def find_id_for_screendump(md5sum, dir):
+    """
+    Search dir for a PPM file whose name ends with md5sum.
+
+    @param md5sum: md5 sum string
+    @param dir: Directory that holds the PPM files.
+    @return: The first matching file's basename without any preceding path,
+    e.g. '20080101_120000_d41d8cd98f00b204e9800998ecf8427e.ppm', or None if
+    no match is found.
+    """
+    exp = re.compile(r"(.*_)?" + md5sum + r"\.ppm", re.IGNORECASE)
+    try:
+        files = os.listdir(dir)
+    except OSError:
+        files = []
+    for file in files:
+        if exp.match(file):
+
+
+def generate_id_for_screendump(md5sum, dir):
+    """
+    Generate a unique filename using the given MD5 sum.
+
+    @return: Only the file basename, without any preceding path. The
+    filename consists of the current date and time, the MD5 sum and a .ppm
+    extension, e.g. '20080101_120000_d41d8cd98f00b204e9800998ecf8427e.ppm'.
+    """
+    filename = time.strftime("%Y%m%d_%H%M%S") + "_" + md5sum + ".ppm"
+    return filename
+
+
+def get_data_dir(steps_filename):
+    """
+    Return the data dir of the given steps filename.
+    """
+    filename = os.path.basename(steps_filename)
+    return os.path.join(os.path.dirname(steps_filename), "..", "steps_data",
+                        filename + "_data")
+
+
+# Functions for working with PPM files
+
+def image_read_from_ppm_file(filename):
+    """
+    Read a PPM image.
+
+    @return: A 3 element tuple containing the width, height and data of the
+            image.
+    """
+    fin = open(filename,"rb")
+    l1 = fin.readline()
+    l2 = fin.readline()
+    l3 = fin.readline()
+    data = fin.read()
+    fin.close()
+
+    (w, h) = map(int, l2.split())
+    return (w, h, data)
+
+
+def image_write_to_ppm_file(filename, width, height, data):
+    """
+    Write a PPM image with the given width, height and data.
+
+    @param filename: PPM file path
+    @param width: PPM file width (pixels)
+    @param height: PPM file height (pixels)
+    """
+    fout = open(filename,"wb")
+    fout.write("P6\n")
+    fout.write("%d %d\n" % (width, height))
+    fout.write("255\n")
+    fout.write(data)
+    fout.close()
+
+
+def image_crop(width, height, data, x1, y1, dx, dy):
+    """
+    Crop an image.
+
+    @param width: Original image width
+    @param height: Original image height
+    @param data: Image data
+    @param x1: Desired x coordinate of the cropped region
+    @param y1: Desired y coordinate of the cropped region
+    @param dx: Desired width of the cropped region
+    @param dy: Desired height of the cropped region
+    @return: A 3-tuple containing the width, height and data of the
+    cropped image.
+    """
+    if x1 > width - 1: x1 = width - 1
+    if y1 > height - 1: y1 = height - 1
+    if dx > width - x1: dx = width - x1
+    if dy > height - y1: dy = height - y1
+    newdata = ""
+    index = (x1 + y1*width) * 3
+    for i in range(dy):
+        newdata += data[index:(index+dx*3)]
+        index += width*3
+    return (dx, dy, newdata)
+
+
+def image_md5sum(width, height, data):
+    """
+    Return the md5sum of an image.
+
+    @param width: PPM file width
+    @param height: PPM file height
+    @param data: PPM file data
+    """
+    header = "P6\n%d %d\n255\n" % (width, height)
+    hash = utils.hash('md5', header)
+    hash.update(data)
+    return hash.hexdigest()
+
+
+def get_region_md5sum(width, height, data, x1, y1, dx, dy,
+                      cropped_image_filename=None):
+    """
+    Return the md5sum of a cropped region.
+
+    @param width: Original image width
+    @param height: Original image height
+    @param data: Image data
+    @param x1: Desired x coord of the cropped region
+    @param y1: Desired y coord of the cropped region
+    @param dx: Desired width of the cropped region
+    @param dy: Desired height of the cropped region
+    @param cropped_image_filename: if not None, write the resulting cropped
+            image to a file with this name
+    """
+    (cw, ch, cdata) = image_crop(width, height, data, x1, y1, dx, dy)
+    # Write cropped image for debugging
+    if cropped_image_filename:
+        image_write_to_ppm_file(cropped_image_filename, cw, ch, cdata)
+    return image_md5sum(cw, ch, cdata)
+
+
+def image_verify_ppm_file(filename):
+    """
+    Verify the validity of a PPM file.
+
+    @param filename: Path of the file being verified.
+    @return: True if filename is a valid PPM image file. This function
+    reads only the first few bytes of the file so it should be rather fast.
+    """
+    try:
+        size = os.path.getsize(filename)
+        fin = open(filename, "rb")
+        assert(fin.readline().strip() == "P6")
+        (width, height) = map(int, fin.readline().split())
+        assert(width > 0 and height > 0)
+        assert(fin.readline().strip() == "255")
+        size_read = fin.tell()
+        fin.close()
+        assert(size - size_read == width*height*3)
+        return True
+    except:
+        return False
+
+
+def image_comparison(width, height, data1, data2):
+    """
+    Generate a green-red comparison image from two given images.
+
+    @param width: Width of both images
+    @param height: Height of both images
+    @param data1: Data of first image
+    @param data2: Data of second image
+    @return: A 3-element tuple containing the width, height and data of the
+            generated comparison image.
+
+    @note: Input images must be the same size.
+    """
+    newdata = ""
+    i = 0
+    while i < width*height*3:
+        # Compute monochromatic value of current pixel in data1
+        pixel1_str = data1[i:i+3]
+        temp = struct.unpack("BBB", pixel1_str)
+        value1 = int((temp[0] + temp[1] + temp[2]) / 3)
+        # Compute monochromatic value of current pixel in data2
+        pixel2_str = data2[i:i+3]
+        temp = struct.unpack("BBB", pixel2_str)
+        value2 = int((temp[0] + temp[1] + temp[2]) / 3)
+        # Compute average of the two values
+        value = int((value1 + value2) / 2)
+        # Scale value to the upper half of the range [0, 255]
+        value = 128 + value / 2
+        # Compare pixels
+        if pixel1_str == pixel2_str:
+            # Equal -- give the pixel a greenish hue
+            newpixel = [0, value, 0]
+        else:
+            # Not equal -- give the pixel a reddish hue
+            newpixel = [value, 0, 0]
+        newdata += struct.pack("BBB", newpixel[0], newpixel[1], newpixel[2])
+        i += 3
+    return (width, height, newdata)
+
+
+def image_fuzzy_compare(width, height, data1, data2):
+    """
+    Return the degree of equality of two given images.
+
+    @param width: Width of both images
+    @param height: Height of both images
+    @param data1: Data of first image
+    @param data2: Data of second image
+    @return: Ratio equal_pixel_count / total_pixel_count.
+
+    @note: Input images must be the same size.
+    """
+    equal = 0.0
+    different = 0.0
+    i = 0
+    while i < width*height*3:
+        pixel1_str = data1[i:i+3]
+        pixel2_str = data2[i:i+3]
+        # Compare pixels
+        if pixel1_str == pixel2_str:
+            equal += 1.0
+        else:
+            different += 1.0
+        i += 3
+    return equal / (equal + different)
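The crop and fuzzy-compare helpers above operate on raw 3-bytes-per-pixel RGB buffers. A minimal round-trip exercise of that logic, re-expressed for Python 3 since the patch targets Python 2 (these local functions reproduce the slicing, omitting image_crop()'s boundary clamping for brevity):

```python
def crop(width, height, data, x1, y1, dx, dy):
    # Same row-slicing approach as image_crop(): 3 bytes per RGB pixel,
    # one dx-wide slice taken per cropped row.
    newdata = b""
    index = (x1 + y1 * width) * 3
    for _ in range(dy):
        newdata += data[index:index + dx * 3]
        index += width * 3
    return (dx, dy, newdata)

def fuzzy_compare(width, height, data1, data2):
    # Ratio of identical pixels, as in image_fuzzy_compare().
    equal = sum(1 for i in range(0, width * height * 3, 3)
                if data1[i:i + 3] == data2[i:i + 3])
    return equal / (width * height)

# 2x2 test image: red, green / blue, white
img = (b"\xff\x00\x00" + b"\x00\xff\x00" +
       b"\x00\x00\xff" + b"\xff\xff\xff")
print(crop(2, 2, img, 1, 0, 1, 2))      # right column: green over white
print(fuzzy_compare(2, 2, img, img))    # -> 1.0
```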
diff --git a/client/virt/rss_client.py b/client/virt/rss_client.py
new file mode 100755
index 0000000..4d00d17
--- /dev/null
+++ b/client/virt/rss_client.py
@@ -0,0 +1,519 @@
+#!/usr/bin/python
+"""
+Client for file transfer services offered by RSS (Remote Shell Server).
+
+@author: Michael Goldish (mgoldish@redhat.com)
+@copyright: 2008-2010 Red Hat Inc.
+"""
+
+import socket, struct, time, sys, os, glob
+
+# Globals
+CHUNKSIZE = 65536
+
+# Protocol message constants
+RSS_MAGIC           = 0x525353
+RSS_OK              = 1
+RSS_ERROR           = 2
+RSS_UPLOAD          = 3
+RSS_DOWNLOAD        = 4
+RSS_SET_PATH        = 5
+RSS_CREATE_FILE     = 6
+RSS_CREATE_DIR      = 7
+RSS_LEAVE_DIR       = 8
+RSS_DONE            = 9
+
+# See rss.cpp for protocol details.
+
+
+class FileTransferError(Exception):
+    def __init__(self, msg, e=None, filename=None):
+        Exception.__init__(self, msg, e, filename)
+        self.msg = msg
+        self.e = e
+        self.filename = filename
+
+    def __str__(self):
+        s = self.msg
+        if self.e and self.filename:
+            s += "    (error: %s,    filename: %s)" % (self.e, self.filename)
+        elif self.e:
+            s += "    (%s)" % self.e
+        elif self.filename:
+            s += "    (filename: %s)" % self.filename
+        return s
+
+
+class FileTransferConnectError(FileTransferError):
+    pass
+
+
+class FileTransferTimeoutError(FileTransferError):
+    pass
+
+
+class FileTransferProtocolError(FileTransferError):
+    pass
+
+
+class FileTransferSocketError(FileTransferError):
+    pass
+
+
+class FileTransferServerError(FileTransferError):
+    def __init__(self, errmsg):
+        FileTransferError.__init__(self, None, errmsg)
+
+    def __str__(self):
+        s = "Server said: %r" % self.e
+        if self.filename:
+            s += "    (filename: %s)" % self.filename
+        return s
+
+
+class FileTransferNotFoundError(FileTransferError):
+    pass
+
+
+class FileTransferClient(object):
+    """
+    Connect to an RSS (remote shell server) and transfer files.
+    """
+
+    def __init__(self, address, port, log_func=None, timeout=20):
+        """
+        Connect to a server.
+
+        @param address: The server's address
+        @param port: The server's port
+        @param log_func: If provided, transfer stats will be passed to this
+                function during the transfer
+        @param timeout: Time duration to wait for connection to succeed
+        @raise FileTransferConnectError: Raised if the connection fails
+        """
+        self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        self._socket.settimeout(timeout)
+        try:
+            self._socket.connect((address, port))
+        except socket.error, e:
+            raise FileTransferConnectError("Cannot connect to server at "
+                                           "%s:%s" % (address, port), e)
+        try:
+            if self._receive_msg(timeout) != RSS_MAGIC:
+                raise FileTransferConnectError("Received wrong magic number")
+        except FileTransferTimeoutError:
+            raise FileTransferConnectError("Timeout expired while waiting to "
+                                           "receive magic number")
+        self._send(struct.pack("=i", CHUNKSIZE))
+        self._log_func = log_func
+        self._last_time = time.time()
+        self._last_transferred = 0
+        self.transferred = 0
+
+
+    def __del__(self):
+        self.close()
+
+
+    def close(self):
+        """
+        Close the connection.
+        """
+        self._socket.close()
+
+
+    def _send(self, str, timeout=60):
+        try:
+            if timeout <= 0:
+                raise socket.timeout
+            self._socket.settimeout(timeout)
+            self._socket.sendall(str)
+        except socket.timeout:
+            raise FileTransferTimeoutError("Timeout expired while sending "
+                                           "data to server")
+        except socket.error, e:
+            raise FileTransferSocketError("Could not send data to server", e)
+
+
+    def _receive(self, size, timeout=60):
+        strs = []
+        end_time = time.time() + timeout
+        try:
+            while size > 0:
+                timeout = end_time - time.time()
+                if timeout <= 0:
+                    raise socket.timeout
+                self._socket.settimeout(timeout)
+                data = self._socket.recv(size)
+                if not data:
+                    raise FileTransferProtocolError("Connection closed "
+                                                    "unexpectedly while "
+                                                    "receiving data from "
+                                                    "server")
+                strs.append(data)
+                size -= len(data)
+        except socket.timeout:
+            raise FileTransferTimeoutError("Timeout expired while receiving "
+                                           "data from server")
+        except socket.error, e:
+            raise FileTransferSocketError("Error receiving data from server",
+                                          e)
+        return "".join(strs)
+
+
+    def _report_stats(self, str):
+        if self._log_func:
+            dt = time.time() - self._last_time
+            if dt >= 1:
+                transferred = self.transferred / 1048576.
+                speed = (self.transferred - self._last_transferred) / dt
+                speed /= 1048576.
+                self._log_func("%s %.3f MB (%.3f MB/sec)" %
+                               (str, transferred, speed))
+                self._last_time = time.time()
+                self._last_transferred = self.transferred
+
+
+    def _send_packet(self, str, timeout=60):
+        self._send(struct.pack("=I", len(str)))
+        self._send(str, timeout)
+        self.transferred += len(str) + 4
+        self._report_stats("Sent")
+
+
+    def _receive_packet(self, timeout=60):
+        size = struct.unpack("=I", self._receive(4))[0]
+        str = self._receive(size, timeout)
+        self.transferred += len(str) + 4
+        self._report_stats("Received")
+        return str
+
+
+    def _send_file_chunks(self, filename, timeout=60):
+        if self._log_func:
+            self._log_func("Sending file %s" % filename)
+        f = open(filename, "rb")
+        try:
+            try:
+                end_time = time.time() + timeout
+                while True:
+                    data = f.read(CHUNKSIZE)
+                    self._send_packet(data, end_time - time.time())
+                    if len(data) < CHUNKSIZE:
+                        break
+            except FileTransferError, e:
+                e.filename = filename
+                raise
+        finally:
+            f.close()
+
+
+    def _receive_file_chunks(self, filename, timeout=60):
+        if self._log_func:
+            self._log_func("Receiving file %s" % filename)
+        f = open(filename, "wb")
+        try:
+            try:
+                end_time = time.time() + timeout
+                while True:
+                    data = self._receive_packet(end_time - time.time())
+                    f.write(data)
+                    if len(data) < CHUNKSIZE:
+                        break
+            except FileTransferError, e:
+                e.filename = filename
+                raise
+        finally:
+            f.close()
+
+
+    def _send_msg(self, msg, timeout=60):
+        self._send(struct.pack("=I", msg))
+
+
+    def _receive_msg(self, timeout=60):
+        s = self._receive(4, timeout)
+        return struct.unpack("=I", s)[0]
+
+
+    def _handle_transfer_error(self):
+        # Save original exception
+        e = sys.exc_info()
+        try:
+            # See if we can get an error message
+            msg = self._receive_msg()
+        except FileTransferError:
+            # No error message -- re-raise original exception
+            raise e[0], e[1], e[2]
+        if msg == RSS_ERROR:
+            errmsg = self._receive_packet()
+            raise FileTransferServerError(errmsg)
+        raise e[0], e[1], e[2]
+
+
+class FileUploadClient(FileTransferClient):
+    """
+    Connect to an RSS (remote shell server) and upload files or directory trees.
+    """
+
+    def __init__(self, address, port, log_func=None, timeout=20):
+        """
+        Connect to a server.
+
+        @param address: The server's address
+        @param port: The server's port
+        @param log_func: If provided, transfer stats will be passed to this
+                function during the transfer
+        @param timeout: Time duration to wait for connection to succeed
+        @raise FileTransferConnectError: Raised if the connection fails
+        @raise FileTransferProtocolError: Raised if an incorrect magic number
+                is received
+        @raise FileTransferSocketError: Raised if the RSS_UPLOAD message cannot
+                be sent to the server
+        """
+        super(FileUploadClient, self).__init__(address, port, log_func, timeout)
+        self._send_msg(RSS_UPLOAD)
+
+
+    def _upload_file(self, path, end_time):
+        if os.path.isfile(path):
+            self._send_msg(RSS_CREATE_FILE)
+            self._send_packet(os.path.basename(path))
+            self._send_file_chunks(path, end_time - time.time())
+        elif os.path.isdir(path):
+            self._send_msg(RSS_CREATE_DIR)
+            self._send_packet(os.path.basename(path))
+            for filename in os.listdir(path):
+                self._upload_file(os.path.join(path, filename), end_time)
+            self._send_msg(RSS_LEAVE_DIR)
+
+
+    def upload(self, src_pattern, dst_path, timeout=600):
+        """
+        Send files or directory trees to the server.
+        The semantics of src_pattern and dst_path are similar to those of scp.
+        For example, the following are OK:
+            src_pattern='/tmp/foo.txt', dst_path='C:\\'
+                (uploads a single file)
+            src_pattern='/usr/', dst_path='C:\\Windows\\'
+                (uploads a directory tree recursively)
+            src_pattern='/usr/*', dst_path='C:\\Windows\\'
+                (uploads all files and directory trees under /usr/)
+        The following is not OK:
+            src_pattern='/tmp/foo.txt', dst_path='C:\\Windows\\*'
+                (wildcards are only allowed in src_pattern)
+
+        @param src_pattern: A path or wildcard pattern specifying the files or
+                directories to send to the server
+        @param dst_path: A path in the server's filesystem where the files will
+                be saved
+        @param timeout: Time duration in seconds to wait for the transfer to
+                complete
+        @raise FileTransferTimeoutError: Raised if timeout expires
+        @raise FileTransferServerError: Raised if something goes wrong and the
+                server sends an informative error message to the client
+        @note: Other exceptions can be raised.
+        """
+        end_time = time.time() + timeout
+        try:
+            try:
+                self._send_msg(RSS_SET_PATH)
+                self._send_packet(dst_path)
+                matches = glob.glob(src_pattern)
+                for filename in matches:
+                    self._upload_file(os.path.abspath(filename), end_time)
+                self._send_msg(RSS_DONE)
+            except FileTransferTimeoutError:
+                raise
+            except FileTransferError:
+                self._handle_transfer_error()
+            else:
+                # If nothing was transferred, raise an exception
+                if not matches:
+                    raise FileTransferNotFoundError("Pattern %s does not "
+                                                    "match any files or "
+                                                    "directories" %
+                                                    src_pattern)
+                # Look for RSS_OK or RSS_ERROR
+                msg = self._receive_msg(end_time - time.time())
+                if msg == RSS_OK:
+                    return
+                elif msg == RSS_ERROR:
+                    errmsg = self._receive_packet()
+                    raise FileTransferServerError(errmsg)
+                else:
+                    # Neither RSS_OK nor RSS_ERROR found
+                    raise FileTransferProtocolError("Received unexpected msg")
+        except:
+            # In any case, if the transfer failed, close the connection
+            self.close()
+            raise
+
+
+class FileDownloadClient(FileTransferClient):
+    """
+    Connect to an RSS (remote shell server) and download files or directory trees.
+    """
+
+    def __init__(self, address, port, log_func=None, timeout=20):
+        """
+        Connect to a server.
+
+        @param address: The server's address
+        @param port: The server's port
+        @param log_func: If provided, transfer stats will be passed to this
+                function during the transfer
+        @param timeout: Time duration to wait for connection to succeed
+        @raise FileTransferConnectError: Raised if the connection fails
+        @raise FileTransferProtocolError: Raised if an incorrect magic number
+                is received
+        @raise FileTransferSocketError: Raised if the RSS_DOWNLOAD message
+                cannot be sent to the server
+        """
+        super(FileDownloadClient, self).__init__(address, port, log_func, timeout)
+        self._send_msg(RSS_DOWNLOAD)
+
+
+    def download(self, src_pattern, dst_path, timeout=600):
+        """
+        Receive files or directory trees from the server.
+        The semantics of src_pattern and dst_path are similar to those of scp.
+        For example, the following are OK:
+            src_pattern='C:\\foo.txt', dst_path='/tmp'
+                (downloads a single file)
+            src_pattern='C:\\Windows', dst_path='/tmp'
+                (downloads a directory tree recursively)
+            src_pattern='C:\\Windows\\*', dst_path='/tmp'
+                (downloads all files and directory trees under C:\\Windows)
+        The following is not OK:
+            src_pattern='C:\\Windows', dst_path='/tmp/*'
+                (wildcards are only allowed in src_pattern)
+
+        @param src_pattern: A path or wildcard pattern specifying the files or
+                directories, in the server's filesystem, that will be sent to
+                the client
+        @param dst_path: A path in the local filesystem where the files will
+                be saved
+        @param timeout: Time duration in seconds to wait for the transfer to
+                complete
+        @raise FileTransferTimeoutError: Raised if timeout expires
+        @raise FileTransferServerError: Raised if something goes wrong and the
+                server sends an informative error message to the client
+        @note: Other exceptions can be raised.
+        """
+        dst_path = os.path.abspath(dst_path)
+        end_time = time.time() + timeout
+        file_count = 0
+        dir_count = 0
+        try:
+            try:
+                self._send_msg(RSS_SET_PATH)
+                self._send_packet(src_pattern)
+            except FileTransferError:
+                self._handle_transfer_error()
+            while True:
+                msg = self._receive_msg()
+                if msg == RSS_CREATE_FILE:
+                    # Receive filename and file contents
+                    filename = self._receive_packet()
+                    if os.path.isdir(dst_path):
+                        dst_path = os.path.join(dst_path, filename)
+                    self._receive_file_chunks(dst_path, end_time - time.time())
+                    dst_path = os.path.dirname(dst_path)
+                    file_count += 1
+                elif msg == RSS_CREATE_DIR:
+                    # Receive dirname and create the directory
+                    dirname = self._receive_packet()
+                    if os.path.isdir(dst_path):
+                        dst_path = os.path.join(dst_path, dirname)
+                    if not os.path.isdir(dst_path):
+                        os.mkdir(dst_path)
+                    dir_count += 1
+                elif msg == RSS_LEAVE_DIR:
+                    # Return to parent dir
+                    dst_path = os.path.dirname(dst_path)
+                elif msg == RSS_DONE:
+                    # Transfer complete
+                    if not file_count and not dir_count:
+                        raise FileTransferNotFoundError("Pattern %s does not "
+                                                        "match any files or "
+                                                        "directories that "
+                                                        "could be downloaded" %
+                                                        src_pattern)
+                    break
+                elif msg == RSS_ERROR:
+                    # Receive error message and abort
+                    errmsg = self._receive_packet()
+                    raise FileTransferServerError(errmsg)
+                else:
+                    # Unexpected msg
+                    raise FileTransferProtocolError("Received unexpected msg")
+        except:
+            # In any case, if the transfer failed, close the connection
+            self.close()
+            raise
+
+
+def upload(address, port, src_pattern, dst_path, log_func=None, timeout=60,
+           connect_timeout=20):
+    """
+    Connect to server and upload files.
+
+    @see: FileUploadClient
+    """
+    client = FileUploadClient(address, port, log_func, connect_timeout)
+    client.upload(src_pattern, dst_path, timeout)
+    client.close()
+
+
+def download(address, port, src_pattern, dst_path, log_func=None, timeout=60,
+             connect_timeout=20):
+    """
+    Connect to server and download files.
+
+    @see: FileDownloadClient
+    """
+    client = FileDownloadClient(address, port, log_func, connect_timeout)
+    client.download(src_pattern, dst_path, timeout)
+    client.close()
+
+
+def main():
+    import optparse
+
+    usage = "usage: %prog [options] address port src_pattern dst_path"
+    parser = optparse.OptionParser(usage=usage)
+    parser.add_option("-d", "--download",
+                      action="store_true", dest="download",
+                      help="download files from server")
+    parser.add_option("-u", "--upload",
+                      action="store_true", dest="upload",
+                      help="upload files to server")
+    parser.add_option("-v", "--verbose",
+                      action="store_true", dest="verbose",
+                      help="be verbose")
+    parser.add_option("-t", "--timeout",
+                      type="int", dest="timeout", default=3600,
+                      help="transfer timeout")
+    options, args = parser.parse_args()
+    if options.download == options.upload:
+        parser.error("you must specify either -d or -u")
+    if len(args) != 4:
+        parser.error("incorrect number of arguments")
+    address, port, src_pattern, dst_path = args
+    port = int(port)
+
+    logger = None
+    if options.verbose:
+        def p(s):
+            print s
+        logger = p
+
+    if options.download:
+        download(address, port, src_pattern, dst_path, logger, options.timeout)
+    elif options.upload:
+        upload(address, port, src_pattern, dst_path, logger, options.timeout)
+
+
+if __name__ == "__main__":
+    main()
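As a quick sanity check of the wire framing used by _send_packet()/_receive_packet() above: every payload is preceded by its length as a native-order 4-byte unsigned int ("=I"), and control messages are bare 4-byte ints. A sketch of that framing against an in-memory stream (pack_msg, pack_packet and read_packet are illustrative helpers, not part of the patch):

```python
import io
import struct

RSS_SET_PATH = 5  # same constant as in rss_client.py

def pack_msg(msg):
    # A control message is a single 4-byte native-order unsigned int.
    return struct.pack("=I", msg)

def pack_packet(payload):
    # A packet is a 4-byte length header followed by the payload bytes.
    return struct.pack("=I", len(payload)) + payload

def read_packet(stream):
    # Mirror of _receive_packet(): read the length, then that many bytes.
    (size,) = struct.unpack("=I", stream.read(4))
    return stream.read(size)

wire = pack_msg(RSS_SET_PATH) + pack_packet(b"C:\\Windows\\")
stream = io.BytesIO(wire)
(msg,) = struct.unpack("=I", stream.read(4))
print(msg)                  # -> 5
print(read_packet(stream))  # -> b'C:\\Windows\\'
```

See rss.cpp, as the module comment notes, for the authoritative protocol details.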
diff --git a/client/virt/virt_env_process.py b/client/virt/virt_env_process.py
new file mode 100644
index 0000000..ca00528
--- /dev/null
+++ b/client/virt/virt_env_process.py
@@ -0,0 +1,438 @@
+import os, time, commands, re, logging, glob, threading, shutil
+from autotest_lib.client.bin import utils
+from autotest_lib.client.common_lib import error
+import virt_utils, virt_vm, kvm_monitor, virt_test_setup, ppm_utils, aexpect
+try:
+    import PIL.Image
+except ImportError:
+    logging.warning('No python imaging library installed. PPM image '
+                    'conversion to JPEG disabled. In order to enable it, '
+                    'please install python-imaging or the equivalent for your '
+                    'distro.')
+
+
+_screendump_thread = None
+_screendump_thread_termination_event = None
+
+
+def preprocess_image(test, params):
+    """
+    Preprocess a single QEMU image according to the instructions in params.
+
+    @param test: Autotest test object.
+    @param params: A dict containing image preprocessing parameters.
+    @note: Currently this function just creates an image if requested.
+    """
+    image_filename = virt_vm.get_image_filename(params, test.bindir)
+
+    create_image = False
+
+    if params.get("force_create_image") == "yes":
+        logging.debug("'force_create_image' specified; creating image...")
+        create_image = True
+    elif (params.get("create_image") == "yes" and not
+          os.path.exists(image_filename)):
+        logging.debug("Creating image...")
+        create_image = True
+
+    if create_image and not virt_vm.create_image(params, test.bindir):
+        raise error.TestError("Could not create image")
+
+
+def preprocess_vm(test, params, env, name):
+    """
+    Preprocess a single VM object according to the instructions in params.
+    Start the VM if requested and get a screendump.
+
+    @param test: An Autotest test object.
+    @param params: A dict containing VM preprocessing parameters.
+    @param env: The environment (a dict-like object).
+    @param name: The name of the VM object.
+    """
+    logging.debug("Preprocessing VM '%s'..." % name)
+    vm = env.get_vm(name)
+    vm_type = params.get("vm_type")
+    if not vm:
+        logging.debug("VM object does not exist; creating it")
+        vm = virt_vm.instantiate_vm(vm_type=vm_type, name=name, params=params,
+                                    root_dir=test.bindir,
+                                    address_cache=env.get("address_cache"))
+        env.register_vm(name, vm)
+
+    start_vm = False
+
+    if params.get("restart_vm") == "yes":
+        logging.debug("'restart_vm' specified; (re)starting VM...")
+        start_vm = True
+    elif params.get("migration_mode"):
+        logging.debug("Starting VM in incoming migration mode...")
+        start_vm = True
+    elif params.get("start_vm") == "yes":
+        if not vm.is_alive():
+            logging.debug("VM is not alive; starting it...")
+            start_vm = True
+        elif vm.make_qemu_command() != vm.make_qemu_command(name, params,
+                                                            test.bindir):
+            logging.debug("VM's qemu command differs from requested one; "
+                          "restarting it...")
+            start_vm = True
+
+    if start_vm:
+        # Start the VM (or restart it if it's already up)
+        vm.create(name, params, test.bindir,
+                  migration_mode=params.get("migration_mode"))
+    else:
+        # Don't start the VM, just update its params
+        vm.params = params
+
+    scrdump_filename = os.path.join(test.debugdir, "pre_%s.ppm" % name)
+    try:
+        if vm.monitor:
+            vm.monitor.screendump(scrdump_filename)
+    except kvm_monitor.MonitorError, e:
+        logging.warn(e)
+
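The start/restart decision in preprocess_vm() can be summarized as a small predicate (should_start_vm is a hypothetical helper written for illustration; booleans stand in for the real vm.is_alive() and qemu-command comparison):

```python
def should_start_vm(params, vm_alive, cmdline_changed):
    # Decision logic mirroring preprocess_vm(), checked in priority order.
    if params.get("restart_vm") == "yes":
        return True                 # unconditional (re)start requested
    if params.get("migration_mode"):
        return True                 # start in incoming migration mode
    if params.get("start_vm") == "yes":
        # Start only if the VM is down or its qemu command line changed.
        return (not vm_alive) or cmdline_changed
    return False                    # otherwise just update vm.params

print(should_start_vm({"start_vm": "yes"}, vm_alive=False,
                      cmdline_changed=False))  # -> True
```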
+
+def postprocess_image(test, params):
+    """
+    Postprocess a single image according to the instructions in params.
+
+    @param test: An Autotest test object.
+    @param params: A dict containing image postprocessing parameters.
+    """
+    if params.get("check_image") == "yes":
+        virt_vm.check_image(params, test.bindir)
+    if params.get("remove_image") == "yes":
+        virt_vm.remove_image(params, test.bindir)
+
+
+def postprocess_vm(test, params, env, name):
+    """
+    Postprocess a single VM object according to the instructions in params.
+    Kill the VM if requested and get a screendump.
+
+    @param test: An Autotest test object.
+    @param params: A dict containing VM postprocessing parameters.
+    @param env: The environment (a dict-like object).
+    @param name: The name of the VM object.
+    """
+    logging.debug("Postprocessing VM '%s'..." % name)
+    vm = env.get_vm(name)
+    if not vm:
+        return
+
+    scrdump_filename = os.path.join(test.debugdir, "post_%s.ppm" % name)
+    vm.screendump(filename=scrdump_filename)
+
+    if params.get("kill_vm") == "yes":
+        kill_vm_timeout = float(params.get("kill_vm_timeout", 0))
+        if kill_vm_timeout:
+            logging.debug("'kill_vm' specified; waiting for VM to shut down "
+                          "before killing it...")
+            virt_utils.wait_for(vm.is_dead, kill_vm_timeout, 0, 1)
+        else:
+            logging.debug("'kill_vm' specified; killing VM...")
+        vm.destroy(gracefully=params.get("kill_vm_gracefully") == "yes")
+
+
+def process_command(test, params, env, command, command_timeout,
+                    command_noncritical):
+    """
+    Run a custom command to be executed before or after a test.
+
+    @param test: An Autotest test object.
+    @param params: A dict containing all VM and image parameters.
+    @param env: The environment (a dict-like object).
+    @param command: Command to be run.
+    @param command_timeout: Timeout for command execution.
+    @param command_noncritical: If True test will not fail if command fails.
+    """
+    # Export environment vars
+    for k in params:
+        os.putenv("VIRT_TEST_%s" % k, str(params[k]))
+    # Execute commands
+    try:
+        utils.system("cd %s; %s" % (test.bindir, command))
+    except error.CmdError, e:
+        if command_noncritical:
+            logging.warn(e)
+        else:
+            raise
+
+def process(test, params, env, image_func, vm_func):
+    """
+    Pre- or post-process VMs and images according to the instructions in params.
+    Call image_func for each image listed in params and vm_func for each VM.
+
+    @param test: An Autotest test object.
+    @param params: A dict containing all VM and image parameters.
+    @param env: The environment (a dict-like object).
+    @param image_func: A function to call for each image.
+    @param vm_func: A function to call for each VM.
+    """
+    # Get list of VMs specified for this test
+    for vm_name in params.objects("vms"):
+        vm_params = params.object_params(vm_name)
+        # Get list of images specified for this VM
+        for image_name in vm_params.objects("images"):
+            image_params = vm_params.object_params(image_name)
+            # Call image_func for each image
+            image_func(test, image_params)
+        # Call vm_func for each vm
+        vm_func(test, vm_params, env, vm_name)
+
+
+@error.context_aware
+def preprocess(test, params, env):
+    """
+    Preprocess all VMs and images according to the instructions in params.
+    Also, collect some host information, such as the KVM version.
+
+    @param test: An Autotest test object.
+    @param params: A dict containing all VM and image parameters.
+    @param env: The environment (a dict-like object).
+    """
+    error.context("preprocessing")
+
+    # Start tcpdump if it isn't already running
+    if "address_cache" not in env:
+        env["address_cache"] = {}
+    if "tcpdump" in env and not env["tcpdump"].is_alive():
+        env["tcpdump"].close()
+        del env["tcpdump"]
+    if "tcpdump" not in env and params.get("run_tcpdump", "yes") == "yes":
+        cmd = "%s -npvi any 'dst port 68'" % virt_utils.find_command("tcpdump")
+        logging.debug("Starting tcpdump (%s)...", cmd)
+        env["tcpdump"] = aexpect.Tail(
+            command=cmd,
+            output_func=_update_address_cache,
+            output_params=(env["address_cache"],))
+        if virt_utils.wait_for(lambda: not env["tcpdump"].is_alive(),
+                              0.1, 0.1, 1.0):
+            logging.warn("Could not start tcpdump")
+            logging.warn("Status: %s" % env["tcpdump"].get_status())
+            logging.warn("Output:" + virt_utils.format_str_for_message(
+                env["tcpdump"].get_output()))
+
+    # Destroy and remove VMs that are no longer needed in the environment
+    requested_vms = params.objects("vms")
+    for key in env.keys():
+        vm = env[key]
+        if not virt_utils.is_vm(vm):
+            continue
+        if vm.name not in requested_vms:
+            logging.debug("VM '%s' found in environment but not required for "
+                          "test; removing it..." % vm.name)
+            vm.destroy()
+            del env[key]
+
+    virt_utils.get_virt_info(params, test)
+
+    if params.get("setup_hugepages") == "yes":
+        h = virt_test_setup.HugePageConfig(params)
+        h.setup()
+
+    if params.get("type") == "unattended_install":
+        u = virt_test_setup.UnattendedInstallConfig(test, params)
+        u.setup()
+
+    if params.get("type") == "enospc":
+        e = virt_test_setup.EnospcConfig(test, params)
+        e.setup()
+
+    # Execute any pre_commands
+    if params.get("pre_command"):
+        process_command(test, params, env, params.get("pre_command"),
+                        int(params.get("pre_command_timeout", "600")),
+                        params.get("pre_command_noncritical") == "yes")
+
+    # Preprocess all VMs and images
+    process(test, params, env, preprocess_image, preprocess_vm)
+
+    # Start the screendump thread
+    if params.get("take_regular_screendumps") == "yes":
+        logging.debug("Starting screendump thread")
+        global _screendump_thread, _screendump_thread_termination_event
+        _screendump_thread_termination_event = threading.Event()
+        _screendump_thread = threading.Thread(target=_take_screendumps,
+                                              args=(test, params, env))
+        _screendump_thread.start()
+
+
+@error.context_aware
+def postprocess(test, params, env):
+    """
+    Postprocess all VMs and images according to the instructions in params.
+
+    @param test: An Autotest test object.
+    @param params: Dict containing all VM and image parameters.
+    @param env: The environment (a dict-like object).
+    """
+    error.context("postprocessing")
+
+    # Postprocess all VMs and images
+    process(test, params, env, postprocess_image, postprocess_vm)
+
+    # Terminate the screendump thread
+    global _screendump_thread, _screendump_thread_termination_event
+    if _screendump_thread:
+        logging.debug("Terminating screendump thread...")
+        _screendump_thread_termination_event.set()
+        _screendump_thread.join(10)
+        _screendump_thread = None
+
+    # Warn about corrupt PPM files
+    for f in glob.glob(os.path.join(test.debugdir, "*.ppm")):
+        if not ppm_utils.image_verify_ppm_file(f):
+            logging.warn("Found corrupt PPM file: %s", f)
+
+    # Should we convert PPM files to PNG format?
+    if params.get("convert_ppm_files_to_png") == "yes":
+        logging.debug("'convert_ppm_files_to_png' specified; converting PPM "
+                      "files to PNG format...")
+        try:
+            for f in glob.glob(os.path.join(test.debugdir, "*.ppm")):
+                if ppm_utils.image_verify_ppm_file(f):
+                    new_path = f.replace(".ppm", ".png")
+                    image = PIL.Image.open(f)
+                    image.save(new_path, format='PNG')
+        except NameError:
+            pass
+
+    # Should we keep the PPM files?
+    if params.get("keep_ppm_files") != "yes":
+        logging.debug("'keep_ppm_files' not specified; removing all PPM files "
+                      "from debug dir...")
+        for f in glob.glob(os.path.join(test.debugdir, '*.ppm')):
+            os.unlink(f)
+
+    # Should we keep the screendump dirs?
+    if params.get("keep_screendumps") != "yes":
+        logging.debug("'keep_screendumps' not specified; removing screendump "
+                      "dirs...")
+        for d in glob.glob(os.path.join(test.debugdir, "screendumps_*")):
+            if os.path.isdir(d) and not os.path.islink(d):
+                shutil.rmtree(d, ignore_errors=True)
+
+    # Kill all unresponsive VMs
+    if params.get("kill_unresponsive_vms") == "yes":
+        logging.debug("'kill_unresponsive_vms' specified; killing all VMs "
+                      "that fail to respond to a remote login request...")
+        for vm in env.get_all_vms():
+            if vm.is_alive():
+                try:
+                    session = vm.login()
+                    session.close()
+                except (virt_utils.LoginError, virt_vm.VMError), e:
+                    logging.warn(e)
+                    vm.destroy(gracefully=False)
+
+    # Kill all aexpect tail threads
+    aexpect.kill_tail_threads()
+
+    # Terminate tcpdump if no VMs are alive
+    living_vms = [vm for vm in env.get_all_vms() if vm.is_alive()]
+    if not living_vms and "tcpdump" in env:
+        env["tcpdump"].close()
+        del env["tcpdump"]
+
+    if params.get("setup_hugepages") == "yes":
+        h = virt_test_setup.HugePageConfig(params)
+        h.cleanup()
+
+    if params.get("type") == "enospc":
+        e = virt_test_setup.EnospcConfig(test, params)
+        e.cleanup()
+
+    # Execute any post_commands
+    if params.get("post_command"):
+        process_command(test, params, env, params.get("post_command"),
+                        int(params.get("post_command_timeout", "600")),
+                        params.get("post_command_noncritical") == "yes")
+
+
+def postprocess_on_error(test, params, env):
+    """
+    Perform postprocessing operations required only if the test failed.
+
+    @param test: An Autotest test object.
+    @param params: A dict containing all VM and image parameters.
+    @param env: The environment (a dict-like object).
+    """
+    params.update(params.object_params("on_error"))
+
+
+def _update_address_cache(address_cache, line):
+    if re.search("Your.IP", line, re.IGNORECASE):
+        matches = re.findall(r"\d*\.\d*\.\d*\.\d*", line)
+        if matches:
+            address_cache["last_seen"] = matches[0]
+    if re.search("Client.Ethernet.Address", line, re.IGNORECASE):
+        matches = re.findall(r"\w*:\w*:\w*:\w*:\w*:\w*", line)
+        if matches and address_cache.get("last_seen"):
+            mac_address = matches[0].lower()
+            if time.time() - address_cache.get("time_%s" % mac_address, 0) > 5:
+                logging.debug("(address cache) Adding cache entry: %s ---> %s",
+                              mac_address, address_cache.get("last_seen"))
+            address_cache[mac_address] = address_cache.get("last_seen")
+            address_cache["time_%s" % mac_address] = time.time()
+            del address_cache["last_seen"]
+
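The address-cache logic above pairs the "Your-IP" field of a DHCP reply seen by tcpdump with the client MAC address that appears on a later line. A standalone sketch of that pairing (condensed from the function above, without the duplicate-suppression timer, and with a stricter MAC regex):

```python
import re

def update_address_cache(cache, line):
    # Remember the last "Your-IP" seen, then bind it to the client MAC
    # that shows up on a subsequent "Client-Ethernet-Address" line.
    if re.search(r"Your.IP", line, re.IGNORECASE):
        m = re.findall(r"\d+\.\d+\.\d+\.\d+", line)
        if m:
            cache["last_seen"] = m[0]
    elif re.search(r"Client.Ethernet.Address", line, re.IGNORECASE):
        m = re.findall(r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}", line)
        if m and cache.get("last_seen"):
            cache[m[0].lower()] = cache.pop("last_seen")

cache = {}
update_address_cache(cache, "    Your-IP 192.168.122.45")
update_address_cache(cache, "    Client-Ethernet-Address 52:54:00:12:34:56")
print(cache)   # {'52:54:00:12:34:56': '192.168.122.45'}
```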
+
+def _take_screendumps(test, params, env):
+    global _screendump_thread_termination_event
+    temp_dir = test.debugdir
+    if params.get("screendump_temp_dir"):
+        temp_dir = virt_utils.get_path(test.bindir,
+                                      params.get("screendump_temp_dir"))
+        try:
+            os.makedirs(temp_dir)
+        except OSError:
+            pass
+    temp_filename = os.path.join(temp_dir, "scrdump-%s.ppm" %
+                                 virt_utils.generate_random_string(6))
+    delay = float(params.get("screendump_delay", 5))
+    quality = int(params.get("screendump_quality", 30))
+
+    cache = {}
+
+    while True:
+        for vm in env.get_all_vms():
+            if not vm.is_alive():
+                continue
+            vm.screendump(filename=temp_filename)
+            if not os.path.exists(temp_filename):
+                logging.warn("VM '%s' failed to produce a screendump", vm.name)
+                continue
+            if not ppm_utils.image_verify_ppm_file(temp_filename):
+                logging.warn("VM '%s' produced an invalid screendump", vm.name)
+                os.unlink(temp_filename)
+                continue
+            screendump_dir = os.path.join(test.debugdir,
+                                          "screendumps_%s" % vm.name)
+            try:
+                os.makedirs(screendump_dir)
+            except OSError:
+                pass
+            screendump_filename = os.path.join(screendump_dir,
+                    "%s_%s.jpg" % (vm.name,
+                                   time.strftime("%Y-%m-%d_%H-%M-%S")))
+            image_hash = utils.hash_file(temp_filename)
+            if image_hash in cache:
+                try:
+                    os.link(cache[image_hash], screendump_filename)
+                except OSError:
+                    pass
+            else:
+                try:
+                    image = PIL.Image.open(temp_filename)
+                    image.save(screendump_filename, format="JPEG",
+                               quality=quality)
+                    cache[image_hash] = screendump_filename
+                except NameError:
+                    pass
+            os.unlink(temp_filename)
+        if _screendump_thread_termination_event.isSet():
+            _screendump_thread_termination_event = None
+            break
+        _screendump_thread_termination_event.wait(delay)
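The process() dispatcher above walks the "vms" list from the Cartesian config, derives per-VM params, and within each VM walks its "images" list. A minimal standalone model of that dispatch (illustrative only; the real code uses Params.objects()/object_params() from cartesian_config, which are approximated here):

```python
class Params(dict):
    def objects(self, key):
        # "vms = vm1 vm2" style lists are whitespace-separated
        return self.get(key, "").split()

    def object_params(self, name):
        # Suffixed keys like "images_vm2" override the generic "images"
        suffix = "_" + name
        new = Params((k[:-len(suffix)], v)
                     for k, v in self.items() if k.endswith(suffix))
        for k, v in self.items():
            if not k.endswith(suffix):
                new.setdefault(k, v)
        return new

def process(params, image_func, vm_func):
    # Same shape as the patch: image_func per image, then vm_func per VM
    for vm_name in params.objects("vms"):
        vm_params = params.object_params(vm_name)
        for image_name in vm_params.objects("images"):
            image_func(vm_name, image_name,
                       vm_params.object_params(image_name))
        vm_func(vm_name, vm_params)

params = Params({"vms": "vm1 vm2", "images": "image1",
                 "images_vm2": "image1 image2"})
calls = []
process(params,
        lambda vm, img, p: calls.append(("image", vm, img)),
        lambda vm, p: calls.append(("vm", vm)))
print(calls)
```

Running this shows vm2 picking up its overridden two-image list while vm1 falls back to the generic one.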
diff --git a/client/virt/virt_scheduler.py b/client/virt/virt_scheduler.py
new file mode 100644
index 0000000..09427f9
--- /dev/null
+++ b/client/virt/virt_scheduler.py
@@ -0,0 +1,229 @@
+import os, select
+import virt_utils, kvm_vm, aexpect
+
+
+class scheduler:
+    """
+    A scheduler that manages several parallel test execution pipelines on a
+    single host.
+    """
+
+    def __init__(self, tests, num_workers, total_cpus, total_mem, bindir):
+        """
+        Initialize the class.
+
+        @param tests: A list of test dictionaries.
+        @param num_workers: The number of workers (pipelines).
+        @param total_cpus: The total number of CPUs to dedicate to tests.
+        @param total_mem: The total amount of memory to dedicate to tests.
+        @param bindir: The directory where environment files reside.
+        """
+        self.tests = tests
+        self.num_workers = num_workers
+        self.total_cpus = total_cpus
+        self.total_mem = total_mem
+        self.bindir = bindir
+        # Pipes -- s stands for scheduler, w stands for worker
+        self.s2w = [os.pipe() for i in range(num_workers)]
+        self.w2s = [os.pipe() for i in range(num_workers)]
+        self.s2w_r = [os.fdopen(r, "r", 0) for r, w in self.s2w]
+        self.s2w_w = [os.fdopen(w, "w", 0) for r, w in self.s2w]
+        self.w2s_r = [os.fdopen(r, "r", 0) for r, w in self.w2s]
+        self.w2s_w = [os.fdopen(w, "w", 0) for r, w in self.w2s]
+        # "Personal" worker dicts contain modifications that are applied
+        # specifically to each worker.  For example, each worker must use a
+        # different environment file and a different MAC address pool.
+        self.worker_dicts = [{"env": "env%d" % i} for i in range(num_workers)]
+
+
+    def worker(self, index, run_test_func):
+        """
+        The worker function.
+
+        Waits for commands from the scheduler and processes them.
+
+        @param index: The index of this worker (in the range 0..num_workers-1).
+        @param run_test_func: A function to be called to run a test
+                (e.g. job.run_test).
+        """
+        r = self.s2w_r[index]
+        w = self.w2s_w[index]
+        self_dict = self.worker_dicts[index]
+
+        # Inform the scheduler this worker is ready
+        w.write("ready\n")
+
+        while True:
+            cmd = r.readline().split()
+            if not cmd:
+                continue
+
+            # The scheduler wants this worker to run a test
+            if cmd[0] == "run":
+                test_index = int(cmd[1])
+                test = self.tests[test_index].copy()
+                test.update(self_dict)
+                test_iterations = int(test.get("iterations", 1))
+                status = run_test_func("kvm", params=test,
+                                       tag=test.get("shortname"),
+                                       iterations=test_iterations)
+                w.write("done %s %s\n" % (test_index, status))
+                w.write("ready\n")
+
+            # The scheduler wants this worker to free its used resources
+            elif cmd[0] == "cleanup":
+                env_filename = os.path.join(self.bindir, self_dict["env"])
+                env = virt_utils.Env(env_filename)
+                for obj in env.values():
+                    if isinstance(obj, kvm_vm.VM):
+                        obj.destroy()
+                    elif isinstance(obj, aexpect.Spawn):
+                        obj.close()
+                env.save()
+                w.write("cleanup_done\n")
+                w.write("ready\n")
+
+            # There's no more work for this worker
+            elif cmd[0] == "terminate":
+                break
+
+
+    def scheduler(self):
+        """
+        The scheduler function.
+
+        Sends commands to workers, telling them to run tests, clean up or
+        terminate execution.
+        """
+        idle_workers = []
+        closing_workers = []
+        test_status = ["waiting"] * len(self.tests)
+        test_worker = [None] * len(self.tests)
+        used_cpus = [0] * self.num_workers
+        used_mem = [0] * self.num_workers
+
+        while True:
+            # Wait for a message from a worker
+            r, w, x = select.select(self.w2s_r, [], [])
+
+            someone_is_ready = False
+
+            for pipe in r:
+                worker_index = self.w2s_r.index(pipe)
+                msg = pipe.readline().split()
+                if not msg:
+                    continue
+
+                # A worker is ready -- add it to the idle_workers list
+                if msg[0] == "ready":
+                    idle_workers.append(worker_index)
+                    someone_is_ready = True
+
+                # A worker completed a test
+                elif msg[0] == "done":
+                    test_index = int(msg[1])
+                    test = self.tests[test_index]
+                    status = msg[2] == "True"
+                    test_status[test_index] = ("fail", "pass")[status]
+                    # If the test failed, mark all dependent tests as "failed" too
+                    if not status:
+                        for i, other_test in enumerate(self.tests):
+                            for dep in other_test.get("dep", []):
+                                if dep in test["name"]:
+                                    test_status[i] = "fail"
+
+                # A worker is done shutting down its VMs and other processes
+                elif msg[0] == "cleanup_done":
+                    used_cpus[worker_index] = 0
+                    used_mem[worker_index] = 0
+                    closing_workers.remove(worker_index)
+
+            if not someone_is_ready:
+                continue
+
+            for worker in idle_workers[:]:
+                # Find a test for this worker
+                test_found = False
+                for i, test in enumerate(self.tests):
+                    # We only want "waiting" tests
+                    if test_status[i] != "waiting":
+                        continue
+                    # Make sure the test isn't assigned to another worker
+                    if test_worker[i] is not None and test_worker[i] != worker:
+                        continue
+                    # Make sure the test's dependencies are satisfied
+                    dependencies_satisfied = True
+                    for dep in test["dep"]:
+                        dependencies = [j for j, t in enumerate(self.tests)
+                                        if dep in t["name"]]
+                        bad_status_deps = [j for j in dependencies
+                                           if test_status[j] != "pass"]
+                        if bad_status_deps:
+                            dependencies_satisfied = False
+                            break
+                    if not dependencies_satisfied:
+                        continue
+                    # Make sure we have enough resources to run the test
+                    test_used_cpus = int(test.get("used_cpus", 1))
+                    test_used_mem = int(test.get("used_mem", 128))
+                    # First make sure the other workers aren't using too many
+                    # CPUs (not including the workers currently shutting down)
+                    uc = (sum(used_cpus) - used_cpus[worker] -
+                          sum(used_cpus[i] for i in closing_workers))
+                    if uc and uc + test_used_cpus > self.total_cpus:
+                        continue
+                    # ... or too much memory
+                    um = (sum(used_mem) - used_mem[worker] -
+                          sum(used_mem[i] for i in closing_workers))
+                    if um and um + test_used_mem > self.total_mem:
+                        continue
+                    # If we reached this point it means there are, or will
+                    # soon be, enough resources to run the test
+                    test_found = True
+                    # Now check if the test can be run right now, i.e. if the
+                    # other workers, including the ones currently shutting
+                    # down, aren't using too many CPUs
+                    uc = (sum(used_cpus) - used_cpus[worker])
+                    if uc and uc + test_used_cpus > self.total_cpus:
+                        continue
+                    # ... or too much memory
+                    um = (sum(used_mem) - used_mem[worker])
+                    if um and um + test_used_mem > self.total_mem:
+                        continue
+                    # Everything is OK -- run the test
+                    test_status[i] = "running"
+                    test_worker[i] = worker
+                    idle_workers.remove(worker)
+                    # Update used_cpus and used_mem
+                    used_cpus[worker] = test_used_cpus
+                    used_mem[worker] = test_used_mem
+                    # Assign all related tests to this worker
+                    for j, other_test in enumerate(self.tests):
+                        for other_dep in other_test["dep"]:
+                            # All tests that depend on this test
+                            if other_dep in test["name"]:
+                                test_worker[j] = worker
+                                break
+                            # ... and all tests that share a dependency
+                            # with this test
+                            for dep in test["dep"]:
+                                if dep in other_dep or other_dep in dep:
+                                    test_worker[j] = worker
+                                    break
+                    # Tell the worker to run the test
+                    self.s2w_w[worker].write("run %s\n" % i)
+                    break
+
+                # If there won't be any tests for this worker to run soon, tell
+                # the worker to free its used resources
+                if not test_found and (used_cpus[worker] or used_mem[worker]):
+                    self.s2w_w[worker].write("cleanup\n")
+                    idle_workers.remove(worker)
+                    closing_workers.append(worker)
+
+            # If there are no more new tests to run, terminate the workers and
+            # the scheduler
+            if len(idle_workers) == self.num_workers:
+                for worker in idle_workers:
+                    self.s2w_w[worker].write("terminate\n")
+                break
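The scheduler and its workers speak a simple line-oriented protocol over pipes: a worker announces "ready", the scheduler replies "run &lt;i&gt;", the worker reports "done &lt;i&gt; &lt;status&gt;" followed by "ready" again, and "terminate" ends the loop. A toy single-worker model of that handshake (illustrative sketch; the real class multiplexes many workers via select and actually runs tests):

```python
import os
import threading

s2w_r_fd, s2w_w_fd = os.pipe()   # scheduler -> worker
w2s_r_fd, w2s_w_fd = os.pipe()   # worker -> scheduler
s2w_r = os.fdopen(s2w_r_fd, "r")
s2w_w = os.fdopen(s2w_w_fd, "w", 1)   # line-buffered
w2s_r = os.fdopen(w2s_r_fd, "r")
w2s_w = os.fdopen(w2s_w_fd, "w", 1)

def worker():
    w2s_w.write("ready\n")
    while True:
        cmd = s2w_r.readline().split()
        if cmd[0] == "run":
            # A real worker would run the test here; report success
            w2s_w.write("done %s True\n" % cmd[1])
            w2s_w.write("ready\n")
        elif cmd[0] == "terminate":
            break

t = threading.Thread(target=worker)
t.start()

results = []
assert w2s_r.readline().split() == ["ready"]
s2w_w.write("run 0\n")
results.append(w2s_r.readline().split())   # the "done" message
assert w2s_r.readline().split() == ["ready"]
s2w_w.write("terminate\n")
t.join()
print(results)
```

The blocking readline() on each end is what lets the scheduler treat "ready" as a natural synchronization point without any extra locking.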
diff --git a/client/virt/virt_step_editor.py b/client/virt/virt_step_editor.py
new file mode 100755
index 0000000..bcdf572
--- /dev/null
+++ b/client/virt/virt_step_editor.py
@@ -0,0 +1,1401 @@
+#!/usr/bin/python
+"""
+Step file creator/editor.
+
+@copyright: Red Hat Inc 2009
+@author: mgoldish@redhat.com (Michael Goldish)
+@version: "20090401"
+"""
+
+import os, glob, shutil, sys, logging
+import pygtk; pygtk.require('2.0')
+import gtk, common, ppm_utils
+
+
+# General utilities
+
+def corner_and_size_clipped(startpoint, endpoint, limits):
+    c0 = startpoint[:]
+    c1 = endpoint[:]
+    if c0[0] < 0:
+        c0[0] = 0
+    if c0[1] < 0:
+        c0[1] = 0
+    if c1[0] < 0:
+        c1[0] = 0
+    if c1[1] < 0:
+        c1[1] = 0
+    if c0[0] > limits[0] - 1:
+        c0[0] = limits[0] - 1
+    if c0[1] > limits[1] - 1:
+        c0[1] = limits[1] - 1
+    if c1[0] > limits[0] - 1:
+        c1[0] = limits[0] - 1
+    if c1[1] > limits[1] - 1:
+        c1[1] = limits[1] - 1
+    return ([min(c0[0], c1[0]),
+             min(c0[1], c1[1])],
+            [abs(c1[0] - c0[0]) + 1,
+             abs(c1[1] - c0[1]) + 1])
+
+
+def key_event_to_qemu_string(event):
+    keymap = gtk.gdk.keymap_get_default()
+    keyvals = keymap.get_entries_for_keycode(event.hardware_keycode)
+    keyval = keyvals[0][0]
+    keyname = gtk.gdk.keyval_name(keyval)
+
+    dict = { "Return": "ret",
+             "Tab": "tab",
+             "space": "spc",
+             "Left": "left",
+             "Right": "right",
+             "Up": "up",
+             "Down": "down",
+             "F1": "f1",
+             "F2": "f2",
+             "F3": "f3",
+             "F4": "f4",
+             "F5": "f5",
+             "F6": "f6",
+             "F7": "f7",
+             "F8": "f8",
+             "F9": "f9",
+             "F10": "f10",
+             "F11": "f11",
+             "F12": "f12",
+             "Escape": "esc",
+             "minus": "minus",
+             "equal": "equal",
+             "BackSpace": "backspace",
+             "comma": "comma",
+             "period": "dot",
+             "slash": "slash",
+             "Insert": "insert",
+             "Delete": "delete",
+             "Home": "home",
+             "End": "end",
+             "Page_Up": "pgup",
+             "Page_Down": "pgdn",
+             "Menu": "menu",
+             "semicolon": "0x27",
+             "backslash": "0x2b",
+             "apostrophe": "0x28",
+             "grave": "0x29",
+             "less": "0x2b",
+             "bracketleft": "0x1a",
+             "bracketright": "0x1b",
+             "Super_L": "0xdc",
+             "Super_R": "0xdb",
+             }
+
+    if ord('a') <= keyval <= ord('z') or ord('0') <= keyval <= ord('9'):
+        key_str = keyname
+    elif keyname in dict:
+        key_str = dict[keyname]
+    else:
+        return ""
+
+    if event.state & gtk.gdk.CONTROL_MASK:
+        key_str = "ctrl-" + key_str
+    if event.state & gtk.gdk.MOD1_MASK:
+        key_str = "alt-" + key_str
+    if event.state & gtk.gdk.SHIFT_MASK:
+        key_str = "shift-" + key_str
+
+    return key_str
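The translation above maps an X key name to a QEMU sendkey name and then prepends ctrl-/alt-/shift- prefixes, ctrl innermost. A GTK-free sketch of the same composition (to_qemu_key and its modifier-set argument are hypothetical; the real function reads the gtk.gdk event state instead):

```python
# Small excerpt of the name table; the patch carries the full mapping
KEYMAP = {"Return": "ret", "Tab": "tab", "space": "spc",
          "Escape": "esc", "period": "dot", "Page_Up": "pgup"}

def to_qemu_key(keyname, modifiers=()):
    if len(keyname) == 1 and keyname.isalnum():
        s = keyname
    elif keyname in KEYMAP:
        s = KEYMAP[keyname]
    else:
        return ""
    # Mirror the prepend order above: ctrl first, so it ends up innermost
    for mod in ("ctrl", "alt", "shift"):
        if mod in modifiers:
            s = mod + "-" + s
    return s

print(to_qemu_key("Return"))               # ret
print(to_qemu_key("c", {"ctrl", "alt"}))   # alt-ctrl-c
```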
+
+
+class StepMakerWindow:
+
+    # Constructor
+
+    def __init__(self):
+        # Window
+        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
+        self.window.set_title("Step Maker Window")
+        self.window.connect("delete-event", self.delete_event)
+        self.window.connect("destroy", self.destroy)
+        self.window.set_default_size(600, 800)
+
+        # Main box (inside a frame which is inside a VBox)
+        self.menu_vbox = gtk.VBox()
+        self.window.add(self.menu_vbox)
+        self.menu_vbox.show()
+
+        frame = gtk.Frame()
+        frame.set_border_width(10)
+        frame.set_shadow_type(gtk.SHADOW_NONE)
+        self.menu_vbox.pack_end(frame)
+        frame.show()
+
+        self.main_vbox = gtk.VBox(spacing=10)
+        frame.add(self.main_vbox)
+        self.main_vbox.show()
+
+        # EventBox
+        self.scrolledwindow = gtk.ScrolledWindow()
+        self.scrolledwindow.set_policy(gtk.POLICY_AUTOMATIC,
+                                       gtk.POLICY_AUTOMATIC)
+        self.scrolledwindow.set_shadow_type(gtk.SHADOW_NONE)
+        self.main_vbox.pack_start(self.scrolledwindow)
+        self.scrolledwindow.show()
+
+        table = gtk.Table(1, 1)
+        self.scrolledwindow.add_with_viewport(table)
+        table.show()
+        table.realize()
+
+        self.event_box = gtk.EventBox()
+        table.attach(self.event_box, 0, 1, 0, 1, gtk.EXPAND, gtk.EXPAND)
+        self.event_box.show()
+        self.event_box.realize()
+
+        # Image
+        self.image = gtk.Image()
+        self.event_box.add(self.image)
+        self.image.show()
+
+        # Data VBox
+        self.data_vbox = gtk.VBox(spacing=10)
+        self.main_vbox.pack_start(self.data_vbox, expand=False)
+        self.data_vbox.show()
+
+        # User VBox
+        self.user_vbox = gtk.VBox(spacing=10)
+        self.main_vbox.pack_start(self.user_vbox, expand=False)
+        self.user_vbox.show()
+
+        # Screendump ID HBox
+        box = gtk.HBox(spacing=10)
+        self.data_vbox.pack_start(box)
+        box.show()
+
+        label = gtk.Label("Screendump ID:")
+        box.pack_start(label, False)
+        label.show()
+
+        self.entry_screendump = gtk.Entry()
+        self.entry_screendump.set_editable(False)
+        box.pack_start(self.entry_screendump)
+        self.entry_screendump.show()
+
+        label = gtk.Label("Time:")
+        box.pack_start(label, False)
+        label.show()
+
+        self.entry_time = gtk.Entry()
+        self.entry_time.set_editable(False)
+        self.entry_time.set_width_chars(10)
+        box.pack_start(self.entry_time, False)
+        self.entry_time.show()
+
+        # Comment HBox
+        box = gtk.HBox(spacing=10)
+        self.data_vbox.pack_start(box)
+        box.show()
+
+        label = gtk.Label("Comment:")
+        box.pack_start(label, False)
+        label.show()
+
+        self.entry_comment = gtk.Entry()
+        box.pack_start(self.entry_comment)
+        self.entry_comment.show()
+
+        # Sleep HBox
+        box = gtk.HBox(spacing=10)
+        self.data_vbox.pack_start(box)
+        box.show()
+
+        self.check_sleep = gtk.CheckButton("Sleep:")
+        self.check_sleep.connect("toggled", self.event_check_sleep_toggled)
+        box.pack_start(self.check_sleep, False)
+        self.check_sleep.show()
+
+        self.spin_sleep = gtk.SpinButton(gtk.Adjustment(0, 0, 50000, 1, 10, 0),
+                                         climb_rate=0.0)
+        box.pack_start(self.spin_sleep, False)
+        self.spin_sleep.show()
+
+        # Barrier HBox
+        box = gtk.HBox(spacing=10)
+        self.data_vbox.pack_start(box)
+        box.show()
+
+        self.check_barrier = gtk.CheckButton("Barrier:")
+        self.check_barrier.connect("toggled", self.event_check_barrier_toggled)
+        box.pack_start(self.check_barrier, False)
+        self.check_barrier.show()
+
+        vbox = gtk.VBox()
+        box.pack_start(vbox)
+        vbox.show()
+
+        self.label_barrier_region = gtk.Label("Region:")
+        self.label_barrier_region.set_alignment(0, 0.5)
+        vbox.pack_start(self.label_barrier_region)
+        self.label_barrier_region.show()
+
+        self.label_barrier_md5sum = gtk.Label("MD5:")
+        self.label_barrier_md5sum.set_alignment(0, 0.5)
+        vbox.pack_start(self.label_barrier_md5sum)
+        self.label_barrier_md5sum.show()
+
+        self.label_barrier_timeout = gtk.Label("Timeout:")
+        box.pack_start(self.label_barrier_timeout, False)
+        self.label_barrier_timeout.show()
+
+        self.spin_barrier_timeout = gtk.SpinButton(gtk.Adjustment(0, 0, 50000,
+                                                                  1, 10, 0),
+                                                                 climb_rate=0.0)
+        box.pack_start(self.spin_barrier_timeout, False)
+        self.spin_barrier_timeout.show()
+
+        self.check_barrier_optional = gtk.CheckButton("Optional")
+        box.pack_start(self.check_barrier_optional, False)
+        self.check_barrier_optional.show()
+
+        # Keystrokes HBox
+        box = gtk.HBox(spacing=10)
+        self.data_vbox.pack_start(box)
+        box.show()
+
+        label = gtk.Label("Keystrokes:")
+        box.pack_start(label, False)
+        label.show()
+
+        frame = gtk.Frame()
+        frame.set_shadow_type(gtk.SHADOW_IN)
+        box.pack_start(frame)
+        frame.show()
+
+        self.text_buffer = gtk.TextBuffer()
+        self.entry_keys = gtk.TextView(self.text_buffer)
+        self.entry_keys.set_wrap_mode(gtk.WRAP_WORD)
+        self.entry_keys.connect("key-press-event", self.event_key_press)
+        frame.add(self.entry_keys)
+        self.entry_keys.show()
+
+        self.check_manual = gtk.CheckButton("Manual")
+        self.check_manual.connect("toggled", self.event_manual_toggled)
+        box.pack_start(self.check_manual, False)
+        self.check_manual.show()
+
+        button = gtk.Button("Clear")
+        button.connect("clicked", self.event_clear_clicked)
+        box.pack_start(button, False)
+        button.show()
+
+        # Mouse click HBox
+        box = gtk.HBox(spacing=10)
+        self.data_vbox.pack_start(box)
+        box.show()
+
+        label = gtk.Label("Mouse action:")
+        box.pack_start(label, False)
+        label.show()
+
+        self.button_capture = gtk.Button("Capture")
+        box.pack_start(self.button_capture, False)
+        self.button_capture.show()
+
+        self.check_mousemove = gtk.CheckButton("Move: ...")
+        box.pack_start(self.check_mousemove, False)
+        self.check_mousemove.show()
+
+        self.check_mouseclick = gtk.CheckButton("Click: ...")
+        box.pack_start(self.check_mouseclick, False)
+        self.check_mouseclick.show()
+
+        self.spin_sensitivity = gtk.SpinButton(gtk.Adjustment(1, 1, 100, 1, 10,
+                                                              0),
+                                                              climb_rate=0.0)
+        box.pack_end(self.spin_sensitivity, False)
+        self.spin_sensitivity.show()
+
+        label = gtk.Label("Sensitivity:")
+        box.pack_end(label, False)
+        label.show()
+
+        self.spin_latency = gtk.SpinButton(gtk.Adjustment(10, 1, 500, 1, 10, 0),
+                                           climb_rate=0.0)
+        box.pack_end(self.spin_latency, False)
+        self.spin_latency.show()
+
+        label = gtk.Label("Latency:")
+        box.pack_end(label, False)
+        label.show()
+
+        self.handler_event_box_press = None
+        self.handler_event_box_release = None
+        self.handler_event_box_scroll = None
+        self.handler_event_box_motion = None
+        self.handler_event_box_expose = None
+
+        self.window.realize()
+        self.window.show()
+
+        self.clear_state()
+
+    # Utilities
+
+    def message(self, text, title):
+        dlg = gtk.MessageDialog(self.window,
+                gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
+                gtk.MESSAGE_INFO,
+                gtk.BUTTONS_CLOSE,
+                title)
+        dlg.set_title(title)
+        dlg.format_secondary_text(text)
+        dlg.run()
+        dlg.destroy()
+
+
+    def question_yes_no(self, text, title):
+        dlg = gtk.MessageDialog(self.window,
+                gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
+                gtk.MESSAGE_QUESTION,
+                gtk.BUTTONS_YES_NO,
+                title)
+        dlg.set_title(title)
+        dlg.format_secondary_text(text)
+        response = dlg.run()
+        dlg.destroy()
+        if response == gtk.RESPONSE_YES:
+            return True
+        return False
+
+
+    def inputdialog(self, text, title, default_response=""):
+        # Define a little helper function
+        def inputdialog_entry_activated(entry):
+            dlg.response(gtk.RESPONSE_OK)
+
+        # Create the dialog
+        dlg = gtk.MessageDialog(self.window,
+                gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
+                gtk.MESSAGE_QUESTION,
+                gtk.BUTTONS_OK_CANCEL,
+                title)
+        dlg.set_title(title)
+        dlg.format_secondary_text(text)
+
+        # Create an entry widget
+        entry = gtk.Entry()
+        entry.set_text(default_response)
+        entry.connect("activate", inputdialog_entry_activated)
+        dlg.vbox.pack_start(entry)
+        entry.show()
+
+        # Run the dialog
+        response = dlg.run()
+        dlg.destroy()
+        if response == gtk.RESPONSE_OK:
+            return entry.get_text()
+        return None
+
+
+    def filedialog(self, title=None, default_filename=None):
+        chooser = gtk.FileChooserDialog(title=title, parent=self.window,
+                                        action=gtk.FILE_CHOOSER_ACTION_OPEN,
+                buttons=(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL, gtk.STOCK_OPEN,
+                         gtk.RESPONSE_OK))
+        chooser.resize(700, 500)
+        if default_filename:
+            chooser.set_filename(os.path.abspath(default_filename))
+        filename = None
+        response = chooser.run()
+        if response == gtk.RESPONSE_OK:
+            filename = chooser.get_filename()
+        chooser.destroy()
+        return filename
+
+
+    def redirect_event_box_input(self, press=None, release=None, scroll=None,
+                                 motion=None, expose=None):
+        if self.handler_event_box_press is not None:
+            self.event_box.disconnect(self.handler_event_box_press)
+        if self.handler_event_box_release is not None:
+            self.event_box.disconnect(self.handler_event_box_release)
+        if self.handler_event_box_scroll is not None:
+            self.event_box.disconnect(self.handler_event_box_scroll)
+        if self.handler_event_box_motion is not None:
+            self.event_box.disconnect(self.handler_event_box_motion)
+        if self.handler_event_box_expose is not None:
+            self.event_box.disconnect(self.handler_event_box_expose)
+        self.handler_event_box_press = None
+        self.handler_event_box_release = None
+        self.handler_event_box_scroll = None
+        self.handler_event_box_motion = None
+        self.handler_event_box_expose = None
+        if press is not None:
+            self.handler_event_box_press = \
+                self.event_box.connect("button-press-event", press)
+        if release is not None:
+            self.handler_event_box_release = \
+                self.event_box.connect("button-release-event", release)
+        if scroll is not None:
+            self.handler_event_box_scroll = \
+                self.event_box.connect("scroll-event", scroll)
+        if motion is not None:
+            self.handler_event_box_motion = \
+                self.event_box.connect("motion-notify-event", motion)
+        if expose is not None:
+            self.handler_event_box_expose = \
+                self.event_box.connect_after("expose-event", expose)
+
+
+    def get_keys(self):
+        return self.text_buffer.get_text(
+                self.text_buffer.get_start_iter(),
+                self.text_buffer.get_end_iter())
+
+
+    def add_key(self, key):
+        text = self.get_keys()
+        if len(text) > 0 and text[-1] != ' ':
+            text += " "
+        text += key
+        self.text_buffer.set_text(text)
+
+
+    def clear_keys(self):
+        self.text_buffer.set_text("")
+
+
+    def update_barrier_info(self):
+        if self.barrier_selected:
+            self.label_barrier_region.set_text("Selected region: Corner: " + \
+                                            str(tuple(self.barrier_corner)) + \
+                                            " Size: " + \
+                                            str(tuple(self.barrier_size)))
+        else:
+            self.label_barrier_region.set_text("No region selected.")
+        self.label_barrier_md5sum.set_text("MD5: " + self.barrier_md5sum)
+
+
+    def update_mouse_click_info(self):
+        if self.mouse_click_captured:
+            self.check_mousemove.set_label("Move: " + \
+                                           str(tuple(self.mouse_click_coords)))
+            self.check_mouseclick.set_label("Click: button %d" %
+                                            self.mouse_click_button)
+        else:
+            self.check_mousemove.set_label("Move: ...")
+            self.check_mouseclick.set_label("Click: ...")
+
+
+    def clear_state(self, clear_screendump=True):
+        # Recording time
+        self.entry_time.set_text("unknown")
+        if clear_screendump:
+            # Screendump
+            self.clear_image()
+        # Screendump ID
+        self.entry_screendump.set_text("")
+        # Comment
+        self.entry_comment.set_text("")
+        # Sleep (toggle the checkbutton so its "toggled" handler runs and
+        # leaves the spinbutton with the correct sensitivity)
+        self.check_sleep.set_active(True)
+        self.check_sleep.set_active(False)
+        self.spin_sleep.set_value(10)
+        # Barrier
+        self.clear_barrier_state()
+        # Keystrokes
+        self.check_manual.set_active(False)
+        self.clear_keys()
+        # Mouse actions
+        self.check_mousemove.set_sensitive(False)
+        self.check_mouseclick.set_sensitive(False)
+        self.check_mousemove.set_active(False)
+        self.check_mouseclick.set_active(False)
+        self.mouse_click_captured = False
+        self.mouse_click_coords = [0, 0]
+        self.mouse_click_button = 0
+        self.update_mouse_click_info()
+
+
+    def clear_barrier_state(self):
+        # Toggle the checkbutton so its "toggled" handler runs and resets
+        # the sensitivity of the barrier widgets
+        self.check_barrier.set_active(True)
+        self.check_barrier.set_active(False)
+        self.check_barrier_optional.set_active(False)
+        self.spin_barrier_timeout.set_value(10)
+        self.barrier_selection_started = False
+        self.barrier_selected = False
+        self.barrier_corner0 = [0, 0]
+        self.barrier_corner1 = [0, 0]
+        self.barrier_corner = [0, 0]
+        self.barrier_size = [0, 0]
+        self.barrier_md5sum = ""
+        self.update_barrier_info()
+
+
+    def set_image(self, w, h, data):
+        (self.image_width, self.image_height, self.image_data) = (w, h, data)
+        self.image.set_from_pixbuf(gtk.gdk.pixbuf_new_from_data(
+            data, gtk.gdk.COLORSPACE_RGB, False, 8,
+            w, h, w*3))
+        hscrollbar = self.scrolledwindow.get_hscrollbar()
+        hscrollbar.set_range(0, w)
+        vscrollbar = self.scrolledwindow.get_vscrollbar()
+        vscrollbar.set_range(0, h)
+
+
+    def set_image_from_file(self, filename):
+        if not ppm_utils.image_verify_ppm_file(filename):
+            logging.warning("set_image_from_file: received an invalid "
+                            "screendump file")
+            return self.clear_image()
+        (w, h, data) = ppm_utils.image_read_from_ppm_file(filename)
+        self.set_image(w, h, data)
+
+
+    def clear_image(self):
+        self.image.clear()
+        self.image_width = 0
+        self.image_height = 0
+        self.image_data = ""
+
+
+    def update_screendump_id(self, data_dir):
+        if not self.image_data:
+            return
+        # Find a proper ID for the screendump
+        scrdump_md5sum = ppm_utils.image_md5sum(self.image_width,
+                                                self.image_height,
+                                                self.image_data)
+        scrdump_id = ppm_utils.find_id_for_screendump(scrdump_md5sum, data_dir)
+        if not scrdump_id:
+            # Not found; generate one
+            scrdump_id = ppm_utils.generate_id_for_screendump(scrdump_md5sum,
+                                                              data_dir)
+        self.entry_screendump.set_text(scrdump_id)
+
+
+    def get_step_lines(self, data_dir=None):
+        if self.check_barrier.get_active() and not self.barrier_selected:
+            self.message("No barrier region selected.", "Error")
+            return
+
+        str = "step"
+
+        # Add step recording time
+        if self.entry_time.get_text():
+            str += " " + self.entry_time.get_text()
+
+        str += "\n"
+
+        # Add screendump line
+        if self.image_data:
+            str += "screendump %s\n" % self.entry_screendump.get_text()
+
+        # Add comment
+        if self.entry_comment.get_text():
+            str += "# %s\n" % self.entry_comment.get_text()
+
+        # Add sleep line
+        if self.check_sleep.get_active():
+            str += "sleep %d\n" % self.spin_sleep.get_value()
+
+        # Add barrier_2 line
+        if self.check_barrier.get_active():
+            str += "barrier_2 %d %d %d %d %s %d" % (
+                    self.barrier_size[0], self.barrier_size[1],
+                    self.barrier_corner[0], self.barrier_corner[1],
+                    self.barrier_md5sum, self.spin_barrier_timeout.get_value())
+            if self.check_barrier_optional.get_active():
+                str += " optional"
+            str += "\n"
+
+        # Add "Sending keys" comment
+        keys_to_send = self.get_keys().split()
+        if keys_to_send:
+            str += "# Sending keys: %s\n" % self.get_keys()
+
+        # Add key and var lines
+        for key in keys_to_send:
+            if key.startswith("$"):
+                varname = key[1:]
+                str += "var %s\n" % varname
+            else:
+                str += "key %s\n" % key
+
+        # Add mousemove line
+        if self.check_mousemove.get_active():
+            str += "mousemove %d %d\n" % (self.mouse_click_coords[0],
+                                          self.mouse_click_coords[1])
+
+        # Add mouseclick line
+        if self.check_mouseclick.get_active():
+            dict = { 1 : 1,
+                     2 : 2,
+                     3 : 4 }
+            str += "mouseclick %d\n" % dict[self.mouse_click_button]
+
+        # Write screendump and cropped screendump image files
+        if data_dir and self.image_data:
+            # Create the data dir if it doesn't exist
+            if not os.path.exists(data_dir):
+                os.makedirs(data_dir)
+            # Get the full screendump filename
+            scrdump_filename = os.path.join(data_dir,
+                                            self.entry_screendump.get_text())
+            # Write screendump file if it doesn't exist
+            if not os.path.exists(scrdump_filename):
+                try:
+                    ppm_utils.image_write_to_ppm_file(scrdump_filename,
+                                                      self.image_width,
+                                                      self.image_height,
+                                                      self.image_data)
+                except IOError:
+                    self.message("Could not write screendump file.", "Error")
+
+            #if self.check_barrier.get_active():
+            #    # Crop image to get the cropped screendump
+            #    (cw, ch, cdata) = ppm_utils.image_crop(
+            #            self.image_width, self.image_height, self.image_data,
+            #            self.barrier_corner[0], self.barrier_corner[1],
+            #            self.barrier_size[0], self.barrier_size[1])
+            #    cropped_scrdump_md5sum = ppm_utils.image_md5sum(cw, ch, cdata)
+            #    cropped_scrdump_filename = \
+            #    ppm_utils.get_cropped_screendump_filename(scrdump_filename,
+            #                                            cropped_scrdump_md5sum)
+            #    # Write cropped screendump file
+            #    try:
+            #        ppm_utils.image_write_to_ppm_file(cropped_scrdump_filename,
+            #                                          cw, ch, cdata)
+            #    except IOError:
+            #        self.message("Could not write cropped screendump file.",
+            #                     "Error")
+
+        return str
+
+    def set_state_from_step_lines(self, str, data_dir, warn=True):
+        self.clear_state()
+
+        for line in str.splitlines():
+            words = line.split()
+            if not words:
+                continue
+
+            if line.startswith("#") \
+                    and not self.entry_comment.get_text() \
+                    and not line.startswith("# Sending keys:") \
+                    and not line.startswith("# ----"):
+                self.entry_comment.set_text(line.strip("#").strip())
+
+            elif words[0] == "step":
+                if len(words) >= 2:
+                    self.entry_time.set_text(words[1])
+
+            elif words[0] == "screendump":
+                self.entry_screendump.set_text(words[1])
+                self.set_image_from_file(os.path.join(data_dir, words[1]))
+
+            elif words[0] == "sleep":
+                self.spin_sleep.set_value(int(words[1]))
+                self.check_sleep.set_active(True)
+
+            elif words[0] == "key":
+                self.add_key(words[1])
+
+            elif words[0] == "var":
+                self.add_key("$%s" % words[1])
+
+            elif words[0] == "mousemove":
+                self.mouse_click_captured = True
+                self.mouse_click_coords = [int(words[1]), int(words[2])]
+                self.update_mouse_click_info()
+
+            elif words[0] == "mouseclick":
+                self.mouse_click_captured = True
+                self.mouse_click_button = int(words[1])
+                self.update_mouse_click_info()
+
+            elif words[0] == "barrier_2":
+                # Get region corner and size from step lines
+                self.barrier_corner = [int(words[3]), int(words[4])]
+                self.barrier_size = [int(words[1]), int(words[2])]
+                # Get corner0 and corner1 from step lines
+                self.barrier_corner0 = self.barrier_corner
+                self.barrier_corner1 = [self.barrier_corner[0] +
+                                        self.barrier_size[0] - 1,
+                                        self.barrier_corner[1] +
+                                        self.barrier_size[1] - 1]
+                # Get the md5sum
+                self.barrier_md5sum = words[5]
+                # Pretend the user selected the region with the mouse
+                self.barrier_selection_started = True
+                self.barrier_selected = True
+                # Update label widgets according to region information
+                self.update_barrier_info()
+                # Check the barrier checkbutton
+                self.check_barrier.set_active(True)
+                # Set timeout value
+                self.spin_barrier_timeout.set_value(int(words[6]))
+                # Set 'optional' checkbutton state
+                self.check_barrier_optional.set_active(words[-1] == "optional")
+                # Update the image widget
+                self.event_box.queue_draw()
+
+                if warn:
+                    # See if the computed md5sum matches the one recorded in
+                    # the file
+                    computed_md5sum = ppm_utils.get_region_md5sum(
+                            self.image_width, self.image_height,
+                            self.image_data, self.barrier_corner[0],
+                            self.barrier_corner[1], self.barrier_size[0],
+                            self.barrier_size[1])
+                    if computed_md5sum != self.barrier_md5sum:
+                        self.message("Computed MD5 sum (%s) differs from MD5"
+                                     " sum recorded in steps file (%s)" %
+                                     (computed_md5sum, self.barrier_md5sum),
+                                     "Warning")
+
+    # Events
+
+    def delete_event(self, widget, event):
+        pass
+
+    def destroy(self, widget):
+        gtk.main_quit()
+
+    def event_check_barrier_toggled(self, widget):
+        if self.check_barrier.get_active():
+            self.redirect_event_box_input(
+                    self.event_button_press,
+                    self.event_button_release,
+                    None,
+                    None,
+                    self.event_expose)
+            self.event_box.queue_draw()
+            self.event_box.window.set_cursor(gtk.gdk.Cursor(gtk.gdk.CROSSHAIR))
+            self.label_barrier_region.set_sensitive(True)
+            self.label_barrier_md5sum.set_sensitive(True)
+            self.label_barrier_timeout.set_sensitive(True)
+            self.spin_barrier_timeout.set_sensitive(True)
+            self.check_barrier_optional.set_sensitive(True)
+        else:
+            self.redirect_event_box_input()
+            self.event_box.queue_draw()
+            self.event_box.window.set_cursor(None)
+            self.label_barrier_region.set_sensitive(False)
+            self.label_barrier_md5sum.set_sensitive(False)
+            self.label_barrier_timeout.set_sensitive(False)
+            self.spin_barrier_timeout.set_sensitive(False)
+            self.check_barrier_optional.set_sensitive(False)
+
+    def event_check_sleep_toggled(self, widget):
+        if self.check_sleep.get_active():
+            self.spin_sleep.set_sensitive(True)
+        else:
+            self.spin_sleep.set_sensitive(False)
+
+    def event_manual_toggled(self, widget):
+        self.entry_keys.grab_focus()
+
+    def event_clear_clicked(self, widget):
+        self.clear_keys()
+        self.entry_keys.grab_focus()
+
+    def event_expose(self, widget, event):
+        if not self.barrier_selection_started:
+            return
+        (corner, size) = corner_and_size_clipped(self.barrier_corner0,
+                                                 self.barrier_corner1,
+                                                 self.event_box.size_request())
+        gc = self.event_box.window.new_gc(line_style=gtk.gdk.LINE_DOUBLE_DASH,
+                                          line_width=1)
+        gc.set_foreground(gc.get_colormap().alloc_color("red"))
+        gc.set_background(gc.get_colormap().alloc_color("dark red"))
+        gc.set_dashes(0, (4, 4))
+        self.event_box.window.draw_rectangle(
+                gc, False,
+                corner[0], corner[1],
+                size[0]-1, size[1]-1)
+
+    def event_drag_motion(self, widget, event):
+        old_corner1 = self.barrier_corner1
+        self.barrier_corner1 = [int(event.x), int(event.y)]
+        (corner, size) = corner_and_size_clipped(self.barrier_corner0,
+                                                 self.barrier_corner1,
+                                                 self.event_box.size_request())
+        (old_corner, old_size) = corner_and_size_clipped(self.barrier_corner0,
+                                                         old_corner1,
+                                                  self.event_box.size_request())
+        corner0 = [min(corner[0], old_corner[0]), min(corner[1], old_corner[1])]
+        corner1 = [max(corner[0] + size[0], old_corner[0] + old_size[0]),
+                   max(corner[1] + size[1], old_corner[1] + old_size[1])]
+        size = [corner1[0] - corner0[0] + 1,
+                corner1[1] - corner0[1] + 1]
+        self.event_box.queue_draw_area(corner0[0], corner0[1], size[0], size[1])
+
+    def event_button_press(self, widget, event):
+        (corner, size) = corner_and_size_clipped(self.barrier_corner0,
+                                                 self.barrier_corner1,
+                                                 self.event_box.size_request())
+        self.event_box.queue_draw_area(corner[0], corner[1], size[0], size[1])
+        self.barrier_corner0 = [int(event.x), int(event.y)]
+        self.barrier_corner1 = [int(event.x), int(event.y)]
+        self.redirect_event_box_input(
+                self.event_button_press,
+                self.event_button_release,
+                None,
+                self.event_drag_motion,
+                self.event_expose)
+        self.barrier_selection_started = True
+
+    def event_button_release(self, widget, event):
+        self.redirect_event_box_input(
+                self.event_button_press,
+                self.event_button_release,
+                None,
+                None,
+                self.event_expose)
+        (self.barrier_corner, self.barrier_size) = \
+        corner_and_size_clipped(self.barrier_corner0, self.barrier_corner1,
+                                self.event_box.size_request())
+        self.barrier_md5sum = ppm_utils.get_region_md5sum(
+                self.image_width, self.image_height, self.image_data,
+                self.barrier_corner[0], self.barrier_corner[1],
+                self.barrier_size[0], self.barrier_size[1])
+        self.barrier_selected = True
+        self.update_barrier_info()
+
+    def event_key_press(self, widget, event):
+        if self.check_manual.get_active():
+            return False
+        str = key_event_to_qemu_string(event)
+        self.add_key(str)
+        return True
+
+
+class StepEditor(StepMakerWindow):
+    ui = '''<ui>
+    <menubar name="MenuBar">
+        <menu action="File">
+            <menuitem action="Open"/>
+            <separator/>
+            <menuitem action="Quit"/>
+        </menu>
+        <menu action="Edit">
+            <menuitem action="CopyStep"/>
+            <menuitem action="DeleteStep"/>
+        </menu>
+        <menu action="Insert">
+            <menuitem action="InsertNewBefore"/>
+            <menuitem action="InsertNewAfter"/>
+            <separator/>
+            <menuitem action="InsertStepsBefore"/>
+            <menuitem action="InsertStepsAfter"/>
+        </menu>
+        <menu action="Tools">
+            <menuitem action="CleanUp"/>
+        </menu>
+    </menubar>
+</ui>'''
+
+    # Constructor
+
+    def __init__(self, filename=None):
+        StepMakerWindow.__init__(self)
+
+        self.steps_filename = None
+        self.steps = []
+
+        # Create a UIManager instance
+        uimanager = gtk.UIManager()
+
+        # Add the accelerator group to the toplevel window
+        accelgroup = uimanager.get_accel_group()
+        self.window.add_accel_group(accelgroup)
+
+        # Create an ActionGroup
+        actiongroup = gtk.ActionGroup('StepEditor')
+
+        # Create actions
+        actiongroup.add_actions([
+            ('Quit', gtk.STOCK_QUIT, '_Quit', None, 'Quit the Program',
+             self.quit),
+            ('Open', gtk.STOCK_OPEN, '_Open', None, 'Open steps file',
+             self.open_steps_file),
+            ('CopyStep', gtk.STOCK_COPY, '_Copy current step...', "",
+             'Copy current step to user specified position', self.copy_step),
+            ('DeleteStep', gtk.STOCK_DELETE, '_Delete current step', "",
+             'Delete current step', self.event_remove_clicked),
+            ('InsertNewBefore', gtk.STOCK_ADD, '_New step before current', "",
+             'Insert new step before current step', self.insert_before),
+            ('InsertNewAfter', gtk.STOCK_ADD, 'N_ew step after current', "",
+             'Insert new step after current step', self.insert_after),
+            ('InsertStepsBefore', gtk.STOCK_ADD, '_Steps before current...',
+             "", 'Insert steps (from file) before current step',
+             self.insert_steps_before),
+            ('InsertStepsAfter', gtk.STOCK_ADD, 'Steps _after current...', "",
+             'Insert steps (from file) after current step',
+             self.insert_steps_after),
+            ('CleanUp', gtk.STOCK_DELETE, '_Clean up data directory', "",
+             'Move unused PPM files to a backup directory', self.cleanup),
+            ('File', None, '_File'),
+            ('Edit', None, '_Edit'),
+            ('Insert', None, '_Insert'),
+            ('Tools', None, '_Tools')
+            ])
+
+        def create_shortcut(name, callback, keyname):
+            # Create an action
+            action = gtk.Action(name, None, None, None)
+            # Connect a callback to the action
+            action.connect("activate", callback)
+            actiongroup.add_action_with_accel(action, keyname)
+            # Have the action use accelgroup
+            action.set_accel_group(accelgroup)
+            # Connect the accelerator to the action
+            action.connect_accelerator()
+
+        create_shortcut("Next", self.event_next_clicked, "Page_Down")
+        create_shortcut("Previous", self.event_prev_clicked, "Page_Up")
+
+        # Add the actiongroup to the uimanager
+        uimanager.insert_action_group(actiongroup, 0)
+
+        # Add a UI description
+        uimanager.add_ui_from_string(self.ui)
+
+        # Create a MenuBar
+        menubar = uimanager.get_widget('/MenuBar')
+        self.menu_vbox.pack_start(menubar, False)
+
+        # Remember the Edit menu bar for future reference
+        self.menu_edit = uimanager.get_widget('/MenuBar/Edit')
+        self.menu_edit.set_sensitive(False)
+
+        # Remember the Insert menu bar for future reference
+        self.menu_insert = uimanager.get_widget('/MenuBar/Insert')
+        self.menu_insert.set_sensitive(False)
+
+        # Remember the Tools menu bar for future reference
+        self.menu_tools = uimanager.get_widget('/MenuBar/Tools')
+        self.menu_tools.set_sensitive(False)
+
+        # Next/Previous HBox
+        hbox = gtk.HBox(spacing=10)
+        self.user_vbox.pack_start(hbox)
+        hbox.show()
+
+        self.button_first = gtk.Button(stock=gtk.STOCK_GOTO_FIRST)
+        self.button_first.connect("clicked", self.event_first_clicked)
+        hbox.pack_start(self.button_first)
+        self.button_first.show()
+
+        #self.button_prev = gtk.Button("<< Previous")
+        self.button_prev = gtk.Button(stock=gtk.STOCK_GO_BACK)
+        self.button_prev.connect("clicked", self.event_prev_clicked)
+        hbox.pack_start(self.button_prev)
+        self.button_prev.show()
+
+        self.label_step = gtk.Label("Step:")
+        hbox.pack_start(self.label_step, False)
+        self.label_step.show()
+
+        self.entry_step_num = gtk.Entry()
+        self.entry_step_num.connect("activate", self.event_entry_step_activated)
+        self.entry_step_num.set_width_chars(3)
+        hbox.pack_start(self.entry_step_num, False)
+        self.entry_step_num.show()
+
+        #self.button_next = gtk.Button("Next >>")
+        self.button_next = gtk.Button(stock=gtk.STOCK_GO_FORWARD)
+        self.button_next.connect("clicked", self.event_next_clicked)
+        hbox.pack_start(self.button_next)
+        self.button_next.show()
+
+        self.button_last = gtk.Button(stock=gtk.STOCK_GOTO_LAST)
+        self.button_last.connect("clicked", self.event_last_clicked)
+        hbox.pack_start(self.button_last)
+        self.button_last.show()
+
+        # Save HBox
+        hbox = gtk.HBox(spacing=10)
+        self.user_vbox.pack_start(hbox)
+        hbox.show()
+
+        self.button_save = gtk.Button("_Save current step")
+        self.button_save.connect("clicked", self.event_save_clicked)
+        hbox.pack_start(self.button_save)
+        self.button_save.show()
+
+        self.button_remove = gtk.Button("_Delete current step")
+        self.button_remove.connect("clicked", self.event_remove_clicked)
+        hbox.pack_start(self.button_remove)
+        self.button_remove.show()
+
+        self.button_replace = gtk.Button("_Replace screendump")
+        self.button_replace.connect("clicked", self.event_replace_clicked)
+        hbox.pack_start(self.button_replace)
+        self.button_replace.show()
+
+        # Disable unused widgets
+        self.button_capture.set_sensitive(False)
+        self.spin_latency.set_sensitive(False)
+        self.spin_sensitivity.set_sensitive(False)
+
+        # Disable main vbox because no steps file is loaded
+        self.main_vbox.set_sensitive(False)
+
+        # Set title
+        self.window.set_title("Step Editor")
+
+    # Events
+
+    def delete_event(self, widget, event):
+        # Make sure the step is saved (if the user wants it to be)
+        self.verify_save()
+
+    def event_first_clicked(self, widget):
+        if not self.steps:
+            return
+        # Make sure the step is saved (if the user wants it to be)
+        self.verify_save()
+        # Go to first step
+        self.set_step(0)
+
+    def event_last_clicked(self, widget):
+        if not self.steps:
+            return
+        # Make sure the step is saved (if the user wants it to be)
+        self.verify_save()
+        # Go to last step
+        self.set_step(len(self.steps) - 1)
+
+    def event_prev_clicked(self, widget):
+        if not self.steps:
+            return
+        # Make sure the step is saved (if the user wants it to be)
+        self.verify_save()
+        # Go to previous step
+        # self.steps is non-empty here, so the modulo wraps around safely
+        index = (self.current_step_index - 1) % len(self.steps)
+        self.set_step(index)
+
+    def event_next_clicked(self, widget):
+        if not self.steps:
+            return
+        # Make sure the step is saved (if the user wants it to be)
+        self.verify_save()
+        # Go to next step
+        # self.steps is non-empty here, so the modulo wraps around safely
+        index = (self.current_step_index + 1) % len(self.steps)
+        self.set_step(index)
+
+    def event_entry_step_activated(self, widget):
+        if not self.steps:
+            return
+        step_index = self.entry_step_num.get_text()
+        if not step_index.isdigit():
+            return
+        step_index = int(step_index) - 1
+        if step_index == self.current_step_index:
+            return
+        self.verify_save()
+        self.set_step(step_index)
+
+    def event_save_clicked(self, widget):
+        if not self.steps:
+            return
+        self.save_step()
+
+    def event_remove_clicked(self, widget):
+        if not self.steps:
+            return
+        if not self.question_yes_no("This will modify the steps file."
+                                    " Are you sure?", "Remove step?"):
+            return
+        # Remove step
+        del self.steps[self.current_step_index]
+        # Write changes to file
+        self.write_steps_file(self.steps_filename)
+        # Show the step that took the deleted step's place (set_step clamps
+        # the index if we deleted the last step)
+        self.set_step(self.current_step_index)
+
+    def event_replace_clicked(self, widget):
+        if not self.steps:
+            return
+        # Let the user choose a screendump file
+        current_filename = os.path.join(self.steps_data_dir,
+                                        self.entry_screendump.get_text())
+        filename = self.filedialog("Choose PPM image file",
+                                   default_filename=current_filename)
+        if not filename:
+            return
+        if not ppm_utils.image_verify_ppm_file(filename):
+            self.message("Not a valid PPM image file.", "Error")
+            return
+        self.clear_image()
+        self.clear_barrier_state()
+        self.set_image_from_file(filename)
+        self.update_screendump_id(self.steps_data_dir)
+
+    # Menu actions
+
+    def open_steps_file(self, action):
+        # Make sure the step is saved (if the user wants it to be)
+        self.verify_save()
+        # Let the user choose a steps file
+        current_filename = self.steps_filename
+        filename = self.filedialog("Open steps file",
+                                   default_filename=current_filename)
+        if not filename:
+            return
+        self.set_steps_file(filename)
+
+    def quit(self, action):
+        # Make sure the step is saved (if the user wants it to be)
+        self.verify_save()
+        # Quit
+        gtk.main_quit()
+
+    def copy_step(self, action):
+        if not self.steps:
+            return
+        self.verify_save()
+        self.set_step(self.current_step_index)
+        # Get the desired position
+        step_index = self.inputdialog("Copy step to position:",
+                                      "Copy step",
+                                      str(self.current_step_index + 2))
+        if not step_index:
+            return
+        step_index = int(step_index) - 1
+        # Get the lines of the current step
+        step = self.steps[self.current_step_index]
+        # Insert new step at position step_index
+        self.steps.insert(step_index, step)
+        # Go to new step
+        self.set_step(step_index)
+        # Write changes to disk
+        self.write_steps_file(self.steps_filename)
+
+    def insert_before(self, action):
+        if not self.steps_filename:
+            return
+        if not self.question_yes_no("This will modify the steps file."
+                                    " Are you sure?", "Insert new step?"):
+            return
+        self.verify_save()
+        step_index = self.current_step_index
+        # Get the lines of a blank step
+        self.clear_state()
+        step = self.get_step_lines()
+        # Insert new step at position step_index
+        self.steps.insert(step_index, step)
+        # Go to new step
+        self.set_step(step_index)
+        # Write changes to disk
+        self.write_steps_file(self.steps_filename)
+
+    def insert_after(self, action):
+        if not self.steps_filename:
+            return
+        if not self.question_yes_no("This will modify the steps file."
+                                    " Are you sure?", "Insert new step?"):
+            return
+        self.verify_save()
+        step_index = self.current_step_index + 1
+        # Get the lines of a blank step
+        self.clear_state()
+        step = self.get_step_lines()
+        # Insert new step at position step_index
+        self.steps.insert(step_index, step)
+        # Go to new step
+        self.set_step(step_index)
+        # Write changes to disk
+        self.write_steps_file(self.steps_filename)
+
+    def insert_steps(self, filename, index):
+        # Read the steps file
+        (steps, header) = self.read_steps_file(filename)
+
+        data_dir = ppm_utils.get_data_dir(filename)
+        converted_steps = []
+        for step in steps:
+            # Copy the screendumps into our data dir and collect the
+            # updated step lines that reference them
+            self.set_state_from_step_lines(step, data_dir, warn=False)
+            converted_steps.append(self.get_step_lines(self.steps_data_dir))
+
+        # Insert the converted steps into self.steps
+        self.steps[index:index] = converted_steps
+        # Write changes to disk
+        self.write_steps_file(self.steps_filename)
+
+    def insert_steps_before(self, action):
+        if not self.steps_filename:
+            return
+        # Let the user choose a steps file
+        current_filename = self.steps_filename
+        filename = self.filedialog("Choose steps file",
+                                   default_filename=current_filename)
+        if not filename:
+            return
+        self.verify_save()
+
+        step_index = self.current_step_index
+        # Insert steps at position step_index
+        self.insert_steps(filename, step_index)
+        # Go to new steps
+        self.set_step(step_index)
+
+    def insert_steps_after(self, action):
+        if not self.steps_filename:
+            return
+        # Let the user choose a steps file
+        current_filename = self.steps_filename
+        filename = self.filedialog("Choose steps file",
+                                   default_filename=current_filename)
+        if not filename:
+            return
+        self.verify_save()
+
+        step_index = self.current_step_index + 1
+        # Insert new steps at position step_index
+        self.insert_steps(filename, step_index)
+        # Go to new steps
+        self.set_step(step_index)
+
+    def cleanup(self, action):
+        if not self.steps_filename:
+            return
+        if not self.question_yes_no("All unused PPM files will be moved to a"
+                                    " backup directory. Are you sure?",
+                                    "Clean up data directory?"):
+            return
+        # Remember the current step index
+        current_step_index = self.current_step_index
+        # Get the backup dir
+        backup_dir = os.path.join(self.steps_data_dir, "backup")
+        # Create it if it doesn't exist
+        if not os.path.exists(backup_dir):
+            os.makedirs(backup_dir)
+        # Move all files to the backup dir
+        for filename in glob.glob(os.path.join(self.steps_data_dir,
+                                               "*.[Pp][Pp][Mm]")):
+            shutil.move(filename, backup_dir)
+        # Get the used files back
+        for step in self.steps:
+            self.set_state_from_step_lines(step, backup_dir, warn=False)
+            self.get_step_lines(self.steps_data_dir)
+        # Remove the used files from the backup dir
+        used_files = os.listdir(self.steps_data_dir)
+        for filename in os.listdir(backup_dir):
+            if filename in used_files:
+                os.unlink(os.path.join(backup_dir, filename))
+        # Restore step index
+        self.set_step(current_step_index)
+        # Inform the user
+        self.message("All unused PPM files may be found at %s." %
+                     os.path.abspath(backup_dir),
+                     "Clean up data directory")
+
+    # Methods
+
+    def read_steps_file(self, filename):
+        steps = []
+        header = ""
+
+        file = open(filename, "r")
+        for line in file.readlines():
+            words = line.split()
+            if not words:
+                continue
+            if line.startswith("# ----"):
+                continue
+            if words[0] == "step":
+                steps.append("")
+            if steps:
+                steps[-1] += line
+            else:
+                header += line
+        file.close()
+
+        return (steps, header)
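The loop above splits a steps file into a header plus one string per step: lines before the first `step` line form the header, each `step` line starts a new chunk, and `# ----` separator lines are dropped. A standalone sketch of the same parsing logic, run on hypothetical sample contents:

```python
def split_steps(text):
    # Mirror read_steps_file: lines before the first "step" line form
    # the header; each "step" line starts a new chunk; "# ----"
    # separators and blank lines are dropped.
    steps = []
    header = ""
    for line in text.splitlines(True):
        words = line.split()
        if not words or line.startswith("# ----"):
            continue
        if words[0] == "step":
            steps.append("")
        if steps:
            steps[-1] += line
        else:
            header += line
    return steps, header

# Hypothetical sample contents of a steps file
sample = ("# header comment\n"
          "# " + "-" * 32 + "\n"
          "step 1\n"
          "screendump 1.ppm\n"
          "step 2\n"
          "key ret\n")
steps, header = split_steps(sample)
```

Note that `write_steps_file` below re-emits the `# ----` separators, so a parse/write round trip preserves the on-disk layout.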
+
+    def set_steps_file(self, filename):
+        try:
+            (self.steps, self.header) = self.read_steps_file(filename)
+        except (TypeError, IOError):
+            self.message("Cannot read file %s." % filename, "Error")
+            return
+
+        self.steps_filename = filename
+        self.steps_data_dir = ppm_utils.get_data_dir(filename)
+        # Go to step 0
+        self.set_step(0)
+
+    def set_step(self, index):
+        # Limit index to legal boundaries
+        if index < 0:
+            index = 0
+        if index > len(self.steps) - 1:
+            index = len(self.steps) - 1
+
+        # Enable the menus
+        self.menu_edit.set_sensitive(True)
+        self.menu_insert.set_sensitive(True)
+        self.menu_tools.set_sensitive(True)
+
+        # If no steps exist...
+        if self.steps == []:
+            self.current_step_index = index
+            self.current_step = None
+            # Set window title
+            self.window.set_title("Step Editor -- %s" %
+                                  os.path.basename(self.steps_filename))
+            # Set step entry widget text
+            self.entry_step_num.set_text("")
+            # Clear the state of all widgets
+            self.clear_state()
+            # Disable the main vbox
+            self.main_vbox.set_sensitive(False)
+            return
+
+        self.current_step_index = index
+        self.current_step = self.steps[index]
+        # Set window title
+        self.window.set_title("Step Editor -- %s -- step %d" %
+                              (os.path.basename(self.steps_filename),
+                               index + 1))
+        # Set step entry widget text
+        self.entry_step_num.set_text(str(self.current_step_index + 1))
+        # Load the state from the step lines
+        self.set_state_from_step_lines(self.current_step, self.steps_data_dir)
+        # Enable the main vbox
+        self.main_vbox.set_sensitive(True)
+        # Make sure the step lines in self.current_step are identical to the
+        # output of self.get_step_lines
+        self.current_step = self.get_step_lines()
+
+    def verify_save(self):
+        if not self.steps:
+            return
+        # See if the user changed anything
+        if self.get_step_lines() != self.current_step:
+            if self.question_yes_no("Step contents have been modified."
+                                    " Save step?", "Save changes?"):
+                self.save_step()
+
+    def save_step(self):
+        lines = self.get_step_lines(self.steps_data_dir)
+        if lines is not None:
+            self.steps[self.current_step_index] = lines
+            self.current_step = lines
+            self.write_steps_file(self.steps_filename)
+
+    def write_steps_file(self, filename):
+        file = open(filename, "w")
+        file.write(self.header)
+        for step in self.steps:
+            file.write("# " + "-" * 32 + "\n")
+            file.write(step)
+        file.close()
+
+
+if __name__ == "__main__":
+    se = StepEditor()
+    if len(sys.argv) > 1:
+        se.set_steps_file(sys.argv[1])
+    gtk.main()
diff --git a/client/virt/virt_test_setup.py b/client/virt/virt_test_setup.py
new file mode 100644
index 0000000..1125aea
--- /dev/null
+++ b/client/virt/virt_test_setup.py
@@ -0,0 +1,700 @@
+"""
+Library to perform pre/post test setup for KVM autotest.
+"""
+import os, shutil, tempfile, re, ConfigParser, glob, inspect
+import logging, time
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+
+
+@error.context_aware
+def cleanup(dir):
+    """
+    If dir is a mountpoint, do what is possible to unmount it. Afterwards,
+    try to remove it.
+
+    @param dir: Directory to be cleaned up.
+    """
+    error.context("cleaning up unattended install directory %s" % dir)
+    if os.path.ismount(dir):
+        utils.run('fuser -k %s' % dir, ignore_status=True)
+        utils.run('umount %s' % dir)
+    if os.path.isdir(dir):
+        shutil.rmtree(dir)
+
+
+@error.context_aware
+def clean_old_image(image):
+    """
+    Clean a leftover image file from previous processes. If it contains a
+    mounted file system, do the proper cleanup procedures.
+
+    @param image: Path to image to be cleaned up.
+    """
+    error.context("cleaning up old leftover image %s" % image)
+    if os.path.exists(image):
+        mtab = open('/etc/mtab', 'r')
+        mtab_contents = mtab.read()
+        mtab.close()
+        if image in mtab_contents:
+            utils.run('fuser -k %s' % image, ignore_status=True)
+            utils.run('umount %s' % image)
+        os.remove(image)
+
+
+def display_attributes(instance):
+    """
+    Inspect the attributes of a given class instance and log them;
+    convenient for debugging.
+    """
+    logging.debug("Attributes set:")
+    for member in inspect.getmembers(instance):
+        name, value = member
+        attribute = getattr(instance, name)
+        if not (name.startswith("__") or callable(attribute) or not value):
+            logging.debug("    %s: %s", name, value)
+
+
+class Disk(object):
+    """
+    Abstract class for Disk objects, with the common methods implemented.
+    """
+    def __init__(self):
+        self.path = None
+
+
+    def setup_answer_file(self, filename, contents):
+        utils.open_write_close(os.path.join(self.mount, filename), contents)
+
+
+    def copy_to(self, src):
+        logging.debug("Copying %s to disk image mount", src)
+        dst = os.path.join(self.mount, os.path.basename(src))
+        if os.path.isdir(src):
+            shutil.copytree(src, dst)
+        elif os.path.isfile(src):
+            shutil.copyfile(src, dst)
+
+
+    def close(self):
+        os.chmod(self.path, 0755)
+        cleanup(self.mount)
+        logging.debug("Disk %s successfully set", self.path)
+
+
+class FloppyDisk(Disk):
+    """
+    Represents a 1.44 MB floppy disk. We can copy files to it and set it up
+    in convenient ways.
+    """
+    @error.context_aware
+    def __init__(self, path, qemu_img_binary, tmpdir):
+        error.context("Creating unattended install floppy image %s" % path)
+        self.tmpdir = tmpdir
+        self.mount = tempfile.mkdtemp(prefix='floppy_', dir=self.tmpdir)
+        self.virtio_mount = None
+        self.path = path
+        clean_old_image(path)
+        if not os.path.isdir(os.path.dirname(path)):
+            os.makedirs(os.path.dirname(path))
+
+        try:
+            c_cmd = '%s create -f raw %s 1440k' % (qemu_img_binary, path)
+            utils.run(c_cmd)
+            f_cmd = 'mkfs.msdos -s 1 %s' % path
+            utils.run(f_cmd)
+            m_cmd = 'mount -o loop,rw %s %s' % (path, self.mount)
+            utils.run(m_cmd)
+        except error.CmdError, e:
+            cleanup(self.mount)
+            raise
+
+
+    def _copy_virtio_drivers(self, virtio_floppy):
+        """
+        Copy the virtio drivers on the virtio floppy to the install floppy.
+
+        1) Mount the floppy containing the viostor drivers
+        2) Copy its contents to the root of the install floppy
+        """
+        virtio_mount = tempfile.mkdtemp(prefix='virtio_floppy_',
+                                        dir=self.tmpdir)
+
+        pwd = os.getcwd()
+        try:
+            m_cmd = 'mount -o loop %s %s' % (virtio_floppy, virtio_mount)
+            utils.run(m_cmd)
+            os.chdir(virtio_mount)
+            path_list = glob.glob('*')
+            for path in path_list:
+                self.copy_to(path)
+        finally:
+            os.chdir(pwd)
+            cleanup(virtio_mount)
+
+
+    def setup_virtio_win2003(self, virtio_floppy, virtio_oemsetup_id):
+        """
+        Setup the install floppy with the virtio storage drivers, win2003 style.
+
+        Win2003 and WinXP depend on the txtsetup.oem file (a .ini file) to
+        install the virtio drivers from the floppy.
+        Process:
+
+        1) Copy the virtio drivers on the virtio floppy to the install floppy
+        2) Parse the ini file with config parser
+        3) Modify the identifier of the default SCSI driver in the 'Defaults'
+           section of the config parser object
+        4) Re-write the config file to the disk
+        """
+        self._copy_virtio_drivers(virtio_floppy)
+        txtsetup_oem = os.path.join(self.mount, 'txtsetup.oem')
+        if not os.path.isfile(txtsetup_oem):
+            raise IOError('File txtsetup.oem not found on the install '
+                          'floppy. Please verify that your virtio driver '
+                          'floppy image contains this file')
+        parser = ConfigParser.ConfigParser()
+        parser.read(txtsetup_oem)
+        if not parser.has_section('Defaults'):
+            raise ValueError('File txtsetup.oem does not have the section '
+                             '"Defaults". Please check txtsetup.oem')
+        default_driver = parser.get('Defaults', 'SCSI')
+        if default_driver != virtio_oemsetup_id:
+            parser.set('Defaults', 'SCSI', virtio_oemsetup_id)
+            fp = open(txtsetup_oem, 'w')
+            parser.write(fp)
+            fp.close()
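The ini rewrite above can be sketched as a minimal, self-contained example using ConfigParser; the file contents and identifier below are purely illustrative, not taken from a real driver floppy:

```python
try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2, as used in the patch
import os
import tempfile

# Hypothetical txtsetup.oem contents; a real file ships on the virtio
# driver floppy and its identifiers differ.
SAMPLE = ("[Defaults]\n"
          "SCSI = OLD_ID\n")

def set_default_scsi_driver(path, oemsetup_id):
    # Same idea as setup_virtio_win2003: point the default SCSI driver
    # entry at the requested identifier and rewrite the file only when
    # it actually changes.
    parser = configparser.ConfigParser()
    parser.read(path)
    if parser.get('Defaults', 'SCSI') != oemsetup_id:
        parser.set('Defaults', 'SCSI', oemsetup_id)
        fp = open(path, 'w')
        parser.write(fp)
        fp.close()

# Write the sample file, rewrite it, then read the result back
fd, path = tempfile.mkstemp()
os.close(fd)
fp = open(path, 'w')
fp.write(SAMPLE)
fp.close()
set_default_scsi_driver(path, 'NEW_ID')
check = configparser.ConfigParser()
check.read(path)
result = check.get('Defaults', 'SCSI')
os.unlink(path)
```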
+
+
+    def setup_virtio_win2008(self, virtio_floppy):
+        """
+        Setup the install floppy with the virtio storage drivers, win2008 style.
+
+        Win2008, Vista and 7 require the path to the drivers to be specified
+        in the unattended file, so we just need to copy the drivers to the
+        install floppy disk.
+        Process:
+
+        1) Copy the virtio drivers on the virtio floppy to the install floppy
+        """
+        self._copy_virtio_drivers(virtio_floppy)
+
+
+class CdromDisk(Disk):
+    """
+    Represents a CDROM disk that we can master according to our needs.
+    """
+    def __init__(self, path, tmpdir):
+        self.mount = tempfile.mkdtemp(prefix='cdrom_unattended_', dir=tmpdir)
+        self.path = path
+        clean_old_image(path)
+        if not os.path.isdir(os.path.dirname(path)):
+            os.makedirs(os.path.dirname(path))
+
+
+    @error.context_aware
+    def close(self):
+        error.context("Creating unattended install CD image %s" % self.path)
+        g_cmd = ('mkisofs -o %s -max-iso9660-filenames '
+                 '-relaxed-filenames -D --input-charset iso8859-1 '
+                 '%s' % (self.path, self.mount))
+        utils.run(g_cmd)
+
+        os.chmod(self.path, 0755)
+        cleanup(self.mount)
+        logging.debug("unattended install CD image %s successfully created",
+                      self.path)
+
+
+class UnattendedInstallConfig(object):
+    """
+    Creates a floppy disk image that will contain a config file for unattended
+    OS install. The parameters to the script are retrieved from environment
+    variables.
+    """
+    def __init__(self, test, params):
+        """
+        Sets class attributes from test parameters.
+
+        @param test: KVM test object.
+        @param params: Dictionary with test parameters.
+        """
+        root_dir = test.bindir
+        images_dir = os.path.join(root_dir, 'images')
+        self.deps_dir = os.path.join(root_dir, 'deps')
+        self.unattended_dir = os.path.join(root_dir, 'unattended')
+
+        attributes = ['kernel_args', 'finish_program', 'cdrom_cd1',
+                      'unattended_file', 'medium', 'url', 'kernel', 'initrd',
+                      'nfs_server', 'nfs_dir', 'install_virtio', 'floppy',
+                      'cdrom_unattended', 'boot_path', 'extra_params',
+                      'qemu_img_binary', 'cdkey']
+
+        for a in attributes:
+            setattr(self, a, params.get(a, ''))
+
+        if self.install_virtio == 'yes':
+            v_attributes = ['virtio_floppy', 'virtio_storage_path',
+                            'virtio_network_path', 'virtio_oemsetup_id',
+                            'virtio_network_installer']
+            for va in v_attributes:
+                setattr(self, va, params.get(va, ''))
+
+        self.tmpdir = test.tmpdir
+
+        if getattr(self, 'unattended_file'):
+            self.unattended_file = os.path.join(root_dir, self.unattended_file)
+
+        if getattr(self, 'finish_program'):
+            self.finish_program = os.path.join(root_dir, self.finish_program)
+
+        if getattr(self, 'qemu_img_binary'):
+            if not os.path.isfile(getattr(self, 'qemu_img_binary')):
+                self.qemu_img_binary = os.path.join(root_dir,
+                                                    self.qemu_img_binary)
+
+        if getattr(self, 'cdrom_cd1'):
+            self.cdrom_cd1 = os.path.join(root_dir, self.cdrom_cd1)
+        self.cdrom_cd1_mount = tempfile.mkdtemp(prefix='cdrom_cd1_',
+                                                dir=self.tmpdir)
+        if self.medium == 'nfs':
+            self.nfs_mount = tempfile.mkdtemp(prefix='nfs_',
+                                              dir=self.tmpdir)
+
+        if getattr(self, 'floppy'):
+            self.floppy = os.path.join(root_dir, self.floppy)
+            if not os.path.isdir(os.path.dirname(self.floppy)):
+                os.makedirs(os.path.dirname(self.floppy))
+
+        self.image_path = os.path.dirname(self.kernel)
+
+
+    @error.context_aware
+    def render_answer_file(self):
+        """
+        Replace KVM_TEST_CDKEY (in the unattended file) with the cdkey
+        provided for this test and replace the KVM_TEST_MEDIUM with
+        the tree url or nfs address provided for this test.
+
+        @return: Answer file contents
+        """
+        error.base_context('Rendering final answer file')
+        error.context('Reading answer file %s' % self.unattended_file)
+        unattended_contents = open(self.unattended_file).read()
+        dummy_cdkey_re = r'\bKVM_TEST_CDKEY\b'
+        if re.search(dummy_cdkey_re, unattended_contents):
+            if self.cdkey:
+                unattended_contents = re.sub(dummy_cdkey_re, self.cdkey,
+                                             unattended_contents)
+            else:
+                print ("WARNING: 'cdkey' required but not specified for "
+                       "this unattended installation")
+
+        dummy_medium_re = r'\bKVM_TEST_MEDIUM\b'
+        if self.medium == "cdrom":
+            content = "cdrom"
+        elif self.medium == "url":
+            content = "url --url %s" % self.url
+        elif self.medium == "nfs":
+            content = "nfs --server=%s --dir=%s" % (self.nfs_server,
+                                                    self.nfs_dir)
+        else:
+            raise ValueError("Unexpected installation medium %s" % self.medium)
+
+        unattended_contents = re.sub(dummy_medium_re, content,
+                                     unattended_contents)
+
+        def replace_virtio_key(contents, dummy_re, attribute_name):
+            """
+            Replace a virtio placeholder with the appropriate driver path.
+
+            If install_virtio is not set, replace it with a dummy path.
+
+            @param contents: Contents of the unattended file.
+            @param dummy_re: Regular expression used to search the
+                    unattended file contents.
+            @param attribute_name: Name of the instance attribute that
+                    holds the driver path.
+            """
+            dummy_path = "C:"
+            driver = getattr(self, attribute_name, '')
+
+            if re.search(dummy_re, contents):
+                if self.install_virtio == "yes":
+                    if driver.endswith("msi"):
+                        driver = 'msiexec /passive /package ' + driver
+                    else:
+                        try:
+                            # Let's escape windows style paths properly
+                            drive, path = driver.split(":")
+                            driver = drive + ":" + re.escape(path)
+                        except ValueError:
+                            # Not a "drive:path" string; leave it unescaped
+                            pass
+                    contents = re.sub(dummy_re, driver, contents)
+                else:
+                    contents = re.sub(dummy_re, dummy_path, contents)
+            return contents
+
+        vdict = {r'\bKVM_TEST_STORAGE_DRIVER_PATH\b':
+                 'virtio_storage_path',
+                 r'\bKVM_TEST_NETWORK_DRIVER_PATH\b':
+                 'virtio_network_path',
+                 r'\bKVM_TEST_VIRTIO_NETWORK_INSTALLER\b':
+                 'virtio_network_installer'}
+
+        for vkey in vdict:
+            unattended_contents = replace_virtio_key(
+                                                   contents=unattended_contents,
+                                                   dummy_re=vkey,
+                                                   attribute_name=vdict[vkey])
+
+        logging.debug("Unattended install contents:")
+        for line in unattended_contents.splitlines():
+            logging.debug(line)
+        return unattended_contents
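The medium and cdkey placeholders are plain word-boundary regex substitutions; a small sketch of the same mechanism on a hypothetical template (real templates live in the unattended/ directory):

```python
import re

# Hypothetical answer-file template using the KVM_TEST_* placeholders
template = ("install KVM_TEST_MEDIUM\n"
            "key KVM_TEST_CDKEY\n")

def render(template, medium_spec, cdkey):
    # Word-boundary anchored substitutions, as in render_answer_file;
    # the cdkey placeholder is only filled when a key is available.
    out = re.sub(r'\bKVM_TEST_MEDIUM\b', medium_spec, template)
    if cdkey:
        out = re.sub(r'\bKVM_TEST_CDKEY\b', cdkey, out)
    return out

rendered = render(template, "nfs --server=10.0.0.1 --dir=/install", "ABC-123")
```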
+
+
+    def setup_boot_disk(self):
+        answer_contents = self.render_answer_file()
+
+        if self.unattended_file.endswith('.sif'):
+            dest_fname = 'winnt.sif'
+            setup_file = 'winnt.bat'
+            boot_disk = FloppyDisk(self.floppy, self.qemu_img_binary,
+                                   self.tmpdir)
+            boot_disk.setup_answer_file(dest_fname, answer_contents)
+            setup_file_path = os.path.join(self.unattended_dir, setup_file)
+            boot_disk.copy_to(setup_file_path)
+            if self.install_virtio == "yes":
+                boot_disk.setup_virtio_win2003(self.virtio_floppy,
+                                               self.virtio_oemsetup_id)
+            boot_disk.copy_to(self.finish_program)
+
+        elif self.unattended_file.endswith('.ks'):
+            # Red Hat kickstart install
+            dest_fname = 'ks.cfg'
+            if self.cdrom_unattended:
+                boot_disk = CdromDisk(self.cdrom_unattended, self.tmpdir)
+            elif self.floppy:
+                boot_disk = FloppyDisk(self.floppy, self.qemu_img_binary,
+                                       self.tmpdir)
+            else:
+                raise ValueError("Neither cdrom_unattended nor floppy is set "
+                                 "in the config file, please verify")
+            boot_disk.setup_answer_file(dest_fname, answer_contents)
+
+        elif self.unattended_file.endswith('.xml'):
+            if "autoyast" in self.extra_params:
+                # SUSE autoyast install
+                dest_fname = "autoinst.xml"
+                if self.cdrom_unattended:
+                    boot_disk = CdromDisk(self.cdrom_unattended, self.tmpdir)
+                elif self.floppy:
+                    boot_disk = FloppyDisk(self.floppy, self.qemu_img_binary,
+                                           self.tmpdir)
+                else:
+                    raise ValueError("Neither cdrom_unattended nor floppy is "
+                                     "set in the config file, please verify")
+                boot_disk.setup_answer_file(dest_fname, answer_contents)
+
+            else:
+                # Windows unattended install
+                dest_fname = "autounattend.xml"
+                boot_disk = FloppyDisk(self.floppy, self.qemu_img_binary,
+                                       self.tmpdir)
+                boot_disk.setup_answer_file(dest_fname, answer_contents)
+                if self.install_virtio == "yes":
+                    boot_disk.setup_virtio_win2008(self.virtio_floppy)
+                boot_disk.copy_to(self.finish_program)
+
+        else:
+            raise ValueError('Unknown answer file type: %s' %
+                             self.unattended_file)
+
+        boot_disk.close()
+
+
+    @error.context_aware
+    def setup_cdrom(self):
+        """
+        Mount cdrom and copy vmlinuz and initrd.img.
+        """
+        error.context("Copying vmlinuz and initrd.img from install cdrom %s" %
+                      self.cdrom_cd1)
+        m_cmd = ('mount -t iso9660 -v -o loop,ro %s %s' %
+                 (self.cdrom_cd1, self.cdrom_cd1_mount))
+        utils.run(m_cmd)
+
+        try:
+            if not os.path.isdir(self.image_path):
+                os.makedirs(self.image_path)
+            kernel_fetch_cmd = ("cp %s/%s/%s %s" %
+                                (self.cdrom_cd1_mount, self.boot_path,
+                                 os.path.basename(self.kernel), self.kernel))
+            utils.run(kernel_fetch_cmd)
+            initrd_fetch_cmd = ("cp %s/%s/%s %s" %
+                                (self.cdrom_cd1_mount, self.boot_path,
+                                 os.path.basename(self.initrd), self.initrd))
+            utils.run(initrd_fetch_cmd)
+        finally:
+            cleanup(self.cdrom_cd1_mount)
+
+
+    @error.context_aware
+    def setup_url(self):
+        """
+        Download the vmlinuz and initrd.img from URL.
+        """
+        error.context("downloading vmlinuz and initrd.img from %s" % self.url)
+        os.chdir(self.image_path)
+        kernel_fetch_cmd = "wget -q %s/%s/%s" % (self.url, self.boot_path,
+                                                 os.path.basename(self.kernel))
+        initrd_fetch_cmd = "wget -q %s/%s/%s" % (self.url, self.boot_path,
+                                                 os.path.basename(self.initrd))
+
+        if os.path.exists(self.kernel):
+            os.remove(self.kernel)
+        if os.path.exists(self.initrd):
+            os.remove(self.initrd)
+
+        utils.run(kernel_fetch_cmd)
+        utils.run(initrd_fetch_cmd)
+
+
+    @error.context_aware
+    def setup_nfs(self):
+        """
+        Copy the vmlinuz and initrd.img from nfs.
+        """
+        error.context("copying the vmlinuz and initrd.img from NFS share")
+
+        m_cmd = ("mount %s:%s %s -o ro" %
+                 (self.nfs_server, self.nfs_dir, self.nfs_mount))
+        utils.run(m_cmd)
+
+        try:
+            kernel_fetch_cmd = ("cp %s/%s/%s %s" %
+                                (self.nfs_mount, self.boot_path,
+                                os.path.basename(self.kernel), self.image_path))
+            utils.run(kernel_fetch_cmd)
+            initrd_fetch_cmd = ("cp %s/%s/%s %s" %
+                                (self.nfs_mount, self.boot_path,
+                                os.path.basename(self.initrd), self.image_path))
+            utils.run(initrd_fetch_cmd)
+        finally:
+            cleanup(self.nfs_mount)
+
+
+    def setup(self):
+        """
+        Configure the environment for unattended install.
+
+        Uses an appropriate strategy according to each install model.
+        """
+        logging.info("Starting unattended install setup")
+        display_attributes(self)
+
+        if self.unattended_file and (self.floppy or self.cdrom_unattended):
+            self.setup_boot_disk()
+        if self.medium == "cdrom":
+            if self.kernel and self.initrd:
+                self.setup_cdrom()
+        elif self.medium == "url":
+            self.setup_url()
+        elif self.medium == "nfs":
+            self.setup_nfs()
+        else:
+            raise ValueError("Unexpected installation method %s" %
+                             self.medium)
+
+
+class HugePageConfig(object):
+    def __init__(self, params):
+        """
+        Gets environment variable values and calculates the target number
+        of huge memory pages.
+
+        @param params: Dict like object containing parameters for the test.
+        """
+        self.vms = len(params.objects("vms"))
+        self.mem = int(params.get("mem"))
+        self.max_vms = int(params.get("max_vms", 0))
+        self.hugepage_path = '/mnt/kvm_hugepage'
+        self.hugepage_size = self.get_hugepage_size()
+        self.target_hugepages = self.get_target_hugepages()
+        self.kernel_hp_file = '/proc/sys/vm/nr_hugepages'
+
+
+    def get_hugepage_size(self):
+        """
+        Get the current system setting for huge memory page size.
+        """
+        meminfo = open('/proc/meminfo', 'r').readlines()
+        huge_line_list = [h for h in meminfo if h.startswith("Hugepagesize")]
+        try:
+            return int(huge_line_list[0].split()[1])
+        except ValueError, e:
+            raise ValueError("Could not get huge page size setting from "
+                             "/proc/meminfo: %s" % e)
+
+
+    def get_target_hugepages(self):
+        """
+        Calculate the target number of hugepages for testing purposes.
+        """
+        if self.vms < self.max_vms:
+            self.vms = self.max_vms
+        # memory of all VMs plus qemu overhead of 64MB per guest
+        vmsm = (self.vms * self.mem) + (self.vms * 64)
+        return int(vmsm * 1024 / self.hugepage_size)
+
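For reference, the target-hugepage arithmetic above can be checked with a standalone sketch outside the patch (the function name and sample values are illustrative; the 64 MB per-guest qemu overhead is the same constant used in get_target_hugepages()):

```python
# Standalone sketch of HugePageConfig.get_target_hugepages(): back the
# memory of all VMs, plus a fixed qemu overhead per guest, with hugepages.
def target_hugepages(vms, mem_mb, hugepage_size_kb, overhead_mb=64):
    """Return the number of hugepages needed to back all guests."""
    # total guest memory plus per-guest overhead, in MB
    vmsm = (vms * mem_mb) + (vms * overhead_mb)
    # convert MB to kB, then divide by the hugepage size (also in kB)
    return int(vmsm * 1024 / hugepage_size_kb)
```

With the common 2048 kB hugepage size, two 512 MB guests need (2*512 + 2*64) * 1024 / 2048 = 576 pages.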
+
+    @error.context_aware
+    def set_hugepages(self):
+        """
+        Sets the hugepage limit to the target hugepage value calculated.
+        """
+        error.context("setting hugepages limit to %s" % self.target_hugepages)
+        hugepage_cfg = open(self.kernel_hp_file, "r+")
+        hp = int(hugepage_cfg.readline())
+        while hp < self.target_hugepages:
+            loop_hp = hp
+            hugepage_cfg.write(str(self.target_hugepages))
+            hugepage_cfg.flush()
+            hugepage_cfg.seek(0)
+            hp = int(hugepage_cfg.readline())
+            if loop_hp == hp:
+                raise ValueError("Cannot set the kernel hugepage setting "
+                                 "to the target value of %d hugepages." %
+                                 self.target_hugepages)
+        hugepage_cfg.close()
+        logging.debug("Successfully set %s large memory pages on host",
+                      self.target_hugepages)
+
+
+    @error.context_aware
+    def mount_hugepage_fs(self):
+        """
+        Verify if there's a hugetlbfs mount set. If there's none, will set up
+        a hugetlbfs mount using the class attribute that defines the mount
+        point.
+        """
+        error.context("mounting hugepages path")
+        if not os.path.ismount(self.hugepage_path):
+            if not os.path.isdir(self.hugepage_path):
+                os.makedirs(self.hugepage_path)
+            cmd = "mount -t hugetlbfs none %s" % self.hugepage_path
+            utils.system(cmd)
+
+
+    def setup(self):
+        logging.debug("Number of VMs this test will use: %d", self.vms)
+        logging.debug("Amount of memory used by each vm: %s", self.mem)
+        logging.debug("System setting for large memory page size: %s",
+                      self.hugepage_size)
+        logging.debug("Number of large memory pages needed for this test: %s",
+                      self.target_hugepages)
+        self.set_hugepages()
+        self.mount_hugepage_fs()
+
+
+    @error.context_aware
+    def cleanup(self):
+        error.context("trying to deallocate hugepage memory")
+        try:
+            utils.system("umount %s" % self.hugepage_path)
+        except error.CmdError:
+            return
+        utils.system("echo 0 > %s" % self.kernel_hp_file)
+        logging.debug("Hugepage memory successfully deallocated")
+
+
+class EnospcConfig(object):
+    """
+    Performs setup for the test enospc. This is a borg class, similar to a
+    singleton. The idea is to keep state in memory for when we call cleanup()
+    on postprocessing.
+    """
+    __shared_state = {}
+    def __init__(self, test, params):
+        self.__dict__ = self.__shared_state
+        root_dir = test.bindir
+        self.tmpdir = test.tmpdir
+        self.qemu_img_binary = params.get('qemu_img_binary')
+        if not os.path.isfile(self.qemu_img_binary):
+            self.qemu_img_binary = os.path.join(root_dir,
+                                                self.qemu_img_binary)
+        self.raw_file_path = os.path.join(self.tmpdir, 'enospc.raw')
+        # Here we're trying to choose fairly explanatory names so we are
+        # less likely to conflict with other devices on the system
+        self.vgtest_name = params.get("vgtest_name")
+        self.lvtest_name = params.get("lvtest_name")
+        self.lvtest_device = "/dev/%s/%s" % (self.vgtest_name, self.lvtest_name)
+        image_dir = os.path.dirname(params.get("image_name"))
+        self.qcow_file_path = os.path.join(image_dir, 'enospc.qcow2')
+        if not hasattr(self, 'loopback'):
+            self.loopback = ''
+
+
+    @error.context_aware
+    def setup(self):
+        logging.debug("Starting enospc setup")
+        error.context("performing enospc setup")
+        display_attributes(self)
+        # Double check if there aren't any leftovers
+        self.cleanup()
+        try:
+            utils.run("%s create -f raw %s 10G" %
+                      (self.qemu_img_binary, self.raw_file_path))
+            # Associate a loopback device with the raw file.
+            # Subject to race conditions, that's why try here to associate
+            # it with the raw file as quickly as possible
+            l_result = utils.run("losetup -f")
+            utils.run("losetup -f %s" % self.raw_file_path)
+            self.loopback = l_result.stdout.strip()
+            # Add the loopback device configured to the list of pvs
+            # recognized by LVM
+            utils.run("pvcreate %s" % self.loopback)
+            utils.run("vgcreate %s %s" % (self.vgtest_name, self.loopback))
+            # Create an lv inside the vg with starting size of 200M
+            utils.run("lvcreate -L 200M -n %s %s" %
+                      (self.lvtest_name, self.vgtest_name))
+            # Create a 10GB qcow2 image in the logical volume
+            utils.run("%s create -f qcow2 %s 10G" %
+                      (self.qemu_img_binary, self.lvtest_device))
+            # Let's symlink the logical volume with the image name that autotest
+            # expects this device to have
+            os.symlink(self.lvtest_device, self.qcow_file_path)
+        except Exception:
+            self.cleanup()
+            raise
+
+    @error.context_aware
+    def cleanup(self):
+        error.context("performing enospc cleanup")
+        if os.path.isfile(self.lvtest_device):
+            utils.run("fuser -k %s" % self.lvtest_device)
+            time.sleep(2)
+        l_result = utils.run("lvdisplay")
+        # Let's remove all volumes inside the volume group created
+        if self.lvtest_name in l_result.stdout:
+            utils.run("lvremove -f %s" % self.lvtest_device)
+        # Now, removing the volume group itself
+        v_result = utils.run("vgdisplay")
+        if self.vgtest_name in v_result.stdout:
+            utils.run("vgremove -f %s" % self.vgtest_name)
+        # Now, if we can, let's remove the physical volume from lvm list
+        if self.loopback:
+            p_result = utils.run("pvdisplay")
+            if self.loopback in p_result.stdout:
+                utils.run("pvremove -f %s" % self.loopback)
+        l_result = utils.run('losetup -a')
+        if self.loopback and (self.loopback in l_result.stdout):
+            try:
+                utils.run("losetup -d %s" % self.loopback)
+            except error.CmdError:
+                logging.error("Failed to liberate loopback %s", self.loopback)
+        if os.path.islink(self.qcow_file_path):
+            os.remove(self.qcow_file_path)
+        if os.path.isfile(self.raw_file_path):
+            os.remove(self.raw_file_path)
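The losetup/LVM sequence that EnospcConfig.setup() runs is easier to review as a dry-run command builder; this sketch is illustrative only (the helper name is hypothetical, nothing here is executed, and the patch additionally runs a plain `losetup -f` first to learn which loop device will be grabbed):

```python
# Dry-run sketch of the command sequence in EnospcConfig.setup().
# Builds the commands as strings; names mirror the defaults above.
def enospc_setup_cmds(raw_path, loopback, vg, lv, qemu_img="qemu-img"):
    lv_device = "/dev/%s/%s" % (vg, lv)
    return [
        "%s create -f raw %s 10G" % (qemu_img, raw_path),    # backing raw file
        "losetup -f %s" % raw_path,                          # attach loop device
        "pvcreate %s" % loopback,                            # register PV with LVM
        "vgcreate %s %s" % (vg, loopback),                   # VG on top of the PV
        "lvcreate -L 200M -n %s %s" % (lv, vg),              # small LV to overflow
        "%s create -f qcow2 %s 10G" % (qemu_img, lv_device), # oversized qcow2
    ]
```

The 200M logical volume holding a 10G qcow2 image is what eventually produces the ENOSPC condition the test exercises.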
diff --git a/client/virt/virt_test_utils.py b/client/virt/virt_test_utils.py
new file mode 100644
index 0000000..f3d77ae
--- /dev/null
+++ b/client/virt/virt_test_utils.py
@@ -0,0 +1,754 @@
+"""
+High-level KVM test utility functions.
+
+This module is meant to reduce code size by performing common test procedures.
+Generally, code here should look like test code.
+More specifically:
+    - Functions in this module should raise exceptions if things go wrong
+      (unlike functions in kvm_utils.py and kvm_vm.py which report failure via
+      their returned values).
+    - Functions in this module may use logging.info(), in addition to
+      logging.debug() and logging.error(), to log messages the user may be
+      interested in (unlike kvm_utils.py and kvm_vm.py which use
+      logging.debug() for anything that isn't an error).
+    - Functions in this module typically use functions and classes from
+      lower-level modules (e.g. kvm_utils.py, kvm_vm.py, kvm_subprocess.py).
+    - Functions in this module should not be used by lower-level modules.
+    - Functions in this module should be used in the right context.
+      For example, a function should not be used where it may display
+      misleading or inaccurate info or debug messages.
+
+@copyright: 2008-2009 Red Hat Inc.
+"""
+
+import time, os, logging, re, signal
+from autotest_lib.client.bin import utils
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.tools import scan_results
+import virt_utils, virt_vm, aexpect
+
+
+def get_living_vm(env, vm_name):
+    """
+    Get a VM object from the environment and make sure it's alive.
+
+    @param env: Dictionary with test environment.
+    @param vm_name: Name of the desired VM object.
+    @return: A VM object.
+    """
+    vm = env.get_vm(vm_name)
+    if not vm:
+        raise error.TestError("VM '%s' not found in environment" % vm_name)
+    if not vm.is_alive():
+        raise error.TestError("VM '%s' seems to be dead; test requires a "
+                              "living VM" % vm_name)
+    return vm
+
+
+def wait_for_login(vm, nic_index=0, timeout=240, start=0, step=2, serial=None):
+    """
+    Try logging into a VM repeatedly.  Stop on success or when timeout expires.
+
+    @param vm: VM object.
+    @param nic_index: Index of NIC to access in the VM.
+    @param timeout: Time to wait before giving up.
+    @param serial: Whether to use a serial connection instead of a remote
+            (ssh, rss) one.
+    @return: A shell session object.
+    """
+    end_time = time.time() + timeout
+    session = None
+    if serial:
+        type = 'serial'
+        logging.info("Trying to log into guest %s using serial connection,"
+                     " timeout %ds", vm.name, timeout)
+        time.sleep(start)
+        while time.time() < end_time:
+            try:
+                session = vm.serial_login()
+                break
+            except virt_utils.LoginError, e:
+                logging.debug(e)
+            time.sleep(step)
+    else:
+        type = 'remote'
+        logging.info("Trying to log into guest %s using remote connection,"
+                     " timeout %ds", vm.name, timeout)
+        time.sleep(start)
+        while time.time() < end_time:
+            try:
+                session = vm.login(nic_index=nic_index)
+                break
+            except (virt_utils.LoginError, virt_vm.VMError), e:
+                logging.debug(e)
+            time.sleep(step)
+    if not session:
+        raise error.TestFail("Could not log into guest %s using %s connection" %
+                             (vm.name, type))
+    logging.info("Logged into guest %s using %s connection", vm.name, type)
+    return session
+
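wait_for_login() implements the same deadline-plus-step retry loop twice (serial and remote). The shared shape can be sketched generically; `retry_until` and its parameters are illustrative, not part of the patch:

```python
import time

# Generic sketch of the retry loop in wait_for_login(): keep calling an
# action until it succeeds or the deadline passes.
def retry_until(action, timeout, step=2, start=0):
    """Return action()'s result, or None if timeout expires first."""
    time.sleep(start)                    # optional initial grace period
    end_time = time.time() + timeout
    while time.time() < end_time:
        try:
            return action()              # success: stop retrying
        except Exception:
            time.sleep(step)             # failure: wait and try again
    return None
```

In the patch the caller (not the loop) decides how to report failure, raising error.TestFail when no session was obtained.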
+
+def reboot(vm, session, method="shell", sleep_before_reset=10, nic_index=0,
+           timeout=240):
+    """
+    Reboot the VM and wait for it to come back up by trying to log in until
+    timeout expires.
+
+    @param vm: VM object.
+    @param session: A shell session object.
+    @param method: Reboot method.  Can be "shell" (send a shell reboot
+            command) or "system_reset" (send a system_reset monitor command).
+    @param sleep_before_reset: Seconds to sleep before sending the
+            system_reset command.
+    @param nic_index: Index of NIC to access in the VM, when logging in after
+            rebooting.
+    @param timeout: Time to wait before giving up (after rebooting).
+    @return: A new shell session object.
+    """
+    if method == "shell":
+        # Send a reboot command to the guest's shell
+        session.sendline(vm.get_params().get("reboot_command"))
+        logging.info("Reboot command sent. Waiting for guest to go down...")
+    elif method == "system_reset":
+        # Sleep for a while before sending the command
+        time.sleep(sleep_before_reset)
+        # Clear the event list of all QMP monitors
+        monitors = [m for m in vm.monitors if m.protocol == "qmp"]
+        for m in monitors:
+            m.clear_events()
+        # Send a system_reset monitor command
+        vm.monitor.cmd("system_reset")
+        logging.info("Monitor command system_reset sent. Waiting for guest to "
+                     "go down...")
+        # Look for RESET QMP events
+        time.sleep(1)
+        for m in monitors:
+            if not m.get_event("RESET"):
+                raise error.TestFail("RESET QMP event not received after "
+                                     "system_reset (monitor '%s')" % m.name)
+            else:
+                logging.info("RESET QMP event received")
+    else:
+        raise error.TestError("Unknown reboot method: %s" % method)
+
+    # Wait for the session to become unresponsive and close it
+    if not virt_utils.wait_for(lambda: not session.is_responsive(timeout=30),
+                              120, 0, 1):
+        raise error.TestFail("Guest refuses to go down")
+    session.close()
+
+    # Try logging into the guest until timeout expires
+    logging.info("Guest is down. Waiting for it to go up again, timeout %ds",
+                 timeout)
+    session = vm.wait_for_login(nic_index, timeout=timeout)
+    logging.info("Guest is up again")
+    return session
+
+
+def migrate(vm, env=None, mig_timeout=3600, mig_protocol="tcp",
+            mig_cancel=False, offline=False, stable_check=False,
+            clean=False, save_path=None, dest_host='localhost', mig_port=None):
+    """
+    Migrate a VM locally and re-register it in the environment.
+
+    @param vm: The VM to migrate.
+    @param env: The environment dictionary.  If omitted, the migrated VM will
+            not be registered.
+    @param mig_timeout: timeout value for migration.
+    @param mig_protocol: migration protocol
+    @param mig_cancel: Test migrate_cancel or not when protocol is tcp.
+    @param dest_host: Destination host (defaults to 'localhost').
+    @param mig_port: Port that will be used for migration.
+    @return: The post-migration VM, in case of same host migration, True in
+            case of multi-host migration.
+    """
+    def mig_finished():
+        o = vm.monitor.info("migrate")
+        if isinstance(o, str):
+            return "status: active" not in o
+        else:
+            return o.get("status") != "active"
+
+    def mig_succeeded():
+        o = vm.monitor.info("migrate")
+        if isinstance(o, str):
+            return "status: completed" in o
+        else:
+            return o.get("status") == "completed"
+
+    def mig_failed():
+        o = vm.monitor.info("migrate")
+        if isinstance(o, str):
+            return "status: failed" in o
+        else:
+            return o.get("status") == "failed"
+
+    def mig_cancelled():
+        o = vm.monitor.info("migrate")
+        if isinstance(o, str):
+            return ("Migration status: cancelled" in o or
+                    "Migration status: canceled" in o)
+        else:
+            return (o.get("status") == "cancelled" or
+                    o.get("status") == "canceled")
+
+    def wait_for_migration():
+        if not virt_utils.wait_for(mig_finished, mig_timeout, 2, 2,
+                                  "Waiting for migration to finish..."):
+            raise error.TestFail("Timeout expired while waiting for migration "
+                                 "to finish")
+
+    if dest_host == 'localhost':
+        dest_vm = vm.clone()
+
+    if (dest_host == 'localhost') and stable_check:
+        # Pause the dest vm after creation
+        dest_vm.params['extra_params'] = (dest_vm.params.get('extra_params','')
+                                          + ' -S')
+
+    if dest_host == 'localhost':
+        dest_vm.create(migration_mode=mig_protocol, mac_source=vm)
+
+    try:
+        try:
+            if mig_protocol == "tcp":
+                if dest_host == 'localhost':
+                    uri = "tcp:localhost:%d" % dest_vm.migration_port
+                else:
+                    uri = 'tcp:%s:%d' % (dest_host, mig_port)
+            elif mig_protocol == "unix":
+                uri = "unix:%s" % dest_vm.migration_file
+            elif mig_protocol == "exec":
+                uri = '"exec:nc localhost %s"' % dest_vm.migration_port
+
+            if offline:
+                vm.monitor.cmd("stop")
+            vm.monitor.migrate(uri)
+
+            if mig_cancel:
+                time.sleep(2)
+                vm.monitor.cmd("migrate_cancel")
+                if not virt_utils.wait_for(mig_cancelled, 60, 2, 2,
+                                          "Waiting for migration "
+                                          "cancellation"):
+                    raise error.TestFail("Failed to cancel migration")
+                if offline:
+                    vm.monitor.cmd("cont")
+                if dest_host == 'localhost':
+                    dest_vm.destroy(gracefully=False)
+                return vm
+            else:
+                wait_for_migration()
+                if (dest_host == 'localhost') and stable_check:
+                    save_path = save_path or "/tmp"
+                    save1 = os.path.join(save_path, "src")
+                    save2 = os.path.join(save_path, "dst")
+
+                    vm.save_to_file(save1)
+                    dest_vm.save_to_file(save2)
+
+                    # Fail if we see deltas
+                    md5_save1 = utils.hash_file(save1)
+                    md5_save2 = utils.hash_file(save2)
+                    if md5_save1 != md5_save2:
+                        raise error.TestFail("Mismatch of VM state before "
+                                             "and after migration")
+
+                if (dest_host == 'localhost') and offline:
+                    dest_vm.monitor.cmd("cont")
+        except:
+            if dest_host == 'localhost':
+                dest_vm.destroy()
+            raise
+
+    finally:
+        if (dest_host == 'localhost') and stable_check and clean:
+            logging.debug("Cleaning the state files")
+            if os.path.isfile(save1):
+                os.remove(save1)
+            if os.path.isfile(save2):
+                os.remove(save2)
+
+    # Report migration status
+    if mig_succeeded():
+        logging.info("Migration finished successfully")
+    elif mig_failed():
+        raise error.TestFail("Migration failed")
+    else:
+        raise error.TestFail("Migration ended with unknown status")
+
+    if dest_host == 'localhost':
+        if "paused" in dest_vm.monitor.info("status"):
+            logging.debug("Destination VM is paused, resuming it...")
+            dest_vm.monitor.cmd("cont")
+
+    # Kill the source VM
+    vm.destroy(gracefully=False)
+
+    # Replace the source VM with the new cloned VM
+    if (dest_host == 'localhost') and (env is not None):
+        env.register_vm(vm.name, dest_vm)
+
+    # Return the new cloned VM
+    if dest_host == 'localhost':
+        return dest_vm
+    else:
+        return vm
+
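The four status helpers in migrate() differ only in the status string they check; the shared shape, handling both human-monitor text and QMP dict output of "info migrate", can be sketched as one hypothetical helper:

```python
# Sketch of the dual-format status check migrate() relies on: older
# human monitors return "info migrate" as text, QMP monitors as a dict.
def mig_status_is(output, wanted):
    """True if the migration status in the monitor output matches wanted."""
    if isinstance(output, str):
        return ("status: %s" % wanted) in output   # text monitor
    return output.get("status") == wanted          # QMP dict
```

mig_finished() is the odd one out: it checks that the status is *not* "active", which the same helper expresses as `not mig_status_is(o, "active")`.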
+
+def stop_windows_service(session, service, timeout=120):
+    """
+    Stop a Windows service using sc.
+    If the service is already stopped or is not installed, do nothing.
+
+    @param session: A shell session object.
+    @param service: The name of the service
+    @param timeout: Time duration to wait for service to stop
+    @raise error.TestError: Raised if the service can't be stopped
+    """
+    end_time = time.time() + timeout
+    while time.time() < end_time:
+        o = session.cmd_output("sc stop %s" % service, timeout=60)
+        # FAILED 1060 means the service isn't installed.
+        # FAILED 1062 means the service hasn't been started.
+        if re.search(r"\bFAILED (1060|1062)\b", o, re.I):
+            break
+        time.sleep(1)
+    else:
+        raise error.TestError("Could not stop service '%s'" % service)
+
+
+def start_windows_service(session, service, timeout=120):
+    """
+    Start a Windows service using sc.
+    If the service is already running, do nothing.
+    If the service isn't installed, fail.
+
+    @param session: A shell session object.
+    @param service: The name of the service
+    @param timeout: Time duration to wait for service to start
+    @raise error.TestError: Raised if the service can't be started
+    """
+    end_time = time.time() + timeout
+    while time.time() < end_time:
+        o = session.cmd_output("sc start %s" % service, timeout=60)
+        # FAILED 1060 means the service isn't installed.
+        if re.search(r"\bFAILED 1060\b", o, re.I):
+            raise error.TestError("Could not start service '%s' "
+                                  "(service not installed)" % service)
+        # FAILED 1056 means the service is already running.
+        if re.search(r"\bFAILED 1056\b", o, re.I):
+            break
+        time.sleep(1)
+    else:
+        raise error.TestError("Could not start service '%s'" % service)
+
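Both service helpers above key off the Windows service controller's FAILED codes (1060 = service not installed, 1062 = service not started, 1056 = service already running). A small hypothetical extractor shows the regex shape used:

```python
import re

# Sketch of the "sc" output checks used in stop/start_windows_service().
# Codes: 1060 = not installed, 1062 = not started, 1056 = already running.
def sc_failure_code(output):
    """Return the FAILED code in sc output as an int, or None."""
    m = re.search(r"\bFAILED (\d+)\b", output, re.I)
    return int(m.group(1)) if m else None
```

With this, "already in the wanted state" codes can be treated as success while 1060 stays a hard error for start_windows_service().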
+
+def get_time(session, time_command, time_filter_re, time_format):
+    """
+    Return the host time and guest time.  If the guest time cannot be fetched
+    a TestError exception is raised.
+
+    Note that the shell session should be ready to receive commands
+    (i.e. should "display" a command prompt and should be done with all
+    previous commands).
+
+    @param session: A shell session.
+    @param time_command: Command to issue to get the current guest time.
+    @param time_filter_re: Regex filter to apply on the output of
+            time_command in order to get the current time.
+    @param time_format: Format string to pass to time.strptime() with the
+            result of the regex filter.
+    @return: A tuple containing the host time and guest time.
+    """
+    if not re.search("ntpdate|w32tm", time_command):
+        host_time = time.time()
+        s = session.cmd_output(time_command)
+
+        try:
+            s = re.findall(time_filter_re, s)[0]
+        except IndexError:
+            logging.debug("The time string from guest is:\n%s", s)
+            raise error.TestError("The time string from guest is unexpected.")
+        except Exception, e:
+            logging.debug("(time_filter_re, time_string): (%s, %s)",
+                          time_filter_re, s)
+            raise e
+
+        guest_time = time.mktime(time.strptime(s, time_format))
+    else:
+        o = session.cmd(time_command)
+        if re.match('ntpdate', time_command):
+            offset = re.findall('offset (.*) sec', o)[0]
+            host_main, host_mantissa = re.findall(time_filter_re, o)[0]
+            host_time = (time.mktime(time.strptime(host_main, time_format)) +
+                         float("0.%s" % host_mantissa))
+            guest_time = host_time + float(offset)
+        else:
+            guest_time = re.findall(time_filter_re, o)[0]
+            offset = re.findall("o:(.*)s", o)[0]
+            if re.match('PM', guest_time):
+                hour = re.findall(r'\d+ (\d+):', guest_time)[0]
+                hour = str(int(hour) + 12)
+                guest_time = re.sub(r'(\d+\s)\d+:', r'\g<1>%s:' % hour,
+                                    guest_time)[:-3]
+            else:
+                guest_time = guest_time[:-3]
+            guest_time = time.mktime(time.strptime(guest_time, time_format))
+            host_time = guest_time - float(offset)
+
+    return (host_time, guest_time)
+
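The core conversion get_time() performs, turning a filtered time string into epoch seconds via time.strptime()/time.mktime(), can be sketched in isolation (the sample string and format below are assumptions for the example, not values from the patch):

```python
import time

# Sketch of the string-to-epoch conversion used by get_time().
def parse_guest_time(time_string, time_format):
    """Convert a time string to epoch seconds (local time)."""
    return time.mktime(time.strptime(time_string, time_format))
```

Note that mktime() interprets the struct_time in the host's local timezone, which is why get_time() compares guest and host readings taken in the same run rather than absolute values.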
+
+def get_memory_info(lvms):
+    """
+    Get memory information from host and guests in format:
+    Host: memfree = XXXM; Guests memsh = {XXX,XXX,...}
+
+    @params lvms: List of VM objects
+    @return: String with memory info report
+    """
+    if not isinstance(lvms, list):
+        raise error.TestError("Invalid list passed to get_memory_info: %s" %
+                              lvms)
+
+    try:
+        meminfo = "Host: memfree = "
+        meminfo += str(int(utils.freememtotal()) / 1024) + "M; "
+        meminfo += "swapfree = "
+        mf = int(utils.read_from_meminfo("SwapFree")) / 1024
+        meminfo += str(mf) + "M; "
+    except Exception, e:
+        raise error.TestFail("Could not fetch host free memory info, "
+                             "reason: %s" % e)
+
+    meminfo += "Guests memsh = {"
+    for vm in lvms:
+        shm = vm.get_shared_meminfo()
+        if shm is None:
+            raise error.TestError("Could not get shared meminfo from "
+                                  "VM %s" % vm)
+        meminfo += "%dM; " % shm
+    meminfo = meminfo[0:-2] + "}"
+
+    return meminfo
+
+
+def run_autotest(vm, session, control_path, timeout, outputdir, params):
+    """
+    Run an autotest control file inside a guest (linux only utility).
+
+    @param vm: VM object.
+    @param session: A shell session on the VM provided.
+    @param control_path: A path to an autotest control file.
+    @param timeout: Timeout under which the autotest control file must complete.
+    @param outputdir: Path on host where we should copy the guest autotest
+            results to.
+
+    The following parameter is used by the migration test:
+    @param params: Test params used in the migration test.
+    """
+    def copy_if_hash_differs(vm, local_path, remote_path):
+        """
+        Copy a file to a guest if it doesn't exist or if its MD5sum differs.
+
+        @param vm: VM object.
+        @param local_path: Local path.
+        @param remote_path: Remote path.
+        """
+        local_hash = utils.hash_file(local_path)
+        basename = os.path.basename(local_path)
+        output = session.cmd_output("md5sum %s" % remote_path)
+        if "such file" in output:
+            remote_hash = "0"
+        elif output:
+            remote_hash = output.split()[0]
+        else:
+            logging.warning("MD5 check for remote path %s returned no output.",
+                            remote_path)
+            # Let's be a little more lenient here and see if it wasn't a
+            # temporary problem
+            remote_hash = "0"
+        if remote_hash != local_hash:
+            logging.debug("Copying %s to guest", basename)
+            vm.copy_files_to(local_path, remote_path)
+
+
+    def extract(vm, remote_path, dest_dir="."):
+        """
+        Extract a .tar.bz2 file on the guest.
+
+        @param vm: VM object
+        @param remote_path: Remote file path
+        @param dest_dir: Destination dir for the contents
+        """
+        basename = os.path.basename(remote_path)
+        logging.info("Extracting %s...", basename)
+        e_cmd = "tar xjvf %s -C %s" % (remote_path, dest_dir)
+        session.cmd(e_cmd, timeout=120)
+
+
+    def get_results():
+        """
+        Copy autotest results present on the guest back to the host.
+        """
+        logging.info("Trying to copy autotest results from guest")
+        guest_results_dir = os.path.join(outputdir, "guest_autotest_results")
+        if not os.path.exists(guest_results_dir):
+            os.mkdir(guest_results_dir)
+        vm.copy_files_from("%s/results/default/*" % autotest_path,
+                           guest_results_dir)
+
+
+    def get_results_summary():
+        """
+        Get the status of the tests that were executed on the host and close
+        the session where autotest was being executed.
+        """
+        output = session.cmd_output("cat results/*/status")
+        try:
+            results = scan_results.parse_results(output)
+            # Report test results
+            logging.info("Results (test, status, duration, info):")
+            for result in results:
+                logging.info(str(result))
+            session.close()
+            return results
+        except Exception, e:
+            logging.error("Error processing guest autotest results: %s", e)
+            return None
+
+
+    if not os.path.isfile(control_path):
+        raise error.TestError("Invalid path to autotest control file: %s" %
+                              control_path)
+
+    migrate_background = params.get("migrate_background") == "yes"
+    if migrate_background:
+        mig_timeout = float(params.get("mig_timeout", "3600"))
+        mig_protocol = params.get("migration_protocol", "tcp")
+
+    compressed_autotest_path = "/tmp/autotest.tar.bz2"
+
+    # To avoid problems, let's make the test use the current AUTODIR
+    # (autotest client path) location
+    autotest_path = os.environ['AUTODIR']
+
+    # tar the contents of bindir/autotest
+    cmd = "tar cvjf %s %s/*" % (compressed_autotest_path, autotest_path)
+    # Until we have nested virtualization, we don't need the kvm test :)
+    cmd += " --exclude=%s/tests/kvm" % autotest_path
+    cmd += " --exclude=%s/results" % autotest_path
+    cmd += " --exclude=%s/tmp" % autotest_path
+    cmd += " --exclude=%s/control*" % autotest_path
+    cmd += " --exclude=*.pyc"
+    cmd += " --exclude=*.svn"
+    cmd += " --exclude=*.git"
+    utils.run(cmd)
+
+    # Copy autotest.tar.bz2
+    copy_if_hash_differs(vm, compressed_autotest_path, compressed_autotest_path)
+
+    # Extract autotest.tar.bz2
+    extract(vm, compressed_autotest_path, "/")
+
+    vm.copy_files_to(control_path, os.path.join(autotest_path, 'control'))
+
+    # Run the test
+    logging.info("Running autotest control file %s on guest, timeout %ss",
+                 os.path.basename(control_path), timeout)
+    session.cmd("cd %s" % autotest_path)
+    try:
+        session.cmd("rm -f control.state")
+        session.cmd("rm -rf results/*")
+    except aexpect.ShellError:
+        pass
+    try:
+        bg = None
+        try:
+            logging.info("---------------- Test output ----------------")
+            if migrate_background:
+
+                bg = virt_utils.Thread(session.cmd_output,
+                                      kwargs={'cmd': "bin/autotest control",
+                                              'timeout': timeout,
+                                              'print_func': logging.info})
+
+                bg.start()
+
+                while bg.is_alive():
+                    logging.info("Test is not finished, starting a round "
+                                 "of migration...")
+                    vm.migrate(timeout=mig_timeout, protocol=mig_protocol)
+            else:
+                session.cmd_output("bin/autotest control", timeout=timeout,
+                                   print_func=logging.info)
+        finally:
+            logging.info("------------- End of test output ------------")
+            if migrate_background and bg:
+                bg.join()
+    except aexpect.ShellTimeoutError:
+        if vm.is_alive():
+            get_results()
+            get_results_summary()
+            raise error.TestError("Timeout elapsed while waiting for job to "
+                                  "complete")
+        else:
+            raise error.TestError("Autotest job on guest failed "
+                                  "(VM terminated during job)")
+    except aexpect.ShellProcessTerminatedError:
+        get_results()
+        raise error.TestError("Autotest job on guest failed "
+                              "(Remote session terminated during job)")
+
+    results = get_results_summary()
+    get_results()
+
+    # Make a list of FAIL/ERROR/ABORT results (make sure FAIL results appear
+    # before ERROR results, and ERROR results appear before ABORT results)
+    bad_results = [r[0] for r in results if r[1] == "FAIL"]
+    bad_results += [r[0] for r in results if r[1] == "ERROR"]
+    bad_results += [r[0] for r in results if r[1] == "ABORT"]
+
+    # Fail the test if necessary
+    if not results:
+        raise error.TestFail("Autotest control file run did not produce any "
+                             "recognizable results")
+    if bad_results:
+        if len(bad_results) == 1:
+            e_msg = ("Test %s failed during control file execution" %
+                     bad_results[0])
+        else:
+            e_msg = ("Tests %s failed during control file execution" %
+                     " ".join(bad_results))
+        raise error.TestFail(e_msg)
+
+
+def get_loss_ratio(output):
+    """
+    Get the packet loss ratio from the output of ping.
+
+    @param output: Ping output.
+    """
+    try:
+        return int(re.findall('(\d+)% packet loss', output)[0])
+    except IndexError:
+        logging.debug(output)
+        return -1
+
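A quick standalone illustration of what `get_loss_ratio()` extracts (the ping summary line below is made up):

```python
import re

# A made-up ping summary line showing what the regex above extracts:
sample = "4 packets transmitted, 3 received, 25% packet loss, time 3004ms"
loss = int(re.findall(r'(\d+)% packet loss', sample)[0])
print(loss)  # 25
```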
+
+def raw_ping(command, timeout, session, output_func):
+    """
+    Low-level ping command execution.
+
+    @param command: Ping command.
+    @param timeout: Timeout of the ping command.
+    @param session: Session in which to execute the ping command; if None,
+            the command is executed locally.
+    @param output_func: Function used to log the output of the ping command.
+    @return: A (status, output) tuple.
+    """
+    if session is None:
+        process = aexpect.run_bg(command, output_func=output_func,
+                                 timeout=timeout)
+
+        # Send SIGINT to stop the ping process if it's still running after
+        # the timeout. Since ping catches SIGINT, we can still get the
+        # packet loss ratio even when the timeout expires.
+        if process.is_alive():
+            virt_utils.kill_process_tree(process.get_pid(), signal.SIGINT)
+
+        status = process.get_status()
+        output = process.get_output()
+
+        process.close()
+        return status, output
+    else:
+        output = ""
+        try:
+            output = session.cmd_output(command, timeout=timeout,
+                                        print_func=output_func)
+        except aexpect.ShellTimeoutError:
+            # Send ctrl+c (SIGINT) through ssh session
+            session.send("\003")
+            try:
+                output2 = session.read_up_to_prompt(print_func=output_func)
+                output += output2
+            except aexpect.ExpectTimeoutError, e:
+                output += e.output
+                # We also need to use this session to query the return value
+                session.send("\003")
+
+        session.sendline(session.status_test_command)
+        try:
+            o2 = session.read_up_to_prompt()
+        except aexpect.ExpectError:
+            status = -1
+        else:
+            try:
+                status = int(re.findall("\d+", o2)[0])
+            except (IndexError, ValueError):
+                status = -1
+
+        return status, output
+
+
+def ping(dest=None, count=None, interval=None, interface=None,
+         packetsize=None, ttl=None, hint=None, adaptive=False,
+         broadcast=False, flood=False, timeout=0,
+         output_func=logging.debug, session=None):
+    """
+    Wrapper of ping.
+
+    @param dest: Destination address.
+    @param count: Count of icmp packet.
+    @param interval: Interval of two icmp echo request.
+    @param interface: Specified interface of the source address.
+    @param packetsize: Packet size of icmp.
+    @param ttl: IP time to live.
+    @param hint: Path mtu discovery hint.
+    @param adaptive: Adaptive ping flag.
+    @param broadcast: Broadcast ping flag.
+    @param flood: Flood ping flag.
+    @param timeout: Timeout for the ping command.
+    @param output_func: Function used to log the result of ping.
+    @param session: Session in which to execute the ping command; if None,
+            the command is executed locally.
+    """
+    if dest is not None:
+        command = "ping %s " % dest
+    else:
+        command = "ping localhost "
+    if count is not None:
+        command += " -c %s" % count
+    if interval is not None:
+        command += " -i %s" % interval
+    if interface is not None:
+        command += " -I %s" % interface
+    if packetsize is not None:
+        command += " -s %s" % packetsize
+    if ttl is not None:
+        command += " -t %s" % ttl
+    if hint is not None:
+        command += " -M %s" % hint
+    if adaptive:
+        command += " -A"
+    if broadcast:
+        command += " -b"
+    if flood:
+        command += " -f -q"
+        output_func = None
+
+    return raw_ping(command, timeout, session, output_func)
+
+
+def get_linux_ifname(session, mac_address):
+    """
+    Get the interface name through the mac address.
+
+    @param session: Session to the virtual machine.
+    @param mac_address: The MAC address of the NIC.
+    """
+
+    output = session.cmd_output("ifconfig -a")
+
+    try:
+        ethname = re.findall("(\w+)\s+Link.*%s" % mac_address, output,
+                             re.IGNORECASE)[0]
+        return ethname
+    except IndexError:
+        return None
diff --git a/client/virt/virt_utils.py b/client/virt/virt_utils.py
new file mode 100644
index 0000000..26438df
--- /dev/null
+++ b/client/virt/virt_utils.py
@@ -0,0 +1,1760 @@
+"""
+Virtualization test utility functions.
+
+@copyright: 2008-2009 Red Hat Inc.
+"""
+
+import time, string, random, socket, os, signal, re, logging, commands, cPickle
+import fcntl, shelve, ConfigParser, threading, sys, UserDict
+from autotest_lib.client.bin import utils, os_dep
+from autotest_lib.client.common_lib import error, logging_config
+import rss_client, virt_utils, aexpect
+
+try:
+    import koji
+    KOJI_INSTALLED = True
+except ImportError:
+    KOJI_INSTALLED = False
+
+
+def _lock_file(filename):
+    f = open(filename, "w")
+    fcntl.lockf(f, fcntl.LOCK_EX)
+    return f
+
+
+def _unlock_file(f):
+    fcntl.lockf(f, fcntl.LOCK_UN)
+    f.close()
+
+
+def is_vm(obj):
+    """
+    Tests whether a given object is a VM object.
+
+    @param obj: Python object.
+    """
+    return obj.__class__.__name__ == "VM"
+
+
+class Env(UserDict.IterableUserDict):
+    """
+    A dict-like object containing global objects used by tests.
+    """
+    def __init__(self, filename=None, version=0):
+        """
+        Create an empty Env object or load an existing one from a file.
+
+        If the version recorded in the file is lower than version, or if some
+        error occurs during unpickling, or if filename is not supplied,
+        create an empty Env object.
+
+        @param filename: Path to an env file.
+        @param version: Required env version (int).
+        """
+        UserDict.IterableUserDict.__init__(self)
+        empty = {"version": version}
+        if filename:
+            self._filename = filename
+            try:
+                f = open(filename, "r")
+                env = cPickle.load(f)
+                f.close()
+                if env.get("version", 0) >= version:
+                    self.data = env
+                else:
+                    logging.warn("Incompatible env file found. Not using it.")
+                    self.data = empty
+            # Almost any exception can be raised during unpickling, so let's
+            # catch them all
+            except Exception, e:
+                logging.warn(e)
+                self.data = empty
+        else:
+            self.data = empty
+
+
+    def save(self, filename=None):
+        """
+        Pickle the contents of the Env object into a file.
+
+        @param filename: Filename to pickle the dict into.  If not supplied,
+                use the filename from which the dict was loaded.
+        """
+        filename = filename or self._filename
+        f = open(filename, "w")
+        cPickle.dump(self.data, f)
+        f.close()
+
+
+    def get_all_vms(self):
+        """
+        Return a list of all VM objects in this Env object.
+        """
+        return [o for o in self.values() if is_vm(o)]
+
+
+    def get_vm(self, name):
+        """
+        Return a VM object by its name.
+
+        @param name: VM name.
+        """
+        return self.get("vm__%s" % name)
+
+
+    def register_vm(self, name, vm):
+        """
+        Register a VM in this Env object.
+
+        @param name: VM name.
+        @param vm: VM object.
+        """
+        self["vm__%s" % name] = vm
+
+
+    def unregister_vm(self, name):
+        """
+        Remove a given VM.
+
+        @param name: VM name.
+        """
+        del self["vm__%s" % name]
+
+
+    def register_installer(self, installer):
+        """
+        Register an installer that was just run.
+
+        The installer will be available for other tests, so that
+        information about the installed KVM modules and qemu-kvm can be used by
+        them.
+        """
+        self['last_installer'] = installer
+
+
+    def previous_installer(self):
+        """
+        Return the last installer that was registered
+        """
+        return self.get('last_installer')
+
+
+class Params(UserDict.IterableUserDict):
+    """
+    A dict-like object passed to every test.
+    """
+    def objects(self, key):
+        """
+        Return the names of objects defined using a given key.
+
+        @param key: The name of the key whose value lists the objects
+                (e.g. 'nics').
+        """
+        return self.get(key, "").split()
+
+
+    def object_params(self, obj_name):
+        """
+        Return a dict-like object containing the parameters of an individual
+        object.
+
+        This method behaves as follows: the suffix '_' + obj_name is removed
+        from all key names that have it.  Other key names are left unchanged.
+        The values of keys with the suffix overwrite the values of their
+        suffixless versions.
+
+        @param obj_name: The name of the object (objects are listed by the
+                objects() method).
+        """
+        suffix = "_" + obj_name
+        new_dict = self.copy()
+        for key in self:
+            if key.endswith(suffix):
+                new_key = key[:-len(suffix)]
+                new_dict[new_key] = self[key]
+        return new_dict
+
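A standalone sketch of the suffix-override rule in `object_params()` (the parameter names below are made up): keys ending in `'_' + obj_name` override their suffixless versions, and all other keys pass through unchanged.

```python
# Illustrative sketch, not the class method itself:
def object_params_sketch(params, obj_name):
    suffix = "_" + obj_name
    new_dict = dict(params)
    for key in params:
        if key.endswith(suffix):
            # The object-specific value overrides the generic one
            new_dict[key[:-len(suffix)]] = params[key]
    return new_dict

params = {"mem": "512", "mem_vm2": "1024", "nics": "nic1 nic2"}
print(object_params_sketch(params, "vm2")["mem"])  # 1024
```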
+
+# Functions related to MAC/IP addresses
+
+def _open_mac_pool(lock_mode):
+    lock_file = open("/tmp/mac_lock", "w+")
+    fcntl.lockf(lock_file, lock_mode)
+    pool = shelve.open("/tmp/address_pool")
+    return pool, lock_file
+
+
+def _close_mac_pool(pool, lock_file):
+    pool.close()
+    fcntl.lockf(lock_file, fcntl.LOCK_UN)
+    lock_file.close()
+
+
+def _generate_mac_address_prefix(mac_pool):
+    """
+    Generate a random MAC address prefix and add it to the MAC pool dictionary.
+    If there's a MAC prefix there already, do not update the MAC pool and just
+    return what's in there. By convention we will set KVM autotest MAC
+    addresses to start with 0x9a.
+
+    @param mac_pool: The MAC address pool object.
+    @return: The MAC address prefix.
+    """
+    if "prefix" in mac_pool:
+        prefix = mac_pool["prefix"]
+        logging.debug("Used previously generated MAC address prefix for this "
+                      "host: %s", prefix)
+    else:
+        r = random.SystemRandom()
+        prefix = "9a:%02x:%02x:%02x:" % (r.randint(0x00, 0xff),
+                                         r.randint(0x00, 0xff),
+                                         r.randint(0x00, 0xff))
+        mac_pool["prefix"] = prefix
+        logging.debug("Generated MAC address prefix for this host: %s", prefix)
+    return prefix
+
+
+def generate_mac_address(vm_instance, nic_index):
+    """
+    Randomly generate a MAC address and add it to the MAC address pool.
+
+    Try to generate a MAC address based on a randomly generated MAC address
+    prefix and add it to a persistent dictionary.
+    key = VM instance + NIC index, value = MAC address
+    e.g. {'20100310-165222-Wt7l:0': '9a:5d:94:6a:9b:f9'}
+
+    @param vm_instance: The instance attribute of a VM.
+    @param nic_index: The index of the NIC.
+    @return: MAC address string.
+    """
+    mac_pool, lock_file = _open_mac_pool(fcntl.LOCK_EX)
+    key = "%s:%s" % (vm_instance, nic_index)
+    if key in mac_pool:
+        mac = mac_pool[key]
+    else:
+        prefix = _generate_mac_address_prefix(mac_pool)
+        r = random.SystemRandom()
+        while key not in mac_pool:
+            mac = prefix + "%02x:%02x" % (r.randint(0x00, 0xff),
+                                          r.randint(0x00, 0xff))
+            if mac in mac_pool.values():
+                continue
+            mac_pool[key] = mac
+            logging.debug("Generated MAC address for NIC %s: %s", key, mac)
+    _close_mac_pool(mac_pool, lock_file)
+    return mac
+
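A standalone sketch (with a made-up prefix and plain dict instead of the locked shelve) of the pool logic above: a fixed `9a:`-prefixed host prefix plus two random octets, retried until the address is unique within the pool.

```python
import random

# Illustrative sketch of generate_mac_address(); the real code persists
# the pool in a locked shelve shared between processes.
def generate_mac(pool, key, prefix="9a:3b:4c:5d:"):
    if key in pool:
        return pool[key]  # Reuse a previously assigned address
    r = random.SystemRandom()
    while True:
        mac = prefix + "%02x:%02x" % (r.randint(0x00, 0xff),
                                      r.randint(0x00, 0xff))
        if mac not in pool.values():  # Retry on collision
            pool[key] = mac
            return mac

pool = {}
print(generate_mac(pool, "20100310-165222-Wt7l:0"))
```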
+
+def free_mac_address(vm_instance, nic_index):
+    """
+    Remove a MAC address from the address pool.
+
+    @param vm_instance: The instance attribute of a VM.
+    @param nic_index: The index of the NIC.
+    """
+    mac_pool, lock_file = _open_mac_pool(fcntl.LOCK_EX)
+    key = "%s:%s" % (vm_instance, nic_index)
+    if key in mac_pool:
+        logging.debug("Freeing MAC address for NIC %s: %s", key, mac_pool[key])
+        del mac_pool[key]
+    _close_mac_pool(mac_pool, lock_file)
+
+
+def set_mac_address(vm_instance, nic_index, mac):
+    """
+    Set a MAC address in the pool.
+
+    @param vm_instance: The instance attribute of a VM.
+    @param nic_index: The index of the NIC.
+    """
+    mac_pool, lock_file = _open_mac_pool(fcntl.LOCK_EX)
+    mac_pool["%s:%s" % (vm_instance, nic_index)] = mac
+    _close_mac_pool(mac_pool, lock_file)
+
+
+def get_mac_address(vm_instance, nic_index):
+    """
+    Return a MAC address from the pool.
+
+    @param vm_instance: The instance attribute of a VM.
+    @param nic_index: The index of the NIC.
+    @return: MAC address string.
+    """
+    mac_pool, lock_file = _open_mac_pool(fcntl.LOCK_SH)
+    mac = mac_pool.get("%s:%s" % (vm_instance, nic_index))
+    _close_mac_pool(mac_pool, lock_file)
+    return mac
+
+
+def verify_ip_address_ownership(ip, macs, timeout=10.0):
+    """
+    Use arping and the ARP cache to make sure a given IP address belongs to one
+    of the given MAC addresses.
+
+    @param ip: An IP address.
+    @param macs: A list or tuple of MAC addresses.
+    @return: True iff ip is assigned to a MAC address in macs.
+    """
+    # Compile a regex that matches the given IP address and any of the given
+    # MAC addresses
+    mac_regex = "|".join("(%s)" % mac for mac in macs)
+    regex = re.compile(r"\b%s\b.*\b(%s)\b" % (ip, mac_regex), re.IGNORECASE)
+
+    # Check the ARP cache
+    o = commands.getoutput("%s -n" % find_command("arp"))
+    if regex.search(o):
+        return True
+
+    # Get the name of the bridge device for arping
+    o = commands.getoutput("%s route get %s" % (find_command("ip"), ip))
+    dev = re.findall("dev\s+\S+", o, re.IGNORECASE)
+    if not dev:
+        return False
+    dev = dev[0].split()[-1]
+
+    # Send an ARP request
+    o = commands.getoutput("%s -f -c 3 -I %s %s" %
+                           (find_command("arping"), dev, ip))
+    return bool(regex.search(o))
+
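A sketch of the matching logic above against a made-up `arp -n` line: the compiled regex requires the IP and one of the listed MACs to appear, word-bounded, on the same line, case-insensitively.

```python
import re

# Hypothetical addresses and ARP cache line, illustrating the regex built
# in verify_ip_address_ownership() above:
ip = "192.168.122.10"
macs = ["9a:11:22:33:44:55", "9a:aa:bb:cc:dd:ee"]
mac_regex = "|".join("(%s)" % m for m in macs)
regex = re.compile(r"\b%s\b.*\b(%s)\b" % (ip, mac_regex), re.IGNORECASE)

arp_line = "192.168.122.10  ether  9a:AA:BB:CC:DD:EE  C  virbr0"
print(bool(regex.search(arp_line)))  # True
```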
+
+# Utility functions for dealing with external processes
+
+def find_command(cmd):
+    """
+    Search the common system binary paths for a command.
+
+    @param cmd: Command name.
+    @return: Full path to the command.
+    @raise ValueError: If the command cannot be found.
+    """
+    for dirname in ["/usr/local/sbin", "/usr/local/bin",
+                    "/usr/sbin", "/usr/bin", "/sbin", "/bin"]:
+        cmd_path = os.path.join(dirname, cmd)
+        if os.path.exists(cmd_path):
+            return cmd_path
+    raise ValueError('Missing command: %s' % cmd)
+
+
+def pid_exists(pid):
+    """
+    Return True if a given PID exists.
+
+    @param pid: Process ID number.
+    """
+    try:
+        os.kill(pid, 0)
+        return True
+    except OSError:
+        return False
+
+
+def safe_kill(pid, signal):
+    """
+    Attempt to send a signal to a given process that may or may not exist.
+
+    @param pid: Process ID number.
+    @param signal: Signal number.
+    """
+    try:
+        os.kill(pid, signal)
+        return True
+    except OSError:
+        return False
+
+
+def kill_process_tree(pid, sig=signal.SIGKILL):
+    """Signal a process and all of its children.
+
+    If the process does not exist -- return.
+
+    @param pid: The pid of the process to signal.
+    @param sig: The signal to send to the processes.
+    """
+    if not safe_kill(pid, signal.SIGSTOP):
+        return
+    children = commands.getoutput("ps --ppid=%d -o pid=" % pid).split()
+    for child in children:
+        kill_process_tree(int(child), sig)
+    safe_kill(pid, sig)
+    safe_kill(pid, signal.SIGCONT)
+
+
+def get_latest_kvm_release_tag(release_listing):
+    """
+    Fetches the latest release tag for KVM.
+
+    @param release_listing: URL that contains a list of the SourceForge
+            KVM project files.
+    """
+    try:
+        release_page = utils.urlopen(release_listing)
+        data = release_page.read()
+        release_page.close()
+        rx = re.compile(r"kvm-(\d+)\.tar\.gz", re.IGNORECASE)
+        matches = rx.findall(data)
+        # In all regexp matches to something that looks like a release tag,
+        # get the largest integer. That will be our latest release tag.
+        latest_tag = max(int(x) for x in matches)
+        return str(latest_tag)
+    except Exception, e:
+        message = "Could not fetch latest KVM release tag: %s" % str(e)
+        logging.error(message)
+        raise error.TestError(message)
+
+
+def get_git_branch(repository, branch, srcdir, commit=None, lbranch=None):
+    """
+    Retrieves a given git code repository.
+
+    @param repository: Git repository URL.
+    @param branch: Remote branch to fetch.
+    @param srcdir: Destination directory for the checkout.
+    @param commit: Specific commit to check out (optional).
+    @param lbranch: Local branch name (defaults to the remote branch name).
+    """
+    logging.info("Fetching git [REP '%s' BRANCH '%s' COMMIT '%s'] -> %s",
+                 repository, branch, commit, srcdir)
+    if not os.path.exists(srcdir):
+        os.makedirs(srcdir)
+    os.chdir(srcdir)
+
+    if os.path.exists(".git"):
+        utils.system("git reset --hard")
+    else:
+        utils.system("git init")
+
+    if not lbranch:
+        lbranch = branch
+
+    utils.system("git fetch -q -f -u -t %s %s:%s" %
+                 (repository, branch, lbranch))
+    utils.system("git checkout %s" % lbranch)
+    if commit:
+        utils.system("git checkout %s" % commit)
+
+    h = utils.system_output('git log --pretty=format:"%H" -1')
+    try:
+        desc = "tag %s" % utils.system_output("git describe")
+    except error.CmdError:
+        desc = "no tag found"
+
+    logging.info("Commit hash for %s is %s (%s)", repository, h.strip(), desc)
+    return srcdir
+
+
+def check_kvm_source_dir(source_dir):
+    """
+    Inspects the kvm source directory and verifies its layout. On some
+    occasions the build may depend on the source directory layout.
+    The reason why the return codes are numbers is that we might have more
+    changes to the source directory layout, so it's not scalable to just use
+    strings like 'old_repo', 'new_repo' and such.
+
+    @param source_dir: Source code path that will be inspected.
+    """
+    os.chdir(source_dir)
+    has_qemu_dir = os.path.isdir('qemu')
+    has_kvm_dir = os.path.isdir('kvm')
+    if has_qemu_dir:
+        logging.debug("qemu directory detected, source dir layout 1")
+        return 1
+    elif has_kvm_dir:
+        logging.debug("kvm directory detected, source dir layout 2")
+        return 2
+    else:
+        raise error.TestError("Unknown source dir layout, cannot proceed.")
+
+
+def get_virt_info(params, test):
+    """
+    Write virt technology specific version information as test keyvals.
+
+    @param params: Dictionary with test parameters.
+    @param test: Test object.
+    """
+    vm_type = params.get('vm_type')
+    if vm_type == 'kvm':
+        # Get the KVM kernel module version and write it as a keyval
+        logging.debug("Fetching KVM module version...")
+        if os.path.exists("/dev/kvm"):
+            try:
+                kvm_version = open("/sys/module/kvm/version").read().strip()
+            except IOError:
+                kvm_version = os.uname()[2]
+        else:
+            kvm_version = "Unknown"
+            logging.debug("KVM module not loaded")
+        logging.debug("KVM version: %s", kvm_version)
+        test.write_test_keyval({"kvm_version": kvm_version})
+
+        # Get the KVM userspace version and write it as a keyval
+        logging.debug("Fetching KVM userspace version...")
+        qemu_path = virt_utils.get_path(test.bindir, params.get("qemu_binary",
+                                                               "qemu"))
+        version_line = commands.getoutput("%s -help | head -n 1" % qemu_path)
+        matches = re.findall("[Vv]ersion .*?,", version_line)
+        if matches:
+            kvm_userspace_version = " ".join(matches[0].split()[1:]).strip(",")
+        else:
+            kvm_userspace_version = "Unknown"
+            logging.debug("Could not fetch KVM userspace version")
+        logging.debug("KVM userspace version: %s", kvm_userspace_version)
+        test.write_test_keyval({"kvm_userspace_version": kvm_userspace_version})
+
+
+# Functions and classes used for logging into guests and transferring files
+
+class LoginError(Exception):
+    def __init__(self, msg, output):
+        Exception.__init__(self, msg, output)
+        self.msg = msg
+        self.output = output
+
+    def __str__(self):
+        return "%s    (output: %r)" % (self.msg, self.output)
+
+
+class LoginAuthenticationError(LoginError):
+    pass
+
+
+class LoginTimeoutError(LoginError):
+    def __init__(self, output):
+        LoginError.__init__(self, "Login timeout expired", output)
+
+
+class LoginProcessTerminatedError(LoginError):
+    def __init__(self, status, output):
+        LoginError.__init__(self, None, output)
+        self.status = status
+
+    def __str__(self):
+        return ("Client process terminated    (status: %s,    output: %r)" %
+                (self.status, self.output))
+
+
+class LoginBadClientError(LoginError):
+    def __init__(self, client):
+        LoginError.__init__(self, None, None)
+        self.client = client
+
+    def __str__(self):
+        return "Unknown remote shell client: %r" % self.client
+
+
+class SCPError(Exception):
+    def __init__(self, msg, output):
+        Exception.__init__(self, msg, output)
+        self.msg = msg
+        self.output = output
+
+    def __str__(self):
+        return "%s    (output: %r)" % (self.msg, self.output)
+
+
+class SCPAuthenticationError(SCPError):
+    pass
+
+
+class SCPAuthenticationTimeoutError(SCPAuthenticationError):
+    def __init__(self, output):
+        SCPAuthenticationError.__init__(self, "Authentication timeout expired",
+                                        output)
+
+
+class SCPTransferTimeoutError(SCPError):
+    def __init__(self, output):
+        SCPError.__init__(self, "Transfer timeout expired", output)
+
+
+class SCPTransferFailedError(SCPError):
+    def __init__(self, status, output):
+        SCPError.__init__(self, None, output)
+        self.status = status
+
+    def __str__(self):
+        return ("SCP transfer failed    (status: %s,    output: %r)" %
+                (self.status, self.output))
+
+
+def _remote_login(session, username, password, prompt, timeout=10):
+    """
+    Log into a remote host (guest) using SSH or Telnet.  Wait for questions
+    and provide answers.  If timeout expires while waiting for output from the
+    child (e.g. a password prompt or a shell prompt) -- fail.
+
+    @brief: Log into a remote host (guest) using SSH or Telnet.
+
+    @param session: An Expect or ShellSession instance to operate on
+    @param username: The username to send in reply to a login prompt
+    @param password: The password to send in reply to a password prompt
+    @param prompt: The shell prompt that indicates a successful login
+    @param timeout: The maximal time duration (in seconds) to wait for each
+            step of the login procedure (i.e. the "Are you sure" prompt, the
+            password prompt, the shell prompt, etc)
+    @raise LoginTimeoutError: If timeout expires
+    @raise LoginAuthenticationError: If authentication fails
+    @raise LoginProcessTerminatedError: If the client terminates during login
+    @raise LoginError: If some other error occurs
+    """
+    password_prompt_count = 0
+    login_prompt_count = 0
+
+    while True:
+        try:
+            match, text = session.read_until_last_line_matches(
+                [r"[Aa]re you sure", r"[Pp]assword:\s*$", r"[Ll]ogin:\s*$",
+                 r"[Cc]onnection.*closed", r"[Cc]onnection.*refused",
+                 r"[Pp]lease wait", prompt],
+                timeout=timeout, internal_timeout=0.5)
+            if match == 0:  # "Are you sure you want to continue connecting"
+                logging.debug("Got 'Are you sure...'; sending 'yes'")
+                session.sendline("yes")
+                continue
+            elif match == 1:  # "password:"
+                if password_prompt_count == 0:
+                    logging.debug("Got password prompt; sending '%s'", password)
+                    session.sendline(password)
+                    password_prompt_count += 1
+                    continue
+                else:
+                    raise LoginAuthenticationError("Got password prompt twice",
+                                                   text)
+            elif match == 2:  # "login:"
+                if login_prompt_count == 0 and password_prompt_count == 0:
+                    logging.debug("Got username prompt; sending '%s'", username)
+                    session.sendline(username)
+                    login_prompt_count += 1
+                    continue
+                else:
+                    if login_prompt_count > 0:
+                        msg = "Got username prompt twice"
+                    else:
+                        msg = "Got username prompt after password prompt"
+                    raise LoginAuthenticationError(msg, text)
+            elif match == 3:  # "Connection closed"
+                raise LoginError("Client said 'connection closed'", text)
+            elif match == 4:  # "Connection refused"
+                raise LoginError("Client said 'connection refused'", text)
+            elif match == 5:  # "Please wait"
+                logging.debug("Got 'Please wait'")
+                timeout = 30
+                continue
+            elif match == 6:  # prompt
+                logging.debug("Got shell prompt -- logged in")
+                break
+        except aexpect.ExpectTimeoutError, e:
+            raise LoginTimeoutError(e.output)
+        except aexpect.ExpectProcessTerminatedError, e:
+            raise LoginProcessTerminatedError(e.status, e.output)
+
+
+def remote_login(client, host, port, username, password, prompt, linesep="\n",
+                 log_filename=None, timeout=10):
+    """
+    Log into a remote host (guest) using SSH/Telnet/Netcat.
+
+    @param client: The client to use ('ssh', 'telnet' or 'nc')
+    @param host: Hostname or IP address
+    @param port: Port to connect to
+    @param username: Username (if required)
+    @param password: Password (if required)
+    @param prompt: Shell prompt (regular expression)
+    @param linesep: The line separator to use when sending lines
+            (e.g. '\\n' or '\\r\\n')
+    @param log_filename: If specified, log all output to this file
+    @param timeout: The maximal time duration (in seconds) to wait for
+            each step of the login procedure (i.e. the "Are you sure" prompt
+            or the password prompt)
+    @raise LoginBadClientError: If an unknown client is requested
+    @raise: Whatever _remote_login() raises
+    @return: A ShellSession object.
+    """
+    if client == "ssh":
+        cmd = ("ssh -o UserKnownHostsFile=/dev/null "
+               "-o PreferredAuthentications=password -p %s %s@%s" %
+               (port, username, host))
+    elif client == "telnet":
+        cmd = "telnet -l %s %s %s" % (username, host, port)
+    elif client == "nc":
+        cmd = "nc %s %s" % (host, port)
+    else:
+        raise LoginBadClientError(client)
+
+    logging.debug("Trying to login with command '%s'", cmd)
+    session = aexpect.ShellSession(cmd, linesep=linesep, prompt=prompt)
+    try:
+        _remote_login(session, username, password, prompt, timeout)
+    except:
+        session.close()
+        raise
+    if log_filename:
+        session.set_output_func(log_line)
+        session.set_output_params((log_filename,))
+    return session
+
+
+def wait_for_login(client, host, port, username, password, prompt, linesep="\n",
+                   log_filename=None, timeout=240, internal_timeout=10):
+    """
+    Make multiple attempts to log into a remote host (guest) until one succeeds
+    or timeout expires.
+
+    @param timeout: Total time duration to wait for a successful login
+    @param internal_timeout: The maximal time duration (in seconds) to wait for
+            each step of the login procedure (e.g. the "Are you sure" prompt
+            or the password prompt)
+    @see: remote_login()
+    @raise: Whatever remote_login() raises
+    @return: A ShellSession object.
+    """
+    logging.debug("Attempting to log into %s:%s using %s (timeout %ds)",
+                  host, port, client, timeout)
+    end_time = time.time() + timeout
+    while time.time() < end_time:
+        try:
+            return remote_login(client, host, port, username, password, prompt,
+                                linesep, log_filename, internal_timeout)
+        except LoginError, e:
+            logging.debug(e)
+        time.sleep(2)
+    # Timeout expired; try one more time but don't catch exceptions
+    return remote_login(client, host, port, username, password, prompt,
+                        linesep, log_filename, internal_timeout)
+
+
+def _remote_scp(session, password, transfer_timeout=600, login_timeout=10):
+    """
+    Transfer file(s) to a remote host (guest) using SCP.  Wait for questions
+    and provide answers.  If login_timeout expires while waiting for output
+    from the child (e.g. a password prompt), fail.  If transfer_timeout expires
+    while waiting for the transfer to complete, fail.
+
+    @brief: Transfer files using SCP, given a command line.
+
+    @param session: An Expect or ShellSession instance to operate on
+    @param password: The password to send in reply to a password prompt.
+    @param transfer_timeout: The time duration (in seconds) to wait for the
+            transfer to complete.
+    @param login_timeout: The maximal time duration (in seconds) to wait for
+            each step of the login procedure (i.e. the "Are you sure" prompt or
+            the password prompt)
+    @raise SCPAuthenticationError: If authentication fails
+    @raise SCPTransferTimeoutError: If the transfer fails to complete in time
+    @raise SCPTransferFailedError: If the process terminates with a nonzero
+            exit code
+    @raise SCPError: If some other error occurs
+    """
+    password_prompt_count = 0
+    timeout = login_timeout
+    authentication_done = False
+
+    while True:
+        try:
+            match, text = session.read_until_last_line_matches(
+                [r"[Aa]re you sure", r"[Pp]assword:\s*$", r"lost connection"],
+                timeout=timeout, internal_timeout=0.5)
+            if match == 0:  # "Are you sure you want to continue connecting"
+                logging.debug("Got 'Are you sure...'; sending 'yes'")
+                session.sendline("yes")
+                continue
+            elif match == 1:  # "password:"
+                if password_prompt_count == 0:
+                    logging.debug("Got password prompt; sending '%s'", password)
+                    session.sendline(password)
+                    password_prompt_count += 1
+                    timeout = transfer_timeout
+                    authentication_done = True
+                    continue
+                else:
+                    raise SCPAuthenticationError("Got password prompt twice",
+                                                 text)
+            elif match == 2:  # "lost connection"
+                raise SCPError("SCP client said 'lost connection'", text)
+        except aexpect.ExpectTimeoutError, e:
+            if authentication_done:
+                raise SCPTransferTimeoutError(e.output)
+            else:
+                raise SCPAuthenticationTimeoutError(e.output)
+        except aexpect.ExpectProcessTerminatedError, e:
+            if e.status == 0:
+                logging.debug("SCP process terminated with status 0")
+                break
+            else:
+                raise SCPTransferFailedError(e.status, e.output)
+
+
+def remote_scp(command, password, log_filename=None, transfer_timeout=600,
+               login_timeout=10):
+    """
+    Transfer file(s) to a remote host (guest) using SCP.
+
+    @brief: Transfer files using SCP, given a command line.
+
+    @param command: The command to execute
+        (e.g. "scp -r foobar root@localhost:/tmp/").
+    @param password: The password to send in reply to a password prompt.
+    @param log_filename: If specified, log all output to this file
+    @param transfer_timeout: The time duration (in seconds) to wait for the
+            transfer to complete.
+    @param login_timeout: The maximal time duration (in seconds) to wait for
+            each step of the login procedure (i.e. the "Are you sure" prompt
+            or the password prompt)
+    @raise: Whatever _remote_scp() raises
+    """
+    logging.debug("Trying to SCP with command '%s', timeout %ss",
+                  command, transfer_timeout)
+    if log_filename:
+        output_func = log_line
+        output_params = (log_filename,)
+    else:
+        output_func = None
+        output_params = ()
+    session = aexpect.Expect(command,
+                             output_func=output_func,
+                             output_params=output_params)
+    try:
+        _remote_scp(session, password, transfer_timeout, login_timeout)
+    finally:
+        session.close()
+
+
+def scp_to_remote(host, port, username, password, local_path, remote_path,
+                  log_filename=None, timeout=600):
+    """
+    Copy files to a remote host (guest) through scp.
+
+    @param host: Hostname or IP address
+    @param username: Username (if required)
+    @param password: Password (if required)
+    @param local_path: Path on the local machine where we are copying from
+    @param remote_path: Path on the remote machine where we are copying to
+    @param log_filename: If specified, log all output to this file
+    @param timeout: The time duration (in seconds) to wait for the transfer
+            to complete.
+    @raise: Whatever remote_scp() raises
+    """
+    command = ("scp -v -o UserKnownHostsFile=/dev/null "
+               "-o PreferredAuthentications=password -r -P %s %s %s@%s:%s" %
+               (port, local_path, username, host, remote_path))
+    remote_scp(command, password, log_filename, timeout)
+
+
+def scp_from_remote(host, port, username, password, remote_path, local_path,
+                    log_filename=None, timeout=600):
+    """
+    Copy files from a remote host (guest).
+
+    @param host: Hostname or IP address
+    @param username: Username (if required)
+    @param password: Password (if required)
+    @param local_path: Path on the local machine where we are copying from
+    @param remote_path: Path on the remote machine where we are copying to
+    @param log_filename: If specified, log all output to this file
+    @param timeout: The time duration (in seconds) to wait for the transfer
+            to complete.
+    @raise: Whatever remote_scp() raises
+    """
+    command = ("scp -v -o UserKnownHostsFile=/dev/null "
+               "-o PreferredAuthentications=password -r -P %s %s@%s:%s %s" %
+               (port, username, host, remote_path, local_path))
+    remote_scp(command, password, log_filename, timeout)
+
+
+def copy_files_to(address, client, username, password, port, local_path,
+                  remote_path, log_filename=None, verbose=False, timeout=600):
+    """
+    Copy files to a remote host (guest) using the selected client.
+
+    @param client: Type of transfer client
+    @param username: Username (if required)
+    @param password: Password (if required)
+    @param local_path: Path on the local machine where we are copying from
+    @param remote_path: Path on the remote machine where we are copying to
+    @param address: Address of remote host (guest)
+    @param log_filename: If specified, log all output to this file (SCP only)
+    @param verbose: If True, log some stats using logging.debug (RSS only)
+    @param timeout: The time duration (in seconds) to wait for the transfer to
+            complete.
+    @raise: Whatever remote_scp() raises
+    """
+    if client == "scp":
+        scp_to_remote(address, port, username, password, local_path,
+                      remote_path, log_filename, timeout)
+    elif client == "rss":
+        log_func = None
+        if verbose:
+            log_func = logging.debug
+        c = rss_client.FileUploadClient(address, port, log_func)
+        c.upload(local_path, remote_path, timeout)
+        c.close()
+
+
+def copy_files_from(address, client, username, password, port, remote_path,
+                    local_path, log_filename=None, verbose=False, timeout=600):
+    """
+    Copy files from a remote host (guest) using the selected client.
+
+    @param client: Type of transfer client
+    @param username: Username (if required)
+    @param password: Password (if required)
+    @param remote_path: Path on the remote machine where we are copying from
+    @param local_path: Path on the local machine where we are copying to
+    @param address: Address of remote host (guest)
+    @param log_filename: If specified, log all output to this file (SCP only)
+    @param verbose: If True, log some stats using logging.debug (RSS only)
+    @param timeout: The time duration (in seconds) to wait for the transfer to
+            complete.
+    @raise: Whatever remote_scp() raises
+    """
+    if client == "scp":
+        scp_from_remote(address, port, username, password, remote_path,
+                        local_path, log_filename, timeout)
+    elif client == "rss":
+        log_func = None
+        if verbose:
+            log_func = logging.debug
+        c = rss_client.FileDownloadClient(address, port, log_func)
+        c.download(remote_path, local_path, timeout)
+        c.close()
+
+
+# The following are utility functions related to ports.
+
+def is_port_free(port, address):
+    """
+    Return True if the given port is available for use.
+
+    For "localhost", attempt to bind the port; for a remote address, assume
+    the port is in use if a connection to it succeeds.
+
+    @param port: Port number
+    @param address: Hostname or IP address on which to check the port
+    """
+    try:
+        s = socket.socket()
+        #s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+        if address == "localhost":
+            s.bind(("localhost", port))
+            free = True
+        else:
+            s.connect((address, port))
+            free = False
+    except socket.error:
+        if address == "localhost":
+            free = False
+        else:
+            free = True
+    s.close()
+    return free
+
+
+def find_free_port(start_port, end_port, address="localhost"):
+    """
+    Return a free host port in the range [start_port, end_port).
+
+    @param start_port: First port that will be checked.
+    @param end_port: Port immediately after the last one that will be checked.
+    """
+    for i in range(start_port, end_port):
+        if is_port_free(i, address):
+            return i
+    return None
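For reference, the port probing above condenses into the following standalone sketch (the localhost path of is_port_free() plus the half-open scan of find_free_port(); names are reused here for illustration only, not imported from the patch):

```python
import socket

def is_port_free(port, address="localhost"):
    # Localhost path of the patch's check: a successful bind means the
    # port is free; a socket.error means something already owns it.
    s = socket.socket()
    try:
        s.bind((address, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

def find_free_port(start_port, end_port, address="localhost"):
    # Scan the half-open range [start_port, end_port) and return the
    # first free port, or None if every port in the range is taken.
    for port in range(start_port, end_port):
        if is_port_free(port, address):
            return port
    return None
```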
+
+
+def find_free_ports(start_port, end_port, count, address="localhost"):
+    """
+    Return up to count free host ports in the range [start_port, end_port).
+
+    @param start_port: First port that will be checked.
+    @param end_port: Port immediately after the last one that will be checked.
+    @param count: Maximum number of free ports to return.
+    """
+    ports = []
+    i = start_port
+    while i < end_port and count > 0:
+        if is_port_free(i, address):
+            ports.append(i)
+            count -= 1
+        i += 1
+    return ports
+
+
+# An easy way to log lines to files when the logging system can't be used
+
+_open_log_files = {}
+_log_file_dir = "/tmp"
+
+
+def log_line(filename, line):
+    """
+    Write a line to a file.  '\n' is appended to the line.
+
+    @param filename: Path of file to write to, either absolute or relative to
+            the dir set by set_log_file_dir().
+    @param line: Line to write.
+    """
+    global _open_log_files, _log_file_dir
+    if filename not in _open_log_files:
+        path = get_path(_log_file_dir, filename)
+        try:
+            os.makedirs(os.path.dirname(path))
+        except OSError:
+            pass
+        _open_log_files[filename] = open(path, "w")
+    timestr = time.strftime("%Y-%m-%d %H:%M:%S")
+    _open_log_files[filename].write("%s: %s\n" % (timestr, line))
+    _open_log_files[filename].flush()
+
+
+def set_log_file_dir(dir):
+    """
+    Set the base directory for log files created by log_line().
+
+    @param dir: Directory for log files.
+    """
+    global _log_file_dir
+    _log_file_dir = dir
+
+
+# The following are miscellaneous utility functions.
+
+def get_path(base_path, user_path):
+    """
+    Translate a user specified path to a real path.
+    If user_path is relative, append it to base_path.
+    If user_path is absolute, return it as is.
+
+    @param base_path: The base path of relative user specified paths.
+    @param user_path: The user specified path.
+    """
+    if os.path.isabs(user_path):
+        return user_path
+    else:
+        return os.path.join(base_path, user_path)
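The contract above, restated as a self-contained sketch (POSIX paths assumed):

```python
import os.path

def get_path(base_path, user_path):
    # Absolute paths pass through untouched; relative paths are
    # interpreted relative to base_path.
    if os.path.isabs(user_path):
        return user_path
    return os.path.join(base_path, user_path)
```

For instance, get_path("/tmp", "kvm.log") yields "/tmp/kvm.log", while an absolute user_path such as "/var/log/kvm.log" is returned unchanged.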
+
+
+def generate_random_string(length):
+    """
+    Return a random string using alphanumeric characters.
+
+    @param length: Length of the string that will be generated.
+    """
+    r = random.SystemRandom()
+    chars = string.letters + string.digits
+    return "".join(r.choice(chars) for _ in xrange(length))
+
+
+def generate_random_id():
+    """
+    Return a random string suitable for use as a qemu id.
+    """
+    return "id" + generate_random_string(6)
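A compact sketch of the same two helpers (Python 3 spelling: string.ascii_letters instead of the patch's Python 2 string.letters; standalone, not imported from the patch):

```python
import random
import string

def generate_random_string(length):
    # SystemRandom draws from the OS entropy pool (os.urandom), which
    # makes collisions between generated ids very unlikely.
    r = random.SystemRandom()
    chars = string.ascii_letters + string.digits
    return "".join(r.choice(chars) for _ in range(length))

def generate_random_id():
    # Same shape as the patch's qemu id helper: "id" + 6 random chars.
    return "id" + generate_random_string(6)
```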
+
+
+def generate_tmp_file_name(file, ext=None, dir='/tmp/'):
+    """
+    Returns a temporary file name. The file is not created.
+    """
+    while True:
+        file_name = (file + '-' + time.strftime("%Y%m%d-%H%M%S-") +
+                     generate_random_string(4))
+        if ext:
+            file_name += '.' + ext
+        file_name = os.path.join(dir, file_name)
+        if not os.path.exists(file_name):
+            break
+
+    return file_name
+
+
+def format_str_for_message(str):
+    """
+    Format str so that it can be appended to a message.
+    If str consists of one line, prefix it with a space.
+    If str consists of multiple lines, prefix it with a newline.
+
+    @param str: string that will be formatted.
+    """
+    lines = str.splitlines()
+    num_lines = len(lines)
+    str = "\n".join(lines)
+    if num_lines == 0:
+        return ""
+    elif num_lines == 1:
+        return " " + str
+    else:
+        return "\n" + str
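The three cases can be seen at a glance in a standalone restatement (illustrative sketch; parameter renamed to avoid shadowing the builtin):

```python
def format_str_for_message(text):
    # Empty -> ""; one line -> leading space; multiple lines -> leading
    # newline, so the block starts on its own line within a message.
    lines = text.splitlines()
    body = "\n".join(lines)
    if not lines:
        return ""
    elif len(lines) == 1:
        return " " + body
    else:
        return "\n" + body
```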
+
+
+def wait_for(func, timeout, first=0.0, step=1.0, text=None):
+    """
+    If func() evaluates to True before timeout expires, return the
+    value of func(). Otherwise return None.
+
+    @brief: Wait until func() evaluates to True.
+
+    @param timeout: Timeout in seconds
+    @param first: Time to sleep before first attempt
+    @param step: Time to sleep between attempts in seconds
+    @param text: Text to print while waiting, for debug purposes
+    """
+    start_time = time.time()
+    end_time = time.time() + timeout
+
+    time.sleep(first)
+
+    while time.time() < end_time:
+        if text:
+            logging.debug("%s (%f secs)", text, (time.time() - start_time))
+
+        output = func()
+        if output:
+            return output
+
+        time.sleep(step)
+
+    logging.debug("Timeout elapsed")
+    return None
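Isolated from the logging, the polling loop above behaves like this (standalone sketch, not imported from the patch):

```python
import time

def wait_for(func, timeout, first=0.0, step=0.1):
    # Sleep `first` seconds, then call func() every `step` seconds until
    # it returns a truthy value (returned as-is) or `timeout` expires
    # (returns None).
    end_time = time.time() + timeout
    time.sleep(first)
    while time.time() < end_time:
        output = func()
        if output:
            return output
        time.sleep(step)
    return None

# Example condition that becomes true on its third poll.
state = {"polls": 0}

def ready():
    state["polls"] += 1
    return "done" if state["polls"] >= 3 else None
```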
+
+
+def get_hash_from_file(hash_path, dvd_basename):
+    """
+    Get the hash of a given DVD image from a hash file
+    (hash files are usually named MD5SUM or SHA1SUM and are located inside
+    the download directories of the DVDs).
+
+    @param hash_path: Local path to a hash file.
+    @param dvd_basename: Basename of a DVD image
+    """
+    hash_file = open(hash_path, 'r')
+    for line in hash_file.readlines():
+        if dvd_basename in line:
+            return line.split()[0]
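A standalone sketch of the lookup, with the file explicitly closed (the patch's version leaves that to the garbage collector):

```python
def get_hash_from_file(hash_path, dvd_basename):
    # MD5SUM/SHA1SUM files carry one "<hash>  <filename>" entry per line;
    # return the hash on the first line mentioning the image basename.
    with open(hash_path) as hash_file:
        for line in hash_file:
            if dvd_basename in line:
                return line.split()[0]
    return None
```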
+
+
+def run_tests(parser, job):
+    """
+    Runs the sequence of KVM tests based on the list of dictionaries
+    generated by the configuration system, handling dependencies.
+
+    @param parser: Config parser object.
+    @param job: Autotest job object.
+
+    @return: True if all tests passed, False if any of them failed.
+    """
+    for i, d in enumerate(parser.get_dicts()):
+        logging.info("Test %4d:  %s", i + 1, d["shortname"])
+
+    status_dict = {}
+    failed = False
+
+    for dict in parser.get_dicts():
+        if dict.get("skip") == "yes":
+            continue
+        dependencies_satisfied = True
+        for dep in dict.get("dep"):
+            for test_name in status_dict.keys():
+                if dep not in test_name:
+                    continue
+                if not status_dict[test_name]:
+                    dependencies_satisfied = False
+                    break
+        if dependencies_satisfied:
+            test_iterations = int(dict.get("iterations", 1))
+            test_tag = dict.get("shortname")
+            # Setting up profilers during test execution.
+            profilers = dict.get("profilers", "").split()
+            for profiler in profilers:
+                job.profilers.add(profiler)
+
+            # We need only one execution, profiled, hence we're passing
+            # the profile_only parameter to job.run_test().
+            current_status = job.run_test("kvm", params=dict, tag=test_tag,
+                                          iterations=test_iterations,
+                                          profile_only=bool(profilers) or None)
+
+            for profiler in profilers:
+                job.profilers.delete(profiler)
+
+            if not current_status:
+                failed = True
+        else:
+            current_status = False
+        status_dict[dict.get("name")] = current_status
+
+    return not failed
+
+
+def create_report(report_dir, results_dir):
+    """
+    Creates a neatly arranged HTML results report in the results dir.
+
+    @param report_dir: Directory where the report script is located.
+    @param results_dir: Directory where the results will be output.
+    """
+    reporter = os.path.join(report_dir, 'html_report.py')
+    html_file = os.path.join(results_dir, 'results.html')
+    os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
+
+
+def get_full_pci_id(pci_id):
+    """
+    Get full PCI ID of pci_id.
+
+    @param pci_id: PCI ID of a device.
+    """
+    cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
+    status, full_id = commands.getstatusoutput(cmd)
+    if status != 0:
+        return None
+    return full_id
+
+
+def get_vendor_from_pci_id(pci_id):
+    """
+    Check out the device vendor ID according to pci_id.
+
+    @param pci_id: PCI ID of a device.
+    """
+    cmd = "lspci -n | awk '/%s/ {print $3}'" % pci_id
+    return re.sub(":", " ", commands.getoutput(cmd))
+
+
+class Thread(threading.Thread):
+    """
+    Run a function in a background thread.
+    """
+    def __init__(self, target, args=(), kwargs={}):
+        """
+        Initialize the instance.
+
+        @param target: Function to run in the thread.
+        @param args: Arguments to pass to target.
+        @param kwargs: Keyword arguments to pass to target.
+        """
+        threading.Thread.__init__(self)
+        self._target = target
+        self._args = args
+        self._kwargs = kwargs
+
+
+    def run(self):
+        """
+        Run target (passed to the constructor).  No point in calling this
+        function directly.  Call start() to make this function run in a new
+        thread.
+        """
+        self._e = None
+        self._retval = None
+        try:
+            try:
+                self._retval = self._target(*self._args, **self._kwargs)
+            except:
+                self._e = sys.exc_info()
+                raise
+        finally:
+            # Avoid circular references (start() may be called only once so
+            # it's OK to delete these)
+            del self._target, self._args, self._kwargs
+
+
+    def join(self, timeout=None, suppress_exception=False):
+        """
+        Join the thread.  If target raised an exception, re-raise it.
+        Otherwise, return the value returned by target.
+
+        @param timeout: Timeout value to pass to threading.Thread.join().
+        @param suppress_exception: If True, don't re-raise the exception.
+        """
+        threading.Thread.join(self, timeout)
+        try:
+            if self._e:
+                if not suppress_exception:
+                    # Because the exception was raised in another thread, we
+                    # need to explicitly insert the current context into it
+                    s = error.exception_context(self._e[1])
+                    s = error.join_contexts(error.get_context(), s)
+                    error.set_exception_context(self._e[1], s)
+                    raise self._e[0], self._e[1], self._e[2]
+            else:
+                return self._retval
+        finally:
+            # Avoid circular references (join() may be called multiple times
+            # so we can't delete these)
+            self._e = None
+            self._retval = None
+
+
+def parallel(targets):
+    """
+    Run multiple functions in parallel.
+
+    @param targets: A sequence of tuples or functions.  If it's a sequence of
+            tuples, each tuple will be interpreted as (target, args, kwargs) or
+            (target, args) or (target,) depending on its length.  If it's a
+            sequence of functions, the functions will be called without
+            arguments.
+    @return: A list of the values returned by the functions called.
+    """
+    threads = []
+    for target in targets:
+        if isinstance(target, tuple) or isinstance(target, list):
+            t = Thread(*target)
+        else:
+            t = Thread(target)
+        threads.append(t)
+        t.start()
+    return [t.join() for t in threads]
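The fork/join pattern above, stripped of the autotest error-context plumbing, looks roughly like this (hedged sketch; the real class also re-inserts the caller's context before re-raising):

```python
import threading

class Thread(threading.Thread):
    # Background thread that captures its target's return value or
    # exception so join() can deliver either one to the caller.
    def __init__(self, target, args=(), kwargs=None):
        threading.Thread.__init__(self)
        self._func = target
        self._func_args = args
        self._func_kwargs = kwargs or {}
        self._exc = None
        self._retval = None

    def run(self):
        try:
            self._retval = self._func(*self._func_args, **self._func_kwargs)
        except Exception as e:
            self._exc = e

    def join(self, timeout=None):
        threading.Thread.join(self, timeout)
        if self._exc:
            raise self._exc
        return self._retval

def parallel(targets):
    # Accept (target, args, kwargs) tuples or bare callables; start all
    # threads first, then join in order so results keep their order.
    threads = [Thread(*t) if isinstance(t, (tuple, list)) else Thread(t)
               for t in targets]
    for t in threads:
        t.start()
    return [t.join() for t in threads]
```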
+
+
+class KvmLoggingConfig(logging_config.LoggingConfig):
+    """
+    Used with the sole purpose of providing convenient logging setup
+    for the KVM test auxiliary programs.
+    """
+    def configure_logging(self, results_dir=None, verbose=False):
+        super(KvmLoggingConfig, self).configure_logging(use_console=True,
+                                                        verbose=verbose)
+
+
+class PciAssignable(object):
+    """
+    Request PCI assignable devices on host. It will check whether to request
+    PF (physical Functions) or VF (Virtual Functions).
+    """
+    def __init__(self, type="vf", driver=None, driver_option=None,
+                 names=None, devices_requested=None):
+        """
+        Initialize parameter 'type' which could be:
+        vf: Virtual Functions
+        pf: Physical Function (actual hardware)
+        mixed: Both VFs and PFs
+
+        When passing through physical NIC cards, we need to specify which
+        devices to assign, e.g. 'eth1 eth2'.
+
+        When passing through Virtual Functions, we need to specify how many
+        VFs are going to be assigned, e.g. devices_requested = 8, together
+        with the driver's max_vfs option in the config file.
+
+        @param type: PCI device type.
+        @param driver: Kernel module for the PCI assignable device.
+        @param driver_option: Module option to specify the maximum number of
+                VFs (eg 'max_vfs=7')
+        @param names: Network interfaces corresponding to the physical NIC
+                cards, e.g. 'eth1 eth2 ...'
+        @param devices_requested: Number of devices being requested.
+        """
+        self.type = type
+        self.driver = driver
+        self.driver_option = driver_option
+        if names:
+            self.name_list = names.split()
+        else:
+            self.name_list = []
+        if devices_requested:
+            self.devices_requested = int(devices_requested)
+        else:
+            self.devices_requested = None
+
+
+    def _get_pf_pci_id(self, name, search_str):
+        """
+        Get the PF PCI ID according to name.
+
+        @param name: Name of the PCI device.
+        @param search_str: Search string to be used on lspci.
+        """
+        cmd = "ethtool -i %s | awk '/bus-info/ {print $2}'" % name
+        s, pci_id = commands.getstatusoutput(cmd)
+        if not (s or "Cannot get driver information" in pci_id):
+            return pci_id[5:]
+        cmd = "lspci | awk '/%s/ {print $1}'" % search_str
+        pci_ids = commands.getoutput(cmd).splitlines()
+        nic_id = int(re.search('[0-9]+', name).group(0))
+        if (len(pci_ids) - 1) < nic_id:
+            return None
+        return pci_ids[nic_id]
+
+
+    def _release_dev(self, pci_id):
+        """
+        Release a single PCI device.
+
+        @param pci_id: PCI ID of a given PCI device.
+        """
+        base_dir = "/sys/bus/pci"
+        full_id = get_full_pci_id(pci_id)
+        vendor_id = get_vendor_from_pci_id(pci_id)
+        drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+        if 'pci-stub' in os.readlink(drv_path):
+            cmd = "echo '%s' > %s/new_id" % (vendor_id, drv_path)
+            if os.system(cmd):
+                return False
+
+            stub_path = os.path.join(base_dir, "drivers/pci-stub")
+            cmd = "echo '%s' > %s/unbind" % (full_id, stub_path)
+            if os.system(cmd):
+                return False
+
+            driver = self.dev_drivers[pci_id]
+            cmd = "echo '%s' > %s/bind" % (full_id, driver)
+            if os.system(cmd):
+                return False
+
+        return True
+
+
+    def get_vf_devs(self):
+        """
+        Get the PCI IDs of all Virtual Functions.
+
+        @return: List with the PCI IDs of all available Virtual Functions
+        """
+        if not self.sr_iov_setup():
+            return []
+
+        cmd = "lspci | awk '/Virtual Function/ {print $1}'"
+        return commands.getoutput(cmd).split()
+
+
+    def get_pf_devs(self):
+        """
+        Get the PCI IDs of all requested Physical Functions.
+
+        @return: List with all PCI IDs for the physical hardware requested
+        """
+        pf_ids = []
+        for name in self.name_list:
+            pf_id = self._get_pf_pci_id(name, "Ethernet")
+            if not pf_id:
+                continue
+            pf_ids.append(pf_id)
+        return pf_ids
+
+
+    def get_devs(self, count):
+        """
+        Check out all devices' PCI IDs according to their name.
+
+        @param count: count number of PCI devices needed for pass through
+        @return: a list of all devices' PCI IDs
+        """
+        if self.type == "vf":
+            vf_ids = self.get_vf_devs()
+        elif self.type == "pf":
+            vf_ids = self.get_pf_devs()
+        elif self.type == "mixed":
+            vf_ids = self.get_vf_devs()
+            vf_ids.extend(self.get_pf_devs())
+        return vf_ids[0:count]
+
+
+    def get_vfs_count(self):
+        """
+        Get VFs count number according to lspci.
+        """
+        # FIXME: Need a method to identify which virtual function belongs
+        # to which physical card, since the host may have more than one
+        # 82576 card. PCI_ID?
+        cmd = "lspci | grep 'Virtual Function' | wc -l"
+        return int(commands.getoutput(cmd))
+
+
+    def check_vfs_count(self):
+        """
+        Check whether the VF count matches the driver_option parameter.
+        """
+        # Network card 82576 has two network interfaces and each can be
+        # virtualized up to 7 virtual functions, therefore we multiply
+        # two for the value of driver_option 'max_vfs'.
+        expected_count = int((re.findall("(\d+)", self.driver_option)[0])) * 2
+        return (self.get_vfs_count() == expected_count)
+
+
+    def is_binded_to_stub(self, full_id):
+        """
+        Verify whether the device with full_id is already bound to pci-stub.
+
+        @param full_id: Full ID for the given PCI device
+        """
+        base_dir = "/sys/bus/pci"
+        stub_path = os.path.join(base_dir, "drivers/pci-stub")
+        if os.path.exists(os.path.join(stub_path, full_id)):
+            return True
+        return False
+
+
+    def sr_iov_setup(self):
+        """
+        Ensure the PCI device is working in sr_iov mode.
+
+        Check if the PCI hardware device driver is loaded with the
+        appropriate parameters (number of VFs), and if it's not, perform
+        setup.
+
+        @return: True if the setup was completed successfully, False
+                otherwise.
+        """
+        re_probe = False
+        s, o = commands.getstatusoutput('lsmod | grep %s' % self.driver)
+        if s:
+            re_probe = True
+        elif not self.check_vfs_count():
+            os.system("modprobe -r %s" % self.driver)
+            re_probe = True
+        else:
+            return True
+
+        # Re-probe driver with proper number of VFs
+        if re_probe:
+            cmd = "modprobe %s %s" % (self.driver, self.driver_option)
+            logging.info("Loading the driver '%s' with option '%s'",
+                         self.driver, self.driver_option)
+            s, o = commands.getstatusoutput(cmd)
+            if s:
+                return False
+            return True
+
+
+    def request_devs(self):
+        """
+        Implement setup process: unbind the PCI device and then bind it
+        to the pci-stub driver.
+
+        @return: a list of successfully requested devices' PCI IDs.
+        """
+        base_dir = "/sys/bus/pci"
+        stub_path = os.path.join(base_dir, "drivers/pci-stub")
+
+        self.pci_ids = self.get_devs(self.devices_requested)
+        logging.debug("The following pci_ids were found: %s", self.pci_ids)
+        requested_pci_ids = []
+        self.dev_drivers = {}
+
+        # Setup all devices specified for assignment to guest
+        for pci_id in self.pci_ids:
+            full_id = get_full_pci_id(pci_id)
+            if not full_id:
+                continue
+            drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+            dev_prev_driver = os.path.realpath(os.path.join(drv_path,
+                                               os.readlink(drv_path)))
+            self.dev_drivers[pci_id] = dev_prev_driver
+
+            # Check whether the device has already been bound to the stub driver
+            if not self.is_binded_to_stub(full_id):
+                logging.debug("Binding device %s to stub", full_id)
+                vendor_id = get_vendor_from_pci_id(pci_id)
+                stub_new_id = os.path.join(stub_path, 'new_id')
+                unbind_dev = os.path.join(drv_path, 'unbind')
+                stub_bind = os.path.join(stub_path, 'bind')
+
+                info_write_to_files = [(vendor_id, stub_new_id),
+                                       (full_id, unbind_dev),
+                                       (full_id, stub_bind)]
+
+                for content, file in info_write_to_files:
+                    try:
+                        utils.open_write_close(file, content)
+                    except IOError:
+                        logging.debug("Failed to write %s to file %s", content,
+                                      file)
+                        continue
+
+                if not self.is_binded_to_stub(full_id):
+                    logging.error("Binding device %s to stub failed", pci_id)
+                    continue
+            else:
+                logging.debug("Device %s already bound to stub", pci_id)
+            requested_pci_ids.append(pci_id)
+        self.pci_ids = requested_pci_ids
+        return self.pci_ids
+
+
+    def release_devs(self):
+        """
+        Release all PCI devices currently assigned to VMs back to the
+        virtualization host.
+        """
+        try:
+            for pci_id in self.dev_drivers:
+                if not self._release_dev(pci_id):
+                    logging.error("Failed to release device %s to host", pci_id)
+                else:
+                    logging.info("Released device %s successfully", pci_id)
+        except:
+            return
+
+
+class KojiDownloader(object):
+    """
+    Establish a connection with the build system, either koji or brew.
+
+    This class provides convenience methods to retrieve packages hosted on
+    the build system.
+    """
+    def __init__(self, cmd):
+        """
+        Verify whether the system has koji or brew installed, then load
+        the configuration file that will be used to download the files.
+
+        @param cmd: Command name, either 'brew' or 'koji'. It determines
+                the appropriate configuration used by the downloader.
+        """
+        if not KOJI_INSTALLED:
+            raise ValueError('No koji/brew installed on the machine')
+
+        if os.path.isfile(cmd):
+            koji_cmd = cmd
+        else:
+            koji_cmd = os_dep.command(cmd)
+
+        logging.debug("Found %s as the buildsystem interface", koji_cmd)
+
+        config_map = {'/usr/bin/koji': '/etc/koji.conf',
+                      '/usr/bin/brew': '/etc/brewkoji.conf'}
+
+        try:
+            config_file = config_map[koji_cmd]
+        except KeyError:
+            raise ValueError('Could not find config file for %s' % koji_cmd)
+
+        base_name = os.path.basename(koji_cmd)
+        if os.access(config_file, os.F_OK):
+            f = open(config_file)
+            config = ConfigParser.ConfigParser()
+            config.readfp(f)
+            f.close()
+        else:
+            raise IOError('Configuration file %s missing or with wrong '
+                          'permissions' % config_file)
+
+        if config.has_section(base_name):
+            self.koji_options = {}
+            session_options = {}
+            for name, value in config.items(base_name):
+                if name in ('user', 'password', 'debug_xmlrpc', 'debug'):
+                    session_options[name] = value
+                self.koji_options[name] = value
+            self.session = koji.ClientSession(self.koji_options['server'],
+                                              session_options)
+        else:
+            raise ValueError('Koji config file %s does not have a %s '
+                             'section' % (config_file, base_name))
+
+
+    def get(self, src_package, dst_dir, rfilter=None, tag=None, build=None,
+            arch=None):
+        """
+        Download a list of packages from the build system.
+
+        This will download all packages built from the source package
+        [src_package] with the given [tag] or [build], for the architecture
+        reported by the machine.
+
+        @param src_package: Source package name.
+        @param dst_dir: Destination directory for the downloaded packages.
+        @param rfilter: Regexp filter, only download the packages that match
+                that particular filter.
+        @param tag: Build system tag.
+        @param build: Build system ID.
+        @param arch: Package arch. Useful when you want to download noarch
+                packages.
+
+        @return: List of paths with the downloaded rpm packages.
+        """
+        if build and build.isdigit():
+            build = int(build)
+
+        if tag and build:
+            logging.info("Both tag and build parameters provided, ignoring tag "
+                         "parameter...")
+
+        if not tag and not build:
+            raise ValueError("Koji install selected but neither koji_tag "
+                             "nor koji_build parameters provided. Please "
+                             "provide an appropriate tag or build name.")
+
+        if not build:
+            builds = self.session.listTagged(tag, latest=True, inherit=True,
+                                             package=src_package)
+            if not builds:
+                raise ValueError("Tag %s has no builds of %s" % (tag,
+                                                                 src_package))
+            info = builds[0]
+        else:
+            info = self.session.getBuild(build)
+
+        if info is None:
+            raise ValueError('No such brew/koji build: %s' % build)
+
+        if arch is None:
+            arch = utils.get_arch()
+
+        rpms = self.session.listRPMs(buildID=info['id'],
+                                     arches=arch)
+        if not rpms:
+            raise ValueError("No %s packages available for %s" %
+                             (arch, koji.buildLabel(info)))
+
+        rpm_paths = []
+        if rfilter:
+            filter_regexp = re.compile(rfilter, re.IGNORECASE)
+        for rpm in rpms:
+            rpm_name = koji.pathinfo.rpm(rpm)
+            url = ("%s/%s/%s/%s/%s" % (self.koji_options['pkgurl'],
+                                       info['package_name'],
+                                       info['version'], info['release'],
+                                       rpm_name))
+            if rfilter:
+                download = bool(
+                    filter_regexp.match(os.path.basename(rpm_name)))
+            else:
+                download = True
+
+            if download:
+                r = utils.get_file(url,
+                                   os.path.join(dst_dir, os.path.basename(url)))
+                rpm_paths.append(r)
+
+        return rpm_paths
+
+
+def umount(src, mount_point, fstype):
+    """
+    Umount src from mount_point.
+
+    @param src: Mount source.
+    @param mount_point: Mount point.
+    @param fstype: File system type.
+    """
+
+    mount_string = "%s %s %s" % (src, mount_point, fstype)
+    if mount_string in file("/etc/mtab").read():
+        umount_cmd = "umount %s" % mount_point
+        try:
+            utils.system(umount_cmd)
+            return True
+        except error.CmdError:
+            return False
+    else:
+        logging.debug("%s is not mounted under %s", src, mount_point)
+        return True
+
+
+def mount(src, mount_point, fstype, perm="rw"):
+    """
+    Mount src on mount_point of the host.
+
+    @param src: Mount source.
+    @param mount_point: Mount point.
+    @param fstype: File system type.
+    @param perm: Mount permission.
+    """
+    umount(src, mount_point, fstype)
+    mount_string = "%s %s %s %s" % (src, mount_point, fstype, perm)
+
+    if mount_string in file("/etc/mtab").read():
+        logging.debug("%s is already mounted in %s with %s",
+                      src, mount_point, perm)
+        return True
+
+    mount_cmd = "mount -t %s %s %s -o %s" % (fstype, src, mount_point, perm)
+    try:
+        utils.system(mount_cmd)
+    except error.CmdError:
+        return False
+
+    logging.debug("Verify the mount through /etc/mtab")
+    if mount_string in file("/etc/mtab").read():
+        logging.debug("%s is successfully mounted", src)
+        return True
+    else:
+        logging.error("Can't find the mount entry - /etc/mtab contents:\n%s",
+                      file("/etc/mtab").read())
+        return False
diff --git a/client/virt/virt_vm.py b/client/virt/virt_vm.py
new file mode 100644
index 0000000..ece90c8
--- /dev/null
+++ b/client/virt/virt_vm.py
@@ -0,0 +1,298 @@
+import os, logging
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+import virt_utils, kvm_vm
+
+class VMError(Exception):
+    pass
+
+
+class VMCreateError(VMError):
+    def __init__(self, cmd, status, output):
+        VMError.__init__(self, cmd, status, output)
+        self.cmd = cmd
+        self.status = status
+        self.output = output
+
+    def __str__(self):
+        return ("VM creation command failed:    %r    (status: %s,    "
+                "output: %r)" % (self.cmd, self.status, self.output))
+
+
+class VMHashMismatchError(VMError):
+    def __init__(self, actual, expected):
+        VMError.__init__(self, actual, expected)
+        self.actual_hash = actual
+        self.expected_hash = expected
+
+    def __str__(self):
+        return ("CD image hash (%s) differs from expected one (%s)" %
+                (self.actual_hash, self.expected_hash))
+
+
+class VMImageMissingError(VMError):
+    def __init__(self, filename):
+        VMError.__init__(self, filename)
+        self.filename = filename
+
+    def __str__(self):
+        return "CD image file not found: %r" % self.filename
+
+
+class VMImageCheckError(VMError):
+    def __init__(self, filename):
+        VMError.__init__(self, filename)
+        self.filename = filename
+
+    def __str__(self):
+        return "Errors found on image: %r" % self.filename
+
+
+class VMBadPATypeError(VMError):
+    def __init__(self, pa_type):
+        VMError.__init__(self, pa_type)
+        self.pa_type = pa_type
+
+    def __str__(self):
+        return "Unsupported PCI assignable type: %r" % self.pa_type
+
+
+class VMPAError(VMError):
+    def __init__(self, pa_type):
+        VMError.__init__(self, pa_type)
+        self.pa_type = pa_type
+
+    def __str__(self):
+        return ("No PCI assignable devices could be assigned "
+                "(pci_assignable=%r)" % self.pa_type)
+
+
+class VMPostCreateError(VMError):
+    def __init__(self, cmd, output):
+        VMError.__init__(self, cmd, output)
+        self.cmd = cmd
+        self.output = output
+
+
+class VMHugePageError(VMPostCreateError):
+    def __str__(self):
+        return ("Cannot allocate hugepage memory    (command: %r,    "
+                "output: %r)" % (self.cmd, self.output))
+
+
+class VMKVMInitError(VMPostCreateError):
+    def __str__(self):
+        return ("Cannot initialize KVM    (command: %r,    output: %r)" %
+                (self.cmd, self.output))
+
+
+class VMDeadError(VMError):
+    def __init__(self, status, output):
+        VMError.__init__(self, status, output)
+        self.status = status
+        self.output = output
+
+    def __str__(self):
+        return ("VM process is dead    (status: %s,    output: %r)" %
+                (self.status, self.output))
+
+
+class VMAddressError(VMError):
+    pass
+
+
+class VMPortNotRedirectedError(VMAddressError):
+    def __init__(self, port):
+        VMAddressError.__init__(self, port)
+        self.port = port
+
+    def __str__(self):
+        return "Port not redirected: %s" % self.port
+
+
+class VMAddressVerificationError(VMAddressError):
+    def __init__(self, mac, ip):
+        VMAddressError.__init__(self, mac, ip)
+        self.mac = mac
+        self.ip = ip
+
+    def __str__(self):
+        return ("Cannot verify MAC-IP address mapping using arping: "
+                "%s ---> %s" % (self.mac, self.ip))
+
+
+class VMMACAddressMissingError(VMAddressError):
+    def __init__(self, nic_index):
+        VMAddressError.__init__(self, nic_index)
+        self.nic_index = nic_index
+
+    def __str__(self):
+        return "No MAC address defined for NIC #%s" % self.nic_index
+
+
+class VMIPAddressMissingError(VMAddressError):
+    def __init__(self, mac):
+        VMAddressError.__init__(self, mac)
+        self.mac = mac
+
+    def __str__(self):
+        return "Cannot find IP address for MAC address %s" % self.mac
+
+
+class VMMigrateError(VMError):
+    pass
+
+
+class VMMigrateTimeoutError(VMMigrateError):
+    pass
+
+
+class VMMigrateCancelError(VMMigrateError):
+    pass
+
+
+class VMMigrateFailedError(VMMigrateError):
+    pass
+
+
+class VMMigrateStateMismatchError(VMMigrateError):
+    def __init__(self, src_hash, dst_hash):
+        VMMigrateError.__init__(self, src_hash, dst_hash)
+        self.src_hash = src_hash
+        self.dst_hash = dst_hash
+
+    def __str__(self):
+        return ("Mismatch of VM state before and after migration (%s != %s)" %
+                (self.src_hash, self.dst_hash))
+
+
+class VMRebootError(VMError):
+    pass
+
+
+def get_image_filename(params, root_dir):
+    """
+    Generate an image path from params and root_dir.
+
+    @param params: Dictionary containing the test parameters.
+    @param root_dir: Base directory for relative filenames.
+
+    @note: params should contain:
+           image_name -- the name of the image file, without extension
+           image_format -- the format of the image (qcow2, raw etc)
+    """
+    image_name = params.get("image_name", "image")
+    image_format = params.get("image_format", "qcow2")
+    if params.get("image_raw_device") == "yes":
+        return image_name
+    image_filename = "%s.%s" % (image_name, image_format)
+    image_filename = virt_utils.get_path(root_dir, image_filename)
+    return image_filename
+
+
+def create_image(params, root_dir):
+    """
+    Create an image using qemu_image.
+
+    @param params: Dictionary containing the test parameters.
+    @param root_dir: Base directory for relative filenames.
+
+    @note: params should contain:
+           image_name -- the name of the image file, without extension
+           image_format -- the format of the image (qcow2, raw etc)
+           image_size -- the requested size of the image (a string
+           qemu-img can understand, such as '10G')
+    """
+    qemu_img_cmd = virt_utils.get_path(root_dir, params.get("qemu_img_binary",
+                                                           "qemu-img"))
+    qemu_img_cmd += " create"
+
+    image_format = params.get("image_format", "qcow2")
+    qemu_img_cmd += " -f %s" % image_format
+
+    image_filename = get_image_filename(params, root_dir)
+    qemu_img_cmd += " %s" % image_filename
+
+    size = params.get("image_size", "10G")
+    qemu_img_cmd += " %s" % size
+
+    utils.system(qemu_img_cmd)
+    logging.info("Image created in %r", image_filename)
+    return image_filename
+
+
+def remove_image(params, root_dir):
+    """
+    Remove an image file.
+
+    @param params: Dictionary containing the test parameters.
+    @param root_dir: Base directory for relative filenames.
+
+    @note: params should contain:
+           image_name -- the name of the image file, without extension
+           image_format -- the format of the image (qcow2, raw etc)
+    """
+    image_filename = get_image_filename(params, root_dir)
+    logging.debug("Removing image file %s...", image_filename)
+    if os.path.exists(image_filename):
+        os.unlink(image_filename)
+    else:
+        logging.debug("Image file %s not found", image_filename)
+
+
+def check_image(params, root_dir):
+    """
+    Check an image using the appropriate tools for each virt backend.
+
+    @param params: Dictionary containing the test parameters.
+    @param root_dir: Base directory for relative filenames.
+
+    @note: params should contain:
+           image_name -- the name of the image file, without extension
+           image_format -- the format of the image (qcow2, raw etc)
+
+    @raise VMImageCheckError: In case qemu-img check fails on the image.
+    """
+    vm_type = params.get("vm_type")
+    if vm_type == 'kvm':
+        image_filename = get_image_filename(params, root_dir)
+        logging.debug("Checking image file %s...", image_filename)
+        qemu_img_cmd = virt_utils.get_path(root_dir,
+                                      params.get("qemu_img_binary", "qemu-img"))
+        image_is_qcow2 = params.get("image_format") == 'qcow2'
+        if os.path.exists(image_filename) and image_is_qcow2:
+            # Verifying if qemu-img supports 'check'
+            q_result = utils.run(qemu_img_cmd, ignore_status=True)
+            q_output = q_result.stdout
+            check_img = True
+            if "check" not in q_output:
+                logging.error("qemu-img does not support 'check', "
+                              "skipping check...")
+                check_img = False
+            if "info" not in q_output:
+                logging.error("qemu-img does not support 'info', "
+                              "skipping check...")
+                check_img = False
+            if check_img:
+                try:
+                    utils.system("%s info %s" % (qemu_img_cmd, image_filename))
+                except error.CmdError:
+                    logging.error("Error getting info from image %s",
+                                  image_filename)
+                try:
+                    utils.system("%s check %s" % (qemu_img_cmd, image_filename))
+                except error.CmdError:
+                    raise VMImageCheckError(image_filename)
+
+        else:
+            if not os.path.exists(image_filename):
+                logging.debug("Image file %s not found, skipping check...",
+                              image_filename)
+            elif not image_is_qcow2:
+                logging.debug("Image file %s not qcow2, skipping check...",
+                              image_filename)
+
+
+def instantiate_vm(vm_type, **kwargs):
+    if vm_type == 'kvm':
+        return kvm_vm.VM(**kwargs)
-- 
1.7.4

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 3/7] KVM test: tests_base.cfg: Introduce parameter 'vm_type'
  2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 1/7] KVM test: Move test utilities to client/tools Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 2/7] KVM test: Create autotest_lib.client.virt namespace Lucas Meneghel Rodrigues
@ 2011-03-09  9:21 ` Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 4/7] KVM test: Adapt the test code to use the new virt namespace Lucas Meneghel Rodrigues
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09  9:21 UTC (permalink / raw)
  To: autotest; +Cc: kvm

In order to allow the shared infrastructure to select
the correct vm class to instantiate a VM, introduce
the parameter vm_type, which for KVM-based VMs is,
not surprisingly, 'kvm'.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
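Note for reviewers: the dispatch this parameter enables can be sketched
roughly as below. The class names and the table are illustrative
placeholders only (the real dispatcher is instantiate_vm() in
client/virt/virt_vm.py, which currently returns kvm_vm.VM for 'kvm'):

```python
# Hypothetical stand-ins for backend-specific VM classes; the real test
# infrastructure provides these (e.g. kvm_vm.VM).
class KVMVM(object):
    def __init__(self, name):
        self.name = name

class XenVM(object):
    def __init__(self, name):
        self.name = name

# Map the 'vm_type' parameter from the Cartesian config to a VM class.
_VM_CLASSES = {'kvm': KVMVM, 'xen': XenVM}

def instantiate_vm(vm_type, **kwargs):
    try:
        vm_class = _VM_CLASSES[vm_type]
    except KeyError:
        raise ValueError("Unsupported vm_type: %r" % vm_type)
    return vm_class(**kwargs)
```

With vm_type = kvm in tests_base.cfg, the shared code ends up calling
instantiate_vm('kvm', ...) without needing to import any KVM-specific
module directly.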
 client/tests/kvm/tests_base.cfg.sample |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index eef8c97..ff588de 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -2,6 +2,7 @@
 #
 # Define the objects we'll be using
 vms = vm1
+vm_type = kvm
 images = image1
 cdroms = cd1
 nics = nic1
-- 
1.7.4

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 4/7] KVM test: Adapt the test code to use the new virt namespace
  2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
                   ` (2 preceding siblings ...)
  2011-03-09  9:21 ` [PATCH 3/7] KVM test: tests_base.cfg: Introduce parameter 'vm_type' Lucas Meneghel Rodrigues
@ 2011-03-09  9:21 ` Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 5/7] KVM test: Removing the old libraries and programs Lucas Meneghel Rodrigues
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09  9:21 UTC (permalink / raw)
  To: autotest; +Cc: kvm

Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
 client/tests/kvm/control                           |   18 ++++-----
 client/tests/kvm/control.parallel                  |    8 ++--
 client/tests/kvm/control.unittests                 |   14 +++----
 client/tests/kvm/get_started.py                    |    5 ++-
 client/tests/kvm/kvm.py                            |   14 ++++----
 client/tests/kvm/migration_control.srv             |   12 +++---
 client/tests/kvm/tests/autotest.py                 |    6 ++--
 client/tests/kvm/tests/balloon_check.py            |    2 +-
 client/tests/kvm/tests/boot_savevm.py              |    2 +-
 client/tests/kvm/tests/build.py                    |    6 ++--
 client/tests/kvm/tests/enospc.py                   |    2 +-
 client/tests/kvm/tests/ethtool.py                  |   12 +++---
 client/tests/kvm/tests/file_transfer.py            |    7 ++--
 client/tests/kvm/tests/guest_s4.py                 |    4 +-
 client/tests/kvm/tests/guest_test.py               |    4 +-
 client/tests/kvm/tests/image_copy.py               |    4 +-
 client/tests/kvm/tests/iofuzz.py                   |    8 ++--
 client/tests/kvm/tests/jumbo.py                    |   24 ++++++------
 client/tests/kvm/tests/kdump.py                    |    4 +-
 client/tests/kvm/tests/ksm_overcommit.py           |   37 ++++++++++---------
 client/tests/kvm/tests/mac_change.py               |    8 ++--
 client/tests/kvm/tests/migration.py                |    6 ++--
 .../kvm/tests/migration_with_file_transfer.py      |    8 ++--
 client/tests/kvm/tests/migration_with_reboot.py    |    4 +-
 client/tests/kvm/tests/module_probe.py             |    4 +-
 client/tests/kvm/tests/multicast.py                |   10 +++---
 client/tests/kvm/tests/netperf.py                  |    5 +--
 client/tests/kvm/tests/nic_bonding.py              |    6 ++--
 client/tests/kvm/tests/nic_hotplug.py              |   24 ++++++------
 client/tests/kvm/tests/nic_promisc.py              |    6 ++--
 client/tests/kvm/tests/nicdriver_unload.py         |    8 ++--
 client/tests/kvm/tests/pci_hotplug.py              |   18 +++++-----
 client/tests/kvm/tests/physical_resources_check.py |    2 +-
 client/tests/kvm/tests/ping.py                     |   12 +++---
 client/tests/kvm/tests/pxe.py                      |    5 +--
 client/tests/kvm/tests/qemu_img.py                 |   22 ++++++------
 client/tests/kvm/tests/qmp_basic.py                |    2 +-
 client/tests/kvm/tests/qmp_basic_rhel6.py          |    2 +-
 client/tests/kvm/tests/set_link.py                 |   14 ++++----
 client/tests/kvm/tests/shutdown.py                 |    4 +-
 client/tests/kvm/tests/stepmaker.py                |   11 +++---
 client/tests/kvm/tests/steps.py                    |    5 ++-
 client/tests/kvm/tests/stress_boot.py              |    4 +-
 client/tests/kvm/tests/timedrift.py                |   16 ++++----
 client/tests/kvm/tests/timedrift_with_migration.py |   10 +++---
 client/tests/kvm/tests/timedrift_with_reboot.py    |   10 +++---
 client/tests/kvm/tests/timedrift_with_stop.py      |   10 +++---
 client/tests/kvm/tests/unattended_install.py       |    4 +-
 client/tests/kvm/tests/unittest.py                 |    6 ++--
 client/tests/kvm/tests/virtio_console.py           |   22 ++++++------
 client/tests/kvm/tests/vlan.py                     |   14 ++++----
 client/tests/kvm/tests/vmstop.py                   |    6 ++--
 client/tests/kvm/tests/whql_client_install.py      |   12 +++---
 client/tests/kvm/tests/whql_submission.py          |   24 ++++++------
 54 files changed, 258 insertions(+), 259 deletions(-)

diff --git a/client/tests/kvm/control b/client/tests/kvm/control
index d9ff70c..6437d88 100644
--- a/client/tests/kvm/control
+++ b/client/tests/kvm/control
@@ -21,11 +21,8 @@ For online docs, please refer to http://www.linux-kvm.org/page/KVM-Autotest
 """
 
 import sys, os, logging
-# Add the KVM tests dir to the python path
-kvm_test_dir = os.path.join(os.environ['AUTODIR'],'tests/kvm')
-sys.path.append(kvm_test_dir)
-# Now we can import modules inside the KVM tests dir
-import kvm_utils, kvm_config
+from autotest_lib.client.common_lib import cartesian_config
+from autotest_lib.client.virt import virt_utils
 
 # set English environment (command output might be localized, need to be safe)
 os.environ['LANG'] = 'en_US.UTF-8'
@@ -36,10 +33,11 @@ str = """
 #release_tag = 84
 """
 
-parser = kvm_config.Parser()
+parser = cartesian_config.Parser()
+kvm_test_dir = os.path.join(os.environ['AUTODIR'],'tests/kvm')
 parser.parse_file(os.path.join(kvm_test_dir, "build.cfg"))
 parser.parse_string(str)
-if not kvm_utils.run_tests(parser, job):
+if not virt_utils.run_tests(parser, job):
     logging.error("KVM build step failed, exiting.")
     sys.exit(1)
 
@@ -50,7 +48,7 @@ str = """
 #install, setup: timeout_multiplier = 3
 """
 
-parser = kvm_config.Parser()
+parser = cartesian_config.Parser()
 parser.parse_file(os.path.join(kvm_test_dir, "tests.cfg"))
 
 if args:
@@ -68,7 +66,7 @@ if args:
             pass
 parser.parse_string(str)
 
-kvm_utils.run_tests(parser, job)
+virt_utils.run_tests(parser, job)
 
 # Generate a nice HTML report inside the job's results dir
-kvm_utils.create_report(kvm_test_dir, job.resultdir)
+virt_utils.create_report(kvm_test_dir, job.resultdir)
diff --git a/client/tests/kvm/control.parallel b/client/tests/kvm/control.parallel
index 640ccf5..966d8bc 100644
--- a/client/tests/kvm/control.parallel
+++ b/client/tests/kvm/control.parallel
@@ -158,7 +158,7 @@ if not params.get("mode") == "noinstall":
 # ----------------------------------------------------------
 # Get test set (dictionary list) from the configuration file
 # ----------------------------------------------------------
-import kvm_config
+from autotest_lib.client.common_lib import cartesian_config
 
 str = """
 # This string will be parsed after tests.cfg.  Make any desired changes to the
@@ -167,7 +167,7 @@ str = """
 #display = sdl
 """
 
-parser = kvm_config.Parser()
+parser = cartesian_config.Parser()
 parser.parse_file(os.path.join(pwd, "tests.cfg"))
 parser.parse_string(str)
 
@@ -176,7 +176,7 @@ tests = list(parser.get_dicts())
 # -------------
 # Run the tests
 # -------------
-import kvm_scheduler
+from autotest_lib.client.virt import virt_scheduler
 from autotest_lib.client.bin import utils
 
 # total_cpus defaults to the number of CPUs reported by /proc/cpuinfo
@@ -187,7 +187,7 @@ total_mem = int(commands.getoutput("free -m").splitlines()[1].split()[1]) * 3/4
 num_workers = total_cpus
 
 # Start the scheduler and workers
-s = kvm_scheduler.scheduler(tests, num_workers, total_cpus, total_mem, pwd)
+s = virt_scheduler.scheduler(tests, num_workers, total_cpus, total_mem, pwd)
 job.parallel([s.scheduler],
              *[(s.worker, i, job.run_test) for i in range(num_workers)])
 
diff --git a/client/tests/kvm/control.unittests b/client/tests/kvm/control.unittests
index 170c3f8..8f3fc62 100644
--- a/client/tests/kvm/control.unittests
+++ b/client/tests/kvm/control.unittests
@@ -14,15 +14,13 @@ Runs the unittests available for a given KVM build.
 """
 
 import sys, os, logging
-# Add the KVM tests dir to the python path
-kvm_test_dir = os.path.join(os.environ['AUTODIR'],'tests/kvm')
-sys.path.append(kvm_test_dir)
-# Now we can import modules inside the KVM tests dir
-import kvm_utils, kvm_config
+from autotest_lib.client.common_lib import cartesian_config
+from autotest_lib.client.virt import virt_utils
 
-tests_cfg = kvm_config.config()
+parser = cartesian_config.Parser()
+kvm_test_dir = os.path.join(os.environ['AUTODIR'],'tests/kvm')
 tests_cfg_path = os.path.join(kvm_test_dir, "unittests.cfg")
-tests_cfg.fork_and_parse(tests_cfg_path)
+parser.parse_file(tests_cfg_path)
 
 # Run the tests
-kvm_utils.run_tests(tests_cfg.get_generator(), job)
+virt_utils.run_tests(parser, job)
diff --git a/client/tests/kvm/get_started.py b/client/tests/kvm/get_started.py
index 5ce7349..c83fac4 100755
--- a/client/tests/kvm/get_started.py
+++ b/client/tests/kvm/get_started.py
@@ -6,9 +6,10 @@ Program to help setup kvm test environment
 """
 
 import os, sys, logging, shutil
-import common, kvm_utils
+import common
 from autotest_lib.client.common_lib import logging_manager
 from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import virt_utils
 
 
 def check_iso(url, destination, hash):
@@ -40,7 +41,7 @@ def check_iso(url, destination, hash):
 
 
 if __name__ == "__main__":
-    logging_manager.configure_logging(kvm_utils.KvmLoggingConfig(),
+    logging_manager.configure_logging(virt_utils.KvmLoggingConfig(),
                                       verbose=True)
     logging.info("KVM test config helper")
 
diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index b88fd51..6981b1b 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -1,7 +1,7 @@
 import os, logging, imp
 from autotest_lib.client.bin import test
 from autotest_lib.client.common_lib import error
-import kvm_utils, kvm_preprocessing
+from autotest_lib.client.virt import virt_utils, virt_env_process
 
 
 class kvm(test.test):
@@ -25,7 +25,7 @@ class kvm(test.test):
 
     def run_once(self, params):
         # Convert params to a Params object
-        params = kvm_utils.Params(params)
+        params = virt_utils.Params(params)
 
         # Report the parameters we've received and write them as keyvals
         logging.debug("Test parameters:")
@@ -37,13 +37,13 @@ class kvm(test.test):
 
         # Set the log file dir for the logging mechanism used by kvm_subprocess
         # (this must be done before unpickling env)
-        kvm_utils.set_log_file_dir(self.debugdir)
+        virt_utils.set_log_file_dir(self.debugdir)
 
         # Open the environment file
         logging.info("Unpickling env. You may see some harmless error "
                      "messages.")
         env_filename = os.path.join(self.bindir, params.get("env", "env"))
-        env = kvm_utils.Env(env_filename, self.env_version)
+        env = virt_utils.Env(env_filename, self.env_version)
 
         test_passed = False
 
@@ -66,7 +66,7 @@ class kvm(test.test):
 
                     # Preprocess
                     try:
-                        kvm_preprocessing.preprocess(self, params, env)
+                        virt_env_process.preprocess(self, params, env)
                     finally:
                         env.save()
                     # Run the test function
@@ -81,7 +81,7 @@ class kvm(test.test):
                     logging.error("Test failed: %s: %s",
                                   e.__class__.__name__, e)
                     try:
-                        kvm_preprocessing.postprocess_on_error(
+                        virt_env_process.postprocess_on_error(
                             self, params, env)
                     finally:
                         env.save()
@@ -91,7 +91,7 @@ class kvm(test.test):
                 # Postprocess
                 try:
                     try:
-                        kvm_preprocessing.postprocess(self, params, env)
+                        virt_env_process.postprocess(self, params, env)
                     except Exception, e:
                         if test_passed:
                             raise
diff --git a/client/tests/kvm/migration_control.srv b/client/tests/kvm/migration_control.srv
index 6b17a26..7c63317 100644
--- a/client/tests/kvm/migration_control.srv
+++ b/client/tests/kvm/migration_control.srv
@@ -12,6 +12,7 @@ so there's a distinction between the migration roles ('dest' or 'source').
 
 import sys, os, commands, glob, shutil, logging, random
 from autotest_lib.server import utils
+from autotest_lib.client.common_lib import cartesian_config
 
 # Specify the directory of autotest before you start this test
 AUTOTEST_DIR = '/usr/local/autotest'
@@ -19,11 +20,8 @@ AUTOTEST_DIR = '/usr/local/autotest'
 # Specify the root directory that on client machines
 rootdir = '/tmp/kvm_autotest_root'
 
-# Make possible to import the KVM test APIs
 KVM_DIR = os.path.join(AUTOTEST_DIR, 'client/tests/kvm')
-sys.path.append(KVM_DIR)
 
-import common, kvm_config
 
 def generate_mac_address():
     r = random.SystemRandom()
@@ -50,7 +48,7 @@ def run(pair):
         raise error.JobError("Config file %s was not found", cfg_file)
 
     # Get test set (dictionary list) from the configuration file
-    parser = kvm_config.Parser()
+    parser = cartesian_config.Parser()
     test_variants = """
 image_name(_.*)? ?<= /tmp/kvm_autotest_root/images/
 cdrom(_.*)? ?<= /tmp/kvm_autotest_root/
@@ -99,8 +97,10 @@ sys.path.append(kvm_test_dir)\n
         for key in keys:
             logging.debug("    %s = %s", key, params[key])
 
-        source_control_file += "job.run_test('kvm', tag='%s', params=%s)" % (source_params['shortname'], source_params)
-        dest_control_file += "job.run_test('kvm', tag='%s', params=%s)" % (dest_params['shortname'], dest_params)
+        source_control_file += ("job.run_test('kvm', tag='%s', params=%s)" %
+                                (source_params['shortname'], source_params))
+        dest_control_file += ("job.run_test('kvm', tag='%s', params=%s)" %
+                              (dest_params['shortname'], dest_params))
 
         logging.info('Source control file:\n%s', source_control_file)
         logging.info('Destination control file:\n%s', dest_control_file)
diff --git a/client/tests/kvm/tests/autotest.py b/client/tests/kvm/tests/autotest.py
index afc2e3b..cdea31a 100644
--- a/client/tests/kvm/tests/autotest.py
+++ b/client/tests/kvm/tests/autotest.py
@@ -1,5 +1,5 @@
 import os
-import kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils
 
 
 def run_autotest(test, params, env):
@@ -21,5 +21,5 @@ def run_autotest(test, params, env):
                                 params.get("test_control_file"))
     outputdir = test.outputdir
 
-    kvm_test_utils.run_autotest(vm, session, control_path, timeout, outputdir,
-                                params)
+    virt_test_utils.run_autotest(vm, session, control_path, timeout, outputdir,
+                                 params)
diff --git a/client/tests/kvm/tests/balloon_check.py b/client/tests/kvm/tests/balloon_check.py
index 0c2a367..0b7f0f4 100644
--- a/client/tests/kvm/tests/balloon_check.py
+++ b/client/tests/kvm/tests/balloon_check.py
@@ -1,6 +1,6 @@
 import re, logging, random, time
 from autotest_lib.client.common_lib import error
-import kvm_monitor
+from autotest_lib.client.virt import kvm_monitor
 
 
 def run_balloon_check(test, params, env):
diff --git a/client/tests/kvm/tests/boot_savevm.py b/client/tests/kvm/tests/boot_savevm.py
index 6af4132..b5da338 100644
--- a/client/tests/kvm/tests/boot_savevm.py
+++ b/client/tests/kvm/tests/boot_savevm.py
@@ -1,6 +1,6 @@
 import logging, time
 from autotest_lib.client.common_lib import error
-import kvm_monitor
+from autotest_lib.client.virt import kvm_monitor
 
 
 def run_boot_savevm(test, params, env):
diff --git a/client/tests/kvm/tests/build.py b/client/tests/kvm/tests/build.py
index cbf4aed..cfebcd6 100644
--- a/client/tests/kvm/tests/build.py
+++ b/client/tests/kvm/tests/build.py
@@ -1,4 +1,4 @@
-import installer
+from autotest_lib.client.virt import kvm_installer
 
 
 def run_build(test, params, env):
@@ -14,7 +14,7 @@ def run_build(test, params, env):
     params["srcdir"] = srcdir
 
     try:
-        installer_object = installer.make_installer(params)
+        installer_object = kvm_installer.make_installer(params)
         installer_object.set_install_params(test, params)
         installer_object.install()
         env.register_installer(installer_object)
@@ -22,5 +22,5 @@ def run_build(test, params, env):
         # if the build/install fails, don't allow other tests
         # to get a installer.
         msg = "KVM install failed: %s" % (e)
-        env.register_installer(installer.FailedInstaller(msg))
+        env.register_installer(kvm_installer.FailedInstaller(msg))
         raise
diff --git a/client/tests/kvm/tests/enospc.py b/client/tests/kvm/tests/enospc.py
index 3c53b64..caa6100 100644
--- a/client/tests/kvm/tests/enospc.py
+++ b/client/tests/kvm/tests/enospc.py
@@ -1,7 +1,7 @@
 import logging, time, re
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_vm
+from autotest_lib.client.virt import kvm_vm
 
 
 def run_enospc(test, params, env):
diff --git a/client/tests/kvm/tests/ethtool.py b/client/tests/kvm/tests/ethtool.py
index d7c6b57..1152f00 100644
--- a/client/tests/kvm/tests/ethtool.py
+++ b/client/tests/kvm/tests/ethtool.py
@@ -1,7 +1,7 @@
 import logging, re
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_test_utils, kvm_utils, kvm_subprocess
+from autotest_lib.client.virt import virt_test_utils, virt_utils, aexpect
 
 
 def run_ethtool(test, params, env):
@@ -107,7 +107,7 @@ def run_ethtool(test, params, env):
             copy_files_from = vm.copy_files_from
             try:
                 session.cmd_output(dd_cmd, timeout=360)
-            except kvm_subprocess.ShellCmdError, e:
+            except aexpect.ShellCmdError, e:
                 return failure
         else:
             tcpdump_cmd += " and dst %s" % guest_ip
@@ -124,20 +124,20 @@ def run_ethtool(test, params, env):
             tcpdump_cmd += " and not port %s" % i
         logging.debug("Listen using command: %s", tcpdump_cmd)
         session2.sendline(tcpdump_cmd)
-        if not kvm_utils.wait_for(
+        if not virt_utils.wait_for(
                            lambda:session.cmd_status("pgrep tcpdump") == 0, 30):
             return (False, "Tcpdump process wasn't launched")
 
         logging.info("Start to transfer file")
         try:
             copy_files_from(filename, filename)
-        except kvm_utils.SCPError, e:
+        except virt_utils.SCPError, e:
             return (False, "File transfer failed (%s)" % e)
         logging.info("Transfer file completed")
         session.cmd("killall tcpdump")
         try:
             tcpdump_string = session2.read_up_to_prompt(timeout=60)
-        except kvm_subprocess.ExpectError:
+        except aexpect.ExpectError:
             return (False, "Fail to read tcpdump's output")
 
         if not compare_md5sum(filename):
@@ -190,7 +190,7 @@ def run_ethtool(test, params, env):
     feature_status = {}
     filename = "/tmp/ethtool.dd"
     guest_ip = vm.get_address()
-    ethname = kvm_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
+    ethname = virt_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
     supported_features = params.get("supported_features")
     if supported_features:
         supported_features = supported_features.split()
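Several hunks above lean on `virt_utils.wait_for()` to poll for a condition: tcpdump starting here, a VM dying in guest_s4, a session going quiet in kdump. A minimal sketch of such a helper, with the `(func, timeout, first, step)` signature inferred from the call sites rather than copied from the library:

```python
import time

def wait_for(func, timeout, first=0.0, step=1.0):
    """Poll func() until it returns a true value or timeout expires.

    Returns func()'s result on success, None on timeout; callers
    above treat a None result as a failure.
    """
    time.sleep(first)                  # optional initial delay
    end_time = time.time() + timeout
    while time.time() < end_time:
        output = func()
        if output:
            return output              # success: hand back the result
        time.sleep(step)               # poll interval between attempts
    return None

# e.g. the ethtool hunk above does, in effect:
#   if not wait_for(lambda: session.cmd_status("pgrep tcpdump") == 0, 30):
#       return (False, "Tcpdump process wasn't launched")
```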
diff --git a/client/tests/kvm/tests/file_transfer.py b/client/tests/kvm/tests/file_transfer.py
index 61982a6..5f6672d 100644
--- a/client/tests/kvm/tests/file_transfer.py
+++ b/client/tests/kvm/tests/file_transfer.py
@@ -1,7 +1,8 @@
 import logging, time, os
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
+
 
 def run_file_transfer(test, params, env):
     """
@@ -34,11 +35,11 @@ def run_file_transfer(test, params, env):
         count = 1
 
     host_path = os.path.join(dir_name, "tmp-%s" %
-                             kvm_utils.generate_random_string(8))
+                             virt_utils.generate_random_string(8))
     host_path2 = host_path + ".2"
     cmd = "dd if=/dev/zero of=%s bs=10M count=%d" % (host_path, count)
     guest_path = (tmp_dir + "file_transfer-%s" %
-                  kvm_utils.generate_random_string(8))
+                  virt_utils.generate_random_string(8))
 
     try:
         logging.info("Creating %dMB file on host", filesize)
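`virt_utils.generate_random_string()` is used above to make the host and guest paths unique per run. A plausible sketch (the exact alphabet used by the library is an assumption):

```python
import random
import string

def generate_random_string(length):
    # Alphanumeric suffix for unique temp paths; the real helper may
    # use a different character set.
    chars = string.ascii_letters + string.digits
    return "".join(random.choice(chars) for _ in range(length))

host_path = "/tmp/tmp-%s" % generate_random_string(8)
```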
diff --git a/client/tests/kvm/tests/guest_s4.py b/client/tests/kvm/tests/guest_s4.py
index efd8e3b..5b5708d 100644
--- a/client/tests/kvm/tests/guest_s4.py
+++ b/client/tests/kvm/tests/guest_s4.py
@@ -1,6 +1,6 @@
 import logging, time
 from autotest_lib.client.common_lib import error
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 @error.context_aware
@@ -49,7 +49,7 @@ def run_guest_s4(test, params, env):
     # Make sure the VM goes down
     error.base_context("after S4")
     suspend_timeout = 240 + int(params.get("smp")) * 60
-    if not kvm_utils.wait_for(vm.is_dead, suspend_timeout, 2, 2):
+    if not virt_utils.wait_for(vm.is_dead, suspend_timeout, 2, 2):
         raise error.TestFail("VM refuses to go down. Suspend failed.")
     logging.info("VM suspended successfully. Sleeping for a while before "
                  "resuming it.")
diff --git a/client/tests/kvm/tests/guest_test.py b/client/tests/kvm/tests/guest_test.py
index 95c6f7f..3bc7da7 100644
--- a/client/tests/kvm/tests/guest_test.py
+++ b/client/tests/kvm/tests/guest_test.py
@@ -1,5 +1,5 @@
 import os, logging
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 def run_guest_test(test, params, env):
@@ -60,7 +60,7 @@ def run_guest_test(test, params, env):
             logging.info("Download resource finished.")
         else:
             session.cmd_output("del %s" % dst_rsc_path, internal_timeout=0)
-            script_path = kvm_utils.get_path(test.bindir, script)
+            script_path = virt_utils.get_path(test.bindir, script)
             vm.copy_files_to(script_path, dst_rsc_path, timeout=60)
 
         cmd = "%s %s %s" % (interpreter, dst_rsc_path, script_params)
diff --git a/client/tests/kvm/tests/image_copy.py b/client/tests/kvm/tests/image_copy.py
index 8a4d74c..cc921ab 100644
--- a/client/tests/kvm/tests/image_copy.py
+++ b/client/tests/kvm/tests/image_copy.py
@@ -1,7 +1,7 @@
 import os, logging
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 def run_image_copy(test, params, env):
@@ -33,7 +33,7 @@ def run_image_copy(test, params, env):
     dst_path = '%s.%s' % (params['image_name'], params['image_format'])
     cmd = 'cp %s %s' % (src_path, dst_path)
 
-    if not kvm_utils.mount(src, mount_dest_dir, 'nfs', 'ro'):
+    if not virt_utils.mount(src, mount_dest_dir, 'nfs', 'ro'):
         raise error.TestError('Could not mount NFS share %s to %s' %
                               (src, mount_dest_dir))
 
diff --git a/client/tests/kvm/tests/iofuzz.py b/client/tests/kvm/tests/iofuzz.py
index 7189f91..d244012 100644
--- a/client/tests/kvm/tests/iofuzz.py
+++ b/client/tests/kvm/tests/iofuzz.py
@@ -1,6 +1,6 @@
 import logging, re, random
-from autotest_lib.client.common_lib import error
-import kvm_subprocess
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import aexpect
 
 
 def run_iofuzz(test, params, env):
@@ -35,7 +35,7 @@ def run_iofuzz(test, params, env):
                     (oct(data), port))
         try:
             session.cmd(outb_cmd)
-        except kvm_subprocess.ShellError, e:
+        except aexpect.ShellError, e:
             logging.debug(e)
 
 
@@ -50,7 +50,7 @@ def run_iofuzz(test, params, env):
         inb_cmd = "dd if=/dev/port seek=%d of=/dev/null bs=1 count=1" % port
         try:
             session.cmd(inb_cmd)
-        except kvm_subprocess.ShellError, e:
+        except aexpect.ShellError, e:
             logging.debug(e)
 
 
diff --git a/client/tests/kvm/tests/jumbo.py b/client/tests/kvm/tests/jumbo.py
index b7f88ae..5108227 100644
--- a/client/tests/kvm/tests/jumbo.py
+++ b/client/tests/kvm/tests/jumbo.py
@@ -1,7 +1,7 @@
 import logging, commands, random
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_test_utils, kvm_utils
+from autotest_lib.client.virt import virt_utils, virt_test_utils
 
 
 def run_jumbo(test, params, env):
@@ -37,7 +37,7 @@ def run_jumbo(test, params, env):
 
     try:
         # Environment preparation
-        ethname = kvm_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
+        ethname = virt_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
 
         logging.info("Changing the MTU of guest ...")
         guest_mtu_cmd = "ifconfig %s mtu %s" % (ethname , mtu)
@@ -52,36 +52,36 @@ def run_jumbo(test, params, env):
         utils.run(arp_add_cmd)
 
         def is_mtu_ok():
-            s, o = kvm_test_utils.ping(ip, 1, interface=ifname,
+            s, o = virt_test_utils.ping(ip, 1, interface=ifname,
                                        packetsize=max_icmp_pkt_size,
                                        hint="do", timeout=2)
             return s == 0
 
         def verify_mtu():
             logging.info("Verify the path MTU")
-            s, o = kvm_test_utils.ping(ip, 10, interface=ifname,
+            s, o = virt_test_utils.ping(ip, 10, interface=ifname,
                                        packetsize=max_icmp_pkt_size,
                                        hint="do", timeout=15)
             if s != 0 :
                 logging.error(o)
                 raise error.TestFail("Path MTU is not as expected")
-            if kvm_test_utils.get_loss_ratio(o) != 0:
+            if virt_test_utils.get_loss_ratio(o) != 0:
                 logging.error(o)
                 raise error.TestFail("Packet loss ratio during MTU "
                                      "verification is not zero")
 
         def flood_ping():
             logging.info("Flood with large frames")
-            kvm_test_utils.ping(ip, interface=ifname,
+            virt_test_utils.ping(ip, interface=ifname,
                                 packetsize=max_icmp_pkt_size,
                                 flood=True, timeout=float(flood_time))
 
         def large_frame_ping(count=100):
             logging.info("Large frame ping")
-            s, o = kvm_test_utils.ping(ip, count, interface=ifname,
+            s, o = virt_test_utils.ping(ip, count, interface=ifname,
                                        packetsize=max_icmp_pkt_size,
                                        timeout=float(count) * 2)
-            ratio = kvm_test_utils.get_loss_ratio(o)
+            ratio = virt_test_utils.get_loss_ratio(o)
             if ratio != 0:
                 raise error.TestFail("Loss ratio of large frame ping is %s" %
                                      ratio)
@@ -90,23 +90,23 @@ def run_jumbo(test, params, env):
             logging.info("Size increase ping")
             for size in range(0, max_icmp_pkt_size + 1, step):
                 logging.info("Ping %s with size %s", ip, size)
-                s, o = kvm_test_utils.ping(ip, 1, interface=ifname,
+                s, o = virt_test_utils.ping(ip, 1, interface=ifname,
                                            packetsize=size,
                                            hint="do", timeout=1)
                 if s != 0:
-                    s, o = kvm_test_utils.ping(ip, 10, interface=ifname,
+                    s, o = virt_test_utils.ping(ip, 10, interface=ifname,
                                                packetsize=size,
                                                adaptive=True, hint="do",
                                                timeout=20)
 
-                    if kvm_test_utils.get_loss_ratio(o) > int(params.get(
+                    if virt_test_utils.get_loss_ratio(o) > int(params.get(
                                                       "fail_ratio", 50)):
                         raise error.TestFail("Ping loss ratio is greater "
                                              "than 50% for size %s" % size)
 
         logging.info("Waiting for the MTU to be OK")
         wait_mtu_ok = 10
-        if not kvm_utils.wait_for(is_mtu_ok, wait_mtu_ok, 0, 1):
+        if not virt_utils.wait_for(is_mtu_ok, wait_mtu_ok, 0, 1):
             logging.debug(commands.getoutput("ifconfig -a"))
             raise error.TestError("MTU is not as expected even after %s "
                                   "seconds" % wait_mtu_ok)
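The jumbo hunk above calls `virt_test_utils.get_loss_ratio()` on raw ping output to decide pass or fail. One way such a parser could look, assuming the usual iputils summary line; the fallback value is an illustration, not the library's behavior:

```python
import re

def get_loss_ratio(ping_output):
    # Extract the packet loss percentage from a ping(8) summary line
    # such as "10 packets transmitted, 10 received, 0% packet loss".
    match = re.search(r"(\d+)% packet loss", ping_output)
    if match:
        return int(match.group(1))
    return -1  # parse failure; callers can treat this as an error

sample = "10 packets transmitted, 10 received, 0% packet loss, time 9012ms"
```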
diff --git a/client/tests/kvm/tests/kdump.py b/client/tests/kvm/tests/kdump.py
index c847131..90c004b 100644
--- a/client/tests/kvm/tests/kdump.py
+++ b/client/tests/kvm/tests/kdump.py
@@ -1,6 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 def run_kdump(test, params, env):
@@ -41,7 +41,7 @@ def run_kdump(test, params, env):
         crash_cmd = "taskset -c %d echo c > /proc/sysrq-trigger" % vcpu
         session.sendline(crash_cmd)
 
-        if not kvm_utils.wait_for(lambda: not session.is_responsive(), 240, 0,
+        if not virt_utils.wait_for(lambda: not session.is_responsive(), 240, 0,
                                   1):
             raise error.TestFail("Could not trigger crash on vcpu %d" % vcpu)
 
diff --git a/client/tests/kvm/tests/ksm_overcommit.py b/client/tests/kvm/tests/ksm_overcommit.py
index 5aba25a..c3b4cad 100644
--- a/client/tests/kvm/tests/ksm_overcommit.py
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -1,7 +1,8 @@
 import logging, time, random, math, os
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_subprocess, kvm_test_utils, kvm_utils, kvm_preprocessing
+from autotest_lib.client.virt import virt_utils, virt_test_utils, aexpect
+from autotest_lib.client.virt import virt_env_process
 
 
 def run_ksm_overcommit(test, params, env):
@@ -29,7 +30,7 @@ def run_ksm_overcommit(test, params, env):
         session.sendline("python /tmp/ksm_overcommit_guest.py")
         try:
             session.read_until_last_line_matches(["PASS:", "FAIL:"], timeout)
-        except kvm_subprocess.ExpectProcessTerminatedError, e:
+        except aexpect.ExpectProcessTerminatedError, e:
             e_msg = ("Command ksm_overcommit_guest.py on vm '%s' failed: %s" %
                      (vm.name, str(e)))
             raise error.TestFail(e_msg)
@@ -54,7 +55,7 @@ def run_ksm_overcommit(test, params, env):
             (match, data) = session.read_until_last_line_matches(
                                                              ["PASS:","FAIL:"],
                                                              timeout)
-        except kvm_subprocess.ExpectProcessTerminatedError, e:
+        except aexpect.ExpectProcessTerminatedError, e:
             e_msg = ("Failed to execute command '%s' on "
                      "ksm_overcommit_guest.py, vm '%s': %s" %
                      (command, vm.name, str(e)))
@@ -107,7 +108,7 @@ def run_ksm_overcommit(test, params, env):
             while ((new_ksm and (shm < (ksm_size*(i+1)))) or
                     (not new_ksm and (shm < (ksm_size)))):
                 if j > 64:
-                    logging.debug(kvm_test_utils.get_memory_info(lvms))
+                    logging.debug(virt_test_utils.get_memory_info(lvms))
                     raise error.TestError("SHM didn't merge the memory until "
                                           "the DL on guest: %s" % vm.name)
                 st = ksm_size / 200 * perf_ratio
@@ -126,7 +127,7 @@ def run_ksm_overcommit(test, params, env):
         logging.debug("Waiting %ds before proceeding...", rt)
         time.sleep(rt)
 
-        logging.debug(kvm_test_utils.get_memory_info(lvms))
+        logging.debug(virt_test_utils.get_memory_info(lvms))
         logging.info("Phase 1: PASS")
 
 
@@ -145,7 +146,7 @@ def run_ksm_overcommit(test, params, env):
         out = int(r_msg.split()[4])
         logging.debug("Performance: %dMB * 1000 / %dms = %dMB/s", ksm_size, out,
                      (ksm_size * 1000 / out))
-        logging.debug(kvm_test_utils.get_memory_info(lvms))
+        logging.debug(virt_test_utils.get_memory_info(lvms))
         logging.debug("Phase 2: PASS")
 
 
@@ -223,7 +224,7 @@ def run_ksm_overcommit(test, params, env):
         for i in range(last_vm + 1, vmsc):
             lsessions[i].close()
             if i == (vmsc - 1):
-                logging.debug(kvm_test_utils.get_memory_info([lvms[i]]))
+                logging.debug(virt_test_utils.get_memory_info([lvms[i]]))
             logging.debug("Destroying guest %s", lvms[i].name)
             lvms[i].destroy(gracefully = False)
 
@@ -231,7 +232,7 @@ def run_ksm_overcommit(test, params, env):
         a_cmd = "mem.static_random_verify()"
         _execute_allocator(a_cmd, lvms[last_vm], lsessions[last_vm],
                            (mem / 200 * 50 * perf_ratio))
-        logging.debug(kvm_test_utils.get_memory_info([lvms[last_vm]]))
+        logging.debug(virt_test_utils.get_memory_info([lvms[last_vm]]))
 
         lsessions[i].cmd_output("die()", 20)
         lvms[last_vm].destroy(gracefully = False)
@@ -277,7 +278,7 @@ def run_ksm_overcommit(test, params, env):
         logging.debug("Target shared memory size: %s", ksm_size)
         while (shm < ksm_size):
             if i > 64:
-                logging.debug(kvm_test_utils.get_memory_info(lvms))
+                logging.debug(virt_test_utils.get_memory_info(lvms))
                 raise error.TestError("SHM didn't merge the memory until DL")
             wt = ksm_size / 200 * perf_ratio
             logging.debug("Waiting %ds before proceed...", wt)
@@ -289,7 +290,7 @@ def run_ksm_overcommit(test, params, env):
             logging.debug("Shared meminfo after attempt %s: %s", i, shm)
             i += 1
 
-        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.debug(virt_test_utils.get_memory_info([vm]))
         logging.info("Phase 2a: PASS")
 
         logging.info("Phase 2b: Simultaneous spliting")
@@ -305,7 +306,7 @@ def run_ksm_overcommit(test, params, env):
             logging.debug("Performance: %dMB * 1000 / %dms = %dMB/s",
                           (ksm_size / max_alloc), out,
                           (ksm_size * 1000 / out / max_alloc))
-        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.debug(virt_test_utils.get_memory_info([vm]))
         logging.info("Phase 2b: PASS")
 
         logging.info("Phase 2c: Simultaneous verification")
@@ -321,7 +322,7 @@ def run_ksm_overcommit(test, params, env):
             a_cmd = "mem.value_fill(%d)" % skeys[0]
             data = _execute_allocator(a_cmd, vm, lsessions[i],
                                       120 * perf_ratio)[1]
-        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.debug(virt_test_utils.get_memory_info([vm]))
         logging.info("Phase 2d: PASS")
 
         logging.info("Phase 2e: Simultaneous verification")
@@ -343,7 +344,7 @@ def run_ksm_overcommit(test, params, env):
                          ksm_size/max_alloc, out,
                          (ksm_size * 1000 / out / max_alloc))
 
-        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.debug(virt_test_utils.get_memory_info([vm]))
         logging.info("Phase 2f: PASS")
 
         logging.info("Phase 2g: Simultaneous verification last 96B")
@@ -351,7 +352,7 @@ def run_ksm_overcommit(test, params, env):
             a_cmd = "mem.static_random_verify(96)"
             (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
                                                (mem / 200 * 50 * perf_ratio))
-        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.debug(virt_test_utils.get_memory_info([vm]))
         logging.info("Phase 2g: PASS")
 
         logging.debug("Cleaning up...")
@@ -528,7 +529,7 @@ def run_ksm_overcommit(test, params, env):
     params['mem'] = mem
     params['vms'] = vm_name
     # Associate pidfile name
-    params['pid_' + vm_name] = kvm_utils.generate_tmp_file_name(vm_name,
+    params['pid_' + vm_name] = virt_utils.generate_tmp_file_name(vm_name,
                                                                 'pid')
     if not params.get('extra_params'):
         params['extra_params'] = ' '
@@ -542,7 +543,7 @@ def run_ksm_overcommit(test, params, env):
     logging.debug("Memory used by allocator on guests = %dM", ksm_size)
 
     # Creating the first guest
-    kvm_preprocessing.preprocess_vm(test, params, env, vm_name)
+    virt_env_process.preprocess_vm(test, params, env, vm_name)
     lvms.append(env.get_vm(vm_name))
     if not lvms[0]:
         raise error.TestError("VM object not found in environment")
@@ -563,7 +564,7 @@ def run_ksm_overcommit(test, params, env):
     # Creating other guest systems
     for i in range(1, vmsc):
         vm_name = "vm" + str(i + 1)
-        params['pid_' + vm_name] = kvm_utils.generate_tmp_file_name(vm_name,
+        params['pid_' + vm_name] = virt_utils.generate_tmp_file_name(vm_name,
                                                                     'pid')
         params['extra_params_' + vm_name] = params.get('extra_params')
         params['extra_params_' + vm_name] += (" -pidfile %s" %
@@ -592,7 +593,7 @@ def run_ksm_overcommit(test, params, env):
     st = vmsc * 2 * perf_ratio
     logging.debug("Waiting %ds before proceed", st)
     time.sleep(vmsc * 2 * perf_ratio)
-    logging.debug(kvm_test_utils.get_memory_info(lvms))
+    logging.debug(virt_test_utils.get_memory_info(lvms))
 
     # Copy ksm_overcommit_guest.py into guests
     pwd = os.path.join(os.environ['AUTODIR'],'tests/kvm')
diff --git a/client/tests/kvm/tests/mac_change.py b/client/tests/kvm/tests/mac_change.py
index 3fd196f..d2eaf01 100644
--- a/client/tests/kvm/tests/mac_change.py
+++ b/client/tests/kvm/tests/mac_change.py
@@ -1,6 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_utils, kvm_test_utils
+from autotest_lib.client.virt import virt_utils, virt_test_utils
 
 
 def run_mac_change(test, params, env):
@@ -24,11 +24,11 @@ def run_mac_change(test, params, env):
     old_mac = vm.get_mac_address(0)
     while True:
         vm.free_mac_address(0)
-        new_mac = kvm_utils.generate_mac_address(vm.instance, 0)
+        new_mac = virt_utils.generate_mac_address(vm.instance, 0)
         if old_mac != new_mac:
             break
     logging.info("The initial MAC address is %s", old_mac)
-    interface = kvm_test_utils.get_linux_ifname(session_serial, old_mac)
+    interface = virt_test_utils.get_linux_ifname(session_serial, old_mac)
     # Start change MAC address
     logging.info("Changing MAC address to %s", new_mac)
     change_cmd = ("ifconfig %s down && ifconfig %s hw ether %s && "
@@ -45,7 +45,7 @@ def run_mac_change(test, params, env):
     session_serial.sendline(dhclient_cmd)
 
     # Re-log into the guest after changing mac address
-    if kvm_utils.wait_for(session.is_responsive, 120, 20, 3):
+    if virt_utils.wait_for(session.is_responsive, 120, 20, 3):
         # Just warning when failed to see the session become dead,
         # because there is a little chance the ip does not change.
         logging.warn("The session is still responsive, settings may fail.")
diff --git a/client/tests/kvm/tests/migration.py b/client/tests/kvm/tests/migration.py
index b462e66..2426f3f 100644
--- a/client/tests/kvm/tests/migration.py
+++ b/client/tests/kvm/tests/migration.py
@@ -1,6 +1,6 @@
 import logging, time
 from autotest_lib.client.common_lib import error
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 def run_migration(test, params, env):
@@ -68,9 +68,9 @@ def run_migration(test, params, env):
                          "command output after migration")
             logging.info("Command: %s", test_command)
             logging.info("Output before:" +
-                         kvm_utils.format_str_for_message(reference_output))
+                         virt_utils.format_str_for_message(reference_output))
             logging.info("Output after:" +
-                         kvm_utils.format_str_for_message(output))
+                         virt_utils.format_str_for_message(output))
             raise error.TestFail("Command '%s' produced different output "
                                  "before and after migration" % test_command)
 
diff --git a/client/tests/kvm/tests/migration_with_file_transfer.py b/client/tests/kvm/tests/migration_with_file_transfer.py
index b38defd..25ada82 100644
--- a/client/tests/kvm/tests/migration_with_file_transfer.py
+++ b/client/tests/kvm/tests/migration_with_file_transfer.py
@@ -1,7 +1,7 @@
 import logging, time, os
 from autotest_lib.client.common_lib import utils, error
 from autotest_lib.client.bin import utils as client_utils
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 @error.context_aware
@@ -29,7 +29,7 @@ def run_migration_with_file_transfer(test, params, env):
     mig_protocol = params.get("migration_protocol", "tcp")
     mig_cancel_delay = int(params.get("mig_cancel") == "yes") * 2
 
-    host_path = "/tmp/file-%s" % kvm_utils.generate_random_string(6)
+    host_path = "/tmp/file-%s" % virt_utils.generate_random_string(6)
     host_path_returned = "%s-returned" % host_path
     guest_path = params.get("guest_path", "/tmp/file")
     file_size = params.get("file_size", "500")
@@ -56,13 +56,13 @@ def run_migration_with_file_transfer(test, params, env):
 
         error.context("transferring file to guest while migrating",
                       logging.info)
-        bg = kvm_utils.Thread(vm.copy_files_to, (host_path, guest_path),
+        bg = virt_utils.Thread(vm.copy_files_to, (host_path, guest_path),
                               dict(verbose=True, timeout=transfer_timeout))
         run_and_migrate(bg)
 
         error.context("transferring file back to host while migrating",
                       logging.info)
-        bg = kvm_utils.Thread(vm.copy_files_from,
+        bg = virt_utils.Thread(vm.copy_files_from,
                               (guest_path, host_path_returned),
                               dict(verbose=True, timeout=transfer_timeout))
         run_and_migrate(bg)
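`virt_utils.Thread` lets the migration test run a file transfer in the background and still surface its failure to the main flow. A sketch of such a wrapper (the class name and details here are illustrative, not the library's implementation):

```python
import threading

class BGThread(threading.Thread):
    """Run func(*args, **kwargs) in the background; any exception it
    raises is stored and re-raised when the caller joins, so a failed
    background transfer still fails the test."""
    def __init__(self, func, args=(), kwargs=None):
        threading.Thread.__init__(self)
        self._func = func
        self._fargs = args
        self._fkwargs = kwargs or {}
        self._exc = None

    def run(self):
        try:
            self._func(*self._fargs, **self._fkwargs)
        except Exception as e:
            self._exc = e              # keep, don't swallow, the failure

    def join(self, timeout=None):
        threading.Thread.join(self, timeout)
        if self._exc is not None:
            raise self._exc            # propagate to the foreground
```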
diff --git a/client/tests/kvm/tests/migration_with_reboot.py b/client/tests/kvm/tests/migration_with_reboot.py
index a15f983..5dddb12 100644
--- a/client/tests/kvm/tests/migration_with_reboot.py
+++ b/client/tests/kvm/tests/migration_with_reboot.py
@@ -1,4 +1,4 @@
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 def run_migration_with_reboot(test, params, env):
@@ -27,7 +27,7 @@ def run_migration_with_reboot(test, params, env):
 
     try:
         # Reboot the VM in the background
-        bg = kvm_utils.Thread(vm.reboot, (session,))
+        bg = virt_utils.Thread(vm.reboot, (session,))
         bg.start()
         try:
             while bg.isAlive():
diff --git a/client/tests/kvm/tests/module_probe.py b/client/tests/kvm/tests/module_probe.py
index 72f239b..b192b6d 100644
--- a/client/tests/kvm/tests/module_probe.py
+++ b/client/tests/kvm/tests/module_probe.py
@@ -1,6 +1,6 @@
 import re, commands, logging, os
 from autotest_lib.client.common_lib import error, utils
-import kvm_subprocess, kvm_test_utils, kvm_utils, installer
+from autotest_lib.client.virt import kvm_installer
 
 
 def run_module_probe(test, params, env):
@@ -19,7 +19,7 @@ def run_module_probe(test, params, env):
 
     installer_object = env.previous_installer()
     if installer_object is None:
-        installer_object = installer.PreInstalledKvm()
+        installer_object = kvm_installer.PreInstalledKvm()
         installer_object.set_install_params(test, params)
 
     logging.debug('installer object: %r', installer_object)
diff --git a/client/tests/kvm/tests/multicast.py b/client/tests/kvm/tests/multicast.py
index 5dfecbc..13e3f0d 100644
--- a/client/tests/kvm/tests/multicast.py
+++ b/client/tests/kvm/tests/multicast.py
@@ -1,7 +1,7 @@
 import logging, os, re
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_test_utils, kvm_subprocess
+from autotest_lib.client.virt import virt_test_utils, aexpect
 
 
 def run_multicast(test, params, env):
@@ -25,7 +25,7 @@ def run_multicast(test, params, env):
     def run_guest(cmd):
         try:
             session.cmd(cmd)
-        except kvm_subprocess.ShellError, e:
+        except aexpect.ShellError, e:
             logging.warn(e)
 
     def run_host_guest(cmd):
@@ -70,16 +70,16 @@ def run_multicast(test, params, env):
             mcast = "%s.%d" % (prefix, new_suffix)
 
             logging.info("Initial ping test, mcast: %s", mcast)
-            s, o = kvm_test_utils.ping(mcast, 10, interface=ifname, timeout=20)
+            s, o = virt_test_utils.ping(mcast, 10, interface=ifname, timeout=20)
             if s != 0:
                 raise error.TestFail(" Ping return non-zero value %s" % o)
 
             logging.info("Flood ping test, mcast: %s", mcast)
-            kvm_test_utils.ping(mcast, None, interface=ifname, flood=True,
+            virt_test_utils.ping(mcast, None, interface=ifname, flood=True,
                                 output_func=None, timeout=flood_minutes*60)
 
             logging.info("Final ping test, mcast: %s", mcast)
-            s, o = kvm_test_utils.ping(mcast, 10, interface=ifname, timeout=20)
+            s, o = virt_test_utils.ping(mcast, 10, interface=ifname, timeout=20)
             if s != 0:
                 raise error.TestFail("Ping failed, status: %s, output: %s" %
                                      (s, o))
diff --git a/client/tests/kvm/tests/netperf.py b/client/tests/kvm/tests/netperf.py
index df2c839..72d9cde 100644
--- a/client/tests/kvm/tests/netperf.py
+++ b/client/tests/kvm/tests/netperf.py
@@ -1,8 +1,7 @@
 import logging, os, signal
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_subprocess
-
+from autotest_lib.client.virt import aexpect
 
 def run_netperf(test, params, env):
     """
@@ -39,7 +38,7 @@ def run_netperf(test, params, env):
 
     try:
         session_serial.cmd(firewall_flush)
-    except kvm_subprocess.ShellError:
+    except aexpect.ShellError:
         logging.warning("Could not flush firewall rules on guest")
 
     session_serial.cmd(setup_cmd % "/tmp", timeout=200)
diff --git a/client/tests/kvm/tests/nic_bonding.py b/client/tests/kvm/tests/nic_bonding.py
index 1d53e0e..e998820 100644
--- a/client/tests/kvm/tests/nic_bonding.py
+++ b/client/tests/kvm/tests/nic_bonding.py
@@ -1,6 +1,6 @@
 import logging, time, threading
 from autotest_lib.client.tests.kvm.tests import file_transfer
-import kvm_utils, kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils, virt_utils
 
 
 def run_nic_bonding(test, params, env):
@@ -30,7 +30,7 @@ def run_nic_bonding(test, params, env):
     session_serial.cmd(modprobe_cmd)
 
     session_serial.cmd("ifconfig bond0 up")
-    ifnames = [kvm_test_utils.get_linux_ifname(session_serial,
-                                               vm.get_mac_address(vlan))
+    ifnames = [virt_test_utils.get_linux_ifname(session_serial,
+                                                vm.get_mac_address(vlan))
                for vlan, nic in enumerate(params.get("nics").split())]
     setup_cmd = "ifenslave bond0 " + " ".join(ifnames)
@@ -42,7 +42,7 @@ def run_nic_bonding(test, params, env):
         file_transfer.run_file_transfer(test, params, env)
 
         logging.info("Failover test with file transfer")
-        transfer_thread = kvm_utils.Thread(file_transfer.run_file_transfer,
-                                           (test, params, env))
+        transfer_thread = virt_utils.Thread(file_transfer.run_file_transfer,
+                                            (test, params, env))
         try:
             transfer_thread.start()
diff --git a/client/tests/kvm/tests/nic_hotplug.py b/client/tests/kvm/tests/nic_hotplug.py
index d44276a..059277e 100644
--- a/client/tests/kvm/tests/nic_hotplug.py
+++ b/client/tests/kvm/tests/nic_hotplug.py
@@ -1,6 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_test_utils, kvm_utils
+from autotest_lib.client.virt import virt_test_utils, virt_utils
 
 
 def run_nic_hotplug(test, params, env):
@@ -20,10 +20,10 @@ def run_nic_hotplug(test, params, env):
     @param params: Dictionary with the test parameters.
     @param env:    Dictionary with test environment.
     """
-    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
+    vm = virt_test_utils.get_living_vm(env, params.get("main_vm"))
     timeout = int(params.get("login_timeout", 360))
     guest_delay = int(params.get("guest_delay", 20))
-    session = kvm_test_utils.wait_for_login(vm, timeout=timeout)
+    session = virt_test_utils.wait_for_login(vm, timeout=timeout)
     romfile = params.get("romfile")
 
     # Modprobe the module if specified in config file
@@ -32,11 +32,11 @@ def run_nic_hotplug(test, params, env):
         session.get_command_output("modprobe %s" % module)
 
     def netdev_add(vm):
-        netdev_id = kvm_utils.generate_random_id()
+        netdev_id = virt_utils.generate_random_id()
         attach_cmd = ("netdev_add tap,id=%s" % netdev_id)
         nic_script = params.get("nic_script")
         if nic_script:
-            attach_cmd += ",script=%s" % kvm_utils.get_path(vm.root_dir,
-                                                            nic_script)
+            attach_cmd += ",script=%s" % virt_utils.get_path(vm.root_dir,
+                                                             nic_script)
         netdev_extra_params = params.get("netdev_extra_params")
         if netdev_extra_params:
@@ -69,7 +69,7 @@ def run_nic_hotplug(test, params, env):
         @mac: Mac address of new nic
         @rom: Rom file
         """
-        nic_id = kvm_utils.generate_random_id()
+        nic_id = virt_utils.generate_random_id()
         if model == "virtio":
             model = "virtio-net-pci"
         device_add_cmd = "device_add %s,netdev=%s,mac=%s,id=%s" % (model,
@@ -100,7 +100,7 @@ def run_nic_hotplug(test, params, env):
         vm.monitor.cmd(nic_del_cmd)
         if wait:
             logging.info("waiting for the guest to finish the unplug")
-            if not kvm_utils.wait_for(lambda: nic_id not in
-                                      vm.monitor.info("qtree"),
-                                      guest_delay, 5 ,1):
+            if not virt_utils.wait_for(lambda: nic_id not in
+                                       vm.monitor.info("qtree"),
+                                       guest_delay, 5, 1):
                 logging.error(vm.monitor.info("qtree"))
@@ -109,7 +109,7 @@ def run_nic_hotplug(test, params, env):
                                       "hotplug module was loaded in guest")
 
     logging.info("Attach a virtio nic to vm")
-    mac = kvm_utils.generate_mac_address(vm.instance, 1)
+    mac = virt_utils.generate_mac_address(vm.instance, 1)
     if not mac:
         mac = "00:00:02:00:00:02"
     netdev_id = netdev_add(vm)
@@ -117,24 +117,24 @@ def run_nic_hotplug(test, params, env):
 
     if "Win" not in params.get("guest_name", ""):
         session.sendline("dhclient %s &" %
-                         kvm_test_utils.get_linux_ifname(session, mac))
+                         virt_test_utils.get_linux_ifname(session, mac))
 
     logging.info("Shutting down the primary link")
     vm.monitor.cmd("set_link %s down" % vm.netdev_id[0])
 
     try:
         logging.info("Waiting for new nic's ip address acquisition...")
-        if not kvm_utils.wait_for(lambda: (vm.address_cache.get(mac) is
-                                           not None), 10, 1):
+        if not virt_utils.wait_for(lambda: (vm.address_cache.get(mac) is
+                                            not None), 10, 1):
             raise error.TestFail("Could not get ip address of new nic")
         ip = vm.address_cache.get(mac)
-        if not kvm_utils.verify_ip_address_ownership(ip, mac):
+        if not virt_utils.verify_ip_address_ownership(ip, mac):
             raise error.TestFail("Could not verify the ip address of new nic")
         else:
             logging.info("Got the ip address of new nic: %s", ip)
 
         logging.info("Ping test the new nic ...")
-        s, o = kvm_test_utils.ping(ip, 100)
+        s, o = virt_test_utils.ping(ip, 100)
         if s != 0:
             logging.error(o)
             raise error.TestFail("New nic failed ping test")
diff --git a/client/tests/kvm/tests/nic_promisc.py b/client/tests/kvm/tests/nic_promisc.py
index ac6f983..0ff07b8 100644
--- a/client/tests/kvm/tests/nic_promisc.py
+++ b/client/tests/kvm/tests/nic_promisc.py
@@ -2,7 +2,7 @@ import logging, threading
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
 from autotest_lib.client.tests.kvm.tests import file_transfer
-import kvm_utils, kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils, virt_utils
 
 
 def run_nic_promisc(test, params, env):
@@ -22,11 +22,11 @@ def run_nic_promisc(test, params, env):
     timeout = int(params.get("login_timeout", 360))
     session_serial = vm.wait_for_serial_login(timeout=timeout)
 
-    ethname = kvm_test_utils.get_linux_ifname(session_serial,
-                                              vm.get_mac_address(0))
+    ethname = virt_test_utils.get_linux_ifname(session_serial,
+                                               vm.get_mac_address(0))
 
     try:
-        transfer_thread = kvm_utils.Thread(file_transfer.run_file_transfer,
-                                           (test, params, env))
+        transfer_thread = virt_utils.Thread(file_transfer.run_file_transfer,
+                                            (test, params, env))
         transfer_thread.start()
         while transfer_thread.isAlive():
diff --git a/client/tests/kvm/tests/nicdriver_unload.py b/client/tests/kvm/tests/nicdriver_unload.py
index 15a73ce..6d3d4da 100644
--- a/client/tests/kvm/tests/nicdriver_unload.py
+++ b/client/tests/kvm/tests/nicdriver_unload.py
@@ -2,7 +2,7 @@ import logging, threading, os, time
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
 from autotest_lib.client.tests.kvm.tests import file_transfer
-import kvm_test_utils, kvm_utils
+from autotest_lib.client.virt import virt_test_utils, virt_utils
 
 
 def run_nicdriver_unload(test, params, env):
@@ -24,8 +24,8 @@ def run_nicdriver_unload(test, params, env):
     vm.verify_alive()
     session_serial = vm.wait_for_serial_login(timeout=timeout)
 
-    ethname = kvm_test_utils.get_linux_ifname(session_serial,
-                                              vm.get_mac_address(0))
+    ethname = virt_test_utils.get_linux_ifname(session_serial,
+                                               vm.get_mac_address(0))
     sys_path = "/sys/class/net/%s/device/driver" % (ethname)
     driver = os.path.basename(session_serial.cmd("readlink -e %s" %
                                                  sys_path).strip())
@@ -34,7 +34,7 @@ def run_nicdriver_unload(test, params, env):
     try:
         threads = []
         for t in range(int(params.get("sessions_num", "10"))):
-            thread = kvm_utils.Thread(file_transfer.run_file_transfer,
-                                      (test, params, env))
+            thread = virt_utils.Thread(file_transfer.run_file_transfer,
+                                       (test, params, env))
             thread.start()
             threads.append(thread)
diff --git a/client/tests/kvm/tests/pci_hotplug.py b/client/tests/kvm/tests/pci_hotplug.py
index 0806120..5800fc5 100644
--- a/client/tests/kvm/tests/pci_hotplug.py
+++ b/client/tests/kvm/tests/pci_hotplug.py
@@ -1,6 +1,6 @@
 import re
 from autotest_lib.client.common_lib import error
-import kvm_subprocess, kvm_utils, kvm_vm
+from autotest_lib.client.virt import virt_utils, virt_vm, aexpect
 
 
 def run_pci_hotplug(test, params, env):
@@ -66,7 +66,7 @@ def run_pci_hotplug(test, params, env):
             pci_add_cmd = "pci_add pci_addr=auto nic model=%s" % tested_model
         elif test_type == "block":
             image_params = params.object_params("stg")
-            image_filename = kvm_vm.get_image_filename(image_params,
-                                                       test.bindir)
+            image_filename = virt_vm.get_image_filename(image_params,
+                                                        test.bindir)
             pci_add_cmd = ("pci_add pci_addr=auto storage file=%s,if=%s" %
                            (image_filename, tested_model))
@@ -79,8 +79,8 @@ def run_pci_hotplug(test, params, env):
         after_add = vm.monitor.info("pci")
 
     elif cmd_type == "device_add":
-        driver_id = test_type + "-" + kvm_utils.generate_random_id()
-        device_id = test_type + "-" + kvm_utils.generate_random_id()
+        driver_id = test_type + "-" + virt_utils.generate_random_id()
+        device_id = test_type + "-" + virt_utils.generate_random_id()
         if test_type == "nic":
             if tested_model == "virtio":
                 tested_model = "virtio-net-pci"
@@ -89,7 +89,7 @@ def run_pci_hotplug(test, params, env):
 
         elif test_type == "block":
             image_params = params.object_params("stg")
-            image_filename = kvm_vm.get_image_filename(image_params,
-                                                       test.bindir)
+            image_filename = virt_vm.get_image_filename(image_params,
+                                                        test.bindir)
             controller_model = None
             if tested_model == "virtio":
@@ -152,7 +152,7 @@ def run_pci_hotplug(test, params, env):
             after_del = vm.monitor.info("pci")
             return after_del != after_add
 
-        if (not kvm_utils.wait_for(device_removed, 10, 0, 1)
+        if (not virt_utils.wait_for(device_removed, 10, 0, 1)
             and not ignore_failure):
             raise error.TestFail("Failed to hot remove PCI device: %s. "
                                  "Monitor command: %s" %
@@ -170,7 +170,7 @@ def run_pci_hotplug(test, params, env):
             return o != reference
 
         secs = int(params.get("wait_secs_for_hook_up"))
-        if not kvm_utils.wait_for(new_shown, 30, secs, 3):
+        if not virt_utils.wait_for(new_shown, 30, secs, 3):
             raise error.TestFail("No new device shown in output of command "
                                  "executed inside the guest: %s" %
                                  params.get("reference_cmd"))
@@ -180,7 +180,7 @@ def run_pci_hotplug(test, params, env):
             o = session.cmd_output(params.get("find_pci_cmd"))
             return params.get("match_string") in o
 
-        if not kvm_utils.wait_for(find_pci, 30, 3, 3):
+        if not virt_utils.wait_for(find_pci, 30, 3, 3):
             raise error.TestFail("PCI %s %s device not found in guest. "
                                  "Command was: %s" %
                                  (tested_model, test_type,
@@ -189,7 +189,7 @@ def run_pci_hotplug(test, params, env):
         # Test the newly added device
         try:
             session.cmd(params.get("pci_test_cmd"))
-        except kvm_subprocess.ShellError, e:
+        except aexpect.ShellError, e:
             raise error.TestFail("Check for %s device failed after PCI "
                                  "hotplug. Output: %r" % (test_type, e.output))
 
diff --git a/client/tests/kvm/tests/physical_resources_check.py b/client/tests/kvm/tests/physical_resources_check.py
index f9e603c..1ef906f 100644
--- a/client/tests/kvm/tests/physical_resources_check.py
+++ b/client/tests/kvm/tests/physical_resources_check.py
@@ -1,6 +1,6 @@
 import re, string, logging
 from autotest_lib.client.common_lib import error
-import kvm_monitor
+from autotest_lib.client.virt import kvm_monitor
 
 
 def run_physical_resources_check(test, params, env):
diff --git a/client/tests/kvm/tests/ping.py b/client/tests/kvm/tests/ping.py
index 8dc4b9e..08791fb 100644
--- a/client/tests/kvm/tests/ping.py
+++ b/client/tests/kvm/tests/ping.py
@@ -1,6 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils
 
 
 def run_ping(test, params, env):
@@ -40,11 +40,11 @@ def run_ping(test, params, env):
 
             for size in packet_size:
                 logging.info("Ping with packet size %s", size)
-                status, output = kvm_test_utils.ping(ip, 10,
-                                                     packetsize=size,
-                                                     timeout=20)
+                status, output = virt_test_utils.ping(ip, 10,
+                                                      packetsize=size,
+                                                      timeout=20)
                 if strict_check:
-                    ratio = kvm_test_utils.get_loss_ratio(output)
+                    ratio = virt_test_utils.get_loss_ratio(output)
                     if ratio != 0:
                         raise error.TestFail("Loss ratio is %s for packet size"
                                              " %s" % (ratio, size))
@@ -54,14 +54,14 @@ def run_ping(test, params, env):
                                              " output: %s" % (status, output))
 
             logging.info("Flood ping test")
-            kvm_test_utils.ping(ip, None, flood=True, output_func=None,
-                                timeout=flood_minutes * 60)
+            virt_test_utils.ping(ip, None, flood=True, output_func=None,
+                                 timeout=flood_minutes * 60)
 
             logging.info("Final ping test")
-            status, output = kvm_test_utils.ping(ip, counts,
-                                                 timeout=float(counts) * 1.5)
+            status, output = virt_test_utils.ping(ip, counts,
+                                                  timeout=float(counts) * 1.5)
             if strict_check:
-                ratio = kvm_test_utils.get_loss_ratio(output)
+                ratio = virt_test_utils.get_loss_ratio(output)
                 if ratio != 0:
                     raise error.TestFail("Ping failed, status: %s,"
                                          " output: %s" % (status, output))
diff --git a/client/tests/kvm/tests/pxe.py b/client/tests/kvm/tests/pxe.py
index 7c294c1..325e353 100644
--- a/client/tests/kvm/tests/pxe.py
+++ b/client/tests/kvm/tests/pxe.py
@@ -1,7 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_subprocess
-
+from autotest_lib.client.virt import aexpect
 
 def run_pxe(test, params, env):
     """
@@ -20,7 +19,7 @@ def run_pxe(test, params, env):
     timeout = int(params.get("pxe_timeout", 60))
 
     logging.info("Try to boot from PXE")
-    output = kvm_subprocess.run_fg("tcpdump -nli %s" % vm.get_ifname(),
-                                   logging.debug, "(pxe capture) ", timeout)[1]
+    output = aexpect.run_fg("tcpdump -nli %s" % vm.get_ifname(),
+                            logging.debug, "(pxe capture) ", timeout)[1]
 
     logging.info("Analyzing the tcpdump result...")
diff --git a/client/tests/kvm/tests/qemu_img.py b/client/tests/kvm/tests/qemu_img.py
index c3449f4..a4f63a4 100644
--- a/client/tests/kvm/tests/qemu_img.py
+++ b/client/tests/kvm/tests/qemu_img.py
@@ -1,6 +1,6 @@
 import re, os, logging, commands
 from autotest_lib.client.common_lib import utils, error
-import kvm_vm, kvm_utils, kvm_preprocessing
+from autotest_lib.client.virt import virt_vm, virt_utils, virt_env_process
 
 
 def run_qemu_img(test, params, env):
@@ -13,12 +13,12 @@ def run_qemu_img(test, params, env):
     @param params: Dictionary with the test parameters
     @param env: Dictionary with test environment.
     """
-    cmd = kvm_utils.get_path(test.bindir, params.get("qemu_img_binary"))
+    cmd = virt_utils.get_path(test.bindir, params.get("qemu_img_binary"))
     if not os.path.exists(cmd):
         raise error.TestError("Binary of 'qemu-img' not found")
     image_format = params.get("image_format")
     image_size = params.get("image_size", "10G")
-    image_name = kvm_vm.get_image_filename(params, test.bindir)
+    image_name = virt_vm.get_image_filename(params, test.bindir)
 
 
     def _check(cmd, img):
@@ -49,7 +49,7 @@ def run_qemu_img(test, params, env):
 
         @param cmd: qemu-img base command.
         """
-        test_image = kvm_utils.get_path(test.bindir,
-                                        params.get("image_name_dd"))
+        test_image = virt_utils.get_path(test.bindir,
+                                         params.get("image_name_dd"))
         print "test_image = %s" % test_image
         create_image_cmd = params.get("create_image_cmd")
@@ -105,7 +105,7 @@ def run_qemu_img(test, params, env):
         @param cmd: qemu-img base command.
         """
         image_large = params.get("image_name_large")
-        img = kvm_utils.get_path(test.bindir, image_large)
+        img = virt_utils.get_path(test.bindir, image_large)
         img += '.' + image_format
         _create(cmd, img_name=img, fmt=image_format,
                img_size=params.get("image_size_large"))
@@ -288,7 +288,7 @@ def run_qemu_img(test, params, env):
 
             # Start a new VM, using backing file as its harddisk
             vm_name = params.get('main_vm')
-            kvm_preprocessing.preprocess_vm(test, params, env, vm_name)
+            virt_env_process.preprocess_vm(test, params, env, vm_name)
             vm = env.get_vm(vm_name)
             vm.create()
             timeout = int(params.get("login_timeout", 360))
@@ -316,7 +316,7 @@ def run_qemu_img(test, params, env):
             # Second, Start a new VM, using image_name as its harddisk
             # Here, the commit_testfile should not exist
             vm_name = params.get('main_vm')
-            kvm_preprocessing.preprocess_vm(test, params, env, vm_name)
+            virt_env_process.preprocess_vm(test, params, env, vm_name)
             vm = env.get_vm(vm_name)
             vm.create()
             timeout = int(params.get("login_timeout", 360))
@@ -342,7 +342,7 @@ def run_qemu_img(test, params, env):
 
             # Start a new VM, using image_name as its harddisk
             vm_name = params.get('main_vm')
-            kvm_preprocessing.preprocess_vm(test, params, env, vm_name)
+            virt_env_process.preprocess_vm(test, params, env, vm_name)
             vm = env.get_vm(vm_name)
             vm.create()
             timeout = int(params.get("login_timeout", 360))
@@ -401,13 +401,13 @@ def run_qemu_img(test, params, env):
                                     " support 'rebase' subcommand")
         sn_fmt = params.get("snapshot_format", "qcow2")
         sn1 = params.get("image_name_snapshot1")
-        sn1 = kvm_utils.get_path(test.bindir, sn1) + ".%s" % sn_fmt
-        base_img = kvm_vm.get_image_filename(params, test.bindir)
+        sn1 = virt_utils.get_path(test.bindir, sn1) + ".%s" % sn_fmt
+        base_img = virt_vm.get_image_filename(params, test.bindir)
         _create(cmd, sn1, sn_fmt, base_img=base_img, base_img_fmt=image_format)
 
         # Create snapshot2 based on snapshot1
         sn2 = params.get("image_name_snapshot2")
-        sn2 = kvm_utils.get_path(test.bindir, sn2) + ".%s" % sn_fmt
+        sn2 = virt_utils.get_path(test.bindir, sn2) + ".%s" % sn_fmt
         _create(cmd, sn2, sn_fmt, base_img=sn1, base_img_fmt=sn_fmt)
 
         rebase_mode = params.get("rebase_mode")
diff --git a/client/tests/kvm/tests/qmp_basic.py b/client/tests/kvm/tests/qmp_basic.py
index 9328c61..8e94fe9 100644
--- a/client/tests/kvm/tests/qmp_basic.py
+++ b/client/tests/kvm/tests/qmp_basic.py
@@ -1,5 +1,5 @@
 from autotest_lib.client.common_lib import error
-import kvm_test_utils, kvm_monitor
+from autotest_lib.client.virt import kvm_monitor
 
 
 def run_qmp_basic(test, params, env):
diff --git a/client/tests/kvm/tests/qmp_basic_rhel6.py b/client/tests/kvm/tests/qmp_basic_rhel6.py
index 24298b8..2ecad39 100644
--- a/client/tests/kvm/tests/qmp_basic_rhel6.py
+++ b/client/tests/kvm/tests/qmp_basic_rhel6.py
@@ -1,6 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_monitor
+from autotest_lib.client.virt import kvm_monitor
 
 
 def run_qmp_basic_rhel6(test, params, env):
diff --git a/client/tests/kvm/tests/set_link.py b/client/tests/kvm/tests/set_link.py
index a4a78ea..94ca30a 100644
--- a/client/tests/kvm/tests/set_link.py
+++ b/client/tests/kvm/tests/set_link.py
@@ -1,7 +1,7 @@
 import logging
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.tests.kvm.tests import file_transfer
-import kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils
 
 
 def run_set_link(test, params, env):
@@ -17,9 +17,9 @@ def run_set_link(test, params, env):
     @param params: Dictionary with the test parameters
     @param env: Dictionary with test environment.
     """
-    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
+    vm = virt_test_utils.get_living_vm(env, params.get("main_vm"))
     timeout = float(params.get("login_timeout", 360))
-    session = kvm_test_utils.wait_for_login(vm, 0, timeout, 0, 2)
+    session = virt_test_utils.wait_for_login(vm, 0, timeout, 0, 2)
 
     def set_link_test(linkid):
         """
@@ -30,16 +30,16 @@ def run_set_link(test, params, env):
         ip = vm.get_address(0)
 
         vm.monitor.cmd("set_link %s down" % linkid)
-        s, o = kvm_test_utils.ping(ip, count=10, timeout=20)
-        if kvm_test_utils.get_loss_ratio(o) != 100:
+        s, o = virt_test_utils.ping(ip, count=10, timeout=20)
+        if virt_test_utils.get_loss_ratio(o) != 100:
             raise error.TestFail("Still can ping the %s after down %s" %
                                  (ip, linkid))
 
         vm.monitor.cmd("set_link %s up" % linkid)
-        s, o = kvm_test_utils.ping(ip, count=10, timeout=20)
+        s, o = virt_test_utils.ping(ip, count=10, timeout=20)
         # we use 100% here as the notification of link status changed may be
         # delayed in guest driver
-        if kvm_test_utils.get_loss_ratio(o) == 100:
+        if virt_test_utils.get_loss_ratio(o) == 100:
             raise error.TestFail("Packet loss during ping %s after up %s" %
                                  (ip, linkid))
 
diff --git a/client/tests/kvm/tests/shutdown.py b/client/tests/kvm/tests/shutdown.py
index fc0407f..ac41a4a 100644
--- a/client/tests/kvm/tests/shutdown.py
+++ b/client/tests/kvm/tests/shutdown.py
@@ -1,6 +1,6 @@
 import logging, time
 from autotest_lib.client.common_lib import error
-import kvm_subprocess, kvm_test_utils, kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 def run_shutdown(test, params, env):
@@ -34,7 +34,7 @@ def run_shutdown(test, params, env):
             logging.info("system_powerdown monitor command sent; waiting for "
                          "guest to go down...")
 
-        if not kvm_utils.wait_for(vm.is_dead, 240, 0, 1):
+        if not virt_utils.wait_for(vm.is_dead, 240, 0, 1):
             raise error.TestFail("Guest refuses to go down")
 
         logging.info("Guest is down")
diff --git a/client/tests/kvm/tests/stepmaker.py b/client/tests/kvm/tests/stepmaker.py
index 5a9acdc..0d70c51 100755
--- a/client/tests/kvm/tests/stepmaker.py
+++ b/client/tests/kvm/tests/stepmaker.py
@@ -10,11 +10,12 @@ Step file creator/editor.
 import pygtk, gtk, gobject, time, os, commands, logging
 import common
 from autotest_lib.client.common_lib import error
-import kvm_utils, ppm_utils, stepeditor, kvm_monitor
+from autotest_lib.client.virt import virt_utils, ppm_utils, virt_step_editor
+from autotest_lib.client.virt import kvm_monitor
 pygtk.require('2.0')
 
 
-class StepMaker(stepeditor.StepMakerWindow):
+class StepMaker(virt_step_editor.StepMakerWindow):
     """
     Application used to create a step file. It will grab your input to the
     virtual machine and record it on a 'step file', that can be played
@@ -22,7 +23,7 @@ class StepMaker(stepeditor.StepMakerWindow):
     """
     # Constructor
     def __init__(self, vm, steps_filename, tempdir, params):
-        stepeditor.StepMakerWindow.__init__(self)
+        virt_step_editor.StepMakerWindow.__init__(self)
 
         self.vm = vm
         self.steps_filename = steps_filename
@@ -87,7 +88,7 @@ class StepMaker(stepeditor.StepMakerWindow):
         self.vm.monitor.cmd("cont")
         self.steps_file.close()
         self.vars_file.close()
-        stepeditor.StepMakerWindow.destroy(self, widget)
+        virt_step_editor.StepMakerWindow.destroy(self, widget)
 
 
     # Utilities
@@ -347,7 +348,7 @@ def run_stepmaker(test, params, env):
     steps_filename = params.get("steps")
     if not steps_filename:
         raise error.TestError("Steps filename not specified")
-    steps_filename = kvm_utils.get_path(test.bindir, steps_filename)
+    steps_filename = virt_utils.get_path(test.bindir, steps_filename)
     if os.path.exists(steps_filename):
         raise error.TestError("Steps file %s already exists" % steps_filename)
 
diff --git a/client/tests/kvm/tests/steps.py b/client/tests/kvm/tests/steps.py
index 91b864d..cc833fc 100644
--- a/client/tests/kvm/tests/steps.py
+++ b/client/tests/kvm/tests/steps.py
@@ -6,7 +6,8 @@ Utilities to perform automatic guest installation using step files.
 
 import os, time, shutil, logging
 from autotest_lib.client.common_lib import error
-import kvm_utils, ppm_utils, kvm_monitor
+from autotest_lib.client.virt import virt_utils, ppm_utils, kvm_monitor
+
 try:
     import PIL.Image
 except ImportError:
@@ -191,7 +192,7 @@ def run_steps(test, params, env):
     steps_filename = params.get("steps")
     if not steps_filename:
         raise error.TestError("Steps filename not specified")
-    steps_filename = kvm_utils.get_path(test.bindir, steps_filename)
+    steps_filename = virt_utils.get_path(test.bindir, steps_filename)
     if not os.path.exists(steps_filename):
         raise error.TestError("Steps file not found: %s" % steps_filename)
 
diff --git a/client/tests/kvm/tests/stress_boot.py b/client/tests/kvm/tests/stress_boot.py
index 0c422c0..e3ac14d 100644
--- a/client/tests/kvm/tests/stress_boot.py
+++ b/client/tests/kvm/tests/stress_boot.py
@@ -1,6 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_preprocessing
+from autotest_lib.client.virt import virt_env_process
 
 
 @error.context_aware
@@ -35,7 +35,7 @@ def run_stress_boot(test, params, env):
             vm_params = vm.params.copy()
             curr_vm = vm.clone(vm_name, vm_params)
             env.register_vm(vm_name, curr_vm)
-            kvm_preprocessing.preprocess_vm(test, vm_params, env, vm_name)
+            virt_env_process.preprocess_vm(test, vm_params, env, vm_name)
             params["vms"] += " " + vm_name
 
             sessions.append(curr_vm.wait_for_login(timeout=login_timeout))
diff --git a/client/tests/kvm/tests/timedrift.py b/client/tests/kvm/tests/timedrift.py
index 9f62b4a..123a111 100644
--- a/client/tests/kvm/tests/timedrift.py
+++ b/client/tests/kvm/tests/timedrift.py
@@ -1,6 +1,6 @@
 import logging, time, commands
 from autotest_lib.client.common_lib import error
-import kvm_subprocess, kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils, aexpect
 
 
 def run_timedrift(test, params, env):
@@ -100,7 +100,7 @@ def run_timedrift(test, params, env):
 
             # Get time before load
             # (ht stands for host time, gt stands for guest time)
-            (ht0, gt0) = kvm_test_utils.get_time(session,
-                                                 time_command,
-                                                 time_filter_re,
-                                                 time_format)
+            (ht0, gt0) = virt_test_utils.get_time(session,
+                                                  time_command,
+                                                  time_filter_re,
+                                                  time_format)
@@ -113,10 +113,10 @@ def run_timedrift(test, params, env):
             logging.info("Starting load on host...")
             for i in range(host_load_instances):
                 host_load_sessions.append(
-                    kvm_subprocess.run_bg(host_load_command,
-                                          output_func=logging.debug,
-                                          output_prefix="(host load %d) " % i,
-                                          timeout=0.5))
+                    aexpect.run_bg(host_load_command,
+                                   output_func=logging.debug,
+                                   output_prefix="(host load %d) " % i,
+                                   timeout=0.5))
                 # Set the CPU affinity of the load process
                 pid = host_load_sessions[-1].get_pid()
                 set_cpu_affinity(pid, cpu_mask)
@@ -126,7 +126,7 @@ def run_timedrift(test, params, env):
             time.sleep(load_duration)
 
             # Get time delta after load
-            (ht1, gt1) = kvm_test_utils.get_time(session,
-                                                 time_command,
-                                                 time_filter_re,
-                                                 time_format)
+            (ht1, gt1) = virt_test_utils.get_time(session,
+                                                  time_command,
+                                                  time_filter_re,
+                                                  time_format)
@@ -157,7 +157,7 @@ def run_timedrift(test, params, env):
         time.sleep(rest_duration)
 
         # Get time after rest
-        (ht2, gt2) = kvm_test_utils.get_time(session,
-                                             time_command,
-                                             time_filter_re,
-                                             time_format)
+        (ht2, gt2) = virt_test_utils.get_time(session,
+                                              time_command,
+                                              time_filter_re,
+                                              time_format)
diff --git a/client/tests/kvm/tests/timedrift_with_migration.py b/client/tests/kvm/tests/timedrift_with_migration.py
index b1d4f3e..eb4cb4a 100644
--- a/client/tests/kvm/tests/timedrift_with_migration.py
+++ b/client/tests/kvm/tests/timedrift_with_migration.py
@@ -1,6 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils
 
 
 def run_timedrift_with_migration(test, params, env):
@@ -36,13 +36,13 @@ def run_timedrift_with_migration(test, params, env):
     try:
         # Get initial time
         # (ht stands for host time, gt stands for guest time)
-        (ht0, gt0) = kvm_test_utils.get_time(session, time_command,
-                                             time_filter_re, time_format)
+        (ht0, gt0) = virt_test_utils.get_time(session, time_command,
+                                              time_filter_re, time_format)
 
         # Migrate
         for i in range(migration_iterations):
             # Get time before current iteration
-            (ht0_, gt0_) = kvm_test_utils.get_time(session, time_command,
-                                                   time_filter_re, time_format)
+            (ht0_, gt0_) = virt_test_utils.get_time(session, time_command,
+                                                    time_filter_re, time_format)
             session.close()
             # Run current iteration
@@ -54,7 +54,7 @@ def run_timedrift_with_migration(test, params, env):
             session = vm.wait_for_login(timeout=30)
             logging.info("Logged in after migration")
             # Get time after current iteration
-            (ht1_, gt1_) = kvm_test_utils.get_time(session, time_command,
-                                                   time_filter_re, time_format)
+            (ht1_, gt1_) = virt_test_utils.get_time(session, time_command,
+                                                    time_filter_re, time_format)
             # Report iteration results
             host_delta = ht1_ - ht0_
@@ -72,7 +72,7 @@ def run_timedrift_with_migration(test, params, env):
                                      "%.2f seconds" % (i + 1, drift))
 
         # Get final time
-        (ht1, gt1) = kvm_test_utils.get_time(session, time_command,
+        (ht1, gt1) = virt_test_utils.get_time(session, time_command,
                                              time_filter_re, time_format)
 
     finally:
diff --git a/client/tests/kvm/tests/timedrift_with_reboot.py b/client/tests/kvm/tests/timedrift_with_reboot.py
index 05ef21f..2562163 100644
--- a/client/tests/kvm/tests/timedrift_with_reboot.py
+++ b/client/tests/kvm/tests/timedrift_with_reboot.py
@@ -1,6 +1,6 @@
 import logging
 from autotest_lib.client.common_lib import error
-import kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils
 
 
 def run_timedrift_with_reboot(test, params, env):
@@ -36,20 +36,20 @@ def run_timedrift_with_reboot(test, params, env):
     try:
         # Get initial time
         # (ht stands for host time, gt stands for guest time)
-        (ht0, gt0) = kvm_test_utils.get_time(session, time_command,
+        (ht0, gt0) = virt_test_utils.get_time(session, time_command,
                                              time_filter_re, time_format)
 
         # Reboot
         for i in range(reboot_iterations):
             # Get time before current iteration
-            (ht0_, gt0_) = kvm_test_utils.get_time(session, time_command,
+            (ht0_, gt0_) = virt_test_utils.get_time(session, time_command,
                                                    time_filter_re, time_format)
             # Run current iteration
             logging.info("Rebooting: iteration %d of %d...",
                          (i + 1), reboot_iterations)
             session = vm.reboot(session)
             # Get time after current iteration
-            (ht1_, gt1_) = kvm_test_utils.get_time(session, time_command,
+            (ht1_, gt1_) = virt_test_utils.get_time(session, time_command,
                                                    time_filter_re, time_format)
             # Report iteration results
             host_delta = ht1_ - ht0_
@@ -67,7 +67,7 @@ def run_timedrift_with_reboot(test, params, env):
                                      "%.2f seconds" % (i + 1, drift))
 
         # Get final time
-        (ht1, gt1) = kvm_test_utils.get_time(session, time_command,
+        (ht1, gt1) = virt_test_utils.get_time(session, time_command,
                                              time_filter_re, time_format)
 
     finally:
diff --git a/client/tests/kvm/tests/timedrift_with_stop.py b/client/tests/kvm/tests/timedrift_with_stop.py
index 9f51ff9..c2b0402 100644
--- a/client/tests/kvm/tests/timedrift_with_stop.py
+++ b/client/tests/kvm/tests/timedrift_with_stop.py
@@ -1,6 +1,6 @@
 import logging, time
 from autotest_lib.client.common_lib import error
-import kvm_test_utils
+from autotest_lib.client.virt import virt_test_utils
 
 
 def run_timedrift_with_stop(test, params, env):
@@ -40,13 +40,13 @@ def run_timedrift_with_stop(test, params, env):
     try:
         # Get initial time
         # (ht stands for host time, gt stands for guest time)
-        (ht0, gt0) = kvm_test_utils.get_time(session, time_command,
+        (ht0, gt0) = virt_test_utils.get_time(session, time_command,
                                              time_filter_re, time_format)
 
         # Stop the guest
         for i in range(stop_iterations):
             # Get time before current iteration
-            (ht0_, gt0_) = kvm_test_utils.get_time(session, time_command,
+            (ht0_, gt0_) = virt_test_utils.get_time(session, time_command,
                                                    time_filter_re, time_format)
             # Run current iteration
             logging.info("Stop %s second: iteration %d of %d...",
@@ -61,7 +61,7 @@ def run_timedrift_with_stop(test, params, env):
             time.sleep(sleep_time)
 
             # Get time after current iteration
-            (ht1_, gt1_) = kvm_test_utils.get_time(session, time_command,
+            (ht1_, gt1_) = virt_test_utils.get_time(session, time_command,
                                                    time_filter_re, time_format)
             # Report iteration results
             host_delta = ht1_ - ht0_
@@ -79,7 +79,7 @@ def run_timedrift_with_stop(test, params, env):
                                      "%.2f seconds" % (i + 1, drift))
 
         # Get final time
-        (ht1, gt1) = kvm_test_utils.get_time(session, time_command,
+        (ht1, gt1) = virt_test_utils.get_time(session, time_command,
                                              time_filter_re, time_format)
 
     finally:
diff --git a/client/tests/kvm/tests/unattended_install.py b/client/tests/kvm/tests/unattended_install.py
index 7c6d845..66c123f 100644
--- a/client/tests/kvm/tests/unattended_install.py
+++ b/client/tests/kvm/tests/unattended_install.py
@@ -1,6 +1,6 @@
 import logging, time, socket, re
 from autotest_lib.client.common_lib import error
-import kvm_vm
+from autotest_lib.client.virt import virt_vm
 
 
 @error.context_aware
@@ -38,7 +38,7 @@ def run_unattended_install(test, params, env):
             client.connect((vm.get_address(), port))
             if client.recv(1024) == "done":
                 break
-        except (socket.error, kvm_vm.VMAddressError):
+        except (socket.error, virt_vm.VMAddressError):
             pass
         if migrate_background:
             # Drop the params which may break the migration
diff --git a/client/tests/kvm/tests/unittest.py b/client/tests/kvm/tests/unittest.py
index 9a126a5..16168fe 100644
--- a/client/tests/kvm/tests/unittest.py
+++ b/client/tests/kvm/tests/unittest.py
@@ -1,6 +1,6 @@
 import logging, os, shutil, glob, ConfigParser
 from autotest_lib.client.common_lib import error
-import kvm_utils, kvm_preprocessing
+from autotest_lib.client.virt import virt_utils, virt_env_process
 
 
 def run_unittest(test, params, env):
@@ -87,14 +87,14 @@ def run_unittest(test, params, env):
         try:
             try:
                 vm_name = params.get('main_vm')
-                kvm_preprocessing.preprocess_vm(test, params, env, vm_name)
+                virt_env_process.preprocess_vm(test, params, env, vm_name)
                 vm = env.get_vm(vm_name)
                 vm.create()
                 vm.monitor.cmd("cont")
                 logging.info("Waiting for unittest %s to complete, timeout %s, "
                              "output in %s", t, timeout,
                              vm.get_testlog_filename())
-                if not kvm_utils.wait_for(vm.is_dead, timeout):
+                if not virt_utils.wait_for(vm.is_dead, timeout):
                     raise error.TestFail("Timeout elapsed (%ss)" % timeout)
                 # Check qemu's exit status
                 status = vm.process.get_status()
diff --git a/client/tests/kvm/tests/virtio_console.py b/client/tests/kvm/tests/virtio_console.py
index bc40837..ee8facf 100644
--- a/client/tests/kvm/tests/virtio_console.py
+++ b/client/tests/kvm/tests/virtio_console.py
@@ -10,8 +10,8 @@ from threading import Thread
 
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_subprocess, kvm_test_utils, kvm_utils
-import kvm_preprocessing, kvm_monitor
+from autotest_lib.client.virt import virt_utils, virt_test_utils, kvm_monitor
+from autotest_lib.client.virt import virt_env_process, aexpect
 
 
 def run_virtio_console(test, params, env):
@@ -536,7 +536,7 @@ def run_virtio_console(test, params, env):
                                                                 "FAIL:"],
                                                                timeout)
 
-        except (kvm_subprocess.ExpectError):
+        except (aexpect.ExpectError):
             match = None
             data = "Timeout."
 
@@ -674,14 +674,14 @@ def run_virtio_console(test, params, env):
         Restore old virtual machine when VM is destroyed.
         """
         logging.debug("Booting guest %s", params.get("main_vm"))
-        kvm_preprocessing.preprocess_vm(test, params, env,
+        virt_env_process.preprocess_vm(test, params, env,
                                         params.get("main_vm"))
 
         vm = env.get_vm(params.get("main_vm"))
 
         kernel_bug = None
         try:
-            session = kvm_test_utils.wait_for_login(vm, 0,
+            session = virt_test_utils.wait_for_login(vm, 0,
                                     float(params.get("boot_timeout", 100)),
                                     0, 2)
         except (error.TestFail):
@@ -694,7 +694,7 @@ def run_virtio_console(test, params, env):
         if kernel_bug is not None:
             logging.error(kernel_bug)
 
-        sserial = kvm_test_utils.wait_for_login(vm, 0,
+        sserial = virt_test_utils.wait_for_login(vm, 0,
                                          float(params.get("boot_timeout", 20)),
                                          0, 2, serial=True)
         return [vm, session, sserial]
@@ -1183,8 +1183,8 @@ def run_virtio_console(test, params, env):
         """
         # Migrate
         vm[1].close()
-        dest_vm = kvm_test_utils.migrate(vm[0], env, 3600, "exec", 0, 0)
-        vm[1] = kvm_utils.wait_for(dest_vm.remote_login, 30, 0, 2)
+        dest_vm = virt_test_utils.migrate(vm[0], env, 3600, "exec", 0, 0)
+        vm[1] = virt_utils.wait_for(dest_vm.remote_login, 30, 0, 2)
         if not vm[1]:
             raise error.TestFail("Could not log into guest after migration")
         logging.info("Logged in after migration")
@@ -1451,7 +1451,7 @@ def run_virtio_console(test, params, env):
                 match, tmp = _on_guest("guest_exit()", vm, 10)
                 if (match is None) or (match == 0):
                     vm[1].close()
-                    vm[1] = kvm_test_utils.wait_for_login(vm[0], 0,
+                    vm[1] = virt_test_utils.wait_for_login(vm[0], 0,
                                         float(params.get("boot_timeout", 5)),
                                         0, 10)
                 on_guest("killall -9 python "
@@ -1462,14 +1462,14 @@ def run_virtio_console(test, params, env):
                 init_guest(vm, consoles)
                 _clean_ports(vm, consoles)
 
-            except (error.TestFail, kvm_subprocess.ExpectError,
+            except (error.TestFail, aexpect.ExpectError,
                     Exception), inst:
                 logging.error(inst)
                 logging.error("Virtio-console driver is irreparably"
                               " blocked. Every comd end with sig KILL."
                               "Try reboot vm for continue in testing.")
                 try:
-                    vm[1] = kvm_test_utils.reboot(vm[0], vm[1], "system_reset")
+                    vm[1] = virt_test_utils.reboot(vm[0], vm[1], "system_reset")
                 except (kvm_monitor.MonitorProtocolError):
                     logging.error("Qemu is blocked. Monitor"
                                   " no longer communicate.")
diff --git a/client/tests/kvm/tests/vlan.py b/client/tests/kvm/tests/vlan.py
index b1864c9..9fc1f64 100644
--- a/client/tests/kvm/tests/vlan.py
+++ b/client/tests/kvm/tests/vlan.py
@@ -1,6 +1,6 @@
 import logging, time, re
 from autotest_lib.client.common_lib import error
-import kvm_test_utils, kvm_utils, kvm_subprocess
+from autotest_lib.client.virt import virt_utils, virt_test_utils, aexpect
 
 
 def run_vlan(test, params, env):
@@ -53,7 +53,7 @@ def run_vlan(test, params, env):
         return session.cmd_status(rem_vlan_cmd % (iface, iface))
 
     def nc_transfer(src, dst):
-        nc_port = kvm_utils.find_free_port(1025, 5334, vm_ip[dst])
+        nc_port = virt_utils.find_free_port(1025, 5334, vm_ip[dst])
         listen_cmd = params.get("listen_cmd")
         send_cmd = params.get("send_cmd")
 
@@ -66,7 +66,7 @@ def run_vlan(test, params, env):
         session[src].cmd(send_cmd, timeout=60)
         try:
             session[dst].read_up_to_prompt(timeout=60)
-        except kvm_subprocess.ExpectError:
+        except aexpect.ExpectError:
             raise error.TestFail ("Fail to receive file"
                                     " from vm%s to vm%s" % (src+1, dst+1))
         #check MD5 message digest of receive file in dst
@@ -87,7 +87,7 @@ def run_vlan(test, params, env):
             raise error.TestError("Could not log into guest(vm%d)" % i)
         logging.info("Logged in")
 
-        ifname.append(kvm_test_utils.get_linux_ifname(session[i],
+        ifname.append(virt_test_utils.get_linux_ifname(session[i],
                       vm[i].get_mac_address()))
         #get guest ip
         vm_ip.append(vm[i].get_address())
@@ -122,7 +122,7 @@ def run_vlan(test, params, env):
                 for i in range(2):
                     interface = ifname[i] + '.' + str(vlan)
                     dest = subnet +'.'+ str(vlan2)+ '.' + ip_unit[(i+1)%2]
-                    s, o = kvm_test_utils.ping(dest, count=2,
+                    s, o = virt_test_utils.ping(dest, count=2,
                                               interface=interface,
                                               session=session[i], timeout=30)
                     if ((vlan == vlan2) ^ (s == 0)):
@@ -134,11 +134,11 @@ def run_vlan(test, params, env):
 
             logging.info("Flood ping")
             def flood_ping(src, dst):
-                # we must use a dedicated session becuase the kvm_subprocess
+                # we must use a dedicated session because the aexpect
                 # does not have the other method to interrupt the process in
                 # the guest rather than close the session.
                 session_flood = vm[src].wait_for_login(timeout=60)
-                kvm_test_utils.ping(vlan_ip[dst], flood=True,
+                virt_test_utils.ping(vlan_ip[dst], flood=True,
                                    interface=ifname[src],
                                    session=session_flood, timeout=10)
                 session_flood.close()
diff --git a/client/tests/kvm/tests/vmstop.py b/client/tests/kvm/tests/vmstop.py
index 74ecb23..441ddb9 100644
--- a/client/tests/kvm/tests/vmstop.py
+++ b/client/tests/kvm/tests/vmstop.py
@@ -1,7 +1,7 @@
 import logging, time, os
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-import kvm_utils
+from autotest_lib.client.virt import virt_utils
 
 
 def run_vmstop(test, params, env):
@@ -35,8 +35,8 @@ def run_vmstop(test, params, env):
         utils.run("dd if=/dev/zero of=/tmp/file bs=1M count=%s" % file_size)
         # Transfer file from host to guest, we didn't expect the finish of
         # transfer, we just let it to be a kind of stress in guest.
-        bg = kvm_utils.Thread(vm.copy_files_to, ("/tmp/file", guest_path),
-                              dict(verbose=True, timeout=60))
+        bg = virt_utils.Thread(vm.copy_files_to, ("/tmp/file", guest_path),
+                               dict(verbose=True, timeout=60))
         logging.info("Start the background transfer")
         bg.start()
 
diff --git a/client/tests/kvm/tests/whql_client_install.py b/client/tests/kvm/tests/whql_client_install.py
index f5d725d..2d72a5e 100644
--- a/client/tests/kvm/tests/whql_client_install.py
+++ b/client/tests/kvm/tests/whql_client_install.py
@@ -1,6 +1,6 @@
 import logging, time, os
 from autotest_lib.client.common_lib import error
-import kvm_test_utils, kvm_utils, rss_file_transfer
+from autotest_lib.client.virt import virt_utils, virt_test_utils, rss_client
 
 
 def run_whql_client_install(test, params, env):
@@ -37,7 +37,7 @@ def run_whql_client_install(test, params, env):
     client_password = params.get("client_password")
     dsso_delete_machine_binary = params.get("dsso_delete_machine_binary",
                                             "deps/whql_delete_machine_15.exe")
-    dsso_delete_machine_binary = kvm_utils.get_path(test.bindir,
+    dsso_delete_machine_binary = virt_utils.get_path(test.bindir,
                                                     dsso_delete_machine_binary)
     install_timeout = float(params.get("install_timeout", 600))
     install_cmd = params.get("install_cmd")
@@ -45,15 +45,15 @@ def run_whql_client_install(test, params, env):
 
     # Stop WTT service(s) on client
     for svc in wtt_services.split():
-        kvm_test_utils.stop_windows_service(session, svc)
+        virt_test_utils.stop_windows_service(session, svc)
 
     # Copy dsso_delete_machine_binary to server
-    rss_file_transfer.upload(server_address, server_file_transfer_port,
+    rss_client.upload(server_address, server_file_transfer_port,
                              dsso_delete_machine_binary, server_studio_path,
                              timeout=60)
 
     # Open a shell session with server
-    server_session = kvm_utils.remote_login("nc", server_address,
+    server_session = virt_utils.remote_login("nc", server_address,
                                             server_shell_port, "", "",
                                             session.prompt, session.linesep)
     server_session.set_status_test_command(session.status_test_command)
@@ -81,7 +81,7 @@ def run_whql_client_install(test, params, env):
     server_session.close()
 
     # Rename the client machine
-    client_name = "autotest_%s" % kvm_utils.generate_random_string(4)
+    client_name = "autotest_%s" % virt_utils.generate_random_string(4)
     logging.info("Renaming client machine to '%s'", client_name)
     cmd = ('wmic computersystem where name="%%computername%%" rename name="%s"'
            % client_name)
diff --git a/client/tests/kvm/tests/whql_submission.py b/client/tests/kvm/tests/whql_submission.py
index c3621c4..bbeb836 100644
--- a/client/tests/kvm/tests/whql_submission.py
+++ b/client/tests/kvm/tests/whql_submission.py
@@ -1,6 +1,6 @@
 import logging, os, re
 from autotest_lib.client.common_lib import error
-import kvm_subprocess, kvm_utils, rss_file_transfer
+from autotest_lib.client.virt import virt_utils, rss_client, aexpect
 
 
 def run_whql_submission(test, params, env):
@@ -42,20 +42,20 @@ def run_whql_submission(test, params, env):
                                     "Microsoft Driver Test Manager\\Studio")
     dsso_test_binary = params.get("dsso_test_binary",
                                   "deps/whql_submission_15.exe")
-    dsso_test_binary = kvm_utils.get_path(test.bindir, dsso_test_binary)
+    dsso_test_binary = virt_utils.get_path(test.bindir, dsso_test_binary)
     dsso_delete_machine_binary = params.get("dsso_delete_machine_binary",
                                             "deps/whql_delete_machine_15.exe")
-    dsso_delete_machine_binary = kvm_utils.get_path(test.bindir,
+    dsso_delete_machine_binary = virt_utils.get_path(test.bindir,
                                                     dsso_delete_machine_binary)
     test_timeout = float(params.get("test_timeout", 600))
 
     # Copy dsso binaries to the server
     for filename in dsso_test_binary, dsso_delete_machine_binary:
-        rss_file_transfer.upload(server_address, server_file_transfer_port,
+        rss_client.upload(server_address, server_file_transfer_port,
                                  filename, server_studio_path, timeout=60)
 
     # Open a shell session with the server
-    server_session = kvm_utils.remote_login("nc", server_address,
+    server_session = virt_utils.remote_login("nc", server_address,
                                             server_shell_port, "", "",
                                             sessions[0].prompt,
                                             sessions[0].linesep)
@@ -74,7 +74,7 @@ def run_whql_submission(test, params, env):
         server_session.cmd(cmd, print_func=logging.debug)
 
     # Reboot the client machines
-    sessions = kvm_utils.parallel((vm.reboot, (session,))
+    sessions = virt_utils.parallel((vm.reboot, (session,))
                                   for vm, session in zip(vms, sessions))
 
     # Check the NICs again
@@ -171,7 +171,7 @@ def run_whql_submission(test, params, env):
         # (test_timeout + 300 is used here because the automation program is
         # supposed to terminate cleanly on its own when test_timeout expires)
         done = True
-    except kvm_subprocess.ExpectError, e:
+    except aexpect.ExpectError, e:
         o = e.output
         done = False
     server_session.close()
@@ -188,17 +188,17 @@ def run_whql_submission(test, params, env):
     for i, r in enumerate(results):
         if "report" in r:
             try:
-                rss_file_transfer.download(server_address,
+                rss_client.download(server_address,
                                            server_file_transfer_port,
                                            r["report"], test.debugdir)
-            except rss_file_transfer.FileTransferNotFoundError:
+            except rss_client.FileTransferNotFoundError:
                 pass
         if "logs" in r:
             try:
-                rss_file_transfer.download(server_address,
+                rss_client.download(server_address,
                                            server_file_transfer_port,
                                            r["logs"], test.debugdir)
-            except rss_file_transfer.FileTransferNotFoundError:
+            except rss_client.FileTransferNotFoundError:
                 pass
             else:
                 try:
@@ -254,7 +254,7 @@ def run_whql_submission(test, params, env):
     # Kill the client VMs and fail if the automation program did not terminate
     # on time
     if not done:
-        kvm_utils.parallel(vm.destroy for vm in vms)
+        virt_utils.parallel(vm.destroy for vm in vms)
         raise error.TestFail("The automation program did not terminate "
                              "on time")
 
-- 
1.7.4

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 5/7] KVM test: Removing the old libraries and programs
  2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
                   ` (3 preceding siblings ...)
  2011-03-09  9:21 ` [PATCH 4/7] KVM test: Adapt the test code to use the new virt namespace Lucas Meneghel Rodrigues
@ 2011-03-09  9:21 ` Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 6/7] KVM test: Try to load subtests on a shared tests location Lucas Meneghel Rodrigues
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09  9:21 UTC (permalink / raw)
  To: autotest; +Cc: kvm

These libraries and programs have been replaced by the new virt
namespace, and the updated versions of the standalone programs have
been moved to client/tools.
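For reference, the mechanical rename applied across the series maps each old
top-level kvm_* module to its namespaced counterpart. A minimal sketch of that
mapping as a rewrite helper (the module pairs are taken from the diffs in this
series; the helper function itself is purely illustrative, not part of the
patches):

```python
# Old top-level module -> new module under autotest_lib.client.virt,
# as renamed by this patch series.
RENAMES = {
    "kvm_utils": "virt_utils",
    "kvm_test_utils": "virt_test_utils",
    "kvm_vm": "virt_vm",
    "kvm_preprocessing": "virt_env_process",
    "kvm_subprocess": "aexpect",
    "rss_file_transfer": "rss_client",
    "kvm_config": "cartesian_config",
}

def rewrite_import(line):
    """Rewrite a bare 'import kvm_*' line to the new namespaced form.

    Only handles single-module import lines; combined imports such as
    'import kvm_utils, kvm_vm' would need to be split first.
    """
    for old, new in RENAMES.items():
        if line.strip() == "import %s" % old:
            return "from autotest_lib.client.virt import %s" % new
    return line
```

So `import kvm_utils` becomes `from autotest_lib.client.virt import
virt_utils`, while unrelated import lines pass through unchanged.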

Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
 client/tests/kvm/cd_hash.py           |   48 -
 client/tests/kvm/html_report.py       | 1727 --------------------------------
 client/tests/kvm/installer.py         |  797 ---------------
 client/tests/kvm/kvm_config.py        |  698 -------------
 client/tests/kvm/kvm_monitor.py       |  744 --------------
 client/tests/kvm/kvm_preprocessing.py |  467 ---------
 client/tests/kvm/kvm_scheduler.py     |  229 -----
 client/tests/kvm/kvm_subprocess.py    | 1351 -------------------------
 client/tests/kvm/kvm_test_utils.py    |  753 --------------
 client/tests/kvm/kvm_utils.py         | 1728 --------------------------------
 client/tests/kvm/kvm_vm.py            | 1777 ---------------------------------
 client/tests/kvm/ppm_utils.py         |  237 -----
 client/tests/kvm/rss_file_transfer.py |  519 ----------
 client/tests/kvm/scan_results.py      |   97 --
 client/tests/kvm/stepeditor.py        | 1401 --------------------------
 client/tests/kvm/test_setup.py        |  700 -------------
 16 files changed, 0 insertions(+), 13273 deletions(-)
 delete mode 100755 client/tests/kvm/cd_hash.py
 delete mode 100755 client/tests/kvm/html_report.py
 delete mode 100644 client/tests/kvm/installer.py
 delete mode 100755 client/tests/kvm/kvm_config.py
 delete mode 100644 client/tests/kvm/kvm_monitor.py
 delete mode 100644 client/tests/kvm/kvm_preprocessing.py
 delete mode 100644 client/tests/kvm/kvm_scheduler.py
 delete mode 100755 client/tests/kvm/kvm_subprocess.py
 delete mode 100644 client/tests/kvm/kvm_test_utils.py
 delete mode 100644 client/tests/kvm/kvm_utils.py
 delete mode 100755 client/tests/kvm/kvm_vm.py
 delete mode 100644 client/tests/kvm/ppm_utils.py
 delete mode 100755 client/tests/kvm/rss_file_transfer.py
 delete mode 100755 client/tests/kvm/scan_results.py
 delete mode 100755 client/tests/kvm/stepeditor.py
 delete mode 100644 client/tests/kvm/test_setup.py

diff --git a/client/tests/kvm/cd_hash.py b/client/tests/kvm/cd_hash.py
deleted file mode 100755
index 04f8cbe..0000000
--- a/client/tests/kvm/cd_hash.py
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/usr/bin/python
-"""
-Program that calculates several hashes for a given CD image.
-
-@copyright: Red Hat 2008-2009
-"""
-
-import os, sys, optparse, logging
-import common
-import kvm_utils
-from autotest_lib.client.common_lib import logging_manager
-from autotest_lib.client.bin import utils
-
-
-if __name__ == "__main__":
-    parser = optparse.OptionParser("usage: %prog [options] [filenames]")
-    options, args = parser.parse_args()
-
-    logging_manager.configure_logging(kvm_utils.KvmLoggingConfig())
-
-    if args:
-        filenames = args
-    else:
-        parser.print_help()
-        sys.exit(1)
-
-    for filename in filenames:
-        filename = os.path.abspath(filename)
-
-        file_exists = os.path.isfile(filename)
-        can_read_file = os.access(filename, os.R_OK)
-        if not file_exists:
-            logging.critical("File %s does not exist!", filename)
-            continue
-        if not can_read_file:
-            logging.critical("File %s does not have read permissions!",
-                             filename)
-            continue
-
-        logging.info("Hash values for file %s", os.path.basename(filename))
-        logging.info("md5    (1m): %s", utils.hash_file(filename, 1024*1024,
-                                                        method="md5"))
-        logging.info("sha1   (1m): %s", utils.hash_file(filename, 1024*1024,
-                                                        method="sha1"))
-        logging.info("md5  (full): %s", utils.hash_file(filename, method="md5"))
-        logging.info("sha1 (full): %s", utils.hash_file(filename,
-                                                        method="sha1"))
-        logging.info("")
diff --git a/client/tests/kvm/html_report.py b/client/tests/kvm/html_report.py
deleted file mode 100755
index 8b4b109..0000000
--- a/client/tests/kvm/html_report.py
+++ /dev/null
@@ -1,1727 +0,0 @@
-#!/usr/bin/python
-"""
-Script used to parse the test results and generate an HTML report.
-
-@copyright: (c)2005-2007 Matt Kruse (javascripttoolbox.com)
-@copyright: Red Hat 2008-2009
-@author: Dror Russo (drusso@redhat.com)
-"""
-
-import os, sys, re, getopt, time, datetime, commands
-import common
-
-
-format_css = """
-html,body {
-    padding:0;
-    color:#222;
-    background:#FFFFFF;
-}
-
-body {
-    padding:0px;
-    font:76%/150% "Lucida Grande", "Lucida Sans Unicode", Lucida, Verdana, Geneva, Arial, Helvetica, sans-serif;
-}
-
-#page_title{
-    text-decoration:none;
-    font:bold 2em/2em Arial, Helvetica, sans-serif;
-    text-transform:none;
-    text-shadow: 2px 2px 2px #555;
-    text-align: left;
-    color:#555555;
-    border-bottom: 1px solid #555555;
-}
-
-#page_sub_title{
-        text-decoration:none;
-        font:bold 16px Arial, Helvetica, sans-serif;
-        text-transform:uppercase;
-        text-shadow: 2px 2px 2px #555;
-        text-align: left;
-        color:#555555;
-    margin-bottom:0;
-}
-
-#comment{
-        text-decoration:none;
-        font:bold 10px Arial, Helvetica, sans-serif;
-        text-transform:none;
-        text-align: left;
-        color:#999999;
-    margin-top:0;
-}
-
-
-#meta_headline{
-                text-decoration:none;
-                font-family: Verdana, Geneva, Arial, Helvetica, sans-serif ;
-                text-align: left;
-                color:black;
-                font-weight: bold;
-                font-size: 14px;
-        }
-
-
-table.meta_table
-{text-align: center;
-font-family: Verdana, Geneva, Arial, Helvetica, sans-serif ;
-width: 90%;
-background-color: #FFFFFF;
-border: 0px;
-border-top: 1px #003377 solid;
-border-bottom: 1px #003377 solid;
-border-right: 1px #003377 solid;
-border-left: 1px #003377 solid;
-border-collapse: collapse;
-border-spacing: 0px;}
-
-table.meta_table td
-{background-color: #FFFFFF;
-color: #000;
-padding: 4px;
-border-top: 1px #BBBBBB solid;
-border-bottom: 1px #BBBBBB solid;
-font-weight: normal;
-font-size: 13px;}
-
-
-table.stats
-{text-align: center;
-font-family: Verdana, Geneva, Arial, Helvetica, sans-serif ;
-width: 100%;
-background-color: #FFFFFF;
-border: 0px;
-border-top: 1px #003377 solid;
-border-bottom: 1px #003377 solid;
-border-right: 1px #003377 solid;
-border-left: 1px #003377 solid;
-border-collapse: collapse;
-border-spacing: 0px;}
-
-table.stats td{
-background-color: #FFFFFF;
-color: #000;
-padding: 4px;
-border-top: 1px #BBBBBB solid;
-border-bottom: 1px #BBBBBB solid;
-font-weight: normal;
-font-size: 11px;}
-
-table.stats th{
-background: #dcdcdc;
-color: #000;
-padding: 6px;
-font-size: 12px;
-border-bottom: 1px #003377 solid;
-font-weight: bold;}
-
-table.stats td.top{
-background-color: #dcdcdc;
-color: #000;
-padding: 6px;
-text-align: center;
-border: 0px;
-border-bottom: 1px #003377 solid;
-font-size: 10px;
-font-weight: bold;}
-
-table.stats th.table-sorted-asc{
-        background-image: url(ascending.gif);
-        background-position: top left  ;
-        background-repeat: no-repeat;
-}
-
-table.stats th.table-sorted-desc{
-        background-image: url(descending.gif);
-        background-position: top left;
-        background-repeat: no-repeat;
-}
-
-table.stats2
-{text-align: left;
-font-family: Verdana, Geneva, Arial, Helvetica, sans-serif ;
-width: 100%;
-background-color: #FFFFFF;
-border: 0px;
-}
-
-table.stats2 td{
-background-color: #FFFFFF;
-color: #000;
-padding: 0px;
-font-weight: bold;
-font-size: 13px;}
-
-
-
-/* Put this inside a @media qualifier so Netscape 4 ignores it */
-@media screen, print {
-        /* Turn off list bullets */
-        ul.mktree  li { list-style: none; }
-        /* Control how "spaced out" the tree is */
-        ul.mktree, ul.mktree ul , ul.mktree li { margin-left:10px; padding:0px; }
-        /* Provide space for our own "bullet" inside the LI */
-        ul.mktree  li           .bullet { padding-left: 15px; }
-        /* Show "bullets" in the links, depending on the class of the LI that the link's in */
-        ul.mktree  li.liOpen    .bullet { cursor: pointer; }
-        ul.mktree  li.liClosed  .bullet { cursor: pointer;  }
-        ul.mktree  li.liBullet  .bullet { cursor: default; }
-        /* Sublists are visible or not based on class of parent LI */
-        ul.mktree  li.liOpen    ul { display: block; }
-        ul.mktree  li.liClosed  ul { display: none; }
-
-        /* Format menu items differently depending on what level of the tree they are in */
-        /* Uncomment this if you want your fonts to decrease in size the deeper they are in the tree */
-/*
-        ul.mktree  li ul li { font-size: 90% }
-*/
-}
-"""
-
-
-table_js = """
-/**
- * Copyright (c)2005-2007 Matt Kruse (javascripttoolbox.com)
- *
- * Dual licensed under the MIT and GPL licenses.
- * This basically means you can use this code however you want for
- * free, but don't claim to have written it yourself!
- * Donations always accepted: http://www.JavascriptToolbox.com/donate/
- *
- * Please do not link to the .js files on javascripttoolbox.com from
- * your site. Copy the files locally to your server instead.
- *
- */
-/**
- * Table.js
- * Functions for interactive Tables
- *
- * Copyright (c) 2007 Matt Kruse (javascripttoolbox.com)
- * Dual licensed under the MIT and GPL licenses.
- *
- * @version 0.981
- *
- * @history 0.981 2007-03-19 Added Sort.numeric_comma, additional date parsing formats
- * @history 0.980 2007-03-18 Release new BETA release pending some testing. Todo: Additional docs, examples, plus jQuery plugin.
- * @history 0.959 2007-03-05 Added more "auto" functionality, couple bug fixes
- * @history 0.958 2007-02-28 Added auto functionality based on class names
- * @history 0.957 2007-02-21 Speed increases, more code cleanup, added Auto Sort functionality
- * @history 0.956 2007-02-16 Cleaned up the code and added Auto Filter functionality.
- * @history 0.950 2006-11-15 First BETA release.
- *
- * @todo Add more date format parsers
- * @todo Add style classes to colgroup tags after sorting/filtering in case the user wants to highlight the whole column
- * @todo Correct for colspans in data rows (this may slow it down)
- * @todo Fix for IE losing form control values after sort?
- */
-
-/**
- * Sort Functions
- */
-var Sort = (function(){
-        var sort = {};
-        // Default alpha-numeric sort
-        // --------------------------
-        sort.alphanumeric = function(a,b) {
-                return (a==b)?0:(a<b)?-1:1;
-        };
-        sort.alphanumeric_rev = function(a,b) {
-                return (a==b)?0:(a<b)?1:-1;
-        };
-        sort['default'] = sort.alphanumeric; // IE chokes on sort.default
-
-        // This conversion is generalized to work for either a decimal separator of , or .
-        sort.numeric_converter = function(separator) {
-                return function(val) {
-                        if (typeof(val)=="string") {
-                                val = parseFloat(val.replace(/^[^\d\.]*([\d., ]+).*/g,"$1").replace(new RegExp("[^\\\d"+separator+"]","g"),'').replace(/,/,'.')) || 0;
-                        }
-                        return val || 0;
-                };
-        };
-
-        // Numeric Reversed Sort
-        // ------------
-        sort.numeric_rev = function(a,b) {
-                if (sort.numeric.convert(a)>sort.numeric.convert(b)) {
-                        return (-1);
-                }
-                if (sort.numeric.convert(a)==sort.numeric.convert(b)) {
-                        return 0;
-                }
-                if (sort.numeric.convert(a)<sort.numeric.convert(b)) {
-                        return 1;
-                }
-        };
-
-
-        // Numeric Sort
-        // ------------
-        sort.numeric = function(a,b) {
-                return sort.numeric.convert(a)-sort.numeric.convert(b);
-        };
-        sort.numeric.convert = sort.numeric_converter(".");
-
-        // Numeric Sort - comma decimal separator
-        // --------------------------------------
-        sort.numeric_comma = function(a,b) {
-                return sort.numeric_comma.convert(a)-sort.numeric_comma.convert(b);
-        };
-        sort.numeric_comma.convert = sort.numeric_converter(",");
-
-        // Case-insensitive Sort
-        // ---------------------
-        sort.ignorecase = function(a,b) {
-                return sort.alphanumeric(sort.ignorecase.convert(a),sort.ignorecase.convert(b));
-        };
-        sort.ignorecase.convert = function(val) {
-                if (val==null) { return ""; }
-                return (""+val).toLowerCase();
-        };
-
-        // Currency Sort
-        // -------------
-        sort.currency = sort.numeric; // Just treat it as numeric!
-        sort.currency_comma = sort.numeric_comma;
-
-        // Date sort
-        // ---------
-        sort.date = function(a,b) {
-                return sort.numeric(sort.date.convert(a),sort.date.convert(b));
-        };
-        // Convert 2-digit years to 4
-        sort.date.fixYear=function(yr) {
-                yr = +yr;
-                if (yr<50) { yr += 2000; }
-                else if (yr<100) { yr += 1900; }
-                return yr;
-        };
-        sort.date.formats = [
-                // YY[YY]-MM-DD
-                { re:/(\d{2,4})-(\d{1,2})-(\d{1,2})/ , f:function(x){ return (new Date(sort.date.fixYear(x[1]),+x[2],+x[3])).getTime(); } }
-                // MM/DD/YY[YY] or MM-DD-YY[YY]
-                ,{ re:/(\d{1,2})[\/-](\d{1,2})[\/-](\d{2,4})/ , f:function(x){ return (new Date(sort.date.fixYear(x[3]),+x[1],+x[2])).getTime(); } }
-                // Any catch-all format that new Date() can handle. This is not reliable except for long formats, for example: 31 Jan 2000 01:23:45 GMT
-                ,{ re:/(.*\d{4}.*\d+:\d+\d+.*)/, f:function(x){ var d=new Date(x[1]); if(d){return d.getTime();} } }
-        ];
-        sort.date.convert = function(val) {
-                var m,v, f = sort.date.formats;
-                for (var i=0,L=f.length; i<L; i++) {
-                        if (m=val.match(f[i].re)) {
-                                v=f[i].f(m);
-                                if (typeof(v)!="undefined") { return v; }
-                        }
-                }
-                return 9999999999999; // So non-parsed dates will be last, not first
-        };
-
-        return sort;
-})();
-
-/**
- * The main Table namespace
- */
-var Table = (function(){
-
-        /**
-         * Determine if a reference is defined
-         */
-        function def(o) {return (typeof o!="undefined");};
-
-        /**
-         * Determine if an object or class string contains a given class.
-         */
-        function hasClass(o,name) {
-                return new RegExp("(^|\\\s)"+name+"(\\\s|$)").test(o.className);
-        };
-
-        /**
-         * Add a class to an object
-         */
-        function addClass(o,name) {
-                var c = o.className || "";
-                if (def(c) && !hasClass(o,name)) {
-                        o.className += (c?" ":"") + name;
-                }
-        };
-
-        /**
-         * Remove a class from an object
-         */
-        function removeClass(o,name) {
-                var c = o.className || "";
-                o.className = c.replace(new RegExp("(^|\\\s)"+name+"(\\\s|$)"),"$1");
-        };
-
-        /**
-         * For classes that match a given substring, return the rest
-         */
-        function classValue(o,prefix) {
-                var c = o.className;
-                if (c.match(new RegExp("(^|\\\s)"+prefix+"([^ ]+)"))) {
-                        return RegExp.$2;
-                }
-                return null;
-        };
-
-        /**
-         * Return true if an object is hidden.
-         * This uses the "russian doll" technique to unwrap itself to the most efficient
-         * function after the first pass. This avoids repeated feature detection that
-         * would always fall into the same block of code.
-         */
-         function isHidden(o) {
-                if (window.getComputedStyle) {
-                        var cs = window.getComputedStyle;
-                        return (isHidden = function(o) {
-                                return 'none'==cs(o,null).getPropertyValue('display');
-                        })(o);
-                }
-                else if (window.currentStyle) {
-                        return(isHidden = function(o) {
-                                return 'none'==o.currentStyle['display'];
-                        })(o);
-                }
-                return (isHidden = function(o) {
-                        return 'none'==o.style['display'];
-                })(o);
-        };
-
-        /**
-         * Get a parent element by tag name, or the original element if it is of the tag type
-         */
-        function getParent(o,a,b) {
-                if (o!=null && o.nodeName) {
-                        if (o.nodeName==a || (b && o.nodeName==b)) {
-                                return o;
-                        }
-                        while (o=o.parentNode) {
-                                if (o.nodeName && (o.nodeName==a || (b && o.nodeName==b))) {
-                                        return o;
-                                }
-                        }
-                }
-                return null;
-        };
-
-        /**
-         * Utility function to copy properties from one object to another
-         */
-        function copy(o1,o2) {
-                for (var i=2;i<arguments.length; i++) {
-                        var a = arguments[i];
-                        if (def(o1[a])) {
-                                o2[a] = o1[a];
-                        }
-                }
-        }
-
-        // The table object itself
-        var table = {
-                //Class names used in the code
-                AutoStripeClassName:"table-autostripe",
-                StripeClassNamePrefix:"table-stripeclass:",
-
-                AutoSortClassName:"table-autosort",
-                AutoSortColumnPrefix:"table-autosort:",
-                AutoSortTitle:"Click to sort",
-                SortedAscendingClassName:"table-sorted-asc",
-                SortedDescendingClassName:"table-sorted-desc",
-                SortableClassName:"table-sortable",
-                SortableColumnPrefix:"table-sortable:",
-                NoSortClassName:"table-nosort",
-
-                AutoFilterClassName:"table-autofilter",
-                FilteredClassName:"table-filtered",
-                FilterableClassName:"table-filterable",
-                FilteredRowcountPrefix:"table-filtered-rowcount:",
-                RowcountPrefix:"table-rowcount:",
-                FilterAllLabel:"Filter: All",
-
-                AutoPageSizePrefix:"table-autopage:",
-                AutoPageJumpPrefix:"table-page:",
-                PageNumberPrefix:"table-page-number:",
-                PageCountPrefix:"table-page-count:"
-        };
-
-        /**
-         * A place to store misc table information, rather than in the table objects themselves
-         */
-        table.tabledata = {};
-
-        /**
-         * Resolve a table given an element reference, and make sure it has a unique ID
-         */
-        table.uniqueId=1;
-        table.resolve = function(o,args) {
-                if (o!=null && o.nodeName && o.nodeName!="TABLE") {
-                        o = getParent(o,"TABLE");
-                }
-                if (o==null) { return null; }
-                if (!o.id) {
-                        var id = null;
-                        do { var id = "TABLE_"+(table.uniqueId++); }
-                                while (document.getElementById(id)!=null);
-                        o.id = id;
-                }
-                this.tabledata[o.id] = this.tabledata[o.id] || {};
-                if (args) {
-                        copy(args,this.tabledata[o.id],"stripeclass","ignorehiddenrows","useinnertext","sorttype","col","desc","page","pagesize");
-                }
-                return o;
-        };
-
-
-        /**
-         * Run a function against each cell in a table header or footer, usually
-         * to add or remove css classes based on sorting, filtering, etc.
-         */
-        table.processTableCells = function(t, type, func, arg) {
-                t = this.resolve(t);
-                if (t==null) { return; }
-                if (type!="TFOOT") {
-                        this.processCells(t.tHead, func, arg);
-                }
-                if (type!="THEAD") {
-                        this.processCells(t.tFoot, func, arg);
-                }
-        };
-
-        /**
-         * Internal method used to process an arbitrary collection of cells.
-         * Referenced by processTableCells.
-         * It's done this way to avoid getElementsByTagName() which would also return nested table cells.
-         */
-        table.processCells = function(section,func,arg) {
-                if (section!=null) {
-                        if (section.rows && section.rows.length && section.rows.length>0) {
-                                var rows = section.rows;
-                                for (var j=0,L2=rows.length; j<L2; j++) {
-                                        var row = rows[j];
-                                        if (row.cells && row.cells.length && row.cells.length>0) {
-                                                var cells = row.cells;
-                                                for (var k=0,L3=cells.length; k<L3; k++) {
-                                                        var cellsK = cells[k];
-                                                        func.call(this,cellsK,arg);
-                                                }
-                                        }
-                                }
-                        }
-                }
-        };
-
-        /**
-         * Get the cellIndex value for a cell. This is only needed because of a Safari
-         * bug that causes cellIndex to exist but always be 0.
-         * Rather than feature-detecting each time it is called, the function will
-         * re-write itself the first time it is called.
-         */
-        table.getCellIndex = function(td) {
-                var tr = td.parentNode;
-                var cells = tr.cells;
-                if (cells && cells.length) {
-                        if (cells.length>1 && cells[cells.length-1].cellIndex>0) {
-                                // Define the new function, overwrite the one we're running now, and then run the new one
-                                (this.getCellIndex = function(td) {
-                                        return td.cellIndex;
-                                })(td);
-                        }
-                        // Safari will always go through this slower block every time. Oh well.
-                        for (var i=0,L=cells.length; i<L; i++) {
-                                if (tr.cells[i]==td) {
-                                        return i;
-                                }
-                        }
-                }
-                return 0;
-        };
-
-        /**
-         * A map of node names and how to convert them into their "value" for sorting, filtering, etc.
-         * These are put here so it is extensible.
-         */
-        table.nodeValue = {
-                'INPUT':function(node) {
-                        if (def(node.value) && node.type && ((node.type!="checkbox" && node.type!="radio") || node.checked)) {
-                                return node.value;
-                        }
-                        return "";
-                },
-                'SELECT':function(node) {
-                        if (node.selectedIndex>=0 && node.options) {
-                                // Sort select elements by the visible text
-                                return node.options[node.selectedIndex].text;
-                        }
-                        return "";
-                },
-                'IMG':function(node) {
-                        return node.name || "";
-                }
-        };
-
-        /**
-         * Get the text value of a cell. Only use innerText if explicitly told to, because
-         * otherwise we want to be able to handle sorting on inputs and other types
-         */
-        table.getCellValue = function(td,useInnerText) {
-                if (useInnerText && def(td.innerText)) {
-                        return td.innerText;
-                }
-                if (!td.childNodes) {
-                        return "";
-                }
-                var childNodes=td.childNodes;
-                var ret = "";
-                for (var i=0,L=childNodes.length; i<L; i++) {
-                        var node = childNodes[i];
-                        var type = node.nodeType;
-                        // In order to get realistic sort results, we need to treat some elements in a special way.
-                        // These behaviors are defined in the nodeValue() object, keyed by node name
-                        if (type==1) {
-                                var nname = node.nodeName;
-                                if (this.nodeValue[nname]) {
-                                        ret += this.nodeValue[nname](node);
-                                }
-                                else {
-                                        ret += this.getCellValue(node);
-                                }
-                        }
-                        else if (type==3) {
-                                if (def(node.innerText)) {
-                                        ret += node.innerText;
-                                }
-                                else if (def(node.nodeValue)) {
-                                        ret += node.nodeValue;
-                                }
-                        }
-                }
-                return ret;
-        };
-
-        /**
-         * Consider colspan and rowspan values in table header cells to calculate the actual cellIndex
-         * of a given cell. This is necessary because if the first cell in row 0 has a rowspan of 2,
-         * then the first cell in row 1 will have a cellIndex of 0 rather than 1, even though it really
-         * starts in the second column rather than the first.
-         * See: http://www.javascripttoolbox.com/temp/table_cellindex.html
-         */
-        table.tableHeaderIndexes = {};
-        table.getActualCellIndex = function(tableCellObj) {
-                if (!def(tableCellObj.cellIndex)) { return null; }
-                var tableObj = getParent(tableCellObj,"TABLE");
-                var cellCoordinates = tableCellObj.parentNode.rowIndex+"-"+this.getCellIndex(tableCellObj);
-
-                // If it has already been computed, return the answer from the lookup table
-                if (def(this.tableHeaderIndexes[tableObj.id])) {
-                        return this.tableHeaderIndexes[tableObj.id][cellCoordinates];
-                }
-
-                var matrix = [];
-                this.tableHeaderIndexes[tableObj.id] = {};
-                var thead = getParent(tableCellObj,"THEAD");
-                var trs = thead.getElementsByTagName('TR');
-
-                // Loop thru every tr and every cell in the tr, building up a 2-d array "grid" that gets
-                // populated with an "x" for each space that a cell takes up. If the first cell is colspan
-                // 2, it will fill in values [0] and [1] in the first array, so that the second cell will
-                // find the first empty cell in the first row (which will be [2]) and know that this is
-                // where it sits, rather than its internal .cellIndex value of [1].
-                for (var i=0; i<trs.length; i++) {
-                        var cells = trs[i].cells;
-                        for (var j=0; j<cells.length; j++) {
-                                var c = cells[j];
-                                var rowIndex = c.parentNode.rowIndex;
-                                var cellId = rowIndex+"-"+this.getCellIndex(c);
-                                var rowSpan = c.rowSpan || 1;
-                                var colSpan = c.colSpan || 1;
-                                var firstAvailCol;
-                                if(!def(matrix[rowIndex])) {
-                                        matrix[rowIndex] = [];
-                                }
-                                var m = matrix[rowIndex];
-                                // Find first available column in the first row
-                                for (var k=0; k<m.length+1; k++) {
-                                        if (!def(m[k])) {
-                                                firstAvailCol = k;
-                                                break;
-                                        }
-                                }
-                                this.tableHeaderIndexes[tableObj.id][cellId] = firstAvailCol;
-                                for (var k=rowIndex; k<rowIndex+rowSpan; k++) {
-                                        if(!def(matrix[k])) {
-                                                matrix[k] = [];
-                                        }
-                                        var matrixrow = matrix[k];
-                                        for (var l=firstAvailCol; l<firstAvailCol+colSpan; l++) {
-                                                matrixrow[l] = "x";
-                                        }
-                                }
-                        }
-                }
-                // Store the map so future lookups are fast.
-                return this.tableHeaderIndexes[tableObj.id][cellCoordinates];
-        };
-
-        /**
-         * Sort all rows in each TBODY (tbodies are sorted independent of each other)
-         */
-        table.sort = function(o,args) {
-                var t, tdata, sortconvert=null;
-                // Allow for a simple passing of sort type as second parameter
-                if (typeof(args)=="function") {
-                        args={sorttype:args};
-                }
-                args = args || {};
-
-                // If no col is specified, deduce it from the object sent in
-                if (!def(args.col)) {
-                        args.col = this.getActualCellIndex(o) || 0;
-                }
-                // If no sort type is specified, default to the default sort
-                args.sorttype = args.sorttype || Sort['default'];
-
-                // Resolve the table
-                t = this.resolve(o,args);
-                tdata = this.tabledata[t.id];
-
-                // If we are sorting on the same column as last time, flip the sort direction
-                if (def(tdata.lastcol) && tdata.lastcol==tdata.col && def(tdata.lastdesc)) {
-                        tdata.desc = !tdata.lastdesc;
-                }
-                else {
-                        tdata.desc = !!args.desc;
-                }
-
-                // Store the last sorted column so clicking again will reverse the sort order
-                tdata.lastcol=tdata.col;
-                tdata.lastdesc=!!tdata.desc;
-
-                // If a sort conversion function exists, pre-convert cell values and then use a plain alphanumeric sort
-                var sorttype = tdata.sorttype;
-                if (typeof(sorttype.convert)=="function") {
-                        sortconvert=tdata.sorttype.convert;
-                        sorttype=Sort.alphanumeric;
-                }
-
-                // Loop through all THEADs and remove sorted class names, then re-add them for the col
-                // that is being sorted
-                this.processTableCells(t,"THEAD",
-                        function(cell) {
-                                if (hasClass(cell,this.SortableClassName)) {
-                                        removeClass(cell,this.SortedAscendingClassName);
-                                        removeClass(cell,this.SortedDescendingClassName);
-                                        // If the computed colIndex of the cell equals the sorted colIndex, flag it as sorted
-                                        if (tdata.col==table.getActualCellIndex(cell) && (classValue(cell,table.SortableClassName))) {
-                                                addClass(cell,tdata.desc?this.SortedAscendingClassName:this.SortedDescendingClassName);
-                                        }
-                                }
-                        }
-                );
-
-                // Sort each tbody independently
-                var bodies = t.tBodies;
-                if (bodies==null || bodies.length==0) { return; }
-
-                // Define a new sort function to be called to consider descending or not
-                var newSortFunc = (tdata.desc)?
-                        function(a,b){return sorttype(b[0],a[0]);}
-                        :function(a,b){return sorttype(a[0],b[0]);};
-
-                var useinnertext=!!tdata.useinnertext;
-                var col = tdata.col;
-
-                for (var i=0,L=bodies.length; i<L; i++) {
-                        var tb = bodies[i], tbrows = tb.rows, rows = [];
-
-                        // Allow tbodies to request that they not be sorted
-                        if(!hasClass(tb,table.NoSortClassName)) {
-                                // Create a separate array which will store the converted values and refs to the
-                                // actual rows. This is the array that will be sorted.
-                                var cRow, cRowIndex=0;
-                                if (cRow=tbrows[cRowIndex]){
-                                        // Funky loop style because it's considerably faster in IE
-                                        do {
-                                                if (rowCells = cRow.cells) {
-                                                        var cellValue = (col<rowCells.length)?this.getCellValue(rowCells[col],useinnertext):null;
-                                                        if (sortconvert) cellValue = sortconvert(cellValue);
-                                                        rows[cRowIndex] = [cellValue,tbrows[cRowIndex]];
-                                                }
-                                        } while (cRow=tbrows[++cRowIndex])
-                                }
-
-                                // Do the actual sorting
-                                rows.sort(newSortFunc);
-
-                                // Move the rows to the correctly sorted order. Appending an existing DOM object just moves it!
-                                cRowIndex=0;
-                                var displayedCount=0;
-                                var f=[removeClass,addClass];
-                                if (cRow=rows[cRowIndex]){
-                                        do {
-                                                tb.appendChild(cRow[1]);
-                                        } while (cRow=rows[++cRowIndex])
-                                }
-                        }
-                }
-
-                // If paging is enabled on the table, then we need to re-page because the order of rows has changed!
-                if (tdata.pagesize) {
-                        this.page(t); // This will internally do the striping
-                }
-                else {
-                        // Re-stripe if a class name was supplied
-                        if (tdata.stripeclass) {
-                                this.stripe(t,tdata.stripeclass,!!tdata.ignorehiddenrows);
-                        }
-                }
-        };
-
-        /**
-        * Apply a filter to rows in a table and hide those that do not match.
-        */
-        table.filter = function(o,filters,args) {
-                var cell;
-                args = args || {};
-
-                var t = this.resolve(o,args);
-                var tdata = this.tabledata[t.id];
-
-                // If new filters were passed in, apply them to the table's list of filters
-                if (!filters) {
-                        // If a null or blank value was sent in for 'filters' then that means reset the table to no filters
-                        tdata.filters = null;
-                }
-                else {
-                        // Allow for passing a select list in as the filter, since this is common design
-                        if (filters.nodeName=="SELECT" && filters.type=="select-one" && filters.selectedIndex>-1) {
-                                filters={ 'filter':filters.options[filters.selectedIndex].value };
-                        }
-                        // Also allow for a regular input
-                        if (filters.nodeName=="INPUT" && filters.type=="text") {
-                                filters={ 'filter':"/"+filters.value+"/" };
-                        }
-                        // Force filters to be an array
-                        if (typeof(filters)=="object" && !filters.length) {
-                                filters = [filters];
-                        }
-
-                        // Convert regular expression strings to RegExp objects and function strings to function objects
-                        for (var i=0,L=filters.length; i<L; i++) {
-                                var filter = filters[i];
-                                if (typeof(filter.filter)=="string") {
-                                        // If a filter string is like "/expr/" then turn it into a Regex
-                                        if (filter.filter.match(/^\/(.*)\/$/)) {
-                                                filter.filter = new RegExp(RegExp.$1);
-                                                filter.filter.regex=true;
-                                        }
-                                        // If filter string is like "function (x) { ... }" then turn it into a function
-                                        else if (filter.filter.match(/^function\s*\(([^\)]*)\)\s*\{(.*)}\s*$/)) {
-                                                filter.filter = Function(RegExp.$1,RegExp.$2);
-                                        }
-                                }
-                                // If some non-table object was passed in rather than a 'col' value, resolve it
-                                // and assign it's column index to the filter if it doesn't have one. This way,
-                                // passing in a cell reference or a select object etc instead of a table object
-                                // will automatically set the correct column to filter.
-                                if (filter && !def(filter.col) && (cell=getParent(o,"TD","TH"))) {
-                                        filter.col = this.getCellIndex(cell);
-                                }
-
-                                // Apply the passed-in filters to the existing list of filters for the table, removing those that have a filter of null or ""
-                                if ((!filter || !filter.filter) && tdata.filters) {
-                                        delete tdata.filters[filter.col];
-                                }
-                                else {
-                                        tdata.filters = tdata.filters || {};
-                                        tdata.filters[filter.col] = filter.filter;
-                                }
-                        }
-                        // If no more filters are left, then make sure to empty out the filters object
-                        for (var j in tdata.filters) { var keep = true; }
-                        if (!keep) {
-                                tdata.filters = null;
-                        }
-                }
-                // Everything's been setup, so now scrape the table rows
-                return table.scrape(o);
-        };
-
-        /**
-         * "Page" a table by showing only a subset of the rows
-         */
-        table.page = function(t,page,args) {
-                args = args || {};
-                if (def(page)) { args.page = page; }
-                return table.scrape(t,args);
-        };
-
-        /**
-         * Jump forward or back any number of pages
-         */
-        table.pageJump = function(t,count,args) {
-                t = this.resolve(t,args);
-                return this.page(t,(table.tabledata[t.id].page||0)+count,args);
-        };
-
-        /**
-         * Go to the next page of a paged table
-         */
-        table.pageNext = function(t,args) {
-                return this.pageJump(t,1,args);
-        };
-
-        /**
-         * Go to the previous page of a paged table
-         */
-        table.pagePrevious = function(t,args) {
-                return this.pageJump(t,-1,args);
-        };
-
-        /**
-         * Scrape a table to either hide or show each row based on filters and paging
-         */
-        table.scrape = function(o,args) {
-                var col,cell,filterList,filterReset=false,filter;
-                var page,pagesize,pagestart,pageend;
-                var unfilteredrows=[],unfilteredrowcount=0,totalrows=0;
-                var t,tdata,row,hideRow;
-                args = args || {};
-
-                // Resolve the table object
-                t = this.resolve(o,args);
-                tdata = this.tabledata[t.id];
-
-                // Setup for Paging
-                var page = tdata.page;
-                if (def(page)) {
-                        // Don't let the page go before the beginning
-                        if (page<0) { tdata.page=page=0; }
-                        pagesize = tdata.pagesize || 25; // 25=arbitrary default
-                        pagestart = page*pagesize+1;
-                        pageend = pagestart + pagesize - 1;
-                }
-
-                // Scrape each row of each tbody
-                var bodies = t.tBodies;
-                if (bodies==null || bodies.length==0) { return; }
-                for (var i=0,L=bodies.length; i<L; i++) {
-                        var tb = bodies[i];
-                        for (var j=0,L2=tb.rows.length; j<L2; j++) {
-                                row = tb.rows[j];
-                                hideRow = false;
-
-                                // Test if filters will hide the row
-                                if (tdata.filters && row.cells) {
-                                        var cells = row.cells;
-                                        var cellsLength = cells.length;
-                                        // Test each filter
-                                        for (col in tdata.filters) {
-                                                if (!hideRow) {
-                                                        filter = tdata.filters[col];
-                                                        if (filter && col<cellsLength) {
-                                                                var val = this.getCellValue(cells[col]);
-                                                                if (filter.regex && val.search) {
-                                                                        hideRow=(val.search(filter)<0);
-                                                                }
-                                                                else if (typeof(filter)=="function") {
-                                                                        hideRow=!filter(val,cells[col]);
-                                                                }
-                                                                else {
-                                                                        hideRow = (val!=filter);
-                                                                }
-                                                        }
-                                                }
-                                        }
-                                }
-
-                                // Keep track of the total rows scanned and the total rows _not_ filtered out
-                                totalrows++;
-                                if (!hideRow) {
-                                        unfilteredrowcount++;
-                                        if (def(page)) {
-                                                // Temporarily keep an array of unfiltered rows in case the page we're on goes past
-                                                // the last page and we need to back up. Don't want to filter again!
-                                                unfilteredrows.push(row);
-                                                if (unfilteredrowcount<pagestart || unfilteredrowcount>pageend) {
-                                                        hideRow = true;
-                                                }
-                                        }
-                                }
-
-                                row.style.display = hideRow?"none":"";
-                        }
-                }
-
-                if (def(page)) {
-                        // Check to see if filtering has put us past the requested page index. If it has,
-                        // then go back to the last page and show it.
-                        if (pagestart>=unfilteredrowcount) {
-                                pagestart = unfilteredrowcount-(unfilteredrowcount%pagesize);
-                                tdata.page = page = pagestart/pagesize;
-                                for (var i=pagestart,L=unfilteredrows.length; i<L; i++) {
-                                        unfilteredrows[i].style.display="";
-                                }
-                        }
-                }
-
-                // Loop through all THEADs and add/remove filtered class names
-                this.processTableCells(t,"THEAD",
-                        function(c) {
-                                ((tdata.filters && def(tdata.filters[table.getCellIndex(c)]) && hasClass(c,table.FilterableClassName))?addClass:removeClass)(c,table.FilteredClassName);
-                        }
-                );
-
-                // Stripe the table if necessary
-                if (tdata.stripeclass) {
-                        this.stripe(t);
-                }
-
-                // Calculate some values to be returned for info and updating purposes
-                var pagecount = Math.floor(unfilteredrowcount/pagesize)+1;
-                if (def(page)) {
-                        // Update the page number/total containers if they exist
-                        if (tdata.container_number) {
-                                tdata.container_number.innerHTML = page+1;
-                        }
-                        if (tdata.container_count) {
-                                tdata.container_count.innerHTML = pagecount;
-                        }
-                }
-
-                // Update the row count containers if they exist
-                if (tdata.container_filtered_count) {
-                        tdata.container_filtered_count.innerHTML = unfilteredrowcount;
-                }
-                if (tdata.container_all_count) {
-                        tdata.container_all_count.innerHTML = totalrows;
-                }
-                return { 'data':tdata, 'unfilteredcount':unfilteredrowcount, 'total':totalrows, 'pagecount':pagecount, 'page':page, 'pagesize':pagesize };
-        };
-
-        /**
-         * Shade alternate rows, aka Stripe the table.
-         */
-        table.stripe = function(t,className,args) {
-                args = args || {};
-                args.stripeclass = className;
-
-                t = this.resolve(t,args);
-                var tdata = this.tabledata[t.id];
-
-                var bodies = t.tBodies;
-                if (bodies==null || bodies.length==0) {
-                        return;
-                }
-
-                className = tdata.stripeclass;
-                // Cache a shorter, quicker reference to either the remove or add class methods
-                var f=[removeClass,addClass];
-                for (var i=0,L=bodies.length; i<L; i++) {
-                        var tb = bodies[i], tbrows = tb.rows, cRowIndex=0, cRow, displayedCount=0;
-                        if (cRow=tbrows[cRowIndex]){
-                                // The ignorehiddenrows test is pulled out of the loop for a slight speed increase.
-                                // Makes a bigger difference in FF than in IE.
-                                // In this case, speed always wins over brevity!
-                                if (tdata.ignoreHiddenRows) {
-                                        do {
-                                                f[displayedCount++%2](cRow,className);
-                                        } while (cRow=tbrows[++cRowIndex])
-                                }
-                                else {
-                                        do {
-                                                if (!isHidden(cRow)) {
-                                                        f[displayedCount++%2](cRow,className);
-                                                }
-                                        } while (cRow=tbrows[++cRowIndex])
-                                }
-                        }
-                }
-        };
-
-        /**
-         * Build up a list of unique values in a table column
-         */
-        table.getUniqueColValues = function(t,col) {
-                var values={}, bodies = this.resolve(t).tBodies;
-                for (var i=0,L=bodies.length; i<L; i++) {
-                        var tbody = bodies[i];
-                        for (var r=0,L2=tbody.rows.length; r<L2; r++) {
-                                values[this.getCellValue(tbody.rows[r].cells[col])] = true;
-                        }
-                }
-                var valArray = [];
-                for (var val in values) {
-                        valArray.push(val);
-                }
-                return valArray.sort();
-        };
-
-        /**
-         * Scan the document on load and add sorting, filtering, paging etc ability automatically
-         * based on existence of class names on the table and cells.
-         */
-        table.auto = function(args) {
-                var cells = [], tables = document.getElementsByTagName("TABLE");
-                var val,tdata;
-                if (tables!=null) {
-                        for (var i=0,L=tables.length; i<L; i++) {
-                                var t = table.resolve(tables[i]);
-                                tdata = table.tabledata[t.id];
-                                if (val=classValue(t,table.StripeClassNamePrefix)) {
-                                        tdata.stripeclass=val;
-                                }
-                                // Do auto-filter if necessary
-                                if (hasClass(t,table.AutoFilterClassName)) {
-                                        table.autofilter(t);
-                                }
-                                // Do auto-page if necessary
-                                if (val = classValue(t,table.AutoPageSizePrefix)) {
-                                        table.autopage(t,{'pagesize':+val});
-                                }
-                                // Do auto-sort if necessary
-                                if ((val = classValue(t,table.AutoSortColumnPrefix)) || (hasClass(t,table.AutoSortClassName))) {
-                                        table.autosort(t,{'col':(val==null)?null:+val});
-                                }
-                                // Do auto-stripe if necessary
-                                if (tdata.stripeclass && hasClass(t,table.AutoStripeClassName)) {
-                                        table.stripe(t);
-                                }
-                        }
-                }
-        };
-
-        /**
-         * Add sorting functionality to a table header cell
-         */
-        table.autosort = function(t,args) {
-                t = this.resolve(t,args);
-                var tdata = this.tabledata[t.id];
-                this.processTableCells(t, "THEAD", function(c) {
-                        var type = classValue(c,table.SortableColumnPrefix);
-                        if (type!=null) {
-                                type = type || "default";
-                                c.title =c.title || table.AutoSortTitle;
-                                addClass(c,table.SortableClassName);
-                                c.onclick = Function("","Table.sort(this,{'sorttype':Sort['"+type+"']})");
-                                // If we are going to auto sort on a column, we need to keep track of what kind of sort it will be
-                                if (args.col!=null) {
-                                        if (args.col==table.getActualCellIndex(c)) {
-                                                tdata.sorttype=Sort[type];
-                                        }
-                                }
-                        }
-                } );
-                if (args.col!=null) {
-                        table.sort(t,args);
-                }
-        };
-
-        /**
-         * Add paging functionality to a table
-         */
-        table.autopage = function(t,args) {
-                t = this.resolve(t,args);
-                var tdata = this.tabledata[t.id],val;
-                if (tdata.pagesize) {
-                        this.processTableCells(t, "THEAD,TFOOT", function(c) {
-                                var type = classValue(c,table.AutoPageJumpPrefix);
-                                if (type=="next") { type = 1; }
-                                else if (type=="previous") { type = -1; }
-                                if (type!=null) {
-                                        c.onclick = Function("","Table.pageJump(this,"+type+")");
-                                }
-                        } );
-                        if (val = classValue(t,table.PageNumberPrefix)) {
-                                tdata.container_number = document.getElementById(val);
-                        }
-                        if (val = classValue(t,table.PageCountPrefix)) {
-                                tdata.container_count = document.getElementById(val);
-                        }
-                        return table.page(t,0,args);
-                }
-        };
-
-        /**
-         * A util function to cancel bubbling of clicks on filter dropdowns
-         */
-        table.cancelBubble = function(e) {
-                e = e || window.event;
-                if (typeof(e.stopPropagation)=="function") { e.stopPropagation(); }
-                if (def(e.cancelBubble)) { e.cancelBubble = true; }
-        };
-
-        /**
-         * Auto-filter a table
-         */
-        table.autofilter = function(t,args) {
-                args = args || {};
-                t = this.resolve(t,args);
-                var tdata = this.tabledata[t.id],val;
-                table.processTableCells(t, "THEAD", function(cell) {
-                        if (hasClass(cell,table.FilterableClassName)) {
-                                var cellIndex = table.getCellIndex(cell);
-                                var colValues = table.getUniqueColValues(t,cellIndex);
-                                if (colValues.length>0) {
-                                        if (typeof(args.insert)=="function") {
-                                                args.insert(cell,colValues);
-                                        }
-                                        else {
-                                                var sel = '<select onchange="Table.filter(this,this)" onclick="Table.cancelBubble(event)" class="'+table.AutoFilterClassName+'"><option value="">'+table.FilterAllLabel+'</option>';
-                                                for (var i=0; i<colValues.length; i++) {
-                                                        sel += '<option value="'+colValues[i]+'">'+colValues[i]+'</option>';
-                                                }
-                                                sel += '</select>';
-                                                cell.innerHTML += "<br>"+sel;
-                                        }
-                                }
-                        }
-                });
-                if (val = classValue(t,table.FilteredRowcountPrefix)) {
-                        tdata.container_filtered_count = document.getElementById(val);
-                }
-                if (val = classValue(t,table.RowcountPrefix)) {
-                        tdata.container_all_count = document.getElementById(val);
-                }
-        };
-
-        /**
-         * Attach the auto event so it happens on load.
-         * use jQuery's ready() function if available
-         */
-        if (typeof(jQuery)!="undefined") {
-                jQuery(table.auto);
-        }
-        else if (window.addEventListener) {
-                window.addEventListener( "load", table.auto, false );
-        }
-        else if (window.attachEvent) {
-                window.attachEvent( "onload", table.auto );
-        }
-
-        return table;
-})();
-"""
-
-
-maketree_js = """/**
- * Copyright (c)2005-2007 Matt Kruse (javascripttoolbox.com)
- *
- * Dual licensed under the MIT and GPL licenses.
- * This basically means you can use this code however you want for
- * free, but don't claim to have written it yourself!
- * Donations always accepted: http://www.JavascriptToolbox.com/donate/
- *
- * Please do not link to the .js files on javascripttoolbox.com from
- * your site. Copy the files locally to your server instead.
- *
- */
-/*
-This code is inspired by and extended from Stuart Langridge's aqlist code:
-    http://www.kryogenix.org/code/browser/aqlists/
-    Stuart Langridge, November 2002
-    sil@kryogenix.org
-    Inspired by Aaron's labels.js (http://youngpup.net/demos/labels/)
-    and Dave Lindquist's menuDropDown.js (http://www.gazingus.org/dhtml/?id=109)
-*/
-
-// Automatically attach a listener to the window onload, to convert the trees
-addEvent(window,"load",convertTrees);
-
-// Utility function to add an event listener
-function addEvent(o,e,f){
-  if (o.addEventListener){ o.addEventListener(e,f,false); return true; }
-  else if (o.attachEvent){ return o.attachEvent("on"+e,f); }
-  else { return false; }
-}
-
-// Utility function to set a global variable if it is not already set
-function setDefault(name,val) {
-  if (typeof(window[name])=="undefined" || window[name]==null) {
-    window[name]=val;
-  }
-}
-
-// Fully expands a tree with a given ID
-function expandTree(treeId) {
-  var ul = document.getElementById(treeId);
-  if (ul == null) { return false; }
-  expandCollapseList(ul,nodeOpenClass);
-}
-
-// Fully collapses a tree with a given ID
-function collapseTree(treeId) {
-  var ul = document.getElementById(treeId);
-  if (ul == null) { return false; }
-  expandCollapseList(ul,nodeClosedClass);
-}
-
-// Expands enough nodes to expose an LI with a given ID
-function expandToItem(treeId,itemId) {
-  var ul = document.getElementById(treeId);
-  if (ul == null) { return false; }
-  var ret = expandCollapseList(ul,nodeOpenClass,itemId);
-  if (ret) {
-    var o = document.getElementById(itemId);
-    if (o.scrollIntoView) {
-      o.scrollIntoView(false);
-    }
-  }
-}
-
-// Performs 3 functions:
-// a) Expand all nodes
-// b) Collapse all nodes
-// c) Expand all nodes to reach a certain ID
-function expandCollapseList(ul,cName,itemId) {
-  if (!ul.childNodes || ul.childNodes.length==0) { return false; }
-  // Iterate LIs
-  for (var itemi=0;itemi<ul.childNodes.length;itemi++) {
-    var item = ul.childNodes[itemi];
-    if (itemId!=null && item.id==itemId) { return true; }
-    if (item.nodeName == "LI") {
-      // Iterate things in this LI
-      var subLists = false;
-      for (var sitemi=0;sitemi<item.childNodes.length;sitemi++) {
-        var sitem = item.childNodes[sitemi];
-        if (sitem.nodeName=="UL") {
-          subLists = true;
-          var ret = expandCollapseList(sitem,cName,itemId);
-          if (itemId!=null && ret) {
-            item.className=cName;
-            return true;
-          }
-        }
-      }
-      if (subLists && itemId==null) {
-        item.className = cName;
-      }
-    }
-  }
-}
-
-// Search the document for UL elements with the correct CLASS name, then process them
-function convertTrees() {
-  setDefault("treeClass","mktree");
-  setDefault("nodeClosedClass","liClosed");
-  setDefault("nodeOpenClass","liOpen");
-  setDefault("nodeBulletClass","liBullet");
-  setDefault("nodeLinkClass","bullet");
-  setDefault("preProcessTrees",true);
-  if (preProcessTrees) {
-    if (!document.createElement) { return; } // Without createElement, we can't do anything
-    var uls = document.getElementsByTagName("ul");
-    if (uls==null) { return; }
-    var uls_length = uls.length;
-    for (var uli=0;uli<uls_length;uli++) {
-      var ul=uls[uli];
-      if (ul.nodeName=="UL" && ul.className==treeClass) {
-        processList(ul);
-      }
-    }
-  }
-}
-
-function treeNodeOnclick() {
-  this.parentNode.className = (this.parentNode.className==nodeOpenClass) ? nodeClosedClass : nodeOpenClass;
-  return false;
-}
-function retFalse() {
-  return false;
-}
-// Process a UL tag and all its children, to convert to a tree
-function processList(ul) {
-  if (!ul.childNodes || ul.childNodes.length==0) { return; }
-  // Iterate LIs
-  var childNodesLength = ul.childNodes.length;
-  for (var itemi=0;itemi<childNodesLength;itemi++) {
-    var item = ul.childNodes[itemi];
-    if (item.nodeName == "LI") {
-      // Iterate things in this LI
-      var subLists = false;
-      var itemChildNodesLength = item.childNodes.length;
-      for (var sitemi=0;sitemi<itemChildNodesLength;sitemi++) {
-        var sitem = item.childNodes[sitemi];
-        if (sitem.nodeName=="UL") {
-          subLists = true;
-          processList(sitem);
-        }
-      }
-      var s= document.createElement("SPAN");
-      var t= '\u00A0'; // &nbsp;
-      s.className = nodeLinkClass;
-      if (subLists) {
-        // This LI has UL's in it, so it's a +/- node
-        if (item.className==null || item.className=="") {
-          item.className = nodeClosedClass;
-        }
-        // If it's just text, make the text work as the link also
-        if (item.firstChild.nodeName=="#text") {
-          t = t+item.firstChild.nodeValue;
-          item.removeChild(item.firstChild);
-        }
-        s.onclick = treeNodeOnclick;
-      }
-      else {
-        // No sublists, so it's just a bullet node
-        item.className = nodeBulletClass;
-        s.onclick = retFalse;
-      }
-      s.appendChild(document.createTextNode(t));
-      item.insertBefore(s,item.firstChild);
-    }
-  }
-}
-"""
-
-
-#################################################################
-##  This script gets a kvm autotest results directory path as  ##
-##  an input and creates a single html formatted result page.  ##
-#################################################################
-
-stimelist = []
-
-
-def make_html_file(metadata, results, tag, host, output_file_name, dirname):
-    html_prefix = """
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
-<html>
-<head>
-<title>KVM Autotest Results</title>
-<style type="text/css">
-%s
-</style>
-<script type="text/javascript">
-%s
-%s
-function popup(tag,text) {
-var w = window.open('', tag, 'toolbar=no,location=no,directories=no,status=no,menubar=no,scrollbars=yes,resizable=yes, copyhistory=no,width=600,height=300,top=20,left=100');
-w.document.open("text/html", "replace");
-w.document.write(text);
-w.document.close();
-return true;
-}
-</script>
-</head>
-<body>
-""" % (format_css, table_js, maketree_js)
-
-
-    if output_file_name:
-        output = open(output_file_name, "w")
-    else:  # if no output file is defined, print the html to the console
-        output = sys.stdout
-    # create html page
-    print >> output, html_prefix
-    print >> output, '<h2 id=\"page_title\">KVM Autotest Execution Report</h2>'
-
-    # formatting date and time to print
-    t = datetime.datetime.now()
-
-    epoch_sec = time.mktime(t.timetuple())
-    now = datetime.datetime.fromtimestamp(epoch_sec)
-
-    # basic statistics
-    total_executed = 0
-    total_failed = 0
-    total_passed = 0
-    for res in results:
-        total_executed += 1
-        if res['status'] == 'GOOD':
-            total_passed += 1
-        else:
-            total_failed += 1
-    stat_str = 'No test cases executed'
-    if total_executed > 0:
-        failed_perct = int(float(total_failed)/float(total_executed)*100)
-        stat_str = ('From %d tests executed, %d have passed (%d%% failures)' %
-                    (total_executed, total_passed, failed_perct))
-
-    kvm_ver_str = metadata['kvmver']
-
-    print >> output, '<table class="stats2">'
-    print >> output, '<tr><td>HOST</td><td>:</td><td>%s</td></tr>' % host
-    print >> output, '<tr><td>RESULTS DIR</td><td>:</td><td>%s</td></tr>'  % tag
-    print >> output, '<tr><td>DATE</td><td>:</td><td>%s</td></tr>' % now.ctime()
-    print >> output, '<tr><td>STATS</td><td>:</td><td>%s</td></tr>'% stat_str
-    print >> output, '<tr><td></td><td></td><td></td></tr>'
-    print >> output, '<tr><td>KVM VERSION</td><td>:</td><td>%s</td></tr>' % kvm_ver_str
-    print >> output, '</table>'
-
-
-    ## print test results
-    print >> output, '<br>'
-    print >> output, '<h2 id=\"page_sub_title\">Test Results</h2>'
-    print >> output, '<h2 id=\"comment\">click on table headers to sort asc/desc</h2>'
-    result_table_prefix = """<table
-id="t1" class="stats table-autosort:4 table-autofilter table-stripeclass:alternate table-page-number:t1page table-page-count:t1pages table-filtered-rowcount:t1filtercount table-rowcount:t1allcount">
-<thead class="th table-sorted-asc table-sorted-desc">
-<tr>
-<th align="left" class="table-sortable:alphanumeric">Date/Time</th>
-<th align="left" class="filterable table-sortable:alphanumeric">Test Case<br><input name="tc_filter" size="10" onkeyup="Table.filter(this,this)" onclick="Table.cancelBubble(event)"></th>
-<th align="left" class="table-filterable table-sortable:alphanumeric">Status</th>
-<th align="left">Time (sec)</th>
-<th align="left">Info</th>
-<th align="left">Debug</th>
-</tr></thead>
-<tbody>
-"""
-    print >> output, result_table_prefix
-    for res in results:
-        print >> output, '<tr>'
-        print >> output, '<td align="left">%s</td>' % res['time']
-        print >> output, '<td align="left">%s</td>' % res['testcase']
-        if res['status'] == 'GOOD':
-            print >> output, '<td align=\"left\"><b><font color="#00CC00">PASS</font></b></td>'
-        elif res['status'] == 'FAIL':
-            print >> output, '<td align=\"left\"><b><font color="red">FAIL</font></b></td>'
-        elif res['status'] == 'ERROR':
-            print >> output, '<td align=\"left\"><b><font color="red">ERROR!</font></b></td>'
-        else:
-            print >> output, '<td align=\"left\">%s</td>' % res['status']
-        # print exec time (seconds)
-        print >> output, '<td align="left">%s</td>' % res['exec_time_sec']
-        # print log only if the test failed
-        if res['log']:
-            # collapse whitespace runs (including '\n') to prevent html errors
-            rx1 = re.compile('(\s+)')
-            log_text = rx1.sub(' ', res['log'])
-
-            # allow only a-zA-Z0-9_ in html title name
-            # (due to bug in MS-explorer)
-            rx2 = re.compile('([^a-zA-Z_0-9])')
-            updated_tag = rx2.sub('_', res['title'])
-
-            html_body_text = '<html><head><title>%s</title></head><body>%s</body></html>' % (str(updated_tag), log_text)
-            print >> output, '<td align=\"left\"><A HREF=\"#\" onClick=\"popup(\'%s\',\'%s\')\">Info</A></td>' % (str(updated_tag), str(html_body_text))
-        else:
-            print >> output, '<td align=\"left\"></td>'
-        # print link to the debug directory
-        print >> output, '<td align="left"><A HREF=\"%s\">Debug</A></td>' % os.path.join(dirname, res['title'], "debug")
-
-        print >> output, '</tr>'
-    print >> output, "</tbody></table>"
-
-
-    print >> output, '<h2 id=\"page_sub_title\">Host Info</h2>'
-    print >> output, '<h2 id=\"comment\">click on each item to expand/collapse</h2>'
-    ## Meta list comes here..
-    print >> output, '<p>'
-    print >> output, '<A href="#" class="button" onClick="expandTree(\'meta_tree\');return false;">Expand All</A>'
-    print >> output, '&nbsp;&nbsp;&nbsp;'
-    print >> output, '<A class="button" href="#" onClick="collapseTree(\'meta_tree\'); return false;">Collapse All</A>'
-    print >> output, '</p>'
-
-    print >> output, '<ul class="mktree" id="meta_tree">'
-    counter = 0
-    keys = metadata.keys()
-    keys.sort()
-    for key in keys:
-        val = metadata[key]
-        print >> output, '<li id=\"meta_headline\">%s' % key
-        print >> output, '<ul><table class="meta_table"><tr><td align="left">%s</td></tr></table></ul></li>' % val
-    print >> output, '</ul>'
-
-    print >> output, "</body></html>"
-    if output_file_name:
-        output.close()
-
-
-def parse_result(dirname, line):
-    parts = line.split()
-    if len(parts) < 4:
-        return None
-    global stimelist
-    if parts[0] == 'START':
-        pair = parts[3].split('=')
-        stime = int(pair[1])
-        stimelist.append(stime)
-
-    elif (parts[0] == 'END'):
-        result = {}
-        exec_time = ''
-        # fetch time stamp
-        if len(parts) > 7:
-            temp = parts[5].split('=')
-            exec_time = temp[1] + ' ' + parts[6] + ' ' + parts[7]
-        # assign default values
-        result['time'] = exec_time
-        result['testcase'] = 'na'
-        result['status'] = 'na'
-        result['log'] = None
-        result['exec_time_sec'] = 'na'
-        tag = parts[3]
-
-        # assign actual values
-        rx = re.compile('^(\w+)\.(.*)$')
-        m1 = rx.findall(parts[3])
-        result['testcase'] = m1[0][1]
-        result['title'] = str(tag)
-        result['status'] = parts[1]
-        if result['status'] != 'GOOD':
-            result['log'] = get_exec_log(dirname, tag)
-        if len(stimelist)>0:
-            pair = parts[4].split('=')
-            etime = int(pair[1])
-            stime = stimelist.pop()
-            total_exec_time_sec = etime - stime
-            result['exec_time_sec'] = total_exec_time_sec
-        return result
-    return None
-
-
-def get_exec_log(resdir, tag):
-    stdout_file = os.path.join(resdir, tag) + '/debug/stdout'
-    stderr_file = os.path.join(resdir, tag) + '/debug/stderr'
-    status_file = os.path.join(resdir, tag) + '/status'
-    dmesg_file = os.path.join(resdir, tag) + '/sysinfo/dmesg'
-    log = ''
-    log += '<br><b>STDERR:</b><br>'
-    log += get_info_file(stderr_file)
-    log += '<br><b>STDOUT:</b><br>'
-    log += get_info_file(stdout_file)
-    log += '<br><b>STATUS:</b><br>'
-    log += get_info_file(status_file)
-    log += '<br><b>DMESG:</b><br>'
-    log += get_info_file(dmesg_file)
-    return log
-
-
-def get_info_file(filename):
-    data = ''
-    errors = re.compile(r"\b(error|fail|failed)\b", re.IGNORECASE)
-    if os.path.isfile(filename):
-        f = open('%s' % filename, "r")
-        lines = f.readlines()
-        f.close()
-        rx = re.compile('(\'|\")')
-        for line in lines:
-            new_line = rx.sub('', line)
-            errors_found = errors.findall(new_line)
-            if len(errors_found) > 0:
-                data += '<font color=red>%s</font><br>' % str(new_line)
-            else:
-                data += '%s<br>' % str(new_line)
-        if not data:
-            data = 'No Information Found.<br>'
-    else:
-        data = 'File not found.<br>'
-    return data
-
-
-
-def usage():
-    print 'usage:',
-    print 'make_html_report.py -r <result_directory> [-f output_file] [-R]'
-    print '(e.g. make_html_report.py -r '\
-          '/usr/local/autotest/client/results/default -f /tmp/myreport.html)'
-    print 'add "-R" for an html report with relative-paths (relative '\
-          'to results directory)'
-    print ''
-    sys.exit(1)
-
-
-def get_keyval_value(result_dir, key):
-    """
-    Return the value of the first appearance of key in any keyval file in
-    result_dir. If no appropriate line is found, return 'Unknown'.
-    """
-    keyval_pattern = os.path.join(result_dir, "kvm.*", "keyval")
-    keyval_lines = commands.getoutput(r"grep -h '\b%s\b.*=' %s"
-                                      % (key, keyval_pattern))
-    if not keyval_lines:
-        return "Unknown"
-    keyval_line = keyval_lines.splitlines()[0]
-    if key in keyval_line and "=" in keyval_line:
-        return keyval_line.split("=")[1].strip()
-    else:
-        return "Unknown"
-
-
-def get_kvm_version(result_dir):
-    """
-    Return an HTML string describing the KVM version.
-
-        @param result_dir: An Autotest job result dir
-    """
-    kvm_version = get_keyval_value(result_dir, "kvm_version")
-    kvm_userspace_version = get_keyval_value(result_dir,
-                                             "kvm_userspace_version")
-    return "Kernel: %s<br>Userspace: %s" % (kvm_version, kvm_userspace_version)
-
-
-def main(argv):
-    dirname = None
-    output_file_name = None
-    relative_path = False
-    try:
-        opts, args = getopt.getopt(argv, "r:f:hR", ['help'])
-    except getopt.GetoptError:
-        usage()
-        sys.exit(2)
-    for opt, arg in opts:
-        if opt in ("-h", "--help"):
-            usage()
-            sys.exit()
-        elif opt == '-r':
-            dirname = arg
-        elif opt == '-f':
-            output_file_name = arg
-        elif opt == '-R':
-            relative_path = True
-        else:
-            usage()
-            sys.exit(1)
-
-    html_path = dirname
-    # don't use absolute path in html output if relative flag passed
-    if relative_path:
-        html_path = ''
-
-    if dirname:
-        if os.path.isdir(dirname): # TBD: replace it with a validation of
-                                   # autotest result dir
-            res_dir = os.path.abspath(dirname)
-            tag = res_dir
-            status_file_name = dirname + '/status'
-            sysinfo_dir = dirname + '/sysinfo'
-            host = get_info_file('%s/hostname' % sysinfo_dir)
-            rx = re.compile(r'^\s+(END|START).*$')
-            # create the results set dict
-            results_data = []
-            if os.path.exists(status_file_name):
-                f = open(status_file_name, "r")
-                lines = f.readlines()
-                f.close()
-                for line in lines:
-                    if rx.match(line):
-                        result_dict = parse_result(dirname, line)
-                        if result_dict:
-                            results_data.append(result_dict)
-            # create the meta info dict
-            metalist = {
-                        'uname': get_info_file('%s/uname' % sysinfo_dir),
-                        'cpuinfo':get_info_file('%s/cpuinfo' % sysinfo_dir),
-                        'meminfo':get_info_file('%s/meminfo' % sysinfo_dir),
-                        'df':get_info_file('%s/df' % sysinfo_dir),
-                        'modules':get_info_file('%s/modules' % sysinfo_dir),
-                        'gcc':get_info_file('%s/gcc_--version' % sysinfo_dir),
-                        'dmidecode':get_info_file('%s/dmidecode' % sysinfo_dir),
-                        'dmesg':get_info_file('%s/dmesg' % sysinfo_dir),
-                        'kvmver':get_kvm_version(dirname)
-            }
-
-            make_html_file(metalist, results_data, tag, host, output_file_name,
-                           html_path)
-            sys.exit(0)
-        else:
-            print 'Invalid result directory <%s>' % dirname
-            sys.exit(1)
-    else:
-        usage()
-        sys.exit(1)
-
-
-if __name__ == "__main__":
-    main(sys.argv[1:])
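[Reviewer note: the error-highlighting idea in the removed get_info_file() is
worth keeping around. Below is a minimal, self-contained sketch of the same
approach in plain Python; the helper name highlight_log and the sample input
are illustrative, not part of the removed script.]

```python
import re

# Stand-in for the removed get_info_file(): strip quotes and wrap any line
# mentioning an error in a red <font> tag, as the deleted report code did.
ERROR_RE = re.compile(r"\b(error|fail|failed)\b", re.IGNORECASE)

def highlight_log(text):
    out = []
    for line in text.splitlines():
        # The original stripped single and double quotes before matching
        line = line.replace("'", "").replace('"', "")
        if ERROR_RE.search(line):
            out.append('<font color=red>%s</font><br>' % line)
        else:
            out.append('%s<br>' % line)
    if not out:
        return 'No Information Found.<br>'
    return "\n".join(out)

print(highlight_log("boot ok\ndisk FAILED to mount"))
```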
diff --git a/client/tests/kvm/installer.py b/client/tests/kvm/installer.py
deleted file mode 100644
index 6b2a6fe..0000000
--- a/client/tests/kvm/installer.py
+++ /dev/null
@@ -1,797 +0,0 @@
-import os, logging, datetime, glob
-import shutil
-from autotest_lib.client.bin import utils, os_dep
-from autotest_lib.client.common_lib import error
-import kvm_utils
-
-
-def check_configure_options(script_path):
-    """
-    Return the list of available options (flags) of a given kvm configure build
-    script.
-
-    @param script_path: Path to the configure script
-    """
-    abspath = os.path.abspath(script_path)
-    help_raw = utils.system_output('%s --help' % abspath, ignore_status=True)
-    help_output = help_raw.split("\n")
-    option_list = []
-    for line in help_output:
-        cleaned_line = line.lstrip()
-        if cleaned_line.startswith("--"):
-            option = cleaned_line.split()[0]
-            option = option.split("=")[0]
-            option_list.append(option)
-
-    return option_list
-
-
-def kill_qemu_processes():
-    """
-    Kill all qemu processes, as well as any processes holding /dev/kvm open.
-    """
-    logging.debug("Killing any qemu processes that might be left behind")
-    utils.system("pkill qemu", ignore_status=True)
-    # Let's double check to see if some other process is holding /dev/kvm
-    if os.path.exists("/dev/kvm"):
-        utils.system("fuser -k /dev/kvm", ignore_status=True)
-
-
-def cpu_vendor():
-    vendor = "intel"
-    if os.system("grep vmx /proc/cpuinfo 1>/dev/null") != 0:
-        vendor = "amd"
-    logging.debug("Detected CPU vendor as '%s'", vendor)
-    return vendor
-
-
-def _unload_kvm_modules(mod_list):
-    logging.info("Unloading previously loaded KVM modules")
-    for module in reversed(mod_list):
-        utils.unload_module(module)
-
-
-def _load_kvm_modules(mod_list, module_dir=None, load_stock=False):
-    """
-    Just load the KVM modules, without killing Qemu or unloading previous
-    modules.
-
-    Load modules present on any sub directory of module_dir. Function will walk
-    through module_dir until it finds the modules.
-
-    @param mod_list: List of module names to load.
-    @param module_dir: Directory where the KVM modules are located.
-    @param load_stock: Whether to load the system (stock) kernel modules.
-    """
-    if module_dir:
-        logging.info("Loading the built KVM modules...")
-        kvm_module_path = None
-        kvm_vendor_module_path = None
-        abort = False
-
-        list_modules = ['%s.ko' % (m) for m in mod_list]
-
-        list_module_paths = []
-        for folder, subdirs, files in os.walk(module_dir):
-            for module in list_modules:
-                if module in files:
-                    module_path = os.path.join(folder, module)
-                    list_module_paths.append(module_path)
-
-        # We might need to arrange the modules in the correct order
-        # to avoid module load problems
-        list_modules_load = []
-        for module in list_modules:
-            for module_path in list_module_paths:
-                if os.path.basename(module_path) == module:
-                    list_modules_load.append(module_path)
-
-        if len(list_module_paths) != len(list_modules):
-            logging.error("KVM modules not found. If you don't want to use the "
-                          "modules built by this test, make sure the option "
-                          "load_modules: 'no' is marked on the test control "
-                          "file.")
-            raise error.TestError("The modules %s were requested to be loaded, "
-                                  "but the only modules found were %s" %
-                                  (list_modules, list_module_paths))
-
-        for module_path in list_modules_load:
-            try:
-                utils.system("insmod %s" % module_path)
-            except Exception, e:
-                raise error.TestFail("Failed to load KVM modules: %s" % e)
-
-    if load_stock:
-        logging.info("Loading current system KVM modules...")
-        for module in mod_list:
-            utils.system("modprobe %s" % module)
-
-
-def create_symlinks(test_bindir, prefix=None, bin_list=None, unittest=None):
-    """
-    Create symbolic links for the appropriate qemu and qemu-img commands on
-    the kvm test bindir.
-
-    @param test_bindir: KVM test bindir
-    @param prefix: KVM prefix path
-    @param bin_list: List of qemu binaries to link
-    @param unittest: Path to configuration file unittests.cfg
-    """
-    qemu_path = os.path.join(test_bindir, "qemu")
-    qemu_img_path = os.path.join(test_bindir, "qemu-img")
-    qemu_unittest_path = os.path.join(test_bindir, "unittests")
-    if os.path.lexists(qemu_path):
-        os.unlink(qemu_path)
-    if os.path.lexists(qemu_img_path):
-        os.unlink(qemu_img_path)
-    if unittest and os.path.lexists(qemu_unittest_path):
-        os.unlink(qemu_unittest_path)
-
-    logging.debug("Linking qemu binaries")
-
-    if bin_list:
-        for bin in bin_list:
-            if os.path.basename(bin) == 'qemu-kvm':
-                os.symlink(bin, qemu_path)
-            elif os.path.basename(bin) == 'qemu-img':
-                os.symlink(bin, qemu_img_path)
-
-    elif prefix:
-        kvm_qemu = os.path.join(prefix, "bin", "qemu-system-x86_64")
-        if not os.path.isfile(kvm_qemu):
-            raise error.TestError('Invalid qemu path')
-        kvm_qemu_img = os.path.join(prefix, "bin", "qemu-img")
-        if not os.path.isfile(kvm_qemu_img):
-            raise error.TestError('Invalid qemu-img path')
-        os.symlink(kvm_qemu, qemu_path)
-        os.symlink(kvm_qemu_img, qemu_img_path)
-
-    if unittest:
-        logging.debug("Linking unittest dir")
-        os.symlink(unittest, qemu_unittest_path)
-
-
-def install_roms(rom_dir, prefix):
-    logging.debug("Path to roms specified. Copying roms to install prefix")
-    rom_dst_dir = os.path.join(prefix, 'share', 'qemu')
-    for rom_src in glob.glob('%s/*.bin' % rom_dir):
-        rom_dst = os.path.join(rom_dst_dir, os.path.basename(rom_src))
-        logging.debug("Copying rom file %s to %s", rom_src, rom_dst)
-        shutil.copy(rom_src, rom_dst)
-
-
-def save_build(build_dir, dest_dir):
-    logging.debug('Saving the result of the build on %s', dest_dir)
-    base_name = os.path.basename(build_dir)
-    tarball_name = base_name + '.tar.bz2'
-    os.chdir(os.path.dirname(build_dir))
-    utils.system('tar -cjf %s %s' % (tarball_name, base_name))
-    shutil.move(tarball_name, os.path.join(dest_dir, tarball_name))
-
-
-class KvmInstallException(Exception):
-    pass
-
-
-class FailedKvmInstall(KvmInstallException):
-    pass
-
-
-class KvmNotInstalled(KvmInstallException):
-    pass
-
-
-class BaseInstaller(object):
-    # default value for load_stock argument
-    load_stock_modules = True
-    def __init__(self, mode=None):
-        self.install_mode = mode
-        self._full_module_list = None
-
-    def set_install_params(self, test, params):
-        self.params = params
-
-        load_modules = params.get('load_modules', 'no')
-        if not load_modules or load_modules == 'yes':
-            self.should_load_modules = True
-        elif load_modules == 'no':
-            self.should_load_modules = False
-        default_extra_modules = str(None)
-        self.extra_modules = eval(params.get("extra_modules",
-                                             default_extra_modules))
-
-        self.cpu_vendor = cpu_vendor()
-
-        self.srcdir = test.srcdir
-        if not os.path.isdir(self.srcdir):
-            os.makedirs(self.srcdir)
-
-        self.test_bindir = test.bindir
-        self.results_dir = test.resultsdir
-
-        # KVM build prefix, for the modes that do need it
-        prefix = os.path.join(test.bindir, 'build')
-        self.prefix = os.path.abspath(prefix)
-
-        # Current host kernel directory
-        default_host_kernel_source = '/lib/modules/%s/build' % os.uname()[2]
-        self.host_kernel_srcdir = params.get('host_kernel_source',
-                                             default_host_kernel_source)
-
-        # Extra parameters that can be passed to the configure script
-        self.extra_configure_options = params.get('extra_configure_options',
-                                                  None)
-
-        # Do we want to save the result of the build on test.resultsdir?
-        self.save_results = True
-        save_results = params.get('save_results', 'no')
-        if save_results == 'no':
-            self.save_results = False
-
-        self._full_module_list = list(self._module_list())
-
-
-    def install_unittests(self):
-        userspace_srcdir = os.path.join(self.srcdir, "kvm_userspace")
-        test_repo = self.params.get("test_git_repo")
-        test_branch = self.params.get("test_branch", "master")
-        test_commit = self.params.get("test_commit", None)
-        test_lbranch = self.params.get("test_lbranch", "master")
-
-        if test_repo:
-            test_srcdir = os.path.join(self.srcdir, "kvm-unit-tests")
-            kvm_utils.get_git_branch(test_repo, test_branch, test_srcdir,
-                                     test_commit, test_lbranch)
-            unittest_cfg = os.path.join(test_srcdir, 'x86',
-                                        'unittests.cfg')
-            self.test_srcdir = test_srcdir
-        else:
-            unittest_cfg = os.path.join(userspace_srcdir, 'kvm', 'test', 'x86',
-                                        'unittests.cfg')
-        self.unittest_cfg = None
-        if os.path.isfile(unittest_cfg):
-            self.unittest_cfg = unittest_cfg
-        else:
-            if test_repo:
-                logging.error("No unittest config file %s found, skipping "
-                              "unittest build", unittest_cfg)
-
-        self.unittest_prefix = None
-        if self.unittest_cfg:
-            logging.info("Building and installing unittests")
-            os.chdir(os.path.dirname(os.path.dirname(self.unittest_cfg)))
-            utils.system('./configure --prefix=%s' % self.prefix)
-            utils.system('make')
-            utils.system('make install')
-            self.unittest_prefix = os.path.join(self.prefix, 'share', 'qemu',
-                                                'tests')
-
-
-    def full_module_list(self):
-        """Return the module list used by the installer
-
-        Used by the module_probe test, to avoid using utils.unload_module().
-        """
-        if self._full_module_list is None:
-            raise KvmNotInstalled("KVM modules not installed yet (installer: %s)" % (type(self)))
-        return self._full_module_list
-
-
-    def _module_list(self):
-        """Generate the list of modules that need to be loaded
-        """
-        yield 'kvm'
-        yield 'kvm-%s' % (self.cpu_vendor)
-        if self.extra_modules:
-            for module in self.extra_modules:
-                yield module
-
-
-    def _load_modules(self, mod_list):
-        """
-        Load the KVM modules
-
-        May be overridden by subclasses.
-        """
-        _load_kvm_modules(mod_list, load_stock=self.load_stock_modules)
-
-
-    def load_modules(self, mod_list=None):
-        if mod_list is None:
-            mod_list = self.full_module_list()
-        self._load_modules(mod_list)
-
-
-    def _unload_modules(self, mod_list=None):
-        """
-        Just unload the KVM modules, without trying to kill Qemu
-        """
-        if mod_list is None:
-            mod_list = self.full_module_list()
-        _unload_kvm_modules(mod_list)
-
-
-    def unload_modules(self, mod_list=None):
-        """
-        Kill Qemu and unload the KVM modules
-        """
-        kill_qemu_processes()
-        self._unload_modules(mod_list)
-
-
-    def reload_modules(self):
-        """
-        Reload the KVM modules after killing Qemu and unloading the current modules
-        """
-        self.unload_modules()
-        self.load_modules()
-
-
-    def reload_modules_if_needed(self):
-        if self.should_load_modules:
-            self.reload_modules()
-
-
-class YumInstaller(BaseInstaller):
-    """
-    Class that uses yum to install and remove packages.
-    """
-    load_stock_modules = True
-    def set_install_params(self, test, params):
-        super(YumInstaller, self).set_install_params(test, params)
-        # Checking if all required dependencies are available
-        os_dep.command("rpm")
-        os_dep.command("yum")
-
-        default_pkg_list = str(['qemu-kvm', 'qemu-kvm-tools'])
-        default_qemu_bin_paths = str(['/usr/bin/qemu-kvm', '/usr/bin/qemu-img'])
-        default_pkg_path_list = str(None)
-        self.pkg_list = eval(params.get("pkg_list", default_pkg_list))
-        self.pkg_path_list = eval(params.get("pkg_path_list",
-                                             default_pkg_path_list))
-        self.qemu_bin_paths = eval(params.get("qemu_bin_paths",
-                                              default_qemu_bin_paths))
-
-
-    def _clean_previous_installs(self):
-        kill_qemu_processes()
-        removable_packages = ""
-        for pkg in self.pkg_list:
-            removable_packages += " %s" % pkg
-
-        utils.system("yum remove -y %s" % removable_packages)
-
-
-    def _get_packages(self):
-        for pkg in self.pkg_path_list:
-            utils.get_file(pkg, os.path.join(self.srcdir,
-                                             os.path.basename(pkg)))
-
-
-    def _install_packages(self):
-        """
-        Install all downloaded packages.
-        """
-        os.chdir(self.srcdir)
-        utils.system("yum install --nogpgcheck -y *.rpm")
-
-
-    def install(self):
-        self.install_unittests()
-        self._clean_previous_installs()
-        self._get_packages()
-        self._install_packages()
-        create_symlinks(test_bindir=self.test_bindir,
-                        bin_list=self.qemu_bin_paths,
-                        unittest=self.unittest_prefix)
-        self.reload_modules_if_needed()
-        if self.save_results:
-            save_build(self.srcdir, self.results_dir)
-
-
-class KojiInstaller(YumInstaller):
-    """
-    Class that handles installing KVM from the fedora build service, koji.
-    It uses yum to install and remove packages.
-    """
-    load_stock_modules = True
-    def set_install_params(self, test, params):
-        """
-        Gets parameters and initializes the package downloader.
-
-        @param test: kvm test object
-        @param params: Dictionary with test arguments
-        """
-        super(KojiInstaller, self).set_install_params(test, params)
-        default_koji_cmd = '/usr/bin/koji'
-        default_src_pkg = 'qemu'
-        self.src_pkg = params.get("src_pkg", default_src_pkg)
-        self.tag = params.get("koji_tag", None)
-        self.build = params.get("koji_build", None)
-        self.koji_cmd = params.get("koji_cmd", default_koji_cmd)
-
-
-    def _get_packages(self):
-        """
-        Downloads the specific arch RPMs for the specific build name.
-        """
-        downloader = kvm_utils.KojiDownloader(cmd=self.koji_cmd)
-        downloader.get(src_package=self.src_pkg, tag=self.tag,
-                            build=self.build, dst_dir=self.srcdir)
-
-
-    def install(self):
-        super(KojiInstaller, self)._clean_previous_installs()
-        self._get_packages()
-        super(KojiInstaller, self)._install_packages()
-        self.install_unittests()
-        create_symlinks(test_bindir=self.test_bindir,
-                        bin_list=self.qemu_bin_paths,
-                        unittest=self.unittest_prefix)
-        self.reload_modules_if_needed()
-        if self.save_results:
-            save_build(self.srcdir, self.results_dir)
-
-
-class SourceDirInstaller(BaseInstaller):
-    """
-    Class that handles building/installing KVM directly from a tarball or
-    a single source code dir.
-    """
-    def set_install_params(self, test, params):
-        """
-        Initializes class attributes, and retrieves KVM code.
-
-        @param test: kvm test object
-        @param params: Dictionary with test arguments
-        """
-        super(SourceDirInstaller, self).set_install_params(test, params)
-
-        self.mod_install_dir = os.path.join(self.prefix, 'modules')
-        self.installed_kmods = False  # it will be set to True in case we
-                                      # installed our own modules
-
-        srcdir = params.get("srcdir", None)
-        self.path_to_roms = params.get("path_to_rom_images", None)
-
-        if self.install_mode == 'localsrc':
-            if srcdir is None:
-                raise error.TestError("Install from source directory specified "
-                                      "but no source directory provided on "
-                                      "the control file.")
-            else:
-                shutil.copytree(srcdir, self.srcdir)
-
-        if self.install_mode == 'release':
-            release_tag = params.get("release_tag")
-            release_dir = params.get("release_dir")
-            release_listing = params.get("release_listing")
-            logging.info("Installing KVM from release tarball")
-            if not release_tag:
-                release_tag = kvm_utils.get_latest_kvm_release_tag(
-                                                                release_listing)
-            tarball = os.path.join(release_dir, 'kvm', release_tag,
-                                   "kvm-%s.tar.gz" % release_tag)
-            logging.info("Retrieving release kvm-%s" % release_tag)
-            tarball = utils.unmap_url("/", tarball, "/tmp")
-
-        elif self.install_mode == 'snapshot':
-            logging.info("Installing KVM from snapshot")
-            snapshot_dir = params.get("snapshot_dir")
-            if not snapshot_dir:
-                raise error.TestError("Snapshot dir not provided")
-            snapshot_date = params.get("snapshot_date")
-            if not snapshot_date:
-                # Take yesterday's snapshot
-                d = (datetime.date.today() -
-                     datetime.timedelta(1)).strftime("%Y%m%d")
-            else:
-                d = snapshot_date
-            tarball = os.path.join(snapshot_dir, "kvm-snapshot-%s.tar.gz" % d)
-            logging.info("Retrieving kvm-snapshot-%s" % d)
-            tarball = utils.unmap_url("/", tarball, "/tmp")
-
-        elif self.install_mode == 'localtar':
-            tarball = params.get("tarball")
-            if not tarball:
-                raise error.TestError("KVM Tarball install specified but no"
-                                      " tarball provided on control file.")
-            logging.info("Installing KVM from a local tarball")
-            logging.info("Using tarball %s", tarball)
-            tarball = utils.unmap_url("/", params.get("tarball"), "/tmp")
-
-        if self.install_mode in ['release', 'snapshot', 'localtar']:
-            utils.extract_tarball_to_dir(tarball, self.srcdir)
-
-        if self.install_mode in ['release', 'snapshot', 'localtar', 'srcdir']:
-            self.repo_type = kvm_utils.check_kvm_source_dir(self.srcdir)
-            configure_script = os.path.join(self.srcdir, 'configure')
-            self.configure_options = check_configure_options(configure_script)
-
-
-    def _build(self):
-        make_jobs = utils.count_cpus()
-        os.chdir(self.srcdir)
-        # For testing purposes, it's better to build qemu binaries with
-        # debugging symbols, so we can extract more meaningful stack traces.
-        cfg = "./configure --prefix=%s" % self.prefix
-        if "--disable-strip" in self.configure_options:
-            cfg += " --disable-strip"
-        steps = [cfg, "make clean", "make -j %s" % make_jobs]
-        logging.info("Building KVM")
-        for step in steps:
-            utils.system(step)
-
-
-    def _install_kmods_old_userspace(self, userspace_path):
-        """
-        Run the module install command.
-
-        This is for the "old userspace" code, which contained a 'kernel'
-        subdirectory with the kmod build code.
-
-        The code would be much simpler if we could specify the module install
-        path as parameter to the toplevel Makefile. As we can't do that and
-        the module install code doesn't use --prefix, we have to call
-        'make -C kernel install' directly, setting the module directory
-        parameters.
-
-        If the userspace tree doesn't have a 'kernel' subdirectory, the
-        module install step will be skipped.
-
-        @param userspace_path: the path to the kvm-userspace directory
-        """
-        kdir = os.path.join(userspace_path, 'kernel')
-        if os.path.isdir(kdir):
-            os.chdir(kdir)
-            # INSTALLDIR is the target dir for the modules
-            # ORIGMODDIR is the dir where the old modules will be removed. we
-            #            don't want to mess with the system modules, so set it
-            #            to a non-existing directory
-            utils.system('make install INSTALLDIR=%s ORIGMODDIR=/tmp/no-old-modules' % (self.mod_install_dir))
-            self.installed_kmods = True
-
-
-    def _install_kmods(self, kmod_path):
-        """Run the module install command for the kmod-kvm repository
-
-        @param kmod_path: the path to the kmod-kvm.git working copy
-        """
-        os.chdir(kmod_path)
-        utils.system('make modules_install DESTDIR=%s' % (self.mod_install_dir))
-        self.installed_kmods = True
-
-
-    def _install(self):
-        os.chdir(self.srcdir)
-        logging.info("Installing KVM userspace")
-        if self.repo_type == 1:
-            utils.system("make -C qemu install")
-            self._install_kmods_old_userspace(self.srcdir)
-        elif self.repo_type == 2:
-            utils.system("make install")
-        if self.path_to_roms:
-            install_roms(self.path_to_roms, self.prefix)
-        self.install_unittests()
-        create_symlinks(test_bindir=self.test_bindir,
-                        prefix=self.prefix,
-                        unittest=self.unittest_prefix)
-
-
-    def _load_modules(self, mod_list):
-        # load the installed KVM modules in case we installed them
-        # ourselves. Otherwise, just load the system modules.
-        if self.installed_kmods:
-            logging.info("Loading installed KVM modules")
-            _load_kvm_modules(mod_list, module_dir=self.mod_install_dir)
-        else:
-            logging.info("Loading stock KVM modules")
-            _load_kvm_modules(mod_list, load_stock=True)
-
-
-    def install(self):
-        self._build()
-        self._install()
-        self.reload_modules_if_needed()
-        if self.save_results:
-            save_build(self.srcdir, self.results_dir)
-
-
-class GitInstaller(SourceDirInstaller):
-    def _pull_code(self):
-        """
-        Retrieves code from git repositories.
-        """
-        params = self.params
-
-        kernel_repo = params.get("git_repo")
-        user_repo = params.get("user_git_repo")
-        kmod_repo = params.get("kmod_repo")
-
-        kernel_branch = params.get("kernel_branch", "master")
-        user_branch = params.get("user_branch", "master")
-        kmod_branch = params.get("kmod_branch", "master")
-
-        kernel_lbranch = params.get("kernel_lbranch", "master")
-        user_lbranch = params.get("user_lbranch", "master")
-        kmod_lbranch = params.get("kmod_lbranch", "master")
-
-        kernel_commit = params.get("kernel_commit", None)
-        user_commit = params.get("user_commit", None)
-        kmod_commit = params.get("kmod_commit", None)
-
-        kernel_patches = eval(params.get("kernel_patches", "[]"))
-        user_patches = eval(params.get("user_patches", "[]"))
-        kmod_patches = eval(params.get("kmod_patches", "[]"))
-
-        if not user_repo:
-            message = "KVM user git repository path not specified"
-            logging.error(message)
-            raise error.TestError(message)
-
-        userspace_srcdir = os.path.join(self.srcdir, "kvm_userspace")
-        kvm_utils.get_git_branch(user_repo, user_branch, userspace_srcdir,
-                                 user_commit, user_lbranch)
-        self.userspace_srcdir = userspace_srcdir
-
-        if user_patches:
-            os.chdir(self.userspace_srcdir)
-            for patch in user_patches:
-                utils.get_file(patch, os.path.join(self.userspace_srcdir,
-                                                   os.path.basename(patch)))
-                utils.system('patch -p1 %s' % os.path.basename(patch))
-
-        if kernel_repo:
-            kernel_srcdir = os.path.join(self.srcdir, "kvm")
-            kvm_utils.get_git_branch(kernel_repo, kernel_branch, kernel_srcdir,
-                                     kernel_commit, kernel_lbranch)
-            self.kernel_srcdir = kernel_srcdir
-            if kernel_patches:
-                os.chdir(self.kernel_srcdir)
-                for patch in kernel_patches:
-                    utils.get_file(patch, os.path.join(self.kernel_srcdir,
-                                                       os.path.basename(patch)))
-                    utils.system('patch -p1 %s' % os.path.basename(patch))
-        else:
-            self.kernel_srcdir = None
-
-        if kmod_repo:
-            kmod_srcdir = os.path.join(self.srcdir, "kvm_kmod")
-            kvm_utils.get_git_branch(kmod_repo, kmod_branch, kmod_srcdir,
-                                     kmod_commit, kmod_lbranch)
-            self.kmod_srcdir = kmod_srcdir
-            if kmod_patches:
-                os.chdir(self.kmod_srcdir)
-                for patch in kmod_patches:
-                    utils.get_file(patch, os.path.join(self.kmod_srcdir,
-                                                       os.path.basename(patch)))
-                    utils.system('patch -p1 %s' % os.path.basename(patch))
-        else:
-            self.kmod_srcdir = None
-
-        configure_script = os.path.join(self.userspace_srcdir, 'configure')
-        self.configure_options = check_configure_options(configure_script)
-
-
-    def _build(self):
-        make_jobs = utils.count_cpus()
-        cfg = './configure'
-        if self.kmod_srcdir:
-            logging.info('Building KVM modules')
-            os.chdir(self.kmod_srcdir)
-            module_build_steps = [cfg,
-                                  'make clean',
-                                  'make sync LINUX=%s' % self.kernel_srcdir,
-                                  'make']
-        elif self.kernel_srcdir:
-            logging.info('Building KVM modules')
-            os.chdir(self.userspace_srcdir)
-            cfg += ' --kerneldir=%s' % self.host_kernel_srcdir
-            module_build_steps = [cfg,
-                            'make clean',
-                            'make -C kernel LINUX=%s sync' % self.kernel_srcdir]
-        else:
-            module_build_steps = []
-
-        for step in module_build_steps:
-            utils.run(step)
-
-        logging.info('Building KVM userspace code')
-        os.chdir(self.userspace_srcdir)
-        cfg += ' --prefix=%s' % self.prefix
-        if "--disable-strip" in self.configure_options:
-            cfg += ' --disable-strip'
-        if self.extra_configure_options:
-            cfg += ' %s' % self.extra_configure_options
-        utils.system(cfg)
-        utils.system('make clean')
-        utils.system('make -j %s' % make_jobs)
-
-
-    def _install(self):
-        if self.kernel_srcdir:
-            os.chdir(self.userspace_srcdir)
-            # the kernel module install with --prefix doesn't work, and DESTDIR
-            # wouldn't work for the userspace stuff, so we clear WANT_MODULE:
-            utils.system('make install WANT_MODULE=')
-            # and install the old-style-kmod modules manually:
-            self._install_kmods_old_userspace(self.userspace_srcdir)
-        elif self.kmod_srcdir:
-            # if we have a kmod repository, it is easier:
-            # 1) install userspace:
-            os.chdir(self.userspace_srcdir)
-            utils.system('make install')
-            # 2) install kmod:
-            self._install_kmods(self.kmod_srcdir)
-        else:
-            # if we don't have kmod sources, we just install
-            # userspace:
-            os.chdir(self.userspace_srcdir)
-            utils.system('make install')
-
-        if self.path_to_roms:
-            install_roms(self.path_to_roms, self.prefix)
-        self.install_unittests()
-        create_symlinks(test_bindir=self.test_bindir, prefix=self.prefix,
-                        bin_list=None,
-                        unittest=self.unittest_prefix)
-
-
-    def install(self):
-        self._pull_code()
-        self._build()
-        self._install()
-        self.reload_modules_if_needed()
-        if self.save_results:
-            save_build(self.srcdir, self.results_dir)
-
-
-class PreInstalledKvm(BaseInstaller):
-    # load_modules() will use the stock modules:
-    load_stock_modules = True
-    def install(self):
-        logging.info("Expecting KVM to be already installed. Doing nothing")
-
-
-class FailedInstaller:
-    """
-    Class returned instead of the installer if an installation fails
-
-    Useful to make sure no installer object is used if KVM installation fails.
-    """
-    def __init__(self, msg="KVM install failed"):
-        self._msg = msg
-
-
-    def load_modules(self):
-        """Will refuse to load the KVM modules as install failed"""
-        raise FailedKvmInstall("KVM modules not available. reason: %s" % (self._msg))
-
-
-installer_classes = {
-    'localsrc': SourceDirInstaller,
-    'localtar': SourceDirInstaller,
-    'release': SourceDirInstaller,
-    'snapshot': SourceDirInstaller,
-    'git': GitInstaller,
-    'yum': YumInstaller,
-    'koji': KojiInstaller,
-    'preinstalled': PreInstalledKvm,
-}
-
-
-def _installer_class(install_mode):
-    c = installer_classes.get(install_mode)
-    if c is None:
-        raise error.TestError('Invalid or unsupported'
-                              ' install mode: %s' % install_mode)
-    return c
-
-
-def make_installer(params):
-    # priority:
-    # - 'install_mode' param
-    # - 'mode' param
-    mode = params.get("install_mode", params.get("mode"))
-    klass = _installer_class(mode)
-    return klass(mode)
diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
deleted file mode 100755
index 4dbb1d4..0000000
--- a/client/tests/kvm/kvm_config.py
+++ /dev/null
@@ -1,698 +0,0 @@
-#!/usr/bin/python
-"""
-KVM test configuration file parser
-
-@copyright: Red Hat 2008-2011
-"""
-
-import re, os, sys, optparse, collections
-
-
-# Filter syntax:
-# , means OR
-# .. means AND
-# . means IMMEDIATELY-FOLLOWED-BY
-
-# Example:
-# qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
-# means match all dicts whose names have:
-# (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
-# ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
-# (smp2 AND qcow2 AND migrate AND ide)
-
-# Note:
-# 'qcow2..Fedora.14' is equivalent to 'Fedora.14..qcow2'.
-# 'qcow2..Fedora.14' is not equivalent to 'qcow2..14.Fedora'.
-# 'ide, scsi' is equivalent to 'scsi, ide'.
-
-# Filters can be used in 3 ways:
-# only <filter>
-# no <filter>
-# <filter>:
-# The last one starts a conditional block.
-
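[Editor's note, not part of the patch: the comma/dot filter grammar above can be illustrated with a standalone sketch that tokenizes a filter string exactly the way `Filter.__init__` below does — `,` separates OR'ed words, `..` separates AND'ed blocks, and `.` separates IMMEDIATELY-FOLLOWED-BY elements inside a block.]

```python
def tokenize_filter(s):
    """Split a filter string into words (OR) of blocks (AND) of elements."""
    words = []
    for word in s.replace(",", " ").split():
        # Each word becomes a list of blocks; each block a list of elements.
        words.append([block.split(".") for block in word.split("..")])
    return words

f = tokenize_filter("qcow2..Fedora.14, RHEL.6..raw")
# f[0] -> [['qcow2'], ['Fedora', '14']]   (qcow2 AND Fedora-followed-by-14)
# f[1] -> [['RHEL', '6'], ['raw']]        (RHEL-followed-by-6 AND raw)
```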
-
-class ParserError:
-    def __init__(self, msg, line=None, filename=None, linenum=None):
-        self.msg = msg
-        self.line = line
-        self.filename = filename
-        self.linenum = linenum
-
-    def __str__(self):
-        if self.line:
-            return "%s: %r (%s:%s)" % (self.msg, self.line,
-                                       self.filename, self.linenum)
-        else:
-            return "%s (%s:%s)" % (self.msg, self.filename, self.linenum)
-
-
-num_failed_cases = 5
-
-
-class Node(object):
-    def __init__(self):
-        self.name = []
-        self.dep = []
-        self.content = []
-        self.children = []
-        self.labels = set()
-        self.append_to_shortname = False
-        self.failed_cases = collections.deque()
-
-
-def _match_adjacent(block, ctx, ctx_set):
-    # Return the number of consecutive elements of 'block' found adjacently,
-    # in order, in 'ctx' (0 if block[0] is not in ctx_set at all)
-    if block[0] not in ctx_set:
-        return 0
-    if len(block) == 1:
-        return 1
-    if block[1] not in ctx_set:
-        return int(ctx[-1] == block[0])
-    k = 0
-    i = ctx.index(block[0])
-    while i < len(ctx):
-        if k > 0 and ctx[i] != block[k]:
-            i -= k - 1
-            k = 0
-        if ctx[i] == block[k]:
-            k += 1
-            if k >= len(block):
-                break
-            if block[k] not in ctx_set:
-                break
-        i += 1
-    return k
-
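[Editor's note, not part of the patch: the adjacency matcher's behavior is easiest to see on a concrete context. The sketch below is a standalone copy of `_match_adjacent` with example calls; the context values are hypothetical.]

```python
def match_adjacent(block, ctx, ctx_set):
    # Count how many consecutive elements of 'block' appear adjacently,
    # in order, inside 'ctx'.
    if block[0] not in ctx_set:
        return 0
    if len(block) == 1:
        return 1
    if block[1] not in ctx_set:
        return int(ctx[-1] == block[0])
    k = 0
    i = ctx.index(block[0])
    while i < len(ctx):
        if k > 0 and ctx[i] != block[k]:
            i -= k - 1
            k = 0
        if ctx[i] == block[k]:
            k += 1
            if k >= len(block):
                break
            if block[k] not in ctx_set:
                break
        i += 1
    return k

ctx = ["smp2", "Fedora", "14", "qcow2"]
match_adjacent(["Fedora", "14"], ctx, set(ctx))   # -> 2 (full adjacent match)
match_adjacent(["qcow2", "Fedora"], ctx, set(ctx))  # -> 1 (Fedora doesn't follow)
match_adjacent(["ide"], ctx, set(ctx))            # -> 0 (not in context)
```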
-
-def _might_match_adjacent(block, ctx, ctx_set, descendant_labels):
-    matched = _match_adjacent(block, ctx, ctx_set)
-    for elem in block[matched:]:
-        if elem not in descendant_labels:
-            return False
-    return True
-
-
-# Filter must inherit from object (otherwise type() won't work)
-class Filter(object):
-    def __init__(self, s):
-        self.filter = []
-        for char in s:
-            if not (char.isalnum() or char.isspace() or char in ".,_-"):
-                raise ParserError("Illegal characters in filter")
-        for word in s.replace(",", " ").split():
-            word = [block.split(".") for block in word.split("..")]
-            for block in word:
-                for elem in block:
-                    if not elem:
-                        raise ParserError("Syntax error")
-            self.filter += [word]
-
-
-    def match(self, ctx, ctx_set):
-        for word in self.filter:
-            for block in word:
-                if _match_adjacent(block, ctx, ctx_set) != len(block):
-                    break
-            else:
-                return True
-        return False
-
-
-    def might_match(self, ctx, ctx_set, descendant_labels):
-        for word in self.filter:
-            for block in word:
-                if not _might_match_adjacent(block, ctx, ctx_set,
-                                             descendant_labels):
-                    break
-            else:
-                return True
-        return False
-
-
-class NoOnlyFilter(Filter):
-    def __init__(self, line):
-        Filter.__init__(self, line.split(None, 1)[1])
-        self.line = line
-
-
-class OnlyFilter(NoOnlyFilter):
-    def is_irrelevant(self, ctx, ctx_set, descendant_labels):
-        return self.match(ctx, ctx_set)
-
-
-    def requires_action(self, ctx, ctx_set, descendant_labels):
-        return not self.might_match(ctx, ctx_set, descendant_labels)
-
-
-    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
-                   descendant_labels):
-        for word in self.filter:
-            for block in word:
-                if (_match_adjacent(block, ctx, ctx_set) >
-                    _match_adjacent(block, failed_ctx, failed_ctx_set)):
-                    return self.might_match(ctx, ctx_set, descendant_labels)
-        return False
-
-
-class NoFilter(NoOnlyFilter):
-    def is_irrelevant(self, ctx, ctx_set, descendant_labels):
-        return not self.might_match(ctx, ctx_set, descendant_labels)
-
-
-    def requires_action(self, ctx, ctx_set, descendant_labels):
-        return self.match(ctx, ctx_set)
-
-
-    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
-                   descendant_labels):
-        for word in self.filter:
-            for block in word:
-                if (_match_adjacent(block, ctx, ctx_set) <
-                    _match_adjacent(block, failed_ctx, failed_ctx_set)):
-                    return not self.match(ctx, ctx_set)
-        return False
-
-
-class Condition(NoFilter):
-    def __init__(self, line):
-        Filter.__init__(self, line.rstrip(":"))
-        self.line = line
-        self.content = []
-
-
-class NegativeCondition(OnlyFilter):
-    def __init__(self, line):
-        Filter.__init__(self, line.lstrip("!").rstrip(":"))
-        self.line = line
-        self.content = []
-
-
-class Parser(object):
-    """
-    Parse an input file or string that follows the KVM Test Config File format
-    and generate a list of dicts that will be later used as configuration
-    parameters by the KVM tests.
-
-    @see: http://www.linux-kvm.org/page/KVM-Autotest/Test_Config_File
-    """
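[Editor's note, not part of the patch: for readers unfamiliar with the format, a minimal hypothetical snippet of the config-file syntax this class parses — `variants:` introduces alternatives, a leading `@` keeps a variant name out of the shortname, and `only` filters the generated dicts. See the wiki page referenced above for the authoritative grammar.]

```
variants:
    - @qcow2:
        image_format = qcow2
    - vmdk:
        image_format = vmdk
only qcow2
```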
-
-    def __init__(self, filename=None, debug=False):
-        """
-        Initialize the parser and optionally parse a file.
-
-        @param filename: Path of the file to parse.
-        @param debug: Whether to turn on debugging output.
-        """
-        self.node = Node()
-        self.debug = debug
-        if filename:
-            self.parse_file(filename)
-
-
-    def parse_file(self, filename):
-        """
-        Parse a file.
-
-        @param filename: Path of the configuration file.
-        """
-        self.node = self._parse(FileReader(filename), self.node)
-
-
-    def parse_string(self, s):
-        """
-        Parse a string.
-
-        @param s: String to parse.
-        """
-        self.node = self._parse(StrReader(s), self.node)
-
-
-    def get_dicts(self, node=None, ctx=[], content=[], shortname=[], dep=[]):
-        """
-        Generate dictionaries from the code parsed so far.  This should
-        be called after parsing something.
-
-        @return: A dict generator.
-        """
-        def process_content(content, failed_filters):
-            # 1. Check that the filters in content are OK with the current
-            #    context (ctx).
-            # 2. Move the parts of content that are still relevant into
-            #    new_content and unpack conditional blocks if appropriate.
-            #    For example, if an 'only' statement fully matches ctx, it
-            #    becomes irrelevant and is not appended to new_content.
-            #    If a conditional block fully matches, its contents are
-            #    unpacked into new_content.
-            # 3. Move failed filters into failed_filters, so that next time we
-            #    reach this node or one of its ancestors, we'll check those
-            #    filters first.
-            for t in content:
-                filename, linenum, obj = t
-                if type(obj) is Op:
-                    new_content.append(t)
-                    continue
-                # obj is an OnlyFilter/NoFilter/Condition/NegativeCondition
-                if obj.requires_action(ctx, ctx_set, labels):
-                    # This filter requires action now
-                    if type(obj) is OnlyFilter or type(obj) is NoFilter:
-                        self._debug("    filter did not pass: %r (%s:%s)",
-                                    obj.line, filename, linenum)
-                        failed_filters.append(t)
-                        return False
-                    else:
-                        self._debug("    conditional block matches: %r (%s:%s)",
-                                    obj.line, filename, linenum)
-                        # Check and unpack the content inside this Condition
-                        # object (note: the failed filters should go into
-                        # new_internal_filters because we don't expect them to
-                        # come from outside this node, even if the Condition
-                        # itself was external)
-                        if not process_content(obj.content,
-                                               new_internal_filters):
-                            failed_filters.append(t)
-                            return False
-                        continue
-                elif obj.is_irrelevant(ctx, ctx_set, labels):
-                    # This filter is no longer relevant and can be removed
-                    continue
-                else:
-                    # Keep the filter and check it again later
-                    new_content.append(t)
-            return True
-
-        def might_pass(failed_ctx,
-                       failed_ctx_set,
-                       failed_external_filters,
-                       failed_internal_filters):
-            for t in failed_external_filters:
-                if t not in content:
-                    return True
-                filename, linenum, filter = t
-                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
-                                     labels):
-                    return True
-            for t in failed_internal_filters:
-                filename, linenum, filter = t
-                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
-                                     labels):
-                    return True
-            return False
-
-        def add_failed_case():
-            node.failed_cases.appendleft((ctx, ctx_set,
-                                          new_external_filters,
-                                          new_internal_filters))
-            if len(node.failed_cases) > num_failed_cases:
-                node.failed_cases.pop()
-
-        node = node or self.node
-        # Update dep
-        for d in node.dep:
-            dep = dep + [".".join(ctx + [d])]
-        # Update ctx
-        ctx = ctx + node.name
-        ctx_set = set(ctx)
-        labels = node.labels
-        # Get the current name
-        name = ".".join(ctx)
-        if node.name:
-            self._debug("checking out %r", name)
-        # Check previously failed filters
-        for i, failed_case in enumerate(node.failed_cases):
-            if not might_pass(*failed_case):
-                self._debug("    this subtree has failed before")
-                del node.failed_cases[i]
-                node.failed_cases.appendleft(failed_case)
-                return
-        # Check content and unpack it into new_content
-        new_content = []
-        new_external_filters = []
-        new_internal_filters = []
-        if (not process_content(node.content, new_internal_filters) or
-            not process_content(content, new_external_filters)):
-            add_failed_case()
-            return
-        # Update shortname
-        if node.append_to_shortname:
-            shortname = shortname + node.name
-        # Recurse into children
-        count = 0
-        for n in node.children:
-            for d in self.get_dicts(n, ctx, new_content, shortname, dep):
-                count += 1
-                yield d
-        # Reached leaf?
-        if not node.children:
-            self._debug("    reached leaf, returning it")
-            d = {"name": name, "dep": dep, "shortname": ".".join(shortname)}
-            for filename, linenum, op in new_content:
-                op.apply_to_dict(d)
-            yield d
-        # If this node did not produce any dicts, remember the failed filters
-        # of its descendants
-        elif not count:
-            new_external_filters = []
-            new_internal_filters = []
-            for n in node.children:
-                (failed_ctx,
-                 failed_ctx_set,
-                 failed_external_filters,
-                 failed_internal_filters) = n.failed_cases[0]
-                for obj in failed_internal_filters:
-                    if obj not in new_internal_filters:
-                        new_internal_filters.append(obj)
-                for obj in failed_external_filters:
-                    if obj in content:
-                        if obj not in new_external_filters:
-                            new_external_filters.append(obj)
-                    else:
-                        if obj not in new_internal_filters:
-                            new_internal_filters.append(obj)
-            add_failed_case()
-
-
-    def _debug(self, s, *args):
-        if self.debug:
-            s = "DEBUG: %s" % s
-            print s % args
-
-
-    def _warn(self, s, *args):
-        s = "WARNING: %s" % s
-        print s % args
-
-
-    def _parse_variants(self, cr, node, prev_indent=-1):
-        """
-        Read and parse lines from a FileReader object until a line with an
-        indent level lower than or equal to prev_indent is encountered.
-
-        @param cr: A FileReader/StrReader object.
-        @param node: A node to operate on.
-        @param prev_indent: The indent level of the "parent" block.
-        @return: A node object.
-        """
-        node4 = Node()
-
-        while True:
-            line, indent, linenum = cr.get_next_line(prev_indent)
-            if not line:
-                break
-
-            name, dep = map(str.strip, line.lstrip("- ").split(":", 1))
-            for char in name:
-                if not (char.isalnum() or char in "@._-"):
-                    raise ParserError("Illegal characters in variant name",
-                                      line, cr.filename, linenum)
-            for char in dep:
-                if not (char.isalnum() or char.isspace() or char in ".,_-"):
-                    raise ParserError("Illegal characters in dependencies",
-                                      line, cr.filename, linenum)
-
-            node2 = Node()
-            node2.children = [node]
-            node2.labels = node.labels
-
-            node3 = self._parse(cr, node2, prev_indent=indent)
-            node3.name = name.lstrip("@").split(".")
-            node3.dep = dep.replace(",", " ").split()
-            node3.append_to_shortname = not name.startswith("@")
-
-            node4.children += [node3]
-            node4.labels.update(node3.labels)
-            node4.labels.update(node3.name)
-
-        return node4
-
-
-    def _parse(self, cr, node, prev_indent=-1):
-        """
-        Read and parse lines from a StrReader object until a line with an
-        indent level lower than or equal to prev_indent is encountered.
-
-        @param cr: A FileReader/StrReader object.
-        @param node: A Node or a Condition object to operate on.
-        @param prev_indent: The indent level of the "parent" block.
-        @return: A node object.
-        """
-        while True:
-            line, indent, linenum = cr.get_next_line(prev_indent)
-            if not line:
-                break
-
-            words = line.split(None, 1)
-
-            # Parse 'variants'
-            if line == "variants:":
-                # 'variants' is not allowed inside a conditional block
-                if (isinstance(node, Condition) or
-                    isinstance(node, NegativeCondition)):
-                    raise ParserError("'variants' is not allowed inside a "
-                                      "conditional block",
-                                      None, cr.filename, linenum)
-                node = self._parse_variants(cr, node, prev_indent=indent)
-                continue
-
-            # Parse 'include' statements
-            if words[0] == "include":
-                if len(words) < 2:
-                    raise ParserError("Syntax error: missing parameter",
-                                      line, cr.filename, linenum)
-                filename = os.path.expanduser(words[1])
-                if isinstance(cr, FileReader) and not os.path.isabs(filename):
-                    filename = os.path.join(os.path.dirname(cr.filename),
-                                            filename)
-                if not os.path.isfile(filename):
-                    self._warn("%r (%s:%s): file doesn't exist or is not a "
-                               "regular file", line, cr.filename, linenum)
-                    continue
-                node = self._parse(FileReader(filename), node)
-                continue
-
-            # Parse 'only' and 'no' filters
-            if words[0] in ("only", "no"):
-                if len(words) < 2:
-                    raise ParserError("Syntax error: missing parameter",
-                                      line, cr.filename, linenum)
-                try:
-                    if words[0] == "only":
-                        f = OnlyFilter(line)
-                    elif words[0] == "no":
-                        f = NoFilter(line)
-                except ParserError, e:
-                    e.line = line
-                    e.filename = cr.filename
-                    e.linenum = linenum
-                    raise
-                node.content += [(cr.filename, linenum, f)]
-                continue
-
-            # Look for operators
-            op_match = _ops_exp.search(line)
-
-            # Parse conditional blocks
-            if ":" in line:
-                index = line.index(":")
-                if not op_match or index < op_match.start():
-                    index += 1
-                    cr.set_next_line(line[index:], indent, linenum)
-                    line = line[:index]
-                    try:
-                        if line.startswith("!"):
-                            cond = NegativeCondition(line)
-                        else:
-                            cond = Condition(line)
-                    except ParserError, e:
-                        e.line = line
-                        e.filename = cr.filename
-                        e.linenum = linenum
-                        raise
-                    self._parse(cr, cond, prev_indent=indent)
-                    node.content += [(cr.filename, linenum, cond)]
-                    continue
-
-            # Parse regular operators
-            if not op_match:
-                raise ParserError("Syntax error", line, cr.filename, linenum)
-            node.content += [(cr.filename, linenum, Op(line, op_match))]
-
-        return node
-
-
-# Assignment operators
-
-_reserved_keys = set(("name", "shortname", "dep"))
-
-
-def _op_set(d, key, value):
-    if key not in _reserved_keys:
-        d[key] = value
-
-
-def _op_append(d, key, value):
-    if key not in _reserved_keys:
-        d[key] = d.get(key, "") + value
-
-
-def _op_prepend(d, key, value):
-    if key not in _reserved_keys:
-        d[key] = value + d.get(key, "")
-
-
-def _op_regex_set(d, exp, value):
-    exp = re.compile("%s$" % exp)
-    for key in d:
-        if key not in _reserved_keys and exp.match(key):
-            d[key] = value
-
-
-def _op_regex_append(d, exp, value):
-    exp = re.compile("%s$" % exp)
-    for key in d:
-        if key not in _reserved_keys and exp.match(key):
-            d[key] += value
-
-
-def _op_regex_prepend(d, exp, value):
-    exp = re.compile("%s$" % exp)
-    for key in d:
-        if key not in _reserved_keys and exp.match(key):
-            d[key] = value + d[key]
-
-
-def _op_regex_del(d, empty, exp):
-    exp = re.compile("%s$" % exp)
-    for key in d.keys():
-        if key not in _reserved_keys and exp.match(key):
-            del d[key]
-
-
-_ops = {"=": (r"\=", _op_set),
-        "+=": (r"\+\=", _op_append),
-        "<=": (r"\<\=", _op_prepend),
-        "?=": (r"\?\=", _op_regex_set),
-        "?+=": (r"\?\+\=", _op_regex_append),
-        "?<=": (r"\?\<\=", _op_regex_prepend),
-        "del": (r"^del\b", _op_regex_del)}
-
-_ops_exp = re.compile("|".join([op[0] for op in _ops.values()]))
-
-
-class Op(object):
-    def __init__(self, line, m):
-        self.func = _ops[m.group()][1]
-        self.key = line[:m.start()].strip()
-        value = line[m.end():].strip()
-        if value and (value[0] == value[-1] == '"' or
-                      value[0] == value[-1] == "'"):
-            value = value[1:-1]
-        self.value = value
-
-
-    def apply_to_dict(self, d):
-        self.func(d, self.key, self.value)
-
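[Editor's note, not part of the patch: a standalone sketch of the operator semantics defined above, applied to a hypothetical params dict — `=` sets, `+=` appends, `<=` prepends, and the `?`-prefixed forms treat the key as a regex over existing keys.]

```python
import re

d = {"image_format": "qcow2", "image_boot": "yes"}

d["extra_params"] = "-snapshot"                      # extra_params = -snapshot
d["extra_params"] = d["extra_params"] + " -no-hpet"  # extra_params += " -no-hpet"
d["extra_params"] = "-vnc :0 " + d["extra_params"]   # extra_params <= "-vnc :0 "

# 'image_.* ?= raw' sets every non-reserved key matching the regex:
exp = re.compile("image_.*$")
for key in list(d):
    if key not in ("name", "shortname", "dep") and exp.match(key):
        d[key] = "raw"
# d == {"image_format": "raw", "image_boot": "raw",
#       "extra_params": "-vnc :0 -snapshot -no-hpet"}
```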
-
-# StrReader and FileReader
-
-class StrReader(object):
-    """
-    Preprocess an input string for easy reading.
-    """
-    def __init__(self, s):
-        """
-        Initialize the reader.
-
-        @param s: The string to parse.
-        """
-        self.filename = "<string>"
-        self._lines = []
-        self._line_index = 0
-        self._stored_line = None
-        for linenum, line in enumerate(s.splitlines()):
-            line = line.rstrip().expandtabs()
-            stripped_line = line.lstrip()
-            indent = len(line) - len(stripped_line)
-            if (not stripped_line
-                or stripped_line.startswith("#")
-                or stripped_line.startswith("//")):
-                continue
-            self._lines.append((stripped_line, indent, linenum + 1))
-
-
-    def get_next_line(self, prev_indent):
-        """
-        Get the next line in the current block.
-
-        @param prev_indent: The indentation level of the previous block.
-        @return: (line, indent, linenum), where indent is the line's
-            indentation level.  If no line is available, (None, -1, -1) is
-            returned.
-        """
-        if self._stored_line:
-            ret = self._stored_line
-            self._stored_line = None
-            return ret
-        if self._line_index >= len(self._lines):
-            return None, -1, -1
-        line, indent, linenum = self._lines[self._line_index]
-        if indent <= prev_indent:
-            return None, -1, -1
-        self._line_index += 1
-        return line, indent, linenum
-
-
-    def set_next_line(self, line, indent, linenum):
-        """
-        Make the next call to get_next_line() return the given line instead of
-        the real next line.
-        """
-        line = line.strip()
-        if line:
-            self._stored_line = line, indent, linenum
-
-
-class FileReader(StrReader):
-    """
-    Preprocess an input file for easy reading.
-    """
-    def __init__(self, filename):
-        """
-        Initialize the reader.
-
-        @param filename: The name of the input file.
-        """
-        StrReader.__init__(self, open(filename).read())
-        self.filename = filename
-
-
-if __name__ == "__main__":
-    parser = optparse.OptionParser('usage: %prog [options] filename '
-                                   '[extra code] ...\n\nExample:\n\n    '
-                                   '%prog tests.cfg "only my_set" "no qcow2"')
-    parser.add_option("-v", "--verbose", dest="debug", action="store_true",
-                      help="include debug messages in console output")
-    parser.add_option("-f", "--fullname", dest="fullname", action="store_true",
-                      help="show full dict names instead of short names")
-    parser.add_option("-c", "--contents", dest="contents", action="store_true",
-                      help="show dict contents")
-
-    options, args = parser.parse_args()
-    if not args:
-        parser.error("filename required")
-
-    c = Parser(args[0], debug=options.debug)
-    for s in args[1:]:
-        c.parse_string(s)
-
-    for i, d in enumerate(c.get_dicts()):
-        if options.fullname:
-            print "dict %4d:  %s" % (i + 1, d["name"])
-        else:
-            print "dict %4d:  %s" % (i + 1, d["shortname"])
-        if options.contents:
-            keys = d.keys()
-            keys.sort()
-            for key in keys:
-                print "    %s = %s" % (key, d[key])
diff --git a/client/tests/kvm/kvm_monitor.py b/client/tests/kvm/kvm_monitor.py
deleted file mode 100644
index 8cf2441..0000000
--- a/client/tests/kvm/kvm_monitor.py
+++ /dev/null
@@ -1,744 +0,0 @@
-"""
-Interfaces to the QEMU monitor.
-
-@copyright: 2008-2010 Red Hat Inc.
-"""
-
-import socket, time, threading, logging, select
-import kvm_utils
-try:
-    import json
-except ImportError:
-    logging.warning("Could not import json module. "
-                    "QMP monitor functionality disabled.")
-
-
-class MonitorError(Exception):
-    pass
-
-
-class MonitorConnectError(MonitorError):
-    pass
-
-
-class MonitorSocketError(MonitorError):
-    def __init__(self, msg, e):
-        Exception.__init__(self, msg, e)
-        self.msg = msg
-        self.e = e
-
-    def __str__(self):
-        return "%s    (%s)" % (self.msg, self.e)
-
-
-class MonitorLockError(MonitorError):
-    pass
-
-
-class MonitorProtocolError(MonitorError):
-    pass
-
-
-class MonitorNotSupportedError(MonitorError):
-    pass
-
-
-class QMPCmdError(MonitorError):
-    def __init__(self, cmd, qmp_args, data):
-        MonitorError.__init__(self, cmd, qmp_args, data)
-        self.cmd = cmd
-        self.qmp_args = qmp_args
-        self.data = data
-
-    def __str__(self):
-        return ("QMP command %r failed    (arguments: %r,    "
-                "error message: %r)" % (self.cmd, self.qmp_args, self.data))
-
-
-class Monitor:
-    """
-    Common code for monitor classes.
-    """
-
-    def __init__(self, name, filename):
-        """
-        Initialize the instance.
-
-        @param name: Monitor identifier (a string)
-        @param filename: Monitor socket filename
-        @raise MonitorConnectError: Raised if the connection fails
-        """
-        self.name = name
-        self.filename = filename
-        self._lock = threading.RLock()
-        self._socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
-
-        try:
-            self._socket.connect(filename)
-        except socket.error:
-            raise MonitorConnectError("Could not connect to monitor socket")
-
-
-    def __del__(self):
-        # Automatically close the connection when the instance is garbage
-        # collected
-        try:
-            self._socket.shutdown(socket.SHUT_RDWR)
-        except socket.error:
-            pass
-        self._socket.close()
-
-
-    # The following two functions are defined to make sure the state is set
-    # exclusively by the constructor call as specified in __getinitargs__().
-
-    def __getstate__(self):
-        pass
-
-
-    def __setstate__(self, state):
-        pass
-
-
-    def __getinitargs__(self):
-        # Save some information when pickling -- will be passed to the
-        # constructor upon unpickling
-        return self.name, self.filename, True
-
-
-    def _acquire_lock(self, timeout=20):
-        end_time = time.time() + timeout
-        while time.time() < end_time:
-            if self._lock.acquire(False):
-                return True
-            time.sleep(0.05)
-        return False
-
-
-    def _data_available(self, timeout=0):
-        timeout = max(0, timeout)
-        return bool(select.select([self._socket], [], [], timeout)[0])
-
-
-    def _recvall(self):
-        s = ""
-        while self._data_available():
-            try:
-                data = self._socket.recv(1024)
-            except socket.error, e:
-                raise MonitorSocketError("Could not receive data from monitor",
-                                         e)
-            if not data:
-                break
-            s += data
-        return s
-
-
-    def is_responsive(self):
-        """
-        Return True iff the monitor is responsive.
-        """
-        try:
-            self.verify_responsive()
-            return True
-        except MonitorError:
-            return False
-
-
-class HumanMonitor(Monitor):
-    """
-    Wraps "human monitor" commands.
-    """
-
-    def __init__(self, name, filename, suppress_exceptions=False):
-        """
-        Connect to the monitor socket and find the (qemu) prompt.
-
-        @param name: Monitor identifier (a string)
-        @param filename: Monitor socket filename
-        @raise MonitorConnectError: Raised if the connection fails and
-                suppress_exceptions is False
-        @raise MonitorProtocolError: Raised if the initial (qemu) prompt isn't
-                found and suppress_exceptions is False
-        @note: Other exceptions may be raised.  See cmd()'s
-                docstring.
-        """
-        try:
-            Monitor.__init__(self, name, filename)
-
-            self.protocol = "human"
-
-            # Find the initial (qemu) prompt
-            s, o = self._read_up_to_qemu_prompt(20)
-            if not s:
-                raise MonitorProtocolError("Could not find (qemu) prompt "
-                                           "after connecting to monitor. "
-                                           "Output so far: %r" % o)
-
-            # Save the output of 'help' for future use
-            self._help_str = self.cmd("help")
-
-        except MonitorError, e:
-            if suppress_exceptions:
-                logging.warn(e)
-            else:
-                raise
-
-
-    # Private methods
-
-    def _read_up_to_qemu_prompt(self, timeout=20):
-        s = ""
-        end_time = time.time() + timeout
-        while self._data_available(end_time - time.time()):
-            data = self._recvall()
-            if not data:
-                break
-            s += data
-            try:
-                if s.splitlines()[-1].split()[-1] == "(qemu)":
-                    return True, "\n".join(s.splitlines()[:-1])
-            except IndexError:
-                continue
-        return False, "\n".join(s.splitlines())
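[Note: the prompt detection in `_read_up_to_qemu_prompt()` hinges on one string check; here is a minimal standalone sketch of that check — the function name is ours, not part of the module being removed.]

```python
def ends_with_qemu_prompt(buf):
    # The human monitor is considered ready when the last
    # whitespace-separated token of the last received line is the
    # literal prompt "(qemu)" -- the same test used above.
    try:
        return buf.splitlines()[-1].split()[-1] == "(qemu)"
    except IndexError:
        # Empty buffer or blank last line: no prompt yet.
        return False

print(ends_with_qemu_prompt("VM status: running\n(qemu) "))  # True
print(ends_with_qemu_prompt("still printing output"))        # False
```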
-
-
-    def _send(self, cmd):
-        """
-        Send a command without waiting for output.
-
-        @param cmd: Command to send
-        @raise MonitorLockError: Raised if the lock cannot be acquired
-        @raise MonitorSocketError: Raised if a socket error occurs
-        """
-        if not self._acquire_lock(20):
-            raise MonitorLockError("Could not acquire exclusive lock to send "
-                                   "monitor command '%s'" % cmd)
-
-        try:
-            try:
-                self._socket.sendall(cmd + "\n")
-            except socket.error, e:
-                raise MonitorSocketError("Could not send monitor command %r" %
-                                         cmd, e)
-
-        finally:
-            self._lock.release()
-
-
-    # Public methods
-
-    def cmd(self, command, timeout=20):
-        """
-        Send command to the monitor.
-
-        @param command: Command to send to the monitor
-        @param timeout: Time duration to wait for the (qemu) prompt to return
-        @return: Output received from the monitor
-        @raise MonitorLockError: Raised if the lock cannot be acquired
-        @raise MonitorSocketError: Raised if a socket error occurs
-        @raise MonitorProtocolError: Raised if the (qemu) prompt cannot be
-                found after sending the command
-        """
-        if not self._acquire_lock(20):
-            raise MonitorLockError("Could not acquire exclusive lock to send "
-                                   "monitor command '%s'" % command)
-
-        try:
-            # Read any data that might be available
-            self._recvall()
-            # Send command
-            self._send(command)
-            # Read output
-            s, o = self._read_up_to_qemu_prompt(timeout)
-            # Remove command echo from output
-            o = "\n".join(o.splitlines()[1:])
-            # Report success/failure
-            if s:
-                return o
-            else:
-                msg = ("Could not find (qemu) prompt after command '%s'. "
-                       "Output so far: %r" % (command, o))
-                raise MonitorProtocolError(msg)
-
-        finally:
-            self._lock.release()
-
-
-    def verify_responsive(self):
-        """
-        Make sure the monitor is responsive by sending a command.
-        """
-        self.cmd("info status")
-
-
-    # Command wrappers
-    # Notes:
-    # - All of the following commands raise exceptions in a similar manner to
-    #   cmd().
-    # - A command wrapper should use self._help_str if it requires information
-    #   about the monitor's capabilities.
-
-    def quit(self):
-        """
-        Send "quit" without waiting for output.
-        """
-        self._send("quit")
-
-
-    def info(self, what):
-        """
-        Request info about something and return the output.
-        """
-        return self.cmd("info %s" % what)
-
-
-    def query(self, what):
-        """
-        Alias for info.
-        """
-        return self.info(what)
-
-
-    def screendump(self, filename):
-        """
-        Request a screendump.
-
-        @param filename: Location for the screendump
-        @return: The command's output
-        """
-        return self.cmd("screendump %s" % filename)
-
-
-    def migrate(self, uri, full_copy=False, incremental_copy=False, wait=False):
-        """
-        Migrate.
-
-        @param uri: destination URI
-        @param full_copy: If true, migrate with full disk copy
-        @param incremental_copy: If true, migrate with incremental disk copy
-        @param wait: If true, wait for completion
-        @return: The command's output
-        """
-        cmd = "migrate"
-        if not wait:
-            cmd += " -d"
-        if full_copy:
-            cmd += " -b"
-        if incremental_copy:
-            cmd += " -i"
-        cmd += " %s" % uri
-        return self.cmd(cmd)
-
-
-    def migrate_set_speed(self, value):
-        """
-        Set maximum speed (in bytes/sec) for migrations.
-
-        @param value: Speed in bytes/sec
-        @return: The command's output
-        """
-        return self.cmd("migrate_set_speed %s" % value)
-
-
-    def sendkey(self, keystr, hold_time=1):
-        """
-        Send key combination to VM.
-
-        @param keystr: Key combination string
-        @param hold_time: Hold time in ms (should normally stay 1 ms)
-        @return: The command's output
-        """
-        return self.cmd("sendkey %s %s" % (keystr, hold_time))
-
-
-    def mouse_move(self, dx, dy):
-        """
-        Move mouse.
-
-        @param dx: X amount
-        @param dy: Y amount
-        @return: The command's output
-        """
-        return self.cmd("mouse_move %d %d" % (dx, dy))
-
-
-    def mouse_button(self, state):
-        """
-        Set mouse button state.
-
-        @param state: Button state (1=L, 2=M, 4=R)
-        @return: The command's output
-        """
-        return self.cmd("mouse_button %d" % state)
-
-
-class QMPMonitor(Monitor):
-    """
-    Wraps QMP monitor commands.
-    """
-
-    def __init__(self, name, filename, suppress_exceptions=False):
-        """
-        Connect to the monitor socket, read the greeting message and issue the
-        qmp_capabilities command.  Also make sure the json module is available.
-
-        @param name: Monitor identifier (a string)
-        @param filename: Monitor socket filename
-        @raise MonitorConnectError: Raised if the connection fails and
-                suppress_exceptions is False
-        @raise MonitorProtocolError: Raised if no QMP greeting message is
-                received and suppress_exceptions is False
-        @raise MonitorNotSupportedError: Raised if json isn't available and
-                suppress_exceptions is False
-        @note: Other exceptions may be raised if the qmp_capabilities command
-                fails.  See cmd()'s docstring.
-        """
-        try:
-            Monitor.__init__(self, name, filename)
-
-            self.protocol = "qmp"
-            self._greeting = None
-            self._events = []
-
-            # Make sure json is available
-            try:
-                json
-            except NameError:
-                raise MonitorNotSupportedError("QMP requires the json module "
-                                               "(Python 2.6 and up)")
-
-            # Read greeting message
-            end_time = time.time() + 20
-            while time.time() < end_time:
-                for obj in self._read_objects():
-                    if "QMP" in obj:
-                        self._greeting = obj
-                        break
-                if self._greeting:
-                    break
-                time.sleep(0.1)
-            else:
-                raise MonitorProtocolError("No QMP greeting message received")
-
-            # Issue qmp_capabilities
-            self.cmd("qmp_capabilities")
-
-        except MonitorError, e:
-            if suppress_exceptions:
-                logging.warn(e)
-            else:
-                raise
-
-
-    # Private methods
-
-    def _build_cmd(self, cmd, args=None, id=None):
-        obj = {"execute": cmd}
-        if args is not None:
-            obj["arguments"] = args
-        if id is not None:
-            obj["id"] = id
-        return obj
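[Note: the wire format `_build_cmd()` produces is one JSON object per newline-terminated line; a standalone sketch follows — the helper name and sample id are ours, for illustration.]

```python
import json

def build_qmp_cmd(cmd, args=None, cmd_id=None):
    # Mirrors _build_cmd(): "execute" is mandatory, while "arguments"
    # and "id" are only added when supplied.
    obj = {"execute": cmd}
    if args is not None:
        obj["arguments"] = args
    if cmd_id is not None:
        obj["id"] = cmd_id
    return obj

# One newline-terminated JSON line is what goes over the socket.
wire = json.dumps(build_qmp_cmd("screendump",
                                {"filename": "/tmp/dump.ppm"},
                                "abc12345")) + "\n"
```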
-
-
-    def _read_objects(self, timeout=5):
-        """
-        Read lines from the monitor and try to decode them.
-        Stop when all available lines have been successfully decoded, or when
-        timeout expires.  If any decoded objects are asynchronous events, store
-        them in self._events.  Return all decoded objects.
-
-        @param timeout: Time to wait for all lines to decode successfully
-        @return: A list of objects
-        """
-        if not self._data_available():
-            return []
-        s = ""
-        end_time = time.time() + timeout
-        while self._data_available(end_time - time.time()):
-            s += self._recvall()
-            # Make sure all lines are decodable
-            for line in s.splitlines():
-                if line:
-                    try:
-                        json.loads(line)
-                    except:
-                        # Found an incomplete or broken line -- keep reading
-                        break
-            else:
-                # All lines are OK -- stop reading
-                break
-        # Decode all decodable lines
-        objs = []
-        for line in s.splitlines():
-            try:
-                objs += [json.loads(line)]
-            except:
-                pass
-        # Keep track of asynchronous events
-        self._events += [obj for obj in objs if "event" in obj]
-        return objs
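[Note: the tolerant line-decoding policy of `_read_objects()` can be sketched on its own — decode every complete JSON line, skip anything partial, and set aside asynchronous events. The function name is ours.]

```python
import json

def decode_qmp_lines(buf):
    # A line that fails to parse (typically one still being received)
    # is skipped, matching _read_objects().  Objects carrying an
    # "event" key are the asynchronous events QMP interleaves with
    # command responses.
    objs = []
    for line in buf.splitlines():
        try:
            objs.append(json.loads(line))
        except ValueError:
            pass
    events = [o for o in objs if isinstance(o, dict) and "event" in o]
    return objs, events

buf = '{"return": {}}\n{"event": "RESET"}\n{"retu'  # last line truncated
objs, events = decode_qmp_lines(buf)
```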
-
-
-    def _send(self, data):
-        """
-        Send raw data without waiting for response.
-
-        @param data: Data to send
-        @raise MonitorSocketError: Raised if a socket error occurs
-        """
-        try:
-            self._socket.sendall(data)
-        except socket.error, e:
-            raise MonitorSocketError("Could not send data: %r" % data, e)
-
-
-    def _get_response(self, id=None, timeout=20):
-        """
-        Read a response from the QMP monitor.
-
-        @param id: If not None, look for a response with this id
-        @param timeout: Time duration to wait for response
-        @return: The response dict, or None if none was found
-        """
-        end_time = time.time() + timeout
-        while self._data_available(end_time - time.time()):
-            for obj in self._read_objects():
-                if isinstance(obj, dict):
-                    if id is not None and obj.get("id") != id:
-                        continue
-                    if "return" in obj or "error" in obj:
-                        return obj
-
-
-    # Public methods
-
-    def cmd(self, cmd, args=None, timeout=20):
-        """
-        Send a QMP monitor command and return the response.
-
-        Note: an id is automatically assigned to the command and the response
-        is checked for the presence of the same id.
-
-        @param cmd: Command to send
-        @param args: A dict containing command arguments, or None
-        @param timeout: Time duration to wait for response
-        @return: The response received
-        @raise MonitorLockError: Raised if the lock cannot be acquired
-        @raise MonitorSocketError: Raised if a socket error occurs
-        @raise MonitorProtocolError: Raised if no response is received
-        @raise QMPCmdError: Raised if the response is an error message
-                (the exception's args are (cmd, args, data) where data is the
-                error data)
-        """
-        if not self._acquire_lock(20):
-            raise MonitorLockError("Could not acquire exclusive lock to send "
-                                   "QMP command '%s'" % cmd)
-
-        try:
-            # Read any data that might be available
-            self._read_objects()
-            # Send command
-            id = kvm_utils.generate_random_string(8)
-            self._send(json.dumps(self._build_cmd(cmd, args, id)) + "\n")
-            # Read response
-            r = self._get_response(id, timeout)
-            if r is None:
-                raise MonitorProtocolError("Received no response to QMP "
-                                           "command '%s', or received a "
-                                           "response with an incorrect id"
-                                           % cmd)
-            if "return" in r:
-                return r["return"]
-            if "error" in r:
-                raise QMPCmdError(cmd, args, r["error"])
-
-        finally:
-            self._lock.release()
-
-
-    def cmd_raw(self, data, timeout=20):
-        """
-        Send a raw string to the QMP monitor and return the response.
-        Unlike cmd(), return the raw response dict without performing any
-        checks on it.
-
-        @param data: The data to send
-        @param timeout: Time duration to wait for response
-        @return: The response received
-        @raise MonitorLockError: Raised if the lock cannot be acquired
-        @raise MonitorSocketError: Raised if a socket error occurs
-        @raise MonitorProtocolError: Raised if no response is received
-        """
-        if not self._acquire_lock(20):
-            raise MonitorLockError("Could not acquire exclusive lock to send "
-                                   "data: %r" % data)
-
-        try:
-            self._read_objects()
-            self._send(data)
-            r = self._get_response(None, timeout)
-            if r is None:
-                raise MonitorProtocolError("Received no response to data: %r" %
-                                           data)
-            return r
-
-        finally:
-            self._lock.release()
-
-
-    def cmd_obj(self, obj, timeout=20):
-        """
-        Transform a Python object to JSON, send the resulting string to the QMP
-        monitor, and return the response.
-        Unlike cmd(), return the raw response dict without performing any
-        checks on it.
-
-        @param obj: The object to send
-        @param timeout: Time duration to wait for response
-        @return: The response received
-        @raise MonitorLockError: Raised if the lock cannot be acquired
-        @raise MonitorSocketError: Raised if a socket error occurs
-        @raise MonitorProtocolError: Raised if no response is received
-        """
-        return self.cmd_raw(json.dumps(obj) + "\n")
-
-
-    def cmd_qmp(self, cmd, args=None, id=None, timeout=20):
-        """
-        Build a QMP command from the passed arguments, send it to the monitor
-        and return the response.
-        Unlike cmd(), return the raw response dict without performing any
-        checks on it.
-
-        @param cmd: Command to send
-        @param args: A dict containing command arguments, or None
-        @param id:  An id for the command, or None
-        @param timeout: Time duration to wait for response
-        @return: The response received
-        @raise MonitorLockError: Raised if the lock cannot be acquired
-        @raise MonitorSocketError: Raised if a socket error occurs
-        @raise MonitorProtocolError: Raised if no response is received
-        """
-        return self.cmd_obj(self._build_cmd(cmd, args, id), timeout)
-
-
-    def verify_responsive(self):
-        """
-        Make sure the monitor is responsive by sending a command.
-        """
-        self.cmd("query-status")
-
-
-    def get_events(self):
-        """
-        Return a list of the asynchronous events received since the last
-        clear_events() call.
-
-        @return: A list of events (the objects returned have an "event" key)
-        @raise MonitorLockError: Raised if the lock cannot be acquired
-        """
-        if not self._acquire_lock(20):
-            raise MonitorLockError("Could not acquire exclusive lock to read "
-                                   "QMP events")
-        try:
-            self._read_objects()
-            return self._events[:]
-        finally:
-            self._lock.release()
-
-
-    def get_event(self, name):
-        """
-        Look for an event with the given name in the list of events.
-
-        @param name: The name of the event to look for (e.g. 'RESET')
-        @return: An event object or None if none is found
-        """
-        for e in self.get_events():
-            if e.get("event") == name:
-                return e
-
-
-    def clear_events(self):
-        """
-        Clear the list of asynchronous events.
-
-        @raise MonitorLockError: Raised if the lock cannot be acquired
-        """
-        if not self._acquire_lock(20):
-            raise MonitorLockError("Could not acquire exclusive lock to clear "
-                                   "QMP event list")
-        self._events = []
-        self._lock.release()
-
-
-    def get_greeting(self):
-        """
-        Return QMP greeting message.
-        """
-        return self._greeting
-
-
-    # Command wrappers
-    # Note: all of the following functions raise exceptions in a similar manner
-    # to cmd().
-
-    def quit(self):
-        """
-        Send "quit" and return the response.
-        """
-        return self.cmd("quit")
-
-
-    def info(self, what):
-        """
-        Request info about something and return the response.
-        """
-        return self.cmd("query-%s" % what)
-
-
-    def query(self, what):
-        """
-        Alias for info.
-        """
-        return self.info(what)
-
-
-    def screendump(self, filename):
-        """
-        Request a screendump.
-
-        @param filename: Location for the screendump
-        @return: The response to the command
-        """
-        args = {"filename": filename}
-        return self.cmd("screendump", args)
-
-
-    def migrate(self, uri, full_copy=False, incremental_copy=False, wait=False):
-        """
-        Migrate.
-
-        @param uri: destination URI
-        @param full_copy: If true, migrate with full disk copy
-        @param incremental_copy: If true, migrate with incremental disk copy
-        @param wait: If true, wait for completion
-        @return: The response to the command
-        """
-        args = {"uri": uri,
-                "blk": full_copy,
-                "inc": incremental_copy}
-        return self.cmd("migrate", args)
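[Note: the human-monitor counterpart builds the same migration request from command-line flags instead of a dict; a side-by-side sketch of the flag mapping, assuming the qemu 0.12-era syntax this code targets.]

```python
def human_migrate_cmd(uri, full_copy=False, incremental_copy=False,
                      wait=False):
    # -d detaches (don't wait for completion), -b requests a full disk
    # copy, -i an incremental one -- mapping 1:1 onto the QMP
    # "blk"/"inc" booleans, with detached operation as the default.
    cmd = "migrate"
    if not wait:
        cmd += " -d"
    if full_copy:
        cmd += " -b"
    if incremental_copy:
        cmd += " -i"
    return "%s %s" % (cmd, uri)

print(human_migrate_cmd("tcp:0:4444", full_copy=True))
# -> migrate -d -b tcp:0:4444
```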
-
-
-    def migrate_set_speed(self, value):
-        """
-        Set maximum speed (in bytes/sec) for migrations.
-
-        @param value: Speed in bytes/sec
-        @return: The response to the command
-        """
-        args = {"value": value}
-        return self.cmd("migrate_set_speed", args)
diff --git a/client/tests/kvm/kvm_preprocessing.py b/client/tests/kvm/kvm_preprocessing.py
deleted file mode 100644
index 515e3a5..0000000
--- a/client/tests/kvm/kvm_preprocessing.py
+++ /dev/null
@@ -1,467 +0,0 @@
-import os, time, commands, re, logging, glob, threading, shutil
-from autotest_lib.client.bin import utils
-from autotest_lib.client.common_lib import error
-import kvm_vm, kvm_utils, kvm_subprocess, kvm_monitor, ppm_utils, test_setup
-try:
-    import PIL.Image
-except ImportError:
-    logging.warning('No python imaging library installed. PPM image '
-                    'conversion to JPEG disabled. In order to enable it, '
-                    'please install python-imaging or the equivalent for your '
-                    'distro.')
-
-
-_screendump_thread = None
-_screendump_thread_termination_event = None
-
-
-def preprocess_image(test, params):
-    """
-    Preprocess a single QEMU image according to the instructions in params.
-
-    @param test: Autotest test object.
-    @param params: A dict containing image preprocessing parameters.
-    @note: Currently this function just creates an image if requested.
-    """
-    image_filename = kvm_vm.get_image_filename(params, test.bindir)
-
-    create_image = False
-
-    if params.get("force_create_image") == "yes":
-        logging.debug("'force_create_image' specified; creating image...")
-        create_image = True
-    elif (params.get("create_image") == "yes" and not
-          os.path.exists(image_filename)):
-        logging.debug("Creating image...")
-        create_image = True
-
-    if create_image and not kvm_vm.create_image(params, test.bindir):
-        raise error.TestError("Could not create image")
-
-
-def preprocess_vm(test, params, env, name):
-    """
-    Preprocess a single VM object according to the instructions in params.
-    Start the VM if requested and get a screendump.
-
-    @param test: An Autotest test object.
-    @param params: A dict containing VM preprocessing parameters.
-    @param env: The environment (a dict-like object).
-    @param name: The name of the VM object.
-    """
-    logging.debug("Preprocessing VM '%s'..." % name)
-    vm = env.get_vm(name)
-    if not vm:
-        logging.debug("VM object does not exist; creating it")
-        vm = kvm_vm.VM(name, params, test.bindir, env.get("address_cache"))
-        env.register_vm(name, vm)
-
-    start_vm = False
-
-    if params.get("restart_vm") == "yes":
-        logging.debug("'restart_vm' specified; (re)starting VM...")
-        start_vm = True
-    elif params.get("migration_mode"):
-        logging.debug("Starting VM in incoming migration mode...")
-        start_vm = True
-    elif params.get("start_vm") == "yes":
-        if not vm.is_alive():
-            logging.debug("VM is not alive; starting it...")
-            start_vm = True
-        elif vm.make_qemu_command() != vm.make_qemu_command(name, params,
-                                                            test.bindir):
-            logging.debug("VM's qemu command differs from requested one; "
-                          "restarting it...")
-            start_vm = True
-
-    if start_vm:
-        # Start the VM (or restart it if it's already up)
-        vm.create(name, params, test.bindir,
-                  migration_mode=params.get("migration_mode"))
-    else:
-        # Don't start the VM, just update its params
-        vm.params = params
-
-    scrdump_filename = os.path.join(test.debugdir, "pre_%s.ppm" % name)
-    try:
-        if vm.monitor:
-            vm.monitor.screendump(scrdump_filename)
-    except kvm_monitor.MonitorError, e:
-        logging.warn(e)
-
-
-def postprocess_image(test, params):
-    """
-    Postprocess a single QEMU image according to the instructions in params.
-
-    @param test: An Autotest test object.
-    @param params: A dict containing image postprocessing parameters.
-    """
-    if params.get("check_image") == "yes":
-        kvm_vm.check_image(params, test.bindir)
-    if params.get("remove_image") == "yes":
-        kvm_vm.remove_image(params, test.bindir)
-
-
-def postprocess_vm(test, params, env, name):
-    """
-    Postprocess a single VM object according to the instructions in params.
-    Kill the VM if requested and get a screendump.
-
-    @param test: An Autotest test object.
-    @param params: A dict containing VM postprocessing parameters.
-    @param env: The environment (a dict-like object).
-    @param name: The name of the VM object.
-    """
-    logging.debug("Postprocessing VM '%s'..." % name)
-    vm = env.get_vm(name)
-    if not vm:
-        return
-
-    scrdump_filename = os.path.join(test.debugdir, "post_%s.ppm" % name)
-    try:
-        if vm.monitor:
-            vm.monitor.screendump(scrdump_filename)
-    except kvm_monitor.MonitorError, e:
-        logging.warn(e)
-
-    if params.get("kill_vm") == "yes":
-        kill_vm_timeout = float(params.get("kill_vm_timeout", 0))
-        if kill_vm_timeout:
-            logging.debug("'kill_vm' specified; waiting for VM to shut down "
-                          "before killing it...")
-            kvm_utils.wait_for(vm.is_dead, kill_vm_timeout, 0, 1)
-        else:
-            logging.debug("'kill_vm' specified; killing VM...")
-        vm.destroy(gracefully = params.get("kill_vm_gracefully") == "yes")
-
-
-def process_command(test, params, env, command, command_timeout,
-                    command_noncritical):
-    """
-    Run a custom pre/post command before or after a test.
-
-    @param test: An Autotest test object.
-    @param params: A dict containing all VM and image parameters.
-    @param env: The environment (a dict-like object).
-    @param command: Command to be run.
-    @param command_timeout: Timeout for command execution.
-    @param command_noncritical: If True test will not fail if command fails.
-    """
-    # Export environment vars
-    for k in params:
-        os.putenv("KVM_TEST_%s" % k, str(params[k]))
-    # Execute commands
-    try:
-        utils.system("cd %s; %s" % (test.bindir, command))
-    except error.CmdError, e:
-        if command_noncritical:
-            logging.warn(e)
-        else:
-            raise
-
-def process(test, params, env, image_func, vm_func):
-    """
-    Pre- or post-process VMs and images according to the instructions in params.
-    Call image_func for each image listed in params and vm_func for each VM.
-
-    @param test: An Autotest test object.
-    @param params: A dict containing all VM and image parameters.
-    @param env: The environment (a dict-like object).
-    @param image_func: A function to call for each image.
-    @param vm_func: A function to call for each VM.
-    """
-    # Get list of VMs specified for this test
-    for vm_name in params.objects("vms"):
-        vm_params = params.object_params(vm_name)
-        # Get list of images specified for this VM
-        for image_name in vm_params.objects("images"):
-            image_params = vm_params.object_params(image_name)
-            # Call image_func for each image
-            image_func(test, image_params)
-        # Call vm_func for each vm
-        vm_func(test, vm_params, env, vm_name)
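[Note: the nesting in `process()` — per-VM params, then per-image params — can be illustrated with a toy stand-in for the Cartesian-config `objects()`/`object_params()` API. The dict layout below is hypothetical, for illustration only.]

```python
def process_all(params, image_func, vm_func):
    # Toy version of process(): iterate the VMs named in params["vms"],
    # and for each VM the images named in its "images" entry.  The real
    # code derives proper per-object param dicts via object_params().
    for vm_name in params["vms"].split():
        vm_params = params.get(vm_name, {})
        for image_name in vm_params.get("images", "").split():
            image_func(vm_name, image_name)
        vm_func(vm_name)

calls = []
params = {"vms": "vm1 vm2",
          "vm1": {"images": "image1 image2"},
          "vm2": {"images": "image1"}}
process_all(params,
            lambda vm, img: calls.append(("image", vm, img)),
            lambda vm: calls.append(("vm", vm)))
```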
-
-
-@error.context_aware
-def preprocess(test, params, env):
-    """
-    Preprocess all VMs and images according to the instructions in params.
-    Also, collect some host information, such as the KVM version.
-
-    @param test: An Autotest test object.
-    @param params: A dict containing all VM and image parameters.
-    @param env: The environment (a dict-like object).
-    """
-    error.context("preprocessing")
-
-    # Start tcpdump if it isn't already running
-    if "address_cache" not in env:
-        env["address_cache"] = {}
-    if "tcpdump" in env and not env["tcpdump"].is_alive():
-        env["tcpdump"].close()
-        del env["tcpdump"]
-    if "tcpdump" not in env and params.get("run_tcpdump", "yes") == "yes":
-        cmd = "%s -npvi any 'dst port 68'" % kvm_utils.find_command("tcpdump")
-        logging.debug("Starting tcpdump (%s)...", cmd)
-        env["tcpdump"] = kvm_subprocess.Tail(
-            command=cmd,
-            output_func=_update_address_cache,
-            output_params=(env["address_cache"],))
-        if kvm_utils.wait_for(lambda: not env["tcpdump"].is_alive(),
-                              0.1, 0.1, 1.0):
-            logging.warn("Could not start tcpdump")
-            logging.warn("Status: %s" % env["tcpdump"].get_status())
-            logging.warn("Output:" + kvm_utils.format_str_for_message(
-                env["tcpdump"].get_output()))
-
-    # Destroy and remove VMs that are no longer needed in the environment
-    requested_vms = params.objects("vms")
-    for key in env.keys():
-        vm = env[key]
-        if not kvm_utils.is_vm(vm):
-            continue
-        if not vm.name in requested_vms:
-            logging.debug("VM '%s' found in environment but not required for "
-                          "test; removing it..." % vm.name)
-            vm.destroy()
-            del env[key]
-
-    # Get the KVM kernel module version and write it as a keyval
-    logging.debug("Fetching KVM module version...")
-    if os.path.exists("/dev/kvm"):
-        try:
-            kvm_version = open("/sys/module/kvm/version").read().strip()
-        except:
-            kvm_version = os.uname()[2]
-    else:
-        kvm_version = "Unknown"
-        logging.debug("KVM module not loaded")
-    logging.debug("KVM version: %s" % kvm_version)
-    test.write_test_keyval({"kvm_version": kvm_version})
-
-    # Get the KVM userspace version and write it as a keyval
-    logging.debug("Fetching KVM userspace version...")
-    qemu_path = kvm_utils.get_path(test.bindir, params.get("qemu_binary",
-                                                           "qemu"))
-    version_line = commands.getoutput("%s -help | head -n 1" % qemu_path)
-    matches = re.findall("[Vv]ersion .*?,", version_line)
-    if matches:
-        kvm_userspace_version = " ".join(matches[0].split()[1:]).strip(",")
-    else:
-        kvm_userspace_version = "Unknown"
-        logging.debug("Could not fetch KVM userspace version")
-    logging.debug("KVM userspace version: %s" % kvm_userspace_version)
-    test.write_test_keyval({"kvm_userspace_version": kvm_userspace_version})
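[Note: the userspace-version extraction above relies on a small regex over the first line of `qemu -help`; isolated below, with an illustrative sample help line.]

```python
import re

def parse_qemu_userspace_version(help_line):
    # Same extraction as above: grab "[Vv]ersion ...," non-greedily,
    # then drop the leading word and the trailing comma.
    matches = re.findall("[Vv]ersion .*?,", help_line)
    if matches:
        return " ".join(matches[0].split()[1:]).strip(",")
    return "Unknown"

line = "QEMU PC emulator version 0.12.1 (qemu-kvm-0.12.1.2), Copyright (c) 2003-2008"
print(parse_qemu_userspace_version(line))  # -> 0.12.1 (qemu-kvm-0.12.1.2)
```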
-
-    if params.get("setup_hugepages") == "yes":
-        h = test_setup.HugePageConfig(params)
-        h.setup()
-
-    if params.get("type") == "unattended_install":
-        u = test_setup.UnattendedInstallConfig(test, params)
-        u.setup()
-
-    if params.get("type") == "enospc":
-        e = test_setup.EnospcConfig(test, params)
-        e.setup()
-
-    # Execute any pre_commands
-    if params.get("pre_command"):
-        process_command(test, params, env, params.get("pre_command"),
-                        int(params.get("pre_command_timeout", "600")),
-                        params.get("pre_command_noncritical") == "yes")
-
-    # Preprocess all VMs and images
-    process(test, params, env, preprocess_image, preprocess_vm)
-
-    # Start the screendump thread
-    if params.get("take_regular_screendumps") == "yes":
-        logging.debug("Starting screendump thread")
-        global _screendump_thread, _screendump_thread_termination_event
-        _screendump_thread_termination_event = threading.Event()
-        _screendump_thread = threading.Thread(target=_take_screendumps,
-                                              args=(test, params, env))
-        _screendump_thread.start()
-
-
-@error.context_aware
-def postprocess(test, params, env):
-    """
-    Postprocess all VMs and images according to the instructions in params.
-
-    @param test: An Autotest test object.
-    @param params: Dict containing all VM and image parameters.
-    @param env: The environment (a dict-like object).
-    """
-    error.context("postprocessing")
-
-    # Postprocess all VMs and images
-    process(test, params, env, postprocess_image, postprocess_vm)
-
-    # Terminate the screendump thread
-    global _screendump_thread, _screendump_thread_termination_event
-    if _screendump_thread:
-        logging.debug("Terminating screendump thread...")
-        _screendump_thread_termination_event.set()
-        _screendump_thread.join(10)
-        _screendump_thread = None
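[Note: the screendump-thread lifecycle — a worker polling until a shared `threading.Event` is set, then a `join()` with a timeout so a stuck thread cannot hang postprocessing — can be sketched minimally as follows; names and intervals here are illustrative.]

```python
import threading
import time

stop = threading.Event()
dumps = []

def screendump_worker():
    # Poll until the termination event is set, like _take_screendumps().
    while not stop.is_set():
        dumps.append("dump")  # stand-in for grabbing a screendump
        stop.wait(0.01)       # sleep, but wake early on stop.set()

t = threading.Thread(target=screendump_worker)
t.start()
time.sleep(0.05)
stop.set()   # postprocess(): signal termination...
t.join(1)    # ...and wait at most 1s, mirroring the join(10) above
```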
-
-    # Warn about corrupt PPM files
-    for f in glob.glob(os.path.join(test.debugdir, "*.ppm")):
-        if not ppm_utils.image_verify_ppm_file(f):
-            logging.warn("Found corrupt PPM file: %s", f)
-
-    # Should we convert PPM files to PNG format?
-    if params.get("convert_ppm_files_to_png") == "yes":
-        logging.debug("'convert_ppm_files_to_png' specified; converting PPM "
-                      "files to PNG format...")
-        try:
-            for f in glob.glob(os.path.join(test.debugdir, "*.ppm")):
-                if ppm_utils.image_verify_ppm_file(f):
-                    new_path = f.replace(".ppm", ".png")
-                    image = PIL.Image.open(f)
-                    image.save(new_path, format='PNG')
-        except NameError:
-            pass
-
-    # Should we keep the PPM files?
-    if params.get("keep_ppm_files") != "yes":
-        logging.debug("'keep_ppm_files' not specified; removing all PPM files "
-                      "from debug dir...")
-        for f in glob.glob(os.path.join(test.debugdir, '*.ppm')):
-            os.unlink(f)
-
-    # Should we keep the screendump dirs?
-    if params.get("keep_screendumps") != "yes":
-        logging.debug("'keep_screendumps' not specified; removing screendump "
-                      "dirs...")
-        for d in glob.glob(os.path.join(test.debugdir, "screendumps_*")):
-            if os.path.isdir(d) and not os.path.islink(d):
-                shutil.rmtree(d, ignore_errors=True)
-
-    # Kill all unresponsive VMs
-    if params.get("kill_unresponsive_vms") == "yes":
-        logging.debug("'kill_unresponsive_vms' specified; killing all VMs "
-                      "that fail to respond to a remote login request...")
-        for vm in env.get_all_vms():
-            if vm.is_alive():
-                try:
-                    session = vm.login()
-                    session.close()
-                except (kvm_utils.LoginError, kvm_vm.VMError), e:
-                    logging.warn(e)
-                    vm.destroy(gracefully=False)
-
-    # Kill all kvm_subprocess tail threads
-    kvm_subprocess.kill_tail_threads()
-
-    # Terminate tcpdump if no VMs are alive
-    living_vms = [vm for vm in env.get_all_vms() if vm.is_alive()]
-    if not living_vms and "tcpdump" in env:
-        env["tcpdump"].close()
-        del env["tcpdump"]
-
-    if params.get("setup_hugepages") == "yes":
-        h = test_setup.HugePageConfig(params)
-        h.cleanup()
-
-    if params.get("type") == "enospc":
-        e = test_setup.EnospcConfig(test, params)
-        e.cleanup()
-
-    # Execute any post_commands
-    if params.get("post_command"):
-        process_command(test, params, env, params.get("post_command"),
-                        int(params.get("post_command_timeout", "600")),
-                        params.get("post_command_noncritical") == "yes")
-
-
-def postprocess_on_error(test, params, env):
-    """
-    Perform postprocessing operations required only if the test failed.
-
-    @param test: An Autotest test object.
-    @param params: A dict containing all VM and image parameters.
-    @param env: The environment (a dict-like object).
-    """
-    params.update(params.object_params("on_error"))
-
-
-def _update_address_cache(address_cache, line):
-    if re.search("Your.IP", line, re.IGNORECASE):
-        matches = re.findall(r"\d*\.\d*\.\d*\.\d*", line)
-        if matches:
-            address_cache["last_seen"] = matches[0]
-    if re.search("Client.Ethernet.Address", line, re.IGNORECASE):
-        matches = re.findall(r"\w*:\w*:\w*:\w*:\w*:\w*", line)
-        if matches and address_cache.get("last_seen"):
-            mac_address = matches[0].lower()
-            if time.time() - address_cache.get("time_%s" % mac_address, 0) > 5:
-                logging.debug("(address cache) Adding cache entry: %s ---> %s",
-                              mac_address, address_cache.get("last_seen"))
-            address_cache[mac_address] = address_cache.get("last_seen")
-            address_cache["time_%s" % mac_address] = time.time()
-            del address_cache["last_seen"]
-
-
-def _take_screendumps(test, params, env):
-    global _screendump_thread_termination_event
-    temp_dir = test.debugdir
-    if params.get("screendump_temp_dir"):
-        temp_dir = kvm_utils.get_path(test.bindir,
-                                      params.get("screendump_temp_dir"))
-        try:
-            os.makedirs(temp_dir)
-        except OSError:
-            pass
-    temp_filename = os.path.join(temp_dir, "scrdump-%s.ppm" %
-                                 kvm_utils.generate_random_string(6))
-    delay = float(params.get("screendump_delay", 5))
-    quality = int(params.get("screendump_quality", 30))
-
-    cache = {}
-
-    while True:
-        for vm in env.get_all_vms():
-            if not vm.is_alive():
-                continue
-            try:
-                vm.monitor.screendump(temp_filename)
-            except kvm_monitor.MonitorError, e:
-                logging.warn(e)
-                continue
-            if not os.path.exists(temp_filename):
-                logging.warn("VM '%s' failed to produce a screendump", vm.name)
-                continue
-            if not ppm_utils.image_verify_ppm_file(temp_filename):
-                logging.warn("VM '%s' produced an invalid screendump", vm.name)
-                os.unlink(temp_filename)
-                continue
-            screendump_dir = os.path.join(test.debugdir,
-                                          "screendumps_%s" % vm.name)
-            try:
-                os.makedirs(screendump_dir)
-            except OSError:
-                pass
-            screendump_filename = os.path.join(screendump_dir,
-                    "%s_%s.jpg" % (vm.name,
-                                   time.strftime("%Y-%m-%d_%H-%M-%S")))
-            hash = utils.hash_file(temp_filename)
-            if hash in cache:
-                try:
-                    os.link(cache[hash], screendump_filename)
-                except OSError:
-                    pass
-            else:
-                try:
-                    image = PIL.Image.open(temp_filename)
-                    image.save(screendump_filename, format="JPEG", quality=quality)
-                    cache[hash] = screendump_filename
-                except NameError:
-                    pass
-            os.unlink(temp_filename)
-        if _screendump_thread_termination_event.isSet():
-            _screendump_thread_termination_event = None
-            break
-        _screendump_thread_termination_event.wait(delay)
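[Editor's note: the deleted preprocessing/postprocessing code above coordinates a background screendump loop with a `threading.Event`: `postprocess()` calls `set()` and a bounded `join(10)`, while `_take_screendumps()` uses `wait(delay)` both as the inter-iteration sleep and as the wake-up point for termination. A minimal, self-contained sketch of that stop pattern is below; the names `_worker`, `stop`, and `out` are illustrative, not from the original code.]

```python
import threading
import time

def _worker(stop_event, interval, results):
    # Loop until the main thread signals termination via the Event.
    # wait() doubles as the inter-iteration delay, so set() wakes it early
    # instead of forcing a full sleep -- same idea as _take_screendumps().
    while not stop_event.is_set():
        results.append("tick")
        stop_event.wait(interval)

stop = threading.Event()
out = []
t = threading.Thread(target=_worker, args=(stop, 0.05, out))
t.start()
time.sleep(0.12)
stop.set()   # ask the worker to finish, as postprocess() does
t.join(10)   # bounded join, mirroring _screendump_thread.join(10)
```

The bounded `join(10)` matters: if the loop body ever blocks (e.g. on a monitor command), postprocessing still returns instead of hanging the whole test job.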
diff --git a/client/tests/kvm/kvm_scheduler.py b/client/tests/kvm/kvm_scheduler.py
deleted file mode 100644
index b96bb32..0000000
--- a/client/tests/kvm/kvm_scheduler.py
+++ /dev/null
@@ -1,229 +0,0 @@
-import os, select
-import kvm_utils, kvm_vm, kvm_subprocess
-
-
-class scheduler:
-    """
-    A scheduler that manages several parallel test execution pipelines on a
-    single host.
-    """
-
-    def __init__(self, tests, num_workers, total_cpus, total_mem, bindir):
-        """
-        Initialize the class.
-
-        @param tests: A list of test dictionaries.
-        @param num_workers: The number of workers (pipelines).
-        @param total_cpus: The total number of CPUs to dedicate to tests.
-        @param total_mem: The total amount of memory to dedicate to tests.
-        @param bindir: The directory where environment files reside.
-        """
-        self.tests = tests
-        self.num_workers = num_workers
-        self.total_cpus = total_cpus
-        self.total_mem = total_mem
-        self.bindir = bindir
-        # Pipes -- s stands for scheduler, w stands for worker
-        self.s2w = [os.pipe() for i in range(num_workers)]
-        self.w2s = [os.pipe() for i in range(num_workers)]
-        self.s2w_r = [os.fdopen(r, "r", 0) for r, w in self.s2w]
-        self.s2w_w = [os.fdopen(w, "w", 0) for r, w in self.s2w]
-        self.w2s_r = [os.fdopen(r, "r", 0) for r, w in self.w2s]
-        self.w2s_w = [os.fdopen(w, "w", 0) for r, w in self.w2s]
-        # "Personal" worker dicts contain modifications that are applied
-        # specifically to each worker.  For example, each worker must use a
-        # different environment file and a different MAC address pool.
-        self.worker_dicts = [{"env": "env%d" % i} for i in range(num_workers)]
-
-
-    def worker(self, index, run_test_func):
-        """
-        The worker function.
-
-        Waits for commands from the scheduler and processes them.
-
-        @param index: The index of this worker (in the range 0..num_workers-1).
-        @param run_test_func: A function to be called to run a test
-                (e.g. job.run_test).
-        """
-        r = self.s2w_r[index]
-        w = self.w2s_w[index]
-        self_dict = self.worker_dicts[index]
-
-        # Inform the scheduler this worker is ready
-        w.write("ready\n")
-
-        while True:
-            cmd = r.readline().split()
-            if not cmd:
-                continue
-
-            # The scheduler wants this worker to run a test
-            if cmd[0] == "run":
-                test_index = int(cmd[1])
-                test = self.tests[test_index].copy()
-                test.update(self_dict)
-                test_iterations = int(test.get("iterations", 1))
-                status = run_test_func("kvm", params=test,
-                                       tag=test.get("shortname"),
-                                       iterations=test_iterations)
-                w.write("done %s %s\n" % (test_index, status))
-                w.write("ready\n")
-
-            # The scheduler wants this worker to free its used resources
-            elif cmd[0] == "cleanup":
-                env_filename = os.path.join(self.bindir, self_dict["env"])
-                env = kvm_utils.Env(env_filename)
-                for obj in env.values():
-                    if isinstance(obj, kvm_vm.VM):
-                        obj.destroy()
-                    elif isinstance(obj, kvm_subprocess.Spawn):
-                        obj.close()
-                env.save()
-                w.write("cleanup_done\n")
-                w.write("ready\n")
-
-            # There's no more work for this worker
-            elif cmd[0] == "terminate":
-                break
-
-
-    def scheduler(self):
-        """
-        The scheduler function.
-
-        Sends commands to workers, telling them to run tests, clean up or
-        terminate execution.
-        """
-        idle_workers = []
-        closing_workers = []
-        test_status = ["waiting"] * len(self.tests)
-        test_worker = [None] * len(self.tests)
-        used_cpus = [0] * self.num_workers
-        used_mem = [0] * self.num_workers
-
-        while True:
-            # Wait for a message from a worker
-            r, w, x = select.select(self.w2s_r, [], [])
-
-            someone_is_ready = False
-
-            for pipe in r:
-                worker_index = self.w2s_r.index(pipe)
-                msg = pipe.readline().split()
-                if not msg:
-                    continue
-
-                # A worker is ready -- add it to the idle_workers list
-                if msg[0] == "ready":
-                    idle_workers.append(worker_index)
-                    someone_is_ready = True
-
-                # A worker completed a test
-                elif msg[0] == "done":
-                    test_index = int(msg[1])
-                    test = self.tests[test_index]
-                    status = int(eval(msg[2]))
-                    test_status[test_index] = ("fail", "pass")[status]
-                    # If the test failed, mark all dependent tests as "failed" too
-                    if not status:
-                        for i, other_test in enumerate(self.tests):
-                            for dep in other_test.get("dep", []):
-                                if dep in test["name"]:
-                                    test_status[i] = "fail"
-
-                # A worker is done shutting down its VMs and other processes
-                elif msg[0] == "cleanup_done":
-                    used_cpus[worker_index] = 0
-                    used_mem[worker_index] = 0
-                    closing_workers.remove(worker_index)
-
-            if not someone_is_ready:
-                continue
-
-            for worker in idle_workers[:]:
-                # Find a test for this worker
-                test_found = False
-                for i, test in enumerate(self.tests):
-                    # We only want "waiting" tests
-                    if test_status[i] != "waiting":
-                        continue
-                    # Make sure the test isn't assigned to another worker
-                    if test_worker[i] is not None and test_worker[i] != worker:
-                        continue
-                    # Make sure the test's dependencies are satisfied
-                    dependencies_satisfied = True
-                    for dep in test["dep"]:
-                        dependencies = [j for j, t in enumerate(self.tests)
-                                        if dep in t["name"]]
-                        bad_status_deps = [j for j in dependencies
-                                           if test_status[j] != "pass"]
-                        if bad_status_deps:
-                            dependencies_satisfied = False
-                            break
-                    if not dependencies_satisfied:
-                        continue
-                    # Make sure we have enough resources to run the test
-                    test_used_cpus = int(test.get("used_cpus", 1))
-                    test_used_mem = int(test.get("used_mem", 128))
-                    # First make sure the other workers aren't using too many
-                    # CPUs (not including the workers currently shutting down)
-                    uc = (sum(used_cpus) - used_cpus[worker] -
-                          sum(used_cpus[i] for i in closing_workers))
-                    if uc and uc + test_used_cpus > self.total_cpus:
-                        continue
-                    # ... or too much memory
-                    um = (sum(used_mem) - used_mem[worker] -
-                          sum(used_mem[i] for i in closing_workers))
-                    if um and um + test_used_mem > self.total_mem:
-                        continue
-                    # If we reached this point it means there are, or will
-                    # soon be, enough resources to run the test
-                    test_found = True
-                    # Now check if the test can be run right now, i.e. if the
-                    # other workers, including the ones currently shutting
-                    # down, aren't using too many CPUs
-                    uc = (sum(used_cpus) - used_cpus[worker])
-                    if uc and uc + test_used_cpus > self.total_cpus:
-                        continue
-                    # ... or too much memory
-                    um = (sum(used_mem) - used_mem[worker])
-                    if um and um + test_used_mem > self.total_mem:
-                        continue
-                    # Everything is OK -- run the test
-                    test_status[i] = "running"
-                    test_worker[i] = worker
-                    idle_workers.remove(worker)
-                    # Update used_cpus and used_mem
-                    used_cpus[worker] = test_used_cpus
-                    used_mem[worker] = test_used_mem
-                    # Assign all related tests to this worker
-                    for j, other_test in enumerate(self.tests):
-                        for other_dep in other_test["dep"]:
-                            # All tests that depend on this test
-                            if other_dep in test["name"]:
-                                test_worker[j] = worker
-                                break
-                            # ... and all tests that share a dependency
-                            # with this test
-                            for dep in test["dep"]:
-                                if dep in other_dep or other_dep in dep:
-                                    test_worker[j] = worker
-                                    break
-                    # Tell the worker to run the test
-                    self.s2w_w[worker].write("run %s\n" % i)
-                    break
-
-                # If there won't be any tests for this worker to run soon, tell
-                # the worker to free its used resources
-                if not test_found and (used_cpus[worker] or used_mem[worker]):
-                    self.s2w_w[worker].write("cleanup\n")
-                    idle_workers.remove(worker)
-                    closing_workers.append(worker)
-
-            # If there are no more new tests to run, terminate the workers and
-            # the scheduler
-            if len(idle_workers) == self.num_workers:
-                for worker in idle_workers:
-                    self.s2w_w[worker].write("terminate\n")
-                break
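[Editor's note: the deleted `kvm_scheduler.py` above coordinates workers over `os.pipe()` pairs (`s2w`/`w2s`) using a newline-delimited text protocol: workers announce `ready`, the scheduler dispatches `run <index>`, and workers answer `done <index> <status>` followed by another `ready` (plus `cleanup`/`terminate`). Below is a minimal single-process sketch of one round of that protocol; `worker_step` and the `tests` list are illustrative stand-ins, not part of the original API.]

```python
import os

# One scheduler->worker pipe and one worker->scheduler pipe, as in the
# deleted scheduler's self.s2w / self.w2s pairs.
s2w_r, s2w_w = os.pipe()
w2s_r, w2s_w = os.pipe()

def worker_step(cmd_fd, reply_fd, tests):
    """Handle one scheduler command, mimicking the worker() loop body."""
    cmd = os.read(cmd_fd, 1024).decode().split()
    if cmd and cmd[0] == "run":
        index = int(cmd[1])
        status = tests[index]()  # stand-in for run_test_func / job.run_test
        os.write(reply_fd, ("done %d %d\nready\n" % (index, status)).encode())
        return True
    return False  # "terminate" (or an empty read) ends the worker loop

tests = [lambda: 1]              # one "test" that reports success
os.write(s2w_w, b"run 0\n")      # scheduler dispatches test 0
worker_step(s2w_r, w2s_w, tests)
reply = os.read(w2s_r, 1024).decode().split("\n")
```

After this exchange `reply` carries the `done 0 1` result line and the follow-up `ready`, which is what lets the real scheduler mark the test passed and immediately consider the worker for the next dispatch.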
diff --git a/client/tests/kvm/kvm_subprocess.py b/client/tests/kvm/kvm_subprocess.py
deleted file mode 100755
index 0b8734f..0000000
--- a/client/tests/kvm/kvm_subprocess.py
+++ /dev/null
@@ -1,1351 +0,0 @@
-#!/usr/bin/python
-"""
-A class and functions used for running and controlling child processes.
-
-@copyright: 2008-2009 Red Hat Inc.
-"""
-
-import os, sys, pty, select, termios, fcntl
-
-
-# The following helper functions are shared by the server and the client.
-
-def _lock(filename):
-    if not os.path.exists(filename):
-        open(filename, "w").close()
-    fd = os.open(filename, os.O_RDWR)
-    fcntl.lockf(fd, fcntl.LOCK_EX)
-    return fd
-
-
-def _unlock(fd):
-    fcntl.lockf(fd, fcntl.LOCK_UN)
-    os.close(fd)
-
-
-def _locked(filename):
-    try:
-        fd = os.open(filename, os.O_RDWR)
-    except:
-        return False
-    try:
-        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
-    except:
-        os.close(fd)
-        return True
-    fcntl.lockf(fd, fcntl.LOCK_UN)
-    os.close(fd)
-    return False
-
-
-def _wait(filename):
-    fd = _lock(filename)
-    _unlock(fd)
-
-
-def _get_filenames(base_dir, id):
-    return [os.path.join(base_dir, s + id) for s in
-            "shell-pid-", "status-", "output-", "inpipe-",
-            "lock-server-running-", "lock-client-starting-"]
-
-
-def _get_reader_filename(base_dir, id, reader):
-    return os.path.join(base_dir, "outpipe-%s-%s" % (reader, id))
-
-
-# The following is the server part of the module.
-
-if __name__ == "__main__":
-    id = sys.stdin.readline().strip()
-    echo = sys.stdin.readline().strip() == "True"
-    readers = sys.stdin.readline().strip().split(",")
-    command = sys.stdin.readline().strip() + " && echo %s > /dev/null" % id
-
-    # Define filenames to be used for communication
-    base_dir = "/tmp/kvm_spawn"
-    (shell_pid_filename,
-     status_filename,
-     output_filename,
-     inpipe_filename,
-     lock_server_running_filename,
-     lock_client_starting_filename) = _get_filenames(base_dir, id)
-
-    # Populate the reader filenames list
-    reader_filenames = [_get_reader_filename(base_dir, id, reader)
-                        for reader in readers]
-
-    # Set $TERM = dumb
-    os.putenv("TERM", "dumb")
-
-    (shell_pid, shell_fd) = pty.fork()
-    if shell_pid == 0:
-        # Child process: run the command in a subshell
-        os.execv("/bin/sh", ["/bin/sh", "-c", command])
-    else:
-        # Parent process
-        lock_server_running = _lock(lock_server_running_filename)
-
-        # Set terminal echo on/off and disable pre- and post-processing
-        attr = termios.tcgetattr(shell_fd)
-        attr[0] &= ~termios.INLCR
-        attr[0] &= ~termios.ICRNL
-        attr[0] &= ~termios.IGNCR
-        attr[1] &= ~termios.OPOST
-        if echo:
-            attr[3] |= termios.ECHO
-        else:
-            attr[3] &= ~termios.ECHO
-        termios.tcsetattr(shell_fd, termios.TCSANOW, attr)
-
-        # Open output file
-        output_file = open(output_filename, "w")
-        # Open input pipe
-        os.mkfifo(inpipe_filename)
-        inpipe_fd = os.open(inpipe_filename, os.O_RDWR)
-        # Open output pipes (readers)
-        reader_fds = []
-        for filename in reader_filenames:
-            os.mkfifo(filename)
-            reader_fds.append(os.open(filename, os.O_RDWR))
-
-        # Write shell PID to file
-        file = open(shell_pid_filename, "w")
-        file.write(str(shell_pid))
-        file.close()
-
-        # Print something to stdout so the client can start working
-        print "Server %s ready" % id
-        sys.stdout.flush()
-
-        # Initialize buffers
-        buffers = ["" for reader in readers]
-
-        # Read from child and write to files/pipes
-        while True:
-            check_termination = False
-            # Make a list of reader pipes whose buffers are not empty
-            fds = [fd for (i, fd) in enumerate(reader_fds) if buffers[i]]
-            # Wait until there's something to do
-            r, w, x = select.select([shell_fd, inpipe_fd], fds, [], 0.5)
-            # If a reader pipe is ready for writing --
-            for (i, fd) in enumerate(reader_fds):
-                if fd in w:
-                    bytes_written = os.write(fd, buffers[i])
-                    buffers[i] = buffers[i][bytes_written:]
-            # If there's data to read from the child process --
-            if shell_fd in r:
-                try:
-                    data = os.read(shell_fd, 16384)
-                except OSError:
-                    data = ""
-                if not data:
-                    check_termination = True
-                # Remove carriage returns from the data -- they often cause
-                # trouble and are normally not needed
-                data = data.replace("\r", "")
-                output_file.write(data)
-                output_file.flush()
-                for i in range(len(readers)):
-                    buffers[i] += data
-            # If os.read() raised an exception or there was nothing to read --
-            if check_termination or shell_fd not in r:
-                pid, status = os.waitpid(shell_pid, os.WNOHANG)
-                if pid:
-                    status = os.WEXITSTATUS(status)
-                    break
-            # If there's data to read from the client --
-            if inpipe_fd in r:
-                data = os.read(inpipe_fd, 1024)
-                os.write(shell_fd, data)
-
-        # Write the exit status to a file
-        file = open(status_filename, "w")
-        file.write(str(status))
-        file.close()
-
-        # Wait for the client to finish initializing
-        _wait(lock_client_starting_filename)
-
-        # Delete FIFOs
-        for filename in reader_filenames + [inpipe_filename]:
-            try:
-                os.unlink(filename)
-            except OSError:
-                pass
-
-        # Close all files and pipes
-        output_file.close()
-        os.close(inpipe_fd)
-        for fd in reader_fds:
-            os.close(fd)
-
-        _unlock(lock_server_running)
-        exit(0)
-
-
-# The following is the client part of the module.
-
-import subprocess, time, signal, re, threading, logging
-import common, kvm_utils
-
-
-class ExpectError(Exception):
-    def __init__(self, patterns, output):
-        Exception.__init__(self, patterns, output)
-        self.patterns = patterns
-        self.output = output
-
-    def _pattern_str(self):
-        if len(self.patterns) == 1:
-            return "pattern %r" % self.patterns[0]
-        else:
-            return "patterns %r" % self.patterns
-
-    def __str__(self):
-        return ("Unknown error occurred while looking for %s    (output: %r)" %
-                (self._pattern_str(), self.output))
-
-
-class ExpectTimeoutError(ExpectError):
-    def __str__(self):
-        return ("Timeout expired while looking for %s    (output: %r)" %
-                (self._pattern_str(), self.output))
-
-
-class ExpectProcessTerminatedError(ExpectError):
-    def __init__(self, patterns, status, output):
-        ExpectError.__init__(self, patterns, output)
-        self.status = status
-
-    def __str__(self):
-        return ("Process terminated while looking for %s    "
-                "(status: %s,    output: %r)" % (self._pattern_str(),
-                                                 self.status, self.output))
-
-
-class ShellError(Exception):
-    def __init__(self, cmd, output):
-        Exception.__init__(self, cmd, output)
-        self.cmd = cmd
-        self.output = output
-
-    def __str__(self):
-        return ("Could not execute shell command %r    (output: %r)" %
-                (self.cmd, self.output))
-
-
-class ShellTimeoutError(ShellError):
-    def __str__(self):
-        return ("Timeout expired while waiting for shell command to "
-                "complete: %r    (output: %r)" % (self.cmd, self.output))
-
-
-class ShellProcessTerminatedError(ShellError):
-    # Raised when the shell process itself (e.g. ssh, netcat, telnet)
-    # terminates unexpectedly
-    def __init__(self, cmd, status, output):
-        ShellError.__init__(self, cmd, output)
-        self.status = status
-
-    def __str__(self):
-        return ("Shell process terminated while waiting for command to "
-                "complete: %r    (status: %s,    output: %r)" %
-                (self.cmd, self.status, self.output))
-
-
-class ShellCmdError(ShellError):
-    # Raised when a command executed in a shell terminates with a nonzero
-    # exit code (status)
-    def __init__(self, cmd, status, output):
-        ShellError.__init__(self, cmd, output)
-        self.status = status
-
-    def __str__(self):
-        return ("Shell command failed: %r    (status: %s,    output: %r)" %
-                (self.cmd, self.status, self.output))
-
-
-class ShellStatusError(ShellError):
-    # Raised when the command's exit status cannot be obtained
-    def __str__(self):
-        return ("Could not get exit status of command: %r    (output: %r)" %
-                (self.cmd, self.output))
-
-
-def run_bg(command, termination_func=None, output_func=None, output_prefix="",
-           timeout=1.0):
-    """
-    Run command as a subprocess.  Call output_func with each line of output
-    from the subprocess (prefixed by output_prefix).  Call termination_func
-    when the subprocess terminates.  Return when timeout expires or when the
-    subprocess exits -- whichever occurs first.
-
-    @brief: Run a subprocess in the background and collect its output and
-            exit status.
-
-    @param command: The shell command to execute
-    @param termination_func: A function to call when the process terminates
-            (should take an integer exit status parameter)
-    @param output_func: A function to call with each line of output from
-            the subprocess (should take a string parameter)
-    @param output_prefix: A string to pre-pend to each line of the output,
-            before passing it to stdout_func
-    @param timeout: Time duration (in seconds) to wait for the subprocess to
-            terminate before returning
-
-    @return: A Tail object.
-    """
-    process = Tail(command=command,
-                   termination_func=termination_func,
-                   output_func=output_func,
-                   output_prefix=output_prefix)
-
-    end_time = time.time() + timeout
-    while time.time() < end_time and process.is_alive():
-        time.sleep(0.1)
-
-    return process
-
-
-def run_fg(command, output_func=None, output_prefix="", timeout=1.0):
-    """
-    Run command as a subprocess.  Call output_func with each line of output
-    from the subprocess (prefixed by prefix).  Return when timeout expires or
-    when the subprocess exits -- whichever occurs first.  If timeout expires
-    and the subprocess is still running, kill it before returning.
-
-    @brief: Run a subprocess in the foreground and collect its output and
-            exit status.
-
-    @param command: The shell command to execute
-    @param output_func: A function to call with each line of output from
-            the subprocess (should take a string parameter)
-    @param output_prefix: A string to pre-pend to each line of the output,
-            before passing it to stdout_func
-    @param timeout: Time duration (in seconds) to wait for the subprocess to
-            terminate before killing it and returning
-
-    @return: A 2-tuple containing the exit status of the process and its
-            STDOUT/STDERR output.  If timeout expires before the process
-            terminates, the returned status is None.
-    """
-    process = run_bg(command, None, output_func, output_prefix, timeout)
-    output = process.get_output()
-    if process.is_alive():
-        status = None
-    else:
-        status = process.get_status()
-    process.close()
-    return (status, output)
-
-
-class Spawn:
-    """
-    This class is used for spawning and controlling a child process.
-
-    A new instance of this class can either run a new server (a small Python
-    program that reads output from the child process and reports it to the
-    client and to a text file) or attach to an already running server.
-    When a server is started it runs the child process.
-    The server writes output from the child's STDOUT and STDERR to a text file.
-    The text file can be accessed at any time using get_output().
-    In addition, the server opens as many pipes as requested by the client and
-    writes the output to them.
-    The pipes are requested and accessed by classes derived from Spawn.
-    These pipes are referred to as "readers".
-    The server also receives input from the client and sends it to the child
-    process.
-    An instance of this class can be pickled.  Every derived class is
-    responsible for restoring its own state by properly defining
-    __getinitargs__().
-
-    The first named pipe is used by _tail(), a function that runs in the
-    background and reports new output from the child as it is produced.
-    The second named pipe is used by a set of functions that read and parse
-    output as requested by the user in an interactive manner, similar to
-    pexpect.
-    When unpickled it automatically
-    resumes _tail() if needed.
-    """
-
-    def __init__(self, command=None, id=None, auto_close=False, echo=False,
-                 linesep="\n"):
-        """
-        Initialize the class and run command as a child process.
-
-        @param command: Command to run, or None if accessing an already running
-                server.
-        @param id: ID of an already running server, if accessing a running
-                server, or None if starting a new one.
-        @param auto_close: If True, close() the instance automatically when its
-                reference count drops to zero (default False).
-        @param echo: Boolean indicating whether echo should be initially
-                enabled for the pseudo terminal running the subprocess.  This
-                parameter has an effect only when starting a new server.
-        @param linesep: Line separator to be appended to strings sent to the
-                child process by sendline().
-        """
-        self.id = id or kvm_utils.generate_random_string(8)
-
-        # Define filenames for communication with server
-        base_dir = "/tmp/kvm_spawn"
-        try:
-            os.makedirs(base_dir)
-        except:
-            pass
-        (self.shell_pid_filename,
-         self.status_filename,
-         self.output_filename,
-         self.inpipe_filename,
-         self.lock_server_running_filename,
-         self.lock_client_starting_filename) = _get_filenames(base_dir,
-                                                              self.id)
-
-        # Remember some attributes
-        self.auto_close = auto_close
-        self.echo = echo
-        self.linesep = linesep
-
-        # Make sure the 'readers' and 'close_hooks' attributes exist
-        if not hasattr(self, "readers"):
-            self.readers = []
-        if not hasattr(self, "close_hooks"):
-            self.close_hooks = []
-
-        # Define the reader filenames
-        self.reader_filenames = dict(
-            (reader, _get_reader_filename(base_dir, self.id, reader))
-            for reader in self.readers)
-
-        # Let the server know a client intends to open some pipes;
-        # if the executed command terminates quickly, the server will wait for
-        # the client to release the lock before exiting
-        lock_client_starting = _lock(self.lock_client_starting_filename)
-
-        # Start the server (which runs the command)
-        if command:
-            sub = subprocess.Popen("%s %s" % (sys.executable, __file__),
-                                   shell=True,
-                                   stdin=subprocess.PIPE,
-                                   stdout=subprocess.PIPE,
-                                   stderr=subprocess.STDOUT)
-            # Send parameters to the server
-            sub.stdin.write("%s\n" % self.id)
-            sub.stdin.write("%s\n" % echo)
-            sub.stdin.write("%s\n" % ",".join(self.readers))
-            sub.stdin.write("%s\n" % command)
-            # Wait for the server to complete its initialization
-            while not "Server %s ready" % self.id in sub.stdout.readline():
-                pass
-
-        # Open the reading pipes
-        self.reader_fds = {}
-        try:
-            assert(_locked(self.lock_server_running_filename))
-            for reader, filename in self.reader_filenames.items():
-                self.reader_fds[reader] = os.open(filename, os.O_RDONLY)
-        except:
-            pass
-
-        # Allow the server to continue
-        _unlock(lock_client_starting)
-
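[Editor's note on the locking handshake above: the `_lock`/`_locked`/`_unlock` helpers are defined elsewhere in this file and are based on advisory file locks. A minimal self-contained sketch of the same idea (function names here are illustrative, not the original helpers; Python 3 syntax):]

```python
import fcntl
import os
import tempfile

def lock(filename):
    """Acquire an exclusive advisory lock; return the open file object."""
    f = open(filename, "w")
    fcntl.flock(f, fcntl.LOCK_EX)
    return f

def locked(filename):
    """Return True if some open file description holds a lock on filename."""
    try:
        f = open(filename, "w")
    except IOError:
        return False
    try:
        # A non-blocking attempt fails if the lock is held elsewhere,
        # even via another fd in the same process.
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        f.close()
        return True
    fcntl.flock(f, fcntl.LOCK_UN)
    f.close()
    return False

def unlock(f):
    """Release the lock acquired by lock() and close the file."""
    fcntl.flock(f, fcntl.LOCK_UN)
    f.close()
```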
-
-    # The following two functions are defined to make sure the state is set
-    # exclusively by the constructor call as specified in __getinitargs__().
-
-    def __getstate__(self):
-        pass
-
-
-    def __setstate__(self, state):
-        pass
-
-
-    def __getinitargs__(self):
-        # Save some information when pickling -- will be passed to the
-        # constructor upon unpickling
-        return (None, self.id, self.auto_close, self.echo, self.linesep)
-
-
-    def __del__(self):
-        if self.auto_close:
-            self.close()
-
-
-    def _add_reader(self, reader):
-        """
-        Add a reader whose file descriptor can be obtained with _get_fd().
-        Should be called before __init__().  Intended for use by derived
-        classes.
-
-        @param reader: The name of the reader.
-        """
-        if not hasattr(self, "readers"):
-            self.readers = []
-        self.readers.append(reader)
-
-
-    def _add_close_hook(self, hook):
-        """
-        Add a close hook function to be called when close() is called.
-        The function will be called after the process terminates but before
-        final cleanup.  Intended for use by derived classes.
-
-        @param hook: The hook function.
-        """
-        if not hasattr(self, "close_hooks"):
-            self.close_hooks = []
-        self.close_hooks.append(hook)
-
-
-    def _get_fd(self, reader):
-        """
-        Return an open file descriptor corresponding to the specified reader
-        pipe.  If no such reader exists, or the pipe could not be opened,
-        return None.  Intended for use by derived classes.
-
-        @param reader: The name of the reader.
-        """
-        return self.reader_fds.get(reader)
-
-
-    def get_id(self):
-        """
-        Return the instance's id attribute, which may be used to access the
-        process in the future.
-        """
-        return self.id
-
-
-    def get_pid(self):
-        """
-        Return the PID of the process.
-
-        Note: this may be the PID of the shell process running the user-given
-        command.
-        """
-        try:
-            file = open(self.shell_pid_filename, "r")
-            pid = int(file.read())
-            file.close()
-            return pid
-        except:
-            return None
-
-
-    def get_status(self):
-        """
-        Wait for the process to exit and return its exit status, or None
-        if the exit status is not available.
-        """
-        _wait(self.lock_server_running_filename)
-        try:
-            file = open(self.status_filename, "r")
-            status = int(file.read())
-            file.close()
-            return status
-        except:
-            return None
-
-
-    def get_output(self):
-        """
-        Return the STDOUT and STDERR output of the process so far.
-        """
-        try:
-            file = open(self.output_filename, "r")
-            output = file.read()
-            file.close()
-            return output
-        except:
-            return ""
-
-
-    def is_alive(self):
-        """
-        Return True if the process is running.
-        """
-        return _locked(self.lock_server_running_filename)
-
-
-    def close(self, sig=signal.SIGKILL):
-        """
-        Kill the child process if it's alive and remove temporary files.
-
-        @param sig: The signal to send the process when attempting to kill it.
-        """
-        # Kill it if it's alive
-        if self.is_alive():
-            kvm_utils.kill_process_tree(self.get_pid(), sig)
-        # Wait for the server to exit
-        _wait(self.lock_server_running_filename)
-        # Call all cleanup routines
-        for hook in self.close_hooks:
-            hook(self)
-        # Close reader file descriptors
-        for fd in self.reader_fds.values():
-            try:
-                os.close(fd)
-            except:
-                pass
-        self.reader_fds = {}
-        # Remove all used files
-        for filename in (_get_filenames("/tmp/kvm_spawn", self.id) +
-                         self.reader_filenames.values()):
-            try:
-                os.unlink(filename)
-            except OSError:
-                pass
-
-
-    def set_linesep(self, linesep):
-        """
-        Sets the line separator string (usually "\\n").
-
-        @param linesep: Line separator string.
-        """
-        self.linesep = linesep
-
-
-    def send(self, str=""):
-        """
-        Send a string to the child process.
-
-        @param str: String to send to the child process.
-        """
-        try:
-            fd = os.open(self.inpipe_filename, os.O_RDWR)
-            os.write(fd, str)
-            os.close(fd)
-        except:
-            pass
-
-
-    def sendline(self, str=""):
-        """
-        Send a string followed by a line separator to the child process.
-
-        @param str: String to send to the child process.
-        """
-        self.send(str + self.linesep)
-
-
-_thread_kill_requested = False
-
-def kill_tail_threads():
-    """
-    Kill all Tail threads.
-
-    After calling this function no new threads should be started.
-    """
-    global _thread_kill_requested
-    _thread_kill_requested = True
-    for t in threading.enumerate():
-        if hasattr(t, "name") and t.name.startswith("tail_thread"):
-            t.join(10)
-    _thread_kill_requested = False
-
-
-class Tail(Spawn):
-    """
-    This class runs a child process in the background and sends its output in
-    real time, line-by-line, to a callback function.
-
-    See Spawn's docstring.
-
-    This class uses a single pipe reader to read data in real time from the
-    child process and report it to a given callback function.
-    When the child process exits, its exit status is reported to an additional
-    callback function.
-
-    When this class is unpickled, it automatically resumes reporting output.
-    """
-
-    def __init__(self, command=None, id=None, auto_close=False, echo=False,
-                 linesep="\n", termination_func=None, termination_params=(),
-                 output_func=None, output_params=(), output_prefix=""):
-        """
-        Initialize the class and run command as a child process.
-
-        @param command: Command to run, or None if accessing an already running
-                server.
-        @param id: ID of an already running server, if accessing a running
-                server, or None if starting a new one.
-        @param auto_close: If True, close() the instance automatically when its
-                reference count drops to zero (default False).
-        @param echo: Boolean indicating whether echo should be initially
-                enabled for the pseudo terminal running the subprocess.  This
-                parameter has an effect only when starting a new server.
-        @param linesep: Line separator to be appended to strings sent to the
-                child process by sendline().
-        @param termination_func: Function to call when the process exits.  The
-                function must accept a single exit status parameter.
-        @param termination_params: Parameters to send to termination_func
-                before the exit status.
-        @param output_func: Function to call whenever a line of output is
-                available from the STDOUT or STDERR streams of the process.
-                The function must accept a single string parameter.  The string
-                does not include the final newline.
-        @param output_params: Parameters to send to output_func before the
-                output line.
-        @param output_prefix: String to prepend to lines sent to output_func.
-        """
-        # Add a reader and a close hook
-        self._add_reader("tail")
-        self._add_close_hook(Tail._join_thread)
-
-        # Init the superclass
-        Spawn.__init__(self, command, id, auto_close, echo, linesep)
-
-        # Remember some attributes
-        self.termination_func = termination_func
-        self.termination_params = termination_params
-        self.output_func = output_func
-        self.output_params = output_params
-        self.output_prefix = output_prefix
-
-        # Start the thread in the background
-        self.tail_thread = None
-        if termination_func or output_func:
-            self._start_thread()
-
-
-    def __getinitargs__(self):
-        return Spawn.__getinitargs__(self) + (self.termination_func,
-                                              self.termination_params,
-                                              self.output_func,
-                                              self.output_params,
-                                              self.output_prefix)
-
-
-    def set_termination_func(self, termination_func):
-        """
-        Set the termination_func attribute. See __init__() for details.
-
-        @param termination_func: Function to call when the process terminates.
-                Must take a single parameter -- the exit status.
-        """
-        self.termination_func = termination_func
-        if termination_func and not self.tail_thread:
-            self._start_thread()
-
-
-    def set_termination_params(self, termination_params):
-        """
-        Set the termination_params attribute. See __init__() for details.
-
-        @param termination_params: Parameters to send to termination_func
-                before the exit status.
-        """
-        self.termination_params = termination_params
-
-
-    def set_output_func(self, output_func):
-        """
-        Set the output_func attribute. See __init__() for details.
-
-        @param output_func: Function to call for each line of STDOUT/STDERR
-                output from the process.  Must take a single string parameter.
-        """
-        self.output_func = output_func
-        if output_func and not self.tail_thread:
-            self._start_thread()
-
-
-    def set_output_params(self, output_params):
-        """
-        Set the output_params attribute. See __init__() for details.
-
-        @param output_params: Parameters to send to output_func before the
-                output line.
-        """
-        self.output_params = output_params
-
-
-    def set_output_prefix(self, output_prefix):
-        """
-        Set the output_prefix attribute. See __init__() for details.
-
-        @param output_prefix: String to prepend to each line sent to
-                output_func (see set_output_func()).
-        """
-        self.output_prefix = output_prefix
-
-
-    def _tail(self):
-        def print_line(text):
-            # Pre-pend prefix and remove trailing whitespace
-            text = self.output_prefix + text.rstrip()
-            # Pass text to output_func
-            try:
-                params = self.output_params + (text,)
-                self.output_func(*params)
-            except TypeError:
-                pass
-
-        try:
-            fd = self._get_fd("tail")
-            buffer = ""
-            while True:
-                global _thread_kill_requested
-                if _thread_kill_requested:
-                    return
-                try:
-                    # See if there's any data to read from the pipe
-                    r, w, x = select.select([fd], [], [], 0.05)
-                except:
-                    break
-                if fd in r:
-                    # Some data is available; read it
-                    new_data = os.read(fd, 1024)
-                    if not new_data:
-                        break
-                    buffer += new_data
-                    # Send the output to output_func line by line
-                    # (except for the last line)
-                    if self.output_func:
-                        lines = buffer.split("\n")
-                        for line in lines[:-1]:
-                            print_line(line)
-                    # Leave only the last line
-                    last_newline_index = buffer.rfind("\n")
-                    buffer = buffer[last_newline_index+1:]
-                else:
-                    # No output is available right now; flush the buffer
-                    if buffer:
-                        print_line(buffer)
-                        buffer = ""
-            # The process terminated; print any remaining output
-            if buffer:
-                print_line(buffer)
-            # Get the exit status, print it and send it to termination_func
-            status = self.get_status()
-            if status is None:
-                return
-            print_line("(Process terminated with status %s)" % status)
-            try:
-                params = self.termination_params + (status,)
-                self.termination_func(*params)
-            except TypeError:
-                pass
-        finally:
-            self.tail_thread = None
-
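[Editor's note: the buffering in `_tail()` above emits complete lines as they arrive and retains the trailing partial line until more data (or EOF) shows up. A stand-alone sketch of that split, with an illustrative function name not present in the original code:]

```python
def feed(buffer, new_data):
    """Append new_data to buffer; return (complete_lines, remaining_buffer).

    Mirrors the buffering in Tail._tail(): everything up to the last
    newline is emitted line by line; the partial tail is kept back.
    """
    buffer += new_data
    last_newline = buffer.rfind("\n")
    if last_newline == -1:
        return [], buffer          # no complete line yet
    complete = buffer[:last_newline].split("\n")
    return complete, buffer[last_newline + 1:]
```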
-
-    def _start_thread(self):
-        self.tail_thread = threading.Thread(target=self._tail,
-                                            name="tail_thread_%s" % self.id)
-        self.tail_thread.start()
-
-
-    def _join_thread(self):
-        # Wait for the tail thread to exit
-        # (it's done this way because self.tail_thread may become None at any
-        # time)
-        t = self.tail_thread
-        if t:
-            t.join()
-
-
-class Expect(Tail):
-    """
-    This class runs a child process in the background and provides expect-like
-    services.
-
-    It also provides all of Tail's functionality.
-    """
-
-    def __init__(self, command=None, id=None, auto_close=True, echo=False,
-                 linesep="\n", termination_func=None, termination_params=(),
-                 output_func=None, output_params=(), output_prefix=""):
-        """
-        Initialize the class and run command as a child process.
-
-        @param command: Command to run, or None if accessing an already running
-                server.
-        @param id: ID of an already running server, if accessing a running
-                server, or None if starting a new one.
-        @param auto_close: If True, close() the instance automatically when its
-                reference count drops to zero (default True).
-        @param echo: Boolean indicating whether echo should be initially
-                enabled for the pseudo terminal running the subprocess.  This
-                parameter has an effect only when starting a new server.
-        @param linesep: Line separator to be appended to strings sent to the
-                child process by sendline().
-        @param termination_func: Function to call when the process exits.  The
-                function must accept a single exit status parameter.
-        @param termination_params: Parameters to send to termination_func
-                before the exit status.
-        @param output_func: Function to call whenever a line of output is
-                available from the STDOUT or STDERR streams of the process.
-                The function must accept a single string parameter.  The string
-                does not include the final newline.
-        @param output_params: Parameters to send to output_func before the
-                output line.
-        @param output_prefix: String to prepend to lines sent to output_func.
-        """
-        # Add a reader
-        self._add_reader("expect")
-
-        # Init the superclass
-        Tail.__init__(self, command, id, auto_close, echo, linesep,
-                      termination_func, termination_params,
-                      output_func, output_params, output_prefix)
-
-
-    def __getinitargs__(self):
-        return Tail.__getinitargs__(self)
-
-
-    def read_nonblocking(self, timeout=None):
-        """
-        Read from child until there is nothing to read for timeout seconds.
-
-        @param timeout: Time (seconds) to wait before we give up reading from
-                the child process, or None to use the default value.
-        """
-        if timeout is None:
-            timeout = 0.1
-        fd = self._get_fd("expect")
-        data = ""
-        while True:
-            try:
-                r, w, x = select.select([fd], [], [], timeout)
-            except:
-                return data
-            if fd in r:
-                new_data = os.read(fd, 1024)
-                if not new_data:
-                    return data
-                data += new_data
-            else:
-                return data
-
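[Editor's note: `read_nonblocking()` above polls with `select()` and returns once nothing arrives within the timeout, or on EOF. The same pattern on an ordinary pipe, as a self-contained Python 3 sketch (not the original method):]

```python
import os
import select

def read_nonblocking(fd, timeout=0.1):
    """Read from fd until select() reports no data for `timeout` seconds."""
    data = ""
    while True:
        r, _, _ = select.select([fd], [], [], timeout)
        if fd not in r:
            return data            # nothing arrived within the timeout
        chunk = os.read(fd, 1024)
        if not chunk:
            return data            # EOF
        data += chunk.decode()

# Usage: write to one end of a pipe, drain the other.
r, w = os.pipe()
os.write(w, b"hello")
os.close(w)                        # EOF follows the written data
out = read_nonblocking(r)
os.close(r)
```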
-
-    def match_patterns(self, str, patterns):
-        """
-        Match str against a list of patterns.
-
-        Return the index of the first pattern that matches a substring of str.
-        None and empty strings in patterns are ignored.
-        If no match is found, return None.
-
-        @param patterns: List of strings (regular expression patterns).
-        """
-        for i in range(len(patterns)):
-            if not patterns[i]:
-                continue
-            if re.search(patterns[i], str):
-                return i
-
-
-    def read_until_output_matches(self, patterns, filter=lambda x: x,
-                                  timeout=60, internal_timeout=None,
-                                  print_func=None):
-        """
-        Read using read_nonblocking until a match is found using match_patterns,
-        or until timeout expires. Before attempting to search for a match, the
-        data is filtered using the filter function provided.
-
-        @brief: Read from child using read_nonblocking until a pattern
-                matches.
-        @param patterns: List of strings (regular expression patterns)
-        @param filter: Function to apply to the data read from the child before
-                attempting to match it against the patterns (should take and
-                return a string)
-        @param timeout: The duration (in seconds) to wait until a match is
-                found
-        @param internal_timeout: The timeout to pass to read_nonblocking
-        @param print_func: A function to be used to print the data being read
-                (should take a string parameter)
-        @return: Tuple containing the match index and the data read so far
-        @raise ExpectTimeoutError: Raised if timeout expires
-        @raise ExpectProcessTerminatedError: Raised if the child process
-                terminates while waiting for output
-        @raise ExpectError: Raised if an unknown error occurs
-        """
-        fd = self._get_fd("expect")
-        o = ""
-        end_time = time.time() + timeout
-        while True:
-            try:
-                r, w, x = select.select([fd], [], [],
-                                        max(0, end_time - time.time()))
-            except (select.error, TypeError):
-                break
-            if not r:
-                raise ExpectTimeoutError(patterns, o)
-            # Read data from child
-            data = self.read_nonblocking(internal_timeout)
-            if not data:
-                break
-            # Print it if necessary
-            if print_func:
-                for line in data.splitlines():
-                    print_func(line)
-            # Look for patterns
-            o += data
-            match = self.match_patterns(filter(o), patterns)
-            if match is not None:
-                return match, o
-
-        # Check if the child has terminated
-        if kvm_utils.wait_for(lambda: not self.is_alive(), 5, 0, 0.1):
-            raise ExpectProcessTerminatedError(patterns, self.get_status(), o)
-        else:
-            # This shouldn't happen
-            raise ExpectError(patterns, o)
-
-
-    def read_until_last_word_matches(self, patterns, timeout=60,
-                                     internal_timeout=None, print_func=None):
-        """
-        Read using read_nonblocking until the last word of the output matches
-        one of the patterns (using match_patterns), or until timeout expires.
-
-        @param patterns: A list of strings (regular expression patterns)
-        @param timeout: The duration (in seconds) to wait until a match is
-                found
-        @param internal_timeout: The timeout to pass to read_nonblocking
-        @param print_func: A function to be used to print the data being read
-                (should take a string parameter)
-        @return: A tuple containing the match index and the data read so far
-        @raise ExpectTimeoutError: Raised if timeout expires
-        @raise ExpectProcessTerminatedError: Raised if the child process
-                terminates while waiting for output
-        @raise ExpectError: Raised if an unknown error occurs
-        """
-        def get_last_word(str):
-            if str:
-                return str.split()[-1]
-            else:
-                return ""
-
-        return self.read_until_output_matches(patterns, get_last_word,
-                                              timeout, internal_timeout,
-                                              print_func)
-
-
-    def read_until_last_line_matches(self, patterns, timeout=60,
-                                     internal_timeout=None, print_func=None):
-        """
-        Read using read_nonblocking until the last non-empty line of the output
-        matches one of the patterns (using match_patterns), or until timeout
-        expires. Return a tuple containing the match index (or None if no match
-        was found) and the data read so far.
-
-        @brief: Read using read_nonblocking until the last non-empty line
-                matches a pattern.
-
-        @param patterns: A list of strings (regular expression patterns)
-        @param timeout: The duration (in seconds) to wait until a match is
-                found
-        @param internal_timeout: The timeout to pass to read_nonblocking
-        @param print_func: A function to be used to print the data being read
-                (should take a string parameter)
-        @return: A tuple containing the match index and the data read so far
-        @raise ExpectTimeoutError: Raised if timeout expires
-        @raise ExpectProcessTerminatedError: Raised if the child process
-                terminates while waiting for output
-        @raise ExpectError: Raised if an unknown error occurs
-        """
-        def get_last_nonempty_line(str):
-            nonempty_lines = [l for l in str.splitlines() if l.strip()]
-            if nonempty_lines:
-                return nonempty_lines[-1]
-            else:
-                return ""
-
-        return self.read_until_output_matches(patterns, get_last_nonempty_line,
-                                              timeout, internal_timeout,
-                                              print_func)
-
-
-class ShellSession(Expect):
-    """
-    This class runs a child process in the background.  It is suited for
-    processes that provide an interactive shell, such as SSH and Telnet.
-
-    It provides all services of Expect and Tail.  In addition, it
-    provides command running services, and a utility function to test the
-    process for responsiveness.
-    """
-
-    def __init__(self, command=None, id=None, auto_close=True, echo=False,
-                 linesep="\n", termination_func=None, termination_params=(),
-                 output_func=None, output_params=(), output_prefix="",
-                 prompt=r"[\#\$]\s*$", status_test_command="echo $?"):
-        """
-        Initialize the class and run command as a child process.
-
-        @param command: Command to run, or None if accessing an already running
-                server.
-        @param id: ID of an already running server, if accessing a running
-                server, or None if starting a new one.
-        @param auto_close: If True, close() the instance automatically when its
-                reference count drops to zero (default True).
-        @param echo: Boolean indicating whether echo should be initially
-                enabled for the pseudo terminal running the subprocess.  This
-                parameter has an effect only when starting a new server.
-        @param linesep: Line separator to be appended to strings sent to the
-                child process by sendline().
-        @param termination_func: Function to call when the process exits.  The
-                function must accept a single exit status parameter.
-        @param termination_params: Parameters to send to termination_func
-                before the exit status.
-        @param output_func: Function to call whenever a line of output is
-                available from the STDOUT or STDERR streams of the process.
-                The function must accept a single string parameter.  The string
-                does not include the final newline.
-        @param output_params: Parameters to send to output_func before the
-                output line.
-        @param output_prefix: String to prepend to lines sent to output_func.
-        @param prompt: Regular expression describing the shell's prompt line.
-        @param status_test_command: Command to be used for getting the last
-                exit status of commands run inside the shell (used by
-                cmd_status_output() and friends).
-        """
-        # Init the superclass
-        Expect.__init__(self, command, id, auto_close, echo, linesep,
-                        termination_func, termination_params,
-                        output_func, output_params, output_prefix)
-
-        # Remember some attributes
-        self.prompt = prompt
-        self.status_test_command = status_test_command
-
-
-    def __getinitargs__(self):
-        return Expect.__getinitargs__(self) + (self.prompt,
-                                               self.status_test_command)
-
-
-    def set_prompt(self, prompt):
-        """
-        Set the prompt attribute for later use by read_up_to_prompt.
-
-        @param prompt: String that describes the prompt contents.
-        """
-        self.prompt = prompt
-
-
-    def set_status_test_command(self, status_test_command):
-        """
-        Set the command to be sent in order to get the last exit status.
-
-        @param status_test_command: Command that will be sent to get the last
-                exit status.
-        """
-        self.status_test_command = status_test_command
-
-
-    def is_responsive(self, timeout=5.0):
-        """
-        Return True if the process responds to STDIN/terminal input.
-
-        Send a newline to the child process (e.g. SSH or Telnet) and read some
-        output using read_nonblocking().
-        If all is OK, some output should be available (e.g. the shell prompt).
-        In that case return True.  Otherwise return False.
-
-        @param timeout: Time duration to wait before the process is considered
-                unresponsive.
-        """
-        # Read all output that's waiting to be read, to make sure the output
-        # we read next is in response to the newline sent
-        self.read_nonblocking(timeout=0)
-        # Send a newline
-        self.sendline()
-        # Wait up to timeout seconds for some output from the child
-        end_time = time.time() + timeout
-        while time.time() < end_time:
-            time.sleep(0.5)
-            if self.read_nonblocking(timeout=0).strip():
-                return True
-        # No output -- report unresponsive
-        return False
-
-
-    def read_up_to_prompt(self, timeout=60, internal_timeout=None,
-                          print_func=None):
-        """
-        Read using read_nonblocking until the last non-empty line of the output
-        matches the prompt regular expression set by set_prompt, or until
-        timeout expires.
-
-        @brief: Read using read_nonblocking until the last non-empty line
-                matches the prompt.
-
-        @param timeout: The duration (in seconds) to wait until a match is
-                found
-        @param internal_timeout: The timeout to pass to read_nonblocking
-        @param print_func: A function to be used to print the data being
-                read (should take a string parameter)
-
-        @return: The data read so far
-        @raise ExpectTimeoutError: Raised if timeout expires
-        @raise ExpectProcessTerminatedError: Raised if the shell process
-                terminates while waiting for output
-        @raise ExpectError: Raised if an unknown error occurs
-        """
-        m, o = self.read_until_last_line_matches([self.prompt], timeout,
-                                                 internal_timeout, print_func)
-        return o
-
-
-    def cmd_output(self, cmd, timeout=60, internal_timeout=None,
-                   print_func=None):
-        """
-        Send a command and return its output.
-
-        @param cmd: Command to send (must not contain newline characters)
-        @param timeout: The duration (in seconds) to wait for the prompt to
-                return
-        @param internal_timeout: The timeout to pass to read_nonblocking
-        @param print_func: A function to be used to print the data being read
-                (should take a string parameter)
-
-        @return: The output of cmd
-        @raise ShellTimeoutError: Raised if timeout expires
-        @raise ShellProcessTerminatedError: Raised if the shell process
-                terminates while waiting for output
-        @raise ShellError: Raised if an unknown error occurs
-        """
-        def remove_command_echo(str, cmd):
-            if str and str.splitlines()[0] == cmd:
-                str = "".join(str.splitlines(True)[1:])
-            return str
-
-        def remove_last_nonempty_line(str):
-            return "".join(str.rstrip().splitlines(True)[:-1])
-
-        logging.debug("Sending command: %s" % cmd)
-        self.read_nonblocking(timeout=0)
-        self.sendline(cmd)
-        try:
-            o = self.read_up_to_prompt(timeout, internal_timeout, print_func)
-        except ExpectError, e:
-            o = remove_command_echo(e.output, cmd)
-            if isinstance(e, ExpectTimeoutError):
-                raise ShellTimeoutError(cmd, o)
-            elif isinstance(e, ExpectProcessTerminatedError):
-                raise ShellProcessTerminatedError(cmd, e.status, o)
-            else:
-                raise ShellError(cmd, o)
-
-        # Remove the echoed command and the final shell prompt
-        return remove_last_nonempty_line(remove_command_echo(o, cmd))
-
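[Editor's note: the two helpers in `cmd_output()` above can be exercised in isolation; the shell transcript in this sketch is made up for illustration:]

```python
def remove_command_echo(output, cmd):
    """Drop the first line if it is the echoed command itself."""
    if output and output.splitlines()[0] == cmd:
        output = "".join(output.splitlines(True)[1:])
    return output

def remove_last_nonempty_line(output):
    """Drop the trailing (prompt) line."""
    return "".join(output.rstrip().splitlines(True)[:-1])

# A fabricated session transcript: echoed command, output, prompt.
raw = "ls /tmp\nfile1\nfile2\n[root@host ~]# "
cleaned = remove_last_nonempty_line(remove_command_echo(raw, "ls /tmp"))
```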
-
-    def cmd_status_output(self, cmd, timeout=60, internal_timeout=None,
-                          print_func=None):
-        """
-        Send a command and return its exit status and output.
-
-        @param cmd: Command to send (must not contain newline characters)
-        @param timeout: The duration (in seconds) to wait for the prompt to
-                return
-        @param internal_timeout: The timeout to pass to read_nonblocking
-        @param print_func: A function to be used to print the data being read
-                (should take a string parameter)
-
-        @return: A tuple (status, output) where status is the exit status and
-                output is the output of cmd
-        @raise ShellTimeoutError: Raised if timeout expires
-        @raise ShellProcessTerminatedError: Raised if the shell process
-                terminates while waiting for output
-        @raise ShellStatusError: Raised if the exit status cannot be obtained
-        @raise ShellError: Raised if an unknown error occurs
-        """
-        o = self.cmd_output(cmd, timeout, internal_timeout, print_func)
-        try:
-            # Send the 'echo $?' (or equivalent) command to get the exit status
-            s = self.cmd_output(self.status_test_command, 10, internal_timeout)
-        except ShellError:
-            raise ShellStatusError(cmd, o)
-
-        # Get the first line consisting of digits only
-        digit_lines = [l for l in s.splitlines() if l.strip().isdigit()]
-        if digit_lines:
-            return int(digit_lines[0].strip()), o
-        else:
-            raise ShellStatusError(cmd, o)
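The exit-status parsing above tolerates noise around the number printed by status_test_command (typically "echo $?"); a minimal standalone version of that filtering:

```python
def parse_exit_status(status_output):
    # Return the first digits-only line as an int, or None if no such
    # line exists (mirroring the filtering in cmd_status_output()).
    digit_lines = [l for l in status_output.splitlines()
                   if l.strip().isdigit()]
    if digit_lines:
        return int(digit_lines[0].strip())
    return None

assert parse_exit_status("echo $?\r\n0\n") == 0
assert parse_exit_status("spurious banner\n127\n") == 127
assert parse_exit_status("no digits anywhere\n") is None
```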
-
-
-    def cmd_status(self, cmd, timeout=60, internal_timeout=None,
-                   print_func=None):
-        """
-        Send a command and return its exit status.
-
-        @param cmd: Command to send (must not contain newline characters)
-        @param timeout: The duration (in seconds) to wait for the prompt to
-                return
-        @param internal_timeout: The timeout to pass to read_nonblocking
-        @param print_func: A function to be used to print the data being read
-                (should take a string parameter)
-
-        @return: The exit status of cmd
-        @raise ShellTimeoutError: Raised if timeout expires
-        @raise ShellProcessTerminatedError: Raised if the shell process
-                terminates while waiting for output
-        @raise ShellStatusError: Raised if the exit status cannot be obtained
-        @raise ShellError: Raised if an unknown error occurs
-        """
-        s, o = self.cmd_status_output(cmd, timeout, internal_timeout,
-                                      print_func)
-        return s
-
-
-    def cmd(self, cmd, timeout=60, internal_timeout=None, print_func=None):
-        """
-        Send a command and return its output. If the command's exit status is
-        nonzero, raise an exception.
-
-        @param cmd: Command to send (must not contain newline characters)
-        @param timeout: The duration (in seconds) to wait for the prompt to
-                return
-        @param internal_timeout: The timeout to pass to read_nonblocking
-        @param print_func: A function to be used to print the data being read
-                (should take a string parameter)
-
-        @return: The output of cmd
-        @raise ShellTimeoutError: Raised if timeout expires
-        @raise ShellProcessTerminatedError: Raised if the shell process
-                terminates while waiting for output
-        @raise ShellStatusError: Raised if the exit status cannot be obtained
-        @raise ShellError: Raised if an unknown error occurs
-        @raise ShellCmdError: Raised if the exit status is nonzero
-        """
-        s, o = self.cmd_status_output(cmd, timeout, internal_timeout,
-                                      print_func)
-        if s != 0:
-            raise ShellCmdError(cmd, s, o)
-        return o
-
-
-    def get_command_output(self, cmd, timeout=60, internal_timeout=None,
-                           print_func=None):
-        """
-        Alias for cmd_output() for backward compatibility.
-        """
-        return self.cmd_output(cmd, timeout, internal_timeout, print_func)
-
-
-    def get_command_status_output(self, cmd, timeout=60, internal_timeout=None,
-                                  print_func=None):
-        """
-        Alias for cmd_status_output() for backward compatibility.
-        """
-        return self.cmd_status_output(cmd, timeout, internal_timeout,
-                                      print_func)
-
-
-    def get_command_status(self, cmd, timeout=60, internal_timeout=None,
-                           print_func=None):
-        """
-        Alias for cmd_status() for backward compatibility.
-        """
-        return self.cmd_status(cmd, timeout, internal_timeout, print_func)
diff --git a/client/tests/kvm/kvm_test_utils.py b/client/tests/kvm/kvm_test_utils.py
deleted file mode 100644
index b5c4a24..0000000
--- a/client/tests/kvm/kvm_test_utils.py
+++ /dev/null
@@ -1,753 +0,0 @@
-"""
-High-level KVM test utility functions.
-
-This module is meant to reduce code size by performing common test procedures.
-Generally, code here should look like test code.
-More specifically:
-    - Functions in this module should raise exceptions if things go wrong
-      (unlike functions in kvm_utils.py and kvm_vm.py which report failure via
-      their returned values).
-    - Functions in this module may use logging.info(), in addition to
-      logging.debug() and logging.error(), to log messages the user may be
-      interested in (unlike kvm_utils.py and kvm_vm.py which use
-      logging.debug() for anything that isn't an error).
-    - Functions in this module typically use functions and classes from
-      lower-level modules (e.g. kvm_utils.py, kvm_vm.py, kvm_subprocess.py).
-    - Functions in this module should not be used by lower-level modules.
-    - Functions in this module should be used in the right context.
-      For example, a function should not be used where it may display
-      misleading or inaccurate info or debug messages.
-
-@copyright: 2008-2009 Red Hat Inc.
-"""
-
-import time, os, logging, re, signal
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-import kvm_utils, kvm_vm, kvm_subprocess, scan_results
-
-
-def get_living_vm(env, vm_name):
-    """
-    Get a VM object from the environment and make sure it's alive.
-
-    @param env: Dictionary with test environment.
-    @param vm_name: Name of the desired VM object.
-    @return: A VM object.
-    """
-    vm = env.get_vm(vm_name)
-    if not vm:
-        raise error.TestError("VM '%s' not found in environment" % vm_name)
-    if not vm.is_alive():
-        raise error.TestError("VM '%s' seems to be dead; test requires a "
-                              "living VM" % vm_name)
-    return vm
-
-
-def wait_for_login(vm, nic_index=0, timeout=240, start=0, step=2, serial=None):
-    """
-    Try logging into a VM repeatedly.  Stop on success or when timeout expires.
-
-    @param vm: VM object.
-    @param nic_index: Index of NIC to access in the VM.
-    @param timeout: Time to wait before giving up.
-    @param start: Time to wait before the first connection attempt.
-    @param step: Time to wait between connection attempts.
-    @param serial: Whether to use a serial connection instead of a remote
-            (ssh, rss) one.
-    @return: A shell session object.
-    """
-    end_time = time.time() + timeout
-    session = None
-    if serial:
-        conn_type = 'serial'
-        logging.info("Trying to log into guest %s using serial connection,"
-                     " timeout %ds", vm.name, timeout)
-        time.sleep(start)
-        while time.time() < end_time:
-            try:
-                session = vm.serial_login()
-                break
-            except kvm_utils.LoginError, e:
-                logging.debug(e)
-            time.sleep(step)
-    else:
-        conn_type = 'remote'
-        logging.info("Trying to log into guest %s using remote connection,"
-                     " timeout %ds", vm.name, timeout)
-        time.sleep(start)
-        while time.time() < end_time:
-            try:
-                session = vm.login(nic_index=nic_index)
-                break
-            except (kvm_utils.LoginError, kvm_vm.VMError), e:
-                logging.debug(e)
-            time.sleep(step)
-    if not session:
-        raise error.TestFail("Could not log into guest %s using %s connection" %
-                             (vm.name, conn_type))
-    logging.info("Logged into guest %s using %s connection", vm.name, conn_type)
-    return session
-
-
-def reboot(vm, session, method="shell", sleep_before_reset=10, nic_index=0,
-           timeout=240):
-    """
-    Reboot the VM and wait for it to come back up by trying to log in until
-    timeout expires.
-
-    @param vm: VM object.
-    @param session: A shell session object.
-    @param method: Reboot method.  Can be "shell" (send a shell reboot
-            command) or "system_reset" (send a system_reset monitor command).
-    @param nic_index: Index of NIC to access in the VM, when logging in after
-            rebooting.
-    @param timeout: Time to wait before giving up (after rebooting).
-    @return: A new shell session object.
-    """
-    if method == "shell":
-        # Send a reboot command to the guest's shell
-        session.sendline(vm.get_params().get("reboot_command"))
-        logging.info("Reboot command sent. Waiting for guest to go down...")
-    elif method == "system_reset":
-        # Sleep for a while before sending the command
-        time.sleep(sleep_before_reset)
-        # Clear the event list of all QMP monitors
-        monitors = [m for m in vm.monitors if m.protocol == "qmp"]
-        for m in monitors:
-            m.clear_events()
-        # Send a system_reset monitor command
-        vm.monitor.cmd("system_reset")
-        logging.info("Monitor command system_reset sent. Waiting for guest to "
-                     "go down...")
-        # Look for RESET QMP events
-        time.sleep(1)
-        for m in monitors:
-            if not m.get_event("RESET"):
-                raise error.TestFail("RESET QMP event not received after "
-                                     "system_reset (monitor '%s')" % m.name)
-            else:
-                logging.info("RESET QMP event received")
-    else:
-        logging.error("Unknown reboot method: %s", method)
-
-    # Wait for the session to become unresponsive and close it
-    if not kvm_utils.wait_for(lambda: not session.is_responsive(timeout=30),
-                              120, 0, 1):
-        raise error.TestFail("Guest refuses to go down")
-    session.close()
-
-    # Try logging into the guest until timeout expires
-    logging.info("Guest is down. Waiting for it to go up again, timeout %ds",
-                 timeout)
-    session = vm.wait_for_login(nic_index, timeout=timeout)
-    logging.info("Guest is up again")
-    return session
-
-
-def migrate(vm, env=None, mig_timeout=3600, mig_protocol="tcp",
-            mig_cancel=False, offline=False, stable_check=False,
-            clean=False, save_path=None, dest_host='localhost', mig_port=None):
-    """
-    Migrate a VM locally and re-register it in the environment.
-
-    @param vm: The VM to migrate.
-    @param env: The environment dictionary.  If omitted, the migrated VM will
-            not be registered.
-    @param mig_timeout: timeout value for migration.
-    @param mig_protocol: migration protocol
-    @param mig_cancel: Test migrate_cancel or not when protocol is tcp.
-    @param dest_host: Destination host (defaults to 'localhost').
-    @param mig_port: Port that will be used for migration.
-    @return: The post-migration VM, in case of same host migration, True in
-            case of multi-host migration.
-    """
-    def mig_finished():
-        o = vm.monitor.info("migrate")
-        if isinstance(o, str):
-            return "status: active" not in o
-        else:
-            return o.get("status") != "active"
-
-    def mig_succeeded():
-        o = vm.monitor.info("migrate")
-        if isinstance(o, str):
-            return "status: completed" in o
-        else:
-            return o.get("status") == "completed"
-
-    def mig_failed():
-        o = vm.monitor.info("migrate")
-        if isinstance(o, str):
-            return "status: failed" in o
-        else:
-            return o.get("status") == "failed"
-
-    def mig_cancelled():
-        o = vm.monitor.info("migrate")
-        if isinstance(o, str):
-            return ("Migration status: cancelled" in o or
-                    "Migration status: canceled" in o)
-        else:
-            return (o.get("status") == "cancelled" or
-                    o.get("status") == "canceled")
-
-    def wait_for_migration():
-        if not kvm_utils.wait_for(mig_finished, mig_timeout, 2, 2,
-                                  "Waiting for migration to finish..."):
-            raise error.TestFail("Timeout expired while waiting for migration "
-                                 "to finish")
-
-    if dest_host == 'localhost':
-        dest_vm = vm.clone()
-
-    if (dest_host == 'localhost') and stable_check:
-        # Pause the dest vm after creation
-        dest_vm.params['extra_params'] = (dest_vm.params.get('extra_params','')
-                                          + ' -S')
-
-    if dest_host == 'localhost':
-        dest_vm.create(migration_mode=mig_protocol, mac_source=vm)
-
-    try:
-        try:
-            if mig_protocol == "tcp":
-                if dest_host == 'localhost':
-                    uri = "tcp:localhost:%d" % dest_vm.migration_port
-                else:
-                    uri = 'tcp:%s:%d' % (dest_host, mig_port)
-            elif mig_protocol == "unix":
-                uri = "unix:%s" % dest_vm.migration_file
-            elif mig_protocol == "exec":
-                uri = '"exec:nc localhost %s"' % dest_vm.migration_port
-
-            if offline:
-                vm.monitor.cmd("stop")
-            vm.monitor.migrate(uri)
-
-            if mig_cancel:
-                time.sleep(2)
-                vm.monitor.cmd("migrate_cancel")
-                if not kvm_utils.wait_for(mig_cancelled, 60, 2, 2,
-                                          "Waiting for migration "
-                                          "cancellation"):
-                    raise error.TestFail("Failed to cancel migration")
-                if offline:
-                    vm.monitor.cmd("cont")
-                if dest_host == 'localhost':
-                    dest_vm.destroy(gracefully=False)
-                return vm
-            else:
-                wait_for_migration()
-                if (dest_host == 'localhost') and stable_check:
-                    save_path = save_path or "/tmp"
-                    save1 = os.path.join(save_path, "src")
-                    save2 = os.path.join(save_path, "dst")
-
-                    vm.save_to_file(save1)
-                    dest_vm.save_to_file(save2)
-
-                    # Fail if we see deltas
-                    md5_save1 = utils.hash_file(save1)
-                    md5_save2 = utils.hash_file(save2)
-                    if md5_save1 != md5_save2:
-                        raise error.TestFail("Mismatch of VM state before "
-                                             "and after migration")
-
-                if (dest_host == 'localhost') and offline:
-                    dest_vm.monitor.cmd("cont")
-        except:
-            if dest_host == 'localhost':
-                dest_vm.destroy()
-            raise
-
-    finally:
-        if (dest_host == 'localhost') and stable_check and clean:
-            logging.debug("Cleaning the state files")
-            if os.path.isfile(save1):
-                os.remove(save1)
-            if os.path.isfile(save2):
-                os.remove(save2)
-
-    # Report migration status
-    if mig_succeeded():
-        logging.info("Migration finished successfully")
-    elif mig_failed():
-        raise error.TestFail("Migration failed")
-    else:
-        raise error.TestFail("Migration ended with unknown status")
-
-    if dest_host == 'localhost':
-        if "paused" in dest_vm.monitor.info("status"):
-            logging.debug("Destination VM is paused, resuming it...")
-            dest_vm.monitor.cmd("cont")
-
-    # Kill the source VM
-    vm.destroy(gracefully=False)
-
-    # Replace the source VM with the new cloned VM
-    if (dest_host == 'localhost') and (env is not None):
-        env.register_vm(vm.name, dest_vm)
-
-    # Return the new cloned VM
-    if dest_host == 'localhost':
-        return dest_vm
-    else:
-        return vm
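The mig_* predicates inside migrate() all share one shape: "info migrate" returns a human-readable string from the human monitor and a dict from QMP, and both forms must be handled. A condensed sketch of that dual-format check:

```python
def mig_status(info):
    # Normalize 'info migrate' output to a status string: the human
    # monitor returns text, QMP returns a dict.
    if isinstance(info, str):
        for status in ("completed", "failed", "cancelled", "active"):
            if "status: %s" % status in info:
                return status
        return None
    return info.get("status")

assert mig_status("Migration status: completed") == "completed"
assert mig_status("Migration status: active") == "active"
assert mig_status({"status": "failed"}) == "failed"
```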
-
-
-def stop_windows_service(session, service, timeout=120):
-    """
-    Stop a Windows service using sc.
-    If the service is already stopped or is not installed, do nothing.
-
-    @param service: The name of the service
-    @param timeout: Time duration to wait for service to stop
-    @raise error.TestError: Raised if the service can't be stopped
-    """
-    end_time = time.time() + timeout
-    while time.time() < end_time:
-        o = session.cmd_output("sc stop %s" % service, timeout=60)
-        # FAILED 1060 means the service isn't installed.
-        # FAILED 1062 means the service hasn't been started.
-        if re.search(r"\bFAILED (1060|1062)\b", o, re.I):
-            break
-        time.sleep(1)
-    else:
-        raise error.TestError("Could not stop service '%s'" % service)
-
-
-def start_windows_service(session, service, timeout=120):
-    """
-    Start a Windows service using sc.
-    If the service is already running, do nothing.
-    If the service isn't installed, fail.
-
-    @param service: The name of the service
-    @param timeout: Time duration to wait for service to start
-    @raise error.TestError: Raised if the service can't be started
-    """
-    end_time = time.time() + timeout
-    while time.time() < end_time:
-        o = session.cmd_output("sc start %s" % service, timeout=60)
-        # FAILED 1060 means the service isn't installed.
-        if re.search(r"\bFAILED 1060\b", o, re.I):
-            raise error.TestError("Could not start service '%s' "
-                                  "(service not installed)" % service)
-        # FAILED 1056 means the service is already running.
-        if re.search(r"\bFAILED 1056\b", o, re.I):
-            break
-        time.sleep(1)
-    else:
-        raise error.TestError("Could not start service '%s'" % service)
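Both service helpers key off the numeric error codes in sc's output; a standalone parser for that pattern (the sample output lines below are illustrative):

```python
import re

def sc_failure_code(output):
    # Return the numeric code after 'FAILED' in sc output, or None.
    # 1056: already running; 1060: not installed; 1062: not started.
    m = re.search(r"\bFAILED (\d+)\b", output, re.I)
    return int(m.group(1)) if m else None

assert sc_failure_code("[SC] StartService FAILED 1056:\n"
                       "An instance of the service is already running.") == 1056
assert sc_failure_code("[SC] ControlService FAILED 1062:") == 1062
assert sc_failure_code("SERVICE_NAME: spooler\n    STATE : 4 RUNNING") is None
```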
-
-
-def get_time(session, time_command, time_filter_re, time_format):
-    """
-    Return the host time and guest time.  If the guest time cannot be fetched,
-    a TestError exception is raised.
-
-    Note that the shell session should be ready to receive commands
-    (i.e. should "display" a command prompt and should be done with all
-    previous commands).
-
-    @param session: A shell session.
-    @param time_command: Command to issue to get the current guest time.
-    @param time_filter_re: Regex filter to apply on the output of
-            time_command in order to get the current time.
-    @param time_format: Format string to pass to time.strptime() with the
-            result of the regex filter.
-    @return: A tuple containing the host time and guest time.
-    """
-    if len(re.findall("ntpdate|w32tm", time_command)) == 0:
-        host_time = time.time()
-        s = session.cmd_output(time_command)
-
-        try:
-            s = re.findall(time_filter_re, s)[0]
-        except IndexError:
-            logging.debug("The time string from guest is:\n%s", s)
-            raise error.TestError("The time string from guest is unexpected.")
-        except Exception, e:
-            logging.debug("(time_filter_re, time_string): (%s, %s)",
-                          time_filter_re, s)
-            raise e
-
-        guest_time = time.mktime(time.strptime(s, time_format))
-    else:
-        o = session.cmd(time_command)
-        if re.match('ntpdate', time_command):
-            offset = re.findall('offset (.*) sec', o)[0]
-            host_main, host_mantissa = re.findall(time_filter_re, o)[0]
-            host_time = (time.mktime(time.strptime(host_main, time_format)) +
-                         float("0.%s" % host_mantissa))
-            guest_time = host_time + float(offset)
-        else:
-            guest_time =  re.findall(time_filter_re, o)[0]
-            offset = re.findall("o:(.*)s", o)[0]
-            if re.match('PM', guest_time):
-                hour = re.findall('\d+ (\d+):', guest_time)[0]
-                hour = str(int(hour) + 12)
-                guest_time = re.sub('\d+\s\d+:', "\d+\s%s:" % hour,
-                                    guest_time)[:-3]
-            else:
-                guest_time = guest_time[:-3]
-            guest_time = time.mktime(time.strptime(guest_time, time_format))
-            host_time = guest_time - float(offset)
-
-    return (host_time, guest_time)
-
-
-def get_memory_info(lvms):
-    """
-    Get memory information from host and guests in format:
-    Host: memfree = XXXM; Guests memsh = {XXX,XXX,...}
-
-    @params lvms: List of VM objects
-    @return: String with memory info report
-    """
-    if not isinstance(lvms, list):
-        raise error.TestError("Invalid list passed to get_memory_info: %s" %
-                              lvms)
-
-    try:
-        meminfo = "Host: memfree = "
-        meminfo += str(int(utils.freememtotal()) / 1024) + "M; "
-        meminfo += "swapfree = "
-        mf = int(utils.read_from_meminfo("SwapFree")) / 1024
-        meminfo += str(mf) + "M; "
-    except Exception, e:
-        raise error.TestFail("Could not fetch host free memory info, "
-                             "reason: %s" % e)
-
-    meminfo += "Guests memsh = {"
-    for vm in lvms:
-        shm = vm.get_shared_meminfo()
-        if shm is None:
-            raise error.TestError("Could not get shared meminfo from "
-                                  "VM %s" % vm)
-        meminfo += "%dM; " % shm
-    meminfo = meminfo[0:-2] + "}"
-
-    return meminfo
-
-
-def run_autotest(vm, session, control_path, timeout, outputdir, params):
-    """
-    Run an autotest control file inside a guest (linux only utility).
-
-    @param vm: VM object.
-    @param session: A shell session on the VM provided.
-    @param control_path: A path to an autotest control file.
-    @param timeout: Timeout under which the autotest control file must complete.
-    @param outputdir: Path on host where we should copy the guest autotest
-            results to.
-
-    The following parameter is used by the migration test:
-    @param params: Test params used in the migration test.
-    """
-    def copy_if_hash_differs(vm, local_path, remote_path):
-        """
-        Copy a file to a guest if it doesn't exist or if its MD5sum differs.
-
-        @param vm: VM object.
-        @param local_path: Local path.
-        @param remote_path: Remote path.
-        """
-        local_hash = utils.hash_file(local_path)
-        basename = os.path.basename(local_path)
-        output = session.cmd_output("md5sum %s" % remote_path)
-        if "such file" in output:
-            remote_hash = "0"
-        elif output:
-            remote_hash = output.split()[0]
-        else:
-            logging.warning("MD5 check for remote path %s returned no output.",
-                            remote_path)
-            # Let's be a little more lenient here and see if it wasn't a
-            # temporary problem
-            remote_hash = "0"
-        if remote_hash != local_hash:
-            logging.debug("Copying %s to guest", basename)
-            vm.copy_files_to(local_path, remote_path)
-
-
-    def extract(vm, remote_path, dest_dir="."):
-        """
-        Extract a .tar.bz2 file on the guest.
-
-        @param vm: VM object
-        @param remote_path: Remote file path
-        @param dest_dir: Destination dir for the contents
-        """
-        basename = os.path.basename(remote_path)
-        logging.info("Extracting %s...", basename)
-        e_cmd = "tar xjvf %s -C %s" % (remote_path, dest_dir)
-        session.cmd(e_cmd, timeout=120)
-
-
-    def get_results():
-        """
-        Copy autotest results present on the guest back to the host.
-        """
-        logging.info("Trying to copy autotest results from guest")
-        guest_results_dir = os.path.join(outputdir, "guest_autotest_results")
-        if not os.path.exists(guest_results_dir):
-            os.mkdir(guest_results_dir)
-        vm.copy_files_from("%s/results/default/*" % autotest_path,
-                           guest_results_dir)
-
-
-    def get_results_summary():
-        """
-        Get the status of the tests that were executed on the host and close
-        the session where autotest was being executed.
-        """
-        output = session.cmd_output("cat results/*/status")
-        try:
-            results = scan_results.parse_results(output)
-            # Report test results
-            logging.info("Results (test, status, duration, info):")
-            for result in results:
-                logging.info(str(result))
-            session.close()
-            return results
-        except Exception, e:
-            logging.error("Error processing guest autotest results: %s", e)
-            return None
-
-
-    if not os.path.isfile(control_path):
-        raise error.TestError("Invalid path to autotest control file: %s" %
-                              control_path)
-
-    migrate_background = params.get("migrate_background") == "yes"
-    if migrate_background:
-        mig_timeout = float(params.get("mig_timeout", "3600"))
-        mig_protocol = params.get("migration_protocol", "tcp")
-
-    compressed_autotest_path = "/tmp/autotest.tar.bz2"
-
-    # To avoid problems, let's make the test use the current AUTODIR
-    # (autotest client path) location
-    autotest_path = os.environ['AUTODIR']
-
-    # tar the contents of bindir/autotest
-    cmd = "tar cvjf %s %s/*" % (compressed_autotest_path, autotest_path)
-    # Until we have nested virtualization, we don't need the kvm test :)
-    cmd += " --exclude=%s/tests/kvm" % autotest_path
-    cmd += " --exclude=%s/results" % autotest_path
-    cmd += " --exclude=%s/tmp" % autotest_path
-    cmd += " --exclude=%s/control*" % autotest_path
-    cmd += " --exclude=*.pyc"
-    cmd += " --exclude=*.svn"
-    cmd += " --exclude=*.git"
-    utils.run(cmd)
-
-    # Copy autotest.tar.bz2
-    copy_if_hash_differs(vm, compressed_autotest_path, compressed_autotest_path)
-
-    # Extract autotest.tar.bz2
-    extract(vm, compressed_autotest_path, "/")
-
-    vm.copy_files_to(control_path, os.path.join(autotest_path, 'control'))
-
-    # Run the test
-    logging.info("Running autotest control file %s on guest, timeout %ss",
-                 os.path.basename(control_path), timeout)
-    session.cmd("cd %s" % autotest_path)
-    try:
-        session.cmd("rm -f control.state")
-        session.cmd("rm -rf results/*")
-    except kvm_subprocess.ShellError:
-        pass
-    try:
-        bg = None
-        try:
-            logging.info("---------------- Test output ----------------")
-            if migrate_background:
-                mig_timeout = float(params.get("mig_timeout", "3600"))
-                mig_protocol = params.get("migration_protocol", "tcp")
-
-                bg = kvm_utils.Thread(session.cmd_output,
-                                      kwargs={'cmd': "bin/autotest control",
-                                              'timeout': timeout,
-                                              'print_func': logging.info})
-
-                bg.start()
-
-                while bg.is_alive():
-                    logging.info("Test has not ended, starting a round of "
-                                 "migration...")
-                    vm.migrate(timeout=mig_timeout, protocol=mig_protocol)
-            else:
-                session.cmd_output("bin/autotest control", timeout=timeout,
-                                   print_func=logging.info)
-        finally:
-            logging.info("------------- End of test output ------------")
-            if migrate_background and bg:
-                bg.join()
-    except kvm_subprocess.ShellTimeoutError:
-        if vm.is_alive():
-            get_results()
-            get_results_summary()
-            raise error.TestError("Timeout elapsed while waiting for job to "
-                                  "complete")
-        else:
-            raise error.TestError("Autotest job on guest failed "
-                                  "(VM terminated during job)")
-    except kvm_subprocess.ShellProcessTerminatedError:
-        get_results()
-        raise error.TestError("Autotest job on guest failed "
-                              "(Remote session terminated during job)")
-
-    results = get_results_summary()
-    get_results()
-
-    # Make a list of FAIL/ERROR/ABORT results (make sure FAIL results appear
-    # before ERROR results, and ERROR results appear before ABORT results)
-    bad_results = [r[0] for r in results if r[1] == "FAIL"]
-    bad_results += [r[0] for r in results if r[1] == "ERROR"]
-    bad_results += [r[0] for r in results if r[1] == "ABORT"]
-
-    # Fail the test if necessary
-    if not results:
-        raise error.TestFail("Autotest control file run did not produce any "
-                             "recognizable results")
-    if bad_results:
-        if len(bad_results) == 1:
-            e_msg = ("Test %s failed during control file execution" %
-                     bad_results[0])
-        else:
-            e_msg = ("Tests %s failed during control file execution" %
-                     " ".join(bad_results))
-        raise error.TestFail(e_msg)
-
-
-def get_loss_ratio(output):
-    """
-    Get the packet loss ratio from the output of ping.
-
-    @param output: Ping output.
-    """
-    try:
-        return int(re.findall('(\d+)% packet loss', output)[0])
-    except IndexError:
-        logging.debug(output)
-        return -1
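For reference, the regex above applied to a typical ping summary line (sample output, not captured from a real run):

```python
import re

def get_loss_ratio(output):
    # Same parsing as above: first "<N>% packet loss" match, else -1.
    try:
        return int(re.findall(r'(\d+)% packet loss', output)[0])
    except IndexError:
        return -1

summary = "4 packets transmitted, 3 received, 25% packet loss, time 3004ms"
assert get_loss_ratio(summary) == 25
assert get_loss_ratio("ping: unknown host example.invalid") == -1
```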
-
-
-def raw_ping(command, timeout, session, output_func):
-    """
-    Low-level ping command execution.
-
-    @param command: Ping command.
-    @param timeout: Timeout of the ping command.
-    @param session: Local execution hint or session to execute the ping command.
-    """
-    if session is None:
-        process = kvm_subprocess.run_bg(command, output_func=output_func,
-                                        timeout=timeout)
-
-        # Send SIGINT to terminate the ping process on timeout. Since ping
-        # catches SIGINT, we can still get the packet loss ratio even when
-        # the timeout expires.
-        if process.is_alive():
-            kvm_utils.kill_process_tree(process.get_pid(), signal.SIGINT)
-
-        status = process.get_status()
-        output = process.get_output()
-
-        process.close()
-        return status, output
-    else:
-        output = ""
-        try:
-            output = session.cmd_output(command, timeout=timeout,
-                                        print_func=output_func)
-        except kvm_subprocess.ShellTimeoutError:
-            # Send ctrl+c (SIGINT) through ssh session
-            session.send("\003")
-            try:
-                output2 = session.read_up_to_prompt(print_func=output_func)
-                output += output2
-            except kvm_subprocess.ExpectTimeoutError, e:
-                output += e.output
-                # We also need to use this session to query the return value
-                session.send("\003")
-
-        session.sendline(session.status_test_command)
-        try:
-            o2 = session.read_up_to_prompt()
-        except kvm_subprocess.ExpectError:
-            status = -1
-        else:
-            try:
-                status = int(re.findall("\d+", o2)[0])
-            except (IndexError, ValueError):
-                status = -1
-
-        return status, output
-
-
-def ping(dest=None, count=None, interval=None, interface=None,
-         packetsize=None, ttl=None, hint=None, adaptive=False,
-         broadcast=False, flood=False, timeout=0,
-         output_func=logging.debug, session=None):
-    """
-    Wrapper of ping.
-
-    @param dest: Destination address.
-    @param count: Count of icmp packet.
-    @param interval: Interval of two icmp echo request.
-    @param interface: Specified interface of the source address.
-    @param packetsize: Packet size of icmp.
-    @param ttl: IP time to live.
-    @param hint: Path mtu discovery hint.
-    @param adaptive: Adaptive ping flag.
-    @param broadcast: Broadcast ping flag.
-    @param flood: Flood ping flag.
-    @param timeout: Timeout for the ping command.
-    @param output_func: Function used to log the result of ping.
-    @param session: A ShellSession object to run the command in, or None to
-            run it locally.
-    """
-    if dest is not None:
-        command = "ping %s " % dest
-    else:
-        command = "ping localhost "
-    if count is not None:
-        command += " -c %s" % count
-    if interval is not None:
-        command += " -i %s" % interval
-    if interface is not None:
-        command += " -I %s" % interface
-    if packetsize is not None:
-        command += " -s %s" % packetsize
-    if ttl is not None:
-        command += " -t %s" % ttl
-    if hint is not None:
-        command += " -M %s" % hint
-    if adaptive:
-        command += " -A"
-    if broadcast:
-        command += " -b"
-    if flood:
-        command += " -f -q"
-        output_func = None
-
-    return raw_ping(command, timeout, session, output_func)
-
-
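The flag-by-flag construction in ping() above can be sketched standalone; `build_ping_command` is a hypothetical helper name, not part of the original module:

```python
def build_ping_command(dest=None, count=None, interval=None, flood=False):
    # Mirror the option handling used by ping(): start from the
    # destination (or localhost) and append one flag per parameter.
    command = "ping %s" % (dest if dest is not None else "localhost")
    if count is not None:
        command += " -c %s" % count
    if interval is not None:
        command += " -i %s" % interval
    if flood:
        # Flood ping is run quietly; its per-packet output is not logged.
        command += " -f -q"
    return command

print(build_ping_command("10.0.0.1", count=3))  # -> ping 10.0.0.1 -c 3
```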
-def get_linux_ifname(session, mac_address):
-    """
-    Get the interface name through the mac address.
-
-    @param session: session to the virtual machine
-    @param mac_address: The MAC address of the NIC.
-    """
-
-    output = session.cmd_output("ifconfig -a")
-
-    try:
-        ethname = re.findall("(\w+)\s+Link.*%s" % mac_address, output,
-                             re.IGNORECASE)[0]
-        return ethname
-    except IndexError:
-        return None
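The regex in get_linux_ifname() assumes the classic net-tools `ifconfig -a` layout, where the interface name and MAC share one line; iproute2-style `ip link` output would not match. A sketch against canned output:

```python
import re

mac_address = "9a:5d:94:6a:9b:f9"
# Canned 'ifconfig -a' output in the net-tools format the regex expects.
output = """eth0      Link encap:Ethernet  HWaddr 9a:5d:94:6a:9b:f9
          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
"""

# The single capture group makes findall() return interface names only.
matches = re.findall(r"(\w+)\s+Link.*%s" % mac_address, output, re.IGNORECASE)
print(matches[0])  # -> eth0
```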
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
deleted file mode 100644
index de52b65..0000000
--- a/client/tests/kvm/kvm_utils.py
+++ /dev/null
@@ -1,1728 +0,0 @@
-"""
-KVM test utility functions.
-
-@copyright: 2008-2009 Red Hat Inc.
-"""
-
-import time, string, random, socket, os, signal, re, logging, commands, cPickle
-import fcntl, shelve, ConfigParser, rss_file_transfer, threading, sys, UserDict
-from autotest_lib.client.bin import utils, os_dep
-from autotest_lib.client.common_lib import error, logging_config
-import kvm_subprocess
-try:
-    import koji
-    KOJI_INSTALLED = True
-except ImportError:
-    KOJI_INSTALLED = False
-
-
-def _lock_file(filename):
-    f = open(filename, "w")
-    fcntl.lockf(f, fcntl.LOCK_EX)
-    return f
-
-
-def _unlock_file(f):
-    fcntl.lockf(f, fcntl.LOCK_UN)
-    f.close()
-
-
-def is_vm(obj):
-    """
-    Tests whether a given object is a VM object.
-
-    @param obj: Python object.
-    """
-    return obj.__class__.__name__ == "VM"
-
-
-class Env(UserDict.IterableUserDict):
-    """
-    A dict-like object containing global objects used by tests.
-    """
-    def __init__(self, filename=None, version=0):
-        """
-        Create an empty Env object or load an existing one from a file.
-
-        If the version recorded in the file is lower than version, or if some
-        error occurs during unpickling, or if filename is not supplied,
-        create an empty Env object.
-
-        @param filename: Path to an env file.
-        @param version: Required env version (int).
-        """
-        UserDict.IterableUserDict.__init__(self)
-        empty = {"version": version}
-        if filename:
-            self._filename = filename
-            try:
-                f = open(filename, "r")
-                env = cPickle.load(f)
-                f.close()
-                if env.get("version", 0) >= version:
-                    self.data = env
-                else:
-                    logging.warn("Incompatible env file found. Not using it.")
-                    self.data = empty
-            # Almost any exception can be raised during unpickling, so let's
-            # catch them all
-            except Exception, e:
-                logging.warn(e)
-                self.data = empty
-        else:
-            self.data = empty
-
-
-    def save(self, filename=None):
-        """
-        Pickle the contents of the Env object into a file.
-
-        @param filename: Filename to pickle the dict into.  If not supplied,
-                use the filename from which the dict was loaded.
-        """
-        filename = filename or self._filename
-        f = open(filename, "w")
-        cPickle.dump(self.data, f)
-        f.close()
-
-
-    def get_all_vms(self):
-        """
-        Return a list of all VM objects in this Env object.
-        """
-        return [o for o in self.values() if is_vm(o)]
-
-
-    def get_vm(self, name):
-        """
-        Return a VM object by its name.
-
-        @param name: VM name.
-        """
-        return self.get("vm__%s" % name)
-
-
-    def register_vm(self, name, vm):
-        """
-        Register a VM in this Env object.
-
-        @param name: VM name.
-        @param vm: VM object.
-        """
-        self["vm__%s" % name] = vm
-
-
-    def unregister_vm(self, name):
-        """
-        Remove a given VM.
-
-        @param name: VM name.
-        """
-        del self["vm__%s" % name]
-
-
-    def register_installer(self, installer):
-        """
-        Register an installer that was just run.
-
-        The installer will be available for other tests, so that
-        information about the installed KVM modules and qemu-kvm can be used by
-        them.
-        """
-        self['last_installer'] = installer
-
-
-    def previous_installer(self):
-        """
-        Return the last installer that was registered
-        """
-        return self.get('last_installer')
-
-
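The Env persistence scheme above boils down to a versioned dict pickled to disk, discarded on load if its recorded version is too old. A minimal Python 3 sketch (the original uses cPickle and wraps this in a UserDict class; `save_env`/`load_env` are hypothetical names):

```python
import os
import pickle
import tempfile

def save_env(data, filename):
    # Pickle the whole dict, version key included.
    with open(filename, "wb") as f:
        pickle.dump(data, f)

def load_env(filename, version=0):
    # Almost any exception can be raised during unpickling, so catch
    # them all and fall back to an empty env, as the original does.
    try:
        with open(filename, "rb") as f:
            data = pickle.load(f)
        if data.get("version", 0) >= version:
            return data
    except Exception:
        pass
    return {"version": version}

path = os.path.join(tempfile.mkdtemp(), "env")
save_env({"version": 1, "vm__vm1": "fake-vm-object"}, path)
env = load_env(path, version=1)
print(env["vm__vm1"])                 # -> fake-vm-object
print(load_env(path, version=2))      # stale version -> {'version': 2}
```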
-class Params(UserDict.IterableUserDict):
-    """
-    A dict-like object passed to every test.
-    """
-    def objects(self, key):
-        """
-        Return the names of objects defined using a given key.
-
-        @param key: The name of the key whose value lists the objects
-                (e.g. 'nics').
-        """
-        return self.get(key, "").split()
-
-
-    def object_params(self, obj_name):
-        """
-        Return a dict-like object containing the parameters of an individual
-        object.
-
-        This method behaves as follows: the suffix '_' + obj_name is removed
-        from all key names that have it.  Other key names are left unchanged.
-        The values of keys with the suffix overwrite the values of their
-        suffixless versions.
-
-        @param obj_name: The name of the object (objects are listed by the
-                objects() method).
-        """
-        suffix = "_" + obj_name
-        new_dict = self.copy()
-        for key in self:
-            if key.endswith(suffix):
-                new_key = key.split(suffix)[0]
-                new_dict[new_key] = self[key]
-        return new_dict
-
-
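The suffix rule described in object_params() can be illustrated standalone; a plain dict stands in for the Params class here:

```python
def object_params(params, obj_name):
    # Keys ending in '_<obj_name>' override their suffixless versions;
    # all other keys are left unchanged.
    suffix = "_" + obj_name
    new_dict = dict(params)
    for key in params:
        if key.endswith(suffix):
            new_dict[key[:-len(suffix)]] = params[key]
    return new_dict

params = {
    "nics": "nic1 nic2",
    "nic_model": "e1000",        # default for all NICs
    "nic_model_nic2": "virtio",  # override for nic2 only
}

print(object_params(params, "nic1")["nic_model"])  # -> e1000
print(object_params(params, "nic2")["nic_model"])  # -> virtio
```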
-# Functions related to MAC/IP addresses
-
-def _open_mac_pool(lock_mode):
-    lock_file = open("/tmp/mac_lock", "w+")
-    fcntl.lockf(lock_file, lock_mode)
-    pool = shelve.open("/tmp/address_pool")
-    return pool, lock_file
-
-
-def _close_mac_pool(pool, lock_file):
-    pool.close()
-    fcntl.lockf(lock_file, fcntl.LOCK_UN)
-    lock_file.close()
-
-
-def _generate_mac_address_prefix(mac_pool):
-    """
-    Generate a random MAC address prefix and add it to the MAC pool dictionary.
-    If the pool already contains a prefix, return it without updating the
-    pool. By convention, KVM autotest MAC addresses start with 0x9a.
-
-    @param mac_pool: The MAC address pool object.
-    @return: The MAC address prefix.
-    """
-    if "prefix" in mac_pool:
-        prefix = mac_pool["prefix"]
-        logging.debug("Using previously generated MAC address prefix for "
-                      "this host: %s", prefix)
-    else:
-        r = random.SystemRandom()
-        prefix = "9a:%02x:%02x:%02x:" % (r.randint(0x00, 0xff),
-                                         r.randint(0x00, 0xff),
-                                         r.randint(0x00, 0xff))
-        mac_pool["prefix"] = prefix
-        logging.debug("Generated MAC address prefix for this host: %s", prefix)
-    return prefix
-
-
-def generate_mac_address(vm_instance, nic_index):
-    """
-    Randomly generate a MAC address and add it to the MAC address pool.
-
-    Try to generate a MAC address based on a randomly generated MAC address
-    prefix and add it to a persistent dictionary.
-    key = VM instance + NIC index, value = MAC address
-    e.g. {'20100310-165222-Wt7l:0': '9a:5d:94:6a:9b:f9'}
-
-    @param vm_instance: The instance attribute of a VM.
-    @param nic_index: The index of the NIC.
-    @return: MAC address string.
-    """
-    mac_pool, lock_file = _open_mac_pool(fcntl.LOCK_EX)
-    key = "%s:%s" % (vm_instance, nic_index)
-    if key in mac_pool:
-        mac = mac_pool[key]
-    else:
-        prefix = _generate_mac_address_prefix(mac_pool)
-        r = random.SystemRandom()
-        while key not in mac_pool:
-            mac = prefix + "%02x:%02x" % (r.randint(0x00, 0xff),
-                                          r.randint(0x00, 0xff))
-            if mac in mac_pool.values():
-                continue
-            mac_pool[key] = mac
-            logging.debug("Generated MAC address for NIC %s: %s", key, mac)
-    _close_mac_pool(mac_pool, lock_file)
-    return mac
-
-
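The two-stage MAC scheme above (a per-host random prefix starting with 0x9a, plus two random trailing octets per NIC) can be sketched without the shelve pool or fcntl locking that the real code uses for persistence:

```python
import random

r = random.SystemRandom()
# Per-host prefix: fixed 0x9a first octet plus three random octets.
prefix = "9a:%02x:%02x:%02x:" % (r.randint(0x00, 0xff),
                                 r.randint(0x00, 0xff),
                                 r.randint(0x00, 0xff))
# Per-NIC suffix: two more random octets.
mac = prefix + "%02x:%02x" % (r.randint(0x00, 0xff), r.randint(0x00, 0xff))
print(mac)
```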
-def free_mac_address(vm_instance, nic_index):
-    """
-    Remove a MAC address from the address pool.
-
-    @param vm_instance: The instance attribute of a VM.
-    @param nic_index: The index of the NIC.
-    """
-    mac_pool, lock_file = _open_mac_pool(fcntl.LOCK_EX)
-    key = "%s:%s" % (vm_instance, nic_index)
-    if key in mac_pool:
-        logging.debug("Freeing MAC address for NIC %s: %s", key, mac_pool[key])
-        del mac_pool[key]
-    _close_mac_pool(mac_pool, lock_file)
-
-
-def set_mac_address(vm_instance, nic_index, mac):
-    """
-    Set a MAC address in the pool.
-
-    @param vm_instance: The instance attribute of a VM.
-    @param nic_index: The index of the NIC.
-    @param mac: The MAC address to assign.
-    """
-    mac_pool, lock_file = _open_mac_pool(fcntl.LOCK_EX)
-    mac_pool["%s:%s" % (vm_instance, nic_index)] = mac
-    _close_mac_pool(mac_pool, lock_file)
-
-
-def get_mac_address(vm_instance, nic_index):
-    """
-    Return a MAC address from the pool.
-
-    @param vm_instance: The instance attribute of a VM.
-    @param nic_index: The index of the NIC.
-    @return: MAC address string.
-    """
-    mac_pool, lock_file = _open_mac_pool(fcntl.LOCK_SH)
-    mac = mac_pool.get("%s:%s" % (vm_instance, nic_index))
-    _close_mac_pool(mac_pool, lock_file)
-    return mac
-
-
-def verify_ip_address_ownership(ip, macs, timeout=10.0):
-    """
-    Use arping and the ARP cache to make sure a given IP address belongs to one
-    of the given MAC addresses.
-
-    @param ip: An IP address.
-    @param macs: A list or tuple of MAC addresses.
-    @return: True iff ip is assigned to a MAC address in macs.
-    """
-    # Compile a regex that matches the given IP address and any of the given
-    # MAC addresses
-    mac_regex = "|".join("(%s)" % mac for mac in macs)
-    regex = re.compile(r"\b%s\b.*\b(%s)\b" % (ip, mac_regex), re.IGNORECASE)
-
-    # Check the ARP cache
-    o = commands.getoutput("%s -n" % find_command("arp"))
-    if regex.search(o):
-        return True
-
-    # Get the name of the bridge device for arping
-    o = commands.getoutput("%s route get %s" % (find_command("ip"), ip))
-    dev = re.findall("dev\s+\S+", o, re.IGNORECASE)
-    if not dev:
-        return False
-    dev = dev[0].split()[-1]
-
-    # Send an ARP request
-    o = commands.getoutput("%s -f -c 3 -I %s %s" %
-                           (find_command("arping"), dev, ip))
-    return bool(regex.search(o))
-
-
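The IP/MAC matching regex built by verify_ip_address_ownership() can be exercised against a canned `arp -n` line instead of a live ARP cache:

```python
import re

ip = "10.0.0.2"
macs = ["9a:5d:94:6a:9b:f9", "9a:11:22:33:44:55"]

# Match the IP followed (on the same line) by any of the given MACs.
mac_regex = "|".join("(%s)" % mac for mac in macs)
regex = re.compile(r"\b%s\b.*\b(%s)\b" % (ip, mac_regex), re.IGNORECASE)

arp_line = "10.0.0.2   ether   9a:5d:94:6a:9b:f9   C   br0"
print(bool(regex.search(arp_line)))  # -> True
```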
-# Utility functions for dealing with external processes
-
-def find_command(cmd):
-    """
-    Search common system paths for a command and return its full path.
-
-    @param cmd: Command name.
-    @raise ValueError: If the command cannot be found.
-    """
-    for directory in ["/usr/local/sbin", "/usr/local/bin",
-                      "/usr/sbin", "/usr/bin", "/sbin", "/bin"]:
-        path = os.path.join(directory, cmd)
-        if os.path.exists(path):
-            return path
-    raise ValueError('Missing command: %s' % cmd)
-
-
-def pid_exists(pid):
-    """
-    Return True if a given PID exists.
-
-    @param pid: Process ID number.
-    """
-    try:
-        os.kill(pid, 0)
-        return True
-    except OSError:
-        return False
-
-
-def safe_kill(pid, signal):
-    """
-    Attempt to send a signal to a given process that may or may not exist.
-
-    @param pid: Process ID number.
-    @param signal: Signal number.
-    """
-    try:
-        os.kill(pid, signal)
-        return True
-    except OSError:
-        return False
-
-
-def kill_process_tree(pid, sig=signal.SIGKILL):
-    """Signal a process and all of its children.
-
-    If the process does not exist -- return.
-
-    @param pid: The pid of the process to signal.
-    @param sig: The signal to send to the processes.
-    """
-    if not safe_kill(pid, signal.SIGSTOP):
-        return
-    children = commands.getoutput("ps --ppid=%d -o pid=" % pid).split()
-    for child in children:
-        kill_process_tree(int(child), sig)
-    safe_kill(pid, sig)
-    safe_kill(pid, signal.SIGCONT)
-
-
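kill_process_tree() stops the parent first so it cannot spawn new children, recurses depth-first into the children, then signals and resumes the parent. The resulting signal order can be sketched purely, with a dict standing in for the `ps --ppid` lookup (`kill_order` is a hypothetical name):

```python
def kill_order(tree, pid, sig="SIGKILL", order=None):
    # Record the (pid, signal) sequence the recursion would produce.
    if order is None:
        order = []
    order.append((pid, "SIGSTOP"))        # freeze the parent first
    for child in tree.get(pid, []):
        kill_order(tree, child, sig, order)  # handle children depth-first
    order.append((pid, sig))              # then signal the parent...
    order.append((pid, "SIGCONT"))        # ...and let it deliver the signal
    return order

tree = {1000: [1001, 1002]}
print(kill_order(tree, 1000))
```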
-def get_latest_kvm_release_tag(release_listing):
-    """
-    Fetches the latest release tag for KVM.
-
-    @param release_listing: URL that contains a list of the Source Forge
-            KVM project files.
-    """
-    try:
-        release_page = utils.urlopen(release_listing)
-        data = release_page.read()
-        release_page.close()
-        rx = re.compile(r"kvm-(\d+)\.tar\.gz", re.IGNORECASE)
-        matches = rx.findall(data)
-        # In all regexp matches to something that looks like a release tag,
-        # get the largest integer. That will be our latest release tag.
-        latest_tag = max(int(x) for x in matches)
-        return str(latest_tag)
-    except Exception, e:
-        message = "Could not fetch latest KVM release tag: %s" % str(e)
-        logging.error(message)
-        raise error.TestError(message)
-
-
-def get_git_branch(repository, branch, srcdir, commit=None, lbranch=None):
-    """
-    Retrieves a given git code repository.
-
-    @param repository: Git repository URL
-    @param branch: Branch to fetch from the remote repository
-    @param srcdir: Destination directory for the source tree
-    @param commit: Optional commit to check out after fetching
-    @param lbranch: Optional local branch name (defaults to branch)
-    """
-    logging.info("Fetching git [REP '%s' BRANCH '%s' COMMIT '%s'] -> %s",
-                 repository, branch, commit, srcdir)
-    if not os.path.exists(srcdir):
-        os.makedirs(srcdir)
-    os.chdir(srcdir)
-
-    if os.path.exists(".git"):
-        utils.system("git reset --hard")
-    else:
-        utils.system("git init")
-
-    if not lbranch:
-        lbranch = branch
-
-    utils.system("git fetch -q -f -u -t %s %s:%s" %
-                 (repository, branch, lbranch))
-    utils.system("git checkout %s" % lbranch)
-    if commit:
-        utils.system("git checkout %s" % commit)
-
-    h = utils.system_output('git log --pretty=format:"%H" -1')
-    try:
-        desc = "tag %s" % utils.system_output("git describe")
-    except error.CmdError:
-        desc = "no tag found"
-
-    logging.info("Commit hash for %s is %s (%s)", repository, h.strip(), desc)
-    return srcdir
-
-
-def check_kvm_source_dir(source_dir):
-    """
-    Inspects the KVM source directory and verifies its layout. On some
-    occasions the build may depend on the source directory layout.
-    The reason why the return codes are numbers is that we might have more
-    changes to the source directory layout, so it's not scalable to just use
-    strings like 'old_repo', 'new_repo' and such.
-
-    @param source_dir: Source code path that will be inspected.
-    """
-    os.chdir(source_dir)
-    has_qemu_dir = os.path.isdir('qemu')
-    has_kvm_dir = os.path.isdir('kvm')
-    if has_qemu_dir:
-        logging.debug("qemu directory detected, source dir layout 1")
-        return 1
-    elif has_kvm_dir:
-        logging.debug("kvm directory detected, source dir layout 2")
-        return 2
-    else:
-        raise error.TestError("Unknown source dir layout, cannot proceed.")
-
-
-# Functions and classes used for logging into guests and transferring files
-
-class LoginError(Exception):
-    def __init__(self, msg, output):
-        Exception.__init__(self, msg, output)
-        self.msg = msg
-        self.output = output
-
-    def __str__(self):
-        return "%s    (output: %r)" % (self.msg, self.output)
-
-
-class LoginAuthenticationError(LoginError):
-    pass
-
-
-class LoginTimeoutError(LoginError):
-    def __init__(self, output):
-        LoginError.__init__(self, "Login timeout expired", output)
-
-
-class LoginProcessTerminatedError(LoginError):
-    def __init__(self, status, output):
-        LoginError.__init__(self, None, output)
-        self.status = status
-
-    def __str__(self):
-        return ("Client process terminated    (status: %s,    output: %r)" %
-                (self.status, self.output))
-
-
-class LoginBadClientError(LoginError):
-    def __init__(self, client):
-        LoginError.__init__(self, None, None)
-        self.client = client
-
-    def __str__(self):
-        return "Unknown remote shell client: %r" % self.client
-
-
-class SCPError(Exception):
-    def __init__(self, msg, output):
-        Exception.__init__(self, msg, output)
-        self.msg = msg
-        self.output = output
-
-    def __str__(self):
-        return "%s    (output: %r)" % (self.msg, self.output)
-
-
-class SCPAuthenticationError(SCPError):
-    pass
-
-
-class SCPAuthenticationTimeoutError(SCPAuthenticationError):
-    def __init__(self, output):
-        SCPAuthenticationError.__init__(self, "Authentication timeout expired",
-                                        output)
-
-
-class SCPTransferTimeoutError(SCPError):
-    def __init__(self, output):
-        SCPError.__init__(self, "Transfer timeout expired", output)
-
-
-class SCPTransferFailedError(SCPError):
-    def __init__(self, status, output):
-        SCPError.__init__(self, None, output)
-        self.status = status
-
-    def __str__(self):
-        return ("SCP transfer failed    (status: %s,    output: %r)" %
-                (self.status, self.output))
-
-
-def _remote_login(session, username, password, prompt, timeout=10):
-    """
-    Log into a remote host (guest) using SSH or Telnet.  Wait for questions
-    and provide answers.  If timeout expires while waiting for output from the
-    child (e.g. a password prompt or a shell prompt) -- fail.
-
-    @brief: Log into a remote host (guest) using SSH or Telnet.
-
-    @param session: An Expect or ShellSession instance to operate on
-    @param username: The username to send in reply to a login prompt
-    @param password: The password to send in reply to a password prompt
-    @param prompt: The shell prompt that indicates a successful login
-    @param timeout: The maximal time duration (in seconds) to wait for each
-            step of the login procedure (i.e. the "Are you sure" prompt, the
-            password prompt, the shell prompt, etc)
-    @raise LoginTimeoutError: If timeout expires
-    @raise LoginAuthenticationError: If authentication fails
-    @raise LoginProcessTerminatedError: If the client terminates during login
-    @raise LoginError: If some other error occurs
-    """
-    password_prompt_count = 0
-    login_prompt_count = 0
-
-    while True:
-        try:
-            match, text = session.read_until_last_line_matches(
-                [r"[Aa]re you sure", r"[Pp]assword:\s*$", r"[Ll]ogin:\s*$",
-                 r"[Cc]onnection.*closed", r"[Cc]onnection.*refused",
-                 r"[Pp]lease wait", prompt],
-                timeout=timeout, internal_timeout=0.5)
-            if match == 0:  # "Are you sure you want to continue connecting"
-                logging.debug("Got 'Are you sure...'; sending 'yes'")
-                session.sendline("yes")
-                continue
-            elif match == 1:  # "password:"
-                if password_prompt_count == 0:
-                    logging.debug("Got password prompt; sending '%s'", password)
-                    session.sendline(password)
-                    password_prompt_count += 1
-                    continue
-                else:
-                    raise LoginAuthenticationError("Got password prompt twice",
-                                                   text)
-            elif match == 2:  # "login:"
-                if login_prompt_count == 0 and password_prompt_count == 0:
-                    logging.debug("Got username prompt; sending '%s'", username)
-                    session.sendline(username)
-                    login_prompt_count += 1
-                    continue
-                else:
-                    if login_prompt_count > 0:
-                        msg = "Got username prompt twice"
-                    else:
-                        msg = "Got username prompt after password prompt"
-                    raise LoginAuthenticationError(msg, text)
-            elif match == 3:  # "Connection closed"
-                raise LoginError("Client said 'connection closed'", text)
-            elif match == 4:  # "Connection refused"
-                raise LoginError("Client said 'connection refused'", text)
-            elif match == 5:  # "Please wait"
-                logging.debug("Got 'Please wait'")
-                timeout = 30
-                continue
-            elif match == 6:  # prompt
-                logging.debug("Got shell prompt -- logged in")
-                break
-        except kvm_subprocess.ExpectTimeoutError, e:
-            raise LoginTimeoutError(e.output)
-        except kvm_subprocess.ExpectProcessTerminatedError, e:
-            raise LoginProcessTerminatedError(e.status, e.output)
-
-
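The login loop above is effectively a small state machine over expected prompt patterns. A sketch driving the same pattern list against a canned prompt sequence (the `[root@... ~]#` shell prompt is a hypothetical example, supplied by the caller in the real code):

```python
import re

# The prompt patterns matched by _remote_login(), in the same order.
patterns = [r"[Aa]re you sure", r"[Pp]assword:\s*$", r"[Ll]ogin:\s*$",
            r"[Cc]onnection.*closed", r"[Cc]onnection.*refused",
            r"[Pp]lease wait", r"\[root@\w+ ~\]#"]

def classify(line):
    # Return the index of the first matching pattern, as the expect
    # layer does, or None if nothing matches.
    for i, pattern in enumerate(patterns):
        if re.search(pattern, line):
            return i
    return None

session_output = [
    "Are you sure you want to continue connecting (yes/no)? ",
    "root@10.0.0.2's password:",
    "[root@guest ~]#",
]
print([classify(line) for line in session_output])  # -> [0, 1, 6]
```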
-def remote_login(client, host, port, username, password, prompt, linesep="\n",
-                 log_filename=None, timeout=10):
-    """
-    Log into a remote host (guest) using SSH/Telnet/Netcat.
-
-    @param client: The client to use ('ssh', 'telnet' or 'nc')
-    @param host: Hostname or IP address
-    @param port: Port to connect to
-    @param username: Username (if required)
-    @param password: Password (if required)
-    @param prompt: Shell prompt (regular expression)
-    @param linesep: The line separator to use when sending lines
-            (e.g. '\\n' or '\\r\\n')
-    @param log_filename: If specified, log all output to this file
-    @param timeout: The maximal time duration (in seconds) to wait for
-            each step of the login procedure (i.e. the "Are you sure" prompt
-            or the password prompt)
-    @raise LoginBadClientError: If an unknown client is requested
-    @raise: Whatever _remote_login() raises
-    @return: A ShellSession object.
-    """
-    if client == "ssh":
-        cmd = ("ssh -o UserKnownHostsFile=/dev/null "
-               "-o PreferredAuthentications=password -p %s %s@%s" %
-               (port, username, host))
-    elif client == "telnet":
-        cmd = "telnet -l %s %s %s" % (username, host, port)
-    elif client == "nc":
-        cmd = "nc %s %s" % (host, port)
-    else:
-        raise LoginBadClientError(client)
-
-    logging.debug("Trying to login with command '%s'", cmd)
-    session = kvm_subprocess.ShellSession(cmd, linesep=linesep, prompt=prompt)
-    try:
-        _remote_login(session, username, password, prompt, timeout)
-    except:
-        session.close()
-        raise
-    if log_filename:
-        session.set_output_func(log_line)
-        session.set_output_params((log_filename,))
-    return session
-
-
-def wait_for_login(client, host, port, username, password, prompt, linesep="\n",
-                   log_filename=None, timeout=240, internal_timeout=10):
-    """
-    Make multiple attempts to log into a remote host (guest) until one succeeds
-    or timeout expires.
-
-    @param timeout: Total time duration to wait for a successful login
-    @param internal_timeout: The maximal time duration (in seconds) to wait for
-            each step of the login procedure (e.g. the "Are you sure" prompt
-            or the password prompt)
-    @see: remote_login()
-    @raise: Whatever remote_login() raises
-    @return: A ShellSession object.
-    """
-    logging.debug("Attempting to log into %s:%s using %s (timeout %ds)",
-                  host, port, client, timeout)
-    end_time = time.time() + timeout
-    while time.time() < end_time:
-        try:
-            return remote_login(client, host, port, username, password, prompt,
-                                linesep, log_filename, internal_timeout)
-        except LoginError, e:
-            logging.debug(e)
-        time.sleep(2)
-    # Timeout expired; try one more time but don't catch exceptions
-    return remote_login(client, host, port, username, password, prompt,
-                        linesep, log_filename, internal_timeout)
-
-
-def _remote_scp(session, password, transfer_timeout=600, login_timeout=10):
-    """
-    Transfer file(s) to a remote host (guest) using SCP.  Wait for questions
-    and provide answers.  If login_timeout expires while waiting for output
-    from the child (e.g. a password prompt), fail.  If transfer_timeout expires
-    while waiting for the transfer to complete, fail.
-
-    @brief: Transfer files using SCP, given a command line.
-
-    @param session: An Expect or ShellSession instance to operate on
-    @param password: The password to send in reply to a password prompt.
-    @param transfer_timeout: The time duration (in seconds) to wait for the
-            transfer to complete.
-    @param login_timeout: The maximal time duration (in seconds) to wait for
-            each step of the login procedure (i.e. the "Are you sure" prompt or
-            the password prompt)
-    @raise SCPAuthenticationError: If authentication fails
-    @raise SCPTransferTimeoutError: If the transfer fails to complete in time
-    @raise SCPTransferFailedError: If the process terminates with a nonzero
-            exit code
-    @raise SCPError: If some other error occurs
-    """
-    password_prompt_count = 0
-    timeout = login_timeout
-    authentication_done = False
-
-    while True:
-        try:
-            match, text = session.read_until_last_line_matches(
-                [r"[Aa]re you sure", r"[Pp]assword:\s*$", r"lost connection"],
-                timeout=timeout, internal_timeout=0.5)
-            if match == 0:  # "Are you sure you want to continue connecting"
-                logging.debug("Got 'Are you sure...'; sending 'yes'")
-                session.sendline("yes")
-                continue
-            elif match == 1:  # "password:"
-                if password_prompt_count == 0:
-                    logging.debug("Got password prompt; sending '%s'", password)
-                    session.sendline(password)
-                    password_prompt_count += 1
-                    timeout = transfer_timeout
-                    authentication_done = True
-                    continue
-                else:
-                    raise SCPAuthenticationError("Got password prompt twice",
-                                                 text)
-            elif match == 2:  # "lost connection"
-                raise SCPError("SCP client said 'lost connection'", text)
-        except kvm_subprocess.ExpectTimeoutError, e:
-            if authentication_done:
-                raise SCPTransferTimeoutError(e.output)
-            else:
-                raise SCPAuthenticationTimeoutError(e.output)
-        except kvm_subprocess.ExpectProcessTerminatedError, e:
-            if e.status == 0:
-                logging.debug("SCP process terminated with status 0")
-                break
-            else:
-                raise SCPTransferFailedError(e.status, e.output)
-
-
-def remote_scp(command, password, log_filename=None, transfer_timeout=600,
-               login_timeout=10):
-    """
-    Transfer file(s) to a remote host (guest) using SCP.
-
-    @brief: Transfer files using SCP, given a command line.
-
-    @param command: The command to execute
-        (e.g. "scp -r foobar root@localhost:/tmp/").
-    @param password: The password to send in reply to a password prompt.
-    @param log_filename: If specified, log all output to this file
-    @param transfer_timeout: The time duration (in seconds) to wait for the
-            transfer to complete.
-    @param login_timeout: The maximal time duration (in seconds) to wait for
-            each step of the login procedure (i.e. the "Are you sure" prompt
-            or the password prompt)
-    @raise: Whatever _remote_scp() raises
-    """
-    logging.debug("Trying to SCP with command '%s', timeout %ss",
-                  command, transfer_timeout)
-    if log_filename:
-        output_func = log_line
-        output_params = (log_filename,)
-    else:
-        output_func = None
-        output_params = ()
-    session = kvm_subprocess.Expect(command,
-                                    output_func=output_func,
-                                    output_params=output_params)
-    try:
-        _remote_scp(session, password, transfer_timeout, login_timeout)
-    finally:
-        session.close()
-
-
-def scp_to_remote(host, port, username, password, local_path, remote_path,
-                  log_filename=None, timeout=600):
-    """
-    Copy files to a remote host (guest) through scp.
-
-    @param host: Hostname or IP address
-    @param username: Username (if required)
-    @param password: Password (if required)
-    @param local_path: Path on the local machine where we are copying from
-    @param remote_path: Path on the remote machine where we are copying to
-    @param log_filename: If specified, log all output to this file
-    @param timeout: The time duration (in seconds) to wait for the transfer
-            to complete.
-    @raise: Whatever remote_scp() raises
-    """
-    command = ("scp -v -o UserKnownHostsFile=/dev/null "
-               "-o PreferredAuthentications=password -r -P %s %s %s@%s:%s" %
-               (port, local_path, username, host, remote_path))
-    remote_scp(command, password, log_filename, timeout)
-
-
-def scp_from_remote(host, port, username, password, remote_path, local_path,
-                    log_filename=None, timeout=600):
-    """
-    Copy files from a remote host (guest).
-
-    @param host: Hostname or IP address
-    @param username: Username (if required)
-    @param password: Password (if required)
-    @param remote_path: Path on the remote machine where we are copying from
-    @param local_path: Path on the local machine where we are copying to
-    @param log_filename: If specified, log all output to this file
-    @param timeout: The time duration (in seconds) to wait for the transfer
-            to complete.
-    @raise: Whatever remote_scp() raises
-    """
-    command = ("scp -v -o UserKnownHostsFile=/dev/null "
-               "-o PreferredAuthentications=password -r -P %s %s@%s:%s %s" %
-               (port, username, host, remote_path, local_path))
-    remote_scp(command, password, log_filename, timeout)
-
-
-def copy_files_to(address, client, username, password, port, local_path,
-                  remote_path, log_filename=None, verbose=False, timeout=600):
-    """
-    Copy files to a remote host (guest) using the selected client.
-
-    @param client: Type of transfer client
-    @param username: Username (if required)
-    @param password: Password (if required)
-    @param local_path: Path on the local machine where we are copying from
-    @param remote_path: Path on the remote machine where we are copying to
-    @param address: Address of remote host (guest)
-    @param log_filename: If specified, log all output to this file (SCP only)
-    @param verbose: If True, log some stats using logging.debug (RSS only)
-    @param timeout: The time duration (in seconds) to wait for the transfer to
-            complete.
-    @raise: Whatever remote_scp() raises
-    """
-    if client == "scp":
-        scp_to_remote(address, port, username, password, local_path,
-                      remote_path, log_filename, timeout)
-    elif client == "rss":
-        log_func = None
-        if verbose:
-            log_func = logging.debug
-        c = rss_file_transfer.FileUploadClient(address, port, log_func)
-        c.upload(local_path, remote_path, timeout)
-        c.close()
-
-
-def copy_files_from(address, client, username, password, port, remote_path,
-                    local_path, log_filename=None, verbose=False, timeout=600):
-    """
-    Copy files from a remote host (guest) using the selected client.
-
-    @param client: Type of transfer client
-    @param username: Username (if required)
-    @param password: Password (if required)
-    @param remote_path: Path on the remote machine where we are copying from
-    @param local_path: Path on the local machine where we are copying to
-    @param address: Address of remote host(guest)
-    @param log_filename: If specified, log all output to this file (SCP only)
-    @param verbose: If True, log some stats using logging.debug (RSS only)
-    @param timeout: The time duration (in seconds) to wait for the transfer to
-            complete.
-    @raise: Whatever remote_scp() raises
-    """
-    if client == "scp":
-        scp_from_remote(address, port, username, password, remote_path,
-                        local_path, log_filename, timeout)
-    elif client == "rss":
-        log_func = None
-        if verbose:
-            log_func = logging.debug
-        c = rss_file_transfer.FileDownloadClient(address, port, log_func)
-        c.download(remote_path, local_path, timeout)
-        c.close()
-
-
-# The following are utility functions related to ports.
-
-def is_port_free(port, address):
-    """
-    Return True if the given port is available for use.
-
-    @param port: Port number
-    @param address: Hostname or IP address to check against.
-    """
-    try:
-        s = socket.socket()
-        #s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
-        if address == "localhost":
-            s.bind(("localhost", port))
-            free = True
-        else:
-            s.connect((address, port))
-            free = False
-    except socket.error:
-        if address == "localhost":
-            free = False
-        else:
-            free = True
-    s.close()
-    return free
-
-
-def find_free_port(start_port, end_port, address="localhost"):
-    """
-    Return a free host port in the range [start_port, end_port).
-
-    @param start_port: First port that will be checked.
-    @param end_port: Port immediately after the last one that will be checked.
-    """
-    for i in range(start_port, end_port):
-        if is_port_free(i, address):
-            return i
-    return None
-
-
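A condensed, self-contained sketch of how the two port helpers above interact (localhost-only; the remote-connect branch and the commented-out SO_REUSEADDR handling are omitted):

```python
import socket

def is_port_free(port, address="localhost"):
    # Binding succeeds only when no other process holds the port on
    # the local machine; the remote-address branch is omitted here.
    s = socket.socket()
    try:
        s.bind((address, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

def find_free_port(start_port, end_port, address="localhost"):
    # Scan the half-open range [start_port, end_port) and return the
    # first bindable port, or None if every port is taken.
    for port in range(start_port, end_port):
        if is_port_free(port, address):
            return port
    return None

# Typical use: pick a host port for VNC or a redirection.
port = find_free_port(5900, 6000)
```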
-def find_free_ports(start_port, end_port, count, address="localhost"):
-    """
-    Return up to count free host ports in the range [start_port, end_port).
-
-    @param count: Number of free ports needed.
-    @param start_port: First port that will be checked.
-    @param end_port: Port immediately after the last one that will be checked.
-    """
-    ports = []
-    i = start_port
-    while i < end_port and count > 0:
-        if is_port_free(i, address):
-            ports.append(i)
-            count -= 1
-        i += 1
-    return ports
-
-
-# An easy way to log lines to files when the logging system can't be used
-
-_open_log_files = {}
-_log_file_dir = "/tmp"
-
-
-def log_line(filename, line):
-    """
-    Write a line to a file.  '\n' is appended to the line.
-
-    @param filename: Path of file to write to, either absolute or relative to
-            the dir set by set_log_file_dir().
-    @param line: Line to write.
-    """
-    global _open_log_files, _log_file_dir
-    if filename not in _open_log_files:
-        path = get_path(_log_file_dir, filename)
-        try:
-            os.makedirs(os.path.dirname(path))
-        except OSError:
-            pass
-        _open_log_files[filename] = open(path, "w")
-    timestr = time.strftime("%Y-%m-%d %H:%M:%S")
-    _open_log_files[filename].write("%s: %s\n" % (timestr, line))
-    _open_log_files[filename].flush()
-
-
-def set_log_file_dir(dir):
-    """
-    Set the base directory for log files created by log_line().
-
-    @param dir: Directory for log files.
-    """
-    global _log_file_dir
-    _log_file_dir = dir
-
-
-# The following are miscellaneous utility functions.
-
-def get_path(base_path, user_path):
-    """
-    Translate a user specified path to a real path.
-    If user_path is relative, append it to base_path.
-    If user_path is absolute, return it as is.
-
-    @param base_path: The base path of relative user specified paths.
-    @param user_path: The user specified path.
-    """
-    if os.path.isabs(user_path):
-        return user_path
-    else:
-        return os.path.join(base_path, user_path)
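The path-resolution rule above is small enough to demonstrate directly (the example paths are illustrative, not taken from the test config):

```python
import os.path

def get_path(base_path, user_path):
    # Absolute paths pass through untouched; relative ones are joined
    # onto the base directory.
    if os.path.isabs(user_path):
        return user_path
    return os.path.join(base_path, user_path)
```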
-
-
-def generate_random_string(length):
-    """
-    Return a random string using alphanumeric characters.
-
-    @param length: Length of the string that will be generated.
-    """
-    r = random.SystemRandom()
-    str = ""
-    chars = string.letters + string.digits
-    while length > 0:
-        str += r.choice(chars)
-        length -= 1
-    return str
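An equivalent sketch of the generator above; note that string.letters exists only on Python 2, so string.ascii_letters is used here to keep the example portable:

```python
import random
import string

def generate_random_string(length):
    # Cryptographically seeded choice over alphanumerics, as above;
    # string.ascii_letters replaces the Python 2-only string.letters.
    r = random.SystemRandom()
    chars = string.ascii_letters + string.digits
    return "".join(r.choice(chars) for _ in range(length))

# generate_random_id() equivalent: an "id" prefix plus 6 random chars.
qemu_id = "id" + generate_random_string(6)
```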
-
-def generate_random_id():
-    """
-    Return a random string suitable for use as a qemu id.
-    """
-    return "id" + generate_random_string(6)
-
-
-def generate_tmp_file_name(file, ext=None, dir='/tmp/'):
-    """
-    Returns a temporary file name. The file is not created.
-    """
-    while True:
-        file_name = (file + '-' + time.strftime("%Y%m%d-%H%M%S-") +
-                     generate_random_string(4))
-        if ext:
-            file_name += '.' + ext
-        file_name = os.path.join(dir, file_name)
-        if not os.path.exists(file_name):
-            break
-
-    return file_name
-
-
-def format_str_for_message(str):
-    """
-    Format str so that it can be appended to a message.
-    If str consists of one line, prefix it with a space.
-    If str consists of multiple lines, prefix it with a newline.
-
-    @param str: string that will be formatted.
-    """
-    lines = str.splitlines()
-    num_lines = len(lines)
-    str = "\n".join(lines)
-    if num_lines == 0:
-        return ""
-    elif num_lines == 1:
-        return " " + str
-    else:
-        return "\n" + str
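The three cases handled above (empty, one line, many lines) can be exercised with a sketch that avoids shadowing the str builtin:

```python
def format_str_for_message(s):
    # One-line strings get a leading space, multi-line ones a leading
    # newline, so either reads naturally when appended to a message.
    lines = s.splitlines()
    s = "\n".join(lines)
    if not lines:
        return ""
    elif len(lines) == 1:
        return " " + s
    return "\n" + s
```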
-
-
-def wait_for(func, timeout, first=0.0, step=1.0, text=None):
-    """
-    If func() evaluates to True before timeout expires, return the
-    value of func(). Otherwise return None.
-
-    @brief: Wait until func() evaluates to True.
-
-    @param timeout: Timeout in seconds
-    @param first: Time to sleep before first attempt
-    @param step: Time to sleep between attempts in seconds
-    @param text: Text to print while waiting, for debug purposes
-    """
-    start_time = time.time()
-    end_time = time.time() + timeout
-
-    time.sleep(first)
-
-    while time.time() < end_time:
-        if text:
-            logging.debug("%s (%f secs)", text, (time.time() - start_time))
-
-        output = func()
-        if output:
-            return output
-
-        time.sleep(step)
-
-    logging.debug("Timeout elapsed")
-    return None
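The polling loop above is the workhorse behind most guest-readiness checks; a minimal version without the debug logging:

```python
import time

def wait_for(func, timeout, first=0.0, step=1.0, text=None):
    # Poll func() until it returns a truthy value or timeout expires.
    end_time = time.time() + timeout
    time.sleep(first)
    while time.time() < end_time:
        output = func()
        if output:
            return output
        time.sleep(step)
    return None

# Example condition: becomes truthy ~0.3s from now.
deadline = time.time() + 0.3
result = wait_for(lambda: time.time() >= deadline or None, timeout=5, step=0.05)
```

Because wait_for() returns func()'s value, callers can retrieve a session object or command output directly from the wait.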
-
-
-def get_hash_from_file(hash_path, dvd_basename):
-    """
-    Get the hash of a given DVD image from a hash file
-    (Hash files are usually named MD5SUM or SHA1SUM and are located inside the
-    download directories of the DVDs)
-
-    @param hash_path: Local path to a hash file.
-    @param dvd_basename: Basename of a DVD image
-    """
-    hash_file = open(hash_path, 'r')
-    for line in hash_file.readlines():
-        if dvd_basename in line:
-            return line.split()[0]
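The lookup above can be sketched against a fabricated MD5SUM file (the image names and digests below are made up for illustration):

```python
import os
import tempfile

def get_hash_from_file(hash_path, dvd_basename):
    # Return the first checksum on a line mentioning the image name,
    # matching the MD5SUM/SHA1SUM layout described above.
    with open(hash_path) as hash_file:
        for line in hash_file:
            if dvd_basename in line:
                return line.split()[0]
    return None

# Hypothetical hash file contents.
path = os.path.join(tempfile.mkdtemp(), "MD5SUM")
with open(path, "w") as f:
    f.write("d41d8cd98f00b204e9800998ecf8427e  Fedora-14-x86_64-DVD.iso\n"
            "900150983cd24fb0d6963f7d28e17f72  Fedora-14-i386-DVD.iso\n")
md5 = get_hash_from_file(path, "Fedora-14-i386-DVD.iso")
```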
-
-
-def run_tests(parser, job):
-    """
-    Runs the sequence of KVM tests based on the list of dictionaries
-    generated by the configuration system, handling dependencies.
-
-    @param parser: Config parser object.
-    @param job: Autotest job object.
-
-    @return: True if all tests passed, False if any of them failed.
-    """
-    for i, d in enumerate(parser.get_dicts()):
-        logging.info("Test %4d:  %s", i + 1, d["shortname"])
-
-    status_dict = {}
-    failed = False
-
-    for dict in parser.get_dicts():
-        if dict.get("skip") == "yes":
-            continue
-        dependencies_satisfied = True
-        for dep in dict.get("dep"):
-            for test_name in status_dict.keys():
-                if not dep in test_name:
-                    continue
-                if not status_dict[test_name]:
-                    dependencies_satisfied = False
-                    break
-        if dependencies_satisfied:
-            test_iterations = int(dict.get("iterations", 1))
-            test_tag = dict.get("shortname")
-            # Setting up profilers during test execution.
-            profilers = dict.get("profilers", "").split()
-            for profiler in profilers:
-                job.profilers.add(profiler)
-
-            # We need only one execution, profiled, hence we're passing
-            # the profile_only parameter to job.run_test().
-            current_status = job.run_test("kvm", params=dict, tag=test_tag,
-                                          iterations=test_iterations,
-                                          profile_only= bool(profilers) or None)
-
-            for profiler in profilers:
-                job.profilers.delete(profiler)
-
-            if not current_status:
-                failed = True
-        else:
-            current_status = False
-        status_dict[dict.get("name")] = current_status
-
-    return not failed
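The dependency gate inside run_tests() can be isolated as a pure function; the test names below are hypothetical examples of Cartesian-config shortnames:

```python
def dependencies_satisfied(deps, status_dict):
    # Mirror of the check in run_tests() above: a dependency is unmet
    # when any already-run test whose name contains the dep string
    # did not pass.
    for dep in deps:
        for test_name, passed in status_dict.items():
            if dep in test_name and not passed:
                return False
    return True

# Hypothetical status of previously run tests.
status = {"smp2.Fedora.14.install": True, "smp2.Fedora.14.boot": False}
```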
-
-
-def create_report(report_dir, results_dir):
-    """
-    Creates a neatly arranged HTML results report in the results dir.
-
-    @param report_dir: Directory where the report script is located.
-    @param results_dir: Directory where the results will be output.
-    """
-    reporter = os.path.join(report_dir, 'html_report.py')
-    html_file = os.path.join(results_dir, 'results.html')
-    os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
-
-
-def get_full_pci_id(pci_id):
-    """
-    Get full PCI ID of pci_id.
-
-    @param pci_id: PCI ID of a device.
-    """
-    cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
-    status, full_id = commands.getstatusoutput(cmd)
-    if status != 0:
-        return None
-    return full_id
-
-
-def get_vendor_from_pci_id(pci_id):
-    """
-    Check out the device vendor ID according to pci_id.
-
-    @param pci_id: PCI ID of a device.
-    """
-    cmd = "lspci -n | awk '/%s/ {print $3}'" % pci_id
-    return re.sub(":", " ", commands.getoutput(cmd))
-
-
-class Thread(threading.Thread):
-    """
-    Run a function in a background thread.
-    """
-    def __init__(self, target, args=(), kwargs={}):
-        """
-        Initialize the instance.
-
-        @param target: Function to run in the thread.
-        @param args: Arguments to pass to target.
-        @param kwargs: Keyword arguments to pass to target.
-        """
-        threading.Thread.__init__(self)
-        self._target = target
-        self._args = args
-        self._kwargs = kwargs
-
-
-    def run(self):
-        """
-        Run target (passed to the constructor).  No point in calling this
-        function directly.  Call start() to make this function run in a new
-        thread.
-        """
-        self._e = None
-        self._retval = None
-        try:
-            try:
-                self._retval = self._target(*self._args, **self._kwargs)
-            except:
-                self._e = sys.exc_info()
-                raise
-        finally:
-            # Avoid circular references (start() may be called only once so
-            # it's OK to delete these)
-            del self._target, self._args, self._kwargs
-
-
-    def join(self, timeout=None, suppress_exception=False):
-        """
-        Join the thread.  If target raised an exception, re-raise it.
-        Otherwise, return the value returned by target.
-
-        @param timeout: Timeout value to pass to threading.Thread.join().
-        @param suppress_exception: If True, don't re-raise the exception.
-        """
-        threading.Thread.join(self, timeout)
-        try:
-            if self._e:
-                if not suppress_exception:
-                    # Because the exception was raised in another thread, we
-                    # need to explicitly insert the current context into it
-                    s = error.exception_context(self._e[1])
-                    s = error.join_contexts(error.get_context(), s)
-                    error.set_exception_context(self._e[1], s)
-                    raise self._e[0], self._e[1], self._e[2]
-            else:
-                return self._retval
-        finally:
-            # Avoid circular references (join() may be called multiple times
-            # so we can't delete these)
-            self._e = None
-            self._retval = None
-
-
-def parallel(targets):
-    """
-    Run multiple functions in parallel.
-
-    @param targets: A sequence of tuples or functions.  If it's a sequence of
-            tuples, each tuple will be interpreted as (target, args, kwargs) or
-            (target, args) or (target,) depending on its length.  If it's a
-            sequence of functions, the functions will be called without
-            arguments.
-    @return: A list of the values returned by the functions called.
-    """
-    threads = []
-    for target in targets:
-        if isinstance(target, tuple) or isinstance(target, list):
-            t = Thread(*target)
-        else:
-            t = Thread(target)
-        threads.append(t)
-        t.start()
-    return [t.join() for t in threads]
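A condensed, Python 3-compatible sketch of the Thread wrapper and parallel() above; the original uses the Python 2 three-argument raise plus Autotest's error-context helpers, which are omitted here:

```python
import sys
import threading

class Thread(threading.Thread):
    # Capture the target's return value or exception so join() can
    # hand either back to the caller.
    def __init__(self, target, args=(), kwargs={}):
        threading.Thread.__init__(self)
        self._target_func = target
        self._args = args
        self._kwargs = kwargs

    def run(self):
        self._e, self._retval = None, None
        try:
            self._retval = self._target_func(*self._args, **self._kwargs)
        except Exception:
            self._e = sys.exc_info()

    def join(self, timeout=None):
        threading.Thread.join(self, timeout)
        if self._e:
            # Re-raise in the joining thread (Python 3 style).
            raise self._e[1]
        return self._retval

def parallel(targets):
    # Accept (target, args, kwargs) tuples or bare callables.
    threads = []
    for target in targets:
        t = Thread(*target) if isinstance(target, (tuple, list)) else Thread(target)
        threads.append(t)
        t.start()
    return [t.join() for t in threads]

results = parallel([(lambda x: x * 2, (21,)), lambda: "done"])
```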
-
-
-class KvmLoggingConfig(logging_config.LoggingConfig):
-    """
-    Used with the sole purpose of providing convenient logging setup
-    for the KVM test auxiliary programs.
-    """
-    def configure_logging(self, results_dir=None, verbose=False):
-        super(KvmLoggingConfig, self).configure_logging(use_console=True,
-                                                        verbose=verbose)
-
-
-class PciAssignable(object):
-    """
-    Request PCI assignable devices on host. It will check whether to request
-    PF (Physical Functions) or VF (Virtual Functions).
-    """
-    def __init__(self, type="vf", driver=None, driver_option=None,
-                 names=None, devices_requested=None):
-        """
-        Initialize parameter 'type' which could be:
-        vf: Virtual Functions
-        pf: Physical Function (actual hardware)
-        mixed: Both VFs and PFs
-
-        When passing through physical NIC cards, we need to specify which
-        devices are to be assigned, e.g. 'eth1 eth2'.
-
-        When passing through Virtual Functions, we need to specify how many
-        VFs are going to be assigned, e.g. passthrough_count = 8 and max_vfs
-        in the config file.
-
-        @param type: PCI device type.
-        @param driver: Kernel module for the PCI assignable device.
-        @param driver_option: Module option to specify the maximum number of
-                VFs (eg 'max_vfs=7')
-        @param names: Network interfaces corresponding to physical NIC
-                cards, e.g. 'eth1 eth2 ...'
-        @param devices_requested: Number of devices being requested.
-        """
-        self.type = type
-        self.driver = driver
-        self.driver_option = driver_option
-        if names:
-            self.name_list = names.split()
-        if devices_requested:
-            self.devices_requested = int(devices_requested)
-        else:
-            self.devices_requested = None
-
-
-    def _get_pf_pci_id(self, name, search_str):
-        """
-        Get the PF PCI ID according to name.
-
-        @param name: Name of the PCI device.
-        @param search_str: Search string to be used on lspci.
-        """
-        cmd = "ethtool -i %s | awk '/bus-info/ {print $2}'" % name
-        s, pci_id = commands.getstatusoutput(cmd)
-        if not (s or "Cannot get driver information" in pci_id):
-            return pci_id[5:]
-        cmd = "lspci | awk '/%s/ {print $1}'" % search_str
-        pci_ids = [id for id in commands.getoutput(cmd).splitlines()]
-        nic_id = int(re.search('[0-9]+', name).group(0))
-        if (len(pci_ids) - 1) < nic_id:
-            return None
-        return pci_ids[nic_id]
-
-
-    def _release_dev(self, pci_id):
-        """
-        Release a single PCI device.
-
-        @param pci_id: PCI ID of a given PCI device.
-        """
-        base_dir = "/sys/bus/pci"
-        full_id = get_full_pci_id(pci_id)
-        vendor_id = get_vendor_from_pci_id(pci_id)
-        drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
-        if 'pci-stub' in os.readlink(drv_path):
-            cmd = "echo '%s' > %s/new_id" % (vendor_id, drv_path)
-            if os.system(cmd):
-                return False
-
-            stub_path = os.path.join(base_dir, "drivers/pci-stub")
-            cmd = "echo '%s' > %s/unbind" % (full_id, stub_path)
-            if os.system(cmd):
-                return False
-
-            driver = self.dev_drivers[pci_id]
-            cmd = "echo '%s' > %s/bind" % (full_id, driver)
-            if os.system(cmd):
-                return False
-
-        return True
-
-
-    def get_vf_devs(self):
-        """
-        Catch all VFs PCI IDs.
-
-        @return: List with all PCI IDs for the Virtual Functions available
-        """
-        if not self.sr_iov_setup():
-            return []
-
-        cmd = "lspci | awk '/Virtual Function/ {print $1}'"
-        return commands.getoutput(cmd).split()
-
-
-    def get_pf_devs(self):
-        """
-        Catch all PFs PCI IDs.
-
-        @return: List with all PCI IDs for the physical hardware requested
-        """
-        pf_ids = []
-        for name in self.name_list:
-            pf_id = self._get_pf_pci_id(name, "Ethernet")
-            if not pf_id:
-                continue
-            pf_ids.append(pf_id)
-        return pf_ids
-
-
-    def get_devs(self, count):
-        """
-        Check out all devices' PCI IDs according to their name.
-
-        @param count: Number of PCI devices needed for passthrough.
-        @return: A list of all devices' PCI IDs.
-        """
-        if self.type == "vf":
-            vf_ids = self.get_vf_devs()
-        elif self.type == "pf":
-            vf_ids = self.get_pf_devs()
-        elif self.type == "mixed":
-            vf_ids = self.get_vf_devs()
-            vf_ids.extend(self.get_pf_devs())
-        return vf_ids[0:count]
-
-
-    def get_vfs_count(self):
-        """
-        Get VFs count number according to lspci.
-        """
-        # FIXME: Need to think out a method of identify which
-        # 'virtual function' belongs to which physical card considering
-        # that if the host has more than one 82576 card. PCI_ID?
-        cmd = "lspci | grep 'Virtual Function' | wc -l"
-        return int(commands.getoutput(cmd))
-
-
-    def check_vfs_count(self):
-        """
-        Check the VFs count against the parameter driver_option.
-        """
-        # Network card 82576 has two network interfaces and each can be
-        # virtualized up to 7 virtual functions, therefore we multiply
-        # two for the value of driver_option 'max_vfs'.
-        expected_count = int((re.findall("(\d)", self.driver_option)[0])) * 2
-        return (self.get_vfs_count() == expected_count)
-
-
-    def is_binded_to_stub(self, full_id):
-        """
-        Verify whether the device with full_id is already bound to pci-stub.
-
-        @param full_id: Full ID for the given PCI device
-        """
-        base_dir = "/sys/bus/pci"
-        stub_path = os.path.join(base_dir, "drivers/pci-stub")
-        if os.path.exists(os.path.join(stub_path, full_id)):
-            return True
-        return False
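The sysfs check above reduces to a path-existence test; it can be exercised against a fake sysfs tree rather than the real /sys:

```python
import os
import tempfile

def is_bound_to_stub(full_id, base_dir="/sys/bus/pci"):
    # A device is claimed by pci-stub when its full PCI ID appears
    # under the driver's sysfs directory, as in the method above.
    stub_path = os.path.join(base_dir, "drivers/pci-stub")
    return os.path.exists(os.path.join(stub_path, full_id))

# Build a fake sysfs tree with one "bound" device for illustration.
fake_sys = tempfile.mkdtemp()
os.makedirs(os.path.join(fake_sys, "drivers/pci-stub/0000:01:00.0"))
```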
-
-
-    def sr_iov_setup(self):
-        """
-        Ensure the PCI device is working in sr_iov mode.
-
-        Check if the PCI hardware device driver is loaded with the appropriate
-        parameters (number of VFs), and if it's not, perform setup.
-
-        @return: True if the setup was completed successfully, False otherwise.
-        """
-        re_probe = False
-        s, o = commands.getstatusoutput('lsmod | grep %s' % self.driver)
-        if s:
-            re_probe = True
-        elif not self.check_vfs_count():
-            os.system("modprobe -r %s" % self.driver)
-            re_probe = True
-        else:
-            return True
-
-        # Re-probe driver with proper number of VFs
-        if re_probe:
-            cmd = "modprobe %s %s" % (self.driver, self.driver_option)
-            logging.info("Loading the driver '%s' with option '%s'",
-                         self.driver, self.driver_option)
-            s, o = commands.getstatusoutput(cmd)
-            if s:
-                return False
-            return True
-
-
-    def request_devs(self):
-        """
-        Implement setup process: unbind the PCI device and then bind it
-        to the pci-stub driver.
-
-        @return: a list of successfully requested devices' PCI IDs.
-        """
-        base_dir = "/sys/bus/pci"
-        stub_path = os.path.join(base_dir, "drivers/pci-stub")
-
-        self.pci_ids = self.get_devs(self.devices_requested)
-        logging.debug("The following pci_ids were found: %s", self.pci_ids)
-        requested_pci_ids = []
-        self.dev_drivers = {}
-
-        # Setup all devices specified for assignment to guest
-        for pci_id in self.pci_ids:
-            full_id = get_full_pci_id(pci_id)
-            if not full_id:
-                continue
-            drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
-            dev_prev_driver = os.path.realpath(os.path.join(drv_path,
-                                               os.readlink(drv_path)))
-            self.dev_drivers[pci_id] = dev_prev_driver
-
-            # Check whether the device driver has already been bound to stub
-            if not self.is_binded_to_stub(full_id):
-                logging.debug("Binding device %s to stub", full_id)
-                vendor_id = get_vendor_from_pci_id(pci_id)
-                stub_new_id = os.path.join(stub_path, 'new_id')
-                unbind_dev = os.path.join(drv_path, 'unbind')
-                stub_bind = os.path.join(stub_path, 'bind')
-
-                info_write_to_files = [(vendor_id, stub_new_id),
-                                       (full_id, unbind_dev),
-                                       (full_id, stub_bind)]
-
-                for content, file in info_write_to_files:
-                    try:
-                        utils.open_write_close(file, content)
-                    except IOError:
-                        logging.debug("Failed to write %s to file %s", content,
-                                      file)
-                        continue
-
-                if not self.is_binded_to_stub(full_id):
-                    logging.error("Binding device %s to stub failed", pci_id)
-                    continue
-            else:
-                logging.debug("Device %s already bound to stub", pci_id)
-            requested_pci_ids.append(pci_id)
-        self.pci_ids = requested_pci_ids
-        return self.pci_ids
-
-
-    def release_devs(self):
-        """
-        Release all PCI devices currently assigned to VMs back to the
-        virtualization host.
-        """
-        try:
-            for pci_id in self.dev_drivers:
-                if not self._release_dev(pci_id):
-                    logging.error("Failed to release device %s to host", pci_id)
-                else:
-                    logging.info("Released device %s successfully", pci_id)
-        except:
-            return
-
-
-class KojiDownloader(object):
-    """
-    Establish a connection with the build system, either koji or brew.
-
-    This class provides convenience methods to retrieve packages hosted on
-    the build system.
-    """
-    def __init__(self, cmd):
-        """
-        Verifies whether the system has koji or brew installed, then loads
-        the configuration file that will be used to download the files.
-
-        @param cmd: Command name, either 'brew' or 'koji'. It is important
-                to figure out the appropriate configuration used by the
-                downloader.
-        """
-        if not KOJI_INSTALLED:
-            raise ValueError('No koji/brew installed on the machine')
-
-        if os.path.isfile(cmd):
-            koji_cmd = cmd
-        else:
-            koji_cmd = os_dep.command(cmd)
-
-        logging.debug("Found %s as the buildsystem interface", koji_cmd)
-
-        config_map = {'/usr/bin/koji': '/etc/koji.conf',
-                      '/usr/bin/brew': '/etc/brewkoji.conf'}
-
-        try:
-            config_file = config_map[koji_cmd]
-        except KeyError:
-            raise ValueError('Could not find config file for %s' % koji_cmd)
-
-        base_name = os.path.basename(koji_cmd)
-        if os.access(config_file, os.F_OK):
-            f = open(config_file)
-            config = ConfigParser.ConfigParser()
-            config.readfp(f)
-            f.close()
-        else:
-            raise IOError('Configuration file %s missing or with wrong '
-                          'permissions' % config_file)
-
-        if config.has_section(base_name):
-            self.koji_options = {}
-            session_options = {}
-            server = None
-            for name, value in config.items(base_name):
-                if name in ('user', 'password', 'debug_xmlrpc', 'debug'):
-                    session_options[name] = value
-                self.koji_options[name] = value
-            self.session = koji.ClientSession(self.koji_options['server'],
-                                              session_options)
-        else:
-            raise ValueError('Koji config file %s does not have a %s '
-                             'session' % (config_file, base_name))
-
-
-    def get(self, src_package, dst_dir, rfilter=None, tag=None, build=None,
-            arch=None):
-        """
-        Download a list of packages from the build system.
-
-        This will download all packages originating from [src_package]
-        with given [tag] or [build] for the architecture reported by the
-        machine.
-
-        @param src_package: Source package name.
-        @param dst_dir: Destination directory for the downloaded packages.
-        @param rfilter: Regexp filter, only download the packages that match
-                that particular filter.
-        @param tag: Build system tag.
-        @param build: Build system ID.
-        @param arch: Package arch. Useful when you want to download noarch
-                packages.
-
-        @return: List of paths with the downloaded rpm packages.
-        """
-        if build and build.isdigit():
-            build = int(build)
-
-        if tag and build:
-            logging.info("Both tag and build parameters provided, ignoring tag "
-                         "parameter...")
-
-        if not tag and not build:
-            raise ValueError("Koji install selected but neither koji_tag "
-                             "nor koji_build parameters provided. Please "
-                             "provide an appropriate tag or build name.")
-
-        if not build:
-            builds = self.session.listTagged(tag, latest=True, inherit=True,
-                                             package=src_package)
-            if not builds:
-                raise ValueError("Tag %s has no builds of %s" % (tag,
-                                                                 src_package))
-            info = builds[0]
-        else:
-            info = self.session.getBuild(build)
-
-        if info is None:
-            raise ValueError('No such brew/koji build: %s' % build)
-
-        if arch is None:
-            arch = utils.get_arch()
-
-        rpms = self.session.listRPMs(buildID=info['id'],
-                                     arches=arch)
-        if not rpms:
-            raise ValueError("No %s packages available for %s" %
-                             (arch, koji.buildLabel(info)))
-
-        rpm_paths = []
-        for rpm in rpms:
-            rpm_name = koji.pathinfo.rpm(rpm)
-            url = ("%s/%s/%s/%s/%s" % (self.koji_options['pkgurl'],
-                                       info['package_name'],
-                                       info['version'], info['release'],
-                                       rpm_name))
-            if rfilter:
-                filter_regexp = re.compile(rfilter, re.IGNORECASE)
-                if filter_regexp.match(os.path.basename(rpm_name)):
-                    download = True
-                else:
-                    download = False
-            else:
-                download = True
-
-            if download:
-                r = utils.get_file(url,
-                                   os.path.join(dst_dir, os.path.basename(url)))
-                rpm_paths.append(r)
-
-        return rpm_paths
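The rfilter logic inside get() can be factored into a small predicate; note that re.match anchors at the start of the basename, so the filter is a prefix-style regexp (the rpm names below are illustrative):

```python
import os.path
import re

def should_download(rpm_name, rfilter=None):
    # No filter means download everything; otherwise only basenames
    # matching the (case-insensitive) regexp qualify.
    if not rfilter:
        return True
    filter_regexp = re.compile(rfilter, re.IGNORECASE)
    return bool(filter_regexp.match(os.path.basename(rpm_name)))
```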
-
-
-def umount(src, mount_point, type):
-    """
-    Unmount src from mount_point.
-
-    @param src: Mount source.
-    @param mount_point: Mount point.
-    @param type: File system type.
-    """
-
-    mount_string = "%s %s %s" % (src, mount_point, type)
-    if mount_string in file("/etc/mtab").read():
-        umount_cmd = "umount %s" % mount_point
-        try:
-            utils.system(umount_cmd)
-            return True
-        except error.CmdError:
-            return False
-    else:
-        logging.debug("%s is not mounted under %s", src, mount_point)
-        return True
-
-
-def mount(src, mount_point, type, perm="rw"):
-    """
-    Mount the src into mount_point of the host.
-
-    @param src: Mount source.
-    @param mount_point: Mount point.
-    @param type: File system type.
-    @param perm: Mount permission.
-    """
-    umount(src, mount_point, type)
-    mount_string = "%s %s %s %s" % (src, mount_point, type, perm)
-
-    if mount_string in file("/etc/mtab").read():
-        logging.debug("%s is already mounted in %s with %s",
-                      src, mount_point, perm)
-        return True
-
-    mount_cmd = "mount -t %s %s %s -o %s" % (type, src, mount_point, perm)
-    try:
-        utils.system(mount_cmd)
-    except error.CmdError:
-        return False
-
-    logging.debug("Verify the mount through /etc/mtab")
-    if mount_string in file("/etc/mtab").read():
-        logging.debug("%s is successfully mounted", src)
-        return True
-    else:
-        logging.error("Can't find mounted NFS share - /etc/mtab contents \n%s",
-                      file("/etc/mtab").read())
-        return False
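Both mount() and umount() decide whether a mount exists by a substring match against /etc/mtab; that check can be isolated and exercised on supplied text instead of the real file (the NFS export below is hypothetical):

```python
def is_mounted(src, mount_point, fstype, mtab_contents):
    # Mirrors the three-field membership test used by umount() above;
    # mount() additionally appends the permission field.
    mount_string = "%s %s %s" % (src, mount_point, fstype)
    return mount_string in mtab_contents

# Hypothetical mtab line for an NFS export.
mtab = "192.168.0.1:/export /mnt/nfs nfs rw,addr=192.168.0.1 0 0\n"
```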
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
deleted file mode 100755
index 41f7491..0000000
--- a/client/tests/kvm/kvm_vm.py
+++ /dev/null
@@ -1,1777 +0,0 @@
-#!/usr/bin/python
-"""
-Utility classes and functions to handle Virtual Machine creation using qemu.
-
-@copyright: 2008-2009 Red Hat Inc.
-"""
-
-import time, os, logging, fcntl, re, commands, glob
-import kvm_utils, kvm_subprocess, kvm_monitor
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-
-
-class VMError(Exception):
-    pass
-
-
-class VMCreateError(VMError):
-    def __init__(self, cmd, status, output):
-        VMError.__init__(self, cmd, status, output)
-        self.cmd = cmd
-        self.status = status
-        self.output = output
-
-    def __str__(self):
-        return ("VM creation command failed:    %r    (status: %s,    "
-                "output: %r)" % (self.cmd, self.status, self.output))
-
-
-class VMHashMismatchError(VMError):
-    def __init__(self, actual, expected):
-        VMError.__init__(self, actual, expected)
-        self.actual_hash = actual
-        self.expected_hash = expected
-
-    def __str__(self):
-        return ("CD image hash (%s) differs from expected one (%s)" %
-                (self.actual_hash, self.expected_hash))
-
-
-class VMImageMissingError(VMError):
-    def __init__(self, filename):
-        VMError.__init__(self, filename)
-        self.filename = filename
-
-    def __str__(self):
-        return "CD image file not found: %r" % self.filename
-
-
-class VMImageCheckError(VMError):
-    def __init__(self, filename):
-        VMError.__init__(self, filename)
-        self.filename = filename
-
-    def __str__(self):
-        return "Errors found on image: %r" % self.filename
-
-
-class VMBadPATypeError(VMError):
-    def __init__(self, pa_type):
-        VMError.__init__(self, pa_type)
-        self.pa_type = pa_type
-
-    def __str__(self):
-        return "Unsupported PCI assignable type: %r" % self.pa_type
-
-
-class VMPAError(VMError):
-    def __init__(self, pa_type):
-        VMError.__init__(self, pa_type)
-        self.pa_type = pa_type
-
-    def __str__(self):
-        return ("No PCI assignable devices could be assigned "
-                "(pci_assignable=%r)" % self.pa_type)
-
-
-class VMPostCreateError(VMError):
-    def __init__(self, cmd, output):
-        VMError.__init__(self, cmd, output)
-        self.cmd = cmd
-        self.output = output
-
-
-class VMHugePageError(VMPostCreateError):
-    def __str__(self):
-        return ("Cannot allocate hugepage memory    (command: %r,    "
-                "output: %r)" % (self.cmd, self.output))
-
-
-class VMKVMInitError(VMPostCreateError):
-    def __str__(self):
-        return ("Cannot initialize KVM    (command: %r,    output: %r)" %
-                (self.cmd, self.output))
-
-
-class VMDeadError(VMError):
-    def __init__(self, status, output):
-        VMError.__init__(self, status, output)
-        self.status = status
-        self.output = output
-
-    def __str__(self):
-        return ("VM process is dead    (status: %s,    output: %r)" %
-                (self.status, self.output))
-
-
-class VMAddressError(VMError):
-    pass
-
-
-class VMPortNotRedirectedError(VMAddressError):
-    def __init__(self, port):
-        VMAddressError.__init__(self, port)
-        self.port = port
-
-    def __str__(self):
-        return "Port not redirected: %s" % self.port
-
-
-class VMAddressVerificationError(VMAddressError):
-    def __init__(self, mac, ip):
-        VMAddressError.__init__(self, mac, ip)
-        self.mac = mac
-        self.ip = ip
-
-    def __str__(self):
-        return ("Cannot verify MAC-IP address mapping using arping: "
-                "%s ---> %s" % (self.mac, self.ip))
-
-
-class VMMACAddressMissingError(VMAddressError):
-    def __init__(self, nic_index):
-        VMAddressError.__init__(self, nic_index)
-        self.nic_index = nic_index
-
-    def __str__(self):
-        return "No MAC address defined for NIC #%s" % self.nic_index
-
-
-class VMIPAddressMissingError(VMAddressError):
-    def __init__(self, mac):
-        VMAddressError.__init__(self, mac)
-        self.mac = mac
-
-    def __str__(self):
-        return "Cannot find IP address for MAC address %s" % self.mac
-
-
-class VMMigrateError(VMError):
-    pass
-
-
-class VMMigrateTimeoutError(VMMigrateError):
-    pass
-
-
-class VMMigrateCancelError(VMMigrateError):
-    pass
-
-
-class VMMigrateFailedError(VMMigrateError):
-    pass
-
-
-class VMMigrateStateMismatchError(VMMigrateError):
-    def __init__(self, src_hash, dst_hash):
-        VMMigrateError.__init__(self, src_hash, dst_hash)
-        self.src_hash = src_hash
-        self.dst_hash = dst_hash
-
-    def __str__(self):
-        return ("Mismatch of VM state before and after migration (%s != %s)" %
-                (self.src_hash, self.dst_hash))
-
-
-class VMRebootError(VMError):
-    pass
-
-
-def get_image_filename(params, root_dir):
-    """
-    Generate an image path from params and root_dir.
-
-    @param params: Dictionary containing the test parameters.
-    @param root_dir: Base directory for relative filenames.
-
-    @note: params should contain:
-           image_name -- the name of the image file, without extension
-           image_format -- the format of the image (qcow2, raw etc)
-    """
-    image_name = params.get("image_name", "image")
-    image_format = params.get("image_format", "qcow2")
-    if params.get("image_raw_device") == "yes":
-        return image_name
-    image_filename = "%s.%s" % (image_name, image_format)
-    image_filename = kvm_utils.get_path(root_dir, image_filename)
-    return image_filename
-
-
-def create_image(params, root_dir):
-    """
-    Create an image using qemu-img.
-
-    @param params: Dictionary containing the test parameters.
-    @param root_dir: Base directory for relative filenames.
-
-    @note: params should contain:
-           image_name -- the name of the image file, without extension
-           image_format -- the format of the image (qcow2, raw etc)
-           image_size -- the requested size of the image (a string
-           qemu-img can understand, such as '10G')
-    """
-    qemu_img_cmd = kvm_utils.get_path(root_dir, params.get("qemu_img_binary",
-                                                           "qemu-img"))
-    qemu_img_cmd += " create"
-
-    format = params.get("image_format", "qcow2")
-    qemu_img_cmd += " -f %s" % format
-
-    image_filename = get_image_filename(params, root_dir)
-    qemu_img_cmd += " %s" % image_filename
-
-    size = params.get("image_size", "10G")
-    qemu_img_cmd += " %s" % size
-
-    utils.system(qemu_img_cmd)
-    logging.info("Image created in %r", image_filename)
-    return image_filename
-
-
-def remove_image(params, root_dir):
-    """
-    Remove an image file.
-
-    @param params: Dictionary containing the test parameters.
-    @param root_dir: Base directory for relative filenames.
-
-    @note: params should contain:
-           image_name -- the name of the image file, without extension
-           image_format -- the format of the image (qcow2, raw etc)
-    """
-    image_filename = get_image_filename(params, root_dir)
-    logging.debug("Removing image file %s...", image_filename)
-    if os.path.exists(image_filename):
-        os.unlink(image_filename)
-    else:
-        logging.debug("Image file %s not found", image_filename)
-
-
-def check_image(params, root_dir):
-    """
-    Check an image using qemu-img.
-
-    @param params: Dictionary containing the test parameters.
-    @param root_dir: Base directory for relative filenames.
-
-    @note: params should contain:
-           image_name -- the name of the image file, without extension
-           image_format -- the format of the image (qcow2, raw etc)
-
-    @raise VMImageCheckError: In case qemu-img check fails on the image.
-    """
-    image_filename = get_image_filename(params, root_dir)
-    logging.debug("Checking image file %s...", image_filename)
-    qemu_img_cmd = kvm_utils.get_path(root_dir,
-                                      params.get("qemu_img_binary", "qemu-img"))
-    image_is_qcow2 = params.get("image_format") == 'qcow2'
-    if os.path.exists(image_filename) and image_is_qcow2:
-        # Verifying if qemu-img supports 'check'
-        q_result = utils.run(qemu_img_cmd, ignore_status=True)
-        q_output = q_result.stdout
-        check_img = True
-        if "check" not in q_output:
-            logging.error("qemu-img does not support 'check', "
-                          "skipping check...")
-            check_img = False
-        if "info" not in q_output:
-            logging.error("qemu-img does not support 'info', "
-                          "skipping check...")
-            check_img = False
-        if check_img:
-            try:
-                utils.system("%s info %s" % (qemu_img_cmd, image_filename))
-            except error.CmdError:
-                logging.error("Error getting info from image %s",
-                              image_filename)
-            try:
-                utils.system("%s check %s" % (qemu_img_cmd, image_filename))
-            except error.CmdError:
-                raise VMImageCheckError(image_filename)
-
-    else:
-        if not os.path.exists(image_filename):
-            logging.debug("Image file %s not found, skipping check...",
-                          image_filename)
-        elif not image_is_qcow2:
-            logging.debug("Image file %s not qcow2, skipping check...",
-                          image_filename)
-
-
-class VM:
-    """
-    This class handles all basic VM operations.
-    """
-
-    def __init__(self, name, params, root_dir, address_cache, state=None):
-        """
-        Initialize the object and set a few attributes.
-
-        @param name: The name of the object
-        @param params: A dict containing VM params
-                (see method make_qemu_command for a full description)
-        @param root_dir: Base directory for relative filenames
-        @param address_cache: A dict that maps MAC addresses to IP addresses
-        @param state: If provided, use this as self.__dict__
-        """
-        if state:
-            self.__dict__ = state
-        else:
-            self.process = None
-            self.serial_console = None
-            self.redirs = {}
-            self.vnc_port = 5900
-            self.monitors = []
-            self.pci_assignable = None
-            self.netdev_id = []
-            self.device_id = []
-            self.uuid = None
-
-            # Find a unique identifier for this VM
-            while True:
-                self.instance = (time.strftime("%Y%m%d-%H%M%S-") +
-                                 kvm_utils.generate_random_string(4))
-                if not glob.glob("/tmp/*%s" % self.instance):
-                    break
-
-        self.name = name
-        self.params = params
-        self.root_dir = root_dir
-        self.address_cache = address_cache
-
-
-    def clone(self, name=None, params=None, root_dir=None, address_cache=None,
-              copy_state=False):
-        """
-        Return a clone of the VM object with optionally modified parameters.
-        The clone is initially not alive and needs to be started using create().
-        Any parameters not passed to this function are copied from the source
-        VM.
-
-        @param name: Optional new VM name
-        @param params: Optional new VM creation parameters
-        @param root_dir: Optional new base directory for relative filenames
-        @param address_cache: A dict that maps MAC addresses to IP addresses
-        @param copy_state: If True, copy the original VM's state to the clone.
-                Mainly useful for make_qemu_command().
-        """
-        if name is None:
-            name = self.name
-        if params is None:
-            params = self.params.copy()
-        if root_dir is None:
-            root_dir = self.root_dir
-        if address_cache is None:
-            address_cache = self.address_cache
-        if copy_state:
-            state = self.__dict__.copy()
-        else:
-            state = None
-        return VM(name, params, root_dir, address_cache, state)
-
-
-    def make_qemu_command(self, name=None, params=None, root_dir=None):
-        """
-        Generate a qemu command line. All parameters are optional. If a
-        parameter is not supplied, the corresponding value stored in the
-        class attributes is used.
-
-        @param name: The name of the object
-        @param params: A dict containing VM params
-        @param root_dir: Base directory for relative filenames
-
-        @note: The params dict should contain:
-               mem -- memory size in MBs
-               cdrom -- ISO filename to use with the qemu -cdrom parameter
-               extra_params -- a string to append to the qemu command
-               shell_port -- port of the remote shell daemon on the guest
-               (SSH, Telnet or the home-made Remote Shell Server)
-               shell_client -- client program to use for connecting to the
-               remote shell daemon on the guest (ssh, telnet or nc)
-               x11_display -- if specified, the DISPLAY environment variable
-               will be set to this value for the qemu process (useful for
-               SDL rendering)
-               images -- a list of image object names, separated by spaces
-               nics -- a list of NIC object names, separated by spaces
-
-               For each image in images:
-               drive_format -- string to pass as 'if' parameter for this
-               image (e.g. ide, scsi)
-               image_snapshot -- if yes, pass 'snapshot=on' to qemu for
-               this image
-               image_boot -- if yes, pass 'boot=on' to qemu for this image
-               In addition, all parameters required by get_image_filename.
-
-               For each NIC in nics:
-               nic_model -- string to pass as 'model' parameter for this
-               NIC (e.g. e1000)
-        """
-        # Helper function for command line option wrappers
-        def has_option(help, option):
-            return bool(re.search(r"^-%s(\s|$)" % option, help, re.MULTILINE))
-
-        # Wrappers for all supported qemu command line parameters.
-        # This is meant to allow support for multiple qemu versions.
-        # Each of these functions receives the output of 'qemu -help' as a
-        # parameter, and should add the requested command line option
-        # accordingly.
-
-        def add_name(help, name):
-            return " -name '%s'" % name
-
-        def add_human_monitor(help, filename):
-            return " -monitor unix:'%s',server,nowait" % filename
-
-        def add_qmp_monitor(help, filename):
-            return " -qmp unix:'%s',server,nowait" % filename
-
-        def add_serial(help, filename):
-            return " -serial unix:'%s',server,nowait" % filename
-
-        def add_mem(help, mem):
-            return " -m %s" % mem
-
-        def add_smp(help, smp):
-            return " -smp %s" % smp
-
-        def add_cdrom(help, filename, index=None):
-            if has_option(help, "drive"):
-                cmd = " -drive file='%s',media=cdrom" % filename
-                if index is not None: cmd += ",index=%s" % index
-                return cmd
-            else:
-                return " -cdrom '%s'" % filename
-
-        def add_drive(help, filename, index=None, format=None, cache=None,
-                      werror=None, serial=None, snapshot=False, boot=False):
-            cmd = " -drive file='%s'" % filename
-            if index is not None:
-                cmd += ",index=%s" % index
-            if format:
-                cmd += ",if=%s" % format
-            if cache:
-                cmd += ",cache=%s" % cache
-            if werror:
-                cmd += ",werror=%s" % werror
-            if serial:
-                cmd += ",serial='%s'" % serial
-            if snapshot:
-                cmd += ",snapshot=on"
-            if boot:
-                cmd += ",boot=on"
-            return cmd
-
-        def add_nic(help, vlan, model=None, mac=None, device_id=None,
-                    netdev_id=None, nic_extra_params=None):
-            if has_option(help, "netdev"):
-                netdev_vlan_str = ",netdev=%s" % netdev_id
-            else:
-                netdev_vlan_str = ",vlan=%d" % vlan
-            if has_option(help, "device"):
-                if not model:
-                    model = "rtl8139"
-                elif model == "virtio":
-                    model = "virtio-net-pci"
-                cmd = " -device %s" % model + netdev_vlan_str
-                if mac:
-                    cmd += ",mac='%s'" % mac
-                if nic_extra_params:
-                    cmd += ",%s" % nic_extra_params
-            else:
-                cmd = " -net nic" + netdev_vlan_str
-                if model:
-                    cmd += ",model=%s" % model
-                if mac:
-                    cmd += ",macaddr='%s'" % mac
-            if device_id:
-                cmd += ",id='%s'" % device_id
-            return cmd
-
-        def add_net(help, vlan, mode, ifname=None, script=None,
-                    downscript=None, tftp=None, bootfile=None, hostfwd=(),
-                    netdev_id=None, netdev_extra_params=None):
-            if has_option(help, "netdev"):
-                cmd = " -netdev %s,id=%s" % (mode, netdev_id)
-                if netdev_extra_params:
-                    cmd += ",%s" % netdev_extra_params
-            else:
-                cmd = " -net %s,vlan=%d" % (mode, vlan)
-            if mode == "tap":
-                if ifname: cmd += ",ifname='%s'" % ifname
-                if script: cmd += ",script='%s'" % script
-                cmd += ",downscript='%s'" % (downscript or "no")
-            elif mode == "user":
-                if tftp and "[,tftp=" in help:
-                    cmd += ",tftp='%s'" % tftp
-                if bootfile and "[,bootfile=" in help:
-                    cmd += ",bootfile='%s'" % bootfile
-                if "[,hostfwd=" in help:
-                    for host_port, guest_port in hostfwd:
-                        cmd += ",hostfwd=tcp::%s-:%s" % (host_port, guest_port)
-            return cmd
-
-        def add_floppy(help, filename):
-            return " -fda '%s'" % filename
-
-        def add_tftp(help, filename):
-            # If the new syntax is supported, don't add -tftp
-            if "[,tftp=" in help:
-                return ""
-            else:
-                return " -tftp '%s'" % filename
-
-        def add_bootp(help, filename):
-            # If the new syntax is supported, don't add -bootp
-            if "[,bootfile=" in help:
-                return ""
-            else:
-                return " -bootp '%s'" % filename
-
-        def add_tcp_redir(help, host_port, guest_port):
-            # If the new syntax is supported, don't add -redir
-            if "[,hostfwd=" in help:
-                return ""
-            else:
-                return " -redir tcp:%s::%s" % (host_port, guest_port)
-
-        def add_vnc(help, vnc_port):
-            return " -vnc :%d" % (vnc_port - 5900)
-
-        def add_sdl(help):
-            if has_option(help, "sdl"):
-                return " -sdl"
-            else:
-                return ""
-
-        def add_nographic(help):
-            return " -nographic"
-
-        def add_uuid(help, uuid):
-            return " -uuid '%s'" % uuid
-
-        def add_pcidevice(help, host):
-            return " -pcidevice host='%s'" % host
-
-        def add_kernel(help, filename):
-            return " -kernel '%s'" % filename
-
-        def add_initrd(help, filename):
-            return " -initrd '%s'" % filename
-
-        def add_kernel_cmdline(help, cmdline):
-            return " -append %s" % cmdline
-
-        def add_testdev(help, filename):
-            return (" -chardev file,id=testlog,path=%s"
-                    " -device testdev,chardev=testlog" % filename)
-
-        def add_no_hpet(help):
-            if has_option(help, "no-hpet"):
-                return " -no-hpet"
-            else:
-                return ""
-
-        # End of command line option wrappers
-
-        if name is None:
-            name = self.name
-        if params is None:
-            params = self.params
-        if root_dir is None:
-            root_dir = self.root_dir
-
-        # Clone this VM using the new params
-        vm = self.clone(name, params, root_dir, copy_state=True)
-
-        qemu_binary = kvm_utils.get_path(root_dir, params.get("qemu_binary",
-                                                              "qemu"))
-        # Get the output of 'qemu -help' (log a message in case this call never
-        # returns or causes some other kind of trouble)
-        logging.debug("Getting output of 'qemu -help'")
-        help = commands.getoutput("%s -help" % qemu_binary)
-
-        # Start constructing the qemu command
-        qemu_cmd = ""
-        # Set the X11 display parameter if requested
-        if params.get("x11_display"):
-            qemu_cmd += "DISPLAY=%s " % params.get("x11_display")
-        # Add the qemu binary
-        qemu_cmd += qemu_binary
-        # Add the VM's name
-        qemu_cmd += add_name(help, name)
-        # Add monitors
-        for monitor_name in params.objects("monitors"):
-            monitor_params = params.object_params(monitor_name)
-            monitor_filename = vm.get_monitor_filename(monitor_name)
-            if monitor_params.get("monitor_type") == "qmp":
-                qemu_cmd += add_qmp_monitor(help, monitor_filename)
-            else:
-                qemu_cmd += add_human_monitor(help, monitor_filename)
-
-        # Add serial console redirection
-        qemu_cmd += add_serial(help, vm.get_serial_console_filename())
-
-        for image_name in params.objects("images"):
-            image_params = params.object_params(image_name)
-            if image_params.get("boot_drive") == "no":
-                continue
-            qemu_cmd += add_drive(help,
-                                  get_image_filename(image_params, root_dir),
-                                  image_params.get("drive_index"),
-                                  image_params.get("drive_format"),
-                                  image_params.get("drive_cache"),
-                                  image_params.get("drive_werror"),
-                                  image_params.get("drive_serial"),
-                                  image_params.get("image_snapshot") == "yes",
-                                  image_params.get("image_boot") == "yes")
-
-        redirs = []
-        for redir_name in params.objects("redirs"):
-            redir_params = params.object_params(redir_name)
-            guest_port = int(redir_params.get("guest_port"))
-            host_port = vm.redirs.get(guest_port)
-            redirs += [(host_port, guest_port)]
-
-        vlan = 0
-        for nic_name in params.objects("nics"):
-            nic_params = params.object_params(nic_name)
-            try:
-                netdev_id = vm.netdev_id[vlan]
-                device_id = vm.device_id[vlan]
-            except IndexError:
-                netdev_id = None
-                device_id = None
-            # Handle the '-net nic' part
-            try:
-                mac = vm.get_mac_address(vlan)
-            except VMAddressError:
-                mac = None
-            qemu_cmd += add_nic(help, vlan, nic_params.get("nic_model"), mac,
-                                device_id, netdev_id,
-                                nic_params.get("nic_extra_params"))
-            # Handle the '-net tap' or '-net user' or '-netdev' part
-            script = nic_params.get("nic_script")
-            downscript = nic_params.get("nic_downscript")
-            tftp = nic_params.get("tftp")
-            if script:
-                script = kvm_utils.get_path(root_dir, script)
-            if downscript:
-                downscript = kvm_utils.get_path(root_dir, downscript)
-            if tftp:
-                tftp = kvm_utils.get_path(root_dir, tftp)
-            qemu_cmd += add_net(help, vlan, nic_params.get("nic_mode", "user"),
-                                vm.get_ifname(vlan),
-                                script, downscript, tftp,
-                                nic_params.get("bootp"), redirs, netdev_id,
-                                nic_params.get("netdev_extra_params"))
-            # Proceed to next NIC
-            vlan += 1
-
-        mem = params.get("mem")
-        if mem:
-            qemu_cmd += add_mem(help, mem)
-
-        smp = params.get("smp")
-        if smp:
-            qemu_cmd += add_smp(help, smp)
-
-        for cdrom in params.objects("cdroms"):
-            cdrom_params = params.object_params(cdrom)
-            iso = cdrom_params.get("cdrom")
-            if iso:
-                qemu_cmd += add_cdrom(help, kvm_utils.get_path(root_dir, iso),
-                                      cdrom_params.get("drive_index"))
-
-        # We may want to add {floppy_opts} parameter for -fda
-        # {fat:floppy:}/path/. However vvfat is not usually recommended.
-        floppy = params.get("floppy")
-        if floppy:
-            floppy = kvm_utils.get_path(root_dir, floppy)
-            qemu_cmd += add_floppy(help, floppy)
-
-        tftp = params.get("tftp")
-        if tftp:
-            tftp = kvm_utils.get_path(root_dir, tftp)
-            qemu_cmd += add_tftp(help, tftp)
-
-        bootp = params.get("bootp")
-        if bootp:
-            qemu_cmd += add_bootp(help, bootp)
-
-        kernel = params.get("kernel")
-        if kernel:
-            kernel = kvm_utils.get_path(root_dir, kernel)
-            qemu_cmd += add_kernel(help, kernel)
-
-        kernel_cmdline = params.get("kernel_cmdline")
-        if kernel_cmdline:
-            qemu_cmd += add_kernel_cmdline(help, kernel_cmdline)
-
-        initrd = params.get("initrd")
-        if initrd:
-            initrd = kvm_utils.get_path(root_dir, initrd)
-            qemu_cmd += add_initrd(help, initrd)
-
-        for host_port, guest_port in redirs:
-            qemu_cmd += add_tcp_redir(help, host_port, guest_port)
-
-        if params.get("display") == "vnc":
-            qemu_cmd += add_vnc(help, vm.vnc_port)
-        elif params.get("display") == "sdl":
-            qemu_cmd += add_sdl(help)
-        elif params.get("display") == "nographic":
-            qemu_cmd += add_nographic(help)
-
-        if params.get("uuid") == "random":
-            qemu_cmd += add_uuid(help, vm.uuid)
-        elif params.get("uuid"):
-            qemu_cmd += add_uuid(help, params.get("uuid"))
-
-        if params.get("testdev") == "yes":
-            qemu_cmd += add_testdev(help, vm.get_testlog_filename())
-
-        if params.get("disable_hpet") == "yes":
-            qemu_cmd += add_no_hpet(help)
-
-        # If the PCI assignment step went OK, add each one of the PCI assigned
-        # devices to the qemu command line.
-        if vm.pci_assignable:
-            for pci_id in vm.pa_pci_ids:
-                qemu_cmd += add_pcidevice(help, pci_id)
-
-        extra_params = params.get("extra_params")
-        if extra_params:
-            qemu_cmd += " %s" % extra_params
-
-        return qemu_cmd
-
-
-    @error.context_aware
-    def create(self, name=None, params=None, root_dir=None, timeout=5.0,
-               migration_mode=None, mac_source=None):
-        """
-        Start the VM by running a qemu command.
-        All parameters are optional. If name, params or root_dir are not
-        supplied, the respective values stored as class attributes are used.
-
-        @param name: The name of the object
-        @param params: A dict containing VM params
-        @param root_dir: Base directory for relative filenames
-        @param migration_mode: If supplied, start VM for incoming migration
-                using this protocol (either 'tcp', 'unix' or 'exec')
-        @param mac_source: A VM object from which to copy MAC addresses. If not
-                specified, new addresses will be generated.
-
-        @raise VMCreateError: If qemu terminates unexpectedly
-        @raise VMKVMInitError: If KVM initialization fails
-        @raise VMHugePageError: If hugepage initialization fails
-        @raise VMImageMissingError: If a CD image is missing
-        @raise VMHashMismatchError: If a CD image hash doesn't match the
-                expected hash
-        @raise VMBadPATypeError: If an unsupported PCI assignment type is
-                requested
-        @raise VMPAError: If no PCI assignable devices could be assigned
-        """
-        error.context("creating '%s'" % self.name)
-        self.destroy(free_mac_addresses=False)
-
-        if name is not None:
-            self.name = name
-        if params is not None:
-            self.params = params
-        if root_dir is not None:
-            self.root_dir = root_dir
-        name = self.name
-        params = self.params
-        root_dir = self.root_dir
-
-        # Verify the md5sum of the ISO images
-        for cdrom in params.objects("cdroms"):
-            cdrom_params = params.object_params(cdrom)
-            iso = cdrom_params.get("cdrom")
-            if iso:
-                iso = kvm_utils.get_path(root_dir, iso)
-                if not os.path.exists(iso):
-                    raise VMImageMissingError(iso)
-                compare = False
-                if cdrom_params.get("md5sum_1m"):
-                    logging.debug("Comparing expected MD5 sum with MD5 sum of "
-                                  "first MB of ISO file...")
-                    actual_hash = utils.hash_file(iso, 1048576, method="md5")
-                    expected_hash = cdrom_params.get("md5sum_1m")
-                    compare = True
-                elif cdrom_params.get("md5sum"):
-                    logging.debug("Comparing expected MD5 sum with MD5 sum of "
-                                  "ISO file...")
-                    actual_hash = utils.hash_file(iso, method="md5")
-                    expected_hash = cdrom_params.get("md5sum")
-                    compare = True
-                elif cdrom_params.get("sha1sum"):
-                    logging.debug("Comparing expected SHA1 sum with SHA1 sum "
-                                  "of ISO file...")
-                    actual_hash = utils.hash_file(iso, method="sha1")
-                    expected_hash = cdrom_params.get("sha1sum")
-                    compare = True
-                if compare:
-                    if actual_hash == expected_hash:
-                        logging.debug("Hashes match")
-                    else:
-                        raise VMHashMismatchError(actual_hash, expected_hash)
-
-        # Make sure the following code is not executed by more than one thread
-        # at the same time
-        lockfile = open("/tmp/kvm-autotest-vm-create.lock", "w+")
-        fcntl.lockf(lockfile, fcntl.LOCK_EX)
-
-        try:
-            # Handle port redirections
-            redir_names = params.objects("redirs")
-            host_ports = kvm_utils.find_free_ports(5000, 6000, len(redir_names))
-            self.redirs = {}
-            for i in range(len(redir_names)):
-                redir_params = params.object_params(redir_names[i])
-                guest_port = int(redir_params.get("guest_port"))
-                self.redirs[guest_port] = host_ports[i]
-
-            # Generate netdev/device IDs for all NICs
-            self.netdev_id = []
-            self.device_id = []
-            for nic in params.objects("nics"):
-                self.netdev_id.append(kvm_utils.generate_random_id())
-                self.device_id.append(kvm_utils.generate_random_id())
-
-            # Find available VNC port, if needed
-            if params.get("display") == "vnc":
-                self.vnc_port = kvm_utils.find_free_port(5900, 6100)
-
-            # Find random UUID if specified 'uuid = random' in config file
-            if params.get("uuid") == "random":
-                f = open("/proc/sys/kernel/random/uuid")
-                self.uuid = f.read().strip()
-                f.close()
-
-            # Generate or copy MAC addresses for all NICs
-            num_nics = len(params.objects("nics"))
-            for vlan in range(num_nics):
-                nic_name = params.objects("nics")[vlan]
-                nic_params = params.object_params(nic_name)
-                mac = (nic_params.get("nic_mac") or
-                       mac_source and mac_source.get_mac_address(vlan))
-                if mac:
-                    kvm_utils.set_mac_address(self.instance, vlan, mac)
-                else:
-                    kvm_utils.generate_mac_address(self.instance, vlan)
-
-            # Assign a PCI assignable device
-            self.pci_assignable = None
-            pa_type = params.get("pci_assignable")
-            if pa_type and pa_type != "no":
-                pa_devices_requested = params.get("devices_requested")
-
-                # Virtual Functions (VF) assignable devices
-                if pa_type == "vf":
-                    self.pci_assignable = kvm_utils.PciAssignable(
-                        type=pa_type,
-                        driver=params.get("driver"),
-                        driver_option=params.get("driver_option"),
-                        devices_requested=pa_devices_requested)
-                # Physical NIC (PF) assignable devices
-                elif pa_type == "pf":
-                    self.pci_assignable = kvm_utils.PciAssignable(
-                        type=pa_type,
-                        names=params.get("device_names"),
-                        devices_requested=pa_devices_requested)
-                # Working with both VF and PF
-                elif pa_type == "mixed":
-                    self.pci_assignable = kvm_utils.PciAssignable(
-                        type=pa_type,
-                        driver=params.get("driver"),
-                        driver_option=params.get("driver_option"),
-                        names=params.get("device_names"),
-                        devices_requested=pa_devices_requested)
-                else:
-                    raise VMBadPATypeError(pa_type)
-
-                self.pa_pci_ids = self.pci_assignable.request_devs()
-
-                if self.pa_pci_ids:
-                    logging.debug("Successfully assigned devices: %s",
-                                  self.pa_pci_ids)
-                else:
-                    raise VMPAError(pa_type)
-
-            # Make qemu command
-            qemu_command = self.make_qemu_command()
-
-            # Add migration parameters if required
-            if migration_mode == "tcp":
-                self.migration_port = kvm_utils.find_free_port(5200, 6000)
-                qemu_command += " -incoming tcp:0:%d" % self.migration_port
-            elif migration_mode == "unix":
-                self.migration_file = "/tmp/migration-unix-%s" % self.instance
-                qemu_command += " -incoming unix:%s" % self.migration_file
-            elif migration_mode == "exec":
-                self.migration_port = kvm_utils.find_free_port(5200, 6000)
-                qemu_command += (' -incoming "exec:nc -l %s"' %
-                                 self.migration_port)
-
-            logging.info("Running qemu command:\n%s", qemu_command)
-            self.process = kvm_subprocess.run_bg(qemu_command, None,
-                                                 logging.info, "(qemu) ")
-
-            # Make sure the process was started successfully
-            if not self.process.is_alive():
-                e = VMCreateError(qemu_command,
-                                  self.process.get_status(),
-                                  self.process.get_output())
-                self.destroy()
-                raise e
-
-            # Establish monitor connections
-            self.monitors = []
-            for monitor_name in params.objects("monitors"):
-                monitor_params = params.object_params(monitor_name)
-                # Wait for monitor connection to succeed
-                end_time = time.time() + timeout
-                while time.time() < end_time:
-                    try:
-                        if monitor_params.get("monitor_type") == "qmp":
-                            # Add a QMP monitor
-                            monitor = kvm_monitor.QMPMonitor(
-                                monitor_name,
-                                self.get_monitor_filename(monitor_name))
-                        else:
-                            # Add a "human" monitor
-                            monitor = kvm_monitor.HumanMonitor(
-                                monitor_name,
-                                self.get_monitor_filename(monitor_name))
-                        monitor.verify_responsive()
-                        break
-                    except kvm_monitor.MonitorError, e:
-                        logging.warn(e)
-                        time.sleep(1)
-                else:
-                    self.destroy()
-                    raise e
-                # Add this monitor to the list
-                self.monitors += [monitor]
-
-            # Get the output so far, to see if we have any problems with
-            # KVM modules or with hugepage setup.
-            output = self.process.get_output()
-
-            if re.search("Could not initialize KVM", output, re.IGNORECASE):
-                e = VMKVMInitError(qemu_command, self.process.get_output())
-                self.destroy()
-                raise e
-
-            if "alloc_mem_area" in output:
-                e = VMHugePageError(qemu_command, self.process.get_output())
-                self.destroy()
-                raise e
-
-            logging.debug("VM appears to be alive with PID %s", self.get_pid())
-
-            # Establish a session with the serial console -- requires a version
-            # of netcat that supports -U
-            self.serial_console = kvm_subprocess.ShellSession(
-                "nc -U %s" % self.get_serial_console_filename(),
-                auto_close=False,
-                output_func=kvm_utils.log_line,
-                output_params=("serial-%s.log" % name,))
-
-        finally:
-            fcntl.lockf(lockfile, fcntl.LOCK_UN)
-            lockfile.close()
-
-
-    def destroy(self, gracefully=True, free_mac_addresses=True):
-        """
-        Destroy the VM.
-
-        If gracefully is True, first attempt to shutdown the VM with a shell
-        command.  Then, attempt to destroy the VM via the monitor with a 'quit'
-        command.  If that fails, send SIGKILL to the qemu process.
-
-        @param gracefully: If True, an attempt will be made to end the VM
-                using a shell command before trying to end the qemu process
-                with a 'quit' or a kill signal.
-        @param free_mac_addresses: If True, the MAC addresses used by the VM
-                will be freed.
-        """
-        try:
-            # Is it already dead?
-            if self.is_dead():
-                return
-
-            logging.debug("Destroying VM with PID %s...", self.get_pid())
-
-            if gracefully and self.params.get("shutdown_command"):
-                # Try to destroy with shell command
-                logging.debug("Trying to shutdown VM with shell command...")
-                try:
-                    session = self.login()
-                except (kvm_utils.LoginError, VMError), e:
-                    logging.debug(e)
-                else:
-                    try:
-                        # Send the shutdown command
-                        session.sendline(self.params.get("shutdown_command"))
-                        logging.debug("Shutdown command sent; waiting for VM "
-                                      "to go down...")
-                        if kvm_utils.wait_for(self.is_dead, 60, 1, 1):
-                            logging.debug("VM is down")
-                            return
-                    finally:
-                        session.close()
-
-            if self.monitor:
-                # Try to destroy with a monitor command
-                logging.debug("Trying to kill VM with monitor command...")
-                try:
-                    self.monitor.quit()
-                except kvm_monitor.MonitorError, e:
-                    logging.warn(e)
-                else:
-                    # Wait for the VM to be really dead
-                    if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
-                        logging.debug("VM is down")
-                        return
-
-            # If the VM isn't dead yet...
-            logging.debug("Cannot quit normally; sending a kill to close the "
-                          "deal...")
-            kvm_utils.kill_process_tree(self.process.get_pid(), 9)
-            # Wait for the VM to be really dead
-            if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
-                logging.debug("VM is down")
-                return
-
-            logging.error("Process %s is a zombie!", self.process.get_pid())
-
-        finally:
-            self.monitors = []
-            if self.pci_assignable:
-                self.pci_assignable.release_devs()
-            if self.process:
-                self.process.close()
-            if self.serial_console:
-                self.serial_console.close()
-            for f in ([self.get_testlog_filename(),
-                       self.get_serial_console_filename()] +
-                      self.get_monitor_filenames()):
-                try:
-                    os.unlink(f)
-                except OSError:
-                    pass
-            if hasattr(self, "migration_file"):
-                try:
-                    os.unlink(self.migration_file)
-                except OSError:
-                    pass
-            if free_mac_addresses:
-                num_nics = len(self.params.objects("nics"))
-                for vlan in range(num_nics):
-                    self.free_mac_address(vlan)
-
-
-    @property
-    def monitor(self):
-        """
-        Return the main monitor object, selected by the parameter main_monitor.
-        If main_monitor isn't defined, return the first monitor.
-        If no monitors exist, or if main_monitor refers to a nonexistent
-        monitor, return None.
-        """
-        for m in self.monitors:
-            if m.name == self.params.get("main_monitor"):
-                return m
-        if self.monitors and not self.params.get("main_monitor"):
-            return self.monitors[0]
-
-
-    def verify_alive(self):
-        """
-        Make sure the VM is alive and that the main monitor is responsive.
-
-        @raise VMDeadError: If the VM is dead
-        @raise: Various monitor exceptions if the monitor is unresponsive
-        """
-        if self.is_dead():
-            raise VMDeadError(self.process.get_status(),
-                              self.process.get_output())
-        if self.monitors:
-            self.monitor.verify_responsive()
-
-
-    def is_alive(self):
-        """
-        Return True if the VM is alive and its monitor is responsive.
-        """
-        return not self.is_dead() and (not self.monitors or
-                                       self.monitor.is_responsive())
-
-
-    def is_dead(self):
-        """
-        Return True if the qemu process is dead.
-        """
-        return not self.process or not self.process.is_alive()
-
-
-    def get_params(self):
-        """
-        Return the VM's params dict. Most modified params take effect only
-        upon VM.create().
-        """
-        return self.params
-
-
-    def get_monitor_filename(self, monitor_name):
-        """
-        Return the filename corresponding to a given monitor name.
-        """
-        return "/tmp/monitor-%s-%s" % (monitor_name, self.instance)
-
-
-    def get_monitor_filenames(self):
-        """
-        Return a list of all monitor filenames (as specified in the VM's
-        params).
-        """
-        return [self.get_monitor_filename(m) for m in
-                self.params.objects("monitors")]
-
-
-    def get_serial_console_filename(self):
-        """
-        Return the serial console filename.
-        """
-        return "/tmp/serial-%s" % self.instance
-
-
-    def get_testlog_filename(self):
-        """
-        Return the testlog filename.
-        """
-        return "/tmp/testlog-%s" % self.instance
-
-
-    def get_address(self, index=0):
-        """
-        Return the address of a NIC of the guest, in host space.
-
-        If port redirection is used, return 'localhost' (the NIC has no IP
-        address of its own).  Otherwise return the NIC's IP address.
-
-        @param index: Index of the NIC whose address is requested.
-        @raise VMMACAddressMissingError: If no MAC address is defined for the
-                requested NIC
-        @raise VMIPAddressMissingError: If no IP address is found for the
-                NIC's MAC address
-        @raise VMAddressVerificationError: If the MAC-IP address mapping cannot
-                be verified (using arping)
-        """
-        nics = self.params.objects("nics")
-        nic_name = nics[index]
-        nic_params = self.params.object_params(nic_name)
-        if nic_params.get("nic_mode") == "tap":
-            mac = self.get_mac_address(index).lower()
-            # Get the IP address from the cache
-            ip = self.address_cache.get(mac)
-            if not ip:
-                raise VMIPAddressMissingError(mac)
-            # Make sure the IP address is assigned to this guest
-            macs = [self.get_mac_address(i) for i in range(len(nics))]
-            if not kvm_utils.verify_ip_address_ownership(ip, macs):
-                raise VMAddressVerificationError(mac, ip)
-            return ip
-        else:
-            return "localhost"
-
-
-    def get_port(self, port, nic_index=0):
-        """
-        Return the port in host space corresponding to port in guest space.
-
-        @param port: Port number in guest space.
-        @param nic_index: Index of the NIC.
-        @return: If port redirection is used, return the host port redirected
-                to guest port 'port'. Otherwise return port unchanged.
-        @raise VMPortNotRedirectedError: If an unredirected port is requested
-                in user mode
-        """
-        nic_name = self.params.objects("nics")[nic_index]
-        nic_params = self.params.object_params(nic_name)
-        if nic_params.get("nic_mode") == "tap":
-            return port
-        else:
-            try:
-                return self.redirs[port]
-            except KeyError:
-                raise VMPortNotRedirectedError(port)
-
-
-    def get_peer(self, netid):
-        """
-        Return the peer of a netdev or network device.
-
-        @param netid: id of netdev or device
-        @return: id of the peer device otherwise None
-        """
-        network_info = self.monitor.info("network")
-        try:
-            return re.findall("%s:.*peer=(.*)" % netid, network_info)[0]
-        except IndexError:
-            return None
-
-
-    def get_ifname(self, nic_index=0):
-        """
-        Return the ifname of a tap device associated with a NIC.
-
-        @param nic_index: Index of the NIC
-        """
-        nics = self.params.objects("nics")
-        nic_name = nics[nic_index]
-        nic_params = self.params.object_params(nic_name)
-        if nic_params.get("nic_ifname"):
-            return nic_params.get("nic_ifname")
-        else:
-            return "t%d-%s" % (nic_index, self.instance[-11:])
-
-
-    def get_mac_address(self, nic_index=0):
-        """
-        Return the MAC address of a NIC.
-
-        @param nic_index: Index of the NIC
-        @raise VMMACAddressMissingError: If no MAC address is defined for the
-                requested NIC
-        """
-        nic_name = self.params.objects("nics")[nic_index]
-        nic_params = self.params.object_params(nic_name)
-        mac = (nic_params.get("nic_mac") or
-               kvm_utils.get_mac_address(self.instance, nic_index))
-        if not mac:
-            raise VMMACAddressMissingError(nic_index)
-        return mac
-
-
-    def free_mac_address(self, nic_index=0):
-        """
-        Free a NIC's MAC address.
-
-        @param nic_index: Index of the NIC
-        """
-        kvm_utils.free_mac_address(self.instance, nic_index)
-
-
-    def get_pid(self):
-        """
-        Return the VM's PID.  If the VM is dead return None.
-
-        @note: This works under the assumption that self.process.get_pid()
-        returns the PID of the parent shell process.
-        """
-        try:
-            children = commands.getoutput("ps --ppid=%d -o pid=" %
-                                          self.process.get_pid()).split()
-            return int(children[0])
-        except (TypeError, IndexError, ValueError):
-            return None
-
-
-    def get_shell_pid(self):
-        """
-        Return the PID of the parent shell process.
-
-        @note: This works under the assumption that self.process.get_pid()
-        returns the PID of the parent shell process.
-        """
-        return self.process.get_pid()
-
-
-    def get_shared_meminfo(self):
-        """
-        Returns the VM's shared memory information.
-
-        @return: Shared memory used by VM (MB)
-        """
-        if self.is_dead():
-            logging.error("Could not get shared memory info from dead VM.")
-            return None
-
-        filename = "/proc/%d/statm" % self.get_pid()
-        shm = int(open(filename).read().split()[2])
-        # statm stores information in pages; translate it to MB
-        return shm * 4.0 / 1024
-
-
-    @error.context_aware
-    def login(self, nic_index=0, timeout=10):
-        """
-        Log into the guest via SSH/Telnet/Netcat.
-        If timeout expires while waiting for output from the guest (e.g. a
-        password prompt or a shell prompt) -- fail.
-
-        @param nic_index: The index of the NIC to connect to.
-        @param timeout: Time (seconds) before giving up logging into the
-                guest.
-        @return: A ShellSession object.
-        """
-        error.context("logging into '%s'" % self.name)
-        username = self.params.get("username", "")
-        password = self.params.get("password", "")
-        prompt = self.params.get("shell_prompt", "[\#\$]")
-        linesep = eval("'%s'" % self.params.get("shell_linesep", r"\n"))
-        client = self.params.get("shell_client")
-        address = self.get_address(nic_index)
-        port = self.get_port(int(self.params.get("shell_port")))
-        log_filename = ("session-%s-%s.log" %
-                        (self.name, kvm_utils.generate_random_string(4)))
-        session = kvm_utils.remote_login(client, address, port, username,
-                                         password, prompt, linesep,
-                                         log_filename, timeout)
-        session.set_status_test_command(self.params.get("status_test_command",
-                                                        ""))
-        return session
-
-
-    def remote_login(self, nic_index=0, timeout=10):
-        """
-        Alias for login() for backward compatibility.
-        """
-        return self.login(nic_index, timeout)
-
-
-    def wait_for_login(self, nic_index=0, timeout=240, internal_timeout=10):
-        """
-        Make multiple attempts to log into the guest via SSH/Telnet/Netcat.
-
-        @param nic_index: The index of the NIC to connect to.
-        @param timeout: Time (seconds) to keep trying to log in.
-        @param internal_timeout: Timeout to pass to login().
-        @return: A ShellSession object.
-        """
-        logging.debug("Attempting to log into '%s' (timeout %ds)", self.name,
-                      timeout)
-        end_time = time.time() + timeout
-        while time.time() < end_time:
-            try:
-                return self.login(nic_index, internal_timeout)
-            except (kvm_utils.LoginError, VMError), e:
-                logging.debug(e)
-            time.sleep(2)
-        # Timeout expired; try one more time but don't catch exceptions
-        return self.login(nic_index, internal_timeout)
-
-
-    @error.context_aware
-    def copy_files_to(self, host_path, guest_path, nic_index=0, verbose=False,
-                      timeout=600):
-        """
-        Transfer files to the remote host(guest).
-
-        @param host_path: Host path
-        @param guest_path: Guest path
-        @param nic_index: The index of the NIC to connect to.
-        @param verbose: If True, log some stats using logging.debug (RSS only)
-        @param timeout: Time (seconds) before giving up on doing the remote
-                copy.
-        """
-        error.context("sending file(s) to '%s'" % self.name)
-        username = self.params.get("username", "")
-        password = self.params.get("password", "")
-        client = self.params.get("file_transfer_client")
-        address = self.get_address(nic_index)
-        port = self.get_port(int(self.params.get("file_transfer_port")))
-        log_filename = ("transfer-%s-to-%s-%s.log" %
-                        (self.name, address,
-                        kvm_utils.generate_random_string(4)))
-        kvm_utils.copy_files_to(address, client, username, password, port,
-                                host_path, guest_path, log_filename, verbose,
-                                timeout)
-
-
-    @error.context_aware
-    def copy_files_from(self, guest_path, host_path, nic_index=0,
-                        verbose=False, timeout=600):
-        """
-        Transfer files from the guest.
-
-        @param guest_path: Guest path
-        @param host_path: Host path
-        @param nic_index: The index of the NIC to connect to.
-        @param verbose: If True, log some stats using logging.debug (RSS only)
-        @param timeout: Time (seconds) before giving up on doing the remote
-                copy.
-        """
-        error.context("receiving file(s) from '%s'" % self.name)
-        username = self.params.get("username", "")
-        password = self.params.get("password", "")
-        client = self.params.get("file_transfer_client")
-        address = self.get_address(nic_index)
-        port = self.get_port(int(self.params.get("file_transfer_port")))
-        log_filename = ("transfer-%s-from-%s-%s.log" %
-                        (self.name, address,
-                        kvm_utils.generate_random_string(4)))
-        kvm_utils.copy_files_from(address, client, username, password, port,
-                                  guest_path, host_path, log_filename,
-                                  verbose, timeout)
-
-
-    @error.context_aware
-    def serial_login(self, timeout=10):
-        """
-        Log into the guest via the serial console.
-        If timeout expires while waiting for output from the guest (e.g. a
-        password prompt or a shell prompt) -- fail.
-
-        @param timeout: Time (seconds) before giving up logging into the guest.
-        @return: ShellSession object on success and None on failure.
-        """
-        error.context("logging into '%s' via serial console" % self.name)
-        username = self.params.get("username", "")
-        password = self.params.get("password", "")
-        prompt = self.params.get("shell_prompt", "[\#\$]")
-        linesep = eval("'%s'" % self.params.get("shell_linesep", r"\n"))
-        status_test_command = self.params.get("status_test_command", "")
-
-        self.serial_console.set_linesep(linesep)
-        self.serial_console.set_status_test_command(status_test_command)
-
-        # Try to get a login prompt
-        self.serial_console.sendline()
-
-        kvm_utils._remote_login(self.serial_console, username, password,
-                                prompt, timeout)
-        return self.serial_console
-
-
-    def wait_for_serial_login(self, timeout=240, internal_timeout=10):
-        """
-        Make multiple attempts to log into the guest via serial console.
-
-        @param timeout: Time (seconds) to keep trying to log in.
-        @param internal_timeout: Timeout to pass to serial_login().
-        @return: A ShellSession object.
-        """
-        logging.debug("Attempting to log into '%s' via serial console "
-                      "(timeout %ds)", self.name, timeout)
-        end_time = time.time() + timeout
-        while time.time() < end_time:
-            try:
-                return self.serial_login(internal_timeout)
-            except kvm_utils.LoginError, e:
-                logging.debug(e)
-            time.sleep(2)
-        # Timeout expired; try one more time but don't catch exceptions
-        return self.serial_login(internal_timeout)
-
-
-    @error.context_aware
-    def migrate(self, timeout=3600, protocol="tcp", cancel_delay=None,
-                offline=False, stable_check=False, clean=True,
-                save_path="/tmp", dest_host="localhost", remote_port=None):
-        """
-        Migrate the VM.
-
-        If the migration is local, the VM object's state is switched with that
-        of the destination VM.  Otherwise, the state is switched with that of
-        a dead VM (returned by self.clone()).
-
-        @param timeout: Time to wait for migration to complete.
-        @param protocol: Migration protocol ('tcp', 'unix' or 'exec').
-        @param cancel_delay: If provided, specifies a time duration after which
-                migration will be canceled.  Used for testing migrate_cancel.
-        @param offline: If True, pause the source VM before migration.
-        @param stable_check: If True, compare the VM's state after migration to
-                its state before migration and raise an exception if they
-                differ.
-        @param clean: If True, delete the saved state files (relevant only if
-                stable_check is also True).
-        @param save_path: The path for state files.
-        @param dest_host: Destination host (defaults to 'localhost').
-        @param remote_port: Port to use for remote migration.
-        """
-        error.base_context("migrating '%s'" % self.name)
-
-        def mig_finished():
-            o = self.monitor.info("migrate")
-            if isinstance(o, str):
-                return "status: active" not in o
-            else:
-                return o.get("status") != "active"
-
-        def mig_succeeded():
-            o = self.monitor.info("migrate")
-            if isinstance(o, str):
-                return "status: completed" in o
-            else:
-                return o.get("status") == "completed"
-
-        def mig_failed():
-            o = self.monitor.info("migrate")
-            if isinstance(o, str):
-                return "status: failed" in o
-            else:
-                return o.get("status") == "failed"
-
-        def mig_cancelled():
-            o = self.monitor.info("migrate")
-            if isinstance(o, str):
-                return ("Migration status: cancelled" in o or
-                        "Migration status: canceled" in o)
-            else:
-                return (o.get("status") == "cancelled" or
-                        o.get("status") == "canceled")
-
-        def wait_for_migration():
-            if not kvm_utils.wait_for(mig_finished, timeout, 2, 2,
-                                      "Waiting for migration to complete"):
-                raise VMMigrateTimeoutError("Timeout expired while waiting "
-                                            "for migration to finish")
-
-        local = dest_host == "localhost"
-
-        clone = self.clone()
-        if local:
-            error.context("creating destination VM")
-            if stable_check:
-                # Pause the dest vm after creation
-                extra_params = clone.params.get("extra_params", "") + " -S"
-                clone.params["extra_params"] = extra_params
-            clone.create(migration_mode=protocol, mac_source=self)
-            error.context()
-
-        try:
-            if protocol == "tcp":
-                if local:
-                    uri = "tcp:localhost:%d" % clone.migration_port
-                else:
-                    uri = "tcp:%s:%d" % (dest_host, remote_port)
-            elif protocol == "unix":
-                uri = "unix:%s" % clone.migration_file
-            elif protocol == "exec":
-                uri = '"exec:nc localhost %s"' % clone.migration_port
-
-            if offline:
-                self.monitor.cmd("stop")
-
-            logging.info("Migrating to %s", uri)
-            self.monitor.migrate(uri)
-
-            if cancel_delay:
-                time.sleep(cancel_delay)
-                self.monitor.cmd("migrate_cancel")
-                if not kvm_utils.wait_for(mig_cancelled, 60, 2, 2,
-                                          "Waiting for migration "
-                                          "cancellation"):
-                    raise VMMigrateCancelError("Cannot cancel migration")
-                return
-
-            wait_for_migration()
-
-            # Report migration status
-            if mig_succeeded():
-                logging.info("Migration completed successfully")
-            elif mig_failed():
-                raise VMMigrateFailedError("Migration failed")
-            else:
-                raise VMMigrateFailedError("Migration ended with unknown "
-                                           "status")
-
-            # Switch self <-> clone
-            temp = self.clone(copy_state=True)
-            self.__dict__ = clone.__dict__
-            clone = temp
-
-            # From now on, clone is the source VM that will soon be destroyed
-            # and self is the destination VM that will remain alive.  If this
-            # is remote migration, self is a dead VM object.
-
-            error.context("after migration")
-            if local:
-                time.sleep(1)
-                self.verify_alive()
-
-            if local and stable_check:
-                try:
-                    save1 = os.path.join(save_path, "src-" + clone.instance)
-                    save2 = os.path.join(save_path, "dst-" + self.instance)
-                    clone.save_to_file(save1)
-                    self.save_to_file(save2)
-                    # Fail if we see deltas
-                    md5_save1 = utils.hash_file(save1)
-                    md5_save2 = utils.hash_file(save2)
-                    if md5_save1 != md5_save2:
-                        raise VMMigrateStateMismatchError(md5_save1, md5_save2)
-                finally:
-                    if clean:
-                        if os.path.isfile(save1):
-                            os.remove(save1)
-                        if os.path.isfile(save2):
-                            os.remove(save2)
-
-        finally:
-            # If we're doing remote migration and it's completed successfully,
-            # self points to a dead VM object
-            if self.is_alive():
-                self.monitor.cmd("cont")
-            clone.destroy(gracefully=False)
-
-
-    @error.context_aware
-    def reboot(self, session=None, method="shell", nic_index=0, timeout=240):
-        """
-        Reboot the VM and wait for it to come back up by trying to log in until
-        timeout expires.
-
-        @param session: A shell session object or None.
-        @param method: Reboot method.  Can be "shell" (send a shell reboot
-                command) or "system_reset" (send a system_reset monitor command).
-        @param nic_index: Index of NIC to access in the VM, when logging in
-                after rebooting.
-        @param timeout: Time to wait for login to succeed (after rebooting).
-        @return: A new shell session object.
-        """
-        error.base_context("rebooting '%s'" % self.name, logging.info)
-        error.context("before reboot")
-        session = session or self.login()
-        error.context()
-
-        if method == "shell":
-            session.sendline(self.params.get("reboot_command"))
-        elif method == "system_reset":
-            # Clear the event list of all QMP monitors
-            qmp_monitors = [m for m in self.monitors if m.protocol == "qmp"]
-            for m in qmp_monitors:
-                m.clear_events()
-            # Send a system_reset monitor command
-            self.monitor.cmd("system_reset")
-            # Look for RESET QMP events
-            time.sleep(1)
-            for m in qmp_monitors:
-                if m.get_event("RESET"):
-                    logging.info("RESET QMP event received")
-                else:
-                    raise VMRebootError("RESET QMP event not received after "
-                                        "system_reset (monitor '%s')" % m.name)
-        else:
-            raise VMRebootError("Unknown reboot method: %s" % method)
-
-        error.context("waiting for guest to go down", logging.info)
-        if not kvm_utils.wait_for(lambda:
-                                  not session.is_responsive(timeout=30),
-                                  120, 0, 1):
-            raise VMRebootError("Guest refuses to go down")
-        session.close()
-
-        error.context("logging in after reboot", logging.info)
-        return self.wait_for_login(nic_index, timeout=timeout)
-
-
-    def send_key(self, keystr):
-        """
-        Send a key event to the VM.
-
-        @param: keystr: A key event string (e.g. "ctrl-alt-delete")
-        """
-        # For compatibility with versions of QEMU that do not recognize all
-        # key names: replace keyname with the hex value from the dict, which
-        # QEMU will definitely accept
-        dict = {"comma": "0x33",
-                "dot":   "0x34",
-                "slash": "0x35"}
-        for key, value in dict.items():
-            keystr = keystr.replace(key, value)
-        self.monitor.sendkey(keystr)
-        time.sleep(0.2)
-
-
-    def send_string(self, str):
-        """
-        Send a string to the VM.
-
-        @param str: String that must consist of alphanumeric characters only.
-                Capital letters are allowed.
-        """
-        for char in str:
-            if char.isupper():
-                self.send_key("shift-%s" % char.lower())
-            else:
-                self.send_key(char)
-
-
-    def get_uuid(self):
-        """
-        Return the UUID of the VM.
-
-        @return: The VM's UUID, or None if not specified in the config file
-        """
-        if self.params.get("uuid") == "random":
-            return self.uuid
-        else:
-            return self.params.get("uuid", None)
-
-
-    def get_cpu_count(self):
-        """
-        Get the cpu count of the VM.
-        """
-        session = self.login()
-        try:
-            return int(session.cmd(self.params.get("cpu_chk_cmd")))
-        finally:
-            session.close()
-
-
-    def get_memory_size(self, cmd=None):
-        """
-        Get bootup memory size of the VM.
-
-        @param cmd: Command used to check memory. If not provided,
-                self.params.get("mem_chk_cmd") will be used.
-        """
-        session = self.login()
-        try:
-            if not cmd:
-                cmd = self.params.get("mem_chk_cmd")
-            mem_str = session.cmd(cmd)
-            mem = re.findall("([0-9]+)", mem_str)
-            mem_size = 0
-            for m in mem:
-                mem_size += int(m)
-            if "GB" in mem_str:
-                mem_size *= 1024
-            elif "MB" in mem_str:
-                pass
-            else:
-                mem_size /= 1024
-            return int(mem_size)
-        finally:
-            session.close()
-
-
-    def get_current_memory_size(self):
-        """
-        Get current memory size of the VM, rather than bootup memory.
-        """
-        cmd = self.params.get("mem_chk_cur_cmd")
-        return self.get_memory_size(cmd)
-
-
-    def save_to_file(self, path):
-        """
-        Save the state of the virtual machine to a file by migrating
-        to an exec: destination
-        """
-        # Make sure we only get one iteration
-        self.monitor.cmd("migrate_set_speed 1000g")
-        self.monitor.cmd("migrate_set_downtime 100000000")
-        self.monitor.migrate('"exec:cat>%s"' % path)
-        # Restore the speed and downtime of migration
-        self.monitor.cmd("migrate_set_speed %d" % (32<<20))
-        self.monitor.cmd("migrate_set_downtime 0.03")
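[Editor's note: the keyname-to-hex fallback in send_key() above can be sketched standalone. This is illustrative code, not part of the patch; the function name normalize_keystr is an assumption.]

```python
# Sketch of the compatibility substitution done in send_key(): older QEMU
# versions may not recognize some symbolic key names, so a few are replaced
# with raw hex scancodes, which QEMU's sendkey command always accepts.
KEYNAME_TO_HEX = {"comma": "0x33", "dot": "0x34", "slash": "0x35"}

def normalize_keystr(keystr):
    """Replace known symbolic key names with hex scancodes."""
    for name, hexcode in KEYNAME_TO_HEX.items():
        keystr = keystr.replace(name, hexcode)
    return keystr

print(normalize_keystr("ctrl-alt-comma"))  # -> ctrl-alt-0x33
```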
diff --git a/client/tests/kvm/ppm_utils.py b/client/tests/kvm/ppm_utils.py
deleted file mode 100644
index 90ff46d..0000000
--- a/client/tests/kvm/ppm_utils.py
+++ /dev/null
@@ -1,237 +0,0 @@
-"""
-Utility functions to deal with ppm (qemu screendump format) files.
-
-@copyright: Red Hat 2008-2009
-"""
-
-import os, struct, time, re
-from autotest_lib.client.bin import utils
-
-# Some directory/filename utils, for consistency
-
-def find_id_for_screendump(md5sum, dir):
-    """
-    Search dir for a PPM file whose name ends with md5sum.
-
-    @param md5sum: md5 sum string
-    @param dir: Directory that holds the PPM files.
-    @return: The file's basename without any preceding path, e.g.
-    '20080101_120000_d41d8cd98f00b204e9800998ecf8427e.ppm'.
-    """
-    try:
-        files = os.listdir(dir)
-    except OSError:
-        files = []
-    for file in files:
-        exp = re.compile(r"(.*_)?" + md5sum + r"\.ppm", re.IGNORECASE)
-        if exp.match(file):
-            return file
-
-
-def generate_id_for_screendump(md5sum, dir):
-    """
-    Generate a unique filename using the given MD5 sum.
-
-    @return: Only the file basename, without any preceding path. The
-    filename consists of the current date and time, the MD5 sum and a .ppm
-    extension, e.g. '20080101_120000_d41d8cd98f00b204e9800998ecf8427e.ppm'.
-    """
-    filename = time.strftime("%Y%m%d_%H%M%S") + "_" + md5sum + ".ppm"
-    return filename
-
-
-def get_data_dir(steps_filename):
-    """
-    Return the data dir of the given steps filename.
-    """
-    filename = os.path.basename(steps_filename)
-    return os.path.join(os.path.dirname(steps_filename), "..", "steps_data",
-                        filename + "_data")
-
-
-# Functions for working with PPM files
-
-def image_read_from_ppm_file(filename):
-    """
-    Read a PPM image.
-
-    @return: A 3 element tuple containing the width, height and data of the
-            image.
-    """
-    fin = open(filename,"rb")
-    l1 = fin.readline()
-    l2 = fin.readline()
-    l3 = fin.readline()
-    data = fin.read()
-    fin.close()
-
-    (w, h) = map(int, l2.split())
-    return (w, h, data)
-
-
-def image_write_to_ppm_file(filename, width, height, data):
-    """
-    Write a PPM image with the given width, height and data.
-
-    @param filename: PPM file path
-    @param width: PPM file width (pixels)
-    @param height: PPM file height (pixels)
-    """
-    fout = open(filename,"wb")
-    fout.write("P6\n")
-    fout.write("%d %d\n" % (width, height))
-    fout.write("255\n")
-    fout.write(data)
-    fout.close()
-
-
-def image_crop(width, height, data, x1, y1, dx, dy):
-    """
-    Crop an image.
-
-    @param width: Original image width
-    @param height: Original image height
-    @param data: Image data
-    @param x1: Desired x coordinate of the cropped region
-    @param y1: Desired y coordinate of the cropped region
-    @param dx: Desired width of the cropped region
-    @param dy: Desired height of the cropped region
-    @return: A 3-tuple containing the width, height and data of the
-    cropped image.
-    """
-    if x1 > width - 1: x1 = width - 1
-    if y1 > height - 1: y1 = height - 1
-    if dx > width - x1: dx = width - x1
-    if dy > height - y1: dy = height - y1
-    newdata = ""
-    index = (x1 + y1*width) * 3
-    for i in range(dy):
-        newdata += data[index:(index+dx*3)]
-        index += width*3
-    return (dx, dy, newdata)
-
-
-def image_md5sum(width, height, data):
-    """
-    Return the md5sum of an image.
-
-    @param width: PPM file width
-    @param height: PPM file height
-    @param data: PPM file data
-    """
-    header = "P6\n%d %d\n255\n" % (width, height)
-    hash = utils.hash('md5', header)
-    hash.update(data)
-    return hash.hexdigest()
-
-
-def get_region_md5sum(width, height, data, x1, y1, dx, dy,
-                      cropped_image_filename=None):
-    """
-    Return the md5sum of a cropped region.
-
-    @param width: Original image width
-    @param height: Original image height
-    @param data: Image data
-    @param x1: Desired x coord of the cropped region
-    @param y1: Desired y coord of the cropped region
-    @param dx: Desired width of the cropped region
-    @param dy: Desired height of the cropped region
-    @param cropped_image_filename: if not None, write the resulting cropped
-            image to a file with this name
-    """
-    (cw, ch, cdata) = image_crop(width, height, data, x1, y1, dx, dy)
-    # Write cropped image for debugging
-    if cropped_image_filename:
-        image_write_to_ppm_file(cropped_image_filename, cw, ch, cdata)
-    return image_md5sum(cw, ch, cdata)
-
-
-def image_verify_ppm_file(filename):
-    """
-    Verify the validity of a PPM file.
-
-    @param filename: Path of the file being verified.
-    @return: True if filename is a valid PPM image file. This function
-    reads only the first few bytes of the file so it should be rather fast.
-    """
-    try:
-        size = os.path.getsize(filename)
-        fin = open(filename, "rb")
-        assert(fin.readline().strip() == "P6")
-        (width, height) = map(int, fin.readline().split())
-        assert(width > 0 and height > 0)
-        assert(fin.readline().strip() == "255")
-        size_read = fin.tell()
-        fin.close()
-        assert(size - size_read == width*height*3)
-        return True
-    except:
-        return False
-
-
-def image_comparison(width, height, data1, data2):
-    """
-    Generate a green-red comparison image from two given images.
-
-    @param width: Width of both images
-    @param height: Height of both images
-    @param data1: Data of first image
-    @param data2: Data of second image
-    @return: A 3-element tuple containing the width, height and data of the
-            generated comparison image.
-
-    @note: Input images must be the same size.
-    """
-    newdata = ""
-    i = 0
-    while i < width*height*3:
-        # Compute monochromatic value of current pixel in data1
-        pixel1_str = data1[i:i+3]
-        temp = struct.unpack("BBB", pixel1_str)
-        value1 = int((temp[0] + temp[1] + temp[2]) / 3)
-        # Compute monochromatic value of current pixel in data2
-        pixel2_str = data2[i:i+3]
-        temp = struct.unpack("BBB", pixel2_str)
-        value2 = int((temp[0] + temp[1] + temp[2]) / 3)
-        # Compute average of the two values
-        value = int((value1 + value2) / 2)
-        # Scale value to the upper half of the range [0, 255]
-        value = 128 + value / 2
-        # Compare pixels
-        if pixel1_str == pixel2_str:
-            # Equal -- give the pixel a greenish hue
-            newpixel = [0, value, 0]
-        else:
-            # Not equal -- give the pixel a reddish hue
-            newpixel = [value, 0, 0]
-        newdata += struct.pack("BBB", newpixel[0], newpixel[1], newpixel[2])
-        i += 3
-    return (width, height, newdata)
-
-
-def image_fuzzy_compare(width, height, data1, data2):
-    """
-    Return the degree of equality of two given images.
-
-    @param width: Width of both images
-    @param height: Height of both images
-    @param data1: Data of first image
-    @param data2: Data of second image
-    @return: Ratio equal_pixel_count / total_pixel_count.
-
-    @note: Input images must be the same size.
-    """
-    equal = 0.0
-    different = 0.0
-    i = 0
-    while i < width*height*3:
-        pixel1_str = data1[i:i+3]
-        pixel2_str = data2[i:i+3]
-        # Compare pixels
-        if pixel1_str == pixel2_str:
-            equal += 1.0
-        else:
-            different += 1.0
-        i += 3
-    return equal / (equal + different)
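[Editor's note: the pixel-wise comparison used by image_fuzzy_compare() above can be shown as a small Python 3 sketch over raw RGB byte strings. Names here are illustrative; the removed module itself targets Python 2.]

```python
# Pixel-wise fuzzy comparison: walk both images three bytes (one RGB pixel)
# at a time and return the fraction of pixels that match exactly.
def fuzzy_compare(width, height, data1, data2):
    total = width * height
    equal = sum(1 for i in range(0, total * 3, 3)
                if data1[i:i + 3] == data2[i:i + 3])
    return equal / total

# Two 2x1 images differing in one pixel -> similarity 0.5
a = bytes([255, 0, 0,   0, 255, 0])
b = bytes([255, 0, 0,   0, 0, 255])
print(fuzzy_compare(2, 1, a, b))  # -> 0.5
```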
diff --git a/client/tests/kvm/rss_file_transfer.py b/client/tests/kvm/rss_file_transfer.py
deleted file mode 100755
index 4d00d17..0000000
--- a/client/tests/kvm/rss_file_transfer.py
+++ /dev/null
@@ -1,519 +0,0 @@
-#!/usr/bin/python
-"""
-Client for file transfer services offered by RSS (Remote Shell Server).
-
-@author: Michael Goldish (mgoldish@redhat.com)
-@copyright: 2008-2010 Red Hat Inc.
-"""
-
-import socket, struct, time, sys, os, glob
-
-# Globals
-CHUNKSIZE = 65536
-
-# Protocol message constants
-RSS_MAGIC           = 0x525353
-RSS_OK              = 1
-RSS_ERROR           = 2
-RSS_UPLOAD          = 3
-RSS_DOWNLOAD        = 4
-RSS_SET_PATH        = 5
-RSS_CREATE_FILE     = 6
-RSS_CREATE_DIR      = 7
-RSS_LEAVE_DIR       = 8
-RSS_DONE            = 9
-
-# See rss.cpp for protocol details.
-
-
-class FileTransferError(Exception):
-    def __init__(self, msg, e=None, filename=None):
-        Exception.__init__(self, msg, e, filename)
-        self.msg = msg
-        self.e = e
-        self.filename = filename
-
-    def __str__(self):
-        s = self.msg
-        if self.e and self.filename:
-            s += "    (error: %s,    filename: %s)" % (self.e, self.filename)
-        elif self.e:
-            s += "    (%s)" % self.e
-        elif self.filename:
-            s += "    (filename: %s)" % self.filename
-        return s
-
-
-class FileTransferConnectError(FileTransferError):
-    pass
-
-
-class FileTransferTimeoutError(FileTransferError):
-    pass
-
-
-class FileTransferProtocolError(FileTransferError):
-    pass
-
-
-class FileTransferSocketError(FileTransferError):
-    pass
-
-
-class FileTransferServerError(FileTransferError):
-    def __init__(self, errmsg):
-        FileTransferError.__init__(self, None, errmsg)
-
-    def __str__(self):
-        s = "Server said: %r" % self.e
-        if self.filename:
-            s += "    (filename: %s)" % self.filename
-        return s
-
-
-class FileTransferNotFoundError(FileTransferError):
-    pass
-
-
-class FileTransferClient(object):
-    """
-    Connect to a RSS (remote shell server) and transfer files.
-    """
-
-    def __init__(self, address, port, log_func=None, timeout=20):
-        """
-        Connect to a server.
-
-        @param address: The server's address
-        @param port: The server's port
-        @param log_func: If provided, transfer stats will be passed to this
-                function during the transfer
-        @param timeout: Time duration to wait for connection to succeed
-        @raise FileTransferConnectError: Raised if the connection fails
-        """
-        self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-        self._socket.settimeout(timeout)
-        try:
-            self._socket.connect((address, port))
-        except socket.error, e:
-            raise FileTransferConnectError("Cannot connect to server at "
-                                           "%s:%s" % (address, port), e)
-        try:
-            if self._receive_msg(timeout) != RSS_MAGIC:
-                raise FileTransferConnectError("Received wrong magic number")
-        except FileTransferTimeoutError:
-            raise FileTransferConnectError("Timeout expired while waiting to "
-                                           "receive magic number")
-        self._send(struct.pack("=i", CHUNKSIZE))
-        self._log_func = log_func
-        self._last_time = time.time()
-        self._last_transferred = 0
-        self.transferred = 0
-
-
-    def __del__(self):
-        self.close()
-
-
-    def close(self):
-        """
-        Close the connection.
-        """
-        self._socket.close()
-
-
-    def _send(self, str, timeout=60):
-        try:
-            if timeout <= 0:
-                raise socket.timeout
-            self._socket.settimeout(timeout)
-            self._socket.sendall(str)
-        except socket.timeout:
-            raise FileTransferTimeoutError("Timeout expired while sending "
-                                           "data to server")
-        except socket.error, e:
-            raise FileTransferSocketError("Could not send data to server", e)
-
-
-    def _receive(self, size, timeout=60):
-        strs = []
-        end_time = time.time() + timeout
-        try:
-            while size > 0:
-                timeout = end_time - time.time()
-                if timeout <= 0:
-                    raise socket.timeout
-                self._socket.settimeout(timeout)
-                data = self._socket.recv(size)
-                if not data:
-                    raise FileTransferProtocolError("Connection closed "
-                                                    "unexpectedly while "
-                                                    "receiving data from "
-                                                    "server")
-                strs.append(data)
-                size -= len(data)
-        except socket.timeout:
-            raise FileTransferTimeoutError("Timeout expired while receiving "
-                                           "data from server")
-        except socket.error, e:
-            raise FileTransferSocketError("Error receiving data from server",
-                                          e)
-        return "".join(strs)
-
-
-    def _report_stats(self, str):
-        if self._log_func:
-            dt = time.time() - self._last_time
-            if dt >= 1:
-                transferred = self.transferred / 1048576.
-                speed = (self.transferred - self._last_transferred) / dt
-                speed /= 1048576.
-                self._log_func("%s %.3f MB (%.3f MB/sec)" %
-                               (str, transferred, speed))
-                self._last_time = time.time()
-                self._last_transferred = self.transferred
-
-
-    def _send_packet(self, str, timeout=60):
-        self._send(struct.pack("=I", len(str)))
-        self._send(str, timeout)
-        self.transferred += len(str) + 4
-        self._report_stats("Sent")
-
-
-    def _receive_packet(self, timeout=60):
-        size = struct.unpack("=I", self._receive(4))[0]
-        str = self._receive(size, timeout)
-        self.transferred += len(str) + 4
-        self._report_stats("Received")
-        return str
-
-
-    def _send_file_chunks(self, filename, timeout=60):
-        if self._log_func:
-            self._log_func("Sending file %s" % filename)
-        f = open(filename, "rb")
-        try:
-            try:
-                end_time = time.time() + timeout
-                while True:
-                    data = f.read(CHUNKSIZE)
-                    self._send_packet(data, end_time - time.time())
-                    if len(data) < CHUNKSIZE:
-                        break
-            except FileTransferError, e:
-                e.filename = filename
-                raise
-        finally:
-            f.close()
-
-
-    def _receive_file_chunks(self, filename, timeout=60):
-        if self._log_func:
-            self._log_func("Receiving file %s" % filename)
-        f = open(filename, "wb")
-        try:
-            try:
-                end_time = time.time() + timeout
-                while True:
-                    data = self._receive_packet(end_time - time.time())
-                    f.write(data)
-                    if len(data) < CHUNKSIZE:
-                        break
-            except FileTransferError, e:
-                e.filename = filename
-                raise
-        finally:
-            f.close()
-
-
-    def _send_msg(self, msg, timeout=60):
-        self._send(struct.pack("=I", msg))
-
-
-    def _receive_msg(self, timeout=60):
-        s = self._receive(4, timeout)
-        return struct.unpack("=I", s)[0]
-
-
-    def _handle_transfer_error(self):
-        # Save original exception
-        e = sys.exc_info()
-        try:
-            # See if we can get an error message
-            msg = self._receive_msg()
-        except FileTransferError:
-            # No error message -- re-raise original exception
-            raise e[0], e[1], e[2]
-        if msg == RSS_ERROR:
-            errmsg = self._receive_packet()
-            raise FileTransferServerError(errmsg)
-        raise e[0], e[1], e[2]
-
-
-class FileUploadClient(FileTransferClient):
-    """
-    Connect to a RSS (remote shell server) and upload files or directory trees.
-    """
-
-    def __init__(self, address, port, log_func=None, timeout=20):
-        """
-        Connect to a server.
-
-        @param address: The server's address
-        @param port: The server's port
-        @param log_func: If provided, transfer stats will be passed to this
-                function during the transfer
-        @param timeout: Time duration to wait for connection to succeed
-        @raise FileTransferConnectError: Raised if the connection fails
-        @raise FileTransferProtocolError: Raised if an incorrect magic number
-                is received
-        @raise FileTransferSocketError: Raised if the RSS_UPLOAD message cannot
-                be sent to the server
-        """
-        super(FileUploadClient, self).__init__(address, port, log_func, timeout)
-        self._send_msg(RSS_UPLOAD)
-
-
-    def _upload_file(self, path, end_time):
-        if os.path.isfile(path):
-            self._send_msg(RSS_CREATE_FILE)
-            self._send_packet(os.path.basename(path))
-            self._send_file_chunks(path, end_time - time.time())
-        elif os.path.isdir(path):
-            self._send_msg(RSS_CREATE_DIR)
-            self._send_packet(os.path.basename(path))
-            for filename in os.listdir(path):
-                self._upload_file(os.path.join(path, filename), end_time)
-            self._send_msg(RSS_LEAVE_DIR)
-
-
-    def upload(self, src_pattern, dst_path, timeout=600):
-        """
-        Send files or directory trees to the server.
-        The semantics of src_pattern and dst_path are similar to those of scp.
-        For example, the following are OK:
-            src_pattern='/tmp/foo.txt', dst_path='C:\\'
-                (uploads a single file)
-            src_pattern='/usr/', dst_path='C:\\Windows\\'
-                (uploads a directory tree recursively)
-            src_pattern='/usr/*', dst_path='C:\\Windows\\'
-                (uploads all files and directory trees under /usr/)
-        The following is not OK:
-            src_pattern='/tmp/foo.txt', dst_path='C:\\Windows\\*'
-                (wildcards are only allowed in src_pattern)
-
-        @param src_pattern: A path or wildcard pattern specifying the files or
-                directories to send to the server
-        @param dst_path: A path in the server's filesystem where the files will
-                be saved
-        @param timeout: Time duration in seconds to wait for the transfer to
-                complete
-        @raise FileTransferTimeoutError: Raised if timeout expires
-        @raise FileTransferServerError: Raised if something goes wrong and the
-                server sends an informative error message to the client
-        @note: Other exceptions can be raised.
-        """
-        end_time = time.time() + timeout
-        try:
-            try:
-                self._send_msg(RSS_SET_PATH)
-                self._send_packet(dst_path)
-                matches = glob.glob(src_pattern)
-                for filename in matches:
-                    self._upload_file(os.path.abspath(filename), end_time)
-                self._send_msg(RSS_DONE)
-            except FileTransferTimeoutError:
-                raise
-            except FileTransferError:
-                self._handle_transfer_error()
-            else:
-                # If nothing was transferred, raise an exception
-                if not matches:
-                    raise FileTransferNotFoundError("Pattern %s does not "
-                                                    "match any files or "
-                                                    "directories" %
-                                                    src_pattern)
-                # Look for RSS_OK or RSS_ERROR
-                msg = self._receive_msg(end_time - time.time())
-                if msg == RSS_OK:
-                    return
-                elif msg == RSS_ERROR:
-                    errmsg = self._receive_packet()
-                    raise FileTransferServerError(errmsg)
-                else:
-                    # Neither RSS_OK nor RSS_ERROR found
-                    raise FileTransferProtocolError("Received unexpected msg")
-        except:
-            # In any case, if the transfer failed, close the connection
-            self.close()
-            raise
-
-
-class FileDownloadClient(FileTransferClient):
-    """
-    Connect to a RSS (remote shell server) and download files or directory trees.
-    """
-
-    def __init__(self, address, port, log_func=None, timeout=20):
-        """
-        Connect to a server.
-
-        @param address: The server's address
-        @param port: The server's port
-        @param log_func: If provided, transfer stats will be passed to this
-                function during the transfer
-        @param timeout: Time duration to wait for connection to succeed
-        @raise FileTransferConnectError: Raised if the connection fails
-        @raise FileTransferProtocolError: Raised if an incorrect magic number
-                is received
-        @raise FileTransferSocketError: Raised if the RSS_DOWNLOAD message
-                cannot be sent to the server
-        """
-        super(FileDownloadClient, self).__init__(address, port, log_func, timeout)
-        self._send_msg(RSS_DOWNLOAD)
-
-
-    def download(self, src_pattern, dst_path, timeout=600):
-        """
-        Receive files or directory trees from the server.
-        The semantics of src_pattern and dst_path are similar to those of scp.
-        For example, the following are OK:
-            src_pattern='C:\\foo.txt', dst_path='/tmp'
-                (downloads a single file)
-            src_pattern='C:\\Windows', dst_path='/tmp'
-                (downloads a directory tree recursively)
-            src_pattern='C:\\Windows\\*', dst_path='/tmp'
-                (downloads all files and directory trees under C:\\Windows)
-        The following is not OK:
-            src_pattern='C:\\Windows', dst_path='/tmp/*'
-                (wildcards are only allowed in src_pattern)
-
-        @param src_pattern: A path or wildcard pattern specifying the files or
-                directories, in the server's filesystem, that will be sent to
-                the client
-        @param dst_path: A path in the local filesystem where the files will
-                be saved
-        @param timeout: Time duration in seconds to wait for the transfer to
-                complete
-        @raise FileTransferTimeoutError: Raised if timeout expires
-        @raise FileTransferServerError: Raised if something goes wrong and the
-                server sends an informative error message to the client
-        @note: Other exceptions can be raised.
-        """
-        dst_path = os.path.abspath(dst_path)
-        end_time = time.time() + timeout
-        file_count = 0
-        dir_count = 0
-        try:
-            try:
-                self._send_msg(RSS_SET_PATH)
-                self._send_packet(src_pattern)
-            except FileTransferError:
-                self._handle_transfer_error()
-            while True:
-                msg = self._receive_msg()
-                if msg == RSS_CREATE_FILE:
-                    # Receive filename and file contents
-                    filename = self._receive_packet()
-                    if os.path.isdir(dst_path):
-                        dst_path = os.path.join(dst_path, filename)
-                    self._receive_file_chunks(dst_path, end_time - time.time())
-                    dst_path = os.path.dirname(dst_path)
-                    file_count += 1
-                elif msg == RSS_CREATE_DIR:
-                    # Receive dirname and create the directory
-                    dirname = self._receive_packet()
-                    if os.path.isdir(dst_path):
-                        dst_path = os.path.join(dst_path, dirname)
-                    if not os.path.isdir(dst_path):
-                        os.mkdir(dst_path)
-                    dir_count += 1
-                elif msg == RSS_LEAVE_DIR:
-                    # Return to parent dir
-                    dst_path = os.path.dirname(dst_path)
-                elif msg == RSS_DONE:
-                    # Transfer complete
-                    if not file_count and not dir_count:
-                        raise FileTransferNotFoundError("Pattern %s does not "
-                                                        "match any files or "
-                                                        "directories that "
-                                                        "could be downloaded" %
-                                                        src_pattern)
-                    break
-                elif msg == RSS_ERROR:
-                    # Receive error message and abort
-                    errmsg = self._receive_packet()
-                    raise FileTransferServerError(errmsg)
-                else:
-                    # Unexpected msg
-                    raise FileTransferProtocolError("Received unexpected msg")
-        except:
-            # In any case, if the transfer failed, close the connection
-            self.close()
-            raise
-
-
-def upload(address, port, src_pattern, dst_path, log_func=None, timeout=60,
-           connect_timeout=20):
-    """
-    Connect to server and upload files.
-
-    @see: FileUploadClient
-    """
-    client = FileUploadClient(address, port, log_func, connect_timeout)
-    client.upload(src_pattern, dst_path, timeout)
-    client.close()
-
-
-def download(address, port, src_pattern, dst_path, log_func=None, timeout=60,
-             connect_timeout=20):
-    """
-    Connect to server and download files.
-
-    @see: FileDownloadClient
-    """
-    client = FileDownloadClient(address, port, log_func, connect_timeout)
-    client.download(src_pattern, dst_path, timeout)
-    client.close()
-
-
-def main():
-    import optparse
-
-    usage = "usage: %prog [options] address port src_pattern dst_path"
-    parser = optparse.OptionParser(usage=usage)
-    parser.add_option("-d", "--download",
-                      action="store_true", dest="download",
-                      help="download files from server")
-    parser.add_option("-u", "--upload",
-                      action="store_true", dest="upload",
-                      help="upload files to server")
-    parser.add_option("-v", "--verbose",
-                      action="store_true", dest="verbose",
-                      help="be verbose")
-    parser.add_option("-t", "--timeout",
-                      type="int", dest="timeout", default=3600,
-                      help="transfer timeout")
-    options, args = parser.parse_args()
-    if options.download == options.upload:
-        parser.error("you must specify either -d or -u")
-    if len(args) != 4:
-        parser.error("incorrect number of arguments")
-    address, port, src_pattern, dst_path = args
-    port = int(port)
-
-    logger = None
-    if options.verbose:
-        def p(s):
-            print s
-        logger = p
-
-    if options.download:
-        download(address, port, src_pattern, dst_path, logger, options.timeout)
-    elif options.upload:
-        upload(address, port, src_pattern, dst_path, logger, options.timeout)
-
-
-if __name__ == "__main__":
-    main()
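[Editor's note: the wire framing used throughout the RSS client above (_send_packet / _receive_packet) can be sketched standalone. The helper names pack_packet and unpack_packet are illustrative, not from the patch.]

```python
import struct

# Each RSS packet is a 4-byte native-endian unsigned length ("=I", matching
# the struct format in the removed client) followed by the payload bytes.
def pack_packet(payload):
    return struct.pack("=I", len(payload)) + payload

def unpack_packet(buf):
    """Split one framed packet off the front of buf; return (payload, rest)."""
    (size,) = struct.unpack("=I", buf[:4])
    return buf[4:4 + size], buf[4 + size:]

frame = pack_packet(b"hello")
payload, rest = unpack_packet(frame)
print(payload)  # -> b'hello'
```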
diff --git a/client/tests/kvm/scan_results.py b/client/tests/kvm/scan_results.py
deleted file mode 100755
index be825f6..0000000
--- a/client/tests/kvm/scan_results.py
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/usr/bin/python
-"""
-Program that parses the autotest results and returns a nicely printed final
-test result.
-
-@copyright: Red Hat 2008-2009
-"""
-
-def parse_results(text):
-    """
-    Parse text containing Autotest results.
-
-    @return: A list of result 4-tuples.
-    """
-    result_list = []
-    start_time_list = []
-    info_list = []
-
-    lines = text.splitlines()
-    for line in lines:
-        line = line.strip()
-        parts = line.split("\t")
-
-        # Found a START line -- get start time
-        if (line.startswith("START") and len(parts) >= 5 and
-            parts[3].startswith("timestamp")):
-            start_time = float(parts[3].split("=")[1])
-            start_time_list.append(start_time)
-            info_list.append("")
-
-        # Found an END line -- get end time, name and status
-        elif (line.startswith("END") and len(parts) >= 5 and
-              parts[3].startswith("timestamp")):
-            end_time = float(parts[3].split("=")[1])
-            start_time = start_time_list.pop()
-            info = info_list.pop()
-            test_name = parts[2]
-            test_status = parts[0].split()[1]
-            # Remove "kvm." prefix
-            if test_name.startswith("kvm."):
-                test_name = test_name[4:]
-            result_list.append((test_name, test_status,
-                                int(end_time - start_time), info))
-
-        # Found a FAIL/ERROR/GOOD line -- get failure/success info
-        elif (len(parts) >= 6 and parts[3].startswith("timestamp") and
-              parts[4].startswith("localtime")):
-            info_list[-1] = parts[5]
-
-    return result_list
-
-
-def print_result(result, name_width):
-    """
-    Nicely print a single Autotest result.
-
-    @param result: a 4-tuple
-    @param name_width: test name maximum width
-    """
-    if result:
-        format = "%%-%ds    %%-10s %%-8s %%s" % name_width
-        print format % result
-
-
-def main(resfiles):
-    result_lists = []
-    name_width = 40
-
-    for resfile in resfiles:
-        try:
-            text = open(resfile).read()
-        except IOError:
-            print "Bad result file: %s" % resfile
-            continue
-        results = parse_results(text)
-        result_lists.append((resfile, results))
-        name_width = max([name_width] + [len(r[0]) for r in results])
-
-    print_result(("Test", "Status", "Seconds", "Info"), name_width)
-    print_result(("----", "------", "-------", "----"), name_width)
-
-    for resfile, results in result_lists:
-        print "        (Result file: %s)" % resfile
-        for r in results:
-            print_result(r, name_width)
-
-
-if __name__ == "__main__":
-    import sys, glob
-
-    resfiles = glob.glob("../../results/default/status*")
-    if len(sys.argv) > 1:
-        if sys.argv[1] == "-h" or sys.argv[1] == "--help":
-            print "Usage: %s [result files]" % sys.argv[0]
-            sys.exit(0)
-        resfiles = sys.argv[1:]
-    main(resfiles)
diff --git a/client/tests/kvm/stepeditor.py b/client/tests/kvm/stepeditor.py
deleted file mode 100755
index bcdf572..0000000
--- a/client/tests/kvm/stepeditor.py
+++ /dev/null
@@ -1,1401 +0,0 @@
-#!/usr/bin/python
-"""
-Step file creator/editor.
-
-@copyright: Red Hat Inc 2009
-@author: mgoldish@redhat.com (Michael Goldish)
-@version: "20090401"
-"""
-
-import pygtk, gtk, os, glob, shutil, sys, logging
-import common, ppm_utils
-pygtk.require('2.0')
-
-
-# General utilities
-
-def corner_and_size_clipped(startpoint, endpoint, limits):
-    c0 = startpoint[:]
-    c1 = endpoint[:]
-    if c0[0] < 0:
-        c0[0] = 0
-    if c0[1] < 0:
-        c0[1] = 0
-    if c1[0] < 0:
-        c1[0] = 0
-    if c1[1] < 0:
-        c1[1] = 0
-    if c0[0] > limits[0] - 1:
-        c0[0] = limits[0] - 1
-    if c0[1] > limits[1] - 1:
-        c0[1] = limits[1] - 1
-    if c1[0] > limits[0] - 1:
-        c1[0] = limits[0] - 1
-    if c1[1] > limits[1] - 1:
-        c1[1] = limits[1] - 1
-    return ([min(c0[0], c1[0]),
-             min(c0[1], c1[1])],
-            [abs(c1[0] - c0[0]) + 1,
-             abs(c1[1] - c0[1]) + 1])
-
-
-def key_event_to_qemu_string(event):
-    keymap = gtk.gdk.keymap_get_default()
-    keyvals = keymap.get_entries_for_keycode(event.hardware_keycode)
-    keyval = keyvals[0][0]
-    keyname = gtk.gdk.keyval_name(keyval)
-
-    dict = { "Return": "ret",
-             "Tab": "tab",
-             "space": "spc",
-             "Left": "left",
-             "Right": "right",
-             "Up": "up",
-             "Down": "down",
-             "F1": "f1",
-             "F2": "f2",
-             "F3": "f3",
-             "F4": "f4",
-             "F5": "f5",
-             "F6": "f6",
-             "F7": "f7",
-             "F8": "f8",
-             "F9": "f9",
-             "F10": "f10",
-             "F11": "f11",
-             "F12": "f12",
-             "Escape": "esc",
-             "minus": "minus",
-             "equal": "equal",
-             "BackSpace": "backspace",
-             "comma": "comma",
-             "period": "dot",
-             "slash": "slash",
-             "Insert": "insert",
-             "Delete": "delete",
-             "Home": "home",
-             "End": "end",
-             "Page_Up": "pgup",
-             "Page_Down": "pgdn",
-             "Menu": "menu",
-             "semicolon": "0x27",
-             "backslash": "0x2b",
-             "apostrophe": "0x28",
-             "grave": "0x29",
-             "less": "0x2b",
-             "bracketleft": "0x1a",
-             "bracketright": "0x1b",
-             "Super_L": "0xdc",
-             "Super_R": "0xdb",
-             }
-
-    if ord('a') <= keyval <= ord('z') or ord('0') <= keyval <= ord('9'):
-        str = keyname
-    elif keyname in dict.keys():
-        str = dict[keyname]
-    else:
-        return ""
-
-    if event.state & gtk.gdk.CONTROL_MASK:
-        str = "ctrl-" + str
-    if event.state & gtk.gdk.MOD1_MASK:
-        str = "alt-" + str
-    if event.state & gtk.gdk.SHIFT_MASK:
-        str = "shift-" + str
-
-    return str
-
-
-class StepMakerWindow:
-
-    # Constructor
-
-    def __init__(self):
-        # Window
-        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
-        self.window.set_title("Step Maker Window")
-        self.window.connect("delete-event", self.delete_event)
-        self.window.connect("destroy", self.destroy)
-        self.window.set_default_size(600, 800)
-
-        # Main box (inside a frame which is inside a VBox)
-        self.menu_vbox = gtk.VBox()
-        self.window.add(self.menu_vbox)
-        self.menu_vbox.show()
-
-        frame = gtk.Frame()
-        frame.set_border_width(10)
-        frame.set_shadow_type(gtk.SHADOW_NONE)
-        self.menu_vbox.pack_end(frame)
-        frame.show()
-
-        self.main_vbox = gtk.VBox(spacing=10)
-        frame.add(self.main_vbox)
-        self.main_vbox.show()
-
-        # EventBox
-        self.scrolledwindow = gtk.ScrolledWindow()
-        self.scrolledwindow.set_policy(gtk.POLICY_AUTOMATIC,
-                                       gtk.POLICY_AUTOMATIC)
-        self.scrolledwindow.set_shadow_type(gtk.SHADOW_NONE)
-        self.main_vbox.pack_start(self.scrolledwindow)
-        self.scrolledwindow.show()
-
-        table = gtk.Table(1, 1)
-        self.scrolledwindow.add_with_viewport(table)
-        table.show()
-        table.realize()
-
-        self.event_box = gtk.EventBox()
-        table.attach(self.event_box, 0, 1, 0, 1, gtk.EXPAND, gtk.EXPAND)
-        self.event_box.show()
-        self.event_box.realize()
-
-        # Image
-        self.image = gtk.Image()
-        self.event_box.add(self.image)
-        self.image.show()
-
-        # Data VBox
-        self.data_vbox = gtk.VBox(spacing=10)
-        self.main_vbox.pack_start(self.data_vbox, expand=False)
-        self.data_vbox.show()
-
-        # User VBox
-        self.user_vbox = gtk.VBox(spacing=10)
-        self.main_vbox.pack_start(self.user_vbox, expand=False)
-        self.user_vbox.show()
-
-        # Screendump ID HBox
-        box = gtk.HBox(spacing=10)
-        self.data_vbox.pack_start(box)
-        box.show()
-
-        label = gtk.Label("Screendump ID:")
-        box.pack_start(label, False)
-        label.show()
-
-        self.entry_screendump = gtk.Entry()
-        self.entry_screendump.set_editable(False)
-        box.pack_start(self.entry_screendump)
-        self.entry_screendump.show()
-
-        label = gtk.Label("Time:")
-        box.pack_start(label, False)
-        label.show()
-
-        self.entry_time = gtk.Entry()
-        self.entry_time.set_editable(False)
-        self.entry_time.set_width_chars(10)
-        box.pack_start(self.entry_time, False)
-        self.entry_time.show()
-
-        # Comment HBox
-        box = gtk.HBox(spacing=10)
-        self.data_vbox.pack_start(box)
-        box.show()
-
-        label = gtk.Label("Comment:")
-        box.pack_start(label, False)
-        label.show()
-
-        self.entry_comment = gtk.Entry()
-        box.pack_start(self.entry_comment)
-        self.entry_comment.show()
-
-        # Sleep HBox
-        box = gtk.HBox(spacing=10)
-        self.data_vbox.pack_start(box)
-        box.show()
-
-        self.check_sleep = gtk.CheckButton("Sleep:")
-        self.check_sleep.connect("toggled", self.event_check_sleep_toggled)
-        box.pack_start(self.check_sleep, False)
-        self.check_sleep.show()
-
-        self.spin_sleep = gtk.SpinButton(gtk.Adjustment(0, 0, 50000, 1, 10, 0),
-                                         climb_rate=0.0)
-        box.pack_start(self.spin_sleep, False)
-        self.spin_sleep.show()
-
-        # Barrier HBox
-        box = gtk.HBox(spacing=10)
-        self.data_vbox.pack_start(box)
-        box.show()
-
-        self.check_barrier = gtk.CheckButton("Barrier:")
-        self.check_barrier.connect("toggled", self.event_check_barrier_toggled)
-        box.pack_start(self.check_barrier, False)
-        self.check_barrier.show()
-
-        vbox = gtk.VBox()
-        box.pack_start(vbox)
-        vbox.show()
-
-        self.label_barrier_region = gtk.Label("Region:")
-        self.label_barrier_region.set_alignment(0, 0.5)
-        vbox.pack_start(self.label_barrier_region)
-        self.label_barrier_region.show()
-
-        self.label_barrier_md5sum = gtk.Label("MD5:")
-        self.label_barrier_md5sum.set_alignment(0, 0.5)
-        vbox.pack_start(self.label_barrier_md5sum)
-        self.label_barrier_md5sum.show()
-
-        self.label_barrier_timeout = gtk.Label("Timeout:")
-        box.pack_start(self.label_barrier_timeout, False)
-        self.label_barrier_timeout.show()
-
-        self.spin_barrier_timeout = gtk.SpinButton(gtk.Adjustment(0, 0, 50000,
-                                                                  1, 10, 0),
-                                                                 climb_rate=0.0)
-        box.pack_start(self.spin_barrier_timeout, False)
-        self.spin_barrier_timeout.show()
-
-        self.check_barrier_optional = gtk.CheckButton("Optional")
-        box.pack_start(self.check_barrier_optional, False)
-        self.check_barrier_optional.show()
-
-        # Keystrokes HBox
-        box = gtk.HBox(spacing=10)
-        self.data_vbox.pack_start(box)
-        box.show()
-
-        label = gtk.Label("Keystrokes:")
-        box.pack_start(label, False)
-        label.show()
-
-        frame = gtk.Frame()
-        frame.set_shadow_type(gtk.SHADOW_IN)
-        box.pack_start(frame)
-        frame.show()
-
-        self.text_buffer = gtk.TextBuffer()
-        self.entry_keys = gtk.TextView(self.text_buffer)
-        self.entry_keys.set_wrap_mode(gtk.WRAP_WORD)
-        self.entry_keys.connect("key-press-event", self.event_key_press)
-        frame.add(self.entry_keys)
-        self.entry_keys.show()
-
-        self.check_manual = gtk.CheckButton("Manual")
-        self.check_manual.connect("toggled", self.event_manual_toggled)
-        box.pack_start(self.check_manual, False)
-        self.check_manual.show()
-
-        button = gtk.Button("Clear")
-        button.connect("clicked", self.event_clear_clicked)
-        box.pack_start(button, False)
-        button.show()
-
-        # Mouse click HBox
-        box = gtk.HBox(spacing=10)
-        self.data_vbox.pack_start(box)
-        box.show()
-
-        label = gtk.Label("Mouse action:")
-        box.pack_start(label, False)
-        label.show()
-
-        self.button_capture = gtk.Button("Capture")
-        box.pack_start(self.button_capture, False)
-        self.button_capture.show()
-
-        self.check_mousemove = gtk.CheckButton("Move: ...")
-        box.pack_start(self.check_mousemove, False)
-        self.check_mousemove.show()
-
-        self.check_mouseclick = gtk.CheckButton("Click: ...")
-        box.pack_start(self.check_mouseclick, False)
-        self.check_mouseclick.show()
-
-        self.spin_sensitivity = gtk.SpinButton(gtk.Adjustment(1, 1, 100, 1, 10,
-                                                              0),
-                                                              climb_rate=0.0)
-        box.pack_end(self.spin_sensitivity, False)
-        self.spin_sensitivity.show()
-
-        label = gtk.Label("Sensitivity:")
-        box.pack_end(label, False)
-        label.show()
-
-        self.spin_latency = gtk.SpinButton(gtk.Adjustment(10, 1, 500, 1, 10, 0),
-                                           climb_rate=0.0)
-        box.pack_end(self.spin_latency, False)
-        self.spin_latency.show()
-
-        label = gtk.Label("Latency:")
-        box.pack_end(label, False)
-        label.show()
-
-        self.handler_event_box_press = None
-        self.handler_event_box_release = None
-        self.handler_event_box_scroll = None
-        self.handler_event_box_motion = None
-        self.handler_event_box_expose = None
-
-        self.window.realize()
-        self.window.show()
-
-        self.clear_state()
-
-    # Utilities
-
-    def message(self, text, title):
-        dlg = gtk.MessageDialog(self.window,
-                gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
-                gtk.MESSAGE_INFO,
-                gtk.BUTTONS_CLOSE,
-                title)
-        dlg.set_title(title)
-        dlg.format_secondary_text(text)
-        response = dlg.run()
-        dlg.destroy()
-
-
-    def question_yes_no(self, text, title):
-        dlg = gtk.MessageDialog(self.window,
-                gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
-                gtk.MESSAGE_QUESTION,
-                gtk.BUTTONS_YES_NO,
-                title)
-        dlg.set_title(title)
-        dlg.format_secondary_text(text)
-        response = dlg.run()
-        dlg.destroy()
-        if response == gtk.RESPONSE_YES:
-            return True
-        return False
-
-
-    def inputdialog(self, text, title, default_response=""):
-        # Define a little helper function
-        def inputdialog_entry_activated(entry):
-            dlg.response(gtk.RESPONSE_OK)
-
-        # Create the dialog
-        dlg = gtk.MessageDialog(self.window,
-                gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
-                gtk.MESSAGE_QUESTION,
-                gtk.BUTTONS_OK_CANCEL,
-                title)
-        dlg.set_title(title)
-        dlg.format_secondary_text(text)
-
-        # Create an entry widget
-        entry = gtk.Entry()
-        entry.set_text(default_response)
-        entry.connect("activate", inputdialog_entry_activated)
-        dlg.vbox.pack_start(entry)
-        entry.show()
-
-        # Run the dialog
-        response = dlg.run()
-        dlg.destroy()
-        if response == gtk.RESPONSE_OK:
-            return entry.get_text()
-        return None
-
-
-    def filedialog(self, title=None, default_filename=None):
-        chooser = gtk.FileChooserDialog(title=title, parent=self.window,
-                                        action=gtk.FILE_CHOOSER_ACTION_OPEN,
-                buttons=(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL, gtk.STOCK_OPEN,
-                         gtk.RESPONSE_OK))
-        chooser.resize(700, 500)
-        if default_filename:
-            chooser.set_filename(os.path.abspath(default_filename))
-        filename = None
-        response = chooser.run()
-        if response == gtk.RESPONSE_OK:
-            filename = chooser.get_filename()
-        chooser.destroy()
-        return filename
-
-
-    def redirect_event_box_input(self, press=None, release=None, scroll=None,
-                                 motion=None, expose=None):
-        if self.handler_event_box_press != None: \
-        self.event_box.disconnect(self.handler_event_box_press)
-        if self.handler_event_box_release != None: \
-        self.event_box.disconnect(self.handler_event_box_release)
-        if self.handler_event_box_scroll != None: \
-        self.event_box.disconnect(self.handler_event_box_scroll)
-        if self.handler_event_box_motion != None: \
-        self.event_box.disconnect(self.handler_event_box_motion)
-        if self.handler_event_box_expose != None: \
-        self.event_box.disconnect(self.handler_event_box_expose)
-        self.handler_event_box_press = None
-        self.handler_event_box_release = None
-        self.handler_event_box_scroll = None
-        self.handler_event_box_motion = None
-        self.handler_event_box_expose = None
-        if press != None: self.handler_event_box_press = \
-        self.event_box.connect("button-press-event", press)
-        if release != None: self.handler_event_box_release = \
-        self.event_box.connect("button-release-event", release)
-        if scroll != None: self.handler_event_box_scroll = \
-        self.event_box.connect("scroll-event", scroll)
-        if motion != None: self.handler_event_box_motion = \
-        self.event_box.connect("motion-notify-event", motion)
-        if expose != None: self.handler_event_box_expose = \
-        self.event_box.connect_after("expose-event", expose)
-
-
-    def get_keys(self):
-        return self.text_buffer.get_text(
-                self.text_buffer.get_start_iter(),
-                self.text_buffer.get_end_iter())
-
-
-    def add_key(self, key):
-        text = self.get_keys()
-        if len(text) > 0 and text[-1] != ' ':
-            text += " "
-        text += key
-        self.text_buffer.set_text(text)
-
-
-    def clear_keys(self):
-        self.text_buffer.set_text("")
-
-
-    def update_barrier_info(self):
-        if self.barrier_selected:
-            self.label_barrier_region.set_text("Selected region: Corner: " + \
-                                            str(tuple(self.barrier_corner)) + \
-                                            " Size: " + \
-                                            str(tuple(self.barrier_size)))
-        else:
-            self.label_barrier_region.set_text("No region selected.")
-        self.label_barrier_md5sum.set_text("MD5: " + self.barrier_md5sum)
-
-
-    def update_mouse_click_info(self):
-        if self.mouse_click_captured:
-            self.check_mousemove.set_label("Move: " + \
-                                           str(tuple(self.mouse_click_coords)))
-            self.check_mouseclick.set_label("Click: button %d" %
-                                            self.mouse_click_button)
-        else:
-            self.check_mousemove.set_label("Move: ...")
-            self.check_mouseclick.set_label("Click: ...")
-
-
-    def clear_state(self, clear_screendump=True):
-        # Recording time
-        self.entry_time.set_text("unknown")
-        if clear_screendump:
-            # Screendump
-            self.clear_image()
-        # Screendump ID
-        self.entry_screendump.set_text("")
-        # Comment
-        self.entry_comment.set_text("")
-        # Sleep
-        self.check_sleep.set_active(True)
-        self.check_sleep.set_active(False)
-        self.spin_sleep.set_value(10)
-        # Barrier
-        self.clear_barrier_state()
-        # Keystrokes
-        self.check_manual.set_active(False)
-        self.clear_keys()
-        # Mouse actions
-        self.check_mousemove.set_sensitive(False)
-        self.check_mouseclick.set_sensitive(False)
-        self.check_mousemove.set_active(False)
-        self.check_mouseclick.set_active(False)
-        self.mouse_click_captured = False
-        self.mouse_click_coords = [0, 0]
-        self.mouse_click_button = 0
-        self.update_mouse_click_info()
-
-
-    def clear_barrier_state(self):
-        self.check_barrier.set_active(True)
-        self.check_barrier.set_active(False)
-        self.check_barrier_optional.set_active(False)
-        self.spin_barrier_timeout.set_value(10)
-        self.barrier_selection_started = False
-        self.barrier_selected = False
-        self.barrier_corner0 = [0, 0]
-        self.barrier_corner1 = [0, 0]
-        self.barrier_corner = [0, 0]
-        self.barrier_size = [0, 0]
-        self.barrier_md5sum = ""
-        self.update_barrier_info()
-
-
-    def set_image(self, w, h, data):
-        (self.image_width, self.image_height, self.image_data) = (w, h, data)
-        self.image.set_from_pixbuf(gtk.gdk.pixbuf_new_from_data(
-            data, gtk.gdk.COLORSPACE_RGB, False, 8,
-            w, h, w*3))
-        hscrollbar = self.scrolledwindow.get_hscrollbar()
-        hscrollbar.set_range(0, w)
-        vscrollbar = self.scrolledwindow.get_vscrollbar()
-        vscrollbar.set_range(0, h)
-
-
-    def set_image_from_file(self, filename):
-        if not ppm_utils.image_verify_ppm_file(filename):
-            logging.warning("set_image_from_file: Warning: received invalid "
-                            "screendump file")
-            return self.clear_image()
-        (w, h, data) = ppm_utils.image_read_from_ppm_file(filename)
-        self.set_image(w, h, data)
-
-
-    def clear_image(self):
-        self.image.clear()
-        self.image_width = 0
-        self.image_height = 0
-        self.image_data = ""
-
-
-    def update_screendump_id(self, data_dir):
-        if not self.image_data:
-            return
-        # Find a proper ID for the screendump
-        scrdump_md5sum = ppm_utils.image_md5sum(self.image_width,
-                                                self.image_height,
-                                                self.image_data)
-        scrdump_id = ppm_utils.find_id_for_screendump(scrdump_md5sum, data_dir)
-        if not scrdump_id:
-            # Not found; generate one
-            scrdump_id = ppm_utils.generate_id_for_screendump(scrdump_md5sum,
-                                                              data_dir)
-        self.entry_screendump.set_text(scrdump_id)
-
-
-    def get_step_lines(self, data_dir=None):
-        if self.check_barrier.get_active() and not self.barrier_selected:
-            self.message("No barrier region selected.", "Error")
-            return
-
-        str = "step"
-
-        # Add step recording time
-        if self.entry_time.get_text():
-            str += " " + self.entry_time.get_text()
-
-        str += "\n"
-
-        # Add screendump line
-        if self.image_data:
-            str += "screendump %s\n" % self.entry_screendump.get_text()
-
-        # Add comment
-        if self.entry_comment.get_text():
-            str += "# %s\n" % self.entry_comment.get_text()
-
-        # Add sleep line
-        if self.check_sleep.get_active():
-            str += "sleep %d\n" % self.spin_sleep.get_value()
-
-        # Add barrier_2 line
-        if self.check_barrier.get_active():
-            str += "barrier_2 %d %d %d %d %s %d" % (
-                    self.barrier_size[0], self.barrier_size[1],
-                    self.barrier_corner[0], self.barrier_corner[1],
-                    self.barrier_md5sum, self.spin_barrier_timeout.get_value())
-            if self.check_barrier_optional.get_active():
-                str += " optional"
-            str += "\n"
-
-        # Add "Sending keys" comment
-        keys_to_send = self.get_keys().split()
-        if keys_to_send:
-            str += "# Sending keys: %s\n" % self.get_keys()
-
-        # Add key and var lines
-        for key in keys_to_send:
-            if key.startswith("$"):
-                varname = key[1:]
-                str += "var %s\n" % varname
-            else:
-                str += "key %s\n" % key
-
-        # Add mousemove line
-        if self.check_mousemove.get_active():
-            str += "mousemove %d %d\n" % (self.mouse_click_coords[0],
-                                          self.mouse_click_coords[1])
-
-        # Add mouseclick line
-        if self.check_mouseclick.get_active():
-            dict = { 1 : 1,
-                     2 : 2,
-                     3 : 4 }
-            str += "mouseclick %d\n" % dict[self.mouse_click_button]
-
-        # Write screendump and cropped screendump image files
-        if data_dir and self.image_data:
-            # Create the data dir if it doesn't exist
-            if not os.path.exists(data_dir):
-                os.makedirs(data_dir)
-            # Get the full screendump filename
-            scrdump_filename = os.path.join(data_dir,
-                                            self.entry_screendump.get_text())
-            # Write screendump file if it doesn't exist
-            if not os.path.exists(scrdump_filename):
-                try:
-                    ppm_utils.image_write_to_ppm_file(scrdump_filename,
-                                                      self.image_width,
-                                                      self.image_height,
-                                                      self.image_data)
-                except IOError:
-                    self.message("Could not write screendump file.", "Error")
-
-            #if self.check_barrier.get_active():
-            #    # Crop image to get the cropped screendump
-            #    (cw, ch, cdata) = ppm_utils.image_crop(
-            #            self.image_width, self.image_height, self.image_data,
-            #            self.barrier_corner[0], self.barrier_corner[1],
-            #            self.barrier_size[0], self.barrier_size[1])
-            #    cropped_scrdump_md5sum = ppm_utils.image_md5sum(cw, ch, cdata)
-            #    cropped_scrdump_filename = \
-            #    ppm_utils.get_cropped_screendump_filename(scrdump_filename,
-            #                                            cropped_scrdump_md5sum)
-            #    # Write cropped screendump file
-            #    try:
-            #        ppm_utils.image_write_to_ppm_file(cropped_scrdump_filename,
-            #                                          cw, ch, cdata)
-            #    except IOError:
-            #        self.message("Could not write cropped screendump file.",
-            #                     "Error")
-
-        return str
-
-    def set_state_from_step_lines(self, str, data_dir, warn=True):
-        self.clear_state()
-
-        for line in str.splitlines():
-            words = line.split()
-            if not words:
-                continue
-
-            if line.startswith("#") \
-                    and not self.entry_comment.get_text() \
-                    and not line.startswith("# Sending keys:") \
-                    and not line.startswith("# ----"):
-                self.entry_comment.set_text(line.strip("#").strip())
-
-            elif words[0] == "step":
-                if len(words) >= 2:
-                    self.entry_time.set_text(words[1])
-
-            elif words[0] == "screendump":
-                self.entry_screendump.set_text(words[1])
-                self.set_image_from_file(os.path.join(data_dir, words[1]))
-
-            elif words[0] == "sleep":
-                self.spin_sleep.set_value(int(words[1]))
-                self.check_sleep.set_active(True)
-
-            elif words[0] == "key":
-                self.add_key(words[1])
-
-            elif words[0] == "var":
-                self.add_key("$%s" % words[1])
-
-            elif words[0] == "mousemove":
-                self.mouse_click_captured = True
-                self.mouse_click_coords = [int(words[1]), int(words[2])]
-                self.update_mouse_click_info()
-
-            elif words[0] == "mouseclick":
-                self.mouse_click_captured = True
-                self.mouse_click_button = int(words[1])
-                self.update_mouse_click_info()
-
-            elif words[0] == "barrier_2":
-                # Get region corner and size from step lines
-                self.barrier_corner = [int(words[3]), int(words[4])]
-                self.barrier_size = [int(words[1]), int(words[2])]
-                # Get corner0 and corner1 from step lines
-                self.barrier_corner0 = self.barrier_corner
-                self.barrier_corner1 = [self.barrier_corner[0] +
-                                        self.barrier_size[0] - 1,
-                                        self.barrier_corner[1] +
-                                        self.barrier_size[1] - 1]
-                # Get the md5sum
-                self.barrier_md5sum = words[5]
-                # Pretend the user selected the region with the mouse
-                self.barrier_selection_started = True
-                self.barrier_selected = True
-                # Update label widgets according to region information
-                self.update_barrier_info()
-                # Check the barrier checkbutton
-                self.check_barrier.set_active(True)
-                # Set timeout value
-                self.spin_barrier_timeout.set_value(int(words[6]))
-                # Set 'optional' checkbutton state
-                self.check_barrier_optional.set_active(words[-1] == "optional")
-                # Update the image widget
-                self.event_box.queue_draw()
-
-                if warn:
-                    # See if the computed md5sum matches the one recorded in
-                    # the file
-                    computed_md5sum = ppm_utils.get_region_md5sum(
-                            self.image_width, self.image_height,
-                            self.image_data, self.barrier_corner[0],
-                            self.barrier_corner[1], self.barrier_size[0],
-                            self.barrier_size[1])
-                    if computed_md5sum != self.barrier_md5sum:
-                        self.message("Computed MD5 sum (%s) differs from MD5"
-                                     " sum recorded in steps file (%s)" %
-                                     (computed_md5sum, self.barrier_md5sum),
-                                     "Warning")
-
-    # Events
-
-    def delete_event(self, widget, event):
-        pass
-
-    def destroy(self, widget):
-        gtk.main_quit()
-
-    def event_check_barrier_toggled(self, widget):
-        if self.check_barrier.get_active():
-            self.redirect_event_box_input(
-                    self.event_button_press,
-                    self.event_button_release,
-                    None,
-                    None,
-                    self.event_expose)
-            self.event_box.queue_draw()
-            self.event_box.window.set_cursor(gtk.gdk.Cursor(gtk.gdk.CROSSHAIR))
-            self.label_barrier_region.set_sensitive(True)
-            self.label_barrier_md5sum.set_sensitive(True)
-            self.label_barrier_timeout.set_sensitive(True)
-            self.spin_barrier_timeout.set_sensitive(True)
-            self.check_barrier_optional.set_sensitive(True)
-        else:
-            self.redirect_event_box_input()
-            self.event_box.queue_draw()
-            self.event_box.window.set_cursor(None)
-            self.label_barrier_region.set_sensitive(False)
-            self.label_barrier_md5sum.set_sensitive(False)
-            self.label_barrier_timeout.set_sensitive(False)
-            self.spin_barrier_timeout.set_sensitive(False)
-            self.check_barrier_optional.set_sensitive(False)
-
-    def event_check_sleep_toggled(self, widget):
-        if self.check_sleep.get_active():
-            self.spin_sleep.set_sensitive(True)
-        else:
-            self.spin_sleep.set_sensitive(False)
-
-    def event_manual_toggled(self, widget):
-        self.entry_keys.grab_focus()
-
-    def event_clear_clicked(self, widget):
-        self.clear_keys()
-        self.entry_keys.grab_focus()
-
-    def event_expose(self, widget, event):
-        if not self.barrier_selection_started:
-            return
-        (corner, size) = corner_and_size_clipped(self.barrier_corner0,
-                                                 self.barrier_corner1,
-                                                 self.event_box.size_request())
-        gc = self.event_box.window.new_gc(line_style=gtk.gdk.LINE_DOUBLE_DASH,
-                                          line_width=1)
-        gc.set_foreground(gc.get_colormap().alloc_color("red"))
-        gc.set_background(gc.get_colormap().alloc_color("dark red"))
-        gc.set_dashes(0, (4, 4))
-        self.event_box.window.draw_rectangle(
-                gc, False,
-                corner[0], corner[1],
-                size[0]-1, size[1]-1)
-
-    def event_drag_motion(self, widget, event):
-        old_corner1 = self.barrier_corner1
-        self.barrier_corner1 = [int(event.x), int(event.y)]
-        (corner, size) = corner_and_size_clipped(self.barrier_corner0,
-                                                 self.barrier_corner1,
-                                                 self.event_box.size_request())
-        (old_corner, old_size) = corner_and_size_clipped(self.barrier_corner0,
-                                                         old_corner1,
-                                                  self.event_box.size_request())
-        corner0 = [min(corner[0], old_corner[0]), min(corner[1], old_corner[1])]
-        corner1 = [max(corner[0] + size[0], old_corner[0] + old_size[0]),
-                   max(corner[1] + size[1], old_corner[1] + old_size[1])]
-        size = [corner1[0] - corner0[0] + 1,
-                corner1[1] - corner0[1] + 1]
-        self.event_box.queue_draw_area(corner0[0], corner0[1], size[0], size[1])
-
-    def event_button_press(self, widget, event):
-        (corner, size) = corner_and_size_clipped(self.barrier_corner0,
-                                                 self.barrier_corner1,
-                                                 self.event_box.size_request())
-        self.event_box.queue_draw_area(corner[0], corner[1], size[0], size[1])
-        self.barrier_corner0 = [int(event.x), int(event.y)]
-        self.barrier_corner1 = [int(event.x), int(event.y)]
-        self.redirect_event_box_input(
-                self.event_button_press,
-                self.event_button_release,
-                None,
-                self.event_drag_motion,
-                self.event_expose)
-        self.barrier_selection_started = True
-
-    def event_button_release(self, widget, event):
-        self.redirect_event_box_input(
-                self.event_button_press,
-                self.event_button_release,
-                None,
-                None,
-                self.event_expose)
-        (self.barrier_corner, self.barrier_size) = \
-        corner_and_size_clipped(self.barrier_corner0, self.barrier_corner1,
-                                self.event_box.size_request())
-        self.barrier_md5sum = ppm_utils.get_region_md5sum(
-                self.image_width, self.image_height, self.image_data,
-                self.barrier_corner[0], self.barrier_corner[1],
-                self.barrier_size[0], self.barrier_size[1])
-        self.barrier_selected = True
-        self.update_barrier_info()
-
-    def event_key_press(self, widget, event):
-        if self.check_manual.get_active():
-            return False
-        str = key_event_to_qemu_string(event)
-        self.add_key(str)
-        return True
-
-
-class StepEditor(StepMakerWindow):
-    ui = '''<ui>
-    <menubar name="MenuBar">
-        <menu action="File">
-            <menuitem action="Open"/>
-            <separator/>
-            <menuitem action="Quit"/>
-        </menu>
-        <menu action="Edit">
-            <menuitem action="CopyStep"/>
-            <menuitem action="DeleteStep"/>
-        </menu>
-        <menu action="Insert">
-            <menuitem action="InsertNewBefore"/>
-            <menuitem action="InsertNewAfter"/>
-            <separator/>
-            <menuitem action="InsertStepsBefore"/>
-            <menuitem action="InsertStepsAfter"/>
-        </menu>
-        <menu action="Tools">
-            <menuitem action="CleanUp"/>
-        </menu>
-    </menubar>
-</ui>'''
-
-    # Constructor
-
-    def __init__(self, filename=None):
-        StepMakerWindow.__init__(self)
-
-        self.steps_filename = None
-        self.steps = []
-
-        # Create a UIManager instance
-        uimanager = gtk.UIManager()
-
-        # Add the accelerator group to the toplevel window
-        accelgroup = uimanager.get_accel_group()
-        self.window.add_accel_group(accelgroup)
-
-        # Create an ActionGroup
-        actiongroup = gtk.ActionGroup('StepEditor')
-
-        # Create actions
-        actiongroup.add_actions([
-            ('Quit', gtk.STOCK_QUIT, '_Quit', None, 'Quit the Program',
-             self.quit),
-            ('Open', gtk.STOCK_OPEN, '_Open', None, 'Open steps file',
-             self.open_steps_file),
-            ('CopyStep', gtk.STOCK_COPY, '_Copy current step...', "",
-             'Copy current step to user specified position', self.copy_step),
-            ('DeleteStep', gtk.STOCK_DELETE, '_Delete current step', "",
-             'Delete current step', self.event_remove_clicked),
-            ('InsertNewBefore', gtk.STOCK_ADD, '_New step before current', "",
-             'Insert new step before current step', self.insert_before),
-            ('InsertNewAfter', gtk.STOCK_ADD, 'N_ew step after current', "",
-             'Insert new step after current step', self.insert_after),
-            ('InsertStepsBefore', gtk.STOCK_ADD, '_Steps before current...',
-             "", 'Insert steps (from file) before current step',
-             self.insert_steps_before),
-            ('InsertStepsAfter', gtk.STOCK_ADD, 'Steps _after current...', "",
-             'Insert steps (from file) after current step',
-             self.insert_steps_after),
-            ('CleanUp', gtk.STOCK_DELETE, '_Clean up data directory', "",
-             'Move unused PPM files to a backup directory', self.cleanup),
-            ('File', None, '_File'),
-            ('Edit', None, '_Edit'),
-            ('Insert', None, '_Insert'),
-            ('Tools', None, '_Tools')
-            ])
-
-        def create_shortcut(name, callback, keyname):
-            # Create an action
-            action = gtk.Action(name, None, None, None)
-            # Connect a callback to the action
-            action.connect("activate", callback)
-            actiongroup.add_action_with_accel(action, keyname)
-            # Have the action use accelgroup
-            action.set_accel_group(accelgroup)
-            # Connect the accelerator to the action
-            action.connect_accelerator()
-
-        create_shortcut("Next", self.event_next_clicked, "Page_Down")
-        create_shortcut("Previous", self.event_prev_clicked, "Page_Up")
-
-        # Add the actiongroup to the uimanager
-        uimanager.insert_action_group(actiongroup, 0)
-
-        # Add a UI description
-        uimanager.add_ui_from_string(self.ui)
-
-        # Create a MenuBar
-        menubar = uimanager.get_widget('/MenuBar')
-        self.menu_vbox.pack_start(menubar, False)
-
-        # Remember the Edit menu bar for future reference
-        self.menu_edit = uimanager.get_widget('/MenuBar/Edit')
-        self.menu_edit.set_sensitive(False)
-
-        # Remember the Insert menu bar for future reference
-        self.menu_insert = uimanager.get_widget('/MenuBar/Insert')
-        self.menu_insert.set_sensitive(False)
-
-        # Remember the Tools menu bar for future reference
-        self.menu_tools = uimanager.get_widget('/MenuBar/Tools')
-        self.menu_tools.set_sensitive(False)
-
-        # Next/Previous HBox
-        hbox = gtk.HBox(spacing=10)
-        self.user_vbox.pack_start(hbox)
-        hbox.show()
-
-        self.button_first = gtk.Button(stock=gtk.STOCK_GOTO_FIRST)
-        self.button_first.connect("clicked", self.event_first_clicked)
-        hbox.pack_start(self.button_first)
-        self.button_first.show()
-
-        #self.button_prev = gtk.Button("<< Previous")
-        self.button_prev = gtk.Button(stock=gtk.STOCK_GO_BACK)
-        self.button_prev.connect("clicked", self.event_prev_clicked)
-        hbox.pack_start(self.button_prev)
-        self.button_prev.show()
-
-        self.label_step = gtk.Label("Step:")
-        hbox.pack_start(self.label_step, False)
-        self.label_step.show()
-
-        self.entry_step_num = gtk.Entry()
-        self.entry_step_num.connect("activate", self.event_entry_step_activated)
-        self.entry_step_num.set_width_chars(3)
-        hbox.pack_start(self.entry_step_num, False)
-        self.entry_step_num.show()
-
-        #self.button_next = gtk.Button("Next >>")
-        self.button_next = gtk.Button(stock=gtk.STOCK_GO_FORWARD)
-        self.button_next.connect("clicked", self.event_next_clicked)
-        hbox.pack_start(self.button_next)
-        self.button_next.show()
-
-        self.button_last = gtk.Button(stock=gtk.STOCK_GOTO_LAST)
-        self.button_last.connect("clicked", self.event_last_clicked)
-        hbox.pack_start(self.button_last)
-        self.button_last.show()
-
-        # Save HBox
-        hbox = gtk.HBox(spacing=10)
-        self.user_vbox.pack_start(hbox)
-        hbox.show()
-
-        self.button_save = gtk.Button("_Save current step")
-        self.button_save.connect("clicked", self.event_save_clicked)
-        hbox.pack_start(self.button_save)
-        self.button_save.show()
-
-        self.button_remove = gtk.Button("_Delete current step")
-        self.button_remove.connect("clicked", self.event_remove_clicked)
-        hbox.pack_start(self.button_remove)
-        self.button_remove.show()
-
-        self.button_replace = gtk.Button("_Replace screendump")
-        self.button_replace.connect("clicked", self.event_replace_clicked)
-        hbox.pack_start(self.button_replace)
-        self.button_replace.show()
-
-        # Disable unused widgets
-        self.button_capture.set_sensitive(False)
-        self.spin_latency.set_sensitive(False)
-        self.spin_sensitivity.set_sensitive(False)
-
-        # Disable main vbox because no steps file is loaded
-        self.main_vbox.set_sensitive(False)
-
-        # Set title
-        self.window.set_title("Step Editor")
-
-    # Events
-
-    def delete_event(self, widget, event):
-        # Make sure the step is saved (if the user wants it to be)
-        self.verify_save()
-
-    def event_first_clicked(self, widget):
-        if not self.steps:
-            return
-        # Make sure the step is saved (if the user wants it to be)
-        self.verify_save()
-        # Go to first step
-        self.set_step(0)
-
-    def event_last_clicked(self, widget):
-        if not self.steps:
-            return
-        # Make sure the step is saved (if the user wants it to be)
-        self.verify_save()
-        # Go to last step
-        self.set_step(len(self.steps) - 1)
-
-    def event_prev_clicked(self, widget):
-        if not self.steps:
-            return
-        # Make sure the step is saved (if the user wants it to be)
-        self.verify_save()
-        # Go to previous step
-        index = self.current_step_index - 1
-        if self.steps:
-            index = index % len(self.steps)
-        self.set_step(index)
-
-    def event_next_clicked(self, widget):
-        if not self.steps:
-            return
-        # Make sure the step is saved (if the user wants it to be)
-        self.verify_save()
-        # Go to next step
-        index = self.current_step_index + 1
-        if self.steps:
-            index = index % len(self.steps)
-        self.set_step(index)
-
-    def event_entry_step_activated(self, widget):
-        if not self.steps:
-            return
-        step_index = self.entry_step_num.get_text()
-        if not step_index.isdigit():
-            return
-        step_index = int(step_index) - 1
-        if step_index == self.current_step_index:
-            return
-        self.verify_save()
-        self.set_step(step_index)
-
-    def event_save_clicked(self, widget):
-        if not self.steps:
-            return
-        self.save_step()
-
-    def event_remove_clicked(self, widget):
-        if not self.steps:
-            return
-        if not self.question_yes_no("This will modify the steps file."
-                                    " Are you sure?", "Remove step?"):
-            return
-        # Remove step
-        del self.steps[self.current_step_index]
-        # Write changes to file
-        self.write_steps_file(self.steps_filename)
-        # Move to previous step
-        self.set_step(self.current_step_index)
-
-    def event_replace_clicked(self, widget):
-        if not self.steps:
-            return
-        # Let the user choose a screendump file
-        current_filename = os.path.join(self.steps_data_dir,
-                                        self.entry_screendump.get_text())
-        filename = self.filedialog("Choose PPM image file",
-                                   default_filename=current_filename)
-        if not filename:
-            return
-        if not ppm_utils.image_verify_ppm_file(filename):
-            self.message("Not a valid PPM image file.", "Error")
-            return
-        self.clear_image()
-        self.clear_barrier_state()
-        self.set_image_from_file(filename)
-        self.update_screendump_id(self.steps_data_dir)
-
-    # Menu actions
-
-    def open_steps_file(self, action):
-        # Make sure the step is saved (if the user wants it to be)
-        self.verify_save()
-        # Let the user choose a steps file
-        current_filename = self.steps_filename
-        filename = self.filedialog("Open steps file",
-                                   default_filename=current_filename)
-        if not filename:
-            return
-        self.set_steps_file(filename)
-
-    def quit(self, action):
-        # Make sure the step is saved (if the user wants it to be)
-        self.verify_save()
-        # Quit
-        gtk.main_quit()
-
-    def copy_step(self, action):
-        if not self.steps:
-            return
-        self.verify_save()
-        self.set_step(self.current_step_index)
-        # Get the desired position
-        step_index = self.inputdialog("Copy step to position:",
-                                      "Copy step",
-                                      str(self.current_step_index + 2))
-        if not step_index:
-            return
-        step_index = int(step_index) - 1
-        # Get the lines of the current step
-        step = self.steps[self.current_step_index]
-        # Insert new step at position step_index
-        self.steps.insert(step_index, step)
-        # Go to new step
-        self.set_step(step_index)
-        # Write changes to disk
-        self.write_steps_file(self.steps_filename)
-
-    def insert_before(self, action):
-        if not self.steps_filename:
-            return
-        if not self.question_yes_no("This will modify the steps file."
-                                    " Are you sure?", "Insert new step?"):
-            return
-        self.verify_save()
-        step_index = self.current_step_index
-        # Get the lines of a blank step
-        self.clear_state()
-        step = self.get_step_lines()
-        # Insert new step at position step_index
-        self.steps.insert(step_index, step)
-        # Go to new step
-        self.set_step(step_index)
-        # Write changes to disk
-        self.write_steps_file(self.steps_filename)
-
-    def insert_after(self, action):
-        if not self.steps_filename:
-            return
-        if not self.question_yes_no("This will modify the steps file."
-                                    " Are you sure?", "Insert new step?"):
-            return
-        self.verify_save()
-        step_index = self.current_step_index + 1
-        # Get the lines of a blank step
-        self.clear_state()
-        step = self.get_step_lines()
-        # Insert new step at position step_index
-        self.steps.insert(step_index, step)
-        # Go to new step
-        self.set_step(step_index)
-        # Write changes to disk
-        self.write_steps_file(self.steps_filename)
-
-    def insert_steps(self, filename, index):
-        # Read the steps file
-        (steps, header) = self.read_steps_file(filename)
-
-        data_dir = ppm_utils.get_data_dir(filename)
-        for step in steps:
-            self.set_state_from_step_lines(step, data_dir, warn=False)
-            step = self.get_step_lines(self.steps_data_dir)
-
-        # Insert steps into self.steps
-        self.steps[index:index] = steps
-        # Write changes to disk
-        self.write_steps_file(self.steps_filename)
-
-    def insert_steps_before(self, action):
-        if not self.steps_filename:
-            return
-        # Let the user choose a steps file
-        current_filename = self.steps_filename
-        filename = self.filedialog("Choose steps file",
-                                   default_filename=current_filename)
-        if not filename:
-            return
-        self.verify_save()
-
-        step_index = self.current_step_index
-        # Insert steps at position step_index
-        self.insert_steps(filename, step_index)
-        # Go to new steps
-        self.set_step(step_index)
-
-    def insert_steps_after(self, action):
-        if not self.steps_filename:
-            return
-        # Let the user choose a steps file
-        current_filename = self.steps_filename
-        filename = self.filedialog("Choose steps file",
-                                   default_filename=current_filename)
-        if not filename:
-            return
-        self.verify_save()
-
-        step_index = self.current_step_index + 1
-        # Insert new steps at position step_index
-        self.insert_steps(filename, step_index)
-        # Go to new steps
-        self.set_step(step_index)
-
-    def cleanup(self, action):
-        if not self.steps_filename:
-            return
-        if not self.question_yes_no("All unused PPM files will be moved to a"
-                                    " backup directory. Are you sure?",
-                                    "Clean up data directory?"):
-            return
-        # Remember the current step index
-        current_step_index = self.current_step_index
-        # Get the backup dir
-        backup_dir = os.path.join(self.steps_data_dir, "backup")
-        # Create it if it doesn't exist
-        if not os.path.exists(backup_dir):
-            os.makedirs(backup_dir)
-        # Move all files to the backup dir
-        for filename in glob.glob(os.path.join(self.steps_data_dir,
-                                               "*.[Pp][Pp][Mm]")):
-            shutil.move(filename, backup_dir)
-        # Get the used files back
-        for step in self.steps:
-            self.set_state_from_step_lines(step, backup_dir, warn=False)
-            self.get_step_lines(self.steps_data_dir)
-        # Remove the used files from the backup dir
-        used_files = os.listdir(self.steps_data_dir)
-        for filename in os.listdir(backup_dir):
-            if filename in used_files:
-                os.unlink(os.path.join(backup_dir, filename))
-        # Restore step index
-        self.set_step(current_step_index)
-        # Inform the user
-        self.message("All unused PPM files may be found at %s." %
-                     os.path.abspath(backup_dir),
-                     "Clean up data directory")
-
-    # Methods
-
-    def read_steps_file(self, filename):
-        steps = []
-        header = ""
-
-        file = open(filename, "r")
-        for line in file.readlines():
-            words = line.split()
-            if not words:
-                continue
-            if line.startswith("# ----"):
-                continue
-            if words[0] == "step":
-                steps.append("")
-            if steps:
-                steps[-1] += line
-            else:
-                header += line
-        file.close()
-
-        return (steps, header)
-
-    def set_steps_file(self, filename):
-        try:
-            (self.steps, self.header) = self.read_steps_file(filename)
-        except (TypeError, IOError):
-            self.message("Cannot read file %s." % filename, "Error")
-            return
-
-        self.steps_filename = filename
-        self.steps_data_dir = ppm_utils.get_data_dir(filename)
-        # Go to step 0
-        self.set_step(0)
-
-    def set_step(self, index):
-        # Limit index to legal boundaries
-        if index < 0:
-            index = 0
-        if index > len(self.steps) - 1:
-            index = len(self.steps) - 1
-
-        # Enable the menus
-        self.menu_edit.set_sensitive(True)
-        self.menu_insert.set_sensitive(True)
-        self.menu_tools.set_sensitive(True)
-
-        # If no steps exist...
-        if self.steps == []:
-            self.current_step_index = index
-            self.current_step = None
-            # Set window title
-            self.window.set_title("Step Editor -- %s" %
-                                  os.path.basename(self.steps_filename))
-            # Set step entry widget text
-            self.entry_step_num.set_text("")
-            # Clear the state of all widgets
-            self.clear_state()
-            # Disable the main vbox
-            self.main_vbox.set_sensitive(False)
-            return
-
-        self.current_step_index = index
-        self.current_step = self.steps[index]
-        # Set window title
-        self.window.set_title("Step Editor -- %s -- step %d" %
-                              (os.path.basename(self.steps_filename),
-                               index + 1))
-        # Set step entry widget text
-        self.entry_step_num.set_text(str(self.current_step_index + 1))
-        # Load the state from the step lines
-        self.set_state_from_step_lines(self.current_step, self.steps_data_dir)
-        # Enable the main vbox
-        self.main_vbox.set_sensitive(True)
-        # Make sure the step lines in self.current_step are identical to the
-        # output of self.get_step_lines
-        self.current_step = self.get_step_lines()
-
-    def verify_save(self):
-        if not self.steps:
-            return
-        # See if the user changed anything
-        if self.get_step_lines() != self.current_step:
-            if self.question_yes_no("Step contents have been modified."
-                                    " Save step?", "Save changes?"):
-                self.save_step()
-
-    def save_step(self):
-        lines = self.get_step_lines(self.steps_data_dir)
-        if lines != None:
-            self.steps[self.current_step_index] = lines
-            self.current_step = lines
-            self.write_steps_file(self.steps_filename)
-
-    def write_steps_file(self, filename):
-        file = open(filename, "w")
-        file.write(self.header)
-        for step in self.steps:
-            file.write("# " + "-" * 32 + "\n")
-            file.write(step)
-        file.close()
-
-
-if __name__ == "__main__":
-    se = StepEditor()
-    if len(sys.argv) > 1:
-        se.set_steps_file(sys.argv[1])
-    gtk.main()
diff --git a/client/tests/kvm/test_setup.py b/client/tests/kvm/test_setup.py
deleted file mode 100644
index 1125aea..0000000
--- a/client/tests/kvm/test_setup.py
+++ /dev/null
@@ -1,700 +0,0 @@
-"""
-Library to perform pre/post test setup for KVM autotest.
-"""
-import os, shutil, tempfile, re, ConfigParser, glob, inspect
-import logging, time
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-
-
-@error.context_aware
-def cleanup(dir):
-    """
-    If dir is a mountpoint, do what is possible to unmount it. Afterwards,
-    try to remove it.
-
-    @param dir: Directory to be cleaned up.
-    """
-    error.context("cleaning up unattended install directory %s" % dir)
-    if os.path.ismount(dir):
-        utils.run('fuser -k %s' % dir, ignore_status=True)
-        utils.run('umount %s' % dir)
-    if os.path.isdir(dir):
-        shutil.rmtree(dir)
-
-
-@error.context_aware
-def clean_old_image(image):
-    """
-    Clean a leftover image file from previous processes. If it contains a
-    mounted file system, do the proper cleanup procedures.
-
-    @param image: Path to image to be cleaned up.
-    """
-    error.context("cleaning up old leftover image %s" % image)
-    if os.path.exists(image):
-        mtab = open('/etc/mtab', 'r')
-        mtab_contents = mtab.read()
-        mtab.close()
-        if image in mtab_contents:
-            utils.run('fuser -k %s' % image, ignore_status=True)
-            utils.run('umount %s' % image)
-        os.remove(image)
-
-
-def display_attributes(instance):
-    """
-    Inspects a given class instance attributes and displays them, convenient
-    for debugging.
-    """
-    logging.debug("Attributes set:")
-    for member in inspect.getmembers(instance):
-        name, value = member
-        attribute = getattr(instance, name)
-        if not (name.startswith("__") or callable(attribute) or not value):
-            logging.debug("    %s: %s", name, value)
-
-
-class Disk(object):
-    """
-    Abstract class for Disk objects, with the common methods implemented.
-    """
-    def __init__(self):
-        self.path = None
-
-
-    def setup_answer_file(self, filename, contents):
-        utils.open_write_close(os.path.join(self.mount, filename), contents)
-
-
-    def copy_to(self, src):
-        logging.debug("Copying %s to disk image mount", src)
-        dst = os.path.join(self.mount, os.path.basename(src))
-        if os.path.isdir(src):
-            shutil.copytree(src, dst)
-        elif os.path.isfile(src):
-            shutil.copyfile(src, dst)
-
-
-    def close(self):
-        os.chmod(self.path, 0755)
-        cleanup(self.mount)
-        logging.debug("Disk %s successfuly set", self.path)
-
-
-class FloppyDisk(Disk):
-    """
-    Represents a 1.44 MB floppy disk. We can copy files to it, and setup it in
-    convenient ways.
-    """
-    @error.context_aware
-    def __init__(self, path, qemu_img_binary, tmpdir):
-        error.context("Creating unattended install floppy image %s" % path)
-        self.tmpdir = tmpdir
-        self.mount = tempfile.mkdtemp(prefix='floppy_', dir=self.tmpdir)
-        self.virtio_mount = None
-        self.path = path
-        clean_old_image(path)
-        if not os.path.isdir(os.path.dirname(path)):
-            os.makedirs(os.path.dirname(path))
-
-        try:
-            c_cmd = '%s create -f raw %s 1440k' % (qemu_img_binary, path)
-            utils.run(c_cmd)
-            f_cmd = 'mkfs.msdos -s 1 %s' % path
-            utils.run(f_cmd)
-            m_cmd = 'mount -o loop,rw %s %s' % (path, self.mount)
-            utils.run(m_cmd)
-        except error.CmdError, e:
-            cleanup(self.mount)
-            raise
-
-
-    def _copy_virtio_drivers(self, virtio_floppy):
-        """
-        Copy the virtio drivers on the virtio floppy to the install floppy.
-
-        1) Mount the floppy containing the viostor drivers
-        2) Copy its contents to the root of the install floppy
-        """
-        virtio_mount = tempfile.mkdtemp(prefix='virtio_floppy_',
-                                        dir=self.tmpdir)
-
-        pwd = os.getcwd()
-        try:
-            m_cmd = 'mount -o loop %s %s' % (virtio_floppy, virtio_mount)
-            utils.run(m_cmd)
-            os.chdir(virtio_mount)
-            path_list = glob.glob('*')
-            for path in path_list:
-                self.copy_to(path)
-        finally:
-            os.chdir(pwd)
-            cleanup(virtio_mount)
-
-
-    def setup_virtio_win2003(self, virtio_floppy, virtio_oemsetup_id):
-        """
-        Setup the install floppy with the virtio storage drivers, win2003 style.
-
-        Win2003 and WinXP depend on the file txtsetup.oem file to install
-        the virtio drivers from the floppy, which is a .ini file.
-        Process:
-
-        1) Copy the virtio drivers on the virtio floppy to the install floppy
-        2) Parse the ini file with config parser
-        3) Modify the identifier of the default session that is going to be
-           executed on the config parser object
-        4) Re-write the config file to the disk
-        """
-        self._copy_virtio_drivers(virtio_floppy)
-        txtsetup_oem = os.path.join(self.mount, 'txtsetup.oem')
-        if not os.path.isfile(txtsetup_oem):
-            raise IOError('File txtsetup.oem not found on the install '
-                          'floppy. Please verify if your floppy virtio '
-                          'driver image has this file')
-        parser = ConfigParser.ConfigParser()
-        parser.read(txtsetup_oem)
-        if not parser.has_section('Defaults'):
-            raise ValueError('File txtsetup.oem does not have the session '
-                             '"Defaults". Please check txtsetup.oem')
-        default_driver = parser.get('Defaults', 'SCSI')
-        if default_driver != virtio_oemsetup_id:
-            parser.set('Defaults', 'SCSI', virtio_oemsetup_id)
-            fp = open(txtsetup_oem, 'w')
-            parser.write(fp)
-            fp.close()
-
-
-    def setup_virtio_win2008(self, virtio_floppy):
-        """
-        Setup the install floppy with the virtio storage drivers, win2008 style.
-
-        Win2008, Vista and 7 require people to point out the path to the drivers
-        on the unattended file, so we just need to copy the drivers to the
-        driver floppy disk.
-        Process:
-
-        1) Copy the virtio drivers on the virtio floppy to the install floppy
-        """
-        self._copy_virtio_drivers(virtio_floppy)
-
-
-class CdromDisk(Disk):
-    """
-    Represents a CDROM disk that we can master according to our needs.
-    """
-    def __init__(self, path, tmpdir):
-        self.mount = tempfile.mkdtemp(prefix='cdrom_unattended_', dir=tmpdir)
-        self.path = path
-        clean_old_image(path)
-        if not os.path.isdir(os.path.dirname(path)):
-            os.makedirs(os.path.dirname(path))
-
-
-    @error.context_aware
-    def close(self):
-        error.context("Creating unattended install CD image %s" % self.path)
-        g_cmd = ('mkisofs -o %s -max-iso9660-filenames '
-                 '-relaxed-filenames -D --input-charset iso8859-1 '
-                 '%s' % (self.path, self.mount))
-        utils.run(g_cmd)
-
-        os.chmod(self.path, 0755)
-        cleanup(self.mount)
-        logging.debug("unattended install CD image %s successfuly created",
-                      self.path)
-
-
-class UnattendedInstallConfig(object):
-    """
-    Creates a floppy disk image that will contain a config file for unattended
-    OS install. The parameters to the script are retrieved from environment
-    variables.
-    """
-    def __init__(self, test, params):
-        """
-        Sets class atributes from test parameters.
-
-        @param test: KVM test object.
-        @param params: Dictionary with test parameters.
-        """
-        root_dir = test.bindir
-        images_dir = os.path.join(root_dir, 'images')
-        self.deps_dir = os.path.join(root_dir, 'deps')
-        self.unattended_dir = os.path.join(root_dir, 'unattended')
-
-        attributes = ['kernel_args', 'finish_program', 'cdrom_cd1',
-                      'unattended_file', 'medium', 'url', 'kernel', 'initrd',
-                      'nfs_server', 'nfs_dir', 'install_virtio', 'floppy',
-                      'cdrom_unattended', 'boot_path', 'extra_params',
-                      'qemu_img_binary', 'cdkey', 'finish_program']
-
-        for a in attributes:
-            setattr(self, a, params.get(a, ''))
-
-        if self.install_virtio == 'yes':
-            v_attributes = ['virtio_floppy', 'virtio_storage_path',
-                            'virtio_network_path', 'virtio_oemsetup_id',
-                            'virtio_network_installer']
-            for va in v_attributes:
-                setattr(self, va, params.get(va, ''))
-
-        self.tmpdir = test.tmpdir
-
-        if getattr(self, 'unattended_file'):
-            self.unattended_file = os.path.join(root_dir, self.unattended_file)
-
-        if getattr(self, 'finish_program'):
-            self.finish_program = os.path.join(root_dir, self.finish_program)
-
-        if getattr(self, 'qemu_img_binary'):
-            if not os.path.isfile(getattr(self, 'qemu_img_binary')):
-                self.qemu_img_binary = os.path.join(root_dir,
-                                                    self.qemu_img_binary)
-
-        if getattr(self, 'cdrom_cd1'):
-            self.cdrom_cd1 = os.path.join(root_dir, self.cdrom_cd1)
-        self.cdrom_cd1_mount = tempfile.mkdtemp(prefix='cdrom_cd1_',
-                                                dir=self.tmpdir)
-        if self.medium == 'nfs':
-            self.nfs_mount = tempfile.mkdtemp(prefix='nfs_',
-                                              dir=self.tmpdir)
-
-        if getattr(self, 'floppy'):
-            self.floppy = os.path.join(root_dir, self.floppy)
-            if not os.path.isdir(os.path.dirname(self.floppy)):
-                os.makedirs(os.path.dirname(self.floppy))
-
-        self.image_path = os.path.dirname(self.kernel)
-
-
-    @error.context_aware
-    def render_answer_file(self):
-        """
-        Replace KVM_TEST_CDKEY (in the unattended file) with the cdkey
-        provided for this test and replace the KVM_TEST_MEDIUM with
-        the tree url or nfs address provided for this test.
-
-        @return: Answer file contents
-        """
-        error.base_context('Rendering final answer file')
-        error.context('Reading answer file %s' % self.unattended_file)
-        unattended_contents = open(self.unattended_file).read()
-        dummy_cdkey_re = r'\bKVM_TEST_CDKEY\b'
-        if re.search(dummy_cdkey_re, unattended_contents):
-            if self.cdkey:
-                unattended_contents = re.sub(dummy_cdkey_re, self.cdkey,
-                                             unattended_contents)
-            else:
-                print ("WARNING: 'cdkey' required but not specified for "
-                       "this unattended installation")
-
-        dummy_medium_re = r'\bKVM_TEST_MEDIUM\b'
-        if self.medium == "cdrom":
-            content = "cdrom"
-        elif self.medium == "url":
-            content = "url --url %s" % self.url
-        elif self.medium == "nfs":
-            content = "nfs --server=%s --dir=%s" % (self.nfs_server,
-                                                    self.nfs_dir)
-        else:
-            raise ValueError("Unexpected installation medium %s" % self.url)
-
-        unattended_contents = re.sub(dummy_medium_re, content,
-                                     unattended_contents)
-
-        def replace_virtio_key(contents, dummy_re, attribute_name):
-            """
-            Replace a virtio dummy string with contents.
-
-            If install_virtio is not set, replace it with a dummy string.
-
-            @param contents: Contents of the unattended file
-            @param dummy_re: Regular expression used to search on the.
-                    unattended file contents.
-            @param env: Name of the environment variable.
-            """
-            dummy_path = "C:"
-            driver = getattr(self, attribute_name, '')
-
-            if re.search(dummy_re, contents):
-                if self.install_virtio == "yes":
-                    if driver.endswith("msi"):
-                        driver = 'msiexec /passive /package ' + driver
-                    else:
-                        try:
-                            # Let's escape windows style paths properly
-                            drive, path = driver.split(":")
-                            driver = drive + ":" + re.escape(path)
-                        except:
-                            pass
-                    contents = re.sub(dummy_re, driver, contents)
-                else:
-                    contents = re.sub(dummy_re, dummy_path, contents)
-            return contents
-
-        vdict = {r'\bKVM_TEST_STORAGE_DRIVER_PATH\b':
-                 'virtio_storage_path',
-                 r'\bKVM_TEST_NETWORK_DRIVER_PATH\b':
-                 'virtio_network_path',
-                 r'\bKVM_TEST_VIRTIO_NETWORK_INSTALLER\b':
-                 'virtio_network_installer_path'}
-
-        for vkey in vdict:
-            unattended_contents = replace_virtio_key(
-                                                   contents=unattended_contents,
-                                                   dummy_re=vkey,
-                                                   attribute_name=vdict[vkey])
-
-        logging.debug("Unattended install contents:")
-        for line in unattended_contents.splitlines():
-            logging.debug(line)
-        return unattended_contents
-
-
-    def setup_boot_disk(self):
-        answer_contents = self.render_answer_file()
-
-        if self.unattended_file.endswith('.sif'):
-            dest_fname = 'winnt.sif'
-            setup_file = 'winnt.bat'
-            boot_disk = FloppyDisk(self.floppy, self.qemu_img_binary,
-                                   self.tmpdir)
-            boot_disk.setup_answer_file(dest_fname, answer_contents)
-            setup_file_path = os.path.join(self.unattended_dir, setup_file)
-            boot_disk.copy_to(setup_file_path)
-            if self.install_virtio == "yes":
-                boot_disk.setup_virtio_win2003(self.virtio_floppy,
-                                               self.virtio_oemsetup_id)
-            boot_disk.copy_to(self.finish_program)
-
-        elif self.unattended_file.endswith('.ks'):
-            # Red Hat kickstart install
-            dest_fname = 'ks.cfg'
-            if self.cdrom_unattended:
-                boot_disk = CdromDisk(self.cdrom_unattended, self.tmpdir)
-            elif self.floppy:
-                boot_disk = FloppyDisk(self.floppy, self.qemu_img_binary,
-                                       self.tmpdir)
-            else:
-                raise ValueError("Neither cdrom_unattended nor floppy set "
-                                 "on the config file, please verify")
-            boot_disk.setup_answer_file(dest_fname, answer_contents)
-
-        elif self.unattended_file.endswith('.xml'):
-            if "autoyast" in self.extra_params:
-                # SUSE autoyast install
-                dest_fname = "autoinst.xml"
-                if self.cdrom_unattended:
-                    boot_disk = CdromDisk(self.cdrom_unattended)
-                elif self.floppy:
-                    boot_disk = FloppyDisk(self.floppy, self.qemu_img_binary,
-                                           self.tmpdir)
-                else:
-                    raise ValueError("Neither cdrom_unattended nor floppy set "
-                                     "on the config file, please verify")
-                boot_disk.setup_answer_file(dest_fname, answer_contents)
-
-            else:
-                # Windows unattended install
-                dest_fname = "autounattend.xml"
-                boot_disk = FloppyDisk(self.floppy, self.qemu_img_binary,
-                                       self.tmpdir)
-                boot_disk.setup_answer_file(dest_fname, answer_contents)
-                if self.install_virtio == "yes":
-                    boot_disk.setup_virtio_win2008(self.virtio_floppy)
-                boot_disk.copy_to(self.finish_program)
-
-        else:
-            raise ValueError('Unknown answer file type: %s' %
-                             self.unattended_file)
-
-        boot_disk.close()
-
-
-    @error.context_aware
-    def setup_cdrom(self):
-        """
-        Mount cdrom and copy vmlinuz and initrd.img.
-        """
-        error.context("Copying vmlinuz and initrd.img from install cdrom %s" %
-                      self.cdrom_cd1)
-        m_cmd = ('mount -t iso9660 -v -o loop,ro %s %s' %
-                 (self.cdrom_cd1, self.cdrom_cd1_mount))
-        utils.run(m_cmd)
-
-        try:
-            if not os.path.isdir(self.image_path):
-                os.makedirs(self.image_path)
-            kernel_fetch_cmd = ("cp %s/%s/%s %s" %
-                                (self.cdrom_cd1_mount, self.boot_path,
-                                 os.path.basename(self.kernel), self.kernel))
-            utils.run(kernel_fetch_cmd)
-            initrd_fetch_cmd = ("cp %s/%s/%s %s" %
-                                (self.cdrom_cd1_mount, self.boot_path,
-                                 os.path.basename(self.initrd), self.initrd))
-            utils.run(initrd_fetch_cmd)
-        finally:
-            cleanup(self.cdrom_cd1_mount)
-
-
-    @error.context_aware
-    def setup_url(self):
-        """
-        Download the vmlinuz and initrd.img from URL.
-        """
-        error.context("downloading vmlinuz and initrd.img from %s" % self.url)
-        os.chdir(self.image_path)
-        kernel_fetch_cmd = "wget -q %s/%s/%s" % (self.url, self.boot_path,
-                                                 os.path.basename(self.kernel))
-        initrd_fetch_cmd = "wget -q %s/%s/%s" % (self.url, self.boot_path,
-                                                 os.path.basename(self.initrd))
-
-        if os.path.exists(self.kernel):
-            os.remove(self.kernel)
-        if os.path.exists(self.initrd):
-            os.remove(self.initrd)
-
-        utils.run(kernel_fetch_cmd)
-        utils.run(initrd_fetch_cmd)
-
-
-    def setup_nfs(self):
-        """
-        Copy the vmlinuz and initrd.img from nfs.
-        """
-        error.context("copying the vmlinuz and initrd.img from NFS share")
-
-        m_cmd = ("mount %s:%s %s -o ro" %
-                 (self.nfs_server, self.nfs_dir, self.nfs_mount))
-        utils.run(m_cmd)
-
-        try:
-            kernel_fetch_cmd = ("cp %s/%s/%s %s" %
-                                (self.nfs_mount, self.boot_path,
-                                os.path.basename(self.kernel), self.image_path))
-            utils.run(kernel_fetch_cmd)
-            initrd_fetch_cmd = ("cp %s/%s/%s %s" %
-                                (self.nfs_mount, self.boot_path,
-                                os.path.basename(self.initrd), self.image_path))
-            utils.run(initrd_fetch_cmd)
-        finally:
-            cleanup(self.nfs_mount)
-
-
-    def setup(self):
-        """
-        Configure the environment for unattended install.
-
-        Uses an appropriate strategy according to each install model.
-        """
-        logging.info("Starting unattended install setup")
-        display_attributes(self)
-
-        if self.unattended_file and (self.floppy or self.cdrom_unattended):
-            self.setup_boot_disk()
-        if self.medium == "cdrom":
-            if self.kernel and self.initrd:
-                self.setup_cdrom()
-        elif self.medium == "url":
-            self.setup_url()
-        elif self.medium == "nfs":
-            self.setup_nfs()
-        else:
-            raise ValueError("Unexpected installation method %s" %
-                             self.medium)
-
-
-class HugePageConfig(object):
-    def __init__(self, params):
-        """
-        Gets environment variable values and calculates the target number
-        of huge memory pages.
-
-        @param params: Dict like object containing parameters for the test.
-        """
-        self.vms = len(params.objects("vms"))
-        self.mem = int(params.get("mem"))
-        self.max_vms = int(params.get("max_vms", 0))
-        self.hugepage_path = '/mnt/kvm_hugepage'
-        self.hugepage_size = self.get_hugepage_size()
-        self.target_hugepages = self.get_target_hugepages()
-        self.kernel_hp_file = '/proc/sys/vm/nr_hugepages'
-
-
-    def get_hugepage_size(self):
-        """
-        Get the current system setting for huge memory page size.
-        """
-        meminfo = open('/proc/meminfo', 'r').readlines()
-        huge_line_list = [h for h in meminfo if h.startswith("Hugepagesize")]
-        try:
-            return int(huge_line_list[0].split()[1])
-        except ValueError, e:
-            raise ValueError("Could not get huge page size setting from "
-                             "/proc/meminfo: %s" % e)
-
-
-    def get_target_hugepages(self):
-        """
-        Calculate the target number of hugepages for testing purposes.
-        """
-        if self.vms < self.max_vms:
-            self.vms = self.max_vms
-        # memory of all VMs plus qemu overhead of 64MB per guest
-        vmsm = (self.vms * self.mem) + (self.vms * 64)
-        return int(vmsm * 1024 / self.hugepage_size)
-
-
-    @error.context_aware
-    def set_hugepages(self):
-        """
-        Sets the hugepage limit to the target hugepage value calculated.
-        """
-        error.context("setting hugepages limit to %s" % self.target_hugepages)
-        hugepage_cfg = open(self.kernel_hp_file, "r+")
-        hp = hugepage_cfg.readline()
-        while int(hp) < self.target_hugepages:
-            loop_hp = hp
-            hugepage_cfg.write(str(self.target_hugepages))
-            hugepage_cfg.flush()
-            hugepage_cfg.seek(0)
-            hp = int(hugepage_cfg.readline())
-            if loop_hp == hp:
-                raise ValueError("Cannot set the kernel hugepage setting "
-                                 "to the target value of %d hugepages." %
-                                 self.target_hugepages)
-        hugepage_cfg.close()
-        logging.debug("Successfuly set %s large memory pages on host ",
-                      self.target_hugepages)
-
-
-    @error.context_aware
-    def mount_hugepage_fs(self):
-        """
-        Verify if there's a hugetlbfs mount set. If there's none, will set up
-        a hugetlbfs mount using the class attribute that defines the mount
-        point.
-        """
-        error.context("mounting hugepages path")
-        if not os.path.ismount(self.hugepage_path):
-            if not os.path.isdir(self.hugepage_path):
-                os.makedirs(self.hugepage_path)
-            cmd = "mount -t hugetlbfs none %s" % self.hugepage_path
-            utils.system(cmd)
-
-
-    def setup(self):
-        logging.debug("Number of VMs this test will use: %d", self.vms)
-        logging.debug("Amount of memory used by each vm: %s", self.mem)
-        logging.debug("System setting for large memory page size: %s",
-                      self.hugepage_size)
-        logging.debug("Number of large memory pages needed for this test: %s",
-                      self.target_hugepages)
-        self.set_hugepages()
-        self.mount_hugepage_fs()
-
-
-    @error.context_aware
-    def cleanup(self):
-        error.context("trying to dealocate hugepage memory")
-        try:
-            utils.system("umount %s" % self.hugepage_path)
-        except error.CmdError:
-            return
-        utils.system("echo 0 > %s" % self.kernel_hp_file)
-        logging.debug("Hugepage memory successfuly dealocated")
-
-
-class EnospcConfig(object):
-    """
-    Performs setup for the test enospc. This is a borg class, similar to a
-    singleton. The idea is to keep state in memory for when we call cleanup()
-    on postprocessing.
-    """
-    __shared_state = {}
-    def __init__(self, test, params):
-        self.__dict__ = self.__shared_state
-        root_dir = test.bindir
-        self.tmpdir = test.tmpdir
-        self.qemu_img_binary = params.get('qemu_img_binary')
-        if not os.path.isfile(self.qemu_img_binary):
-            self.qemu_img_binary = os.path.join(root_dir,
-                                                self.qemu_img_binary)
-        self.raw_file_path = os.path.join(self.tmpdir, 'enospc.raw')
-        # Here we're trying to choose fairly explanatory names so it's less
-        # likely that we run in conflict with other devices in the system
-        self.vgtest_name = params.get("vgtest_name")
-        self.lvtest_name = params.get("lvtest_name")
-        self.lvtest_device = "/dev/%s/%s" % (self.vgtest_name, self.lvtest_name)
-        image_dir = os.path.dirname(params.get("image_name"))
-        self.qcow_file_path = os.path.join(image_dir, 'enospc.qcow2')
-        try:
-            getattr(self, 'loopback')
-        except AttributeError:
-            self.loopback = ''
-
-
-    @error.context_aware
-    def setup(self):
-        logging.debug("Starting enospc setup")
-        error.context("performing enospc setup")
-        display_attributes(self)
-        # Double check if there aren't any leftovers
-        self.cleanup()
-        try:
-            utils.run("%s create -f raw %s 10G" %
-                      (self.qemu_img_binary, self.raw_file_path))
-            # Associate a loopback device with the raw file.
-            # Subject to race conditions, that's why try here to associate
-            # it with the raw file as quickly as possible
-            l_result = utils.run("losetup -f")
-            utils.run("losetup -f %s" % self.raw_file_path)
-            self.loopback = l_result.stdout.strip()
-            # Add the loopback device configured to the list of pvs
-            # recognized by LVM
-            utils.run("pvcreate %s" % self.loopback)
-            utils.run("vgcreate %s %s" % (self.vgtest_name, self.loopback))
-            # Create an lv inside the vg with starting size of 200M
-            utils.run("lvcreate -L 200M -n %s %s" %
-                      (self.lvtest_name, self.vgtest_name))
-            # Create a 10GB qcow2 image in the logical volume
-            utils.run("%s create -f qcow2 %s 10G" %
-                      (self.qemu_img_binary, self.lvtest_device))
-            # Let's symlink the logical volume with the image name that autotest
-            # expects this device to have
-            os.symlink(self.lvtest_device, self.qcow_file_path)
-        except Exception, e:
-            self.cleanup()
-            raise
-
-    @error.context_aware
-    def cleanup(self):
-        error.context("performing enospc cleanup")
-        if os.path.isfile(self.lvtest_device):
-            utils.run("fuser -k %s" % self.lvtest_device)
-            time.sleep(2)
-        l_result = utils.run("lvdisplay")
-        # Let's remove all volumes inside the volume group created
-        if self.lvtest_name in l_result.stdout:
-            utils.run("lvremove -f %s" % self.lvtest_device)
-        # Now, removing the volume group itself
-        v_result = utils.run("vgdisplay")
-        if self.vgtest_name in v_result.stdout:
-            utils.run("vgremove -f %s" % self.vgtest_name)
-        # Now, if we can, let's remove the physical volume from lvm list
-        if self.loopback:
-            p_result = utils.run("pvdisplay")
-            if self.loopback in p_result.stdout:
-                utils.run("pvremove -f %s" % self.loopback)
-        l_result = utils.run('losetup -a')
-        if self.loopback and (self.loopback in l_result.stdout):
-            try:
-                utils.run("losetup -d %s" % self.loopback)
-            except error.CmdError:
-                logging.error("Failed to liberate loopback %s", self.loopback)
-        if os.path.islink(self.qcow_file_path):
-            os.remove(self.qcow_file_path)
-        if os.path.isfile(self.raw_file_path):
-            os.remove(self.raw_file_path)
-- 
1.7.4

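A note on the hugepage sizing logic being moved above: HugePageConfig.get_target_hugepages() sizes the pool as the memory of all guests plus a 64 MB qemu overhead per guest, converted to pages using the host's Hugepagesize (in KB) from /proc/meminfo. A standalone sketch of that arithmetic (the helper name and signature are illustrative, not part of the patch; the 64 MB constant comes from the original comment):

```python
def target_hugepages(num_vms, mem_mb, hugepage_size_kb, max_vms=0):
    """Sketch of HugePageConfig.get_target_hugepages().

    num_vms/max_vms: configured and maximum guest counts
    mem_mb: memory per guest, in MB
    hugepage_size_kb: Hugepagesize from /proc/meminfo, in KB
    """
    vms = max(num_vms, max_vms)
    # memory of all VMs plus qemu overhead of 64MB per guest
    total_mb = vms * mem_mb + vms * 64
    # convert MB -> KB, then divide by the page size in KB
    return int(total_mb * 1024 / hugepage_size_kb)

# e.g. 2 guests with 512 MB each on a host with 2048 KB hugepages:
# (2*512 + 2*64) * 1024 / 2048 = 576 pages
```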
^ permalink raw reply related	[flat|nested] 12+ messages in thread
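The render_answer_file() method in the diff above works by substituting KVM_TEST_* placeholder tokens (KVM_TEST_CDKEY, KVM_TEST_MEDIUM, the virtio driver paths) in the unattended answer file with re.sub. A minimal sketch of the KVM_TEST_MEDIUM branch in isolation (the token name and option syntax are taken from the code above; the function name and sample text are invented for illustration):

```python
import re

def render_medium(contents, medium, url='', nfs_server='', nfs_dir=''):
    """Sketch of the KVM_TEST_MEDIUM substitution from render_answer_file()."""
    if medium == "cdrom":
        replacement = "cdrom"
    elif medium == "url":
        replacement = "url --url %s" % url
    elif medium == "nfs":
        replacement = "nfs --server=%s --dir=%s" % (nfs_server, nfs_dir)
    else:
        # (the original raised with self.url in this message; medium is clearer)
        raise ValueError("Unexpected installation medium %s" % medium)
    # \b keeps the dummy token from matching inside longer identifiers
    return re.sub(r'\bKVM_TEST_MEDIUM\b', replacement, contents)

# 'install KVM_TEST_MEDIUM' with medium='nfs' becomes
# 'install nfs --server=<server> --dir=<dir>'
```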

* [PATCH 6/7] KVM test: Try to load subtests on a shared tests location
  2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
                   ` (4 preceding siblings ...)
  2011-03-09  9:21 ` [PATCH 5/7] KVM test: Removing the old libraries and programs Lucas Meneghel Rodrigues
@ 2011-03-09  9:21 ` Lucas Meneghel Rodrigues
  2011-03-09  9:21 ` [PATCH 7/7] KVM test: Moving generic tests to common tests area Lucas Meneghel Rodrigues
  2011-03-09 11:54 ` [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
  7 siblings, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09  9:21 UTC (permalink / raw)
  To: autotest; +Cc: kvm

As we have several subtests that can be shared among different
virtualization tests (kvm, xen), change kvm.py to try loading
subtests from the common area (planned to be client/virt/tests) first,
then fall back to the KVM-specific test area (client/tests/kvm/tests).

Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
 client/tests/kvm/kvm.py |   18 +++++++++++++-----
 1 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index 6981b1b..54535ae 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -54,11 +54,19 @@ class kvm(test.test):
                     # test type
                     t_type = params.get("type")
                     # Verify if we have the correspondent source file for it
-                    subtest_dir = os.path.join(self.bindir, "tests")
-                    module_path = os.path.join(subtest_dir, "%s.py" % t_type)
-                    if not os.path.isfile(module_path):
-                        raise error.TestError("No %s.py test file found" %
-                                              t_type)
+                    virt_dir = os.path.dirname(virt_utils.__file__)
+                    subtest_dir_virt = os.path.join(virt_dir, "tests")
+                    subtest_dir_kvm = os.path.join(self.bindir, "tests")
+                    subtest_dir = None
+                    for d in [subtest_dir_virt, subtest_dir_kvm]:
+                        module_path = os.path.join(d, "%s.py" % t_type)
+                        if os.path.isfile(module_path):
+                            subtest_dir = d
+                            break
+                    if subtest_dir is None:
+                        raise error.TestError("Could not find test file %s.py "
+                                              "in either %s or %s" % (t_type,
+                                              subtest_dir_virt, subtest_dir_kvm))
                     # Load the test module
                     f, p, d = imp.find_module(t_type, [subtest_dir])
                     test_module = imp.load_module(t_type, f, p, d)
-- 
1.7.4

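The lookup order this patch adds to kvm.py can be sketched independently of the autotest plumbing; the function name below is illustrative, and the two directory arguments stand in for the shared client/virt/tests and KVM-specific client/tests/kvm/tests locations:

```python
import os

def find_subtest_dir(t_type, shared_dir, kvm_dir):
    """Return the first directory containing <t_type>.py, shared area first.

    Mirrors the fallback added to kvm.py: the common virt tests area
    wins over the KVM-specific tests area when both provide the module.
    """
    for d in (shared_dir, kvm_dir):
        if os.path.isfile(os.path.join(d, "%s.py" % t_type)):
            return d
    raise LookupError("Could not find test file %s.py in either %s or %s"
                      % (t_type, shared_dir, kvm_dir))
```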
^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 7/7] KVM test: Moving generic tests to common tests area
  2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
                   ` (5 preceding siblings ...)
  2011-03-09  9:21 ` [PATCH 6/7] KVM test: Try to load subtests on a shared tests location Lucas Meneghel Rodrigues
@ 2011-03-09  9:21 ` Lucas Meneghel Rodrigues
  2011-03-09 11:54 ` [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
  7 siblings, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09  9:21 UTC (permalink / raw)
  To: autotest; +Cc: kvm

So other virt tests can benefit from them. The tests that
don't depend on any KVM-specific VM features (i.e., monitors)
were moved. As soon as we abstract some of the monitor
functionality, the KVM-specific tests can be rewritten in
a virt-tech-agnostic way and moved as well.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
 client/tests/kvm/tests/autotest.py            |   25 ---
 client/tests/kvm/tests/boot.py                |   26 ---
 client/tests/kvm/tests/clock_getres.py        |   37 ----
 client/tests/kvm/tests/ethtool.py             |  235 ---------------------
 client/tests/kvm/tests/file_transfer.py       |   84 --------
 client/tests/kvm/tests/guest_s4.py            |   76 -------
 client/tests/kvm/tests/guest_test.py          |   80 -------
 client/tests/kvm/tests/image_copy.py          |   45 ----
 client/tests/kvm/tests/iofuzz.py              |  136 ------------
 client/tests/kvm/tests/ioquit.py              |   31 ---
 client/tests/kvm/tests/iozone_windows.py      |   40 ----
 client/tests/kvm/tests/jumbo.py               |  127 ------------
 client/tests/kvm/tests/kdump.py               |   75 -------
 client/tests/kvm/tests/linux_s3.py            |   41 ----
 client/tests/kvm/tests/mac_change.py          |   60 ------
 client/tests/kvm/tests/multicast.py           |   90 --------
 client/tests/kvm/tests/netperf.py             |   90 --------
 client/tests/kvm/tests/nic_promisc.py         |   39 ----
 client/tests/kvm/tests/nicdriver_unload.py    |   56 -----
 client/tests/kvm/tests/ping.py                |   73 -------
 client/tests/kvm/tests/pxe.py                 |   29 ---
 client/tests/kvm/tests/shutdown.py            |   43 ----
 client/tests/kvm/tests/stress_boot.py         |   53 -----
 client/tests/kvm/tests/vlan.py                |  175 ----------------
 client/tests/kvm/tests/whql_client_install.py |  136 ------------
 client/tests/kvm/tests/whql_submission.py     |  275 -------------------------
 client/tests/kvm/tests/yum_update.py          |   49 -----
 client/virt/tests/autotest.py                 |   25 +++
 client/virt/tests/boot.py                     |   26 +++
 client/virt/tests/clock_getres.py             |   37 ++++
 client/virt/tests/ethtool.py                  |  235 +++++++++++++++++++++
 client/virt/tests/file_transfer.py            |   84 ++++++++
 client/virt/tests/guest_s4.py                 |   76 +++++++
 client/virt/tests/guest_test.py               |   80 +++++++
 client/virt/tests/image_copy.py               |   45 ++++
 client/virt/tests/iofuzz.py                   |  136 ++++++++++++
 client/virt/tests/ioquit.py                   |   31 +++
 client/virt/tests/iozone_windows.py           |   40 ++++
 client/virt/tests/jumbo.py                    |  127 ++++++++++++
 client/virt/tests/kdump.py                    |   75 +++++++
 client/virt/tests/linux_s3.py                 |   41 ++++
 client/virt/tests/mac_change.py               |   60 ++++++
 client/virt/tests/multicast.py                |   90 ++++++++
 client/virt/tests/netperf.py                  |   90 ++++++++
 client/virt/tests/nic_promisc.py              |   39 ++++
 client/virt/tests/nicdriver_unload.py         |   56 +++++
 client/virt/tests/ping.py                     |   73 +++++++
 client/virt/tests/pxe.py                      |   29 +++
 client/virt/tests/shutdown.py                 |   43 ++++
 client/virt/tests/stress_boot.py              |   53 +++++
 client/virt/tests/vlan.py                     |  175 ++++++++++++++++
 client/virt/tests/whql_client_install.py      |  136 ++++++++++++
 client/virt/tests/whql_submission.py          |  275 +++++++++++++++++++++++++
 client/virt/tests/yum_update.py               |   49 +++++
 54 files changed, 2226 insertions(+), 2226 deletions(-)
 delete mode 100644 client/tests/kvm/tests/autotest.py
 delete mode 100644 client/tests/kvm/tests/boot.py
 delete mode 100644 client/tests/kvm/tests/clock_getres.py
 delete mode 100644 client/tests/kvm/tests/ethtool.py
 delete mode 100644 client/tests/kvm/tests/file_transfer.py
 delete mode 100644 client/tests/kvm/tests/guest_s4.py
 delete mode 100644 client/tests/kvm/tests/guest_test.py
 delete mode 100644 client/tests/kvm/tests/image_copy.py
 delete mode 100644 client/tests/kvm/tests/iofuzz.py
 delete mode 100644 client/tests/kvm/tests/ioquit.py
 delete mode 100644 client/tests/kvm/tests/iozone_windows.py
 delete mode 100644 client/tests/kvm/tests/jumbo.py
 delete mode 100644 client/tests/kvm/tests/kdump.py
 delete mode 100644 client/tests/kvm/tests/linux_s3.py
 delete mode 100644 client/tests/kvm/tests/mac_change.py
 delete mode 100644 client/tests/kvm/tests/multicast.py
 delete mode 100644 client/tests/kvm/tests/netperf.py
 delete mode 100644 client/tests/kvm/tests/nic_promisc.py
 delete mode 100644 client/tests/kvm/tests/nicdriver_unload.py
 delete mode 100644 client/tests/kvm/tests/ping.py
 delete mode 100644 client/tests/kvm/tests/pxe.py
 delete mode 100644 client/tests/kvm/tests/shutdown.py
 delete mode 100644 client/tests/kvm/tests/stress_boot.py
 delete mode 100644 client/tests/kvm/tests/vlan.py
 delete mode 100644 client/tests/kvm/tests/whql_client_install.py
 delete mode 100644 client/tests/kvm/tests/whql_submission.py
 delete mode 100644 client/tests/kvm/tests/yum_update.py
 create mode 100644 client/virt/tests/autotest.py
 create mode 100644 client/virt/tests/boot.py
 create mode 100644 client/virt/tests/clock_getres.py
 create mode 100644 client/virt/tests/ethtool.py
 create mode 100644 client/virt/tests/file_transfer.py
 create mode 100644 client/virt/tests/guest_s4.py
 create mode 100644 client/virt/tests/guest_test.py
 create mode 100644 client/virt/tests/image_copy.py
 create mode 100644 client/virt/tests/iofuzz.py
 create mode 100644 client/virt/tests/ioquit.py
 create mode 100644 client/virt/tests/iozone_windows.py
 create mode 100644 client/virt/tests/jumbo.py
 create mode 100644 client/virt/tests/kdump.py
 create mode 100644 client/virt/tests/linux_s3.py
 create mode 100644 client/virt/tests/mac_change.py
 create mode 100644 client/virt/tests/multicast.py
 create mode 100644 client/virt/tests/netperf.py
 create mode 100644 client/virt/tests/nic_promisc.py
 create mode 100644 client/virt/tests/nicdriver_unload.py
 create mode 100644 client/virt/tests/ping.py
 create mode 100644 client/virt/tests/pxe.py
 create mode 100644 client/virt/tests/shutdown.py
 create mode 100644 client/virt/tests/stress_boot.py
 create mode 100644 client/virt/tests/vlan.py
 create mode 100644 client/virt/tests/whql_client_install.py
 create mode 100644 client/virt/tests/whql_submission.py
 create mode 100644 client/virt/tests/yum_update.py

diff --git a/client/tests/kvm/tests/autotest.py b/client/tests/kvm/tests/autotest.py
deleted file mode 100644
index cdea31a..0000000
--- a/client/tests/kvm/tests/autotest.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import os
-from autotest_lib.client.virt import virt_test_utils
-
-
-def run_autotest(test, params, env):
-    """
-    Run an autotest test inside a guest.
-
-    @param test: kvm test object.
-    @param params: Dictionary with test parameters.
-    @param env: Dictionary with the test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session = vm.wait_for_login(timeout=timeout)
-
-    # Collect test parameters
-    timeout = int(params.get("test_timeout", 300))
-    control_path = os.path.join(test.bindir, "autotest_control",
-                                params.get("test_control_file"))
-    outputdir = test.outputdir
-
-    virt_test_utils.run_autotest(vm, session, control_path, timeout, outputdir,
-                                 params)
diff --git a/client/tests/kvm/tests/boot.py b/client/tests/kvm/tests/boot.py
deleted file mode 100644
index 4fabcd5..0000000
--- a/client/tests/kvm/tests/boot.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import time
-
-
-def run_boot(test, params, env):
-    """
-    KVM reboot test:
-    1) Log into a guest
-    2) Send a reboot command or a system_reset monitor command (optional)
-    3) Wait until the guest is up again
-    4) Log into the guest to verify it's up again
-
-    @param test: kvm test object
-    @param params: Dictionary with the test parameters
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = float(params.get("login_timeout", 240))
-    session = vm.wait_for_login(timeout=timeout)
-
-    if params.get("reboot_method"):
-        if params["reboot_method"] == "system_reset":
-            time.sleep(int(params.get("sleep_before_reset", 10)))
-        session = vm.reboot(session, params["reboot_method"], 0, timeout)
-
-    session.close()
diff --git a/client/tests/kvm/tests/clock_getres.py b/client/tests/kvm/tests/clock_getres.py
deleted file mode 100644
index d1baf88..0000000
--- a/client/tests/kvm/tests/clock_getres.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import logging, os
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-
-
-def run_clock_getres(test, params, env):
-    """
-    Verify if guests using kvm-clock as the time source have a sane clock
-    resolution.
-
-    @param test: kvm test object.
-    @param params: Dictionary with test parameters.
-    @param env: Dictionary with the test environment.
-    """
-    t_name = "test_clock_getres"
-    base_dir = "/tmp"
-
-    deps_dir = os.path.join(test.bindir, "deps", t_name)
-    os.chdir(deps_dir)
-    try:
-        utils.system("make clean")
-        utils.system("make")
-    except:
-        raise error.TestError("Failed to compile %s" % t_name)
-
-    test_clock = os.path.join(deps_dir, t_name)
-    if not os.path.isfile(test_clock):
-        raise error.TestError("Could not find %s" % t_name)
-
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session = vm.wait_for_login(timeout=timeout)
-    vm.copy_files_to(test_clock, base_dir)
-    session.cmd(os.path.join(base_dir, t_name))
-    logging.info("PASS: Guest reported appropriate clock resolution")
-    logging.info("Guest's dmesg:\n%s", session.cmd_output("dmesg").strip())
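Side note for reviewers: the deleted test compiles a C helper that calls clock_getres() in the guest. For reference, the equivalent check can be sketched in pure Python; the 1 ms threshold here is an assumption for illustration, since the original helper's exact bound is not visible in this patch:

```python
import time

def clock_resolution_ok(max_res_ns=1e6):
    """Return True if CLOCK_MONOTONIC resolution is at or below max_res_ns.

    The default 1 ms threshold is illustrative only; kvm-clock guests
    are generally expected to report sub-millisecond resolution.
    """
    res_s = time.clock_getres(time.CLOCK_MONOTONIC)  # seconds, as float
    return res_s * 1e9 <= max_res_ns

print(clock_resolution_ok())
```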
diff --git a/client/tests/kvm/tests/ethtool.py b/client/tests/kvm/tests/ethtool.py
deleted file mode 100644
index 1152f00..0000000
--- a/client/tests/kvm/tests/ethtool.py
+++ /dev/null
@@ -1,235 +0,0 @@
-import logging, re
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-from autotest_lib.client.virt import virt_test_utils, virt_utils, aexpect
-
-
-def run_ethtool(test, params, env):
-    """
-    Test the offload functions of an ethernet device using ethtool
-
-    1) Log into a guest.
-    2) Initialize the callback of sub functions.
-    3) Enable/disable sub function of NIC.
-    4) Execute callback function.
-    5) Check the return value.
-    6) Restore original configuration.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-
-    @todo: Not all guests have ethtool installed, so
-        find a way to get it installed using yum/apt-get/
-        whatever
-    """
-    def ethtool_get(f_type):
-        feature_pattern = {
-            'tx':  'tx.*checksumming',
-            'rx':  'rx.*checksumming',
-            'sg':  'scatter.*gather',
-            'tso': 'tcp.*segmentation.*offload',
-            'gso': 'generic.*segmentation.*offload',
-            'gro': 'generic.*receive.*offload',
-            'lro': 'large.*receive.*offload',
-            }
-        o = session.cmd("ethtool -k %s" % ethname)
-        try:
-            return re.findall("%s: (.*)" % feature_pattern.get(f_type), o)[0]
-        except IndexError:
-            logging.debug("Could not get %s status", f_type)
-
-
-    def ethtool_set(f_type, status):
-        """
-        Set ethernet device offload status
-
-        @param f_type: Offload type name
-        @param status: New status to set ("on" or "off")
-        """
-        logging.info("Try to set %s %s", f_type, status)
-        if status not in ["off", "on"]:
-            return False
-        cmd = "ethtool -K %s %s %s" % (ethname, f_type, status)
-        if ethtool_get(f_type) != status:
-            try:
-                session.cmd(cmd)
-            except aexpect.ShellCmdError:
-                return False
-        if ethtool_get(f_type) != status:
-            logging.error("Failed to set %s %s", f_type, status)
-            return False
-        return True
-
-
-    def ethtool_save_params():
-        logging.info("Save ethtool configuration")
-        for i in supported_features:
-            feature_status[i] = ethtool_get(i)
-
-
-    def ethtool_restore_params():
-        logging.info("Restore ethtool configuration")
-        for i in supported_features:
-            ethtool_set(i, feature_status[i])
-
-
-    def compare_md5sum(name):
-        logging.info("Compare md5sum of the files on guest and host")
-        host_result = utils.hash_file(name, method="md5")
-        try:
-            o = session.cmd_output("md5sum %s" % name)
-            guest_result = re.findall("\w+", o)[0]
-        except IndexError:
-            logging.error("Could not get file md5sum in guest")
-            return False
-        logging.debug("md5sum: guest(%s), host(%s)", guest_result, host_result)
-        return guest_result == host_result
-
-
-    def transfer_file(src="guest"):
-        """
-        Transfer file by scp, use tcpdump to capture packets, then check the
-        return string.
-
-        @param src: Source host of transfer file
-        @return: Tuple (status, error msg/tcpdump result)
-        """
-        session2.cmd_output("rm -rf %s" % filename)
-        dd_cmd = ("dd if=/dev/urandom of=%s bs=1M count=%s" %
-                  (filename, params.get("filesize")))
-        failure = (False, "Failed to create file using dd, cmd: %s" % dd_cmd)
-        logging.info("Creating file in source host, cmd: %s", dd_cmd)
-        tcpdump_cmd = "tcpdump -lep -s 0 tcp -vv port ssh"
-        if src == "guest":
-            tcpdump_cmd += " and src %s" % guest_ip
-            copy_files_from = vm.copy_files_from
-            try:
-                session.cmd_output(dd_cmd, timeout=360)
-            except aexpect.ShellCmdError, e:
-                return failure
-        else:
-            tcpdump_cmd += " and dst %s" % guest_ip
-            copy_files_from = vm.copy_files_to
-            try:
-                utils.system(dd_cmd)
-            except error.CmdError, e:
-                return failure
-
-        # only capture the new tcp port after offload setup
-        original_tcp_ports = re.findall("tcp.*:(\d+).*%s" % guest_ip,
-                                      utils.system_output("/bin/netstat -nap"))
-        for i in original_tcp_ports:
-            tcpdump_cmd += " and not port %s" % i
-        logging.debug("Listen using command: %s", tcpdump_cmd)
-        session2.sendline(tcpdump_cmd)
-        if not virt_utils.wait_for(
-                           lambda:session.cmd_status("pgrep tcpdump") == 0, 30):
-            return (False, "Tcpdump process wasn't launched")
-
-        logging.info("Start to transfer file")
-        try:
-            copy_files_from(filename, filename)
-        except virt_utils.SCPError, e:
-            return (False, "File transfer failed (%s)" % e)
-        logging.info("Transfer file completed")
-        session.cmd("killall tcpdump")
-        try:
-            tcpdump_string = session2.read_up_to_prompt(timeout=60)
-        except aexpect.ExpectError:
-            return (False, "Failed to read tcpdump's output")
-
-        if not compare_md5sum(filename):
-            return (False, "File md5sums did not match")
-        return (True, tcpdump_string)
-
-
-    def tx_callback(status="on"):
-        s, o = transfer_file(src="guest")
-        if not s:
-            logging.error(o)
-            return False
-        return True
-
-
-    def rx_callback(status="on"):
-        s, o = transfer_file(src="host")
-        if not s:
-            logging.error(o)
-            return False
-        return True
-
-
-    def so_callback(status="on"):
-        s, o = transfer_file(src="guest")
-        if not s:
-            logging.error(o)
-            return False
-        logging.info("Check if contained large frame")
-        # MTU: default IPv4 MTU is 1500 Bytes, ethernet header is 14 Bytes
-        return (status == "on") ^ (len([i for i in re.findall(
-                                   "length (\d*):", o) if int(i) > mtu]) == 0)
-
-
-    def ro_callback(status="on"):
-        s, o = transfer_file(src="host")
-        if not s:
-            logging.error(o)
-            return False
-        return True
-
-
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
-    # Let's just error the test if we identify that there's no ethtool installed
-    session.cmd("ethtool -h")
-    session2 = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
-    mtu = 1514
-    feature_status = {}
-    filename = "/tmp/ethtool.dd"
-    guest_ip = vm.get_address()
-    ethname = virt_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
-    supported_features = params.get("supported_features")
-    if supported_features:
-        supported_features = supported_features.split()
-    else:
-        supported_features = []
-    test_matrix = {
-        # type:(callback,    (dependence), (exclude)
-        "tx":  (tx_callback, (), ()),
-        "rx":  (rx_callback, (), ()),
-        "sg":  (tx_callback, ("tx",), ()),
-        "tso": (so_callback, ("tx", "sg",), ("gso",)),
-        "gso": (so_callback, (), ("tso",)),
-        "gro": (ro_callback, ("rx",), ("lro",)),
-        "lro": (rx_callback, (), ("gro",)),
-        }
-    ethtool_save_params()
-    success = True
-    try:
-        for f_type in supported_features:
-            callback = test_matrix[f_type][0]
-            for i in test_matrix[f_type][2]:
-                if not ethtool_set(i, "off"):
-                    logging.error("Failed to disable %s", i)
-                    success = False
-            for i in list(test_matrix[f_type][1]) + [f_type]:
-                if not ethtool_set(i, "on"):
-                    logging.error("Failed to enable %s", i)
-                    success = False
-            if not callback():
-                raise error.TestFail("Test failed, %s: on" % f_type)
-
-            if not ethtool_set(f_type, "off"):
-                logging.error("Failed to disable %s", f_type)
-                success = False
-            if not callback(status="off"):
-                raise error.TestFail("Test failed, %s: off" % f_type)
-        if not success:
-            raise error.TestError("Enabling/disabling offload functions failed")
-    finally:
-        ethtool_restore_params()
-        session.close()
-        session2.close()
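Side note for reviewers: the dependence/exclude handling in the deleted `run_ethtool` loop above can be summarized in a few lines. This is a hedged sketch (helper name and the callback-elided matrix below are illustrative, not part of the patch):

```python
def plan_feature_toggles(f_type, test_matrix):
    """Compute the toggle order used by run_ethtool: excluded offloads
    are switched off first, then dependencies plus the feature itself
    are switched on."""
    _callback, deps, excludes = test_matrix[f_type]
    return list(excludes), list(deps) + [f_type]

# Same shape as the deleted test_matrix; callbacks elided with None
matrix = {
    "tx":  (None, (), ()),
    "sg":  (None, ("tx",), ()),
    "tso": (None, ("tx", "sg"), ("gso",)),
}
print(plan_feature_toggles("tso", matrix))  # (['gso'], ['tx', 'sg', 'tso'])
```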
diff --git a/client/tests/kvm/tests/file_transfer.py b/client/tests/kvm/tests/file_transfer.py
deleted file mode 100644
index 5f6672d..0000000
--- a/client/tests/kvm/tests/file_transfer.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import logging, time, os
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-from autotest_lib.client.virt import virt_utils
-
-
-def run_file_transfer(test, params, env):
-    """
-    Test file transfer between host and guest:
-
-    1) Boot up a VM.
-    2) Create a large file by dd on host.
-    3) Copy this file from host to guest.
-    4) Copy this file from guest to host.
-    5) Check that the transferred file is intact (md5 comparison).
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    login_timeout = int(params.get("login_timeout", 360))
-
-    session = vm.wait_for_login(timeout=login_timeout)
-
-    dir_name = test.tmpdir
-    transfer_timeout = int(params.get("transfer_timeout"))
-    transfer_type = params.get("transfer_type")
-    tmp_dir = params.get("tmp_dir", "/tmp/")
-    clean_cmd = params.get("clean_cmd", "rm -f")
-    filesize = int(params.get("filesize", 4000))
-    count = int(filesize / 10)
-    if count == 0:
-        count = 1
-
-    host_path = os.path.join(dir_name, "tmp-%s" %
-                             virt_utils.generate_random_string(8))
-    host_path2 = host_path + ".2"
-    cmd = "dd if=/dev/zero of=%s bs=10M count=%d" % (host_path, count)
-    guest_path = (tmp_dir + "file_transfer-%s" %
-                  virt_utils.generate_random_string(8))
-
-    try:
-        logging.info("Creating %dMB file on host", filesize)
-        utils.run(cmd)
-
-        if transfer_type == "remote":
-            logging.info("Transferring file host -> guest, timeout: %ss",
-                         transfer_timeout)
-            t_begin = time.time()
-            vm.copy_files_to(host_path, guest_path, timeout=transfer_timeout)
-            t_end = time.time()
-            throughput = filesize / (t_end - t_begin)
-            logging.info("File transfer host -> guest succeeded, "
-                         "estimated throughput: %.2fMB/s", throughput)
-
-            logging.info("Transferring file guest -> host, timeout: %ss",
-                         transfer_timeout)
-            t_begin = time.time()
-            vm.copy_files_from(guest_path, host_path2, timeout=transfer_timeout)
-            t_end = time.time()
-            throughput = filesize / (t_end - t_begin)
-            logging.info("File transfer guest -> host succeeded, "
-                         "estimated throughput: %.2fMB/s", throughput)
-        else:
-            raise error.TestError("Unknown test file transfer mode %s" %
-                                  transfer_type)
-
-        if (utils.hash_file(host_path, method="md5") !=
-            utils.hash_file(host_path2, method="md5")):
-            raise error.TestFail("File changed after transfer host -> guest "
-                                 "and guest -> host")
-
-    finally:
-        logging.info('Cleaning temp file on guest')
-        session.cmd("rm -rf %s" % guest_path)
-        logging.info('Cleaning temp files on host')
-        try:
-            os.remove(host_path)
-            os.remove(host_path2)
-        except OSError:
-            pass
-        session.close()
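Side note for reviewers: the integrity check above relies on `utils.hash_file(method="md5")`. A minimal standalone equivalent of that comparison, assuming incremental hashing over fixed-size chunks (helper names are illustrative):

```python
import hashlib
import os
import tempfile

def md5_file(path, chunk_size=1 << 20):
    """Hash a file incrementally, chunk by chunk, so large transfer
    artifacts do not have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def files_match(path_a, path_b):
    return md5_file(path_a) == md5_file(path_b)

# quick self-check with two identical temp files
tmpdir = tempfile.mkdtemp()
a = os.path.join(tmpdir, "a.bin")
b = os.path.join(tmpdir, "b.bin")
with open(a, "wb") as f:
    f.write(b"\0" * 4096)
with open(b, "wb") as f:
    f.write(b"\0" * 4096)
print(files_match(a, b))  # True
```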
diff --git a/client/tests/kvm/tests/guest_s4.py b/client/tests/kvm/tests/guest_s4.py
deleted file mode 100644
index 5b5708d..0000000
--- a/client/tests/kvm/tests/guest_s4.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import logging, time
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_utils
-
-
-@error.context_aware
-def run_guest_s4(test, params, env):
-    """
-    Suspend guest to disk, supports both Linux & Windows OSes.
-
-    @param test: kvm test object.
-    @param params: Dictionary with test parameters.
-    @param env: Dictionary with the test environment.
-    """
-    error.base_context("before S4")
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session = vm.wait_for_login(timeout=timeout)
-
-    error.context("checking whether guest OS supports S4", logging.info)
-    session.cmd(params.get("check_s4_support_cmd"))
-    error.context()
-
-    logging.info("Waiting until all guest OS services are fully started...")
-    time.sleep(float(params.get("services_up_timeout", 30)))
-
-    # Start a background program (tcpdump for Linux, ping for Windows) as a flag.
-    # If the program dies after suspend/resume, the test case fails.
-    test_s4_cmd = params.get("test_s4_cmd")
-    session.sendline(test_s4_cmd)
-    time.sleep(5)
-
-    # Get the second session to start S4
-    session2 = vm.wait_for_login(timeout=timeout)
-
-    # Make sure the background program is running as expected
-    error.context("making sure background program is running")
-    check_s4_cmd = params.get("check_s4_cmd")
-    session2.cmd(check_s4_cmd)
-    logging.info("Launched background command in guest: %s", test_s4_cmd)
-    error.context()
-    error.base_context()
-
-    # Suspend to disk
-    logging.info("Starting suspend to disk now...")
-    session2.sendline(params.get("set_s4_cmd"))
-
-    # Make sure the VM goes down
-    error.base_context("after S4")
-    suspend_timeout = 240 + int(params.get("smp")) * 60
-    if not virt_utils.wait_for(vm.is_dead, suspend_timeout, 2, 2):
-        raise error.TestFail("VM refuses to go down. Suspend failed.")
-    logging.info("VM suspended successfully. Sleeping for a while before "
-                 "resuming it.")
-    time.sleep(10)
-
-    # Start vm, and check whether the program is still running
-    logging.info("Resuming suspended VM...")
-    vm.create()
-
-    # Log into the resumed VM
-    relogin_timeout = int(params.get("relogin_timeout", 240))
-    logging.info("Logging into resumed VM, timeout %s", relogin_timeout)
-    session2 = vm.wait_for_login(timeout=relogin_timeout)
-
-    # Check whether the test command is still alive
-    error.context("making sure background program is still running",
-                  logging.info)
-    session2.cmd(check_s4_cmd)
-    error.context()
-
-    logging.info("VM resumed successfully after suspend to disk")
-    session2.cmd_output(params.get("kill_test_s4_cmd"))
-    session.close()
-    session2.close()
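Side note for reviewers: the suspend timeout above scales with the guest's vCPU count (240 s base plus 60 s per vCPU). A hedged sketch of that heuristic (the function name is illustrative, not from the patch):

```python
def s4_suspend_timeout(smp, base=240, per_vcpu=60):
    """Timeout used when waiting for the VM to go down after S4:
    a fixed base plus an allowance per configured vCPU."""
    return base + int(smp) * per_vcpu

print(s4_suspend_timeout(1))  # 300
print(s4_suspend_timeout(4))  # 480
```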
diff --git a/client/tests/kvm/tests/guest_test.py b/client/tests/kvm/tests/guest_test.py
deleted file mode 100644
index 3bc7da7..0000000
--- a/client/tests/kvm/tests/guest_test.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import os, logging
-from autotest_lib.client.virt import virt_utils
-
-
-def run_guest_test(test, params, env):
-    """
-    A wrapper for running customized tests in guests.
-
-    1) Log into a guest.
-    2) Run script.
-    3) Wait for script execution to complete.
-    4) Pass/fail according to exit status of script.
-
-    @param test: KVM test object.
-    @param params: Dictionary with test parameters.
-    @param env: Dictionary with the test environment.
-    """
-    login_timeout = int(params.get("login_timeout", 360))
-    reboot = params.get("reboot", "no")
-
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    if params.get("serial_login") == "yes":
-        session = vm.wait_for_serial_login(timeout=login_timeout)
-    else:
-        session = vm.wait_for_login(timeout=login_timeout)
-
-    if reboot == "yes":
-        logging.debug("Rebooting guest before test ...")
-        session = vm.reboot(session, timeout=login_timeout)
-
-    try:
-        logging.info("Starting script...")
-
-        # Collect test parameters
-        interpreter = params.get("interpreter")
-        script = params.get("guest_script")
-        dst_rsc_path = params.get("dst_rsc_path", "script.au3")
-        script_params = params.get("script_params", "")
-        test_timeout = float(params.get("test_timeout", 600))
-
-        logging.debug("Preparing resource files...")
-        # Either download the script resource from a remote server or
-        # copy the local script into the guest using rss
-        if params.get("download") == "yes":
-            download_cmd = params.get("download_cmd")
-            rsc_server = params.get("rsc_server")
-            rsc_dir = os.path.basename(rsc_server)
-            dst_rsc_dir = params.get("dst_rsc_dir")
-
-            # Change dir to dst_rsc_dir, and remove the guest script dir there
-            rm_cmd = "cd %s && (rmdir /s /q %s || del /s /q %s)" % \
-                     (dst_rsc_dir, rsc_dir, rsc_dir)
-            session.cmd(rm_cmd, timeout=test_timeout)
-            logging.debug("Clean directory succeeded.")
-
-            # then download the resource.
-            rsc_cmd = "cd %s && %s %s" % (dst_rsc_dir, download_cmd, rsc_server)
-            session.cmd(rsc_cmd, timeout=test_timeout)
-            logging.info("Download resource finished.")
-        else:
-            session.cmd_output("del %s" % dst_rsc_path, internal_timeout=0)
-            script_path = virt_utils.get_path(test.bindir, script)
-            vm.copy_files_to(script_path, dst_rsc_path, timeout=60)
-
-        cmd = "%s %s %s" % (interpreter, dst_rsc_path, script_params)
-
-        try:
-            logging.info("------------ Script output ------------")
-            session.cmd(cmd, print_func=logging.info, timeout=test_timeout)
-        finally:
-            logging.info("------------ End of script output ------------")
-
-        if reboot == "yes":
-            logging.debug("Rebooting guest after test ...")
-            session = vm.reboot(session, timeout=login_timeout)
-
-        logging.debug("guest test PASSED.")
-    finally:
-        session.close()
diff --git a/client/tests/kvm/tests/image_copy.py b/client/tests/kvm/tests/image_copy.py
deleted file mode 100644
index cc921ab..0000000
--- a/client/tests/kvm/tests/image_copy.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import os, logging
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-from autotest_lib.client.virt import virt_utils
-
-
-def run_image_copy(test, params, env):
-    """
-    Copy guest images from nfs server.
-    1) Mount the NFS share directory
-    2) Check the existence of source image
-    3) If it exists, copy the image from NFS
-
-    @param test: kvm test object
-    @param params: Dictionary with the test parameters
-    @param env: Dictionary with test environment.
-    """
-    mount_dest_dir = params.get('dst_dir', '/mnt/images')
-    if not os.path.exists(mount_dest_dir):
-        try:
-            os.makedirs(mount_dest_dir)
-        except OSError, err:
-            logging.warning('mkdir %s error:\n%s', mount_dest_dir, err)
-
-    if not os.path.exists(mount_dest_dir):
-        raise error.TestError('Failed to create NFS share dir %s' %
-                              mount_dest_dir)
-
-    src = params.get('images_good')
-    image = '%s.%s' % (os.path.split(params['image_name'])[1],
-                       params['image_format'])
-    src_path = os.path.join(mount_dest_dir, image)
-    dst_path = '%s.%s' % (params['image_name'], params['image_format'])
-    cmd = 'cp %s %s' % (src_path, dst_path)
-
-    if not virt_utils.mount(src, mount_dest_dir, 'nfs', 'ro'):
-        raise error.TestError('Could not mount NFS share %s to %s' %
-                              (src, mount_dest_dir))
-
-    # Check the existence of source image
-    if not os.path.exists(src_path):
-        raise error.TestError('Could not find %s in NFS share' % src_path)
-
-    logging.debug('Copying image %s...', image)
-    utils.system(cmd)
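Side note for reviewers: the source/destination path construction in the deleted `run_image_copy` can be isolated for clarity. A hedged sketch of that logic (helper name is illustrative):

```python
import os

def nfs_image_paths(image_name, image_format, mount_dir="/mnt/images"):
    """Build the NFS source path and local destination path the same
    way run_image_copy does: the basename of image_name plus the
    format suffix is looked up under the mount point."""
    image = "%s.%s" % (os.path.split(image_name)[1], image_format)
    src_path = os.path.join(mount_dir, image)
    dst_path = "%s.%s" % (image_name, image_format)
    return src_path, dst_path

print(nfs_image_paths("images/f14-64", "qcow2"))
```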
diff --git a/client/tests/kvm/tests/iofuzz.py b/client/tests/kvm/tests/iofuzz.py
deleted file mode 100644
index d244012..0000000
--- a/client/tests/kvm/tests/iofuzz.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import logging, re, random
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import aexpect
-
-
-def run_iofuzz(test, params, env):
-    """
-    KVM iofuzz test:
-    1) Log into a guest
-    2) Enumerate all IO port ranges through /proc/ioports
-    3) On each port of the range:
-        * Read it
-        * Write 0 to it
-        * Write a random value to a random port on a random order
-
-    If the guest SSH session hangs, the test detects the hang and reboots
-    the guest. The test fails if the qemu process terminates while the
-    fuzzing is in progress.
-
-    @param test: kvm test object
-    @param params: Dictionary with the test parameters
-    @param env: Dictionary with test environment.
-    """
-    def outb(session, port, data):
-        """
-        Write data to a given port.
-
-        @param session: SSH session established to a VM
-        @param port: Port where we'll write the data
-        @param data: Integer value that will be written to the port. This
-                value will be converted to octal before it is written.
-        """
-        logging.debug("outb(0x%x, 0x%x)", port, data)
-        outb_cmd = ("echo -e '\\%s' | dd of=/dev/port seek=%d bs=1 count=1" %
-                    (oct(data), port))
-        try:
-            session.cmd(outb_cmd)
-        except aexpect.ShellError, e:
-            logging.debug(e)
-
-
-    def inb(session, port):
-        """
-        Read from a given port.
-
-        @param session: SSH session established to a VM
-        @param port: Port where we'll read data
-        """
-        logging.debug("inb(0x%x)", port)
-        inb_cmd = "dd if=/dev/port seek=%d of=/dev/null bs=1 count=1" % port
-        try:
-            session.cmd(inb_cmd)
-        except aexpect.ShellError, e:
-            logging.debug(e)
-
-
-    def fuzz(session, inst_list):
-        """
-        Executes a series of read/write/randwrite instructions.
-
-        If the guest SSH session hangs, an attempt to relogin will be made.
-        If it fails, the guest will be reset. If during the process the VM
-        process abnormally ends, the test fails.
-
-        @param inst_list: List of instructions that will be executed.
-        @raise error.TestFail: If the VM process dies in the middle of the
-                fuzzing procedure.
-        """
-        for (op, operand) in inst_list:
-            if op == "read":
-                inb(session, operand[0])
-            elif op == "write":
-                outb(session, operand[0], operand[1])
-            else:
-                raise error.TestError("Unknown command %s" % op)
-
-            if not session.is_responsive():
-                logging.debug("Session is not responsive")
-                if vm.process.is_alive():
-                    logging.debug("VM is alive, try to re-login")
-                    try:
-                        session = vm.wait_for_login(timeout=10)
-                    except:
-                        logging.debug("Could not re-login, reboot the guest")
-                        session = vm.reboot(method="system_reset")
-                else:
-                    raise error.TestFail("VM has quit abnormally during %s" %
-                                         str((op, operand)))
-
-
-    login_timeout = float(params.get("login_timeout", 240))
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    session = vm.wait_for_login(timeout=login_timeout)
-
-    try:
-        ports = {}
-        r = random.SystemRandom()
-
-        logging.info("Enumerate guest devices through /proc/ioports")
-        ioports = session.cmd_output("cat /proc/ioports")
-        logging.debug(ioports)
-        devices = re.findall("(\w+)-(\w+)\ : (.*)", ioports)
-
-        skip_devices = params.get("skip_devices","")
-        fuzz_count = int(params.get("fuzz_count", 10))
-
-        for (beg, end, name) in devices:
-            ports[(int(beg, base=16), int(end, base=16))] = name.strip()
-
-        for (beg, end) in ports.keys():
-            name = ports[(beg, end)]
-            if name in skip_devices:
-                logging.info("Skipping device %s", name)
-                continue
-
-            logging.info("Fuzzing %s, port range 0x%x-0x%x", name, beg, end)
-            inst = []
-
-            # Read all ports of the range
-            for port in range(beg, end + 1):
-                inst.append(("read", [port]))
-
-            # Write 0 to all ports of the range
-            for port in range(beg, end + 1):
-                inst.append(("write", [port, 0]))
-
-            # Write random values to random ports of the range
-            for seq in range(fuzz_count * (end - beg + 1)):
-                inst.append(("write",
-                             [r.randint(beg, end), r.randint(0,255)]))
-
-            fuzz(session, inst)
-
-    finally:
-        session.close()
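Side note for reviewers: the read / zero-write / random-write instruction list built per port range in `run_iofuzz` can be factored out as below. A hedged sketch (helper name is illustrative; the original builds the list inline):

```python
import random

def build_port_instructions(beg, end, fuzz_count, rng=None):
    """Build the fuzzing sequence for one I/O port range, matching the
    order used in run_iofuzz: read every port, write 0 to every port,
    then issue fuzz_count random writes per port in the range."""
    rng = rng or random.Random()
    span = range(beg, end + 1)
    inst = [("read", [port]) for port in span]
    inst += [("write", [port, 0]) for port in span]
    inst += [("write", [rng.randint(beg, end), rng.randint(0, 255)])
             for _ in range(fuzz_count * (end - beg + 1))]
    return inst

# e.g. the keyboard controller range 0x60-0x64, two fuzz rounds
print(len(build_port_instructions(0x60, 0x64, 2)))  # 20
```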
diff --git a/client/tests/kvm/tests/ioquit.py b/client/tests/kvm/tests/ioquit.py
deleted file mode 100644
index 34b4fb5..0000000
--- a/client/tests/kvm/tests/ioquit.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import logging, time, random
-
-
-def run_ioquit(test, params, env):
-    """
-    Emulate poweroff under an IO workload (dd so far) using kill -9.
-
-    @param test: Kvm test object
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    login_timeout = int(params.get("login_timeout", 360))
-    session = vm.wait_for_login(timeout=login_timeout)
-    session2 = vm.wait_for_login(timeout=login_timeout)
-    try:
-        bg_cmd = params.get("background_cmd")
-        logging.info("Add IO workload for guest OS.")
-        session.cmd_output(bg_cmd, timeout=60)
-        check_cmd = params.get("check_cmd")
-        session2.cmd(check_cmd, timeout=60)
-
-        logging.info("Sleep for a while")
-        time.sleep(random.randrange(30, 100))
-        session2.cmd(check_cmd, timeout=60)
-        logging.info("Kill the virtual machine")
-        vm.process.close()
-    finally:
-        session.close()
-        session2.close()
diff --git a/client/tests/kvm/tests/iozone_windows.py b/client/tests/kvm/tests/iozone_windows.py
deleted file mode 100644
index 4046106..0000000
--- a/client/tests/kvm/tests/iozone_windows.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import logging, os
-from autotest_lib.client.bin import utils
-from autotest_lib.client.tests.iozone import postprocessing
-
-
-def run_iozone_windows(test, params, env):
-    """
-    Run IOzone for windows on a windows guest:
-    1) Log into a guest
-    2) Execute the IOzone test contained in the winutils.iso
-    3) Get results
-    4) Postprocess it with the IOzone postprocessing module
-
-    @param test: kvm test object
-    @param params: Dictionary with the test parameters
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session = vm.wait_for_login(timeout=timeout)
-    results_path = os.path.join(test.resultsdir,
-                                'raw_output_%s' % test.iteration)
-    analysisdir = os.path.join(test.resultsdir, 'analysis_%s' % test.iteration)
-
-    # Run IOzone and record its results
-    c = params.get("iozone_cmd")
-    t = int(params.get("iozone_timeout"))
-    logging.info("Running IOzone command on guest, timeout %ss", t)
-    results = session.cmd_output(cmd=c, timeout=t, print_func=logging.debug)
-    utils.open_write_close(results_path, results)
-
-    # Postprocess the results using the IOzone postprocessing module
-    logging.info("Iteration succeeded, postprocessing")
-    a = postprocessing.IOzoneAnalyzer(list_files=[results_path],
-                                      output_dir=analysisdir)
-    a.analyze()
-    p = postprocessing.IOzonePlotter(results_file=results_path,
-                                     output_dir=analysisdir)
-    p.plot_all()
diff --git a/client/tests/kvm/tests/jumbo.py b/client/tests/kvm/tests/jumbo.py
deleted file mode 100644
index 5108227..0000000
--- a/client/tests/kvm/tests/jumbo.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import logging, commands, random
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-from autotest_lib.client.virt import virt_utils, virt_test_utils
-
-
-def run_jumbo(test, params, env):
-    """
-    Test the RX jumbo frame function of vnics:
-
-    1) Boot the VM.
-    2) Change the MTU of guest nics and host taps depending on the NIC model.
-    3) Add the static ARP entry for guest NIC.
-    4) Wait for the MTU ok.
-    5) Verify the path MTU using ping.
-    6) Ping the guest with large frames.
-    7) Increment size ping.
-    8) Flood ping the guest with large frames.
-    9) Verify the path MTU.
-    10) Recover the MTU.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
-    mtu = params.get("mtu", "1500")
-    flood_time = params.get("flood_time", "300")
-    max_icmp_pkt_size = int(mtu) - 28
-
-    ifname = vm.get_ifname(0)
-    ip = vm.get_address(0)
-    if ip is None:
-        raise error.TestError("Could not get the IP address")
-
-    try:
-        # Environment preparation
-        ethname = virt_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
-
-        logging.info("Changing the MTU of guest ...")
-        guest_mtu_cmd = "ifconfig %s mtu %s" % (ethname, mtu)
-        session.cmd(guest_mtu_cmd)
-
-        logging.info("Changing the MTU of the host tap ...")
-        host_mtu_cmd = "ifconfig %s mtu %s" % (ifname, mtu)
-        utils.run(host_mtu_cmd)
-
-        logging.info("Add a temporary static ARP entry ...")
-        arp_add_cmd = "arp -s %s %s -i %s" % (ip, vm.get_mac_address(0), ifname)
-        utils.run(arp_add_cmd)
-
-        def is_mtu_ok():
-            s, o = virt_test_utils.ping(ip, 1, interface=ifname,
-                                       packetsize=max_icmp_pkt_size,
-                                       hint="do", timeout=2)
-            return s == 0
-
-        def verify_mtu():
-            logging.info("Verify the path MTU")
-            s, o = virt_test_utils.ping(ip, 10, interface=ifname,
-                                       packetsize=max_icmp_pkt_size,
-                                       hint="do", timeout=15)
-            if s != 0:
-                logging.error(o)
-                raise error.TestFail("Path MTU is not as expected")
-            if virt_test_utils.get_loss_ratio(o) != 0:
-                logging.error(o)
-                raise error.TestFail("Packet loss ratio during MTU "
-                                     "verification is not zero")
-
-        def flood_ping():
-            logging.info("Flood with large frames")
-            virt_test_utils.ping(ip, interface=ifname,
-                                packetsize=max_icmp_pkt_size,
-                                flood=True, timeout=float(flood_time))
-
-        def large_frame_ping(count=100):
-            logging.info("Large frame ping")
-            s, o = virt_test_utils.ping(ip, count, interface=ifname,
-                                       packetsize=max_icmp_pkt_size,
-                                       timeout=float(count) * 2)
-            ratio = virt_test_utils.get_loss_ratio(o)
-            if ratio != 0:
-                raise error.TestFail("Loss ratio of large frame ping is %s" %
-                                     ratio)
-
-        def size_increase_ping(step=random.randrange(90, 110)):
-            logging.info("Size increase ping")
-            for size in range(0, max_icmp_pkt_size + 1, step):
-                logging.info("Ping %s with size %s", ip, size)
-                s, o = virt_test_utils.ping(ip, 1, interface=ifname,
-                                           packetsize=size,
-                                           hint="do", timeout=1)
-                if s != 0:
-                    s, o = virt_test_utils.ping(ip, 10, interface=ifname,
-                                               packetsize=size,
-                                               adaptive=True, hint="do",
-                                               timeout=20)
-
-                    fail_ratio = int(params.get("fail_ratio", 50))
-                    if virt_test_utils.get_loss_ratio(o) > fail_ratio:
-                        raise error.TestFail("Ping loss ratio is greater "
-                                             "than %s%% for size %s" %
-                                             (fail_ratio, size))
-
-        logging.info("Waiting for the MTU to be OK")
-        wait_mtu_ok = 10
-        if not virt_utils.wait_for(is_mtu_ok, wait_mtu_ok, 0, 1):
-            logging.debug(commands.getoutput("ifconfig -a"))
-            raise error.TestError("MTU is not as expected even after %s "
-                                  "seconds" % wait_mtu_ok)
-
-        # Functional Test
-        verify_mtu()
-        large_frame_ping()
-        size_increase_ping()
-
-        # Stress test
-        flood_ping()
-        verify_mtu()
-
-    finally:
-        # Environment clean
-        session.close()
-        logging.info("Removing the temporary ARP entry")
-        utils.run("arp -d %s -i %s" % (ip, ifname))
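The `max_icmp_pkt_size = int(mtu) - 28` line in the removed jumbo test encodes the IPv4 framing overhead. A standalone sketch of that arithmetic, independent of the test harness:

```python
# The largest ICMP payload that fits in one frame is the MTU minus the
# 20-byte IPv4 header and the 8-byte ICMP header -- the "- 28" used by
# the jumbo test.

IP_HEADER = 20
ICMP_HEADER = 8

def max_icmp_payload(mtu):
    """Largest `ping -s` size that still fits in a single frame."""
    if mtu <= IP_HEADER + ICMP_HEADER:
        raise ValueError("MTU %d too small for an ICMP packet" % mtu)
    return mtu - IP_HEADER - ICMP_HEADER

print(max_icmp_payload(1500))  # 1472: standard Ethernet
print(max_icmp_payload(9000))  # 8972: typical jumbo frame
```

This is why the test pings with `hint="do"` (don't-fragment): a payload of exactly `mtu - 28` must pass, while anything larger must be rejected.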
diff --git a/client/tests/kvm/tests/kdump.py b/client/tests/kvm/tests/kdump.py
deleted file mode 100644
index 90c004b..0000000
--- a/client/tests/kvm/tests/kdump.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import logging
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_utils
-
-
-def run_kdump(test, params, env):
-    """
-    KVM reboot test:
-    1) Log into a guest
-    2) Check and enable the kdump
-    3) For each vcpu, trigger a crash and check the vmcore
-
-    @param test: kvm test object
-    @param params: Dictionary with the test parameters
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = float(params.get("login_timeout", 240))
-    crash_timeout = float(params.get("crash_timeout", 360))
-    session = vm.wait_for_login(timeout=timeout)
-    def_kernel_param_cmd = ("grubby --update-kernel=`grubby --default-kernel`"
-                            " --args=crashkernel=128M")
-    kernel_param_cmd = params.get("kernel_param_cmd", def_kernel_param_cmd)
-    def_kdump_enable_cmd = "chkconfig kdump on && service kdump start"
-    kdump_enable_cmd = params.get("kdump_enable_cmd", def_kdump_enable_cmd)
-    def_crash_kernel_prob_cmd = "grep -q 1 /sys/kernel/kexec_crash_loaded"
-    crash_kernel_prob_cmd = params.get("crash_kernel_prob_cmd",
-                                       def_crash_kernel_prob_cmd)
-
-    def crash_test(vcpu):
-        """
-        Trigger a crash dump through sysrq-trigger
-
-        @param vcpu: vcpu which is used to trigger a crash
-        """
-        session = vm.wait_for_login(timeout=timeout)
-        session.cmd_output("rm -rf /var/crash/*")
-
-        logging.info("Triggering crash on vcpu %d ...", vcpu)
-        crash_cmd = "taskset -c %d echo c > /proc/sysrq-trigger" % vcpu
-        session.sendline(crash_cmd)
-
-        if not virt_utils.wait_for(lambda: not session.is_responsive(), 240, 0,
-                                  1):
-            raise error.TestFail("Could not trigger crash on vcpu %d" % vcpu)
-
-        logging.info("Waiting for kernel crash dump to complete")
-        session = vm.wait_for_login(timeout=crash_timeout)
-
-        logging.info("Probing vmcore file...")
-        session.cmd("ls -R /var/crash | grep vmcore")
-        logging.info("Found vmcore.")
-
-        session.cmd_output("rm -rf /var/crash/*")
-
-    try:
-        logging.info("Checking the existence of crash kernel...")
-        try:
-            session.cmd(crash_kernel_prob_cmd)
-        except:
-            logging.info("Crash kernel is not loaded. Trying to load it")
-            session.cmd(kernel_param_cmd)
-            session = vm.reboot(session, timeout=timeout)
-
-        logging.info("Enabling kdump service...")
-        # the initrd may be rebuilt here so we need to wait a little more
-        session.cmd(kdump_enable_cmd, timeout=120)
-
-        nvcpu = int(params.get("smp", 1))
-        for i in range (nvcpu):
-            crash_test(i)
-
-    finally:
-        session.close()
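The kdump test resolves each guest command from the Cartesian config, falling back to a RHEL-flavoured default. A minimal sketch of that lookup pattern; a plain dict stands in for the real params object, which supports the same `get(key, default)` call:

```python
# Defaults copied from the removed kdump test; any of them can be
# overridden by the corresponding Cartesian config parameter.
DEFAULTS = {
    "kernel_param_cmd": ("grubby --update-kernel=`grubby --default-kernel`"
                         " --args=crashkernel=128M"),
    "kdump_enable_cmd": "chkconfig kdump on && service kdump start",
    "crash_kernel_prob_cmd": "grep -q 1 /sys/kernel/kexec_crash_loaded",
}

def resolve_cmd(params, key):
    """Config value if present, built-in default otherwise."""
    return params.get(key, DEFAULTS[key])

print(resolve_cmd({}, "crash_kernel_prob_cmd"))
print(resolve_cmd({"kdump_enable_cmd": "systemctl enable --now kdump"},
                  "kdump_enable_cmd"))
```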
diff --git a/client/tests/kvm/tests/linux_s3.py b/client/tests/kvm/tests/linux_s3.py
deleted file mode 100644
index 5a04fca..0000000
--- a/client/tests/kvm/tests/linux_s3.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import logging, time
-from autotest_lib.client.common_lib import error
-
-
-def run_linux_s3(test, params, env):
-    """
-    Suspend a guest Linux OS to memory.
-
-    @param test: kvm test object.
-    @param params: Dictionary with test parameters.
-    @param env: Dictionary with the test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session = vm.wait_for_login(timeout=timeout)
-
-    logging.info("Checking that VM supports S3")
-    session.cmd("grep -q mem /sys/power/state")
-
-    logging.info("Waiting for a while for X to start")
-    time.sleep(10)
-
-    src_tty = session.cmd_output("fgconsole").strip()
-    logging.info("Current virtual terminal is %s", src_tty)
-    if src_tty not in map(str, range(1, 10)):
-        raise error.TestFail("Got a strange current vt (%s)" % src_tty)
-
-    dst_tty = "1"
-    if src_tty == "1":
-        dst_tty = "2"
-
-    logging.info("Putting VM into S3")
-    command = "chvt %s && echo mem > /sys/power/state && chvt %s" % (dst_tty,
-                                                                     src_tty)
-    suspend_timeout = 120 + int(params.get("smp")) * 60
-    session.cmd(command, timeout=suspend_timeout)
-
-    logging.info("VM resumed after S3")
-
-    session.close()
diff --git a/client/tests/kvm/tests/mac_change.py b/client/tests/kvm/tests/mac_change.py
deleted file mode 100644
index d2eaf01..0000000
--- a/client/tests/kvm/tests/mac_change.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import logging
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_utils, virt_test_utils
-
-
-def run_mac_change(test, params, env):
-    """
-    Change MAC address of guest.
-
-    1) Get a new MAC from the pool, and the old MAC address of the guest.
-    2) Set new mac in guest and regain new IP.
-    3) Re-log into guest with new MAC.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session_serial = vm.wait_for_serial_login(timeout=timeout)
-    # This session will be used to assess whether the IP change worked
-    session = vm.wait_for_login(timeout=timeout)
-    old_mac = vm.get_mac_address(0)
-    while True:
-        vm.free_mac_address(0)
-        new_mac = virt_utils.generate_mac_address(vm.instance, 0)
-        if old_mac != new_mac:
-            break
-    logging.info("The initial MAC address is %s", old_mac)
-    interface = virt_test_utils.get_linux_ifname(session_serial, old_mac)
-    # Start change MAC address
-    logging.info("Changing MAC address to %s", new_mac)
-    change_cmd = ("ifconfig %s down && ifconfig %s hw ether %s && "
-                  "ifconfig %s up" % (interface, interface, new_mac, interface))
-    session_serial.cmd(change_cmd)
-
-    # Verify whether MAC address was changed to the new one
-    logging.info("Verifying the new mac address")
-    session_serial.cmd("ifconfig | grep -i %s" % new_mac)
-
-    # Restart `dhclient' to regain IP for new mac address
-    logging.info("Restart the network to gain new IP")
-    dhclient_cmd = "dhclient -r && dhclient %s" % interface
-    session_serial.sendline(dhclient_cmd)
-
-    # Re-log into the guest after changing mac address
-    if virt_utils.wait_for(session.is_responsive, 120, 20, 3):
-        # Only warn (do not fail) if the session stays responsive, since
-        # there is a small chance the IP does not actually change.
-        logging.warn("The session is still responsive; the MAC change may "
-                     "not have taken effect.")
-    session.close()
-
-    # Re-log into guest and check if session is responsive
-    logging.info("Re-log into the guest")
-    session = vm.wait_for_login(timeout=timeout)
-    if not session.is_responsive():
-        raise error.TestFail("The new session is not responsive.")
-
-    session.close()
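`virt_utils.generate_mac_address` draws from a managed per-VM pool; purely as an illustration of the retry loop in the removed test, here is a simplified generator that produces a random locally administered address and retries until it differs from the current one (the generator itself is a stand-in, not the real helper):

```python
import random

def random_unicast_mac():
    """A random locally administered, unicast MAC (first octet 0x02)."""
    octets = [0x02] + [random.randint(0x00, 0xff) for _ in range(5)]
    return ":".join("%02x" % o for o in octets)

def pick_new_mac(old_mac):
    # Loop until the generated address differs from the current one,
    # mirroring the while loop in the removed mac_change test.
    while True:
        new_mac = random_unicast_mac()
        if new_mac.lower() != old_mac.lower():
            return new_mac
```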
diff --git a/client/tests/kvm/tests/multicast.py b/client/tests/kvm/tests/multicast.py
deleted file mode 100644
index 13e3f0d..0000000
--- a/client/tests/kvm/tests/multicast.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import logging, os, re
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-from autotest_lib.client.virt import virt_test_utils, aexpect
-
-
-def run_multicast(test, params, env):
-    """
-    Test multicast function of nic (rtl8139/e1000/virtio)
-
-    1) Create a VM.
-    2) Join guest into multicast groups.
-    3) Ping multicast addresses on host.
-    4) Flood ping test with different size of packets.
-    5) Final ping test and check for packet loss.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
-
-    def run_guest(cmd):
-        try:
-            session.cmd(cmd)
-        except aexpect.ShellError, e:
-            logging.warn(e)
-
-    def run_host_guest(cmd):
-        run_guest(cmd)
-        utils.system(cmd, ignore_status=True)
-
-    # flush the firewall rules
-    cmd_flush = "iptables -F"
-    cmd_selinux = ("if [ -e /selinux/enforce ]; then setenforce 0; "
-                   "else echo 'no /selinux/enforce file present'; fi")
-    run_host_guest(cmd_flush)
-    run_host_guest(cmd_selinux)
-    # make sure guest replies to broadcasts
-    cmd_broadcast = "echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts"
-    cmd_broadcast_2 = "echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all"
-    run_guest(cmd_broadcast)
-    run_guest(cmd_broadcast_2)
-
-    # base multicast address
-    mcast = params.get("mcast", "225.0.0.1")
-    # count of multicast addresses, less than 20
-    mgroup_count = int(params.get("mgroup_count", 5))
-    flood_minutes = float(params.get("flood_minutes", 10))
-    ifname = vm.get_ifname()
-    prefix = re.findall(r"\d+\.\d+\.\d+", mcast)[0]
-    suffix = int(re.findall(r"\d+", mcast)[-1])
-    # copy python script to guest for joining guest to multicast groups
-    mcast_path = os.path.join(test.bindir, "scripts/multicast_guest.py")
-    vm.copy_files_to(mcast_path, "/tmp")
-    output = session.cmd_output("python /tmp/multicast_guest.py %d %s %d" %
-                                (mgroup_count, prefix, suffix))
-
-    # if success to join multicast, the process will be paused, and return PID.
-    try:
-        pid = re.findall("join_mcast_pid:(\d+)", output)[0]
-    except IndexError:
-        raise error.TestFail("Can't join multicast groups, output: %s" %
-                             output)
-
-    try:
-        for i in range(mgroup_count):
-            new_suffix = suffix + i
-            mcast = "%s.%d" % (prefix, new_suffix)
-
-            logging.info("Initial ping test, mcast: %s", mcast)
-            s, o = virt_test_utils.ping(mcast, 10, interface=ifname, timeout=20)
-            if s != 0:
-                raise error.TestFail("Ping returned a non-zero status: %s"
-                                     % o)
-
-            logging.info("Flood ping test, mcast: %s", mcast)
-            virt_test_utils.ping(mcast, None, interface=ifname, flood=True,
-                                output_func=None, timeout=flood_minutes*60)
-
-            logging.info("Final ping test, mcast: %s", mcast)
-            s, o = virt_test_utils.ping(mcast, 10, interface=ifname, timeout=20)
-            if s != 0:
-                raise error.TestFail("Ping failed, status: %s, output: %s" %
-                                     (s, o))
-
-    finally:
-        logging.debug(session.cmd_output("ipmaddr show"))
-        session.cmd_output("kill -s SIGCONT %s" % pid)
-        session.close()
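The multicast test derives consecutive group addresses from the base `mcast` parameter by splitting it into a three-octet prefix and a numeric suffix. The same expansion as a standalone function, with the regex dots properly escaped:

```python
import re

def mcast_groups(base, count):
    """Expand a base multicast address into `count` consecutive groups,
    using the same prefix/suffix split the multicast test performs."""
    prefix = re.findall(r"\d+\.\d+\.\d+", base)[0]
    suffix = int(re.findall(r"\d+", base)[-1])
    return ["%s.%d" % (prefix, suffix + i) for i in range(count)]

print(mcast_groups("225.0.0.1", 3))
```

Note that without raw strings and escaped dots, `"\d+.\d+.\d+"` would also match across non-dot characters, which the original code gets away with only because the input is a well-formed dotted quad.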
diff --git a/client/tests/kvm/tests/netperf.py b/client/tests/kvm/tests/netperf.py
deleted file mode 100644
index 72d9cde..0000000
--- a/client/tests/kvm/tests/netperf.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import logging, os, signal
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-from autotest_lib.client.virt import aexpect
-
-def run_netperf(test, params, env):
-    """
-    Network stress test with netperf.
-
-    1) Boot up a VM.
-    2) Launch netserver on guest.
-    3) Execute netperf client on host with different protocols.
-    4) Output the test result.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    login_timeout = int(params.get("login_timeout", 360))
-    session_serial = vm.wait_for_serial_login(timeout=login_timeout)
-
-    netperf_dir = os.path.join(os.environ['AUTODIR'], "tests/netperf2")
-    setup_cmd = params.get("setup_cmd")
-    guest_ip = vm.get_address()
-    result_file = os.path.join(test.resultsdir, "output_%s" % test.iteration)
-
-    firewall_flush = "iptables -F"
-    session_serial.cmd_output(firewall_flush)
-    try:
-        utils.run("iptables -F")
-    except:
-        pass
-
-    for i in params.get("netperf_files").split():
-        vm.copy_files_to(os.path.join(netperf_dir, i), "/tmp")
-
-    try:
-        session_serial.cmd(firewall_flush)
-    except aexpect.ShellError:
-        logging.warning("Could not flush firewall rules on guest")
-
-    session_serial.cmd(setup_cmd % "/tmp", timeout=200)
-    session_serial.cmd(params.get("netserver_cmd") % "/tmp")
-
-    tcpdump = env.get("tcpdump")
-    pid = None
-    if tcpdump:
-        # Stop the background tcpdump process
-        try:
-            pid = int(utils.system_output("pidof tcpdump"))
-            logging.debug("Stopping the background tcpdump")
-            os.kill(pid, signal.SIGSTOP)
-        except:
-            pass
-
-    try:
-        logging.info("Setup and run netperf client on host")
-        utils.run(setup_cmd % netperf_dir)
-        list_fail = []
-        result = open(result_file, "w")
-        result.write("Netperf test results\n")
-
-        for i in params.get("protocols").split():
-            packet_size = params.get("packet_size", "1500")
-            for size in packet_size.split():
-                cmd = params.get("netperf_cmd") % (netperf_dir, i,
-                                                   guest_ip, size)
-                logging.info("Netperf: protocol %s", i)
-                try:
-                    netperf_output = utils.system_output(cmd,
-                                                         retain_output=True)
-                    result.write("%s\n" % netperf_output)
-                except:
-                    logging.error("Test of protocol %s failed", i)
-                    list_fail.append(i)
-
-        result.close()
-
-        if list_fail:
-            raise error.TestFail("Some netperf tests failed: %s" %
-                                 ", ".join(list_fail))
-
-    finally:
-        session_serial.cmd_output("killall netserver")
-        if tcpdump and pid:
-            logging.debug("Resuming the background tcpdump (pid %s)", pid)
-            os.kill(pid, signal.SIGCONT)
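The netperf test expands a protocol list and a packet-size list into one command per combination via the `netperf_cmd` config template. A sketch of that expansion; the template string below is illustrative only, the real one comes from the Cartesian config:

```python
def netperf_cmds(template, netperf_dir, guest_ip, protocols, packet_sizes):
    """Expand the protocol/packet-size matrix into concrete commands,
    mirroring the nested loop in the removed netperf test."""
    cmds = []
    for proto in protocols.split():
        for size in packet_sizes.split():
            # Same substitution order as the test:
            # (netperf_dir, protocol, guest_ip, size)
            cmds.append(template % (netperf_dir, proto, guest_ip, size))
    return cmds

tmpl = "%s/netperf -t %s -H %s -- -m %s"  # hypothetical template
for cmd in netperf_cmds(tmpl, "/opt/np", "10.0.0.2",
                        "TCP_STREAM UDP_STREAM", "512 1500"):
    print(cmd)
```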
diff --git a/client/tests/kvm/tests/nic_promisc.py b/client/tests/kvm/tests/nic_promisc.py
deleted file mode 100644
index 0ff07b8..0000000
--- a/client/tests/kvm/tests/nic_promisc.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import logging, threading
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-from autotest_lib.client.tests.kvm.tests import file_transfer
-from autotest_lib.client.virt import virt_test_utils, virt_utils
-
-
-def run_nic_promisc(test, params, env):
-    """
-    Test nic driver in promisc mode:
-
-    1) Boot up a VM.
-    2) Repeatedly enable/disable promiscuous mode in guest.
-    3) Transfer files from host to guest and from guest to host at the
-       same time.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session_serial = vm.wait_for_serial_login(timeout=timeout)
-
-    ethname = virt_test_utils.get_linux_ifname(session_serial,
-                                              vm.get_mac_address(0))
-
-    try:
-        transfer_thread = virt_utils.Thread(file_transfer.run_file_transfer,
-                                           (test, params, env))
-        transfer_thread.start()
-        while transfer_thread.isAlive():
-            session_serial.cmd("ip link set %s promisc on" % ethname)
-            session_serial.cmd("ip link set %s promisc off" % ethname)
-    except:
-        transfer_thread.join(suppress_exception=True)
-        raise
-    else:
-        transfer_thread.join()
diff --git a/client/tests/kvm/tests/nicdriver_unload.py b/client/tests/kvm/tests/nicdriver_unload.py
deleted file mode 100644
index 6d3d4da..0000000
--- a/client/tests/kvm/tests/nicdriver_unload.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import logging, threading, os, time
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.bin import utils
-from autotest_lib.client.tests.kvm.tests import file_transfer
-from autotest_lib.client.virt import virt_test_utils, virt_utils
-
-
-def run_nicdriver_unload(test, params, env):
-    """
-    Test nic driver.
-
-    1) Boot a VM.
-    2) Get the NIC driver name.
-    3) Repeatedly unload/load NIC driver.
-    4) Multi-session TCP transfer on test interface.
-    5) Check whether the test interface should still work.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    timeout = int(params.get("login_timeout", 360))
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    session_serial = vm.wait_for_serial_login(timeout=timeout)
-
-    ethname = virt_test_utils.get_linux_ifname(session_serial,
-                                               vm.get_mac_address(0))
-    sys_path = "/sys/class/net/%s/device/driver" % (ethname)
-    driver = os.path.basename(session_serial.cmd("readlink -e %s" %
-                                                 sys_path).strip())
-    logging.info("driver is %s", driver)
-
-    try:
-        threads = []
-        for t in range(int(params.get("sessions_num", "10"))):
-            thread = virt_utils.Thread(file_transfer.run_file_transfer,
-                                      (test, params, env))
-            thread.start()
-            threads.append(thread)
-
-        time.sleep(10)
-        while threads[0].isAlive():
-            session_serial.cmd("sleep 10")
-            session_serial.cmd("ifconfig %s down" % ethname)
-            session_serial.cmd("modprobe -r %s" % driver)
-            session_serial.cmd("modprobe %s" % driver)
-            session_serial.cmd("ifconfig %s up" % ethname)
-    except:
-        for thread in threads:
-            thread.join(suppress_exception=True)
-        raise
-    else:
-        for thread in threads:
-            thread.join()
-
diff --git a/client/tests/kvm/tests/ping.py b/client/tests/kvm/tests/ping.py
deleted file mode 100644
index 08791fb..0000000
--- a/client/tests/kvm/tests/ping.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import logging
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_test_utils
-
-
-def run_ping(test, params, env):
-    """
-    Ping the guest with different size of packets.
-
-    Packet Loss Test:
-    1) Ping the guest with different size/interval of packets.
-
-    Stress Test:
-    1) Flood ping the guest.
-    2) Check if the network is still usable.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
-
-    counts = params.get("ping_counts", 100)
-    flood_minutes = float(params.get("flood_minutes", 10))
-    nics = params.get("nics").split()
-    strict_check = params.get("strict_check", "no") == "yes"
-
-    packet_size = [0, 1, 4, 48, 512, 1440, 1500, 1505, 4054, 4055, 4096, 4192,
-                   8878, 9000, 32767, 65507]
-
-    try:
-        for i, nic in enumerate(nics):
-            ip = vm.get_address(i)
-            if not ip:
-                logging.error("Could not get the ip of nic index %d: %s",
-                              i, nic)
-                continue
-
-            for size in packet_size:
-                logging.info("Ping with packet size %s", size)
-                status, output = virt_test_utils.ping(ip, 10,
-                                                     packetsize=size,
-                                                     timeout=20)
-                if strict_check:
-                    ratio = virt_test_utils.get_loss_ratio(output)
-                    if ratio != 0:
-                        raise error.TestFail("Loss ratio is %s for packet size"
-                                             " %s" % (ratio, size))
-                else:
-                    if status != 0:
-                        raise error.TestFail("Ping failed, status: %s,"
-                                             " output: %s" % (status, output))
-
-            logging.info("Flood ping test")
-            virt_test_utils.ping(ip, None, flood=True, output_func=None,
-                                timeout=flood_minutes * 60)
-
-            logging.info("Final ping test")
-            status, output = virt_test_utils.ping(ip, counts,
-                                                 timeout=float(counts) * 1.5)
-            if strict_check:
-                ratio = virt_test_utils.get_loss_ratio(output)
-                if ratio != 0:
-                    raise error.TestFail("Final ping loss ratio is %s, "
-                                         "output: %s" % (ratio, output))
-            else:
-                if status != 0:
-                    raise error.TestFail("Ping returned a non-zero status, "
-                                         "output: %s" % output)
-    finally:
-        session.close()
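Several of these tests gate pass/fail on the loss percentage reported by ping(8), extracted by `virt_test_utils.get_loss_ratio`. A standalone sketch of that parsing under the assumption (not verified against the helper's source) that it returns -1 when no summary line is found:

```python
import re

def loss_ratio(ping_output):
    """Packet-loss percentage from ping(8) output, or -1 if the
    'X% packet loss' summary line is absent."""
    m = re.search(r"(\d+)% packet loss", ping_output)
    return int(m.group(1)) if m else -1

out = "10 packets transmitted, 10 received, 0% packet loss, time 9012ms"
print(loss_ratio(out))  # 0
```

The `strict_check` branch above is then simply `loss_ratio(output) != 0`, while the relaxed branch only looks at ping's exit status.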
diff --git a/client/tests/kvm/tests/pxe.py b/client/tests/kvm/tests/pxe.py
deleted file mode 100644
index 325e353..0000000
--- a/client/tests/kvm/tests/pxe.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import logging
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import aexpect
-
-def run_pxe(test, params, env):
-    """
-    PXE test:
-
-    1) Snoop the tftp packet in the tap device.
-    2) Wait for some seconds.
-    3) Check whether we could capture TFTP packets.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("pxe_timeout", 60))
-
-    logging.info("Try to boot from PXE")
-    output = aexpect.run_fg("tcpdump -nli %s" % vm.get_ifname(),
-                                   logging.debug, "(pxe capture) ", timeout)[1]
-
-    logging.info("Analyzing the tcpdump result...")
-    if not "tftp" in output:
-        raise error.TestFail("Couldn't find any TFTP packets after %s seconds" %
-                             timeout)
-    logging.info("Found TFTP packet")
diff --git a/client/tests/kvm/tests/shutdown.py b/client/tests/kvm/tests/shutdown.py
deleted file mode 100644
index ac41a4a..0000000
--- a/client/tests/kvm/tests/shutdown.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import logging, time
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_utils
-
-
-def run_shutdown(test, params, env):
-    """
-    KVM shutdown test:
-    1) Log into a guest
-    2) Send a shutdown command to the guest, or issue a system_powerdown
-       monitor command (depending on the value of shutdown_method)
-    3) Wait until the guest is down
-
-    @param test: kvm test object
-    @param params: Dictionary with the test parameters
-    @param env: Dictionary with test environment
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session = vm.wait_for_login(timeout=timeout)
-
-    try:
-        if params.get("shutdown_method") == "shell":
-            # Send a shutdown command to the guest's shell
-            session.sendline(vm.get_params().get("shutdown_command"))
-            logging.info("Shutdown command sent; waiting for guest to go "
-                         "down...")
-        elif params.get("shutdown_method") == "system_powerdown":
-            # Sleep for a while -- give the guest a chance to finish booting
-            time.sleep(float(params.get("sleep_before_powerdown", 10)))
-            # Send a system_powerdown monitor command
-            vm.monitor.cmd("system_powerdown")
-            logging.info("system_powerdown monitor command sent; waiting for "
-                         "guest to go down...")
-
-        if not virt_utils.wait_for(vm.is_dead, 240, 0, 1):
-            raise error.TestFail("Guest refuses to go down")
-
-        logging.info("Guest is down")
-
-    finally:
-        session.close()
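Most of these tests poll guest state through `virt_utils.wait_for(func, timeout, first, step)`. An approximation of that helper's contract, inferred from the call sites above rather than copied from its source:

```python
import time

def wait_for(func, timeout, first=0.0, step=1.0):
    """Poll func() every `step` seconds (after an initial `first`-second
    delay) until it returns a true value or `timeout` seconds elapse.
    Returns func()'s value on success, None on timeout."""
    time.sleep(first)
    end_time = time.time() + timeout
    while time.time() < end_time:
        result = func()
        if result:
            return result
        time.sleep(step)
    return None
```

This is why `run_shutdown` can pass the bound method `vm.is_dead` directly: the helper keeps calling it until the VM process is gone or 240 seconds pass.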
diff --git a/client/tests/kvm/tests/stress_boot.py b/client/tests/kvm/tests/stress_boot.py
deleted file mode 100644
index e3ac14d..0000000
--- a/client/tests/kvm/tests/stress_boot.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import logging
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_env_process
-
-
-@error.context_aware
-def run_stress_boot(test, params, env):
-    """
-    Boots VMs until one of them becomes unresponsive, and records the maximum
-    number of VMs successfully started:
-    1) boot the first vm
-    2) boot the second vm cloned from the first vm, check whether it boots up
-       and all booted vms respond to shell commands
-    3) continue until a VM fails to boot or memory can no longer be
-       allocated for a new VM
-
-    @param test:   kvm test object
-    @param params: Dictionary with the test parameters
-    @param env:    Dictionary with test environment.
-    """
-    error.base_context("waiting for the first guest to be up", logging.info)
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    login_timeout = float(params.get("login_timeout", 240))
-    session = vm.wait_for_login(timeout=login_timeout)
-
-    num = 2
-    sessions = [session]
-
-    # Boot the VMs
-    try:
-        while num <= int(params.get("max_vms")):
-            # Clone vm according to the first one
-            error.base_context("booting guest #%d" % num, logging.info)
-            vm_name = "vm%d" % num
-            vm_params = vm.params.copy()
-            curr_vm = vm.clone(vm_name, vm_params)
-            env.register_vm(vm_name, curr_vm)
-            virt_env_process.preprocess_vm(test, vm_params, env, vm_name)
-            params["vms"] += " " + vm_name
-
-            sessions.append(curr_vm.wait_for_login(timeout=login_timeout))
-            logging.info("Guest #%d booted up successfully", num)
-
-            # Check whether all previous shell sessions are responsive
-            for i, se in enumerate(sessions):
-                error.context("checking responsiveness of guest #%d" % (i + 1),
-                              logging.debug)
-                se.cmd(params.get("alive_test_cmd"))
-            num += 1
-    finally:
-        for se in sessions:
-            se.close()
-        logging.info("Total number booted: %d", num - 1)
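The stress_boot loop names each clone `vm2`, `vm3`, ... and appends it to the `vms` parameter before registering it with the environment. That bookkeeping in isolation; a plain dict stands in for the real params object here:

```python
def next_clone(params, num):
    """Name the next clone and append it to the 'vms' list, mirroring
    the bookkeeping stress_boot does before preprocess_vm()."""
    vm_name = "vm%d" % num
    params["vms"] += " " + vm_name
    return vm_name

params = {"vms": "vm1"}
names = [next_clone(params, n) for n in range(2, 5)]
print(names)          # ['vm2', 'vm3', 'vm4']
print(params["vms"])  # 'vm1 vm2 vm3 vm4'
```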
diff --git a/client/tests/kvm/tests/vlan.py b/client/tests/kvm/tests/vlan.py
deleted file mode 100644
index 9fc1f64..0000000
--- a/client/tests/kvm/tests/vlan.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import logging, time, re
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_utils, virt_test_utils, aexpect
-
-
-def run_vlan(test, params, env):
-    """
-    Test 802.1Q vlan of the NIC, configured with the vconfig command.
-
-    1) Create two VMs.
-    2) Set up guests in 10 different vlans with vconfig, using hard-coded
-       IP addresses.
-    3) Test by pinging between the same and different vlans of the two VMs.
-    4) Test by TCP data transfer and flood ping within the same vlan of
-       the two VMs.
-    5) Test plumbing/unplumbing the maximal number of vlan interfaces.
-    6) Recover the vlan config.
-
-    @param test: KVM test object.
-    @param params: Dictionary with the test parameters.
-    @param env: Dictionary with test environment.
-    """
-    vm = []
-    session = []
-    ifname = []
-    vm_ip = []
-    digest_origin = []
-    vlan_ip = ['', '']
-    ip_unit = ['1', '2']
-    subnet = params.get("subnet")
-    vlan_num = int(params.get("vlan_num"))
-    maximal = int(params.get("maximal"))
-    file_size = params.get("file_size")
-
-    vm.append(env.get_vm(params["main_vm"]))
-    vm.append(env.get_vm("vm2"))
-    for vm_ in vm:
-        vm_.verify_alive()
-
-    def add_vlan(session, v_id, iface="eth0"):
-        session.cmd("vconfig add %s %s" % (iface, v_id))
-
-    def set_ip_vlan(session, v_id, ip, iface="eth0"):
-        iface = "%s.%s" % (iface, v_id)
-        session.cmd("ifconfig %s %s" % (iface, ip))
-
-    def set_arp_ignore(session, iface="eth0"):
-        ignore_cmd = "echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore"
-        session.cmd(ignore_cmd)
-
-    def rem_vlan(session, v_id, iface="eth0"):
-        rem_vlan_cmd = "if [[ -e /proc/net/vlan/%s ]];then vconfig rem %s;fi"
-        iface = "%s.%s" % (iface, v_id)
-        return session.cmd_status(rem_vlan_cmd % (iface, iface))
-
-    def nc_transfer(src, dst):
-        nc_port = virt_utils.find_free_port(1025, 5334, vm_ip[dst])
-        listen_cmd = params.get("listen_cmd")
-        send_cmd = params.get("send_cmd")
-
-        # Listen on dst
-        listen_cmd = listen_cmd % (nc_port, "receive")
-        session[dst].sendline(listen_cmd)
-        time.sleep(2)
-        # Send the file from src to dst
-        send_cmd = send_cmd % (vlan_ip[dst], str(nc_port), "file")
-        session[src].cmd(send_cmd, timeout=60)
-        try:
-            session[dst].read_up_to_prompt(timeout=60)
-        except aexpect.ExpectError:
-            raise error.TestFail("Failed to receive file from vm%s to vm%s"
-                                 % (src + 1, dst + 1))
-        # Check the MD5 digest of the received file on dst
-        output = session[dst].cmd_output("md5sum receive").strip()
-        digest_receive = re.findall(r'(\w+)', output)[0]
-        if digest_receive == digest_origin[src]:
-            logging.info("File successfully received in vm %s", vlan_ip[dst])
-        else:
-            logging.info("digest_origin is  %s", digest_origin[src])
-            logging.info("digest_receive is %s", digest_receive)
-            raise error.TestFail("Transferred file differs from the original")
-        session[dst].cmd_output("rm -f receive")
-
-    for i in range(2):
-        session.append(vm[i].wait_for_login(
-            timeout=int(params.get("login_timeout", 360))))
-        if not session[i]:
-            raise error.TestError("Could not log into guest(vm%d)" % i)
-        logging.info("Logged in")
-
-        ifname.append(virt_test_utils.get_linux_ifname(session[i],
-                      vm[i].get_mac_address()))
-        # Get guest IP
-        vm_ip.append(vm[i].get_address())
-
-        # Produce a sized file in the vm
-        dd_cmd = "dd if=/dev/urandom of=file bs=1024k count=%s"
-        session[i].cmd(dd_cmd % file_size)
-        # Record MD5 message digest of the file
-        output = session[i].cmd("md5sum file", timeout=60)
-        digest_origin.append(re.findall(r'(\w+)', output)[0])
-
-        # Stop firewall in the vm
-        session[i].cmd_output("/etc/init.d/iptables stop")
-
-        # Load the 8021q module for vconfig
-        session[i].cmd("modprobe 8021q")
-
-    try:
-        for i in range(2):
-            for vlan_i in range(1, vlan_num+1):
-                add_vlan(session[i], vlan_i, ifname[i])
-                set_ip_vlan(session[i], vlan_i, "%s.%s.%s" %
-                            (subnet, vlan_i, ip_unit[i]), ifname[i])
-            set_arp_ignore(session[i], ifname[i])
-
-        for vlan in range(1, vlan_num+1):
-            logging.info("Test for vlan %s", vlan)
-
-            logging.info("Ping between vlans")
-            interface = ifname[0] + '.' + str(vlan)
-            for vlan2 in range(1, vlan_num+1):
-                for i in range(2):
-                    interface = ifname[i] + '.' + str(vlan)
-                    dest = subnet +'.'+ str(vlan2)+ '.' + ip_unit[(i+1)%2]
-                    s, o = virt_test_utils.ping(dest, count=2,
-                                              interface=interface,
-                                              session=session[i], timeout=30)
-                    if ((vlan == vlan2) ^ (s == 0)):
-                        raise error.TestFail("Unexpected result: %s ping %s" %
-                                             (interface, dest))
-
-            vlan_ip[0] = subnet + '.' + str(vlan) + '.' + ip_unit[0]
-            vlan_ip[1] = subnet + '.' + str(vlan) + '.' + ip_unit[1]
-
-            logging.info("Flood ping")
-            def flood_ping(src, dst):
-                # We must use a dedicated session, because aexpect has no
-                # way to interrupt a process in the guest other than
-                # closing the session.
-                session_flood = vm[src].wait_for_login(timeout=60)
-                virt_test_utils.ping(vlan_ip[dst], flood=True,
-                                   interface=ifname[src],
-                                   session=session_flood, timeout=10)
-                session_flood.close()
-
-            flood_ping(0, 1)
-            flood_ping(1, 0)
-
-            logging.info("Transferring data through nc")
-            nc_transfer(0, 1)
-            nc_transfer(1, 0)
-
-    finally:
-        for vlan in range(1, vlan_num+1):
-            rem_vlan(session[0], vlan, ifname[0])
-            rem_vlan(session[1], vlan, ifname[1])
-            logging.info("rem vlan: %s", vlan)
-
-    # Plumb/unplumb maximal number of vlan interfaces
-    i = 1
-    s = 0
-    try:
-        logging.info("Testing the plumb of vlan interface")
-        for i in range(1, maximal + 1):
-            add_vlan(session[0], i, ifname[0])
-    finally:
-        for j in range(1, i + 1):
-            s = s or rem_vlan(session[0], j, ifname[0])
-        if s == 0:
-            logging.info("maximal interface plumb test done")
-        else:
-            logging.error("maximal interface plumb test failed")
-
-    session[0].close()
-    session[1].close()
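The pass/fail condition in the vlan test above reduces to an XOR between "source and destination are in the same vlan" and "the ping succeeded" (exit status 0): same-vlan pings must work, cross-vlan pings must not. A standalone sketch of that expectation (illustrative only, not part of the patch):

```python
def ping_result_ok(vlan_src, vlan_dst, ping_status):
    """Return True when the ping outcome matches the vlan topology."""
    same_vlan = (vlan_src == vlan_dst)
    ping_succeeded = (ping_status == 0)
    # A mismatch (the XOR case in the test) is an unexpected result:
    # either a cross-vlan ping got through, or a same-vlan ping failed.
    return same_vlan == ping_succeeded
```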
diff --git a/client/tests/kvm/tests/whql_client_install.py b/client/tests/kvm/tests/whql_client_install.py
deleted file mode 100644
index 2d72a5e..0000000
--- a/client/tests/kvm/tests/whql_client_install.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import logging, time, os
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_utils, virt_test_utils, rss_client
-
-
-def run_whql_client_install(test, params, env):
-    """
-    WHQL DTM client installation:
-    1) Log into the guest (the client machine) and into a DTM server machine
-    2) Stop the DTM client service (wttsvc) on the client machine
-    3) Delete the client machine from the server's data store
-    4) Rename the client machine (give it a randomly generated name)
-    5) Move the client machine into the server's workgroup
-    6) Reboot the client machine
-    7) Install the DTM client software
-    8) Setup auto logon for the user created by the installation
-       (normally DTMLLUAdminUser)
-    9) Reboot again
-
-    @param test: kvm test object
-    @param params: Dictionary with the test parameters
-    @param env: Dictionary with test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
-
-    # Collect test params
-    server_address = params.get("server_address")
-    server_shell_port = int(params.get("server_shell_port"))
-    server_file_transfer_port = int(params.get("server_file_transfer_port"))
-    server_studio_path = params.get("server_studio_path", "%programfiles%\\"
-                                    "Microsoft Driver Test Manager\\Studio")
-    server_username = params.get("server_username")
-    server_password = params.get("server_password")
-    client_username = params.get("client_username")
-    client_password = params.get("client_password")
-    dsso_delete_machine_binary = params.get("dsso_delete_machine_binary",
-                                            "deps/whql_delete_machine_15.exe")
-    dsso_delete_machine_binary = virt_utils.get_path(test.bindir,
-                                                    dsso_delete_machine_binary)
-    install_timeout = float(params.get("install_timeout", 600))
-    install_cmd = params.get("install_cmd")
-    wtt_services = params.get("wtt_services")
-
-    # Stop WTT service(s) on client
-    for svc in wtt_services.split():
-        virt_test_utils.stop_windows_service(session, svc)
-
-    # Copy dsso_delete_machine_binary to server
-    rss_client.upload(server_address, server_file_transfer_port,
-                             dsso_delete_machine_binary, server_studio_path,
-                             timeout=60)
-
-    # Open a shell session with server
-    server_session = virt_utils.remote_login("nc", server_address,
-                                            server_shell_port, "", "",
-                                            session.prompt, session.linesep)
-    server_session.set_status_test_command(session.status_test_command)
-
-    # Get server and client information
-    cmd = "echo %computername%"
-    server_name = server_session.cmd_output(cmd).strip()
-    client_name = session.cmd_output(cmd).strip()
-    cmd = "wmic computersystem get domain"
-    server_workgroup = server_session.cmd_output(cmd).strip()
-    server_workgroup = server_workgroup.splitlines()[-1]
-    regkey = r"HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
-    cmd = "reg query %s /v Domain" % regkey
-    o = server_session.cmd_output(cmd).strip().splitlines()[-1]
-    try:
-        server_dns_suffix = o.split(None, 2)[2]
-    except IndexError:
-        server_dns_suffix = ""
-
-    # Delete the client machine from the server's data store (if it's there)
-    server_session.cmd("cd %s" % server_studio_path)
-    cmd = "%s %s %s" % (os.path.basename(dsso_delete_machine_binary),
-                        server_name, client_name)
-    server_session.cmd(cmd, print_func=logging.info)
-    server_session.close()
-
-    # Rename the client machine
-    client_name = "autotest_%s" % virt_utils.generate_random_string(4)
-    logging.info("Renaming client machine to '%s'", client_name)
-    cmd = ('wmic computersystem where name="%%computername%%" rename name="%s"'
-           % client_name)
-    session.cmd(cmd, timeout=600)
-
-    # Join the server's workgroup
-    logging.info("Joining workgroup '%s'", server_workgroup)
-    cmd = ('wmic computersystem where name="%%computername%%" call '
-           'joindomainorworkgroup name="%s"' % server_workgroup)
-    session.cmd(cmd, timeout=600)
-
-    # Set the client machine's DNS suffix
-    logging.info("Setting DNS suffix to '%s'", server_dns_suffix)
-    cmd = 'reg add %s /v Domain /d "%s" /f' % (regkey, server_dns_suffix)
-    session.cmd(cmd, timeout=300)
-
-    # Reboot
-    session = vm.reboot(session)
-
-    # Access shared resources on the server machine
-    logging.info("Attempting to access remote share on server")
-    cmd = r"net use \\%s /user:%s %s" % (server_name, server_username,
-                                         server_password)
-    end_time = time.time() + 120
-    while time.time() < end_time:
-        try:
-            session.cmd(cmd)
-            break
-        except Exception:
-            pass
-        time.sleep(5)
-    else:
-        raise error.TestError("Could not access server share from client "
-                              "machine")
-
-    # Install
-    logging.info("Installing DTM client (timeout=%ds)", install_timeout)
-    install_cmd = r"cmd /c \\%s\%s" % (server_name, install_cmd.lstrip("\\"))
-    session.cmd(install_cmd, timeout=install_timeout)
-
-    # Setup auto logon
-    logging.info("Setting up auto logon for user '%s'", client_username)
-    cmd = ('reg add '
-           '"HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\winlogon" '
-           '/v "%s" /d "%s" /t REG_SZ /f')
-    session.cmd(cmd % ("AutoAdminLogon", "1"))
-    session.cmd(cmd % ("DefaultUserName", client_username))
-    session.cmd(cmd % ("DefaultPassword", client_password))
-
-    # Reboot one more time
-    session = vm.reboot(session)
-    session.close()
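The share-access loop in the client-install test above is a deadline-based retry: keep running `net use` until it succeeds or 120 seconds elapse, with `while`/`else` detecting exhaustion. A generic sketch of that pattern (the `retry_until` helper and its `sleep`/`clock` hooks are hypothetical, added only to make the sketch self-contained and testable):

```python
import time


def retry_until(func, timeout, interval=5.0, sleep=time.sleep, clock=time.time):
    """Call func() until it succeeds or a deadline passes.

    Returns True on the first call that does not raise, False when
    every attempt within `timeout` seconds failed -- the same shape as
    the retry loop around the `net use` command above.
    """
    end_time = clock() + timeout
    while clock() < end_time:
        try:
            func()
            return True
        except Exception:
            pass
        sleep(interval)
    return False
```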
diff --git a/client/tests/kvm/tests/whql_submission.py b/client/tests/kvm/tests/whql_submission.py
deleted file mode 100644
index bbeb836..0000000
--- a/client/tests/kvm/tests/whql_submission.py
+++ /dev/null
@@ -1,275 +0,0 @@
-import logging, os, re
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.virt import virt_utils, rss_client, aexpect
-
-
-def run_whql_submission(test, params, env):
-    """
-    WHQL submission test:
-    1) Log into the client machines and into a DTM server machine
-    2) Copy the automation program binary (dsso_test_binary) to the server machine
-    3) Run the automation program
-    4) Pass the program all relevant parameters (e.g. device_data)
-    5) Wait for the program to terminate
-    6) Parse and report job results
-    (logs and HTML reports are placed in test.debugdir)
-
-    @param test: kvm test object
-    @param params: Dictionary with the test parameters
-    @param env: Dictionary with test environment.
-    """
-    # Log into all client VMs
-    login_timeout = int(params.get("login_timeout", 360))
-    vms = []
-    sessions = []
-    for vm_name in params.objects("vms"):
-        vms.append(env.get_vm(vm_name))
-        vms[-1].verify_alive()
-        sessions.append(vms[-1].wait_for_login(timeout=login_timeout))
-
-    # Make sure all NICs of all client VMs are up
-    for vm in vms:
-        nics = vm.params.objects("nics")
-        for nic_index in range(len(nics)):
-            s = vm.wait_for_login(nic_index, 600)
-            s.close()
-
-    # Collect parameters
-    server_address = params.get("server_address")
-    server_shell_port = int(params.get("server_shell_port"))
-    server_file_transfer_port = int(params.get("server_file_transfer_port"))
-    server_studio_path = params.get("server_studio_path", "%programfiles%\\"
-                                    "Microsoft Driver Test Manager\\Studio")
-    dsso_test_binary = params.get("dsso_test_binary",
-                                  "deps/whql_submission_15.exe")
-    dsso_test_binary = virt_utils.get_path(test.bindir, dsso_test_binary)
-    dsso_delete_machine_binary = params.get("dsso_delete_machine_binary",
-                                            "deps/whql_delete_machine_15.exe")
-    dsso_delete_machine_binary = virt_utils.get_path(test.bindir,
-                                                    dsso_delete_machine_binary)
-    test_timeout = float(params.get("test_timeout", 600))
-
-    # Copy dsso binaries to the server
-    for filename in dsso_test_binary, dsso_delete_machine_binary:
-        rss_client.upload(server_address, server_file_transfer_port,
-                                 filename, server_studio_path, timeout=60)
-
-    # Open a shell session with the server
-    server_session = virt_utils.remote_login("nc", server_address,
-                                            server_shell_port, "", "",
-                                            sessions[0].prompt,
-                                            sessions[0].linesep)
-    server_session.set_status_test_command(sessions[0].status_test_command)
-
-    # Get the computer names of the server and clients
-    cmd = "echo %computername%"
-    server_name = server_session.cmd_output(cmd).strip()
-    client_names = [session.cmd_output(cmd).strip() for session in sessions]
-
-    # Delete all client machines from the server's data store
-    server_session.cmd("cd %s" % server_studio_path)
-    for client_name in client_names:
-        cmd = "%s %s %s" % (os.path.basename(dsso_delete_machine_binary),
-                            server_name, client_name)
-        server_session.cmd(cmd, print_func=logging.debug)
-
-    # Reboot the client machines
-    sessions = virt_utils.parallel((vm.reboot, (session,))
-                                  for vm, session in zip(vms, sessions))
-
-    # Check the NICs again
-    for vm in vms:
-        nics = vm.params.objects("nics")
-        for nic_index in range(len(nics)):
-            s = vm.wait_for_login(nic_index, 600)
-            s.close()
-
-    # Run whql_pre_command and close the sessions
-    if params.get("whql_pre_command"):
-        for session in sessions:
-            session.cmd(params.get("whql_pre_command"),
-                        int(params.get("whql_pre_command_timeout", 600)))
-            session.close()
-
-    # Run the automation program on the server
-    pool_name = "%s_pool" % client_names[0]
-    submission_name = "%s_%s" % (client_names[0],
-                                 params.get("submission_name"))
-    cmd = "%s %s %s %s %s %s" % (os.path.basename(dsso_test_binary),
-                                 server_name, pool_name, submission_name,
-                                 test_timeout, " ".join(client_names))
-    server_session.sendline(cmd)
-
-    # Helper function: wait for a given prompt and raise an exception if an
-    # error occurs
-    def find_prompt(prompt):
-        m, o = server_session.read_until_last_line_matches(
-            [prompt, server_session.prompt], print_func=logging.info,
-            timeout=600)
-        if m != 0:
-            errors = re.findall("^Error:.*$", o, re.I | re.M)
-            if errors:
-                raise error.TestError(errors[0])
-            else:
-                raise error.TestError("Error running automation program: "
-                                      "could not find '%s' prompt" % prompt)
-
-    # Tell the automation program which device to test
-    find_prompt("Device to test:")
-    server_session.sendline(params.get("test_device"))
-
-    # Tell the automation program which jobs to run
-    find_prompt("Jobs to run:")
-    server_session.sendline(params.get("job_filter", ".*"))
-
-    # Set submission DeviceData
-    find_prompt("DeviceData name:")
-    for dd in params.objects("device_data"):
-        dd_params = params.object_params(dd)
-        if dd_params.get("dd_name") and dd_params.get("dd_data"):
-            server_session.sendline(dd_params.get("dd_name"))
-            server_session.sendline(dd_params.get("dd_data"))
-    server_session.sendline()
-
-    # Set submission descriptors
-    find_prompt("Descriptor path:")
-    for desc in params.objects("descriptors"):
-        desc_params = params.object_params(desc)
-        if desc_params.get("desc_path"):
-            server_session.sendline(desc_params.get("desc_path"))
-    server_session.sendline()
-
-    # Set machine dimensions for each client machine
-    for vm_name in params.objects("vms"):
-        vm_params = params.object_params(vm_name)
-        find_prompt(r"Dimension name\b.*:")
-        for dp in vm_params.objects("dimensions"):
-            dp_params = vm_params.object_params(dp)
-            if dp_params.get("dim_name") and dp_params.get("dim_value"):
-                server_session.sendline(dp_params.get("dim_name"))
-                server_session.sendline(dp_params.get("dim_value"))
-        server_session.sendline()
-
-    # Set extra parameters for tests that require them (e.g. NDISTest)
-    for vm_name in params.objects("vms"):
-        vm_params = params.object_params(vm_name)
-        find_prompt(r"Parameter name\b.*:")
-        for dp in vm_params.objects("device_params"):
-            dp_params = vm_params.object_params(dp)
-            if dp_params.get("dp_name") and dp_params.get("dp_regex"):
-                server_session.sendline(dp_params.get("dp_name"))
-                server_session.sendline(dp_params.get("dp_regex"))
-                # Make sure the prompt appears again (if the device isn't found
-                # the automation program will terminate)
-                find_prompt(r"Parameter name\b.*:")
-        server_session.sendline()
-
-    # Wait for the automation program to terminate
-    try:
-        o = server_session.read_up_to_prompt(print_func=logging.info,
-                                             timeout=test_timeout + 300)
-        # (test_timeout + 300 is used here because the automation program is
-        # supposed to terminate cleanly on its own when test_timeout expires)
-        done = True
-    except aexpect.ExpectError, e:
-        o = e.output
-        done = False
-    server_session.close()
-
-    # Look for test results in the automation program's output
-    result_summaries = re.findall(r"---- \[.*?\] ----", o, re.DOTALL)
-    if not result_summaries:
-        raise error.TestError("The automation program did not return any "
-                              "results")
-    results = result_summaries[-1].strip("-")
-    results = eval("".join(results.splitlines()))
-
-    # Download logs and HTML reports from the server
-    for i, r in enumerate(results):
-        if "report" in r:
-            try:
-                rss_client.download(server_address,
-                                           server_file_transfer_port,
-                                           r["report"], test.debugdir)
-            except rss_client.FileTransferNotFoundError:
-                pass
-        if "logs" in r:
-            try:
-                rss_client.download(server_address,
-                                           server_file_transfer_port,
-                                           r["logs"], test.debugdir)
-            except rss_client.FileTransferNotFoundError:
-                pass
-            else:
-                try:
-                    # Create symlinks to test log dirs to make it easier
-                    # to access them (their original names are not human
-                    # readable)
-                    link_name = "logs_%s" % r["report"].split("\\")[-1]
-                    link_name = link_name.replace(" ", "_")
-                    link_name = link_name.replace("/", "_")
-                    os.symlink(r["logs"].split("\\")[-1],
-                               os.path.join(test.debugdir, link_name))
-                except (KeyError, OSError):
-                    pass
-
-    # Print result summary (both to the regular logs and to a file named
-    # 'summary' in test.debugdir)
-    def print_summary_line(f, line):
-        logging.info(line)
-        f.write(line + "\n")
-    if results:
-        # Make sure all results have the required keys
-        for r in results:
-            r["id"] = str(r.get("id"))
-            r["job"] = str(r.get("job"))
-            r["status"] = str(r.get("status"))
-            r["pass"] = int(r.get("pass", 0))
-            r["fail"] = int(r.get("fail", 0))
-            r["notrun"] = int(r.get("notrun", 0))
-            r["notapplicable"] = int(r.get("notapplicable", 0))
-        # Sort the results by failures and total test count in descending order
-        results = [(r["fail"],
-                    r["pass"] + r["fail"] + r["notrun"] + r["notapplicable"],
-                    r) for r in results]
-        results.sort(reverse=True)
-        results = [r[-1] for r in results]
-        # Print results
-        logging.info("")
-        logging.info("Result summary:")
-        name_length = max(len(r["job"]) for r in results)
-        fmt = "%%-6s %%-%ds %%-15s %%-8s %%-8s %%-8s %%-15s" % name_length
-        f = open(os.path.join(test.debugdir, "summary"), "w")
-        print_summary_line(f, fmt % ("ID", "Job", "Status", "Pass", "Fail",
-                                     "NotRun", "NotApplicable"))
-        print_summary_line(f, fmt % ("--", "---", "------", "----", "----",
-                                     "------", "-------------"))
-        for r in results:
-            print_summary_line(f, fmt % (r["id"], r["job"], r["status"],
-                                         r["pass"], r["fail"], r["notrun"],
-                                         r["notapplicable"]))
-        f.close()
-        logging.info("(see logs and HTML reports in %s)", test.debugdir)
-
-    # Kill the client VMs and fail if the automation program did not terminate
-    # on time
-    if not done:
-        virt_utils.parallel(vm.destroy for vm in vms)
-        raise error.TestFail("The automation program did not terminate "
-                             "on time")
-
-    # Fail if there are failed or incomplete jobs (kill the client VMs if there
-    # are incomplete jobs)
-    failed_jobs = [r["job"] for r in results
-                   if r["status"].lower() == "investigate"]
-    running_jobs = [r["job"] for r in results
-                    if r["status"].lower() == "inprogress"]
-    errors = []
-    if failed_jobs:
-        errors += ["Jobs failed: %s." % failed_jobs]
-    if running_jobs:
-        for vm in vms:
-            vm.destroy()
-        errors += ["Jobs did not complete on time: %s." % running_jobs]
-    if errors:
-        raise error.TestFail(" ".join(errors))
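The summary code in the submission test above orders jobs by failure count, then by total test count, both descending, via decorate-sort-undecorate. A minimal sketch of that ordering (using an explicit `key`, which also avoids ever comparing the trailing dicts when two jobs tie, something plain tuple comparison would attempt):

```python
def sort_results(results):
    """Order WHQL-style result dicts by failure count, then by total
    test count, both descending (decorate-sort-undecorate, as in the
    summary code above)."""
    decorated = [(r["fail"],
                  r["pass"] + r["fail"] + r["notrun"] + r["notapplicable"],
                  r)
                 for r in results]
    # Sort on the (fail, total) decoration only, largest first.
    decorated.sort(key=lambda item: (item[0], item[1]), reverse=True)
    return [item[-1] for item in decorated]
```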
diff --git a/client/tests/kvm/tests/yum_update.py b/client/tests/kvm/tests/yum_update.py
deleted file mode 100644
index 7c9b96c..0000000
--- a/client/tests/kvm/tests/yum_update.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import logging, time
-
-
-def internal_yum_update(session, command, prompt, timeout):
-    """
-    Helper function to perform the yum update test.
-
-    @param session: shell session established to the host
-    @param command: Command to be sent to the shell session
-    @param prompt: Machine prompt
-    @param timeout: How long to wait until we get an appropriate output from
-            the shell session.
-    """
-    session.sendline(command)
-    end_time = time.time() + timeout
-    while time.time() < end_time:
-        match = session.read_until_last_line_matches(
-                                                ["[Ii]s this [Oo][Kk]", prompt],
-                                                timeout=timeout)[0]
-        if match == 0:
-            logging.info("Got 'Is this ok'; sending 'y'")
-            session.sendline("y")
-        elif match == 1:
-            logging.info("Got shell prompt")
-            return True
-        else:
-            logging.info("Timeout or process exited")
-            return False
-
-
-def run_yum_update(test, params, env):
-    """
-    Runs yum update and yum update kernel on the remote host (yum-enabled
-    hosts only).
-
-    @param test: kvm test object.
-    @param params: Dictionary with test parameters.
-    @param env: Dictionary with the test environment.
-    """
-    vm = env.get_vm(params["main_vm"])
-    vm.verify_alive()
-    timeout = int(params.get("login_timeout", 360))
-    session = vm.wait_for_login(timeout=timeout)
-
-    internal_yum_update(session, "yum update", params.get("shell_prompt"), 600)
-    internal_yum_update(session, "yum update kernel",
-                        params.get("shell_prompt"), 600)
-
-    session.close()
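internal_yum_update above distinguishes yum's confirmation prompt from the returning shell prompt with case-insensitive pattern matching. The same classification in isolation (the `classify_line` helper is illustrative, not part of the patch):

```python
import re

# Case-insensitive match for yum's confirmation prompt, equivalent to
# the "[Ii]s this [Oo][Kk]" pattern used above.
YUM_CONFIRM = re.compile(r"is this ok", re.IGNORECASE)


def classify_line(line, shell_prompt):
    """Classify a line of session output: 'confirm' for yum's
    confirmation prompt (answered with 'y'), 'prompt' when the shell
    prompt is back (command finished), None otherwise."""
    if YUM_CONFIRM.search(line):
        return "confirm"
    if re.search(shell_prompt, line):
        return "prompt"
    return None
```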
diff --git a/client/virt/tests/autotest.py b/client/virt/tests/autotest.py
new file mode 100644
index 0000000..cdea31a
--- /dev/null
+++ b/client/virt/tests/autotest.py
@@ -0,0 +1,25 @@
+import os
+from autotest_lib.client.virt import virt_test_utils
+
+
+def run_autotest(test, params, env):
+    """
+    Run an autotest test inside a guest.
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+
+    # Collect test parameters
+    timeout = int(params.get("test_timeout", 300))
+    control_path = os.path.join(test.bindir, "autotest_control",
+                                params.get("test_control_file"))
+    outputdir = test.outputdir
+
+    virt_test_utils.run_autotest(vm, session, control_path, timeout, outputdir,
+                                 params)
diff --git a/client/virt/tests/boot.py b/client/virt/tests/boot.py
new file mode 100644
index 0000000..4fabcd5
--- /dev/null
+++ b/client/virt/tests/boot.py
@@ -0,0 +1,26 @@
+import time
+
+
+def run_boot(test, params, env):
+    """
+    KVM reboot test:
+    1) Log into a guest
+    2) Send a reboot command or a system_reset monitor command (optional)
+    3) Wait until the guest is up again
+    4) Log into the guest to verify it's up again
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = float(params.get("login_timeout", 240))
+    session = vm.wait_for_login(timeout=timeout)
+
+    if params.get("reboot_method"):
+        if params["reboot_method"] == "system_reset":
+            time.sleep(int(params.get("sleep_before_reset", 10)))
+        session = vm.reboot(session, params["reboot_method"], 0, timeout)
+
+    session.close()
diff --git a/client/virt/tests/clock_getres.py b/client/virt/tests/clock_getres.py
new file mode 100644
index 0000000..d1baf88
--- /dev/null
+++ b/client/virt/tests/clock_getres.py
@@ -0,0 +1,37 @@
+import logging, os
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+
+
+def run_clock_getres(test, params, env):
+    """
+    Verify if guests using kvm-clock as the time source have a sane clock
+    resolution.
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    t_name = "test_clock_getres"
+    base_dir = "/tmp"
+
+    deps_dir = os.path.join(test.bindir, "deps", t_name)
+    os.chdir(deps_dir)
+    try:
+        utils.system("make clean")
+        utils.system("make")
+    except Exception:
+        raise error.TestError("Failed to compile %s" % t_name)
+
+    test_clock = os.path.join(deps_dir, t_name)
+    if not os.path.isfile(test_clock):
+        raise error.TestError("Could not find %s" % t_name)
+
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+    vm.copy_files_to(test_clock, base_dir)
+    session.cmd(os.path.join(base_dir, t_name))
+    logging.info("PASS: Guest reported appropriate clock resolution")
+    logging.info("Guest's dmesg:\n%s", session.cmd_output("dmesg").strip())
diff --git a/client/virt/tests/ethtool.py b/client/virt/tests/ethtool.py
new file mode 100644
index 0000000..1152f00
--- /dev/null
+++ b/client/virt/tests/ethtool.py
@@ -0,0 +1,235 @@
+import logging, re
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import virt_test_utils, virt_utils, aexpect
+
+
+def run_ethtool(test, params, env):
+    """
+    Test offload functions of ethernet device by ethtool
+
+    1) Log into a guest.
+    2) Initialize the callback of sub functions.
+    3) Enable/disable sub function of NIC.
+    4) Execute callback function.
+    5) Check the return value.
+    6) Restore original configuration.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+
+    @todo: Not all guests have ethtool installed, so
+        find a way to get it installed using yum/apt-get/
+        whatever
+    """
+    def ethtool_get(f_type):
+        feature_pattern = {
+            'tx':  'tx.*checksumming',
+            'rx':  'rx.*checksumming',
+            'sg':  'scatter.*gather',
+            'tso': 'tcp.*segmentation.*offload',
+            'gso': 'generic.*segmentation.*offload',
+            'gro': 'generic.*receive.*offload',
+            'lro': 'large.*receive.*offload',
+            }
+        o = session.cmd("ethtool -k %s" % ethname)
+        try:
+            return re.findall("%s: (.*)" % feature_pattern.get(f_type), o)[0]
+        except IndexError:
+            logging.debug("Could not get %s status", f_type)
+
+
+    def ethtool_set(f_type, status):
+        """
+        Set ethernet device offload status
+
+        @param f_type: Offload type name
+        @param status: New status to set ("on" or "off")
+        """
+        logging.info("Setting %s to %s", f_type, status)
+        if status not in ["off", "on"]:
+            return False
+        cmd = "ethtool -K %s %s %s" % (ethname, f_type, status)
+        if ethtool_get(f_type) != status:
+            try:
+                session.cmd(cmd)
+            except aexpect.ShellError:
+                return False
+        # Verify the change actually took effect
+        if ethtool_get(f_type) != status:
+            logging.error("Failed to set %s %s", f_type, status)
+            return False
+        return True
+
+
+    def ethtool_save_params():
+        logging.info("Save ethtool configuration")
+        for i in supported_features:
+            feature_status[i] = ethtool_get(i)
+
+
+    def ethtool_restore_params():
+        logging.info("Restore ethtool configuration")
+        for i in supported_features:
+            ethtool_set(i, feature_status[i])
+
+
+    def compare_md5sum(name):
+        logging.info("Compare md5sum of the files on guest and host")
+        host_result = utils.hash_file(name, method="md5")
+        try:
+            o = session.cmd_output("md5sum %s" % name)
+            guest_result = re.findall("\w+", o)[0]
+        except IndexError:
+            logging.error("Could not get file md5sum in guest")
+            return False
+        logging.debug("md5sum: guest(%s), host(%s)", guest_result, host_result)
+        return guest_result == host_result
+
+
+    def transfer_file(src="guest"):
+        """
+        Transfer file by scp, use tcpdump to capture packets, then check the
+        return string.
+
+        @param src: Side the file is transferred from ("guest" or "host")
+        @return: Tuple (status, error msg/tcpdump result)
+        """
+        session2.cmd_output("rm -rf %s" % filename)
+        dd_cmd = ("dd if=/dev/urandom of=%s bs=1M count=%s" %
+                  (filename, params.get("filesize")))
+        failure = (False, "Failed to create file using dd, cmd: %s" % dd_cmd)
+        logging.info("Creating file in source host, cmd: %s", dd_cmd)
+        tcpdump_cmd = "tcpdump -lep -s 0 tcp -vv port ssh"
+        if src == "guest":
+            tcpdump_cmd += " and src %s" % guest_ip
+            copy_files_from = vm.copy_files_from
+            try:
+                session.cmd_output(dd_cmd, timeout=360)
+            except aexpect.ShellCmdError, e:
+                return failure
+        else:
+            tcpdump_cmd += " and dst %s" % guest_ip
+            copy_files_from = vm.copy_files_to
+            try:
+                utils.system(dd_cmd)
+            except error.CmdError, e:
+                return failure
+
+        # only capture the new tcp port after offload setup
+        original_tcp_ports = re.findall("tcp.*:(\d+).*%s" % guest_ip,
+                                      utils.system_output("/bin/netstat -nap"))
+        for i in original_tcp_ports:
+            tcpdump_cmd += " and not port %s" % i
+        logging.debug("Listen using command: %s", tcpdump_cmd)
+        session2.sendline(tcpdump_cmd)
+        if not virt_utils.wait_for(
+                lambda: session.cmd_status("pgrep tcpdump") == 0, 30):
+            return (False, "Tcpdump process wasn't launched")
+
+        logging.info("Start to transfer file")
+        try:
+            copy_files_from(filename, filename)
+        except virt_utils.SCPError, e:
+            return (False, "File transfer failed (%s)" % e)
+        logging.info("Transfer file completed")
+        session.cmd("killall tcpdump")
+        try:
+            tcpdump_string = session2.read_up_to_prompt(timeout=60)
+        except aexpect.ExpectError:
+            return (False, "Fail to read tcpdump's output")
+
+        if not compare_md5sum(filename):
+            return (False, "Files' md5sum mismatched")
+        return (True, tcpdump_string)
+
+
+    def tx_callback(status="on"):
+        s, o = transfer_file(src="guest")
+        if not s:
+            logging.error(o)
+            return False
+        return True
+
+
+    def rx_callback(status="on"):
+        s, o = transfer_file(src="host")
+        if not s:
+            logging.error(o)
+            return False
+        return True
+
+
+    def so_callback(status="on"):
+        s, o = transfer_file(src="guest")
+        if not s:
+            logging.error(o)
+            return False
+        logging.info("Check if contained large frame")
+        # MTU: default IPv4 MTU is 1500 Bytes, ethernet header is 14 Bytes
+        return (status == "on") ^ (len([i for i in re.findall(
+                                   "length (\d*):", o) if int(i) > mtu]) == 0)
+
+
+    def ro_callback(status="on"):
+        s, o = transfer_file(src="host")
+        if not s:
+            logging.error(o)
+            return False
+        return True
+
+
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
+    # Let's just error the test if we identify that there's no ethtool installed
+    session.cmd("ethtool -h")
+    session2 = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
+    mtu = 1514
+    feature_status = {}
+    filename = "/tmp/ethtool.dd"
+    guest_ip = vm.get_address()
+    ethname = virt_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
+    supported_features = params.get("supported_features")
+    if supported_features:
+        supported_features = supported_features.split()
+    else:
+        supported_features = []
+    test_matrix = {
+        # type:(callback,    (dependence), (exclude)
+        "tx":  (tx_callback, (), ()),
+        "rx":  (rx_callback, (), ()),
+        "sg":  (tx_callback, ("tx",), ()),
+        "tso": (so_callback, ("tx", "sg",), ("gso",)),
+        "gso": (so_callback, (), ("tso",)),
+        "gro": (ro_callback, ("rx",), ("lro",)),
+        "lro": (rx_callback, (), ("gro",)),
+        }
+    ethtool_save_params()
+    success = True
+    try:
+        for f_type in supported_features:
+            callback = test_matrix[f_type][0]
+            for i in test_matrix[f_type][2]:
+                if not ethtool_set(i, "off"):
+                    logging.error("Failed to disable %s", i)
+                    success = False
+            for i in list(test_matrix[f_type][1]) + [f_type]:
+                if not ethtool_set(i, "on"):
+                    logging.error("Failed to enable %s", i)
+                    success = False
+            if not callback():
+                raise error.TestFail("Test failed with %s on" % f_type)
+
+            if not ethtool_set(f_type, "off"):
+                logging.error("Failed to disable %s", f_type)
+                success = False
+            if not callback(status="off"):
+                raise error.TestFail("Test failed with %s off" % f_type)
+        if not success:
+            raise error.TestError("Failed to enable/disable offload features")
+    finally:
+        ethtool_restore_params()
+        session.close()
+        session2.close()
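The enable/disable ordering driven by `test_matrix` above can be sketched as a small pure function (a simplified model; the real test also tracks failures and runs the transfer callbacks):

```python
# Mirrors the (callback, dependencies, exclusions) table in run_ethtool:
# before testing a feature, its exclusions are switched off and its
# dependencies (plus the feature itself) are switched on.
TEST_MATRIX = {
    "tx":  ((), ()),
    "rx":  ((), ()),
    "sg":  (("tx",), ()),
    "tso": (("tx", "sg"), ("gso",)),
    "gso": ((), ("tso",)),
    "gro": (("rx",), ("lro",)),
    "lro": ((), ("gro",)),
}

def plan_feature(f_type, matrix=TEST_MATRIX):
    """Return (features_to_disable, features_to_enable) for one offload type."""
    deps, excludes = matrix[f_type]
    return list(excludes), list(deps) + [f_type]
```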
diff --git a/client/virt/tests/file_transfer.py b/client/virt/tests/file_transfer.py
new file mode 100644
index 0000000..5f6672d
--- /dev/null
+++ b/client/virt/tests/file_transfer.py
@@ -0,0 +1,84 @@
+import logging, time, os
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import virt_utils
+
+
+def run_file_transfer(test, params, env):
+    """
+    Test file transfer between host and guest over the network
+
+    1) Boot up a VM.
+    2) Create a large file by dd on host.
+    3) Copy this file from host to guest.
+    4) Copy this file from guest to host.
+    5) Compare md5sums to verify the transfers preserved file integrity.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    login_timeout = int(params.get("login_timeout", 360))
+
+    session = vm.wait_for_login(timeout=login_timeout)
+
+    dir_name = test.tmpdir
+    transfer_timeout = int(params.get("transfer_timeout"))
+    transfer_type = params.get("transfer_type")
+    tmp_dir = params.get("tmp_dir", "/tmp/")
+    clean_cmd = params.get("clean_cmd", "rm -f")
+    filesize = int(params.get("filesize", 4000))
+    count = int(filesize / 10)
+    if count == 0:
+        count = 1
+
+    host_path = os.path.join(dir_name, "tmp-%s" %
+                             virt_utils.generate_random_string(8))
+    host_path2 = host_path + ".2"
+    cmd = "dd if=/dev/zero of=%s bs=10M count=%d" % (host_path, count)
+    guest_path = (tmp_dir + "file_transfer-%s" %
+                  virt_utils.generate_random_string(8))
+
+    try:
+        logging.info("Creating %dMB file on host", filesize)
+        utils.run(cmd)
+
+        if transfer_type == "remote":
+            logging.info("Transferring file host -> guest, timeout: %ss",
+                         transfer_timeout)
+            t_begin = time.time()
+            vm.copy_files_to(host_path, guest_path, timeout=transfer_timeout)
+            t_end = time.time()
+            throughput = filesize / (t_end - t_begin)
+            logging.info("File transfer host -> guest succeeded, "
+                         "estimated throughput: %.2fMB/s", throughput)
+
+            logging.info("Transferring file guest -> host, timeout: %ss",
+                         transfer_timeout)
+            t_begin = time.time()
+            vm.copy_files_from(guest_path, host_path2, timeout=transfer_timeout)
+            t_end = time.time()
+            throughput = filesize / (t_end - t_begin)
+            logging.info("File transfer guest -> host succeeded, "
+                         "estimated throughput: %.2fMB/s", throughput)
+        else:
+            raise error.TestError("Unknown test file transfer mode %s" %
+                                  transfer_type)
+
+        if (utils.hash_file(host_path, method="md5") !=
+            utils.hash_file(host_path2, method="md5")):
+            raise error.TestFail("File changed after transfer host -> guest "
+                                 "and guest -> host")
+
+    finally:
+        logging.info('Cleaning temp file on guest')
+        session.cmd("rm -rf %s" % guest_path)
+        logging.info('Cleaning temp files on host')
+        try:
+            os.remove(host_path)
+            os.remove(host_path2)
+        except OSError:
+            pass
+        session.close()
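The file sizing and throughput arithmetic above reduces to the following (an illustrative sketch; the helper names are not part of the patch):

```python
def dd_count(filesize_mb, block_mb=10):
    """Number of 10 MB dd blocks needed for a file of filesize_mb MB
    (at least one, matching the count computation in run_file_transfer)."""
    return max(filesize_mb // block_mb, 1)

def throughput_mb_s(filesize_mb, t_begin, t_end):
    """Estimated transfer throughput in MB/s from wall-clock timestamps."""
    elapsed = t_end - t_begin
    if elapsed <= 0:
        raise ValueError("transfer end time must be after its begin time")
    return filesize_mb / elapsed
```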
diff --git a/client/virt/tests/guest_s4.py b/client/virt/tests/guest_s4.py
new file mode 100644
index 0000000..5b5708d
--- /dev/null
+++ b/client/virt/tests/guest_s4.py
@@ -0,0 +1,76 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_utils
+
+
+@error.context_aware
+def run_guest_s4(test, params, env):
+    """
+    Suspend guest to disk, supports both Linux & Windows OSes.
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    error.base_context("before S4")
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+
+    error.context("checking whether guest OS supports S4", logging.info)
+    session.cmd(params.get("check_s4_support_cmd"))
+    error.context()
+
+    logging.info("Waiting until all guest OS services are fully started...")
+    time.sleep(float(params.get("services_up_timeout", 30)))
+
+    # Start a background program (tcpdump for Linux, ping for Windows) as a flag.
+    # If the program has died after resume, fail the test case.
+    test_s4_cmd = params.get("test_s4_cmd")
+    session.sendline(test_s4_cmd)
+    time.sleep(5)
+
+    # Get the second session to start S4
+    session2 = vm.wait_for_login(timeout=timeout)
+
+    # Make sure the background program is running as expected
+    error.context("making sure background program is running")
+    check_s4_cmd = params.get("check_s4_cmd")
+    session2.cmd(check_s4_cmd)
+    logging.info("Launched background command in guest: %s", test_s4_cmd)
+    error.context()
+    error.base_context()
+
+    # Suspend to disk
+    logging.info("Starting suspend to disk now...")
+    session2.sendline(params.get("set_s4_cmd"))
+
+    # Make sure the VM goes down
+    error.base_context("after S4")
+    suspend_timeout = 240 + int(params.get("smp")) * 60
+    if not virt_utils.wait_for(vm.is_dead, suspend_timeout, 2, 2):
+        raise error.TestFail("VM refuses to go down. Suspend failed.")
+    logging.info("VM suspended successfully. Sleeping for a while before "
+                 "resuming it.")
+    time.sleep(10)
+
+    # Start vm, and check whether the program is still running
+    logging.info("Resuming suspended VM...")
+    vm.create()
+
+    # Log into the resumed VM
+    relogin_timeout = int(params.get("relogin_timeout", 240))
+    logging.info("Logging into resumed VM, timeout %s", relogin_timeout)
+    session2 = vm.wait_for_login(timeout=relogin_timeout)
+
+    # Check whether the test command is still alive
+    error.context("making sure background program is still running",
+                  logging.info)
+    session2.cmd(check_s4_cmd)
+    error.context()
+
+    logging.info("VM resumed successfully after suspend to disk")
+    session2.cmd_output(params.get("kill_test_s4_cmd"))
+    session.close()
+    session2.close()
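The suspend timeout above scales with the number of guest vCPUs; as a one-line model of that computation:

```python
def s4_suspend_timeout(smp, base=240, per_vcpu=60):
    """Seconds to wait for the VM to go down after suspend-to-disk, as
    computed in run_guest_s4: a base timeout plus a per-vCPU margin."""
    return base + int(smp) * per_vcpu
```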
diff --git a/client/virt/tests/guest_test.py b/client/virt/tests/guest_test.py
new file mode 100644
index 0000000..3bc7da7
--- /dev/null
+++ b/client/virt/tests/guest_test.py
@@ -0,0 +1,80 @@
+import os, logging
+from autotest_lib.client.virt import virt_utils
+
+
+def run_guest_test(test, params, env):
+    """
+    A wrapper for running customized tests in guests.
+
+    1) Log into a guest.
+    2) Run script.
+    3) Wait for script execution to complete.
+    4) Pass/fail according to exit status of script.
+
+    @param test: KVM test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    login_timeout = int(params.get("login_timeout", 360))
+    reboot = params.get("reboot", "no")
+
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    if params.get("serial_login") == "yes":
+        session = vm.wait_for_serial_login(timeout=login_timeout)
+    else:
+        session = vm.wait_for_login(timeout=login_timeout)
+
+    if reboot == "yes":
+        logging.debug("Rebooting guest before test ...")
+        session = vm.reboot(session, timeout=login_timeout)
+
+    try:
+        logging.info("Starting script...")
+
+        # Collect test parameters
+        interpreter = params.get("interpreter")
+        script = params.get("guest_script")
+        dst_rsc_path = params.get("dst_rsc_path", "script.au3")
+        script_params = params.get("script_params", "")
+        test_timeout = float(params.get("test_timeout", 600))
+
+        logging.debug("Preparing resource files...")
+        # Either download the script resources from a remote server, or
+        # copy the local script into the guest (e.g. through rss)
+        if params.get("download") == "yes":
+            download_cmd = params.get("download_cmd")
+            rsc_server = params.get("rsc_server")
+            rsc_dir = os.path.basename(rsc_server)
+            dst_rsc_dir = params.get("dst_rsc_dir")
+
+            # Change dir to dst_rsc_dir, and remove the guest script dir there
+            rm_cmd = "cd %s && (rmdir /s /q %s || del /s /q %s)" % \
+                     (dst_rsc_dir, rsc_dir, rsc_dir)
+            session.cmd(rm_cmd, timeout=test_timeout)
+            logging.debug("Clean directory succeeded.")
+
+            # then download the resource.
+            rsc_cmd = "cd %s && %s %s" % (dst_rsc_dir, download_cmd, rsc_server)
+            session.cmd(rsc_cmd, timeout=test_timeout)
+            logging.info("Download resource finished.")
+        else:
+            session.cmd_output("del %s" % dst_rsc_path, internal_timeout=0)
+            script_path = virt_utils.get_path(test.bindir, script)
+            vm.copy_files_to(script_path, dst_rsc_path, timeout=60)
+
+        cmd = "%s %s %s" % (interpreter, dst_rsc_path, script_params)
+
+        try:
+            logging.info("------------ Script output ------------")
+            session.cmd(cmd, print_func=logging.info, timeout=test_timeout)
+        finally:
+            logging.info("------------ End of script output ------------")
+
+        if reboot == "yes":
+            logging.debug("Rebooting guest after test ...")
+            session = vm.reboot(session, timeout=login_timeout)
+
+        logging.debug("guest test PASSED.")
+    finally:
+        session.close()
diff --git a/client/virt/tests/image_copy.py b/client/virt/tests/image_copy.py
new file mode 100644
index 0000000..cc921ab
--- /dev/null
+++ b/client/virt/tests/image_copy.py
@@ -0,0 +1,45 @@
+import os, logging
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import virt_utils
+
+
+def run_image_copy(test, params, env):
+    """
+    Copy guest images from an NFS server.
+    1) Mount the NFS share directory
+    2) Check the existence of source image
+    3) If it exists, copy the image from NFS
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    mount_dest_dir = params.get('dst_dir', '/mnt/images')
+    if not os.path.exists(mount_dest_dir):
+        try:
+            os.makedirs(mount_dest_dir)
+        except OSError, err:
+            logging.warning('mkdir %s error:\n%s', mount_dest_dir, err)
+
+    if not os.path.exists(mount_dest_dir):
+        raise error.TestError('Failed to create NFS share dir %s' %
+                              mount_dest_dir)
+
+    src = params.get('images_good')
+    image = '%s.%s' % (os.path.split(params['image_name'])[1],
+                       params['image_format'])
+    src_path = os.path.join(mount_dest_dir, image)
+    dst_path = '%s.%s' % (params['image_name'], params['image_format'])
+    cmd = 'cp %s %s' % (src_path, dst_path)
+
+    if not virt_utils.mount(src, mount_dest_dir, 'nfs', 'ro'):
+        raise error.TestError('Could not mount NFS share %s to %s' %
+                              (src, mount_dest_dir))
+
+    # Check the existence of source image
+    if not os.path.exists(src_path):
+        raise error.TestError('Could not find %s in NFS share' % src_path)
+
+    logging.debug('Copying image %s...', image)
+    utils.system(cmd)
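The source/destination path construction above can be isolated as follows (an illustrative helper, not part of the patch):

```python
import os

def image_paths(image_name, image_format, mount_dir):
    """Return (src_path, dst_path) for fetching a golden image from an
    NFS mount: the source keeps only the image basename under the mount
    point, the destination keeps the full configured image_name."""
    image = "%s.%s" % (os.path.split(image_name)[1], image_format)
    return os.path.join(mount_dir, image), "%s.%s" % (image_name, image_format)
```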
diff --git a/client/virt/tests/iofuzz.py b/client/virt/tests/iofuzz.py
new file mode 100644
index 0000000..d244012
--- /dev/null
+++ b/client/virt/tests/iofuzz.py
@@ -0,0 +1,136 @@
+import logging, re, random
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import aexpect
+
+
+def run_iofuzz(test, params, env):
+    """
+    KVM iofuzz test:
+    1) Log into a guest
+    2) Enumerate all IO port ranges through /proc/ioports
+    3) On each port of the range:
+        * Read it
+        * Write 0 to it
+        * Write a random value to a random port on a random order
+
+    If the guest SSH session hangs, the test detects the hang and the guest
+    is then rebooted. The test fails if the qemu process terminates while
+    the fuzzing is in progress.
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    def outb(session, port, data):
+        """
+        Write data to a given port.
+
+        @param session: SSH session established to a VM
+        @param port: Port where we'll write the data
+        @param data: Integer value that will be written to the port. This
+                value will be converted to octal before it is written.
+        """
+        logging.debug("outb(0x%x, 0x%x)", port, data)
+        outb_cmd = ("echo -e '\\%s' | dd of=/dev/port seek=%d bs=1 count=1" %
+                    (oct(data), port))
+        try:
+            session.cmd(outb_cmd)
+        except aexpect.ShellError, e:
+            logging.debug(e)
+
+
+    def inb(session, port):
+        """
+        Read from a given port.
+
+        @param session: SSH session established to a VM
+        @param port: Port where we'll read data
+        """
+        logging.debug("inb(0x%x)", port)
+        inb_cmd = "dd if=/dev/port seek=%d of=/dev/null bs=1 count=1" % port
+        try:
+            session.cmd(inb_cmd)
+        except aexpect.ShellError, e:
+            logging.debug(e)
+
+
+    def fuzz(session, inst_list):
+        """
+        Executes a series of read/write/randwrite instructions.
+
+        If the guest SSH session hangs, an attempt to relogin will be made.
+        If it fails, the guest will be reset. If during the process the VM
+        process abnormally ends, the test fails.
+
+        @param inst_list: List of instructions that will be executed.
+        @raise error.TestFail: If the VM process dies in the middle of the
+                fuzzing procedure.
+        """
+        for (op, operand) in inst_list:
+            if op == "read":
+                inb(session, operand[0])
+            elif op == "write":
+                outb(session, operand[0], operand[1])
+            else:
+                raise error.TestError("Unknown command %s" % op)
+
+            if not session.is_responsive():
+                logging.debug("Session is not responsive")
+                if vm.process.is_alive():
+                    logging.debug("VM is alive, try to re-login")
+                    try:
+                        session = vm.wait_for_login(timeout=10)
+                    except:
+                        logging.debug("Could not re-login, reboot the guest")
+                        session = vm.reboot(method="system_reset")
+                else:
+                    raise error.TestFail("VM has quit abnormally during %s" %
+                                         str((op, operand)))
+
+
+    login_timeout = float(params.get("login_timeout", 240))
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    session = vm.wait_for_login(timeout=login_timeout)
+
+    try:
+        ports = {}
+        r = random.SystemRandom()
+
+        logging.info("Enumerate guest devices through /proc/ioports")
+        ioports = session.cmd_output("cat /proc/ioports")
+        logging.debug(ioports)
+        devices = re.findall("(\w+)-(\w+)\ : (.*)", ioports)
+
+        skip_devices = params.get("skip_devices","")
+        fuzz_count = int(params.get("fuzz_count", 10))
+
+        for (beg, end, name) in devices:
+            ports[(int(beg, base=16), int(end, base=16))] = name.strip()
+
+        for (beg, end) in ports.keys():
+            name = ports[(beg, end)]
+            if name in skip_devices:
+                logging.info("Skipping device %s", name)
+                continue
+
+            logging.info("Fuzzing %s, port range 0x%x-0x%x", name, beg, end)
+            inst = []
+
+            # Read all ports of the range
+            for port in range(beg, end + 1):
+                inst.append(("read", [port]))
+
+            # Write 0 to all ports of the range
+            for port in range(beg, end + 1):
+                inst.append(("write", [port, 0]))
+
+            # Write random values to random ports of the range
+            for seq in range(fuzz_count * (end - beg + 1)):
+                inst.append(("write",
+                             [r.randint(beg, end), r.randint(0,255)]))
+
+            fuzz(session, inst)
+
+    finally:
+        session.close()
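The per-range instruction list built in the loop above (reads, zero writes, then random writes) can be factored as:

```python
import random

def build_fuzz_instructions(beg, end, fuzz_count=10, rng=None):
    """Build the instruction list run_iofuzz feeds to fuzz() for one I/O
    port range: read every port, write 0 to every port, then issue
    fuzz_count random-value writes per port in the range."""
    rng = rng or random.SystemRandom()
    inst = [("read", [port]) for port in range(beg, end + 1)]
    inst += [("write", [port, 0]) for port in range(beg, end + 1)]
    inst += [("write", [rng.randint(beg, end), rng.randint(0, 255)])
             for _ in range(fuzz_count * (end - beg + 1))]
    return inst
```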
diff --git a/client/virt/tests/ioquit.py b/client/virt/tests/ioquit.py
new file mode 100644
index 0000000..34b4fb5
--- /dev/null
+++ b/client/virt/tests/ioquit.py
@@ -0,0 +1,31 @@
+import logging, time, random
+
+
+def run_ioquit(test, params, env):
+    """
+    Emulate poweroff under an IO workload (dd so far) using kill -9.
+
+    @param test: Kvm test object
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    login_timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=login_timeout)
+    session2 = vm.wait_for_login(timeout=login_timeout)
+    try:
+        bg_cmd = params.get("background_cmd")
+        logging.info("Adding IO workload to the guest OS.")
+        session.cmd_output(bg_cmd, timeout=60)
+        check_cmd = params.get("check_cmd")
+        session2.cmd(check_cmd, timeout=60)
+
+        logging.info("Sleep for a while")
+        time.sleep(random.randrange(30, 100))
+        session2.cmd(check_cmd, timeout=60)
+        logging.info("Kill the virtual machine")
+        vm.process.close()
+    finally:
+        session.close()
+        session2.close()
diff --git a/client/virt/tests/iozone_windows.py b/client/virt/tests/iozone_windows.py
new file mode 100644
index 0000000..4046106
--- /dev/null
+++ b/client/virt/tests/iozone_windows.py
@@ -0,0 +1,40 @@
+import logging, os
+from autotest_lib.client.bin import utils
+from autotest_lib.client.tests.iozone import postprocessing
+
+
+def run_iozone_windows(test, params, env):
+    """
+    Run IOzone for windows on a windows guest:
+    1) Log into a guest
+    2) Execute the IOzone test contained in the winutils.iso
+    3) Get results
+    4) Postprocess it with the IOzone postprocessing module
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+    results_path = os.path.join(test.resultsdir,
+                                'raw_output_%s' % test.iteration)
+    analysisdir = os.path.join(test.resultsdir, 'analysis_%s' % test.iteration)
+
+    # Run IOzone and record its results
+    c = params.get("iozone_cmd")
+    t = int(params.get("iozone_timeout"))
+    logging.info("Running IOzone command on guest, timeout %ss", t)
+    results = session.cmd_output(cmd=c, timeout=t, print_func=logging.debug)
+    utils.open_write_close(results_path, results)
+
+    # Postprocess the results using the IOzone postprocessing module
+    logging.info("Iteration succeeded, postprocessing results")
+    a = postprocessing.IOzoneAnalyzer(list_files=[results_path],
+                                      output_dir=analysisdir)
+    a.analyze()
+    p = postprocessing.IOzonePlotter(results_file=results_path,
+                                     output_dir=analysisdir)
+    p.plot_all()
diff --git a/client/virt/tests/jumbo.py b/client/virt/tests/jumbo.py
new file mode 100644
index 0000000..5108227
--- /dev/null
+++ b/client/virt/tests/jumbo.py
@@ -0,0 +1,127 @@
+import logging, commands, random
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import virt_utils, virt_test_utils
+
+
+def run_jumbo(test, params, env):
+    """
+    Test the RX jumbo frame function of vnics:
+
+    1) Boot the VM.
+    2) Change the MTU of guest nics and host taps depending on the NIC model.
+    3) Add the static ARP entry for guest NIC.
+    4) Wait until the new MTU takes effect.
+    5) Verify the path MTU using ping.
+    6) Ping the guest with large frames.
+    7) Ping with incrementally increasing packet sizes.
+    8) Flood ping the guest with large frames.
+    9) Verify the path MTU.
+    10) Recover the MTU.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
+    mtu = params.get("mtu", "1500")
+    flood_time = params.get("flood_time", "300")
+    max_icmp_pkt_size = int(mtu) - 28
+
+    ifname = vm.get_ifname(0)
+    ip = vm.get_address(0)
+    if ip is None:
+        raise error.TestError("Could not get the IP address")
+
+    try:
+        # Environment preparation
+        ethname = virt_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
+
+        logging.info("Changing the MTU of guest ...")
+        guest_mtu_cmd = "ifconfig %s mtu %s" % (ethname, mtu)
+        session.cmd(guest_mtu_cmd)
+
+        logging.info("Changing the MTU of host tap ...")
+        host_mtu_cmd = "ifconfig %s mtu %s" % (ifname, mtu)
+        utils.run(host_mtu_cmd)
+
+        logging.info("Add a temporary static ARP entry ...")
+        arp_add_cmd = "arp -s %s %s -i %s" % (ip, vm.get_mac_address(0), ifname)
+        utils.run(arp_add_cmd)
+
+        def is_mtu_ok():
+            s, o = virt_test_utils.ping(ip, 1, interface=ifname,
+                                       packetsize=max_icmp_pkt_size,
+                                       hint="do", timeout=2)
+            return s == 0
+
+        def verify_mtu():
+            logging.info("Verify the path MTU")
+            s, o = virt_test_utils.ping(ip, 10, interface=ifname,
+                                       packetsize=max_icmp_pkt_size,
+                                       hint="do", timeout=15)
+            if s != 0:
+                logging.error(o)
+                raise error.TestFail("Path MTU is not as expected")
+            if virt_test_utils.get_loss_ratio(o) != 0:
+                logging.error(o)
+                raise error.TestFail("Packet loss ratio during MTU "
+                                     "verification is not zero")
+
+        def flood_ping():
+            logging.info("Flood with large frames")
+            virt_test_utils.ping(ip, interface=ifname,
+                                packetsize=max_icmp_pkt_size,
+                                flood=True, timeout=float(flood_time))
+
+        def large_frame_ping(count=100):
+            logging.info("Large frame ping")
+            s, o = virt_test_utils.ping(ip, count, interface=ifname,
+                                       packetsize=max_icmp_pkt_size,
+                                       timeout=float(count) * 2)
+            ratio = virt_test_utils.get_loss_ratio(o)
+            if ratio != 0:
+                raise error.TestFail("Loss ratio of large frame ping is %s" %
+                                     ratio)
+
+        def size_increase_ping(step=random.randrange(90, 110)):
+            logging.info("Size increase ping")
+            for size in range(0, max_icmp_pkt_size + 1, step):
+                logging.info("Ping %s with size %s", ip, size)
+                s, o = virt_test_utils.ping(ip, 1, interface=ifname,
+                                           packetsize=size,
+                                           hint="do", timeout=1)
+                if s != 0:
+                    s, o = virt_test_utils.ping(ip, 10, interface=ifname,
+                                               packetsize=size,
+                                               adaptive=True, hint="do",
+                                               timeout=20)
+
+                    fail_ratio = int(params.get("fail_ratio", 50))
+                    if virt_test_utils.get_loss_ratio(o) > fail_ratio:
+                        raise error.TestFail("Ping loss ratio is greater "
+                                             "than %s%% for size %s" %
+                                             (fail_ratio, size))
+
+        logging.info("Waiting for the MTU to be OK")
+        wait_mtu_ok = 10
+        if not virt_utils.wait_for(is_mtu_ok, wait_mtu_ok, 0, 1):
+            logging.debug(commands.getoutput("ifconfig -a"))
+            raise error.TestError("MTU is not as expected even after %s "
+                                  "seconds" % wait_mtu_ok)
+
+        # Functional Test
+        verify_mtu()
+        large_frame_ping()
+        size_increase_ping()
+
+        # Stress test
+        flood_ping()
+        verify_mtu()
+
+    finally:
+        # Clean up the environment
+        session.close()
+        logging.info("Removing the temporary ARP entry")
+        utils.run("arp -d %s -i %s" % (ip, ifname))
diff --git a/client/virt/tests/kdump.py b/client/virt/tests/kdump.py
new file mode 100644
index 0000000..90c004b
--- /dev/null
+++ b/client/virt/tests/kdump.py
@@ -0,0 +1,75 @@
+import logging
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_utils
+
+
+def run_kdump(test, params, env):
+    """
+    KVM kdump test:
+    1) Log into a guest
+    2) Check and enable the kdump service
+    3) For each vcpu, trigger a crash and check the vmcore
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = float(params.get("login_timeout", 240))
+    crash_timeout = float(params.get("crash_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+    def_kernel_param_cmd = ("grubby --update-kernel=`grubby --default-kernel`"
+                            " --args=crashkernel=128M")
+    kernel_param_cmd = params.get("kernel_param_cmd", def_kernel_param_cmd)
+    def_kdump_enable_cmd = "chkconfig kdump on && service kdump start"
+    kdump_enable_cmd = params.get("kdump_enable_cmd", def_kdump_enable_cmd)
+    def_crash_kernel_prob_cmd = "grep -q 1 /sys/kernel/kexec_crash_loaded"
+    crash_kernel_prob_cmd = params.get("crash_kernel_prob_cmd",
+                                       def_crash_kernel_prob_cmd)
+
+    def crash_test(vcpu):
+        """
+        Trigger a crash dump through sysrq-trigger
+
+        @param vcpu: vcpu which is used to trigger a crash
+        """
+        session = vm.wait_for_login(timeout=timeout)
+        session.cmd_output("rm -rf /var/crash/*")
+
+        logging.info("Triggering crash on vcpu %d ...", vcpu)
+        crash_cmd = "taskset -c %d echo c > /proc/sysrq-trigger" % vcpu
+        session.sendline(crash_cmd)
+
+        if not virt_utils.wait_for(lambda: not session.is_responsive(), 240, 0,
+                                  1):
+            raise error.TestFail("Could not trigger crash on vcpu %d" % vcpu)
+
+        logging.info("Waiting for kernel crash dump to complete")
+        session = vm.wait_for_login(timeout=crash_timeout)
+
+        logging.info("Probing vmcore file...")
+        session.cmd("ls -R /var/crash | grep vmcore")
+        logging.info("Found vmcore.")
+
+        session.cmd_output("rm -rf /var/crash/*")
+
+    try:
+        logging.info("Checking the existence of crash kernel...")
+        try:
+            session.cmd(crash_kernel_prob_cmd)
+        except Exception:
+            logging.info("Crash kernel is not loaded. Trying to load it")
+            session.cmd(kernel_param_cmd)
+            session = vm.reboot(session, timeout=timeout)
+
+        logging.info("Enabling kdump service...")
+        # The initrd may be rebuilt here, so we need to wait a little longer
+        session.cmd(kdump_enable_cmd, timeout=120)
+
+        nvcpu = int(params.get("smp", 1))
+        for i in range(nvcpu):
+            crash_test(i)
+
+    finally:
+        session.close()
diff --git a/client/virt/tests/linux_s3.py b/client/virt/tests/linux_s3.py
new file mode 100644
index 0000000..5a04fca
--- /dev/null
+++ b/client/virt/tests/linux_s3.py
@@ -0,0 +1,41 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+
+
+def run_linux_s3(test, params, env):
+    """
+    Suspend a guest Linux OS to memory.
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+
+    logging.info("Checking that VM supports S3")
+    session.cmd("grep -q mem /sys/power/state")
+
+    logging.info("Waiting for a while for X to start")
+    time.sleep(10)
+
+    src_tty = session.cmd_output("fgconsole").strip()
+    logging.info("Current virtual terminal is %s", src_tty)
+    if src_tty not in map(str, range(1, 10)):
+        raise error.TestFail("Got a strange current vt (%s)" % src_tty)
+
+    dst_tty = "1"
+    if src_tty == "1":
+        dst_tty = "2"
+
+    logging.info("Putting VM into S3")
+    command = "chvt %s && echo mem > /sys/power/state && chvt %s" % (dst_tty,
+                                                                     src_tty)
+    suspend_timeout = 120 + int(params.get("smp", 1)) * 60
+    session.cmd(command, timeout=suspend_timeout)
+
+    logging.info("VM resumed after S3")
+
+    session.close()
diff --git a/client/virt/tests/mac_change.py b/client/virt/tests/mac_change.py
new file mode 100644
index 0000000..d2eaf01
--- /dev/null
+++ b/client/virt/tests/mac_change.py
@@ -0,0 +1,60 @@
+import logging
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_utils, virt_test_utils
+
+
+def run_mac_change(test, params, env):
+    """
+    Change MAC address of guest.
+
+    1) Get a new MAC from the pool and the old MAC address of the guest.
+    2) Set the new MAC in the guest and obtain a new IP.
+    3) Re-log into the guest with the new MAC.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session_serial = vm.wait_for_serial_login(timeout=timeout)
+    # This session will be used to assess whether the IP change worked
+    session = vm.wait_for_login(timeout=timeout)
+    old_mac = vm.get_mac_address(0)
+    while True:
+        vm.free_mac_address(0)
+        new_mac = virt_utils.generate_mac_address(vm.instance, 0)
+        if old_mac != new_mac:
+            break
+    logging.info("The initial MAC address is %s", old_mac)
+    interface = virt_test_utils.get_linux_ifname(session_serial, old_mac)
+    # Start changing the MAC address
+    logging.info("Changing MAC address to %s", new_mac)
+    change_cmd = ("ifconfig %s down && ifconfig %s hw ether %s && "
+                  "ifconfig %s up" % (interface, interface, new_mac, interface))
+    session_serial.cmd(change_cmd)
+
+    # Verify that the MAC address was changed to the new one
+    logging.info("Verifying the new MAC address")
+    session_serial.cmd("ifconfig | grep -i %s" % new_mac)
+
+    # Restart dhclient to obtain an IP for the new MAC address
+    logging.info("Restarting dhclient to obtain a new IP")
+    dhclient_cmd = "dhclient -r && dhclient %s" % interface
+    session_serial.sendline(dhclient_cmd)
+
+    # Wait for the old session (established over the old IP) to die
+    if not virt_utils.wait_for(lambda: not session.is_responsive(),
+                               120, 20, 3):
+        # Just warn if the session stays responsive, because there is a
+        # small chance the IP does not change.
+        logging.warn("The session is still responsive, settings may fail.")
+    session.close()
+
+    # Re-log into guest and check if session is responsive
+    logging.info("Re-log into the guest")
+    session = vm.wait_for_login(timeout=timeout)
+    if not session.is_responsive():
+        raise error.TestFail("The new session is not responsive.")
+
+    session.close()
diff --git a/client/virt/tests/multicast.py b/client/virt/tests/multicast.py
new file mode 100644
index 0000000..13e3f0d
--- /dev/null
+++ b/client/virt/tests/multicast.py
@@ -0,0 +1,90 @@
+import logging, os, re
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import virt_test_utils, aexpect
+
+
+def run_multicast(test, params, env):
+    """
+    Test the multicast function of the NIC (rtl8139/e1000/virtio).
+
+    1) Create a VM.
+    2) Join the guest to multicast groups.
+    3) Ping the multicast addresses from the host.
+    4) Flood ping test with different packet sizes.
+    5) Final ping test, checking for packet loss.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
+
+    def run_guest(cmd):
+        try:
+            session.cmd(cmd)
+        except aexpect.ShellError, e:
+            logging.warn(e)
+
+    def run_host_guest(cmd):
+        run_guest(cmd)
+        utils.system(cmd, ignore_status=True)
+
+    # flush the firewall rules
+    cmd_flush = "iptables -F"
+    cmd_selinux = ("if [ -e /selinux/enforce ]; then setenforce 0; "
+                   "else echo 'no /selinux/enforce file present'; fi")
+    run_host_guest(cmd_flush)
+    run_host_guest(cmd_selinux)
+    # make sure guest replies to broadcasts
+    cmd_broadcast = "echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts"
+    cmd_broadcast_2 = "echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all"
+    run_guest(cmd_broadcast)
+    run_guest(cmd_broadcast_2)
+
+    # base multicast address
+    mcast = params.get("mcast", "225.0.0.1")
+    # count of multicast addresses, less than 20
+    mgroup_count = int(params.get("mgroup_count", 5))
+    flood_minutes = float(params.get("flood_minutes", 10))
+    ifname = vm.get_ifname()
+    prefix = re.findall(r"\d+\.\d+\.\d+", mcast)[0]
+    suffix = int(re.findall(r"\d+", mcast)[-1])
+    # copy python script to guest for joining guest to multicast groups
+    mcast_path = os.path.join(test.bindir, "scripts/multicast_guest.py")
+    vm.copy_files_to(mcast_path, "/tmp")
+    output = session.cmd_output("python /tmp/multicast_guest.py %d %s %d" %
+                                (mgroup_count, prefix, suffix))
+
+    # If joining the multicast groups succeeds, the guest script pauses
+    # itself and prints its PID.
+    try:
+        pid = re.findall("join_mcast_pid:(\d+)", output)[0]
+    except IndexError:
+        raise error.TestFail("Can't join multicast groups, output: %s" %
+                             output)
+
+    try:
+        for i in range(mgroup_count):
+            new_suffix = suffix + i
+            mcast = "%s.%d" % (prefix, new_suffix)
+
+            logging.info("Initial ping test, mcast: %s", mcast)
+            s, o = virt_test_utils.ping(mcast, 10, interface=ifname, timeout=20)
+            if s != 0:
+                raise error.TestFail("Ping returned non-zero status, "
+                                     "output: %s" % o)
+
+            logging.info("Flood ping test, mcast: %s", mcast)
+            virt_test_utils.ping(mcast, None, interface=ifname, flood=True,
+                                output_func=None, timeout=flood_minutes*60)
+
+            logging.info("Final ping test, mcast: %s", mcast)
+            s, o = virt_test_utils.ping(mcast, 10, interface=ifname, timeout=20)
+            if s != 0:
+                raise error.TestFail("Ping failed, status: %s, output: %s" %
+                                     (s, o))
+
+    finally:
+        logging.debug(session.cmd_output("ipmaddr show"))
+        session.cmd_output("kill -s SIGCONT %s" % pid)
+        session.close()
diff --git a/client/virt/tests/netperf.py b/client/virt/tests/netperf.py
new file mode 100644
index 0000000..72d9cde
--- /dev/null
+++ b/client/virt/tests/netperf.py
@@ -0,0 +1,90 @@
+import logging, os, signal
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import aexpect
+
+def run_netperf(test, params, env):
+    """
+    Network stress test with netperf.
+
+    1) Boot up a VM.
+    2) Launch netserver on guest.
+    3) Execute netperf client on host with different protocols.
+    4) Output the test result.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    login_timeout = int(params.get("login_timeout", 360))
+    session_serial = vm.wait_for_serial_login(timeout=login_timeout)
+
+    netperf_dir = os.path.join(os.environ['AUTODIR'], "tests/netperf2")
+    setup_cmd = params.get("setup_cmd")
+    guest_ip = vm.get_address()
+    result_file = os.path.join(test.resultsdir, "output_%s" % test.iteration)
+
+    # Flush firewall rules on the host; the guest firewall is flushed below
+    firewall_flush = "iptables -F"
+    try:
+        utils.run(firewall_flush)
+    except error.CmdError:
+        logging.warning("Could not flush firewall rules on host")
+
+    for i in params.get("netperf_files").split():
+        vm.copy_files_to(os.path.join(netperf_dir, i), "/tmp")
+
+    try:
+        session_serial.cmd(firewall_flush)
+    except aexpect.ShellError:
+        logging.warning("Could not flush firewall rules on guest")
+
+    session_serial.cmd(setup_cmd % "/tmp", timeout=200)
+    session_serial.cmd(params.get("netserver_cmd") % "/tmp")
+
+    tcpdump = env.get("tcpdump")
+    pid = None
+    if tcpdump:
+        # Stop the background tcpdump process
+        try:
+            pid = int(utils.system_output("pidof tcpdump"))
+            logging.debug("Stopping the background tcpdump")
+            os.kill(pid, signal.SIGSTOP)
+        except (error.CmdError, ValueError, OSError):
+            pass
+
+    try:
+        logging.info("Setting up and running the netperf client on the host")
+        utils.run(setup_cmd % netperf_dir)
+        list_fail = []
+        result = open(result_file, "w")
+        result.write("Netperf test results\n")
+
+        for i in params.get("protocols").split():
+            packet_size = params.get("packet_size", "1500")
+            for size in packet_size.split():
+                cmd = params.get("netperf_cmd") % (netperf_dir, i,
+                                                   guest_ip, size)
+                logging.info("Netperf: protocol %s", i)
+                try:
+                    netperf_output = utils.system_output(cmd,
+                                                         retain_output=True)
+                    result.write("%s\n" % netperf_output)
+                except error.CmdError:
+                    logging.error("Test of protocol %s failed", i)
+                    list_fail.append(i)
+
+        result.close()
+
+        if list_fail:
+            raise error.TestFail("Some netperf tests failed: %s" %
+                                 ", ".join(list_fail))
+
+    finally:
+        session_serial.cmd_output("killall netserver")
+        if tcpdump and pid:
+            logging.debug("Resuming the background tcpdump (pid %s)", pid)
+            os.kill(pid, signal.SIGCONT)
diff --git a/client/virt/tests/nic_promisc.py b/client/virt/tests/nic_promisc.py
new file mode 100644
index 0000000..0ff07b8
--- /dev/null
+++ b/client/virt/tests/nic_promisc.py
@@ -0,0 +1,39 @@
+import logging, threading
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.tests.kvm.tests import file_transfer
+from autotest_lib.client.virt import virt_test_utils, virt_utils
+
+
+def run_nic_promisc(test, params, env):
+    """
+    Test nic driver in promisc mode:
+
+    1) Boot up a VM.
+    2) Repeatedly enable/disable promiscuous mode in guest.
+    3) Transfer files between host and guest in both directions at the
+       same time.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session_serial = vm.wait_for_serial_login(timeout=timeout)
+
+    ethname = virt_test_utils.get_linux_ifname(session_serial,
+                                              vm.get_mac_address(0))
+
+    try:
+        transfer_thread = virt_utils.Thread(file_transfer.run_file_transfer,
+                                           (test, params, env))
+        transfer_thread.start()
+        while transfer_thread.isAlive():
+            session_serial.cmd("ip link set %s promisc on" % ethname)
+            session_serial.cmd("ip link set %s promisc off" % ethname)
+    except:
+        transfer_thread.join(suppress_exception=True)
+        raise
+    else:
+        transfer_thread.join()
diff --git a/client/virt/tests/nicdriver_unload.py b/client/virt/tests/nicdriver_unload.py
new file mode 100644
index 0000000..6d3d4da
--- /dev/null
+++ b/client/virt/tests/nicdriver_unload.py
@@ -0,0 +1,56 @@
+import logging, threading, os, time
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.tests.kvm.tests import file_transfer
+from autotest_lib.client.virt import virt_test_utils, virt_utils
+
+
+def run_nicdriver_unload(test, params, env):
+    """
+    Test nic driver.
+
+    1) Boot a VM.
+    2) Get the NIC driver name.
+    3) Repeatedly unload/load NIC driver.
+    4) Multi-session TCP transfer on test interface.
+    5) Check whether the test interface still works.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    timeout = int(params.get("login_timeout", 360))
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    session_serial = vm.wait_for_serial_login(timeout=timeout)
+
+    ethname = virt_test_utils.get_linux_ifname(session_serial,
+                                               vm.get_mac_address(0))
+    sys_path = "/sys/class/net/%s/device/driver" % (ethname)
+    driver = os.path.basename(session_serial.cmd("readlink -e %s" %
+                                                 sys_path).strip())
+    logging.info("driver is %s", driver)
+
+    try:
+        threads = []
+        for t in range(int(params.get("sessions_num", "10"))):
+            thread = virt_utils.Thread(file_transfer.run_file_transfer,
+                                      (test, params, env))
+            thread.start()
+            threads.append(thread)
+
+        time.sleep(10)
+        while threads[0].isAlive():
+            session_serial.cmd("sleep 10")
+            session_serial.cmd("ifconfig %s down" % ethname)
+            session_serial.cmd("modprobe -r %s" % driver)
+            session_serial.cmd("modprobe %s" % driver)
+            session_serial.cmd("ifconfig %s up" % ethname)
+    except:
+        for thread in threads:
+            thread.join(suppress_exception=True)
+        raise
+    else:
+        for thread in threads:
+            thread.join()
+
diff --git a/client/virt/tests/ping.py b/client/virt/tests/ping.py
new file mode 100644
index 0000000..08791fb
--- /dev/null
+++ b/client/virt/tests/ping.py
@@ -0,0 +1,73 @@
+import logging
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_test_utils
+
+
+def run_ping(test, params, env):
+    """
+    Ping the guest with different size of packets.
+
+    Packet Loss Test:
+    1) Ping the guest with different size/interval of packets.
+
+    Stress Test:
+    1) Flood ping the guest.
+    2) Check if the network is still usable.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
+
+    counts = params.get("ping_counts", 100)
+    flood_minutes = float(params.get("flood_minutes", 10))
+    nics = params.get("nics").split()
+    strict_check = params.get("strict_check", "no") == "yes"
+
+    packet_size = [0, 1, 4, 48, 512, 1440, 1500, 1505, 4054, 4055, 4096, 4192,
+                   8878, 9000, 32767, 65507]
+
+    try:
+        for i, nic in enumerate(nics):
+            ip = vm.get_address(i)
+            if not ip:
+                logging.error("Could not get the IP of NIC index %d: %s",
+                              i, nic)
+                continue
+
+            for size in packet_size:
+                logging.info("Ping with packet size %s", size)
+                status, output = virt_test_utils.ping(ip, 10,
+                                                     packetsize=size,
+                                                     timeout=20)
+                if strict_check:
+                    ratio = virt_test_utils.get_loss_ratio(output)
+                    if ratio != 0:
+                        raise error.TestFail("Loss ratio is %s for packet size"
+                                             " %s" % (ratio, size))
+                else:
+                    if status != 0:
+                        raise error.TestFail("Ping failed, status: %s,"
+                                             " output: %s" % (status, output))
+
+            logging.info("Flood ping test")
+            virt_test_utils.ping(ip, None, flood=True, output_func=None,
+                                timeout=flood_minutes * 60)
+
+            logging.info("Final ping test")
+            status, output = virt_test_utils.ping(ip, counts,
+                                                 timeout=float(counts) * 1.5)
+            if strict_check:
+                ratio = virt_test_utils.get_loss_ratio(output)
+                if ratio != 0:
+                    raise error.TestFail("Loss ratio of the final ping "
+                                         "is %s" % ratio)
+            else:
+                if status != 0:
+                    raise error.TestFail("Ping failed, status: %s,"
+                                         " output: %s" % (status, output))
+    finally:
+        session.close()
diff --git a/client/virt/tests/pxe.py b/client/virt/tests/pxe.py
new file mode 100644
index 0000000..325e353
--- /dev/null
+++ b/client/virt/tests/pxe.py
@@ -0,0 +1,29 @@
+import logging
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import aexpect
+
+def run_pxe(test, params, env):
+    """
+    PXE test:
+
+    1) Snoop the tftp packet in the tap device.
+    2) Wait for some seconds.
+    3) Check whether we could capture TFTP packets.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("pxe_timeout", 60))
+
+    logging.info("Try to boot from PXE")
+    output = aexpect.run_fg("tcpdump -nli %s" % vm.get_ifname(),
+                                   logging.debug, "(pxe capture) ", timeout)[1]
+
+    logging.info("Analyzing the tcpdump result...")
+    if "tftp" not in output:
+        raise error.TestFail("Couldn't find any TFTP packets after %s seconds" %
+                             timeout)
+    logging.info("Found TFTP packet")
diff --git a/client/virt/tests/shutdown.py b/client/virt/tests/shutdown.py
new file mode 100644
index 0000000..ac41a4a
--- /dev/null
+++ b/client/virt/tests/shutdown.py
@@ -0,0 +1,43 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_utils
+
+
+def run_shutdown(test, params, env):
+    """
+    KVM shutdown test:
+    1) Log into a guest
+    2) Send a shutdown command to the guest, or issue a system_powerdown
+       monitor command (depending on the value of shutdown_method)
+    3) Wait until the guest is down
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+
+    try:
+        if params.get("shutdown_method") == "shell":
+            # Send a shutdown command to the guest's shell
+            session.sendline(vm.get_params().get("shutdown_command"))
+            logging.info("Shutdown command sent; waiting for guest to go "
+                         "down...")
+        elif params.get("shutdown_method") == "system_powerdown":
+            # Sleep for a while -- give the guest a chance to finish booting
+            time.sleep(float(params.get("sleep_before_powerdown", 10)))
+            # Send a system_powerdown monitor command
+            vm.monitor.cmd("system_powerdown")
+            logging.info("system_powerdown monitor command sent; waiting for "
+                         "guest to go down...")
+
+        if not virt_utils.wait_for(vm.is_dead, 240, 0, 1):
+            raise error.TestFail("Guest refuses to go down")
+
+        logging.info("Guest is down")
+
+    finally:
+        session.close()
diff --git a/client/virt/tests/stress_boot.py b/client/virt/tests/stress_boot.py
new file mode 100644
index 0000000..e3ac14d
--- /dev/null
+++ b/client/virt/tests/stress_boot.py
@@ -0,0 +1,53 @@
+import logging
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_env_process
+
+
+@error.context_aware
+def run_stress_boot(test, params, env):
+    """
+    Boots VMs until one of them becomes unresponsive, and records the maximum
+    number of VMs successfully started:
+    1) boot the first VM
+    2) boot a second VM cloned from the first, and check that it boots up
+       and that all booted VMs respond to shell commands
+    3) go on until a VM can no longer be created or memory can no longer
+       be allocated for it
+
+    @param test:   kvm test object
+    @param params: Dictionary with the test parameters
+    @param env:    Dictionary with test environment.
+    """
+    error.base_context("waiting for the first guest to be up", logging.info)
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    login_timeout = float(params.get("login_timeout", 240))
+    session = vm.wait_for_login(timeout=login_timeout)
+
+    num = 2
+    sessions = [session]
+
+    # Boot the VMs
+    try:
+        while num <= int(params.get("max_vms")):
+            # Clone vm according to the first one
+            error.base_context("booting guest #%d" % num, logging.info)
+            vm_name = "vm%d" % num
+            vm_params = vm.params.copy()
+            curr_vm = vm.clone(vm_name, vm_params)
+            env.register_vm(vm_name, curr_vm)
+            virt_env_process.preprocess_vm(test, vm_params, env, vm_name)
+            params["vms"] += " " + vm_name
+
+            sessions.append(curr_vm.wait_for_login(timeout=login_timeout))
+            logging.info("Guest #%d booted up successfully", num)
+
+            # Check whether all previous shell sessions are responsive
+            for i, se in enumerate(sessions):
+                error.context("checking responsiveness of guest #%d" % (i + 1),
+                              logging.debug)
+                se.cmd(params.get("alive_test_cmd"))
+            num += 1
+    finally:
+        for se in sessions:
+            se.close()
+        logging.info("Total number of guests booted: %d", num - 1)
diff --git a/client/virt/tests/vlan.py b/client/virt/tests/vlan.py
new file mode 100644
index 0000000..9fc1f64
--- /dev/null
+++ b/client/virt/tests/vlan.py
@@ -0,0 +1,175 @@
+import logging, time, re
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_utils, virt_test_utils, aexpect
+
+
+def run_vlan(test, params, env):
+    """
+    Test 802.1Q vlan of NIC, config it by vconfig command.
+
+    1) Create two VMs.
+    2) Set up guests in 10 different vlans via vconfig, using hard-coded
+       IP addresses.
+    3) Test by pinging between the same and different vlans of the two VMs.
+    4) Test by TCP data transfer and flood ping between the same vlan of
+       the two VMs.
+    5) Test the maximum number of vlans that can be plumbed/unplumbed.
+    6) Recover the vlan configuration.
+
+    @param test: KVM test object.
+    @param params: Dictionary with the test parameters.
+    @param env: Dictionary with test environment.
+    """
+    vm = []
+    session = []
+    ifname = []
+    vm_ip = []
+    digest_origin = []
+    vlan_ip = ['', '']
+    ip_unit = ['1', '2']
+    subnet = params.get("subnet")
+    vlan_num = int(params.get("vlan_num"))
+    maximal = int(params.get("maximal"))
+    file_size = params.get("file_size")
+
+    vm.append(env.get_vm(params["main_vm"]))
+    vm.append(env.get_vm("vm2"))
+    for vm_ in vm:
+        vm_.verify_alive()
+
+    def add_vlan(session, v_id, iface="eth0"):
+        session.cmd("vconfig add %s %s" % (iface, v_id))
+
+    def set_ip_vlan(session, v_id, ip, iface="eth0"):
+        iface = "%s.%s" % (iface, v_id)
+        session.cmd("ifconfig %s %s" % (iface, ip))
+
+    def set_arp_ignore(session, iface="eth0"):
+        ignore_cmd = "echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore"
+        session.cmd(ignore_cmd)
+
+    def rem_vlan(session, v_id, iface="eth0"):
+        rem_vlan_cmd = "if [[ -e /proc/net/vlan/%s ]];then vconfig rem %s;fi"
+        iface = "%s.%s" % (iface, v_id)
+        return session.cmd_status(rem_vlan_cmd % (iface, iface))
+
+    def nc_transfer(src, dst):
+        nc_port = virt_utils.find_free_port(1025, 5334, vm_ip[dst])
+        listen_cmd = params.get("listen_cmd")
+        send_cmd = params.get("send_cmd")
+
+        # Listen on dst
+        listen_cmd = listen_cmd % (nc_port, "receive")
+        session[dst].sendline(listen_cmd)
+        time.sleep(2)
+        # Send the file from src to dst
+        send_cmd = send_cmd % (vlan_ip[dst], str(nc_port), "file")
+        session[src].cmd(send_cmd, timeout=60)
+        try:
+            session[dst].read_up_to_prompt(timeout=60)
+        except aexpect.ExpectError:
+            raise error.TestFail("Failed to receive file from vm%s to vm%s" %
+                                 (src + 1, dst + 1))
+        # Check the MD5 digest of the received file on dst
+        output = session[dst].cmd_output("md5sum receive").strip()
+        digest_receive = re.findall(r'(\w+)', output)[0]
+        if digest_receive == digest_origin[src]:
+            logging.info("File received successfully in vm %s", vlan_ip[dst])
+        else:
+            logging.info("digest_origin is  %s", digest_origin[src])
+            logging.info("digest_receive is %s", digest_receive)
+            raise error.TestFail("Transferred file differs from the original")
+        session[dst].cmd_output("rm -f receive")
+
+    for i in range(2):
+        session.append(vm[i].wait_for_login(
+            timeout=int(params.get("login_timeout", 360))))
+        if not session[i]:
+            raise error.TestError("Could not log into guest (vm%d)" % (i + 1))
+        logging.info("Logged in")
+
+        ifname.append(virt_test_utils.get_linux_ifname(session[i],
+                      vm[i].get_mac_address()))
+        # Get guest IP
+        vm_ip.append(vm[i].get_address())
+
+        # Produce a file of the given size in the VM
+        dd_cmd = "dd if=/dev/urandom of=file bs=1024k count=%s"
+        session[i].cmd(dd_cmd % file_size)
+        # Record the MD5 digest of the file
+        output = session[i].cmd("md5sum file", timeout=60)
+        digest_origin.append(re.findall(r'(\w+)', output)[0])
+
+        # Stop the firewall in the VM
+        session[i].cmd_output("/etc/init.d/iptables stop")
+
+        # Load the 8021q module needed by vconfig
+        session[i].cmd("modprobe 8021q")
+
+    try:
+        for i in range(2):
+            for vlan_i in range(1, vlan_num+1):
+                add_vlan(session[i], vlan_i, ifname[i])
+                set_ip_vlan(session[i], vlan_i, "%s.%s.%s" %
+                            (subnet, vlan_i, ip_unit[i]), ifname[i])
+            set_arp_ignore(session[i], ifname[i])
+
+        for vlan in range(1, vlan_num+1):
+            logging.info("Test for vlan %s", vlan)
+
+            logging.info("Ping between vlans")
+            interface = ifname[0] + '.' + str(vlan)
+            for vlan2 in range(1, vlan_num+1):
+                for i in range(2):
+                    interface = ifname[i] + '.' + str(vlan)
+                    dest = subnet + '.' + str(vlan2) + '.' + ip_unit[(i+1) % 2]
+                    s, o = virt_test_utils.ping(dest, count=2,
+                                              interface=interface,
+                                              session=session[i], timeout=30)
+                    if ((vlan == vlan2) ^ (s == 0)):
+                        raise error.TestFail("%s ping %s: unexpected result" %
+                                             (interface, dest))
+
+            vlan_ip[0] = subnet + '.' + str(vlan) + '.' + ip_unit[0]
+            vlan_ip[1] = subnet + '.' + str(vlan) + '.' + ip_unit[1]
+
+            logging.info("Flood ping")
+            def flood_ping(src, dst):
+                # We must use a dedicated session because aexpect has no
+                # way to interrupt a process running in the guest other
+                # than closing the session.
+                session_flood = vm[src].wait_for_login(timeout=60)
+                virt_test_utils.ping(vlan_ip[dst], flood=True,
+                                   interface=ifname[src],
+                                   session=session_flood, timeout=10)
+                session_flood.close()
+
+            flood_ping(0, 1)
+            flood_ping(1, 0)
+
+            logging.info("Transferring data through nc")
+            nc_transfer(0, 1)
+            nc_transfer(1, 0)
+
+    finally:
+        for vlan in range(1, vlan_num+1):
+            rem_vlan(session[0], vlan, ifname[0])
+            rem_vlan(session[1], vlan, ifname[1])
+            logging.info("Removed vlan %s", vlan)
+
+    # Plumb/unplumb maximal number of vlan interfaces
+    i = 1
+    s = 0
+    try:
+        logging.info("Testing plumbing of vlan interfaces")
+        for i in range(1, maximal + 1):
+            add_vlan(session[0], i, ifname[0])
+    finally:
+        for j in range(1, i + 1):
+            s = s or rem_vlan(session[0], j, ifname[0])
+        if s == 0:
+            logging.info("Maximal interface plumb test succeeded")
+        else:
+            logging.error("Maximal interface plumb test failed")
+
+    session[0].close()
+    session[1].close()
diff --git a/client/virt/tests/whql_client_install.py b/client/virt/tests/whql_client_install.py
new file mode 100644
index 0000000..2d72a5e
--- /dev/null
+++ b/client/virt/tests/whql_client_install.py
@@ -0,0 +1,136 @@
+import logging, time, os
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_utils, virt_test_utils, rss_client
+
+
+def run_whql_client_install(test, params, env):
+    """
+    WHQL DTM client installation:
+    1) Log into the guest (the client machine) and into a DTM server machine
+    2) Stop the DTM client service (wttsvc) on the client machine
+    3) Delete the client machine from the server's data store
+    4) Rename the client machine (give it a randomly generated name)
+    5) Move the client machine into the server's workgroup
+    6) Reboot the client machine
+    7) Install the DTM client software
+    8) Setup auto logon for the user created by the installation
+       (normally DTMLLUAdminUser)
+    9) Reboot again
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
+
+    # Collect test params
+    server_address = params.get("server_address")
+    server_shell_port = int(params.get("server_shell_port"))
+    server_file_transfer_port = int(params.get("server_file_transfer_port"))
+    server_studio_path = params.get("server_studio_path", "%programfiles%\\"
+                                    "Microsoft Driver Test Manager\\Studio")
+    server_username = params.get("server_username")
+    server_password = params.get("server_password")
+    client_username = params.get("client_username")
+    client_password = params.get("client_password")
+    dsso_delete_machine_binary = params.get("dsso_delete_machine_binary",
+                                            "deps/whql_delete_machine_15.exe")
+    dsso_delete_machine_binary = virt_utils.get_path(test.bindir,
+                                                    dsso_delete_machine_binary)
+    install_timeout = float(params.get("install_timeout", 600))
+    install_cmd = params.get("install_cmd")
+    wtt_services = params.get("wtt_services")
+
+    # Stop WTT service(s) on client
+    for svc in wtt_services.split():
+        virt_test_utils.stop_windows_service(session, svc)
+
+    # Copy dsso_delete_machine_binary to server
+    rss_client.upload(server_address, server_file_transfer_port,
+                             dsso_delete_machine_binary, server_studio_path,
+                             timeout=60)
+
+    # Open a shell session with server
+    server_session = virt_utils.remote_login("nc", server_address,
+                                            server_shell_port, "", "",
+                                            session.prompt, session.linesep)
+    server_session.set_status_test_command(session.status_test_command)
+
+    # Get server and client information
+    cmd = "echo %computername%"
+    server_name = server_session.cmd_output(cmd).strip()
+    client_name = session.cmd_output(cmd).strip()
+    cmd = "wmic computersystem get domain"
+    server_workgroup = server_session.cmd_output(cmd).strip()
+    server_workgroup = server_workgroup.splitlines()[-1]
+    regkey = r"HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
+    cmd = "reg query %s /v Domain" % regkey
+    o = server_session.cmd_output(cmd).strip().splitlines()[-1]
+    try:
+        server_dns_suffix = o.split(None, 2)[2]
+    except IndexError:
+        server_dns_suffix = ""
+
+    # Delete the client machine from the server's data store (if it's there)
+    server_session.cmd("cd %s" % server_studio_path)
+    cmd = "%s %s %s" % (os.path.basename(dsso_delete_machine_binary),
+                        server_name, client_name)
+    server_session.cmd(cmd, print_func=logging.info)
+    server_session.close()
+
+    # Rename the client machine
+    client_name = "autotest_%s" % virt_utils.generate_random_string(4)
+    logging.info("Renaming client machine to '%s'", client_name)
+    cmd = ('wmic computersystem where name="%%computername%%" rename name="%s"'
+           % client_name)
+    session.cmd(cmd, timeout=600)
+
+    # Join the server's workgroup
+    logging.info("Joining workgroup '%s'", server_workgroup)
+    cmd = ('wmic computersystem where name="%%computername%%" call '
+           'joindomainorworkgroup name="%s"' % server_workgroup)
+    session.cmd(cmd, timeout=600)
+
+    # Set the client machine's DNS suffix
+    logging.info("Setting DNS suffix to '%s'", server_dns_suffix)
+    cmd = 'reg add %s /v Domain /d "%s" /f' % (regkey, server_dns_suffix)
+    session.cmd(cmd, timeout=300)
+
+    # Reboot
+    session = vm.reboot(session)
+
+    # Access shared resources on the server machine
+    logging.info("Attempting to access remote share on server")
+    cmd = r"net use \\%s /user:%s %s" % (server_name, server_username,
+                                         server_password)
+    end_time = time.time() + 120
+    while time.time() < end_time:
+        try:
+            session.cmd(cmd)
+            break
+        except:
+            pass
+        time.sleep(5)
+    else:
+        raise error.TestError("Could not access server share from client "
+                              "machine")
+
+    # Install
+    logging.info("Installing DTM client (timeout=%ds)", install_timeout)
+    install_cmd = r"cmd /c \\%s\%s" % (server_name, install_cmd.lstrip("\\"))
+    session.cmd(install_cmd, timeout=install_timeout)
+
+    # Setup auto logon
+    logging.info("Setting up auto logon for user '%s'", client_username)
+    cmd = ('reg add '
+           '"HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\winlogon" '
+           '/v "%s" /d "%s" /t REG_SZ /f')
+    session.cmd(cmd % ("AutoAdminLogon", "1"))
+    session.cmd(cmd % ("DefaultUserName", client_username))
+    session.cmd(cmd % ("DefaultPassword", client_password))
+
+    # Reboot one more time
+    session = vm.reboot(session)
+    session.close()
diff --git a/client/virt/tests/whql_submission.py b/client/virt/tests/whql_submission.py
new file mode 100644
index 0000000..bbeb836
--- /dev/null
+++ b/client/virt/tests/whql_submission.py
@@ -0,0 +1,275 @@
+import logging, os, re
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.virt import virt_utils, rss_client, aexpect
+
+
+def run_whql_submission(test, params, env):
+    """
+    WHQL submission test:
+    1) Log into the client machines and into a DTM server machine
+    2) Copy the automation program binary (dsso_test_binary) to the server machine
+    3) Run the automation program
+    4) Pass the program all relevant parameters (e.g. device_data)
+    5) Wait for the program to terminate
+    6) Parse and report job results
+    (logs and HTML reports are placed in test.debugdir)
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    # Log into all client VMs
+    login_timeout = int(params.get("login_timeout", 360))
+    vms = []
+    sessions = []
+    for vm_name in params.objects("vms"):
+        vms.append(env.get_vm(vm_name))
+        vms[-1].verify_alive()
+        sessions.append(vms[-1].wait_for_login(timeout=login_timeout))
+
+    # Make sure all NICs of all client VMs are up
+    for vm in vms:
+        nics = vm.params.objects("nics")
+        for nic_index in range(len(nics)):
+            s = vm.wait_for_login(nic_index, 600)
+            s.close()
+
+    # Collect parameters
+    server_address = params.get("server_address")
+    server_shell_port = int(params.get("server_shell_port"))
+    server_file_transfer_port = int(params.get("server_file_transfer_port"))
+    server_studio_path = params.get("server_studio_path", "%programfiles%\\"
+                                    "Microsoft Driver Test Manager\\Studio")
+    dsso_test_binary = params.get("dsso_test_binary",
+                                  "deps/whql_submission_15.exe")
+    dsso_test_binary = virt_utils.get_path(test.bindir, dsso_test_binary)
+    dsso_delete_machine_binary = params.get("dsso_delete_machine_binary",
+                                            "deps/whql_delete_machine_15.exe")
+    dsso_delete_machine_binary = virt_utils.get_path(test.bindir,
+                                                    dsso_delete_machine_binary)
+    test_timeout = float(params.get("test_timeout", 600))
+
+    # Copy dsso binaries to the server
+    for filename in dsso_test_binary, dsso_delete_machine_binary:
+        rss_client.upload(server_address, server_file_transfer_port,
+                                 filename, server_studio_path, timeout=60)
+
+    # Open a shell session with the server
+    server_session = virt_utils.remote_login("nc", server_address,
+                                            server_shell_port, "", "",
+                                            sessions[0].prompt,
+                                            sessions[0].linesep)
+    server_session.set_status_test_command(sessions[0].status_test_command)
+
+    # Get the computer names of the server and clients
+    cmd = "echo %computername%"
+    server_name = server_session.cmd_output(cmd).strip()
+    client_names = [session.cmd_output(cmd).strip() for session in sessions]
+
+    # Delete all client machines from the server's data store
+    server_session.cmd("cd %s" % server_studio_path)
+    for client_name in client_names:
+        cmd = "%s %s %s" % (os.path.basename(dsso_delete_machine_binary),
+                            server_name, client_name)
+        server_session.cmd(cmd, print_func=logging.debug)
+
+    # Reboot the client machines
+    sessions = virt_utils.parallel((vm.reboot, (session,))
+                                  for vm, session in zip(vms, sessions))
+
+    # Check the NICs again
+    for vm in vms:
+        nics = vm.params.objects("nics")
+        for nic_index in range(len(nics)):
+            s = vm.wait_for_login(nic_index, 600)
+            s.close()
+
+    # Run whql_pre_command and close the sessions
+    if params.get("whql_pre_command"):
+        for session in sessions:
+            session.cmd(params.get("whql_pre_command"),
+                        int(params.get("whql_pre_command_timeout", 600)))
+            session.close()
+
+    # Run the automation program on the server
+    pool_name = "%s_pool" % client_names[0]
+    submission_name = "%s_%s" % (client_names[0],
+                                 params.get("submission_name"))
+    cmd = "%s %s %s %s %s %s" % (os.path.basename(dsso_test_binary),
+                                 server_name, pool_name, submission_name,
+                                 test_timeout, " ".join(client_names))
+    server_session.sendline(cmd)
+
+    # Helper function: wait for a given prompt and raise an exception if an
+    # error occurs
+    def find_prompt(prompt):
+        m, o = server_session.read_until_last_line_matches(
+            [prompt, server_session.prompt], print_func=logging.info,
+            timeout=600)
+        if m != 0:
+            errors = re.findall("^Error:.*$", o, re.I | re.M)
+            if errors:
+                raise error.TestError(errors[0])
+            else:
+                raise error.TestError("Error running automation program: "
+                                      "could not find '%s' prompt" % prompt)
+
+    # Tell the automation program which device to test
+    find_prompt("Device to test:")
+    server_session.sendline(params.get("test_device"))
+
+    # Tell the automation program which jobs to run
+    find_prompt("Jobs to run:")
+    server_session.sendline(params.get("job_filter", ".*"))
+
+    # Set submission DeviceData
+    find_prompt("DeviceData name:")
+    for dd in params.objects("device_data"):
+        dd_params = params.object_params(dd)
+        if dd_params.get("dd_name") and dd_params.get("dd_data"):
+            server_session.sendline(dd_params.get("dd_name"))
+            server_session.sendline(dd_params.get("dd_data"))
+    server_session.sendline()
+
+    # Set submission descriptors
+    find_prompt("Descriptor path:")
+    for desc in params.objects("descriptors"):
+        desc_params = params.object_params(desc)
+        if desc_params.get("desc_path"):
+            server_session.sendline(desc_params.get("desc_path"))
+    server_session.sendline()
+
+    # Set machine dimensions for each client machine
+    for vm_name in params.objects("vms"):
+        vm_params = params.object_params(vm_name)
+        find_prompt(r"Dimension name\b.*:")
+        for dp in vm_params.objects("dimensions"):
+            dp_params = vm_params.object_params(dp)
+            if dp_params.get("dim_name") and dp_params.get("dim_value"):
+                server_session.sendline(dp_params.get("dim_name"))
+                server_session.sendline(dp_params.get("dim_value"))
+        server_session.sendline()
+
+    # Set extra parameters for tests that require them (e.g. NDISTest)
+    for vm_name in params.objects("vms"):
+        vm_params = params.object_params(vm_name)
+        find_prompt(r"Parameter name\b.*:")
+        for dp in vm_params.objects("device_params"):
+            dp_params = vm_params.object_params(dp)
+            if dp_params.get("dp_name") and dp_params.get("dp_regex"):
+                server_session.sendline(dp_params.get("dp_name"))
+                server_session.sendline(dp_params.get("dp_regex"))
+                # Make sure the prompt appears again (if the device isn't found
+                # the automation program will terminate)
+                find_prompt(r"Parameter name\b.*:")
+        server_session.sendline()
+
+    # Wait for the automation program to terminate
+    try:
+        o = server_session.read_up_to_prompt(print_func=logging.info,
+                                             timeout=test_timeout + 300)
+        # (test_timeout + 300 is used here because the automation program is
+        # supposed to terminate cleanly on its own when test_timeout expires)
+        done = True
+    except aexpect.ExpectError, e:
+        o = e.output
+        done = False
+    server_session.close()
+
+    # Look for test results in the automation program's output
+    result_summaries = re.findall(r"---- \[.*?\] ----", o, re.DOTALL)
+    if not result_summaries:
+        raise error.TestError("The automation program did not return any "
+                              "results")
+    results = result_summaries[-1].strip("-")
+    results = eval("".join(results.splitlines()))
+
+    # Download logs and HTML reports from the server
+    for i, r in enumerate(results):
+        if "report" in r:
+            try:
+                rss_client.download(server_address,
+                                           server_file_transfer_port,
+                                           r["report"], test.debugdir)
+            except rss_client.FileTransferNotFoundError:
+                pass
+        if "logs" in r:
+            try:
+                rss_client.download(server_address,
+                                           server_file_transfer_port,
+                                           r["logs"], test.debugdir)
+            except rss_client.FileTransferNotFoundError:
+                pass
+            else:
+                try:
+                    # Create symlinks to test log dirs to make it easier
+                    # to access them (their original names are not human
+                    # readable)
+                    link_name = "logs_%s" % r["report"].split("\\")[-1]
+                    link_name = link_name.replace(" ", "_")
+                    link_name = link_name.replace("/", "_")
+                    os.symlink(r["logs"].split("\\")[-1],
+                               os.path.join(test.debugdir, link_name))
+                except (KeyError, OSError):
+                    pass
+
+    # Print result summary (both to the regular logs and to a file named
+    # 'summary' in test.debugdir)
+    def print_summary_line(f, line):
+        logging.info(line)
+        f.write(line + "\n")
+    if results:
+        # Make sure all results have the required keys
+        for r in results:
+            r["id"] = str(r.get("id"))
+            r["job"] = str(r.get("job"))
+            r["status"] = str(r.get("status"))
+            r["pass"] = int(r.get("pass", 0))
+            r["fail"] = int(r.get("fail", 0))
+            r["notrun"] = int(r.get("notrun", 0))
+            r["notapplicable"] = int(r.get("notapplicable", 0))
+        # Sort the results by failures and total test count in descending order
+        results = [(r["fail"],
+                    r["pass"] + r["fail"] + r["notrun"] + r["notapplicable"],
+                    r) for r in results]
+        results.sort(reverse=True)
+        results = [r[-1] for r in results]
+        # Print results
+        logging.info("")
+        logging.info("Result summary:")
+        name_length = max(len(r["job"]) for r in results)
+        fmt = "%%-6s %%-%ds %%-15s %%-8s %%-8s %%-8s %%-15s" % name_length
+        f = open(os.path.join(test.debugdir, "summary"), "w")
+        print_summary_line(f, fmt % ("ID", "Job", "Status", "Pass", "Fail",
+                                     "NotRun", "NotApplicable"))
+        print_summary_line(f, fmt % ("--", "---", "------", "----", "----",
+                                     "------", "-------------"))
+        for r in results:
+            print_summary_line(f, fmt % (r["id"], r["job"], r["status"],
+                                         r["pass"], r["fail"], r["notrun"],
+                                         r["notapplicable"]))
+        f.close()
+        logging.info("(see logs and HTML reports in %s)", test.debugdir)
+
+    # Kill the client VMs and fail if the automation program did not terminate
+    # on time
+    if not done:
+        virt_utils.parallel(vm.destroy for vm in vms)
+        raise error.TestFail("The automation program did not terminate "
+                             "on time")
+
+    # Fail if there are failed or incomplete jobs (kill the client VMs if there
+    # are incomplete jobs)
+    failed_jobs = [r["job"] for r in results
+                   if r["status"].lower() == "investigate"]
+    running_jobs = [r["job"] for r in results
+                    if r["status"].lower() == "inprogress"]
+    errors = []
+    if failed_jobs:
+        errors += ["Jobs failed: %s." % failed_jobs]
+    if running_jobs:
+        for vm in vms:
+            vm.destroy()
+        errors += ["Jobs did not complete on time: %s." % running_jobs]
+    if errors:
+        raise error.TestFail(" ".join(errors))
diff --git a/client/virt/tests/yum_update.py b/client/virt/tests/yum_update.py
new file mode 100644
index 0000000..7c9b96c
--- /dev/null
+++ b/client/virt/tests/yum_update.py
@@ -0,0 +1,49 @@
+import logging, time
+
+
+def internal_yum_update(session, command, prompt, timeout):
+    """
+    Helper function to perform the yum update test.
+
+    @param session: shell session established to the remote host
+    @param command: Command to be sent to the shell session
+    @param prompt: Machine prompt
+    @param timeout: How long to wait until we get an appropriate output from
+            the shell session.
+    """
+    session.sendline(command)
+    end_time = time.time() + timeout
+    while time.time() < end_time:
+        match = session.read_until_last_line_matches(
+                                                ["[Ii]s this [Oo][Kk]", prompt],
+                                                timeout=timeout)[0]
+        if match == 0:
+            logging.info("Got 'Is this ok'; sending 'y'")
+            session.sendline("y")
+        elif match == 1:
+            logging.info("Got shell prompt")
+            return True
+        else:
+            logging.info("Timeout or process exited")
+            return False
+
+
+def run_yum_update(test, params, env):
+    """
+    Runs yum update and yum update kernel on the remote host (yum enabled
+    hosts only).
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+
+    internal_yum_update(session, "yum update", params.get("shell_prompt"), 600)
+    internal_yum_update(session, "yum update kernel",
+                        params.get("shell_prompt"), 600)
+
+    session.close()
-- 
1.7.4


* Re: [PATCH 0/7] [RFC] KVM autotest refactor stage 1
  2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
                   ` (6 preceding siblings ...)
  2011-03-09  9:21 ` [PATCH 7/7] KVM test: Moving generic tests to common tests area Lucas Meneghel Rodrigues
@ 2011-03-09 11:54 ` Lucas Meneghel Rodrigues
  7 siblings, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-09 11:54 UTC (permalink / raw)
  To: autotest; +Cc: kvm

On Wed, 2011-03-09 at 06:21 -0300, Lucas Meneghel Rodrigues wrote:
> In order to maximize code reuse among different virtualization
> technologies, refactor the KVM test code in a way that will allow
> new implementations of virtualization testing, such as xen testing.
> 
> What was done
> • Create autotest_lib.client.virt and move the libraries in there,
> with some renaming and abstracting the KVM specific functions
> • Create a dispatcher that can instantiate the appropriate vm class,
> controlled by a new parameter 'vm_type'
> (can be kvm, xen and, in the future, libvirt...)
> • Make all the code use the new libraries
> • Remove the 'old' libraries
> • Make the KVM test loader try to find tests in a common
> location and, if a test can't be found there, look for it in the
> kvm subtest dir. This way other virt tests can benefit from them
> • Move the tests that have virt tech agnostic code to the common
> location

I have published a tree with the result of the refactor on the github
repo:

https://github.com/autotest/autotest

https://github.com/autotest/autotest/tree/refactor

So please refer to it when doing reviews, thanks!

> Lucas Meneghel Rodrigues (7):
>   KVM test: Move test utilities to client/tools
>   KVM test: Create autotest_lib.client.virt namespace
>   KVM test: tests_base.cfg: Introduce parameter 'vm_type'
>   KVM test: Adapt the test code to use the new virt namespace
>   KVM test: Removing the old libraries and programs
>   KVM test: Try to load subtests on a shared tests location
>   KVM test: Moving generic tests to common tests area
> 
>  client/common_lib/cartesian_config.py              |  698 ++++++++
>  client/tests/kvm/cd_hash.py                        |   48 -
>  client/tests/kvm/control                           |   18 +-
>  client/tests/kvm/control.parallel                  |    8 +-
>  client/tests/kvm/control.unittests                 |   14 +-
>  client/tests/kvm/get_started.py                    |    5 +-
>  client/tests/kvm/html_report.py                    | 1727 -------------------
>  client/tests/kvm/installer.py                      |  797 ---------
>  client/tests/kvm/kvm.py                            |   32 +-
>  client/tests/kvm/kvm_config.py                     |  698 --------
>  client/tests/kvm/kvm_monitor.py                    |  744 --------
>  client/tests/kvm/kvm_preprocessing.py              |  467 -----
>  client/tests/kvm/kvm_scheduler.py                  |  229 ---
>  client/tests/kvm/kvm_subprocess.py                 | 1351 ---------------
>  client/tests/kvm/kvm_test_utils.py                 |  753 ---------
>  client/tests/kvm/kvm_utils.py                      | 1728 -------------------
>  client/tests/kvm/kvm_vm.py                         | 1777 --------------------
>  client/tests/kvm/migration_control.srv             |   12 +-
>  client/tests/kvm/ppm_utils.py                      |  237 ---
>  client/tests/kvm/rss_file_transfer.py              |  519 ------
>  client/tests/kvm/scan_results.py                   |   97 --
>  client/tests/kvm/stepeditor.py                     | 1401 ---------------
>  client/tests/kvm/test_setup.py                     |  700 --------
>  client/tests/kvm/tests/autotest.py                 |   25 -
>  client/tests/kvm/tests/balloon_check.py            |    2 +-
>  client/tests/kvm/tests/boot.py                     |   26 -
>  client/tests/kvm/tests/boot_savevm.py              |    2 +-
>  client/tests/kvm/tests/build.py                    |    6 +-
>  client/tests/kvm/tests/clock_getres.py             |   37 -
>  client/tests/kvm/tests/enospc.py                   |    2 +-
>  client/tests/kvm/tests/ethtool.py                  |  235 ---
>  client/tests/kvm/tests/file_transfer.py            |   83 -
>  client/tests/kvm/tests/guest_s4.py                 |   76 -
>  client/tests/kvm/tests/guest_test.py               |   80 -
>  client/tests/kvm/tests/image_copy.py               |   45 -
>  client/tests/kvm/tests/iofuzz.py                   |  136 --
>  client/tests/kvm/tests/ioquit.py                   |   31 -
>  client/tests/kvm/tests/iozone_windows.py           |   40 -
>  client/tests/kvm/tests/jumbo.py                    |  127 --
>  client/tests/kvm/tests/kdump.py                    |   75 -
>  client/tests/kvm/tests/ksm_overcommit.py           |   37 +-
>  client/tests/kvm/tests/linux_s3.py                 |   41 -
>  client/tests/kvm/tests/mac_change.py               |   60 -
>  client/tests/kvm/tests/migration.py                |    6 +-
>  .../kvm/tests/migration_with_file_transfer.py      |    8 +-
>  client/tests/kvm/tests/migration_with_reboot.py    |    4 +-
>  client/tests/kvm/tests/module_probe.py             |    4 +-
>  client/tests/kvm/tests/multicast.py                |   90 -
>  client/tests/kvm/tests/netperf.py                  |   91 -
>  client/tests/kvm/tests/nic_bonding.py              |    6 +-
>  client/tests/kvm/tests/nic_hotplug.py              |   24 +-
>  client/tests/kvm/tests/nic_promisc.py              |   39 -
>  client/tests/kvm/tests/nicdriver_unload.py         |   56 -
>  client/tests/kvm/tests/pci_hotplug.py              |   18 +-
>  client/tests/kvm/tests/physical_resources_check.py |    2 +-
>  client/tests/kvm/tests/ping.py                     |   73 -
>  client/tests/kvm/tests/pxe.py                      |   30 -
>  client/tests/kvm/tests/qemu_img.py                 |   22 +-
>  client/tests/kvm/tests/qmp_basic.py                |    2 +-
>  client/tests/kvm/tests/qmp_basic_rhel6.py          |    2 +-
>  client/tests/kvm/tests/set_link.py                 |   14 +-
>  client/tests/kvm/tests/shutdown.py                 |   43 -
>  client/tests/kvm/tests/stepmaker.py                |   11 +-
>  client/tests/kvm/tests/steps.py                    |    5 +-
>  client/tests/kvm/tests/stress_boot.py              |   53 -
>  client/tests/kvm/tests/timedrift.py                |   16 +-
>  client/tests/kvm/tests/timedrift_with_migration.py |   10 +-
>  client/tests/kvm/tests/timedrift_with_reboot.py    |   10 +-
>  client/tests/kvm/tests/timedrift_with_stop.py      |   10 +-
>  client/tests/kvm/tests/unattended_install.py       |    4 +-
>  client/tests/kvm/tests/unittest.py                 |    6 +-
>  client/tests/kvm/tests/virtio_console.py           |   22 +-
>  client/tests/kvm/tests/vlan.py                     |  175 --
>  client/tests/kvm/tests/vmstop.py                   |    6 +-
>  client/tests/kvm/tests/whql_client_install.py      |  136 --
>  client/tests/kvm/tests/whql_submission.py          |  275 ---
>  client/tests/kvm/tests/yum_update.py               |   49 -
>  client/tests/kvm/tests_base.cfg.sample             |    1 +
>  client/tools/cd_hash.py                            |   48 +
>  client/tools/html_report.py                        | 1727 +++++++++++++++++++
>  client/tools/scan_results.py                       |   97 ++
>  client/virt/aexpect.py                             | 1352 +++++++++++++++
>  client/virt/kvm_installer.py                       |  797 +++++++++
>  client/virt/kvm_monitor.py                         |  745 ++++++++
>  client/virt/kvm_vm.py                              | 1500 +++++++++++++++++
>  client/virt/ppm_utils.py                           |  237 +++
>  client/virt/rss_client.py                          |  519 ++++++
>  client/virt/tests/autotest.py                      |   25 +
>  client/virt/tests/boot.py                          |   26 +
>  client/virt/tests/clock_getres.py                  |   37 +
>  client/virt/tests/ethtool.py                       |  235 +++
>  client/virt/tests/file_transfer.py                 |   84 +
>  client/virt/tests/guest_s4.py                      |   76 +
>  client/virt/tests/guest_test.py                    |   80 +
>  client/virt/tests/image_copy.py                    |   45 +
>  client/virt/tests/iofuzz.py                        |  136 ++
>  client/virt/tests/ioquit.py                        |   31 +
>  client/virt/tests/iozone_windows.py                |   40 +
>  client/virt/tests/jumbo.py                         |  127 ++
>  client/virt/tests/kdump.py                         |   75 +
>  client/virt/tests/linux_s3.py                      |   41 +
>  client/virt/tests/mac_change.py                    |   60 +
>  client/virt/tests/multicast.py                     |   90 +
>  client/virt/tests/netperf.py                       |   90 +
>  client/virt/tests/nic_promisc.py                   |   39 +
>  client/virt/tests/nicdriver_unload.py              |   56 +
>  client/virt/tests/ping.py                          |   73 +
>  client/virt/tests/pxe.py                           |   29 +
>  client/virt/tests/shutdown.py                      |   43 +
>  client/virt/tests/stress_boot.py                   |   53 +
>  client/virt/tests/vlan.py                          |  175 ++
>  client/virt/tests/whql_client_install.py           |  136 ++
>  client/virt/tests/whql_submission.py               |  275 +++
>  client/virt/tests/yum_update.py                    |   49 +
>  client/virt/virt_env_process.py                    |  438 +++++
>  client/virt/virt_scheduler.py                      |  229 +++
>  client/virt/virt_step_editor.py                    | 1401 +++++++++++++++
>  client/virt/virt_test_setup.py                     |  700 ++++++++
>  client/virt/virt_test_utils.py                     |  754 +++++++++
>  client/virt/virt_utils.py                          | 1760 +++++++++++++++++++
>  client/virt/virt_vm.py                             |  298 ++++
>  121 files changed, 15706 insertions(+), 15671 deletions(-)
>  create mode 100755 client/common_lib/cartesian_config.py
>  delete mode 100755 client/tests/kvm/cd_hash.py
>  delete mode 100755 client/tests/kvm/html_report.py
>  delete mode 100644 client/tests/kvm/installer.py
>  delete mode 100755 client/tests/kvm/kvm_config.py
>  delete mode 100644 client/tests/kvm/kvm_monitor.py
>  delete mode 100644 client/tests/kvm/kvm_preprocessing.py
>  delete mode 100644 client/tests/kvm/kvm_scheduler.py
>  delete mode 100755 client/tests/kvm/kvm_subprocess.py
>  delete mode 100644 client/tests/kvm/kvm_test_utils.py
>  delete mode 100644 client/tests/kvm/kvm_utils.py
>  delete mode 100755 client/tests/kvm/kvm_vm.py
>  delete mode 100644 client/tests/kvm/ppm_utils.py
>  delete mode 100755 client/tests/kvm/rss_file_transfer.py
>  delete mode 100755 client/tests/kvm/scan_results.py
>  delete mode 100755 client/tests/kvm/stepeditor.py
>  delete mode 100644 client/tests/kvm/test_setup.py
>  delete mode 100644 client/tests/kvm/tests/autotest.py
>  delete mode 100644 client/tests/kvm/tests/boot.py
>  delete mode 100644 client/tests/kvm/tests/clock_getres.py
>  delete mode 100644 client/tests/kvm/tests/ethtool.py
>  delete mode 100644 client/tests/kvm/tests/file_transfer.py
>  delete mode 100644 client/tests/kvm/tests/guest_s4.py
>  delete mode 100644 client/tests/kvm/tests/guest_test.py
>  delete mode 100644 client/tests/kvm/tests/image_copy.py
>  delete mode 100644 client/tests/kvm/tests/iofuzz.py
>  delete mode 100644 client/tests/kvm/tests/ioquit.py
>  delete mode 100644 client/tests/kvm/tests/iozone_windows.py
>  delete mode 100644 client/tests/kvm/tests/jumbo.py
>  delete mode 100644 client/tests/kvm/tests/kdump.py
>  delete mode 100644 client/tests/kvm/tests/linux_s3.py
>  delete mode 100644 client/tests/kvm/tests/mac_change.py
>  delete mode 100644 client/tests/kvm/tests/multicast.py
>  delete mode 100644 client/tests/kvm/tests/netperf.py
>  delete mode 100644 client/tests/kvm/tests/nic_promisc.py
>  delete mode 100644 client/tests/kvm/tests/nicdriver_unload.py
>  delete mode 100644 client/tests/kvm/tests/ping.py
>  delete mode 100644 client/tests/kvm/tests/pxe.py
>  delete mode 100644 client/tests/kvm/tests/shutdown.py
>  delete mode 100644 client/tests/kvm/tests/stress_boot.py
>  delete mode 100644 client/tests/kvm/tests/vlan.py
>  delete mode 100644 client/tests/kvm/tests/whql_client_install.py
>  delete mode 100644 client/tests/kvm/tests/whql_submission.py
>  delete mode 100644 client/tests/kvm/tests/yum_update.py
>  create mode 100644 client/tools/__init__.py
>  create mode 100755 client/tools/cd_hash.py
>  create mode 100755 client/tools/html_report.py
>  create mode 100755 client/tools/scan_results.py
>  create mode 100644 client/virt/__init__.py
>  create mode 100755 client/virt/aexpect.py
>  create mode 100644 client/virt/kvm_installer.py
>  create mode 100644 client/virt/kvm_monitor.py
>  create mode 100755 client/virt/kvm_vm.py
>  create mode 100644 client/virt/ppm_utils.py
>  create mode 100755 client/virt/rss_client.py
>  create mode 100644 client/virt/tests/autotest.py
>  create mode 100644 client/virt/tests/boot.py
>  create mode 100644 client/virt/tests/clock_getres.py
>  create mode 100644 client/virt/tests/ethtool.py
>  create mode 100644 client/virt/tests/file_transfer.py
>  create mode 100644 client/virt/tests/guest_s4.py
>  create mode 100644 client/virt/tests/guest_test.py
>  create mode 100644 client/virt/tests/image_copy.py
>  create mode 100644 client/virt/tests/iofuzz.py
>  create mode 100644 client/virt/tests/ioquit.py
>  create mode 100644 client/virt/tests/iozone_windows.py
>  create mode 100644 client/virt/tests/jumbo.py
>  create mode 100644 client/virt/tests/kdump.py
>  create mode 100644 client/virt/tests/linux_s3.py
>  create mode 100644 client/virt/tests/mac_change.py
>  create mode 100644 client/virt/tests/multicast.py
>  create mode 100644 client/virt/tests/netperf.py
>  create mode 100644 client/virt/tests/nic_promisc.py
>  create mode 100644 client/virt/tests/nicdriver_unload.py
>  create mode 100644 client/virt/tests/ping.py
>  create mode 100644 client/virt/tests/pxe.py
>  create mode 100644 client/virt/tests/shutdown.py
>  create mode 100644 client/virt/tests/stress_boot.py
>  create mode 100644 client/virt/tests/vlan.py
>  create mode 100644 client/virt/tests/whql_client_install.py
>  create mode 100644 client/virt/tests/whql_submission.py
>  create mode 100644 client/virt/tests/yum_update.py
>  create mode 100644 client/virt/virt_env_process.py
>  create mode 100644 client/virt/virt_scheduler.py
>  create mode 100755 client/virt/virt_step_editor.py
>  create mode 100644 client/virt/virt_test_setup.py
>  create mode 100644 client/virt/virt_test_utils.py
>  create mode 100644 client/virt/virt_utils.py
>  create mode 100644 client/virt/virt_vm.py
> 


_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/7] KVM test: Move test utilities to client/tools
  2011-03-09  9:21 ` [PATCH 1/7] KVM test: Move test utilities to client/tools Lucas Meneghel Rodrigues
@ 2011-03-11  6:47   ` Amos Kong
  2011-03-11 11:52     ` [Autotest] " Lucas Meneghel Rodrigues
  2011-03-11 21:18     ` Lucas Meneghel Rodrigues
  0 siblings, 2 replies; 12+ messages in thread
From: Amos Kong @ 2011-03-11  6:47 UTC (permalink / raw)
  To: Lucas Meneghel Rodrigues; +Cc: autotest, kvm

On Wed, Mar 09, 2011 at 06:21:04AM -0300, Lucas Meneghel Rodrigues wrote:
> The programs cd_hash, html_report, scan_results can be
> used by other users of autotest, so move them to the
> tools directory inside the client directory.
> 
> Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
> ---
>  client/tools/cd_hash.py      |   48 ++
>  client/tools/html_report.py  | 1727 ++++++++++++++++++++++++++++++++++++++++++
>  client/tools/scan_results.py |   97 +++
>  3 files changed, 1872 insertions(+), 0 deletions(-)
>  create mode 100644 client/tools/__init__.py
>  create mode 100755 client/tools/cd_hash.py
>  create mode 100755 client/tools/html_report.py
>  create mode 100755 client/tools/scan_results.py
> 
> diff --git a/client/tools/__init__.py b/client/tools/__init__.py
> new file mode 100644
> index 0000000..e69de29
> diff --git a/client/tools/cd_hash.py b/client/tools/cd_hash.py
> new file mode 100755
> index 0000000..04f8cbe
> --- /dev/null
> +++ b/client/tools/cd_hash.py
> @@ -0,0 +1,48 @@
> +#!/usr/bin/python
> +"""
> +Program that calculates several hashes for a given CD image.
> +
> +@copyright: Red Hat 2008-2009
> +"""
> +
> +import os, sys, optparse, logging
> +import common
> +import kvm_utils

Is it still supposed to be possible to execute the tools individually?
Also, kvm_utils.py has been dropped.

# client/tools/scan_results.py .... (ok)

# client/tools/cd_hash.py  (failed)
Traceback (most recent call last):
  File "client/tools/cd_hash.py", line 9, in <module>
    import common
ImportError: No module named common
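The ImportError above is the usual symptom of a missing `common.py` bootstrap next to the relocated script: autotest scripts rely on a sibling `common` module to put the autotest root on sys.path before any `autotest_lib` import can work. A minimal sketch of that bootstrap pattern (the helper name and directory walk are illustrative, not autotest's exact code):

```python
# Sketch of the sys.path bootstrap a sibling common.py typically provides
# so a tool like client/tools/cd_hash.py can be run standalone. Helper
# and package names are illustrative, not the project's exact code.
import os
import sys


def bootstrap(script_file, package_name="autotest_lib"):
    """Walk upward from the script's directory until a directory that
    contains `package_name` is found, prepend it to sys.path, and
    return it, so `import autotest_lib...` works from any cwd."""
    here = os.path.abspath(os.path.dirname(script_file))
    while True:
        if os.path.isdir(os.path.join(here, package_name)):
            if here not in sys.path:
                sys.path.insert(0, here)
            return here
        parent = os.path.dirname(here)
        if parent == here:  # reached the filesystem root without a hit
            raise ImportError("%s not found above %s"
                              % (package_name, script_file))
        here = parent
```

With a `common.py` doing something like this next to each tool, `import common` succeeds and the subsequent `autotest_lib` imports resolve.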


> +from autotest_lib.client.common_lib import logging_manager
> +from autotest_lib.client.bin import utils
> +
> +
> +if __name__ == "__main__":
> +    parser = optparse.OptionParser("usage: %prog [options] [filenames]")
> +    options, args = parser.parse_args()
> +
> +    logging_manager.configure_logging(kvm_utils.KvmLoggingConfig())
> +
> +    if args:
> +        filenames = args
> +    else:
> +        parser.print_help()
> +        sys.exit(1)

....
> diff --git a/client/tools/html_report.py b/client/tools/html_report.py


I've executed a series of tests, but no result.html was produced.

> new file mode 100755
> index 0000000..8b4b109
> --- /dev/null
> +++ b/client/tools/html_report.py
> @@ -0,0 +1,1727 @@
> +#!/usr/bin/python
> +"""
> +Script used to parse the test results and generate an HTML report.
> +
> +@copyright: (c)2005-2007 Matt Kruse (javascripttoolbox.com)
> +@copyright: Red Hat 2008-2009
> +@author: Dror Russo (drusso@redhat.com)
> +"""

 ....

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Autotest] [PATCH 1/7] KVM test: Move test utilities to client/tools
  2011-03-11  6:47   ` Amos Kong
@ 2011-03-11 11:52     ` Lucas Meneghel Rodrigues
  2011-03-11 21:18     ` Lucas Meneghel Rodrigues
  1 sibling, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-11 11:52 UTC (permalink / raw)
  To: Amos Kong; +Cc: autotest, kvm

On Fri, 2011-03-11 at 14:47 +0800, Amos Kong wrote:
> On Wed, Mar 09, 2011 at 06:21:04AM -0300, Lucas Meneghel Rodrigues wrote:
> > The programs cd_hash, html_report, scan_results can be
> > used by other users of autotest, so move them to the
> > tools directory inside the client directory.
> > 
> > Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
> > ---
> >  client/tools/cd_hash.py      |   48 ++
> >  client/tools/html_report.py  | 1727 ++++++++++++++++++++++++++++++++++++++++++
> >  client/tools/scan_results.py |   97 +++
> >  3 files changed, 1872 insertions(+), 0 deletions(-)
> >  create mode 100644 client/tools/__init__.py
> >  create mode 100755 client/tools/cd_hash.py
> >  create mode 100755 client/tools/html_report.py
> >  create mode 100755 client/tools/scan_results.py
> > 
> > diff --git a/client/tools/__init__.py b/client/tools/__init__.py
> > new file mode 100644
> > index 0000000..e69de29
> > diff --git a/client/tools/cd_hash.py b/client/tools/cd_hash.py
> > new file mode 100755
> > index 0000000..04f8cbe
> > --- /dev/null
> > +++ b/client/tools/cd_hash.py
> > @@ -0,0 +1,48 @@
> > +#!/usr/bin/python
> > +"""
> > +Program that calculates several hashes for a given CD image.
> > +
> > +@copyright: Red Hat 2008-2009
> > +"""
> > +
> > +import os, sys, optparse, logging
> > +import common
> > +import kvm_utils
> 
> Is it still supposed to be possible to execute the tools individually?
> Also, kvm_utils.py has been dropped.

OK, those were mistakes of mine. Will fix them and update the branch,
thanks!
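Since the diffstat in the cover letter folds kvm_utils.py into client/virt/virt_utils.py, the fix presumably swaps the stale imports for the new virt namespace. A hypothetical sketch of the corrected cd_hash.py header (the module path is assumed from the rename and the `VirtLoggingConfig` class name is a guess, hence the plain-logging fallback):

```python
# Hypothetical post-refactor header for client/tools/cd_hash.py.
# The autotest_lib import paths and the VirtLoggingConfig name are
# assumptions based on the kvm_utils -> virt_utils rename, not code
# confirmed by this thread.
import logging
import optparse
import sys


def configure_logging():
    """Use autotest's logging manager when importable; otherwise fall
    back to logging.basicConfig so the tool still runs standalone."""
    try:
        from autotest_lib.client.common_lib import logging_manager
        from autotest_lib.client.virt import virt_utils
        logging_manager.configure_logging(virt_utils.VirtLoggingConfig())
    except ImportError:
        logging.basicConfig(level=logging.INFO)


def parse_filenames(argv):
    """Return the positional arguments (image filenames) from argv."""
    parser = optparse.OptionParser("usage: %prog [options] [filenames]")
    _options, args = parser.parse_args(argv)
    return args


if __name__ == "__main__":
    configure_logging()
    filenames = parse_filenames(sys.argv[1:])
    if not filenames:
        sys.exit(1)
```

The try/except fallback also sidesteps the standalone-execution failure Amos reported, whatever the final import layout ends up being.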

> # client/tools/scan_results.py .... (ok)
> 
> # client/tools/cd_hash.py  (failed)
> Traceback (most recent call last):
>   File "client/tools/cd_hash.py", line 9, in <module>
>     import common
> ImportError: No module named common
> 
> 
> > +from autotest_lib.client.common_lib import logging_manager
> > +from autotest_lib.client.bin import utils
> > +
> > +
> > +if __name__ == "__main__":
> > +    parser = optparse.OptionParser("usage: %prog [options] [filenames]")
> > +    options, args = parser.parse_args()
> > +
> > +    logging_manager.configure_logging(kvm_utils.KvmLoggingConfig())
> > +
> > +    if args:
> > +        filenames = args
> > +    else:
> > +        parser.print_help()
> > +        sys.exit(1)
> 
> ....
> > diff --git a/client/tools/html_report.py b/client/tools/html_report.py
> 
> 
> I've executed a series of tests, but no result.html was produced.
> 
> > new file mode 100755
> > index 0000000..8b4b109
> > --- /dev/null
> > +++ b/client/tools/html_report.py
> > @@ -0,0 +1,1727 @@
> > +#!/usr/bin/python
> > +"""
> > +Script used to parse the test results and generate an HTML report.
> > +
> > +@copyright: (c)2005-2007 Matt Kruse (javascripttoolbox.com)
> > +@copyright: Red Hat 2008-2009
> > +@author: Dror Russo (drusso@redhat.com)
> > +"""
> 
>  ....
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Autotest] [PATCH 1/7] KVM test: Move test utilities to client/tools
  2011-03-11  6:47   ` Amos Kong
  2011-03-11 11:52     ` [Autotest] " Lucas Meneghel Rodrigues
@ 2011-03-11 21:18     ` Lucas Meneghel Rodrigues
  1 sibling, 0 replies; 12+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-03-11 21:18 UTC (permalink / raw)
  To: Amos Kong; +Cc: autotest, kvm

On Fri, 2011-03-11 at 14:47 +0800, Amos Kong wrote:
> On Wed, Mar 09, 2011 at 06:21:04AM -0300, Lucas Meneghel Rodrigues wrote:
> > The programs cd_hash, html_report, scan_results can be
> > used by other users of autotest, so move them to the
> > tools directory inside the client directory.
> > 
> > Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>

> ....
> > diff --git a/client/tools/html_report.py b/client/tools/html_report.py
> 
> 
> I've executed a series of tests, but no result.html was produced.

I have fixed all the problems with the utilities and republished the
refactor branch at my git repo:

git clone git://github.com/lmr/autotest.git

git checkout refactor

Thanks Amos!


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2011-03-11 21:18 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-03-09  9:21 [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues
2011-03-09  9:21 ` [PATCH 1/7] KVM test: Move test utilities to client/tools Lucas Meneghel Rodrigues
2011-03-11  6:47   ` Amos Kong
2011-03-11 11:52     ` [Autotest] " Lucas Meneghel Rodrigues
2011-03-11 21:18     ` Lucas Meneghel Rodrigues
2011-03-09  9:21 ` [PATCH 2/7] KVM test: Create autotest_lib.client.virt namespace Lucas Meneghel Rodrigues
2011-03-09  9:21 ` [PATCH 3/7] KVM test: tests_base.cfg: Introduce parameter 'vm_type' Lucas Meneghel Rodrigues
2011-03-09  9:21 ` [PATCH 4/7] KVM test: Adapt the test code to use the new virt namespace Lucas Meneghel Rodrigues
2011-03-09  9:21 ` [PATCH 5/7] KVM test: Removing the old libraries and programs Lucas Meneghel Rodrigues
2011-03-09  9:21 ` [PATCH 6/7] KVM test: Try to load subtests on a shared tests location Lucas Meneghel Rodrigues
2011-03-09  9:21 ` [PATCH 7/7] KVM test: Moving generic tests to common tests area Lucas Meneghel Rodrigues
2011-03-09 11:54 ` [PATCH 0/7] [RFC] KVM autotest refactor stage 1 Lucas Meneghel Rodrigues

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox