* [KVM-autotest] Cgroup-kvm rework
@ 2012-02-27 18:42 Lukas Doktor
  2012-02-27 18:42 ` [PATCH 1/4] [KVM-autotest] virt.kvm_vm: Make snapshot and boot params optional Lukas Doktor
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Lukas Doktor @ 2012-02-27 18:42 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, ldoktor, jzupka

Hi,

This is a complete rework of the cgroup test from subtests to single-test execution. It improves testing stability and allows better test customisation. The speed is similar or faster in single-variant execution and a bit slower in all-variants execution compared to the previous version.

It also contains a lot of important bugfixes and some cool enhancements, described in the patches.

Check out the current version at:
https://github.com/autotest/autotest/pull/209

Regards,
Lukáš

_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest

* [PATCH 1/4] [KVM-autotest] virt.kvm_vm: Make snapshot and boot params optional
  2012-02-27 18:42 [KVM-autotest] Cgroup-kvm rework Lukas Doktor
@ 2012-02-27 18:42 ` Lukas Doktor
  2012-02-27 18:42 ` [PATCH 2/4] [KVM-autotest] virt.virt_vm: Add option to create raw images with dd Lukas Doktor
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Lukas Doktor @ 2012-02-27 18:42 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, ldoktor, jzupka

Currently the boot and snapshot parameters are either 'yes' or not
present. This patch makes it possible to specify 'yes', 'no' or to leave
them unset. The 'no' option is necessary e.g. when -snapshot is present
and we want to override it with 'snapshot=off' on one device.
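
For illustration, the new handling can be sketched like this (a minimal
standalone Python sketch; the helper name 'image_param_to_bool' is made
up, the patch itself inlines the conditional expressions):

    def image_param_to_bool(value):
        """Map a cfg string to True/False/None ('yes'/'no'/unset)."""
        if value == "yes":
            return True
        if value == "no":
            return False
        return None     # parameter absent -> option left out entirely

The option-building helper in the hunk below then emits ",snapshot=on"
for True, ",snapshot=off" for False, and omits the option completely
for None.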

Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
---
 client/virt/kvm_vm.py |   17 +++++++++--------
 1 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/client/virt/kvm_vm.py b/client/virt/kvm_vm.py
index cc181d7..fcbdde4 100644
--- a/client/virt/kvm_vm.py
+++ b/client/virt/kvm_vm.py
@@ -201,11 +201,8 @@ class VM(virt_vm.BaseVM):
             Add option to qemu parameters.
             """
             fmt=",%s=%s"
-            if value and isinstance(value, bool):
-                if value:
-                    return fmt % (option, "on")
-                else:
-                    return fmt % (option, "off")
+            if isinstance(value, bool):
+                return fmt % (option, "on" if value else "off")
             elif value and isinstance(value, str):
                 # "EMPTY_STRING" and "NULL_STRING" is used for testing illegal
                 # foramt of option.
@@ -301,7 +298,7 @@ class VM(virt_vm.BaseVM):
                 return " -cdrom '%s'" % filename
 
         def add_drive(help, filename, index=None, format=None, cache=None,
-                      werror=None, rerror=None, serial=None, snapshot=False,
+                      werror=None, rerror=None, serial=None, snapshot=None,
                       boot=False, blkdebug=None, bus=None, port=None,
                       bootindex=None, removable=None, min_io_size=None,
                       opt_io_size=None, physical_block_size=None,
@@ -648,8 +645,12 @@ class VM(virt_vm.BaseVM):
                     image_params.get("drive_werror"),
                     image_params.get("drive_rerror"),
                     image_params.get("drive_serial"),
-                    image_params.get("image_snapshot") == "yes",
-                    image_params.get("image_boot") == "yes",
+                    True if image_params.get("image_snapshot") == "yes" else (
+                        False if image_params.get("image_snapshot") == "no" else
+                        None),
+                    True if image_params.get("image_boot") == "yes" else (
+                        False if image_params.get("image_boot") == "no" else
+                        None),
                     virt_vm.get_image_blkdebug_filename(image_params,
                                                         self.virt_dir),
                     bus,
-- 
1.7.7.6

* [PATCH 2/4] [KVM-autotest] virt.virt_vm: Add option to create raw images with dd
  2012-02-27 18:42 [KVM-autotest] Cgroup-kvm rework Lukas Doktor
  2012-02-27 18:42 ` [PATCH 1/4] [KVM-autotest] virt.kvm_vm: Make snapshot and boot params optional Lukas Doktor
@ 2012-02-27 18:42 ` Lukas Doktor
  2012-02-27 18:42 ` [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order Lukas Doktor
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Lukas Doktor @ 2012-02-27 18:42 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, ldoktor, jzupka

Adds an option to create raw images with dd (a non-sparse file, unlike
the one qemu-img creates).
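
The suffix handling can be sketched standalone like this (illustrative
only; like the patch, it assumes the size string ends with one of the
K/M/G/T suffixes, and the 'dd_create_cmd' name is made up):

    def dd_create_cmd(image_filename, size="10G"):
        # maps a K/M/G/T suffix to (count multiplier, block size in KiB),
        # e.g. '10G' -> count=10*1024 blocks of bs=1024K (1 MiB) = 10 GiB
        human = {'K': (1, 1),
                 'M': (1, 1024),
                 'G': (1024, 1024),
                 'T': (1024, 1048576)}
        count = int(size[:-1]) * human[size[-1]][0]
        block_size = human[size[-1]][1]
        return ("dd if=/dev/zero of=%s count=%s bs=%sK"
                % (image_filename, count, block_size))

Because dd writes every block, the resulting file is fully allocated,
unlike the sparse file produced by 'qemu-img create -f raw'.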

Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
---
 client/virt/virt_vm.py |   38 ++++++++++++++++++++++++++------------
 1 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/client/virt/virt_vm.py b/client/virt/virt_vm.py
index 6cdb91e..06db7a6 100644
--- a/client/virt/virt_vm.py
+++ b/client/virt/virt_vm.py
@@ -336,23 +336,37 @@ def create_image(params, root_dir):
            image_cluster_size (optional) -- the cluster size for the image
            image_size -- the requested size of the image (a string
            qemu-img can understand, such as '10G')
+           create_with_dd -- use dd to create the image (raw format only)
     """
-    qemu_img_cmd = virt_utils.get_path(root_dir, params.get("qemu_img_binary",
-                                                           "qemu-img"))
-    qemu_img_cmd += " create"
-
     format = params.get("image_format", "qcow2")
-    qemu_img_cmd += " -f %s" % format
+    image_filename = get_image_filename(params, root_dir)
+    size = params.get("image_size", "10G")
+    if params.get("create_with_dd") == "yes" and format == "raw":
+        # maps K,M,G,T => (count, bs)
+        human = {'K': (1, 1),
+                 'M': (1, 1024),
+                 'G': (1024, 1024),
+                 'T': (1024, 1048576),
+                }
+        if human.has_key(size[-1]):
+            block_size = human[size[-1]][1]
+            size = int(size[:-1]) * human[size[-1]][0]
+        qemu_img_cmd = ("dd if=/dev/zero of=%s count=%s bs=%sK"
+                        % (image_filename, size, block_size))
+    else:
+        qemu_img_cmd = virt_utils.get_path(root_dir,
+                                    params.get("qemu_img_binary", "qemu-img"))
+        qemu_img_cmd += " create"
 
-    image_cluster_size = params.get("image_cluster_size", None)
-    if image_cluster_size is not None:
-        qemu_img_cmd += " -o cluster_size=%s" % image_cluster_size
+        qemu_img_cmd += " -f %s" % format
 
-    image_filename = get_image_filename(params, root_dir)
-    qemu_img_cmd += " %s" % image_filename
+        image_cluster_size = params.get("image_cluster_size", None)
+        if image_cluster_size is not None:
+            qemu_img_cmd += " -o cluster_size=%s" % image_cluster_size
 
-    size = params.get("image_size", "10G")
-    qemu_img_cmd += " %s" % size
+        qemu_img_cmd += " %s" % image_filename
+
+        qemu_img_cmd += " %s" % size
 
     utils.system(qemu_img_cmd)
     return image_filename
-- 
1.7.7.6

* [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order
  2012-02-27 18:42 [KVM-autotest] Cgroup-kvm rework Lukas Doktor
  2012-02-27 18:42 ` [PATCH 1/4] [KVM-autotest] virt.kvm_vm: Make snapshot and boot params optional Lukas Doktor
  2012-02-27 18:42 ` [PATCH 2/4] [KVM-autotest] virt.virt_vm: Add option to create raw images with dd Lukas Doktor
@ 2012-02-27 18:42 ` Lukas Doktor
  2012-03-12  3:34   ` lei yang
  2012-02-27 18:42 ` [PATCH 4/4] [KVM-autotest] tests.cgroup: Rework test execution Lukas Doktor
  2012-02-28 19:07 ` [KVM-AUTOTEST] [KVM-autotest] Cgroup-kvm rework Lucas Meneghel Rodrigues
  4 siblings, 1 reply; 10+ messages in thread
From: Lukas Doktor @ 2012-02-27 18:42 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, ldoktor, jzupka

Currently subtests.cfg is processed first and only then all the other
configs. My test needs to override the smp parameter in some variants,
which is currently impossible.

In words, the current order means: we define the subtest variants, then
we specify base, guest and other details. In the end we limit what we
want to execute.

My proposed order enables forcing base/guest params in subtest variants.

In words, this means we specify base, guest system, cdkeys, etc., and at
the end we define the subtests with their variants. Then we limit what
we actually want to execute, but now a subtest can force various
base/guest settings (see the hypothetical snippet below).
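
For illustration, a hypothetical subtests.cfg variant that only works
with the new order (Cartesian config syntax; the variant name and the
values are made up):

    variants:
        - cgroup_cpu_share:
            type = cgroup
            # forcing guest details from a subtest variant is only
            # effective when subtests.cfg is included last
            smp = 4
            mem = 2048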

Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
---
 client/tests/kvm/tests-shared.cfg.sample |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests-shared.cfg.sample b/client/tests/kvm/tests-shared.cfg.sample
index c6304b3..bda982d 100644
--- a/client/tests/kvm/tests-shared.cfg.sample
+++ b/client/tests/kvm/tests-shared.cfg.sample
@@ -5,11 +5,11 @@
 
 # Include the base config files.
 include base.cfg
-include subtests.cfg
 include guest-os.cfg
 include guest-hw.cfg
 include cdkeys.cfg
 include virtio-win.cfg
+include subtests.cfg
 
 # Virtualization type (kvm or libvirt)
 vm_type = kvm
-- 
1.7.7.6


* [PATCH 4/4] [KVM-autotest] tests.cgroup: Rework test execution
  2012-02-27 18:42 [KVM-autotest] Cgroup-kvm rework Lukas Doktor
                   ` (2 preceding siblings ...)
  2012-02-27 18:42 ` [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order Lukas Doktor
@ 2012-02-27 18:42 ` Lukas Doktor
  2012-02-28 19:07 ` [KVM-AUTOTEST] [KVM-autotest] Cgroup-kvm rework Lucas Meneghel Rodrigues
  4 siblings, 0 replies; 10+ messages in thread
From: Lukas Doktor @ 2012-02-27 18:42 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, ldoktor, jzupka

Complete rework of the cgroup test from subtests to single-test
execution. It improves testing stability and allows better test
customisation. The speed is similar or faster in single-variant
execution and a bit slower in all-variants execution compared to the
previous version.

Most tests now support cgroup_test_time, cgroup_test_limit*
and other cfg params for test tweaking (a usage sketch follows the
enhancement list below).

Also some enhancements were made:
* blkio_bandwidth - support for multiple speeds
* blkio_throttle - support for multiple VMs
* blkio_throttle_multi - support for multiple VMs
* cpu_cfs_util - per vcpu stats
* cpu_share - support for multiple speeds
* cpuset_cpus - per vcpu stats
* cpuset_cpus_switching - time-driven finish
* devices_access - RW and RO testing and QMP support
* freezer
* memory_limit
* memory_memsw_limit - Newly added
* memory_move

+ a lot of bugfixes.

See descriptions for details.
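
For illustration, a hypothetical local cfg override tweaking the
blkio_bandwidth variant (parameter names taken from the docstrings in
the diff below; the values are made up):

    # run three VMs with different blkio weights, 120s per direction
    cgroup_test_time = 120
    cgroup_weights = [100, 500, 1000]
    cgroup_limit = 0.15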

Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
---
 client/tests/cgroup/cgroup_common.py |   15 +-
 client/tests/kvm/tests/cgroup.py     | 3270 +++++++++++++++++-----------------
 client/virt/kvm_vm.py                |   10 +
 client/virt/subtests.cfg.sample      |   87 +-
 4 files changed, 1730 insertions(+), 1652 deletions(-)

diff --git a/client/tests/cgroup/cgroup_common.py b/client/tests/cgroup/cgroup_common.py
index 56856c0..90f1d2c 100755
--- a/client/tests/cgroup/cgroup_common.py
+++ b/client/tests/cgroup/cgroup_common.py
@@ -50,17 +50,18 @@ class Cgroup(object):
                                                                 % self.module)
 
 
-    def mk_cgroup(self, root=None):
+    def mk_cgroup(self, pwd=None):
         """
         Creates new temporary cgroup
         @param root: where to create this cgroup (default: self.root)
         @return: 0 when PASSED
         """
+        if pwd == None:
+            pwd = self.root
+        if isinstance(pwd, int):
+            pwd = self.cgroups[pwd]
         try:
-            if root:
-                pwd = mkdtemp(prefix='cgroup-', dir=root) + '/'
-            else:
-                pwd = mkdtemp(prefix='cgroup-', dir=self.root) + '/'
+            pwd = mkdtemp(prefix='cgroup-', dir=pwd) + '/'
         except Exception, inst:
             raise error.TestError("cg.mk_cgroup(): %s" % inst)
         self.cgroups.append(pwd)
@@ -122,12 +123,14 @@ class Cgroup(object):
         return self.is_cgroup(pid, self.root)
 
 
-    def set_cgroup(self, pid, pwd):
+    def set_cgroup(self, pid, pwd=None):
         """
         Sets cgroup membership
         @param pid: pid of the process
         @param pwd: cgroup directory
         """
+        if pwd == None:
+            pwd = self.root
         if isinstance(pwd, int):
             pwd = self.cgroups[pwd]
         try:
diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
index c01b3c1..e501881 100644
--- a/client/tests/kvm/tests/cgroup.py
+++ b/client/tests/kvm/tests/cgroup.py
@@ -3,47 +3,29 @@ cgroup autotest test (on KVM guest)
 @author: Lukas Doktor <ldoktor@redhat.com>
 @copyright: 2011 Red Hat, Inc.
 """
-import logging, os, re, sys, tempfile, time
+import logging
+import os
+import re
+import time
 from random import random
 from autotest_lib.client.common_lib import error
 from autotest_lib.client.bin import utils
-from autotest_lib.client.tests.cgroup.cgroup_common import Cgroup, CgroupModules
+from autotest_lib.client.tests.cgroup.cgroup_common import Cgroup
+from autotest_lib.client.tests.cgroup.cgroup_common import CgroupModules
 from autotest_lib.client.tests.cgroup.cgroup_common import get_load_per_cpu
+from autotest_lib.client.virt.virt_env_process import preprocess
+from autotest_lib.client.virt import kvm_monitor
 from autotest_lib.client.virt.aexpect import ExpectTimeoutError
 from autotest_lib.client.virt.aexpect import ExpectProcessTerminatedError
+from autotest_lib.client.virt.aexpect import ShellTimeoutError
 
 
+@error.context_aware
 def run_cgroup(test, params, env):
     """
     Tests the cgroup functions on KVM guests.
-     * Uses variable tests (marked by TODO comment) to map the subtests
     """
-    vms = None
-    tests = None
-
-
     # Func
-    def _check_vms(vms):
-        """
-        Checks if the VM is alive.
-        @param vms: list of vm's
-        """
-        err = ""
-        for i in range(len(vms)):
-            try:
-                vms[i].verify_alive()
-                vms[i].verify_kernel_crash()
-                vms[i].wait_for_login(timeout=30).close()
-            except Exception, failure_detail:
-                logging.error("_check_vms: %s", failure_detail)
-                logging.warn("recreate VM(%s)", i)
-                # The vm has to be recreated to reset the qemu PCI state
-                vms[i].create()
-                err += "%s, " % vms[i].name
-        if err:
-            raise error.TestFail("WM [%s] had to be recreated" % err[:-2])
-
-
     def assign_vm_into_cgroup(vm, cgroup, pwd=None):
         """
         Assigns all threads of VM into cgroup
@@ -52,10 +34,20 @@ def run_cgroup(test, params, env):
         @param pwd: desired cgroup's pwd, cgroup index or None for root cgroup
         """
         cgroup.set_cgroup(vm.get_shell_pid(), pwd)
-        for pid in utils.get_children_pids(vm.get_shell_pid()):
-            cgroup.set_cgroup(int(pid), pwd)
-
-
+        for i in range(10):
+            for pid in utils.get_children_pids(vm.get_shell_pid()):
+                try:
+                    cgroup.set_cgroup(int(pid), pwd)
+                except Exception, detail:   # Process might not exist anymore
+                    if os.path.exists("/proc/%s/" % pid):
+                        raise detail
+                    else:   # Thread doesn't exist, try it again
+                        break
+            else:   # All PIDs moved
+                break
+        else:
+            raise error.TestFail("Failed to move all VM threads to new cgroup"
+                                 " in %d trials" % i)
 
     def distance(actual, reference):
         """
@@ -64,16 +56,15 @@ def run_cgroup(test, params, env):
         @param reference: reference value
         @return: relative distance abs((a-r)/r) (float)
         """
-        return abs(float(actual-reference) / reference)
+        return abs(float(actual - reference) / reference)
 
-
-    def get_dd_cmd(direction, dev=None, count=None, bs=None):
+    def get_dd_cmd(direction, dev=None, count=None, blocksize=None):
         """
         Generates dd_cmd string
         @param direction: {read,write,bi} dd direction
         @param dev: used device ('vd?')
         @param count: count parameter of dd
-        @param bs: bs parameter of dd
+        @param blocksize: blocksize parameter of dd
         @return: dd command string
         """
         if dev is None:
@@ -87,15 +78,14 @@ def run_cgroup(test, params, env):
             params = "if=/dev/zero of=$FILE oflag=direct"
         else:
             params = "if=$FILE of=$FILE iflag=direct oflag=direct"
-        if bs:
-            params += " bs=%s" % (bs)
+        if blocksize:
+            params += " bs=%s" % (blocksize)
         if count:
             params += " count=%s" % (count)
-        return ("export FILE=$(ls /dev/%s | tail -n 1); touch /tmp/cgroup_lock "
-                "; while [ -e /tmp/cgroup_lock ]; do dd %s ; done"
+        return ("export FILE=$(ls /dev/%s | tail -n 1); touch /tmp/cgroup_lock"
+                " ; while [ -e /tmp/cgroup_lock ]; do dd %s ; done"
                 % (dev, params))
 
-
     def get_device_driver():
         """
         Discovers the used block device driver {ide, scsi, virtio_blk}
@@ -103,7 +93,6 @@ def run_cgroup(test, params, env):
         """
         return params.get('drive_format', 'virtio')
 
-
     def get_maj_min(dev):
         """
         Returns the major and minor numbers of the dev device
@@ -117,1759 +106,1756 @@ def run_cgroup(test, params, env):
                                   (dev, details))
         return ret
 
-
-    def add_file_drive(vm, driver=get_device_driver(), host_file=None):
+    def rm_scsi_disks(no_disks):
         """
-        Hot-add a drive based on file to a vm
-        @param vm: Desired VM
-        @param driver: which driver should be used (default: same as in test)
-        @param host_file: Which file on host is the image (default: create new)
-        @return: Tuple(ret_file, device)
-                    ret_file: created file handler (None if not created)
-                    device: PCI id of the virtual disk
+        Removes the last no_disks scsi_debug disks.
+        @param no_disks: How many disks to remove
+        @note: params['cgroup_rmmod_scsi_debug'] == "yes" => rmmod scsi_debug
         """
-        # TODO: Implement also via QMP
-        if not host_file:
-            host_file = tempfile.NamedTemporaryFile(prefix="cgroup-disk-",
-                                               suffix=".iso")
-            utils.system("dd if=/dev/zero of=%s bs=1M count=8 &>/dev/null"
-                         % (host_file.name))
-            ret_file = host_file
-            logging.debug("add_file_drive: new file %s as drive",
-                          host_file.name)
-        else:
-            ret_file = None
-            logging.debug("add_file_drive: using file %s as drive",
-                          host_file.name)
-
-        out = vm.monitor.cmd("pci_add auto storage file=%s,if=%s,snapshot=off,"
-                             "cache=off" % (host_file.name, driver))
-        dev = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), function \d+',
-                        out)
-        if not dev:
-            raise error.TestFail("Can't add device(%s, %s, %s): %s" % (vm.name,
-                                                host_file.name, driver, out))
-        device = "%02x:%02x" % (int(dev.group(2)), int(dev.group(3)))
-        time.sleep(3)
-        out = vm.monitor.info('qtree', debug=False)
-        if out.count('addr %s.0' % device) != 1:
-            raise error.TestFail("Can't add device(%s, %s, %s): device in qtree"
-                            ":\n%s" % (vm.name, host_file.name, driver, out))
-        return (ret_file, device)
-
-
-    def add_scsi_drive(vm, driver=get_device_driver(), host_file=None):
+        utils.system("echo -%d > /sys/bus/pseudo/drivers/scsi_debug/add_host"
+                     % no_disks)
+
+        if params.get('cgroup_rmmod_scsi_debug', "no") == "yes":
+            utils.system("rmmod scsi_debug")
+
+    def param_add_scsi_disks(prefix="scsi-debug-"):
         """
-        Hot-add a drive based on scsi_debug device to a vm
-        @param vm: Desired VM
-        @param driver: which driver should be used (default: same as in test)
-        @param host_file: Which dev on host is the image (default: create new)
-        @return: Tuple(ret_file, device)
-                    ret_file: string of the created dev (None if not created)
-                    device: PCI id of the virtual disk
+        Adds scsi_debug disk to every VM in params['vms']
+        @param prefix: adds prefix to drive name
         """
-        # TODO: Implement also via QMP
-        if not host_file:
-            if utils.system("lsmod | grep scsi_debug", ignore_status=True):
-                utils.system("modprobe scsi_debug dev_size_mb=8 add_host=0")
-            utils.system("echo 1 > /sys/bus/pseudo/drivers/scsi_debug/add_host")
+        if utils.system("lsmod | grep scsi_debug", ignore_status=True):
+            utils.system("modprobe scsi_debug dev_size_mb=8 add_host=0")
+        for name in params.get('vms').split(' '):
+            disk_name = prefix + name
+            utils.system("echo 1 >/sys/bus/pseudo/drivers/scsi_debug/add_host")
             time.sleep(1)   # Wait for device init
-            host_file = utils.system_output("ls /dev/sd* | tail -n 1")
+            dev = utils.system_output("ls /dev/sd* | tail -n 1")
             # Enable idling in scsi_debug drive
             utils.system("echo 1 > /sys/block/%s/queue/rotational"
-                         % (host_file.split('/')[-1]))
-            ret_file = host_file
-            logging.debug("add_scsi_drive: add %s device", host_file)
-        else:
-            # Don't remove this device during cleanup
-            # Reenable idling in scsi_debug drive (in case it's not)
-            utils.system("echo 1 > /sys/block/%s/queue/rotational"
-                         % (host_file.split('/')[-1]))
-            ret_file = None
-            logging.debug("add_scsi_drive: using %s device", host_file)
-
-        out = vm.monitor.cmd("pci_add auto storage file=%s,if=%s,snapshot=off,"
-                             "cache=off" % (host_file, driver))
-        dev = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), function \d+',
-                        out)
-        if not dev:
-            raise error.TestFail("Can't add device(%s, %s, %s): %s" % (vm.name,
-                                                        host_file, driver, out))
-        device = "%02x:%02x" % (int(dev.group(2)), int(dev.group(3)))
-        time.sleep(3)
-        out = vm.monitor.info('qtree', debug=False)
-        if out.count('addr %s.0' % device) != 1:
-            raise error.TestFail("Can't add device(%s, %s, %s): device in qtree"
-                            ":\n%s" % (vm.name, host_file.name, driver, out))
-        return (ret_file, device)
-
-
-    def rm_drive(vm, host_file, device):
+                         % (dev.split('/')[-1]))
+            vm_disks = params.get('images_%s' % name,
+                                  params.get('images', 'image1'))
+            params['images_%s' % name] = "%s %s" % (vm_disks, disk_name)
+            params['image_name_%s' % disk_name] = dev
+            params['image_snapshot_%s' % disk_name] = "no"
+            params['image_format_%s' % disk_name] = "raw"
+            params['remove_image_%s' % disk_name] = "no"
+            params['image_raw_device_%s' % disk_name] = "yes"
+
+    def param_add_file_disks(size, prefix="hd2-"):
         """
-        Remove drive from vm and device on disk
-        ! beware to remove scsi devices in reverse order !
+        Adds file disk to every VM in params['vms']
+        @param size: Disk size (1M)
+        @param prefix: adds prefix to drive name
         """
-        err = False
-        # TODO: Implement also via QMP
-        if device:
-            vm.monitor.cmd("pci_del %s" % device)
-            time.sleep(3)
-            qtree = vm.monitor.info('qtree', debug=False)
-            if qtree.count('addr %s.0' % device) != 0:
-                err = True
-                vm.destroy()
-
-        if host_file is None:   # Do not remove
-            pass
-        elif isinstance(host_file, str):    # scsi device
-            utils.system("echo -1> /sys/bus/pseudo/drivers/scsi_debug/add_host")
-        else:     # file
-            host_file.close()
-
-        if err:
-            logging.error("Cant del device(%s, %s, %s):\n%s", vm.name,
-                                                    host_file, device, qtree)
-
+        for name in params.get('vms').split(' '):
+            vm_disks = params.get('images_%s' % name,
+                               params.get('images', 'image1'))
+            disk_name = prefix + name
+            params['images_%s' % name] = "%s %s" % (vm_disks, disk_name)
+            params['image_size_%s' % disk_name] = size
+            params['image_name_%s' % disk_name] = disk_name
+            params['image_snapshot_%s' % disk_name] = "no"
+            params['force_create_image_%s' % disk_name] = "yes"
+            params['image_format_%s' % disk_name] = "raw"
+            params['create_with_dd_%s' % disk_name] = "yes"
+            params['remove_image_%s' % disk_name] = "yes"
+
+    def param_add_vms(no_vms):
+        """
+        Defines $no_vms in params
+        @param no_vms: Desired number of VMs
+        @note: All defined VMs are overwritten.
+        """
+        params['vms'] = ""
+        for i in range(no_vms):
+            params['vms'] += "vm%s " % i
+        params['vms'] = params['vms'][:-1]
 
     # Tests
-    class _TestBlkioBandwidth(object):
+    @error.context_aware
+    def blkio_bandwidth():
         """
-        BlkioBandwidth dummy test
-         * Use it as a base class to an actual test!
-         * self.dd_cmd and attr '_set_properties' have to be implemented
-         * It prepares 2 vms and run self.dd_cmd to simultaneously stress the
-            machines. After 1 minute it kills the dd and gathers the throughput
-            information.
+        Sets blkio.weight for each VM and measures the actual distribution
+        of read/write speeds.
+        @note: VMs are created in test
+        @param cfg: cgroup_test_time - test duration '60'
+        @param cfg: cgroup_weights - list of R/W weights '[100, 1000]'
+        @param cfg: cgroup_limit{ ,_read,_write} - allowed R/W threshold '0.1'
         """
-        def __init__(self, vms, modules):
+        def _test(direction):
             """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            self.vms = vms      # Virt machines
-            self.modules = modules          # cgroup module handler
-            self.blkio = Cgroup('blkio', '')    # cgroup blkio handler
-            self.files = []     # Temporary files (files of virt disks)
-            self.devices = []   # Temporary virt devices (PCI drive 1 per vm)
-            self.dd_cmd = None  # DD command used to test the throughput
-
-        def cleanup(self):
-            """
-            Cleanup
+            Executes a loop of dd commands, kills it after $test_time and
+            verifies the speeds using median.
+            @param direction: "read" / "write"
+            @return: "" on success or err message when fails
             """
+            out = []
+            # Initiate dd loop on all VMs (2 sessions per VM)
+            dd_cmd = get_dd_cmd(direction, blocksize="100K")
+            for i in range(no_vms):
+                sessions[i * 2].sendline(dd_cmd)
+            time.sleep(test_time)
+            for i in range(no_vms):
+                # Force stats in case no dd cmd finished
+                sessions[i * 2 + 1].sendline(stat_cmd)
+            for i in range(no_vms):
+                out.append(sessions[i * 2].read_until_output_matches(
+                                                                [re_dd])[1])
+            # Stop all transfers (on 2nd sessions)
+            for i in range(no_vms):
+                sessions[i * 2 + 1].sendline(kill_cmd)
+            # Read the rest of the stats
+            for i in range(no_vms):
+                out[-1] = out[-1] + sessions[i * 2].read_up_to_prompt(
+                                                      timeout=120 + test_time)
+
+            for i in range(no_vms):
+                # Get all dd loops' statistics
+                # calculate avg from duration and data
+                duration = 0
+                data = 0
+                if len(out[i]) > 5:
+                    out[i] = out[i][1:-1]
+                for _ in  re.findall(re_dd, out[i])[1:-1]:
+                    data += int(_[0])
+                    duration += float(_[1])
+                out[i] = int(data / duration)
+
+            # normalize each output according to cgroup_weights
+            # Calculate the averages from medians / weights
+            sum_out = float(sum(out))
+            sum_weights = float(sum(weights))
+            for i in range(len(weights)):
+                # [status, norm_weights, norm_out, actual]
+                out[i] = ['PASS', weights[i] / sum_weights, out[i] / sum_out,
+                          out[i]]
+
             err = ""
-            try:
-                for i in range(1, -1, -1):
-                    rm_drive(vms[i], self.files[i], self.devices[i])
-            except Exception, failure_detail:
-                err += "\nCan't remove PCI drive: %s" % failure_detail
-            try:
-                del(self.blkio)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
+            limit = float(params.get('cgroup_limit_%s' % direction,
+                                     params.get('cgroup_limit', 0.1)))
+            # if any of norm_output doesn't ~ match norm_weights, log it.
+            for i in range(len(out)):
+                if (out[i][2] > (out[i][1] + limit)
+                        or out[i][2] < (out[i][1] - limit)):
+                    out[i][0] = 'FAIL'
+                    err += "%d, " % i
+
+            logging.info("blkio_bandwidth_%s: dd statistics\n%s", direction,
+                         utils.matrix_to_string(out, ['status', 'norm_weights',
+                                'norm_out', 'actual']))
 
             if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s"
-                                      % err)
+                err = ("blkio_bandwidth_%s: limits [%s] were broken"
+                                                    % (direction, err[:-2]))
+                logging.debug(err)
+                return err + '\n'
+            return ""
 
-        def init(self):
-            """
-            Initialization
-             * assigns vm1 and vm2 into cgroups and sets the properties
-             * creates a new virtio device and adds it into vms
-            """
-            if get_device_driver() != 'virtio':
-                logging.warn("The main disk for this VM is non-virtio, keep in "
-                             "mind that this particular subtest will add a new "
-                             "virtio_blk disk to it")
-            if self.dd_cmd is None:
-                raise error.TestError("Corrupt class, aren't you trying to run "
-                                      "parent _TestBlkioBandwidth() function?")
-            if len(self.vms) < 2:
-                raise error.TestError("Test needs at least 2 vms.")
-
-            # cgroups
-            pwd = []
-            blkio = self.blkio
-            blkio.initialize(self.modules)
-            for i in range(2):
-                pwd.append(blkio.mk_cgroup())
-                assign_vm_into_cgroup(self.vms[i], blkio, pwd[i])
-            self.blkio.set_property("blkio.weight", 100, pwd[0])
-            self.blkio.set_property("blkio.weight", 1000, pwd[1])
-
-            for i in range(2):
-                (host_file, device) = add_file_drive(vms[i], "virtio")
-                self.files.append(host_file)
-                self.devices.append(device)
-
-        def run(self):
-            """
-            Actual test:
-             * executes self.dd_cmd in a loop simultaneously on both vms and
-               gather the throughputs. After 1m finish and calculate the avg.
-            """
-            sessions = []
-            out = []
-            sessions.append(vms[0].wait_for_login(timeout=30))
-            sessions.append(vms[1].wait_for_login(timeout=30))
-            sessions.append(vms[0].wait_for_login(timeout=30))
-            sessions.append(vms[1].wait_for_login(timeout=30))
-            sessions[0].sendline(self.dd_cmd)
-            sessions[1].sendline(self.dd_cmd)
-            time.sleep(60)
-
-            # Stop the dd loop and kill all remaining dds
-            cmd = "rm -f /tmp/cgroup_lock; killall -9 dd"
-            sessions[2].sendline(cmd)
-            sessions[3].sendline(cmd)
-            re_dd = (r'(\d+) bytes \(\d+\.*\d* \w*\) copied, (\d+\.*\d*) s, '
-                      '\d+\.*\d* \w./s')
-            out = []
-            for i in range(2):
-                out.append(sessions[i].read_up_to_prompt())
-                out[i] = [int(_[0])/float(_[1])
-                            for _ in re.findall(re_dd, out[i])[1:-1]]
-                logging.debug("dd(%d) output: %s", i, out[i])
-                out[i] = [min(out[i]), sum(out[i])/len(out[i]), max(out[i]),
-                          len(out[i])]
+        error.context("Init")
+        try:
+            weights = eval(params.get('cgroup_weights', "[100, 1000]"))
+            if type(weights) is not list:
+                raise TypeError
+        except TypeError:
+            raise error.TestError("Incorrect configuration: param "
+                        "cgroup_weights have to be list-like string '[1, 2]'")
+        test_time = int(params.get("cgroup_test_time", 60))
+        error.context("Prepare VMs")
+        # Prepare enough VMs each with 1 disk for testing
+        no_vms = len(weights)
+        param_add_vms(no_vms)
+        param_add_file_disks("1M")
+        preprocess(test, params, env)
+
+        vms = []
+        sessions = []   # 2 sessions per VM
+        timeout = int(params.get("login_timeout", 360))
+        for name in params['vms'].split():
+            vms.append(env.get_vm(name))
+            sessions.append(vms[-1].wait_for_login(timeout=timeout))
+            sessions.append(vms[-1].wait_for_login(timeout=30))
+
+        error.context("Setup test")
+        modules = CgroupModules()
+        if (modules.init(['blkio']) != 1):
+            raise error.TestFail("Can't mount blkio cgroup modules")
+        blkio = Cgroup('blkio', '')
+        blkio.initialize(modules)
+        for i in range(no_vms):
+            blkio.mk_cgroup()
+            assign_vm_into_cgroup(vms[i], blkio, i)
+            blkio.set_property("blkio.weight", weights[i], i)
+
+        # Fails only when the session is occupied (Timeout)
+        # ; true is necessary when there is no dd present at the time
+        kill_cmd = "rm -f /tmp/cgroup_lock; killall -9 dd; true"
+        stat_cmd = "killall -SIGUSR1 dd; true"
+        re_dd = (r'(\d+) bytes \(\d+\.*\d* \w*\) copied, (\d+\.*\d*) s, '
+                  '\d+\.*\d* \w./s')
+        err = ""
+        try:
+            error.context("Read test")
+            err += _test("read")
+            # verify sessions between tests
+            for session in sessions:
+                session.cmd("true")
+            error.context("Write test")
+            err += _test("write")
+            if err:
+                logging.error("Results:\n" + err)
+            else:
+                logging.info("Speeds distributed accordingly to blkio.weight.")
 
+        finally:
+            error.context("Cleanup")
+            for i in range(no_vms):
+                # stop all workers
+                sessions[i * 2 + 1].sendline(kill_cmd)
             for session in sessions:
+                # try whether all sessions are clean
+                session.cmd("true")
                 session.close()
 
-            logging.debug("dd values (min, avg, max, ddloops):\nout1: %s\nout2:"
-                          " %s", out[0], out[1])
-
-            out1 = out[0][1]
-            out2 = out[1][1]
-            # Cgroup are limitting weights of guests 100:1000. On bare mettal it
-            # works in virtio_blk we are satisfied with the ratio 1:3.
-            if out1 == 0:
-                raise error.TestFail("No data transfered: %d:%d (1:10)" %
-                                      (out1, out2))
-            if out1*3  > out2:
-                raise error.TestFail("dd values: %d:%d (1:%.2f), limit 1:3"
-                                     ", theoretical: 1:10"
-                                     % (out1, out2, out2/out1))
-            else:
-                logging.info("dd values: %d:%d (1:%.2f)", out1, out2, out2/out1)
-            return "dd values: %d:%d (1:%.2f)" % (out1, out2, out2/out1)
+            del(blkio)
+            del(modules)
 
+            for i in range(len(vms)):
+                vms[i].destroy()
 
+        error.context("Results")
+        if err:
+            raise error.TestFail(err)
+        else:
+            return "Speeds distributed accordingly to blkio.weight."
 
-    class TestBlkioBandwidthWeigthRead(_TestBlkioBandwidth):
+    @error.context_aware
+    def blkio_throttle():
         """
-        Tests the blkio.weight capability using simultaneous read on 2 vms
+        Tests the blkio.throttle.{read,write}_bps_device cgroup capability.
+        It sets speeds according to the current scenario and lets it run for
+        $test_time seconds. Afterwards it verifies whether the speeds match.
+        @note: VMs are created in test
+        @note: Uses scsi_debug disks
+        @param cfg: cgroup_test_time - test duration '60'
+        @param cfg: cgroup_limit{ ,_read,_write} - allowed R/W threshold '0.1'
+        @param cfg: cgroup_speeds list of simultaneous speeds
+                    [speed1, speed2,..] '[1024]'
         """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            super(TestBlkioBandwidthWeigthRead, self).__init__(vms, modules)
-            # Read from the last vd* in a loop until test removes the
-            # /tmp/cgroup_lock file (and kills us)
-            self.dd_cmd = get_dd_cmd("read", dev='vd?', bs="100K")
-
-
-    class TestBlkioBandwidthWeigthWrite(_TestBlkioBandwidth):
+        error.context("Init")
+        try:
+            speeds = eval(params.get('cgroup_speeds', "[1024]"))
+            if type(speeds) is not list:
+                raise TypeError
+        except TypeError:
+            raise error.TestError("Incorrect configuration: param "
+                                  "cgroup_speeds have to be list of strings"
+                                  "eg. [1024] or [1024,2048,8192].")
+
+        # Make param suitable for multitest and execute it.
+        return blkio_throttle_multi([[_] for _ in speeds])
+
+    @error.context_aware
+    def blkio_throttle_multi(speeds=None):
         """
-        Tests the blkio.weight capability using simultaneous write on 2 vms
+        Tests the blkio.throttle.{read,write}_bps_device cgroup capability.
+        It sets speeds according to the current scenario and lets it run for
+        $test_time seconds. Afterwards it verifies whether the speeds match.
+        All scenarios have to have the same number of speeds (= no_vms).
+        @note: VMs are created in test
+        @note: Uses scsi_debug disks
+        @param cfg: cgroup_test_time - test duration '60'
+        @param cfg: cgroup_limit{ ,_read,_write} - allowed R/W threshold '0.1'
+        @param cfg: cgroup_speeds - list of lists defining the speeds of
+                    each VM, [[vm1_speed1, vm1_speed2, ...], [vm2_speed1, ...]].
+                    '[[1024,0,2048,0,8192]]'
         """
-        def __init__(self, vms, modules):
+        def _test(direction, blkio):
             """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
+            Executes loop of small dd transfers changes cgroups and measures
+            speeds.
+            @param direction: "read" / "write"
+            @return: "" on success or err message when fails
             """
-            # Write on the last vd* in a loop until test removes the
-            # /tmp/cgroup_lock file (and kills us)
-            super(TestBlkioBandwidthWeigthWrite, self).__init__(vms, modules)
-            self.dd_cmd = get_dd_cmd("write", dev='vd?', bs="100K")
-
+            # Test
+            dd_cmd = get_dd_cmd(direction)
+            limit = float(params.get('cgroup_limit_%s' % direction,
+                                     params.get('cgroup_limit', 0.1)))
+            # every scenario has a list of results [[], [], []]
+            out = []
+            # every VM has one output []
+            for i in range(no_vms):
+                out.append([])
+                sessions[i * 2].sendline(dd_cmd)
+            for j in range(no_speeds):
+                _ = ""
+                for i in range(no_vms):
+                    # assign all VMs to current scenario cgroup
+                    assign_vm_into_cgroup(vms[i], blkio, i * no_speeds + j)
+                    _ += "vm%d:%d, " % (i, speeds[i][j])
+                logging.debug("blkio_throttle_%s: Current speeds: %s",
+                             direction, _[:-2])
+                time.sleep(test_time)
+                # Read stats
+                for i in range(no_vms):
+                    # Force stats in case no dd cmd finished
+                    sessions[i * 2 + 1].sendline(stat_cmd)
+                for i in range(no_vms):
+                    out[i].append(sessions[i * 2].read_until_output_matches(
+                                                                [re_dd])[1])
+                # Stop all transfers (on 2nd sessions)
+                for i in range(no_vms):
+                    sessions[i * 2 + 1].sendline(kill_cmd)
+                # Read the rest of the stats
+                for i in range(no_vms):
+                    out[i][-1] = (out[i][-1] +
+                                    sessions[i * 2].read_up_to_prompt(
+                                                      timeout=120 + test_time))
+                # Restart all transfers (on 1st sessions)
+                for i in range(no_vms):
+                    sessions[i * 2].sendline(dd_cmd)
+
+            # bash needs some time...
+            time.sleep(1)
+            for i in range(no_vms):
+                sessions[i * 2 + 1].sendline(kill_cmd)
 
-    class _TestBlkioThrottle(object):
-        """
-        BlkioThrottle dummy test
-         * Use it as a base class to an actual test!
-         * self.dd_cmd and throughputs have to be implemented
-         * It prepares a vm and runs self.dd_cmd. Always after 1 minute switches
-           the cgroup. At the end verifies, that the throughputs matches the
-           theoretical values.
-        """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            self.vm = vms[0]    # Virt machines
-            self.modules = modules  # cgroup module handler
-            self.cgroup = Cgroup('blkio', '')   # cgroup blkio handler
-            self.cgroups = []   # List of cgroups directories
-            self.files = None   # Temporary files (files of virt disks)
-            self.devices = None # Temporary virt devices (PCI drive 1 per vm)
-            self.dd_cmd = None  # DD command used to test the throughput
-            self.speeds = None  # cgroup throughput
-
-        def cleanup(self):
-            """
-            Cleanup
-            """
+            # Verification
             err = ""
-            try:
-                rm_drive(self.vm, self.files, self.devices)
-            except Exception, failure_detail:
-                err += "\nCan't remove PCI drive: %s" % failure_detail
-            try:
-                del(self.cgroup)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
-
+            # [PASS/FAIL, iteration, vm, speed, actual]
+            output = []
+            for j in range(len(out[i])):
+                for i in range(no_vms):
+                    # calculate avg from duration and data
+                    duration = 0
+                    data = 0
+                    for _ in  re.findall(re_dd, out[i][j]):
+                        data += int(_[0])
+                        duration += float(_[1])
+                    output.append(['PASS', j, 'vm%d' % i, speeds[i][j],
+                                   int(data / duration)])
+                    # Don't measure unlimited speeds
+                    if (speeds[i][j] == 0):
+                        output[-1][0] = "INF"
+                        output[-1][3] = "(inf)"
+                    elif distance(output[-1][4], speeds[i][j]) > limit:
+                        err += "vm%d:%d, " % (i, j)
+                        output[-1][0] = "FAIL"
+
+            # TODO: Unlimited speed fluctuates during test
+            logging.info("blkio_throttle_%s: dd statistics\n%s", direction,
+                         utils.matrix_to_string(output, ['result', 'it',
+                            'vm', 'speed', 'actual']))
             if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s"
-                                      % err)
-
-        def init(self):
-            """
-            Initialization
-             * creates a new virtio device and adds it into vm
-             * creates a cgroup for each throughput
-            """
-            if (self.dd_cmd is None) or (self.speeds) is None:
-                raise error.TestError("Corrupt class, aren't you trying to run "
-                                      "parent _TestBlkioThrottle() function?")
-
-            if get_device_driver() == "ide":
-                logging.warn("The main disk for this VM is ide wich doesn't "
-                             "support hot-plug. Using virtio_blk instead")
-                (self.files, self.devices) = add_scsi_drive(self.vm,
-                                                            driver="virtio")
-            else:
-                (self.files, self.devices) = add_scsi_drive(self.vm)
-            time.sleep(3)
-            dev = get_maj_min(self.files)
-
-            cgroup = self.cgroup
-            cgroup.initialize(self.modules)
-            for i in range(len(self.speeds)):
-                speed = self.speeds[i]
-                self.cgroups.append(cgroup.mk_cgroup())
+                err = ("blkio_throttle_%s: limits [%s] were broken"
+                                                    % (direction, err[:-2]))
+                logging.debug(err)
+                return err + '\n'
+            return ""
+
+        error.context("Init")
+        no_speeds = 0
+        if speeds:  # blkio_throttle
+            no_speeds = len(speeds[0])
+        else:   # blkio_throttle_multi
+            try:
+                speeds = eval(params.get('cgroup_speeds',
+                                         "[[1024,0,2048,0,8192]]"))
+                if type(speeds) is not list:
+                    raise TypeError
+                if type(speeds[0]) is not list:
+                    logging.warn("cgroup_speeds have to be listOfLists")
+                    speeds = [speeds]
+                no_speeds = len(speeds[0])
+                for speed in speeds:
+                    if type(speed) is not list:
+                        logging.error("One of cgroup_speeds sublists is not "
+                                      "list")
+                        raise TypeError
+                    if len(speed) != no_speeds:
+                        logging.error("cgroup_speeds sublists have different "
+                                      "lengths")
+                        raise TypeError
+            except TypeError:
+                raise error.TestError("Incorrect configuration: param "
+                                      "cgroup_speeds have to be listOfList-"
+                                      "like string with same lengths. "
+                                      "([[1024]] or [[0,1024],[1024,2048]])")
+        # Minimum testing time is 30s (dd must copy few blocks)
+        test_time = max(int(params.get("cgroup_test_time", 60)) / no_speeds,
+                        30)
+
+        error.context("Prepare VMs")
+        # create enough of VMs with scsi_debug attached disks
+        no_vms = len(speeds)
+        param_add_vms(no_vms)
+        param_add_scsi_disks()
+        preprocess(test, params, env)
+
+        vms = []
+        sessions = []
+        timeout = int(params.get("login_timeout", 360))
+        # 2 sessions per VM
+        for name in params['vms'].split():
+            vms.append(env.get_vm(name))
+            sessions.append(vms[-1].wait_for_login(timeout=timeout))
+            sessions.append(vms[-1].wait_for_login(timeout=30))
+
+        error.context("Setup test")
+        modules = CgroupModules()
+        if (modules.init(['blkio']) != 1):
+            raise error.TestFail("Can't mount blkio cgroup modules")
+        blkio = Cgroup('blkio', '')
+        blkio.initialize(modules)
+        for i in range(no_vms):
+            # Set speeds for each scsi_debug device for each VM
+            dev = get_maj_min(params['image_name_scsi-debug-%s' % vms[i].name])
+            for j in range(no_speeds):
+                speed = speeds[i][j]
+                blkio.mk_cgroup()
                 if speed == 0:  # Disable limit (removes the limit)
-                    cgroup.set_property("blkio.throttle.write_bps_device",
-                                        "%s:%s %s" % (dev[0], dev[1], speed),
-                                        check="")
-                    cgroup.set_property("blkio.throttle.read_bps_device",
-                                        "%s:%s %s" % (dev[0], dev[1], speed),
-                                        check="")
-                else:       # Enable limit (input separator ' ', output '\t')
-                    cgroup.set_property("blkio.throttle.write_bps_device",
-                                        "%s:%s %s" % (dev[0], dev[1], speed),
-                                        self.cgroups[i], check="%s:%s\t%s"
+                    blkio.set_property("blkio.throttle.write_bps_device",
+                                       "%s:%s %s" % (dev[0], dev[1], speed),
+                                       i * no_speeds + j, check="")
+                    blkio.set_property("blkio.throttle.read_bps_device",
+                                       "%s:%s %s" % (dev[0], dev[1], speed),
+                                       i * no_speeds + j, check="")
+                else:   # Enable limit (input separator ' ', output '\t')
+                    blkio.set_property("blkio.throttle.write_bps_device",
+                                       "%s:%s %s" % (dev[0], dev[1], speed),
+                                       i * no_speeds + j, check="%s:%s\t%s"
                                                     % (dev[0], dev[1], speed))
-                    cgroup.set_property("blkio.throttle.read_bps_device",
-                                        "%s:%s %s" % (dev[0], dev[1], speed),
-                                        self.cgroups[i], check="%s:%s\t%s"
+                    blkio.set_property("blkio.throttle.read_bps_device",
+                                       "%s:%s %s" % (dev[0], dev[1], speed),
+                                       i * no_speeds + j, check="%s:%s\t%s"
                                                     % (dev[0], dev[1], speed))
 
-        def run(self):
-            """
-            Actual test:
-             * executes self.dd_cmd in vm while limiting it's throughput using
-               different cgroups (or in a special case only one). At the end
-               it verifies the throughputs.
-            """
-            out = []
-            sessions = []
-            sessions.append(self.vm.wait_for_login(timeout=30))
-            sessions.append(self.vm.wait_for_login(timeout=30))
-            sessions[0].sendline(self.dd_cmd)
-            for i in range(len(self.cgroups)):
-                logging.info("Limiting speed to: %s", (self.speeds[i]))
-                # Assign all threads of vm
-                assign_vm_into_cgroup(self.vm, self.cgroup, self.cgroups[i])
-
-                # Standard test-time is 60s. If the slice time is less than 30s,
-                # test-time is prolonged to 30s per slice.
-                time.sleep(max(60/len(self.speeds), 30))
-                sessions[1].sendline("rm -f /tmp/cgroup_lock; killall -9 dd")
-                out.append(sessions[0].read_up_to_prompt())
-                sessions[0].sendline(self.dd_cmd)
-                time.sleep(random()*0.05)
-
-            sessions[1].sendline("rm -f /tmp/cgroup_lock; killall -9 dd")
-            # Verification
-            re_dd = (r'(\d+) bytes \(\d+\.*\d* \w*\) copied, (\d+\.*\d*) s, '
-                      '\d+\.*\d* \w./s')
-            err = []
-            for i in range(len(out)):
-                out[i] = [int(int(_[0])/float(_[1]))
-                              for _ in re.findall(re_dd, out[i])]
-                if not out[i]:
-                    raise error.TestFail("Not enough samples; please increase"
-                                         "throughput speed or testing time;"
-                                         "\nsamples: %s" % (out[i]))
-                # First samples might be corrupted, use only last sample when
-                # not enough data. (which are already an avg of 3xBS)
-                warn = False
-                if len(out[i]) < 3:
-                    warn = True
-                    out[i] = [out[i][-1]]
-                count = len(out[i])
-                out[i].sort()
-                # out = [min, med, max, number_of_samples]
-                out[i] = [out[i][0], out[i][count/2], out[i][-1], count]
-                if warn:
-                    logging.warn("Not enough samples, using the last one (%s)",
-                                 out[i])
-                if ((self.speeds[i] != 0) and
-                        (distance(out[i][1], self.speeds[i]) > 0.1)):
-                    logging.error("The throughput didn't match the requirements"
-                                  "(%s !~ %s)", out[i], self.speeds[i])
-                    err.append(i)
-
-            if self.speeds.count(0) > 1:
-                unlimited = []
-                for i in range(len(self.speeds)):
-                    if self.speeds[i] == 0:
-                        unlimited.append(out[i][1])
-                        self.speeds[i] = "(inf)"
-
-                avg = sum(unlimited) / len(unlimited)
-                if avg == 0:
-                    logging.warn("Average unlimited speed is 0 (%s)", out)
-                else:
-                    for speed in unlimited:
-                        if distance(speed, avg) > 0.1:
-                            logging.warning("Unlimited speeds variates during "
-                                            "the test: %s", unlimited)
-                            break
-
+        # ; true is necessary when there is no dd present at the time
+        kill_cmd = "rm -f /tmp/cgroup_lock; killall -9 dd; true"
+        stat_cmd = "killall -SIGUSR1 dd; true"
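+        # dd reports e.g. "1048576 bytes (1.0 MB) copied, 1.0423 s, 1.0 MB/s";
+        # re_dd captures the byte count and the elapsed seconds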
+        re_dd = (r'(\d+) bytes \(\d+\.*\d* \w*\) copied, (\d+\.*\d*) s, '
+                 r'\d+\.*\d* \w./s')
+        err = ""
+        try:
+            error.context("Read test")
+            err += _test("read", blkio)
+            # verify sessions between tests
+            for session in sessions:
+                session.cmd("true")
+            error.context("Write test")
+            err += _test("write", blkio)
 
-            out_speeds = ["%s ~ %s" % (out[i][1], self.speeds[i])
-                                        for i in range(len(self.speeds))]
             if err:
-                if len(out) == 1:
-                    raise error.TestFail("Actual throughput: %s, theoretical: "
-                                         "%s" % (out[0][1], self.speeds[0]))
-                elif len(err) == len(out):
-                    raise error.TestFail("All throughput limits were broken "
-                                         "(%s)" % (out_speeds))
-                else:
-                    raise error.TestFail("Limits (%s) were broken (%s)"
-                                         % (err, out_speeds))
+                logging.error("Results\n" + err)
 
-            return ("All throughputs matched their limits (%s)" % out_speeds)
+        finally:
+            error.context("Cleanup")
+            for i in range(no_vms):
+                # stop all workers
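+                # sessions come in pairs per VM; the odd ones serve for
+                # control commands while the even ones run dd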
+                sessions[i * 2 + 1].sendline(kill_cmd)
 
+            del(blkio)
+            del(modules)
 
-    class TestBlkioThrottleRead(_TestBlkioThrottle):
-        """ Tests the blkio.throttle.read_bps_device """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            super(TestBlkioThrottleRead, self).__init__(vms, modules)
-            self.dd_cmd = get_dd_cmd("read", count=1)
-            self.speeds = [1024]
+            for session in sessions:
+                # try whether all sessions are clean
+                session.cmd("true")
+                session.close()
 
+            for i in range(len(vms)):
+                vms[i].destroy()
 
-    class TestBlkioThrottleWrite(_TestBlkioThrottle):
-        """ Tests the blkio.throttle.write_bps_device """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            super(TestBlkioThrottleWrite, self).__init__(vms, modules)
-            self.dd_cmd = get_dd_cmd("write", count=1)
-            self.speeds = [1024]
+            rm_scsi_disks(no_vms)
 
+        error.context("Results")
+        if err:
+            raise error.TestFail(err)
+        else:
+            return "Throughputs matched the prescriptions."
 
-    class TestBlkioThrottleMultipleRead(_TestBlkioThrottle):
+    @error.context_aware
+    def cpu_cfs_util():
         """
-        Tests the blkio.throttle.read_bps_device while switching multiple
-        cgroups with different speeds.
+        Tests cfs scheduler utilisation when cfs_period_us and cfs_quota_us
+        are set for each virtual CPU with multiple VMs.
+        Each VM has twice as many vCPUs as the previously created one
+        (1, 2, 4, 8, ...) up to a twofold overcommit of the physical CPUs.
+        cfs quotas are set to 1/2, thus the VMs together should consume
+        exactly 100%. It measures the difference.
+        @note: VMs are created in test
+        @param cfg: cgroup_test_time - test duration '60'
+        @param cfg: cgroup_limit - allowed threshold '0.05' (5%)
         """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            super(TestBlkioThrottleMultipleRead, self).__init__(vms, modules)
-            self.dd_cmd = get_dd_cmd("read", count=1)
-            self.speeds = [0, 1024, 0, 2048, 0, 4096]
+        error.context("Setup test")
+        modules = CgroupModules()
+        if (modules.init(['cpu']) != 1):
+            raise error.TestFail("Can't mount cpu cgroup modules")
+        cgroup = Cgroup('cpu', '')
+        cgroup.initialize(modules)
+        host_cpus = open('/proc/cpuinfo').read().count('model name')
+
+        error.context("Prepare VMs")
+        vms = []
+        sessions = []
+        serials = []
+        timeout = 1.5 * int(params.get("login_timeout", 360))
+        # First one
+        vms.append(env.get_all_vms()[0])
+        cpu_pids = vms[0].get_vcpu_pids()
+        smp = len(cpu_pids)
+        cgroup.mk_cgroup()
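+        # the VM's main cgroup gets the quota for ALL vcpus (50% each)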
+        cgroup.set_property("cpu.cfs_period_us", 100000, 0)
+        cgroup.set_property("cpu.cfs_quota_us", 50000 * smp, 0)
+        assign_vm_into_cgroup(vms[0], cgroup, 0)
+        for j in range(smp):
+            cgroup.mk_cgroup(0)
+            cgroup.set_property("cpu.cfs_period_us", 100000, -1)
+            cgroup.set_property("cpu.cfs_quota_us", 50000, -1)
+            cgroup.set_cgroup(cpu_pids[j], -1)
+            sessions.append(vms[0].wait_for_login(timeout=timeout))
+        serials.append(vms[0].wait_for_serial_login(timeout=30))
+        serials[0].cmd("touch /tmp/cgroup-cpu-lock")
+        vm_cpus = smp
+
+        # Clone the first one with different 'smp' setting
+        _params = params.copy()
+        i = 1
+        while vm_cpus < 2 * host_cpus:
+            vm_name = "clone%d" % i
+            smp = min(2 * smp, 2 * host_cpus - vm_cpus)
+            _params['smp'] = smp
+            vms.append(vms[0].clone(vm_name, _params))
+            env.register_vm(vm_name, vms[-1])
+            vms[-1].create()
+            pwd = cgroup.mk_cgroup()
+            cgroup.set_property("cpu.cfs_period_us", 100000, -1)
+            # Total quota is for ALL vCPUs
+            cgroup.set_property("cpu.cfs_quota_us", 50000 * smp, -1)
+            assign_vm_into_cgroup(vms[-1], cgroup, -1)
+            cpu_pids = vms[-1].get_vcpu_pids()
+            for j in range(smp):
+                cgroup.mk_cgroup(pwd)
+                cgroup.set_property("cpu.cfs_period_us", 100000, -1)
+                # Quota for current vcpu
+                cgroup.set_property("cpu.cfs_quota_us", 50000, -1)
+                cgroup.set_cgroup(cpu_pids[j], -1)
+                sessions.append(vms[-1].wait_for_login(timeout=timeout))
+            serials.append(vms[-1].wait_for_serial_login(timeout=30))
+            serials[-1].cmd("touch /tmp/cgroup-cpu-lock")
+            vm_cpus += smp
+            i += 1
 
+        cmd = "renice -n 10 $$; "
+        cmd += "while [ -e /tmp/cgroup-cpu-lock ] ; do :; done"
+        kill_cmd = 'rm -f /tmp/cgroup-cpu-lock'
 
-    class TestBlkioThrottleMultipleWrite(_TestBlkioThrottle):
-        """
-        Tests the blkio.throttle.write_bps_device while switching multiple
-        cgroups with different speeds.
-        """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            super(TestBlkioThrottleMultipleWrite, self).__init__(vms, modules)
-            self.dd_cmd = get_dd_cmd("write", count=1)
-            self.speeds = [0, 1024, 0, 2048, 0, 4096]
+        stats = []
+        # test_time is 1s stabilization, 1s first measurement, 9s second and
+        # the rest of cgroup_test_time as the 3rd measurement.
+        test_time = max(1, int(params.get('cgroup_test_time', 60)) - 11)
+        err = ""
+        try:
+            error.context("Test")
+            for session in sessions:
+                session.sendline(cmd)
+
+            time.sleep(1)
+            stats.append(open('/proc/stat', 'r').readline())
+            time.sleep(1)
+            stats.append(open('/proc/stat', 'r').readline())
+            time.sleep(9)
+            stats.append(open('/proc/stat', 'r').readline())
+            time.sleep(test_time)
+            stats.append(open('/proc/stat', 'r').readline())
+            for session in serials:
+                session.sendline('rm -f /tmp/cgroup-cpu-lock')
+
+            # /proc/stat first line is the cumulative CPU usage; after the
+            # 'cpu' label, fields 1-8 are host times and fields 9-10 are
+            # guest times (older kernels expose only field 9)
+            error.context("Verification")
+            # Start of the test (time 0)
+            stats[0] = [int(_) for _ in stats[0].split()[1:]]
+            stats[0] = [sum(stats[0][0:8]), sum(stats[0][8:])]
+            # Calculate relative stats from time 0
+            for i in range(1, len(stats)):
+                stats[i] = [int(_) for _ in stats[i].split()[1:]]
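+                # ratio of guest time to overall host time since time 0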
+                try:
+                    stats[i] = (float(sum(stats[i][8:]) - stats[0][1]) /
+                                        (sum(stats[i][0:8]) - stats[0][0]))
+                except ZeroDivisionError:
+                    logging.error("ZeroDivisionError in stats calculation")
+                    stats[i] = False
+
+            limit = 1 - float(params.get("cgroup_limit", 0.05))
+            for i in range(1, len(stats)):
+                # Utilisation should be 100% minus the allowed threshold
+                if stats[i] < limit:
+                    err += "%d, " % i
+                    logging.error("%d: guest time is not > %s (%s)", i, limit,
+                                  stats[i])
+
+            if err:
+                err = "Guest times are not > %s: %s" % (limit, stats[1:])
+                logging.error(err)
+            else:
+                logging.info("Guest times are over %s: %s", limit, stats[1:])
+
+        finally:
+            error.context("Cleanup")
+            del(cgroup)
+            del(modules)
+
+            for i in range(len(serials)):
+                # stop all workers
+                serials[i].sendline(kill_cmd)
+            for session in sessions:
+                # try whether all sessions are clean
+                session.cmd("true")
+                session.close()
 
+            for i in range(1, len(vms)):
+                vms[i].destroy()
+
+        error.context("Results")
+        if err:
+            raise error.TestFail(err)
+        else:
+            return "Guest times are over %s%%: %s" % (limit, stats[1:])
 
-    class TestDevicesAccess:
+    @error.context_aware
+    def cpu_share():
         """
-        It tries to attach scsi_debug disk with different cgroup devices.list
-        setting.
-         * self.permissions are defined as a list of dictionaries:
-           {'property': control property, 'value': permition value,
-            'check_value': check value (from devices.list property),
-            'read_results': excepced read results T/F,
-            'write_results': expected write results T/F}
+        Sets cpu.shares for different VMs and measures the actual
+        utilisation distribution over the physical CPUs.
+        @param cfg: cgroup_test_time - test duration '60'
+        @param cfg: smp - number of vcpus per VM; when smp <= 0,
+                    smp = no_host_cpus
+        @param cfg: cgroup_speeds - list of speeds of each VM [vm0, vm1, ..].
+                    The list is sorted in the test! '[10000, 100000]'
         """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            self.vm = vms[0]      # Virt machines
-            self.modules = modules          # cgroup module handler
-            self.cgroup = Cgroup('devices', '')   # cgroup blkio handler
-            self.files = None   # Temporary files (files of virt disks)
-            self.devices = None # Temporary virt devices
-            self.permissions = None  # Test dictionary, see init for details
+        def _get_stat(f_stats, _stats=None):
+            """ Reads CPU times from f_stats[] files and sumarize them. """
+            if _stats is None:
+                _stats = []
+                for i in range(len(f_stats)):
+                    _stats.append(0)
+            stats = []
+            for i in range(len(f_stats)):
+                f_stats[i].seek(0)
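+                # fields 14-17 of /proc/<pid>/stat are utime, stime,
+                # cutime and cstime (in clock ticks)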
+                stats.append(f_stats[i].read().split()[13:17])
+                stats[i] = sum([int(_) for _ in stats[i]]) - _stats[i]
+            return stats
 
+        error.context("Init")
+        try:
+            speeds = eval(params.get('cgroup_speeds', '[10000, 100000]'))
+            if type(speeds) is not list:
+                raise TypeError
+        except TypeError:
+            raise error.TestError("Incorrect configuration: param "
+                        "cgroup_speeds has to be a list-like string '[1, 2]'")
+        # sort the speeds as documented above
+        speeds.sort()
+
+        host_cpus = open('/proc/cpuinfo').read().count('model name')
+        # when smp <= 0 use smp = no_host_cpus
+        vm_cpus = int(params.get('smp', 0))     # cpus per VM
+        if vm_cpus <= 0:        # smp = no_host_cpu
+            params['smp'] = host_cpus
+            vm_cpus = host_cpus
+        no_speeds = len(speeds)
+        # All host_cpus have to be used with no_speeds overcommit
+        no_vms = host_cpus * no_speeds / vm_cpus
+        no_threads = no_vms * vm_cpus
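+        # e.g. 4 host CPUs, smp=2 and 2 speeds give no_vms=4, no_threads=8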
+        sessions = []
+        serials = []
+        modules = CgroupModules()
+        if (modules.init(['cpu']) != 1):
+            raise error.TestFail("Can't mount cpu cgroup modules")
+        cgroup = Cgroup('cpu', '')
+        cgroup.initialize(modules)
+
+        error.context("Prepare VMs")
+        param_add_vms(no_vms)
+        preprocess(test, params, env)
+
+        # Sessions are spread over the VMs round-robin (vm1, vm2, vm3, ...,
+        # vm1, vm2, ...) and the VMs are spread over the cgroups the same way
+        # (vm1=cg1, vm2=cg2, vm3=cg3, ... modulo no_speeds), so walking
+        # through the sessions incrementally always yields a different cgroup.
+        vms = env.get_all_vms()
+        timeout = 1.5 * int(params.get("login_timeout", 360))
+        for i in range(no_threads):
+            sessions.append(vms[i % no_vms].wait_for_login(timeout=timeout))
+
+        for i in range(no_speeds):
+            cgroup.mk_cgroup()
+            cgroup.set_property('cpu.shares', speeds[i], i)
+        for i in range(no_vms):
+            assign_vm_into_cgroup(vms[i], cgroup, i % no_speeds)
+            sessions[i].cmd("touch /tmp/cgroup-cpu-lock")
+            serials.append(vms[i].wait_for_serial_login(timeout=30))
+
+        error.context("Test")
+        try:
+            f_stats = []
+            err = []
+            # Time 0
+            for vm in vms:
+                f_stats.append(open("/proc/%d/stat" % vm.get_pid(), 'r'))
 
-        def cleanup(self):
-            """ Cleanup """
-            err = ""
-            try:
-                rm_drive(self.vm, self.files, self.devices)
-            except Exception, failure_detail:
-                err += "\nCan't remove PCI drive: %s" % failure_detail
-            try:
-                del(self.cgroup)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
+            time_init = 2
+            # the test consists of 6 measurement periods
+            time_test = max(int(params.get("cgroup_test_time", 60)) / 6, 5)
+            thread_count = 0    # actual thread number
+            stats = []
+            cmd = "renice -n 10 $$; "       # new ssh login should pass
+            cmd += "while [ -e /tmp/cgroup-cpu-lock ]; do :; done"
+            # Occupy all host_cpus with 1 task (no overcommit)
+            for thread_count in range(0, host_cpus):
+                sessions[thread_count].sendline(cmd)
+            time.sleep(time_init)
+            _stats = _get_stat(f_stats)
+            time.sleep(time_test)
+            stats.append(_get_stat(f_stats, _stats))
 
-            if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s"
-                                      % err)
+            # Overcommit on 1 cpu
+            thread_count += 1
+            sessions[thread_count].sendline(cmd)
+            time.sleep(time_init)
+            _stats = _get_stat(f_stats)
+            time.sleep(time_test)
+            stats.append(_get_stat(f_stats, _stats))
 
+            # no_speeds overcommit on all CPUs
+            for i in range(thread_count + 1, no_threads):
+                sessions[i].sendline(cmd)
+            time.sleep(time_init)
+            _stats = _get_stat(f_stats)
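+            # three consecutive samples plus one covering the whole period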
+            for j in range(3):
+                __stats = _get_stat(f_stats)
+                time.sleep(time_test)
+                stats.append(_get_stat(f_stats, __stats))
+            stats.append(_get_stat(f_stats, _stats))
 
-        def init(self):
-            """
-            Initialization
-             * creates a new scsi_debug device
-             * prepares one cgroup and assign vm to it
-            """
-            # Only create the host /dev/sd? device
-            (self.files, self.devices) = add_scsi_drive(self.vm)
-            rm_drive(self.vm, host_file=None, device=self.devices)
-            self.devices = None # We don't want to mess cleanup
-
-            time.sleep(3)
-            dev = "%s:%s" % get_maj_min(self.files)
-
-            self.cgroup.initialize(self.modules)
-            self.cgroup.mk_cgroup()
-            assign_vm_into_cgroup(self.vm, self.cgroup, 0)
-
-            # Test dictionary
-            # Beware of persistence of some setting to another round!!!
-            self.permissions = [
-                               {'property'      : 'deny',
-                                'value'         : 'a',
-                                'check_value'   : '',
-                                'result'        : False},
-                               {'property'      : 'allow',
-                                'value'         : 'b %s rm' % dev,
-                                'check_value'   : True,
-                                'result'        : False},
-                               {'property'      : 'allow',
-                                'value'         : 'b %s w' % dev,
-                                'check_value'   : 'b %s rwm' % dev,
-                                'result'        : True},
-                               {'property'      : 'deny',
-                                'value'         : 'b %s r' % dev,
-                                'check_value'   : 'b %s wm' % dev,
-                                'result'        : False},
-                               {'property'      : 'deny',
-                                'value'         : 'b %s wm' % dev,
-                                'check_value'   : '',
-                                'result'        : False},
-                               {'property'      : 'allow',
-                                'value'         : 'a',
-                                'check_value'   : 'a *:* rwm',
-                                'result'        : True},
-                              ]
-
-
-
-        def run(self):
-            """
-            Actual test:
-             * For each self.permissions sets the cgroup devices permition
-               and tries attach the disk. Checks the results with prescription.
-            """
-            def set_permissions(cgroup, permissions):
-                """
-                Wrapper for setting permissions to first cgroup
-                @param self.permissions: is defined as a list of dictionaries:
-                   {'property': control property, 'value': permition value,
-                    'check_value': check value (from devices.list property),
-                    'read_results': excepced read results T/F,
-                    'write_results': expected write results T/F}
-                """
-                cgroup.set_property('devices.'+permissions['property'],
-                                    permissions['value'],
-                                    cgroup.cgroups[0],
-                                    check=permissions['check_value'],
-                                    checkprop='devices.list')
-
-
-            session = self.vm.wait_for_login(timeout=30)
-
-            cgroup = self.cgroup
-            results = ""
-            for perm in self.permissions:
-                set_permissions(cgroup, perm)
-                logging.debug("Setting permissions: {%s: %s}, value: %s",
-                              perm['property'], perm['value'],
-                              cgroup.get_property('devices.list',
-                                                  cgroup.cgroups[0]))
+            # Verify results
+            err = ""
+            # accumulate stats from each cgroup
+            for j in range(len(stats)):
+                for i in range(no_speeds, len(stats[j])):
+                    stats[j][i % no_speeds] += stats[j][i]
+                stats[j] = stats[j][:no_speeds]
+            # I.
+            i = 0
+            # only first #host_cpus guests were running
+            dist = distance(min(stats[i][:host_cpus]),
+                            max(stats[i][:host_cpus]))
+            # fewer VMs => lower limit; the maximal limit is 0.2
+            if dist > min(0.10 + 0.01 * len(vms), 0.2):
+                err += "1, "
+                logging.error("1st part's limits broken. Utilisation should be"
+                              " equal. stats = %s, distance = %s", stats[i],
+                              dist)
+            else:
+                logging.info("1st part's distance = %s", dist)
+            # II.
+            i += 1
+            dist = distance(min(stats[i]), max(stats[i]))
+            if host_cpus % no_speeds == 0 and no_speeds <= host_cpus:
+                if dist > min(0.10 + 0.01 * len(vms), 0.2):
+                    err += "2, "
+                    logging.error("2nd part's limits broken, Utilisation "
+                                  "should be equal. stats = %s, distance = %s",
+                                  stats[i], dist)
+                else:
+                    logging.info("2nd part's distance = %s", dist)
+            else:
+                logging.warn("2nd part's verification skipped (#cgroup,#cpu),"
+                             " stats = %s,distance = %s", stats[i], dist)
 
-                try:
-                    (_, self.devices) = add_scsi_drive(self.vm,
-                                                        host_file=self.files)
-                except Exception, details:
-                    if perm['result']:
-                        logging.error("Perm: {%s: %s}: drive was not attached:"
-                                      " %s", perm['property'], perm['value'],
-                                      details)
-                        results += ("{%s: %s => NotAttached}, " %
-                                     (perm['property'], perm['value']))
+            # III.
+            # normalize stats, then they should have equal values
+            i += 1
+            for i in range(i, len(stats)):
+                norm_stats = [float(stats[i][_]) / speeds[_]
+                                                for _ in range(len(stats[i]))]
+                dist = distance(min(norm_stats), max(norm_stats))
+                if dist > min(0.10 + 0.02 * len(vms), 0.25):
+                    err += "3, "
+                    logging.error("3rd part's limits broken; utilisation "
+                                  "should be in accordance to self.speeds. "
+                                  "stats=%s, norm_stats=%s, distance=%s, "
+                                  "speeds=%s,it=%d", stats[i], norm_stats,
+                                  dist, speeds, i - 1)
                 else:
-                    if not perm['result']:
-                        logging.error("Perm: {%s: %s}: drive was attached",
-                                      perm['property'], perm['value'])
-                        results += ("{%s: %s => Attached}, " %
-                                     (perm['property'], perm['value']))
-                    rm_drive(self.vm, host_file=None, device=self.devices)
-                    self.devices = None
+                    logging.info("3rd part's norm_dist = %s", dist)
+
+            if err:
+                err = "[%s] parts broke their limits" % err[:-2]
+                logging.error(err)
+            else:
+                logging.info("Cpu utilisation enforced successfully")
+
+        finally:
+            error.context("Cleanup")
+            del(cgroup)
 
-            session.close()
-            if results:
-                raise error.TestFail("Some restrictions were broken: {%s}" %
-                                      results[:-2])
+            for i in range(len(serials)):
+                # stop all workers
+                serials[i].sendline("rm -f /tmp/cgroup-cpu-lock")
+            for session in sessions:
+                # try whether all sessions are clean
+                session.cmd("true")
+                session.close()
 
-            time.sleep(10)
+            for i in range(len(vms)):
+                vms[i].destroy()
 
-            return ("All restrictions enforced successfully.")
+            del(modules)
 
+        error.context("Results")
+        if err:
+            raise error.TestFail(err)
+        else:
+            return ("Cpu utilisation enforced succesfully")
 
-    class TestFreezer:
+    @error.context_aware
+    def cpuset_cpus():
         """
-        Tests the freezer.state cgroup functionality. (it freezes the guest
-        and unfreeze it again)
+        Pins the main_thread and each vcpu according to the scenario setup
+        and measures physical CPU utilisation.
+        @warning: Default verification method assumes 100% utilisation on each
+                  used CPU. You can force cgroup_verify results.
+        @param cfg: cgroup_test_time - scenario duration '1'
+        @param cfg: smp - number of virtual cpus
+        @param cfg: cgroup_limit - allowed threshold '0.05' (5%)
+        @param cfg: cgroup_cpuset - list of lists defining cpu pinning.
+                     [[1st_scenario],[2nd_scenario], ...]
+                     [[main_thread, vcpu0, vcpu1, ...], ...]
+                     eg. [[None, '0,3', '1', '2', '1-2'], ['0', '0', '1'.....]]
+                     'by default 5 specific scenarios'
+        @param cfg: cgroup_verify - list of lists defining verification
+                     physical CPUs utilisations
+                     [[1st_scenario],[2nd_scenario], ...]
+                     [[cpu0_util,cpu1_util,...], ...]
+                     eg. [[50, 100, 100, 50], [100, 100, 0, 0]]
+                     'by default it assumes each used CPU will be 100%
+                     utilised'
         """
-        def __init__(self, vms, modules):
+        def _generate_cpusets(vm_cpus, no_cpus):
             """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
+            Generates 5 cpuset scenarios
+            @param vm_cpus: number of virtual CPUs
+            @param no_cpus: number of physical CPUs
             """
-            self.vm = vms[0]      # Virt machines
-            self.modules = modules          # cgroup module handler
-            self.cgroup = Cgroup('freezer', '')   # cgroup blkio handler
-            self.files = None   # Temporary files (files of virt disks)
-            self.devices = None # Temporary virt devices
+            cpusets = []
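+            # each scenario is vm_cpus + 1 identical cpuset strings: one for
+            # the main thread plus one per vcpu; the O/_ comments show which
+            # host CPUs are occupied/free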
+            # OO__
+            if no_cpus > vm_cpus:
+                cpuset = '0-%d' % (vm_cpus - 1)
+                # all cpus + main_thread
+                cpusets.append([cpuset for _ in range(vm_cpus + 1)])
+            # __OO
+            if no_cpus > vm_cpus:
+                cpuset = '%d-%d' % (no_cpus - vm_cpus, no_cpus - 1)
+                cpusets.append([cpuset for _ in range(vm_cpus + 1)])
+            # O___
+            cpusets.append(['0' for _ in range(vm_cpus + 1)])
+            # _OO_
+            if no_cpus == 2:
+                cpuset = '1'
+            else:
+                cpuset = '1-%d' % min(no_cpus, vm_cpus - 1)
+            cpusets.append([cpuset for _ in range(vm_cpus + 1)])
+            # O_O_
+            cpuset = '0'
+            for i in range(1, min(vm_cpus, (no_cpus / 2))):
+                cpuset += ',%d' % (i * 2)
+            cpusets.append([cpuset for _ in range(vm_cpus + 1)])
+            return cpusets
+
+        def _generate_verification(cpusets, no_cpus):
+            """
+            Calculates verification data.
+            @warning: Inaccurate method; every pinned CPU has to have 100%
+                      utilisation!
+            @param cpusets: cpusets scenarios
+            @param no_cpus: number of physical CPUs
+            """
+            verify = []
+            # For every scenario
+            for cpuset in cpusets:
+                verify.append([0 for _ in range(no_cpus)])
+                # For every vcpu (skip main_thread, it doesn't consume much)
+                for vcpu in cpuset[1:]:
+                    # Get all usable CPUs for this vcpu
+                    for vcpu_pin in vcpu.split(','):
+                        _ = vcpu_pin.split('-')
+                        if len(_) == 2:
+                            # Range of CPUs
+                            for cpu in range(int(_[0]), int(_[1]) + 1):
+                                verify[-1][cpu] = 100
+                        else:
+                            # Single CPU
+                            verify[-1][int(_[0])] = 100
+            return verify
+
+        error.context("Init")
+        vm_cpus = int(params.get("smp", 1))
+        cpusets = None
+        verify = None
+        if vm_cpus <= 1:
+            raise error.TestNAError("This test requires at least 2 VCPUs.")
+        try:
+            cpusets = eval(params.get("cgroup_cpuset", "None"))
+            if cpusets:
+                for _ in cpusets:
+                    if len(_) != (vm_cpus + 1):
+                        raise Exception
+        except Exception:
+            raise error.TestError("Incorrect configuration: param cgroup_"
+                                  "cpuset have to be list of lists, where "
+                                  "all sublist have the same length and "
+                                  "the length is ('smp' + 1). Or 'None' for "
+                                  "default.\n%s" % cpusets)
+        try:
+            verify = eval(params.get("cgroup_verify", "None"))
+        except Exception:
+            raise error.TestError("Incorrect configuration: param cgroup_"
+                                  "verify have to be list of lists or 'None' "
+                                  "for default/automatic.\n%s" % verify)
 
+        limit = float(params.get("cgroup_limit", 0.05)) * 100
 
-        def cleanup(self):
-            """ Cleanup """
-            err = ""
-            try:
-                self.cgroup.set_property('freezer.state', 'THAWED',
-                                         self.cgroup.cgroups[0])
-            except Exception, failure_detail:
-                err += "\nCan't unfreeze vm: %s" % failure_detail
+        test_time = int(params.get("cgroup_test_time", 1))
 
-            try:
-                _ = self.vm.wait_for_login(timeout=30)
-                _.cmd('rm -f /tmp/freeze-lock')
-                _.close()
-            except Exception, failure_detail:
-                err += "\nCan't stop the stresses."
+        vm = env.get_all_vms()[0]
+        modules = CgroupModules()
+        if (modules.init(['cpuset']) != 1):
+            raise error.TestFail("Can't mount cpu cgroup modules")
+        cgroup = Cgroup('cpuset', '')
+        cgroup.initialize(modules)
 
+        all_cpus = cgroup.get_property("cpuset.cpus")[0]
+        all_mems = cgroup.get_property("cpuset.mems")[0]
+
+        # parse all available host_cpus from cgroups
+        try:
+            no_cpus = int(all_cpus.split('-')[1]) + 1
+        except (ValueError, IndexError):
+            raise error.TestFail("Failed to get #CPU from root cgroup. (%s)",
+                                 all_cpus)
+        vcpus = vm.get_vcpu_pids()
+        if len(vcpus) != vm_cpus:
+            raise error.TestFail("Incorrect number of vcpu PIDs; smp=%s vcpus="
+                                 "%s" % (vm_cpus, vcpus))
+
+        if not cpusets:
+            error.context("Generating cpusets scenerios")
+            cpusets = _generate_cpusets(vm_cpus, no_cpus)
+
+        # None == all_cpus
+        for i in range(len(cpusets)):
+            cpusets[i] = [(all_cpus if _ is None else _) for _ in cpusets[i]]
+
+        if not verify:
+            error.context("Generating cpusets expected results")
             try:
-                del(self.cgroup)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
+                verify = _generate_verification(cpusets, no_cpus)
+            except IndexError:
+                raise error.TestError("IndexError occured while generatin "
+                                      "verification data. Probably missmatched"
+                                      " no_host_cpus and cgroup_cpuset cpus")
+
+        error.context("Prepare")
+        for i in range(vm_cpus + 1):
+            cgroup.mk_cgroup()
+            cgroup.set_property('cpuset.cpus', all_cpus, i)
+            cgroup.set_property('cpuset.mems', all_mems, i)
+            if i == 0:
+                assign_vm_into_cgroup(vm, cgroup, 0)
+            else:
+                cgroup.set_cgroup(vcpus[i - 1], i)
 
-            if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestFail("Some cleanup operations failed: %s" %
-                                      err)
+        timeout = int(params.get("login_timeout", 360))
+        sessions = []
+        stats = []
+        serial = vm.wait_for_serial_login(timeout=timeout)
+        cmd = "renice -n 10 $$; "   # new ssh login should pass
+        cmd += "while [ -e /tmp/cgroup-cpu-lock ]; do :; done"
+        for i in range(vm_cpus):
+            sessions.append(vm.wait_for_login(timeout=timeout))
+            sessions[-1].cmd("touch /tmp/cgroup-cpu-lock")
+            sessions[-1].sendline(cmd)
 
+        try:
+            error.context("Test")
+            for i in range(len(cpusets)):
+                cpuset = cpusets[i]
+                logging.debug("testing: %s", cpuset)
+                # setup scenario
+                for j in range(len(cpuset)):
+                    cgroup.set_property('cpuset.cpus', cpuset[j], j)
+                # Time 0
+                _load = get_load_per_cpu()
+                time.sleep(test_time)
+                # Stats after test_time
+                stats.append(get_load_per_cpu(_load)[1:])
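+                # [1:] skips the aggregate entry, keeping per-CPU loads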
 
-        def init(self):
-            """
-            Initialization
-             * prepares one cgroup and assign vm to it
-            """
-            self.cgroup.initialize(self.modules)
-            self.cgroup.mk_cgroup()
-            assign_vm_into_cgroup(self.vm, self.cgroup, 0)
-
+            serial.cmd("rm -f /tmp/cgroup-cpu-lock")
+            err = ""
 
-        def run(self):
-            """
-            Actual test:
-             * Freezes the guest and thaws it again couple of times
-             * verifies that guest is frozen and runs when expected
-            """
-            def _get_stat(pid):
-                """
-                Gather statistics of pid+1st level subprocesses cpu usage
-                @param pid: PID of the desired process
-                @return: sum of all cpu-related values of 1st level subprocesses
-                """
-                out = None
-                for i in range(10):
-                    try:
-                        out = utils.system_output("cat /proc/%s/task/*/stat" %
-                                                   pid)
-                    except error.CmdError:
-                        out = None
+            error.context("Verification")
+            # Normalize stats
+            for i in range(len(stats)):
+                stats[i] = [(_ / test_time) for _ in stats[i]]
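+                # with _SC_CLK_TCK == 100 this is roughly the percentage
+                # utilisation of each CPU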
+            # Check
+            # header and matrix variables are only for "beautiful" log
+            header = ['scen']
+            header.extend([' cpu%d' % i for i in range(no_cpus)])
+            matrix = []
+            for i in range(len(stats)):
+                matrix.append(['%d' % i])
+                for j in range(len(stats[i])):
+                    if ((stats[i][j] < (verify[i][j] - limit)) or
+                            (stats[i][j] > (verify[i][j] + limit))):
+                        err += "%d(%d), " % (i, j)
+                        matrix[-1].append("%3d ! %d" % (verify[i][j],
+                                                         stats[i][j]))
                     else:
-                        break
-                out = out.split('\n')
-                ret = 0
-                for i in out:
-                    ret += sum([int(_) for _ in i.split(' ')[13:17]])
-                return ret
-
-
-            session = self.vm.wait_for_serial_login(timeout=30)
-            session.cmd('touch /tmp/freeze-lock')
-            session.sendline('while [ -e /tmp/freeze-lock ]; do :; done')
-            cgroup = self.cgroup
-            pid = self.vm.get_pid()
-
-            for tsttime in [0.5, 3, 20]:
-                # Let it work for short, mid and long period of time
-                logging.info("FREEZING (%ss)", tsttime)
-                # Death line for freezing is 1s
-                cgroup.set_property('freezer.state', 'FROZEN',
-                                    cgroup.cgroups[0], check=False)
-                time.sleep(1)
-                _ = cgroup.get_property('freezer.state', cgroup.cgroups[0])
-                if 'FROZEN' not in _:
-                    raise error.TestFail("Couldn't freeze the VM: state %s" % _)
-                stat_ = _get_stat(pid)
-                time.sleep(tsttime)
-                stat = _get_stat(pid)
-                if stat != stat_:
-                    raise error.TestFail('Process was running in FROZEN state; '
-                                         'stat=%s, stat_=%s, diff=%s' %
-                                          (stat, stat_, stat-stat_))
-                logging.info("THAWING (%ss)", tsttime)
-                self.cgroup.set_property('freezer.state', 'THAWED',
-                                         self.cgroup.cgroups[0])
-                stat_ = _get_stat(pid)
-                time.sleep(tsttime)
-                stat = _get_stat(pid)
-                if (stat - stat_) < (90*tsttime):
-                    raise error.TestFail('Process was not active in FROZEN'
-                                         'state; stat=%s, stat_=%s, diff=%s' %
-                                          (stat, stat_, stat-stat_))
+                        matrix[-1].append("%3d ~ %d" % (verify[i][j],
+                                                         stats[i][j]))
+            logging.info("Results (theoretical ~ actual):\n%s" %
+                         utils.matrix_to_string(matrix, header))
+            if err:
+                err = "Scenerios %s FAILED" % err
+                logging.error(err)
+            else:
+                logging.info("All utilisations match prescriptions.")
 
-            return ("Freezer works fine")
+        finally:
+            error.context("Cleanup")
+            serial.cmd("rm -f /tmp/cgroup-cpu-lock")
+            del(cgroup)
+            del(modules)
 
+        error.context("Results")
+        if err:
+            raise error.TestFail(err)
+        else:
+            return ("All utilisations match prescriptions.")
 
-    class TestMemoryMove:
+    @error.context_aware
+    def cpuset_cpus_switching():
         """
-        Tests the memory.move_charge_at_immigrate cgroup capability. It changes
-        memory cgroup while running the guest system.
+        Tests the cpuset.cpus cgroup feature. It stresses all of the VM's
+        CPUs while switching between cgroups with different settings.
+        @param cfg: cgroup_test_time - test duration '60'
         """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            self.vm = vms[0]      # Virt machines
-            self.modules = modules          # cgroup module handler
-            self.cgroup = Cgroup('memory', '')   # cgroup blkio handler
-
-        def cleanup(self):
-            """ Cleanup """
-            err = ""
-            try:
-                del(self.cgroup)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
-
-            if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s" %
-                                       err)
-
-
-        def init(self):
-            """ Initialization: prepares two cgroups """
-            self.cgroup.initialize(self.modules)
-            self.cgroup.mk_cgroup()
-            self.cgroup.mk_cgroup()
-            assign_vm_into_cgroup(self.vm, self.cgroup, 0)
-
-            self.cgroup.set_property('memory.move_charge_at_immigrate', '3',
-                                     self.cgroup.cgroups[0])
-            self.cgroup.set_property('memory.move_charge_at_immigrate', '3',
-                                     self.cgroup.cgroups[1])
+        error.context("Init")
+        try:
+            test_time = int(params.get("cgroup_test_time", 60))
+        except ValueError:
+            raise error.TestError("Incorrect configuration: param "
+                                  "cgroup_test_time have to be an integer")
+
+        error.context("Prepare")
+        modules = CgroupModules()
+        if (modules.init(['cpuset']) != 1):
+            raise error.TestFail("Can't mount cpuset cgroup modules")
+        cgroup = Cgroup('cpuset', '')
+        cgroup.initialize(modules)
 
+        timeout = int(params.get("login_timeout", 360))
+        vm = env.get_all_vms()[0]
+        serial = vm.wait_for_serial_login(timeout=timeout)
+        vm_cpus = int(params.get('smp', 1))
+        all_cpus = cgroup.get_property("cpuset.cpus")[0]
+        if all_cpus == "0":
+            raise error.TestFail("This test needs at least 2 CPUs on "
+                                 "host, cpuset=%s" % all_cpus)
+        try:
+            last_cpu = int(all_cpus.split('-')[1])
+        except Exception:
+            raise error.TestFail("Failed to get #CPU from root cgroup.")
 
-        def run(self):
-            """ Actual test: change cgroup while running test command """
+        if last_cpu == 1:
+            second2last_cpu = "1"
+        else:
+            second2last_cpu = "1-%s" % last_cpu
+
+        # Comments are for vm_cpus=2, no_cpus=4, _SC_CLK_TCK=100
+        cgroup.mk_cgroup()  # oooo
+        cgroup.set_property('cpuset.cpus', all_cpus, 0)
+        cgroup.set_property('cpuset.mems', 0, 0)
+        cgroup.mk_cgroup()  # O___
+        cgroup.set_property('cpuset.cpus', 0, 1)
+        cgroup.set_property('cpuset.mems', 0, 1)
+        cgroup.mk_cgroup()  # _OOO
+        cgroup.set_property('cpuset.cpus', second2last_cpu, 2)
+        cgroup.set_property('cpuset.mems', 0, 2)
+        assign_vm_into_cgroup(vm, cgroup, 0)
+
+        error.context("Test")
+        err = ""
+        try:
+            cmd = "renice -n 10 $$; "   # new ssh login should pass
+            cmd += "while [ -e /tmp/cgroup-cpu-lock ]; do :; done"
             sessions = []
-            sessions.append(self.vm.wait_for_login(timeout=30))
-            sessions.append(self.vm.wait_for_login(timeout=30))
-
-            # Don't allow to specify more than 1/2 of the VM's memory
-            size = int(params.get('mem', 1024)) / 2
-            if params.get('cgroup_memory_move_mb') is not None:
-                size = min(size, int(params.get('cgroup_memory_move_mb')))
-
-            sessions[0].sendline('dd if=/dev/zero of=/dev/null bs=%dM '
-                                 'iflag=fullblock' % size)
-            time.sleep(2)
-
-            sessions[1].cmd('killall -SIGUSR1 dd')
-            for i in range(10):
-                logging.debug("Moving vm into cgroup %s.", (i%2))
-                assign_vm_into_cgroup(self.vm, self.cgroup, i%2)
-                time.sleep(0.1)
-            time.sleep(2)
-            sessions[1].cmd('killall -SIGUSR1 dd')
-            try:
-                out = sessions[0].read_until_output_matches(
-                                                ['(\d+)\+\d records out'])[1]
-                if len(re.findall(r'(\d+)\+\d records out', out)) < 2:
-                    out += sessions[0].read_until_output_matches(
-                                                ['(\d+)\+\d records out'])[1]
-            except ExpectTimeoutError:
-                raise error.TestFail("dd didn't produce expected output: %s" %
-                                      out)
-
-            sessions[1].cmd('killall dd')
-            dd_res = re.findall(r'(\d+)\+(\d+) records in', out)
-            dd_res += re.findall(r'(\d+)\+(\d+) records out', out)
-            dd_res = [int(_[0]) + int(_[1]) for _ in dd_res]
-            if dd_res[1] <= dd_res[0] or dd_res[3] <= dd_res[2]:
-                raise error.TestFail("dd stoped sending bytes: %s..%s, %s..%s" %
-                                      (dd_res[0], dd_res[1], dd_res[2],
-                                       dd_res[3]))
-
-            return ("Guest moved 10times while creating %dMB blocks" % size)
-
+            # start stressers
+            for i in range(vm_cpus):
+                sessions.append(vm.wait_for_login(timeout=30))
+                sessions[i].cmd("touch /tmp/cgroup-cpu-lock")
+                sessions[i].sendline(cmd)
 
-    class TestMemoryLimit:
-        """ Tests the memory.limit_in_bytes by triyng to break the limit """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            self.vm = vms[0]      # Virt machines
-            self.modules = modules          # cgroup module handler
-            self.cgroup = Cgroup('memory', '')   # cgroup blkio handler
+            logging.info("Some harmless IOError messages of non-existing "
+                         "processes might occur.")
+            i = 0
+            t_stop = time.time() + test_time  # run for $test_time seconds
+            while time.time() < t_stop:
+                assign_vm_into_cgroup(vm, cgroup, i % 3)
+                i += 1
 
+            error.context("Verification")
+            serial.sendline("rm -f /tmp/cgroup-cpu-lock")
 
-        def cleanup(self):
-            """ Cleanup """
-            err = ""
             try:
-                del(self.cgroup)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
+                vm.verify_alive()
+            except Exception, exc_details:
+                err += "VM died (no_switches=%s): %s\n" % (i, exc_details)
 
             if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s" %
-                                       err)
-
-
-        def init(self):
-            """
-            Initialization: prepares the cgroup and starts new VM inside it.
-            """
-            # Don't allow to specify more than 1/2 of the VM's memory
-            mem = int(params.get('mem', 1024)) * 512
-            if params.get('cgroup_memory_limit_kb') is not None:
-                mem = min(mem, int(params.get('cgroup_memory_limit_kb')))
-
-            self.cgroup.initialize(self.modules)
-            self.cgroup.mk_cgroup()
-            self.cgroup.set_property('memory.move_charge_at_immigrate', '3',
-                                     self.cgroup.cgroups[0])
-            self.cgroup.set_property_h('memory.limit_in_bytes', "%dK" % mem,
-                                     self.cgroup.cgroups[0])
-
-            logging.info("Expected VM reload")
-            try:
-                self.vm.create()
-            except Exception, failure_detail:
-                raise error.TestFail("init: Failed to recreate the VM: %s" %
-                                      failure_detail)
-            assign_vm_into_cgroup(self.vm, self.cgroup, 0)
-            timeout = int(params.get("login_timeout", 360))
-            self.vm.wait_for_login(timeout=timeout).close()
-            status = open('/proc/%s/status' % self.vm.get_pid(), 'r').read()
-            rss = int(re.search(r'VmRSS:[\t ]*(\d+) kB', status).group(1))
-            if rss > mem:
-                raise error.TestFail("Init failed to move VM into cgroup, VmRss"
-                                     "=%s, expected=%s" % (rss, mem))
-
-        def run(self):
-            """
-            Run dd with bs > memory limit. Verify that qemu survives and
-            success in executing the command without breaking off the limit.
-            """
-            session = self.vm.wait_for_login(timeout=30)
+                err = err[:-1]
+                logging.error(err)
+            else:
+                logging.info("VM survived %d cgroup switches", i)
 
-            # Use 1.1 * memory_limit block size
-            mem = int(params.get('mem', 1024)) * 512
-            if params.get('cgroup_memory_limit_kb') is not None:
-                mem = min(mem, int(params.get('cgroup_memory_limit_kb')))
-            mem *= 1.1
-            session.sendline('dd if=/dev/zero of=/dev/null bs=%dK count=1 '
-                             'iflag=fullblock' %mem)
+        finally:
+            error.context("Cleanup")
+            del(cgroup)
+            del(modules)
 
-            # Check every 0.1s VM memory usage. Limit the maximum execution time
-            # to mem / 10 (== mem * 0.1 sleeps)
-            max_rss = 0
-            max_swap = 0
-            out = ""
-            for _ in range(int(mem / 1024)):
-                status = open('/proc/%s/status' % self.vm.get_pid(), 'r').read()
-                rss = int(re.search(r'VmRSS:[\t ]*(\d+) kB', status).group(1))
-                max_rss = max(rss, max_rss)
-                swap = int(re.search(r'VmSwap:[\t ]*(\d+) kB', status).group(1))
-                max_swap = max(swap + rss, max_swap)
-                try:
-                    out += session.read_up_to_prompt(timeout=0.1)
-                except ExpectTimeoutError:
-                    #0.1s passed, lets begin the next round
-                    pass
-                except ExpectProcessTerminatedError, failure_detail:
-                    raise error.TestFail("VM failed executing the command: %s" %
-                                          failure_detail)
-                else:
-                    break
+            serial.sendline("rm -f /tmp/cgroup-cpu-lock")
 
-            if max_rss > mem:
-                raise error.TestFail("The limit was broken: max_rss=%s, limit="
-                                     "%s" % (max_rss, mem))
-            exit_nr = session.cmd_output("echo $?")[:-1]
-            if exit_nr != '0':
-                raise error.TestFail("dd command failed: %s, output: %s" %
-                                      (exit_nr, out))
-            if (max_rss + max_swap) < mem:
-                raise error.TestFail("VM didn't consume expected amount of "
-                                     "memory. Output of dd cmd: %s" % out)
-
-            return ("Created %dMB block with 1.1 limit overcommit" % (mem/1024))
+            for session in sessions:
+                # try whether all sessions are clean
+                session.cmd("true")
+                session.close()
 
+        error.context("Results")
+        if err:
+            raise error.TestFail(err)
+        else:
+            return ("VM survived %d cgroup switches" % i)
 
-    class _TestCpuShare(object):
+    @error.context_aware
+    def devices_access():
         """
-        Tests the cpu.share cgroup capability. It creates n cgroups accordingly
-        to self.speeds variable and sufficient VMs to symetricaly test three
-        different scenerios.
-        1) #threads == #CPUs
-        2) #threads + 1 == #CPUs, +1thread have the lowest priority (or equal)
-        3) #threads * #cgroups == #CPUs
-        Cgroup shouldn't slow down VMs on unoccupied CPUs. With thread
-        overcommit the scheduler should stabilize accordingly to speeds
-        value.
+        Tests the devices.list capability. It tries hot-adding a disk with
+        different devices.list permissions and verifies whether it passes
+        or fails. It tests both RO and RW modes.
+        @note: VM is destroyed after this test (in order to remove the attached
+               disks)
+        @note: supported monitor CMDs are pci_add, drive_add, RH-drive_add
+               and RH-QMP-drive_add
         """
-        def __init__(self, vms, modules):
-            self.vms = vms[:]      # Copy of virt machines
-            self.vms_count = len(vms) # Original number of vms
-            self.modules = modules          # cgroup module handler
-            self.cgroup = Cgroup('cpu', '')   # cgroup blkio handler
-            self.speeds = None  # cpu.share values [cg1, cg2]
-            self.sessions = []    # ssh sessions
-            self.serials = []   # serial consoles
-
-
-        def cleanup(self):
-            """ Cleanup """
-            err = ""
-            try:
-                del(self.cgroup)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
-
-            # Stop all VMS in parallel, then check for success.
-            for i in range(len(self.vms)):
-                self.serials[i].sendline('rm -f /tmp/cgroup-cpu-lock')
-            time.sleep(2)
-            for i in range(len(self.vms)):
-                try:
-                    out = self.serials[i].cmd_output('echo $?', timeout=10)
-                    if out != "0\n":
-                        err += ("\nCan't stop the stresser on %s: %s" %
-                                self.vms[i].name)
-                except Exception, failure_detail:
-                    err += ("\nCan't stop the stresser on %s: %s" %
-                             (self.vms[i].name, failure_detail))
-            del self.serials
-
-            for i in range(len(self.sessions)):
-                try:
-                    self.sessions[i].close()
-                except Exception, failure_detail:
-                    err += ("\nCan't close ssh connection %s" % i)
-            del self.sessions
-
-            for vm in self.vms[self.vms_count:]:
+        def _set_permissions(cgroup, permissions):
+            """
+            Wrapper for setting permissions of the first cgroup
+            @param permissions: permission dictionary:
+               {'property': control property, 'value': permission value,
+                'check_value': check value (from the devices.list property),
+                'read_results': expected read results T/F,
+                'write_results': expected write results T/F}
+            """
+            cgroup.set_property('devices.' + permissions['property'],
+                                permissions['value'],
+                                cgroup.cgroups[0],
+                                check=permissions['check_value'],
+                                checkprop='devices.list')
+
+        def _add_drive(monitor, monitor_type, disk, name, readonly=False):
+            """
+            Hot-adds disk to monitor's VM.
+            @param monitor: VM's monitor.
+            @param monitor_type: which command to use for hot-adding. (string)
+            @param disk: path to the disk
+            @param name: id name given to this disk in VM
+            @param readonly: Use readonly? 'False'
+            """
+            if monitor_type == "HUMAN PCI_ADD":
+                out = monitor.cmd("pci_add auto storage file=%s,readonly=%s,"
+                                  "if=virtio,id=%s" %
+                                  (disk, ("on" if readonly else "off"), name))
+                if "all in use" in out:     # All PCIs used
+                    return -1   # restart machine and try again
+                if "%s: " % name not in monitor.cmd("info block"):
+                    return False
+            elif monitor_type == "HUMAN DRIVE_ADD":
+                monitor.cmd("drive_add auto file=%s,readonly=%s,if=none,id=%s"
+                            % (disk, ("on" if readonly else "off"), name))
+                if "%s: " % name not in monitor.cmd("info block"):
+                    return False
+            elif monitor_type == "HUMAN RH":
+                monitor.cmd("__com.redhat_drive_add id=%s,file=%s,readonly=%s"
+                            % (name, disk, ("on" if readonly else "off")))
+                if "%s: " % name not in monitor.cmd("info block"):
+                    return False
+            elif monitor_type == "QMP RH":
+                monitor.cmd_obj({"execute": "__com.redhat_drive_add",
+                                 "arguments": {"file": disk, "id": name,
+                                               "readonly": readonly}})
+                output = monitor.cmd_obj({"execute": "query-block"})
+                for out in output['return']:
+                    try:
+                        if out['device'] == name:
+                            return True
+                    except KeyError:
+                        pass
+                return False
+            else:
+                return False
+
+            return True
+
+        error.context("Setup test")
+        vm = env.get_all_vms()[0]
+        # Try to find a suitable monitor
+        monitor_type = None
+        for i_monitor in range(len(vm.monitors)):
+            monitor = vm.monitors[i_monitor]
+            if isinstance(monitor, kvm_monitor.QMPMonitor):
+                out = monitor.cmd_obj({"execute": "query-commands"})
                 try:
-                    vm.destroy(gracefully=False)
-                except Exception, failure_detail:
-                    err += "\nCan't destroy added VM: %s" % failure_detail
-            del self.vms
-
-            if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s"
-                                      % err)
-
-
-        def init(self):
-            """
-            Initialization
-             * creates additional VMs to fit the  no_cgroups * host_cpus /
-               vm_cpus requirement (self.vms + additional VMs)
-             * creates two cgroups and sets cpu.share accordingly to self.speeds
-            """
-            self.speeds.sort()
-            host_cpus = open('/proc/cpuinfo').read().count('model name')
-            vm_cpus = int(params.get('smp', 1)) # cpus per VM
-            no_speeds = len(self.speeds)        # #cgroups
-            no_vms = host_cpus * no_speeds / vm_cpus    # #VMs used by test
-            no_threads = no_vms * vm_cpus       # total #threads
-            sessions = self.sessions
-            for i in range(no_vms - self.vms_count):    # create needed VMs
-                vm_name = "clone%s" % i
-                self.vms.append(self.vms[0].clone(vm_name, params))
-                env.register_vm(vm_name, self.vms[-1])
-                self.vms[-1].create()
-            timeout = 1.5 * int(params.get("login_timeout", 360))
-            for i in range(no_threads):
-                sessions.append(self.vms[i%no_vms].wait_for_login(
-                                                            timeout=timeout))
-            self.cgroup.initialize(self.modules)
-            for i in range(no_speeds):
-                self.cgroup.mk_cgroup()
-                self.cgroup.set_property('cpu.shares', self.speeds[i], i)
-            for i in range(no_vms):
-                assign_vm_into_cgroup(self.vms[i], self.cgroup, i%no_speeds)
-                sessions[i].cmd("touch /tmp/cgroup-cpu-lock")
-                self.serials.append(self.vms[i].wait_for_serial_login(
-                                                                timeout=30))
-
-
-        def run(self):
-            """
-            Actual test:
-            Let each of 3 scenerios (described in test specification) stabilize
-            and then measure the CPU utilisation for time_test time.
-            """
-            def _get_stat(f_stats, _stats=None):
-                """ Reads CPU times from f_stats[] files and sumarize them. """
-                if _stats is None:
-                    _stats = []
-                    for i in range(len(f_stats)):
-                        _stats.append(0)
-                stats = []
-                for i in range(len(f_stats)):
-                    f_stats[i].seek(0)
-                    stats.append(f_stats[i].read().split()[13:17])
-                    stats[i] = sum([int(_) for _ in stats[i]]) - _stats[i]
-                return stats
-
-
-            host_cpus = open('/proc/cpuinfo').read().count('model name')
-            no_speeds = len(self.speeds)
-            no_threads = host_cpus * no_speeds       # total #threads
-            sessions = self.sessions
-            f_stats = []
-            err = []
-            for vm in self.vms:
-                f_stats.append(open("/proc/%d/stat" % vm.get_pid(), 'r'))
-
-            time_init = 10
-            time_test = 10
-            thread_count = 0    # actual thread number
-            stats = []
-            cmd = "renice -n 10 $$; " # new ssh login should pass
-            cmd += "while [ -e /tmp/cgroup-cpu-lock ]; do :; done"
-            for thread_count in range(0, host_cpus):
-                sessions[thread_count].sendline(cmd)
-            time.sleep(time_init)
-            _stats = _get_stat(f_stats)
-            time.sleep(time_test)
-            stats.append(_get_stat(f_stats, _stats))
-
-            thread_count += 1
-            sessions[thread_count].sendline(cmd)
-            if host_cpus % no_speeds == 0 and no_speeds <= host_cpus:
-                time.sleep(time_init)
-                _stats = _get_stat(f_stats)
-                time.sleep(time_test)
-                stats.append(_get_stat(f_stats, _stats))
-
-            for i in range(thread_count+1, no_threads):
-                sessions[i].sendline(cmd)
-            time.sleep(time_init)
-            _stats = _get_stat(f_stats)
-            for j in range(3):
-                __stats = _get_stat(f_stats)
-                time.sleep(time_test)
-                stats.append(_get_stat(f_stats, __stats))
-            stats.append(_get_stat(f_stats, _stats))
-
-            # Verify results
-            err = ""
-            # accumulate stats from each cgroup
-            for j in range(len(stats)):
-                for i in range(no_speeds, len(stats[j])):
-                    stats[j][i % no_speeds] += stats[j][i]
-                stats[j] = stats[j][:no_speeds]
-            # I.
+                    if {'name': '__com.redhat_drive_add'} in out['return']:
+                        monitor_type = "QMP RH"
+                        break
+                except KeyError:
+                    logging.info("Incorrect data from QMP, skipping: %s", out)
+                    continue
+            else:
+                out = monitor.cmd("help")
+                if "\ndrive_add " in out:
+                    monitor_type = "HUMAN DRIVE_ADD"
+                    break
+                elif "\n__com.redhat_drive_add " in out:
+                    monitor_type = "HUMAN RH"
+                    break
+                elif "\npci_add " in out:
+                    monitor_type = "HUMAN PCI_ADD"
+                    break
+        if monitor_type is None:
+            raise error.TestNAError("No suitable monitor command detected. "
+                                    "Supported methods:\nQMP: __com.redhat_"
+                                    "drive_add\nHuman: drive_add, pci_add, "
+                                    "__com.redhat_drive_add")
+        logging.debug("Using monitor type: %s", monitor_type)
+
+        modules = CgroupModules()
+        if (modules.init(['devices']) != 1):
+            raise error.TestFail("Can't mount devices cgroup modules")
+        devices = Cgroup('devices', '')
+        devices.initialize(modules)
+        devices.mk_cgroup()
+
+        # Add one scsi_debug disk which will be used in testing
+        if utils.system("lsmod | grep scsi_debug", ignore_status=True):
+            utils.system("modprobe scsi_debug dev_size_mb=8 add_host=0")
+        utils.system("echo 1 > /sys/bus/pseudo/drivers/scsi_debug/add_host")
+        time.sleep(0.1)
+        disk = utils.system_output("ls /dev/sd* | tail -n 1")
+        dev = "%s:%s" % get_maj_min(disk)
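+        # get_maj_min is a helper presumably defined earlier in this test; a
+        # minimal sketch of such a helper, assuming 'disk' is a device node
+        # like /dev/sdb:
+        #     rdev = os.stat(disk).st_rdev
+        #     return os.major(rdev), os.minor(rdev)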
+        permissions = [
+                       {'property':     'deny',
+                        'value':        'a',
+                        'check_value':  '',
+                        'result':       False,
+                        'result_read':  False},
+                       {'property':     'allow',
+                        'value':        'b %s r' % dev,
+                        'check_value':  True,
+                        'result':       False,
+                        'result_read':  True},
+                       {'property':     'allow',
+                        'value':        'b %s w' % dev,
+                        'check_value':  'b %s rw' % dev,
+                        'result':       True,
+                        'result_read':  True},
+                       {'property':     'deny',
+                        'value':        'b %s r' % dev,
+                        'check_value':  'b %s w' % dev,
+                        'result':       False,
+                        'result_read':  False},
+                       {'property':     'deny',
+                        'value':        'b %s w' % dev,
+                        'check_value':  '',
+                        'result':       False,
+                        'result_read':  False},
+                       {'property':     'allow',
+                        'value':        'a',
+                        'check_value':  'a *:* rwm',
+                        'result':       True,
+                        'result_read':  True},
+                      ]
+
+        assign_vm_into_cgroup(vm, devices, 0)
+
+        error.context("Test")
+        err = ""
+        name = "idTest%s%d"
+        try:
             i = 0
-            dist = distance(min(stats[i]), max(stats[i]))
-            if dist > min(0.10 + 0.01 * len(self.vms), 0.2):
-                err += "1, "
-                logging.error("1st part's limits broken. Utilisation should be "
-                              "equal. stats = %s, distance = %s", stats[i],
-                              dist)
-            # II.
-            i += 1
-            if len(stats) == 6:
-                dist = distance(min(stats[i]), max(stats[i]))
-                if dist > min(0.10 + 0.01 * len(self.vms), 0.2):
-                    err += "2, "
-                    logging.error("2nd part's limits broken, Utilisation "
-                                  "should be equal. stats = %s, distance = %s",
-                                  stats[i], dist)
-
-            # III.
-            # normalize stats, then they should have equal values
-            i += 1
-            for i in range(i, len(stats)):
-                norm_stats = [float(stats[i][_]) / self.speeds[_]
-                                                for _ in range(len(stats[i]))]
-                dist = distance(min(norm_stats), max(norm_stats))
-                if dist > min(0.10 + 0.02 * len(self.vms), 0.25):
-                    err += "3, "
-                    logging.error("3rd part's limits broken; utilisation should"
-                                  " be in accordance to self.speeds. stats=%s"
-                                  ", norm_stats=%s, distance=%s, speeds=%s,it="
-                                  "%d", stats[i], norm_stats, dist,
-                                  self.speeds, i-1)
+            while i < len(permissions):
+                perm = permissions[i]
+                _set_permissions(devices, perm)
+                logging.debug("Setting permissions: {%s: %s}, devices.list: %s",
+                              perm['property'], perm['value'],
+                              devices.get_property('devices.list', 0))
+                results = ""
+                out = _add_drive(monitor, monitor_type, disk, name % ("R", i),
+                                True)
+                if out == -1:
+                    logging.warn("All PCIs full, recreating VM")
+                    vm.create()
+                    monitor = vm.monitors[i_monitor]
+                    assign_vm_into_cgroup(vm, devices, 0)
+                    continue
+                if perm['result_read'] and not out:
+                    results += "ReadNotAttached, "
+                elif not perm['result_read'] and out:
+                    results += "ReadAttached, "
+
+                out = _add_drive(monitor, monitor_type, disk, name % ("RW", i),
+                                False)
+                if out == -1:
+                    logging.warn("All PCIs full, recreating VM")
+                    vm.create()
+                    monitor = vm.monitors[i_monitor]
+                    assign_vm_into_cgroup(vm, devices, 0)
+                    continue
+                if perm['result'] and not out:
+                    results += "RWNotAttached, "
+                elif not perm['result'] and out:
+                    results += "RWAttached, "
+
+                if results:
+                    logging.debug("%d: FAIL: %s", i, results[:-2])
+                    err += "{%d: %s}, " % (i, results[:-2])
+                else:
+                    logging.info("%d: PASS", i)
+                i += 1
 
             if err:
-                err = "[%s] parts broke their limits" % err[:-2]
+                err = "Some restrictions weren't enforced:\n%s" % err[:-2]
                 logging.error(err)
-                raise error.TestFail(err)
-
-            return ("Cpu utilisation enforced succesfully")
-
-
-    class TestCpuShare10(_TestCpuShare):
-        """
-        1:10 variant of _TestCpuShare test.
-        """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            super(TestCpuShare10, self).__init__(vms, modules)
-            self.speeds = [10000, 100000]
-
+            else:
+                logging.info("All restrictions enforced.")
 
-    class TestCpuShare50(_TestCpuShare):
-        """
-        1:1 variant of _TestCpuShare test.
-        """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            super(TestCpuShare50, self).__init__(vms, modules)
-            self.speeds = [100000, 100000]
+        finally:
+            error.context("Cleanup")
+            vm.destroy()     # "Safely" remove devices :-)
+            rm_scsi_disks(1)
+            del(devices)
+            del(modules)
 
+        error.context("Results")
+        if err:
+            raise error.TestFail(err)
+        else:
+            return ("All restrictions enforced.")
 
-    class TestCpuCFSUtil:
+    @error.context_aware
+    def freezer():
         """
-        Tests the utilisation of scheduler when cgroup cpu.cfs_* setting is
-        set. There is a known issue with scheduler and multiple CPUs.
+        Tests the freezer.state cgroup functionality. (it freezes the guest
+        and unfreezes it again)
+        @param cfg: cgroup_test_time - test duration '60'
         """
-        def __init__(self, vms, modules):
+        def _get_stat(pid):
             """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
+            Gather cpu usage statistics of pid and its 1st level subprocesses
+            @param pid: PID of the desired process
+            @return: sum of all cpu-related values of 1st level subprocesses
             """
-            self.vms = vms[:]      # Copy of virt machines
-            self.vms_count = len(vms) # Original number of vms
-            self.modules = modules          # cgroup module handler
-            self.cgroup = Cgroup('cpu', '')   # cgroup blkio handler
-            self.sessions = []    # ssh sessions
-            self.serials = []   # serial consoles
-
-
-        def cleanup(self):
-            """ Cleanup """
-            err = ""
-            del(self.cgroup)
-
-            for i in range(len(self.vms)):
-                self.serials[i].sendline('rm -f /tmp/cgroup-cpu-lock')
-            del self.serials
-
-            for i in range(len(self.sessions)):
-                try:
-                    self.sessions[i].close()
-                except Exception, failure_detail:
-                    err += ("\nCan't close ssh connection %s" % i)
-            del self.sessions
-
-            for vm in self.vms[self.vms_count:]:
+            out = None
+            for i in range(10):
                 try:
-                    vm.destroy(gracefully=False)
-                except Exception, failure_detail:
-                    err += "\nCan't destroy added VM: %s" % failure_detail
-            del self.vms
+                    out = utils.system_output("cat /proc/%s/task/*/stat" %
+                                               pid)
+                except error.CmdError:
+                    out = None
+                else:
+                    break
+            if out is None:
+                raise error.TestFail("Can't read /proc/%s/task/*/stat" % pid)
+            out = out.split('\n')
+            ret = 0
+            for i in out:
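+                # fields 14-17 of /proc/<pid>/task/<tid>/stat are utime,
+                # stime, cutime and cstime, in clock ticks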
+                ret += sum([int(_) for _ in i.split(' ')[13:17]])
+            return ret
 
-            if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s"
-                                      % err)
+        error.context("Init")
+        try:
+            test_time = int(params.get("cgroup_test_time", 60))
+        except ValueError:
+            raise error.TestError("Incorrect configuration: param "
+                                  "cgroup_test_time has to be an integer")
 
+        timeout = int(params.get("login_timeout", 360))
+        vm = env.get_all_vms()[0]
+        vm_cpus = int(params.get('smp', 1))     # cpus per VM
+        serial = vm.wait_for_serial_login(timeout=timeout)
+        sessions = []
+        for _ in range(vm_cpus):
+            sessions.append(vm.wait_for_login(timeout=timeout))
+
+        error.context("Prepare")
+        modules = CgroupModules()
+        if (modules.init(['freezer']) != 1):
+            raise error.TestFail("Can't mount freezer cgroup modules")
+        cgroup = Cgroup('freezer', '')
+        cgroup.initialize(modules)
+        cgroup.mk_cgroup()
+        assign_vm_into_cgroup(vm, cgroup, 0)
+
+        error.context("Test")
+        err = ""
+        try:
+            for session in sessions:
+                session.cmd('touch /tmp/freeze-lock')
+                session.sendline('while [ -e /tmp/freeze-lock ]; do :; done')
+            pid = vm.get_pid()
+
+            # Let it work for short, mid and long periods of time
+            for tsttime in [0.5, 3, test_time]:
+                logging.debug("FREEZING (%ss)", tsttime)
+                # Freezing takes some time, the deadline is 1s
+                cgroup.set_property('freezer.state', 'FROZEN',
+                                    cgroup.cgroups[0], check=False)
+                time.sleep(1)
+                _ = cgroup.get_property('freezer.state', 0)
+                if 'FROZEN' not in _:
+                    err = "Couldn't freeze the VM: state %s" % _
+                    break
+                stat_ = _get_stat(pid)
+                time.sleep(tsttime)
+                stat = _get_stat(pid)
+                if stat != stat_:
+                    err = ('Process was running in FROZEN state; stat=%s, '
+                           'stat_=%s, diff=%s' % (stat, stat_, stat - stat_))
+                    break
+                logging.debug("THAWING (%ss)", tsttime)
+                cgroup.set_property('freezer.state', 'THAWED', 0)
+                stat_ = _get_stat(pid)
+                time.sleep(tsttime)
+                stat = _get_stat(pid)
+                if (stat - stat_) < (90 * tsttime):
+                    err = ('Process was not active in THAWED state; stat=%s, '
+                           'stat_=%s, diff=%s' % (stat, stat_, stat - stat_))
+                    break
 
-        def init(self):
-            """
-            Initialization
-             * creates additional VMs (vm_cpus = 2 * host_cpus)
-             * creates cgroup for each VM and subcgroup for theirs vCPUs
-               (../vm[123..]/vcpu[012..])
-            """
-            def get_cpu_pids(vm, smp=None):
-                """ Get pids of all VM's vcpus """
-                cpu_pids = re.findall(r'thread_id=(\d+)',
-                                      vm.monitor.info("cpus"))
-                if not cpu_pids:
-                    raise error.TestFail("Can't get 'info cpus' from monitor")
-                if smp is not None and len(cpu_pids) != smp:
-                    raise error.TestFail("Incorrect no vcpus: monitor = %s, "
-                                         "params = %s" % (len(cpu_pids), smp))
-                return cpu_pids
-
-            self.cgroup.initialize(self.modules)
-            host_cpus = open('/proc/cpuinfo').read().count('model name')
-            smp = int(params.get('smp', 1))
-            vm_cpus = 0
-            # Prepare existing vms (if necessarily)
-            for i in range(min(len(self.vms), 2 * host_cpus / smp)):
-                # create "/vm[123]/ cgroups and set cfs_quota_us to no_vcpus*50%
-                vm_pwd = self.cgroup.mk_cgroup()
-                self.cgroup.set_property("cpu.cfs_period_us", 100000, vm_pwd)
-                self.cgroup.set_property("cpu.cfs_quota_us", 50000*smp, vm_pwd)
-                assign_vm_into_cgroup(self.vms[i], self.cgroup, vm_pwd)
-                cpu_pids = get_cpu_pids(self.vms[i], smp)
-                for j in range(smp):
-                    # create "/vm*/vcpu[123] cgroups and set cfs_quota_us to 50%
-                    vcpu_pwd = self.cgroup.mk_cgroup(vm_pwd)
-                    self.cgroup.set_property("cpu.cfs_period_us", 100000,
-                                                                    vcpu_pwd)
-                    self.cgroup.set_property("cpu.cfs_quota_us", 50000,
-                                                                    vcpu_pwd)
-                    self.cgroup.set_cgroup(int(cpu_pids[j]), vcpu_pwd)
-                    self.sessions.append(self.vms[i].wait_for_login(timeout=30))
-                    vm_cpus += 1
-                self.serials.append(self.vms[i].wait_for_serial_login(
-                                                                    timeout=30))
-                self.serials[-1].cmd("touch /tmp/cgroup-cpu-lock")
-            timeout = 1.5 * int(params.get("login_timeout", 360))
-            _params = params
-            # Add additional vms (if necessarily)
-            i = 0
-            while vm_cpus < 2 * host_cpus:
-                vm_name = "clone%s" % i
-                smp = min(vm_cpus, 2 * host_cpus - vm_cpus)
-                _params['smp'] = smp
-                self.vms.append(self.vms[0].clone(vm_name, _params))
-                env.register_vm(vm_name, self.vms[-1])
-                self.vms[-1].create()
-                vm_pwd = self.cgroup.mk_cgroup()
-                self.cgroup.set_property("cpu.cfs_period_us", 100000, vm_pwd)
-                self.cgroup.set_property("cpu.cfs_quota_us", 50000*smp, vm_pwd)
-                assign_vm_into_cgroup(self.vms[-1], self.cgroup, vm_pwd)
-                cpu_pids = get_cpu_pids(self.vms[-1], smp)
-                for j in range(smp):
-                    vcpu_pwd = self.cgroup.mk_cgroup(vm_pwd)
-                    self.cgroup.set_property("cpu.cfs_period_us", 100000,
-                                                                    vcpu_pwd)
-                    self.cgroup.set_property("cpu.cfs_quota_us", 50000,
-                                                                    vcpu_pwd)
-                    self.cgroup.set_cgroup(int(cpu_pids[j]), vcpu_pwd)
-                    self.sessions.append(self.vms[-1].wait_for_login(
-                                                            timeout=timeout))
-                self.serials.append(self.vms[-1].wait_for_serial_login(
-                                                                    timeout=30))
-                self.serials[-1].cmd("touch /tmp/cgroup-cpu-lock")
-                vm_cpus += smp
-                i += 1
+            if err:
+                logging.error(err)
+            else:
+                logging.info("Freezer works fine")
 
+        finally:
+            error.context("Cleanup")
+            del(cgroup)
+            serial.sendline("rm -f /tmp/freeze-lock")
 
-        def run(self):
-            """
-            Actual test:
-            It run stressers on all vcpus, gather host CPU utilisation and
-            verifies that guests use at least 95% of CPU time.
-            """
-            stats = []
-            cmd = "renice -n 10 $$; "
-            cmd += "while [ -e /tmp/cgroup-cpu-lock ]; do :; done"
-            for session in self.sessions:
-                session.sendline(cmd)
-
-            # Test
-            time.sleep(1)
-            stats.append(open('/proc/stat', 'r').readline())
-            time.sleep(1)
-            stats.append(open('/proc/stat', 'r').readline())
-            time.sleep(9)
-            stats.append(open('/proc/stat', 'r').readline())
-            time.sleep(49)
-            stats.append(open('/proc/stat', 'r').readline())
-            for session in self.serials:
-                session.sendline('rm -f /tmp/cgroup-cpu-lock')
+            for session in sessions:
+                session.cmd("true")
+                session.close()
 
-            # Verification
-            print stats
-            stats[0] = [int(_) for _ in stats[0].split()[1:]]
-            stats[0] = [sum(stats[0][0:8]), sum(stats[0][8:])]
-            for i in range(1, len(stats)):
-                stats[i] = [int(_) for _ in stats[i].split()[1:]]
-                try:
-                    stats[i] = (float(sum(stats[i][8:]) - stats[0][1]) /
-                                        (sum(stats[i][0:8]) - stats[0][0]))
-                except ZeroDivisionError:
-                    logging.error("ZeroDivisionError in stats calculation")
-                    stats[i] = False
-            print stats
-            for i in range(1, len(stats)):
-                if stats[i] < 0.95:
-                    raise error.TestFail("Guest time is not >95%% %s" % stats)
+            del(modules)
 
-            logging.info("Guest times are over 95%%: %s", stats)
-            return "Guest times are over 95%%: %s" % stats
+        if err:
+            raise error.TestFail(err)
+        else:
+            return ("Freezer works fine")
 
-    class TestCpusetCpus:
+    @error.context_aware
+    def memory_limit(memsw=False):
         """
-        Tests the cpuset.cpus cgroup feature. It stresses all VM's CPUs
-        and changes the CPU affinity. Verifies correct behaviour.
+        Tests the memory.limit_in_bytes or memory.memsw.limit_in_bytes cgroup
+        capability. It tries to allocate a bigger block than the allowed
+        limit.
+        memory.limit_in_bytes: Qemu process should be swapped out and the
+                               block created.
+        memory.memsw.limit_in_bytes: Qemu should be killed with err 137.
+        @param memsw: Whether to run memsw or rss mem only test
+        @param cfg: cgroup_memory_limit_kb - test uses 1.1 * memory_limit
+                    memory blocks for testing 'by default 1/2 of VM memory'
         """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            self.vm = vms[0]      # Virt machines
-            self.modules = modules          # cgroup module handler
-            self.cgroup = Cgroup('cpuset', '')   # cgroup handler
-            self.sessions = []
-
-
-        def cleanup(self):
-            """ Cleanup """
-            err = ""
-            try:
-                del(self.cgroup)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
-
-            self.sessions[-1].sendline('rm -f /tmp/cgroup-cpu-lock')
-            for i in range(len(self.sessions)):
-                try:
-                    self.sessions[i].close()
-                except Exception, failure_detail:
-                    err += ("\nCan't close ssh connection %s" % i)
-
-            if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s" %
-                                      err)
-
-
-        def init(self):
-            """
-            Prepares cgroup, moves VM into it and execute stressers.
-            """
-            self.cgroup.initialize(self.modules)
-            # We need only machine with more than 1 CPU
-            all_cpus = self.cgroup.get_property("cpuset.cpus")[0]
-            all_mems = self.cgroup.get_property("cpuset.mems")[0]
-            vm_cpus = int(params.get('smp', 1)) # cpus per VM
-            if all_cpus == "0" or vm_cpus < 2:
-                raise error.TestFail("This test needs at least 2 CPUs on "
-                                     "host and the first guest")
+        error.context("Init")
+        try:
+            mem_limit = params.get('cgroup_memory_limit_kb', None)
+            if mem_limit is not None:
+                mem_limit = int(mem_limit)
+        except ValueError:
+            raise error.TestError("Incorrect configuration: param cgroup_"
+                                  "memory_limit_kb has to be an integer")
+
+        vm = env.get_all_vms()[0]
+
+        error.context("Prepare")
+        # Don't allow specifying more than 1/2 of the VM's memory
+        mem = int(params.get('mem', 1024)) * 512    # 1/2 of VM mem, in KB
+        if mem_limit:
+            mem = min(mem, mem_limit)
+        else:
+            mem_limit = mem
+        # There has to be enough free swap space and hugepages can't be used
+        if not memsw:
+            if params.get('setup_hugepages') == 'yes':
+                err = "Hugepages can't be used in this test."
+                logging.error(err)
+                raise error.TestNAError(err)
+            if utils.read_from_meminfo('SwapFree') < (mem * 0.1):
+                err = "Not enough free swap space"
+                logging.error(err)
+                raise error.TestNAError(err)
+        # We want to copy slightly over the "mem" limit
+        mem *= 1.1
+        modules = CgroupModules()
+        if (modules.init(['memory']) != 1):
+            raise error.TestFail("Can't mount memory cgroup modules")
+        cgroup = Cgroup('memory', '')
+        cgroup.initialize(modules)
+        cgroup.mk_cgroup()
+        cgroup.set_property('memory.move_charge_at_immigrate', '3', 0)
+        cgroup.set_property_h('memory.limit_in_bytes', "%dK" % mem_limit, 0)
+        if memsw:
             try:
-                no_cpus = int(all_cpus.split('-')[1]) + 1
-            except Exception:
-                raise error.TestFail("Failed to get #CPU from root cgroup.")
-            self.cgroup.mk_cgroup()
-            self.cgroup.set_property('cpuset.cpus', all_cpus, 0)
-            self.cgroup.set_property('cpuset.mems', all_mems, 0)
-            assign_vm_into_cgroup(self.vm, self.cgroup, 0)
-
-            cmd = "renice -n 10 $$; " # new ssh login should pass
-            cmd += "while [ -e /tmp/cgroup-cpu-lock ]; do :; done"
-            for i in range(vm_cpus):
-                self.sessions.append(self.vm.wait_for_login(timeout=30))
-                self.sessions[i].cmd("touch /tmp/cgroup-cpu-lock")
-                self.sessions[i].sendline(cmd)
-            self.sessions.append(self.vm.wait_for_login(timeout=30))   # cleanup
+                cgroup.get_property("memory.memsw.limit_in_bytes", 0)
+            except error.TestError, details:
+                logging.error("Can't get memory.memsw.limit_in_bytes info. "
+                              "Do you have support for memsw? (try passing "
+                              "swapaccount=1 parameter to kernel): %s", details)
+                raise error.TestNAError("System doesn't support memory.memsw.*"
+                                        " or swapaccount is disabled.")
+            cgroup.set_property_h('memory.memsw.limit_in_bytes',
+                                  "%dK" % mem_limit, 0)
+
+        logging.info("Expected VM reload")
+        try:
+            vm.create()
+        except Exception, failure_detail:
+            raise error.TestFail("init: Failed to recreate the VM: %s" %
+                                 failure_detail)
+        assign_vm_into_cgroup(vm, cgroup, 0)
+        timeout = int(params.get("login_timeout", 360))
+        session = vm.wait_for_login(timeout=timeout)
 
+        # Verify the VM didn't already eat up more than this cgroup allows
+        fstats = open('/proc/%s/status' % vm.get_pid(), 'r')
+        rss = int(re.search(r'VmRSS:[\t ]*(\d+) kB', fstats.read()).group(1))
+        if rss > mem_limit:
+            raise error.TestFail("Init failed to move VM into cgroup, VmRSS"
+                                 "=%s, expected=%s" % (rss, mem_limit))
 
-        def run(self):
+        try:
+            error.context("Test")
             """
-            Actual test; stress VM and verifies the impact on host.
+            Let VM allocate huge block:
+            1) memsw: During allocation limit of rss+swap should be exceeded
+                      and VM should be killed with err 137.
+            2) rsslimit: Allocation should pass, rss+swap should be greater
+                         than mem_limit.
+            * Max execution time is limited to mem / 10
+            * Checking every 0.1s
             """
-            def _test_it(tst_time):
-                """ Helper; gets stat differences during test_time period. """
-                _load = get_load_per_cpu()
-                time.sleep(tst_time)
-                return (get_load_per_cpu(_load)[1:])
-
-            tst_time = 1    # 1s
-            vm_cpus = int(params.get('smp', 1)) # cpus per VM
-            all_cpus = self.cgroup.get_property("cpuset.cpus")[0]
-            no_cpus = int(all_cpus.split('-')[1]) + 1
-            stats = []
-
-            # Comments are for vm_cpus=2, no_cpus=4, _SC_CLK_TCK=100
-            # All CPUs are used, utilisation should be maximal
-            # CPUs: oooo, Stat: 200
-            cpus = False
-            stats.append((_test_it(tst_time), cpus))
-
-            if no_cpus > vm_cpus:
-                # CPUs: OO__, Stat: 200
-                cpus = '0-%d' % (vm_cpus-1)
-                self.cgroup.set_property('cpuset.cpus', cpus, 0)
-                stats.append((_test_it(tst_time), cpus))
-
-            if no_cpus > vm_cpus:
-                # CPUs: __OO, Stat: 200
-                cpus = "%d-%d" % (no_cpus-vm_cpus-1, no_cpus-1)
-                self.cgroup.set_property('cpuset.cpus', cpus, 0)
-                stats.append((_test_it(tst_time), cpus))
-
-            # CPUs: O___, Stat: 100
-            cpus = "0"
-            self.cgroup.set_property('cpuset.cpus', cpus, 0)
-            stats.append((_test_it(tst_time), cpus))
-
-            # CPUs: _OO_, Stat: 200
-            if no_cpus == 2:
-                cpus = "1"
-            else:
-                cpus = "1-%d" % min(no_cpus, vm_cpus-1)
-            self.cgroup.set_property('cpuset.cpus', cpus, 0)
-            stats.append((_test_it(tst_time), cpus))
-
-            # CPUs: O_O_, Stat: 200
-            cpus = "0"
-            for i in range(1, min(vm_cpus, (no_cpus/2))):
-                cpus += ",%d" % (i*2)
-            self.cgroup.set_property('cpuset.cpus', cpus, 0)
-            stats.append((_test_it(tst_time), cpus))
-
-            # CPUs: oooo, Stat: 200
-            cpus = False
-            self.cgroup.set_property('cpuset.cpus', all_cpus, 0)
-            stats.append((_test_it(tst_time), cpus))
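+            # dd allocates one bs-sized buffer, so the guest (and therefore
+            # the qemu process) has to back roughly 'mem' KB at once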
+            session.sendline('dd if=/dev/zero of=/dev/null bs=%dK count=1 '
+                             'iflag=fullblock' % mem)
 
+            max_rss = 0
+            max_rssswap = 0
+            out = ""
             err = ""
-
-            max_cpu_stat = os.sysconf(os.sysconf_names['SC_CLK_TCK'])
-            max_stat = max_cpu_stat * vm_cpus
-            for i in range(len(stats)):
-                if stats[i][1] is False:
-                    dist = distance(sum(stats[i][0]), max_stat)
-                    if dist > 0.25:
-                        err += "%d, " % i
-                        logging.error("%s part; incorrect utilisation, dist=%s,"
-                                      "%s", i, dist, stats[i])
-                    continue    # No restrictions, don't check per_cpu_load
-
-                if stats[i][1].count('-') == 1:
-                    # cpus defined by range
-                    cpus = []
-                    _ = stats[i][1].split('-')
-                    for cpu in range(_[0], _[1] + 1):
-                        cpus.append(cpu)
-                else:
-                    # cpus defined by ',' separated list
-                    cpus = [int(_) for _ in stats[i][1].split(',')]
-
-                for cpu in range(no_cpus):
-                    dist = distance(stats[i][0][cpu], max_cpu_stat)
-                    if cpu in cpus:
-                        if dist > 0.2:
-                            err += "%d, " % cpu
-                            logging.error("%s part; per_cpu_load failed; dist="
-                                          "%s, cpu=%s, stat=%s", i, dist, cpu,
-                                          stats[i])
+            for _ in range(int(mem / 1024)):
+                try:
+                    fstats.seek(0)
+                    status = fstats.read()
+                    rss = int(re.search(r'VmRSS:[\t ]*(\d+) kB', status)
+                                                                    .group(1))
+                    max_rss = max(rss, max_rss)
+                    swap = int(re.search(r'VmSwap:[\t ]*(\d+) kB', status)
+                                                                    .group(1))
+                    max_rssswap = max(rss + swap, max_rssswap)
+                except Exception, details:
+                    if memsw and not vm.is_alive():
+                        # VM was killed as expected, finish the test
+                        break
                     else:
-                        if dist < 0.75: # this CPU serves other processes
-                            err += "%d, " % i
-                            logging.error("%s part; per_cpu_load failed; 1-dist"
-                                          "=%s, cpu=%s, stat=%s", i, 1-dist,
-                                          cpu, stats[i])
+                        err = details
+                        break
+                try:
+                    out += session.read_up_to_prompt(timeout=0.1)
+                except ExpectTimeoutError:
+                    # 0.1s passed, let's begin the next round
+                    pass
+                except ShellTimeoutError, detail:
+                    if memsw and not vm.is_alive():
+                        # VM was killed, finish the test
+                        break
+                    else:
+                        err = detail
+                        break
+                except ExpectProcessTerminatedError, detail:
+                    if memsw:
+                        err = ("dd command died (VM should die instead): %s\n"
+                               "Output:\n%s" % (detail, out))
+                    else:
+                        err = ("dd command died (should pass): %s\nOutput:"
+                               "\n%s" % (detail, out))
+                    break
+                else:   # dd command finished
+                    break
 
+            error.context("Verification")
             if err:
-                err = "[%s] parts worked incorrectly, check the log" % err[:-2]
                 logging.error(err)
-                raise error.TestFail(err)
+            elif memsw:
+                if max_rssswap > mem_limit:
+                    err = ("The limit was broken: max_rssswap=%s, limit=%s" %
+                           (max_rssswap, mem_limit))
+                elif vm.process.get_status() != 137:  # err: Limit exceeded
+                    err = ("VM exit code is %s (should be %s)" %
+                           (vm.process.get_status(), 137))
+                else:
+                    out = ("VM terminated as expected. Used rss+swap: %d, "
+                           "limit %s" % (max_rssswap, mem_limit))
+                    logging.info(out)
+            else:   # only RSS limit
+                exit_nr = session.cmd_output("echo $?")[:-1]
+                if max_rss > mem_limit:
+                    err = ("The limit was broken: max_rss=%s, limit=%s" %
+                           (max_rss, mem_limit))
+                elif exit_nr != '0':
+                    err = ("dd command failed (%s), output: %s" %
+                           (exit_nr, out))
+                elif max_rssswap < mem_limit:
+                    err = ("VM didn't consume expected amount of memory. %d:%d"
+                           " Output of dd cmd: %s" % (max_rssswap, mem_limit,
+                                                      out))
+                else:
+                    out = ("Created %dMB block with %.2f memory overcommit" %
+                           (mem / 1024, float(max_rssswap) / mem_limit))
+                    logging.info(out)
 
-            logging.info("Test passed successfully")
-            return ("All clear")
+        finally:
+            error.context("Cleanup")
+            del(cgroup)
+            del(modules)
 
+        error.context("Results")
+        if err:
+            raise error.TestFail(err)
+        else:
+            return out
 
-    class TestCpusetCpusSwitching:
+    def memory_memsw_limit():
         """
-        Tests the cpuset.cpus cgroup feature. It stresses all VM's CPUs
-        while switching between cgroups with different setting.
+        Executes the memory_limit test with parameter memsw.
+        It tries to allocate a bigger block than the allowed limit. Qemu
+        should be killed with err 137.
+        @param cfg: cgroup_memory_limit_kb - test uses 1.1 * memory_limit
+                    memory blocks for testing 'by default 1/2 of VM memory'
         """
-        def __init__(self, vms, modules):
-            """
-            Initialization
-            @param vms: list of vms
-            @param modules: initialized cgroup module class
-            """
-            self.vm = vms[0]      # Virt machines
-            self.modules = modules          # cgroup module handler
-            self.cgroup = Cgroup('cpuset', '')   # cgroup handler
-            self.sessions = []
-
-
-        def cleanup(self):
-            """ Cleanup """
-            err = ""
-            try:
-                del(self.cgroup)
-            except Exception, failure_detail:
-                err += "\nCan't remove Cgroup: %s" % failure_detail
+        return memory_limit(memsw=True)
 
-            self.sessions[-1].sendline('rm -f /tmp/cgroup-cpu-lock')
-            for i in range(len(self.sessions)):
-                try:
-                    self.sessions[i].close()
-                except Exception, failure_detail:
-                    err += ("\nCan't close ssh connection %s" % i)
-
-            if err:
-                logging.error("Some cleanup operations failed: %s", err)
-                raise error.TestError("Some cleanup operations failed: %s" %
-                                      err)
-
-
-        def init(self):
-            """
-            Prepares cgroup, moves VM into it and execute stressers.
-            """
-            self.cgroup.initialize(self.modules)
-            vm_cpus = int(params.get('smp', 1))
-            all_cpus = self.cgroup.get_property("cpuset.cpus")[0]
-            if all_cpus == "0":
-                raise error.TestFail("This test needs at least 2 CPUs on "
-                                     "host, cpuset=%s" % all_cpus)
-            try:
-                last_cpu = int(all_cpus.split('-')[1])
-            except Exception:
-                raise error.TestFail("Failed to get #CPU from root cgroup.")
+    def memory_move():
+        """
+        Tests the memory.move_charge_at_immigrate cgroup capability. It
+        changes the memory cgroup while the guest is running.
+        @param cfg: cgroup_test_time - test duration '10'
+        @param cfg: cgroup_memory_move_mb - override the size of memory blocks
+                    'by default 1/2 of VM memory'
+        """
+        error.context("Init")
+        test_time = int(params.get('cgroup_test_time', 10))
+        vm = env.get_all_vms()[0]
+
+        error.context("Prepare")
+        modules = CgroupModules()
+        if (modules.init(['memory']) != 1):
+            raise error.TestFail("Can't mount memory cgroup modules")
+        cgroup = Cgroup('memory', '')
+        cgroup.initialize(modules)
+        # Two cgroups
+        cgroup.mk_cgroup()
+        cgroup.mk_cgroup()
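+        # move_charge_at_immigrate is a bitmask: bit 0 moves charges of
+        # anonymous pages, bit 1 of file pages; '3' moves both whenever a
+        # task migrates between these cgroups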
+        cgroup.set_property('memory.move_charge_at_immigrate', '3', 0)
+        cgroup.set_property('memory.move_charge_at_immigrate', '3', 1)
 
-            if last_cpu == 1:
-                last_cpu = "1"
-            else:
-                last_cpu = "1-%s" % last_cpu
-
-            # Comments are for vm_cpus=2, no_cpus=4, _SC_CLK_TCK=100
-            self.cgroup.mk_cgroup() # oooo
-            self.cgroup.set_property('cpuset.cpus', all_cpus, 0)
-            self.cgroup.set_property('cpuset.mems', 0, 0)
-            self.cgroup.mk_cgroup() # O___
-            self.cgroup.set_property('cpuset.cpus', 0, 1)
-            self.cgroup.set_property('cpuset.mems', 0, 1)
-            self.cgroup.mk_cgroup() #_OO_
-            self.cgroup.set_property('cpuset.cpus', last_cpu, 2)
-            self.cgroup.set_property('cpuset.mems', 0, 2)
-
-            assign_vm_into_cgroup(self.vm, self.cgroup, 0)
-            cmd = "renice -n 10 $$; " # new ssh login should pass
-            cmd += "while [ -e /tmp/cgroup-cpu-lock ]; do :; done"
-            for i in range(vm_cpus):
-                self.sessions.append(self.vm.wait_for_login(timeout=30))
-                self.sessions[i].cmd("touch /tmp/cgroup-cpu-lock")
-                self.sessions[i].sendline(cmd)
+        timeout = int(params.get("login_timeout", 360))
+        sessions = []
+        sessions.append(vm.wait_for_login(timeout=timeout))
+        sessions.append(vm.wait_for_login(timeout=30))
 
+        # Don't allow specifying more than 1/2 of the VM's memory
+        size = int(params.get('mem', 1024)) / 2
+        if params.get('cgroup_memory_move_mb') is not None:
+            size = min(size, int(params.get('cgroup_memory_move_mb')))
 
-        def run(self):
-            """
-            Actual test; stress VM while simultanously changing the cgroups.
-            """
+        err = ""
+        try:
+            error.context("Test")
             logging.info("Some harmless IOError messages of non-existing "
                          "processes might occur.")
+            sessions[0].sendline('dd if=/dev/zero of=/dev/null bs=%dM '
+                                 'iflag=fullblock' % size)
+            time.sleep(2)
+
             i = 0
-            t_stop = time.time() + 60 # run for 60 seconds
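+            # SIGUSR1 makes GNU dd print its transfer statistics without
+            # terminating, producing the '... records out' lines parsed below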
+            sessions[1].cmd('killall -SIGUSR1 dd')
+            t_stop = time.time() + test_time
             while time.time() < t_stop:
-                for pid in utils.get_children_pids(self.vm.get_shell_pid()):
-                    try:
-                        self.cgroup.set_cgroup(int(pid), i % 3)
-                    except Exception, inst: # Process might already not exist
-                        if os.path.exists("/proc/%s/" % pid):
-                            raise error.TestFail("Failed to switch cgroup;"
-                                                 " it=%s; %s" % (i, inst))
                 i += 1
+                assign_vm_into_cgroup(vm, cgroup, i % 2)
+            time.sleep(2)
+            sessions[1].cmd('killall -SIGUSR1 dd; true')
+            out = ""
+            try:
+                out = sessions[0].read_until_output_matches(
+                                                ['(\d+)\+\d records out'])[1]
+                if len(re.findall(r'(\d+)\+\d records out', out)) < 2:
+                    out += sessions[0].read_until_output_matches(
+                                                ['(\d+)\+\d records out'])[1]
+            except ExpectTimeoutError:
+                err = ("dd didn't produce expected output: %s" % out)
 
-            self.vm.verify_alive()
-
-            logging.info("Cgroups %s-times successfully switched", i)
-            return ("Cgroups %s-times successfully switched" % i)
-
-
-
-    # Setup
-    # TODO: Add all new tests here
-    tests = {"blkio_bandwidth_weigth_read"  : TestBlkioBandwidthWeigthRead,
-             "blkio_bandwidth_weigth_write" : TestBlkioBandwidthWeigthWrite,
-             "blkio_throttle_read"          : TestBlkioThrottleRead,
-             "blkio_throttle_write"         : TestBlkioThrottleWrite,
-             "blkio_throttle_multiple_read" : TestBlkioThrottleMultipleRead,
-             "blkio_throttle_multiple_write" : TestBlkioThrottleMultipleWrite,
-             "devices_access"               : TestDevicesAccess,
-             "freezer"                      : TestFreezer,
-             "memory_move"                  : TestMemoryMove,
-             "memory_limit"                 : TestMemoryLimit,
-             "cpu_share_10"                 : TestCpuShare10,
-             "cpu_share_50"                 : TestCpuShare50,
-             "cpu_cfs_util"                 : TestCpuCFSUtil,
-             "cpuset_cpus"                  : TestCpusetCpus,
-             "cpuset_cpus_switching"        : TestCpusetCpusSwitching,
-            }
-    modules = CgroupModules()
-    if (modules.init(['blkio', 'cpu', 'cpuset', 'devices', 'freezer',
-                      'memory']) <= 0):
-        raise error.TestFail('Can\'t mount any cgroup modules')
-    # Add all vms
-    vms = []
-    for vm in params.get("vms", "main_vm").split():
-        vm = env.get_vm(vm)
-        vm.verify_alive()
-        timeout = int(params.get("login_timeout", 360))
-        _ = vm.wait_for_login(timeout=timeout)
-        _.close()
-        del(_)
-        vms.append(vm)
+            if not err:
+                sessions[1].cmd('killall dd; true')
+                dd_res = re.findall(r'(\d+)\+(\d+) records in', out)
+                dd_res += re.findall(r'(\d+)\+(\d+) records out', out)
+                dd_res = [int(_[0]) + int(_[1]) for _ in dd_res]
+                if dd_res[1] <= dd_res[0] or dd_res[3] <= dd_res[2]:
+                    err = ("dd stopped copying data: %s..%s, %s..%s" %
+                           (dd_res[0], dd_res[1], dd_res[2], dd_res[3]))
 
+            if err:
+                logging.error(err)
+            else:
+                out = ("Guest moved %s times in %s seconds while moving %d "
+                       "blocks of %dMB each" % (i, test_time, dd_res[3], size))
+                logging.info(out)
 
-    # Execute tests
-    results = ""
-    # cgroup_tests = "re1[:loops] re2[:loops] ... ... ..."
-    for rexpr in params.get("cgroup_tests").split():
-        try:
-            loops = int(rexpr[rexpr.rfind(':')+1:])
-            rexpr = rexpr[:rexpr.rfind(':')]
-        except Exception:
-            loops = 1
-        # number of loops per regular expression
-        for _loop in range(loops):
-            # cg_test is the subtest name from regular expression
-            for cg_test in sorted(
-                            [_ for _ in tests.keys() if re.match(rexpr, _)]):
-                logging.info("%s: Entering the test", cg_test)
-                err = ""
-                try:
-                    tst = tests[cg_test](vms, modules)
-                    tst.init()
-                    out = tst.run()
-                except error.TestFail, failure_detail:
-                    logging.error("%s: Leaving, test FAILED (TestFail): %s",
-                                  cg_test, failure_detail)
-                    err += "test, "
-                    out = failure_detail
-                except error.TestError, failure_detail:
-                    tb = utils.etraceback(cg_test, sys.exc_info())
-                    logging.error("%s: Leaving, test FAILED (TestError): %s",
-                                  cg_test, tb)
-                    err += "testErr, "
-                    out = failure_detail
-                except Exception, failure_detail:
-                    tb = utils.etraceback(cg_test, sys.exc_info())
-                    logging.error("%s: Leaving, test FAILED (Exception): %s",
-                                  cg_test, tb)
-                    err += "testUnknownErr, "
-                    out = failure_detail
+        finally:
+            error.context("Cleanup")
+            sessions[1].cmd('killall dd; true')
+            for session in sessions:
+                session.cmd("true")
+                session.close()
 
-                try:
-                    tst.cleanup()
-                except Exception, failure_detail:
-                    logging.warn("%s: cleanup failed: %s\n", cg_test,
-                                 failure_detail)
-                    err += "cleanup, "
+            del(cgroup)
+            del(modules)
 
-                try:
-                    _check_vms(vms)
-                except Exception, failure_detail:
-                    logging.warn("%s: _check_vms failed: %s\n", cg_test,
-                                 failure_detail)
-                    err += "VM check, "
-
-                if err.startswith("test"):
-                    results += ("\n [F] %s: {%s} FAILED: %s" %
-                                 (cg_test, err[:-2], out))
-                elif err:
-                    results += ("\n [W] %s: Test passed but {%s} FAILED: %s" %
-                                 (cg_test, err[:-2], out))
-                else:
-                    results += ("\n [P] %s: PASSED: %s" % (cg_test, out))
-
-    out = ("SUM: All tests finished (%d PASS / %d WARN / %d FAIL = %d TOTAL)%s"%
-           (results.count("\n [P]"), results.count("\n [W]"),
-            results.count("\n [F]"), (results.count("\n [P]") +
-            results.count("\n [F]") + results.count("\n [W]")), results))
-    logging.info(out)
-    if results.count("FAILED"):
-        raise error.TestFail("Some subtests failed\n%s" % out)
+        if err:
+            logging.error(err)
+        else:
+            return (out)
+
+    # Main
+    # Executes test specified by cgroup_test variable in cfg
+    fce = None
+    _fce = params.get('cgroup_test')
+    error.context("Executing test: %s" % _fce)
+    try:
+        fce = locals()[_fce]
+    except KeyError:
+        raise error.TestNAError("Test %s doesn't exist. Check 'cgroup_test' "
+                                "variable in subtest.cfg" % _fce)
+    else:
+        return fce()
diff --git a/client/virt/kvm_vm.py b/client/virt/kvm_vm.py
index fcbdde4..8f84342 100644
--- a/client/virt/kvm_vm.py
+++ b/client/virt/kvm_vm.py
@@ -1439,6 +1439,16 @@ class VM(virt_vm.BaseVM):
         return self.process.get_pid()
 
 
+    def get_vcpu_pids(self):
+        """
+        Return the list of vcpu PIDs
+
+        @return: the list of vcpu PIDs
+        """
+        return [int(_) for _ in re.findall(r'thread_id=(\d+)',
+                                           self.monitor.info("cpus"))]
+
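+    # Usage sketch (illustration only; assumes a Cgroup helper exposing
+    # set_cgroup(pid, cgroup), as used by the cgroup tests, and a
+    # hypothetical vcpu_cgroups list): pin each vcpu thread into its own
+    # cgroup:
+    #     for i, pid in enumerate(vm.get_vcpu_pids()):
+    #         cgroup.set_cgroup(pid, vcpu_cgroups[i])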
+
     def get_shared_meminfo(self):
         """
         Returns the VM's shared memory information.
diff --git a/client/virt/subtests.cfg.sample b/client/virt/subtests.cfg.sample
index 93021e3..a9c83fc 100644
--- a/client/virt/subtests.cfg.sample
+++ b/client/virt/subtests.cfg.sample
@@ -1182,10 +1182,89 @@ variants:
     - cgroup:
         only Linux
         type = cgroup
-        # cgroup_tests = "re1[:loops] re2[:loops] ..."
-        cgroup_tests = ".*:1"
-        vms += " vm2"
-        extra_params += " -snapshot"
+        cgroup_test_time = 60
+        cgroup_limit = 0.1
+        # rmmod scsi_debug instead of writing into /sys/../add_host (safer)
+        cgroup_rmmod_scsi_debug = "yes"
+        variants:
+            - blkio_bandwidth:
+                # Test creates VMs with disks according to weights
+                vms = ""
+                # problems with multiple preprocessing, turn off the screendumps
+                take_regular_screendumps = "no"
+                extra_params += " -snapshot"
+                cgroup_test = "blkio_bandwidth"
+                # cgroup_test_time, cgroup_weights, cgroup_limit{ ,_read,_write}
+                # cgroup_weights = "[100, 1000, 500]"
+            - blkio_throttle:
+                # Test creates VMs with disks according to speeds
+                vms = ""
+                # problems with multiple preprocessing, turn off the screendumps
+                take_regular_screendumps = "no"
+                extra_params += " -snapshot"
+                cgroup_test = "blkio_throttle"
+                # cgroup_test_time, cgroup_limit{ ,_read,_write}, cgroup_speeds
+                # cgroup_speeds = [1024, 2048, 4096, 8192]
+            - blkio_throttle_multi:
+                # Test creates VMs with disks according to speeds
+                vms = ""
+                # problems with multiple preprocessing, turn off the screendumps
+                take_regular_screendumps = "no"
+                extra_params += " -snapshot"
+                cgroup_test = "blkio_throttle_multi"
+                # cgroup_test_time, cgroup_limit{ ,_read,_write}, cgroup_speeds
+                # cgroup_speeds = "[[0, 1024, 0, 2048, 0, 4096],"
+                # cgroup_speeds += "[1024, 1024, 1024, 1024, 1024, 1024]]"
+            - cpu_cfs_util:
+                # Test creates VMs according to no_host_cpus
+                # We want to be sure the 1st VM has only 1 vCPU
+                vms = vm1
+                smp = 1
+                extra_params += " -snapshot"
+                cgroup_test = "cpu_cfs_util"
+                # cgroup_test_time, cgroup_limit
+            - cpu_share:
+                # Test creates VMs according to smp and cgroup_speeds
+                vms = ""
+                # When smp = 0 => it sets smp = no_host_cpus (optimal for test)
+                smp = 0
+                # problems with multiple preprocessing, turn off the screendumps
+                take_regular_screendumps = "no"
+                extra_params += " -snapshot"
+                cgroup_test = "cpu_share"
+                # cgroup_test_time, cgroup_speeds
+                # cgroup_speeds = "[1000, 10000, 100000]"
+            - cpuset_cpus:
+                cgroup_test = "cpuset_cpus"
+                # cgroup_test_time, cgroup_limit, cgroup_cpuset, cgroup_verify
+                cgroup_test_time = 10
+                # smp has to match the cpuset scenarios!
+                # smp = 4
+                # cgroup_cpuset = "[[None, '0,3', '1', '2', '1-2'],"
+                # cgroup_cpuset += "[None, '0', '1', '0-1', '0-1']]"
+                # cgroup_verify = [[50, 100, 100, 50], [100, 100, 5, 5]]
+            - cpuset_cpus_switching:
+                cgroup_test = "cpuset_cpus_switching"
+                # cgroup_test_time
+            - devices_access:
+                cgroup_test = "devices_access"
+            - freezer:
+                cgroup_test = "freezer"
+                # cgroup_test_time
+            - memory_limit:
+                # Test creates VMs
+                cgroup_test = "memory_limit"
+                # cgroup_memory_limit_kb (4kb aligned)
+                # cgroup_memory_limit_kb = 2097152
+            - memory_memsw_limit:
+                # Test creates VMs
+                cgroup_test = "memory_memsw_limit"
+                # cgroup_memory_limit_kb (4kb aligned)
+                # cgroup_memory_limit_kb = 2097152
+            - memory_move:
+                cgroup_test = "memory_move"
+                # cgroup_test_time, cgroup_memory_move_mb
+                # cgroup_memory_move_mb = 2048
 
     - virtio_console: install setup image_copy unattended_install.cdrom
         only Linux
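
Note that with the rework one scenario is selected per job via its variant
name. Assuming the usual Cartesian-config filtering (an illustrative
tests.cfg line, not part of the patch):

    only cgroup.freezer

This runs just the freezer scenario; the commented cgroup_* parameters in
each variant above show which knobs that scenario accepts.
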
-- 
1.7.7.6

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [KVM-AUTOTEST] [KVM-autotest] Cgroup-kvm rework
  2012-02-27 18:42 [KVM-autotest] Cgroup-kvm rework Lukas Doktor
                   ` (3 preceding siblings ...)
  2012-02-27 18:42 ` [PATCH 4/4] [KVM-autotest] tests.cgroup: Rework test execution Lukas Doktor
@ 2012-02-28 19:07 ` Lucas Meneghel Rodrigues
  4 siblings, 0 replies; 10+ messages in thread
From: Lucas Meneghel Rodrigues @ 2012-02-28 19:07 UTC (permalink / raw)
  To: Lukas Doktor; +Cc: autotest, kvm-autotest, kvm

On 02/27/2012 03:42 PM, Lukas Doktor wrote:
> Hi,
>
> This is a complete rework of cgroup test from subtests to singe-test-execution. It improves stability of testing and allows better test customisation. The speed is similar/faster in single variant execution and a bit slower in all-variants execution compare to previous version.
>
> It also contains a lot of important bugfixes and some cool enhancements described in patch.
>
> Checkout current version on:
> https://github.com/autotest/autotest/pull/209

Patchset applied, thanks Lukas!

> Regards,
> Lukáš
>

_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order
  2012-02-27 18:42 ` [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order Lukas Doktor
@ 2012-03-12  3:34   ` lei yang
  2012-03-12  7:15     ` Lukáš Doktor
  0 siblings, 1 reply; 10+ messages in thread
From: lei yang @ 2012-03-12  3:34 UTC (permalink / raw)
  To: Lukas Doktor; +Cc: autotest, kvm, kvm-autotest, jzupka

However, you did the opposite thing, or you did it twice:

commit 6e4b5cffe999714357116884fcc4eb27fae41260
Author: Lucas Meneghel Rodrigues <lmr@redhat.com>
Date:   Wed Feb 29 18:47:14 2012 -0300

    Revert "tests.cfg.sample: change import order"

    This reverts commit e64b17d7a15602db0cd26ec55ccc902010985d0c,
    as it's causing problems with the test execution order.

    Signed-off-by: Lucas Meneghel Rodrigues

diff --git a/client/tests/kvm/tests-shared.cfg.sample
b/client/tests/kvm/tests-shared.cfg.sample
index bda982d..c6304b3 100644
--- a/client/tests/kvm/tests-shared.cfg.sample
+++ b/client/tests/kvm/tests-shared.cfg.sample
@@ -5,11 +5,11 @@

 # Include the base config files.
 include base.cfg
+include subtests.cfg
 include guest-os.cfg
 include guest-hw.cfg
 include cdkeys.cfg
 include virtio-win.cfg
-include subtests.cfg

 # Virtualization type (kvm or libvirt)
 vm_type = kvm


Lei


On Tue, Feb 28, 2012 at 2:42 AM, Lukas Doktor <ldoktor@redhat.com> wrote:
> Currently subtests.cfg is processed and then all the other configs. My test
> needs to override the smp parameter in some variant, which is currently
> impossible.
>
> In words, the current order means: we define subtest variants, then we
> specify base and guest and other details. In the end we limit what
> we want to execute.
>
> My proposed order enables forcing base/guest params in subtest variants.
>
> In words, this means we specify base, guest system, cdkeys, etc., and in
> the end we define subtests with various variants. Then we limit what
> we actually want to execute, but now a subtest can force various base/guest
> settings.
>
> Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
> ---
>  client/tests/kvm/tests-shared.cfg.sample |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/client/tests/kvm/tests-shared.cfg.sample b/client/tests/kvm/tests-shared.cfg.sample
> index c6304b3..bda982d 100644
> --- a/client/tests/kvm/tests-shared.cfg.sample
> +++ b/client/tests/kvm/tests-shared.cfg.sample
> @@ -5,11 +5,11 @@
>
>  # Include the base config files.
>  include base.cfg
> -include subtests.cfg
>  include guest-os.cfg
>  include guest-hw.cfg
>  include cdkeys.cfg
>  include virtio-win.cfg
> +include subtests.cfg
>
>  # Virtualization type (kvm or libvirt)
>  vm_type = kvm
> --
> 1.7.7.6
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order
  2012-03-12  3:34   ` lei yang
@ 2012-03-12  7:15     ` Lukáš Doktor
  2012-03-12  7:49       ` lei yang
  0 siblings, 1 reply; 10+ messages in thread
From: Lukáš Doktor @ 2012-03-12  7:15 UTC (permalink / raw)
  To: lei yang; +Cc: autotest, kvm-autotest, kvm

Hi,

it caused problems, so I had to modify it a bit. It's already fixed and
applied upstream.

Regards,
Lukáš

Dne 12.3.2012 04:34, lei yang napsal(a):
> However, you did the opposite thing, or you did it twice:
> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order
  2012-03-12  7:15     ` Lukáš Doktor
@ 2012-03-12  7:49       ` lei yang
  2012-03-12  8:11         ` Lukáš Doktor
  0 siblings, 1 reply; 10+ messages in thread
From: lei yang @ 2012-03-12  7:49 UTC (permalink / raw)
  To: Lukáš Doktor; +Cc: autotest, kvm, kvm-autotest, jzupka

On Mon, Mar 12, 2012 at 3:15 PM, Lukáš Doktor <ldoktor@redhat.com> wrote:
> Hi,
>
> it caused problems, so I had to modify it a bit. It's already fixed and
> applied upstream.
>


You mean you want to put "include subtests.cfg" at the top or at the end?
From your idea it seems you want it at the end, to change some
parameters easily.

After I pulled the tree,

I got something like (git show 6e4b5cffe999714357116884fcc4eb27fae41260)

include base.cfg
include subtests.cfg
include guest-os.cfg
include guest-hw.cfg
include cdkeys.cfg
include virtio-win.cfg

but I thought you may want it to be like

include base.cfg
include guest-os.cfg
include guest-hw.cfg
include cdkeys.cfg
include virtio-win.cfg
include subtests.cfg

Or am I wrong?

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order
  2012-03-12  7:49       ` lei yang
@ 2012-03-12  8:11         ` Lukáš Doktor
  0 siblings, 0 replies; 10+ messages in thread
From: Lukáš Doktor @ 2012-03-12  8:11 UTC (permalink / raw)
  To: lei yang; +Cc: autotest, kvm-autotest, kvm

Yes, as I mentioned before, I wanted to put subtests.cfg after the other
imports, but it caused a lot of trouble. So I found another solution
without changing the import order.

Dne 12.3.2012 08:49, lei yang napsal(a):
> You mean you want to put "include subtests.cfg" at the top or at the end?
> From your idea it seems you want it at the end, to change some
> parameters easily.
> [...]

_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2012-03-12  8:11 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-02-27 18:42 [KVM-autotest] Cgroup-kvm rework Lukas Doktor
2012-02-27 18:42 ` [PATCH 1/4] [KVM-autotest] virt.kvm_vm: Make snapshot and boot params optional Lukas Doktor
2012-02-27 18:42 ` [PATCH 2/4] [KVM-autotest] virt.virt_vm: Add option to create raw images with dd Lukas Doktor
2012-02-27 18:42 ` [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order Lukas Doktor
2012-03-12  3:34   ` lei yang
2012-03-12  7:15     ` Lukáš Doktor
2012-03-12  7:49       ` lei yang
2012-03-12  8:11         ` Lukáš Doktor
2012-02-27 18:42 ` [PATCH 4/4] [KVM-autotest] tests.cgroup: Rework test execution Lukas Doktor
2012-02-28 19:07 ` [KVM-AUTOTEST] [KVM-autotest] Cgroup-kvm rework Lucas Meneghel Rodrigues
